AI Chatbots' Inaccurate URLs: A New Opportunity for Criminals

Netcraft's research reveals that AI chatbots such as GPT-4.1 frequently supply incorrect website addresses for major companies, returning the correct domain only about 66% of the time. This gives cybercriminals an opening: they can register the inaccurate or unclaimed domains a chatbot suggests and host convincing phishing sites there. Netcraft also found scammers actively gaming AI-generated results, publishing fake code repositories on GitHub along with tutorials and social media accounts to push malicious sites higher in chatbot answers, enabling supply-chain attacks such as one built around a malicious API impersonating the Solana blockchain. The findings underscore the risk of relying solely on AI chatbots for information, especially sensitive details like login URLs, and the need to verify any address before trusting it.
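For teams that surface chatbot answers to users, one practical mitigation is to check any AI-suggested login URL against a vetted allowlist before following or displaying it. The sketch below is a minimal illustration of that idea; the `OFFICIAL_DOMAINS` set and the example URLs are hypothetical placeholders, not drawn from Netcraft's report.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of officially published domains; in practice this
# would come from a vetted internal source, never from a chatbot's output.
OFFICIAL_DOMAINS = {
    "wellsfargo.com",
    "bankofamerica.com",
    "github.com",
}

def is_trusted_login_url(url: str) -> bool:
    """Return True only if the URL uses HTTPS and its host is an allowlisted
    domain or a subdomain of one."""
    parsed = urlparse(url)
    if parsed.scheme != "https" or not parsed.hostname:
        return False
    host = parsed.hostname.lower()
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

# A plausible-looking but lookalike domain a chatbot might invent is rejected.
print(is_trusted_login_url("https://login.wellsfargo.com/"))         # True
print(is_trusted_login_url("https://wellsfargo-login-secure.com/"))  # False
```

Matching on the full hostname (or a dot-prefixed subdomain) rather than a substring is deliberate: it rejects lookalike hosts such as `wellsfargo.com.evil.example`, which a naive substring check would accept.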