AI Code Assistants Under Attack: The 'Rules File Backdoor'
2025-04-14

Pillar Security researchers have disclosed a new supply chain attack vector dubbed the "Rules File Backdoor." The technique lets attackers silently compromise AI-generated code by injecting malicious instructions into seemingly innocuous configuration files used by AI code editors such as Cursor and GitHub Copilot. By exploiting hidden Unicode characters and other evasion techniques, attackers can manipulate the AI into inserting malicious code that slips past code review. Because the injected instructions are effectively invisible to human reviewers, the compromised code can propagate silently across projects. By weaponizing the AI itself, the attack turns developers' trusted assistants into unwitting accomplices, potentially affecting millions of users.
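
To illustrate the mechanism (this is not Pillar Security's published proof of concept), a rules file can conceal extra instructions using zero-width or bidirectional-control Unicode characters that most editors do not render. A minimal defensive sketch follows: a script that scans rules files for such characters. The file-name patterns and the character list are assumptions for illustration, not an exhaustive or official set.

```python
import pathlib
import unicodedata

# Characters commonly used to hide text from human reviewers:
# zero-width spaces/joiners and bidirectional overrides/isolates.
# NOTE: illustrative list only, not exhaustive.
SUSPICIOUS = {
    "\u200b",  # zero-width space
    "\u200c",  # zero-width non-joiner
    "\u200d",  # zero-width joiner
    "\u2060",  # word joiner
    "\ufeff",  # zero-width no-break space (BOM)
    "\u202a", "\u202b", "\u202c", "\u202d", "\u202e",  # bidi embeddings/overrides
    "\u2066", "\u2067", "\u2068", "\u2069",            # bidi isolates
}

# Example rule-file locations used by AI code editors; adjust for your setup.
RULE_FILE_PATTERNS = [".cursorrules", "**/*.mdc", ".github/copilot-instructions.md"]

def scan_file(path: pathlib.Path) -> list[tuple[int, int, str]]:
    """Return (line, column, code point name) for each suspicious character found."""
    findings = []
    text = path.read_text(encoding="utf-8", errors="replace")
    for lineno, line in enumerate(text.splitlines(), 1):
        for col, ch in enumerate(line, 1):
            # Category "Cf" covers invisible format characters in general.
            if ch in SUSPICIOUS or unicodedata.category(ch) == "Cf":
                findings.append((lineno, col, unicodedata.name(ch, f"U+{ord(ch):04X}")))
    return findings

if __name__ == "__main__":
    root = pathlib.Path(".")
    for pattern in RULE_FILE_PATTERNS:
        for path in root.glob(pattern):
            for lineno, col, name in scan_file(path):
                print(f"{path}:{lineno}:{col}: hidden character {name}")
```

Run from a repository root, the script prints the location and Unicode name of every invisible character it finds in the matched files; a clean run produces no output. Flagging the whole "Cf" (format) category keeps the check broad at the cost of occasional benign hits such as legitimate byte-order marks.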