Image Scaling Attacks: A New Vulnerability in AI Systems

2025-08-21

Researchers at Trail of Bits have disclosed a novel AI security vulnerability: data exfiltration triggered by seemingly harmless images sent to large language models (LLMs). The attack exploits the fact that many AI systems downscale images before processing them. An attacker crafts a high-resolution image that looks benign to the user, but whose downscaled version, the one the model actually sees, contains a malicious prompt injection. Because the injection only appears after downscaling, it bypasses user awareness and can coerce the model into accessing and leaking user data. The attack has been demonstrated against several production AI systems, including the Google Gemini CLI.

The researchers released an open-source tool, Anamorpher, to generate and analyze such crafted images. As mitigations, they recommend avoiding image downscaling in AI pipelines altogether, or showing users a preview of the exact image the model will see.
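
The core trick can be illustrated with a toy example: if the attacker knows how the target pipeline downscales images, they can place a payload in exactly the pixels that survive the downscale, while the rest of the picture still looks like an innocuous decoy at full resolution. Below is a minimal sketch of that principle in Python, not Anamorpher's implementation: it models the victim's downscaler as a simple nearest-neighbor-style subsampler (`pipeline_downscale`, an assumed stand-in), whereas the real attack targets the specific bilinear or bicubic kernel used by the victim system.

```python
# Minimal sketch of the hiding principle behind image scaling attacks.
# Assumptions: the victim downscales by an integer FACTOR and effectively keeps
# one pixel per FACTOR x FACTOR block (modeled by pipeline_downscale below).
import numpy as np
from PIL import Image

FACTOR = 4            # assumed downscaling factor of the target pipeline
OFFSET = FACTOR // 2  # assumed position of the pixel that survives per block

def pipeline_downscale(arr: np.ndarray) -> np.ndarray:
    """Stand-in for the victim's downscaler: keep one pixel per block."""
    return arr[OFFSET::FACTOR, OFFSET::FACTOR]

def embed(decoy: Image.Image, payload: Image.Image) -> Image.Image:
    """Craft a full-resolution image that looks like `decoy` to a human but
    collapses to `payload` (e.g. rendered injection text) after downscaling."""
    decoy_px = np.array(decoy.convert("RGB"), dtype=np.uint8)
    h, w, _ = decoy_px.shape
    # Crop so both dimensions are divisible by FACTOR.
    decoy_px = decoy_px[: h - h % FACTOR, : w - w % FACTOR]
    h, w, _ = decoy_px.shape
    payload_px = np.array(
        payload.convert("RGB").resize((w // FACTOR, h // FACTOR)), dtype=np.uint8
    )
    out = decoy_px.copy()
    # Only ~1/FACTOR^2 of the pixels change, so the decoy still dominates
    # what the user sees at full resolution.
    out[OFFSET::FACTOR, OFFSET::FACTOR] = payload_px
    return Image.fromarray(out)

# Usage (file names are placeholders):
# stego = embed(Image.open("decoy.png"), Image.open("injection_text.png"))
# revealed = pipeline_downscale(np.array(stego))  # hidden payload re-emerges
```

The sketch also makes the proposed mitigations concrete: if the pipeline never downscales, the payload pixels stay scattered and unreadable, and if the user is shown the output of `pipeline_downscale`, the hidden text is no longer invisible to them.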