The Lethal Trifecta: New Challenges in LLM Security
2025-08-10

A talk on AI security focused on prompt injection, a novel attack exploiting the inherent vulnerabilities of LLMs built through string concatenation. The speaker coined the term "Lethal Trifecta," describing three attack conditions: LLM access to private data, execution of tool calls, and data exfiltration. Numerous examples of prompt injection attacks were discussed, highlighting the inadequacy of current defenses and emphasizing the need to fundamentally restrict LLM access to untrusted input. The presentation also addressed security flaws in the Model Context Protocol (MCP), noting that its mix-and-match approach unreasonably shifts security responsibility to end-users.
AI