Indirect Prompt Injection — How Attacks Hide in Documents Your AI Reads
The attacker never touched the chat interface. They sent an email. The AI read it, followed the hidden instructions inside, and silently exfiltrated data from files the attacker had no access to. This is indirect prompt injection — and it's the most dangerous AI vulnerability most organisations haven't heard of.