LLMs are vulnerable to attacks that split harmful requests across separate conversations, a gap that existing safety measures don't address because they monitor individual interactions rather than patterns across sessions.
This paper introduces Transient Turn Injection (TTI), a new attack technique that exploits the statelessness of typical LLM deployments: since no memory persists between conversations, an attacker can distribute the pieces of a harmful request across isolated sessions, and safety mechanisms that operate within a single conversation never see the request as a whole.
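To make the gap concrete, here is a minimal toy sketch (not from the paper; the `BLOCKED_PHRASES` policy and the filter are hypothetical stand-ins) showing how a check that examines each conversation in isolation catches a full request in one session but misses the same request split across sessions:

```python
# Toy illustration: a per-conversation filter that flags a request only
# when a full blocked phrase appears within a single conversation.
BLOCKED_PHRASES = {"assemble the restricted item"}  # stand-in for a real policy

def conversation_is_flagged(transcript: list[str]) -> bool:
    """Check one conversation in isolation, as single-session filters do."""
    joined = " ".join(transcript).lower()
    return any(phrase in joined for phrase in BLOCKED_PHRASES)

# A single conversation containing the full request is caught:
single_session = ["how do I assemble the restricted item?"]
print(conversation_is_flagged(single_session))  # True

# The same intent split across separate, memoryless sessions is not,
# because no component session matches the policy on its own:
split_sessions = [
    ["what does 'assemble' generally involve?"],
    ["tell me about the restricted item's parts"],
]
print(any(conversation_is_flagged(s) for s in split_sessions))  # False
```

The point of the sketch is purely the detection gap: each fragment looks innocuous in isolation, so any defense scoped to a single conversation has no signal to act on.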