As AI systems become more agentic, gaining planning, memory, and tool use, safety risks increasingly emerge from how multiple agents interact rather than from any individual model alone. Safety research for multi-agent AI systems therefore needs to focus on agent-to-agent interactions, not just individual model behavior or aggregate outcomes, in order to identify the specific interaction patterns that create collective risks.