LLM agents need to verify their reasoning against logical constraints before committing to actions, rather than merely checking whether multiple agents agree, since agreement can mask systematic errors shared by all of them.
This paper addresses a critical problem in LLM agents: reasoning trajectories can sound coherent but violate logical constraints, causing errors to accumulate over multiple steps. The authors propose SAVeR, a framework that audits and verifies an agent's internal beliefs before taking actions, catching unsupported assumptions and fixing them with minimal changes.
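The audit-then-repair idea can be illustrated with a minimal sketch. Everything here is hypothetical: the `Belief` class, `audit_beliefs`, and `minimal_repair` are illustrative names, not SAVeR's actual API; the sketch only shows the shape of checking beliefs for support before acting and removing only the unsupported ones.

```python
# Hypothetical sketch of a verify-before-act loop in the spirit of SAVeR.
# None of these names come from the paper; they are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Belief:
    claim: str
    support: list = field(default_factory=list)  # evidence backing the claim

def audit_beliefs(beliefs):
    """Flag beliefs with no supporting evidence (unsupported assumptions)."""
    return [b for b in beliefs if not b.support]

def minimal_repair(beliefs, unsupported):
    """Drop only the flagged beliefs; leave every supported belief untouched."""
    bad = {id(b) for b in unsupported}
    return [b for b in beliefs if id(b) not in bad]

def act(beliefs):
    """Stand-in for committing to an action based on verified beliefs."""
    return f"acting on {len(beliefs)} verified beliefs"

beliefs = [
    Belief("door is unlocked", support=["observation: handle turned"]),
    Belief("key is in drawer"),  # assumed, never observed
]
unsupported = audit_beliefs(beliefs)
beliefs = minimal_repair(beliefs, unsupported)
print(act(beliefs))  # → acting on 1 verified beliefs
```

The point of the sketch is the ordering: the audit runs before the action is committed, and the repair is minimal in the sense that it changes nothing beyond the beliefs that failed the audit.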