Adversarial Attack
techniques
An intentional, often imperceptible manipulation of input data crafted to cause an AI model to make an incorrect prediction or decision.
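As a minimal sketch of the idea, the Fast Gradient Sign Method (FGSM) perturbs each input feature by a small step in the direction that increases the model's loss. The toy linear classifier, weights, and epsilon below are illustrative assumptions, not part of any specific model:

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """FGSM: step each feature by epsilon in the sign of the loss gradient."""
    return x + epsilon * np.sign(grad)

# Toy linear classifier: predict +1 when w . x > 0.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.2, 0.4])   # clean input, w . x = 0.9 -> class +1
y = 1.0                          # true label

# Loss = -y * (w . x), so the gradient of the loss w.r.t. x is -y * w.
grad = -y * w
x_adv = fgsm_perturb(x, grad, epsilon=0.5)

clean_score = float(w @ x)       # 0.9  -> classified +1
adv_score = float(w @ x_adv)     # -0.85 -> classified -1: the model is fooled
```

A perturbation of at most 0.5 per feature flips the prediction, even though the clean and adversarial inputs look nearly identical.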