A defense mechanism that hardens a model against adversarial attacks without requiring exposure to adversarial examples during training.
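One well-known family of such attack-agnostic defenses is randomized smoothing: instead of training on adversarial examples, the defense classifies by majority vote over noise-perturbed copies of the input. A minimal sketch, assuming a hypothetical two-class `toy_model` as a stand-in for any base classifier:

```python
import random

def toy_model(x):
    # Hypothetical stand-in classifier: two logits from a fixed linear rule.
    return [x[0] - x[1], x[1] - x[0]]

def smoothed_predict(model, x, sigma=0.25, n_samples=200, seed=0):
    # Attack-agnostic defense sketch (randomized-smoothing style):
    # the base model is never shown adversarial examples; robustness
    # comes from majority-voting over Gaussian-perturbed copies of x.
    rng = random.Random(seed)
    votes = [0] * len(model(x))
    for _ in range(n_samples):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        logits = model(noisy)
        votes[logits.index(max(logits))] += 1
    return votes.index(max(votes))
```

Because the defense wraps the model at inference time, it needs no knowledge of any specific attack, which is what makes it attack-agnostic.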