A technique that adds small, trainable layers to a pre-trained model while keeping the original weights frozen, instead of retraining the entire model, making fine-tuning faster and more memory-efficient.
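A minimal sketch of this idea, using the low-rank-adapter (LoRA-style) variant as an illustration: the pre-trained weight matrix `W` stays frozen, and only two small matrices `A` and `B` would be trained. All names and dimensions here are hypothetical, not from the original text.

```python
import numpy as np

# Hypothetical dimensions for one layer of a pre-trained model.
d_in, d_out, rank = 512, 512, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d_in, d_out))        # pre-trained weights, frozen
A = rng.standard_normal((d_in, rank)) * 0.01  # small trainable matrix
B = np.zeros((rank, d_out))                   # small trainable matrix, starts at zero

def forward(x):
    # Output = frozen pre-trained path + small trainable low-rank path.
    # Because B starts at zero, the model initially behaves exactly like
    # the original pre-trained model.
    return x @ W + (x @ A) @ B

full_params = W.size            # 512 * 512 = 262,144
added_params = A.size + B.size  # 512 * 8 + 8 * 512 = 8,192 (about 3% of full)
```

Only `A` and `B` (about 3% of the layer's parameters in this sketch) would receive gradient updates during fine-tuning, which is where the speed and memory savings come from.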