An explanation method applied after a model has been trained to interpret its predictions, rather than one that builds interpretability into the model itself.
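A minimal sketch of one such method, permutation importance, which probes an already-trained model by measuring how much its score degrades when a feature is shuffled. The dataset and model below are illustrative assumptions, not part of the definition; the point is that the explanation step runs on the finished model without changing how it was built.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: train the model as usual; no interpretability is built in.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Step 2 (post hoc): shuffle each feature on held-out data and record
# the drop in accuracy; larger drops mean the model relied on it more.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
for name, imp in zip(load_iris().feature_names, result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

Because it only queries the fitted model's predictions, the same procedure works for any estimator, which is the defining property of this family of methods.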