This paper presents a method for interpreting trained Support Vector Machines (SVMs) by decomposing their learned decision functions into explicit, interpretable components. When an SVM is trained with an orthogonal polynomial kernel, its decision function can be expanded after training into the kernel's basis components, revealing feature importance and interaction patterns that accuracy metrics alone do not capture.