High-quality explanations of neural networks (NNs) should exhibit two key properties: completeness, which ensures that they accurately reflect a network’s function, and interpretability, which makes them understandable to humans. Many existing methods explain individual neurons within a network. In this work we provide evidence that, for AlexNet pretrained on ImageNet, neuron-based explanation methods sacrifice both completeness and interpretability compared to activation principal components. Neurons are a poor basis for AlexNet embeddings because they do not account for the distributed nature of these representations. By examining two quantitative measures of completeness and conducting a user study to measure interpretability, we show that the most important principal components provide more complete and interpretable explanations than the most important neurons. Much of the activation variance can be explained by examining relatively few high-variance PCs, rather than studying every neuron. These principal components also strongly affect network function and are significantly more interpretable than neurons. Our findings suggest that explanation methods for networks like AlexNet should avoid using neurons as a basis for embeddings and instead choose a basis, such as principal components, that accounts for the high-dimensional and distributed nature of a network’s internal representations. Interactive demo and code available at https://ndey96.github.io/neuron-explanations-sacrifice.
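The abstract's core idea can be sketched in a few lines. This is not the authors' code: it assumes only a hypothetical (images × neurons) activation matrix for one layer, and shows how the principal components of those activations give a small set of high-variance directions, in contrast to inspecting every individual neuron.

```python
import numpy as np

# Toy activation matrix A (rows = images, columns = neurons for one layer).
# A real analysis would record activations of an AlexNet layer over a dataset.
rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 256)) @ rng.normal(size=(256, 256))

# Principal components of the activations: SVD of the centered matrix.
# Rows of Vt are the PC directions; singular values give each PC's variance.
A_centered = A - A.mean(axis=0)
U, S, Vt = np.linalg.svd(A_centered, full_matrices=False)
var_explained = S**2 / np.sum(S**2)

# Project activations onto the top-k high-variance PCs to get embeddings
# in the PC basis instead of the neuron basis.
k = 32
pc_scores = A_centered @ Vt[:k].T

print(f"Top-{k} of {A.shape[1]} directions explain "
      f"{var_explained[:k].sum():.1%} of activation variance")
```

Because the PC basis is ordered by variance, a few components typically capture most of the structure that would otherwise be spread across hundreds of individual neurons.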
Bibtex
@article{dey2025neuronbased,
  title={Neuron-based explanations of neural networks sacrifice completeness and interpretability},
  author={Nolan Simran Dey and Eric Taylor and Alexander Wong and Bryan P. Tripp and Graham W. Taylor},
  journal={Transactions on Machine Learning Research},
  issn={2835-8856},
  year={2025},
  url={https://openreview.net/forum?id=UWNa9Pv6qA}
}