Beyond sparsity: Tree regularization of deep models for interpretability

M Wu, M Hughes, S Parbhoo, M Zazzi, V Roth… - Proceedings of the …, 2018 - ojs.aaai.org
The lack of interpretability remains a key barrier to the adoption of deep models in many
applications. In this work, we explicitly regularize deep models so human users might step …

Structured variational learning of Bayesian neural networks with horseshoe priors

S Ghosh, J Yao, F Doshi-Velez - … Conference on Machine …, 2018 - proceedings.mlr.press
Bayesian Neural Networks (BNNs) have recently received increasing attention for
their ability to provide well-calibrated posterior uncertainties. However, model selection …

Model selection in Bayesian neural networks via horseshoe priors

S Ghosh, J Yao, F Doshi-Velez - Journal of Machine Learning Research, 2019 - jmlr.org
The promise of augmenting accurate predictions provided by modern neural networks with
well-calibrated predictive uncertainties has reinvigorated interest in Bayesian neural …

Model selection in Bayesian neural networks via horseshoe priors

S Ghosh, F Doshi-Velez - arXiv preprint arXiv:1705.10388, 2017 - arxiv.org
Bayesian Neural Networks (BNNs) have recently received increasing attention for their
ability to provide well-calibrated posterior uncertainties. However, model selection---even …

Smooth group L1/2 regularization for input layer of feedforward neural networks

F Li, JM Zurada, W Wu - Neurocomputing, 2018 - Elsevier
A smooth group regularization method is proposed to identify and remove the redundant
input nodes of feedforward neural networks, or equivalently the redundant dimensions of the …

Group feature selection with multiclass support vector machine

F Tang, L Adam, B Si - Neurocomputing, 2018 - Elsevier
Feature reduction is nowadays an important topic in machine learning as it reduces the
complexity of the final model and makes it easier to interpret. In some applications, the …

Spike-and-Slab Shrinkage Priors for Structurally Sparse Bayesian Neural Networks

S Jantre, S Bhattacharya, T Maiti - IEEE Transactions on Neural …, 2024 - ieeexplore.ieee.org
Network complexity and computational efficiency have become increasingly significant
aspects of deep learning. Sparse deep learning addresses these challenges by recovering …

Optimizing for interpretability in deep neural networks with tree regularization

M Wu, S Parbhoo, MC Hughes, V Roth… - Journal of Artificial …, 2021 - jair.org
Deep models have advanced prediction in many domains, but their lack of interpretability
remains a key barrier to their adoption in many real-world applications. There exists a large …

A comprehensive study of spike and slab shrinkage priors for structurally sparse Bayesian neural networks

S Jantre, S Bhattacharya, T Maiti - arXiv preprint arXiv:2308.09104, 2023 - arxiv.org
Network complexity and computational efficiency have become increasingly significant
aspects of deep learning. Sparse deep learning addresses these challenges by recovering …

Group Regularization for Pruning Hidden Layer Nodes of Feedforward Neural Networks

HZ Alemu, J Zhao, F Li, W Wu - IEEE Access, 2019 - ieeexplore.ieee.org
A group L1/2 regularization term is defined and introduced into the conventional error
function for pruning the hidden layer nodes of feedforward neural networks. This group L1/2 …