Sparse neural additive model: Interpretable deep learning with feature selection via group sparsity

S Xu, Z Bu, P Chaudhari, IJ Barnett - Joint European Conference on …, 2023 - Springer
Interpretable machine learning has demonstrated impressive performance while preserving
explainability. In particular, neural additive models (NAM) offer the interpretability to the …
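As a rough illustration of the idea (a minimal sketch, assuming one small MLP per input feature whose outputs are summed, the NAM structure, with a group-lasso penalty treating each feature subnetwork's parameters as one group; names, sizes, and the exact penalty form are illustrative, not the paper's formulation):

import torch
import torch.nn as nn

class SparseNAM(nn.Module):
    # Additive model: f(x) = bias + sum_j f_j(x_j), one subnet per feature.
    def __init__(self, n_features, hidden=16):
        super().__init__()
        self.subnets = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_features)
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):  # x: (batch, n_features)
        contribs = [net(x[:, j:j + 1]) for j, net in enumerate(self.subnets)]
        return (self.bias + sum(contribs)).squeeze(-1)

def group_sparsity_penalty(model):
    # One l2 norm per feature subnetwork; penalizing the sum of these
    # norms pushes entire feature networks to zero (feature selection).
    eps = 1e-12  # avoids a non-differentiable point at exactly zero
    return sum(
        torch.sqrt(sum(p.pow(2).sum() for p in net.parameters()) + eps)
        for net in model.subnets
    )

# Hypothetical usage: penalized least squares on random data.
model = SparseNAM(n_features=10)
X, y = torch.randn(32, 10), torch.randn(32)
loss = ((model(X) - y) ** 2).mean() + 0.1 * group_sparsity_penalty(model)
loss.backward()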

Spectral estimators for structured generalized linear models via approximate message passing

Y Zhang, HC Ji, R Venkataramanan… - The Thirty Seventh …, 2024 - proceedings.mlr.press
We consider the problem of parameter estimation in a high-dimensional generalized linear
model. Spectral methods obtained via the principal eigenvector of a suitable data …
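In outline, the spectral estimator is the principal eigenvector of a matrix built from a scalar preprocessing of the responses; in the sketch below the preprocessing T(y) = y is only a placeholder (choosing T well, especially under structured designs, is the subject of the paper):

import numpy as np

def spectral_estimate(X, y, T=lambda y: y):
    # Top eigenvector of D = (1/n) * sum_i T(y_i) x_i x_i^T.
    # T is a placeholder preprocessing, not the paper's optimal choice.
    n = X.shape[0]
    D = (X * T(y)[:, None]).T @ X / n      # (p, p) weighted second moment
    eigvals, eigvecs = np.linalg.eigh(D)   # symmetric eigendecomposition
    return eigvecs[:, -1]                  # eigenvector of largest eigenvalue

# Toy usage with a noiseless phase-retrieval-style model y = (x^T beta)^2.
rng = np.random.default_rng(0)
n, p = 2000, 50
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p); beta /= np.linalg.norm(beta)
y = (X @ beta) ** 2
beta_hat = spectral_estimate(X, y)
print(abs(beta_hat @ beta))  # alignment |cos angle|, close to 1 here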

Near-optimal multiple testing in Bayesian linear models with finite-sample FDR control

T Ahn, L Lin, S Mei - arXiv preprint arXiv:2211.02778, 2022 - arxiv.org
In high dimensional variable selection problems, statisticians often seek to design multiple
testing procedures that control the False Discovery Rate (FDR), while concurrently …
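For orientation, the classical Benjamini-Hochberg step-up rule is the standard frequentist FDR-controlling baseline that such procedures are compared against (this is BH, not the paper's Bayesian procedure):

import numpy as np

def benjamini_hochberg(pvals, q=0.1):
    # BH step-up: reject the k smallest p-values, where
    # k = max{i : p_(i) <= q * i / m}. Returns a boolean rejection mask.
    m = len(pvals)
    order = np.argsort(pvals)
    thresholds = q * np.arange(1, m + 1) / m
    below = pvals[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # last sorted index meeting the bound
        reject[order[:k + 1]] = True
    return reject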

The price of competition: Effect size heterogeneity matters in high dimensions

H Wang, Y Yang, WJ Su - IEEE Transactions on Information …, 2022 - ieeexplore.ieee.org
In high-dimensional sparse regression, would increasing the signal-to-noise ratio while
fixing the sparsity level always lead to better model selection? For high-dimensional sparse …

Weak pattern convergence for SLOPE and its robust versions

I Hejný, J Wallin, M Bogdan - arXiv preprint arXiv:2303.10970, 2023 - arxiv.org
The Sorted L-One Estimator (SLOPE) is a popular regularization method in regression,
which induces clustering of the estimated coefficients. That is, the estimator can have …
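The "pattern" of an estimate is, roughly, which coefficients are zero, their signs, and which share a common magnitude. A small sketch of extracting that structure from a fitted vector; the signed-rank encoding below is one plausible convention and may differ in details from the paper's definition:

import numpy as np

def slope_pattern(beta, tol=1e-8):
    # 0 for (near-)zero entries; otherwise sign(beta_j) times the rank of
    # |beta_j| among the distinct nonzero magnitudes (1 = smallest cluster).
    mags = np.where(np.abs(beta) <= tol, 0.0, np.abs(beta))
    distinct = np.unique(mags[mags > 0])          # sorted ascending
    rank = np.searchsorted(distinct, mags) + 1    # cluster index per entry
    return np.where(mags == 0, 0, np.sign(beta) * rank).astype(int)

# Tied magnitudes form one cluster: beta = (2, -2, 0, 0.5)
print(slope_pattern(np.array([2.0, -2.0, 0.0, 0.5])))  # -> [ 2 -2  0  1]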

Asymptotic statistical analysis of sparse group lasso via approximate message passing algorithm

K Chen, Z Bu, S Xu - arXiv preprint arXiv:2107.01266, 2021 - arxiv.org
Sparse Group LASSO (SGL) is a regularized model for high-dimensional linear regression
problems with grouped covariates. SGL applies $\ell_1$ and $\ell_2$ penalties on the …
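For concreteness, a sketch of the penalty in the common convex-combination form, $\lambda (\alpha \|\beta\|_1 + (1-\alpha) \sum_g \sqrt{p_g} \|\beta_g\|_2)$; the $\sqrt{p_g}$ group weights are one standard convention, assumed here rather than taken from this paper:

import numpy as np

def sgl_penalty(beta, groups, lam, alpha=0.5):
    # groups: list of index arrays partitioning the coordinates of beta.
    # The l1 part gives within-group sparsity; the weighted group-l2 part
    # zeroes out whole groups at once.
    l1 = np.abs(beta).sum()
    l2 = sum(np.sqrt(len(g)) * np.linalg.norm(beta[g]) for g in groups)
    return lam * (alpha * l1 + (1 - alpha) * l2)

# Two groups, of sizes 2 and 3:
beta = np.array([1.0, -2.0, 0.0, 0.0, 3.0])
print(sgl_penalty(beta, [np.arange(2), np.arange(2, 5)], lam=0.1))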

Near-optimal multiple testing procedure in Bayesian linear models

T Ahn - 2024 - search.proquest.com
In the classical hypothesis testing paradigm, statisticians aim to propose testing
methodologies that effectively manage statistical significance while maximizing power, i.e. …

Regularized Regression in High Dimensions: Asymptotics, Optimality and Universality

H Hu - 2021 - search.proquest.com
Regularized regression is a classical method for statistical estimation and learning. It has
now been successfully used in many applications including communications, biology …

SLOPE for sparse linear regression: asymptotics and optimal regularization

H Hu, YM Lu - IEEE Transactions on Information Theory, 2022 - ieeexplore.ieee.org
In sparse linear regression, the SLOPE estimator generalizes LASSO by penalizing different
coordinates of the estimate according to their magnitudes. In this paper, we present a …
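Concretely, SLOPE's penalty is the sorted-$\ell_1$ norm: a nonincreasing sequence $\lambda_1 \ge \dots \ge \lambda_p$ applied to the magnitudes of the estimate sorted in decreasing order (with all $\lambda_i$ equal it reduces to the LASSO penalty). A direct evaluation:

import numpy as np

def slope_penalty(beta, lam):
    # Sorted-l1 norm: sum_i lam_i * |beta|_(i), with lam nonincreasing
    # and |beta|_(1) >= |beta|_(2) >= ... the sorted magnitudes.
    assert np.all(np.diff(lam) <= 0), "lam must be nonincreasing"
    return float(np.sort(np.abs(beta))[::-1] @ lam)

beta = np.array([0.5, -3.0, 1.0])
lam = np.array([1.0, 0.5, 0.25])
print(slope_penalty(beta, lam))  # 1.0*3.0 + 0.5*1.0 + 0.25*0.5 = 3.625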

[BOOK][B] Topics in Exact Asymptotics for High-Dimensional Regression

M Celentano - 2021 - search.proquest.com
Exact asymptotic theory refers to a collection of techniques for precisely characterizing the
distribution of high-dimensional regression estimators. Examples of the estimators it …