Sparse neural additive model: Interpretable deep learning with feature selection via group sparsity
Interpretable machine learning has demonstrated impressive performance while preserving
explainability. In particular, neural additive models (NAM) offer the interpretability to the …
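To make the additive structure concrete, here is a minimal sketch of a NAM-style model in PyTorch. The `FeatureNet` sizes and the group penalty are illustrative stand-ins under my own assumptions, not the paper's exact architecture or regularizer: the prediction is a sum of learned one-dimensional shape functions, and a group-sparsity penalty (one group per feature net) can zero out entire features, which is the feature-selection mechanism the title refers to.

```python
import torch
import torch.nn as nn

class FeatureNet(nn.Module):
    """Small MLP applied to a single scalar feature (sizes are illustrative)."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):  # x: (batch, 1)
        return self.net(x)

class NAM(nn.Module):
    """Additive model: the prediction is a sum of per-feature shape functions."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.feature_nets = nn.ModuleList(FeatureNet(hidden) for _ in range(n_features))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):  # x: (batch, n_features)
        contribs = [net(x[:, j:j + 1]) for j, net in enumerate(self.feature_nets)]
        return self.bias + torch.cat(contribs, dim=1).sum(dim=1)

def group_sparsity_penalty(model):
    """One group per feature: summing unsquared L2 norms of each feature net's
    input-layer weights can drive whole feature nets to zero (feature selection)."""
    return sum(net.net[0].weight.norm(p=2) for net in model.feature_nets)
```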
Spectral estimators for structured generalized linear models via approximate message passing
We consider the problem of parameter estimation in a high-dimensional generalized linear
model. Spectral methods obtained via the principal eigenvector of a suitable data …
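For background on what a spectral method of this type computes, a hedged numpy sketch: the signal direction is estimated by the leading eigenvector of a preprocessed second-moment matrix. The identity preprocessing `T` is a placeholder; choosing `T` optimally is the kind of question this line of work addresses.

```python
import numpy as np

def spectral_estimate(X, y, T=lambda y: y):
    """Leading eigenvector of D = (1/n) * sum_i T(y_i) x_i x_i^T.
    T = identity is a placeholder, not the paper's optimal preprocessing."""
    n, _ = X.shape
    D = (X * T(y)[:, None]).T @ X / n   # d x d symmetric matrix
    _, eigvecs = np.linalg.eigh(D)      # eigenvalues in ascending order
    return eigvecs[:, -1]               # top eigenvector (sign is ambiguous)

# Toy check on noiseless phase retrieval, y_i = (x_i^T beta)^2:
rng = np.random.default_rng(0)
n, d = 5000, 50
X = rng.standard_normal((n, d))
beta = np.ones(d) / np.sqrt(d)
beta_hat = spectral_estimate(X, (X @ beta) ** 2)
print(abs(beta_hat @ beta))             # close to 1: direction recovered
```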
Near-optimal multiple testing in Bayesian linear models with finite-sample FDR control
In high dimensional variable selection problems, statisticians often seek to design multiple
testing procedures that control the False Discovery Rate (FDR), while concurrently …
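As a frequentist reference point for what controlling the FDR means procedurally, below is the classical Benjamini-Hochberg step-up rule in numpy. This is background only: the paper's procedure is Bayesian and is not BH.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.1):
    """Step-up rule: reject the k smallest p-values for the largest k
    with p_(k) <= alpha * k / m."""
    pvals = np.asarray(pvals)
    m = len(pvals)
    order = np.argsort(pvals)
    below = pvals[order] <= alpha * np.arange(1, m + 1) / m
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max() + 1   # number of rejections
        rejected[order[:k]] = True
    return rejected

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.6], alpha=0.1))
# -> [ True  True  True  True False]
```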
The price of competition: Effect size heterogeneity matters in high dimensions
In high-dimensional sparse regression, would increasing the signal-to-noise ratio while
fixing the sparsity level always lead to better model selection? For high-dimensional sparse …
Weak pattern convergence for SLOPE and its robust versions
The Sorted L-One Estimator (SLOPE) is a popular regularization method in regression,
which induces clustering of the estimated coefficients. That is, the estimator can have …
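For reference, the standard SLOPE objective mentioned here, in the usual notation (not necessarily this paper's):

```latex
\hat{\beta} \in \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p}
\ \frac{1}{2}\lVert y - X\beta \rVert_2^2
+ \sum_{i=1}^{p} \lambda_i \lvert \beta \rvert_{(i)},
\qquad \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p \ge 0,
```

where $\lvert\beta\rvert_{(1)} \ge \cdots \ge \lvert\beta\rvert_{(p)}$ are the absolute coefficients in decreasing order. Because coordinates of equal magnitude share a penalty level, distinct coefficients can be fit as exactly equal, which is the clustering behavior the snippet describes.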
Asymptotic statistical analysis of sparse group lasso via approximate message passing algorithm
Sparse Group LASSO (SGL) is a regularized model for high-dimensional linear regression
problems with grouped covariates. SGL applies $\ell_1$ and $\ell_2$ penalties on the …
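For context, the usual SGL objective combining the two penalties; the $\sqrt{p_g}$ group weights follow the common convention and may differ from this paper's setup:

```latex
\hat{\beta} \in \operatorname*{arg\,min}_{\beta}
\ \frac{1}{2n}\lVert y - X\beta \rVert_2^2
+ \alpha\lambda \lVert \beta \rVert_1
+ (1-\alpha)\lambda \sum_{g=1}^{G} \sqrt{p_g}\, \lVert \beta_g \rVert_2,
```

where $\beta_g$ is the sub-vector for group $g$ of size $p_g$ and $\alpha \in [0,1]$ trades off element-wise against group-wise sparsity.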
Near-optimal multiple testing procedure in Bayesian linear models
T Ahn - 2024 - search.proquest.com
In the classical hypothesis testing paradigm, statisticians aim to propose testing
methodologies that effectively manage statistical significance while maximizing power, i.e., …
Regularized Regression in High Dimensions: Asymptotics, Optimality and Universality
H Hu - 2021 - search.proquest.com
Regularized regression is a classical method for statistical estimation and learning. It has
now been successfully used in many applications including communications, biology …
SLOPE for sparse linear regression: asymptotics and optimal regularization
In sparse linear regression, the SLOPE estimator generalizes LASSO by penalizing different
coordinates of the estimate according to their magnitudes. In this paper, we present a …
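The proximal operator of the sorted-$\ell_1$ penalty is the basic computational primitive behind SLOPE and its AMP-style asymptotic analyses. Below is a compact numpy sketch of the standard pool-adjacent-violators construction; it is a reference implementation under the usual conventions, not code from the paper.

```python
import numpy as np

def prox_sorted_l1(v, lam):
    """Prox of the sorted-L1 (SLOPE) penalty; lam must be non-increasing.
    Sort |v| descending, subtract lam, project onto the non-increasing cone
    by pool-adjacent-violators, clip at zero, then restore order and signs."""
    sign = np.sign(v)
    u = np.abs(v)
    order = np.argsort(-u)          # indices sorting |v| in decreasing order
    w = u[order] - lam
    sums, counts, vals = [], [], []  # PAV blocks: sum, length, average
    for x in w:
        sums.append(x); counts.append(1); vals.append(x)
        while len(vals) > 1 and vals[-2] <= vals[-1]:   # merge violating blocks
            s = sums.pop(); c = counts.pop(); vals.pop()
            sums[-1] += s; counts[-1] += c
            vals[-1] = sums[-1] / counts[-1]
    z = np.concatenate([np.full(c, max(val, 0.0)) for val, c in zip(vals, counts)])
    out = np.empty_like(v, dtype=float)
    out[order] = z
    return sign * out

# Toy example with a non-increasing penalty sequence:
print(prox_sorted_l1(np.array([3.0, -1.0, 2.0]), np.array([1.5, 1.0, 0.5])))
# -> [ 1.5 -0.5  1. ]
```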
[BOOK][B] Topics in Exact Asymptotics for High-Dimensional Regression
M Celentano - 2021 - search.proquest.com
Exact asymptotic theory refers to a collection of techniques for precisely characterizing the
distribution of high-dimensional regression estimators. Examples of the estimators it …