Fast composite optimization and statistical recovery in federated learning

Y Bao, M Crawshaw, S Luo… - … Conference on Machine …, 2022 - proceedings.mlr.press
As a prevalent distributed learning paradigm, Federated Learning (FL) trains a global model
on a massive number of devices with infrequent communication. This paper investigates a …

Communication-efficient and byzantine-robust distributed learning with statistical guarantee

X Zhou, L Chang, P Xu, S Lv - Pattern Recognition, 2023 - Elsevier
Communication efficiency and robustness are two major issues in modern distributed
learning frameworks. This is due to the practical situations where some computing nodes …

A novel robust adaptive subspace learning framework for dimensionality reduction

W Xiong, G Yu, J Ma, S Liu - Applied Intelligence, 2024 - Springer
High-dimensional data is characterized by its sparsity and noise, which can increase the
likelihood of overfitting and compromise the model's generalization performance. In this …

Robust communication-efficient distributed composite quantile regression and variable selection for massive data

K Wang, S Li, B Zhang - Computational Statistics & Data Analysis, 2021 - Elsevier
Statistical analysis of massive data is becoming increasingly common. Distributed
composite quantile regression (CQR) for massive data is proposed in this paper …

Byzantine-robust and efficient distributed sparsity learning: a surrogate composite quantile regression approach

C Chen, Z Zhu - Statistics and Computing, 2024 - Springer
Distributed statistical learning has gained significant traction recently, mainly due to the
availability of unprecedentedly massive datasets. The objective of distributed statistical …

Communication-efficient low-dimensional parameter estimation and inference for high-dimensional L^p-quantile regression

J Gao, L Wang - Scandinavian Journal of Statistics, 2024 - Wiley Online Library
The L^p-quantile regression generalizes both quantile regression and expectile
regression, and has become popular for its robustness and effectiveness especially when …

Efficient Byzantine-robust distributed inference with regularization: A trade-off between compression and adversary

X Zhou, G Yang, L Chang, S Lv - Information Sciences, 2024 - Elsevier
In large-scale distributed learning, the direct application of traditional inference is often not
feasible, because it may involve multiple concerns, such as communication costs, privacy …

Distributed semi-supervised sparse statistical inference

J Tu, W Liu, X Mao, M Xu - IEEE Transactions on Information …, 2023 - ieeexplore.ieee.org
The debiased estimator is a crucial tool in statistical inference for high-dimensional model
parameters. However, constructing such an estimator involves estimating the high …

High-dimensional M-estimation for Byzantine-robust decentralized learning

X Zhang, L Wang - Information Sciences, 2024 - Elsevier
In this paper, we focus on robust sparse M-estimation over decentralized networks in the
presence of Byzantine attacks. In particular, a decentralized network is modeled as an …

Adaptive weighted approach for high-dimensional statistical learning and inference

J Lu, X Ma, M Li, C Hou - Applied Mathematical Modelling, 2025 - Elsevier
We propose a new weighted average estimator for high-dimensional parameters under the
distributed learning system, where the weight assigned to each coordinate across different …