Understanding clipping for federated learning: Convergence and client-level differential privacy

X Zhang, X Chen, M Hong, ZS Wu, J Yi - International Conference on …, 2022 - par.nsf.gov
Providing privacy protection has been one of the primary motivations of Federated Learning
(FL). Recently, there has been a line of work on incorporating the formal privacy notion of …
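
A minimal sketch of the mechanism this line of work analyzes, assuming per-round L2 clipping of client updates followed by Gaussian-mechanism noise at the server; the function names and constants below are illustrative, not the authors' exact algorithm:

```python
import numpy as np

def clip_update(update, c):
    """Scale a client's update so its L2 norm is at most c."""
    norm = np.linalg.norm(update)
    return update * min(1.0, c / max(norm, 1e-12))

def private_aggregate(client_updates, c, noise_multiplier, rng):
    """Average clipped updates and add Gaussian noise.

    Clipping caps each client's contribution at c, so the average has
    per-client sensitivity c / n, and calibrated Gaussian noise then
    yields a client-level differential privacy guarantee.
    """
    n = len(client_updates)
    avg = sum(clip_update(u, c) for u in client_updates) / n
    noise = rng.normal(0.0, noise_multiplier * c / n, size=avg.shape)
    return avg + noise

rng = np.random.default_rng(0)
updates = [rng.normal(size=10) for _ in range(8)]
server_step = private_aggregate(updates, c=1.0, noise_multiplier=1.1, rng=rng)
```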

Generalization bounds using data-dependent fractal dimensions

B Dupuis, G Deligiannidis… - … Conference on Machine …, 2023 - proceedings.mlr.press
Providing generalization guarantees for modern neural networks has been a crucial task in
statistical learning. Recently, several studies have attempted to analyze the generalization …

Instance-dependent generalization bounds via optimal transport

S Hou, P Kassraie, A Kratsios, A Krause… - The Journal of Machine …, 2023 - dl.acm.org
Existing generalization bounds fail to explain crucial factors that drive the generalization of
modern neural networks. Since such bounds often hold uniformly over all parameters, they …

Total deep variation: A stable regularization method for inverse problems

E Kobler, A Effland, K Kunisch… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Various problems in computer vision and medical imaging can be cast as inverse problems.
A frequent method for solving inverse problems is the variational approach, which amounts …
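
A minimal sketch of the variational approach the abstract refers to, assuming a linear forward operator and a smooth finite-difference regularizer standing in for the learned total-deep-variation prior; all names and constants are illustrative:

```python
import numpy as np

def solve_variational(A, y, lam, steps=2000):
    """Gradient descent on 0.5||A x - y||^2 + lam * 0.5||D x||^2.

    D is a discrete-gradient (finite-difference) operator, so the
    regularizer here is a smooth quadratic stand-in for the learned
    total-deep-variation energy, which is minimized the same way.
    """
    n = A.shape[1]
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]            # discrete gradient
    lr = 1.0 / (np.linalg.norm(A, 2) ** 2 + 4.0 * lam)  # safe step size
    x = np.zeros(n)
    for _ in range(steps):
        grad = A.T @ (A @ x - y) + lam * (D.T @ (D @ x))
        x -= lr * grad
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 20))
x_true = np.cumsum(rng.normal(size=20))    # smooth-ish ground truth
y = A @ x_true + 0.1 * rng.normal(size=30)
x_hat = solve_variational(A, y, lam=0.5)
```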

Learning to continuously optimize wireless resource in a dynamic environment: A bilevel optimization perspective

H Sun, W Pu, X Fu, TH Chang… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
There has been a growing interest in developing data-driven, and in particular deep neural
network (DNN) based, methods for modern communication tasks. These methods achieve …

Semialgebraic representation of monotone deep equilibrium models and applications to certification

T Chen, JB Lasserre, V Magron… - Advances in Neural …, 2021 - proceedings.neurips.cc
Deep equilibrium models are based on implicitly defined functional relations and have
shown competitive performance compared with the traditional deep networks. Monotone …
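
A minimal sketch of what "implicitly defined" means for a DEQ layer, assuming a ReLU fixed-point map solved by plain iteration; the spectral-norm condition used here is a simple sufficient stand-in for the paper's monotonicity assumption:

```python
import numpy as np

def deq_forward(W, U, b, x, tol=1e-8, max_iter=1000):
    """Compute a DEQ layer's output: the fixed point z = relu(W z + U x + b).

    ReLU is 1-Lipschitz, so if the spectral norm of W is below 1 the map
    is a contraction and the iteration converges to the unique equilibrium
    (monotone DEQs guarantee uniqueness under a weaker condition).
    """
    z = np.zeros(W.shape[0])
    for _ in range(max_iter):
        z_next = np.maximum(W @ z + U @ x + b, 0.0)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

rng = np.random.default_rng(2)
W = rng.normal(size=(6, 6))
W *= 0.9 / np.linalg.norm(W, 2)            # enforce ||W||_2 < 1
U, b, x = rng.normal(size=(6, 3)), rng.normal(size=6), rng.normal(size=3)
z_star = deq_forward(W, U, b, x)
```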

Chordal sparsity for Lipschitz constant estimation of deep neural networks

A Xue, L Lindemann, A Robey… - 2022 IEEE 61st …, 2022 - ieeexplore.ieee.org
Computing Lipschitz constants of neural networks allows for robustness guarantees in
image classification, safety in controller design, and generalization beyond the training data …
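
For context, the standard coarse estimate such methods improve on is the product of layer spectral norms; the sketch below shows that baseline, not the paper's chordal-sparsity SDP:

```python
import numpy as np

def lipschitz_upper_bound(weights):
    """Product of layer spectral norms: the classic coarse upper bound
    on a ReLU network's Lipschitz constant (activations are 1-Lipschitz).
    SDP-based estimates such as the paper's are tighter; this baseline
    is what they improve on.
    """
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, 2)      # largest singular value
    return bound

rng = np.random.default_rng(3)
layers = [rng.normal(size=(16, 8)), rng.normal(size=(8, 16)), rng.normal(size=(1, 8))]
print(lipschitz_upper_bound(layers))
```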

Improving neural network robustness via persistency of excitation

K Sridhar, O Sokolsky, I Lee… - 2022 American Control …, 2022 - ieeexplore.ieee.org
Improving adversarial robustness of neural networks remains a major challenge.
Fundamentally, training a neural network via gradient descent is a parameter estimation …

Neural jump ordinary differential equations: Consistent continuous-time prediction and filtering

C Herrera, F Krach, J Teichmann - arXiv preprint arXiv:2006.04727, 2020 - arxiv.org
Combinations of neural ODEs with recurrent neural networks (RNN), like GRU-ODE-Bayes
or ODE-RNN are well suited to model irregularly observed time series. While those models …
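
A toy sketch of the ODE-RNN pattern the abstract describes, assuming explicit-Euler integration between observations and a simple tanh recurrence at each observation; the random weights are placeholders for trained parameters:

```python
import numpy as np

def ode_rnn(times, values, h_dim=8, dt=0.05, rng=None):
    """Toy ODE-RNN pass over an irregularly sampled scalar series.

    Between observations the hidden state follows dh/dt = f(h),
    integrated with explicit Euler; at each observation the state
    jumps via an RNN-style update, matching the continuous-evolve /
    discrete-update structure of ODE-RNN and GRU-ODE-Bayes.
    """
    rng = rng or np.random.default_rng(0)
    Wf = 0.1 * rng.normal(size=(h_dim, h_dim))   # ODE dynamics weights
    Wh = 0.1 * rng.normal(size=(h_dim, h_dim))   # RNN recurrence weights
    Wx = 0.1 * rng.normal(size=(h_dim, 1))       # RNN input weights
    h, t = np.zeros(h_dim), times[0]
    for t_obs, x_obs in zip(times, values):
        while t < t_obs:                         # evolve between observations
            h = h + dt * np.tanh(Wf @ h)
            t += dt
        h = np.tanh(Wh @ h + Wx @ np.array([x_obs]))  # jump at observation
    return h

h_final = ode_rnn(times=[0.0, 0.3, 1.1, 1.25], values=[0.5, -0.2, 0.9, 0.1])
```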

Distributed momentum methods under biased gradient estimations

A Beikmohammadi, S Khirirat, S Magnússon - arXiv preprint arXiv …, 2024 - arxiv.org
Distributed stochastic gradient methods are gaining prominence in solving large-scale
machine learning problems that involve data distributed across multiple nodes. However …
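
A minimal sketch of the setting, assuming top-k sparsification as the source of bias and server-side heavy-ball momentum; `top_k`, `distributed_momentum`, and all constants are illustrative, not the paper's algorithm:

```python
import numpy as np

def top_k(g, k):
    """Top-k sparsification: a common source of *biased* gradient estimates."""
    out = np.zeros_like(g)
    idx = np.argsort(np.abs(g))[-k:]
    out[idx] = g[idx]
    return out

def distributed_momentum(grad_fns, x0, lr=0.05, beta=0.9, k=2, steps=200):
    """Heavy-ball momentum on the average of biased per-node gradients.

    grad_fns holds one stochastic-gradient callable per node; top-k
    compression makes each node's estimate biased, which is the regime
    the paper studies.
    """
    x, m = x0.copy(), np.zeros_like(x0)
    for _ in range(steps):
        g = np.mean([top_k(f(x), k) for f in grad_fns], axis=0)
        m = beta * m + g
        x = x - lr * m
    return x

rng = np.random.default_rng(4)
targets = [rng.normal(size=5) for _ in range(4)]   # one quadratic per node
grad_fns = [lambda x, t=t: (x - t) + 0.01 * rng.normal(size=5) for t in targets]
x_star = distributed_momentum(grad_fns, x0=np.zeros(5))
```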