Understanding clipping for federated learning: Convergence and client-level differential privacy
Providing privacy protection has been one of the primary motivations of Federated Learning
(FL). Recently, there has been a line of work on incorporating the formal privacy notion of …
Generalization bounds using data-dependent fractal dimensions
Providing generalization guarantees for modern neural networks has been a crucial task in
statistical learning. Recently, several studies have attempted to analyze the generalization …
Instance-dependent generalization bounds via optimal transport
Existing generalization bounds fail to explain crucial factors that drive the generalization of
modern neural networks. Since such bounds often hold uniformly over all parameters, they …
Total deep variation: A stable regularization method for inverse problems
Various problems in computer vision and medical imaging can be cast as inverse problems.
A frequent method for solving inverse problems is the variational approach, which amounts …
Learning to continuously optimize wireless resource in a dynamic environment: A bilevel optimization perspective
There has been a growing interest in developing data-driven, and in particular deep neural
network (DNN) based methods for modern communication tasks. These methods achieve …
Semialgebraic representation of monotone deep equilibrium models and applications to certification
Deep equilibrium models are based on implicitly defined functional relations and have
shown competitive performance compared with the traditional deep networks. Monotone …
Chordal sparsity for Lipschitz constant estimation of deep neural networks
Computing Lipschitz constants of neural networks allows for robustness guarantees in
image classification, safety in controller design, and generalization beyond the training data …
Improving neural network robustness via persistency of excitation
Improving adversarial robustness of neural networks remains a major challenge.
Fundamentally, training a neural network via gradient descent is a parameter estimation …
Neural jump ordinary differential equations: Consistent continuous-time prediction and filtering
Combinations of neural ODEs with recurrent neural networks (RNNs), like GRU-ODE-Bayes
or ODE-RNN, are well suited to model irregularly observed time series. While those models …
Distributed Momentum Methods Under Biased Gradient Estimations
Distributed stochastic gradient methods are gaining prominence in solving large-scale
machine learning problems that involve data distributed across multiple nodes. However …