Mirror Descent Algorithms with Nearly Dimension-Independent Rates for Differentially-Private Stochastic Saddle-Point Problems

T González, C Guzmán, C Paquette - arXiv preprint arXiv:2403.02912, 2024 - arxiv.org
We study the problem of differentially-private (DP) stochastic (convex-concave) saddle-points in the polyhedral setting. We propose $(\varepsilon,\delta)$-DP algorithms based on …
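For reference (standard definitions, not this paper's specific construction): an algorithm $\mathcal{A}$ is $(\varepsilon,\delta)$-DP if for all neighboring datasets $S, S'$ and events $O$, $\Pr[\mathcal{A}(S) \in O] \le e^{\varepsilon}\,\Pr[\mathcal{A}(S') \in O] + \delta$; a stochastic saddle-point problem takes the form $\min_{x \in \mathcal{X}} \max_{y \in \mathcal{Y}} \mathbb{E}_{\xi}[f(x, y; \xi)]$ with $f$ convex-concave.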

LPGD: A General Framework for Backpropagation through Embedded Optimization Layers

A Paulus, G Martius, V Musil - arXiv preprint arXiv:2407.05920, 2024 - arxiv.org
Embedding parameterized optimization problems as layers into machine learning
architectures serves as a powerful inductive bias. Training such architectures with stochastic …
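For background (the standard framing of optimization layers, not the method proposed here): such a layer maps parameters $\theta$ to a solution $y^\star(\theta) = \arg\min_{y \in \mathcal{C}} f(y, \theta)$, so end-to-end training needs (an approximation of) the Jacobian $\partial y^\star / \partial \theta$, typically obtained by implicit differentiation or by smoothed surrogates.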

Joint Learning of Energy-based Models and their Partition Function

ME Sander, V Roulet, T Liu, M Blondel - arXiv preprint arXiv:2501.18528, 2025 - arxiv.org
Energy-based models (EBMs) offer a flexible framework for parameterizing probability
distributions using neural networks. However, learning EBMs by exact maximum likelihood …
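For background (standard EBM notation, not this paper's estimator): an EBM defines $p_\theta(x) = \exp(-E_\theta(x)) / Z(\theta)$ with partition function $Z(\theta) = \int \exp(-E_\theta(x))\,dx$; the maximum-likelihood gradient $\nabla_\theta \log p_\theta(x) = -\nabla_\theta E_\theta(x) + \mathbb{E}_{x' \sim p_\theta}[\nabla_\theta E_\theta(x')]$ is hard to compute exactly because $Z(\theta)$ and samples from $p_\theta$ are generally intractable.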

Learning with Fitzpatrick Losses

S Rakotomandimby, JP Chancelier, M De Lara… - arXiv preprint arXiv …, 2024 - arxiv.org
Fenchel-Young losses are a family of convex loss functions, encompassing the squared,
logistic and sparsemax losses, among others. Each Fenchel-Young loss is implicitly …
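For reference (the standard definition from prior work on Fenchel-Young losses, not the Fitzpatrick construction of this paper): given a convex regularizer $\Omega$, the Fenchel-Young loss is $L_\Omega(\theta; y) = \Omega^*(\theta) + \Omega(y) - \langle \theta, y \rangle$; it is nonnegative by the Fenchel-Young inequality, and choosing $\Omega(y) = \tfrac{1}{2}\|y\|^2$ recovers the squared loss while the negative Shannon entropy on the simplex recovers the logistic loss.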

Machine learning and combinatorial optimization algorithms, with applications to railway planning

G Dalle - 2022 - pastel.hal.science
This thesis investigates the frontier between machine learning and combinatorial
optimization, two active areas of applied mathematics research. We combine theoretical …

A dual-receptor model of serotonergic psychedelics: therapeutic insights from simulated cortical dynamics

A Juliani, V Chelu, L Graesser, A Safron - bioRxiv, 2024 - biorxiv.org
Serotonergic psychedelics have been identified as promising next-generation therapeutic
agents in the treatment of mood and anxiety disorders. While their efficacy has been …

Stochastic first-order methods for differentially private machine learning

TG Lara - 2023 - search.proquest.com
In this thesis we study, from a theoretical point of view, two relevant problems in the fields of machine learning and data analysis under privacy constraints: the …

Lagrangian Proximal Gradient Descent for Learning Convex Optimization Models

A Paulus, G Martius, V Musil - openreview.net
We propose Lagrangian Proximal Gradient Descent (LPGD), a flexible framework for
learning convex optimization models. Similar to traditional proximal gradient methods, LPGD …
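As a point of comparison (the classical method the snippet alludes to, not LPGD itself): proximal gradient descent on a composite objective $f(x) + g(x)$ iterates $x_{k+1} = \mathrm{prox}_{\eta g}\big(x_k - \eta \nabla f(x_k)\big)$, where $\mathrm{prox}_{\eta g}(v) = \arg\min_x \tfrac{1}{2}\|x - v\|^2 + \eta\, g(x)$.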

Apprenticeship learning: transferring human motivations to artificial agents

L Hussenot - 2022 - lilloa.univ-lille.fr
Reinforcement learning is a generic mathematical and algorithmic framework that aims to develop algorithms that interact with their environment and …