Introduction to online convex optimization

E Hazan - Foundations and Trends® in Optimization, 2016 - nowpublishers.com
This monograph portrays optimization as a process. In many practical applications the
environment is so complex that it is infeasible to lay out a comprehensive theoretical model …

KNN classification with one-step computation

S Zhang, J Li - IEEE Transactions on Knowledge and Data …, 2021 - ieeexplore.ieee.org
KNN classification is a lazy, on-demand learning mode: it is carried out only when a test sample is to be predicted, at which point a suitable K value is set and the K nearest neighbors are searched …
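This snippet describes the lazy-evaluation character of KNN classification. A minimal illustrative sketch of that idea, plain brute-force nearest-neighbor voting rather than the paper's one-step computation, assuming numeric feature tuples and a hypothetical `knn_predict` helper:

```python
# Minimal lazy KNN sketch (illustrative only, not the paper's method):
# no training phase; neighbors are found at prediction time.
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Predict the label of x by majority vote among its k nearest neighbors."""
    # Squared Euclidean distance from x to every training point.
    dists = [(sum((a - b) ** 2 for a, b in zip(row, x)), label)
             for row, label in zip(train_X, train_y)]
    dists.sort(key=lambda t: t[0])
    # Majority vote over the k closest labels.
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

For example, with training points clustered near the origin (label "a") and near (5, 5) (label "b"), a query close to the origin is voted "a".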

Optimal learners for realizable regression: PAC learning and online learning

I Attias, S Hanneke, A Kalavasis… - Advances in …, 2023 - proceedings.neurips.cc
In this work, we aim to characterize the statistical complexity of realizable regression both in
the PAC learning setting and the online learning setting. Previous work had established the …

A theory of PAC learnability of partial concept classes

N Alon, S Hanneke, R Holzman… - 2021 IEEE 62nd Annual …, 2022 - ieeexplore.ieee.org
We extend the classical theory of PAC learning in a way that allows us to model a rich variety of practical learning tasks where the data satisfy special properties that ease the learning …

Information complexity of stochastic convex optimization: Applications to generalization and memorization

I Attias, GK Dziugaite, M Haghifam, R Livni… - arXiv preprint arXiv …, 2024 - arxiv.org
In this work, we investigate the interplay between memorization and learning in the context of stochastic convex optimization (SCO). We define memorization via the information …

Universal Bayes consistency in metric spaces

S Hanneke, A Kontorovich, S Sabato… - 2020 Information …, 2020 - ieeexplore.ieee.org
We show that a recently proposed 1-nearest-neighbor-based multiclass learning algorithm
is universally strongly Bayes consistent in all metric spaces where such Bayes consistency …

Agnostic sample compression schemes for regression

I Attias, S Hanneke, A Kontorovich… - Forty-first International …, 2024 - openreview.net
We obtain the first positive results for bounded sample compression in the agnostic regression setting with the $\ell_p$ loss, where $p \in [1,\infty]$. We construct a generic …

Robustness for non-parametric classification: A generic attack and defense

YY Yang, C Rashtchian, Y Wang… - International …, 2020 - proceedings.mlr.press
Adversarially robust machine learning has received much recent attention. However, prior
attacks and defenses for non-parametric classifiers have been developed in an ad-hoc or …

When are non-parametric methods robust?

R Bhattacharjee, K Chaudhuri - International Conference on …, 2020 - proceedings.mlr.press
A growing body of research has shown that many classifiers are susceptible to adversarial examples: small, strategic modifications to test inputs that lead to misclassification. In this …

Stable sample compression schemes: New applications and an optimal SVM margin bound

S Hanneke, A Kontorovich - Algorithmic Learning Theory, 2021 - proceedings.mlr.press
We analyze a family of supervised learning algorithms based on sample compression
schemes that are stable, in the sense that removing points from the training set which were …