The sample complexity of multi-distribution learning

B Peng - The Thirty Seventh Annual Conference on Learning …, 2024 - proceedings.mlr.press
Multi-distribution learning generalizes classic PAC learning to handle data coming from
multiple distributions. Given a set of $k$ data distributions and a hypothesis class of VC …
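
At its core, the objective here replaces the single-distribution ERM criterion with a minimax one: the learner must be accurate on the hardest of the $k$ distributions. A minimal sketch of that objective over finite samples (the function names and finite-class search are illustrative, not the paper's algorithm):

    import numpy as np

    def worst_case_error(h, samples):
        # Error of hypothesis h on the hardest of the k distributions,
        # each represented here by a finite sample (X, y).
        return max(np.mean(h(X) != y) for X, y in samples)

    def multi_distribution_erm(hypotheses, samples):
        # Pick the hypothesis whose worst-case (max over distributions)
        # error is smallest -- the multi-distribution analogue of ERM.
        return min(hypotheses, key=lambda h: worst_case_error(h, samples))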

Projection-free adaptive regret with membership oracles

Z Lu, N Brukhim, P Gradu… - … on Algorithmic Learning …, 2023 - proceedings.mlr.press
In the framework of online convex optimization, most iterative algorithms require the
computation of projections onto convex sets, which can be computationally expensive. To …
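
For context on why projections are the bottleneck: in projected online gradient descent, every round ends with a projection back onto the feasible set. For a Euclidean ball this is a one-line rescaling, but for a general convex set the projection is itself an optimization problem, which is the cost projection-free methods (here, via membership oracles) avoid. A minimal sketch of the step in question, using a ball purely for illustration:

    import numpy as np

    def project_onto_ball(x, radius=1.0):
        # Euclidean projection onto {z : ||z|| <= radius}.
        norm = np.linalg.norm(x)
        return x if norm <= radius else x * (radius / norm)

    def projected_ogd_step(x, grad, lr=0.1):
        # Gradient step, then project back onto the feasible set.
        return project_onto_ball(x - lr * grad)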

Improper multiclass boosting

N Brukhim, S Hanneke… - The Thirty Sixth Annual …, 2023 - proceedings.mlr.press
We study the setting of multiclass boosting with a possibly large number of classes. A recent
work by Brukhim, Hazan, Moran, and Schapire (2021) proved a hardness result for a large …

Multiclass boosting and the cost of weak learning

N Brukhim, E Hazan, S Moran… - Advances in …, 2021 - proceedings.neurips.cc
Boosting is an algorithmic approach based on the idea of combining weak and moderately
inaccurate hypotheses into a single strong and accurate one. In this work we study …
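
As a concrete instance of that idea, here is a minimal sketch of the classical binary aggregation in the style of AdaBoost (a weighted-majority vote over weak hypotheses with labels in {-1, +1}); the paper itself studies the harder multiclass setting, so this is background rather than the paper's algorithm:

    import numpy as np

    def adaboost(weak_learner, X, y, rounds=10):
        # Classical AdaBoost: reweight examples toward past mistakes,
        # then combine the weak hypotheses by a weighted vote.
        n = len(y)
        D = np.full(n, 1.0 / n)              # distribution over examples
        hypotheses, alphas = [], []
        for _ in range(rounds):
            h = weak_learner(X, y, D)        # weak learner sees weighted data
            err = np.sum(D * (h(X) != y))
            alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
            D *= np.exp(-alpha * y * h(X))   # upweight mistakes
            D /= D.sum()
            hypotheses.append(h)
            alphas.append(alpha)
        return lambda X: np.sign(sum(a * h(X) for h, a in zip(hypotheses, alphas)))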

Closure properties for private classification and online prediction

N Alon, A Beimel, S Moran… - Conference on Learning …, 2020 - proceedings.mlr.press
Let H be a class of boolean functions and consider a composed class H' that is derived from
H using some arbitrary aggregation rule (for example, H' may be the class of all 3-wise …
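
To make the snippet's example concrete, one such aggregation rule is the 3-wise majority vote; the composed class H' then contains one function per triple of base hypotheses. A minimal sketch (illustrative only, not the paper's construction):

    from itertools import combinations

    def majority3(h1, h2, h3):
        # Pointwise majority vote of three boolean ({0,1}-valued) functions.
        return lambda x: int(h1(x) + h2(x) + h3(x) >= 2)

    def three_wise_majority_class(H):
        # H' = all 3-wise majorities over the base class H.
        return [majority3(*triple) for triple in combinations(H, 3)]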

A boosting approach to reinforcement learning

N Brukhim, E Hazan, K Singh - Advances in Neural …, 2022 - proceedings.neurips.cc
Reducing reinforcement learning to supervised learning is a well-studied and effective
approach that leverages the benefits of compact function approximation to deal with large …
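
One classical instance of such a reduction (background only, not the paper's boosting-based method) is fitted Q-iteration, where a standard supervised regressor is refit repeatedly to bootstrapped value targets:

    import numpy as np

    def fitted_q_iteration(make_regressor, transitions, n_actions,
                           gamma=0.99, iters=50):
        # transitions: list of (state, action, reward, next_state), with
        # states as feature vectors. make_regressor() must return a fresh
        # model with .fit(X, y) and .predict(X) (scikit-learn style).
        S = np.array([t[0] for t in transitions])
        A = np.array([t[1] for t in transitions])
        R = np.array([t[2] for t in transitions])
        S2 = np.array([t[3] for t in transitions])
        X = np.hstack([S, A[:, None]])
        q = None
        for _ in range(iters):
            if q is None:
                targets = R                  # first pass: immediate reward
            else:
                # Bellman backup: r + gamma * max_a' Q(s', a')
                next_q = np.column_stack([
                    q.predict(np.hstack([S2, np.full((len(S2), 1), a)]))
                    for a in range(n_actions)])
                targets = R + gamma * next_q.max(axis=1)
            q = make_regressor()
            q.fit(X, targets)                # the supervised-learning step
        return q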

Online optimization with feedback delay and nonlinear switching cost

W Pan, G Shi, Y Lin, A Wierman - … of the ACM on Measurement and …, 2022 - dl.acm.org
We study a variant of online optimization in which the learner receives k-round delayed
feedback about the hitting cost, and there is a multi-step nonlinear switching cost, i.e., costs …
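
The cost structure being studied can be written out directly: the learner pays a per-round hitting cost, revealed only k rounds after the decision, plus a switching cost that couples a window of recent decisions and may be nonlinear in the movement. A minimal sketch of evaluating such a trajectory offline (the cost functions are placeholders supplied by the caller):

    def total_cost(xs, hitting_costs, switching_cost, memory=2):
        # xs: decision sequence x_0..x_{T-1}; hitting_costs[t] is the
        # round-t cost function (in the online protocol it is revealed
        # k rounds late); switching_cost takes the window
        # (x_{t-memory}, ..., x_t) of recent decisions.
        T = len(xs)
        hit = sum(hitting_costs[t](xs[t]) for t in range(T))
        switch = sum(switching_cost(xs[max(0, t - memory):t + 1])
                     for t in range(1, T))
        return hit + switch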

Online boosting with bandit feedback

N Brukhim, E Hazan - Algorithmic Learning Theory, 2021 - proceedings.mlr.press
We consider the problem of online boosting for regression tasks, when only limited
information is available to the learner. This setting is motivated by applications in …
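
"Limited information" here means bandit feedback: after committing to a prediction, the learner observes only the scalar loss it incurred, not the label or the full loss function. A minimal sketch of one round of that protocol (the learner's predict/update interface is assumed for illustration):

    def bandit_regression_round(learner, x, true_y, loss):
        # The learner commits to a prediction, then is told only the
        # loss of that prediction -- never true_y itself.
        y_hat = learner.predict(x)
        observed_loss = loss(y_hat, true_y)   # a single number
        learner.update(x, y_hat, observed_loss)
        return observed_loss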

Sample-Efficient Agnostic Boosting

U Ghai, K Singh - arXiv preprint arXiv:2410.23632, 2024 - arxiv.org
The theory of boosting provides a computational framework for aggregating approximate
weak learning algorithms, which perform marginally better than a random predictor, into an …
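
"Marginally better than a random predictor" is made quantitative by the agnostic weak-learning condition. In one standard formulation from the agnostic-boosting literature (notation illustrative; this paper's exact definitions may differ), the weak learner must return an $h$ with

$\mathbb{E}_{(x,y)\sim D}[\,y\,h(x)\,] \;\ge\; \gamma \max_{h^\ast \in \mathcal{H}} \mathbb{E}_{(x,y)\sim D}[\,y\,h^\ast(x)\,] - \varepsilon$

for some $\gamma \in (0,1]$: it recovers a $\gamma$ fraction of the best correlation achievable in $\mathcal{H}$, and boosting amplifies this to full agnostic learning of $\mathcal{H}$.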

Near-Optimal Algorithms for Omniprediction

P Okoroafor, R Kleinberg, MP Kim - arXiv preprint arXiv:2501.17205, 2025 - arxiv.org
Omnipredictors are simple prediction functions that encode loss-minimizing predictions with
respect to a hypothesis class $\mathcal{H}$, simultaneously for every loss function within a class of …
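
The guarantee can be stated in one line. In a standard formulation from the omniprediction literature (notation illustrative), a predictor $p$ is an $(\mathcal{L}, \mathcal{H}, \varepsilon)$-omnipredictor if for every loss $\ell \in \mathcal{L}$ there is a simple post-processing $k_\ell$ with

$\mathbb{E}[\,\ell(k_\ell(p(x)), y)\,] \;\le\; \min_{h \in \mathcal{H}} \mathbb{E}[\,\ell(h(x), y)\,] + \varepsilon$,

so a single prediction function simultaneously competes with the best hypothesis in $\mathcal{H}$ for every loss in the class.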