The sample complexity of multi-distribution learning
B Peng - The Thirty Seventh Annual Conference on Learning …, 2024 - proceedings.mlr.press
Multi-distribution learning generalizes classic PAC learning to handle data coming from
multiple distributions. Given a set of $k$ data distributions and a hypothesis class of VC …
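As a rough sketch of the setting (a standard formulation, not quoted from the paper): given distributions $D_1, \dots, D_k$ and a hypothesis class $\mathcal{H}$, the goal is to output a (possibly randomized) hypothesis $h$ whose worst-case error $\max_{i \in [k]} \mathrm{err}_{D_i}(h)$ is within $\varepsilon$ of the best achievable value $\min_{h' \in \mathcal{H}} \max_{i \in [k]} \mathrm{err}_{D_i}(h')$, using as few samples as possible.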
Projection-free adaptive regret with membership oracles
In the framework of online convex optimization, most iterative algorithms require the
computation of projections onto convex sets, which can be computationally expensive. To …
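For context (standard definitions, stated here as background rather than taken from this abstract): the Euclidean projection onto a convex set $K$ is $\Pi_K(y) = \arg\min_{x \in K} \|x - y\|_2$, which can be expensive to compute, whereas a membership oracle only answers whether a given point lies in $K$; projection-free methods aim to replace exact projections with such weaker oracle calls while still controlling (adaptive) regret.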
Improper multiclass boosting
We study the setting of multiclass boosting with a possibly large number of classes. A recent
work by Brukhim, Hazan, Moran, and Schapire, 2021, proved a hardness result for a large …
Multiclass boosting and the cost of weak learning
Boosting is an algorithmic approach which is based on the idea of combining weak and
moderately inaccurate hypotheses into a strong and accurate one. In this work we study …
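As a hedged illustration of the weak/strong contrast (a common formalization, not necessarily the exact condition used in this paper): with $k$ classes, a weak hypothesis is only required to beat random guessing by some edge $\gamma$, e.g. accuracy at least $1/k + \gamma$, while the boosted hypothesis should reach accuracy $1 - \varepsilon$.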
Closure properties for private classification and online prediction
Let H be a class of Boolean functions and consider a composed class H' that is derived from
H using some arbitrary aggregation rule (for example, H' may be the class of all 3-wise …
A boosting approach to reinforcement learning
Reducing reinforcement learning to supervised learning is a well-studied and effective
approach that leverages the benefits of compact function approximation to deal with large …
Online optimization with feedback delay and nonlinear switching cost
We study a variant of online optimization in which the learner receives k-round delayed
feedback about the hitting cost and there is a multi-step nonlinear switching cost, i.e., costs …
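One common way to write such objectives (a sketch under the assumption that the setting resembles smoothed online optimization; the exact cost structure is given in the paper): the learner picks $x_t$ and pays $\sum_t f_t(x_t) + c(x_{t-p}, \dots, x_t)$, where $f_t$ is a hitting cost revealed only after a k-round delay and $c$ is a switching cost coupling several consecutive decisions.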
Online boosting with bandit feedback
We consider the problem of online boosting for regression tasks, when only limited
information is available to the learner. This setting is motivated by applications in …
Sample-Efficient Agnostic Boosting
The theory of boosting provides a computational framework for aggregating approximate
weak learning algorithms, which perform marginally better than a random predictor, into an …
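For orientation (a standard agnostic weak-learning condition, stated as an assumption rather than the paper's exact definition): an agnostic weak learner returns $h$ with correlation $\mathrm{cor}_D(h) \ge \gamma \cdot \mathrm{cor}_D(h^\star) - \varepsilon$ relative to the best $h^\star \in \mathcal{H}$, and the booster must amplify this to near-optimal correlation while keeping the sample cost low.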
Near-Optimal Algorithms for Omniprediction
Omnipredictors are simple prediction functions that encode loss-minimizing predictions with
respect to a hypothesis class $\H$, simultaneously for every loss function within a class of …
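As a rough reminder of the guarantee (standard in the omniprediction literature, paraphrased rather than quoted): a predictor $p$ is an $(\mathcal{L}, \mathcal{H}, \varepsilon)$-omnipredictor if for every loss $\ell \in \mathcal{L}$ there is a simple post-processing $k_\ell$ with $\mathbb{E}[\ell(y, k_\ell(p(x)))] \le \min_{h \in \mathcal{H}} \mathbb{E}[\ell(y, h(x))] + \varepsilon$.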