Making AI forget you: Data deletion in machine learning
Intense recent discussions have focused on how to provide individuals with control over
when their data can and cannot be used---the EU's Right To Be Forgotten regulation is an …
A closer look at smoothness in domain adversarial training
Abstract Domain adversarial training has been ubiquitous for achieving invariant
representations and is used widely for various domain adaptation tasks. In recent times …
Lower bounds for non-convex stochastic optimization
We lower bound the complexity of finding ϵ-stationary points (with gradient norm at most ϵ)
using stochastic first-order methods. In a well-studied model where algorithms access …
SPIDER: Near-optimal non-convex optimization via stochastic path-integrated differential estimator
In this paper, we propose a new technique named Stochastic Path-Integrated
Differential EstimatoR (SPIDER), which can be used to track many deterministic quantities of …
FedPD: A federated learning framework with adaptivity to non-IID data
Federated Learning (FL) is popular for communication-efficient learning from distributed
data. To utilize data at different clients without moving them to the cloud, algorithms such as …
Solving a class of non-convex min-max games using iterative first order methods
Recent applications that arise in machine learning have spurred significant interest in solving
min-max saddle point games. This problem has been extensively studied in the convex …
Near-optimal algorithms for minimax optimization
This paper resolves a longstanding open question pertaining to the design of near-optimal
first-order algorithms for smooth and strongly-convex-strongly-concave minimax problems …
Minibatch vs local SGD for heterogeneous distributed learning
We analyze Local SGD (aka parallel or federated SGD) and Minibatch SGD in the
heterogeneous distributed setting, where each machine has access to stochastic gradient …
Why are adaptive methods good for attention models?
While stochastic gradient descent (SGD) is still the de facto algorithm in deep learning,
adaptive methods like Clipped SGD/Adam have been observed to outperform SGD across …
Convex and non-convex optimization under generalized smoothness
Classical analysis of convex and non-convex optimization methods often requires the
Lipschitz continuity of the gradient, which limits the analysis to functions bounded by …