A primer on zeroth-order optimization in signal processing and machine learning: Principles, recent advances, and applications
Zeroth-order (ZO) optimization is a subset of gradient-free optimization that emerges in many
signal processing and machine learning (ML) applications. It is used for solving optimization …
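The core tool surveyed in primers like this is the random-direction finite-difference gradient estimate, which recovers an approximate gradient from function values alone. As a minimal sketch (not code from the paper; `zo_gradient`, the smoothing radius `mu`, and the step size are illustrative), a two-point Gaussian-smoothing estimator looks like:

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, num_dirs=20):
    """Two-point zeroth-order gradient estimate of f at x.

    Averages directional finite differences over random Gaussian
    directions, using only evaluations of f (no derivatives).
    """
    g = np.zeros_like(x)
    for _ in range(num_dirs):
        u = np.random.randn(*x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return g / num_dirs

# Usage: minimize a black-box quadratic with plain ZO gradient descent.
f = lambda x: float(np.sum((x - 1.0) ** 2))
x = np.zeros(5)
for _ in range(300):
    x -= 0.05 * zo_gradient(f, x)
print(x)  # close to the minimizer [1, 1, 1, 1, 1]
```

The smoothing radius `mu` and the number of sampled directions trade bias against variance; analyzing such trade-offs and the resulting convergence rates is the kind of material a primer like this covers.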
Fine-tuning language models with just forward passes
Fine-tuning language models (LMs) has yielded success on diverse downstream tasks, but
as LMs grow in size, backpropagation requires a prohibitively large amount of memory …
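The idea behind forward-pass-only fine-tuning is an SPSA-style update where the random perturbation is never stored: it is regenerated from a saved RNG seed, so optimizer memory stays at inference level. A rough sketch of one such step (illustrative names and hyperparameters, assuming parameters are a list of NumPy arrays; not the authors' implementation):

```python
import numpy as np

def forward_only_step(params, loss_fn, eps=1e-3, lr=1e-4):
    """One SPSA-style update using only two forward passes.

    The perturbation direction z is replayed from a stored seed
    instead of being kept in memory, so no gradient or extra
    parameter-sized buffer is ever allocated.
    """
    seed = np.random.randint(2**31)

    def perturb(scale):
        rng = np.random.default_rng(seed)  # same z every call
        for p in params:
            p += scale * eps * rng.standard_normal(p.shape)

    perturb(+1.0)                       # params = x + eps * z
    loss_plus = loss_fn(params)
    perturb(-2.0)                       # params = x - eps * z
    loss_minus = loss_fn(params)
    perturb(+1.0)                       # restore params = x

    grad_scale = (loss_plus - loss_minus) / (2.0 * eps)
    rng = np.random.default_rng(seed)   # replay z for the update
    for p in params:
        p -= lr * grad_scale * rng.standard_normal(p.shape)
```

Only the scalar seed and the projected gradient `grad_scale` need to be kept between the two forward passes, which is what makes this class of methods memory-efficient relative to backpropagation.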
Derivative-free optimization methods
In many optimization problems arising from scientific, engineering and artificial intelligence
applications, objective and constraint functions are available only as the output of a black …
InstructZero: Efficient instruction optimization for black-box large language models
Large language models (LLMs) are instruction followers, but it can be challenging to find
the best instruction for different situations, especially for black-box LLMs on which …
Data-free model extraction
Current model extraction attacks assume that the adversary has access to a surrogate
dataset with characteristics similar to the proprietary data used to train the victim model. This …
AutoZOOM: Autoencoder-based zeroth order optimization method for attacking black-box neural networks
Recent studies have shown that adversarial examples in state-of-the-art image classifiers
trained by deep neural networks (DNN) can be easily generated when the target model is …
Revisiting zeroth-order optimization for memory-efficient LLM fine-tuning: A benchmark
In the evolving landscape of natural language processing (NLP), fine-tuning pre-trained
Large Language Models (LLMs) with first-order (FO) optimizers like SGD and Adam has …
Derivative-free methods for policy optimization: Guarantees for linear quadratic systems
We study derivative-free methods for policy optimization over the class of linear policies. We
focus on characterizing the convergence rate of these methods when applied to linear …
Gradient-free methods for deterministic and stochastic nonsmooth nonconvex optimization
Nonsmooth nonconvex optimization problems broadly emerge in machine learning and
business decision making, whereas two core challenges impede the development of …
Zeroth-order stochastic variance reduction for nonconvex optimization
As application demands for zeroth-order (gradient-free) optimization accelerate, the need for
variance reduced and faster converging approaches is also intensifying. This paper …