A comprehensive survey of continual learning: Theory, method and application
To cope with real-world dynamics, an intelligent system needs to incrementally acquire,
update, accumulate, and exploit knowledge throughout its lifetime. This ability, known as …
Continual learning of large language models: A comprehensive survey
The recent success of large language models (LLMs) trained on static, pre-collected,
general datasets has sparked numerous research directions and applications. One such …
Memorization without overfitting: Analyzing the training dynamics of large language models
Despite their wide adoption, the underlying training and memorization dynamics of very
large language models are not well understood. We empirically study exact memorization in …
Simple and scalable strategies to continually pre-train large language models
Large language models (LLMs) are routinely pre-trained on billions of tokens, only to start
the process over again once new data becomes available. A much more efficient solution is …
Architecture matters in continual learning
A large body of research in continual learning is devoted to overcoming the catastrophic
forgetting of neural networks by designing new algorithms that are robust to the distribution …
The ideal continual learner: An agent that never forgets
The goal of continual learning is to find a model that solves multiple learning tasks which are
presented sequentially to the learner. A key challenge in this setting is that the learner may …
Learning and forgetting unsafe examples in large language models
As the number of large language models (LLMs) released to the public grows, there is a
pressing need to understand the safety implications associated with these models learning …
How catastrophic can catastrophic forgetting be in linear regression?
To better understand catastrophic forgetting, we study fitting an overparameterized linear
model to a sequence of tasks with different input distributions. We analyze how much the …
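The snippet cuts off before the analysis, but the setup it names is concrete: an overparameterized linear model fit to a sequence of tasks whose inputs are drawn from different distributions. Below is a minimal sketch of that setting, assuming a minimum-norm update per task as an idealization of SGD run to convergence; this update rule and all parameter values are assumptions for illustration, not the paper's exact model.

```python
# Sketch: sequential overparameterized linear regression with different
# input distributions per task, measuring how loss on the first task
# degrades ("forgetting") as later tasks are fit.
import numpy as np

rng = np.random.default_rng(0)
d, n, T = 100, 20, 5              # dimension > samples per task => overparameterized
w_star = rng.normal(size=d)       # shared ground-truth weights

tasks = []
for t in range(T):
    scale = rng.uniform(0.5, 2.0, size=d)   # each task's own input distribution
    X = rng.normal(size=(n, d)) * scale
    tasks.append((X, X @ w_star))

w = np.zeros(d)
X1, y1 = tasks[0]
for t, (X, y) in enumerate(tasks, start=1):
    # Minimum-norm correction so w interpolates the current task exactly.
    w = w + np.linalg.pinv(X) @ (y - X @ w)
    print(f"after task {t}, loss on task 1: {np.mean((X1 @ w - y1) ** 2):.4f}")
```

After task 1 the loss on task 1 is zero by construction; the printed values then show how much the later projections move the weights away from the first task's solution set.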
Membership inference attacks and defenses in classification models
We study the membership inference (MI) attack against classifiers, where the attacker's goal
is to determine whether a data instance was used for training the classifier. Through …
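The snippet states only the attack goal. As a hedged illustration of that goal, a common baseline (not necessarily the attack or defense studied in the paper) thresholds the model's confidence on the true label, since training points tend to receive higher confidence than unseen points; the dataset, model, and threshold below are all hypothetical.

```python
# Baseline membership inference sketch: guess "member" when the model's
# confidence on an instance's true label exceeds a threshold.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_out, y_tr, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

def true_label_confidence(model, X, y):
    # Probability the model assigns to each instance's true label.
    return model.predict_proba(X)[np.arange(len(y)), y]

tau = 0.9  # hypothetical threshold; in practice calibrated, e.g. via shadow models
member_rate_in = (true_label_confidence(clf, X_tr, y_tr) > tau).mean()
member_rate_out = (true_label_confidence(clf, X_out, y_out) > tau).mean()
print("attack advantage:", member_rate_in - member_rate_out)
```

A positive gap between the two rates is the attacker's advantage; defenses in this line of work aim to shrink that gap without hurting accuracy.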
CoSCL: Cooperation of small continual learners is stronger than a big one
Continual learning requires incremental compatibility with a sequence of tasks. However,
the design of model architecture remains an open question: In general, learning all tasks …