Continual learning of natural language processing tasks: A survey
Continual learning (CL) is a learning paradigm that emulates the human capability of
learning and accumulating knowledge continually without forgetting the previously learned …
Towards lifelong learning of large language models: A survey
As the applications of large language models (LLMs) expand across diverse fields, their
ability to adapt to ongoing changes in data, tasks, and user preferences becomes crucial …
Recent advances of foundation language models-based continual learning: A survey
Recently, foundation language models (LMs) have marked significant achievements in the
domains of natural language processing and computer vision. Unlike traditional neural …
Learn or recall? revisiting incremental learning with pre-trained language models
Incremental Learning (IL) has been a long-standing problem in both vision and Natural
Language Processing (NLP) communities. In recent years, as Pre-trained Language Models …
Prompts can play lottery tickets well: Achieving lifelong information extraction via lottery prompt tuning
Z Liang, F Wei, Y Jie, Y Qian, Z Hao… - Proceedings of the 61st …, 2023 - aclanthology.org
Thanks to the recent success of Pre-trained Language Models (PLMs), it has become a
promising research direction to develop a universal information extraction (UIE) model that can solve all typical …
Towards fewer hallucinations in knowledge-grounded dialogue generation via augmentative and contrastive knowledge-dialogue
Existing knowledge-grounded open-domain dialogue generation models often face the
hallucination problem, i.e., the dialogue generative model will persist in an inappropriate …
Semi-supervised lifelong language learning
Lifelong learning aims to accumulate knowledge and alleviate catastrophic forgetting when
learning tasks sequentially. However, existing lifelong language learning methods only …
StructSP: Efficient fine-tuning of task-oriented dialog system by using structure-aware boosting and grammar constraints
We have investigated methods utilizing hierarchical structure information representation in
the semantic parsing task and have devised a method that reinforces the semantic …
Dirichlet continual learning: tackling catastrophic forgetting in NLP
Catastrophic forgetting poses a significant challenge in continual learning (CL). In the
context of Natural Language Processing, generative-based rehearsal CL methods have …
Continual learning with dirichlet generative-based rehearsal
Recent advancements in data-driven task-oriented dialogue systems (ToDs) struggle with
incremental learning due to computational constraints and time-consuming issues …