Interactive natural language processing
Interactive Natural Language Processing (iNLP) has emerged as a novel paradigm within
the field of NLP, aimed at addressing limitations in existing frameworks while aligning with …
Full parameter fine-tuning for large language models with limited resources
Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP)
but demand massive GPU resources for training. Lowering the threshold for LLM training …
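The snippet cuts off before the method itself; a well-known way to reduce training memory in this line of work is to fuse the optimizer step into the backward pass, so the full set of gradients never has to reside in memory at once. Below is a minimal sketch of that idea, assuming PyTorch >= 2.1 for `register_post_accumulate_grad_hook`; the model and learning rate are illustrative stand-ins, not the paper's setup.

```python
# Fused-update sketch: apply a plain SGD step to each parameter as soon as its
# gradient has been accumulated, then free the gradient immediately.
import torch
import torch.nn as nn

lr = 1e-4
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

def fused_sgd_step(param: torch.Tensor) -> None:
    with torch.no_grad():
        param.add_(param.grad, alpha=-lr)  # in-place SGD update
    param.grad = None                      # release gradient memory right away

for p in model.parameters():
    p.register_post_accumulate_grad_hook(fused_sgd_step)

x, y = torch.randn(8, 512), torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()  # parameters update during backward; no optimizer.step() needed
```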
Parameter-efficient fine-tuning design spaces
Parameter-efficient fine-tuning aims to achieve performance comparable to fine-tuning,
using fewer trainable parameters. Several strategies (e.g., Adapters, prefix tuning, BitFit, and …
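Of the strategies the snippet names, BitFit is the simplest to show concretely: freeze all weights and train only the bias terms. A minimal sketch follows; the transformer encoder is an illustrative stand-in for a pre-trained model.

```python
# BitFit sketch: only parameters whose names end in "bias" remain trainable.
import torch.nn as nn

model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True),
    num_layers=2,
)

for name, param in model.named_parameters():
    param.requires_grad = name.endswith("bias")  # biases only

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable}/{total} ({100 * trainable / total:.3f}%)")
```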
Hypertuning: Toward adapting large language models without back-propagation
Fine-tuning large language models for different tasks can be costly and inefficient, and even
methods that reduce the number of tuned parameters still require full gradient-based …
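The core idea hinted at here is that a hypermodel can *generate* task-specific parameters (such as soft prefixes) from a task description, so adapting the frozen LLM needs only a forward pass rather than gradient-based tuning. A hedged sketch of that shape, with all dimensions and names illustrative rather than the paper's architecture:

```python
# Hypermodel sketch: map a task embedding to a soft prefix for a frozen model.
import torch
import torch.nn as nn

d_model, prefix_len, task_dim = 256, 8, 64

class PrefixHypermodel(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(task_dim, 512), nn.ReLU(),
            nn.Linear(512, prefix_len * d_model),
        )

    def forward(self, task_emb: torch.Tensor) -> torch.Tensor:
        # (task_dim,) -> (prefix_len, d_model) generated prefix
        return self.net(task_emb).view(prefix_len, d_model)

hyper = PrefixHypermodel()
task_emb = torch.randn(task_dim)             # stand-in for an encoded task description
prefix = hyper(task_emb)                     # produced without back-prop through the LLM
tokens = torch.randn(16, d_model)            # stand-in for input token embeddings
inputs = torch.cat([prefix, tokens], dim=0)  # prepend prefix to the sequence
print(inputs.shape)                          # torch.Size([24, 256])
```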
Team text-understanding-and-analysi at PAN: Utilizing BERT Series Pretraining Model for Multi-Author Writing Style Analysis
Y Huang, L Kong - Working Notes of CLEF, 2024 - downloads.webis.de
We propose a training model based on the BERT series. This method uses a sliding-window
technique to preprocess datasets to train and solve multi-author writing style analysis tasks …
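Sliding-window preprocessing here refers to splitting a long document into overlapping chunks that each fit BERT's input length limit. A minimal sketch; the window and stride values are illustrative, not the team's exact settings.

```python
# Sliding-window sketch: cover a long token sequence with overlapping chunks.
from typing import List

def sliding_windows(tokens: List[str], window: int = 512, stride: int = 256) -> List[List[str]]:
    if len(tokens) <= window:
        return [tokens]
    chunks, start = [], 0
    while start < len(tokens):
        chunks.append(tokens[start:start + window])
        if start + window >= len(tokens):
            break  # final chunk reaches the end of the document
        start += stride
    return chunks

doc = [f"tok{i}" for i in range(1200)]
print([len(c) for c in sliding_windows(doc)])  # [512, 512, 512, 432]
```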
DimA: A Parameter-efficient Fine-tuning Method with Knowledge Transfer Based on Transformer
W Zhang, M Huang, Z Song, Q Miao - Proceedings of the 2024 …, 2024 - aclanthology.org
Fine-tuning is a widely used technique for leveraging pre-trained language models (PLMs)
in downstream tasks, but it can be computationally expensive and storage-intensive. To …
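DimA's specific mechanism is not spelled out in this snippet; as a point of reference for the adapter family it belongs to, here is a generic bottleneck adapter (Houlsby-style, not DimA's method): a small down-project/up-project module with a residual connection, inserted into a frozen PLM so only its few parameters are trained.

```python
# Generic bottleneck-adapter sketch (illustrative, not DimA's design).
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    def __init__(self, d_model: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        nn.init.zeros_(self.up.weight)  # start as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        return hidden + self.up(torch.relu(self.down(hidden)))  # residual

adapter = BottleneckAdapter()
hidden = torch.randn(2, 16, 768)  # stand-in for frozen-PLM hidden states
print(adapter(hidden).shape)      # torch.Size([2, 16, 768])
```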
Incremental Unified Parameter Additional Tuning with Basic Memory Replaying
J Deng, J Hu, H Zhang, Y Wang - openreview.net
Class incremental learning (CIL) aims to develop an open intelligence system that can
continuously learn new concepts from new tasks while retaining the knowledge to …
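The "memory replaying" ingredient in the title is the classic rehearsal idea: keep a few exemplars per old class and mix them into each new task's batches so earlier classes are not forgotten. A minimal sketch, with buffer size and sampling purely illustrative:

```python
# Rehearsal-buffer sketch for class-incremental learning.
import random
from collections import defaultdict

class ReplayBuffer:
    def __init__(self, per_class: int = 20):
        self.per_class = per_class
        self.store = defaultdict(list)  # class id -> stored exemplars

    def add(self, example, label) -> None:
        bucket = self.store[label]
        if len(bucket) < self.per_class:
            bucket.append(example)

    def sample(self, k: int):
        pool = [(x, y) for y, xs in self.store.items() for x in xs]
        return random.sample(pool, min(k, len(pool)))

buf = ReplayBuffer()
for i in range(100):
    buf.add(f"img{i}", label=i % 5)  # task 1: classes 0-4
replay = buf.sample(8)               # mixed into task 2's training batches
print(len(replay))                   # 8
```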
ESEAD: An Enhanced Simple Ensemble and Distillation Framework for Natural Language Processing
M Mei - openreview.net
Large-scale pre-trained language models (PLMs) are today's leading technology for a wide
range of natural language processing tasks. However, the enormous size of these models …
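The ensemble-and-distillation recipe the title describes can be sketched in a few lines: average the logits of several teachers and train a compact student against the temperature-softened ensemble distribution with a KL loss. The models and temperature below are illustrative stand-ins, not ESEAD's exact configuration.

```python
# Ensemble-distillation sketch: student mimics the averaged teacher distribution.
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 2.0  # softening temperature
teachers = [nn.Linear(128, 10) for _ in range(3)]  # stand-ins for large PLMs
student = nn.Linear(128, 10)                        # compact student model

x = torch.randn(16, 128)
with torch.no_grad():
    teacher_logits = torch.stack([t(x) for t in teachers]).mean(dim=0)

kd_loss = F.kl_div(
    F.log_softmax(student(x) / T, dim=-1),
    F.softmax(teacher_logits / T, dim=-1),
    reduction="batchmean",
) * T * T  # standard temperature-squared scaling
kd_loss.backward()  # gradients flow only into the student
```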