Parameter-efficient fine-tuning for large models: A comprehensive survey
Large models represent a groundbreaking advancement in multiple application fields,
enabling remarkable achievements across various tasks. However, their unprecedented …
Domain specialization as the key to make large language models disruptive: A comprehensive survey
Large language models (LLMs) have significantly advanced the field of natural language
processing (NLP), providing a highly useful, task-agnostic foundation for a wide range of …
LLM-Adapters: An adapter family for parameter-efficient fine-tuning of large language models
The success of large language models (LLMs), like GPT-4 and ChatGPT, has led to the
development of numerous cost-effective and accessible alternatives that are created by …
Parameter-efficient fine-tuning methods for pretrained language models: A critical review and assessment
With the continuous growth in the number of parameters of transformer-based pretrained
language models (PLMs), particularly the emergence of large language models (LLMs) with …
Revisiting the parameter efficiency of adapters from the perspective of precision redundancy
Current state-of-the-art results in computer vision depend in part on fine-tuning large pretrained vision models. However, with the exponential growth of model sizes, the …
Facial affective behavior analysis with instruction tuning
Facial affective behavior analysis (FABA) is crucial for understanding human mental states
from images. However, traditional approaches primarily deploy models to discriminate …
Large language models for automated Q&A involving legal documents: A survey on algorithms, frameworks and applications
X Yang, Z Wang, Q Wang, K Wei, K Zhang… - International Journal of …, 2024 - emerald.com
Purpose This study aims to adopt a systematic review approach to examine the existing
literature on law and LLMs. It involves analyzing and synthesizing relevant research papers …
End-edge-cloud collaborative computing for deep learning: A comprehensive survey
The booming development of deep learning applications and services heavily relies on
large deep learning models and massive data in the cloud. However, cloud-based deep …
APT: Adaptive pruning and tuning pretrained language models for efficient training and inference
Fine-tuning and inference with large language models (LMs) are generally known to be expensive. Parameter-efficient fine-tuning over pretrained LMs reduces training memory by …
Toward efficient language model pretraining and downstream adaptation via self-evolution: A case study on superglue
This technical report briefly describes our JDExplore d-team's Vega v2 submission on the
SuperGLUE leaderboard. SuperGLUE is more challenging than the widely used general …