End-edge-cloud collaborative computing for deep learning: A comprehensive survey

Y Wang, C Yang, S Lan, L Zhu… - … Surveys & Tutorials, 2024 - ieeexplore.ieee.org
The booming development of deep learning applications and services heavily relies on
large deep learning models and massive data in the cloud. However, cloud-based deep …

When LLMs meet cybersecurity: A systematic literature review

J Zhang, H Bu, H Wen, Y Liu, H Fei… - …, 2025 - cybersecurity.springeropen.com
The rapid development of large language models (LLMs) has opened new avenues across
various fields, including cybersecurity, which faces an evolving threat landscape and …

Parameter-efficient fine-tuning of large-scale pre-trained language models

N Ding, Y Qin, G Yang, F Wei, Z Yang, Y Su… - Nature Machine …, 2023 - nature.com
With the prevalence of pre-trained language models (PLMs) and the pre-training–fine-tuning
paradigm, it has been continuously shown that larger models tend to yield better …

On the effectiveness of parameter-efficient fine-tuning

Z Fu, H Yang, AMC So, W Lam, L Bing… - Proceedings of the AAAI …, 2023 - ojs.aaai.org
Fine-tuning pre-trained models has been ubiquitously proven to be effective in a wide range
of NLP tasks. However, fine-tuning the whole model is parameter inefficient as it always …

GPT-3-driven pedagogical agents to train children's curious question-asking skills

R Abdelghani, YH Wang, X Yuan, T Wang… - International Journal of …, 2024 - Springer
The ability of children to ask curiosity-driven questions is an important skill that helps
improve their learning. For this reason, previous research has explored designing specific …

Delta tuning: A comprehensive study of parameter efficient methods for pre-trained language models

N Ding, Y Qin, G Yang, F Wei, Z Yang, Y Su… - arXiv preprint arXiv …, 2022 - arxiv.org
Despite the success, the process of fine-tuning large-scale PLMs brings prohibitive
adaptation costs. In fact, fine-tuning all the parameters of a colossal model and retaining …

MotionGPT: Finetuned LLMs are general-purpose motion generators

Y Zhang, D Huang, B Liu, S Tang, Y Lu… - Proceedings of the …, 2024 - ojs.aaai.org
Generating realistic human motion from given action descriptions has experienced
significant advancements because of the emerging requirement of digital humans. While …

Modular deep learning

J Pfeiffer, S Ruder, I Vulić, EM Ponti - arXiv preprint arXiv:2302.11529, 2023 - arxiv.org
Transfer learning has recently become the dominant paradigm of machine learning. Pre-
trained models fine-tuned for downstream tasks achieve better performance with fewer …

HyPoradise: An open baseline for generative speech recognition with large language models

C Chen, Y Hu, CHH Yang… - Advances in …, 2023 - proceedings.neurips.cc
Advancements in deep neural networks have allowed automatic speech recognition (ASR)
systems to attain human parity on several publicly available clean speech datasets …

LLMs are good sign language translators

J Gong, LG Foo, Y He… - Proceedings of the …, 2024 - openaccess.thecvf.com
Sign Language Translation (SLT) is a challenging task that aims to translate sign
videos into spoken language. Inspired by the strong translation capabilities of large …