Mobile edge intelligence for large language models: A contemporary survey

G Qu, Q Chen, W Wei, Z Lin, X Chen… - … Surveys & Tutorials, 2025 - ieeexplore.ieee.org
On-device large language models (LLMs), i.e., LLMs that run directly on edge devices, have
attracted considerable interest since they are more cost-effective, latency-efficient, and privacy …

Political-LLM: Large language models in political science

L Li, J Li, C Chen, F Gui, H Yang, C Yu, Z Wang… - arXiv preprint arXiv …, 2024 - arxiv.org
In recent years, large language models (LLMs) have been widely adopted in political
science tasks such as election prediction, sentiment analysis, policy impact assessment, and …

Electrostatic force regularization for neural structured pruning

A Ferdi, A Taleb-Ahmed, A Nakib, Y Ferdi - arXiv preprint arXiv …, 2024 - arxiv.org
The demand for deploying deep convolutional neural networks (DCNNs) on resource-
constrained devices for real-time applications remains substantial. However, existing state …

INF-LLaVA: Dual-perspective Perception for High-Resolution Multimodal Large Language Model

Y Ma, Z Wang, X Sun, W Lin, Q Zhou, J Ji… - arXiv preprint arXiv …, 2024 - arxiv.org
With advancements in data availability and computing resources, Multimodal Large
Language Models (MLLMs) have showcased capabilities across various fields. However …

NEAT: Nonlinear Parameter-efficient Adaptation of Pre-trained Models

Y Zhong, H Jiang, L Li, R Nakada, T Liu… - arXiv preprint arXiv …, 2024 - arxiv.org
Fine-tuning pre-trained models is crucial for adapting large models to downstream tasks,
often delivering state-of-the-art performance. However, fine-tuning all model parameters is …

Ten Challenging Problems in Federated Foundation Models

T Fan, H Gu, X Cao, CS Chan, Q Chen, Y Chen… - arXiv preprint arXiv …, 2025 - arxiv.org
Federated Foundation Models (FedFMs) represent a distributed learning paradigm that
fuses the general competencies of foundation models with the privacy-preserving capabilities …

Parameter-Efficient Fine-Tuning for Foundation Models

D Zhang, T Feng, L Xue, Y Wang, Y Dong… - arXiv preprint arXiv …, 2025 - arxiv.org
This survey delves into the realm of Parameter-Efficient Fine-Tuning (PEFT) within the
context of Foundation Models (FMs). PEFT, a cost-effective fine-tuning technique, minimizes …

Linear Feedback Control Systems for Iterative Prompt Optimization in Large Language Models

RR Karn - arXiv preprint arXiv:2501.11979, 2025 - arxiv.org
Large Language Models (LLMs) have revolutionized various applications by generating
outputs based on given prompts. However, achieving the desired output requires iterative …

Multi-Scenario Reasoning: Unlocking Cognitive Autonomy in Humanoid Robots for Multimodal Understanding

L Wang - arXiv preprint arXiv:2412.20429, 2024 - arxiv.org
To improve the cognitive autonomy of humanoid robots, this research proposes a multi-
scenario reasoning architecture to address the technical shortcomings of multi-modal …