Mobile edge intelligence for large language models: A contemporary survey
On-device large language models (LLMs), i.e., LLMs that run directly on edge devices, have
attracted considerable interest since they are more cost-effective, latency-efficient, and privacy …
Political-llm: Large language models in political science
In recent years, large language models (LLMs) have been widely adopted in political
science tasks such as election prediction, sentiment analysis, policy impact assessment, and …
Electrostatic force regularization for neural structured pruning
The demand for deploying deep convolutional neural networks (DCNNs) on resource-
constrained devices for real-time applications remains substantial. However, existing state …
INF-LLaVA: Dual-perspective Perception for High-Resolution Multimodal Large Language Model
With advancements in data availability and computing resources, Multimodal Large
Language Models (MLLMs) have showcased capabilities across various fields. However …
NEAT: Nonlinear Parameter-efficient Adaptation of Pre-trained Models
Fine-tuning pre-trained models is crucial for adapting large models to downstream tasks,
often delivering state-of-the-art performance. However, fine-tuning all model parameters is …
Ten Challenging Problems in Federated Foundation Models
Federated Foundation Models (FedFMs) represent a distributed learning paradigm that
fuses general competences of foundation models as well as privacy-preserving capabilities …
Parameter-Efficient Fine-Tuning for Foundation Models
This survey delves into the realm of Parameter-Efficient Fine-Tuning (PEFT) within the
context of Foundation Models (FMs). PEFT, a cost-effective fine-tuning technique, minimizes …
Linear Feedback Control Systems for Iterative Prompt Optimization in Large Language Models
Large Language Models (LLMs) have revolutionized various applications by generating
outputs based on given prompts. However, achieving the desired output requires iterative …
Multi-Scenario Reasoning: Unlocking Cognitive Autonomy in Humanoid Robots for Multimodal Understanding
To improve the cognitive autonomy of humanoid robots, this research proposes a multi-
scenario reasoning architecture to address the technical shortcomings of multi-modal …