Large language model (LLM) for telecommunications: A comprehensive survey on principles, key techniques, and opportunities

H Zhou, C Hu, Y Yuan, Y Cui… - … Surveys & Tutorials, 2024 - ieeexplore.ieee.org
Large language models (LLMs) have received considerable attention recently due to their
outstanding comprehension and reasoning capabilities, leading to great progress in many …

Long-form factuality in large language models

J Wei, C Yang, X Song, Y Lu, N Hu, J Huang… - arXiv preprint arXiv…, 2024 - arxiv.org
Large language models (LLMs) often generate content that contains factual errors when
responding to fact-seeking prompts on open-ended topics. To benchmark a model's long …

What makes multimodal in-context learning work?

FB Baldassini, M Shukor, M Cord… - Proceedings of the …, 2024 - openaccess.thecvf.com
Large Language Models have demonstrated remarkable performance across
various tasks, exhibiting the capacity to swiftly acquire new skills, such as through In-Context …

Do large language models have compositional ability? An investigation into limitations and scalability

Z Xu, Z Shi, Y Liang - arXiv preprint arXiv:2407.15720, 2024 - arxiv.org
Large language models (LLMs) have emerged as powerful tools for many AI problems and
exhibit remarkable in-context learning (ICL) capabilities. Compositional ability, solving …

Data-juicer: A one-stop data processing system for large language models

D Chen, Y Huang, Z Ma, H Chen, X Pan, C Ge… - Companion of the 2024 …, 2024 - dl.acm.org
The immense evolution in Large Language Models (LLMs) has underscored the importance
of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from …

Rag-driver: Generalisable driving explanations with retrieval-augmented in-context learning in multi-modal large language model

J Yuan, S Sun, D Omeiza, B Zhao, P Newman… - arXiv preprint arXiv…, 2024 - arxiv.org
We need to trust robots that use often opaque AI methods. They need to explain themselves
to us, and we need to trust their explanations. In this regard, explainability plays a critical role …

In-context learning with iterative demonstration selection

C Qin, A Zhang, C Chen, A Dagar, W Ye - arXiv preprint arXiv:2310.09881, 2023 - arxiv.org
Spurred by advancements in scale, large language models (LLMs) have demonstrated
strong few-shot learning ability via in-context learning (ICL). However, the performance of …

Large language model instruction following: A survey of progresses and challenges

R Lou, K Zhang, W Yin - Computational Linguistics, 2024 - direct.mit.edu
Task semantics can be expressed by a set of input-output examples or a piece of textual
instruction. Conventional machine learning approaches for natural language processing …