SelfEvolve: A code evolution framework via large language models

S Jiang, Y Wang, Y Wang - arXiv preprint arXiv:2306.02907, 2023 - arxiv.org
Large language models (LLMs) have already revolutionized code generation, after being
pretrained on publicly available code data. However, while various methods have been …

Multilingual machine translation with large language models: Empirical results and analysis

W Zhu, H Liu, Q Dong, J Xu, S Huang, L Kong… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs) have demonstrated remarkable potential in handling
multilingual machine translation (MMT). In this paper, we systematically investigate the …

PoSE: Efficient context window extension of LLMs via positional skip-wise training

D Zhu, N Yang, L Wang, Y Song, W Wu, F Wei… - arXiv preprint arXiv …, 2023 - arxiv.org
Large Language Models (LLMs) are trained with a pre-defined context length, restricting
their use in scenarios requiring long inputs. Previous efforts for adapting LLMs to a longer …

Inference scaling for long-context retrieval augmented generation

Z Yue, H Zhuang, A Bai, K Hui, R Jagerman… - arXiv preprint arXiv …, 2024 - arxiv.org
The scaling of inference computation has unlocked the potential of long-context large
language models (LLMs) across diverse settings. For knowledge-intensive tasks, the …

Probing the decision boundaries of in-context learning in large language models

S Zhao, T Nguyen, A Grover - Advances in Neural …, 2025 - proceedings.neurips.cc
In-context learning is an emergent paradigm in large language models (LLMs) that enables
them to generalize to new tasks and domains by simply prompting these models with a few …

Multimodal task vectors enable many-shot multimodal in-context learning

B Huang, C Mitra, A Arbelle, L Karlinsky… - arXiv preprint arXiv …, 2024 - arxiv.org
The recent success of interleaved Large Multimodal Models (LMMs) in few-shot learning
suggests that in-context learning (ICL) with many examples can be promising for learning …

Evaluating Large Language Models in Echocardiography Reporting: Opportunities and Challenges

CJ Chao, I Banerjee, R Arsanjani, C Ayoub, A Tseng… - medRxiv, 2024 - medrxiv.org
Background The increasing need for diagnostic echocardiography (echo) tests presents
challenges in preserving the quality and promptness of reports. While Large Language …

More is not always better? Enhancing Many-Shot In-Context Learning with Differentiated and Reweighting Objectives

X Zhang, A Lv, Y Liu, F Sung, W Liu, S Shang… - arXiv preprint arXiv …, 2025 - arxiv.org
Large language models (LLMs) excel at few-shot in-context learning (ICL) without requiring
parameter updates. However, as the number of ICL demonstrations increases from a few to …

Vector-ICL: In-context Learning with Continuous Vector Representations

Y Zhuang, C Singh, L Liu, J Shang, J Gao - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) have shown remarkable in-context learning (ICL)
capabilities on textual data. We explore whether these capabilities can be extended to …

A controlled study on long context extension and generalization in LLMs

Y Lu, JN Yan, S Yang, JT Chiu, S Ren, F Yuan… - arXiv preprint arXiv …, 2024 - arxiv.org
Broad textual understanding and in-context learning require language models that utilize full
document contexts. Due to the implementation challenges associated with directly training …