A review of current trends, techniques, and challenges in large language models (llms)

R Patil, V Gudivada - Applied Sciences, 2024 - mdpi.com
Natural language processing (NLP) has significantly transformed in the last decade,
especially in the field of language modeling. Large language models (LLMs) have achieved …

Fine-tuning aligned language models compromises safety, even when users do not intend to!

X Qi, Y Zeng, T **e, PY Chen, R Jia, P Mittal… - arxiv preprint arxiv …, 2023 - arxiv.org
Optimizing large language models (LLMs) for downstream use cases often involves the
customization of pre-trained LLMs through further fine-tuning. Meta's open release of Llama …

Many-shot in-context learning

R Agarwal, A Singh, L Zhang… - Advances in …, 2025 - proceedings.neurips.cc
Large language models (LLMs) excel at few-shot in-context learning (ICL)--learning from a
few examples provided in context at inference, without any weight updates. Newly expanded …

Tptu: Task planning and tool usage of large language model-based ai agents

J Ruan, Y Chen, B Zhang, Z Xu, T Bao… - … Models for Decision …, 2023 - openreview.net
With recent advancements in natural language processing, Large Language Models (LLMs)
have emerged as powerful tools for various real-world applications. Despite their prowess …

Wise: Rethinking the knowledge memory for lifelong model editing of large language models

P Wang, Z Li, N Zhang, Z Xu, Y Yao… - Advances in …, 2025 - proceedings.neurips.cc
Large language models (LLMs) need knowledge updates to meet the ever-growing world
facts and correct the hallucinated responses, facilitating the methods of lifelong model …

In-context learning with long-context models: An in-depth exploration

A Bertsch, M Ivgi, U Alon, J Berant, MR Gormley… - arxiv preprint arxiv …, 2024 - arxiv.org
As model context lengths continue to increase, the number of demonstrations that can be
provided in-context approaches the size of entire training datasets. We study the behavior of …

Evaluating instruction-tuned large language models on code comprehension and generation

Z Yuan, J Liu, Q Zi, M Liu, X Peng, Y Lou - arxiv preprint arxiv:2308.01240, 2023 - arxiv.org
In this work, we evaluate 10 open-source instructed LLMs on four representative code
comprehension and generation tasks. We have the following main findings. First, for the zero …

Stress-testing capability elicitation with password-locked models

R Greenblatt, F Roger… - Advances in Neural …, 2025 - proceedings.neurips.cc
To determine the safety of large language models (LLMs), AI developers must be able to
assess their dangerous capabilities. But simple prompting strategies often fail to elicit an …

Llmparser: An exploratory study on using large language models for log parsing

Z Ma, AR Chen, DJ Kim, TH Chen, S Wang - Proceedings of the IEEE …, 2024 - dl.acm.org
Logs are important in modern software development with runtime information. Log parsing is
the first step in many log-based analyses, that involve extracting structured information from …

Unveiling the generalization power of fine-tuned large language models

H Yang, Y Zhang, J Xu, H Lu, PA Heng… - arxiv preprint arxiv …, 2024 - arxiv.org
While Large Language Models (LLMs) have demonstrated exceptional multitasking abilities,
fine-tuning these models on downstream, domain-specific datasets is often necessary to …