LIFBench: Evaluating the Instruction Following Performance and Stability of Large Language Models in Long-Context Scenarios

X Wu, M Wang, Y Liu, X Shi, H Yan, X Lu, J Zhu… - arXiv preprint arXiv …, 2024 - arxiv.org
As Large Language Models (LLMs) continue to advance in natural language processing
(NLP), their ability to stably follow instructions in long-context inputs has become crucial for …

Better Think with Tables: Leveraging Tables to Enhance Large Language Model Comprehension

J Oh, G Heo, S Oh, J Wang, X Xie… - arXiv preprint arXiv …, 2024 - arxiv.org
Despite the recent advancement of Large Language Models (LLMs), they struggle with
complex queries often involving multiple conditions, common in real-world scenarios. We …

Step-by-Step Mastery: Enhancing Soft Constraint Following Ability of Large Language Models

Q Ren, J Zeng, Q He, J Liang, Y Xiao, W Zhou… - arXiv preprint arXiv …, 2025 - arxiv.org
It is crucial for large language models (LLMs) to follow instructions that involve multiple
constraints. However, soft constraints are semantically related and difficult to verify through …

CS4: Measuring the Creativity of Large Language Models Automatically by Controlling the Number of Story-Writing Constraints

A Atmakuru, J Nainani, RSR Bheemreddy… - arXiv preprint arXiv …, 2024 - arxiv.org
Evaluating the creativity of large language models (LLMs) in story writing is difficult because
LLM-generated stories could seemingly look creative but be very similar to some existing …

Baichuan Alignment Technical Report

M Lin, F Yang, Y Shen, H Sun, T Li, T Zhang… - arXiv preprint arXiv …, 2024 - arxiv.org
We introduce Baichuan Alignment, a detailed analysis of the alignment techniques
employed in the Baichuan series of models. This represents the industry's first …

IOPO: Empowering LLMs with Complex Instruction Following via Input-Output Preference Optimization

X Zhang, H Yu, C Fu, F Huang, Y Li - arXiv preprint arXiv:2411.06208, 2024 - arxiv.org
In the realm of large language models (LLMs), the ability of models to accurately follow
instructions is paramount as more agents and applications leverage LLMs for construction …

UltraGen: Extremely Fine-grained Controllable Generation via Attribute Reconstruction and Global Preference Optimization

L Yun, L Peng, J Shang - arXiv preprint arXiv:2502.12375, 2025 - arxiv.org
Fine granularity is an essential requirement for controllable text generation, which has seen
rapid growth with the ability of LLMs. However, existing methods focus mainly on a small set …

LHPF: An LLM-Based Hierarchical Pipeline Framework for Spoken Language Understanding

X Zhu, Y Chen, X Rong - 2024 3rd International Conference on …, 2024 - ieeexplore.ieee.org
In this study, we propose a hierarchical task learning framework based on Large Language
Models (LLMs) to improve the performance in spoken language understanding (SLU). Our …

LLM self-correction with DeCRIM: Decompose, critique, and refine for enhanced following of instructions with multiple constraints

TP Ferraz, K Mehta, YH Lin, HS Chang, S Oraby… - arXiv preprint arXiv …, 2024 - arxiv.org
Instruction following is a key capability for LLMs. However, recent studies have shown that
LLMs often struggle with instructions containing multiple constraints (e.g., a request to create a …