Mathematical language models: A survey

W Liu, H Hu, J Zhou, Y Ding, J Li, J Zeng, M He… - arXiv preprint arXiv …, 2023 - arxiv.org
In recent years, there has been remarkable progress in leveraging Language Models (LMs),
encompassing Pre-trained Language Models (PLMs) and Large-scale Language Models …

Instance-adaptive zero-shot chain-of-thought prompting

X Yuan, C Shen, S Yan, X Zhang, L Xie… - arXiv preprint arXiv …, 2024 - arxiv.org
Zero-shot Chain-of-Thought (CoT) prompting emerges as a simple and effective strategy for
enhancing the performance of large language models (LLMs) in real-world reasoning tasks …
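The zero-shot CoT strategy this entry builds on can be sketched in a few lines: rather than supplying worked examples, a single trigger phrase is appended to the question to elicit step-by-step reasoning. This is a minimal, illustrative sketch; `query_llm` is a hypothetical placeholder for whatever model API is in use, not an API from the paper.

```python
# Zero-shot Chain-of-Thought prompting: append a reasoning trigger instead of
# providing few-shot exemplars. `query_llm` is a hypothetical stand-in for a
# caller-supplied function mapping prompt text -> model completion.

def build_zero_shot_cot_prompt(question: str) -> str:
    """Wrap a question with the standard zero-shot CoT reasoning trigger."""
    return f"Q: {question}\nA: Let's think step by step."

def answer(question: str, query_llm) -> str:
    # The model is invoked on the augmented prompt; its completion then
    # contains intermediate reasoning steps before the final answer.
    return query_llm(build_zero_shot_cot_prompt(question))

print(build_zero_shot_cot_prompt("If I have 3 apples and eat one, how many remain?"))
```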

Demystifying chains, trees, and graphs of thoughts

M Besta, F Memedi, Z Zhang, R Gerstenberger… - arXiv preprint arXiv …, 2024 - arxiv.org
The field of natural language processing (NLP) has witnessed significant progress in recent
years, with a notable focus on improving large language models' (LLM) performance through …
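The chain, tree, and graph "of thoughts" topologies this survey contrasts can all be expressed with one small structure: each reasoning state ("thought") has zero or more successors, so a chain is the one-successor special case and a tree branches into alternatives. The names below are illustrative, not taken from the paper.

```python
# A minimal data-structure sketch of reasoning topologies: chain-of-thought
# is a linear path, tree-of-thoughts branches into alternatives, and a graph
# additionally allows thoughts to share or merge successors.

from dataclasses import dataclass, field

@dataclass
class Thought:
    text: str
    successors: list["Thought"] = field(default_factory=list)

# Chain-of-Thought: a single linear path of reasoning states.
chain = Thought("parse problem", [Thought("compute", [Thought("answer")])])

# Tree-of-Thoughts: branch into alternative strategies, then pick the best path.
root = Thought("parse problem")
root.successors = [Thought("strategy A"), Thought("strategy B")]

def depth(t: Thought) -> int:
    """Length of the longest reasoning path starting from this thought."""
    return 1 + max((depth(s) for s in t.successors), default=0)

print(depth(chain))  # → 3
```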

Unlocking Black-Box Prompt Tuning Efficiency via Zeroth-Order Optimization

H Zhan, C Chen, T Ding, Z Li, R Sun - Findings of the Association …, 2024 - aclanthology.org
Prompt optimization emerges as an important technique for adapting Large Language
Models (LLMs) to specific tasks. Unfortunately, LLM proprietors often limit access to models' …

Towards a unified view of answer calibration for multi-step reasoning

S Deng, N Zhang, N Oo, B Hooi - arXiv preprint arXiv:2311.09101, 2023 - arxiv.org
Large Language Models (LLMs) employing Chain-of-Thought (CoT) prompting have
broadened the scope for improving multi-step reasoning capabilities. We generally divide …

Unveiling the Magic of Code Reasoning through Hypothesis Decomposition and Amendment

Y Zhao, T Ji, W Feng, Z Huang, Q Liu, Z Liu… - arXiv preprint arXiv …, 2025 - arxiv.org
Reasoning ability is one of the most enigmatic and captivating aspects of large
language models (LLMs). Numerous studies are dedicated to exploring and expanding the …

Evolutionary Pre-Prompt Optimization for Mathematical Reasoning

M Videau, A Leite, M Schoenauer… - arXiv preprint arXiv …, 2024 - arxiv.org
Recent advancements have highlighted that large language models (LLMs), when given a
small set of task-specific examples, demonstrate remarkable proficiency, a capability that …

E&S-Gainer: An Emotion Aware and Strategy Enhanced Model for Emotional Support Conversation

C Yang, D Wang, S Feng, Y Zhang, G Yu - International Conference on …, 2024 - Springer
The Emotional Support Conversation (ESC) task aims to alleviate emotional
distress in seekers and offer them an outlet for expressing their aggravation, garnering …

Fallback Prompting Guides Large Language Models for Accurate Responses in Complex Reasoning

J Sun, Z Zhang, X Wang, X Ji, Y Zhang - Journal of Networking and …, 2024 - iecscience.org
Since the introduction of Chain-of-Thought (CoT), leveraging Large Language Models
(LLMs) to solve complex reasoning problems has become possible. While an increasing …

DCR: Divide-and-Conquer Reasoning for Multi-choice Question Answering with LLMs

Z Meng, Y Zhang, Z Feng, Z Liu - arXiv preprint arXiv:2401.05190, 2024 - arxiv.org
Large language models (LLMs) have shown impressive performance in reasoning
benchmarks with the emergence of Chain-of-Thought (CoT), particularly in multi-choice …