Improving causal reasoning in large language models: A survey

L Yu, D Chen, S Xiong, Q Wu, Q Liu, D Li… - arXiv preprint arXiv …, 2024 - arxiv.org
Causal reasoning (CR) is a crucial aspect of intelligence, essential for problem-solving,
decision-making, and understanding the world. While large language models (LLMs) can …

Causal prompting: Debiasing large language model prompting based on front-door adjustment

C Zhang, L Zhang, J Wu, D Zhou, Y He - arXiv preprint arXiv:2403.02738, 2024 - arxiv.org
Despite the notable advancements of existing prompting methods, such as In-Context
Learning and Chain-of-Thought for Large Language Models (LLMs), they still face …

Chain of thoughtlessness: An analysis of CoT in planning

K Stechly, K Valmeekam, S Kambhampati - arXiv preprint arXiv …, 2024 - arxiv.org
Large language model (LLM) performance on reasoning problems typically does not
generalize out of distribution. Previous work has claimed that this can be mitigated by …

Logic-of-thought: Injecting logic into contexts for full reasoning in large language models

T Liu, W Xu, W Huang, X Wang, J Wang… - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs) have demonstrated remarkable capabilities across various
tasks but their performance in complex logical reasoning tasks remains unsatisfactory …

CreDes: Causal reasoning enhancement and dual-end searching for solving long-range reasoning problems using LLMs

K Wang, X Zhang, H Liu, S Han, H Ma, T Hu - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) have demonstrated limitations in handling combinatorial
optimization problems involving long-range reasoning, partially due to causal hallucinations …

Enhancing Fault Localization Through Ordered Code Analysis with LLM Agents and Self-Reflection

MN Rafi, DJ Kim, TH Chen, S Wang - arXiv preprint arXiv:2409.13642, 2024 - arxiv.org
Locating and fixing software faults is a time-consuming and resource-intensive task in
software development. Traditional fault localization methods, such as Spectrum-Based Fault …

Watch your steps: Observable and modular chains of thought

CA Cohen, WW Cohen - arXiv preprint arXiv:2409.15359, 2024 - arxiv.org
We propose a variant of chain of thought (CoT) prompting called Program Trace Prompting
that makes explanations more observable while preserving the power, generality and …

Can Large Language Models Understand DL-Lite Ontologies? An Empirical Study

K Wang, G Qi, J Li, S Zhai - arXiv preprint arXiv:2406.17532, 2024 - arxiv.org
Large language models (LLMs) have shown significant achievements in solving a wide
range of tasks. Recently, LLMs' capability to store, retrieve and infer with symbolic …

An Empirical Study on Self-correcting Large Language Models for Data Science Code Generation

TT Quoc, DH Minh, TQ Thanh… - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs) have recently advanced many applications on software
engineering tasks, particularly the potential for code generation. Among contemporary …

CSCE: Boosting LLM reasoning by simultaneous enhancing of causal significance and consistency

K Wang, X Zhang, Z Guo, T Hu, H Ma - arXiv preprint arXiv:2409.17174, 2024 - arxiv.org
Chain-based reasoning methods such as chain of thought (CoT) play a rising role in solving
reasoning tasks for large language models (LLMs). However, the causal illusions …