Improving causal reasoning in large language models: A survey
Causal reasoning (CR) is a crucial aspect of intelligence, essential for problem-solving,
decision-making, and understanding the world. While large language models (LLMs) can …
Causal prompting: Debiasing large language model prompting based on front-door adjustment
Despite the notable advancements of existing prompting methods, such as In-Context
Learning and Chain-of-Thought for Large Language Models (LLMs), they still face …
Chain of thoughtlessness: An analysis of CoT in planning
Large language model (LLM) performance on reasoning problems typically does not
generalize out of distribution. Previous work has claimed that this can be mitigated by …
Logic-of-thought: Injecting logic into contexts for full reasoning in large language models
Large Language Models (LLMs) have demonstrated remarkable capabilities across various
tasks but their performance in complex logical reasoning tasks remains unsatisfactory …
CREDES: Causal reasoning enhancement and dual-end searching for solving long-range reasoning problems using LLMs
Large language models (LLMs) have demonstrated limitations in handling combinatorial
optimization problems involving long-range reasoning, partially due to causal hallucinations …
Enhancing Fault Localization Through Ordered Code Analysis with LLM Agents and Self-Reflection
Locating and fixing software faults is a time-consuming and resource-intensive task in
software development. Traditional fault localization methods, such as Spectrum-Based Fault …
Watch your steps: Observable and modular chains of thought
CA Cohen, WW Cohen - arXiv preprint arXiv:2409.15359, 2024 - arxiv.org
We propose a variant of chain of thought (CoT) prompting called Program Trace Prompting
that makes explanations more observable while preserving the power, generality and …
Can Large Language Models Understand DL-Lite Ontologies? An Empirical Study
Large language models (LLMs) have shown significant achievements in solving a wide
range of tasks. Recently, LLMs' capability to store, retrieve and infer with symbolic …
An Empirical Study on Self-correcting Large Language Models for Data Science Code Generation
TT Quoc, DH Minh, TQ Thanh… - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs) have recently advanced many applications on software
engineering tasks, particularly the potential for code generation. Among contemporary …
CSCE: Boosting LLM reasoning by simultaneous enhancing of causal significance and consistency
Chain-based reasoning methods like chain of thought (CoT) play a rising role in solving
reasoning tasks for large language models (LLMs). However, the causal illusions …