Retrieval augmented generation (RAG) and beyond: A comprehensive survey on how to make your LLMs use external data more wisely
Large language models (LLMs) augmented with external data have demonstrated
remarkable capabilities in completing real-world tasks. Techniques for integrating external …
Agent Hospital: A simulacrum of hospital with evolvable medical agents
The recent rapid development of large language models (LLMs) has sparked a new wave of
technological revolution in medical artificial intelligence (AI). While LLMs are designed to …
Recursive introspection: Teaching language model agents how to self-improve
A central piece in enabling intelligent agentic behavior in foundation models is to make them
capable of introspecting upon their behavior, reasoning, and correcting their mistakes as …
STaR-GATE: Teaching language models to ask clarifying questions
When prompting language models to complete a task, users often leave important aspects
unsaid. While asking questions could resolve this ambiguity (GATE; Li et al., 2023) …
Quiet-STaR: Language models can teach themselves to think before speaking
When writing and talking, people sometimes pause to think. Although reasoning-focused
works have often framed reasoning as a method of answering questions or completing …
Recursive introspection: Teaching LLM agents how to self-improve
A central piece in enabling intelligent agentic behavior in foundation models is to make them
capable of introspecting upon their behavior, to reason and correct their mistakes. However …
Retrieved in-context principles from previous mistakes
In-context learning (ICL) has been instrumental in adapting Large Language Models (LLMs)
to downstream tasks using correct input-output examples. Recent advances have attempted …
Neural-symbolic collaborative distillation: Advancing small language models for complex reasoning tasks
In this paper, we propose Neural-Symbolic Collaborative Distillation (NesyCD), a novel knowledge distillation method for …
Investigating the potential of using large language models for scheduling
The inaugural ACM International Conference on AI-powered Software introduced the AIware
Challenge, prompting researchers to explore AI-driven tools for optimizing conference …
Wrong-of-thought: An integrated reasoning framework with multi-perspective verification and wrong information
Chain-of-Thought (CoT) has become a vital technique for enhancing the performance of
Large Language Models (LLMs), attracting increasing attention from researchers. One …