On the Brittle Foundations of ReAct Prompting for Agentic Large Language Models

M Verma, S Bhambri, S Kambhampati - arXiv preprint arXiv:2405.13966, 2024 - arxiv.org
The reasoning abilities of Large Language Models (LLMs) remain a topic of debate. Some
methods, such as ReAct-based prompting, have gained popularity for claiming to enhance …

InsTALL: Context-aware Instructional Task Assistance with Multi-modal Large Language Models

P Nguyen, S Sengupta, G Malik, A Gupta… - arXiv preprint arXiv …, 2025 - arxiv.org
The improved competence of generative models can help build multi-modal virtual
assistants that leverage modalities beyond language. By observing humans performing …

Guidance Priors to Reduce Human Feedback Burden in Sequential Decision Making

M Verma - 2024 - search.proquest.com
Human-in-the-loop sequential decision making, such as Reinforcement Learning from
Human Feedback (RLHF) or behavior synthesis, leverages human feedback to the AI system …

Do Think Tags Really Help LLMs Plan? A Critical Evaluation of ReAct-Style Prompting

M Verma, S Bhambri, S Kambhampati - Adaptive Foundation Models … - openreview.net
The reasoning abilities of Large Language Models (LLMs) remain a topic of debate and
are critically tested in sequential decision-making problems. ReAct, a recently popular …