On the prospects of incorporating large language models (LLMs) in automated planning and scheduling (APS)

V Pallagani, BC Muppasani, K Roy, F Fabiano… - Proceedings of the …, 2024 - ojs.aaai.org
Automated Planning and Scheduling is among the growing areas in Artificial
Intelligence (AI) where mention of LLMs has gained popularity. Based on a comprehensive …

LLM+P: Empowering large language models with optimal planning proficiency

B Liu, Y Jiang, X Zhang, Q Liu, S Zhang… - arxiv preprint arxiv …, 2023 - arxiv.org
Large language models (LLMs) have demonstrated remarkable zero-shot generalization
abilities: state-of-the-art chatbots can provide plausible answers to many common questions …

Leveraging pre-trained large language models to construct and utilize world models for model-based task planning

L Guan, K Valmeekam, S Sreedharan… - Advances in …, 2023 - proceedings.neurips.cc
There is a growing interest in applying pre-trained large language models (LLMs) to
planning problems. However, methods that use LLMs directly as planners are currently …

Translating natural language to planning goals with large-language models

Y **e, C Yu, T Zhu, J Bai, Z Gong, H Soh - arxiv preprint arxiv:2302.05128, 2023 - arxiv.org
Recent large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language processing (NLP) tasks, leading to intense excitement about their …

Generalized planning in pddl domains with pretrained large language models

T Silver, S Dan, K Srinivas, JB Tenenbaum… - Proceedings of the …, 2024 - ojs.aaai.org
Recent work has considered whether large language models (LLMs) can function as
planners: given a task, generate a plan. We investigate whether LLMs can serve as …

Building cooperative embodied agents modularly with large language models

H Zhang, W Du, J Shan, Q Zhou, Y Du… - arxiv preprint arxiv …, 2023 - arxiv.org
Large Language Models (LLMs) have demonstrated impressive planning abilities in single-
agent embodied tasks across various domains. However, their capacity for planning and …

Multimodal neurons in pretrained text-only transformers

S Schwettmann, N Chowdhury… - Proceedings of the …, 2023 - openaccess.thecvf.com
Language models demonstrate remarkable capacity to generalize representations
learned in one modality to downstream tasks in other modalities. Can we trace this ability to …

Large language models are in-context semantic reasoners rather than symbolic reasoners

X Tang, Z Zheng, J Li, F Meng, SC Zhu, Y Liang… - arxiv preprint arxiv …, 2023 - arxiv.org
The emergent few-shot reasoning capabilities of Large Language Models (LLMs) have
excited the natural language and machine learning community over recent years. Despite …

Plansformer Tool: Demonstrating Generation of Symbolic Plans Using Transformers.

V Pallagani, B Muppasani, B Srivastava, F Rossi… - IJCAI, 2023 - ijcai.org
Plansformer is a novel tool that utilizes a fine-tuned language model based on transformer
architecture to generate symbolic plans. Transformers are a type of neural network …

Saycanpay: Heuristic planning with large language models using learnable domain knowledge

R Hazra, PZ Dos Martires, L De Raedt - Proceedings of the AAAI …, 2024 - ojs.aaai.org
Large Language Models (LLMs) have demonstrated impressive planning abilities due to
their vast "world knowledge". Yet, obtaining plans that are both feasible (grounded in …