Reasoning with Language Model is Planning with World Model. S Hao, Y Gu, H Ma, JJ Hong, Z Wang, DZ Wang, Z Hu. EMNLP 2023. Cited by 415.
ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings. S Hao, T Liu, Z Wang, Z Hu. NeurIPS 2023 (Oral); SocalNLP 2023 (Best Paper Award). Cited by 123.
BertNet: Harvesting Knowledge Graphs with Arbitrary Relations from Pretrained Language Models. S Hao, B Tan, K Tang, B Ni, X Shao, H Zhang, E Xing, Z Hu. ACL 2023 (Findings). Cited by 76*.
Does Recommend-Revise Produce Reliable Annotations? An Analysis on Missing Instances in DocRED. Q Huang, S Hao, Y Ye, S Zhu, Y Feng, D Zhao. ACL 2022. Cited by 34.
Benchmarking Commonsense Knowledge Base Population with an Effective Evaluation Dataset. T Fang, W Wang, S Choi, S Hao, H Zhang, Y Song, B He. EMNLP 2021. Cited by 22.
LLM Reasoners: New Evaluation, Library, and Analysis of Step-by-Step Reasoning with Large Language Models. S Hao, Y Gu, H Luo, T Liu, X Shao, X Wang, S Xie, H Ma, A Samavedhi, ... COLM 2024. Cited by 17.
Pandora: Towards General World Model with Natural Language Actions and Video States. J Xiang, G Liu, Y Gu, Q Gao, Y Ning, Y Zha, Z Feng, T Tao, S Hao, Y Shi, ... arXiv preprint arXiv:2406.09455, 2024. Cited by 14.
Training Large Language Models to Reason in a Continuous Latent Space. S Hao, S Sukhbaatar, DJ Su, X Li, Z Hu, J Weston, Y Tian. arXiv preprint arXiv:2412.06769, 2024. Cited by 13*.
Flow of Reasoning: Efficient Training of LLM Policy with Divergent Thinking. F Yu, L Jiang, H Kang, S Hao, L Qin. arXiv preprint arXiv:2406.05673, 2024. Cited by 5.
Offline Reinforcement Learning for LLM Multi-Step Reasoning. H Wang*, S Hao*, H Dong, S Zhang, Y Bao, Z Yang, Y Wu. arXiv preprint arXiv:2412.16145, 2024. Cited by 1.
Chapter 7: Neural-symbolic interaction and co-evolving. B Tan, S Hao, E Xing, Z Hu. Compendium of Neurosymbolic Artificial Intelligence 369, 125, 2023.