Jacob Andreas
Verified email at mit.edu - Homepage
Title
Cited by
Year
Neural module networks
J Andreas, M Rohrbach, T Darrell, D Klein
CVPR, 2016
Cited by 1471 · 2016
Learning to reason: End-to-end module networks for visual question answering
R Hu, J Andreas, M Rohrbach, T Darrell, K Saenko
ICCV, 2017
Cited by 726 · 2017
Learning to Compose Neural Networks for Question Answering
J Andreas, M Rohrbach, T Darrell, D Klein
NAACL, 2016
Cited by 702 · 2016
Modular multitask reinforcement learning with policy sketches
J Andreas, D Klein, S Levine
ICML, 2017
Cited by 582 · 2017
Speaker-follower models for vision-and-language navigation
D Fried, R Hu, V Cirik, A Rohrbach, J Andreas, LP Morency, ...
NeurIPS, 2018
Cited by 551 · 2018
What learning algorithm is in-context learning? investigations with linear models
E Akyürek, D Schuurmans, J Andreas, T Ma, D Zhou
ICLR, 2023
Cited by 491* · 2023
Modeling relationships in referential expressions with compositional modular networks
R Hu, M Rohrbach, J Andreas, T Darrell, K Saenko
CVPR, 2017
Cited by 442 · 2017
Experience grounds language
Y Bisk, A Holtzman, J Thomason, J Andreas, Y Bengio, J Chai, M Lapata, ...
EMNLP, 2020
Cited by 424 · 2020
A survey of reinforcement learning informed by natural language
J Luketina, N Nardelli, G Farquhar, J Foerster, J Andreas, E Grefenstette, ...
IJCAI, 2019
Cited by 327 · 2019
Good-enough compositional data augmentation
J Andreas
EMNLP, 2020
Cited by 272 · 2020
Pre-trained language models for interactive decision-making
S Li, X Puig, C Paxton, Y Du, C Wang, L Fan, ...
NeurIPS, 2022
Cited by 250* · 2022
Explainable neural computation via stack neural module networks
R Hu, J Andreas, T Darrell, K Saenko
ECCV, 2018
Cited by 238 · 2018
A minimal span-based neural constituency parser
M Stern, J Andreas, D Klein
ACL, 2017
Cited by 235 · 2017
Compositional explanations of neurons
J Mu, J Andreas
NeurIPS, 2020
Cited by 210 · 2020
Guiding Pretraining in Reinforcement Learning with Large Language Models
Y Du, O Watkins, Z Wang, C Colas, T Darrell, P Abbeel, A Gupta, ...
ICML, 2023
Cited by 207 · 2023
Reasoning about pragmatics with neural listeners and speakers
J Andreas, D Klein
EMNLP, 2016
Cited by 192 · 2016
Language Models as Agent Models
J Andreas
EMNLP Findings, 2022
Cited by 180 · 2022
Measuring compositionality in representation learning
J Andreas
ICLR, 2019
Cited by 177 · 2019
Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks
Z Wu, L Qiu, A Ross, E Akyürek, B Chen, B Wang, N Kim, J Andreas, ...
NAACL, 2024
Cited by 171 · 2024
Implicit Representations of Meaning in Neural Language Models
BZ Li, M Nye, J Andreas
ACL, 2021
Cited by 168 · 2021
Articles 1–20