Swaroop Mishra
Research Scientist, Google DeepMind
Verified email at google.com - Homepage
Title
Cited by
Year
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, JB Alayrac, J Yu, R Soricut, J Schalkwyk, ...
arXiv preprint arXiv:2312.11805, 2023
Cited by 3079 · 2023
Self-Instruct: Aligning Language Models with Self-Generated Instructions
Y Wang, Y Kordi, S Mishra, A Liu, NA Smith, D Khashabi, H Hajishirzi
ACL, 2022
Cited by 1886 · 2022
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
TMLR, 2022
Cited by 1429* · 2022
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
G Team, P Georgiev, VI Lei, R Burnell, L Bai, A Gulati, G Tanzer, ...
arXiv preprint arXiv:2403.05530, 2024
Cited by 1062 · 2024
Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering
P Lu, S Mishra, T Xia, L Qiu, KW Chang, SC Zhu, O Tafjord, P Clark, ...
NeurIPS, 2022
Cited by 938 · 2022
Cross-task generalization via natural language crowdsourcing instructions
S Mishra, D Khashabi, C Baral, H Hajishirzi
ACL, 2021
Cited by 711 · 2021
Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks
Y Wang*, S Mishra*, P Alipoormolabashi, Y Kordi, A Mirzaei, A Naik, ...
EMNLP, 2022
Cited by 548 · 2022
Benchmarking generalization via in-context instructions on 1,600+ language tasks
Y Wang, S Mishra, P Alipoormolabashi, Y Kordi, A Mirzaei, A Arunkumar, ...
arXiv preprint arXiv:2204.07705, 2022
Cited by 339* · 2022
Gemini: A Family of Highly Capable Multimodal Models, Dec. 2023
G Team
URL http://arxiv.org/abs/2312.11805
Cited by 333*
Large language models cannot self-correct reasoning yet
J Huang, X Chen, S Mishra, HS Zheng, AW Yu, X Song, D Zhou
ICLR, 2023
Cited by 314 · 2023
Reframing Instructional Prompts to GPTk's Language
S Mishra, D Khashabi, C Baral, Y Choi, H Hajishirzi
ACL, 2021
Cited by 213 · 2021
Instruction-following evaluation for large language models
J Zhou, T Lu, S Mishra, S Brahma, S Basu, Y Luan, D Zhou, L Hou
arXiv preprint arXiv:2311.07911, 2023
Cited by 189 · 2023
Lila: A Unified Benchmark for Mathematical Reasoning
S Mishra, M Finlayson, P Lu, L Tang, S Welleck, C Baral, T Rajpurohit, ...
EMNLP, 2022
Cited by 118 · 2022
NumGLUE: A Suite of Fundamental yet Challenging Mathematical Reasoning Tasks
S Mishra, A Mitra, N Varshney, B Sachdeva, P Clark, C Baral, A Kalyan
ACL, 2022
Cited by 105* · 2022
Commonsense Reasoning with Implicit Knowledge in Natural Language
P Banerjee*, S Mishra*, KK Pal*, A Mitra, C Baral
AKBC, 2021
Cited by 83* · 2021
Take a step back: evoking reasoning via abstraction in large language models
HS Zheng, S Mishra, X Chen, HT Cheng, EH Chi, QV Le, D Zhou
ICLR, 2023
Cited by 82 · 2023
In-BoXBART: Get Instructions into Biomedical Multi-Task Learning
M Parmar, S Mishra, M Purohit, M Luo, MH Murad, C Baral
NAACL, 2022
Cited by 66 · 2022
Don't Blame the Annotator: Bias Already Starts in the Annotation Instructions
M Parmar*, S Mishra*, M Geva, C Baral
EACL Outstanding Paper Award, 2022
Cited by 63 · 2022
How FaR Are Large Language Models From Agents with Theory-of-Mind?
P Zhou, A Madaan, SP Potharaju, A Gupta, KR McKee, A Holtzman, ...
arXiv preprint arXiv:2310.03051, 2023
Cited by 54 · 2023
Investigating Selective Prediction Approaches Across Several Tasks in IID, OOD, and Adversarial Settings
N Varshney, S Mishra, C Baral
ACL, 2022
Cited by 52 · 2022
Articles 1–20