Akshat Shrivastava
Co-Founder Perceptron AI, ex-RS FAIR, ex-RS Meta AR
Verified email at cs.uw.edu - Homepage
Title
Cited by
Year
Muppet: Massive multi-task representations with pre-finetuning
A Aghajanyan, A Gupta, A Shrivastava, X Chen, L Zettlemoyer, S Gupta
arXiv preprint arXiv:2101.11038, 2021
Cited by 295 · 2021
Better fine-tuning by reducing representational collapse
A Aghajanyan, A Shrivastava, A Gupta, N Goyal, L Zettlemoyer, S Gupta
arXiv preprint arXiv:2008.03156, 2020
Cited by 257 · 2020
Chameleon: Mixed-Modal Early-Fusion Foundation Models
C Team
arXiv preprint arXiv:2405.09818, 2024
Cited by 155* · 2024
Conversational semantic parsing
A Aghajanyan, J Maillard, A Shrivastava, K Diedrick, M Haeger, H Li, ...
arXiv preprint arXiv:2009.13655, 2020
Cited by 54 · 2020
Layer skip: Enabling early exit inference and self-speculative decoding
M Elhoushi, A Shrivastava, D Liskovich, B Hosmer, B Wasti, L Lai, ...
arXiv preprint arXiv:2404.16710, 2024
Cited by 49 · 2024
Stop: A dataset for spoken task oriented semantic parsing
P Tomasello, A Shrivastava, D Lazar, PC Hsu, D Le, A Sagar, A Elkahky, ...
2022 IEEE Spoken Language Technology Workshop (SLT), 991-998, 2023
Cited by 32 · 2023
Non-autoregressive semantic parsing for compositional task-oriented dialog
A Babu, A Shrivastava, A Aghajanyan, A Aly, A Fan, M Ghazvininejad
arXiv preprint arXiv:2104.04923, 2021
Cited by 28 · 2021
Cross-lingual transfer learning for intent detection of covid-19 utterances
A Arora, A Shrivastava, M Mohit, LSM Lecanda, A Aly
Cited by 26 · 2020
Span pointer networks for non-autoregressive task-oriented semantic parsing
A Shrivastava, P Chuang, A Babu, S Desai, A Arora, A Zotov, A Aly
arXiv preprint arXiv:2104.07275, 2021
Cited by 25 · 2021
Retronlu: Retrieval augmented task-oriented semantic parsing
V Gupta, A Shrivastava, A Sagar, A Aghajanyan, D Savenkov
arXiv preprint arXiv:2109.10410, 2021
Cited by 20 · 2021
Latency-aware neural architecture search with multi-objective bayesian optimization
D Eriksson, PIJ Chuang, S Daulton, P Xia, A Shrivastava, A Babu, S Zhao, ...
arXiv preprint arXiv:2106.11890, 2021
Cited by 17 · 2021
MoMa: Efficient Early-Fusion Pre-training with Mixture of Modality-Aware Experts
XV Lin, A Shrivastava, L Luo, S Iyer, M Lewis, G Ghosh, L Zettlemoyer, ...
arXiv preprint arXiv:2407.21770, 2024
Cited by 15 · 2024
Deliberation model for on-device spoken language understanding
D Le, A Shrivastava, P Tomasello, S Kim, A Livshits, O Kalinli, ML Seltzer
arXiv preprint arXiv:2204.01893, 2022
Cited by 14 · 2022
Privately Customizing Prefinetuning to Better Match User Data in Federated Learning
C Hou, H Zhan, A Shrivastava, S Wang, S Livshits, G Fanti, D Lazar
arXiv preprint arXiv:2302.09042, 2023
Cited by 13 · 2023
Low-resource task-oriented semantic parsing via intrinsic modeling
S Desai, A Shrivastava, A Zotov, A Aly
arXiv preprint arXiv:2104.07224, 2021
Cited by 11 · 2021
iSeqL: interactive sequence learning
A Shrivastava, J Heer
Proceedings of the 25th International Conference on Intelligent User …, 2020
Cited by 11 · 2020
Retrieve-and-Fill for Scenario-based Task-Oriented Semantic Parsing
A Shrivastava, S Desai, A Gupta, A Elkahky, A Livshits, A Zotov, A Aly
arXiv preprint arXiv:2202.00901, 2022
Cited by 7 · 2022
PrE-Text: Training Language Models on Private Federated Data in the Age of LLMs
C Hou, A Shrivastava, H Zhan, R Conway, T Le, A Sagar, G Fanti, D Lazar
Privacy Regulation and Protection in Machine Learning
Cited by 5*
Muppet: massive multi-task representations with pre-finetuning. 2021
A Aghajanyan, A Gupta, A Shrivastava, X Chen, L Zettlemoyer, S Gupta
URL https://arxiv.org/abs/2101.11038
Cited by 5
Small But Funny: A Feedback-Driven Approach to Humor Distillation
S Ravi, P Huber, A Shrivastava, A Sagar, A Aly, V Shwartz, A Einolghozati
arXiv preprint arXiv:2402.18113, 2024
Cited by 4 · 2024
Articles 1–20