Yichen Jiang
Apple AI/ML
Verified email at apple.com · Homepage
Title · Cited by · Year
HoVer: A dataset for many-hop fact extraction and claim verification
Y Jiang, S Bordia, Z Zhong, C Dognin, M Singh, M Bansal
arXiv preprint arXiv:2011.03088, 2020
Cited by 139 · 2020
Avoiding reasoning shortcuts: Adversarial evaluation, training, and model development for multi-hop QA
Y Jiang, M Bansal
arXiv preprint arXiv:1906.07132, 2019
Cited by 109 · 2019
Self-assembling modular networks for interpretable multi-hop reasoning
Y Jiang, M Bansal
arXiv preprint arXiv:1909.05803, 2019
Cited by 102 · 2019
Explore, propose, and assemble: An interpretable model for multi-hop reading comprehension
Y Jiang, N Joshi, YC Chen, M Bansal
arXiv preprint arXiv:1906.05210, 2019
Cited by 57 · 2019
Closed-book training to improve summarization encoder memory
Y Jiang, M Bansal
arXiv preprint arXiv:1809.04585, 2018
Cited by 41 · 2018
Inducing Transformer's Compositional Generalization Ability via Auxiliary Sequence Prediction Tasks
Y Jiang, M Bansal
arXiv preprint arXiv:2109.15256, 2021
Cited by 31 · 2021
Enriching transformers with structured tensor-product representations for abstractive summarization
Y Jiang, A Celikyilmaz, P Smolensky, P Soulos, S Rao, H Palangi, ...
arXiv preprint arXiv:2106.01317, 2021
Cited by 20 · 2021
Data factors for better compositional generalization
X Zhou, Y Jiang, M Bansal
arXiv preprint arXiv:2311.04420, 2023
Cited by 7 · 2023
Mutual exclusivity training and primitive augmentation to induce compositionality
Y Jiang, X Zhou, M Bansal
arXiv preprint arXiv:2211.15578, 2022
Cited by 7 · 2022
Hierarchical and Dynamic Prompt Compression for Efficient Zero-shot API Usage
Y Jiang, M Vecchio, M Bansal, A Johannsen
Findings of the Association for Computational Linguistics: EACL 2024, 2162-2174, 2024
Cited by 5 · 2024
Structural biases for improving transformers on translation into morphologically rich languages
P Soulos, S Rao, C Smith, E Rosen, A Celikyilmaz, RT McCoy, Y Jiang, ...
arXiv preprint arXiv:2208.06061, 2022
Cited by 3 · 2022
Learning and analyzing generation order for undirected sequence models
Y Jiang, M Bansal
arXiv preprint arXiv:2112.09097, 2021
Cited by 2 · 2021
Inducing Systematicity in Transformers by Attending to Structurally Quantized Embeddings
Y Jiang, X Zhou, M Bansal
arXiv preprint arXiv:2402.06492, 2024
Cited by 1 · 2024
Articles 1–13