Fahim Dalvi
Qatar Computing Research Institute
Verified email at hbku.edu.qa
Title · Cited by · Year
What do Neural Machine Translation Models Learn about Morphology?
Y Belinkov, N Durrani, F Dalvi, H Sajjad, J Glass
arXiv preprint arXiv:1704.03471, 2017
476 · 2017
Fighting the COVID-19 infodemic: modeling the perspective of journalists, fact-checkers, social media platforms, policy makers, and the society
F Alam, S Shaar, F Dalvi, H Sajjad, A Nikolov, H Mubarak, GDS Martino, ...
arXiv preprint arXiv:2005.00033, 2020
292* · 2020
Identifying and Controlling Important Neurons in Neural Machine Translation
A Bau, Y Belinkov, H Sajjad, N Durrani, F Dalvi, J Glass
arXiv preprint arXiv:1811.01157, 2018
211 · 2018
What is one grain of sand in the desert? Analyzing individual neurons in deep NLP models
F Dalvi, N Durrani, H Sajjad, Y Belinkov, A Bau, J Glass
Proceedings of the AAAI Conference on Artificial Intelligence 33 (01), 6309-6317, 2019
209 · 2019
On the effect of dropping layers of pre-trained transformer models
H Sajjad, F Dalvi, N Durrani, P Nakov
Computer Speech & Language 77, 101429, 2023
142 · 2023
Findings of the IWSLT 2020 Evaluation Campaign
E Ansari, A Axelrod, N Bach, O Bojar, R Cattoni, F Dalvi, N Durrani, ...
Proceedings of the 17th International Conference on Spoken Language …, 2020
135 · 2020
Evaluating Layers of Representation in Neural Machine Translation on Part-of-Speech and Semantic Tagging Tasks
Y Belinkov, L Màrquez, H Sajjad, N Durrani, F Dalvi, J Glass
Proceedings of the Eighth International Joint Conference on Natural Language …, 2017
135 · 2017
Poor man’s BERT: Smaller and faster transformer models
H Sajjad, F Dalvi, N Durrani, P Nakov
arXiv preprint arXiv:2004.03844 2 (2), 2020
115 · 2020
Incremental Decoding and Training Methods for Simultaneous Translation in Neural Machine Translation
F Dalvi, N Durrani, H Sajjad, S Vogel
Proceedings of the 2018 Conference of the North American Chapter of the …, 2018
109 · 2018
Analyzing redundancy in pretrained transformer models
F Dalvi, H Sajjad, N Durrani, Y Belinkov
arXiv preprint arXiv:2004.04010, 2020
105 · 2020
Analyzing Individual Neurons in Pre-trained Language Models
N Durrani, H Sajjad, F Dalvi, Y Belinkov
arXiv preprint arXiv:2010.02695, 2020
104 · 2020
Similarity Analysis of Contextual Word Representation Models
JM Wu, Y Belinkov, H Sajjad, N Durrani, F Dalvi, J Glass
arXiv preprint arXiv:2005.01172, 2020
85 · 2020
Neuron-level interpretation of deep NLP models: A survey
H Sajjad, N Durrani, F Dalvi
Transactions of the Association for Computational Linguistics 10, 1285-1303, 2022
84 · 2022
On the Linguistic Representational Power of Neural Machine Translation Models
Y Belinkov, N Durrani, F Dalvi, H Sajjad, J Glass
Computational Linguistics 46 (1), 1-52, 2020
83 · 2020
Understanding and Improving Morphological Learning in the Neural Machine Translation Decoder
F Dalvi, N Durrani, H Sajjad, Y Belinkov, S Vogel
Proceedings of the Eighth International Joint Conference on Natural Language …, 2017
76 · 2017
Discovering Latent Concepts Learned in BERT
F Dalvi, AR Khan, F Alam, N Durrani, J Xu, H Sajjad
International Conference on Learning Representations, 2021
74 · 2021
NeuroX: A toolkit for analyzing individual neurons in neural networks
F Dalvi, A Nortonsmith, A Bau, Y Belinkov, H Sajjad, N Durrani, J Glass
Proceedings of the AAAI Conference on Artificial Intelligence 33 (01), 9851-9852, 2019
67 · 2019
How transfer learning impacts linguistic knowledge in deep NLP models?
N Durrani, H Sajjad, F Dalvi
arXiv preprint arXiv:2105.15179, 2021
57 · 2021
Neural Machine Translation Training in a Multi-Domain Scenario
H Sajjad, N Durrani, F Dalvi, Y Belinkov, S Vogel
arXiv preprint arXiv:1708.08712, 2017
56 · 2017
One Size Does Not Fit All: Comparing NMT Representations of Different Granularities
N Durrani, F Dalvi, H Sajjad, Y Belinkov, P Nakov
Proceedings of the 2019 Conference of the North American Chapter of the …, 2019
54 · 2019
Articles 1–20