Ahmad Rashid
Vector Institute; University of Waterloo
Verified email at uwaterloo.ca
Title · Cited by · Year
Context-aware adversarial training for name regularity bias in named entity recognition
A Ghaddar, P Langlais, A Rashid, M Rezagholizadeh
Transactions of the Association for Computational Linguistics 9, 586-604, 2021
Cited by 43 · 2021
Kronecker decomposition for GPT compression
A Edalati, M Tahaei, A Rashid, VP Nia, JJ Clark, M Rezagholizadeh
arXiv preprint arXiv:2110.08152, 2021
Cited by 37 · 2021
MATE-KD: Masked adversarial text, a companion to knowledge distillation
A Rashid, V Lioutas, M Rezagholizadeh
arXiv preprint arXiv:2105.05912, 2021
Cited by 35 · 2021
End-to-end self-debiasing framework for robust NLU training
A Ghaddar, P Langlais, M Rezagholizadeh, A Rashid
arXiv preprint arXiv:2109.02071, 2021
Cited by 33 · 2021
Systems and methods for multilingual text generation
M Rezagholizadeh, MA Haidar, A Do-Omri, A Rashid
US Patent 11,151,334, 2021
Cited by 31 · 2021
Revisiting pre-trained language models and their evaluation for Arabic natural language understanding
A Ghaddar, Y Wu, S Bagga, A Rashid, K Bibi, M Rezagholizadeh, C Xing, ...
arXiv preprint arXiv:2205.10687, 2022
Cited by 24 · 2022
A short study on compressing decoder-based language models
T Li, YE Mesbahi, I Kobyzev, A Rashid, A Mahmud, N Anchuri, ...
arXiv preprint arXiv:2110.08460, 2021
Cited by 24 · 2021
Towards zero-shot knowledge distillation for natural language processing
A Rashid, V Lioutas, A Ghaddar, M Rezagholizadeh
arXiv preprint arXiv:2012.15495, 2020
Cited by 24 · 2020
Latent code and text-based generative adversarial networks for soft-text generation
MA Haidar, M Rezagholizadeh, A Do-Omri, A Rashid
arXiv preprint arXiv:1904.07293, 2019
Cited by 24 · 2019
Improving Word Embedding Factorization for Compression Using Distilled Nonlinear Neural Decomposition
V Lioutas, A Rashid, A Do-Omri, M Haidar, M Rezagholizadeh
arXiv preprint arXiv:1910.06720, 2019
Cited by 18*
Stress relief by non-linear fillers in insulating solids
DW Auckland, A Rashid, K Tavernier, BR Varlow
Proceedings of IEEE Conference on Electrical Insulation and Dielectric …, 1994
Cited by 16 · 1994
Bilingual-GAN: A step towards parallel text generation
A Rashid, A Do-Omri, MA Haidar, Q Liu, M Rezagholizadeh
arXiv preprint arXiv:1904.04742, 2019
Cited by 13 · 2019
Improving generalization of pre-trained language models via stochastic weight averaging
P Lu, I Kobyzev, M Rezagholizadeh, A Rashid, A Ghodsi, P Langlais
arXiv preprint arXiv:2212.05956, 2022
Cited by 12 · 2022
RW-KD: Sample-wise loss terms re-weighting for knowledge distillation
P Lu, A Ghaddar, A Rashid, M Rezagholizadeh, A Ghodsi, P Langlais
Findings of the Association for Computational Linguistics: EMNLP 2021, 3145-3152, 2021
Cited by 11 · 2021
How to select one among all? An empirical study towards the robustness of knowledge distillation in natural language understanding
T Li, A Rashid, A Jafari, P Sharma, A Ghodsi, M Rezagholizadeh
Findings of the Association for Computational Linguistics: EMNLP 2021, 750-762, 2021
Cited by 7 · 2021
JABER and SABER: Junior and senior Arabic BERT
A Ghaddar, Y Wu, A Rashid, K Bibi, M Rezagholizadeh, C Xing, Y Wang, ...
arXiv preprint arXiv:2112.04329, 2021
Cited by 6 · 2021
The influence of particular barriers on treeing in polyester resin
DW Auckland, A Rashid, BR Varlow
[1992] Proceedings of the 4th International Conference on Conduction and …, 1992
Cited by 6 · 1992
JABER: Junior Arabic BERT
A Ghaddar, Y Wu, A Rashid, K Bibi, M Rezagholizadeh, C Xing, Y Wang, ...
arXiv preprint arXiv:2112.04329, 2021
Cited by 5 · 2021
Efficient Citer: Tuning Large Language Models for Enhanced Answer Quality and Verification
M Tahaei, A Jafari, A Rashid, D Alfonso-Hermelo, K Bibi, Y Wu, A Ghodsi, ...
Findings of the Association for Computational Linguistics: NAACL 2024, 4443-4450, 2024
Cited by 3 · 2024
Method and system for training a neural network model using adversarial learning and knowledge distillation
V Lioutas, A Rashid, M Rezagholizadeh
US Patent App. 18/119,211, 2023
Cited by 3 · 2023