Thuy-Trang Vu
Verified email at monash.edu - Homepage
Title
Cited by · Year
Continual Learning for Large Language Models: A Survey
T Wu, L Luo, YF Li, S Pan, TT Vu, G Haffari
arXiv preprint arXiv:2402.01364, 2024
106 · 2024
Adapting Large Language Models for Document-Level Machine Translation
M Wu, TT Vu, L Qu, G Foster, G Haffari
arXiv preprint arXiv:2401.06468, 2024
31 · 2024
Automatic Post-Editing of Machine Translation: A Neural Programmer-Interpreter Approach
TT Vu, G Haffari
Proceedings of the 2018 Conference on Empirical Methods in Natural Language …, 2018
30 · 2018
Effective Unsupervised Domain Adaptation with Adversarially Trained Language Models
TT Vu, D Phung, G Haffari
Proceedings of the 2020 Conference on Empirical Methods in Natural Language …, 2020
27 · 2020
Learning How to Active Learn by Dreaming
TT Vu, M Liu, D Phung, G Haffari
Proceedings of the 57th Annual Meeting of the Association for Computational …, 2019
27 · 2019
Modeling the energy efficiency of heterogeneous clusters
L Ramapantulu, BM Tudor, D Loghin, T Vu, YM Teo
2014 43rd International Conference on Parallel Processing, 321-330, 2014
26 · 2014
Systematic Assessment of Factual Knowledge in Large Language Models
L Luo, TT Vu, D Phung, G Haffari
Findings of EMNLP 2023, 2023
20 · 2023
Domain Generalisation of NMT: Fusing Adapters with Leave-One-Domain-Out Training
TT Vu, S Khadivi, D Phung, G Haffari
Annual Meeting of the Association of Computational Linguistics 2022, 582-588, 2022
12 · 2022
Generalised Unsupervised Domain Adaptation of Neural Machine Translation with Cross-Lingual Data Selection
TT Vu, X He, D Phung, G Haffari
Proceedings of the 2021 Conference on Empirical Methods in Natural Language …, 2021
10 · 2021
Simultaneous Machine Translation with Large Language Models
M Wang, J Zhao, TT Vu, F Shiri, E Shareghi, G Haffari
ALTA 2024, 2023
8 · 2023
Direct Evaluation of Chain-of-Thought in Multi-hop Reasoning with Knowledge Graphs
MV Nguyen, L Luo, F Shiri, D Phung, YF Li, TT Vu, G Haffari
Findings of ACL 2024, 2024
7 · 2024
Koala: An Index for Quantifying Overlaps with Pre-training Corpora
TT Vu, X He, G Haffari, E Shareghi
EMNLP 2023: System Demonstration, 2023
6 · 2023
Conversational SimulMT: Efficient Simultaneous Translation with Large Language Models
M Wang, TT Vu, Y Wang, E Shareghi, G Haffari
arXiv preprint arXiv:2402.10552, 2024
5 · 2024
Active Continual Learning: On Balancing Knowledge Retention and Learnability
TT Vu, S Khadivi, M Ghorbanali, D Phung, G Haffari
Australasian Joint Conference on Artificial Intelligence, 137-150, 2024
4 · 2024
The Best of Both Worlds: Bridging Quality and Diversity in Data Selection with Bipartite Graph
M Wu, TT Vu, L Qu, G Haffari
arXiv preprint arXiv:2410.12458, 2024
1 · 2024
PromptDSI: Prompt-based Rehearsal-free Instance-wise Incremental Learning for Document Retrieval
TL Huynh, TT Vu, W Wang, Y Wei, T Le, D Gasevic, YF Li, TT Do
arXiv preprint arXiv:2406.12593, 2024
1 · 2024
SCAR: Efficient Instruction-Tuning for Large Language Models via Style Consistency-Aware Response Ranking
Z Li, Y Hua, TT Vu, H Zhan, L Qu, G Haffari
arXiv preprint arXiv:2406.10882, 2024
1 · 2024
Exploring the Potential of Multimodal LLM with Knowledge-Intensive Multimodal ASR
M Wang, Y Wang, TT Vu, E Shareghi, G Haffari
Findings of EMNLP 2024, 2024
1 · 2024
Active Continual Learning: Labelling Queries in a Sequence of Tasks
TT Vu, S Khadivi, D Phung, G Haffari
arXiv preprint arXiv:2305.03923, 2023
1* · 2023
Fantastic Targets for Concept Erasure in Diffusion Models and Where To Find Them
A Bui, T Vu, L Vuong, T Le, P Montague, T Abraham, J Kim, D Phung
arXiv preprint arXiv:2501.18950, 2025
2025
Articles 1–20