Hao Yang
Verified email at monash.edu - Homepage
Title
Cited by
Year
M-adapter: Modality adaptation for end-to-end speech-to-text translation
J Zhao, H Yang, E Shareghi, G Haffari
Interspeech 2022, 2022
Cited by 19, 2022
Investigating pre-trained audio encoders in the low-resource condition
H Yang, J Zhao, G Haffari, E Shareghi
Interspeech 2023, 2023
Cited by 5, 2023
Audio Is the Achilles' Heel: Red Teaming Audio Large Multimodal Models
H Yang, L Qu, E Shareghi, G Haffari
NAACL 2025, 2024
Cited by 2, 2024
Self-supervised rewiring of pre-trained speech encoders: Towards faster fine-tuning with less labels in speech processing
H Yang, J Zhao, G Haffari, E Shareghi
EMNLP 2022-Findings, 2022
Cited by 2, 2022
Jigsaw Puzzles: Splitting Harmful Questions to Jailbreak Large Language Models
H Yang, L Qu, E Shareghi, G Haffari
arXiv preprint arXiv:2410.11459, 2024
Cited by 1, 2024
RedApt: An Adaptor for wav2vec 2 Encoding Faster and Smaller Speech Translation without Quality Compromise
J Zhao, H Yang, G Haffari, E Shareghi
EMNLP 2022-Findings, 2022
Cited by 1, 2022
Towards Probing Speech-Specific Risks in Large Multimodal Models: A Taxonomy, Benchmark, and Insights
H Yang, L Qu, E Shareghi, G Haffari
EMNLP 2024, 2024
2024
Double Mixture: Towards Continual Event Detection from Speech
J Kang, T Wu, J Zhao, G Wang, Y Wei, H Yang, G Qi, YF Li, G Haffari
arXiv preprint arXiv:2404.13289, 2024
2024
Articles 1–8