Survey of Vulnerabilities in Large Language Models Revealed by Adversarial Attacks. E Shayegani, MAA Mamun, Y Fu, P Zaree, Y Dong, N Abu-Ghazaleh. The 62nd Annual Meeting of the Association for Computational Linguistics …, 2023. Cited by 136.

Jailbreak in Pieces: Compositional Adversarial Attacks on Multi-Modal Language Models. E Shayegani, Y Dong, N Abu-Ghazaleh. ICLR 2024 Spotlight; 🏆 Best Paper Award, SoCal NLP 2023 🏆. Cited by 107.

Plug and Pray: Exploiting Off-the-Shelf Components of Multi-Modal Models. E Shayegani, Y Dong, N Abu-Ghazaleh. arXiv preprint arXiv:2307.14539, 2023. Cited by 23.

Cross-Modal Safety Alignment: Is Textual Unlearning All You Need? T Chakraborty*, E Shayegani*, Z Cai, N Abu-Ghazaleh, M Salman Asif, … EMNLP 2024 Findings, 2024. Cited by 10.

That Doesn't Go There: Attacks on Shared State in Multi-User Augmented Reality Applications. C Slocum, Y Zhang, E Shayegani, P Zaree, N Abu-Ghazaleh, J Chen. USENIX Security 24, 2023. Cited by 7.

DeepMem: ML Models as Storage Channels and Their (Mis-)Applications. MAA Mamun, QM Alam, E Shayegani, P Zaree, I Alouani, N Abu-Ghazaleh. arXiv preprint arXiv:2307.08811, 2023. Cited by 2.

Unfair Alignment: Examining Safety Alignment Across Vision Encoder Layers in Vision-Language Models. S Bachu*, E Shayegani*, T Chakraborty, R Lal, A Dutta, C Song, Y Dong, … arXiv preprint arXiv:2411.04291, 2024.

Can Textual Unlearning Solve Cross-Modality Safety Alignment? T Chakraborty*, E Shayegani*, Z Cai, N Abu-Ghazaleh, MS Asif, Y Dong, … Findings of the Association for Computational Linguistics: EMNLP 2024, 9830-9844, 2024.

Securing Shared State in Multi-User Augmented Reality. J Chen, C Slocum, Y Zhang, E Shayegani, P Zaree, N Abu-Ghazaleh. 2024 IEEE International Symposium on Mixed and Augmented Reality Adjunct …, 2024.