Erfan Shayegani
Title · Cited by · Year
Survey of Vulnerabilities in Large Language Models Revealed by Adversarial Attacks
E Shayegani, MAA Mamun, Y Fu, P Zaree, Y Dong, N Abu-Ghazaleh
The 62nd Annual Meeting of the Association for Computational Linguistics …, 2023
Cited by 136 · 2023
Jailbreak in pieces: Compositional Adversarial Attacks on Multi-Modal Language Models
E Shayegani, Y Dong, N Abu-Ghazaleh
ICLR 2024 Spotlight - 🏆 Best Paper Award SoCal NLP 23 🏆, 2024
Cited by 107 · 2024
Plug and Pray: Exploiting off-the-shelf components of Multi-Modal Models
E Shayegani, Y Dong, N Abu-Ghazaleh
arXiv preprint arXiv:2307.14539, 2023
Cited by 23 · 2023
Cross-Modal Safety Alignment: Is textual unlearning all you need?
T Chakraborty*, E Shayegani*, Z Cai, N Abu-Ghazaleh, M Salman Asif, ...
EMNLP 2024 Findings, 2024
Cited by 10 · 2024
That Doesn't Go There: Attacks on Shared State in Multi-User Augmented Reality Applications
C Slocum, Y Zhang, E Shayegani, P Zaree, N Abu-Ghazaleh, J Chen
USENIX Security 24, 2023
Cited by 7 · 2023
DeepMem: ML Models as storage channels and their (mis-) applications
MAA Mamun, QM Alam, E Shayegani, P Zaree, I Alouani, N Abu-Ghazaleh
arXiv preprint arXiv:2307.08811, 2023
Cited by 2 · 2023
Unfair Alignment: Examining Safety Alignment Across Vision Encoder Layers in Vision-Language Models
S Bachu*, E Shayegani*, T Chakraborty, R Lal, A Dutta, C Song, Y Dong, ...
arXiv preprint arXiv:2411.04291, 2024
2024
Can Textual Unlearning Solve Cross-Modality Safety Alignment?
T Chakraborty*, E Shayegani*, Z Cai, N Abu-Ghazaleh, MS Asif, Y Dong, ...
Findings of the Association for Computational Linguistics: EMNLP 2024, 9830-9844, 2024
2024
Securing Shared State in Multi-User Augmented Reality
J Chen, C Slocum, Y Zhang, E Shayegani, P Zaree, N Abu-Ghazaleh
2024 IEEE International Symposium on Mixed and Augmented Reality Adjunct …, 2024
2024
Articles 1–9