Michael-Andrei Panaitescu-Liess
PhD Student - University of Maryland, College Park
Verified email at umd.edu
Title
Cited by
Year
Qu-ANTI-zation: Exploiting Quantization Artifacts for Achieving Adversarial Outcomes
S Hong, MA Panaitescu-Liess, Y Kaya, T Dumitras
Advances in Neural Information Processing Systems 34, 9303-9316, 2021
19 · 2021
More Context, Less Distraction: Zero-shot Visual Classification by Inferring and Conditioning on Contextual Attributes
B An, S Zhu, MA Panaitescu-Liess, CK Mummadi, F Huang
The Twelfth International Conference on Learning Representations, 2024
18* · 2024
Self-supervised representation learning on document images
A Cosma, M Ghidoveanu, M Panaitescu-Liess, M Popescu
Document Analysis Systems: 14th IAPR International Workshop, DAS 2020, Wuhan …, 2020
16 · 2020
Automatic Pseudo-Harmful Prompt Generation for Evaluating False Refusals in Large Language Models
B An, S Zhu, R Zhang, MA Panaitescu-Liess, Y Xu, F Huang
arXiv preprint arXiv:2409.00598, 2024
9 · 2024
Can Watermarking Large Language Models Prevent Copyrighted Text Generation and Hide Training Data?
MA Panaitescu-Liess, Z Che, B An, Y Xu, P Pathmanathan, S Chakraborty, ...
arXiv preprint arXiv:2407.17417, 2024
4 · 2024
AdvBDGen: Adversarially Fortified Prompt-Specific Fuzzy Backdoor Generator Against LLM Alignment
P Pathmanathan, UM Sehwag, MA Panaitescu-Liess, F Huang
arXiv preprint arXiv:2410.11283, 2024
2024
Like Oil and Water: Group Robustness Methods and Poisoning Defenses Don’t Mix
MA Panaitescu-Liess, Y Kaya, S Zhu, F Huang, T Dumitras
The Twelfth International Conference on Learning Representations, 2024
2024
PoisonedParrot: Subtle Data Poisoning Attacks to Elicit Copyright-Infringing Content from Large Language Models
MA Panaitescu-Liess, P Pathmanathan, Y Kaya, Z Che, B An, S Zhu, ...
NeurIPS Safe Generative AI Workshop, 2024