Yichuan Mo
Ph.D. Candidate, Peking University
Verified email at stu.pku.edu.cn
Title
Cited by
Year
Jailbreak and guard aligned language models with only few in-context demonstrations
Z Wei, Y Wang, L Ang, Y Mo, Y Wang
arXiv preprint arXiv:2310.06387, 2023
174 · 2023
When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture
Y Mo, D Wu, Y Wang, Y Guo, Y Wang
NeurIPS 2022, 2022
64 · 2022
Multi-task learning improves synthetic speech detection
Y Mo, S Wang
ICASSP 2022, 2022
21 · 2022
Improving Generative Adversarial Networks via Adversarial Learning in Latent Space
Y Li, Y Mo, L Shi, J Yan, X Zhang, J Zhou
NeurIPS 2022, 2022
19 · 2022
Fight Back against Jailbreaking via Prompt Adversarial Tuning
Y Mo, Y Wang, Z Wei, Y Wang
NeurIPS 2024, 2024
17* · 2024
DICE: Domain-attack Invariant Causal Learning for Improved Data Privacy Protection and Adversarial Robustness
Q Ren, Y Chen, Y Mo, Q Wu, J Yan
SIGKDD 2022, 2022
13 · 2022
TERD: A Unified Framework for Safeguarding Diffusion Models Against Backdoors
Y Mo, H Huang, M Li, A Li, Y Wang
ICML 2024, 2024
6 · 2024
On the Adversarial Transferability of Generalized "Skip Connections"
Y Wang, Y Mo, D Wu, M Li, X Ma, Z Lin
arXiv preprint arXiv:2410.08950, 2024
2 · 2024
PID: Prompt-Independent Data Protection Against Latent Diffusion Models
A Li, Y Mo, M Li, Y Wang
ICML 2024, 2024
1 · 2024
Towards Reliable Backdoor Attacks on Vision Transformers
Y Mo, D Wu, Y Wang, Y Guo, Y Wang
2023
Articles 1–10