Yuhao Mao
Verified email at inf.ethz.ch - Homepage
Title | Cited by | Year
Connecting certified and adversarial training
Y Mao, M Müller, M Fischer, M Vechev
Advances in Neural Information Processing Systems 36, 73422-73440, 2023
25 | 2023
"Is your explanation stable?" A Robustness Evaluation Framework for Feature Attribution
Y Gan, Y Mao, X Zhang, S Ji, Y Pu, M Han, J Yin, T Wang
Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications …, 2022
16 | 2022
Transfer attacks revisited: A large-scale empirical study in real computer vision settings
Y Mao, C Fu, S Wang, S Ji, X Zhang, Z Liu, J Zhou, AX Liu, R Beyah, ...
2022 IEEE Symposium on Security and Privacy (SP), 1423-1439, 2022
16 | 2022
Understanding certified training with interval bound propagation
Y Mao, MN Müller, M Fischer, M Vechev
arXiv preprint arXiv:2306.10426, 2023
13 | 2023
Expressivity of ReLU-networks under convex relaxations
M Baader, MN Müller, Y Mao, M Vechev
arXiv preprint arXiv:2311.04015, 2023
6 | 2023
CTBench: A library and benchmark for certified training
Y Mao, S Balauca, M Vechev
arXiv preprint arXiv:2406.04848, 2024
4 | 2024
Multi-Neuron Unleashes Expressivity of ReLU Networks Under Convex Relaxation
Y Mao, Y Zhang, M Vechev
arXiv preprint arXiv:2410.06816, 2024
2024
Average Certified Radius is a Poor Metric for Randomized Smoothing
C Sun, Y Mao, MN Müller, M Vechev
arXiv preprint arXiv:2410.06895, 2024
2024
Transfer Learning Assisted Fast Design Migration Over Technology Nodes: A Study on Transformer Matching Network
C Chu, Y Mao, H Wang
2024 IEEE/MTT-S International Microwave Symposium-IMS 2024, 188-191, 2024
2024
Gaussian Loss Smoothing Enables Certified Training with Tight Convex Relaxations
S Balauca, MN Müller, Y Mao, M Baader, M Fischer, M Vechev
arXiv preprint arXiv:2403.07095, 2024
2024
Overcoming the Paradox of Certified Training with Gaussian Smoothing
S Balauca, MN Müller, Y Mao, M Baader, M Fischer, M Vechev
arXiv e-prints, arXiv: 2403.07095, 2024
2024
Articles 1–11