Yunzhen Feng
Verified email at nyu.edu
Title
Cited by
Year
A Tale of Tails: Model Collapse as a Change of Scaling Laws
E Dohmatob, Y Feng, P Yang, F Charton, J Kempe
Proceedings of the International Conference on Machine Learning (ICML), 2024
Cited by: 50 · Year: 2024
Model Collapse Demystified: The Case of Regression
E Dohmatob, Y Feng, J Kempe
Advances in Neural Information Processing Systems (NeurIPS), 2024
Cited by: 29 · Year: 2024
Enhancing Certified Robustness of Smoothed Classifiers via Weighted Model Ensembling
C Liu, Y Feng, R Wang, B Dong
ICML 2021 Workshop on Adversarial Machine Learning, 2020
Cited by: 22* · Year: 2020
Embarrassingly Simple Dataset Distillation
Y Feng, SR Vedantam, J Kempe
The Twelfth International Conference on Learning Representations (ICLR), 2024
Cited by: 18 · Year: 2023
Beyond Model Collapse: Scaling Up with Synthesized Data Requires Verification
Y Feng, E Dohmatob, P Yang, F Charton, J Kempe
arXiv preprint arXiv:2406.07515, 2024
Cited by: 14* · Year: 2024
Do Efficient Transformers Really Save Computation?
K Yang, J Ackermann, Z He, G Feng, B Zhang, Y Feng, Q Ye, D He, ...
Proceedings of the International Conference on Machine Learning (ICML), 2024
Cited by: 13 · Year: 2024
Transferred Discrepancy: Quantifying the Difference Between Representations
Y Feng, R Zhai, D He, L Wang, B Dong
arXiv preprint arXiv:2007.12446, 2020
Cited by: 12 · Year: 2020
Strong Model Collapse
E Dohmatob, Y Feng, A Subramonian, J Kempe
arXiv preprint arXiv:2410.04840, 2024
Cited by: 8 · Year: 2024
Attacking Bayes: Are Bayesian Neural Networks Inherently Robust?
Y Feng, TGJ Rudner, N Tsilivis, J Kempe
Transactions on Machine Learning Research (TMLR), 2023
Cited by: 1* · Year: 2023
PILAF: Optimal Human Preference Sampling for Reward Modeling
Y Feng, A Kwiatkowski, K Zheng, J Kempe, Y Duan
arXiv preprint arXiv:2502.04270, 2025
Year: 2025
Spend Wisely: Maximizing Post-Training Gains in Iterative Synthetic Data Bootstrapping
P Yang, Y Feng, Z Chen, Y Wu, Z Li
arXiv preprint arXiv:2501.18962, 2025
Year: 2025
Articles 1–11