Yuchen Zeng
LIFT: Language-interfaced fine-tuning for non-language machine learning tasks
T Dinh*, Y Zeng*, R Zhang, Z Lin, S Rajput, M Gira, J Sohn, ...
NeurIPS 2022, 2022
Cited by 139
Improving fairness via federated learning
Y Zeng, H Chen, K Lee
arXiv 2021, 2021
Cited by 78
Multiway clustering via tensor block models
M Wang, Y Zeng
NeurIPS 2019, 2019
Cited by 69
The expressive power of low-rank adaptation
Y Zeng, K Lee
ICLR 2024, 2024
Cited by 55
Equal Improvability: A New Fairness Notion Considering the Long-term Impact
O Guldogan*, Y Zeng*, J Sohn, R Pedarsani, K Lee
ICLR 2023, 2023
Cited by 12
Can MLLMs Perform Text-to-Image In-Context Learning?
Y Zeng*, W Kang*, Y Chen, HI Koo, K Lee
COLM 2024, 2024
Cited by 6
Outlier-Robust Group Inference via Gradient Space Clustering
Y Zeng, K Greenewald, K Lee, J Solomon, M Yurochkin
arXiv 2022, 2022
Cited by 4
Federated Learning with Local Fairness Constraints
Y Zeng, H Chen, K Lee
ISIT 2023, 1937-1942, 2023
Cited by 3
TabFlex: Scaling Tabular Learning to Millions with Linear Attention
Y Zeng, W Kang, A Mueller
NeurIPS 2024 TRL Workshop, 2024
Cited by 1
Parameter-Efficient Fine-Tuning of State Space Models
K Galim*, W Kang*, Y Zeng*, HI Koo, K Lee
NeurIPS 2024 Workshop on Fine-Tuning in Modern Machine Learning: Principles …, 2024
Cited by 1
DARWIN 1.5: Large Language Models as Materials Science Adapted Learners
T Xie, Y Wan, Y Liu, Y Zeng, W Zhang, C Kit, D Zhou, B Hoex
arXiv preprint arXiv:2412.11970, 2024
Coded Prompts for Large Language Models
Z Lin, B Chen, Y Zeng, K Lee
NeurIPS 2023 R0-FoMo Workshop, 2023
Articles 1–12