Hanlin Zhang
Verified email at g.harvard.edu - Homepage
Title · Cited by · Year
Towards Principled Disentanglement for Domain Generalization
H Zhang, YF Zhang, W Liu, A Weller, B Schölkopf, EP Xing
CVPR 2022, 2022
139 · 2022
Do the Rewards Justify the Means? Measuring Trade-Offs Between Rewards and Ethical Behavior in the MACHIAVELLI Benchmark
A Pan, CJ Shern, A Zou, N Li, S Basart, T Woodside, J Ng, H Zhang, ...
ICML 2023, 2023
132 · 2023
DataComp-LM: In search of the next generation of training sets for language models
J Li, A Fang, G Smyrnis, M Ivgi, M Jordan, S Gadre, H Bansal, E Guha, ...
NeurIPS 2024, 2024
67* · 2024
Watermarks in the Sand: Impossibility of Strong Watermarking for Generative Models
H Zhang, BL Edelman, D Francati, D Venturi, G Ateniese, B Barak
ICML 2024, 2023
56 · 2023
Iterative Graph Self-Distillation
H Zhang, S Lin, W Liu, P Zhou, J Tang, X Liang, EP Xing
TKDE 2023, 2020
45 · 2020
Towards Interpretable Natural Language Understanding with Explanations as Latent Variables
W Zhou, J Hu, H Zhang, X Liang, M Sun, C Xiong, J Tang
NeurIPS 2020, 2020
42 · 2020
Exploring Transformer Backbones for Heterogeneous Treatment Effect Estimation
YF Zhang, H Zhang, ZC Lipton, LE Li, EP Xing
TMLR 2023, 2022
40* · 2022
Improved Logical Reasoning of Language Models via Differentiable Symbolic Programming
H Zhang, Z Li, J Huang, M Naik, E Xing
ACL-Findings 2023, 2022
32 · 2022
Toward Learning Human-aligned Cross-domain Robust Models by Countering Misaligned Features
H Wang, Z Huang, H Zhang, E Xing
UAI 2022, 2021
17 · 2021
Follow My Instruction and Spill the Beans: Scalable Data Extraction from Retrieval-Augmented Generation Systems
Z Qi, H Zhang, E Xing, S Kakade, H Lakkaraju
ICLR 2025, 2024
15 · 2024
A Study on the Calibration of In-context Learning
H Zhang, YF Zhang, Y Yu, D Madeka, D Foster, E Xing, H Lakkaraju, ...
NAACL 2024, 2023
13 · 2023
Eliminating Position Bias of Language Models: A Mechanistic Approach
Z Wang, H Zhang, X Li, KH Huang, C Han, S Ji, SM Kakade, H Peng, H Ji
ICLR 2025, 2024
8 · 2024
Evaluating Step-by-Step Reasoning through Symbolic Verification
YF Zhang, H Zhang, LE Li, E Xing
NAACL-Findings 2024, 2022
5* · 2022
CoLoR-Filter: Conditional Loss Reduction Filtering for Targeted Language Model Pre-training
D Brandfonbrener, H Zhang, A Kirsch, JR Schwarz, S Kakade
NeurIPS 2024, 2024
4 · 2024
A Closer Look at the Calibration of Differentially Private Learners
H Zhang, X Li, P Sen, S Roukos, T Hashimoto
arXiv preprint arXiv:2210.08248, 2022
3 · 2022
Stochastic Neural Networks with Infinite Width are Deterministic
L Ziyin, H Zhang, X Meng, Y Lu, E Xing, M Ueda
arXiv preprint arXiv:2201.12724, 2022
3 · 2022
How Does Critical Batch Size Scale in Pre-training?
H Zhang, D Morwani, N Vyas, J Wu, D Zou, U Ghai, D Foster, S Kakade
ICLR 2025, 2024
1 · 2024
Connections between Schedule-Free Optimizers, AdEMAMix, and Accelerated SGD Variants
D Morwani, N Vyas, H Zhang, S Kakade
arXiv preprint arXiv:2502.02431, 2025
· 2025
Mind the Gap: Examining the Self-Improvement Capabilities of Large Language Models
Y Song, H Zhang, C Eisenach, S Kakade, D Foster, U Ghai
ICLR 2025, 2024
· 2024
Articles 1–19