Ashwinee Panda
Postdoctoral Fellow, University of Maryland
Verified email at umd.edu - Homepage
Title · Cited by · Year
Fetchsgd: Communication-efficient federated learning with sketching
D Rothchild*, A Panda*, E Ullah, N Ivkin, I Stoica, V Braverman, ...
ICML 2020, 8253-8265, 2020
Cited by 441 · 2020
Visual adversarial examples jailbreak large language models
X Qi, K Huang, A Panda, M Wang, P Mittal
AAAI 2024, 2023
Cited by 201* · 2023
Neurotoxin: Durable backdoors in federated learning
Z Zhang*, A Panda*, L Song, Y Yang, M Mahoney, P Mittal, R Kannan, ...
ICML 2022, 26429-26446, 2022
Cited by 154 · 2022
Sparsefed: Mitigating model poisoning attacks in federated learning with sparsification
A Panda, S Mahloujifar, AN Bhagoji, S Chakraborty, P Mittal
AISTATS 2022, 7587-7624, 2022
Cited by 111 · 2022
Privacy-preserving in-context learning for large language models
T Wu*, A Panda*, JT Wang*, P Mittal
ICLR 2024, 2023
Cited by 47* · 2023
Safety Alignment Should Be Made More Than Just a Few Tokens Deep
X Qi, A Panda, K Lyu, X Ma, S Roy, A Beirami, P Mittal, P Henderson
arXiv preprint arXiv:2406.05946, 2024
Cited by 27 · 2024
Teach LLMs to Phish: Stealing Private Information from Language Models
A Panda, CA Choquette-Choo, Z Zhang, Y Yang, P Mittal
ICLR 2024, 2024
Cited by 23* · 2024
Differentially private image classification by learning priors from random processes
X Tang*, A Panda*, V Sehwag, P Mittal
NeurIPS 2023 36, 2024
Cited by 15 · 2024
A New Linear Scaling Rule for Private Adaptive Hyperparameter Optimization
A Panda*, X Tang*, V Sehwag, S Mahloujifar, P Mittal
ICML 2024, 2023
Cited by 14* · 2023
Private Fine-tuning of Large Language Models with Zeroth-order Optimization
X Tang*, A Panda*, M Nasr, S Mahloujifar, P Mittal
arXiv preprint arXiv:2401.04343, 2024
Cited by 13 · 2024
Lottery ticket adaptation: Mitigating destructive interference in llms
A Panda, B Isik, X Qi, S Koyejo, T Weissman, P Mittal
ICML 2024 (Workshops), 2024
Cited by 9 · 2024
Privacy auditing of large language models
A Panda, X Tang, M Nasr, CA Choquette-Choo, P Mittal
ICML 2024 Next Generation of AI Safety Workshop, 2024
Cited by 3 · 2024
StructMoE: Structured Mixture of Experts Using Low Rank Experts
Z Sarwar, A Panda, B Thérien, S Rawls, A Das, K Balasubramaniam, ...
NeurIPS Efficient Natural Language and Speech Processing Workshop, 182-193, 2024
2024
Dense Backpropagation Improves Routing for Sparsely-Gated Mixture-of-Experts
A Panda, V Baherwani, Z Sarwar, B Thérien, S Rawls, S Sahu, ...
Workshop on Machine Learning and Compression, NeurIPS 2024, 2024
2024
Refusal Tokens: A Simple Way to Calibrate Refusals in Large Language Models
N Jain, A Shrivastava, C Zhu, D Liu, A Samuel, A Panda, A Kumar, ...
arXiv preprint arXiv:2412.06748, 2024
2024
Unlocking Trustworthy Machine Learning With Sparsity
A Panda
Princeton University, 2024
2024
Differentially Private Generation of High Fidelity Samples From Diffusion Models
V Sehwag, A Panda, A Pokle, X Tang, S Mahloujifar, M Chiang, JZ Kolter, ...
ICML 2023 Deployable Generative AI Workshop, 2023
2023
Workshop on Sparsity in LLMs (SLLM): Deep Dive into Mixture of Experts, Quantization, Hardware, and Inference
T Chen, U Evci, Y Ioannou, B Isik, S Liu, M Adnan, A Nowak, A Panda
ICLR 2025 Workshop Proposals
StructMoE: Augmenting MoEs with Hierarchically Routed Low Rank Experts
Z Sarwar, A Panda, B Thérien, S Rawls, S Sahu, S Chakraborty