Kevin Klyman
Stanford, Harvard
Verified email at hks.harvard.edu - Homepage
Title · Cited by · Year
The Foundation Model Transparency Index
R Bommasani, K Klyman, S Longpre, S Kapoor, N Maslej, B Xiong, ...
arXiv preprint arXiv:2310.12941, 2023
Cited by 104 · 2023
The Great Tech Rivalry: China vs the U.S.
G Allison, K Klyman, K Barbesino, H Yen
https://www.belfercenter.org/sites/default/files …, 2021
Cited by 86 · 2021
Do Foundation Model Providers Comply with the Draft EU AI Act?
R Bommasani, K Klyman, D Zhang, P Liang
Center for Research on Foundation Models, 2023
Cited by 40 · 2023
Position Paper: On the Societal Impact of Open Foundation Models
S Kapoor, R Bommasani, K Klyman, S Longpre, A Ramaswami, P Cihon, ...
Forty-first International Conference on Machine Learning, 2024
Cited by 40* · 2024
Position: A Safe Harbor for AI Evaluation and Red Teaming
S Longpre, S Kapoor, K Klyman, A Ramaswami, R Bommasani, ...
Forty-first International Conference on Machine Learning, 2024
Cited by 36* · 2024
Introducing v0.5 of the AI Safety Benchmark from MLCommons
B Vidgen, A Agrawal, AM Ahmed, V Akinwande, N Al-Nuaimi, N Alfaraj, ...
arXiv preprint arXiv:2404.12241, 2024
Cited by 34 · 2024
Consent in Crisis: The Rapid Decline of the AI Data Commons
S Longpre, R Mahari, A Lee, C Lund, H Oderinwale, W Brannon, ...
NeurIPS, 2024
Cited by 26 · 2024
Stopping killer robots: country positions on banning fully autonomous weapons and retaining human control
M Wareham, S Goose, B Docherty, J Ross, T Porteous, J Kantack, ...
Human Rights Watch, 2020
Cited by 22 · 2020
AI Risk Categorization Decoded (AIR 2024): From Government Regulations to Corporate Policies
Y Zeng, K Klyman, A Zhou, Y Yang, M Pan, R Jia, D Song, P Liang, B Li
arXiv preprint arXiv:2406.17864, 2024
Cited by 19 · 2024
Considerations for governing open foundation models
R Bommasani, S Kapoor, K Klyman, S Longpre, A Ramaswami, D Zhang, ...
Science 386 (6718), 151-153, 2024
Cited by 16 · 2024
Foundation Model Transparency Reports
R Bommasani, K Klyman, S Longpre, B Xiong, S Kapoor, N Maslej, ...
arXiv preprint arXiv:2402.16268, 2024
Cited by 16 · 2024
The Foundation Model Transparency Index v1.1: May 2024
R Bommasani, K Klyman, S Kapoor, S Longpre, B Xiong, N Maslej, ...
arXiv preprint arXiv:2407.12929, 2024
Cited by 13 · 2024
AIR-Bench 2024: A Safety Benchmark Based on Risk Categories from Regulations and Policies
Y Zeng, Y Yang, A Zhou, JZ Tan, Y Tu, Y Mai, K Klyman, M Pan, R Jia, ...
arXiv preprint arXiv:2407.17436, 2024
Cited by 11 · 2024
The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources
S Longpre, S Biderman, A Albalak, H Schoelkopf, D McDuff, S Kapoor, ...
arXiv preprint arXiv:2406.16746, 2024
Cited by 7 · 2024
Acceptable Use Policies for Foundation Models: Considerations for Policymakers and Developers
K Klyman
https://crfm.stanford.edu/2024/04/08/aups.html, 2024
Cited by 5 · 2024
The U.S. Wants to Make Sure China Can’t Catch Up on Quantum Computing
K Klyman
Foreign Policy, 2023
Cited by 5 · 2023
Acceptable Use Policies for Foundation Models
K Klyman
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 7, 752-767, 2024
Cited by 4 · 2024
Language model developers should report train-test overlap
AK Zhang, K Klyman, Y Mai, Y Levine, Y Zhang, R Bommasani, P Liang
arXiv preprint arXiv:2410.08385, 2024
Cited by 2 · 2024
Bridging the Data Provenance Gap Across Text, Speech and Video
S Longpre, N Singh, M Cherep, K Tiwary, J Materzynska, W Brannon, ...
arXiv preprint arXiv:2412.17847, 2024
2024
Biden Takes Measured Approach on China Investment Controls
K Klyman
Foreign Policy, 2023
2023
Articles 1–20