Ryan Teehan
Verified email at nyu.edu · Homepage
Title · Cited by · Year
Multitask prompted training enables zero-shot task generalization
V Sanh, A Webson, C Raffel, SH Bach, L Sutawika, Z Alyafeai, A Chaffin, ...
arXiv preprint arXiv:2110.08207, 2021
Cited by 1791 · 2021
BLOOM: A 176B-parameter open-access multilingual language model
T Le Scao, A Fan, C Akiki, E Pavlick, S Ilić, D Hesslow, R Castagné, ...
Cited by 1768 · 2023
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
arXiv preprint arXiv:2206.04615, 2022
Cited by 1352 · 2022
NL-Augmenter: A framework for task-sensitive natural language augmentation
KD Dhole, V Gangal, S Gehrmann, A Gupta, Z Li, S Mahamood, ...
arXiv preprint arXiv:2112.02721, 2021
Cited by 79 · 2021
Socratic questioning of novice debuggers: A benchmark dataset and preliminary evaluations
E Al-Hossami, R Bunescu, R Teehan, L Powell, K Mahajan, M Dorodchi
Proceedings of the 18th Workshop on Innovative Use of NLP for Building …, 2023
Cited by 24 · 2023
Multitask prompted training enables zero-shot task generalization. arXiv
V Sanh, A Webson, C Raffel, SH Bach, L Sutawika, Z Alyafeai, A Chaffin, ...
arXiv preprint arXiv:2110.08207, 2021
Cited by 20 · 2021
Emergent structures and training dynamics in large language models
R Teehan, M Clinciu, O Serikov, E Szczechla, N Seelam, S Mirkin, ...
Proceedings of BigScience Episode #5 — Workshop on Challenges & Perspectives …, 2022
Cited by 19 · 2022
Cut the CARP: Fishing for zero-shot story evaluation
S Matiana, JR Smith, R Teehan, L Castricato, S Biderman, L Gao, ...
arXiv preprint arXiv:2110.03111, 2021
Cited by 16 · 2021
Can language models employ the Socratic method? Experiments with code debugging
E Al-Hossami, R Bunescu, J Smith, R Teehan
Proceedings of the 55th ACM Technical Symposium on Computer Science …, 2024
Cited by 14 · 2024
A general framework for inference-time scaling and steering of diffusion models
R Singhal, Z Horvitz, R Teehan, M Ren, Z Yu, K McKeown, R Ranganath
arXiv preprint arXiv:2501.06848, 2025
Cited by 6 · 2025
CoLLEGe: Concept embedding generation for large language models
R Teehan, B Lake, M Ren
arXiv preprint arXiv:2403.15362, 2024
Cited by 6 · 2024
Are LLMs Prescient? A Continuous Evaluation using Daily News as the Oracle
H Dai, R Teehan, M Ren
arXiv preprint arXiv:2411.08324, 2024
2024
ProCreate, Don’t Reproduce! Propulsive Energy Diffusion for Creative Generation
J Lu, R Teehan, M Ren
European Conference on Computer Vision, 397-414, 2024
2024
BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
V Danchev, V Nikoulina, V Laippala, V Lepercq, V Prabhu, Z Alyafeai, ...
2023
Spices: Survey papers as interactive cheatsheet embeddings
M McAteer, R Teehan
Beyond Static Papers: Rethinking How We Share Scientific Understanding in ML …