Vladislav Lialin
Title
Cited by
Year
Scaling down to scale up: A guide to parameter-efficient fine-tuning
V Lialin, V Deshpande, A Rumshisky
arXiv preprint arXiv:2303.15647, 2023
196 · 2023
Relora: High-rank training through low-rank updates
V Lialin, N Shivagunde, S Muckatira, A Rumshisky
arXiv preprint arXiv:2307.05695, 2023
105* · 2023
Learning to ask like a physician
E Lehman, V Lialin, KY Legaspi, AJR Sy, PTS Pile, NRI Alberto, ...
arXiv preprint arXiv:2206.02696, 2022
24 · 2022
Named entity recognition in noisy domains
V Malykh, V Lyalin
2018 International Conference on Artificial Intelligence Applications and …, 2018
12 · 2018
Honey, I shrunk the language: Language model behavior at reduced scale
V Deshpande, D Pechi, S Thatte, V Lialin, A Rumshisky
arXiv preprint arXiv:2305.17266, 2023
10 · 2023
Update frequently, update fast: Retraining semantic parsing systems in a fraction of time
V Lialin, R Goel, A Simanovsky, A Rumshisky, R Shah
arXiv preprint arXiv:2010.07865, 2020
10* · 2020
Scalable and accurate self-supervised multimodal representation learning without aligned video and text data
V Lialin, S Rawls, D Chan, S Ghosh, A Rumshisky, W Hamza
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer …, 2023
8 · 2023
Life after BERT: What do Other Muppets Understand about Language?
V Lialin, K Zhao, N Shivagunde, A Rumshisky
arXiv preprint arXiv:2205.10696, 2022
8 · 2022
Relora: High-rank training through low-rank updates, 2023
V Lialin, N Shivagunde, S Muckatira, A Rumshisky
URL https://arxiv.org/abs/2307.05695
8
Let's reinforce step by step
S Pan, V Lialin, S Muckatira, A Rumshisky
arXiv preprint arXiv:2311.05821, 2023
6 · 2023
Scaling down to scale up: A guide to parameter-efficient fine-tuning (2023)
V Lialin, V Deshpande, A Rumshisky
arXiv preprint arXiv:2303.15647, 2023
6 · 2023
NarrativeTime: Dense temporal annotation on a timeline
A Rogers, M Karpinska, A Gupta, V Lialin, G Smelkov, A Rumshisky
arXiv preprint arXiv:1908.11443, 2019
5 · 2019
Deconstructing in-context learning: Understanding prompts via corruption
N Shivagunde, V Lialin, S Muckatira, A Rumshisky
arXiv preprint arXiv:2404.02054, 2024
4 · 2024
Recent advances, applications, and open challenges in machine learning for health: Reflections from research roundtables at ML4H 2023 symposium
H Jeong, S Jabbour, Y Yang, R Thapta, H Mozannar, WJ Han, ...
arXiv preprint arXiv:2403.01628, 2024
4 · 2024
Scaling down to scale up: A guide to parameter-efficient fine-tuning. arXiv 2023
V Lialin, V Deshpande, A Rumshisky
arXiv preprint arXiv:2303.15647
4
Emergent abilities in reduced-scale generative language models
S Muckatira, V Deshpande, V Lialin, A Rumshisky
arXiv preprint arXiv:2404.02204, 2024
1 · 2024
Improving Classification Robustness for Noisy Texts with Robust Word Vectors
V Malykh, V Lyalin
Journal of Mathematical Sciences 273 (4), 605-613, 2023
1 · 2023
Larger Probes Tell a Different Story: Extending Psycholinguistic Datasets Via In-Context Learning
N Shivagunde, V Lialin, A Rumshisky
arXiv preprint arXiv:2303.16445, 2023
2023
Injecting Hierarchy with U-Net Transformers
D Donahue, V Lialin, A Rumshisky
arXiv preprint arXiv:1910.10488, 2019
2019
On the classification of noisy texts
VA Malykh, VA Lyalin
Proceedings of the Institute for Systems Analysis of the Russian Academy of Sciences 68 (S1), 174-182, 2018
2018
Articles 1–20