Leonard Lausen
Amazon Web Services
Verified email at amazon.com
Title / Cited by / Year
Deep learning for precipitation nowcasting: A benchmark and a new model
X Shi, Z Gao, L Lausen, H Wang, DY Yeung, W Wong, W Woo
Advances in neural information processing systems 30, 2017
Cited by 1063 · 2017
GluonCV and GluonNLP: Deep learning in computer vision and natural language processing
J Guo, H He, T He, L Lausen, M Li, H Lin, X Shi, C Wang, J Xie, S Zha, ...
Journal of Machine Learning Research 21 (23), 1-7, 2020
Cited by 250 · 2020
NSML: A machine learning platform that enables you to focus on your models
N Sung, M Kim, H Jo, Y Yang, J Kim, L Lausen, Y Kim, G Lee, D Kwak, ...
arXiv preprint arXiv:1712.05902, 2017
Cited by 86 · 2017
HyTrel: Hypergraph-enhanced tabular data representation learning
P Chen, S Sarkar, L Lausen, B Srinivasan, S Zha, R Huang, G Karypis
Advances in Neural Information Processing Systems 36, 32173-32193, 2023
Cited by 35 · 2023
Large language models of code fail at completing code with potential bugs
T Dinh, J Zhao, S Tan, R Negrinho, L Lausen, S Zha, G Karypis
Advances in Neural Information Processing Systems 36, 41386-41412, 2023
Cited by 28 · 2023
Exploring the role of task transferability in large-scale multi-task learning
V Padmakumar, L Lausen, M Ballesteros, S Zha, H He, G Karypis
arXiv preprint arXiv:2204.11117, 2022
Cited by 20 · 2022
Better context makes better code language models: A case study on function call argument completion
H Pei, J Zhao, L Lausen, S Zha, G Karypis
Proceedings of the AAAI Conference on Artificial Intelligence 37 (4), 5230-5238, 2023
Cited by 19 · 2023
Testing the limits of unified sequence-to-sequence LLM pretraining on diverse table data tasks
S Sarkar, L Lausen
arXiv preprint arXiv:2310.00789, 2023
Cited by 6* · 2023
Parameter and data efficient continual pre-training for robustness to dialectal variance in Arabic
S Sarkar, K Lin, S Sengupta, L Lausen, S Zha, S Mansour
arXiv preprint arXiv:2211.03966, 2022
Cited by 2 · 2022
Dive into deep learning for natural language processing
H Lin, X Shi, L Lausen, A Zhang, H He, S Zha, A Smola
Proceedings of the 2019 Conference on Empirical Methods in Natural Language …, 2019
Cited by 2 · 2019
CrowdRisk: exploring crowdsourcing of risk information
L Lausen, M Rittenbruch, P Mitchell, E Horton, M Foth
Proceedings of the 28th Australian Conference on Computer-Human Interaction …, 2016
Cited by 2 · 2016
Revisiting SMoE language models by evaluating inefficiencies with task specific expert pruning
S Sarkar, L Lausen, V Cevher, S Zha, T Brox, G Karypis
arXiv preprint arXiv:2409.01483, 2024
Cited by 1 · 2024
Understanding Silent Data Corruption in LLM Training
J Ma, H Pei, L Lausen, G Karypis
arXiv preprint arXiv:2502.12340, 2025
2025
Articles 1–13