Haonan Li
LibrAI & MBZUAI
Verified email at mbzuai.ac.ae - Homepage
Title · Cited by · Year
CMMLU: Measuring massive multitask language understanding in Chinese
H Li, Y Zhang, F Koto, Y Yang, H Zhao, Y Gong, N Duan, T Baldwin
arXiv preprint arXiv:2306.09212, 2023
195 · 2023
Do-not-answer: A dataset for evaluating safeguards in LLMs
Y Wang, H Li, X Han, P Nakov, T Baldwin
arXiv preprint arXiv:2308.13387, 2023
133 · 2023
A framework for few-shot language model evaluation
L Gao, J Tow, B Abbasi, S Biderman, S Black, A DiPofi, C Foster, ...
URL https://zenodo.org/records/10256836, 2023
93 · 2023
Jais and Jais-chat: Arabic-centric foundation and instruction-tuned open generative large language models
N Sengupta, SK Sahu, B Jia, S Katipomu, H Li, F Koto, W Marshall, ...
arXiv preprint arXiv:2308.16149, 2023
81 · 2023
LLM360 K2: Scaling Up 360-Open-Source Large Language Models
Z Liu, B Tan, H Wang, W Neiswanger, T Tao, H Li, F Koto, Y Wang, S Sun, ...
arXiv preprint arXiv:2501.07124, 2025
65* · 2025
Bactrian-X: Multilingual replicable instruction-following models with low-rank adaptation
H Li, F Koto, M Wu, AF Aji, T Baldwin
arXiv preprint arXiv:2305.15011, 2023
55 · 2023
MultiSpanQA: A dataset for multi-span question answering
H Li, M Tomko, M Vasardani, T Baldwin
Proceedings of the 2022 Conference of the North American Chapter of the …, 2022
51 · 2022
Place questions and human-generated answers: A data analysis approach
E Hamzei, H Li, M Vasardani, T Baldwin, S Winter, M Tomko
Geospatial Technologies for Local and Regional Development: Proceedings of …, 2020
33 · 2020
Neural character-level dependency parsing for Chinese
H Li, Z Zhang, Y Ju, H Zhao
Proceedings of the AAAI Conference on Artificial Intelligence 32 (1), 2018
33 · 2018
Lessons from the trenches on reproducible evaluation of language models
S Biderman, H Schoelkopf, L Sutawika, L Gao, J Tow, B Abbasi, AF Aji, ...
arXiv preprint arXiv:2405.14782, 2024
32 · 2024
Fact-checking the output of large language models via token-level uncertainty quantification
E Fadeeva, A Rubashevskii, A Shelmanov, S Petrakov, H Li, H Mubarak, ...
arXiv preprint arXiv:2403.04696, 2024
32 · 2024
Changes in bioactive lipid mediators in response to short-term exposure to ambient air particulate matter: a targeted lipidomic analysis of oxylipin signaling pathways
T Wang, Y Han, H Li, Y Wang, T Xue, X Chen, W Chen, Y Fan, X Qiu, ...
Environment International 147, 106314, 2021
31 · 2021
KFCNet: Knowledge filtering and contrastive learning network for generative commonsense reasoning
H Li, Y Gong, J Jiao, R Zhang, T Baldwin, N Duan
arXiv preprint arXiv:2109.06704, 2021
27 · 2021
Large Language Models Only Pass Primary School Exams in Indonesia: A Comprehensive Test on IndoMMLU
F Koto, N Aisyah, H Li, T Baldwin
EMNLP 2023, 2023
26 · 2023
ArabicMMLU: Assessing massive multitask language understanding in Arabic
F Koto, H Li, S Shatnawi, J Doughman, AB Sadallah, A Alraeesi, ...
arXiv preprint arXiv:2402.12840, 2024
25 · 2024
Sentiment-Aware Word and Sentence Level Pre-training for Sentiment Analysis
S Fan, C Lin, H Li, Z Lin, J Su, H Zhang, Y Gong, J Guo, N Duan
EMNLP 2022, 2022
24 · 2022
A rapid and high-throughput approach to quantify non-esterified oxylipins for epidemiological studies using online SPE-LC-MS/MS
T Wang, H Li, Y Han, Y Wang, J Gong, K Gao, W Li, H Zhang, J Wang, ...
Analytical and Bioanalytical Chemistry 412, 7989-8001, 2020
23 · 2020
A framework for few-shot language model evaluation, 07 2024
L Gao, J Tow, B Abbasi, S Biderman, S Black, A DiPofi, C Foster, ...
URL https://zenodo.org/records/12608602, 2024
23
Learning from failure: Integrating negative examples when fine-tuning large language models as agents
R Wang, H Li, X Han, Y Zhang, T Baldwin
arXiv preprint arXiv:2402.11651, 2024
18 · 2024
Neural factoid geospatial question answering
H Li, E Hamzei, I Majic, H Hua, J Renz, M Tomko, M Vasardani, S Winter, ...
Journal of Spatial Information Science, 65-90, 2021
15 · 2021
Articles 1–20