J Yu, X Wang, S Tu, S Cao, D Zhang-Li, X Lv, H Peng, Z Yao, X Zhang, et al. "KoLA: Carefully benchmarking world knowledge of large language models." arXiv preprint arXiv:2306.09296, 2023.
J Qi, C Zhang, X Wang, K Zeng, J Yu, J Liu, J Sun, Y Chen, L Hou, J Li, et al. "Preserving knowledge invariance: Rethinking robustness evaluation of open information extraction." arXiv preprint arXiv:2305.13981, 2023.
J Liu, Z Liu, HS Moon, J Mhlanga, A Jha. "A no-gold-standard technique for objective evaluation of quantitative nuclear-medicine imaging methods in the presence of correlated noise." Journal of Nuclear Medicine 61 (supplement 1), 523, 2020.
J Liu, J Shi, J Qi, L Hou, J Li, Q Tian. "ParaMac: A general unsupervised paraphrase generation framework leveraging semantic constraints and diversifying mechanisms." Findings of the Association for Computational Linguistics: EMNLP 2022, 6193-6206, 2022.
J Qi, B Xu, K Zeng, J Liu, J Yu, Q Gao, J Li, L Hou. "ConstGCN: Constrained transmission-based graph convolutional networks for document-level relation extraction." arXiv preprint arXiv:2210.03949, 2022.
J Liu, S Cao, J Shi, T Zhang, L Hou, J Li. "Probing structured semantics understanding and generation of language models via question answering." arXiv preprint arXiv:2401.05777, 2024.
J Liu, Z Liu, J Mhlanga, BA Siegel, AK Jha. "A no-gold-standard technique to objectively evaluate quantitative imaging methods using patient data: Theory." arXiv preprint arXiv:2006.02290, 2020.
A Zou, J Zou, S Cao, J Zhang, J Liu, J Wan, L Hou. "Dynamic multi-teacher knowledge distillation for semantic parsing in KBQA." Expert Systems with Applications 263, 125599, 2025.
J Liu, S Cao, J Shi, T Zhang, L Nie, L Hu, L Hou, J Li. "How Proficient Are Large Language Models in Formal Languages? An In-Depth Insight for Knowledge Base Question Answering." Findings of the Association for Computational Linguistics: ACL 2024, 792-815, 2024.