Bosheng Ding
Title · Cited by · Year
Is GPT-3 a Good Data Annotator?
B Ding, C Qin, L Liu, YK Chia, S Joty, B Li, L Bing
ACL 2023, 2022
242 · 2022
On the effectiveness of adapter-based tuning for pretrained language model adaptation
R He, L Liu, H Ye, Q Tan, B Ding, L Cheng, JW Low, L Bing, L Si
ACL 2021, 2021
212 · 2021
DAGA: Data augmentation with a generation approach for low-resource tagging tasks
B Ding, L Liu, L Bing, C Kruengkrai, TH Nguyen, S Joty, L Si, C Miao
EMNLP 2020, 2020
176 · 2020
Chain of Knowledge: A Framework for Grounding Large Language Models with Structured Knowledge Bases
X Li, R Zhao, YK Chia, B Ding, L Bing, S Joty, S Poria
ICLR 2024, 2023
127* · 2023
MulDA: A multilingual data augmentation framework for low-resource cross-lingual NER
L Liu, B Ding, L Bing, S Joty, L Si, C Miao
ACL 2021, 5834-5846, 2021
80 · 2021
Data augmentation using LLMs: Data perspectives, learning paradigms and challenges
B Ding, C Qin, R Zhao, T Luo, X Li, G Chen, W Xia, J Hu, AT Luu, S Joty
ACL 2024, 2024
75 · 2024
Generative AI: A systematic review using topic modelling techniques
P Gupta, B Ding, C Guan, D Ding
Data and Information Management 8 (2), 100066, 2024
66 · 2024
Retrieving multimodal information for augmented generation: A survey
R Zhao, H Chen, W Wang, F Jiao, XL Do, C Qin, B Ding, X Guo, M Li, X Li, ...
EMNLP 2023, 2023
66 · 2023
GlobalWoZ: Globalizing MultiWOZ to develop multilingual task-oriented dialogue systems
B Ding, J Hu, L Bing, SM Aljunied, S Joty, L Si, C Miao
ACL 2022, 2021
40 · 2021
Can ChatGPT-like generative models guarantee factual accuracy? On the mistakes of new generation search engines
R Zhao, X Li, YK Chia, B Ding, L Bing
Technical Report, 2023
32 · 2023
Unraveling the landscape of large language models: a systematic review and future perspectives
Q Ding, D Ding, Y Wang, C Guan, B Ding
Journal of Electronic Business & Digital Economics 3 (1), 3-19, 2023
22 · 2023
How much are LLMs contaminated? A comprehensive survey and the LLMSanitize library
M Ravaut, B Ding, F Jiao, H Chen, X Li, R Zhao, C Qin, C Xiong, S Joty
arXiv e-prints, arXiv: 2404.00699, 2024
15 · 2024
Exploring Self-supervised Logic-enhanced Training for Large Language Models
F Jiao, Z Teng, B Ding, Z Liu, NF Chen, S Joty
NAACL 2024, 2023
14* · 2023
Panda LLM: Training Data and Evaluation for Open-Sourced Chinese Instruction-Following Large Language Models
F Jiao, B Ding, T Luo, Z Mo
Technical Report, 2023
9 · 2023
Data augmentation using large language models: Data perspectives, learning paradigms and challenges
B Ding, C Qin, R Zhao, T Luo, X Li, G Chen, W Xia, J Hu, AT Luu, S Joty
arXiv preprint arXiv:2403.02990, 2024
7 · 2024
How Much are Large Language Models Contaminated? A Comprehensive Survey and the LLMSanitize Library
M Ravaut, B Ding, F Jiao, H Chen, X Li, R Zhao, C Qin, C Xiong, S Joty
arXiv preprint arXiv:2404.00699, 2024
4 · 2024
Improving in-context learning via bidirectional alignment
C Qin, W Xia, F Jiao, C Chen, Y Hu, B Ding, S Joty
arXiv preprint arXiv:2312.17055, 2023
4 · 2023
Relevant or Random: Can LLMs Truly Perform Analogical Reasoning?
C Qin, W Xia, T Wang, F Jiao, Y Hu, B Ding, R Chen, S Joty
arXiv preprint arXiv:2404.12728, 2024
3 · 2024
Demystify Adult Learning: A Social Network and Large Language Model Assisted Approach
F Liu, B Ding, C Guan, Z Wei, D Niyato, J Tan
AIoT 2024, 2024
2 · 2024
StructTest: Benchmarking LLMs' Reasoning through Compositional Structured Outputs
H Chen, F Jiao, M Ravaut, N Farruque, XP Nguyen, C Qin, M Dey, B Ding, ...
arXiv preprint arXiv:2412.18011, 2024
2024
Articles 1–20