Zhiyong Wu
Shanghai AI Lab
Verified email at cs.hku.hk - Homepage
Title | Cited by | Year
A survey on in-context learning
Q Dong, L Li, D Dai, C Zheng, J Ma, R Li, H Xia, J Xu, Z Wu, T Liu, ...
arXiv preprint arXiv:2301.00234, 2022
1528 | 2022
DiffuSeq: Sequence to Sequence Text Generation with Diffusion Models
S Gong, M Li, J Feng, Z Wu, LP Kong
arXiv preprint arXiv:2210.08933, 2022
313 | 2022
Opencompass: A universal evaluation platform for foundation models
OC Contributors
GitHub repository, 2023
212 | 2023
Internlm: A multilingual language model with progressively enhanced capabilities
ILM Team
https://github.com/InternLM/InternLM, 2023
196 | 2023
Perturbed Masking: Parameter-free Probing for Analyzing and Interpreting BERT
Z Wu, Y Chen, B Kao, Q Liu
arXiv preprint arXiv:2004.14786, 2020
196 | 2020
ZeroGen: Efficient Zero-shot Learning via Dataset Generation
J Ye, J Gao, Q Li, H Xu, J Feng, Z Wu, T Yu, L Kong
arXiv preprint arXiv:2202.07922, 2022
175 | 2022
Can We Edit Factual Knowledge by In-Context Learning?
C Zheng, L Li, Q Dong, Y Fan, Z Wu, J Xu, B Chang
arXiv preprint arXiv:2305.12740, 2023
134 | 2023
Self-Adaptive In-Context Learning: An Information Compression Perspective for In-Context Example Selection and Ordering
Z Wu, Y Wang, J Ye, L Kong
132 | 2023
Compositional exemplars for in-context learning
J Ye, Z Wu, J Feng, T Yu, L Kong
International Conference on Machine Learning, 39818-39833, 2023
111 | 2023
SeeClick: Harnessing GUI Grounding for Advanced Visual GUI Agents
K Cheng, Q Sun, Y Chu, F Xu, Y Li, J Zhang, Z Wu
arXiv preprint arXiv:2401.10935, 2024
97 | 2024
Next: a neural network framework for next poi recommendation
Z Zhang, C Li, Z Wu, A Sun, D Ye, X Luo
Frontiers of Computer Science 14, 314-333, 2020
94 | 2020
Good for misconceived reasons: An empirical revisiting on the need for visual context in multimodal machine translation
Z Wu, L Kong, W Bi, X Li, B Kao
Proceedings of the 59th Annual Meeting of the Association for Computational …, 2021
86* | 2021
Self-guided noise-free data generation for efficient zero-shot learning
J Gao, R Pi, L Yong, H Xu, J Ye, Z Wu
The Eleventh International Conference on Learning Representations (ICLR 2023), 2023
78* | 2023
ProGen: Progressive Zero-shot Dataset Generation via In-context Feedback
J Ye, J Gao, J Feng, Z Wu, T Yu, L Kong
arXiv preprint arXiv:2210.12329, 2022
61 | 2022
OS-Copilot: Towards Generalist Computer Agents with Self-Improvement
Z Wu, C Han, Z Ding, Z Weng, Z Liu, S Yao, T Yu, L Kong
arXiv preprint arXiv:2402.07456, 2024
59 | 2024
OpenICL: An Open-Source Framework for In-context Learning
Z Wu, YX Wang, J Ye, J Feng, J Xu, Y Qiao, Z Wu
arXiv preprint arXiv:2303.02913, 2023
43 | 2023
Corex: Pushing the Boundaries of Complex Reasoning through Multi-Model Collaboration
Q Sun, Z Yin, X Li, Z Wu, X Qiu, L Kong
arXiv preprint arXiv:2310.00280, 2023
38 | 2023
Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling
Z Chen, W Wang, Y Cao, Y Liu, Z Gao, E Cui, J Zhu, S Ye, H Tian, Z Liu, ...
arXiv preprint arXiv:2412.05271, 2024
37 | 2024
Towards practical open knowledge base canonicalization
TH Wu, Z Wu, B Kao, P Yin
Proceedings of the 27th ACM International Conference on Information and …, 2018
35 | 2018
Symbol-LLM: Towards Foundational Symbol-centric Interface For Large Language Models
F Xu, Z Wu, Q Sun, S Ren, F Yuan, S Yuan, Q Lin, Y Qiao, J Liu
arXiv preprint arXiv:2311.09278, 2023
34 | 2023
Articles 1–20