Lingpeng Kong
Google DeepMind, The University of Hong Kong
Verified email at cs.hku.hk - Homepage
Title
Cited by
Year
Random feature attention
H Peng, N Pappas, D Yogatama, R Schwartz, NA Smith, L Kong
arXiv preprint arXiv:2103.02143, 2021
Cited by 375 · 2021
Diffuseq: Sequence to sequence text generation with diffusion models
S Gong, M Li, J Feng, Z Wu, LP Kong
arXiv preprint arXiv:2210.08933, 2022
Cited by 314 · 2022
A Dependency Parser for Tweets
L Kong, N Schneider, S Swayamdipta, A Bhatia, C Dyer, NA Smith
EMNLP 2014, 2014
Cited by 297 · 2014
Dynet: The dynamic neural network toolkit
G Neubig, C Dyer, Y Goldberg, A Matthews, W Ammar, A Anastasopoulos, ...
arXiv preprint arXiv:1701.03980, 2017
Cited by 277 · 2017
cosformer: Rethinking softmax in attention
Z Qin, W Sun, H Deng, D Li, Y Wei, B Lv, J Yan, L Kong, Y Zhong
arXiv preprint arXiv:2202.08791, 2022
Cited by 261 · 2022
Episodic memory in lifelong language learning
C de Masson D'Autume, S Ruder, L Kong, D Yogatama
Advances in Neural Information Processing Systems 32, 2019
Cited by 237 · 2019
A contrastive framework for neural text generation
Y Su, T Lan, Y Wang, D Yogatama, L Kong, N Collier
Advances in Neural Information Processing Systems 35, 21548-21561, 2022
Cited by 218 · 2022
Unifiedskg: Unifying and multi-tasking structured knowledge grounding with text-to-text language models
T Xie, CH Wu, P Shi, R Zhong, T Scholak, M Yasunaga, CS Wu, M Zhong, ...
arXiv preprint arXiv:2201.05966, 2022
Cited by 205 · 2022
Zerogen: Efficient zero-shot learning via dataset generation
J Ye, J Gao, Q Li, H Xu, J Feng, Z Wu, T Yu, L Kong
arXiv preprint arXiv:2202.07922, 2022
Cited by 175 · 2022
What do recurrent neural network grammars learn about syntax?
A Kuncoro, M Ballesteros, L Kong, C Dyer, G Neubig, NA Smith
arXiv preprint arXiv:1611.05774, 2016
Cited by 159 · 2016
Audio–visual segmentation
J Zhou, J Wang, J Zhang, W Sun, J Zhang, S Birchfield, D Guo, L Kong, ...
European Conference on Computer Vision, 386-403, 2022
Cited by 143 · 2022
Self-adaptive in-context learning: An information compression perspective for in-context example selection and ordering
Z Wu, Y Wang, J Ye, L Kong
arXiv preprint arXiv:2212.10375, 2022
Cited by 133 · 2022
Segmental recurrent neural networks
L Kong, C Dyer, NA Smith
arXiv preprint arXiv:1511.06018, 2015
Cited by 133 · 2015
Compositional exemplars for in-context learning
J Ye, Z Wu, J Feng, T Yu, L Kong
International Conference on Machine Learning, 39818-39833, 2023
Cited by 112 · 2023
Adaptive semiparametric language models
D Yogatama, C de Masson d’Autume, L Kong
Transactions of the Association for Computational Linguistics 9, 362-373, 2021
Cited by 111 · 2021
Language models can see: Plugging visual controls in text generation
Y Su, T Lan, Y Liu, F Liu, D Yogatama, Y Wang, L Kong, N Collier
arXiv preprint arXiv:2205.02655, 2022
Cited by 101 · 2022
M3IT: A Large-Scale Dataset towards Multi-Modal Multilingual Instruction Tuning
L Li, Y Yin, S Li, L Chen, P Wang, S Ren, M Li, Y Yang, J Xu, X Sun, ...
arXiv preprint arXiv:2306.04387, 2023
Cited by 97 · 2023
Multilingual machine translation with large language models: Empirical results and analysis
W Zhu, H Liu, Q Dong, J Xu, S Huang, L Kong, J Chen, L Li
arXiv preprint arXiv:2304.04675, 2023
Cited by 97 · 2023
Distilling an ensemble of greedy dependency parsers into one MST parser
A Kuncoro, M Ballesteros, L Kong, C Dyer, NA Smith
arXiv preprint arXiv:1609.07561, 2016
Cited by 85 · 2016
Osworld: Benchmarking multimodal agents for open-ended tasks in real computer environments
T Xie, D Zhang, J Chen, X Li, S Zhao, R Cao, JH Toh, Z Cheng, D Shin, ...
Advances in Neural Information Processing Systems 37, 52040-52094, 2025
Cited by 79 · 2025
Articles 1–20