Shuming Ma
Microsoft Research Asia
Verified email at microsoft.com - Homepage
Title · Cited by · Year
Kosmos-2: Grounding multimodal large language models to the world
Z Peng, W Wang, L Dong, Y Hao, S Huang, S Ma, F Wei
arXiv preprint arXiv:2306.14824, 2023
Cited by 617 · 2023
SGM: sequence generation model for multi-label classification
P Yang, X Sun, W Li, S Ma, W Wu, H Wang
arXiv preprint arXiv:1806.04822, 2018
Cited by 502 · 2018
Language is not all you need: Aligning perception with language models
S Huang, L Dong, W Wang, Y Hao, S Singhal, S Ma, T Lv, L Cui, ...
Advances in Neural Information Processing Systems 36, 72096-72109, 2023
Cited by 483 · 2023
Why can gpt learn in-context? language models implicitly perform gradient descent as meta-optimizers
D Dai, Y Sun, L Dong, Y Hao, S Ma, Z Sui, F Wei
arXiv preprint arXiv:2212.10559, 2022
Cited by 383 · 2022
Retentive network: A successor to transformer for large language models
Y Sun, L Dong, S Huang, S Ma, Y Xia, J Xue, J Wang, F Wei
arXiv preprint arXiv:2307.08621, 2023
Cited by 326 · 2023
Global encoding for abstractive summarization
J Lin, X Sun, S Ma, Q Su
arXiv preprint arXiv:1805.03989, 2018
Cited by 201 · 2018
meprop: Sparsified back propagation for accelerated deep learning with reduced overfitting
X Sun, X Ren, S Ma, H Wang
International Conference on Machine Learning, 3299-3308, 2017
Cited by 200 · 2017
A whole-slide foundation model for digital pathology from real-world data
H Xu, N Usuyama, J Bagga, S Zhang, R Rao, T Naumann, C Wong, ...
Nature 630 (8015), 181-188, 2024
Cited by 196 · 2024
Bitnet: Scaling 1-bit transformers for large language models
H Wang, S Ma, L Dong, S Huang, H Wang, L Ma, F Yang, R Wang, Y Wu, ...
arXiv preprint arXiv:2310.11453, 2023
Cited by 183 · 2023
Deepnet: Scaling transformers to 1,000 layers
H Wang, S Ma, L Dong, S Huang, D Zhang, F Wei
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024
Cited by 178 · 2024
The era of 1-bit llms: All large language models are in 1.58 bits
S Ma, H Wang, L Ma, L Wang, W Wang, S Huang, L Dong, R Wang, J Xue, ...
arXiv preprint arXiv:2402.17764, 2024
1752024
Longnet: Scaling transformers to 1,000,000,000 tokens
J Ding, S Ma, L Dong, X Zhang, S Huang, W Wang, N Zheng, F Wei
arXiv preprint arXiv:2307.02486, 2023
Cited by 164 · 2023
A length-extrapolatable transformer
Y Sun, L Dong, B Patra, S Ma, S Huang, A Benhaim, V Chaudhary, ...
arXiv preprint arXiv:2212.10554, 2022
Cited by 154 · 2022
XLM-E: Cross-lingual language model pre-training via ELECTRA
Z Chi, S Huang, L Dong, S Ma, B Zheng, S Singhal, P Bajaj, X Song, ...
arXiv preprint arXiv:2106.16138, 2021
Cited by 132 · 2021
Language models are general-purpose interfaces
Y Hao, H Song, L Dong, S Huang, Z Chi, W Wang, S Ma, F Wei
arXiv preprint arXiv:2206.06336, 2022
Cited by 105 · 2022
A simple and effective unified encoder for document-level machine translation
S Ma, D Zhang, M Zhou
Proceedings of the 58th annual meeting of the association for computational …, 2020
Cited by 104 · 2020
Language is not all you need: Aligning perception with language models
S Huang, L Dong, W Wang, Y Hao, S Singhal, S Ma, T Lv, L Cui, ..., S Som, X Song, F Wei
2023
Cited by 96 · 2023
Alternating language modeling for cross-lingual pre-training
J Yang, S Ma, D Zhang, S Wu, Z Li, M Zhou
Proceedings of the AAAI Conference on Artificial Intelligence 34 (05), 9386-9393, 2020
Cited by 95 · 2020
On the representation collapse of sparse mixture of experts
Z Chi, L Dong, S Huang, D Dai, S Ma, B Patra, S Singhal, P Bajaj, X Song, ...
Advances in Neural Information Processing Systems 35, 34600-34613, 2022
Cited by 90 · 2022
mT6: Multilingual pretrained text-to-text transformer with translation pairs
Z Chi, L Dong, S Ma, S Huang, XL Mao, H Huang, F Wei
arXiv preprint arXiv:2104.08692, 2021
Cited by 84 · 2021
Articles 1–20