Xiang Zhang
UBC
Verified email at ualberta.ca - Homepage
Title
Cited by
Year
Don’t trust ChatGPT when your question is not in English: A study of multilingual abilities and types of LLMs
X Zhang, S Li, B Hauer, N Shi, G Kondrak
EMNLP, 2023
Cited by: 70 · 2023
ContraNovo: A Contrastive Learning Approach to Enhance De Novo Peptide Sequencing
Z Jin, S Xu, X Zhang, T Ling, N Dong, W Ouyang, Z Gao, C Chang, S Sun
AAAI, 2024
Cited by: 11 · 2024
A character-level length-control algorithm for non-autoregressive sentence summarization
P Liu, X Zhang, L Mou
NeurIPS, 2022
Cited by: 10 · 2022
Improving HowNet-Based Chinese Word Sense Disambiguation with Translations
X Zhang, B Hauer, G Kondrak
EMNLP, 2022
Cited by: 7 · 2022
TTIDA: Controllable Generative Data Augmentation via Text-to-Text and Text-to-Image Models
Y Yin, J Kaddour, X Zhang, Y Nie, Z Liu, L Kong, Q Liu
preprint, 2023
Cited by: 6 · 2023
Cross-Modal Consistency in Multimodal Large Language Models
X Zhang, S Li, N Shi, B Hauer, Z Wu, G Kondrak, M Abdul-Mageed, ...
preprint, 2024
Cited by: 5* · 2024
Bridging the Gap Between BabelNet and HowNet: Unsupervised Sense Alignment and Sememe Prediction
X Zhang, N Shi, B Hauer, G Kondrak
EACL, 2023
Cited by: 4 · 2023
π-PrimeNovo: an accurate and efficient non-autoregressive deep learning model for de novo peptide sequencing
X Zhang, T Ling, Z Jin, S Xu, Z Gao, B Sun, Z Qiu, J Wei, N Dong, G Wang, ...
Nature Communications, 2025
Cited by: 2 · 2025
Supervised Chain of Thought
X Zhang, D Ding
preprint, 2024
Cited by: 2 · 2024
Autoregressive + Chain of Thought = Recurrent: Recurrence's Role in Language Models' Computability and a Revisit of Recurrent Transformer
X Zhang, M Abdul-Mageed, LVS Lakshmanan
preprint, 2024
Cited by: 2 · 2024
Counting Ability of Large Language Models and Impact of Tokenization
X Zhang, J Cao, C You
preprint, 2024
Cited by: 1 · 2024
Articles 1–11