Ruibo Liu
RS @Google DeepMind
Email verified at google.com - Homepage
Title
Cited by
Year
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, JB Alayrac, J Yu, R Soricut, J Schalkwyk, ...
arXiv preprint arXiv:2312.11805, 2023
Cited by 2532 · 2023
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
G Team, P Georgiev, VI Lei, R Burnell, L Bai, A Gulati, G Tanzer, ...
arXiv preprint arXiv:2403.05530, 2024
Cited by 1006 · 2024
Gemma: Open models based on Gemini research and technology
G Team, T Mesnard, C Hardin, R Dadashi, S Bhupatiraju, S Pathak, ...
arXiv preprint arXiv:2403.08295, 2024
Cited by 931 · 2024
Training Socially Aligned Language Models in Simulated Human Society
R Liu, R Yang, C Jia, G Zhang, D Zhou, AM Dai, D Yang, S Vosoughi
ICLR 2024, 2023
Cited by 148* · 2023
Data Boost: Text Data Augmentation Through Reinforcement Learning Guided Conditional Generation
R Liu, G Xu, C Jia, W Ma, L Wang, S Vosoughi
EMNLP 2020, 2020
Cited by 122 · 2020
Mitigating Political Bias in Language Models Through Reinforced Calibration
R Liu, C Jia, J Wei, G Xu, L Wang, S Vosoughi
🏆 AAAI 2021 Outstanding Paper Award, 2021
Cited by 113 · 2021
Exploring collaboration mechanisms for LLM agents: A social psychology view
J Zhang, X Xu, R Liu, B Hooi, S Deng
ACL 2024, 2023
Cited by 98 · 2023
MERT: Acoustic Music Understanding Model with Large-Scale Self-supervised Training
Y Li, R Yuan, G Zhang, Y Ma, X Chen, H Yin, C Lin, A Ragni, E Benetos, ...
ICLR 2024, 2023
Cited by 95 · 2023
Mind's Eye: Grounded Language Model Reasoning through Simulation
R Liu, J Wei, SS Gu, TY Wu, S Vosoughi, C Cui, D Zhou, AM Dai
ICLR 2023, 2022
Cited by 86 · 2022
Best practices and lessons learned on synthetic data for language models
R Liu, J Wei, F Liu, C Si, Y Zhang, J Rao, S Zheng, D Peng, D Yang, ...
arXiv preprint arXiv:2404.07503, 2024
Cited by 83* · 2024
Quantifying and alleviating political bias in language models
R Liu, C Jia, J Wei, G Xu, S Vosoughi
Artificial Intelligence 304, 103654, 2022
Cited by 82 · 2022
Interactive natural language processing
Z Wang, G Zhang, K Yang, N Shi, W Zhou, S Hao, G Xiong, Y Li, MY Sim, ...
arXiv preprint arXiv:2305.13246, 2023
Cited by 55 · 2023
Aligning generative language models with human values
R Liu, G Zhang, X Feng, S Vosoughi
NAACL 2022, 2022
Cited by 53 · 2022
Reconstructing human joint motion with computational fabrics
R Liu, Q Shao, S Wang, C Ru, D Balkcom, X Zhou
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous …, 2019
Cited by 46 · 2019
Higher layers need more LoRA experts
C Gao, K Chen, J Rao, B Sun, R Liu, D Peng, Y Zhang, X Guo, J Yang, ...
arXiv preprint arXiv:2402.08562, 2024
Cited by 36 · 2024
ChatMusician: Understanding and Generating Music Intrinsically with LLM
R Yuan, H Lin, Y Wang, Z Tian, S Wu, T Shen, G Zhang, Y Wu, C Liu, ...
ACL 2024, 2024
Cited by 34 · 2024
Long-form factuality in large language models
J Wei, C Yang, X Song, Y Lu, N Hu, D Tran, D Peng, R Liu, D Huang, ...
NeurIPS 2024, 2024
Cited by 33* · 2024
Design2code: How far are we from automating front-end engineering?
C Si, Y Zhang, Z Yang, R Liu, D Yang
arXiv preprint arXiv:2403.03163, 2024
Cited by 33 · 2024
Second Thoughts are Best: Learning to Re-Align With Human Values from Text Edits
R Liu, C Jia, G Zhang, T Liu, S Vosoughi
NeurIPS 2022, 2023
Cited by 33 · 2023
A transformer-based framework for neutralizing and reversing the political polarity of news articles
R Liu, C Jia, S Vosoughi
Proceedings of the ACM on Human-Computer Interaction 5 (CSCW1), 1-26, 2021
Cited by 33 · 2021
Articles 1–20