| Publication | Cited by | Year |
| --- | --- | --- |
| Code Llama: Open foundation models for code. B Roziere*, J Gehring*, F Gloeckle*, S Sootla*, I Gat, XE Tan, Y Adi, J Liu, ... arXiv preprint arXiv:2308.12950, 2023 | 1690 | 2023 |
| Effective long-context scaling of foundation models. W Xiong*, J Liu*, I Molybog, H Zhang, P Bhargava, R Hou, L Martin, ... NAACL 2024, 2023 | 173 | 2023 |
| Scene-LLM: Extending language model for 3D visual understanding and reasoning. R Fu, J Liu, X Chen, Y Nie, W Xiong. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer …, 2024 | 35 | 2024 |
| How Far Are We From AGI: Are LLMs All We Need? T Feng, C Jin, J Liu, K Zhu, H Tu, Z Cheng, G Lin, J You. Transactions on Machine Learning Research, 2024 | 17* | 2024 |
| CLIP-Layout: Style-consistent indoor scene synthesis with semantic furniture embedding. J Liu, W Xiong, I Jones, Y Nie, A Gupta, B Oğuz. arXiv preprint arXiv:2303.03565, 2023 | 9 | 2023 |
| Text-guided 3D Human Generation from 2D Collections. TJ Fu, W Xiong, Y Nie, J Liu, B Oğuz, WY Wang. EMNLP 2023 Findings, 2023 | 2 | 2023 |
| Speculative Prefill: Turbocharging TTFT with Lightweight and Training-Free Token Importance Estimation. J Liu, B Chen, C Zhang. arXiv preprint arXiv:2502.02789, 2025 | | 2025 |