Haotian Liu
xAI
Verified email at x.ai - Homepage
Title
Cited by
Year
Visual instruction tuning
H Liu, C Li, Q Wu, YJ Lee
Advances in neural information processing systems 36, 2024
Cited by 5449 · 2024
Improved Baselines with Visual Instruction Tuning
H Liu, C Li, Y Li, YJ Lee
CVPR 2024, 2023
Cited by 1870 · 2023
Llava-med: Training a large language-and-vision assistant for biomedicine in one day
C Li, C Wong, S Zhang, N Usuyama, H Liu, J Yang, T Naumann, H Poon, ...
NeurIPS 2023, Datasets and Benchmarks Track, 2023
Cited by 653 · 2023
GLIGEN: Open-Set Grounded Text-to-Image Generation
Y Li, H Liu, Q Wu, F Mu, J Yang, J Gao, C Li, YJ Lee
CVPR 2023, 2023
Cited by 626 · 2023
Llava-next: Improved reasoning, ocr, and world knowledge
H Liu, C Li, Y Li, B Li, Y Zhang, S Shen, YJ Lee
https://llava-vl.github.io/blog/2024-01-30-llava-next, 2024
Cited by 602* · 2024
Aligning Large Multimodal Models with Factually Augmented RLHF
Z Sun, S Shen, S Cao, H Liu, C Li, Y Shen, C Gan, LY Gui, YX Wang, ...
arXiv preprint arXiv:2309.14525, 2023
Cited by 251 · 2023
Masked discrimination for self-supervised learning on point clouds
H Liu, M Cai, YJ Lee
Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel …, 2022
Cited by 165 · 2022
ELEVATER: A Benchmark and Toolkit for Evaluating Language-Augmented Visual Models
C Li*, H Liu*, LH Li, P Zhang, J Aneja, J Yang, P Jin, H Hu, Z Liu, YJ Lee, ...
NeurIPS 2022, Datasets and Benchmarks Track, 2022
Cited by 153 · 2022
Llava-plus: Learning to use tools for creating multimodal agents
S Liu, H Cheng, H Liu, H Zhang, F Li, T Ren, X Zou, J Yang, H Su, J Zhu, ...
European Conference on Computer Vision, 126-142, 2024
Cited by 102 · 2024
LLaVA-NeXT: A Strong Zero-shot Video Understanding Model
Y Zhang, B Li, H Liu, YJ Lee, L Gui, D Fu, J Feng, Z Liu, C Li
https://llava-vl.github.io/blog/2024-04-30-llava-next-video/, 2024
Cited by 102* · 2024
YolactEdge: Real-time Instance Segmentation on the Edge
H Liu*, RAR Soto*, F Xiao, YJ Lee
2021 IEEE International Conference on Robotics and Automation (ICRA), 9579-9585, 2021
Cited by 93 · 2021
Making large multimodal models understand arbitrary visual prompts
M Cai, H Liu, SK Mustikovela, GP Meyer, Y Chai, D Park, YJ Lee
CVPR 2024, 2023
Cited by 85* · 2023
Learning Customized Visual Models with Retrieval-Augmented Knowledge
H Liu, K Son, J Yang, C Liu, J Gao, YJ Lee, C Li
CVPR 2023, 2023
Cited by 57* · 2023
An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models
Y Lu, C Li, H Liu, J Yang, J Gao, Y Shen
arXiv preprint arXiv:2309.09958, 2023
Cited by 34 · 2023
CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs
Z Wang, M Xia, L He, H Chen, Y Liu, R Zhu, K Liang, X Wu, H Liu, ...
arXiv preprint arXiv:2406.18521, 2024
Cited by 25 · 2024
Identity from here, Pose from there: Self-supervised Disentanglement and Generation of Objects using Unlabeled Videos
F Xiao, H Liu, YJ Lee
Proceedings of the IEEE International Conference on Computer Vision, 7013-7022, 2019
Cited by 21 · 2019
Operation strategy of public building: Implications from trade-off between carbon emission and occupant satisfaction
Y Chen, H Liu, L Shi
Journal of Cleaner Production 205, 629-644, 2018
Cited by 13 · 2018
Generate Anything Anywhere in Any Scene
Y Li, H Liu, Y Wen, YJ Lee
arXiv preprint arXiv:2306.17154, 2023
Cited by 11 · 2023
Fantastic Copyrighted Beasts and How (Not) to Generate Them
L He, Y Huang, W Shi, T Xie, H Liu, Y Wang, L Zettlemoyer, C Zhang, ...
arXiv preprint arXiv:2406.14526, 2024
Cited by 10 · 2024
Lmms-eval: Accelerating the development of large multimodal models
B Li, P Zhang, K Zhang, F Pu, X Du, Y Dong, H Liu, Y Zhang, G Zhang, ...
March, 2024
Cited by 9 · 2024
Articles 1–20