Generative AI for visualization: State of the art and future directions

Y Ye, J Hao, Y Hou, Z Wang, S Xiao… - Visual Informatics, 2024 - Elsevier

Chatbot-based natural language interfaces for data visualisation: A scoping review

E Kavaz, A Puig, I Rodríguez - Applied Sciences, 2023 - mdpi.com
Rapid growth in the generation of data from various sources has made data visualisation a
valuable tool for analysing data. However, visual analysis can be a challenging task, not …

ChartAssistant: A universal chart multimodal language model via chart-to-table pre-training and multitask instruction tuning

F Meng, W Shao, Q Lu, P Gao, K Zhang, Y Qiao… - arXiv preprint arXiv …, 2024 - arxiv.org
Charts play a vital role in data visualization, understanding data patterns, and informed
decision-making. However, their unique combination of graphical elements (e.g., bars, lines) …

OpenCQA: Open-ended question answering with charts

S Kantharaj, XL Do, RTK Leong, JQ Tan… - arXiv preprint arXiv …, 2022 - arxiv.org
Charts are very popular for analyzing data and conveying important insights. People often analyze
visualizations to answer open-ended questions that require explanatory answers …

Exploring chart question answering for blind and low vision users

J Kim, A Srinivasan, NW Kim, YS Kim - … of the 2023 CHI Conference on …, 2023 - dl.acm.org
Data visualizations can be complex or involve numerous data points, making them
impractical to navigate using screen readers alone. Question answering (QA) systems have …

Advancing multimodal large language models in chart question answering with visualization-referenced instruction tuning

X Zeng, H Lin, Y Ye, W Zeng - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Emerging multimodal large language models (MLLMs) exhibit great potential for chart
question answering (CQA). Recent efforts primarily focus on scaling up training datasets (i.e., …

An empirical evaluation of the GPT-4 multimodal language model on visualization literacy tasks

A Bendeck, J Stasko - IEEE Transactions on Visualization and …, 2024 - ieeexplore.ieee.org
Large Language Models (LLMs) like GPT-4 which support multimodal input (i.e., prompts
containing images in addition to text) have immense potential to advance visualization …

Natural language dataset generation framework for visualizations powered by large language models

HK Ko, H Jeon, G Park, DH Kim, NW Kim… - Proceedings of the CHI …, 2024 - dl.acm.org
We introduce VL2NL, a Large Language Model (LLM) framework that generates rich and
diverse NL datasets using Vega-Lite specifications as input, thereby streamlining the …

“Explain What a Treemap is”: Exploratory Investigation of Strategies for Explaining Unfamiliar Chart to Blind and Low Vision Users

G Kim, J Kim, YS Kim - Proceedings of the 2023 CHI conference on …, 2023 - dl.acm.org
Visualization designers increasingly use diverse types of visualizations, but assistive
technologies and education for blind and low vision people often focus on elementary chart …