VoxelPrompt: A vision-language agent for grounded medical image analysis
We present VoxelPrompt, an agent-driven vision-language framework that tackles diverse
radiological tasks through joint modeling of natural language, image volumes, and analytical …
Interpretable bilingual multimodal large language model for diverse biomedical tasks
Several medical Multimodal Large Language Models (MLLMs) have been developed to
address tasks involving visual images with textual instructions across various medical …
Can Modern LLMs Act as Agent Cores in Radiology Environments?
Advancements in large language models (LLMs) have paved the way for LLM-based agent
systems that offer enhanced accuracy and interpretability across various domains …