Artificial general intelligence for medical imaging analysis
Large-scale Artificial General Intelligence (AGI) models, including Large Language Models
(LLMs) such as ChatGPT/GPT-4, have achieved unprecedented success in a variety of …
ProxEdit: Improving tuning-free real image editing with proximal guidance
DDIM inversion has revealed the remarkable potential of real image editing within diffusion-
based methods. However, the accuracy of DDIM reconstruction degrades as larger classifier …
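The entry above concerns editing real images via DDIM inversion. A minimal sketch of the standard deterministic DDIM update pair may clarify what "inversion" means here; the function names and the toy setup are illustrative, not this paper's implementation, and `abar` denotes the cumulative noise schedule \(\bar{\alpha}_t\):

```python
import numpy as np

def ddim_invert_step(x_t, eps, abar_t, abar_next):
    """One deterministic DDIM inversion step (t -> t+1): push a clean-ish
    sample toward noise using the model's predicted eps."""
    x0_pred = (x_t - np.sqrt(1.0 - abar_t) * eps) / np.sqrt(abar_t)
    return np.sqrt(abar_next) * x0_pred + np.sqrt(1.0 - abar_next) * eps

def ddim_denoise_step(x_t, eps, abar_t, abar_prev):
    """One deterministic DDIM denoising step (t -> t-1), the inverse map."""
    x0_pred = (x_t - np.sqrt(1.0 - abar_t) * eps) / np.sqrt(abar_t)
    return np.sqrt(abar_prev) * x0_pred + np.sqrt(1.0 - abar_prev) * eps
```

With identical `eps` predictions the two steps are exact inverses; the reconstruction error the abstract mentions arises in practice because classifier-free guidance changes `eps` between the inversion and denoising passes.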
Adapting vision foundation models for plant phenotyping
Foundation models are large models pre-trained on a tremendous amount of data. They can
typically be adapted to diverse downstream tasks with minimal effort. However, as …
SAM-PARSER: Fine-tuning SAM efficiently by parameter space reconstruction
Segment Anything Model (SAM) has received remarkable attention as it offers a
powerful and versatile solution for object segmentation in images. However, fine-tuning SAM …
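The title above describes fine-tuning by reconstructing the parameter space from bases. One common reading of this idea, sketched here as an assumption rather than the paper's exact method, is to factor a frozen weight matrix by SVD and train only the singular-value coefficients:

```python
import numpy as np

def svd_reparam(W, k):
    """Decompose a frozen pretrained weight matrix W. The bases U, Vt stay
    frozen; only the top-k singular-value coefficients become trainable."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :k], s[:k].copy(), Vt[:k, :]

def reconstruct(U, coeffs, Vt):
    """Rebuild the effective weight from frozen bases and learned coeffs."""
    return (U * coeffs) @ Vt  # equivalent to U @ diag(coeffs) @ Vt
```

Only `k` scalars per matrix are updated, which is what makes this kind of reparameterization far cheaper than full fine-tuning.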
Few-Shot Diffusion Models Escape the Curse of Dimensionality
While diffusion models have demonstrated impressive performance, there is a growing need
for generating samples tailored to specific user-defined concepts. The customized …
Embedded prompt tuning: Towards enhanced calibration of pretrained models for medical images
Foundation models pre-trained on large-scale data have been widely shown to achieve
success in various natural imaging downstream tasks. Parameter-efficient fine-tuning (PEFT) …
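The entry above concerns prompt tuning, a PEFT technique. A minimal sketch of the basic mechanism, with illustrative names and a stand-in frozen encoder (not this paper's embedded variant): a small set of learnable prompt tokens is prepended to the input sequence while the backbone stays frozen.

```python
import numpy as np

class PromptedEncoder:
    """Wrap a frozen encoder; only the prepended prompt tokens are trainable."""

    def __init__(self, encoder, n_prompts, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.encoder = encoder  # frozen backbone, called unchanged
        # small random init for the trainable prompt embeddings
        self.prompts = rng.standard_normal((n_prompts, dim)) * 0.02

    def __call__(self, tokens):
        # tokens: (seq_len, dim) patch/word embeddings
        return self.encoder(np.concatenate([self.prompts, tokens], axis=0))
```

Because gradients flow only into `self.prompts`, the number of tuned parameters is `n_prompts * dim` regardless of backbone size.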
Low-rank adaptation of time series foundational models for out-of-domain modality forecasting
Low-Rank Adaptation (LoRA) is a widely used technique for fine-tuning large pre-trained or
foundational models across different modalities and tasks. However, its application to time …
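For readers unfamiliar with the LoRA technique named above, a minimal sketch of the standard formulation: the frozen weight `W` is augmented with a trainable low-rank update `B @ A`, scaled by `alpha / r`. The class and its hyperparameter defaults are illustrative, not tied to this paper's time-series setting.

```python
import numpy as np

class LoRALinear:
    """Frozen linear layer plus a trainable rank-r update: W x + (a/r) B A x."""

    def __init__(self, W, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = W  # frozen pretrained weight, shape (out, in)
        self.A = rng.standard_normal((r, W.shape[1])) * 0.01  # trainable
        self.B = np.zeros((W.shape[0], r))  # zero-init so the update starts at 0
        self.scale = alpha / r

    def __call__(self, x):
        return self.W @ x + self.scale * (self.B @ (self.A @ x))
```

Zero-initializing `B` makes the adapted layer exactly match the pretrained one at the start of fine-tuning, so training begins from the original model's behavior.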
Fine-grained prompt tuning: A parameter and memory efficient transfer learning method for high-resolution medical image classification
Parameter-efficient transfer learning (PETL) is proposed as a cost-effective way to transfer
pre-trained models to downstream tasks, avoiding the high cost of updating entire large …