A comprehensive survey on pretrained foundation models: A history from BERT to ChatGPT

C Zhou, Q Li, C Li, J Yu, Y Liu, G Wang… - International Journal of …, 2024 - Springer
Abstract Pretrained Foundation Models (PFMs) are regarded as the foundation for various
downstream tasks across different data modalities. A PFM (e.g., BERT, ChatGPT, GPT-4) is …

Deep learning for image colorization: Current and future prospects

S Huang, X Jin, Q Jiang, L Liu - Engineering Applications of Artificial …, 2022 - Elsevier
Image colorization, as an essential problem in computer vision (CV), has attracted an
increasing amount of researchers' attention in recent years, especially deep learning-based …

ColorFormer: Image colorization via color memory assisted hybrid-attention transformer

X Ji, B Jiang, D Luo, G Tao, W Chu, Z Ma… - European Conference on Computer Vision, 2022 - Springer

Rapsai: Accelerating machine learning prototyping of multimedia applications through visual programming
R Du, N Li, J Jin, M Carney, S Miles, M Kleiner… - Proceedings of the …, 2023 - dl.acm.org
In recent years, there has been a proliferation of multimedia applications that leverage
machine learning (ML) for interactive experiences. Prototyping ML-based applications is …

Invertible image decolorization

R Zhao, T Liu, J Xiao, DPK Lun… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Invertible image decolorization is a useful color compression technique to reduce the cost in
multimedia systems. Invertible decolorization aims to synthesize faithful grayscales from …

Is BERT blind? Exploring the effect of vision-and-language pretraining on visual language understanding

M Alper, M Fiman… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Most humans use visual imagination to understand and reason about language, but models
such as BERT reason about language using knowledge acquired during text-only …