Efficient acceleration of deep learning inference on resource-constrained edge devices: A review

MMH Shuvo, SK Islam, J Cheng… - Proceedings of the …, 2022 - ieeexplore.ieee.org
Successful integration of deep neural networks (DNNs) or deep learning (DL) has resulted
in breakthroughs in many areas. However, deploying these highly accurate models for data …

Model compression for deep neural networks: A survey

Z Li, H Li, L Meng - Computers, 2023 - mdpi.com
Currently, with the rapid development of deep learning, deep neural networks (DNNs) have
been widely applied in various computer vision tasks. However, in the pursuit of …

Decoupled knowledge distillation

B Zhao, Q Cui, R Song, Y Qiu… - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
State-of-the-art distillation methods are mainly based on distilling deep features from
intermediate layers, while the significance of logit distillation is greatly overlooked. To …
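As background for this entry, the following is a minimal sketch of classical logit distillation (the Hinton-style KL loss that work like Decoupled KD revisits); the temperature value and the T^2 scaling shown here are common illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def logit_distillation_loss(student_logits, teacher_logits, T=4.0):
    """Plain logit distillation: KL divergence between temperature-softened
    teacher and student class distributions. T=4.0 is an assumed temperature,
    not a value taken from the cited paper."""
    student_log_probs = F.log_softmax(student_logits / T, dim=1)
    teacher_probs = F.softmax(teacher_logits / T, dim=1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (T ** 2)

# Example: a batch of 8 samples over 100 classes.
s = torch.randn(8, 100)
t = torch.randn(8, 100)
print(logit_distillation_loss(s, t))
```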

AM-RADIO: Agglomerative vision foundation model reduce all domains into one

M Ranzinger, G Heinrich, J Kautz… - Proceedings of the …, 2024 - openaccess.thecvf.com
A handful of visual foundation models (VFMs) have recently emerged as the backbones for
numerous downstream tasks. VFMs like CLIP, DINOv2, and SAM are trained with distinct …

Multi-level logit distillation

Y Jin, J Wang, D Lin - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
Knowledge Distillation (KD) aims at distilling the knowledge from the large teacher
model to a lightweight student model. Mainstream KD methods can be divided into two …
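To make the teacher-to-student setup described in this snippet concrete, here is a generic student training step that mixes the hard-label task loss with a logit KD term; the function name, the alpha weighting, and the temperature are illustrative assumptions and do not reproduce the paper's multi-level formulation.

```python
import torch
import torch.nn.functional as F

def student_step(student, teacher, images, labels, optimizer, T=4.0, alpha=0.5):
    """One generic distillation update: cross-entropy on hard labels plus a
    KL term against the frozen teacher's softened logits. alpha and T are
    assumed hyperparameters, not values from the cited paper."""
    teacher.eval()
    with torch.no_grad():
        teacher_logits = teacher(images)          # soft targets from the frozen teacher
    student_logits = student(images)

    ce = F.cross_entropy(student_logits, labels)  # hard-label task loss
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T ** 2)

    loss = alpha * ce + (1.0 - alpha) * kd
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```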

MNGNAS: distilling adaptive combination of multiple searched networks for one-shot neural architecture search

Z Chen, G Qiu, P Li, L Zhu, X Yang… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Recently, neural architecture search (NAS) has attracted great interest in academia and
industry. It remains a challenging problem due to the huge search space and computational …