From knowledge distillation to self-knowledge distillation: A unified approach with normalized loss and customized soft labels

Z Yang, A Zeng, Z Li, T Zhang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Abstract: Knowledge Distillation (KD) uses the teacher's prediction logits as soft labels to
guide the student, while self-KD does not need a real teacher to acquire the soft labels. This …
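The soft-label mechanism described in this snippet can be sketched as follows. This is a minimal illustration of the classic temperature-softened KD objective (Hinton-style KL divergence between teacher and student distributions), not the normalized-loss variant the paper itself proposes; all function names are illustrative:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher temperature yields softer labels.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(teacher_logits, student_logits, temperature=4.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 so gradients stay comparable across temperatures.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return (temperature ** 2) * kl
```

The loss is zero when student and teacher logits agree and positive otherwise; in self-KD the "teacher" distribution is instead constructed from the student's own past predictions or customized soft labels.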

DisWOT: Student architecture search for distillation without training

P Dong, L Li, Z Wei - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
Abstract: Knowledge distillation (KD) is an effective training strategy to improve
lightweight student models under the guidance of cumbersome teachers. However, the large …

Automated knowledge distillation via Monte Carlo tree search

L Li, P Dong, Z Wei, Y Yang - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
In this paper, we present Auto-KD, the first automated search framework for optimal
knowledge distillation design. Traditional distillation techniques typically require handcrafted …

Shadow knowledge distillation: Bridging offline and online knowledge transfer

L Li, Z Jin - Advances in Neural Information Processing …, 2022 - proceedings.neurips.cc
Abstract: Knowledge distillation can be broadly divided into offline and online categories
according to whether the teacher model is pre-trained and persistent during the distillation …

KD-Zero: Evolving knowledge distiller for any teacher-student pairs

L Li, P Dong, A Li, Z Wei… - Advances in Neural …, 2023 - proceedings.neurips.cc
Abstract: Knowledge distillation (KD) has emerged as an effective model-compression
technique that can enhance lightweight models. Conventional KD methods propose various …

EMQ: Evolving training-free proxies for automated mixed-precision quantization

P Dong, L Li, Z Wei, X Niu, Z Tian… - Proceedings of the …, 2023 - openaccess.thecvf.com
Abstract: Mixed-Precision Quantization (MQ) can achieve a competitive accuracy-complexity
trade-off for models. Conventional training-based search methods require time-consuming …

Auto-Prox: Training-free vision transformer architecture search via automatic proxy discovery

Z Wei, P Dong, Z Hui, A Li, L Li, M Lu, H Pan… - Proceedings of the AAAI …, 2024 - ojs.aaai.org
The substantial success of Vision Transformer (ViT) in computer vision tasks is largely
attributed to the architecture design. This underscores the necessity of efficient architecture …

SasWOT: Real-time semantic segmentation architecture search without training

C Zhu, L Li, Y Wu, Z Sun - Proceedings of the AAAI Conference on …, 2024 - ojs.aaai.org
In this paper, we present SasWOT, the first training-free Semantic segmentation Architecture
Search (SAS) framework via an auto-discovery proxy. Semantic segmentation is widely used …

On the opportunities of green computing: A survey

Y Zhou, X Lin, X Zhang, M Wang, G Jiang, H Lu… - arXiv preprint arXiv …, 2023 - arxiv.org
Artificial Intelligence (AI) has achieved significant advancements in technology and research
over several decades of development, and is widely used in many areas including …

Auto-GAS: automated proxy discovery for training-free generative architecture search

L Li, H Sun, S Li, P Dong, W Luo, W Xue, Q Liu… - … on Computer Vision, 2024 - Springer
In this paper, we introduce Auto-GAS, the first training-free Generative Architecture Search
(GAS) framework enabled by an auto-discovered proxy. Generative models like Generative …