Knowledge distillation and student-teacher learning for visual intelligence: A review and new outlooks
L Wang, KJ Yoon - IEEE Transactions on Pattern Analysis and …, 2021 - ieeexplore.ieee.org
Deep neural models, in recent years, have been successful in almost every field, even
solving the most complex problem statements. However, these models are huge in size with …
Learn from model beyond fine-tuning: A survey
Foundation models (FM) have demonstrated remarkable performance across a wide range
of tasks (especially in the fields of natural language processing and computer vision) …
Data-free knowledge distillation for heterogeneous federated learning
Federated Learning (FL) is a decentralized machine-learning paradigm, in which a global
server iteratively averages the model parameters of local users without accessing their data …
Knowledge distillation: A survey
In recent years, deep neural networks have been successful in both industry and academia,
especially for computer vision tasks. The great success of deep learning is mainly due to its …
Source-free domain adaptation for semantic segmentation
Unsupervised Domain Adaptation (UDA) can tackle the challenge that
convolutional neural network (CNN)-based approaches for semantic segmentation heavily …
DaFKD: Domain-aware federated knowledge distillation
Federated Distillation (FD) has recently attracted increasing attention for its efficiency in
aggregating multiple diverse local models trained from statistically heterogeneous data of …
Data-free knowledge distillation via feature exchange and activation region constraint
Despite the tremendous progress on data-free knowledge distillation (DFKD) based on
synthetic data generation, there are still limitations in diverse and efficient data synthesis. It …
Generative low-bitwidth data free quantization
Neural network quantization is an effective way to compress deep models and improve their
execution latency and energy efficiency, so that they can be deployed on mobile or …
Data-free network quantization with adversarial knowledge distillation
Network quantization is an essential procedure in deep learning for development of efficient
fixed-point inference models on mobile or edge platforms. However, as datasets grow larger …
Self-distillation as instance-specific label smoothing
It has been recently demonstrated that multi-generational self-distillation can improve
generalization. Despite this intriguing observation, reasons for the enhancement remain …