A comprehensive survey on test-time adaptation under distribution shifts
Abstract Machine learning methods strive to acquire a robust model during the training
process that can effectively generalize to test samples, even in the presence of distribution …
Knowledge distillation and student-teacher learning for visual intelligence: A review and new outlooks
L Wang, KJ Yoon - IEEE Transactions on Pattern Analysis and …, 2021 - ieeexplore.ieee.org
Deep neural models have, in recent years, been successful in almost every field, solving even
the most complex problem statements. However, these models are huge in size with …
Fine-tuning global model via data-free knowledge distillation for non-iid federated learning
Federated Learning (FL) is an emerging distributed learning paradigm under privacy
constraint. Data heterogeneity is one of the main challenges in FL, which results in slow …
Source-free domain adaptation for semantic segmentation
Abstract Unsupervised Domain Adaptation (UDA) can tackle the challenge that
convolutional neural network (CNN)-based approaches for semantic segmentation heavily …
Distilling object detectors via decoupled features
Abstract Knowledge distillation is a widely used paradigm for inheriting information from a
complicated teacher network to a compact student network and maintaining the strong …
Data-free model extraction
Current model extraction attacks assume that the adversary has access to a surrogate
dataset with characteristics similar to the proprietary data used to train the victim model. This …
Towards data-free model stealing in a hard label setting
Abstract Machine learning models deployed as a service (MLaaS) are susceptible to model
stealing attacks, where an adversary attempts to steal the model within a restricted access …
Towards efficient data free black-box adversarial attack
Classic black-box adversarial attacks can take advantage of transferable adversarial
examples generated by a similar substitute model to successfully fool the target model …
Spot-adaptive knowledge distillation
Knowledge distillation (KD) has become a well established paradigm for compressing deep
neural networks. The typical way of conducting knowledge distillation is to train the student …
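The entry above refers to the classic knowledge-distillation setup, in which a student network is trained to match a teacher's temperature-softened output distribution alongside the hard labels. A minimal sketch of that standard soft-target objective (following Hinton et al., 2015; the function names, temperature, and weighting below are illustrative defaults, not values from the cited paper):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, true_label,
            temperature=4.0, alpha=0.5):
    """Standard KD objective: alpha * soft-target KL + (1 - alpha) * hard-label CE.

    The T^2 factor keeps the soft-target gradient magnitude roughly
    comparable across temperatures, as in the original formulation.
    """
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    # KL(teacher || student) on the temperature-softened distributions
    kl = sum(pt * math.log(pt / ps)
             for pt, ps in zip(p_teacher, p_student) if pt > 0)
    # Ordinary cross-entropy against the ground-truth label (T = 1)
    ce = -math.log(softmax(student_logits)[true_label])
    return alpha * (temperature ** 2) * kl + (1 - alpha) * ce
```

A higher temperature exposes more of the teacher's "dark knowledge" (relative probabilities among wrong classes), which is what the student distills beyond the hard labels alone.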
Synthesizing informative training samples with GAN
Remarkable progress has been achieved in synthesizing photo-realistic images with
generative adversarial networks (GANs). Recently, GANs are utilized as the training sample …