Obow: Online bag-of-visual-words generation for self-supervised learning

S Gidaris, A Bursuc, G Puy… - Proceedings of the …, 2021 - openaccess.thecvf.com
Learning image representations without human supervision is an important and active
research field. Several recent approaches have successfully leveraged the idea of making …

Few-shot object detection by knowledge distillation using bag-of-visual-words representations

W Pei, S Wu, D Mei, F Chen, J Tian, G Lu - European Conference on …, 2022 - Springer
While fine-tuning-based methods for few-shot object detection have achieved remarkable
progress, a crucial challenge that has not been addressed well is the potential class-specific …

Toward Cross-Lingual Social Event Detection with Hybrid Knowledge Distillation

J Ren, H Peng, L Jiang, Z Hao, J Wu, S Gao… - ACM Transactions on …, 2024 - dl.acm.org
Recently published graph neural networks (GNNs) show promising performance on social
event detection tasks. However, most studies are oriented toward monolingual data in …

Knowledge distillation meets open-set semi-supervised learning

J Yang, X Zhu, A Bulat, B Martinez… - International Journal of …, 2024 - Springer
Existing knowledge distillation methods mostly focus on distillation of the teacher's predictions
and intermediate activations. However, the structured representation, which arguably is one …
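
The snippet above names the two usual distillation targets: the teacher's predictions and its intermediate activations. Below is a minimal PyTorch sketch of the second, activation-matching, term; the 1x1 projection and the L2 penalty are common choices assumed here for illustration, not details taken from the cited paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActivationDistillationLoss(nn.Module):
    """Matches a student feature map to the corresponding teacher feature map."""

    def __init__(self, student_channels, teacher_channels):
        super().__init__()
        # 1x1 conv projects the student features into the teacher's channel dimension.
        self.proj = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)

    def forward(self, student_feat, teacher_feat):
        # L2 distance between the projected student map and the (frozen) teacher map.
        return F.mse_loss(self.proj(student_feat), teacher_feat.detach())
```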

Knowledge Distillation Layer that Lets the Student Decide

A Gorgun, YZ Gurbuz, AA Alatan - arXiv preprint arXiv:2309.02843, 2023 - arxiv.org
A typical technique in knowledge distillation (KD) is to regularize the learning of a limited-capacity
model (the student) by pushing its responses to match those of a powerful model (the teacher) …
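
The response-matching objective described in this snippet is commonly implemented as a KL divergence between temperature-softened logits; the sketch below assumes that standard formulation, and the temperature, weighting, and function names are illustrative rather than taken from the cited paper.

```python
import torch
import torch.nn.functional as F

def response_kd_loss(student_logits, teacher_logits, temperature=4.0):
    """KL divergence between temperature-softened teacher and student outputs."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    p_teacher = F.softmax(teacher_logits.detach() / temperature, dim=1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2

# Typical usage: add the distillation term to the ordinary supervised loss, e.g.
# total = F.cross_entropy(student_logits, labels) + alpha * response_kd_loss(student_logits, teacher_logits)
```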

[PDF][PDF] FLRKD: Relational Knowledge Distillation Based on Channel-wise Feature Quality Assessment.

Z An, C Deng, W Dang, Z Dong, Q Luo, J Cheng - BMVC, 2023 - papers.bmvc2023.org
With the increasing computational power of computing devices, the pre-training of large
deep-learning models has become prevalent. However, deploying such models on edge …

FA-GAL-ResNet: Lightweight Residual Network using Focused Attention Mechanism and Generative Adversarial Learning via Knowledge Distillation

H Yang, T Lu, W Guo, S Chang, G Liu… - 2021 International Joint …, 2021 - ieeexplore.ieee.org
Although deep neural networks have achieved satisfactory performance, they rely on
powerful hardware for training, which is expensive and not easy to obtain. Therefore, the …

Deep face recognition in the wild

J Yang - 2022 - eprints.nottingham.ac.uk
Face recognition has attracted particular interest in biometric recognition, with wide
applications in security, entertainment, health, and marketing. Recent years have witnessed …

Knowledge Distillation By Sparse Representation Matching

DT Tran, M Gabbouj, A Iosifidis - arXiv preprint arXiv:2103.17012, 2021 - arxiv.org
Knowledge Distillation refers to a class of methods that transfer knowledge from a
teacher network to a student network. In this paper, we propose Sparse Representation …