Image classification with small datasets: Overview and benchmark

L Brigato, B Barz, L Iocchi, J Denzler - IEEE Access, 2022 - ieeexplore.ieee.org
Image classification with small datasets has been an active research area in the recent past.
However, as research in this scope is still in its infancy, two key ingredients are missing for …

No data augmentation? Alternative regularizations for effective training on small datasets

L Brigato, S Mougiakakou - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
Solving image classification tasks given small training datasets remains an open challenge
for modern computer vision. Aggressive data augmentation and generative models are …

Separation and concentration in deep networks

J Zarka, F Guth, S Mallat - arXiv preprint arXiv:2012.10424, 2020 - arxiv.org
Numerical experiments demonstrate that deep neural network classifiers progressively
separate class distributions around their mean, achieving linear separability on the training …
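
The "separation around class means" effect the snippet describes can be illustrated with a toy nearest-class-mean check (a purely illustrative sketch on synthetic features, not the paper's experimental setup): once each class concentrates around a well-separated mean, the nearest-mean rule, which is a linear classifier, separates the training set.

```python
import numpy as np

def nearest_mean_accuracy(feats, labels):
    """Classify each feature vector by its nearest class mean.

    If classes are concentrated around distant means, this linear
    rule attains perfect training accuracy, i.e. linear separability.
    """
    classes = np.unique(labels)
    means = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    dists = ((feats[:, None, :] - means[None, :, :]) ** 2).sum(axis=-1)
    preds = classes[dists.argmin(axis=1)]
    return (preds == labels).mean()

# Synthetic stand-in for "deep features": two tight clusters, far apart.
rng = np.random.default_rng(0)
a = rng.normal(loc=0.0, scale=0.3, size=(100, 16))
b = rng.normal(loc=2.0, scale=0.3, size=(100, 16))
feats = np.vstack([a, b])
labels = np.array([0] * 100 + [1] * 100)
acc = nearest_mean_accuracy(feats, labels)
```

With this degree of concentration relative to the distance between the means, `acc` is 1.0, which is the regime the snippet's claim refers to.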

Harmonic convolutional networks based on discrete cosine transform

M Ulicny, VA Krylov, R Dahyot - Pattern Recognition, 2022 - Elsevier
Convolutional neural networks (CNNs) learn filters in order to capture local correlation
patterns in feature space. We propose to learn these filters as combinations of preset …
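
The core idea of fixing the filter basis and learning only mixing coefficients can be sketched as follows (a hedged illustration: the function names, the 3×3 size, and the use of separable DCT-II atoms are assumptions for the sketch, not the paper's code):

```python
import numpy as np

def dct2_basis(k=3):
    """k*k two-dimensional DCT atoms, built as separable products
    of the rows of an orthonormal 1D DCT-II matrix."""
    n = np.arange(k)
    D = np.sqrt(2.0 / k) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * k))
    D[0] /= np.sqrt(2.0)  # orthonormal scaling of the DC row
    return np.stack([np.outer(u, v) for u in D for v in D])  # (k*k, k, k)

def harmonic_filter(weights, basis):
    """A 'learned' filter expressed as a weighted sum of fixed DCT atoms;
    only `weights` would be trained, the basis stays preset."""
    return np.tensordot(weights, basis, axes=1)  # (k, k)
```

Because the atoms form an orthonormal set, any standard k×k convolution filter is exactly representable this way; the point of such parameterizations is to bias or truncate the basis, not to restrict expressivity.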

Tune it or don't use it: Benchmarking data-efficient image classification

L Brigato, B Barz, L Iocchi… - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
Data-efficient image classification using deep neural networks in settings where only small
amounts of labeled data are available has been an active research area in the recent past …

To tune or not to tune? An approach for recommending important hyperparameters for classification and clustering algorithms

R El Shawi, M Bahman, S Sakr - Future Generation Computer Systems, 2025 - Elsevier
Machine learning algorithms are widely employed across various applications and
fields. Novel technologies in automated machine learning ease the complexity of algorithm …

Frequency regularization: Reducing information redundancy in convolutional neural networks

C Zhao, G Dong, S Zhang, Z Tan, A Basu - IEEE Access, 2023 - ieeexplore.ieee.org
Convolutional neural networks have demonstrated impressive results in many computer
vision tasks. However, the increasing size of these networks raises concerns about the …

On the shift invariance of max pooling feature maps in convolutional neural networks

H Leterme, K Polisano, V Perrier, K Alahari - arXiv preprint arXiv …, 2022 - arxiv.org
This paper focuses on improving the mathematical interpretability of convolutional neural
networks (CNNs) in the context of image classification. Specifically, we tackle the instability …

DCT-based fast spectral convolution for deep convolutional neural networks

Y Xu, H Nakayama - 2021 International Joint Conference on …, 2021 - ieeexplore.ieee.org
Spectral representations have been introduced into deep convolutional neural networks
(CNNs) mainly for accelerating convolutions and mitigating information loss. However …

Infinite class mixup

T Mensink, P Mettes - arXiv preprint arXiv:2305.10293, 2023 - arxiv.org
Mixup is a widely adopted strategy for training deep networks, where additional samples are
augmented by interpolating inputs and labels of training pairs. Mixup has been shown to improve …
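
The interpolation step the snippet describes, i.e. vanilla mixup, on which Infinite Class Mixup builds, can be sketched as follows (function name and the Beta parameter value are illustrative, not from the paper):

```python
import numpy as np

def mixup_batch(x, y_onehot, alpha=0.2, rng=None):
    """Mix a batch with a shuffled copy of itself.

    x: (B, ...) inputs; y_onehot: (B, C) one-hot labels.
    A single lam ~ Beta(alpha, alpha) interpolates both inputs and labels,
    producing convex combinations of training pairs.
    """
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix, lam
```

Because labels are mixed with the same coefficient as inputs, the soft targets remain valid probability vectors, which is the property Infinite Class Mixup revisits on the classifier side.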