Split computing and early exiting for deep learning applications: Survey and research challenges

Y Matsubara, M Levorato, F Restuccia - ACM Computing Surveys, 2022 - dl.acm.org
Mobile devices such as smartphones and autonomous vehicles increasingly rely on deep
neural networks (DNNs) to execute complex inference tasks such as image classification …

LGViT: Dynamic early exiting for accelerating vision transformer

G Xu, J Hao, L Shen, H Hu, Y Luo, H Lin… - Proceedings of the 31st …, 2023 - dl.acm.org
Recently, the efficient deployment and acceleration of powerful vision transformers (ViTs) on
resource-limited edge devices for providing multimedia services have become attractive …

Single-layer vision transformers for more accurate early exits with less overhead

A Bakhtiarnia, Q Zhang, A Iosifidis - Neural Networks, 2022 - Elsevier
Deploying deep learning models in time-critical applications with limited computational
resources, for instance in edge computing systems and IoT networks, is a challenging task …

Towards anytime classification in early-exit architectures by enforcing conditional monotonicity

M Jazbec, J Allingham, D Zhang… - Advances in Neural …, 2023 - proceedings.neurips.cc
Modern predictive models are often deployed to environments in which computational
budgets are dynamic. Anytime algorithms are well-suited to such environments as, at any …

Zero time waste in pre-trained early exit neural networks

B Wójcik, M Przewiȩźlikowski, F Szatkowski… - Neural Networks, 2023 - Elsevier
The problem of reducing processing time of large deep learning models is a fundamental
challenge in many real-world applications. Early exit methods strive towards this goal by …

Fiancee: Faster inference of adversarial networks via conditional early exits

P Karpikova, E Radionova… - Proceedings of the …, 2023 - openaccess.thecvf.com
Generative DNNs are a powerful tool for image synthesis, but they are limited by their
computational load. On the other hand, given a trained model and a task, e.g., faces …

Efficiently controlling multiple risks with pareto testing

B Laufer-Goldshtein, A Fisch, R Barzilay… - arXiv preprint arXiv …, 2022 - arxiv.org
Machine learning applications frequently come with multiple diverse objectives and
constraints that can change over time. Accordingly, trained models can be tuned with sets of …

OccamNets: Mitigating dataset bias by favoring simpler hypotheses

R Shrestha, K Kafle, C Kanan - European Conference on Computer Vision, 2022 - Springer
Dataset bias and spurious correlations can significantly impair generalization in deep neural
networks. Many prior efforts have addressed this problem using either alternative loss …

Meta-GF: Training dynamic-depth neural networks harmoniously

Y Sun, J Li, X Xu - European Conference on Computer Vision, 2022 - Springer
Most state-of-the-art deep neural networks use static inference graphs, which makes it
impossible for such networks to dynamically adjust the depth or width of the network …

ScrollNet: Dynamic weight importance for continual learning

F Yang, K Wang… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
The principle underlying most existing continual learning (CL) methods is to prioritize
stability by penalizing changes in parameters crucial to old tasks, while allowing for plasticity …