Overview frequency principle/spectral bias in deep learning

ZQJ Xu, Y Zhang, T Luo - Communications on Applied Mathematics and …, 2024 - Springer
Understanding deep learning has become increasingly important as it penetrates further into
industry and science. In recent years, a research line from Fourier analysis sheds light on …

F8net: Fixed-point 8-bit only multiplication for network quantization

Q **, J Ren, R Zhuang, S Hanumante, Z Li… - arXiv preprint arXiv …, 2022 - arxiv.org
Neural network quantization is a promising compression technique to reduce memory
footprint and save energy consumption, potentially leading to real-time inference. However …
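The fixed-point arithmetic behind entries like this one can be illustrated with a generic round-trip: scale by a power of two, round, and clamp to the signed 8-bit range. This is a minimal sketch of symmetric fixed-point quantization, not F8Net's actual scheme; the function names and the choice of fractional bits are assumptions.

```python
import numpy as np

def quantize_fixed_point(x, frac_bits):
    """Map floats to signed 8-bit fixed-point with `frac_bits`
    fractional bits: scale by 2**frac_bits, round, clamp to int8."""
    scale = 2 ** frac_bits
    return np.clip(np.round(x * scale), -128, 127).astype(np.int8)

def dequantize_fixed_point(q, frac_bits):
    """Recover approximate float values from the int8 mantissas."""
    return q.astype(np.float32) / (2 ** frac_bits)

x = np.array([0.5, -1.25, 0.1], dtype=np.float32)
q = quantize_fixed_point(x, frac_bits=5)        # integers [16, -40, 3]
x_hat = dequantize_fixed_point(q, frac_bits=5)  # [0.5, -1.25, 0.09375]
```

With all multiplications done on int8 mantissas, only shifts are needed to track the binary point, which is what makes 8-bit-only multiplication attractive for energy-constrained inference.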

Service delay minimization for federated learning over mobile devices

R Chen, D Shi, X Qin, D Liu, M Pan… - IEEE Journal on …, 2023 - ieeexplore.ieee.org
Federated learning (FL) over mobile devices has fostered numerous intriguing
applications/services, many of which are delay-sensitive. In this paper, we propose a service …

Resource constrained neural network training

M Pietrołaj, M Blok - Scientific Reports, 2024 - nature.com
Modern applications of neural-network-based AI solutions tend to move from datacenter
backends to low-power edge devices. Environmental, computational, and power constraints …

2-in-1 accelerator: Enabling random precision switch for winning both adversarial robustness and efficiency

Y Fu, Y Zhao, Q Yu, C Li, Y Lin - MICRO-54: 54th Annual IEEE/ACM …, 2021 - dl.acm.org
The recent breakthroughs of deep neural networks (DNNs) and the advent of billions of
Internet of Things (IoT) devices have spurred explosive demand for intelligent IoT devices …

EEFL: High-speed wireless communications inspired energy efficient federated learning over mobile devices

R Chen, Q Wan, X Zhang, X Qin, Y Hou… - Proceedings of the 21st …, 2023 - dl.acm.org
Energy efficiency is essential for federated learning (FL) over mobile devices and its
potential prosperous applications. Different from existing communication efficient FL …

A BF16 FMA is all you need for DNN training

J Osorio, A Armejach, E Petit, G Henry… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Fused Multiply-Add (FMA) functional units constitute a fundamental hardware component for
training Deep Neural Networks (DNNs). Their silicon area grows quadratically with the mantissa …
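The BF16 format referenced in this entry is the top 16 bits of an IEEE-754 float32 (1 sign bit, 8 exponent bits, 7 mantissa bits), which is why narrowing the FMA mantissa saves so much multiplier area. A minimal round-trip sketch, assuming plain truncation rather than the round-to-nearest-even used by real hardware:

```python
import struct

def float32_to_bf16_bits(x):
    """Truncate a float32 to bfloat16 by keeping its top 16 bits
    (sign, 8-bit exponent, 7-bit mantissa)."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    return bits >> 16

def bf16_bits_to_float32(b):
    """Widen a bfloat16 bit pattern back to float32 by zero-padding
    the low 16 mantissa bits."""
    return struct.unpack('<f', struct.pack('<I', b << 16))[0]

# 1.2 survives only approximately: 7 mantissa bits give 1.1953125
y = bf16_bits_to_float32(float32_to_bf16_bits(1.2))
```

Because BF16 keeps the full 8-bit float32 exponent, its dynamic range matches float32, and only precision is traded away.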

Adaptive and Parallel Split Federated Learning in Vehicular Edge Computing

X Qiang, Z Chang, Y Hu, L Liu… - IEEE Internet of Things …, 2024 - ieeexplore.ieee.org
Vehicular edge intelligence (VEI) is a promising paradigm for enabling future intelligent
transportation systems by accommodating artificial intelligence (AI) at the vehicular edge …

On-device deep learning: survey on techniques improving energy efficiency of DNNs

A Boumendil, W Bechkit… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Providing high-quality predictions is no longer the sole goal for neural networks. As we live
in an increasingly interconnected world, these models need to match the constraints of …

Accuracy Boosters: Epoch-Driven Mixed-Mantissa Block Floating-Point for DNN Training

SB Harma, A Chakraborty, B Falsafi, M Jaggi… - arXiv preprint arXiv …, 2022 - arxiv.org
The unprecedented growth in DNN model complexity, size, and amount of training data has
led to a commensurate increase in demand for computing and a search for minimal …
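Block floating-point, as in this last entry, stores one shared power-of-two exponent per block of values plus a small signed integer mantissa per value. The sketch below is a simplified round-trip under assumed parameters (a 4-bit mantissa and 4-element blocks); it does not model the paper's epoch-driven mantissa switching.

```python
import numpy as np

def block_fp_round_trip(x, mantissa_bits=4, block_size=4):
    """Quantize-dequantize a 1-D float array through block floating-point:
    each block shares one power-of-two exponent, chosen so the largest
    magnitude in the block fits in a `mantissa_bits`-bit signed mantissa."""
    qmax = 2 ** (mantissa_bits - 1) - 1
    out = np.zeros_like(x, dtype=np.float32)
    for i in range(0, len(x), block_size):
        blk = x[i:i + block_size]
        m = np.max(np.abs(blk))
        if m == 0:
            continue                                  # all-zero block
        e = np.ceil(np.log2(m / qmax))                # shared block exponent
        step = 2.0 ** e
        q = np.clip(np.round(blk / step), -qmax, qmax)  # integer mantissas
        out[i:i + block_size] = q * step
    return out

x = np.array([1.0, 0.5, 3.0, -2.0], dtype=np.float32)
x_hat = block_fp_round_trip(x)  # these values happen to be exactly representable
```

Sharing the exponent lets the multiply-accumulate hardware work on narrow integer mantissas, while the per-block exponent preserves dynamic range across blocks.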