Overview frequency principle/spectral bias in deep learning
Understanding deep learning has become increasingly important as it penetrates more and more of
industry and science. In recent years, a research line from Fourier analysis sheds light on …
F8net: Fixed-point 8-bit only multiplication for network quantization
Neural network quantization is a promising compression technique that reduces memory
footprint and energy consumption, potentially enabling real-time inference. However …
Service delay minimization for federated learning over mobile devices
Federated learning (FL) over mobile devices has fostered numerous intriguing
applications/services, many of which are delay-sensitive. In this paper, we propose a service …
Resource constrained neural network training
M Pietrołaj, M Blok - Scientific Reports, 2024 - nature.com
Modern applications of neural-network-based AI solutions tend to move from datacenter
backends to low-power edge devices. Environmental, computational, and power constraints …
2-in-1 accelerator: Enabling random precision switch for winning both adversarial robustness and efficiency
Recent breakthroughs in deep neural networks (DNNs) and the advent of billions of
Internet of Things (IoT) devices have sparked explosive demand for intelligent IoT devices …
EEFL: High-speed wireless communications inspired energy efficient federated learning over mobile devices
Energy efficiency is essential for federated learning (FL) over mobile devices and its
potentially prosperous applications. Unlike existing communication-efficient FL …
A BF16 FMA is all you need for DNN training
Fused Multiply-Add (FMA) functional units are a fundamental hardware component for
training Deep Neural Networks (DNNs). Their silicon area grows quadratically with the mantissa …
Adaptive and Parallel Split Federated Learning in Vehicular Edge Computing
Vehicular edge intelligence (VEI) is a promising paradigm for enabling future intelligent
transportation systems by accommodating artificial intelligence (AI) at the vehicular edge …
On-device deep learning: survey on techniques improving energy efficiency of DNNs
A Boumendil, W Bechkit… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Providing high-quality predictions is no longer the sole goal of neural networks. As we live
in an increasingly interconnected world, these models need to meet the constraints of …
Accuracy Boosters: Epoch-Driven Mixed-Mantissa Block Floating-Point for DNN Training
The unprecedented growth in DNN model complexity, size, and amount of training data has
led to a commensurate increase in demand for computing and a search for minimal …