Overview frequency principle/spectral bias in deep learning
Understanding deep learning is increasingly urgent as it penetrates more and more into
industry and science. In recent years, a research line from Fourier analysis sheds light on …
DAdaQuant: Doubly-adaptive quantization for communication-efficient federated learning
Federated Learning (FL) is a powerful technique to train a model on a server with data from
several clients in a privacy-preserving manner. FL incurs significant communication costs …
Edge computing technology enablers: A systematic lecture study
With the increasingly stringent QoS constraints (e.g., latency, bandwidth, jitter) imposed by
novel applications (e.g., e-Health, autonomous vehicles, smart cities, etc.), as well as the …
"BNN-BN=?": Training Binary Neural Networks Without Batch Normalization
Batch normalization (BN) is a key facilitator and considered essential for state-of-the-art
binary neural networks (BNN). However, the BN layer is costly to calculate and is typically …
F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization
Neural network quantization is a promising compression technique to reduce memory
footprint and save energy consumption, potentially leading to real-time inference. However …
Enabling design methodologies and future trends for edge AI: Specialization and codesign
This work is an introduction and a survey for the Special Issue on Machine Intelligence at the
Edge. The authors argue that workloads that were formerly performed in the cloud are …
CPT: Efficient Deep Neural Network Training via Cyclic Precision
Low-precision deep neural network (DNN) training has gained tremendous attention as
reducing precision is one of the most effective knobs for boosting DNNs' training time/energy …
MIA-Former: Efficient and Robust Vision Transformers via Multi-grained Input-Adaptation
Vision transformers have recently demonstrated great success in various computer vision
tasks, motivating a tremendously increased interest in their deployment into many real-world …
2-in-1 Accelerator: Enabling Random Precision Switch for Winning Both Adversarial Robustness and Efficiency
The recent breakthroughs of deep neural networks (DNNs) and the advent of billions of
Internet of Things (IoT) devices have excited an explosive demand for intelligent IoT devices …
A General and Efficient Training for Transformer via Token Expansion
The remarkable performance of Vision Transformers (ViTs) typically requires an extremely
large training cost. Existing methods have attempted to accelerate the training of ViTs yet …