Overview frequency principle/spectral bias in deep learning

ZQJ Xu, Y Zhang, T Luo - Communications on Applied Mathematics and …, 2024 - Springer
Understanding deep learning is increasingly important as it penetrates more and more into
industry and science. In recent years, a research line from Fourier analysis sheds light on …

DAdaQuant: Doubly-adaptive quantization for communication-efficient federated learning

R Hönig, Y Zhao, R Mullins - International Conference on …, 2022 - proceedings.mlr.press
Federated Learning (FL) is a powerful technique to train a model on a server with data from
several clients in a privacy-preserving manner. FL incurs significant communication costs …

Edge computing technology enablers: A systematic lecture study

S Douch, MR Abid, K Zine-Dine, D Bouzidi… - IEEE …, 2022 - ieeexplore.ieee.org
With the increasingly stringent QoS constraints (e.g., latency, bandwidth, jitter) imposed by
novel applications (e.g., e-Health, autonomous vehicles, smart cities, etc.), as well as the …

" BNN-BN=?": Training Binary Neural Networks Without Batch Normalization

T Chen, Z Zhang, X Ouyang, Z Liu… - Proceedings of the …, 2021 - openaccess.thecvf.com
Batch normalization (BN) is a key facilitator and considered essential for state-of-the-art
binary neural networks (BNN). However, the BN layer is costly to calculate and is typically …

F8Net: Fixed-point 8-bit only multiplication for network quantization

Q Jin, J Ren, R Zhuang, S Hanumante, Z Li… - arXiv preprint arXiv …, 2022 - arxiv.org
Neural network quantization is a promising compression technique to reduce memory
footprint and save energy consumption, potentially leading to real-time inference. However …

Enabling design methodologies and future trends for edge AI: Specialization and codesign

C Hao, J Dotzel, J Xiong, L Benini, Z Zhang… - IEEE Design & …, 2021 - ieeexplore.ieee.org
This work is an introduction and a survey for the Special Issue on Machine Intelligence at the
Edge. The authors argue that workloads that were formerly performed in the cloud are …

CPT: Efficient deep neural network training via cyclic precision

Y Fu, H Guo, M Li, X Yang, Y Ding, V Chandra… - arXiv preprint arXiv …, 2021 - arxiv.org
Low-precision deep neural network (DNN) training has gained tremendous attention as
reducing precision is one of the most effective knobs for boosting DNNs' training time/energy …

MIA-Former: Efficient and robust vision transformers via multi-grained input-adaptation

Z Yu, Y Fu, S Li, C Li, Y Lin - Proceedings of the AAAI Conference on …, 2022 - ojs.aaai.org
Vision transformers have recently demonstrated great success in various computer vision
tasks, motivating a tremendously increased interest in their deployment into many real-world …

2-in-1 accelerator: Enabling random precision switch for winning both adversarial robustness and efficiency

Y Fu, Y Zhao, Q Yu, C Li, Y Lin - MICRO-54: 54th Annual IEEE/ACM …, 2021 - dl.acm.org
The recent breakthroughs of deep neural networks (DNNs) and the advent of billions of
Internet of Things (IoT) devices have excited an explosive demand for intelligent IoT devices …

A General and Efficient Training for Transformer via Token Expansion

W Huang, Y Shen, J Xie, B Zhang… - Proceedings of the …, 2024 - openaccess.thecvf.com
The remarkable performance of Vision Transformers (ViTs) typically requires an extremely
large training cost. Existing methods have attempted to accelerate the training of ViTs yet …