FT-CNN: Algorithm-based fault tolerance for convolutional neural networks

K Zhao, S Di, S Li, X Liang, Y Zhai… - … on Parallel and …, 2020 - ieeexplore.ieee.org
Convolutional neural networks (CNNs) are becoming more and more important for solving
challenging and critical problems in many fields. CNN inference applications have been …
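The checksum idea behind algorithm-based fault tolerance (ABFT) can be illustrated on the matrix multiplications that underlie convolution layers. This is a generic ABFT sketch, not FT-CNN's exact scheme: append a column-checksum row to A and a row-checksum column to B, then verify the product's checksums to detect a corrupted element.

```python
import numpy as np

def abft_matmul(A, B):
    """Compute C = A @ B with ABFT-style checksums.

    A gains a column-checksum row, B gains a row-checksum column;
    after the multiply, the last row/column of the extended product
    must equal the sums of C's rows/columns, so a single corrupted
    element is detectable."""
    Ac = np.vstack([A, A.sum(axis=0)])                  # column checksums
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])   # row checksums
    C = Ac @ Br
    ok = (np.allclose(C[-1, :-1], C[:-1, :-1].sum(axis=0)) and
          np.allclose(C[:-1, -1], C[:-1, :-1].sum(axis=1)))
    return C[:-1, :-1], ok

A = np.random.randn(3, 4)
B = np.random.randn(4, 5)
C, ok = abft_matmul(A, B)   # ok is True when no fault occurred
```

The same check runs after a (possibly faulty) accelerator computes the product; a checksum mismatch flags the result for recomputation.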

Design of a quantization-based dnn delta compression framework for model snapshots and federated learning

H Jin, D Wu, S Zhang, X Zou, S Jin… - … on Parallel and …, 2023 - ieeexplore.ieee.org
Deep neural networks (DNNs) have achieved remarkable success in many fields. However,
large-scale DNNs also bring storage costs when storing snapshots for preventing clusters' …
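The core of quantization-based delta compression can be sketched as follows (a minimal illustration, not the framework's actual design): store only the quantized element-wise delta between consecutive snapshots, and rebuild the new snapshot from the old one plus the dequantized delta. The step size `step` is a hypothetical parameter.

```python
import numpy as np

def compress_delta(old, new, step=1e-3):
    """Quantize the snapshot delta to integer codes (compress well)."""
    return np.round((new - old) / step).astype(np.int32)

def restore(old, codes, step=1e-3):
    """Rebuild the new snapshot; per-element error is at most step/2."""
    return old + codes.astype(np.float32) * step

old = np.random.randn(1024).astype(np.float32)
new = old + np.float32(0.01) * np.random.randn(1024).astype(np.float32)
codes = compress_delta(old, new)
approx = restore(old, codes)
```

Because successive snapshots differ only slightly, the integer codes cluster near zero and compress far better than the raw weights.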

Smartidx: Reducing communication cost in federated learning by exploiting the cnns structures

D Wu, X Zou, S Zhang, H Jin, W Xia… - Proceedings of the AAAI …, 2022 - ojs.aaai.org
Top-k sparsification method is popular and powerful for reducing the communication cost in
Federated Learning (FL). However, according to our experimental observation, it spends …
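The Top-k sparsification baseline this paper improves on can be sketched as follows (a generic illustration, not SmartIdx itself): each client transmits only the k largest-magnitude entries of its model update, as an (index, value) pair.

```python
import numpy as np

def topk_sparsify(update, k):
    """Keep the k largest-magnitude entries; return (indices, values),
    the sparse pair a client would actually transmit."""
    flat = update.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def densify(idx, vals, shape):
    """Server side: rebuild a dense update from the sparse pair."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = vals
    return flat.reshape(shape)

update = np.array([0.1, -2.0, 0.05, 3.0, -0.2])
idx, vals = topk_sparsify(update, 2)
recovered = densify(idx, vals, update.shape)
# only the two largest-magnitude entries (-2.0 and 3.0) survive
```

The hidden cost the abstract alludes to is the index overhead: each transmitted value drags an index along with it, which SmartIdx targets by exploiting CNN structure.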

Low-power deep learning edge computing platform for resource constrained lightweight compact UAVs

A Albanese, M Nardello, D Brunelli - Sustainable Computing: Informatics …, 2022 - Elsevier
Unmanned Aerial Vehicles (UAVs), which can operate autonomously in dynamic
and complex environments, are becoming increasingly common. Deep learning techniques …

ChatIoT: Zero-code Generation of Trigger-action Based IoT Programs

Y Gao, K Xiao, F Li, W Xu, J Huang… - Proceedings of the ACM on …, 2024 - dl.acm.org
Trigger-Action Program (TAP) is a simple but powerful format to realize intelligent IoT
applications, especially in home automation scenarios. Existing trace-driven approaches …
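The trigger-action format itself is simple to illustrate. The rules and state fields below are hypothetical examples, not taken from ChatIoT: each rule pairs a condition over device state with an action to fire.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

@dataclass
class Rule:
    """One trigger-action pair: IF trigger(state) THEN action."""
    trigger: Callable[[Dict[str, Any]], bool]
    action: str

# Illustrative home-automation rules (hypothetical names/fields)
rules = [
    Rule(trigger=lambda s: s["temperature"] > 28, action="turn_on_ac"),
    Rule(trigger=lambda s: s["motion"] and s["hour"] >= 22,
         action="turn_on_hall_light"),
]

def evaluate(state: Dict[str, Any]) -> List[str]:
    """Return the actions whose triggers fire for the current state."""
    return [r.action for r in rules if r.trigger(state)]

evaluate({"temperature": 30, "motion": False, "hour": 14})
# -> ["turn_on_ac"]
```

Zero-code generation, as the title suggests, means producing such rule sets from natural-language requests rather than hand-writing them.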

Drew: Efficient winograd cnn inference with deep reuse

R Wu, F Zhang, J Guan, Z Zheng, X Du… - Proceedings of the ACM …, 2022 - dl.acm.org
Deep learning has been used in various domains, including Web services. Convolutional
neural networks (CNNs), which are deep learning representatives, are among the most …

Fedcomp: A federated learning compression framework for resource-constrained edge computing devices

D Wu, W Yang, H Jin, X Zou, W Xia… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Top-K sparsification-based compression techniques are popular and powerful for reducing
communication costs in federated learning (FL). However, existing Top-K sparsification …

Smart-DNN+: A memory-efficient neural networks compression framework for the model inference

D Wu, W Yang, X Zou, W Xia, S Li, Z Hu… - ACM Transactions on …, 2023 - dl.acm.org
Deep Neural Networks (DNNs) have achieved remarkable success in various real-world
applications. However, running a Deep Neural Network (DNN) typically requires hundreds …

FedSZ: Leveraging error-bounded lossy compression for federated learning communications

G Wilkins, S Di, JC Calhoun, Z Li, K Kim… - 2024 IEEE 44th …, 2024 - ieeexplore.ieee.org
With the promise of federated learning (FL) to allow for geographically-distributed and highly
personalized services, the efficient exchange of model updates between clients and servers …
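Error-bounded lossy compression, the building block FedSZ leverages, can be sketched with uniform scalar quantization (a minimal illustration, not SZ's actual predictor-based pipeline): each value maps to the nearest multiple of 2·eb, so reconstruction error never exceeds the bound eb, and the integer codes then compress well losslessly.

```python
import numpy as np

def quantize(x, eb):
    """Uniform quantization with absolute error bound eb."""
    return np.round(x / (2 * eb)).astype(np.int64)

def dequantize(codes, eb):
    """Reconstruct; |x - dequantize(quantize(x))| <= eb per element."""
    return codes * (2 * eb)

x = np.random.randn(10_000)        # e.g. a flattened model update
codes = quantize(x, eb=0.01)
xr = dequantize(codes, eb=0.01)
assert np.max(np.abs(x - xr)) <= 0.01 + 1e-9
```

The user-set error bound is the knob: a looser bound shrinks the update more but perturbs the aggregated model further.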

Inshrinkerator: Compressing Deep Learning Training Checkpoints via Dynamic Quantization

A Agrawal, S Reddy, S Bhattamishra… - Proceedings of the …, 2024 - dl.acm.org
The likelihood of encountering in-training failures rises substantially with larger Deep
Learning (DL) training workloads, leading to lost work and resource wastage. Such failures …
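Per-tensor linear quantization is the basic primitive a dynamic checkpoint-quantization scheme can build on; the sketch below is a generic illustration, not Inshrinkerator's actual policy. "Dynamic" here means the scale is chosen from the tensor's value range at save time rather than fixed in advance.

```python
import numpy as np

def quantize_tensor(t, bits=8):
    """Quantize a float tensor to int8 codes plus one float scale,
    chosen dynamically from the tensor's current value range."""
    qmax = 2 ** (bits - 1) - 1
    amax = float(np.max(np.abs(t)))
    scale = amax / qmax if amax > 0 else 1.0
    codes = np.clip(np.round(t / scale), -qmax, qmax).astype(np.int8)
    return codes, scale

def dequantize_tensor(codes, scale):
    """Restore on checkpoint load; per-element error <= scale / 2."""
    return codes.astype(np.float32) * scale

w = np.random.randn(256).astype(np.float32)
codes, scale = quantize_tensor(w)
w_restored = dequantize_tensor(codes, scale)
```

Storing one byte per weight plus a scale per tensor cuts checkpoint size roughly 4x versus float32, at the cost of bounded restoration error.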