Distributed artificial intelligence empowered by end-edge-cloud computing: A survey
As the computing paradigm shifts from cloud computing to end-edge-cloud computing, this
shift also enables artificial intelligence to evolve from a centralized paradigm to a distributed one …
AI-based fog and edge computing: A systematic review, taxonomy and future directions
Resource management in computing is a very challenging problem that involves making
sequential decisions. Resource limitations, resource heterogeneity, dynamic and diverse …
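The "sequential decisions" framing maps naturally onto reinforcement learning, which surveys of this kind cover. Below is a toy sketch that casts an offload-or-run-locally choice as a tiny MDP solved with tabular Q-learning; the state space, dynamics, and rewards are invented purely for illustration and are not from the survey.

```python
# Toy Q-learning view of sequential resource management: at each step an
# agent decides to run a task locally (queue grows, latency rises) or
# offload it (fixed network cost). All dynamics here are invented.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 4, 2            # states: queue-length buckets; actions: 0=local, 1=offload
Q = np.zeros((n_states, n_actions))
gamma, lr, eps = 0.9, 0.1, 0.1

def step(s, a):
    latency = 1.0 + s if a == 0 else 2.0                      # local cost grows with queue
    s_next = min(n_states - 1, s + 1) if a == 0 else max(0, s - 1)
    return s_next, -latency                                   # reward = negative latency

s = 0
for t in range(5000):
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
    s_next, r = step(s, a)
    Q[s, a] += lr * (r + gamma * Q[s_next].max() - Q[s, a])   # standard Q-learning update
    s = s_next
```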
Threats, attacks and defenses to federated learning: issues, taxonomy and perspectives
Empirical attacks on Federated Learning (FL) systems indicate that FL is fraught
with numerous attack surfaces throughout the FL execution. These attacks can not only …
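To make those attack surfaces concrete, here is a minimal FedAvg-style round in NumPy. The two commented lines mark the exchange points where poisoning (a malicious client update) and inference attacks (a curious server inspecting updates) typically land; all data and function names are illustrative, not from the paper.

```python
# Minimal FedAvg-style round: clients take a local step on private data,
# the server averages the resulting models. Illustrative only.
import numpy as np

def local_update(global_w, X, y, lr=0.1):
    grad = 2 * X.T @ (X @ global_w - y) / len(y)   # one linear-regression step
    return global_w - lr * grad

def fedavg_round(global_w, clients):
    updates = [local_update(global_w, X, y) for X, y in clients]
    # attack surface 1: any update above may be maliciously crafted
    # attack surface 2: the server observes every individual update here
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(32, 5)), rng.normal(size=32)) for _ in range(4)]
w = np.zeros(5)
for _ in range(10):
    w = fedavg_round(w, clients)
```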
Grace: A compressed communication framework for distributed machine learning
Powerful computer clusters are used nowadays to train complex deep neural networks
(DNN) on large datasets. Distributed training increasingly becomes communication bound …
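Frameworks of this kind typically hide each method behind a common compress/decompress interface so compressors can be swapped without touching the training loop. The sketch below shows a hypothetical top-k compressor behind such an interface; the class and method names are assumptions, not the actual GRACE API.

```python
# Hypothetical unified-compressor interface with a top-k instance:
# only the k largest-magnitude gradient entries are transmitted.
import numpy as np

class TopKCompressor:
    def __init__(self, k):
        self.k = k

    def compress(self, grad):
        flat = grad.ravel()
        idx = np.argpartition(np.abs(flat), -self.k)[-self.k:]
        return (idx, flat[idx]), grad.shape        # payload + shape metadata

    def decompress(self, payload, shape):
        idx, vals = payload
        flat = np.zeros(int(np.prod(shape)))
        flat[idx] = vals
        return flat.reshape(shape)

comp = TopKCompressor(k=10)
g = np.random.default_rng(1).normal(size=(8, 8))
payload, shape = comp.compress(g)                  # what goes on the wire
g_hat = comp.decompress(payload, shape)            # what the receiver applies
```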
Communication-efficient distributed deep learning: A comprehensive survey
Distributed deep learning (DL) has become prevalent in recent years to reduce training time
by leveraging multiple computing devices (e.g., GPUs/TPUs) due to larger models and …
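The communication in question is the per-step gradient synchronization. This toy sketch simulates synchronous data-parallel SGD, with a plain average standing in for the all-reduce collective; there is no real networking or GPU here, only the structure that becomes communication bound at scale.

```python
# Toy synchronous data-parallel step: each "worker" computes a gradient
# on its own shard, then gradients are averaged (the all-reduce).
import numpy as np

def worker_grad(w, X, y):
    return 2 * X.T @ (X @ w - y) / len(y)          # linear-model gradient

rng = np.random.default_rng(0)
shards = [(rng.normal(size=(64, 10)), rng.normal(size=64)) for _ in range(8)]
w = np.zeros(10)
for step in range(100):
    grads = [worker_grad(w, X, y) for X, y in shards]  # parallel in practice
    g = np.mean(grads, axis=0)                         # stands in for all-reduce
    w -= 0.05 * g
```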
Meta-learning with a geometry-adaptive preconditioner
Model-agnostic meta-learning (MAML) is one of the most successful meta-learning
algorithms. It has a bi-level optimization structure where the outer-loop process learns a …
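A minimal sketch of that bi-level structure, using the first-order approximation (FOMAML) on toy mean-estimation tasks: the inner loop adapts to each task, the outer loop updates the shared initialization. The cited paper additionally learns a geometry-adaptive preconditioner applied to the inner-loop gradient, which this sketch omits.

```python
# First-order MAML on toy tasks: each task's loss is ||w - c||^2 for a
# task-specific optimum c. Inner loop adapts w; outer loop meta-updates.
import numpy as np

def task_grad(w, c):
    return 2.0 * (w - c)

rng = np.random.default_rng(0)
tasks = rng.normal(size=(16, 3))      # one row per task optimum c_i
w = np.zeros(3)                       # meta-parameters (shared init)
alpha, beta = 0.1, 0.05               # inner / outer step sizes

for epoch in range(200):
    meta_grad = np.zeros_like(w)
    for c in tasks:
        w_adapted = w - alpha * task_grad(w, c)   # inner loop: one adaptation step
        meta_grad += task_grad(w_adapted, c)      # first-order outer gradient
    w -= beta * meta_grad / len(tasks)            # outer loop: meta-update
```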
Rethinking gradient sparsification as total error minimization
Gradient compression is a widely-established remedy to tackle the communication
bottleneck in distributed training of large deep neural networks (DNNs). Under the error …
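The error being minimized is the residual that sparsification discards each step. Under error feedback that residual is accumulated and re-injected before the next compression rather than lost, as in this toy loop (the objective and step sizes are illustrative):

```python
# Error-feedback loop for top-k sparsification: the coordinates dropped
# this step are carried into the next step's gradient.
import numpy as np

def topk(v, k):
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

rng = np.random.default_rng(0)
w = rng.normal(size=100)
residual = np.zeros(100)
for step in range(50):
    grad = 2 * w                      # toy objective ||w||^2
    corrected = grad + residual       # re-inject previously discarded error
    sparse = topk(corrected, k=10)    # only these 10 coordinates are sent
    residual = corrected - sparse     # new compression error, kept locally
    w -= 0.05 * sparse
```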
Using highly compressed gradients in federated learning for data reconstruction attacks
H Yang, M Ge, K Xiang, J Li - IEEE Transactions on Information …, 2022 - ieeexplore.ieee.org
Federated learning (FL) preserves data privacy by exchanging gradients instead of local
training data. However, these private data can still be reconstructed from the exchanged …
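A minimal gradient-matching sketch in the spirit of deep-leakage attacks, assuming PyTorch: the attacker optimizes dummy inputs and soft labels until their gradient matches the one the victim shared. This illustrates the attack family in general, not the cited paper's specific method for highly compressed gradients.

```python
# Gradient-matching reconstruction: fit dummy data whose gradient on the
# shared model matches the victim's exchanged gradient. Illustrative only.
import torch

torch.manual_seed(0)
model = torch.nn.Linear(8, 2)
loss_fn = torch.nn.CrossEntropyLoss()

# Victim computes and shares a gradient on private data.
x_true = torch.randn(1, 8)
y_true = torch.tensor([1])
true_grads = torch.autograd.grad(loss_fn(model(x_true), y_true),
                                 model.parameters())

# Attacker optimizes dummy input and soft labels to match that gradient.
x_fake = torch.randn(1, 8, requires_grad=True)
y_fake = torch.randn(1, 2, requires_grad=True)
opt = torch.optim.LBFGS([x_fake, y_fake])

def closure():
    opt.zero_grad()
    fake_loss = loss_fn(model(x_fake), torch.softmax(y_fake, dim=-1))
    fake_grads = torch.autograd.grad(fake_loss, model.parameters(),
                                     create_graph=True)
    diff = sum(((fg - tg) ** 2).sum() for fg, tg in zip(fake_grads, true_grads))
    diff.backward()
    return diff

for _ in range(30):
    opt.step(closure)   # x_fake now approximates the private input
```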
A survey on optimization techniques for edge artificial intelligence (AI)
Artificial Intelligence (AI) models are being produced and used to solve a variety of current
and future business and technical problems. Therefore, AI model engineering processes …
Federated learning meets remote sensing
Remote sensing (RS) imagery provides invaluable insights into characterizing the Earth's
land surface within the scope of Earth observation (EO). Technological advances in capture …