[HTML] Distributed artificial intelligence: Taxonomy, review, framework, and reference architecture

N Janbi, I Katib, R Mehmood - Intelligent Systems with Applications, 2023 - Elsevier
Artificial intelligence (AI) research and market have grown rapidly in the last few years, and
this trend is expected to continue with many potential advancements and innovations in this …

Parallelizing DNN training on GPUs: Challenges and opportunities

W Xu, Y Zhang, X Tang - … Proceedings of the Web Conference 2021, 2021 - dl.acm.org
In recent years, Deep Neural Networks (DNNs) have emerged as a widely adopted
approach in many application domains. Training DNN models is also becoming a significant …

Sparse training theory for scalable and efficient agents

DC Mocanu, E Mocanu, T Pinto, S Curci… - arXiv preprint arXiv …, 2021 - arxiv.org
A fundamental task for artificial intelligence is learning. Deep Neural Networks have proven
to cope perfectly with all learning paradigms, i.e., supervised, unsupervised, and …

Truly sparse neural networks at scale

S Curci, DC Mocanu, M Pechenizkiy - arXiv preprint arXiv:2102.01732, 2021 - arxiv.org
Recently, sparse training methods have started to be established as a de facto approach for
training and inference efficiency in artificial neural networks. Yet, this efficiency is just in …

Distributed artificial intelligence: review, taxonomy, framework, and reference architecture

N Janbi, I Katib, R Mehmood - Taxonomy, Framework, and …, 2023 - papers.ssrn.com
Artificial intelligence (AI) research and market have grown rapidly in the last few years, and
this trend is expected to continue with many potential advancements and innovations in this …

An Integrated Approach of Efficient Edge Task Offloading Using Deep RL, Attention and MDS Techniques

Priyadarshni, P Kumar, D Kadavala, S Tripathi… - SN Computer …, 2024 - Springer
In Distributed Computation Optimization (DCO) networks, where clients distribute
computational jobs among heterogeneous helpers with different capacities and pricing …

Rethinking Class-incremental Learning in the Era of Large Pre-trained Models via Test-Time Adaptation

IE Marouf, S Roy, E Tartaglione… - arXiv preprint arXiv …, 2023 - arxiv.org
Class-incremental learning (CIL) is a challenging task that involves sequentially learning to
categorize classes from new tasks without forgetting previously learned information. The …

[PDF] Large Scale Sparse Neural Networks

S Curci - research.tue.nl
Nowadays, deep learning is a current and stimulating area of machine learning, and it has
proven to be a successful tool in numerous fields. Nevertheless, the scalability of deep …

Distributed Sparse Computing and Communication for Big Graph Analytics and Deep Learning

M Hasanzadeh Mofrad - 2021 - d-scholarship.pitt.edu
Sparsity can be found in the underlying structure of many real-world computationally
expensive problems including big graph analytics and large scale sparse deep neural …

[PDF] Distributed Sparse Computing and Communication for Big Graph Analytics

MH Mofrad - 2020 - people.cs.pitt.edu
The current disruptive state of High Performance Computing (HPC) and Cloud computing is
made possible by emerging CPU and GPU architectures [18, 95], parallel processing …