Throughput maximization of delay-aware DNN inference in edge computing by exploring DNN model partitioning and inference parallelism

J Li, W Liang, Y Li, Z Xu, X Jia… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Mobile Edge Computing (MEC) has emerged as a promising paradigm catering to
overwhelming explosions of mobile applications, by offloading compute-intensive tasks to …

An eight-core RISC-V processor with compute near last level cache in Intel 4 CMOS

GK Chen, PC Knag, C Tokunaga… - IEEE Journal of Solid …, 2022 - ieeexplore.ieee.org
An eight-core 64-b processor extends RISC-V to perform multiply–accumulate (MAC) within
the shared last level cache (LLC). Instead of moving data from the LLC to the core, compute …

Delay-aware DNN inference throughput maximization in edge computing via jointly exploring partitioning and parallelism

J Li, W Liang, Y Li, Z Xu, X Jia - 2021 IEEE 46th Conference on …, 2021 - ieeexplore.ieee.org
Mobile Edge Computing (MEC) has emerged as a promising paradigm catering to
overwhelming explosions of mobile applications, by offloading the compute-intensive tasks …

Energy-Efficient DNN Partitioning and Offloading for Task Completion Rate Maximization in Multiuser Edge Intelligence

X Tian, P Xu, H Gu, H Meng - Wireless Communications and …, 2023 - Wiley Online Library
Deep Neural Network (DNN) has become an essential technology for edge intelligence.
Due to significant resource and energy requirements for large-scale DNNs' inference …

Virtual Service Provisioning for Internet of Things Applications in Mobile Edge Computing

J Li - 2022 - search.proquest.com
The Internet of Things (IoT) paradigm is paving the way for many new emerging
technologies, such as smart grid, industry 4.0, connected cars, smart cities, etc. Mobile Edge …