Learning-aided computation offloading for trusted collaborative mobile edge computing
Cooperative offloading in mobile edge computing enables resource-constrained edge clouds to help each other with computation-intensive tasks. However, the power of such …
Optimization for speculative execution in big data processing clusters
H Xu, WC Lau - IEEE Transactions on Parallel and Distributed …, 2016 - ieeexplore.ieee.org
A big parallel processing job can be delayed substantially as long as one of its many tasks is being assigned to an unreliable or congested machine. To tackle this so-called straggler …
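The idea the snippet alludes to can be pictured with a minimal sketch: if a task's original copy has not finished within some timeout, launch a speculative backup copy and keep whichever finishes first. The `run_task` stub, the 0.5 s threshold, and the simulated service times below are hypothetical placeholders, not the optimized strategy from the paper.

```python
import concurrent.futures
import random
import time

def run_task(task_id, copy):
    """Hypothetical task that occasionally lands on a slow (straggler) machine."""
    duration = random.choice([0.1, 0.1, 0.1, 2.0])
    time.sleep(duration)
    return f"task {task_id} finished by {copy} copy in {duration:.1f}s"

def speculative_run(task_id, threshold=0.5):
    """Launch a backup copy if the original looks like a straggler; keep the winner."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=2)
    original = pool.submit(run_task, task_id, "original")
    done, _ = concurrent.futures.wait([original], timeout=threshold)
    if not done:
        # Original copy exceeded the threshold: speculatively re-execute it.
        backup = pool.submit(run_task, task_id, "backup")
        done, _ = concurrent.futures.wait(
            [original, backup], return_when=concurrent.futures.FIRST_COMPLETED)
    result = next(iter(done)).result()
    pool.shutdown(wait=False)  # do not block on the losing copy
    return result

print(speculative_run(42))
```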
Model-driven computational sprinting
Computational sprinting speeds up query execution by increasing power usage for short bursts. The sprinting policy decides when and how long to sprint. Poor policies inflate response …
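As a rough illustration of what such a policy decides, here is a deliberately simple threshold rule: sprint when the queue is long, for as long as a remaining power budget allows. The thresholds and budget below are made-up numbers; this is only a sketch of the two decisions a policy must make, not the model-driven policy the paper proposes.

```python
from dataclasses import dataclass

@dataclass
class ThresholdSprintPolicy:
    """Hypothetical sprinting policy: answers 'when to sprint' and 'for how long'."""
    queue_threshold: int = 8            # sprint only when this many queries are queued
    max_sprint_seconds: float = 2.0     # longest single sprint burst
    power_budget_seconds: float = 10.0  # total sprint time the power/thermal budget allows

    def decide(self, queue_length: int) -> float:
        """Return the sprint duration in seconds; 0.0 means do not sprint."""
        if queue_length < self.queue_threshold or self.power_budget_seconds <= 0:
            return 0.0
        duration = min(self.max_sprint_seconds, self.power_budget_seconds)
        self.power_budget_seconds -= duration
        return duration

policy = ThresholdSprintPolicy()
print(policy.decide(queue_length=12))  # 2.0 -> sprint
print(policy.decide(queue_length=3))   # 0.0 -> queue is short, stay at normal power
```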
Providing worst-case latency guarantees with collaborative edge servers
X He, S Wang, X Wang - IEEE Transactions on Mobile …, 2021 - ieeexplore.ieee.org
Mobile Edge Computing (MEC) is a promising computing paradigm that provides cloud computing services in proximity to end users. Due to the bursty and spatially imbalanced …
Holistic workload scaling: a new approach to compute acceleration in the cloud
Workload scaling is an approach to accelerating computation and thus improving response times by replicating the exact same request multiple times and processing it in parallel on …
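A minimal sketch of the mechanism described here: issue identical copies of a request to several servers in parallel and keep the first response. The replica names and the `query_replica` stub are hypothetical; a real deployment would also cancel or account for the slower copies.

```python
import concurrent.futures
import random
import time

REPLICAS = ["server-a", "server-b", "server-c"]  # hypothetical compute nodes

def query_replica(replica, request):
    """Stand-in for sending the request to one replica; service time varies."""
    time.sleep(random.uniform(0.05, 0.5))
    return f"{replica} answered {request!r}"

def replicated_request(request, fanout=3):
    """Replicate the exact same request and return the first answer to arrive."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=fanout)
    futures = [pool.submit(query_replica, r, request) for r in REPLICAS[:fanout]]
    done, _ = concurrent.futures.wait(
        futures, return_when=concurrent.futures.FIRST_COMPLETED)
    pool.shutdown(wait=False)  # slower copies are simply discarded
    return next(iter(done)).result()

print(replicated_request("render thumbnail for item 17"))
```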
Optimized speculative execution strategy for different workload levels in heterogeneous Spark cluster
X Huang, C Li, Y Luo - Proceedings of the 4th International Conference …, 2019 - dl.acm.org
Spark is a big data processing framework based on MapReduce, whose calculation model requires that all tasks in all parent stages are completed before starting a new stage …
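The consequence of that barrier is easy to see with hypothetical numbers: a stage finishes only when its slowest task does, so a single straggler in a parent stage delays every downstream stage.

```python
# Hypothetical task completion times (seconds) for one parent stage.
parent_stage_task_times = [1.2, 1.1, 1.3, 9.8]  # the 9.8 s task is a straggler

# The next stage cannot start before the slowest parent task finishes.
stage_finish = max(parent_stage_task_times)
print(stage_finish)  # 9.8
```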
Power of redundancy: Designing partial replication for multi-tier applications
Replicating redundant requests has been shown to be an effective mechanism to defend application performance from high capacity variability, the common pitfall in the cloud. While …
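A back-of-the-envelope view of why redundancy helps, under the (strong) assumption that replicas slow down independently: if one copy misses the latency target with probability p, then d identical copies all miss it with probability p**d. The value of p below is made up for illustration.

```python
# Hypothetical: one replica violates the latency target with probability p.
# With d independent copies, the request misses the target only if all copies do.
p = 0.10
for d in (1, 2, 3):
    print(f"copies={d}: miss probability = {p ** d:.4f}")
# copies=1: 0.1000, copies=2: 0.0100, copies=3: 0.0010
```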
Dual scaling VMs and queries: Cost-effective latency curtailment
Wimpy virtual instances equipped with small numbers of cores and RAM are popular public and private cloud offerings because of their low cost for hosting applications. The challenge …
Differential approximation and sprinting for multi-priority big data engines
Today's big data clusters based on the MapReduce paradigm are capable of executing analysis jobs with multiple priorities, providing differential latency guarantees. Traces from …
sPARE: Partial replication for multi-tier applications in the cloud
Offering consistent low latency remains a key challenge for distributed applications, especially when deployed on the cloud where virtual machines (VMs) suffer from capacity …