A survey on virtual machine migration: Challenges, techniques, and open issues
When users flood into cloud data centers, how to efficiently manage hardware resources and
virtual machines (VMs) in a data center to both lower economic cost and ensure a high …
A large-scale analysis of hundreds of in-memory key-value cache clusters at Twitter
Modern web services use in-memory caching extensively to increase throughput and reduce
latency. There have been several workload analyses of production systems that have fueled …
Performance benchmarking and optimizing hyperledger fabric blockchain platform
The rise in popularity of permissioned blockchain platforms in recent times is significant.
Hyperledger Fabric is one such permissioned blockchain platform and one of the …
FaasCache: keeping serverless computing alive with greedy-dual caching
Functions as a Service (also called serverless computing) promises to revolutionize how
applications use cloud resources. However, functions suffer from cold-start problems due to …
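To make the titled idea concrete, here is a minimal sketch of a greedy-dual keep-alive policy in the spirit of FaasCache: warm containers are scored as clock + frequency × cold-start cost / memory, and the lowest-scoring container is terminated when memory runs short. The class names, fields, and units are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of greedy-dual keep-alive for serverless containers.
# Priority follows the classic Greedy-Dual-Size-Frequency form:
#   priority = clock + freq * init_cost / memory
# All identifiers here are illustrative, not FaasCache's actual API.

class WarmContainer:
    def __init__(self, fn_id, init_cost_ms, mem_mb):
        self.fn_id = fn_id
        self.init_cost_ms = init_cost_ms   # cold-start cost this container avoids
        self.mem_mb = mem_mb
        self.freq = 0
        self.priority = 0.0

class GreedyDualKeepAlive:
    def __init__(self, mem_budget_mb):
        self.mem_budget_mb = mem_budget_mb
        self.used_mb = 0
        self.clock = 0.0                   # inflation term, rises on each eviction
        self.pool = {}                     # fn_id -> WarmContainer

    def on_invoke(self, fn_id, init_cost_ms, mem_mb):
        c = self.pool.get(fn_id)
        if c is None:                      # cold start: admit a new warm container
            self._make_room(mem_mb)
            c = WarmContainer(fn_id, init_cost_ms, mem_mb)
            self.pool[fn_id] = c
            self.used_mb += mem_mb
        c.freq += 1
        c.priority = self.clock + c.freq * c.init_cost_ms / c.mem_mb
        return c

    def _make_room(self, needed_mb):
        while self.used_mb + needed_mb > self.mem_budget_mb and self.pool:
            victim = min(self.pool.values(), key=lambda c: c.priority)
            self.clock = victim.priority   # greedy-dual clock update
            self.used_mb -= victim.mem_mb
            del self.pool[victim.fn_id]    # terminate the least valuable container
```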
A comprehensive review on edge caching from the perspective of total process: Placement, policy and delivery
H Wu, Y Fan, Y Wang, H Ma, L Xing - Sensors, 2021 - mdpi.com
With the explosive growth of smart devices and mobile applications, mobile core networks
face the challenge of exponential growth in traffic and computing demand. Edge caching is …
Apparatus, system, and method for accessing memory
N Talagala, D Flynn - US Patent 9,208,071, 2015 - Google Patents
Apparatuses, systems, methods, and computer program products are disclosed for providing
access to auto-commit memory. An auto-commit memory module is configured to cause a …
High performance cache replacement using re-reference interval prediction (RRIP)
A Jaleel, KB Theobald, SC Steely Jr… - ACM SIGARCH computer …, 2010 - dl.acm.org
Practical cache replacement policies attempt to emulate optimal replacement by predicting
the re-reference interval of a cache block. The commonly used LRU replacement policy …
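A brief sketch of Static RRIP (SRRIP) for a single cache set helps make the re-reference-interval idea concrete: each line keeps an M-bit prediction value, a hit predicts a near-immediate re-reference, an insertion predicts a long one, and eviction targets a line predicted to be re-referenced in the distant future. The parameter choices and the simplified set model below are illustrative assumptions.

```python
# Hedged sketch of Static RRIP (SRRIP) for one set of a set-associative cache.
# Each line carries an M-bit re-reference prediction value (RRPV).

class SRRIPSet:
    def __init__(self, ways, m_bits=2):
        self.ways = ways
        self.max_rrpv = (1 << m_bits) - 1          # "distant" re-reference
        self.tags = [None] * ways
        self.rrpv = [self.max_rrpv] * ways

    def access(self, tag):
        # Hit: predict a near-immediate re-reference.
        for i in range(self.ways):
            if self.tags[i] == tag:
                self.rrpv[i] = 0
                return True
        # Miss: find (or create) a victim predicted "distant".
        while True:
            for i in range(self.ways):
                if self.tags[i] is None or self.rrpv[i] == self.max_rrpv:
                    self.tags[i] = tag
                    self.rrpv[i] = self.max_rrpv - 1   # insert with "long" interval
                    return False
            # No distant line found: age every line and retry.
            self.rrpv = [v + 1 for v in self.rrpv]
```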
FIFO queues are all you need for cache eviction
As a cache eviction algorithm, FIFO has a lot of attractive properties, such as simplicity,
speed, scalability, and flash-friendliness. The most prominent criticism of FIFO is its low …
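For reference, a plain FIFO eviction cache looks like the sketch below; it illustrates the simplicity and cheap hit path the abstract highlights, and does not reproduce the additional queue structure the paper builds on top of FIFO.

```python
# Hedged sketch of plain FIFO eviction, the baseline discussed in the abstract.

from collections import OrderedDict

class FIFOCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()         # insertion order == eviction order

    def get(self, key):
        # No reordering on a hit: hits are cheap and easy to scale.
        return self.store.get(key)

    def put(self, key, value):
        if key in self.store:
            self.store[key] = value        # update in place, keep queue position
            return
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False) # evict the oldest insertion
        self.store[key] = value
```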
Adaptive insertion policies for high performance caching
The commonly used LRU replacement policy is susceptible to thrashing for memory-
intensive workloads that have a working set greater than the available cache size. For such …
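A minimal sketch of bimodal insertion (one member of the adaptive-insertion family) shows the core idea: most misses are inserted at the LRU position so a thrashing working set cannot flush the cache, and only a small fraction are inserted at MRU to preserve some recency. The epsilon value and the list-based set model are assumptions for illustration.

```python
# Hedged sketch of Bimodal Insertion (BIP) for one cache set.

import random

class BIPCacheSet:
    def __init__(self, ways, epsilon=1 / 32):
        self.ways = ways
        self.epsilon = epsilon
        self.lines = []                    # index 0 = MRU, last index = LRU

    def access(self, tag):
        if tag in self.lines:
            self.lines.remove(tag)         # hit: promote to MRU as usual
            self.lines.insert(0, tag)
            return True
        if len(self.lines) >= self.ways:
            self.lines.pop()               # evict the LRU line
        if random.random() < self.epsilon:
            self.lines.insert(0, tag)      # rare MRU insertion
        else:
            self.lines.append(tag)         # default: insert at LRU position
        return False
```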
AdaptSize: Orchestrating the Hot Object Memory Cache in a Content Delivery Network
Most major content providers use content delivery networks (CDNs) to serve web and video
content to their users. A CDN is a large distributed system of servers that caches and …