A large-scale analysis of hundreds of in-memory key-value cache clusters at Twitter

J Yang, Y Yue, KV Rashmi - ACM Transactions on Storage (TOS), 2021 - dl.acm.org
Modern web services use in-memory caching extensively to increase throughput and reduce
latency. There have been several workload analyses of production systems that have fueled …

FIFO queues are all you need for cache eviction

J Yang, Y Zhang, Z Qiu, Y Yue, R Vinayak - Proceedings of the 29th …, 2023 - dl.acm.org
As a cache eviction algorithm, FIFO has a lot of attractive properties, such as simplicity,
speed, scalability, and flash-friendliness. The most prominent criticism of FIFO is its low …
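The FIFO policy discussed in this paper can be illustrated with a minimal sketch (a generic FIFO cache, not the paper's implementation): the oldest-inserted object is evicted first, and hits do not reorder entries, which is what makes FIFO cheap and scalable compared to LRU.

```python
from collections import OrderedDict

class FIFOCache:
    """Minimal FIFO eviction sketch: evict the oldest-inserted key.
    Unlike LRU, a hit does not change an object's position."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        # FIFO: lookups do not reorder entries (no per-hit bookkeeping)
        return self.store.get(key)

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # evict in insertion order
        self.store[key] = value

cache = FIFOCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")        # hit, but does not protect "a" from eviction
cache.put("c", 3)     # evicts "a" (oldest insertion), not "b"
print(cache.get("a"))  # → None
print(cache.get("b"))  # → 2
```

The absence of reordering on hits is also the source of the "low hit ratio" criticism the abstract mentions: recently reused objects get no protection.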

AdaptSize: Orchestrating the Hot Object Memory Cache in a Content Delivery Network

DS Berger, RK Sitaraman… - 14th USENIX Symposium …, 2017 - usenix.org
Most major content providers use content delivery networks (CDNs) to serve web and video
content to their users. A CDN is a large distributed system of servers that caches and …

Learning relaxed Belady for content distribution network caching

Z Song, DS Berger, K Li, A Shaikh, W Lloyd… - … USENIX Symposium on …, 2020 - usenix.org

Accelerometer: Understanding acceleration opportunities for data center overheads at hyperscale

A Sriraman, A Dhanotia - Proceedings of the Twenty-Fifth International …, 2020 - dl.acm.org
At global user population scale, important microservices in warehouse-scale data centers
can grow to account for an enormous installed base of servers. With the end of Dennard …

GL-Cache: Group-level learning for efficient and high-performance caching

J Yang, Z Mao, Y Yue, KV Rashmi - 21st USENIX Conference on File …, 2023 - usenix.org
Web applications rely heavily on software caches to achieve low-latency, high-throughput
services. To adapt to changing workloads, three types of learned caches (learned evictions) …

RobinHood: Tail Latency Aware Caching -- Dynamic Reallocation from Cache-Rich to Cache-Poor

DS Berger, B Berg, T Zhu, S Sen… - 13th USENIX Symposium …, 2018 - usenix.org
Tail latency is of great importance in user-facing web services. However, maintaining low tail
latency is challenging, because a single request to a web application server results in …

Segcache: a memory-efficient and scalable in-memory key-value cache for small objects

J Yang, Y Yue, R Vinayak - 18th USENIX Symposium on Networked …, 2021 - usenix.org
Modern web applications heavily rely on in-memory key-value caches to deliver low-latency,
high-throughput services. In-memory caches store small objects of size in the range of 10s to …

FairyWREN: A Sustainable Cache for Emerging Write-Read-Erase Flash Interfaces

S McAllister, B Berg, DS Berger… - … USENIX Symposium on …, 2024 - usenix.org
Datacenters need to reduce embodied carbon emissions, particularly for flash, which
accounts for 40% of embodied carbon in servers. However, decreasing flash's embodied …

Didacache: an integration of device and application for flash-based key-value caching

Z Shen, F Chen, Y Jia, Z Shao - ACM Transactions on Storage (TOS), 2018 - dl.acm.org
Key-value caching is crucial to today's low-latency Internet services. Conventional key-value
cache systems, such as Memcached, heavily rely on expensive DRAM memory. To lower …