TTL-based Cloud Caches

Bibliographic Details
Published in: IEEE INFOCOM 2019 - IEEE Conference on Computer Communications, pp. 685-693
Main Authors: Carra, Damiano, Neglia, Giovanni, Michiardi, Pietro
Format: Conference Proceeding
Language: English
Published: IEEE, 01-04-2019
Description
Summary: We consider in-memory key-value stores used as caches, and their elastic provisioning in the cloud. The cost associated with such caches includes not only the storage cost but also the cost due to misses: in fact, the cache miss ratio has a direct impact on the performance perceived by end users, and this directly affects the overall revenue of content providers. Our aim is to dynamically adapt the number of caches to the traffic pattern, so as to minimize the overall cost. We present a dynamic algorithm for TTL caches whose goal is to obtain close-to-minimal costs. We then propose a practical implementation with limited computational complexity: our scheme requires constant overhead per request, independently of the cache size. Using real-world traces collected from the Akamai content delivery network, we show that our solution achieves significant cost savings, especially in highly dynamic settings that are likely to require elastic cloud services.
ISSN: 2641-9874
DOI: 10.1109/INFOCOM.2019.8737546
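
The summary stresses that the proposed scheme keeps the per-request overhead constant regardless of cache size. The sketch below is only an illustration of that general idea, not the algorithm from the paper: it shows a minimal TTL cache whose single TTL value is shortened on hits and lengthened on misses, so that storage and miss costs are traded off with constant work per request. The class name, the adjustment rule, and all parameters are assumptions made for this example; expired entries are simply overwritten on re-access, and a real deployment would also sweep them out periodically.

import time


class AdaptiveTTLCache:
    """Hypothetical TTL cache sketch (not the paper's algorithm).

    Each entry expires `ttl` seconds after its last refresh. The single
    TTL parameter is nudged down on hits and up on misses, roughly
    trading storage cost against miss cost with constant work per request.
    """

    def __init__(self, initial_ttl=60.0, step=0.05,
                 min_ttl=1.0, max_ttl=3600.0):
        self.ttl = initial_ttl       # current TTL in seconds (assumed starting point)
        self.step = step             # multiplicative adjustment per request
        self.min_ttl = min_ttl
        self.max_ttl = max_ttl
        self._store = {}             # key -> (value, expiry timestamp)

    def get(self, key, fetch_fn):
        """Return the value for `key`, fetching it via `fetch_fn` on a miss.

        Per-request work is a dictionary lookup plus a constant-time TTL
        update, independent of how many objects are cached.
        """
        now = time.monotonic()
        entry = self._store.get(key)
        if entry is not None and entry[1] > now:
            # Hit: probe a slightly shorter TTL to see if storage can be saved.
            self.ttl = max(self.min_ttl, self.ttl * (1.0 - self.step))
            value = entry[0]
        else:
            # Miss (absent or expired): pay the miss, then lengthen the TTL
            # so future requests are more likely to hit.
            self.ttl = min(self.max_ttl, self.ttl * (1.0 + self.step))
            value = fetch_fn(key)    # fetch from the origin / backend
        # Refresh the entry with the current TTL on every access.
        self._store[key] = (value, now + self.ttl)
        return value


# Example: serve a small request trace through the cache.
cache = AdaptiveTTLCache(initial_ttl=30.0)
for k in ["a", "b", "a", "c", "a"]:
    cache.get(k, fetch_fn=lambda key: f"value-of-{key}")
print(f"TTL after the trace: {cache.ttl:.1f}s")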