ETICA: Efficient Two-Level I/O Caching Architecture for Virtualized Platforms
dc.contributor.author
Ahmadian, Saba
dc.contributor.author
Salkhordeh, Reza
dc.contributor.author
Mutlu, Onur
dc.contributor.author
Asadi, Hossein
dc.date.accessioned
2021-05-07T05:30:45Z
dc.date.available
2021-05-05T04:06:44Z
dc.date.available
2021-05-07T05:30:45Z
dc.date.issued
2021-10-01
dc.identifier.issn
1045-9219
dc.identifier.issn
1558-2183
dc.identifier.issn
2161-9883
dc.identifier.other
10.1109/TPDS.2021.3066308
en_US
dc.identifier.uri
http://hdl.handle.net/20.500.11850/482494
dc.description.abstract
In recent years, the increased I/O demand of Virtual Machines (VMs) in large-scale data centers and cloud computing has encouraged system architects to design high-performance storage systems. One common approach to improving performance is to employ fast storage devices such as Solid-State Drives (SSDs) as an I/O caching layer for slower storage devices. SSDs provide high performance, especially on random requests, but they also have limited endurance: they support only a limited number of write operations and can therefore wear out relatively quickly. In addition to the write requests generated by applications, each read miss in the SSD cache is served at the cost of a write operation to the SSD (to copy the data block into the cache), resulting in an even larger number of writes to the SSD. Previous I/O caching schemes on virtualized platforms only partially mitigate the endurance limitations of SSD-based I/O caches; they mainly focus on assigning efficient cache write policies and cache space to the VMs. Moreover, existing cache space allocation schemes are inefficient: they do not take into account the impact of the cache write policy in the reuse distance calculation of the running workloads and hence reserve cache blocks for accesses that would not be served by the cache. In this article, we propose an Efficient Two-Level I/O Caching Architecture (ETICA) for virtualized platforms that can significantly improve I/O latency, endurance, and cost (in terms of cache size) while preserving the reliability of write-pending data blocks. As opposed to previous one-level I/O caching schemes in virtualized platforms, our proposed architecture 1) provides two levels of cache by employing both Dynamic Random-Access Memory (DRAM) and SSD in the I/O caching layer of virtualized platforms and 2) effectively partitions the cache space between running VMs to achieve maximum performance and minimum cache size. 
To manage the two-level cache, unlike previous reuse distance calculation schemes such as Useful Reuse Distance (URD), which only consider the request type and neglect the impact of the cache write policy, we propose a new metric, Policy Optimized reuse Distance (POD). The key idea of POD is to effectively calculate the reuse distance and estimate the amount of two-level DRAM+SSD cache space to allocate by considering both 1) the request type and 2) the cache write policy. Doing so results in enhanced performance and reduced cache size due to the allocation of cache blocks only for the requests that would be served by the I/O cache. ETICA maintains the reliability of write-pending data blocks and improves performance by 1) assigning an effective and fixed write policy at each level of the I/O cache hierarchy and 2) employing effective promotion and eviction methods between cache levels. Our extensive experiments conducted with a real implementation of the proposed two-level storage caching architecture show that ETICA provides 45 percent higher performance compared to the state-of-the-art caching schemes in virtualized platforms, while improving both cache size and SSD endurance by 51.7 and 33.8 percent, respectively.
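To illustrate the idea behind POD described in the abstract, the following is a minimal, hypothetical sketch of a policy-aware reuse distance calculation. It is not the paper's actual algorithm: the trace format, the `cacheable` predicate modeling the write policy, and all names are illustrative assumptions. The point it demonstrates is that accesses the cache would never serve (e.g., writes under a read-only caching policy) are skipped, so they neither receive a reuse distance nor inflate the distances of other blocks.

```python
def reuse_distances(trace, cacheable):
    """Compute LRU stack (reuse) distances over a block access trace,
    counting only accesses that the cache would actually serve.

    trace:     list of (block_id, op) tuples, op in {"R", "W"}  (assumed format)
    cacheable: predicate op -> bool, a stand-in for the cache write policy
    returns:   dict block_id -> list of reuse distances (number of distinct
               cacheable blocks accessed between consecutive reuses)
    """
    stack = []   # distinct blocks in LRU order, most recently used last
    dists = {}
    for block, op in trace:
        if not cacheable(op):
            continue  # this access bypasses the cache under the policy; ignore it
        if block in stack:
            pos = stack.index(block)
            # distinct cacheable blocks touched since the last use of `block`
            dists.setdefault(block, []).append(len(stack) - pos - 1)
            stack.pop(pos)
        stack.append(block)
    return dists
```

For example, on the trace `[("a","R"), ("b","W"), ("a","R")]`, a policy-agnostic calculation (everything cacheable) gives block `a` a reuse distance of 1, while a read-only policy (`cacheable = lambda op: op == "R"`) skips the intervening write and gives `a` a distance of 0, i.e., a smaller cache would suffice to capture the reuse.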
en_US
dc.language.iso
en
en_US
dc.publisher
IEEE
en_US
dc.title
ETICA: Efficient Two-Level I/O Caching Architecture for Virtualized Platforms
en_US
dc.type
Journal Article
ethz.journal.title
IEEE Transactions on Parallel and Distributed Systems
ethz.journal.volume
32
en_US
ethz.journal.issue
10
en_US
ethz.journal.abbreviated
IEEE Trans. Parallel Distrib. Syst.
ethz.pages.start
2415
en_US
ethz.pages.end
2433
en_US
ethz.identifier.wos
ethz.identifier.scopus
ethz.publication.place
New York, NY
en_US
ethz.publication.status
published
en_US
ethz.leitzahl
ETH Zürich::00002 - ETH Zürich::00012 - Lehre und Forschung::00007 - Departemente::02140 - Dep. Inf.technologie und Elektrotechnik / Dep. of Inform.Technol. Electrical Eng.::09483 - Mutlu, Onur / Mutlu, Onur
ethz.leitzahl.certified
ETH Zürich::00002 - ETH Zürich::00012 - Lehre und Forschung::00007 - Departemente::02140 - Dep. Inf.technologie und Elektrotechnik / Dep. of Inform.Technol. Electrical Eng.::09483 - Mutlu, Onur / Mutlu, Onur
ethz.date.deposited
2021-05-05T04:06:53Z
ethz.source
SCOPUS
ethz.eth
yes
en_US
ethz.availability
Metadata only
en_US
ethz.rosetta.installDate
2021-05-07T05:30:54Z
ethz.rosetta.lastUpdated
2022-03-29T07:08:37Z
ethz.rosetta.versionExported
true
ethz.COinS
ctx_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.atitle=ETICA:%20Efficient%20Two-Level%20I/O%20Caching%20Architecture%20for%20Virtualized%20Platforms&rft.jtitle=IEEE%20Transactions%20on%20Parallel%20and%20Distributed%20Systems&rft.date=2021-10-01&rft.volume=32&rft.issue=10&rft.spage=2415&rft.epage=2433&rft.issn=1045-9219&1558-2183&2161-9883&rft.au=Ahmadian,%20Saba&Salkhordeh,%20Reza&Mutlu,%20Onur&Asadi,%20Hossein&rft.genre=article&rft_id=info:doi/10.1109/TPDS.2021.3066308&