The Department of Computer Science at the University of Cyprus cordially invites you to the Colloquium entitled:
High Performance Cache Management Policies for Addressing the Memory Wall on Chip-Multiprocessors
Speaker: Dr. Aamer Jaleel
Increasing on-chip cache sizes and the widespread use of shared caches in CMPs have revived cache management as a hot research topic in both industry and academia. This talk focuses on improving cache performance by describing the cache management problem in a novel framework called Re-Reference Interval Prediction (RRIP). The first part of the talk aims at improving the performance of the last-level cache (LLC). In this portion of the talk, we use RRIP to address the drawbacks of the commonly used LRU replacement policy. LRU replacement performs poorly when the application working-set size is larger than the available cache or when applications have frequent bursts of references to non-temporal data (called scans). To improve the performance of such applications, we propose Static RRIP (SRRIP) and Dynamic RRIP (DRRIP). We show that SRRIP and DRRIP do not require changes to the existing cache design, have insignificant hardware overhead, and can easily be integrated into the existing cache designs of modern high-performance processors. The next part of the talk focuses not just on improving LLC performance but also on improving the performance of a multi-level cache hierarchy. In particular, we focus on improving the performance of an inclusive cache hierarchy. Inclusive caches are commonly used by microprocessors to simplify cache coherence. However, the trade-off has been lower performance compared to non-inclusive and exclusive caches. Contrary to conventional wisdom, we show that the limited performance of inclusive caches is due to inclusion victims (lines that are evicted from the core caches to satisfy the inclusion property) and not to the reduced cache capacity of the hierarchy caused by the duplication of data. These inclusion victims are incorrectly chosen for replacement because the last-level cache (LLC) is unaware of the temporal locality of lines in the core caches.
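To make the RRIP idea above concrete, the following is a minimal, illustrative sketch of SRRIP for a single cache set, assuming 2-bit re-reference prediction values (RRPVs). The class and method names are assumptions for illustration, not taken from the talk: a hit predicts near-immediate re-reference (RRPV = 0), an insertion predicts a long re-reference interval, and eviction picks a line predicted to be re-referenced in the distant future, aging all lines when none qualifies.

```python
# Minimal SRRIP sketch (single set, 2-bit RRPVs). Illustrative only;
# names (SRRIPSet, access) are assumptions, not from the talk.

LONG = 2     # RRPV assigned on insertion: 2^M - 2 with M = 2 bits
DISTANT = 3  # RRPV marking an eviction candidate: 2^M - 1


class SRRIPSet:
    def __init__(self, ways):
        self.lines = {}   # tag -> RRPV
        self.ways = ways

    def access(self, tag):
        """Access a line; return True on hit, False on miss."""
        if tag in self.lines:
            self.lines[tag] = 0          # hit: near-immediate re-reference
            return True
        if len(self.lines) == self.ways:
            # Evict a line with a distant prediction; age all lines if none.
            while True:
                victim = next((t for t, v in self.lines.items()
                               if v == DISTANT), None)
                if victim is not None:
                    del self.lines[victim]
                    break
                for t in self.lines:
                    self.lines[t] += 1
        self.lines[tag] = LONG           # insert with a long prediction
        return False
```

Note how this resists scans: a scanned line enters with a long re-reference prediction and ages out before it can displace a line that has actually hit, whereas LRU would promote every scanned line to most-recently-used.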
We propose Temporal Locality Aware (TLA) cache management policies to allow an inclusive LLC to be aware of the temporal locality of lines in the core caches. We propose three TLA policies: Temporal Locality Hints (TLH), Early Core Invalidation (ECI), and Query Based Selection (QBS). We show that all three improve the performance of inclusive caches without requiring any additional hardware structures. In fact, QBS performs similarly to a non-inclusive cache hierarchy.
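The QBS idea above can be sketched as follows, under assumed simplifications: before evicting, the inclusive LLC queries the core caches and skips any victim candidate that a core cache still holds, since back-invalidating such a line would create an inclusion victim. The function and parameter names are hypothetical, chosen for illustration.

```python
# Illustrative Query Based Selection (QBS) sketch. Hypothetical names;
# candidates are LLC victim candidates ordered LRU-first, and
# core_cache_tags models the tags currently resident in the core caches.

def qbs_select_victim(candidates, core_cache_tags):
    """Return the first candidate not resident in any core cache."""
    for tag in candidates:
        if tag not in core_cache_tags:
            # Core caches report the line absent: evicting it cannot
            # create an inclusion victim.
            return tag
    # Every candidate is still cached by a core; fall back to LRU.
    return candidates[0]
```

The design choice here is that a line's presence in a core cache serves as an implicit temporal-locality signal to the LLC, which is exactly the information a conventional inclusive LLC lacks.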
Aamer Jaleel is a member of the VSSAD group at Intel Massachusetts Inc. Aamer's research interests include cache/memory system design, parallel architectures, micro-architecture, performance modeling, and workload characterization. While at Intel, Aamer's research has contributed to enhancements in performance modeling and improvements in the design of next-generation Intel microprocessors. Aamer received his Ph.D. in Electrical Engineering from the University of Maryland, College Park in 2005.
|Mailing List: https://listserv.cs.ucy.ac.cy/mailman/listinfo/cs-colloquium|
|Sponsor: The CS Colloquium Series is supported by a generous donation from