ELF: maximizing memory-level parallelism for GPUs with coordinated warp and fetch scheduling


Bibliographic Details
Published in: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1-12
Main Authors: Park, Jason Jong Kyu; Park, Yongjun; Mahlke, Scott
Format: Conference Proceeding
Language: English
Published: New York, NY, USA: ACM, 15-11-2015
Series:ACM Conferences
Description
Summary: Graphics processing units (GPUs) are increasingly utilized as throughput engines in modern computer systems. GPUs rely on fast context switching between thousands of threads to hide long-latency operations; nevertheless, they still stall on memory operations. To minimize these stalls, memory operations should be overlapped with other operations as much as possible to maximize memory-level parallelism (MLP). In this paper, we propose Earliest Load First (ELF) warp scheduling, which maximizes MLP by giving higher priority to the warps that have the fewest instructions remaining before their next memory load. ELF uses the same warp priority for fetch scheduling so that the two schedulers are coordinated. We also show that ELF delivers its full benefits when memory conflicts and fetch stalls are reduced. Evaluations show that ELF improves performance by 4.1% over commonly used greedy-then-oldest scheduling, and by 11.9% in total when combined with other techniques.
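
The scheduling heuristic in the abstract can be illustrated with a small issue-stage sketch. The C++ below is not from the paper: the Warp fields, select_warp_elf, and the oldest-first tie-break are assumptions used only to show the priority order ELF describes (the ready warp with the fewest instructions remaining before its next memory load issues first). The coordinated fetch scheduler, which the paper drives with the same priority, is omitted here.

#include <cstdint>
#include <vector>

// Hypothetical per-warp state; field names are illustrative, not the paper's.
struct Warp {
    int      id;             // warp identifier (lower id = older warp)
    bool     ready;          // has an issuable instruction this cycle
    uint32_t insts_to_load;  // instructions remaining until the next memory load
};

// ELF-style issue priority: among ready warps, pick the one closest to its
// next load; break ties by age (oldest first), similar to greedy-then-oldest.
const Warp* select_warp_elf(const std::vector<Warp>& warps) {
    const Warp* best = nullptr;
    for (const Warp& w : warps) {
        if (!w.ready) continue;
        if (!best ||
            w.insts_to_load < best->insts_to_load ||
            (w.insts_to_load == best->insts_to_load && w.id < best->id)) {
            best = &w;
        }
    }
    return best;  // nullptr if no warp is ready (the scheduler stalls)
}
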
ISBN: 1450337236; 9781450337236
ISSN: 2167-4337
DOI: 10.1145/2807591.2807598