The performance implications of thread management alternatives for shared-memory multiprocessors

Bibliographic Details
Published in: IEEE Transactions on Computers, Vol. 38, No. 12, pp. 1631-1644
Main Authors: Anderson, T.E., Lazowska, E.D., Levy, H.M.
Format: Journal Article
Language: English
Published: New York, NY: Institute of Electrical and Electronics Engineers (IEEE), 01-12-1989
Description
Summary: An examination is made of the performance implications of several data structure and algorithm alternatives for thread management in shared-memory multiprocessors. Both experimental measurements and analytical model projections are presented. For applications with fine-grained parallelism, small differences in thread management are shown to have significant performance impact, often posing a tradeoff between throughput and latency. Per-processor data structures can be used to improve throughput and, in some circumstances, to avoid locking, improving latency as well. The method used by processors to queue for locks is also shown to affect performance significantly: conventional methods of waiting for a critical resource can substantially degrade performance even with moderate numbers of waiting processors. The authors present an Ethernet-style backoff algorithm that largely eliminates this effect.
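
To make the backoff idea concrete, below is a minimal sketch, in C, of a test-and-set spin lock with Ethernet-style exponential backoff. It illustrates the general technique named in the abstract, not the authors' measured implementation; the constants MIN_DELAY and MAX_DELAY and the busy-wait loop are illustrative assumptions.

/* Sketch: test-and-set spin lock with exponential (Ethernet-style) backoff.
 * Not the paper's implementation; constants are assumed for illustration. */
#include <stdatomic.h>

#define MIN_DELAY 4u      /* initial backoff iterations (assumed) */
#define MAX_DELAY 4096u   /* cap on the backoff interval (assumed) */

typedef struct {
    atomic_flag held;
} spinlock_t;

#define SPINLOCK_INIT { ATOMIC_FLAG_INIT }

static void spin_lock(spinlock_t *l)
{
    unsigned delay = MIN_DELAY;

    /* test_and_set returns the previous value: nonzero means another
     * processor already holds the lock. */
    while (atomic_flag_test_and_set_explicit(&l->held, memory_order_acquire)) {
        /* Back off before retrying so waiting processors do not saturate
         * the memory interconnect with test-and-set traffic. */
        for (volatile unsigned i = 0; i < delay; i++)
            ;                   /* busy-wait */
        if (delay < MAX_DELAY)
            delay *= 2;         /* exponential growth, as in Ethernet backoff */
    }
}

static void spin_unlock(spinlock_t *l)
{
    atomic_flag_clear_explicit(&l->held, memory_order_release);
}

Under contention, each waiter doubles its retry interval up to the cap, so the rate of simultaneous retries stays roughly bounded as the number of waiting processors grows; the paper's evaluated variant may differ in constants and in how the delay is chosen or reset.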
ISSN: 0018-9340, 1557-9956
DOI: 10.1109/12.40843