Concurrent Data Structures, by concurrency experts Mark Moir and Nir Shavit (Sun Microsystems Laboratories).
In this 23-page overview the authors cite no fewer than 138 world-class sources and provide a good index.
Contents:
1.1 Designing Concurrent Data Structures
- Performance
- Blocking Techniques
- Nonblocking Techniques
- Complexity Measures
- Correctness
- Verification Techniques
- Tools of the Trade
1.2 Shared Counters and Fetch-and-φ Structures
1.3 Stacks and Queues
1.4 Pools
1.5 Linked Lists
1.6 Hash Tables
1.7 Search Trees
1.8 Priority Queues
1.9 Summary
Abstract:
The proliferation of commercial shared-memory multiprocessor machines has brought about significant changes in the art of concurrent programming. Given current trends towards low-cost chip multithreading (CMT), such machines are bound to become ever more widespread. Shared-memory multiprocessors are systems that concurrently execute multiple threads of computation which communicate and synchronize through data structures in shared memory. The efficiency of these data structures is crucial to performance, yet designing effective data structures for multiprocessor machines is an art currently mastered by few. By most accounts, concurrent data structures are far more difficult to design than sequential ones because threads executing concurrently may interleave their steps in many ways, each with a different and potentially unexpected outcome. This requires designers to modify the way they think about computation, to understand new design methodologies, and to adopt a new collection of programming tools. Furthermore, new challenges arise in designing scalable concurrent data structures that continue to perform well as machines that execute more and more concurrent threads become available. This chapter provides an overview of the challenges involved in designing concurrent data structures, and a summary of relevant work for some important data structure classes. Our summary is by no means comprehensive; instead, we have chosen popular data structures that illustrate key design issues, and hope that we have provided sufficient background and intuition to allow the interested reader to approach the literature we do not survey.
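The abstract's point about interleaving is easy to see with a small Java sketch of my own (not taken from the chapter): two threads increment a plain shared counter and an AtomicLong side by side. The plain counter loses updates because its read-modify-write steps interleave, while the AtomicLong increment, built on compare-and-swap (CAS, one of the "tools of the trade" the chapter covers), stays exact. The class and field names here are just illustrative.

import java.util.concurrent.atomic.AtomicLong;

// Minimal illustration of unexpected interleavings on a shared counter
// (my own sketch, not code from the chapter).
public class CounterDemo {
    static long plainCounter = 0;                           // unsynchronized read-modify-write
    static final AtomicLong casCounter = new AtomicLong();  // CAS-based atomic counter

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                plainCounter++;               // races: concurrent increments can be lost
                casCounter.incrementAndGet(); // atomic: retried internally via compare-and-swap
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join();  t2.join();

        // plainCounter is typically well below 2,000,000; casCounter is exactly 2,000,000.
        System.out.println("plain = " + plainCounter + ", atomic = " + casCounter.get());
    }
}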
Index:
ABA problem, 1-16
abortable locks, 1-10
Amdahl’s Law, 1-3
AVL trees, 1-22
B+-trees, 1-20
B-link-trees, 1-21
backoff, 1-4
balancer, 1-13
barriers, 1-11
blocking synchronization, 1-3
cache-coherent multiprocessor, 1-3
CAS, 1-6
collision array, 1-13
combining funnel, 1-13
combining tree, 1-4, 1-13
compare-and-swap, 1-6
concurrency, 1-1–1-23
concurrent data structures, 1-1–1-23
- AVL trees, 1-22
- counters, 1-2–1-7
- deques, 1-17
- dictionaries, 1-18
- hash tables, 1-19
- linked lists, 1-18
- pools, 1-17
- priority pools, 1-23
- priority queues, 1-22
- queues, 1-16
- red-black trees, 1-22
- search trees, 1-20
- sets, 1-18
- skiplists, 1-22
- stacks, 1-15
contention, 1-3
counters, 1-2–1-7
counting networks, 1-13
data layout, 1-4
DCAS, 1-17
deadlock, 1-5
diffracting trees, 1-14
diffusing computation, 1-11
double-compare-and-swap, 1-17
elimination, 1-15
exponential backoff, 1-9
fetch-and-φ, 1-12
fetch-and-increment, 1-2–1-6
freelists, 1-16
group mutual exclusion, 1-11
hand-over-hand locking, 1-18
helping technique, 1-16
linearizability, 1-8
linearization point, 1-8
linked lists, 1-18
LL/SC, 1-6
load balancing, 1-17
load-linked, 1-6
lock, 1-2
lock coupling, 1-18
lock elision, 1-12
lock granularity, 1-3
lock-freedom, 1-5
locks, 1-9
- abortable, 1-10
- preemption-safe, 1-10
- queuelocks, 1-9
- reader-writer, 1-10
- spinlocks, 1-9
memory contention, 1-3
model checking, 1-9
multiprogramming, 1-3
mutex, 1-2
mutual exclusion, 1-2
non-uniform memory access, 1-4
nonblocking memory reclamation, 1-18
nonblocking progress conditions, 1-5
nonblocking synchronization, 1-3
NUMA, 1-4
obstruction-freedom, 1-5
optimistic concurrency control, 1-12
optimistic synchronization, 1-6
parallel random access machines, 1-7
pools, 1-17
preemption-safe locks, 1-10
priority pools, 1-23
priority queues, 1-22
queuelocks, 1-9
queues, 1-16
quiescent consistency, 1-8
reader-writer locks, 1-10
recursive split ordering, 1-20
red-black trees, 1-22
room synchronization, 1-11
scalability, 1-1
search trees, 1-20
sequential bottleneck, 1-3
sequential consistency, 1-8
serializability, 1-8
shared-memory multiprocessors, 1-1
skiplists, 1-22
software transactional memory, 1-12
speedup, 1-3
spinlocks, 1-9
spinning, 1-4
- local, 1-4
stacks, 1-15
store-conditional, 1-6
theorem proving, 1-9
transactional memory, 1-12
transactional synchronization, 1-11
trees, 1-20
two-handed emulation, 1-20
universal primitives, 1-6
verification techniques, 1-8
wait-freedom, 1-5
work sharing, 1-18
work stealing, 1-18
workload distribution, 1-17
Monday, March 15, 2010