Beyond Locks and Messages: The Future of Concurrent Programming
Conclusions
Here’s a tongue-in-cheek summary of the trends that, if you believe the HPCS effort provides a glimpse of the future, will soon be entering the mainstream:
1. Threads are out (demoted to latency-controlling status), tasks (and semi-implicit parallelism) are in.
2. Message passing is out (demoted to implementation detail), shared address space is in.
3. Locks are out (demoted to low-level status), transactional memory is in.
I think we’ve been seeing the twilight of thread-based parallelism for some time (see my previous post, Parallel Programming with Hints). It’s just not the way to fully exploit hardware concurrency. Traditionally, if you wanted to increase the performance of your program on multicore machines, you had to go into the low-level business of managing thread pools, splitting your work between processors, etc. This is now officially considered the assembly language of concurrency and has no place in high-level programming languages.
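For a taste of the task-based style, here’s a minimal sketch in Haskell using the async library (the choice of language and the fib workload are mine, purely for illustration): you describe units of work, and the runtime schedules them.

```haskell
import Control.Concurrent.Async (mapConcurrently)
import Control.Exception (evaluate)

-- A deliberately expensive pure function standing in for real work.
fib :: Integer -> Integer
fib n = if n < 2 then n else fib (n - 1) + fib (n - 2)

main :: IO ()
main = do
  -- One task per input; the runtime multiplexes the tasks onto a small
  -- pool of OS threads. No thread-pool management, no manual work splitting.
  results <- mapConcurrently (evaluate . fib) [30, 31, 32, 33]
  print results
```

Compile with ghc -threaded and run with +RTS -N to use all cores; notice that no thread-management code appears anywhere.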
Message passing’s major flaw is the inversion of control: it is the moral equivalent of goto in unstructured programming (it’s about time somebody said that message passing is considered harmful). MP still has its applications and, used in moderation, can be quite handy; but PGAS offers a much more straightforward programming model, its essence being the separation of implementation from algorithm. The Platonic ideal would be for the language to figure out the best parallel implementation for a particular algorithm. Since this is still a dream, the next best thing is to get rid of the interleaving of the two in the same piece of code.
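That separation is not unique to the HPCS languages. Haskell’s evaluation strategies, for one, express it directly; here’s a minimal sketch of my own (using the parallel library), where the algorithm is written sequentially and the parallel implementation is layered on afterwards:

```haskell
import Control.Parallel.Strategies (using, parListChunk, rdeepseq)

-- The algorithm: a plain sequential map, with no parallelism in it.
squares :: [Double] -> [Double]
squares = map (\x -> x * x)

-- The implementation, attached separately: evaluate the result list in
-- parallel chunks. Swapping the strategy changes the parallel behavior
-- without touching the algorithm above.
parSquares :: [Double] -> [Double]
parSquares xs = squares xs `using` parListChunk 4096 rdeepseq

main :: IO ()
main = print (sum (parSquares [1 .. 1e6]))
```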
Software transactional memory has been around for more than a decade now and, despite some negative experiences, is still alive and kicking. It’s by far the best paradigm for synchronizing shared-memory access: unobtrusive, deadlock-free, and scalable. If we could only get better performance out of it, it would be ideal. The hope, though, is in moving some of the burden of TM to hardware.
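For the flavor of STM, here’s a minimal sketch using Haskell’s stm package (the bank-transfer example is mine, the classic illustration rather than anything specific to this post):

```haskell
import Control.Concurrent.STM

-- Move money between two shared balances. The whole block executes
-- atomically; if the source account is short, `check` blocks the
-- transaction and retries it when the balances change.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  check (balance >= amount)
  writeTVar from (balance - amount)
  modifyTVar' to (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  balances <- (,) <$> readTVarIO a <*> readTVarIO b
  print balances  -- (70,30)
```

No locks are taken and no lock-ordering discipline is needed; conflicting transactions are simply detected and retried by the runtime.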
Sadly, the ease of writing parallel programs using those new paradigms does not go hand in hand with improving program correctness. My worry is that chasing concurrency bugs will eat into productivity gains.
What are the chances that we’ll be writing programs for desktops in Chapel, X10, or Fortress? Probably slim. Chances are good, though, that the paradigms used in those languages will continue showing up in existing and future languages. You may have already seen task-driven libraries sneaking into the mainstream (e.g., the .NET TPL). There is a PGAS extension of C called UPC (Unified Parallel C), and dialects of several languages, such as Java, C, C#, and Scala, experiment with STM. Expect more in the future.