An Application Driven Analysis of the ParalleX Execution Model by Matthew Anderson, Maciej Brodowicz, Hartmut Kaiser and Thomas Sterling.
Just in case you feel the need for more information about ParalleX after that post about the LSU software release. 😉
Exascale systems, expected to emerge by the end of the next decade, will require the exploitation of billion-way parallelism at multiple hierarchical levels in order to achieve the desired sustained performance. The task of assessing future machine performance is approached by identifying the factors which currently challenge the scalability of parallel applications. It is suggested that the root cause of these challenges is the incoherent coupling between the current enabling technologies, such as Non-Uniform Memory Access of present multicore nodes equipped with optional hardware accelerators, and the decades-older execution model, i.e., the Communicating Sequential Processes (CSP) model best exemplified by the Message Passing Interface (MPI) application programming interface. A new execution model, ParalleX, is introduced as an alternative to the CSP model. In this paper, an overview of the ParalleX execution model is presented along with details about a ParalleX-compliant runtime system implementation called High Performance ParalleX (HPX). Scaling and performance results for an adaptive mesh refinement numerical relativity application developed using HPX are discussed. The performance results of this HPX-based application are compared with a counterpart MPI-based mesh refinement code. The overheads associated with HPX are explored and hardware solutions are introduced for accelerating the runtime system.
Graphaholics should also note:
Today’s conventional parallel programming methods such as MPI and systems such as distributed memory massively parallel processors (MPPs) and Linux clusters exhibit poor efficiency and constrained scalability for this class of applications. This severely hinders scientific advancement. Many other classes of applications exhibit similar properties, especially graph/tree data structures that have non uniform data access patterns. (emphasis added)
I like that, “non uniform data access patterns.”
My “gut” feeling is that this will prove very useful for processing semantics, since semantics originate from us and have “non uniform data access patterns.”
Granted, there is a lot of work between here and there, especially since the semantics side of the house is fond of declaring victory in favor of the latest solution.
You would think after years, decades, centuries, no, millennia of one “ultimate” solution after another, we would be a little more wary of such pronouncements. I suspect the problem is that programmers come by their proverbial laziness honestly. They get it from us. It is easier to just fall into line with whatever seems like a passable solution and not worry about all the passable solutions that went before.
That is no doubt easier but imagine where medicine, chemistry, physics, or even computers would be if they had adopted such a model. True, we have to use models that work now, but at the same time we should encourage new, different, even challenging models that may (or may not) be better at capturing human semantics. Models that change even as we do.