Archive for the ‘Heterogeneous Programming’ Category

Native Actors – A Scalable Software Platform for Distributed, Heterogeneous Environments

Saturday, September 27th, 2014

Native Actors – A Scalable Software Platform for Distributed, Heterogeneous Environments by Dominik Charousset, Thomas C. Schmidt, Raphael Hiesgen, and Matthias Wählisch.

Abstract:

Writing concurrent software is challenging, especially with low-level synchronization primitives such as threads or locks in shared memory environments. The actor model replaces implicit communication by explicit message passing in a ‘shared-nothing’ paradigm. It applies to concurrency as well as distribution, but has not yet entered the native programming domain. This paper contributes the design of a native actor extension for C++, and the report on a software platform that implements our design for (a) concurrent, (b) distributed, and (c) heterogeneous hardware environments. GPGPU and embedded hardware components are integrated in a transparent way. Our software platform supports the development of scalable and efficient parallel software. It includes a lock-free mailbox algorithm with a pattern matching facility for message processing. Thorough performance evaluations reveal an extraordinarily small memory footprint in realistic application scenarios, while runtime performance not only outperforms existing mature actor implementations, but exceeds the scaling behavior of low-level message passing libraries such as OpenMPI.

When I read Stroustrup: Why the 35-year-old C++ still dominates ‘real’ dev, I started to post a comment asking why there were no questions about functional programming languages. But the interview is a “puff” piece, not a serious commentary on programming.

Then I ran across this work on implementing actors in C++. Maybe Stroustrup was correct without being aware of it.

Bundled with the C++ library libcppa, available at: http://www.libcppa.org

DSL for Distributed Heterogeneous Systems

Saturday, August 9th, 2014

A Domain-Specific Language for Volume Processing and Visualization on Distributed Heterogeneous Systems

From the webpage:

As the size of image data from microscopes and telescopes increases, the need for high-throughput processing and visualization of large volumetric data has become more pressing. At the same time, many-core processors and GPU accelerators are commonplace, making high-performance distributed heterogeneous computing systems affordable. However, effectively utilizing GPU clusters is difficult for novice programmers, and even experienced programmers often fail to fully leverage the computing power of new parallel architectures due to their steep learning curve and programming complexity.

In this research, we propose a new domain-specific language for volume processing and visualization on distributed heterogeneous computing systems, called Vivaldi (VIsualization LAnguage for DIstributed systems). Vivaldi’s Python-like grammar and parallel processing abstractions provide flexible programming tools for non-experts to easily write high-performance parallel computing code. Vivaldi provides commonly used functions and numerical operators for customized visualization and high-throughput image processing applications. We demonstrate the performance and usability of Vivaldi on several examples ranging from volume rendering to image segmentation.

A paper has been accepted for presentation at VIS2014. (9-14 November 2014, Paris)

I don’t have any other details but will keep looking.

I first saw this in a tweet by Albert Swart.

SDM 2014 Workshop on Heterogeneous Learning

Sunday, January 5th, 2014

SDM 2014 Workshop on Heterogeneous Learning

Key Dates:

01/10/2014: Paper Submission
01/31/2014: Author Notification
02/10/2014: Camera Ready Paper Due

From the post:

The main objective of this workshop is to bring the attention of researchers to real problems with multiple types of heterogeneities, ranging from online social media analysis, traffic prediction, to the manufacturing process, brain image analysis, etc. Some commonly found heterogeneities include task heterogeneity (as in multi-task learning), view heterogeneity (as in multi-view learning), instance heterogeneity (as in multi-instance learning), label heterogeneity (as in multi-label learning), oracle heterogeneity (as in crowdsourcing), etc. In the past years, researchers have proposed various techniques for modeling a single type of heterogeneity as well as multiple types of heterogeneities.

This workshop focuses on novel methodologies, applications and theories for effectively leveraging these heterogeneities. Here we are facing multiple challenges. To name a few: (1) how can we effectively exploit the label/example structure to improve the classification performance; (2) how can we handle the class imbalance problem when facing one or more types of heterogeneities; (3) how can we improve the effectiveness and efficiency of existing learning techniques for large-scale problems, especially when both the data dimensionality and the number of labels/examples are large; (4) how can we jointly model multiple types of heterogeneities to maximally improve the classification performance; (5) how do the underlying assumptions associated with multiple types of heterogeneities affect the learning methods.

We encourage submissions on a variety of topics, including but not limited to:

(1) Novel approaches for modeling a single type of heterogeneity, e.g., task/view/instance/label/oracle heterogeneities.

(2) Novel approaches for simultaneously modeling multiple types of heterogeneities, e.g., multi-task multi-view learning to leverage both the task and view heterogeneities.

(3) Novel applications with a single or multiple types of heterogeneities.

(4) Systematic analysis regarding the relationship between the assumptions underlying each type of heterogeneity and the performance of the predictor.

Apologies but I saw this announcement too late for you to have a realistic opportunity to submit a paper. 🙁

Very unfortunate because the focus of the workshop is right up the topic map alley.

The main conference, which focuses on data mining, is April 24-26, 2014 in Philadelphia, Pennsylvania, USA.

I am very much looking forward to reading the papers from this workshop! (And looking for notice of next year’s workshop much earlier!)

Cell Architectures (adding dashes of heterogeneity)

Saturday, May 12th, 2012

Cell Architectures

From the post:

A consequence of Service Oriented Architectures is the burning need to provide services at scale. The architecture that has evolved to satisfy these requirements is a little known technique called the Cell Architecture.

A Cell Architecture is based on the idea that massive scale requires parallelization and parallelization requires components be isolated from each other. These islands of isolation are called cells. A cell is a self-contained installation that can satisfy all the operations for a shard. A shard is a subset of a much larger dataset, typically a range of users, for example.

Cell Architectures have several advantages:

  • Cells provide a unit of parallelization that can be adjusted to any size as the user base grows.
  • Cells are added in an incremental fashion as more capacity is required.
  • Cells isolate failures. One cell failure does not impact other cells.
  • Cells provide isolation as the storage and application horsepower to process requests is independent of other cells.
  • Cells enable nice capabilities like the ability to test upgrades, implement rolling upgrades, and test different versions of software.
  • Cells can fail, be upgraded, and distributed across datacenters independent of other cells.

The intersection of semantic heterogeneity and scaling remains largely unexplored.

I suggest scaling in a homogeneous environment and then adding dashes of heterogeneity to see what breaks.

Adjust and try again.

The Heterogeneous Programming Jungle

Saturday, March 24th, 2012

The Heterogeneous Programming Jungle by Michael Wolfe.

Michael starts off with one definition of “heterogeneous:”

The heterogeneous systems of interest to HPC use an attached coprocessor or accelerator that is optimized for certain types of computation. These devices typically exhibit internal parallelism, and execute asynchronously and concurrently with the host processor. Programming a heterogeneous system is then even more complex than “traditional” parallel programming (if any parallel programming can be called traditional), because in addition to the complexity of parallel programming on the attached device, the program must manage the concurrent activities between the host and device, and manage data locality between the host and device.

And while he returns to that definition in the end, another form of heterogeneity is lurking not far behind:

Given the similarities among system designs, one might think it should be obvious how to come up with a programming strategy that would preserve portability and performance across all these devices. What we want is a method that allows the application writer to write a program once, and let the compiler or runtime optimize for each target. Is that too much to ask?

Let me reflect momentarily on the two gold standards in this arena. The first is high level programming languages in general. After 50 years of programming using Algol, Pascal, Fortran, C, C++, Java, and many, many other languages, we tend to forget how wonderful and important it is that we can write a single program, compile it, run it, and get the same results on any number of different processors and operating systems.

So there is the heterogeneity of attached coprocessor and, just as importantly, of the processors with coprocessors.

His post concludes with:

Grab your Machete and Pith Helmet

If parallel programming is hard, heterogeneous programming is that hard, squared. Defining and building a productive, performance-portable heterogeneous programming system is hard. There are several current programming strategies that attempt to solve this problem, including OpenCL, Microsoft C++ AMP, Google Renderscript, Intel’s proposed offload directives (see slide 24), and the recent OpenACC specification. We might also learn something from embedded system programming, which has had to deal with heterogeneous systems for many years. My next article will whack through the underbrush to expose each of these programming strategies in turn, presenting advantages and disadvantages relative to the goal.

These are languages that share common subjects (think of their target architectures) and so are ripe for a topic map that co-locates their approaches to a particular architecture. Being able to incorporate official and non-official documentation, tests, sample code, etc., might enable faster progress in this area.

The future of HPC processors is almost upon us. It will not do to be tardy.