Map-Reduce-Merge: Simplified Relational Data Processing on Large Clusters by Hung-chih Yang, Ali Dasdan, Ruey-Lung Hsiao and D. Stott Parker.
Map-Reduce is a programming model that enables easy development of scalable parallel applications to process vast amounts of data on large clusters of commodity machines. Through a simple interface with two functions, map and reduce, this model facilitates parallel implementation of many real-world tasks such as data processing for search engines and machine learning.
However, this model does not directly support processing multiple related, heterogeneous datasets. Although processing relational data is a common need, this limitation makes relational operations such as joins difficult or inefficient to express in Map-Reduce.
We improve Map-Reduce into a new model called Map-Reduce-Merge. It adds to Map-Reduce a Merge phase that can efficiently merge data already partitioned and sorted (or hashed) by map and reduce modules. We also demonstrate that this new model can express relational algebra operators as well as implement several join algorithms.
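To make the three-phase idea concrete, here is a minimal single-process sketch (my own illustration, not code from the paper or from Peregrine): two datasets each go through their own map/reduce pass, and a merge function then joins the two reduced outputs on their shared keys, sort-merge style.

```python
# Hypothetical sketch of Map-Reduce-Merge; the paper describes a
# distributed framework, not this toy in-memory version.
from collections import defaultdict

def map_reduce(records, map_fn, reduce_fn):
    """Classic map and reduce phases over a single dataset."""
    groups = defaultdict(list)
    for record in records:
        for key, value in map_fn(record):
            groups[key].append(value)
    return {key: reduce_fn(key, values) for key, values in groups.items()}

def merge(left, right, merge_fn):
    """Merge phase: join two reduced outputs that share a key space."""
    out = []
    for key in sorted(set(left) & set(right)):  # sort-merge style join on keys
        out.append(merge_fn(key, left[key], right[key]))
    return out

# Example: join per-employee sales totals with a department lookup table.
sales = [("alice", 10), ("bob", 5), ("alice", 7)]
depts = [("alice", "eng"), ("bob", "ops")]

totals = map_reduce(sales, lambda r: [(r[0], r[1])], lambda k, vs: sum(vs))
lookup = map_reduce(depts, lambda r: [(r[0], r[1])], lambda k, vs: vs[0])

joined = merge(totals, lookup, lambda k, total, dept: (k, dept, total))
print(joined)  # [('alice', 'eng', 17), ('bob', 'ops', 5)]
```

The point of the extra phase is visible even in this toy: map and reduce each operate on one homogeneous dataset, while merge is the only place where two reduced, key-partitioned datasets meet, which is exactly where join logic belongs.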
As of today, I count sixty-three (63) citations of this paper. I only discovered it today, so it is going to take some time to work through all the citing materials, and then the materials that cite those papers.
The Peregrine software I mentioned in another post implements this map-reduce-merge framework.