The Next Generation of Apache Hadoop MapReduce by Arun C Murthy (@acmurthy)
From the webpage:
In the Big Data business, running fewer, larger clusters is cheaper than running more small clusters. Larger clusters also process larger data sets and support more jobs and users.
The Apache Hadoop MapReduce framework has hit a scalability limit of around 4,000 machines. We are developing the next generation of Apache Hadoop MapReduce, which factors the framework into a generic resource scheduler and a per-job, user-defined component that manages application execution. Since downtime is more expensive at scale, high availability is built in from the beginning, as are security and multi-tenancy to support many users on these larger clusters. The new architecture will also increase innovation, agility, and hardware utilization.
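To make that split concrete, here is a minimal sketch in Java. Every name in it (`ResourceScheduler`, `ApplicationMaster`, `Container`, and the toy implementations) is a hypothetical illustration of the architecture the announcement describes, not the actual Hadoop API: a cluster-wide scheduler that hands out generic resource leases and knows nothing about MapReduce, plus a per-job master that turns those leases into tasks.

```java
// Hypothetical sketch of the described architecture; these are NOT the
// real Hadoop APIs. The scheduler is application-agnostic and only hands
// out generic containers (leases on cores + memory).
import java.util.ArrayList;
import java.util.List;

/** A generic lease on resources on some cluster node. */
record Container(String node, int cores, int memoryMb) {}

/** The generic, application-agnostic resource scheduler. */
interface ResourceScheduler {
    /** Ask for up to n containers of the given size; may grant fewer. */
    List<Container> allocate(int n, int cores, int memoryMb);
    /** Return a container so other jobs and users can reuse it. */
    void release(Container c);
}

/** The per-job, user-defined component that manages execution. */
interface ApplicationMaster {
    void run(ResourceScheduler scheduler);
}

/** Toy scheduler: a fixed pool of slots, granted first-come-first-served. */
class ToyScheduler implements ResourceScheduler {
    private int freeSlots;
    ToyScheduler(int slots) { this.freeSlots = slots; }

    public synchronized List<Container> allocate(int n, int cores, int memoryMb) {
        List<Container> granted = new ArrayList<>();
        while (n-- > 0 && freeSlots > 0) {
            freeSlots--;
            granted.add(new Container("node-" + freeSlots, cores, memoryMb));
        }
        return granted;
    }

    public synchronized void release(Container c) { freeSlots++; }
}

/** Skeleton of a MapReduce-style master built on the generic scheduler. */
class MapReduceMaster implements ApplicationMaster {
    private int remainingMaps;
    MapReduceMaster(int maps) { this.remainingMaps = maps; }

    public void run(ResourceScheduler scheduler) {
        while (remainingMaps > 0) {
            // Request only what is still needed; the scheduler arbitrates
            // between all jobs and users sharing the cluster.
            for (Container c : scheduler.allocate(remainingMaps, 1, 1024)) {
                System.out.println("map task on " + c.node()); // real launch elided
                remainingMaps--;
                scheduler.release(c);
            }
        }
        // A reduce phase would repeat the same allocate/launch/release cycle.
    }
}

class Demo {
    public static void main(String[] args) {
        new MapReduceMaster(10).run(new ToyScheduler(4));
    }
}
```

The interesting part is that `MapReduceMaster` is just one possible application master: because the scheduler is generic, other frameworks could plug in their own per-job component, which is where the claimed gains in innovation, agility, and hardware utilization would come from.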
Since I posted the note about OpenStack and it is Friday, this seemed like a natural follow-up. Something to read over the weekend!
Saw this first at Alex Popescu’s myNoSQL – The Next Generation of Apache Hadoop MapReduce, which is sporting a new look!