The Next Generation of Apache Hadoop MapReduce
From the post:
In the Big Data business, running fewer, larger clusters is cheaper than running many small clusters. Larger clusters also process larger data sets and support more jobs and users.
The Apache Hadoop MapReduce framework has hit a scalability limit around 4,000 machines. We are developing the next generation of Apache Hadoop MapReduce that factors the framework into a generic resource scheduler and a per-job, user-defined component that manages the application execution. Since downtime is more expensive at scale, high availability is built in from the beginning, as are security and multi-tenancy to support many users on the larger clusters. The new architecture will also increase innovation, agility and hardware utilization.
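The factoring described above separates two concerns: a cluster-wide scheduler that only hands out resources, and a per-job component that decides what to run in them. The following is a minimal, hypothetical Java sketch of that split (the class names `ResourceScheduler`, `Container`, and `JobManager` are illustrative, not the actual Hadoop API):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch only: a generic, application-agnostic scheduler
// grants abstract "containers" of cluster resources; a per-job manager
// (user-defined in the new architecture) decides how to use them.
class Container {
    final String node;
    final int memoryMb;
    Container(String node, int memoryMb) { this.node = node; this.memoryMb = memoryMb; }
}

class ResourceScheduler {
    // Knows nothing about MapReduce; it only tracks cluster capacity.
    private int freeMemoryMb;
    ResourceScheduler(int totalMemoryMb) { this.freeMemoryMb = totalMemoryMb; }

    List<Container> allocate(int containers, int memoryMbEach) {
        List<Container> granted = new ArrayList<>();
        for (int i = 0; i < containers && freeMemoryMb >= memoryMbEach; i++) {
            freeMemoryMb -= memoryMbEach;
            granted.add(new Container("node-" + i, memoryMbEach));
        }
        return granted; // may be fewer than requested if the cluster is full
    }
}

class JobManager {
    // Per-job, user-defined logic: here it simply launches one map task
    // in each container the scheduler was able to grant.
    int runMapTasks(ResourceScheduler scheduler, int tasks) {
        List<Container> granted = scheduler.allocate(tasks, 1024);
        return granted.size(); // number of tasks actually launched
    }
}
```

Because the scheduler has no MapReduce-specific logic, other programming models can plug in their own per-job manager — which is the source of the "increased innovation" the post mentions.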
Start of an important series of posts on the next generation of Apache Hadoop MapReduce.