Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

July 26, 2012

Understanding Apache Hadoop’s Capacity Scheduler

Filed under: Clustering (servers), Hadoop, MapReduce — Patrick Durusau @ 10:43 am

Understanding Apache Hadoop’s Capacity Scheduler by Arun Murthy

From the post:

As organizations continue to ramp the number of MapReduce jobs processed in their Hadoop clusters, we often get questions about how best to share clusters. I wanted to take the opportunity to explain the role of Capacity Scheduler, including covering a few common use cases.

Let me start by stating the underlying challenge that led to the development of Capacity Scheduler and similar approaches.

As organizations become more savvy with Apache Hadoop MapReduce and as their deployments mature, there is a significant pull towards consolidation of Hadoop clusters into a small number of decently sized, shared clusters. This is driven by the urge to consolidate data in HDFS, allow ever-larger processing via MapReduce and reduce operational costs & complexity of managing multiple small clusters. It is quite common today for multiple sub-organizations within a single parent organization to pool together Hadoop/IT budgets to deploy and manage shared Hadoop clusters.

Initially, Apache Hadoop MapReduce supported a simple first-in-first-out (FIFO) job scheduler that was insufficient to address the above use case.

Enter the Capacity Scheduler.
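To make the idea concrete, here is a minimal configuration sketch for the Capacity Scheduler as it shipped with Hadoop 1.x MapReduce. The queue names ("research", "marketing") and the capacity percentages are hypothetical, chosen only to illustrate sub-organizations sharing one cluster; they are not from Arun's post, and the property names differ in the later YARN-era Capacity Scheduler.

    <!-- mapred-site.xml: replace the default FIFO scheduler with the
         Capacity Scheduler and declare the queues available on the cluster -->
    <property>
      <name>mapred.jobtracker.taskScheduler</name>
      <value>org.apache.hadoop.mapred.CapacityTaskScheduler</value>
    </property>
    <property>
      <name>mapred.queue.names</name>
      <!-- "research" and "marketing" are hypothetical sub-organizations -->
      <value>default,research,marketing</value>
    </property>

    <!-- capacity-scheduler.xml: guarantee each queue a share of the
         cluster's task slots; the percentages should sum to 100 -->
    <property>
      <name>mapred.capacity-scheduler.queue.research.capacity</name>
      <value>50</value>
    </property>
    <property>
      <name>mapred.capacity-scheduler.queue.marketing.capacity</name>
      <value>30</value>
    </property>
    <property>
      <name>mapred.capacity-scheduler.queue.default.capacity</name>
      <value>20</value>
    </property>

A job is then steered to its sub-organization's queue at submission time by setting mapred.job.queue.name (for example, passing -Dmapred.job.queue.name=research to a Tool-based job). The scheduler enforces each queue's guaranteed share while letting capacity that one queue leaves idle flow to busier queues.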

Shared Hadoop clusters?

So long as we don’t have to drop off our punch cards at the shared Hadoop cluster computing center, I suppose that’s ok.

😉

Just teasing.

Shared Hadoop clusters are more cost-effective and make better use of your Hadoop specialists.
