How Google’s Dremel Makes Quick Work of Massive Data by Ian Armas Foster.
From the post:
The ability to process more data and the ability to process data faster are usually mutually exclusive. According to Armando Fox, professor of computer science at the University of California, Berkeley, “the more you do one, the more you have to give up on the other.”
Hadoop, an open-source batch-processing platform built on the MapReduce model, is one of the main vehicles organizations are driving in the big data race.
However, Mike Olson, CEO of Cloudera, a prominent Hadoop vendor, is looking past Hadoop toward today’s research projects. Among them is one named Dremel, possibly Google’s next big innovation, which combines the scale of Hadoop with the ever-increasing speed demands of the business intelligence world.
“People have done Big Data systems before,” Fox said, “but before Dremel, no one had really done a system that was that big and that fast.”
For more on Dremel, see the paper: Dremel: Interactive Analysis of Web-Scale Datasets.
Are you looking (or considering looking) beyond Hadoop?
Accuracy and timeliness beyond the average daily intelligence briefing will drive demand for your information product.
Your edge is agility. Use it.