Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

February 27, 2013

Microsoft and Hadoop, Sitting in a Tree…*

Filed under: Hadoop,Hortonworks,MapReduce,Microsoft — Patrick Durusau @ 2:55 pm

Putting the Elephant in the Window by John Kreisa.

From the post:

For several years now Apache Hadoop has been fueling the fast growing big data market and has become the de facto platform for Big Data deployments and the technology foundation for an explosion of new analytic applications. Many organizations turn to Hadoop to help tame the vast amounts of new data they are collecting but in order to do so with Hadoop they have had to use servers running the Linux operating system. That left a large number of organizations who standardize on Windows (According to IDC, Windows Server owned 73 percent of the market in 2012 – IDC, Worldwide and Regional Server 2012–2016 Forecast, Doc # 234339, May 2012) without the ability to run Hadoop natively, until today.

We are very pleased to announce the availability of Hortonworks Data Platform for Windows providing organizations with an enterprise-grade, production-tested platform for big data deployments on Windows. HDP is the first and only Hadoop-based platform available on both Windows and Linux and provides interoperability across Windows, Linux and Windows Azure. With this release we are enabling a massive expansion of the Hadoop ecosystem. New participants in the community of developers, data scientists, data management professionals and Hadoop fans can now build and run applications for Apache Hadoop natively on Windows. This is great news for Windows-focused enterprises, service providers, software vendors and developers, and in particular they can get going today with Hadoop simply by visiting our download page.

This release would not be possible without a strong partnership and close collaboration with Microsoft. Through the process of creating this release, we have remained true to our approach of community-driven enterprise Apache Hadoop by collecting enterprise requirements, developing them in open source and applying enterprise rigor to produce a 100-percent open source enterprise-grade Hadoop platform.

Now there is a very smart marketing move!

A smaller share of a larger market is always better than a large share of a small market.

(You need to be writing down these quips.) 😉

Seriously, take note of how Hortonworks used the open source model.

They did not build Hadoop in their image and try to sell it to the world.

Hortonworks gathered requirements from others and built Hadoop to meet their needs.

Open source model in both cases, very different outcomes.

* I didn’t remember the rhyme beyond the opening line. Consulting the oracle (Wikipedia), I discovered Playground song. 😉

February 23, 2013

Large-Scale Data Analysis Beyond Map/Reduce

Filed under: BigData,MapReduce,Nephele,PACT — Patrick Durusau @ 3:08 pm

Large-Scale Data Analysis Beyond Map/Reduce by Fabian Hüske.

From the description:

Stratosphere is a joint project by TU Berlin, HU Berlin, and HPI Potsdam and researches “Information Management on the Cloud”. In the course of the project, a massively parallel data processing system is built. The current version of the system consists of the parallel PACT programming model, a database inspired optimizer, and the parallel dataflow processing engine, Nephele. Stratosphere has been released as open source. This talk will focus on the PACT programming model, which is a generalization of Map/Reduce, and show how PACT eases the specification of complex data analysis tasks. At the end of the talk, an overview of Stratosphere’s upcoming release will be given.

In Stratosphere, the parallel programming model is separated from the execution engine (unlike in Hadoop).

An interesting demonstration of the differences between the Hadoop and PACT programming models.

Home: Stratosphere: Above the Clouds

I first saw this at DZone.

February 22, 2013

Hadoop Adds Red Hat [More Hadoop Silos Coming]

Filed under: Hadoop,MapReduce,Red Hat,Semantic Diversity,Semantic Inconsistency — Patrick Durusau @ 1:27 pm

Red Hat Unveils Big Data and Open Hybrid Cloud Direction

From the post:

Red Hat, Inc. (NYSE: RHT), the world’s leading provider of open source solutions, today announced its big data direction and solutions to satisfy enterprise requirements for highly reliable, scalable, and manageable solutions to effectively run their big data analytics workloads. In addition, Red Hat announced that the company will contribute its Red Hat Storage Hadoop plug-in to the Apache™ Hadoop® open community to transform Red Hat Storage into a fully-supported, Hadoop-compatible file system for big data environments, and that Red Hat is building a robust network of ecosystem and enterprise integration partners to deliver comprehensive big data solutions to enterprise customers. This is another example of Red Hat’s strategic commitment to big data customers and its continuing efforts to provide them with enterprise solutions through community-driven innovation.

The more Hadoop grows, the more Hadoop silos will grow as well.

You will need Hadoop and semantic skills to wire Hadoop silos together.

Re-wire with topic maps to avoid re-wiring the same Hadoop silos over and over again.

I first saw this at Red Hat reveal big data plans, open sources HDFS replacement by Elliot Bentley.

February 20, 2013

Introducing… Tez: Accelerating processing of data stored in HDFS

Filed under: DAG,Graphs,Hadoop YARN,MapReduce,Tez — Patrick Durusau @ 9:23 pm

Introducing… Tez: Accelerating processing of data stored in HDFS by Arun Murthy.

From the post:

MapReduce has served us well. For years it has been THE processing engine for Hadoop and has been the backbone upon which a huge amount of value has been created. While it is here to stay, new paradigms are also needed in order to enable Hadoop to serve an even greater number of usage patterns. A key and emerging example is the need for interactive query, which today is challenged by the batch-oriented nature of MapReduce. A key step to enabling this new world was Apache YARN and today the community proposes the next step… Tez

What is Tez?

Tez – Hindi for “speed” – (currently under incubation vote within Apache) provides a general-purpose, highly customizable framework that simplifies data-processing tasks across both small scale (low-latency) and large-scale (high throughput) workloads in Hadoop. It generalizes the MapReduce paradigm to a more powerful framework by providing the ability to execute a complex DAG (directed acyclic graph) of tasks for a single job so that projects in the Apache Hadoop ecosystem such as Apache Hive, Apache Pig and Cascading can meet requirements for human-interactive response times and extreme throughput at petabyte scale (clearly MapReduce has been a key driver in achieving this).

With the emergence of Apache Hadoop YARN as the basis of next generation data-processing architectures, there is a strong need for an application which can execute a complex DAG of tasks which can then be shared by Apache Pig, Apache Hive, Cascading and others. The constrained DAG expressible in MapReduce (one set of maps followed by one set of reduces) often results in multiple MapReduce jobs which harm latency for short queries (overhead of launching multiple jobs) and throughput for large-scale queries (too much overhead for materializing intermediate job outputs to the filesystem). With Tez, we introduce a more expressive DAG of tasks, within a single application or job, that is better aligned with the required processing task – thus, for e.g., any given SQL query can be expressed as a single job using Tez.

If you are familiar with Michael Sperberg-McQueen and Claus Huitfeldt’s work on DAGs, you would be as excited as I am! (GODDAG, for example.)

On any day this would be awesome work.

Even more so coming on the heels of two other major project announcements: Securing Hadoop with Knox Gateway and The Stinger Initiative: Making Apache Hive 100 Times Faster, both from Hortonworks.

February 19, 2013

Using Clouds for MapReduce Measurement Assignments [Grad Class Near You?]

Filed under: Cloud Computing,MapReduce — Patrick Durusau @ 11:30 am

Using Clouds for MapReduce Measurement Assignments by Ariel Rabkin, Charles Reiss, Randy Katz, and David Patterson. (ACM Trans. Comput. Educ. 13, 1, Article 2 (January 2013), 18 pages. DOI = 10.1145/2414446.2414448)

Abstract:

We describe our experiences teaching MapReduce in a large undergraduate lecture course using public cloud services and the standard Hadoop API. Using the standard API, students directly experienced the quality of industrial big-data tools. Using the cloud, every student could carry out scalability benchmarking assignments on realistic hardware, which would have been impossible otherwise. Over two semesters, over 500 students took our course. We believe this is the first large-scale demonstration that it is feasible to use pay-as-you-go billing in the cloud for a large undergraduate course. Modest instructor effort was sufficient to prevent students from overspending. Average per-pupil expenses in the Cloud were under $45. Students were excited by the assignment: 90% said they thought it should be retained in future course offerings.

With properly structured assignments, I can see this technique being used to introduce library graduate students to data mining and similar topics on non-trivial data sets.

Getting “hands on” experience should make them more than a match for the sales types from information vendors.

Not to mention that data mining flourishes when used with an understanding of the underlying semantics of the data set.

I first saw this at: On Teaching MapReduce via Clouds

February 18, 2013

Real World Hadoop – Implementing a Left Outer Join in Map Reduce

Filed under: Hadoop,MapReduce — Patrick Durusau @ 6:25 am

Real World Hadoop – Implementing a Left Outer Join in Map Reduce by Matthew Rathbone.

From the post:

This article is part of my guide to map reduce frameworks, in which I implement a solution to a real-world problem in each of the most popular hadoop frameworks.

If you’re impatient, you can find the code for the map-reduce implementation on my github, otherwise, read on!

The Problem
Let me quickly restate the problem from my original article.

I have two datasets:

  1. User information (id, email, language, location)
  2. Transaction information (transaction-id, product-id, user-id, purchase-amount, item-description)

Given these datasets, I want to find the number of unique locations in which each product has been sold.

Not as easy a problem as it appears. But I suspect a common one in practice.
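If you want to see the shape of the computation before reading Matthew’s Java version, here is a minimal Python sketch (not the code from his post; the field layouts and sample records are invented): a reduce-side left outer join on user-id, followed by a distinct count of locations per product.

```python
# A minimal sketch, not the code from the post: reduce-side left outer join of
# transactions with users on user_id, then a distinct count of locations per
# product. Field layouts and sample records are invented for illustration.
from collections import defaultdict

users = [            # (user_id, email, language, location)
    ("u1", "a@example.com", "en", "US"),
    ("u2", "b@example.com", "fr", "FR"),
]
transactions = [     # (transaction_id, product_id, user_id, purchase_amount, description)
    ("t1", "p1", "u1", 9.99, "book"),
    ("t2", "p1", "u2", 9.99, "book"),
    ("t3", "p2", "u1", 3.50, "pen"),
    ("t4", "p3", "u9", 1.00, "clip"),   # no matching user: the left join keeps it
]

# Map phase: tag each record with its source and key it by user_id.
mapped = [(u[0], ("USER", u[3])) for u in users]          # value carries the location
mapped += [(t[2], ("TXN", t[1])) for t in transactions]   # value carries the product_id

# Shuffle: group values by user_id (what Hadoop does between map and reduce).
grouped = defaultdict(list)
for key, value in mapped:
    grouped[key].append(value)

# Reduce phase: join within each user_id group and emit (product_id, location).
joined = []
for user_id, values in grouped.items():
    location = next((v for tag, v in values if tag == "USER"), None)
    for tag, product_id in values:
        if tag == "TXN":
            joined.append((product_id, location))          # location is None for t4

# Second job (a plain dictionary here): distinct locations per product.
locations = defaultdict(set)
for product_id, location in joined:
    bucket = locations[product_id]      # product shows up even with no matched user
    if location is not None:
        bucket.add(location)

for product_id in sorted(locations):
    print(product_id, len(locations[product_id]))   # p1 2, p2 1, p3 0
```

On a real cluster the two phases become two MapReduce jobs: the join job’s reducer emits (product-id, location) pairs and the second job counts distinct locations per product.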

Clydesdale: Structured Data Processing on MapReduce

Filed under: Clydesdale,Hadoop,MapReduce — Patrick Durusau @ 6:16 am

Clydesdale: Structured Data Processing on MapReduce by Tim Kaldewey, Eugene J. Shekita, Sandeep Tata.

Abstract:

MapReduce has emerged as a promising architecture for large scale data analytics on commodity clusters. The rapid adoption of Hive, a SQL-like data processing language on Hadoop (an open source implementation of MapReduce), shows the increasing importance of processing structured data on MapReduce platforms. MapReduce offers several attractive properties such as the use of low-cost hardware, fault-tolerance, scalability, and elasticity. However, these advantages have required a substantial performance sacrifice.

In this paper we introduce Clydesdale, a novel system for structured data processing on Hadoop – a popular implementation of MapReduce. We show that Clydesdale provides more than an order of magnitude in performance improvements compared to existing approaches without requiring any changes to the underlying platform. Clydesdale is aimed at workloads where the data fits a star schema. It draws on column oriented storage, tailored join-plans, and multicore execution strategies and carefully fits them into the constraints of a typical MapReduce platform. Using the star schema benchmark, we show that Clydesdale is on average 38x faster than Hive. This demonstrates that MapReduce in general, and Hadoop in particular, is a far more compelling platform for structured data processing than previous results suggest. (emphasis in original)

The authors make clear that Clydesdale is a research prototype and lacks many features needed for full production use.

But an order of magnitude and sometimes two orders of magnitude improvement should pique your interest in helping with such improvements.

I find the “re-use” of existing Hadoop infrastructure particularly exciting.

Gains of an order of magnitude or more with current approaches are a signal that someone is thinking about the issues and not simply throwing horsepower at the problem.
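If I read the abstract correctly, part of the “tailored join-plans” story is joining the large fact table against small, replicated dimension tables in the map phase, so the join itself never hits the shuffle. A toy Python sketch of that idea (invented schemas and data; Clydesdale adds column-oriented storage and multi-core execution on top):

```python
# A toy map-side (replicated) hash join for a star schema. Schemas and data are
# invented; Clydesdale itself layers column-oriented storage, generated join
# plans and multi-core execution on top of this basic idea.
from collections import defaultdict

# Small dimension tables: in a real job these would be shipped to every mapper
# (for example via the distributed cache) and loaded into memory once per task.
customers = {"c1": "EUROPE", "c2": "ASIA"}       # cust_key -> region
parts     = {"p1": "STEEL",  "p2": "COPPER"}     # part_key -> material

# Large fact table: streamed through the mappers one row at a time.
lineorder = [
    ("c1", "p1", 100.0),
    ("c2", "p1", 250.0),
    ("c1", "p2", 75.0),
]

def mapper(rows):
    """Join each fact row against the in-memory dimensions; no shuffle is needed
    for the join itself, only the aggregation keys go to the reducers."""
    for cust_key, part_key, revenue in rows:
        region = customers.get(cust_key)
        material = parts.get(part_key)
        if region is not None and material is not None:
            yield (region, material), revenue

# "Reducer": sum revenue per (region, material) group.
totals = defaultdict(float)
for key, revenue in mapper(lineorder):
    totals[key] += revenue

for key in sorted(totals):
    print(key, totals[key])   # ('ASIA', 'STEEL') 250.0 first, then the EUROPE groups
```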

I first saw this in NoSQL Weekly, Issue 116.

February 15, 2013

The Family of MapReduce and Large Scale Data Processing Systems

Filed under: Hadoop,MapReduce — Patrick Durusau @ 2:03 pm

The Family of MapReduce and Large Scale Data Processing Systems by Sherif Sakr, Anna Liu, Ayman G. Fayoumi.

Abstract:

In the last two decades, the continuous increase of computational power has produced an overwhelming flow of data which has called for a paradigm shift in the computing architecture and large scale data processing mechanisms. MapReduce is a simple and powerful programming model that enables easy development of scalable parallel applications to process vast amounts of data on large clusters of commodity machines. It isolates the application from the details of running a distributed program such as issues on data distribution, scheduling and fault tolerance. However, the original implementation of the MapReduce framework had some limitations that have been tackled by many research efforts in several followup works after its introduction. This article provides a comprehensive survey for a family of approaches and mechanisms of large scale data processing mechanisms that have been implemented based on the original idea of the MapReduce framework and are currently gaining a lot of momentum in both research and industrial communities. We also cover a set of introduced systems that have been implemented to provide declarative programming interfaces on top of the MapReduce framework. In addition, we review several large scale data processing systems that resemble some of the ideas of the MapReduce framework for different purposes and application scenarios. Finally, we discuss some of the future research directions for implementing the next generation of MapReduce-like solutions.

At twenty-seven pages and one hundred and thirty-five references, this is one for the weekend and perhaps beyond!

Definitely a paper to master if you are interested in seeing the next generation of MapReduce techniques before your competition.

I first saw this at Alex Popescu’s The Family of MapReduce and Large Scale Data Processing Systems.

February 13, 2013

Imperative and Declarative Hadoop: TPC-H in Pig and Hive

Filed under: Hadoop,Hive,MapReduce,Pig,TPC-H — Patrick Durusau @ 11:41 am

Imperative and Declarative Hadoop: TPC-H in Pig and Hive by Russell Jurney.

From the post:

According to the Transaction Processing Performance Council, TPC-H is:

The TPC Benchmark™H (TPC-H) is a decision support benchmark. It consists of a suite of business oriented ad-hoc queries and concurrent data modifications. The queries and the data populating the database have been chosen to have broad industry-wide relevance. This benchmark illustrates decision support systems that examine large volumes of data, execute queries with a high degree of complexity, and give answers to critical business questions.

TPC-H was implemented for Hive in HIVE-600 and for Pig in PIG-2397 by Hortonworks intern Jie Li. In going over this work, I was struck by how it outlined differences between Pig and SQL.

There seems to be a tendency for simple SQL to provide greater clarity than Pig. At some point, as the TPC-H queries become more demanding, complex SQL seems to have less clarity than the comparable Pig. Let’s take a look.
(emphasis in original)

A refresher in the lesson that which solution you need, in this case Hive or Pig, depends upon your requirements.

Use either one blindly at the risk of poor performance or failing to meet other requirements.

Data deduplication tactics with HDFS and MapReduce [Contractor Plagiarism?]

Filed under: Deduplication,HDFS,MapReduce,Plagiarism — Patrick Durusau @ 11:29 am

Data deduplication tactics with HDFS and MapReduce

From the post:

As the amount of data continues to grow exponentially, there has been increased focus on stored data reduction methods. Data compression, single instance store and data deduplication are among the common techniques employed for stored data reduction.

Deduplication often refers to elimination of redundant subfiles (also known as chunks, blocks, or extents). Unlike compression, deduplication does not change the data; it eliminates the storage consumed by identical copies of the same data. Data deduplication offers significant advantages in terms of reduced storage and network bandwidth, and promises increased scalability.

From a simplistic use case perspective, we can see application in removing duplicates in Call Detail Records (CDRs) for a Telecom carrier. Similarly, we may apply the technique to optimize network traffic carrying the same data packets.

Covers five (5) tactics (a rough sketch of the first follows the list):

  1. Using HDFS and MapReduce only
  2. Using HDFS and HBase
  3. Using HDFS, MapReduce and a Storage Controller
  4. Using Streaming, HDFS and MapReduce
  5. Using MapReduce with Blocking techniques
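As a rough illustration of the first tactic (HDFS and MapReduce only): the map phase fingerprints each block with a cryptographic hash, the shuffle groups identical fingerprints, and the reduce phase keeps one stored copy per fingerprint. A Python sketch with invented records:

```python
# A rough sketch of tactic 1: fingerprint each block with a hash in the map
# phase, group identical fingerprints in the shuffle, keep one stored copy per
# fingerprint in the reduce phase. Records and block layout are invented.
import hashlib
from collections import defaultdict

blocks = [
    ("fileA", 0, b"hello world"),
    ("fileB", 0, b"hello world"),    # duplicate content in a different file
    ("fileA", 1, b"unique tail"),
]

# Map: key = content hash, value = (file, offset).
mapped = [(hashlib.sha256(data).hexdigest(), (name, offset))
          for name, offset, data in blocks]

# Shuffle: group by fingerprint.
grouped = defaultdict(list)
for digest, ref in mapped:
    grouped[digest].append(ref)

# Reduce: store one copy per fingerprint, record references for the duplicates.
for digest, refs in grouped.items():
    keeper = refs[0]
    print("store", digest[:12], "once, from", keeper)
    for dup in refs[1:]:
        print("  duplicate:", dup, "->", digest[:12])
```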

In these times of the “Great Sequestration,” how much are you spending on duplicated contractor documentation?

You do get electronic forms of documentation. Yes?

Not that difficult to document prior contractor self-plagiarism. Teasing out what you “mistakenly” paid for it may be harder.

Question: Would you rather find out now and correct or have someone else find out?

PS: For the ambitious in government employment. You might want to consider how discovery of contractor self-plagiarism reflects on your initiative and dedication to “good” government.

February 9, 2013

Production-Ready Hadoop 2 Distribution

Filed under: Hadoop,MapReduce,Marketing — Patrick Durusau @ 8:21 pm

WANdisco Launches Production-Ready Hadoop 2 Distribution

From the post:

WANdisco today announced it has made its WANdisco Distro (WDD) available for free download.

WDD is a production-ready version powered by Apache Hadoop 2 based on the most recent release, including the latest fixes. These certified Apache Hadoop binaries undergo the same quality assurance process as WANdisco’s enterprise software solutions.

The WDD team is led by Dr. Konstantin Boudnik, who is one of the original Hadoop developers, has been an Apache Hadoop committer since 2009 and served as a Hadoop architect with Yahoo! This team of Hadoop development, QA and support professionals is focused on software quality. WANdisco’s Apache Hadoop developers have been involved in the open source project since its inception and have the authority within the Apache Hadoop community to make changes to the code base, for fast fixes and enhancements.

By adding its active-active replication technology to WDD, WANdisco is able to eliminate the single points of failure (SPOFs) and performance bottlenecks inherent in Hadoop. With this technology, the same data is simultaneously readable and writable on every server, and every server is actively supporting user requests. There are no passive or standby servers with complex administration procedures required for failover and recovery.

WANdisco (Somehow the quoted post failed to include the link.)

Download WANdisco Distro (WDD)

Two versions for download:

64-bit WDD v3.1.0 for RHEL 6.1 and above

64-bit WDD v3.1.0 for CentOS 6.1 and above

You do have to register and are emailed a download link.

I know marketing people have a formula that if you pester 100 people you will make N sales.

I suppose, but if your product is compelling enough, people are going to be calling you.

When was the last time you heard of a drug dealer making cold calls to sell dope?

February 7, 2013

A Quick Guide to Hadoop Map-Reduce Frameworks

Filed under: Hadoop,Hive,MapReduce,Pig,Python,Scalding,Scoobi,Scrunch,Spark — Patrick Durusau @ 10:45 am

A Quick Guide to Hadoop Map-Reduce Frameworks by Alex Popescu.

Alex has assembled links to guides for the major MapReduce frameworks.

Thanks Alex!

February 5, 2013

MapReduce Algorithms

Filed under: Algorithms,MapReduce,Texts — Patrick Durusau @ 4:55 pm

MapReduce Algorithms by Bill Bejeck.

Bill is writing a series of posts on implementing the algorithms given in pseudo-code in Data-Intensive Text Processing with MapReduce.

  1. Working Through Data-Intensive Text Processing with MapReduce
  2. Working Through Data-Intensive Text Processing with MapReduce – Local Aggregation Part II
  3. Calculating A Co-Occurrence Matrix with Hadoop
  4. MapReduce Algorithms – Order Inversion
  5. Secondary Sorting

Another resource to try with your Hadoop Sandbox install!
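For a taste of what the first two posts cover, here is a rough Python rendering of the local aggregation (in-mapper combining) pattern; Bill’s series and the book work through the Java versions on the real Hadoop API.

```python
# Local aggregation (in-mapper combining) for word count: aggregate inside the
# mapper and emit partial counts when the mapper finishes, instead of emitting
# one (word, 1) pair per token. A rough Python rendering of the pattern.
from collections import Counter

def plain_mapper(lines):
    for line in lines:
        for word in line.split():
            yield word, 1                 # one pair per token: a heavy shuffle

def combining_mapper(lines):
    counts = Counter()                    # per-mapper associative array
    for line in lines:
        counts.update(line.split())
    for word, n in counts.items():        # emit once, at "close" time
        yield word, n

lines = ["to be or not to be", "to map or to reduce"]
print(len(list(plain_mapper(lines))))       # 11 intermediate pairs
print(len(list(combining_mapper(lines))))   # 6 pairs, same reduce-side totals
```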

I first saw this at Alex Popescu’s 3 MapReduce and Hadoop Links: Secondary Sorting, Hadoop-Based Letterpress, and Hadoop Vaidya.

Understanding MapReduce via Boggle [Topic Map Game Suggestions?]

Filed under: Hadoop,MapReduce — Patrick Durusau @ 4:54 pm

Understanding MapReduce via Boggle by Jesse Anderson.

From the post:

Graph theory is a growing part of Big Data. Using graph theory, we can find relationships in networks.

MapReduce is a great platform for traversing graphs. Therefore, one can leverage the power of an Apache Hadoop cluster to efficiently run an algorithm on the graph.

One such graph problem is playing Boggle*. Boggle is played by rolling a group of 16 dice. Each player’s job is to find the greatest number of words spelled out by the dice. These dice are six-sided, with a single letter facing up.

Cool!
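As a rough idea of how the traversal maps onto MapReduce (not necessarily Jesse’s exact formulation): each round, the mapper extends every live path to its unvisited neighboring dice and the reducer prunes paths that are no longer prefixes of dictionary words. A toy Python version on a 2x2 board:

```python
# One way to phrase the Boggle traversal as repeated map + reduce rounds (a toy
# version, not necessarily Jesse's exact formulation). A 2x2 board and a three
# word dictionary stand in for the real 4x4 board and HDFS records.
BOARD = {(0, 0): "c", (0, 1): "a", (1, 0): "t", (1, 1): "r"}
WORDS = {"cat", "car", "art"}
PREFIXES = {w[:i] for w in WORDS for i in range(1, len(w) + 1)}

def neighbors(pos):
    r, c = pos
    return [p for p in BOARD
            if p != pos and abs(p[0] - r) <= 1 and abs(p[1] - c) <= 1]

def mapper(paths):
    """Extend each live path to every adjacent die it has not visited yet."""
    for visited, word in paths:
        for nxt in neighbors(visited[-1]):
            if nxt not in visited:
                yield visited + (nxt,), word + BOARD[nxt]

def reducer(candidates):
    """Keep paths that can still become words; report any completed words."""
    live, found = [], set()
    for visited, word in candidates:
        if word in WORDS:
            found.add(word)
        if word in PREFIXES:
            live.append((visited, word))
    return live, found

# Driver: one single-die path per position, then map/reduce rounds until done.
paths = [((pos,), letter) for pos, letter in BOARD.items()]
all_found = set()
while paths:
    paths, found = reducer(mapper(paths))
    all_found |= found
print(sorted(all_found))   # ['art', 'car', 'cat']
```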

Any suggestions for a game that illustrates topic maps?

Perhaps a “discovery” game that leads to more points, etc., as merges occur?

I first saw this at Alex Popescu’s 3 MapReduce and Hadoop Links: Secondary Sorting, Hadoop-Based Letterpress, and Hadoop Vaidya.

February 3, 2013

Content Based Image Retrieval (CBIR)

Filed under: Image Recognition,MapReduce — Patrick Durusau @ 6:57 pm

MapReduce Paves the Way for CBIR

From the post:

Recently, content based image retrieval (CBIR) has gained active research focus due to wide applications such as crime prevention, medicine, historical research and digital libraries.

As a research team from the School of Science, Information Technology and Engineering at the University of Ballarat, Australia, has suggested, image collections in databases in distributed locations over the Internet pose a challenge to retrieve images that are relevant to user queries efficiently and accurately.

The researchers say that with this in mind, it has become increasingly important to develop new CBIR techniques that are effective and scalable for real-time processing of very large image collections. To address this, they offer up a novel MapReduce neural network framework for CBIR from large data collections in a cloud environment.

Reference to the paper: MapReduce neural network framework for efficient content based image retrieval from large datasets in the cloud by Sitalakshmi Venkatraman. (In 2012 12th International Conference on Hybrid Intelligent Systems (HIS).)

Abstract:

Recently, content based image retrieval (CBIR) has gained active research focus due to wide applications such as crime prevention, medicine, historical research and digital libraries. With digital explosion, image collections in databases in distributed locations over the Internet pose a challenge to retrieve images that are relevant to user queries efficiently and accurately. It becomes increasingly important to develop new CBIR techniques that are effective and scalable for real-time processing of very large image collections. To address this, the paper proposes a novel MapReduce neural network framework for CBIR from large data collection in a cloud environment. We adopt natural language queries that use a fuzzy approach to classify the colour images based on their content and apply Map and Reduce functions that can operate in cloud clusters for arriving at accurate results in real-time. Preliminary experimental results for classifying and retrieving images from large data sets were quite convincing to carry out further experimental evaluations.

Sounds like the basis for a user-augmented index of visual content to me.

You?

January 26, 2013

DataFu: The WD-40 of Big Data

Filed under: DataFu,Hadoop,MapReduce,Pig — Patrick Durusau @ 1:42 pm

DataFu: The WD-40 of Big Data by Sam Shah.

From the post:

If Pig is the “duct tape for big data“, then DataFu is the WD-40. Or something.

No, seriously, DataFu is a collection of Pig UDFs for data analysis on Hadoop. DataFu includes routines for common statistics tasks (e.g., median, variance), PageRank, set operations, and bag operations.

It’s helpful to understand the history of the library. Over the years, we developed several routines that were used across LinkedIn and were thrown together into an internal package we affectionately called “littlepiggy.” The unfortunate part, and this is true of many such efforts, is that the UDFs were ill-documented, ill-organized, and easily got broken when someone made a change. Along came PigUnit, which allowed UDF testing, so we spent the time to clean up these routines by adding documentation and rigorous unit tests. From this “datafoo” package, we thought this would help the community at large, and there you have DataFu.

So what can this library do for you? Let’s look at one of the classical examples that showcase the power and flexibility of Pig: sessionizing a click stream.

DataFu

The UDF bag and set operations are likely to be of particular interest.
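For readers who have not met the term, “sessionizing” means grouping a click stream by user and starting a new session whenever the gap between successive events exceeds a timeout. DataFu does this inside Pig; the Python sketch below only shows the idea, and the 30-minute timeout and record layout are assumptions.

```python
# Sessionizing in miniature: group clicks by user, sort by time, and start a new
# session whenever the gap between events exceeds a timeout. The 30-minute
# timeout and the record layout are assumptions; DataFu does this inside Pig.
from collections import defaultdict
from datetime import datetime, timedelta

TIMEOUT = timedelta(minutes=30)

clicks = [   # (user, timestamp, url)
    ("u1", "2013-01-26 10:00:00", "/home"),
    ("u1", "2013-01-26 10:10:00", "/search"),
    ("u1", "2013-01-26 11:30:00", "/home"),     # gap > 30 minutes: new session
    ("u2", "2013-01-26 10:05:00", "/home"),
]

by_user = defaultdict(list)                     # the reduce key is the user
for user, ts, url in clicks:
    by_user[user].append((datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"), url))

for user, events in sorted(by_user.items()):
    events.sort()
    sessions, current = [], [events[0]]
    for prev, event in zip(events, events[1:]):
        if event[0] - prev[0] > TIMEOUT:
            sessions.append(current)
            current = []
        current.append(event)
    sessions.append(current)
    print(user, "->", len(sessions), "session(s)")   # u1 -> 2, u2 -> 1
```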

January 18, 2013

Hortonworks Data Platform 1.2 Available Now!

Filed under: Apache Ambari,Hadoop,HBase,Hortonworks,MapReduce — Patrick Durusau @ 7:18 pm

Hortonworks Data Platform 1.2 Available Now! by Kim Rose.

From the post:

Hortonworks Data Platform (HDP) 1.2, the industry’s only complete 100-percent open source platform powered by Apache Hadoop is available today. The enterprise-grade Hortonworks Data Platform includes the latest version of Apache Ambari for comprehensive management, monitoring and provisioning of Apache Hadoop clusters. By also introducing additional new capabilities for improving security and ease of use, HDP delivers an enterprise-class distribution of Apache Hadoop that is endorsed and adopted by some of the largest vendors in the IT ecosystem.

Hortonworks continues to drive innovation through a range of Hadoop-related projects, packaging the most enterprise-ready components, such as Ambari, into the Hortonworks Data Platform. Powered by an Apache open source community, Ambari represents the forefront of innovation in Apache Hadoop management. Built on Apache Hadoop 1.0, the most stable and reliable code available today, HDP 1.2 improves the ease of enterprise adoption for Apache Hadoop with comprehensive management and monitoring, enhanced connectivity to high-performance drivers, and increased enterprise-readiness of Apache HBase, Apache Hive and Apache HCatalog projects.

The Hortonworks Data Platform 1.2 features a number of new enhancements designed to improve the enterprise viability of Apache Hadoop, including:

  • Simplified Hadoop Operations—Using the latest release of Apache Ambari, HDP 1.2 now provides both cluster management and the ability to zoom into cluster usage and performance metrics for jobs and tasks to identify the root cause of performance bottlenecks or operations issues. This enables Hadoop users to identify issues and optimize future job processing.
  • Improved Security and Multi-threaded Query—HDP 1.2 provides an enhanced security architecture and pluggable authentication model that controls access to Hive tables and the metastore. In addition, HDP 1.2 improves scalability by supporting multiple concurrent query connections to Hive from business intelligence tools and Hive clients.
  • Integration with High-performance Drivers Built for Big Data—HDP 1.2 empowers organizations with a trusted and reliable ODBC connector that enables the integration of current systems with high-performance drivers built for big data. The ODBC driver enables integration with reporting or visualization components through a SQL engine built into the driver. Hortonworks has partnered with Simba to deliver a trusted, reliable high-performance ODBC connector that is enterprise ready and completely free.
  • HBase Enhancements—By including and testing HBase 0.94.2, HDP 1.2 delivers important performance and operational improvements for customers building and deploying highly scalable interactive applications using HBase.

There goes the weekend!

January 16, 2013

Apache Hive 0.10.0 is Now Available

Filed under: Hadoop,Hive,MapReduce — Patrick Durusau @ 7:57 pm

Apache Hive 0.10.0 is Now Available by Ashutosh Chauhan.

From the post:

We are pleased to announce the release of Apache Hive version 0.10.0. More than 350 JIRA issues have been fixed with this release. A few of the most important fixes include:

Cube and Rollup: Hive now has support for creating cubes with rollups. Thanks to Namit!

List Bucketing: This is an optimization that lets you better handle skew in your tables. Thanks to Gang!

Better Windows Support: Several Hive 0.10.0 fixes support running Hive natively on Windows. There is no more cygwin dependency. Thanks to Kanna!

‘Explain’ Adds More Info: Now you can do an explain dependency and the explain plan will contain all the tables and partitions touched upon by the query. Thanks to Sambavi!

Improved Authorization: The metastore can now optionally do authorization checks on the server side instead of on the client, providing you with a better security profile. Thanks to Sushanth!

Faster Simple Queries: Some simple queries that don’t require aggregations, and therefore MapReduce jobs, can now run faster. Thanks to Navis!

Better YARN Support: This release contains additional work aimed at making Hive work well with Hadoop YARN. While not all test cases are passing yet, there has been a lot of good progress made with this release. Thanks to Zhenxiao!

Union Optimization: Hive queries with unions will now result in a lower number of MapReduce jobs under certain conditions. Thanks to Namit!

Undo Your Drop Table: While not really truly ‘undo’, you can now reinstate your table after dropping it. Thanks to Andrew!

Show Create Table: This lets you see how you created your table. Thanks to Feng!

Support for Avro Data: Hive now has built-in support for reading/writing Avro data. Thanks to Jakob!

Skewed Joins: Hive’s support for joins involving skewed data is now improved. Thanks to Namit!

Robust Connection Handling at the Metastore Layer: Connection handling between a metastore client and server and also between a metastore server and the database layer has been improved. Thanks to Bhushan and Jean!

More Statistics: It’s now possible to collect and store scalar-valued statistics for your tables and partitions. This will enable better query planning in upcoming releases. Thanks to Shreepadma!

Better-Looking HWI: HWI now uses the Bootstrap JavaScript library. It looks really slick. Thanks to Hugo!

If you are excited about some of these new features, I recommend that you download hive-0.10 from: Hive 0.10 Release.

The full Release Notes are available here: Hive 0.10.0 Release Notes

This release saw contributions from many different people. We have numerous folks reporting bugs, writing patches for new features, fixing bugs, testing patches, helping users on mailing lists etc. We would like to give a big thank you to everyone who made hive-0.10 possible.

-Ashutosh Chauhan

A long quote but it helps to give credit where credit is due.

January 9, 2013

A Guide to Python Frameworks for Hadoop

Filed under: Hadoop,MapReduce,Python — Patrick Durusau @ 12:03 pm

A Guide to Python Frameworks for Hadoop by Uri Laserson.

From the post:

I recently joined Cloudera after working in computational biology/genomics for close to a decade. My analytical work is primarily performed in Python, along with its fantastic scientific stack. It was quite jarring to find out that the Apache Hadoop ecosystem is primarily written in/for Java. So my first order of business was to investigate some of the options that exist for working with Hadoop from Python.

In this post, I will provide an unscientific, ad hoc review of my experiences with some of the Python frameworks that exist for working with Hadoop, including:

  • Hadoop Streaming
  • mrjob
  • dumbo
  • hadoopy
  • pydoop
  • and others

Ultimately, in my analysis, Hadoop Streaming is the fastest and most transparent option, and the best one for text processing. mrjob is best for rapidly working on Amazon EMR, but incurs a significant performance penalty. dumbo is convenient for more complex jobs (objects as keys; multistep MapReduce) without incurring as much overhead as mrjob, but it’s still slower than Streaming.

Read on for implementation details, performance comparisons, and feature comparisons.

A non-word count Hadoop example? Who would have thought? 😉

Enjoy!
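For a sense of what the higher-level frameworks look like, here is the canonical word count written against mrjob’s standard pattern (my sketch, not code from Uri’s post); the same job in raw Streaming needs separate mapper and reducer scripts plus the hadoop command-line incantation.

```python
# The canonical word count written against mrjob's standard pattern (a sketch,
# not code from the post). Save as wordcount.py and run locally with
#   python wordcount.py input.txt
# or submit to a cluster or EMR with the -r hadoop / -r emr runners.
from mrjob.job import MRJob

class MRWordCount(MRJob):

    def mapper(self, _, line):
        # For plain text input the key is ignored and the value is one line.
        for word in line.split():
            yield word.lower(), 1

    def combiner(self, word, counts):
        # Optional local aggregation before the shuffle.
        yield word, sum(counts)

    def reducer(self, word, counts):
        yield word, sum(counts)

if __name__ == "__main__":
    MRWordCount.run()
```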

January 8, 2013

Designing algorithms for Map Reduce

Filed under: Algorithms,BigData,Hadoop,MapReduce — Patrick Durusau @ 11:48 am

Designing algorithms for Map Reduce by Ricky Ho.

From the post:

Since the emergence of the Hadoop implementation, I have been trying to morph existing algorithms from various areas into the map/reduce model. The result is pretty encouraging and I’ve found Map/Reduce is applicable in a wide spectrum of application scenarios.

So I want to write down my findings but then found the scope is too broad and also I haven’t spent enough time to explore different problem domains. Finally, I realize that there is no way for me to completely cover what Map/Reduce can do in all areas, so I just dump out what I know at this moment over the long weekend when I have an extra day.

Notice that Map/Reduce is good for “data parallelism”, which is different from “task parallelism”. Here is a description about their difference and a general parallel processing design methodology.

I’ll cover the abstract Map/Reduce processing model below. For a detailed description of the implementation of the Hadoop framework, please refer to my earlier blog here.

A bit dated (2010) but still worth your time.

I missed its initial appearance, so I appreciated Ricky pointing back to it in MapReduce: Detecting Cycles in Network Graph.

You may also want to consult: Designing good MapReduce algorithms by Jeffrey Ullman.

MapReduce: Detecting Cycles in Network Graph [Merging Duplicate Identifiers]

Filed under: Giraph,MapReduce,Merging — Patrick Durusau @ 11:47 am

MapReduce: Detecting Cycles in Network Graph by Ricky Ho.

From the post:

I recently received an email from a reader of my blog on Map/Reduce algorithm design, asking how to detect whether a graph is acyclic using Map/Reduce. I think this is an interesting problem and can imagine a wide range of applications for it.

Although I haven’t solved this exact problem in the past, I’d like to sketch out my thoughts on a straightforward approach, which may not be highly optimized. My goal is to invite other readers who have solved this problem to share their tricks.

To define the problem: Given a simple directed graph, we want to tell whether it contains any cycles.

Relevant to the processing of identifiers in topic maps, which may occur on more than one topic (prior to merging).

What is your solution in a MapReduce context?
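For the merging half of that question, one candidate answer (not Ricky’s cycle-detection algorithm, and certainly not the only approach) is iterative label propagation: treat identifiers as nodes, shared use as edges, and let each node adopt the smallest label among itself and its neighbors until nothing changes. Each round is one map/reduce pass; a local Python sketch with invented identifiers:

```python
# Iterative label propagation for grouping identifiers that should merge: nodes
# are identifiers, edges say "these two were used for the same subject". Each
# round (one map/reduce pass on a cluster) every node adopts the smallest label
# seen among itself and its neighbors; a fixed point gives the merge groups.
from collections import defaultdict

edges = [("idA", "idB"), ("idB", "idC"), ("idX", "idY")]   # invented identifiers

adjacency = defaultdict(set)
labels = {}
for a, b in edges:
    adjacency[a].add(b)
    adjacency[b].add(a)
    labels[a], labels[b] = a, b          # each node starts labelled by itself

changed = True
while changed:                           # each iteration is one map/reduce round
    changed = False
    # "Map": every node sends its current label to itself and its neighbors.
    messages = defaultdict(list)
    for node, label in labels.items():
        messages[node].append(label)
        for neighbor in adjacency[node]:
            messages[neighbor].append(label)
    # "Reduce": each node keeps the smallest label it received.
    for node, received in messages.items():
        smallest = min(received)
        if smallest < labels[node]:
            labels[node] = smallest
            changed = True

groups = defaultdict(list)
for node, label in labels.items():
    groups[label].append(node)
print({label: sorted(members) for label, members in groups.items()})
# {'idA': ['idA', 'idB', 'idC'], 'idX': ['idX', 'idY']}
```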

January 5, 2013

Apache Crunch

Filed under: Cascading,Hive,MapReduce,Pig — Patrick Durusau @ 7:50 am

Apache Crunch: A Java Library for Easier MapReduce Programming by Josh Wills.

From the post:

Apache Crunch (incubating) is a Java library for creating MapReduce pipelines that is based on Google’s FlumeJava library. Like other high-level tools for creating MapReduce jobs, such as Apache Hive, Apache Pig, and Cascading, Crunch provides a library of patterns to implement common tasks like joining data, performing aggregations, and sorting records. Unlike those other tools, Crunch does not impose a single data type that all of its inputs must conform to. Instead, Crunch uses a customizable type system that is flexible enough to work directly with complex data such as time series, HDF5 files, Apache HBase tables, and serialized objects like protocol buffers or Avro records.

Crunch does not try to discourage developers from thinking in MapReduce, but it does try to make thinking in MapReduce easier to do. MapReduce, for all of its virtues, is the wrong level of abstraction for many problems: most interesting computations are made up of multiple MapReduce jobs, and it is often the case that we need to compose logically independent operations (e.g., data filtering, data projection, data transformation) into a single physical MapReduce job for performance reasons.

Essentially, Crunch is designed to be a thin veneer on top of MapReduce — with the intention being not to diminish MapReduce’s power (or the developer’s access to the MapReduce APIs) but rather to make it easy to work at the right level of abstraction for the problem at hand.

Although Crunch is reminiscent of the venerable Cascading API, their respective data models are very different: one simple common-sense summary would be that folks who think about problems as data flows prefer Crunch and Pig, and people who think in terms of SQL-style joins prefer Cascading and Hive.

Brief overview of Crunch and an example (word count) application.

Definitely a candidate for your “big data” tool belt.

December 17, 2012

Apache Ambari: Hadoop Operations, Innovation, and Enterprise Readiness

Filed under: Apache Ambari,Hadoop,MapReduce — Patrick Durusau @ 4:23 pm

Apache Ambari: Hadoop Operations, Innovation, and Enterprise Readiness by Shaun Connolly

From the post:

Over the course of 2012, through Hortonworks’ leadership within the Apache Ambari community we have seen the rapid creation of an enterprise-class management platform required for enabling Apache Hadoop to be an enterprise viable data platform. Hortonworks engineers and the broader Ambari community have been working hard on their latest release, and we’d like to highlight the exciting progress that’s been made to Ambari, a 100% open and free solution that delivers the features required from an enterprise-class management platform for Apache Hadoop.

Why is the open source Ambari management platform important?

For Apache Hadoop to be an enterprise viable platform it not only needs the Data Services that sit atop core Hadoop (such as Pig, Hive, and HBase), but it also needs the Management Platform to be developed in an open and free manner. Ambari is a key operational component within the Hortonworks Data Platform (HDP), which helps make Hadoop deployments for our customers and partners easier and more manageable.

Stability and ease of management are two key requirements for enterprise adoption of Hadoop and Ambari delivers on both of these. Moreover, the rate at which this project is innovating is very exciting. In under a year, the community has accomplished what has taken years to complete for other solutions. As expected the “ship early and often” philosophy demonstrates innovation and helps encourage a vibrant and widespread following.

A reminder that tools can’t just be cool or clever.

Tools must fit within enterprise contexts where “those who lead from behind” are neither cool nor clever. But they do pay the bills and so are entitled to predictable and manageable outcomes.

Maybe. 😉 But that is the usual trade-off and if Apache Ambari helps Hadoop meet their requirements, so much the better for Hadoop.

December 14, 2012

How-To: Run a MapReduce Job in CDH4

Filed under: Cloudera,Hadoop,MapReduce — Patrick Durusau @ 2:56 pm

How-To: Run a MapReduce Job in CDH4 by Sandy Ryza.

From the post:

This is the first post in a series that will get you going on how to write, compile, and run a simple MapReduce job on Apache Hadoop. The full code, along with tests, is available at http://github.com/cloudera/mapreduce-tutorial. The program will run on either MR1 or MR2.

We’ll assume that you have a running Hadoop installation, either locally or on a cluster, and your environment is set up correctly so that typing “hadoop” into your command line gives you some notes on usage. Detailed instructions for installing CDH, Cloudera’s open-source, enterprise-ready distro of Hadoop and related projects, are available here: https://ccp.cloudera.com/display/CDH4DOC/CDH4+Installation. We’ll also assume you have Maven installed on your system, as this will make compiling your code easier. Note that Maven is not a strict dependency; we could also compile using Java on the command line or with an IDE like Eclipse.

The Use Case

There’s been a lot of brawling on our pirate ship recently. Not so rarely, one of the mates will punch another one in the mouth, knocking a tooth out onto the deck. Our poor sailors will wake up the next day with an empty bottle of rum, wondering who’s responsible for the gap between their teeth. All this violence has gotten out of hand, so as a deterrent, we’d like to provide everyone with a list of everyone that’s ever left them with a gap. Luckily, we’ve been able to set up a Flume source so that every time someone punches someone else, it gets written out as a line in a big log file in Hadoop. To turn this data into these lists, we need a MapReduce job that can 1) invert the mapping from attacker to their victim, 2) group by victims, and 3) eliminate duplicates.

Cool!

Imagine using the same technique while you watch the evening news!

On second thought, that would take too much data entry and be depressing.

Stick to the pirates!
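The pirate use case in miniature, for the impatient (the tutorial itself does this in Java against the real Hadoop API, and the one-punch-per-line log format below is an assumption): invert attacker to victim in the map phase, group by victim in the shuffle, and drop duplicates in the reduce phase.

```python
# A toy rendering of the tutorial's use case: invert attacker -> victim, group
# by victim, and de-duplicate attackers. Log format and names are invented.
from collections import defaultdict

log_lines = [
    "blackbeard bluebeard",
    "bluebeard blackbeard",
    "blackbeard bluebeard",     # repeat offender, must be de-duplicated
    "redbeard blackbeard",
]

# Map: emit (victim, attacker), inverting the relationship in the log.
mapped = []
for line in log_lines:
    attacker, victim = line.split()
    mapped.append((victim, attacker))

# Shuffle + reduce: group by victim, keep each attacker only once.
attackers_by_victim = defaultdict(set)
for victim, attacker in mapped:
    attackers_by_victim[victim].add(attacker)

for victim in sorted(attackers_by_victim):
    print(victim, "was punched by:", ", ".join(sorted(attackers_by_victim[victim])))
```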

December 10, 2012

Apache Gora

Filed under: BigData,Gora,Hadoop,HBase,MapReduce — Patrick Durusau @ 5:26 pm

Apache Gora

From the webpage:

What is Apache Gora?

The Apache Gora open source framework provides an in-memory data model and persistence for big data. Gora supports persisting to column stores, key value stores, document stores and RDBMSs, and analyzing the data with extensive Apache Hadoop MapReduce support.

Why Apache Gora?

Although there are various excellent ORM frameworks for relational databases, data modeling in NoSQL data stores differ profoundly from their relational cousins. Moreover, data-model agnostic frameworks such as JDO are not sufficient for use cases, where one needs to use the full power of the data models in column stores. Gora fills this gap by giving the user an easy-to-use in-memory data model and persistence for big data framework with data store specific mappings and built in Apache Hadoop support.

The overall goal for Gora is to become the standard data representation and persistence framework for big data. The roadmap of Gora can be grouped as follows.

  • Data Persistence : Persisting objects to Column stores such as HBase, Cassandra, Hypertable; key-value stores such as Voldemort, Redis, etc; SQL databases, such as MySQL, HSQLDB, flat files in local file system of Hadoop HDFS.
  • Data Access : An easy to use Java-friendly common API for accessing the data regardless of its location.
  • Indexing : Persisting objects to Lucene and Solr indexes, accessing/querying the data with Gora API.
  • Analysis : Accessing the data and making analysis through adapters for Apache Pig, Apache Hive and Cascading
  • MapReduce support : Out-of-the-box and extensive MapReduce (Apache Hadoop) support for data in the data store.

When writing about the Nutch 2.X development path, I discovered my omission of Gora from this blog. Apologies for having overlooked it until now.

December 6, 2012

Hadoop for Dummies

Filed under: Hadoop,MapReduce — Patrick Durusau @ 11:37 am

Hadoop for Dummies by Robert D. Schneider.

Courtesy of IBM, it’s what you think it is.

I am torn between thinking that educating c-suite executives is a good idea and wondering what sort of mis-impressions will follow from that education.

I suppose that could be an interesting sociology experiment. IT departments could forward the link to their c-suite executives and then keep track of the number and type of mis-impressions.

Collected at some common website by industry, these could create a baseline for c-suite explanations of technology. 😉

December 5, 2012

Impala Beta (0.3) + Cloudera Manager 4.1.2 [Get ’em While They’re Hot!]

Filed under: Cloudera,Hadoop,Impala,MapReduce — Patrick Durusau @ 5:46 am

Cloudera Impala Beta (version 0.3) and Cloudera Manager 4.1.2 Now Available by Vinithra Varadharajan.

If you are keeping your Hadoop ecosystem skills up to date, drop by Cloudera for the latest Impala beta and a new release of Cloudera Manager.

Vinithra reports that new releases of Impala are going to drop every two to four weeks.

You can either wait for the final release of Impala or read along and contribute to the final product with your testing and comments.

December 4, 2012

New to Hadoop

Filed under: Cloudera,Hadoop,MapReduce — Patrick Durusau @ 12:08 pm

New to Hadoop

Cloudera has organized a seven-step program for learning Hadoop!

  1. Read Up on Background
  2. Install Locally, Install a VM, or Spin Up on Cloud
  3. Explore Tutorials
  4. Get Trained Up
  5. Read Books
  6. Contribute!
  7. Participate!

It doesn’t list every possible resource, but all the ones listed are high quality.

Following this program will build a solid basis for exploring the Hadoop ecosystem on your own.

December 3, 2012

Cloudera – Videos from Strata + Hadoop World 2012

Filed under: Cloudera,Hadoop,MapReduce — Patrick Durusau @ 7:20 pm

Cloudera – Videos from Strata + Hadoop World 2012

The link is to the main resources page, where you can find many other videos and other materials.

If you want Strata + Hadoop World 2012 videos specifically, search on Hadoop World 2012.

As of today, that pulls up 41 entries. Should be enough to keep you occupied for a day or so. 😉

November 29, 2012

Abusing Cloud-Based Browsers for Fun and Profit [Passing Messages, Not Data]

Filed under: Cloud Computing,Javascript,MapReduce,Messaging — Patrick Durusau @ 12:58 pm

Abusing Cloud-Based Browsers for Fun and Profit by Vasant Tendulkar, Joe Pletcher, Ashwin Shashidharan, Ryan Snyder, Kevin Butler and William Enck.

Abstract:

Cloud services have become a cheap and popular means of computing. They allow users to synchronize data between devices and relieve low-powered devices from heavy computations. In response to the surge of smartphones and mobile devices, several cloud-based Web browsers have become commercially available. These “cloud browsers” assemble and render Web pages within the cloud, executing JavaScript code for the mobile client. This paper explores how the computational abilities of cloud browsers may be exploited through a Browser MapReduce (BMR) architecture for executing large, parallel tasks. We explore the computation and memory limits of four cloud browsers, and demonstrate the viability of BMR by implementing a client based on a reverse engineering of the Puffin cloud browser. We implement and test three canonical MapReduce applications (word count, distributed grep, and distributed sort). While we perform experiments on relatively small amounts of data (100 MB) for ethical considerations, our results strongly suggest that current cloud browsers are a viable source of arbitrary free computing at large scale.

Excellent work on extending the use of cloud-based browsers, whether you intend to use them for good or ill.

The use of messaging, as opposed to passing data, is particularly interesting.

Shouldn’t that work for the process of merging as well?

Comments/suggestions?

