Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

May 30, 2014

…Setting Up an R-Hadoop System

Filed under: Hadoop,Hadoop YARN,R — Patrick Durusau @ 2:30 pm

Step-by-Step Guide to Setting Up an R-Hadoop System by Yanchang Zhao.

From the post:

This is a step-by-step guide to setting up an R-Hadoop system. I have tested it both on a single computer and on a cluster of computers. Note that this process is for Mac OS X and some steps or settings might be different for Windows or Ubuntu.

What looks like an excellent post on installing R-Hadoop. It is written for Mac OS X and I have yet to confirm the installation on either Windows or Ubuntu.

I won’t be installing this on Windows, so if you can confirm any needed changes and post them, I would appreciate it.

I first saw this in a tweet by Gregory Piatetsky.

Apache™ Spark™ v1.0

Filed under: Hadoop,Spark,SQL — Patrick Durusau @ 1:30 pm

Apache™ Spark™ v1.0

From the post:

The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 170 Open Source projects and initiatives, announced today the availability of Apache Spark v1.0, the super-fast, Open Source large-scale data processing and advanced analytics engine.

Apache Spark has been dubbed a “Hadoop Swiss Army knife” for its remarkable speed and ease of use, allowing developers to quickly write applications in Java, Scala, or Python, using its built-in set of over 80 high-level operators. With Spark, programs can run up to 100x faster than Apache Hadoop MapReduce in memory.

“1.0 is a huge milestone for the fast-growing Spark community. Every contributor and user who’s helped bring Spark to this point should feel proud of this release,” said Matei Zaharia, Vice President of Apache Spark.

Apache Spark is well-suited for machine learning, interactive queries, and stream processing. It is 100% compatible with Hadoop’s Distributed File System (HDFS), HBase, Cassandra, as well as any Hadoop storage system, making existing data immediately usable in Spark. In addition, Spark supports SQL queries, streaming data, and complex analytics such as machine learning and graph algorithms out-of-the-box.

New in v1.0, Apache Spark offers strong API stability guarantees (backward-compatibility throughout the 1.X series), a new Spark SQL component for accessing structured data, as well as richer integration with other Apache projects (Hadoop YARN, Hive, and Mesos).

Spark Homepage.

A bit more technical note of the release from the project:

Spark 1.0.0 is a major release marking the start of the 1.X line. This release brings both a variety of new features and strong API compatibility guarantees throughout the 1.X line. Spark 1.0 adds a new major component, Spark SQL, for loading and manipulating structured data in Spark. It includes major extensions to all of Spark’s existing standard libraries (ML, Streaming, and GraphX) while also enhancing language support in Java and Python. Finally, Spark 1.0 brings operational improvements including full support for the Hadoop/YARN security model and a unified submission process for all supported cluster managers.

You can download Spark 1.0.0 as either a source package (5 MB tgz) or a prebuilt package for Hadoop 1 / CDH3, CDH4, or Hadoop 2 / CDH5 / HDP2 (160 MB tgz). Release signatures and checksums are available at the official Apache download site.
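If you want a quick taste of the release, a word count against the RDD API is still the shortest demo. Here is a minimal PySpark sketch of my own (not from the announcement); the master string and HDFS path are placeholders for your setup:

    from pyspark import SparkContext

    # A minimal word count against the RDD API that ships with Spark 1.0.
    # "local[2]" and the HDFS path are placeholders -- point them at your cluster.
    sc = SparkContext("local[2]", "WordCount")

    counts = (sc.textFile("hdfs:///tmp/input.txt")
                .flatMap(lambda line: line.split())
                .map(lambda word: (word, 1))
                .reduceByKey(lambda a, b: a + b))

    for word, count in counts.take(10):
        print(word, count)

    sc.stop()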

What a nice way to start the weekend!

I first saw this in a tweet by Sean Owen.

May 15, 2014

Diving into HDFS

Filed under: Hadoop,HDFS — Patrick Durusau @ 2:22 pm

Diving into HDFS by Julia Evans.

From the post:

Yesterday I wanted to start learning about how HDFS (the Hadoop Distributed File System) works internally. I knew that

  • It’s distributed, so one file may be stored across many different machines
  • There’s a namenode, which keeps track of where all the files are stored
  • There are data nodes, which contain the actual file data

But I wasn’t quite sure how to get started! I knew how to navigate the filesystem from the command line (hadoop fs -ls /, and friends), but not how to figure out how it works internally.

Colin Marc pointed me to this great library called snakebite which is a Python HDFS client. In particular he pointed me to the part of the code that reads file contents from HDFS. We’re going to tear it apart a bit and see what exactly it does!

Be cautious reading Julia’s post!

Her enthusiasm can be infectious. 😉

Seriously, I take Julia’s posts as the way CS topics are supposed to be explored. While there is hard work, there is also the thrill of discovery. Not a bad approach to have.
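If you want to poke at HDFS the same way, here is a minimal snakebite sketch. The namenode host/port and file path are assumptions, and the exact shape of the returned values can vary between snakebite versions:

    from snakebite.client import Client

    # Connect to the namenode (host and port are assumptions -- adjust for your cluster).
    client = Client("localhost", 8020)

    # Roughly what `hadoop fs -ls /` shows, but as plain Python dicts
    # returned over the namenode's RPC interface.
    for entry in client.ls(["/"]):
        print(entry["path"])

    # Stream a file's contents: the namenode says which datanodes hold each
    # block, and the client reads the blocks from them directly.
    for content in client.cat(["/tmp/example.txt"]):
        for chunk in content:
            print(chunk)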

May 4, 2014

Lock and Load Hadoop

Filed under: Hadoop,MapReduce — Patrick Durusau @ 10:35 am

How to Load Data for Hadoop into the Hortonworks Sandbox

Summary:

This tutorial describes how to load data into the Hortonworks sandbox.

The Hortonworks sandbox is a fully contained Hortonworks Data Platform (HDP) environment. The sandbox includes the core Hadoop components (HDFS and MapReduce), as well as all the tools needed for data ingestion and processing. You can access and analyze sandbox data with many Business Intelligence (BI) applications.

In this tutorial, we will load and review data for a fictitious web retail store in what has become an established use case for Hadoop: deriving insights from large data sources such as web logs. By combining web logs with more traditional customer data, we can better understand our customers, and also understand how to optimize future promotions and advertising.

“Big data” applications are fun to read about but aren’t really interesting until your data has been loaded.

If you don’t have the Hortonworks Sandbox you need to get it: Hortonworks Sandbox.

April 21, 2014

Hive 0.13 and Stinger!

Filed under: Hadoop,Hive,STINGER — Patrick Durusau @ 4:55 pm

Announcing Apache Hive 0.13 and Completion of the Stinger Initiative! by Harish Butani.

From the post:

The Apache Hive community has voted on and released version 0.13 today. This is a significant release that represents a major effort from over 70 members who worked diligently to close out over 1080 JIRA tickets.

Hive 0.13 also delivers the third and final phase of the Stinger Initiative, a broad community based initiative to drive the future of Apache Hive, delivering 100x performance improvements at petabyte scale with familiar SQL semantics. These improvements extend Hive beyond its traditional roots and bring true interactive SQL query to Hadoop.

Ultimately, over 145 developers representing 44 companies, from across the Apache Hive community contributed over 390,000 lines of code to the project in just 13 months, nearly doubling the Hive code base.

The three phases of this important project spanned Hive versions 0.11, 0.12 and 0.13. Additionally, the Apache Hive team coordinated this 0.13 release with the simultaneous release of Apache Tez 0.4. Tez’s DAG execution speeds Hive queries run on Tez.

Hive 0.13

Kudos to one and all!

Open source work at its very best!

April 17, 2014

Cloudera Live (beta)

Filed under: Cloudera,Hadoop,Hive,Impala,Oozie,Solr,Spark — Patrick Durusau @ 4:57 pm

Cloudera Live (beta)

From the webpage:

Try a live demo of Hadoop, right now.

Cloudera Live is a new way to get started with Apache Hadoop, online. No downloads, no installations, no waiting. Watch tutorial videos and work with real-world examples of the complete Hadoop stack included with CDH, Cloudera’s completely open source Hadoop platform, to:

  • Learn Hue, the Hadoop User Interface developed by Cloudera
  • Query data using popular projects like Apache Hive, Apache Pig, Impala, Apache Solr, and Apache Spark (new!)
  • Develop workflows using Apache Oozie

Great news for people interested in Hadoop!

Question: Will this become the default delivery model for test driving software and training?

Enjoy!

April 10, 2014

Apache Hadoop 2.4.0 Released!

Filed under: Hadoop,Hortonworks,MapReduce — Patrick Durusau @ 6:20 pm

Apache Hadoop 2.4.0 Released! by Arun Murthy.

From the post:

It gives me great pleasure to announce that the Apache Hadoop community has voted to release Apache Hadoop 2.4.0! Thank you to every single one of the contributors, reviewers and testers!

Hadoop 2.4.0 continues that momentum, with additional enhancements to both HDFS & YARN:

  • Support for Access Control Lists in HDFS (HDFS-4685)
  • Native support for Rolling Upgrades in HDFS (HDFS-5535)
  • Smooth operational upgrades with protocol buffers for HDFS FSImage (HDFS-5698)
  • Full HTTPS support for HDFS (HDFS-5305)
  • Support for Automatic Failover of the YARN ResourceManager (YARN-149) (a.k.a Phase 1 of YARN ResourceManager High Availability)
  • Enhanced support for new applications on YARN with Application History Server (YARN-321) and Application Timeline Server (YARN-1530)
  • Support for strong SLAs in YARN CapacityScheduler via Preemption (YARN-185)

And of course, see Arun’s post for more details or just jump to the download links.

Scalding 0.9: Get it while it’s hot!

Filed under: Hadoop,MapReduce,Scalding,Tweets — Patrick Durusau @ 6:11 pm

Scalding 0.9: Get it while it’s hot! by P. Oscar Boykin.

From the post:

It’s been just over two years since we open sourced Scalding and today we are very excited to release the 0.9 version. Scalding at Twitter powers everything from internal and external facing dashboards, to custom relevance and ad targeting algorithms, including many graph algorithms such as PageRank, approximate user cosine similarity and many more.

Oscar covers:

  • Joins
  • Input/output
    • Parquet Format
    • Avro
    • TemplateTap
  • Hadoop counters
  • Typed API
  • Matrix API

Or if you want something a bit more visual and just as enthusiastic, see the video of Oscar’s talk: basically the same content, but with Oscar live!

April 2, 2014

Apache Tajo

Filed under: Apache Tajo,Hadoop — Patrick Durusau @ 3:57 pm

Apache Tajo SQL-on-Hadoop engine now a top-level project by Derrick Harris.

From the post:

Apache Tajo, a relational database warehouse system for Hadoop, has graduated to top-level status within the Apache Software Foundation. It might be easy to overlook Tajo because its creators, committers and users are largely based in Korea — and because there’s a whole lot of similar technologies, including one developed at Facebook — but the project could be a dark horse in the race for mass adoption. Among Tajo’s lead contributors are an engineer from LinkedIn and members of the Hortonworks technical team, which suggests those companies see some value in it even among the myriad other options.

It is far too early to be choosing winners in the Hadoop ecosystem.

There are so many contenders, with their individual boosters, that if you don’t like the solutions offered today, wait a week or so, another one will pop up on the horizon.

Which isn’t a bad thing. There isn’t any reason to think IT has uncovered the best data structures or algorithms for your data, any more than you would have thought so twenty years ago.

The caution I would offer is to hold tightly to your requirements and not those of some solution. Compromise may be necessary on your part, but fully understand what you are giving up and why.

The only utility that software can have, for any given user, is that it performs some task they require to be performed. For vendors, adopters, and promoters, software has other utilities, which are unlikely to interest you.

Hortonworks Data Platform 2.1

Filed under: Apache Ambari,Falcon,Hadoop,Hadoop YARN,Hive,Hortonworks,Knox Gateway,Solr,Storm,Tez — Patrick Durusau @ 2:49 pm

Hortonworks Data Platform 2.1 by Jim Walker.

From the post:

The pace of innovation within the Apache Hadoop community is truly remarkable, enabling us to announce the availability of Hortonworks Data Platform 2.1, incorporating the very latest innovations from the Hadoop community in an integrated, tested, and completely open enterprise data platform.

A VM available now, full releases to follow later in April.

Just grabbing the headings from Jim’s post:

The Stinger Initiative: Apache Hive, Tez and YARN for Interactive Query

Data Governance with Apache Falcon

Security with Apache Knox

Stream Processing with Apache Storm

Searching Hadoop Data with Apache Solr

Advanced Operations with Apache Ambari

See Jim’s post for some of the details and the VM for others.

March 16, 2014

Hadoop Alternative Hydra Re-Spawns as Open Source

Filed under: Hadoop,Hydra,Interface Research/Design,Stream Analytics — Patrick Durusau @ 7:31 pm

Hadoop Alternative Hydra Re-Spawns as Open Source by Alex Woodie.

From the post:

It may not have the name recognition or momentum of Hadoop. But Hydra, the distributed task processing system first developed six years ago by the social bookmarking service maker AddThis, is now available under an open source Apache license, just like Hadoop. And according to Hydra’s creator, the multi-headed platform is very good at some big data tasks that the yellow pachyderm struggles with–namely real-time processing of very big data sets.

Hydra is a big data storage and processing platform developed by Matt Abrams and his colleagues at AddThis (formerly Clearspring), the company that develops the Web server widgets that allow visitors to easily share something via their Twitter, Facebook, Pinterest, Google+, or Instagram accounts.

When AddThis started scaling up its business in the mid-2000s, it got flooded with data about what users were sharing. The company needed a scalable, distributed system that could deliver real-time analysis of that data to its customers. Hadoop wasn’t a feasible option at that time. So it built Hydra instead.

So, what is Hydra? In short, it’s a distributed task processing system that supports streaming and batch operations. It utilizes a tree-based data structure to store and process data across clusters with thousands of individual nodes. It features a Linux-based file system, which makes it compatible with ext3, ext4, or even ZFS. It also features a job/cluster management component that automatically allocates new jobs to the cluster and rebalances existing jobs. The system automatically replicates data and handles node failures.

The tree-based structure allows it to handle streaming and batch jobs at the same time. In his January 23 blog post announcing that Hydra is now open source, Chris Burroughs, a member of AddThis’ engineering department, provided this useful description of Hydra: “It ingests streams of data (think log files) and builds trees that are aggregates, summaries, or transformations of the data. These trees can be used by humans to explore (tiny queries), as part of a machine learning pipeline (big queries), or to support live consoles on websites (lots of queries).”

To learn a lot more about Hydra, see its GitHub page.
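Hydra’s real jobs are JSON specs run across a cluster, but the tree-building idea is easy to see in miniature. A toy, in-memory sketch (not Hydra’s API, just the concept of turning a stream of records into a tree of aggregates):

    from collections import defaultdict

    def tree():
        # Each node keeps a count plus children keyed by the next field value.
        return {"count": 0, "children": defaultdict(tree)}

    root = tree()

    events = [                       # pretend log stream: (day, service, status)
        ("2014-03-16", "share", "ok"),
        ("2014-03-16", "share", "error"),
        ("2014-03-16", "count", "ok"),
        ("2014-03-17", "share", "ok"),
    ]

    for event in events:
        node = root
        node["count"] += 1
        for field in event:          # each field becomes one level of the tree
            node = node["children"][field]
            node["count"] += 1

    # A "tiny query": how many share events on 2014-03-16?
    print(root["children"]["2014-03-16"]["children"]["share"]["count"])   # 2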

Another candidate for “real-time processing of very big data sets.”

The reflex is to applaud improvements in processing speed. But what sort of problems require that kind of speed? I know the usual suspects, modeling the weather, nuclear explosions, chemical reactions, but at some point, the processing ends and a human reader has to comprehend the results.

Better to get information to the human reader sooner rather than later, but there is a limit to the speed at which a user can understand the results of a computational process.

From a UI perspective, what research is there on how fast/slow information should be pushed at a user?

Could make the difference between an app that is just annoying and one that is truly useful.

I first saw this in a tweet by Joe Crobak.

March 13, 2014

Kite Software Development Kit

Filed under: Cloudera,Hadoop,Kite SDK,MapReduce — Patrick Durusau @ 7:12 pm

Kite Software Development Kit

From the webpage:

The Kite Software Development Kit (Apache License, Version 2.0), or Kite for short, is a set of libraries, tools, examples, and documentation focused on making it easier to build systems on top of the Hadoop ecosystem.

  • Codifies expert patterns and practices for building data-oriented systems and applications
  • Lets developers focus on business logic, not plumbing or infrastructure
  • Provides smart defaults for platform choices
  • Supports gradual adoption via loosely-coupled modules

Version 0.12.0 was released March 10, 2014.

Do note that unlike some “pattern languages,” these are legitimate patterns, based on expert practices. (There are “patterns” produced like Uncle Bilius (Harry Potter and the Deathly Hallows, Chapter Eight) after downing a bottle of firewhiskey. You should avoid such patterns.)

March 11, 2014

Data Science Challenge

Filed under: Challenges,Cloudera,Hadoop,MapReduce — Patrick Durusau @ 4:22 pm

Data Science Challenge

Some details from the registration page:

Prerequisite: Data Science Essentials (DS-200)
Schedule: Twice per year
Duration: Three months from launch date
Next Challenge Date: March 31, 2014
Language: English
Price: USD $600

From the webpage:

Cloudera will release a Data Science Challenge twice each year. Each bi-quarterly project is based on a real-world data science problem involving a large data set and is open to candidates for three months to complete. During the open period, candidates may work on their project individually and at their own pace.

Current Data Science Challenge

The new Data Science Challenge: Detecting Anomalies in Medicare Claims will be available starting March 31, 2014, and will cost USD $600.

In the U.S., Medicare reimburses private providers for medical procedures performed for covered individuals. As such, it needs to verify that the type of procedures performed and the cost of those procedures are consistent and reasonable. Finally, it needs to detect possible errors or fraud in claims for reimbursement from providers. You have been hired to analyze a large amount of data from Medicare and try to detect abnormal data — providers, areas, or patients with unusual procedures and/or claims.

Register for the challenge.

Build a Winning Model

CCP candidates compete against each other and against a benchmark set by a committee including some of the world’s elite data scientists. Participants who surpass evaluation benchmarks receive the CCP: Data Scientist credential.

Lead the Field

Those with the highest scores from each Challenge will have an opportunity to share their solutions and promote their work on cloudera.com and via press and social media outlets. All candidates retain the full rights to their own work and may leverage their models outside of the Challenge as they choose.
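For a sense of the kind of analysis the challenge calls for, here is a toy screening pass in pandas: flag providers whose average claim amount sits far from the norm for a procedure. The file and column names (provider_id, procedure_code, claim_amount) are made up for illustration; the real challenge data will differ:

    import pandas as pd

    claims = pd.read_csv("claims.csv")   # hypothetical extract of claim records

    # Average claim amount per provider, per procedure.
    per_provider = (claims.groupby(["procedure_code", "provider_id"])["claim_amount"]
                          .mean()
                          .reset_index(name="provider_mean"))

    # Mean and standard deviation of claim amounts per procedure.
    stats = (claims.groupby("procedure_code")["claim_amount"]
                   .agg(["mean", "std"])
                   .reset_index())

    merged = per_provider.merge(stats, on="procedure_code")
    merged["z_score"] = (merged["provider_mean"] - merged["mean"]) / merged["std"]

    # Providers more than three standard deviations above the norm deserve a closer look.
    suspicious = merged[merged["z_score"] > 3]
    print(suspicious.sort_values("z_score", ascending=False).head())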

Useful way to develop some street cred in data science.

March 8, 2014

Merge Mahout item based recommendations…

Filed under: Hadoop,Hive,Mahout,MapReduce,Recommendation — Patrick Durusau @ 8:08 pm

Merge Mahout item based recommendations results from different algorithms

From the post:

Apache Mahout is a machine learning library that leverages the power of Hadoop to implement machine learning through the MapReduce paradigm. One of the implemented algorithms is collaborative filtering, the most successful recommendation technique to date. The basic idea behind collaborative filtering is to analyze the actions or opinions of users to recommend items similar to the one the user is interacting with.

Similarity isn’t restricted to a particular measure or metric.

How similar is enough to be considered the same?

That is a question topic map designers must answer on a case by case basis.
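The original post gives the Mahout mechanics; as a hedged sketch of the merging step itself, here is one simple policy: normalize each algorithm’s scores, take a weighted combination, and apply a cutoff. The weights and cutoff are exactly that “how similar is enough?” judgment call:

    def normalize(recs):
        """recs: dict of item -> raw score; rescale scores into [0, 1]."""
        if not recs:
            return {}
        lo, hi = min(recs.values()), max(recs.values())
        span = (hi - lo) or 1.0
        return {item: (score - lo) / span for item, score in recs.items()}

    def merge(recs_a, recs_b, weight_a=0.6, weight_b=0.4, cutoff=0.5):
        a, b = normalize(recs_a), normalize(recs_b)
        combined = {}
        for item in set(a) | set(b):
            combined[item] = weight_a * a.get(item, 0.0) + weight_b * b.get(item, 0.0)
        # Keep only items that clear the threshold, best first.
        return sorted(((score, item) for item, score in combined.items() if score >= cutoff),
                      reverse=True)

    # Illustrative scores from two hypothetical Mahout runs (co-occurrence vs. log-likelihood).
    cooccurrence = {"item1": 12.0, "item2": 7.5, "item3": 3.0}
    loglikelihood = {"item2": 0.94, "item3": 0.88, "item4": 0.41}
    print(merge(cooccurrence, loglikelihood))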

February 27, 2014

Mortar PIG Cheat Sheet

Filed under: Hadoop,MapReduce,Pig — Patrick Durusau @ 11:28 am

Mortar PIG Cheat Sheet

From the cheatsheet:

We love Apache Pig for data processing—it’s easy to learn, it works with all kinds of data, and it plays well with Python, Java, and other popular languages. And, of course, Pig runs on Hadoop, so it’s built for high-scale data science.

Whether you’re just getting started with Pig or you’ve already written a variety of Pig scripts, this compact reference gathers in one place many of the tools you’ll need to make the most of your data using Pig 0.12.

Easier on the eyes than a one pager!

Not to mention being a good example of how to write and format a cheat sheet.

February 26, 2014

Secrets of Cloudera Support:…

Filed under: Cloudera,Hadoop,MapReduce,Solr — Patrick Durusau @ 3:50 pm

Secrets of Cloudera Support: Inside Our Own Enterprise Data Hub by Adam Warrington.

From the post:

Here at Cloudera, we are constantly pushing the envelope to give our customers world-class support. One of the cornerstones of this effort is the Cloudera Support Interface (CSI), which we’ve described in prior blog posts (here and here). Through CSI, our support team is able to quickly reason about a customer’s environment, search for information related to a case currently being worked, and much more.

In this post, I’m happy to write about a new feature in CSI, which we call Monocle Stack Trace.

Stack Trace Exploration with Search

Hadoop log messages and the stack traces in those logs are critical information in many of the support cases Cloudera handles. We find that our customer operation engineers (COEs) will regularly search for stack traces they find referenced in support cases to try to determine where else that stack trace has shown up, and in what context it would occur. This could be in the many sources we were already indexing as part of Monocle Search in CSI: Apache JIRAs, Apache mailing lists, internal Cloudera JIRAs, internal Cloudera mailing lists, support cases, Knowledge Base articles, Cloudera Community Forums, and the customer diagnostic bundles we get from Cloudera Manager.

It turns out that doing routine document searches for stack traces doesn’t always yield the best results. Stack traces are relatively long compared to normal search terms, so search indexes won’t always return the relevant results in the order you would expect. It’s also hard for a user to churn through the search results to figure out if the stack trace was actually an exact match in the document to figure out how relevant it actually is.

To solve this problem, we took an approach similar to what Google does when it wants to allow searching over a type that isn’t best suited for normal document search (such as images): we created an independent index and search result page for stack-trace searches. In Monocle Stack Trace, the search results show a list of unique stack traces grouped with every source of data in which unique stack trace was discovered. Each source can be viewed in-line in the search result page, or the user can go to it directly by following a link.

We also give visual hints as to how the stack trace for which the user searched differs from the stack traces that show up in the search results. A green highlighted line in a search result indicates a matching call stack line. Yellow indicates a call stack line that only differs in line number, something that may indicate the same stack trace on a different version of the source code. A screenshot in Adam’s post shows the grouping of sources and the visual highlighting.

See Adam’s post for the details.
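As a rough illustration of the matching Adam describes (and not CSI’s actual code), comparing two Java stack traces frame by frame looks something like this: an identical frame is a “green” match, a frame that differs only in line number is “yellow,” and anything else is a miss:

    import re

    # Parse frames of the form: at package.Class.method(File.java:123)
    FRAME = re.compile(r"at (?P<method>[\w.$<>]+)\((?P<file>[\w.]+):(?P<line>\d+)\)")

    def classify(query_trace, candidate_trace):
        results = []
        for q, c in zip(query_trace.strip().splitlines(), candidate_trace.strip().splitlines()):
            mq, mc = FRAME.search(q), FRAME.search(c)
            if q.strip() == c.strip():
                results.append("green")        # identical frame
            elif (mq and mc and mq.group("method") == mc.group("method")
                  and mq.group("file") == mc.group("file")):
                results.append("yellow")       # same call site, different line number
            else:
                results.append("different")
        return results

    a = "at org.example.Foo.bar(Foo.java:42)\nat org.example.Baz.run(Baz.java:10)"
    b = "at org.example.Foo.bar(Foo.java:57)\nat org.example.Baz.run(Baz.java:10)"
    print(classify(a, b))   # ['yellow', 'green']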

I like the imaginative modification of standard search.

Not all data is the same and searching it as if it were, leaves a lot of useful data unfound.

February 25, 2014

Apache Hadoop 2.3.0 Released!

Filed under: Hadoop,Hortonworks,MapReduce — Patrick Durusau @ 2:21 pm

Apache Hadoop 2.3.0 Released! by Arun Murthy.

From the post:

It gives me great pleasure to announce that the Apache Hadoop community has voted to release Apache Hadoop 2.3.0!

hadoop-2.3.0 is the first release for the year 2014, and brings a number of enhancements to the core platform, in particular to HDFS.

With this release, there are two significant enhancements to HDFS:

  • Support for Heterogeneous Storage Hierarchy in HDFS (HDFS-2832)
  • In-memory Cache for data resident in HDFS via Datanodes (HDFS-4949)

With support for heterogeneous storage classes in HDFS, we now can take advantage of different storage types on the same Hadoop clusters. Hence, we can now make better cost/benefit tradeoffs with different storage media such as commodity disks, enterprise-grade disks, SSDs, Memory etc. More details on this major enhancement are available here.

Along similar lines, it is now possible to use memory available in the Hadoop cluster to centrally cache and administer data-sets in-memory in the Datanode’s address space. Applications such as MapReduce, Hive, Pig etc. can now request for memory to be cached (for the curious, we use a combination of mmap, mlock to achieve this) and then read it directly off the Datanode’s address space for extremely efficient scans by avoiding disk altogether. As an example, Hive is taking advantage of this feature by implementing an extremely efficient zero-copy read path for ORC files – see HIVE-6347 for details.

See Arun’s post for more details.

I guess there really is a downside to open source development.

It’s so much faster than commercial product development cycles. 😉 (Hard to keep up.)

February 24, 2014

Index and Search Multilingual Documents in Hadoop

Filed under: Hadoop,Lucene,Solr — Patrick Durusau @ 4:27 pm

Index and Search Multilingual Documents in Hadoop by Justin Kestelyn.

From the post:

Basis Technology’s Rosette Base Linguistics for Java (RBL-JE) provides a comprehensive multilingual text analytics platform for improving search precision and recall. RBL provides tokenization, lemmatization, POS tagging, and de-compounding for Asian, European, Nordic, and Middle Eastern languages, and has just been certified for use with Cloudera Search.

Cloudera Search brings full-text, interactive search, and scalable indexing to Apache Hadoop by marrying SolrCloud with HDFS and Apache HBase, and other projects in CDH. Because it’s integrated with CDH, Cloudera Search brings the same fault tolerance, scale, visibility, and flexibility of your other Hadoop workloads to search, and allows for a number of indexing, access control, and manageability options.

In this post, you’ll learn how to use Cloudera Search and RBL-JE to index and search documents. Since Cloudera takes care of the plumbing for distributed search and indexing, the only work needed to incorporate Basis Technology’s linguistics is loading the software and configuring your Solr collections.

You may have guessed by the way the introduction is worded that Rosette Base Linguistics isn’t free. I checked at the website but found no pricing information. Not to mention that the coverage looks spotty:

  • Arabic
  • Chinese (simplified)
  • Chinese (traditional)
  • English
  • Japanese
  • Korean
If your multilingual needs fall in one or more of those languages, this may work for you.

On the other hand, for indexing and searching multilingual text, you should compare Solr, which has factories for the following languages:

  • Arabic
  • Brazilian Portuguese
  • Bulgarian
  • Catalan
  • Chinese
  • Simplified Chinese
  • CJK
  • Czech
  • Danish
  • Dutch
  • Finnish
  • French
  • Galician
  • German
  • Greek
  • Hebrew, Lao, Myanmar, Khmer
  • Hindi
  • Indonesian
  • Italian
  • Irish
  • Kuromoji (Japanese)
  • Latvian
  • Norwegian
  • Persian
  • Polish
  • Portuguese
  • Romanian
  • Russian
  • Spanish
  • Swedish
  • Thai
  • Turkish

Source: Solr Wiki.

February 23, 2014

How Companies are Using Spark

Filed under: BigData,Hadoop,Spark — Patrick Durusau @ 7:50 pm

How Companies are Using Spark, and Where the Edge in Big Data Will Be by Matei Zaharia.

Description:

While the first big data systems made a new class of applications possible, organizations must now compete on the speed and sophistication with which they can draw value from data. Future data processing platforms will need to not just scale cost-effectively; but to allow ever more real-time analysis, and to support both simple queries and today’s most sophisticated analytics algorithms. Through the Spark project at Apache and Berkeley, we’ve brought six years research to enable real-time and complex analytics within the Hadoop stack.

At time mark 1:53, Matei says when size of storage is no longer an advantage, you can gain an advantage by:

Speed: how quickly can you go from data to decisions?

Sophistication: can you run the best algorithms on the data?

As you might suspect, I strongly disagree that those are the only two points where you can gain an advantage with Big Data.

How about including:

Data Quality: How do you make data semantics explicit?

Data Management: Can you re-use data by knowing its semantics?

You can run sophisticated algorithms on data and make quick decisions, but if your data is GIGO (garbage in, garbage out), I don’t see the competitive edge.

Nothing against Spark; managing video streams with only 1 second of buffering was quite impressive.

To be fair, Matei does include ClearStoryData as one of his examples and ClearStory says that they merge data based on its semantics. Unfortunately, the website doesn’t mention any details other than there is a “patent pending.”

But in any event, I do think data quality and data management should be explicit items in any big data strategy.

At least so long as you want big data and not big garbage.

February 19, 2014

Spark Graduates Apache Incubator

Filed under: Graphs,GraphX,Hadoop,Spark — Patrick Durusau @ 12:07 pm

Spark Graduates Apache Incubator by Tiffany Trader.

From the post:

As we’ve touched on before, Hadoop was designed as a batch-oriented system, and its real-time capabilities are still emerging. Those eagerly awaiting this next evolution will be pleased to hear about the graduation of Apache Spark from the Apache Incubator. On Sunday, the Apache Spark Project committee unanimously voted to promote the fast data-processing tool out of the Apache Incubator.

Databricks refers to Apache Spark as “a powerful open source processing engine for Hadoop data built around speed, ease of use, and sophisticated analytics.” The computing framework supports Java, Scala, and Python and comes with a set of more than 80 high-level operators baked-in.

Spark runs on top of existing Hadoop clusters and is being pitched as a “more general and powerful alternative to Hadoop’s MapReduce.” Spark promises performance gains up to 100 times faster than Hadoop MapReduce for in-memory datasets, and 10 times faster when running on disk.

BTW, the most recent release, 0.9.0, includes GraphX.

Spark homepage.

February 15, 2014

Spring XD – Tweets – Hadoop – Sentiment Analysis

Filed under: Hadoop,MapReduce,Sentiment Analysis,Tweets — Patrick Durusau @ 11:18 am

Using Spring XD to stream Tweets to Hadoop for Sentiment Analysis

From the webpage:

This tutorial will build on the previous tutorial – 13 – Refining and Visualizing Sentiment Data – by using Spring XD to stream in tweets to HDFS. Once in HDFS, we’ll use Apache Hive to process and analyze them, before visualizing in a tool.

I re-ordered the text:

This tutorial is from the Community part of the tutorials for the Hortonworks Sandbox (1.3) – a single-node Hadoop cluster running in a virtual machine. Download it to run this and other tutorials in the series.

This community tutorial submitted by mehzer with source available at Github. Feel free to contribute edits or your own tutorial and help the community learn Hadoop.

not to take anything away from Spring XD or Sentiment Analysis but to emphasize the community tutorial aspects of the Hortonworks Sandbox.

At present count on tutorials:

Hortonworks: 14

Partners: 12

Community: 6

Thoughts on what the next community tutorial should be?

February 11, 2014

CUBE and ROLLUP:…

Filed under: Aggregation,Hadoop,Pig — Patrick Durusau @ 1:29 pm

CUBE and ROLLUP: Two Pig Functions That Every Data Scientist Should Know by Joshua Lande.

From the post:

I recently found two incredible functions in Apache Pig called CUBE and ROLLUP that every data scientist should know. These functions can be used to compute multi-level aggregations of a data set. I found the documentation for these functions to be confusing, so I will work through a simple example to explain how they work.

Joshua starts his post with a demonstration of using GROUP BY in Pig for simple aggregations. That sets the stage for demonstrating how important CUBE and ROLLUP can be for data aggregations in Pig.
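If the Pig syntax is new to you, the underlying idea is easy to state: ROLLUP aggregates over every prefix of the grouping keys, while CUBE aggregates over every subset. A toy illustration in plain Python (not Pig, just the concept, with made-up rows):

    from itertools import combinations
    from collections import defaultdict

    rows = [
        {"state": "CA", "city": "SF",  "sales": 10},
        {"state": "CA", "city": "LA",  "sales": 20},
        {"state": "NY", "city": "NYC", "sales": 30},
    ]
    keys = ["state", "city"]

    def aggregate(group_keys):
        totals = defaultdict(int)
        for row in rows:
            totals[tuple(row[k] for k in group_keys)] += row["sales"]
        return dict(totals)

    # ROLLUP(state, city): aggregate over (), (state,), (state, city).
    for i in range(len(keys) + 1):
        print("rollup over", keys[:i], "->", aggregate(keys[:i]))

    # CUBE(state, city): aggregate over every subset of the keys.
    for r in range(len(keys) + 1):
        for subset in combinations(keys, r):
            print("cube over", list(subset), "->", aggregate(subset))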

Interesting possibilities suggest themselves by the time you finish Joshua’s posting.

I first saw this in a tweet by Dmitriy Ryaboy.

February 9, 2014

Write and Run Giraph Jobs on Hadoop

Filed under: Cloudera,Giraph,Graphs,Hadoop,MapReduce — Patrick Durusau @ 7:52 pm

Write and Run Giraph Jobs on Hadoop by Mirko Kämpf.

From the post:

Create a test environment for writing and testing Giraph jobs, or just for playing around with Giraph and small sample datasets.

Apache Giraph is a scalable, fault-tolerant implementation of graph-processing algorithms in Apache Hadoop clusters of up to thousands of computing nodes. Giraph is in use at companies like Facebook and PayPal, for example, to help represent and analyze the billions (or even trillions) of connections across massive datasets. Giraph was inspired by Google’s Pregel framework and integrates well with Apache Accumulo, Apache HBase, Apache Hive, and Cloudera Impala.

Currently, the upstream “quick start” document explains how to deploy Giraph on a Hadoop cluster with two nodes running Ubuntu Linux. Although this setup is appropriate for lightweight development and testing, using Giraph with an enterprise-grade CDH-based cluster requires a slightly more robust approach.

In this how-to, you will learn how to use Giraph 1.0.0 on top of CDH 4.x using a simple example dataset, and run example jobs that are already implemented in Giraph. You will also learn how to set up your own Giraph-based development environment. The end result will be a setup (not intended for production) for writing and testing Giraph jobs, or just for playing around with Giraph and small sample datasets. (In future posts, I will explain how to implement your own graph algorithms and graph generators as well as how to export your results to Gephi, the “Adobe Photoshop for graphs”, through Impala and JDBC for further inspection.)
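Before diving in, it may help to see the vertex-centric (Pregel) model Giraph implements, reduced to a toy single-process loop. This is emphatically not Giraph’s Java API, only a sketch of supersteps, messages, and vertices going quiet, using the classic maximum-value example:

    # Tiny undirected graph as an adjacency list, with an initial value per vertex.
    graph = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
    value = {"a": 3, "b": 6, "c": 2, "d": 1}

    # Superstep 0: every vertex sends its value to its neighbors.
    inbox = {v: [] for v in graph}
    for v in graph:
        for n in graph[v]:
            inbox[n].append(value[v])

    changed = True
    while changed:                       # one loop iteration == one superstep
        changed = False
        outbox = {v: [] for v in graph}
        for v in graph:
            incoming = inbox[v]
            if incoming and max(incoming) > value[v]:
                value[v] = max(incoming)
                changed = True
                for n in graph[v]:       # only vertices that changed keep messaging
                    outbox[n].append(value[v])
        inbox = outbox

    print(value)   # every vertex converges to the global maximum, 6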

The first in a series of posts on Giraph.

This is great stuff!

It should keep you busy during your first conference call and/or staff meeting on Monday morning.

Monday won’t seem so bad. 😉

January 29, 2014

Create a Simple Hadoop Cluster with VirtualBox (< 1 Hour)

Filed under: Cloudera,Hadoop,MapReduce — Patrick Durusau @ 5:33 pm

How-to: Create a Simple Hadoop Cluster with VirtualBox by Christian Javet.

From the post:

I wanted to get familiar with the big data world, and decided to test Hadoop. Initially, I used Cloudera’s pre-built virtual machine with its full Apache Hadoop suite pre-configured (called Cloudera QuickStart VM), and gave it a try. It was a really interesting and informative experience. The QuickStart VM is fully functional and you can test many Hadoop services, even though it is running as a single-node cluster.

I wondered what it would take to install a small four-node cluster…

I did some research and I found this excellent video on YouTube presenting a step by step explanation on how to setup a cluster with VMware and Cloudera. I adapted this tutorial to use VirtualBox instead, and this article describes the steps used.

Watch for the line:

Overall we will allocate 14GB of memory, so ensure that the host machine has sufficient memory, otherwise this will impact your experience negatively.

Yes, “…impact your experience negatively.”

😉

Enjoy!

January 27, 2014

…NFL’s ‘Play by Play’ Dataset

Filed under: Hadoop,MapReduce — Patrick Durusau @ 9:28 pm

Data Insights from the NFL’s ‘Play by Play’ Dataset by Jesse Anderson.

From the post:

In a recent GigaOM article, I shared insights from my analysis of the NFL’s Play by Play Dataset, which is a great metaphor for how enterprises can use big data to gain valuable insights into their own businesses. In this follow-up post, I will explain the methodology I used and offer advice for how to get started using Hadoop with your own data.

To see how my NFL data analysis was done, you can view and clone all of the source code for this project on my GitHub account. I am using Hadoop and its ecosystem for this processing. All of the data for this project uses the NFL 2002 season to the 4th week of the 2013 season.

Two MapReduce programs do the initial processing. These programs process the Play by Play data and parse out the play description. Each play has unstructured or handwritten data that describes what happened in the play. Using Regular Expressions, I figured out what type of play it was and what happened during the play. Was there a fumble, was it a run or was it a missed field goal? Those scenarios are all accounted for in the MapReduce program.
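The regexes below are illustrative only (not from Jesse’s code), but they show the flavor of classifying free-text play descriptions:

    import re

    # Order matters: the first pattern that matches wins. Real play-by-play text
    # has many more cases (and trickier ones) than this.
    PATTERNS = [
        ("fumble",            re.compile(r"\bFUMBLES\b", re.IGNORECASE)),
        ("missed_field_goal", re.compile(r"field goal is (?:NO GOOD|BLOCKED)", re.IGNORECASE)),
        ("field_goal",        re.compile(r"field goal is GOOD", re.IGNORECASE)),
        ("pass",              re.compile(r"\bpass\b", re.IGNORECASE)),
        ("run",               re.compile(r"\b(?:left|right|up the middle)\b", re.IGNORECASE)),
    ]

    def classify_play(description):
        for label, pattern in PATTERNS:
            if pattern.search(description):
                return label
        return "other"

    print(classify_play("T.Brady pass short right to R.Gronkowski for 6 yards."))  # pass
    print(classify_play("A.Foster right tackle for 3 yards."))                     # run
    print(classify_play("38 yard field goal is GOOD."))                            # field_goal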

Just in case you aren’t interested in winning $1 billion at basketball or you just want to warm up for that challenge, try some NFL data on for size.

Could be useful in teaching you the limits of analysis. For all the stats that can be collected and crunched, games don’t always turn out as predicted.

On any given Monday morning you may win or lose a few dollars in the office betting pool, but number crunching is used for more important decisions as well.

Tutorial 1: Hello World… [Hadoop/Hive/Pig]

Filed under: Hadoop,Hive,Hortonworks,Pig — Patrick Durusau @ 9:17 pm

Tutorial 1: Hello World – An Overview of Hadoop with Hive and Pig

Don’t be frightened!

The tutorial doesn’t really use big data tools just to say “Hello World,” or even to say it quickly, many times. 😉

One of the clearer tutorials on big data tools.

You won’t quite be dangerous by the time you finish this tutorial but you should have a strong enough taste of the tools to want more.

Enjoy!

January 21, 2014

Extracting Insights – FBO.Gov

Filed under: Government Data,Hadoop,NLTK,Pig,Python — Patrick Durusau @ 3:20 pm

Extracting Insights from FBO.Gov data – Part 1

Extracting Insights from FBO.Gov data – Part 2

Extracting Insights from FBO.Gov data – Part 3

Dave Fauth has written a great three part series on extracting “insights” from large amounts of data.

From the third post in the series:

Earlier this year, the Sunlight Foundation filed a lawsuit under the Freedom of Information Act. The lawsuit requested solicitation and award notices from FBO.gov. In November, Sunlight received over a decade’s worth of information and posted the information on-line for public downloading. I want to say a big thanks to Ginger McCall and Kaitlin Devine for the work that went into making this data available.

In the first part of this series, I looked at the data and munged the data into a workable set. Once I had the data in a workable set, I created some heatmap charts of the data looking at agencies and who they awarded contracts to. In part two of this series, I created some bubble charts looking at awards by Agency and also the most popular Awardees.

In the third part of the series, I am going to look at awards by date and then displaying that information in a calendar view. Then we will look at the types of awards.

For the date analysis, we are going to use all of the data going back to 2000. We have six data files that we will join together, filter on the ‘Notice Type’ field, and then calculate the counts by date for the awards. The goal is to see when awards are being made.
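Dave’s pipeline uses Pig; as a hedged sketch of the same date-counting step in pandas (the file pattern and column names like “Notice Type” and “Posted Date” are assumptions about the FBO.gov extract):

    import glob
    import pandas as pd

    # Join the yearly extracts into one frame.
    frames = [pd.read_csv(path, low_memory=False) for path in glob.glob("fbo_*.csv")]
    notices = pd.concat(frames, ignore_index=True)

    # Filter to award notices and parse the posting date.
    awards = notices[notices["Notice Type"] == "Award Notice"].copy()
    awards["Posted Date"] = pd.to_datetime(awards["Posted Date"], errors="coerce")

    # Count awards per day -- the series that feeds a calendar view.
    awards_per_day = awards.groupby(awards["Posted Date"].dt.date).size()
    print(awards_per_day.sort_values(ascending=False).head(10))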

The most compelling lesson from this series is that data doesn’t always easily give up its secrets.

If you make it to the end of the series, you will find the government, on occasion, does the right thing. I’ll admit it, I was very surprised. 😉

January 17, 2014

Petuum

Filed under: Hadoop,Machine Learning,Petuum — Patrick Durusau @ 7:14 pm

Petuum

From the homepage:

Petuum is a distributed machine learning framework. It takes care of the difficult system “plumbing work”, allowing you to focus on the ML. Petuum runs efficiently at scale on research clusters and cloud compute like Amazon EC2 and Google GCE.

A Bit More Details

Petuum provides essential distributed programming tools that minimize programmer effort. It has a distributed parameter server (key-value storage), a distributed task scheduler, and out-of-core (disk) storage for extremely large problems. Unlike general-purpose distributed programming platforms, Petuum is designed specifically for ML algorithms. This means that Petuum takes advantage of data correlation, staleness, and other statistical properties to maximize the performance for ML algorithms.

Plug and Play

Petuum comes with a fast and scalable parallel LASSO regression solver, as well as an implementation of topic model (Latent Dirichlet Allocation) and L2-norm Matrix Factorization – with more to be added on a regular basis. Petuum is fully self-contained, making installation a breeze – if you know how to use a Linux package manager and type “make”, you’re ready to use Petuum. No mucking around trying to find that Hadoop cluster, or (worse still) trying to install Hadoop yourself. Whether you have a single machine or an entire cluster, Petuum just works.

What’s Petuum anyway?

Petuum comes from “perpetuum mobile,” which is a musical style characterized by a continuous steady stream of notes. Paganini’s Moto Perpetuo is an excellent example. It is our goal to build a system that runs efficiently and reliably — in perpetual motion.

Musically inclined programmers? 😉

The bar for using Hadoop and machine learning gets lower by the day. At least in terms of details that can be mastered by code.

Which is how it should be. The creative work, choosing data, appropriate algorithms, etc., being left to human operators.

I first saw this at Danny Bickson’s Petuum – a new distributed machine learning framework from CMU (Eric Xing).

PS: Remember to register for the 3rd GraphLab Conference!

January 16, 2014

MS SQL Server -> Hadoop

Filed under: Hadoop,Hortonworks,SQL Server,Sqoop — Patrick Durusau @ 2:59 pm

Community Tutorial 04: Import from Microsoft SQL Server into the Hortonworks Sandbox using Sqoop

From the webpage:

For a simple proof of concept, I wanted to get data from MS SQL Server into the Hortonworks Sandbox in an automated fashion using Sqoop. Apache Sqoop provides a way of efficiently transferring bulk data between Apache Hadoop and relational databases. This tutorial will show you how to use Sqoop to import data into the Hortonworks Sandbox from a Microsoft SQL Server data source.

You’ll have to test this one without me.

I have thought about setting up a MS SQL Server but never got around to it. 😉

Apache Crunch User Guide (new and improved)

Filed under: Apache Crunch,Hadoop,MapReduce — Patrick Durusau @ 10:13 am

Apache Crunch User Guide

From the motivation section:

Let’s start with a basic question: why should you use any high-level tool for writing data pipelines, as opposed to developing against the MapReduce, Spark, or Tez APIs directly? Doesn’t adding another layer of abstraction just increase the number of moving pieces you need to worry about, ala the Law of Leaky Abstractions?

As with any decision like this, the answer is “it depends.” For a long time, the primary payoff of using a high-level tool was being able to take advantage of the work done by other developers to support common MapReduce patterns, such as joins and aggregations, without having to learn and rewrite them yourself. If you were going to need to take advantage of these patterns often in your work, it was worth the investment to learn about how to use the tool and deal with the inevitable leaks in the tool’s abstractions.

With Hadoop 2.0, we’re beginning to see the emergence of new engines for executing data pipelines on top of data stored in HDFS. In addition to MapReduce, there are new projects like Apache Spark and Apache Tez. Developers now have more choices for how to implement and execute their pipelines, and it can be difficult to know in advance which engine is best for your problem, especially since pipelines tend to evolve over time to process more data sources and larger data volumes. This choice means that there is a new reason to use a high-level tool for expressing your data pipeline: as the tools add support for new execution frameworks, you can test the performance of your pipeline on the new framework without having to rewrite your logic against new APIs.

There are many high-level tools available for creating data pipelines on top of Apache Hadoop, and they each have pros and cons depending on the developer and the use case. Apache Hive and Apache Pig define domain-specific languages (DSLs) that are intended to make it easy for data analysts to work with data stored in Hadoop, while Cascading and Apache Crunch develop Java libraries that are aimed at developers who are building pipelines and applications with a focus on performance and testability.

So which tool is right for your problem? If most of your pipeline work involves relational data and operations, then Hive, Pig, or Cascading provide lots of high-level functionality and tools that will make your life easier. If your problem involves working with non-relational data (complex records, HBase tables, vectors, geospatial data, etc.) or requires that you write lots of custom logic via user-defined functions (UDFs), then Crunch is most likely the right choice.

As topic mappers, you are likely to work with both relational and complex non-relational data, so this should be on your reading list.

I didn’t read the prior Apache Crunch documentation so I will have to take Josh Wills at his word that:

A (largely) new and (vastly) improved user guide for Apache Crunch, including details on the new Spark-based impl:

It reads well and makes a good case for investing time in learning Apache Crunch.

I first saw this in a tweet by Josh Wills.
