Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

November 24, 2013

Under the Hood: [of RocksDB]

Filed under: Facebook,Key-Value Stores,leveldb,RocksDB — Patrick Durusau @ 2:33 pm

Under the Hood: Building and open-sourcing RocksDB by Dhruba Borthakur.

From the post:

Every time one of the 1.2 billion people who use Facebook visits the site, they see a completely unique, dynamically generated home page. There are several different applications powering this experience–and others across the site–that require global, real-time data fetching.

Storing and accessing hundreds of petabytes of data is a huge challenge, and we’re constantly improving and overhauling our tools to make this as fast and efficient as possible. Today, we are open-sourcing RocksDB, an embeddable, persistent key-value store for fast storage that we built and use here at Facebook.

Why build an embedded database?

Applications traditionally access their data via remote procedure calls over a network connection, but that can be slow–especially when we need to power user-facing products in real time. With the advent of flash storage, we are starting to see newer applications that can access data quickly by managing their own dataset on flash instead of accessing data over a network. These new applications are using what we call an embedded database.

There are several reasons for choosing an embedded database. When database requests are frequently served from memory or from very fast flash storage, network latency can slow the query response time. Accessing the network within a data center can take about 50 microseconds, as can fast-flash access latency. This means that accessing data over a network could potentially be twice as slow as an application accessing data locally.

Secondly, we are starting to see servers with an increasing number of cores and with storage-IOPS reaching millions of requests per second. Lock contention and a high number of context switches in traditional database software prevent it from being able to saturate the storage-IOPS. We’re finding we need new database software that is flexible enough to be customized for many of these emerging hardware trends.

Like most of you, I don’t have 1.2 billion people visiting my site. 😉

However, understanding today’s “high-end” solutions will prepare you for tomorrow’s “middle-tier” solution and the day after tomorrow’s desktop solution.

A high-level overview of RocksDB.

Other resources to consider:

RocksDB Facebook page.

RocksDB on Github.


Update: Igor Canadi has posted a proposal to the Facebook page to add the concept of ColumnFamilies to RocksDB: https://github.com/facebook/rocksdb/wiki/Column-Families-proposal Comments? (Direct comments on that proposal to the RocksDB Facebook page.)

November 15, 2013

rocksdb

Filed under: Key-Value Stores,leveldb — Patrick Durusau @ 8:09 pm

rocksdb by Facebook.

Just in case you can’t wait for OhmDB to appear, Facebook has open-sourced rocksdb.

From the readme file:

rocksdb: A persistent key-value store for flash storage
Authors: * The Facebook Database Engineering Team
         * Built on earlier work on leveldb by Sanjay Ghemawat
           (sanjay@google.com) and Jeff Dean (jeff@google.com)

This code is a library that forms the core building block for a fast
key value server, especially suited for storing data on flash drives.
It has a Log-Structured-Merge-Database (LSM) design with flexible tradeoffs
between Write-Amplification-Factor(WAF), Read-Amplification-Factor (RAF)
and Space-Amplification-Factor(SAF). It has multi-threaded compactions,
making it specially suitable for storing multiple terabytes of data in a
single database.

The core of this code has been derived from open-source leveldb.

The code under this directory implements a system for maintaining a
persistent key/value store.

See doc/index.html for more explanation.
See doc/impl.html for a brief overview of the implementation.

The public interface is in include/*.  Callers should not include or
rely on the details of any other header files in this package.  Those
internal APIs may be changed without warning.
...
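The readme’s one-paragraph description of an LSM design can be sketched in a few lines. The toy below is illustrative only: the class name and flush threshold are invented, and real RocksDB adds a write-ahead log, leveled compaction, bloom filters, and much more.

```python
# Toy LSM sketch: writes land in an in-memory memtable and are flushed to
# immutable sorted runs; reads check the memtable first, then the runs from
# newest to oldest, so later writes shadow earlier ones.

class ToyLSM:
    def __init__(self, memtable_limit=4):
        self.memtable = {}              # mutable, absorbs all writes
        self.sstables = []              # immutable sorted runs ("on disk")
        self.memtable_limit = memtable_limit

    def put(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= self.memtable_limit:
            self.flush()

    def flush(self):
        # Freeze the memtable into a sorted, immutable run.
        self.sstables.append(sorted(self.memtable.items()))
        self.memtable = {}

    def get(self, key):
        if key in self.memtable:
            return self.memtable[key]
        # Newest run wins: scan flushed runs in reverse flush order.
        for table in reversed(self.sstables):
            for k, v in table:
                if k == key:
                    return v
        return None

db = ToyLSM()
for i in range(6):
    db.put(f"k{i}", i)
print(db.get("k1"), db.get("k5"))  # k1 comes from a flushed run, k5 from the memtable
```

The WAF/RAF/SAF tradeoffs the readme mentions are visible even here: flushing more often means less memory but more runs to scan on reads, which is the knob compaction exists to manage.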

Something to keep in mind to use with that multi-terabyte drive you are eyeing as a present to yourself. 😉

August 2, 2013

Norch – a search engine for node.js

Filed under: JSON,leveldb,node-js,Search Engines — Patrick Durusau @ 3:01 pm

Norch – a search engine for node.js by Fergus McDowall.

From the post:

Norch is a search engine written for Node.js. Norch uses the Node search-index module which is in turn written using the super fast levelDB library that Google open-sourced in 2011.

The aim of Norch is to make a simple, fast search server, that requires minimal configuration to set up. Norch sacrifices complex functionality for a limited robust feature set, that can be used to set up a freetext search engine for most enterprise scenarios.

Currently Norch features

  • Full text search
  • Stopword removal
  • Faceting
  • Filtering
  • Relevance weighting (tf-idf)
  • Field weighting
  • Paging (offset and resultset length)

Norch can index any data that is marked up in the appropriate JSON format

Download the first release of Norch (0.2.1) here

Not every possible feature, but it looks like Norch covers the most popular ones.
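The “relevance weighting (tf-idf)” item in the feature list is worth unpacking. The sketch below is not Norch’s actual code (Norch is written in Node.js); it just shows how term frequency and inverse document frequency combine into a score, with an invented three-document corpus.

```python
# Minimal tf-idf: a term scores high when it is frequent in one document
# but rare across the corpus.

import math

docs = {
    "a": "fast search engine for node",
    "b": "search index on leveldb",
    "c": "node key value store",
}

def tf_idf(term, doc_id):
    words = docs[doc_id].split()
    tf = words.count(term) / len(words)                      # term frequency
    df = sum(1 for text in docs.values() if term in text.split())
    idf = math.log(len(docs) / df) if df else 0.0            # inverse doc frequency
    return tf * idf

# "search" appears in two of three docs, so it scores lower than the
# rarer term "leveldb".
print(tf_idf("search", "a"), tf_idf("leveldb", "b"))
```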

May 1, 2013

LevelDB Review (in 18 parts, seriously)

Filed under: Database,leveldb — Patrick Durusau @ 3:55 pm

My first encounter with this series by Oren Eini was: Reviewing LevelDB: Part XVIII–Summary.

At first I thought it had to be a late April Fool’s day joke.

On further investigation, much to my delight, it was not!

Searching his blog returned a hodge-podge listing in no particular order, with some omissions.

As a service to you (and myself), I have collated the posts in order:

Reviewing LevelDB, Part I: What is this all about?

Reviewing LevelDB: Part II, Put some data on the disk, dude

Reviewing LevelDB: Part III, WriteBatch isn’t what you think it is

Reviewing LevelDB: Part IV: On std::string, buffers and memory management in C++

Reviewing LevelDB: Part V, into the MemTables we go

Reviewing LevelDB: Part VI, the Log is base for Atomicity

Reviewing LevelDB: Part VII–The version is where the levels are

Reviewing LevelDB: Part VIII–What are the levels all about?

Reviewing RavenDB [LevelDB]: Part IX- Compaction is the new black

Reviewing LevelDB: Part X–table building is all fun and games until…

Reviewing LevelDB: Part XI–Reading from Sort String Tables via the TableCache

Reviewing RavenDB [LevelDB]: Part XII–Reading an SST

Reviewing LevelDB: Part XIII–Smile, and here is your snapshot

Reviewing LevelDB: Part XIV– there is the mem table and then there is the immutable memtable

Reviewing LevelDB: Part XV–MemTables gets compacted too

Reviewing LevelDB: Part XVI–Recovery ain’t so tough?

Reviewing LevelDB: Part XVII– Filters? What filters? Oh, those filters …

Reviewing LevelDB: Part XVIII–Summary

Parts IX and XII have typos in the titles, RavenDB instead of LevelDB.

Now there is a model for reviewing database software!

April 28, 2013

LevelGraph [Graph Databases and Semantic Diversity]

Filed under: Graphs,leveldb,LevelGraph,Semantic Diversity — Patrick Durusau @ 9:47 am

LevelGraph

From the webpage:

LevelGraph is a Graph Database. Unlike many other graph databases, LevelGraph is built on the uber-fast key-value store LevelDB through the powerful LevelUp library. You can use it inside your node.js application.

LevelGraph loosely follows the Hexastore approach as presented in the article: Hexastore: sextuple indexing for semantic web data management C Weiss, P Karras, A Bernstein – Proceedings of the VLDB Endowment, 2008. Following this approach, LevelGraph uses six indices for every triple, in order to access them as fast as possible.
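The Hexastore idea is simple to sketch: store each triple under all six orderings of (subject, predicate, object), so any query pattern can be answered by a prefix scan on one index. The key layout and separator below are invented for illustration; LevelGraph’s actual encoding over LevelDB differs.

```python
# Toy hexastore: six permutation indexes over a flat key space that stands
# in for a sorted key-value store.

from itertools import permutations

class ToyHexastore:
    ORDERS = ["".join(p) for p in permutations("spo")]  # spo, sop, pso, pos, osp, ops

    def __init__(self):
        self.index = {}   # stand-in for a sorted key-value store

    def put(self, s, p, o):
        parts = {"s": s, "p": p, "o": o}
        for order in self.ORDERS:
            key = order + "::" + "::".join(parts[c] for c in order)
            self.index[key] = True

    def query(self, s=None, p=None, o=None):
        # Choose the index whose prefix covers exactly the bound terms.
        terms = (("s", s), ("p", p), ("o", o))
        bound = [(c, v) for c, v in terms if v is not None]
        free = [c for c, v in terms if v is None]
        order = "".join(c for c, _ in bound) + "".join(free)
        prefix = order + "::" + "::".join(v for _, v in bound)
        return [k for k in self.index if k.startswith(prefix)]

store = ToyHexastore()
store.put("alice", "knows", "bob")
store.put("alice", "likes", "topicmaps")
print(store.query(s="alice", p="knows"))   # answered from the "spo" index
```

Whatever combination of terms is bound, some permutation puts those terms first in the key, which is why six indexes suffice for all eight query patterns.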

The family of graph databases gains another member.

The growth of graph database offerings is evidence that the effort to reduce semantic diversity is a fool’s errand.

It isn’t hard to find graph database projects, yet new ones appear on a regular basis.

With every project starting over with the basic issues of graph representation and algorithms.

The reasons for that diversity are likely as diverse as the diversity itself.

If the world has been diverse, remains diverse, and by all evidence will continue to be diverse, what are the odds of winning a fight against diversity?

That’s what I thought.

Topic maps, embracing diversity.

I first saw this in a tweet by Frank Denis.

February 10, 2012

SSTable and Log Structured Storage: LevelDB

Filed under: leveldb,SSTable — Patrick Durusau @ 4:14 pm

SSTable and Log Structured Storage: LevelDB by Ilya Grigorik.

From the post:

If Protocol Buffers is the lingua franca of individual data record at Google, then the Sorted String Table (SSTable) is one of the most popular outputs for storing, processing, and exchanging datasets. As the name itself implies, an SSTable is a simple abstraction to efficiently store large numbers of key-value pairs while optimizing for high throughput, sequential read/write workloads.

Unfortunately, the SSTable name itself has also been overloaded by the industry to refer to services that go well beyond just the sorted table, which has only added unnecessary confusion to what is a very simple and a useful data structure on its own. Let’s take a closer look under the hood of an SSTable and how LevelDB makes use of it.
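The “very simple and useful data structure” Grigorik describes can be shown in miniature: key-value pairs written in sorted key order, plus an index of byte offsets so a reader can seek to a record without scanning the whole file. The record encoding and the fully in-memory index below are simplifications invented for illustration; real SSTables use length-prefixed binary records and a sparse index block.

```python
# Bare-bones SSTable: sorted records in a byte buffer plus an offset index.

import io

def write_sstable(pairs):
    """Serialize (key, value) pairs in sorted key order; return (bytes, offset index)."""
    buf = io.BytesIO()
    index = {}
    for key, value in sorted(pairs.items()):
        index[key] = buf.tell()           # byte offset of this record
        buf.write(f"{key}={value}\n".encode())
    return buf.getvalue(), index

def read_value(data, index, key):
    """Seek directly to the record via the index; no scan required."""
    if key not in index:
        return None
    end = data.index(b"\n", index[key])
    record = data[index[key]:end].decode()
    return record.split("=", 1)[1]

data, index = write_sstable({"cherry": "3", "apple": "1", "banana": "2"})
print(read_value(data, index, "banana"))
```

Because the keys are sorted, two such tables can be merged in a single sequential pass, which is exactly what makes the structure a natural fit for the log-structured compaction LevelDB builds on top of it.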

How important is this data structure? It, or a variant of it, is used by Google’s BigTable, Hadoop’s HBase, and Cassandra, among others.

Whether this will work for your purposes is another question, but it never hurts to know more today than you did yesterday.

November 3, 2011

NoSQL Exchange – 2 November 2011

NoSQL Exchange – 2 November 2011

It doesn’t get much better or fresher (for non-attendees) than this!

  • Dr Jim Webber of Neo Technology starts the day by welcoming everyone to the first of many annual NOSQL eXchanges. View the podcast here…
  • Emil Eifrém gives a Keynote talk to the NOSQL eXchange on the past, present and future of NOSQL, and the state of NOSQL today. View the podcast here…
  • HANDLING CONFLICTS IN EVENTUALLY CONSISTENT SYSTEMS In this talk, Russell Brown examines how conflicting values are kept to a minimum in Riak and illustrates some techniques for automating semantic reconciliation. There will be practical examples from the Riak Java Client and other places.
  • MONGODB + SCALA: CASE CLASSES, DOCUMENTS AND SHARDS FOR A NEW DATA MODEL Brendan McAdams — creator of Casbah, a Scala toolkit for MongoDB — will give a talk on “MongoDB + Scala: Case Classes, Documents and Shards for a New Data Model”
  • REAL LIFE CASSANDRA Dave Gardner: In this talk for the NOSQL eXchange, Dave Gardner introduces why you would want to use Cassandra, and focuses on a real-life use case, explaining each Cassandra feature within this context.
  • DOCTOR WHO AND NEO4J Ian Robinson: Armed only with a data store packed full of geeky Doctor Who facts, by the end of this session we’ll have you tracking down pieces of memorabilia from a show that, like the graph theory behind Neo4j, is older than Codd’s relational model.
  • BUILDING REAL WORLD SOLUTION WITH DOCUMENT STORAGE, SCALA AND LIFT Aleksa Vukotic will look at how his company assessed and adopted CouchDB in order to rapidly and successfully deliver a next generation insurance platform using Scala and Lift.
  • ROBERT REES ON POLYGLOT PERSISTENCE Robert Rees: Based on his experiences of mixing CouchDB and Neo4J at Wazoku, an idea management startup, Robert talks about the theory of mixing your stores and the practical experience.
  • PARKBENCH DISCUSSION This Park Bench discussion will be chaired by Jim Webber.
  • THE FUTURE OF NOSQL AND BIG DATA STORAGE Tom Wilkie: Tom Wilkie takes a whistle-stop tour of developments in NOSQL and Big Data storage, comparing and contrasting new storage engines from Google (LevelDB), RethinkDB, Tokutek and Acunu (Castle).

And yes, I made a separate blog post on Neo4j and Dr. Who. 😉 What can I say? I am a fan of both.

August 10, 2011

LevelDB – Update

Filed under: leveldb,NoSQL — Patrick Durusau @ 7:16 pm

LevelDB – Fast and Lightweight Key/Value Database From the Authors of MapReduce and BigTable

From the post:

LevelDB is an exciting new entrant into the pantheon of embedded databases, notable both for its pedigree, being authored by the makers of the now mythical Google MapReduce and BigTable products, and for its emphasis on efficient disk based random access using log-structured-merge (LSM) trees.

The plan is to keep LevelDB fairly low-level. The intention is that it will be a useful building block for higher-level storage systems. Basho is already investigating using LevelDB as one of its storage engines.

Includes a great summary of information from the LevelDB mailing list.

A must read if you are interested in LevelDB.

May 9, 2011

leveldb

Filed under: leveldb,NoSQL — Patrick Durusau @ 10:33 am

leveldb

A NoSQL library.

From the website:

LevelDB is a library that implements a fast persistent key-value store.

Features

  • Keys and values are arbitrary byte arrays.
  • Data is stored sorted by key.
  • Callers can provide a custom comparison function to override the sort order.
  • The basic operations are Put(key,value), Get(key), Delete(key).
  • Multiple changes can be made in one atomic batch.
  • Users can create a transient snapshot to get a consistent view of data.
  • Forward and backward iteration is supported over the data.
  • Data is automatically compressed using the Snappy compression library.
  • External activity (file system operations etc.) is relayed through a virtual interface so users can customize the operating system interactions.
  • Detailed documentation about how to use the library is included with the source code.

Limitations

  • This is not a SQL database. It does not have a relational data model, it does not support SQL queries, and it has no support for indexes.
  • Only a single process (possibly multi-threaded) can access a particular database at a time.
  • There is no client-server support builtin to the library. An application that needs such support will have to wrap their own server around the library.
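To make the feature list concrete, here is a toy in-memory model that mimics the shape of those operations in Python: Put/Get/Delete, an atomic write batch, a snapshot, and ordered iteration. It is not LevelDB and ignores persistence, compression, and the custom-comparator hook; the class and method names are invented for illustration.

```python
# Toy model of LevelDB's basic operations over a plain dict.

class ToyLevelDB:
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

    def delete(self, key):
        self._data.pop(key, None)

    def write(self, batch):
        # Apply a batch of (op, key, value) entries as one atomic step:
        # stage all changes, then swap in the result all at once.
        staged = dict(self._data)
        for op, key, value in batch:
            if op == "put":
                staged[key] = value
            elif op == "delete":
                staged.pop(key, None)
        self._data = staged

    def snapshot(self):
        # A consistent, immutable view of the data as of this call.
        return dict(self._data)

    def iterate(self, reverse=False):
        # Yield entries sorted by key, forward or backward, as LevelDB's
        # iterators do.
        for key in sorted(self._data, reverse=reverse):
            yield key, self._data[key]

db = ToyLevelDB()
db.write([("put", "b", 2), ("put", "a", 1), ("put", "c", 3)])
snap = db.snapshot()      # still sees "c" after the delete below
db.delete("c")
print(list(db.iterate()), snap["c"])
```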
