Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

September 19, 2012

Process group in erlang: some thoughts about the pg module

Filed under: Distributed Systems,Erlang — Patrick Durusau @ 10:58 am

Process group in erlang: some thoughts about the pg module by Paolo D’Incau.

From the post:

One of the most common ways to achieve fault tolerance in distributed systems consists of organizing several identical processes into a group that can be accessed by a common name. The key concept here is that whenever a message is sent to the group, all members of the group receive it. This is a really nice feature, since if one process in the group fails, some other process can take over and handle the message, performing all the operations required.

Process groups also provide abstraction: when we send a message to a group, we don’t need to know who the members are or where they are. In fact, process groups are anything but static. Any process can join an existing group or leave one at runtime; moreover, a process can be part of more than one group at the same time.
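The send-to-all semantics above are easy to sketch outside Erlang too. Here is a minimal Python analogue (the `join`/`leave`/`send` names mirror pg’s operations; the queues stand in for process mailboxes, and none of this is the pg implementation itself):

```python
import queue
from collections import defaultdict

# group name -> set of member mailboxes (stand-ins for Erlang processes)
groups = defaultdict(set)

def join(group, mailbox):
    """Any process can join a group at runtime."""
    groups[group].add(mailbox)

def leave(group, mailbox):
    """...or leave one."""
    groups[group].discard(mailbox)

def send(group, msg):
    """Every member of the group receives the message."""
    for mailbox in groups[group]:
        mailbox.put(msg)

# Two identical "processes" join the same group; both receive each message,
# so either one can take over if the other fails.
a, b = queue.Queue(), queue.Queue()
join("workers", a)
join("workers", b)
send("workers", "job-1")
```

Because every member sees every message, failover needs no coordinator: the surviving members already have the work in hand.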

Fault tolerance is going to be an issue if you are using topic maps and/or social media in an operational context.

Having really “cool” semantic capabilities isn’t worth much if the system fails at a critical point.

September 16, 2012

Spanner: Google’s globally distributed database

Filed under: Database,Distributed Systems — Patrick Durusau @ 5:39 am

Spanner: Google’s globally distributed database

From the post:

This paper, whose co-authors include Jeff Dean and Sanjay Ghemawat of MapReduce fame, describes Spanner. Spanner is Google’s scalable, multi-version, globally distributed, and synchronously-replicated database. It is the first system to distribute data at global scale and support externally-consistent distributed transactions. Finally the paper comes out! Really exciting stuff!

Abstract from the paper:

Spanner is Google’s scalable, multi-version, globally-distributed, and synchronously-replicated database. It is the first system to distribute data at global scale and support externally-consistent distributed transactions. This paper describes how Spanner is structured, its feature set, the rationale underlying various design decisions, and a novel time API that exposes clock uncertainty. This API and its implementation are critical to supporting external consistency and a variety of powerful features: non-blocking reads in the past, lock-free read-only transactions, and atomic schema changes, across all of Spanner.

Spanner: Google’s Globally Distributed Database (PDF File)
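The “novel time API that exposes clock uncertainty” is TrueTime: `now()` returns an interval guaranteed to contain the true time, and a transaction becomes visible only after its commit timestamp is definitely in the past. A rough Python sketch of the commit-wait idea (the 7 ms uncertainty bound and all the names here are illustrative assumptions, not Spanner’s actual interface):

```python
import time

EPSILON = 0.007  # assumed clock-uncertainty bound in seconds; illustrative only

def tt_now():
    """Return an interval (earliest, latest) guaranteed to contain true time."""
    t = time.time()
    return (t - EPSILON, t + EPSILON)

def tt_after(ts):
    """True once ts is definitely in the past."""
    earliest, _ = tt_now()
    return earliest > ts

def commit_wait(commit_ts):
    # External consistency: the transaction's effects become visible only
    # after commit_ts has definitely passed on every clock in the system.
    while not tt_after(commit_ts):
        time.sleep(0.001)

# Pick a commit timestamp at the latest possible "now", then wait it out.
_, commit_ts = tt_now()
commit_wait(commit_ts)
```

The wait is roughly twice the uncertainty bound, which is why Spanner invests so heavily (GPS, atomic clocks) in keeping that bound small.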

Facing user requirements, Google did not say: suck it up and use the tools already provided.

Google engineered new tools to meet their requirements.

Is there a lesson there for other software projects?

August 6, 2012

What’s the Difference? Efficient Set Reconciliation without Prior Context

Filed under: Distributed Systems,P2P,Set Reconciliation,Sets,Topic Map Software — Patrick Durusau @ 4:56 pm

What’s the Difference? Efficient Set Reconciliation without Prior Context by David Eppstein, Michael T. Goodrich, Frank Uyeda, and George Varghese.

Abstract:

We describe a synopsis structure, the Difference Digest, that allows two nodes to compute the elements belonging to the set difference in a single round with communication overhead proportional to the size of the difference times the logarithm of the keyspace. While set reconciliation can be done efficiently using logs, logs require overhead for every update and scale poorly when multiple users are to be reconciled. By contrast, our abstraction assumes no prior context and is useful in networking and distributed systems applications such as trading blocks in a peer-to-peer network, and synchronizing link-state databases after a partition.

Our basic set-reconciliation method has a similarity with the peeling algorithm used in Tornado codes [6], which is not surprising, as there is an intimate connection between set difference and coding. Beyond set reconciliation, an essential component in our Difference Digest is a new estimator for the size of the set difference that outperforms min-wise sketches [3] for small set differences.

Our experiments show that the Difference Digest is more efficient than prior approaches such as Approximate Reconciliation Trees [5] and Characteristic Polynomial Interpolation [17]. We use Difference Digests to implement a generic KeyDiff service in Linux that runs over TCP and returns the sets of keys that differ between machines.
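The core trick — hashing each key into a few cells that hold a count, a XOR of keys, and a XOR of checksums, then subtracting two nodes’ tables cell-wise and “peeling” pure cells — can be sketched compactly. Below is a simplified toy version in Python (the cell count, hash choices, and class name are my assumptions; the paper’s Difference Digest also includes a difference-size estimator not shown here):

```python
import hashlib

NUM_CELLS = 128   # assumed table size; the paper sizes this from a difference estimate
NUM_HASHES = 3

def _cells(key):
    # map a key to NUM_HASHES cell indices
    h = hashlib.sha256(str(key).encode()).digest()
    return [int.from_bytes(h[4 * i:4 * i + 4], "big") % NUM_CELLS for i in range(NUM_HASHES)]

def _check(key):
    # independent checksum used to recognize "pure" cells during peeling
    return int.from_bytes(hashlib.sha256(b"chk" + str(key).encode()).digest()[:4], "big")

class DifferenceDigest:
    def __init__(self):
        self.count = [0] * NUM_CELLS
        self.key_sum = [0] * NUM_CELLS
        self.chk_sum = [0] * NUM_CELLS

    def add(self, key):
        for i in _cells(key):
            self.count[i] += 1
            self.key_sum[i] ^= key
            self.chk_sum[i] ^= _check(key)

    def subtract(self, other):
        # cell-wise subtraction: shared keys cancel, leaving only the difference
        d = DifferenceDigest()
        for i in range(NUM_CELLS):
            d.count[i] = self.count[i] - other.count[i]
            d.key_sum[i] = self.key_sum[i] ^ other.key_sum[i]
            d.chk_sum[i] = self.chk_sum[i] ^ other.chk_sum[i]
        return d

    def decode(self):
        # repeatedly "peel" pure cells (exactly one key) until nothing is left
        only_self, only_other = set(), set()
        progress = True
        while progress:
            progress = False
            for i in range(NUM_CELLS):
                if self.count[i] in (1, -1) and self.chk_sum[i] == _check(self.key_sum[i]):
                    key, sign = self.key_sum[i], self.count[i]
                    (only_self if sign == 1 else only_other).add(key)
                    for j in _cells(key):
                        self.count[j] -= sign
                        self.key_sum[j] ^= key
                        self.chk_sum[j] ^= _check(key)
                    progress = True
        return only_self, only_other

# Node A holds {1..10}, node B holds {1..8, 100, 200}; exchanging one digest
# recovers the difference without shipping either full set.
a, b = DifferenceDigest(), DifferenceDigest()
for k in range(1, 11):
    a.add(k)
for k in [1, 2, 3, 4, 5, 6, 7, 8, 100, 200]:
    b.add(k)
only_a, only_b = a.subtract(b).decode()
```

Note that the communication cost is the fixed-size table, proportional to the (estimated) difference rather than the set sizes — which is the whole point of the paper.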

Distributed topic maps anyone?

June 21, 2012

Knowledge Discovery Using Cloud and Distributed Computing Platforms (KDCloud, 2012)

Filed under: Cloud Computing,Conferences,Distributed Systems,Knowledge Discovery — Patrick Durusau @ 2:49 pm

Knowledge Discovery Using Cloud and Distributed Computing Platforms (KDCloud, 2012)

From the website:

Paper Submission August 10, 2012

Acceptance Notice October 01, 2012

Camera-Ready Copy October 15, 2012

Workshop December 10, 2012 Brussels, Belgium

Collocated with the IEEE International Conference on Data Mining, ICDM 2012

From the website:

The 3rd International Workshop on Knowledge Discovery Using Cloud and Distributed Computing Platforms (KDCloud, 2012) provides an international platform to share and discuss recent research results in adopting cloud and distributed computing resources for data mining and knowledge discovery tasks.

Synopsis: Processing large datasets using dedicated supercomputers alone is not an economical solution. Recent trends show that distributed computing is becoming a more practical and economical solution for many organizations. Cloud computing, which is large-scale distributed computing, has attracted significant attention from both industry and academia in recent years. Cloud computing is fast becoming a cheaper alternative to costly centralized systems. Many recent studies have shown the utility of cloud computing in data mining, machine learning and knowledge discovery. This workshop intends to bring together researchers, developers, and practitioners from academia, government, and industry to discuss new and emerging trends in cloud computing technologies, programming models, and software services, and to outline the data mining and knowledge discovery approaches that can efficiently exploit these modern computing infrastructures. This workshop also seeks to identify the greatest challenges in embracing cloud computing infrastructure for scaling algorithms to petabyte-sized datasets. Thus, we invite all researchers, developers, and users to participate in this event and share, contribute, and discuss the emerging challenges in developing data mining and knowledge discovery solutions and frameworks around cloud and distributed computing platforms.

Topics: The major topics of interest to the workshop include but are not limited to:

  • Programming models and tools needed for data mining, machine learning, and knowledge discovery
  • Scalability and complexity issues
  • Security and privacy issues relevant to KD community
  • Best use cases: is there a class of algorithms best suited to cloud and distributed computing platforms?
  • Performance studies comparing clouds, grids, and clusters
  • Performance studies comparing various distributed file systems for data intensive applications
  • Customizations and extensions of existing software infrastructures such as Hadoop for streaming, spatial, and spatiotemporal data mining
  • Applications: Earth science, climate, energy, business, text, web and performance logs, medical, biology, image and video.

It’s December, Belgium, and an interesting workshop. Can’t ask for much more than that!

June 9, 2012

Distributed Systems Tracing with Zipkin [Sampling @ Twitter w/ UI]

Filed under: BigData,Distributed Systems,Sampling,Systems Research,Tracing — Patrick Durusau @ 7:15 pm

Distributed Systems Tracing with Zipkin

From the post:

Zipkin is a distributed tracing system that we created to help us gather timing data for all the disparate services involved in managing a request to the Twitter API. As an analogy, think of it as a performance profiler, like Firebug, but tailored for a website backend instead of a browser. In short, it makes Twitter faster. Today we’re open sourcing Zipkin under the APLv2 license to share a useful piece of our infrastructure with the open source community and gather feedback.

Hmmm, tracing based on the Dapper paper, and it comes with a web-based UI for browsing requests. Hard to beat that!

Thinking more about the sampling issue, what if I were to sample a very large stream of proxies, merging only a certain percentage and piping the rest to /dev/null?

For example, I have a UPI feed and that is my base set of “news” proxies. I also have feeds from the various newspaper, radio, and TV outlets around the United States. If the proxies from the non-UPI feeds are within some distance of the UPI feed proxies, they are simply discarded.

True, I am losing the information about which newspapers carried the stories, or whose bylines amounted to reordering the words or dumbing them down, but those details may not fall under my requirements.

I would rather have a few dozen very good sources than, say, 70,000 sources that say the same thing.

If you were testing for news coverage or the spread of news stories, your requirements might be different.
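One cheap way to approximate “within some distance” is plain string similarity against the base feed. A hypothetical Python sketch (the threshold and function names are my assumptions, and a real system would compare proxies rather than raw headlines):

```python
from difflib import SequenceMatcher

SIMILARITY_THRESHOLD = 0.8  # assumed cutoff; would need tuning against real feeds

def near_duplicate(candidate, base_stories):
    """True if a candidate story is within 'some distance' of any base story,
    i.e. it should be piped to /dev/null rather than merged."""
    return any(
        SequenceMatcher(None, candidate, base).ratio() >= SIMILARITY_THRESHOLD
        for base in base_stories
    )

base = ["Senate passes budget bill after long debate"]
# A lightly reworded version of the base story would be discarded...
near_duplicate("Senate passes budget bill after a long debate", base)
# ...while genuinely different coverage would be kept.
near_duplicate("Local flooding closes highways in three counties", base)
```

The threshold is exactly where the requirements question above bites: a coverage-tracking study would set it very high or not discard at all.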

I first saw this at Alex Popescu’s myNoSQL.

Dapper, a Large-Scale Distributed Systems Tracing Infrastructure [Data Sampling Lessons For “Big Data”]

Filed under: BigData,Distributed Systems,Sampling,Systems Research,Tracing — Patrick Durusau @ 7:14 pm

Dapper, a Large-Scale Distributed Systems Tracing Infrastructure by Benjamin H. Sigelman, Luiz André Barroso, Mike Burrows, Pat Stephenson, Manoj Plakal, Donald Beaver, Saul Jaspan, and Chandan Shanbhag.

Abstract:

Modern Internet services are often implemented as complex, large-scale distributed systems. These applications are constructed from collections of software modules that may be developed by different teams, perhaps in different programming languages, and could span many thousands of machines across multiple physical facilities. Tools that aid in understanding system behavior and reasoning about performance issues are invaluable in such an environment.

Here we introduce the design of Dapper, Google’s production distributed systems tracing infrastructure, and describe how our design goals of low overhead, application-level transparency, and ubiquitous deployment on a very large scale system were met. Dapper shares conceptual similarities with other tracing systems, particularly Magpie [3] and X-Trace [12], but certain design choices were made that have been key to its success in our environment, such as the use of sampling and restricting the instrumentation to a rather small number of common libraries.

The main goal of this paper is to report on our experience building, deploying and using the system for over two years, since Dapper’s foremost measure of success has been its usefulness to developer and operations teams. Dapper began as a self-contained tracing tool but evolved into a monitoring platform which has enabled the creation of many different tools, some of which were not anticipated by its designers. We describe a few of the analysis tools that have been built using Dapper, share statistics about its usage within Google, present some example use cases, and discuss lessons learned so far.

A very important paper for anyone working with large and complex systems.

With lessons on data sampling as well:

we have found that a sample of just one out of thousands of requests provides sufficient information for many common uses of the tracing data.

You have to wonder, in “data in the petabyte range” cases, how many datasets could be reduced to gigabyte (or smaller) size with no loss in accuracy.

Which would reduce storage requirements, increase analysis speed, reduce the complexity of analysis, etc.

Have you sampled your “big data” recently?
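Part of what makes Dapper’s sparse sampling workable is that the keep/drop decision is coherent: every service touched by a request must decide the same way, or sampled traces come out incomplete. A common way to get that (a sketch of the general technique, not Dapper’s actual code) is to hash the trace id:

```python
import hashlib

SAMPLE_ONE_IN = 1024  # keep roughly 1 trace in 1024; the paper reports sparse rates sufficing

def should_sample(trace_id):
    """Deterministic decision: every service hashing the same trace id makes
    the same choice, so a sampled trace is always captured end to end."""
    digest = hashlib.sha256(trace_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % SAMPLE_ONE_IN == 0

# Across many traces, roughly 1/1024 are kept.
kept = sum(should_sample(f"trace-{i}") for i in range(100_000))
```

No coordination, no shared state: the trace id itself carries the sampling decision to every machine that sees it.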

I first saw this at Alex Popescu’s myNoSQL.

April 21, 2012

Distributed Temporal Graph Database Using Datomic

Filed under: Datomic,Distributed Systems,Graphs,Temporal Graph Database — Patrick Durusau @ 4:34 pm

Distributed Temporal Graph Database Using Datomic

Post by Alex Popescu calling out construction of a “distributed temporal graph database.”

Temporal used in the sense of timestamping entries in the database.

Beyond such uses, beware, there be dragons.

Temporal modeling isn’t for the faint of heart.

April 1, 2012

Intro to Distributed Erlang (screencast)

Filed under: Distributed Systems,Erlang — Patrick Durusau @ 7:10 pm

Intro to Distributed Erlang (screencast) by Bryan Hunter.

From the description:

Here’s an introduction to distribution in Erlang. This screencast demonstrates creating three Erlang nodes on a Windows box and one on a Linux box and then connecting them using the one-liner “net_adm:ping” to form a mighty compute cluster.

Topics covered:

  • Using erl to start an Erlang node (an instance of the Erlang runtime system).
  • How to use net_adm:ping to connect four Erlang nodes (three on Windows, one on Linux).
  • Using rpc:call to RickRoll a Linux box from an Erlang node running on a Windows box.
  • Using nl to load (deploy) a module from one node to all connected nodes.

Not the most powerful cluster but a good way to learn distributed Erlang.

March 15, 2012

A Distributed C Compiler System on MapReduce: Mrcc

Filed under: Compilers,Distributed Systems — Patrick Durusau @ 8:02 pm

A Distributed C Compiler System on MapReduce: Mrcc

Alex Popescu of myNoSQL points to software and a paper on distributed compilation of C code.

Changing to distributed architectures may uncover undocumented decisions made long ago and far away. Decisions that we may choose to make differently this time. Hopefully we will do a better job of documenting them. (Not that it will happen but there is no law against hoping.)

February 3, 2012

Building Distributed Systems with Riak Core

Filed under: Distributed Systems,Riak — Patrick Durusau @ 4:54 pm

Building Distributed Systems with Riak Core by Steve Vinoski (Basho).

From the description:

Riak Core is the distributed systems foundation for the Riak distributed database and the Riak Search full-text indexing system. Riak Core provides a proven architecture and key functionality required to quickly build scalable, distributed applications. This talk will cover the origins of Riak Core, the abstractions and functionality it provides, and some guidance on building distributed systems.

Rest assured or be forewarned that there is no Erlang code in this presentation.

For all that, it is still a very informative presentation on building scalable, distributed applications.

December 30, 2011

Explorations in Parallel Distributed Processing:..

Filed under: Distributed Systems,Parallel Programming — Patrick Durusau @ 6:00 pm

Explorations in Parallel Distributed Processing: A Handbook of Models, Programs, and Exercises by James L. McClelland.

From Chapter 1, Introduction:

Several years ago, Dave Rumelhart and I first developed a handbook to introduce others to the parallel distributed processing (PDP) framework for modeling human cognition. When it was first introduced, this framework represented a new way of thinking about perception, memory, learning, and thought, as well as a new way of characterizing the computational mechanisms for intelligent information processing in general. Since it was first introduced, the framework has continued to evolve, and it is still under active development and use in modeling many aspects of cognition and behavior.

Our own understanding of parallel distributed processing came about largely through hands-on experimentation with these models. And, in teaching PDP to others, we discovered that their understanding was enhanced through the same kind of hands-on simulation experience. The original edition of the handbook was intended to help a wider audience gain this kind of experience. It made many of the simulation models discussed in the two PDP volumes (Rumelhart et al.,  1986; McClelland et al.,  1986) available in a form that is intended to be easy to use. The handbook also provided what we hoped were accessible expositions of some of the main mathematical ideas that underlie the simulation models. And it provided a number of prepared exercises to help the reader begin exploring the simulation programs.

The current version of the handbook attempts to bring the older handbook up to date. Most of the original material has been kept, and a good deal of new material has been added. All of the simulation programs have been implemented or re-implemented within the MATLAB programming environment. In keeping with other MATLAB projects, we call the suite of programs we have implemented the PDPTool software.

Latest revision (Sept. 2011) is online for your perusal. A good way to develop an understanding of parallel processing.

Apologies for not seeing this before Christmas. Please consider it an early birthday present for your birthday in 2012!

November 27, 2011

6th International Symposium on Intelligent Distributed Computing – IDC 2012

Filed under: Artificial Intelligence,Conferences,Distributed Systems — Patrick Durusau @ 8:57 pm

6th International Symposium on Intelligent Distributed Computing – IDC 2012

Important Dates:

Full paper submission: April 10, 2012
Notification of acceptance: May 10, 2012
Final (camera ready) paper due: June 1, 2012
Symposium: September 24-26, 2012

From the call for papers:

Intelligent computing covers a hybrid palette of methods and techniques derived from classical artificial intelligence, computational intelligence, multi-agent systems, among others. Distributed computing studies systems that contain loosely-coupled components running on different networked computers and that communicate and coordinate their actions by message transfer. The emergent field of intelligent distributed computing is expected to pose special challenges of adaptation and fruitful combination of results from both areas, with a great impact on the development of new-generation intelligent distributed information systems. The aim of this symposium is to bring together researchers involved in intelligent distributed computing to allow cross-fertilization and synergy of ideas and to enable advancement of research in the field.

The symposium welcomes submissions of original papers concerning all aspects of intelligent distributed computing, ranging from concepts and theoretical developments to advanced technologies and innovative applications. Paper acceptance and publication will be judged based on relevance to the symposium theme, clarity of presentation, originality, and accuracy of results and proposed solutions.

November 2, 2011

Systems We Make

Filed under: Distributed Systems — Patrick Durusau @ 6:24 pm

Systems We Make: curating complex distributed systems of our times, by Srihari Srinivasan.

About:

These are indeed great times for Distributed Systems enthusiasts. The boom in the number and variety of systems being built in both academia and the industry has created a strong need to curate interesting creations under one roof.

Systems We Make was conceived to fill this void. Although Systems We Make is still in its infancy I hope to shape it into something more than just a catalog. So stay tuned as we evolve this site and do write to me about how you feel!

Systems We Make may still be in its “infancy” but I am certainly going to both watch this site for news as well as mine the resources it already offers!

I don’t have any predictions for when it will happen, but it isn’t hard to foresee a time when “distributed computing” is as archaic as “my computer.” Computing will be a service much like electricity or water, based on a computing fabric, the details of which matter only to those charged with its maintenance.

