Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

October 27, 2015

A Certain Tendency Of The Database Community

Filed under: Consistency,Database,Networks — Patrick Durusau @ 7:16 pm

A Certain Tendency Of The Database Community by Christopher Meiklejohn.

From the post:

Abstract

We posit that striving for distributed systems that provide “single system image” semantics is fundamentally flawed and at odds with how systems operate in the physical world. We realize the database as an optimization of this system: a required, essential optimization in practice that facilitates central data placement and ease of access to participants in a system. We motivate a new model of computation that is designed to address the problems of computation over “eventually consistent” information in a large-scale distributed system.

Eventual Consistency

When we think about the world we live in, we do not usually say it is eventually consistent, for this is a term usually applied to computing systems, made up of multiple machines, that have to operate with shared information.

Eventual consistency is a consistency model for replicated, shared state. A consistency model is a contract between an application developer and a system that application will run on. A contract between a developer and a system states the following: given the developer follows the rules defined by the system, certain outcomes from the system are guaranteed. This makes it possible for developers to build successful applications, for without this contract, applications would have no guarantee that the actions they perform would have a correct outcome.

(italics in original)

A very accessible and great read on “eventual consistency.”

Christopher points out that any “state” of knowledge is a snapshot under a given set of constraints:

For instance, if the leading researchers on breast cancer were to document the state-of-the-art in a book, as the document is being written it would no longer reflect the state-of-the-art. The collective knowledge of this group is always changing, and as long as we continue to rewrite the document it will only be approaching the combined knowledge of the group. We can think of this somewhat formally: if we had a way to view the group’s knowledge as an omniscient observer and we represent that knowledge as a linear function, the recorded text would be asymptotic to the function of the sum of global knowledge.
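Taking up that invitation to formalize (the notation below is mine, not Christopher’s): let K(t) be the group’s combined knowledge at time t and D(t) the knowledge recorded in the document. Then the document always lags:

```latex
% Notation mine, sketching the post's analogy; not a claim from the post.
% K(t): the group's combined knowledge at time t (monotonically growing).
% D(t): the knowledge recorded in the document at time t.
\[
  D(t) < K(t) \quad \text{for all } t,
  \qquad
  D(t) = K\bigl(t - \delta(t)\bigr), \quad \delta(t) > 0 .
\]
% However often the document is rewritten, the lag delta(t) never
% reaches zero while the group keeps learning.
```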

He concludes with this question:

…Can we build computational abstractions that allow devices to communicate peer-to-peer, acknowledging the true source of truth for a particular piece of information and scale to the amount of information that exists, not only between all computers in a planetary-scale distributed system, but all entities in the universe[?]

I’m not sure about “all entities in the universe,” or even a “planetary-scale distributed system,” but we do know that NetWare Directory Services (NDS, now eDirectory), first shipped in 1993, was a replicated, distributed, sharded database with eventual convergence.

We have had the computational abstractions for a replicated, distributed, sharded database with eventual convergence for a number of years.

I would adjust Christopher’s “true source of truth” to “source of truth as defined by users,” to avoid the one-world-truth position that crippled the Semantic Web even before FOL and RDF syntax arrived.

August 21, 2014

…Loosely Consistent Distributed Programming

Filed under: Consistency,Distributed Computing,Programming — Patrick Durusau @ 3:15 pm

Language Support for Loosely Consistent Distributed Programming by Neil Conway.

Abstract:

Driven by the widespread adoption of both cloud computing and mobile devices, distributed computing is increasingly commonplace. As a result, a growing proportion of developers must tackle the complexity of distributed programming—that is, they must ensure correct application behavior in the face of asynchrony, concurrency, and partial failure.

To help address these difficulties, developers have traditionally relied upon system infrastructure that provides strong consistency guarantees (e.g., consensus protocols and distributed transactions). These mechanisms hide much of the complexity of distributed computing—for example, by allowing programmers to assume that all nodes observe the same set of events in the same order. Unfortunately, providing such strong guarantees becomes increasingly expensive as the scale of the system grows, resulting in availability and latency costs that are unacceptable for many modern applications.

Hence, many developers have explored building applications that only require loose consistency guarantees—for example, storage systems that only guarantee that all replicas eventually converge to the same state, meaning that a replica might exhibit an arbitrary state at any particular time. Adopting loose consistency involves making a well-known tradeoff: developers can avoid paying the latency and availability costs incurred by mechanisms for achieving strong consistency, but in exchange they must deal with the full complexity of distributed computing. As a result, achieving correct application behavior in this environment is very difficult.

This thesis explores how to aid developers of loosely consistent applications by providing programming language support for the difficulties they face. The language level is a natural place to tackle this problem: because developers that use loose consistency have fewer system facilities that they can depend on, consistency concerns are naturally pushed into application logic. In part, our goal has been to recognize, formalize, and automate application-level consistency patterns.

We describe three language variants that each tackle a different challenge in distributed programming. Each variant is a modification of Bloom, a declarative language for distributed programming we have developed at UC Berkeley. The first variant of Bloom, Bloom^L, enables deterministic distributed programming without the need for distributed coordination. Second, Edelweiss allows distributed storage reclamation protocols to be generated in a safe and automatic fashion. Finally, Bloom^PO adds sophisticated ordering constraints that we use to develop a declarative, high-level implementation of concurrent editing, a particularly difficult class of loosely consistent programs.
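The core idea behind Bloom^L is that program state lives in lattices and merging takes least upper bounds, so results do not depend on the order in which messages arrive. Here is a minimal sketch of that idea in Python (my illustration, not Bloom code):

```python
# Sketch of the lattice idea behind Bloom^L (illustration only, not Bloom):
# state is merged with a least-upper-bound operation, so replicas can
# apply incoming states in any order and still agree.

class MaxInt:
    """A lattice over integers where join = max."""
    def __init__(self, value=0):
        self.value = value

    def join(self, other):
        return MaxInt(max(self.value, other.value))

class SetUnion:
    """A lattice over sets where join = union."""
    def __init__(self, items=frozenset()):
        self.items = frozenset(items)

    def join(self, other):
        return SetUnion(self.items | other.items)

# Joins commute, associate, and are idempotent, so delivery order
# is irrelevant:
a, b, c = SetUnion({1}), SetUnion({2}), SetUnion({2, 3})
assert a.join(b).join(c).items == c.join(a).join(b).items == frozenset({1, 2, 3})
assert MaxInt(3).join(MaxInt(5)).value == MaxInt(5).join(MaxInt(3)).value == 5
```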

Unless you think of topic maps as static files, recent developments in “loosely consistent distributed programming” should be high on your reading list.

It’s entirely possible to have a topic map that is a static file, even one that has been printed out on paper. But that seems like a poor target for development. Captured information starts going stale the moment it is captured.

I first saw this in a tweet by Peter Bailis.

August 19, 2014

CRDTs: Consistency without consensus

Filed under: Consistency,CRDT,Distributed Systems — Patrick Durusau @ 7:17 pm

CRDTs: Consistency without consensus by Peter Bourgon.

Abstract:

When you think of distributed systems, you probably think in terms of consistency via consensus. That is, enabling a heterogeneous group of systems to agree on facts, while remaining robust in the face of failure. But, as any distributed systems developer can attest, it’s harder than it sounds. Failure happens in myriad, byzantine ways, and failure modes interact unpredictably. Reliable distributed systems need more than competent engineering: they need a robust theoretical foundation. CRDTs, or Convergent Replicated Data Types, are a set of properties or behaviors, discovered more than invented, which enable a distributed system to achieve consistency without consensus, and sidestep entire classes of problems altogether. This talk provides a practical introduction to CRDTs, and describes a production CRDT system built at SoundCloud to serve high-volume time-series data.

Slides: bbuzz14-peter_bourgon_0.pdf

This is very much worth your time!

Great discussion of data models after time mark 23:00 (approximately).

BTW, the system discussed is open source and in production: http://github.com/soundcloud/roshi
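Roshi’s data model is, roughly, a last-writer-wins element set keyed by timestamps. A toy sketch of that shape (my simplification in Python, not Roshi’s actual Go implementation):

```python
# Toy last-writer-wins element set, the rough shape of Roshi's data model
# (my simplification, not Roshi's actual implementation).

class LWWSet:
    def __init__(self):
        self.adds = {}     # element -> latest add timestamp
        self.removes = {}  # element -> latest remove timestamp

    def add(self, element, ts):
        if ts > self.adds.get(element, float("-inf")):
            self.adds[element] = ts

    def remove(self, element, ts):
        if ts > self.removes.get(element, float("-inf")):
            self.removes[element] = ts

    def contains(self, element):
        # Present if the latest operation we know about is an add;
        # ties favor the remove in this sketch.
        return self.adds.get(element, float("-inf")) > \
               self.removes.get(element, float("-inf"))

    def merge(self, other):
        for e, ts in other.adds.items():
            self.add(e, ts)
        for e, ts in other.removes.items():
            self.remove(e, ts)

replica_a, replica_b = LWWSet(), LWWSet()
replica_a.add("event-1", ts=1)
replica_b.remove("event-1", ts=2)          # concurrent, on another replica
replica_a.merge(replica_b)
assert not replica_a.contains("event-1")   # later timestamp wins
```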

July 28, 2014

Synchronizer Based on Operational Transformation…

Filed under: Consistency,Operations,P2P,Synchronization,Version Vectors,Versioning — Patrick Durusau @ 7:53 pm

Synchronizer Based on Operational Transformation for P2P Environments by Michelle Cart and Jean Ferrié

Abstract:

Reconciling divergent copies is a common problem encountered in distributed or mobile systems, asynchronous collaborative groupware, concurrent engineering, software configuration management, version control systems and personal work involving several mobile computing devices. Synchronizers provide a solution by enabling two divergent copies of the same object to be reconciled. Unfortunately, a master copy is generally required before they can be used for reconciling n copies, otherwise copy convergence will not be achieved. This paper presents the principles and algorithm of a Synchronizer which provides the means to reconcile n copies, without discriminating in favour of any particular copy. Copies can be modified (concurrently or not) on different sites and the Synchronizer we propose enables them to be reconciled pairwise, at any time, regardless of the pair, while achieving convergence of all copies. For this purpose, it uses the history of operations executed on each copy and Operational Transformations. It does not require a centralised or ordering (timestamp, state vector, etc.) mechanism. Its main advantage is thus to enable free and lazy propagation of copy updates while ensuring their convergence – it is particularly suitable for P2P environments in which no copy should be favoured.
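The core move in operational transformation is easiest to see in the simplest case: two sites concurrently insert into the same string, and each transforms the remote operation against its own before applying it. A sketch of the textbook insert/insert transform (illustration of the idea only, not the paper’s full synchronizer):

```python
# The textbook OT transform for two concurrent insertions into one string
# (illustration of the core idea only, not the paper's synchronizer).

def apply_insert(text, pos, ch):
    return text[:pos] + ch + text[pos:]

def transform_insert(pos_a, pos_b, site_a, site_b):
    """Shift insertion A's position so it can run after insertion B.

    Site ids break the tie when both sites insert at the same position,
    so every replica picks the same winner.
    """
    if pos_b < pos_a or (pos_b == pos_a and site_b < site_a):
        return pos_a + 1
    return pos_a

doc = "abc"
# Site 1 does ins(1, "X"); site 2 concurrently does ins(1, "Y").
# Each site applies its own op, then the transformed remote op.
s1 = apply_insert(doc, 1, "X")
s1 = apply_insert(s1, transform_insert(1, 1, site_a=2, site_b=1), "Y")
s2 = apply_insert(doc, 1, "Y")
s2 = apply_insert(s2, transform_insert(1, 1, site_a=1, site_b=2), "X")
assert s1 == s2 == "aXYbc"   # both replicas converge
```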

Not the oldest work on operational transformations (2007), nor the most recent.

Certainly of interest for distributed topic maps as well as other change tracking applications.

I first saw this in a tweet by onepaperperday.

July 22, 2014

Readings in conflict-free replicated data types

Filed under: Consistency,CRDT,Merging,Riak — Patrick Durusau @ 6:55 pm

Readings in conflict-free replicated data types by Christopher Meiklejohn.

From the post:

This is a work in progress post outlining research topics related to conflict-free replicated data types, or CRDTs.

Yesterday, Basho announced the release of Riak 2.0.0 RC1, which contains a comprehensive set of “data types” that can be used for building more robust distributed applications. For an overview of how to use these data types in Riak to avoid custom, and error prone, merge functions, see the Basho documentation site.

You’re probably more familiar with another name for these data types: conflict-free replicated data types (CRDTs). Simply put, CRDTs are data structures which capture some aspect of causality, along with providing interfaces for safely operating over the value and correctly merging state with diverged and concurrently edited structures.

This provides a very useful property when combined with an eventual consistency, or AP-focused, data store: Strong Eventual Consistency (SEC). Strong Eventual Consistency is an even stronger convergence property than eventual consistency: given that all updates are delivered to all replicas, there is no need for conflict resolution, given the conflict-free merge properties of the data structure. Simply put, correct replicas which have received all updates have the same state.

Here’s a great overview by one of the inventors of CRDTs, Marc Shapiro, where he discusses conflict-free replicated data types and their relation to strong eventual consistency.

In this Hacker News thread, there was an interesting discussion about why one might want to implement these on the server, why implementing them is non-trivial, and what the most recent research related to them consists of.

This post serves as a reading guide on the various areas of conflict-free replicated data types. Papers are broken down into various areas and sorted in reverse chronological order.
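To make “conflict-free merge” concrete before diving into the reading list, here is the simplest CRDT of all, a grow-only counter, sketched in Python (my illustration, not Riak’s data types):

```python
# The simplest CRDT: a grow-only counter (G-Counter).
# Each replica increments only its own slot; merge takes the
# element-wise maximum, which is commutative, associative, and
# idempotent -- so replicas converge no matter how updates arrive.

class GCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # replica_id -> count

    def increment(self):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + 1

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        for rid, n in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), n)

a, b = GCounter("a"), GCounter("b")
a.increment(); a.increment()
b.increment()
a.merge(b); b.merge(a)   # merge in either order, any number of times
assert a.value() == b.value() == 3
```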

Relevant to me because the new change tracking in ODF is likely to be informed by CRDTs and because eventually consistent merging is important for distributed topic maps.

Confusion would result if the order in which topics are merged produced different topic maps.

CRDTs are an approach to avoid that unhappy outcome.

Enjoy!

PS: Remember to grab a copy of Riak 2.0.0 RC1.

May 20, 2014

Fun with CRDTs

Filed under: Consistency,CRDT,Merging — Patrick Durusau @ 3:32 pm

Fun with CRDTs by Richard Dallaway.

From the post:

At the end of last year I had some fun implementing a CRDT. These are data structures designed to combine together when you have no control over order of changes, timing of changes, or the number of participants in the data structure. The example I looked at was a sequential datatype, namely the WOOT CRDT for collaborative text editing.

Doesn’t:

combine together when you have no control over order of changes, timing of changes, or the number of participants in the data structure.

sound familiar? 😉

Richard points to:

Slides.

Video.

He also recommends that you watch: Reconciling Eventually-Consistent Data with CRDTs by Noel Welsh, before viewing his video.

Great stuff!

March 12, 2014

Raft Consensus Algorithm

Filed under: Algorithms,Consensus,Consistency,Paxos — Patrick Durusau @ 1:34 pm

Raft: Understandable Distributed Consensus

A compelling visualization of the Raft consensus algorithm!

I first saw the visualization link in a tweet by Aaron Bull Schaefer.

The visualization closes with pointers to more information on Raft.

One pointer is to the Raft Consensus Algorithm website.

From the homepage:

Raft is a consensus algorithm that is designed to be easy to understand. It’s equivalent to Paxos in fault-tolerance and performance. The difference is that it’s decomposed into relatively independent subproblems, and it cleanly addresses all major pieces needed for practical systems. We hope Raft will make consensus available to a wider audience, and that this wider audience will be able to develop a variety of higher quality consensus-based systems than are available today.

There are links to videos + slides, the raft-dev Google Group, and numerous implementations of the Raft algorithm.

The other pointer from the visualization is to the Raft paper: In Search of an Understandable Consensus Algorithm (PDF) by Diego Ongaro and John Ousterhout.

From the paper (section 4):

We had several goals in designing Raft: it must provide a complete and appropriate foundation for system building, so that it significantly reduces the amount of design work required of developers; it must be safe under all conditions and available under typical operating conditions; and it must be efficient for common operations. But our most important goal—and most difficult challenge—was understandability. It must be possible for a large audience to understand the algorithm comfortably. In addition, it must be possible to develop intuitions about the algorithm, so that system builders can make the extensions that are inevitable in real-world implementations.

Who would have thought that choosing more obvious/understandable approaches would have practical benefits?

There were numerous points in the design of Raft where we had to choose among alternative approaches. In these situations we evaluated the alternatives based on understandability: how hard is it to explain each alternative (for example, how complex is its state space, and does it have subtle implications?), and how easy will it be for a reader to completely understand the approach and its implications? Given a choice between an alternative that was concise but subtle and one that was longer (either in lines of code or explanation) but more obvious, we chose the more obvious approach. Fortunately, in most cases the more obvious approach was also more concise. (emphasis added)

Understandability, now there’s a useful requirement.
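For a taste of how those decomposed subproblems read, here is a toy paraphrase of the RequestVote rule from Raft’s leader election (following the conditions in the paper’s Figure 2; a sketch, not a usable implementation):

```python
# Toy paraphrase of Raft's RequestVote rule (cf. Figure 2 of the paper);
# a sketch of the logic only, not a usable implementation.

from dataclasses import dataclass

@dataclass
class VoteRequest:
    term: int            # candidate's term
    candidate_id: str
    last_log_index: int  # 1-based index of candidate's last log entry
    last_log_term: int   # term of candidate's last log entry

class Follower:
    def __init__(self):
        self.current_term = 0
        self.voted_for = None
        self.log = []  # list of (term, command) entries

    def _candidate_log_is_current(self, req):
        # "Up-to-date": higher last term wins; equal terms compare length.
        my_last_term = self.log[-1][0] if self.log else 0
        if req.last_log_term != my_last_term:
            return req.last_log_term > my_last_term
        return req.last_log_index >= len(self.log)

    def request_vote(self, req):
        if req.term < self.current_term:
            return False                   # stale candidate: reject
        if req.term > self.current_term:
            self.current_term = req.term   # newer term: forget old vote
            self.voted_for = None
        if self.voted_for in (None, req.candidate_id) and \
                self._candidate_log_is_current(req):
            self.voted_for = req.candidate_id
            return True
        return False

f = Follower()
assert f.request_vote(VoteRequest(term=1, candidate_id="n2",
                                  last_log_index=0, last_log_term=0))
```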

February 9, 2014

Eventual Consistency Of Topic Maps

Filed under: Consistency,Topic Maps — Patrick Durusau @ 3:32 pm

What if all transactions required strict global consistency? by Matthew Aslett.

From the post:

My mum recently moved house. Being the dutiful son that I am I agreed to help her pack up her old house, drive to her new place and help unpack when we got there.

As it happens the most arduous part of the day did not involve packing, driving or unpacking but waiting: waiting for the various solicitors involved to confirm that the appropriate funds had been deposited in the appropriate bank accounts before the estate agents could hand over the keys.

It took hours, and was a reminder that while we might think of bank transfers as being instantaneous, there can be considerable delays involved in confirming that the correct amount has been debited from one bank account and credited to another.

Matthew goes on to illustrate that banking transactions have always been “eventually consistent.” He doesn’t mention it, but the Uniform Commercial Code has several sections that cover checks, bank deposits and other matters. Text of the UCC at LLI.

The one thing the financial industry has that topic maps lack is a common expectation of “eventual consistency.” The Uniform Commercial Code establishes (where adopted) the rules by which “eventual consistency” for banks is governed.

To avoid client disappointment, discuss “eventual consistency” up front. If your client expects instantaneous merging with some data sets, they are likely to be disappointed.

You know the saying that a project can be faster, cheaper, or better, but you can only pick two of the three? It works with merging as well.

November 24, 2012

Consistency through semantics

Filed under: Consistency,Semantics,Software — Patrick Durusau @ 2:13 pm

Consistency through semantics by Oliver Kennedy.

From the post:

When designing a distributed system, one of the first questions anyone asks is what kind of consistency model to use. This is a fairly nuanced question, as there isn’t really one right answer. Do you enforce strong consistency and accept the resulting latency and communication overhead? Do you use locking, and accept the resulting throughput limitations? Or do you just give up and use eventual consistency and accept that sometimes you’ll end up with results that are just a little bit out of sync?

It’s this last bit that I’d like to chat about today, because it’s actually quite common in a large number of applications. This model is present in everything from user-facing applications like Dropbox to SVN/GIT, to back-end infrastructure systems like Amazon’s Dynamo and Yahoo’s PNUTs. Often, especially in non-critical applications latency and throughput are more important than dealing with the possibility that two simultaneous updates will conflict.

So what happens when this dreadful possibility does come to pass? Clearly the system can’t grind to a halt, and often just randomly discarding one of these updates is the wrong thing to do. So what happens? The answer is common across most of these systems: They punt to the user.

Intuitively, this is the right thing to do. The user sees the big picture. The user knows best how to combine these operations. The user knows what to do, so on those rare occurrences where the system can’t handle it, the user can.

But why is this the right thing to do? What does the user have that the infrastructure doesn’t?

Take the time to read the rest of Oliver’s post.

He distinguishes rather nicely between applications and users.

November 13, 2012

Eventually-Consistent Data Structures

Filed under: Consistency,Data Types — Patrick Durusau @ 3:55 pm

Eventually-Consistent Data Structures by Sean Cribbs

Summary:

Sean Cribbs discusses Convergent Replicated Data Types, data structures that tolerate eventual consistency.

Covers a number of eventually consistent data types.
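A representative member of that family is the PN-Counter, which supports decrements by pairing two grow-only tallies and merging each with an element-wise maximum. A hedged sketch (mine, not taken from the talk):

```python
# A PN-Counter: supports decrements by keeping two grow-only maps,
# one for increments and one for decrements (sketch, not from the talk).

class PNCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.incs = {}  # replica_id -> total increments
        self.decs = {}  # replica_id -> total decrements

    def increment(self):
        self.incs[self.replica_id] = self.incs.get(self.replica_id, 0) + 1

    def decrement(self):
        self.decs[self.replica_id] = self.decs.get(self.replica_id, 0) + 1

    def value(self):
        return sum(self.incs.values()) - sum(self.decs.values())

    def merge(self, other):
        for rid, n in other.incs.items():
            self.incs[rid] = max(self.incs.get(rid, 0), n)
        for rid, n in other.decs.items():
            self.decs[rid] = max(self.decs.get(rid, 0), n)

c = PNCounter("a")
c.increment(); c.increment(); c.decrement()
assert c.value() == 1
```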

Materials you may want to cover before you watch the presentation:

Safety/Liveness – from Proving the Correctness of Multiprocess Programs – Leslie Lamport (March 1977) (As a bonus, a link to all Leslie Lamport’s papers.)

Safety and liveness: Eventual consistency is not safe by Peter Bailis.

Logic and Lattices for Distributed Programming by Neil Conway, William Marczak, Peter Alvaro, Joseph M. Hellerstein, and David Maier.

A comprehensive study of Convergent and Commutative Replicated Data Types by Marc Shapiro, Nuno Preguiça, Carlos Baquero, and Marek Zawirski.

Strong Eventual Consistency and Conflict-free Replicated Data Types by Marc Shapiro (video).

I first saw this in a tweet by Sean T. Allen.

October 11, 2012

Convergent and Commutative Replicated Data Types [Warning: Heavy Sledding Ahead]

Filed under: Consistency,CRDT,Data Structures,Data Types — Patrick Durusau @ 4:23 pm

A comprehensive study of Convergent and Commutative Replicated Data Types (PDF file) by Marc Shapiro, Nuno M. Preguiça, Carlos Baquero, and Marek Zawirski.

Abstract:

Eventual consistency aims to ensure that replicas of some mutable shared object converge without foreground synchronisation. Previous approaches to eventual consistency are ad-hoc and error-prone. We study a principled approach: to base the design of shared data types on some simple formal conditions that are sufficient to guarantee eventual consistency. We call these types Convergent or Commutative Replicated Data Types (CRDTs). This paper formalises asynchronous object replication, either state based or operation based, and provides a sufficient condition appropriate for each case. It describes several useful CRDTs, including container data types supporting both add and remove operations with clean semantics, and more complex types such as graphs, monotonic DAGs, and sequences. It discusses some properties needed to implement non-trivial CRDTs.
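The paper’s state-based sufficient condition, paraphrased: if replica states form a join semilattice, updates only move states upward, and merge computes the least upper bound, then all replicas that receive all updates converge. The algebra is easy to check for a grow-only set (my illustration):

```python
# State-based CRDT condition, illustrated with a grow-only set:
# merge must be commutative, associative, and idempotent (a join),
# and every update must move the state upward in the lattice.

def merge(s1, s2):
    return s1 | s2   # set union is the least upper bound here

a, b, c = {1, 2}, {2, 3}, {4}

assert merge(a, b) == merge(b, a)                       # commutative
assert merge(merge(a, b), c) == merge(a, merge(b, c))   # associative
assert merge(a, a) == a                                 # idempotent
assert a <= merge(a, b)                                 # updates are inflations
```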

I found this following a link in the README for riak_dt, which said:

WHAT?

Currently under initial development, riak_dt is a platform for convergent data types. It’s built on riak core and deployed with riak. All of our current work is around supporting fast, replicated, eventually consistent counters (though more data types are in the repo, and on the way.) This work is based on the paper – A Comprehensive study of Convergent and Commutative Replicated Data Types – which you may find an interesting read.

WHY?

Riak’s current model for handling concurrent writes is to store sibling values and present them to the client for resolution on read. The client must encode the logic to merge these into a single, meaningful value, and then inform Riak by doing a further write. Convergent data types remove this burden from the client, as their structure guarantees they will deterministically converge to a single value. The simplest of these data types is a counter.
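The client-side burden the README describes looks roughly like this (a pseudo-client for illustration; this is not Riak’s actual API):

```python
# The merge burden Riak's sibling model puts on the client, in outline
# (pseudo-client for illustration; this is not Riak's actual API).

def resolve_followers(sibling_values):
    """Application-specific merge: here, union of follower sets."""
    merged = set()
    for value in sibling_values:
        merged |= value
    return merged

# On read, the store may hand back several concurrently written values:
siblings = [{"alice", "bob"}, {"alice", "carol"}]
resolved = resolve_followers(siblings)
# ...then the client must write `resolved` back to settle the conflict.
# A convergent data type moves exactly this logic into the store.
assert resolved == {"alice", "bob", "carol"}
```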

I haven’t thought of merging of subject representatives as a quest for “consistency,” but that is one way to think about it.

The paper is forty-seven pages long and has forty-four references, most of which I suspect are necessary to fully appreciate the work.

Having said that, I suspect it will be well worth the effort.

September 23, 2012

The Cost of Strict Global Consistency [Or Rules for Eventual Consistency]

Filed under: Consistency,Database,Finance Services,Law,Law - Sources — Patrick Durusau @ 10:15 am

What if all transactions required strict global consistency? by Matthew Aslett.

Matthew quotes Basho CTO Justin Sheehy on eventual consistency and traditional accounting:

“Traditional accounting is done in an eventually-consistent way and if you send me a payment from your bank to mine then that transaction will be resolved in an eventually consistent way. That is, your bank account and mine will not have a jointly-atomic change in value, but instead yours will have a debit and mine will have a credit, each of which will be applied to our respective accounts.”

And Matthew comments:

The suggestion that bank transactions are not immediately consistent appears counter-intuitive. Comparing what happens in a transaction with a jointly atomic change in value, like buying a house, with what happens in normal transactions, like buying your groceries, we can see that for normal transactions this statement is true.

We don’t need to wait for the funds to be transferred from our accounts to a retailer before we can walk out the store. If we did we’d all waste a lot of time waiting around.

This highlights a couple of things that are true for both database transactions and financial transactions:

  • that eventual consistency doesn’t mean a lack of consistency
  • that different transactions have different consistency requirements
  • that if all transactions required strict global consistency we’d spend a lot of time waiting for those transactions to complete.

All of which is very true but misses an important point about financial transactions.

Financial transactions (involving banks, etc.) are eventually consistent according to the same rules.

That’s no accident. It didn’t just happen that banks adopted ad hoc rules that resulted in a uniform eventual consistency.

It didn’t happen overnight, but the current set of rules for “uniform eventual consistency” of banking transactions is spelled out by the Uniform Commercial Code (along with other laws and regulations, but the UCC is a major part of it).

Dare we say a uniform semantic for financial transactions was hammered out without the use of formal ontologies or web addresses? And that it supports billions of transactions on a daily basis? To become eventually consistent?

Think about the transparency (to you) of your next credit card transaction. Standards and eventual consistency make that possible.
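Sheehy’s debit/credit point is easy to make concrete: each bank applies its half of the transfer independently, and the books balance once both halves have been applied (a toy illustration, not how any real ledger works):

```python
# Toy illustration of eventually consistent transfers: the debit and the
# credit are independent operations, applied whenever they arrive.
# (Not how any real bank ledger works -- just the shape of the idea.)

class Account:
    def __init__(self, balance):
        self.balance = balance
        self.pending = []      # operations not yet applied

    def receive(self, amount):
        self.pending.append(amount)

    def settle(self):          # e.g., end-of-day batch processing
        while self.pending:
            self.balance += self.pending.pop()

yours, mine = Account(100), Account(0)
yours.receive(-30)   # debit queued at your bank
mine.receive(+30)    # credit queued at mine, independently

mine.settle()        # my bank applies its half first...
# ...at this instant the global state is "inconsistent" (130 in total),
yours.settle()       # ...but once both halves apply, the books balance.
assert yours.balance + mine.balance == 100
```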

February 20, 2011

On Distributed Consistency — Part 1 (MongoDB)

Filed under: Consistency,Distributed Consistency,MongoDB,NoSQL — Patrick Durusau @ 1:04 pm

On Distributed Consistency — Part 1 (MongoDB)

The first of a six-part series on consistency in distributed databases.

From the website:

See also:

  • Part 2 – Eventual Consistency
  • Part 3 – Network Partitions
  • Part 4 – Multi Data Center
  • Part 5 – Multi Writer Eventual Consistency
  • Part 6 – Consistency Chart

For distributed databases, consistency models are a topic of huge importance. We’d like to delve a bit deeper on this topic with a series of articles, discussing subjects such as what model is right for a particular use case. Please jump in and help us in the comments.

Consistency is an issue that will confront distributed topic maps, so it’s best to start learning the options now.
