Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

July 5, 2014

Understanding and Expressing Scalable Concurrency

Filed under: Algorithms,Concurrent Programming,Parallelism,Scalability — Patrick Durusau @ 4:35 pm

Understanding and Expressing Scalable Concurrency by Aaron Turon.

Abstract


The Holy Grail of parallel programming is to provide good speedup while hiding or avoiding the pitfalls of concurrency. But some level in the tower of abstraction must face facts: parallel processors execute code concurrently, and the interplay between concurrent code, synchronization, and the memory subsystem is a major determiner of performance. Effective parallel programming must ultimately be supported by scalable concurrent algorithms—algorithms that tolerate (or even embrace) concurrency for the sake of scaling with available parallelism. This dissertation makes several contributions to the understanding and expression of such algorithms:

  • It shows how to understand scalable algorithms in terms of local protocols governing each part of their hidden state. These protocols are visual artifacts that can be used to informally explain an algorithm at the whiteboard. But they also play a formal role in a new logic for verifying concurrent algorithms, enabling correctness proofs that are local in space, time, and thread execution. Correctness is stated in terms of refinement: clients of an algorithm can reason as if they were using the much simpler specification code it refines.
  • It shows how to express synchronization in a declarative but scalable way, based on a new library providing join patterns. By declarative, we mean that the programmer needs only to write down the constraints of a synchronization problem, and the library will automatically derive a correct solution. By scalable, we mean that the derived solutions deliver robust performance with increasing processor count and problem complexity. The library’s performance on common synchronization problems is competitive with specialized algorithms from the literature.
  • It shows how to express scalable algorithms through reagents, a new monadic abstraction. With reagents, concurrent algorithms no longer need to be constructed from “whole cloth,” i.e., by using system-level primitives directly. Instead, they are built using a mixture of shared-state and message-passing combinators. Concurrency experts benefit, because they can write libraries at a higher level, with more reuse, without sacrificing scalability. Their clients benefit, because composition empowers them to extend and tailor a library without knowing the details of its underlying algorithms.

Not for the faint of heart! 😉

But if you are interested in algorithms for an environment where processing is parallel by default, best dig in.

I like the author’s imagery of “Go Fish” when he says:

A scalable hashtable is useful not just for concurrent systems; it can also be a boon for explicit parallel programming. A simple but vivid example is the problem of duplicate removal: given a vector of items, return the items in any order, but without any duplicates. Since the input is unstructured, any way of dividing it amongst parallel threads appears to require global coordination to discover duplicate items. The key to avoiding a multiprocessor game of “Go Fish” is to focus on producing the output rather than dividing the input. If threads share a scalable hashtable that allows parallel insertion of distinct elements, they can construct the correct output with (on average) very little coordination, by simply each inserting a segment of the input into the table, one element at a time.
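
Here is a minimal sketch of that strategy in Java, using a ConcurrentHashMap-backed set as a stand-in for the scalable hashtable (my illustration, not the dissertation's data structure):

```java
import java.util.*;
import java.util.concurrent.*;

public class DedupSketch {
    public static void main(String[] args) throws Exception {
        // Input vector with duplicates.
        List<String> input = Arrays.asList("a", "b", "a", "c", "b", "d", "c");

        // Shared, thread-safe set plays the role of the scalable hashtable.
        Set<String> seen = ConcurrentHashMap.newKeySet();

        int threads = 4;
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        int chunk = (input.size() + threads - 1) / threads;

        // Each thread inserts its segment of the input; the set discards duplicates.
        List<Future<?>> futures = new ArrayList<>();
        for (int t = 0; t < threads; t++) {
            int from = Math.min(t * chunk, input.size());
            int to = Math.min(from + chunk, input.size());
            List<String> segment = input.subList(from, to);
            futures.add(pool.submit(() -> seen.addAll(segment)));
        }
        for (Future<?> f : futures) f.get();
        pool.shutdown();

        // Output: the distinct items, in no particular order.
        System.out.println(seen);
    }
}
```

The only coordination is inside the concurrent set itself, spread across hash bins rather than funneled through a single lock.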

Now that I think about it, topic map processing does a lot of duplicate removal.

Topic maps in a parallel processing environment anyone?

I first saw this in a tweet by Alex Clemmer.

February 23, 2014

Architecture Matters…

Filed under: Architecture,Clojure,Scalability — Patrick Durusau @ 3:30 pm

Architecture Matters : Building Clojure Services At Scale At SoundCloud by Charles Ditzel.

Charles points to three posts on Clojure services at scale:

Building Clojure Services at Scale by Joseph Wilk.

Architecture behind our new Search and Explore experience by Petar Djekic.

Evolution of SoundCloud’s Architecture by Sean Treadway.

If you aren’t already following Charles’s blog (I wasn’t, am now), you should be.

January 25, 2014

12 Free eBooks on Scala

Filed under: Books,Programming,Scala,Scalability — Patrick Durusau @ 8:27 pm

12 Free eBooks on Scala by Atithya Amaresh.

If you are missing any of these, now is the time to grab a copy:

  1. Functional Programming in Scala
  2. Play for Scala
  3. Scala Cookbook
  4. Lift Cookbook
  5. Scala in Action
  6. Testing in Scala
  7. Programming Scala by Venkat Subramaniam
  8. Programming Scala by Dean Wampler, Alex Payne
  9. Software Performance and Scalability
  10. Scalability Rules
  11. Lift in Action
  12. Scala in Depth

Enjoy!

January 13, 2014

Exploiting Parallelism and Scalability (XPS)

Filed under: HPC,Parallelism,Scalability — Patrick Durusau @ 8:10 pm

Exploiting Parallelism and Scalability (XPS) NSF

Full Proposal Window: February 10, 2014 – February 24, 2014

Synopsis:

Computing systems have undergone a fundamental transformation from the single-processor devices of the turn of the century to today’s ubiquitous and networked devices and warehouse-scale computing via the cloud. Parallelism is abundant at many levels. At the same time, semiconductor technology is facing fundamental physical limits and single processor performance has plateaued. This means that the ability to achieve predictable performance improvements through improved processor technologies alone has ended. Thus, parallelism has become critically important.

The Exploiting Parallelism and Scalability (XPS) program aims to support groundbreaking research leading to a new era of parallel computing. Achieving the needed breakthroughs will require a collaborative effort among researchers representing all areas– from services and applications down to the micro-architecture– and will be built on new concepts, theories, and foundational principles. New approaches to achieve scalable performance and usability need new abstract models and algorithms, new programming models and languages, new hardware architectures, compilers, operating systems and run-time systems, and must exploit domain and application-specific knowledge. Research is also needed on energy efficiency, communication efficiency, and on enabling the division of effort between edge devices and clouds.

The January 10th webinar for this activity hasn’t been posted yet.

Without semantics, XPS will establish a new metric:

GFS: Garbage per Femtosecond.

October 10, 2013

F1 And Spanner Holistically Compared

Filed under: F1,Scalability,Spanner — Patrick Durusau @ 6:36 pm

F1 And Spanner Holistically Compared

From the post:

This article, F1: A Distributed SQL Database That Scales by Srihari Srinivasan, is republished with permission from a blog you really should follow: Systems We Make – Curating Complex Distributed Systems.

With both the F1 and Spanner papers out, it’s now possible to understand their interplay a bit holistically. So let’s start by revisiting the key goals of both systems.

Just in case you missed the F1 paper.

The conclusion should give you enough reason to read this post and the papers carefully:

The F1 system has been managing all AdWords advertising campaign data in production since early 2012. AdWords is a vast and diverse ecosystem including 100s of applications and 1000s of users, all sharing the same database. This database is over 100 TB, serves up to hundreds of thousands of requests per second, and runs SQL queries that scan tens of trillions of data rows per day. Availability reaches five nines, even in the presence of unplanned outages, and observable latency on our web applications has not increased compared to the old MySQL system.

Keep this in mind when you read stories composed of excuses about the recent collapse of healthcare.gov.

September 8, 2013

Scaling Writes

Filed under: Graphs,Neo4j,Scalability — Patrick Durusau @ 5:58 pm

Scaling Writes by Max De Marzi.

From the post:

Most of the applications using Neo4j are read heavy and scale by getting more powerful servers or adding additional instances to the HA cluster. Writes, however, can be a little bit trickier. Before embarking on any of the following strategies it is best that the server is tuned.

Max encountered someone who wants to write data. How weird is that? 😉

Seriously, your numbers may vary from Max’s but you will be on your way to tuning write performance after reading this post.

Don’t depend on the NSA to capture your data. Freedom of Information requests take time and often have omissions. Test your recovery options with any writing solution.

May 2, 2013

HyperLogLog — Cornerstone of a Big Data Infrastructure

Filed under: Algorithms,BigData,Data Mining,HyperLogLog,Scalability — Patrick Durusau @ 10:44 am

HyperLogLog — Cornerstone of a Big Data Infrastructure

From the introduction:

In the Zipfian world of AK, the HyperLogLog distinct value (DV) sketch reigns supreme. This DV sketch is the workhorse behind the majority of our DV counters (and we’re not alone) and enables us to have a real time, in memory data store with incredibly high throughput. HLL was conceived of by Flajolet et. al. in the phenomenal paper HyperLogLog: the analysis of a near-optimal cardinality estimation algorithm. This sketch extends upon the earlier Loglog Counting of Large Cardinalities (Durand et. al.) which in turn is based on the seminal AMS work FM-85, Flajolet and Martin’s original work on probabilistic counting. (Many thanks to Jérémie Lumbroso for the correction of the history here. I am very much looking forward to his upcoming introduction to probabilistic counting in Flajolet’s complete works.) UPDATE – Rob has recently published a blog about PCSA, a direct precursor to LogLog counting which is filled with interesting thoughts. There have been a few posts on HLL recently so I thought I would dive into the intuition behind the sketch and into some of the details.

After seeing the HyperLogLog references in Approximate Methods for Scalable Data Mining I started looking for a fuller explanation/illustration of HyperLogLog.

Stumbled on this posting.

Includes a great HyperLogLog (HLL) simulation written in JavaScript.
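
If you want the intuition in code as well, here is a toy sketch in Java: hash each item, use the first few bits to pick a register, record the longest run of leading zeros seen per register, and combine the registers with a harmonic mean. It skips the bias and range corrections of the real algorithm, so treat the names and constants as illustrative only.

```java
import java.util.concurrent.ThreadLocalRandom;

/** Toy HyperLogLog: 2^b registers, each holding a max leading-zero rank. */
public class HllSketch {
    private final int b;          // number of index bits
    private final int m;          // number of registers = 2^b
    private final int[] registers;

    public HllSketch(int b) {
        this.b = b;
        this.m = 1 << b;
        this.registers = new int[m];
    }

    public void add(long hash) {
        int idx = (int) (hash >>> (64 - b));                 // first b bits pick a register
        long rest = hash << b;                               // remaining bits
        int rank = Math.min(Long.numberOfLeadingZeros(rest) + 1, 64 - b + 1);
        registers[idx] = Math.max(registers[idx], rank);     // keep the largest rank seen
    }

    public double estimate() {
        double alpha = 0.7213 / (1 + 1.079 / m);             // standard constant for large m
        double sum = 0.0;
        for (int r : registers) sum += Math.pow(2, -r);
        return alpha * m * m / sum;                          // raw estimate, no corrections
    }

    public static void main(String[] args) {
        HllSketch hll = new HllSketch(11);                   // 2048 registers
        int distinct = 1_000_000;
        for (int i = 0; i < distinct; i++) {
            // Stand-in for hashing a distinct item: a fresh random 64-bit value.
            hll.add(ThreadLocalRandom.current().nextLong());
        }
        System.out.printf("true=%d, estimated=%.0f%n", distinct, hll.estimate());
    }
}
```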

Enjoy!

Approximate Methods for Scalable Data Mining

Filed under: BigData,Data Mining,HyperLogLog,Scalability — Patrick Durusau @ 10:32 am

Approximate Methods for Scalable Data Mining by Andrew Clegg.

Slides from a presentation at: Data Science London 24/04/13.

To get your interest, a nice illustration of the HyperLogLog algorithm: "Billions of distinct values in 1.5KB of RAM with 2% relative error." (That is consistent with HLL's usual error bound of roughly 1.04/√m: 2,048 six-bit registers is about 1.5 KB and gives roughly 2.3% relative error.)

Has a number of other useful illustrations and great references.

November 6, 2012

Slides and Photos: Bigger Data & Smarter Scaling

Filed under: BigData,Scalability,TimesOpen — Patrick Durusau @ 11:48 am

Slides and Photos: Bigger Data & Smarter Scaling by Marci Windsheimer.

Report on the TimesOpen event on data and scaling.

From the post:

Our most recent TimesOpen event was our biggest yet. Here are some highlights:

Should be more than enough inducement to catch the next TimesOpen event!

October 25, 2012

Exploiting Parallelism and Scalability (XPS) (NSF)

Filed under: Language,Language Design,Parallelism,Scalability — Patrick Durusau @ 4:53 am

Exploiting Parallelism and Scalability (XPS) (NSF)

From the announcement:

Synopsis of Program:

Computing systems have undergone a fundamental transformation from the single-processor devices of the turn of the century to today’s ubiquitous and networked devices and warehouse-scale computing via the cloud. Parallelism has become ubiquitous at many levels. The proliferation of multi- and many-core processors, ever-increasing numbers of interconnected high performance and data intensive edge devices, and the data centers servicing them, is enabling a new set of global applications with large economic and social impact. At the same time, semiconductor technology is facing fundamental physical limits and single processor performance has plateaued. This means that the ability to achieve predictable performance improvements through improved processor technologies has ended.

The Exploiting Parallelism and Scalability (XPS) program aims to support groundbreaking research leading to a new era of parallel computing. XPS seeks research re-evaluating, and possibly re-designing, the traditional computer hardware and software stack for today’s heterogeneous parallel and distributed systems and exploring new holistic approaches to parallelism and scalability. Achieving the needed breakthroughs will require a collaborative effort among researchers representing all areas– from the application layer down to the micro-architecture– and will be built on new concepts and new foundational principles. New approaches to achieve scalable performance and usability need new abstract models and algorithms, programming models and languages, hardware architectures, compilers, operating systems and run-time systems, and exploit domain and application-specific knowledge. Research should also focus on energy- and communication-efficiency and on enabling the division of effort between edge devices and clouds.

Full proposals due: February 20, 2013, (due by 5 p.m. proposer’s local time).

I see the next wave of parallelism and scalability being based on language and semantics, less so on more cores and better designs in silicon.

Not surprising since I work in languages and semantics every day.

Even so, consider that a go-cart that exceeds 160 miles per hour (260 km/h) is still a go-cart.

Go beyond building a faster go-cart.

Consider language and semantics when writing your proposal for this program.

August 16, 2012

A Provably Correct Scalable Concurrent Skip List

Filed under: Data Structures,Lock-Free Algorithms,Scalability,Skip List — Patrick Durusau @ 3:19 pm

From High Scalability, a report on the paper: A Provably Correct Scalable Concurrent Skip List.

From the post:

In MemSQL Architecture we learned one of the core strategies MemSQL uses to achieve their need for speed is lock-free skip lists. Skip lists are used to efficiently handle range queries. Making the skip-lists lock-free helps eliminate contention and make writes fast.

If this all sounds a little pie-in-the-sky then here’s a very good paper on the subject that might help make it clearer: A Provably Correct Scalable Concurrent Skip List.
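
For a hands-on feel, the JDK ships a concurrent skip list (ConcurrentSkipListMap) that handles exactly this combination of concurrent writes and ordered range queries. A small illustration (mine, not MemSQL's code):

```java
import java.util.concurrent.ConcurrentNavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;

public class RangeQueryDemo {
    public static void main(String[] args) throws Exception {
        ConcurrentSkipListMap<Integer, String> index = new ConcurrentSkipListMap<>();

        // Writers insert concurrently without any coarse-grained lock.
        Thread w1 = new Thread(() -> { for (int i = 0; i < 1000; i += 2) index.put(i, "even-" + i); });
        Thread w2 = new Thread(() -> { for (int i = 1; i < 1000; i += 2) index.put(i, "odd-" + i); });
        w1.start(); w2.start();
        w1.join(); w2.join();

        // Range query: keys in [100, 110), returned in sorted order.
        ConcurrentNavigableMap<Integer, String> range = index.subMap(100, 110);
        range.forEach((k, v) -> System.out.println(k + " -> " + v));
    }
}
```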

The cited paper is by Maurice Herlihy, Yossi Lev, Victor Luchangco, and Nir Shavit. The authors shared Sun Microsystems as an employer so you know the paper is dated.

For more background on lock-free data structures, including Keir Fraser’s “Practical lock freedom” dissertation, see: Practical lock-free data structures.

July 11, 2012

Compressive Genomics [Compression as Merging]

Filed under: Bioinformatics,Compression,Genome,Merging,Scalability — Patrick Durusau @ 2:27 pm

Compressive genomics by Po-Ru Loh, Michael Baym, and Bonnie Berger (Nature Biotechnology 30, 627–630 (2012) doi:10.1038/nbt.2241)

From the introduction:

In the past two decades, genomic sequencing capabilities have increased exponentially[cites omitted] outstripping advances in computing power[cites omitted]. Extracting new insights from the data sets currently being generated will require not only faster computers, but also smarter algorithms. However, most genomes currently sequenced are highly similar to ones already collected[cite omitted]; thus, the amount of new sequence information is growing much more slowly.

Here we show that this redundancy can be exploited by compressing sequence data in such a way as to allow direct computation on the compressed data using methods we term ‘compressive’ algorithms. This approach reduces the task of computing on many similar genomes to only slightly more than that of operating on just one. Moreover, its relative advantage over existing algorithms will grow with the accumulation of genomic data. We demonstrate this approach by implementing compressive versions of both the Basic Local Alignment Search Tool (BLAST)[cite omitted] and the BLAST-Like Alignment Tool (BLAT)[cite omitted], and we emphasize how compressive genomics will enable biologists to keep pace with current data.

Software available at: Compression-accelerated BLAST and BLAT.

A new line of attack on searching “big data.”

Making “big data” into “smaller data” and enabling analysis of it while still “smaller data.”

Enabling the searching of highly similar genomes by compression is a form of merging, isn’t it? That is, a sequence (read: subject) that occurs multiple times across similar genomes is given a single representative, while preserving its relationship to all of the individual genome instances.
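
A toy sketch of that "single representative with links back to its instances" bookkeeping, leaving out all of the actual BLAST/BLAT machinery (names are hypothetical):

```java
import java.util.*;

public class SharedSequenceIndex {
    // One representative per distinct sequence, mapped to every genome it occurs in.
    private final Map<String, List<String>> occurrences = new HashMap<>();

    public void add(String sequence, String genomeId) {
        occurrences.computeIfAbsent(sequence, k -> new ArrayList<>()).add(genomeId);
    }

    public static void main(String[] args) {
        SharedSequenceIndex index = new SharedSequenceIndex();
        index.add("ACGTACGT", "genome-1");
        index.add("ACGTACGT", "genome-2");   // same sequence, second genome: merged
        index.add("TTGACA",   "genome-2");

        // Each distinct sequence is stored once; its instances remain recoverable.
        index.occurrences.forEach((seq, genomes) ->
            System.out.println(seq + " occurs in " + genomes));
    }
}
```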

What makes merging computationally tractable here, when topic map systems, or at least some of them, are reported to have scalability issues? See Scalability of Topic Map Systems by Marcel Hoyer.

What other examples of computationally tractable merging would you suggest, including different merging approaches/algorithms? It might make a useful paper/study to work from scalable merging examples towards less scalable ones, perhaps to discover what choices have an impact on scalability.

June 6, 2012

Concurrent Programming for Scalable Web Architectures

Filed under: Concurrent Programming,Parallel Programming,Scalability,Web Applications — Patrick Durusau @ 7:49 pm

Concurrent Programming for Scalable Web Architectures by Benjamin Erb.

Abstract:

Web architectures are an important asset for various large-scale web applications, such as social networks or e-commerce sites. Being able to handle huge numbers of users concurrently is essential, thus scalability is one of the most important features of these architectures. Multi-core processors, highly distributed backend architectures and new web technologies force us to reconsider approaches for concurrent programming in order to implement web applications and fulfil scalability demands. While focusing on different stages of scalable web architectures, we provide a survey of competing concurrency approaches and point to their adequate usages.

High Scalability has a good list of topics and the table of contents.

Or you can jump to the thesis homepage.

Just in case you are thinking about taking your application to “web scale.” 😉

May 12, 2012

Paxos Made Moderately Complex

Filed under: Paxos,Scalability — Patrick Durusau @ 4:16 pm

Paxos Made Moderately Complex

From the post:

If you are a normal human being and find the Paxos protocol confusing, then this paper, Paxos Made Moderately Complex, is a great find. Robbert van Renesse from Cornell University has written a clear and well written paper with excellent explanations.

The Abstract:

For anybody who has ever tried to implement it, Paxos is by no means a simple protocol, even though it is based on relatively simple invariants. This paper provides imperative pseudo-code for the full Paxos (or Multi-Paxos) protocol without shying away from discussing various implementation details. The initial description avoids optimizations that complicate comprehension. Next we discuss liveness, and list various optimizations that make the protocol practical.
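
To give a flavor of those "relatively simple invariants," here is a sketch of a single-decree acceptor's state in Java. This is my simplification for illustration, not the paper's Multi-Paxos pseudo-code, which adds slots, leaders, and the liveness machinery.

```java
/**
 * Sketch of a single-decree Paxos acceptor; synchronized local calls stand in
 * for message handling. Ballot numbers are assumed to be non-negative.
 */
public class Acceptor {
    private long promised = -1;        // highest ballot promised so far
    private long acceptedBallot = -1;  // ballot of the last accepted value, if any
    private Object acceptedValue = null;

    /** Phase 1: promise not to accept anything below ballot n. */
    public synchronized Promise prepare(long n) {
        if (n > promised) {
            promised = n;
            return new Promise(true, acceptedBallot, acceptedValue);
        }
        return new Promise(false, acceptedBallot, acceptedValue);
    }

    /** Phase 2: accept value v at ballot n unless a higher promise was made. */
    public synchronized boolean accept(long n, Object v) {
        if (n >= promised) {
            promised = n;
            acceptedBallot = n;
            acceptedValue = v;
            return true;
        }
        return false;
    }

    public record Promise(boolean ok, long acceptedBallot, Object acceptedValue) {}
}
```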

If you need safety (“freedom from inconsistency”) and fault-tolerant topic map results, you may want to spend some quality time with this paper.

As with most things, user requirements are going to drive the choices you have to make.

Hard for me to think a “loosely consistent” merging system is useful, but for TV entertainment data that may be enough. Who is sleeping with who probably has lag time in reporting anyway.

For more serious data, Paxos may be your protocol of choice.

Cell Architectures (adding dashes of heterogeneity)

Cell Architectures

From the post:

A consequence of Service Oriented Architectures is the burning need to provide services at scale. The architecture that has evolved to satisfy these requirements is a little known technique called the Cell Architecture.

A Cell Architecture is based on the idea that massive scale requires parallelization and parallelization requires components be isolated from each other. These islands of isolation are called cells. A cell is a self-contained installation that can satisfy all the operations for a shard. A shard is a subset of a much larger dataset, typically a range of users, for example.

Cell Architectures have several advantages:

  • Cells provide a unit of parallelization that can be adjusted to any size as the user base grows.
  • Cells are added in an incremental fashion as more capacity is required.
  • Cells isolate failures. One cell failure does not impact other cells.
  • Cells provide isolation as the storage and application horsepower to process requests is independent of other cells.
  • Cells enable nice capabilities like the ability to test upgrades, implement rolling upgrades, and test different versions of software.
  • Cells can fail, be upgraded, and distributed across datacenters independent of other cells.
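
A sketch of the shard-to-cell routing the post describes, with hypothetical names; a real deployment adds a directory service, rebalancing, and replication:

```java
import java.util.List;

public class CellRouter {
    // Each cell owns a contiguous range of user ids: [lower, upper).
    record Cell(String name, long lower, long upper) {}

    private final List<Cell> cells;

    public CellRouter(List<Cell> cells) { this.cells = cells; }

    /** Route a request to the single cell whose shard contains this user. */
    public Cell cellFor(long userId) {
        for (Cell c : cells) {
            if (userId >= c.lower() && userId < c.upper()) return c;
        }
        throw new IllegalArgumentException("no cell owns user " + userId);
    }

    public static void main(String[] args) {
        CellRouter router = new CellRouter(List.of(
            new Cell("cell-1", 0, 1_000_000),
            new Cell("cell-2", 1_000_000, 2_000_000)));
        System.out.println(router.cellFor(1_234_567).name());  // prints cell-2
    }
}
```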

The intersection of semantic heterogeneity and scaling remains largely unexplored.

I suggest scaling in a homogeneous environment and then adding dashes of heterogeneity to see what breaks.

Adjust and try again.

February 27, 2012

Cassandra Radical NoSQL Scalability

Filed under: Cassandra,CQL - Cassandra Query Language,NoSQL,Scalability — Patrick Durusau @ 8:25 pm

Cassandra Radical NoSQL Scalability by Tim Berglund.

From the description:

Cassandra is a scalable, highly available, column-oriented data store in use at Netflix, Twitter, Urban Airship, Constant Contact, Reddit, Cisco, OpenX, Digg, CloudKick, Ooyala and more companies that have large, active data sets. The largest known Cassandra cluster has over 300 TB of data in over 400 machines.

This open source project managed by the Apache foundation offers a compelling combination of a rich data model, a robust deployment track record, and a sound architecture. This video presents Cassandra’s data model, works through its API in Java and Groovy, talks about how to deploy it and looks at use cases in which it is an appropriate data storage solution.

It explores the Amazon Dynamo project and Google’s BigTable and explains how its architecture helps us achieve the gold standard of scalability: horizontal scalability on commodity hardware. You will be ready to begin experimenting with Cassandra immediately and planning its adoption in your next project.

Take some time to look at CQL – Cassandra Query Language.

BTW, Berglund is a good presenter.

February 1, 2012

Journal of Web Semantics: Special Issue on Scalability

Filed under: Scalability,Semantics — Patrick Durusau @ 4:36 pm

Journal of Web Semantics: Special Issue on Scalability

Navigation Note:

The title of each article links to its download page.

Authors’ names are hyperlinks to pages listing each author’s publications in the Journal of Web Semantics.

Editorial – Special Issue “Scalability”
Jeff Heflin, Heiner Stuckenschmidt

Scalable Distributed Indexing and Query Processing over Linked Data
Marcel Karnstedt, Kai-Uwe Sattler, Manfred Hauswirth

Searching Web Data: an Entity Retrieval and High-Performance Indexing Model
Renaud Delbru, Stephane Campinas, Giovanni Tummarello

WebPIE: A Web-scale parallel inference engine using MapReduce
Jacopo Urbani, Spyros Kotoulas, Jason Massen, Frank van Harmelen, Henri Bal

Scalable and Distributed Methods for Entity Matching, Consolidation and Disambiguation over Linked Data Corpora
Aidan Hogan, Antoine Zimmermann, Juergen Umbrich, Axel Polleres, Stefan Decker
