Archive for the ‘Systems Research’ Category

Third Age of Computing?

Monday, August 26th, 2013

The ‘third era’ of app development will be fast, simple, and compact, by Rik Myslewski.

From the post:

The tutorial was conducted by members of the HSA – heterogeneous system architecture – Foundation, a consortium of SoC vendors and IP designers, software companies, academics, and others including such heavyweights as ARM, AMD, and Samsung. The mission of the Foundation, founded last June, is “to make it dramatically easier to program heterogeneous parallel devices.”

As the HSA Foundation explains on its website, “We are looking to bring about applications that blend scalar processing on the CPU, parallel processing on the GPU, and optimized processing of DSP via high bandwidth shared memory access with greater application performance at low power consumption.”

Last Thursday, HSA Foundation president and AMD corporate fellow Phil Rogers provided reporters with a pre-briefing on the Hot Chips tutorial, and said the holy grail of transparent “write once, use everywhere” programming for shared-memory heterogeneous systems appears to be on the horizon.

According to Rogers, heterogeneous computing is nothing less than the third era of computing, the first two being the single-core era and the multi-core era. In each era of computing, he said, the first programming models were hard to use but were able to harness the full performance of the chips.

(…)

Exactly how HSA will get there is not yet fully defined, but a number of high-level features are accepted. Unified memory addressing across all processor types, for example, is a key feature of HSA. “It’s fundamental that we can allocate memory on one processor,” Rogers said, “pass a pointer to another processor, and execute on that data – we move the compute rather than the data.”
(…)
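Rogers’ “move the compute rather than the data” idea can be sketched in ordinary Python using threads, which (like HSA’s unified memory addressing) share one address space. This is only an analogy, and the buffer size and scale factor are invented for illustration:

```python
import threading
from array import array

def scale_in_place(buf: array, factor: float) -> None:
    # The "other processor" receives a reference to the same buffer --
    # no copy is made; the compute moves to the data.
    for i in range(len(buf)):
        buf[i] *= factor

def demo() -> float:
    data = array("d", [1.0] * 16)   # one allocation in one address space
    worker = threading.Thread(target=scale_in_place, args=(data, 3.0))
    worker.start()
    worker.join()
    return data[0]                  # the caller sees the result; no copy back
```

The point HSA makes is that this pointer-passing style, natural between threads, should work transparently between CPU, GPU, and DSP.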

Rik does a deep dive, with references ranging from the HSA Programmer’s Reference Manual to Project Sumatra, which will bring data-parallel algorithms to Java 9 (2015).

The only discordant note is that Nvidia and Intel are both missing from the HSA Foundation. Invited but not present.

Customers of Nvidia and/or Intel (I’m both) should contact Nvidia (Contact us) and Intel (contact us) and urge them to join the HSA Foundation. And pass this request along.

Sharing of memory is one of the advantages of HSA (heterogeneous system architecture) and it is where the semantics of shared data will come to the fore.

I haven’t read the available HSA documents in detail, but the HSA Programmer’s Reference Manual appears to presume that shared data has only one semantic. (It never says that, but that is my current impression.)

We have seen that the semantics of data is not “transparent.” The same demonstration illustrates that data doesn’t always have the same semantic.

Simply because I am pointed to a particular memory location, there is no reason to presume I should approach that data with the same semantics.

For example, what if I have a Social Security Number (SSN)? In processing that number for the Social Security Administration, it may serve to recall claim history, eligibility, etc. If I am accessing the same data to compare it to SSN records maintained by the Federal Bureau of Investigation (FBI), it may no longer be a unique identifier in the same sense as at the SSA.

Same “data,” but different semantics.

Who you gonna call? Topic Maps!

PS: Perhaps not as part of the running code but to document the semantics you are using to process data. Same data, same memory location, multiple semantics.
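As a sketch of that PS – documenting multiple semantics for the same data – here is a minimal, hypothetical example. The contexts, field names, and the sample SSN are all invented:

```python
SSN = "123-45-6789"   # invented sample value, not a real SSN

def interpret(value: str, context: str) -> dict:
    """Attach context-specific semantics to the very same value."""
    if context == "SSA":
        # At the SSA the number keys claim history and eligibility.
        return {"value": value, "role": "claim-record key", "unique": True}
    if context == "FBI":
        # Matched against FBI records, the same digits may be aliased or
        # duplicated, so treat it as evidence, not a unique identifier.
        return {"value": value, "role": "candidate match field", "unique": False}
    raise ValueError(f"unknown context: {context}")
```

Same data, same memory location, but the documented semantics differ with the processing context.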

…1 Million TPS on $5K Hardware

Tuesday, September 11th, 2012

Russ’ 10 Ingredient Recipe for Making 1 Million TPS on $5K Hardware

Got your attention? Good. Read on:

My name is Russell Sullivan, I am the author of AlchemyDB: a highly flexible NoSQL/SQL/DocumentStore/GraphDB-datastore built on top of redis. I have spent the last several years trying to find a way to sanely house multiple datastore-genres under one roof while (almost paradoxically) pushing performance to its limits.

I recently joined the NoSQL company Aerospike (formerly Citrusleaf) with the goal of incrementally grafting AlchemyDB’s flexible data-modeling capabilities onto Aerospike’s high-velocity horizontally-scalable key-value data-fabric. We recently completed a peak-performance TPS optimization project: starting at 200K TPS, pushing to the recent community edition launch at 500K TPS, and finally arriving at our 2012 goal: 1M TPS on $5K hardware.

Getting to one million over-the-wire client-server database-requests per-second on a single machine costing $5K is a balance between trimming overhead on many axes and using a shared nothing architecture to isolate the paths taken by unique requests.

Even if you aren’t building a database server the techniques described in this post might be interesting as they are not database server specific. They could be applied to a ftp server, a static web server, and even to a dynamic web server.
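The shared-nothing idea in the quote can be sketched in a few lines of Python (a toy illustration, not Aerospike’s design): route each request by key hash to a partition that exclusively owns its slice of the data, so partitions never contend and need no locks.

```python
class ShardedStore:
    """Toy shared-nothing store: each partition exclusively owns its keys."""

    def __init__(self, n_partitions: int = 4):
        self.partitions = [dict() for _ in range(n_partitions)]

    def _pick(self, key: str) -> dict:
        # Route by key hash: a given key always lands on the same partition,
        # so the paths taken by unique requests are isolated from each other.
        return self.partitions[hash(key) % len(self.partitions)]

    def put(self, key: str, value) -> None:
        self._pick(key)[key] = value

    def get(self, key: str):
        return self._pick(key).get(key)
```

In a real server each partition would be pinned to its own core and network queue; here the dictionaries just stand in for that isolation.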

My blog falls short of needing that level of TPS, but your experience may be different. 😉

It is a good read in any case.

Puppet

Saturday, June 9th, 2012

Puppet

From “What is Puppet?”:

Puppet is IT automation software that helps system administrators manage infrastructure throughout its lifecycle, from provisioning and configuration to patch management and compliance. Using Puppet, you can easily automate repetitive tasks, quickly deploy critical applications, and proactively manage change, scaling from 10s of servers to 1000s, on-premise or in the cloud.

Puppet is available as both open source and commercial software. You can see the differences here and decide which is right for your organization.

How Puppet Works

Puppet uses a declarative, model-based approach to IT automation.

  1. Define the desired state of the infrastructure’s configuration using Puppet’s declarative configuration language.
  2. Simulate configuration changes before enforcing them.
  3. Enforce the deployed desired state automatically, correcting any configuration drift.
  4. Report on the differences between actual and desired states and any changes made enforcing the desired state.
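The four steps above can be sketched in Python (a toy model, not Puppet’s actual implementation): declare the desired state, plan (simulate) the diff, enforce it to correct drift, and report the changes made.

```python
def plan(desired: dict, actual: dict) -> dict:
    """Simulate: compute the changes needed without applying them (step 2)."""
    return {k: v for k, v in desired.items() if actual.get(k) != v}

def enforce(desired: dict, actual: dict) -> list:
    """Apply the plan, correcting drift (step 3), and report changes (step 4)."""
    report = []
    for key, value in plan(desired, actual).items():
        actual[key] = value
        report.append(f"set {key} -> {value}")
    return report
```

Run `enforce` repeatedly and any configuration drift is corrected on the next pass, which is exactly the declarative, model-based loop described above.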

Topic maps seem like a natural for systems administration.

They can capture the experience and judgement of sysadmins that never make it into printed documentation.

Make sysadmins your allies when introducing topic maps. Part of that will be understanding their problems and concerns.

Being able to intelligently discuss software like Puppet will be a step in the right direction. (Not to mention giving you ideas about topic map applications for systems administration.)

Distributed Systems Tracing with Zipkin [Sampling @ Twitter w/ UI]

Saturday, June 9th, 2012

Distributed Systems Tracing with Zipkin

From the post:

Zipkin is a distributed tracing system that we created to help us gather timing data for all the disparate services involved in managing a request to the Twitter API. As an analogy, think of it as a performance profiler, like Firebug, but tailored for a website backend instead of a browser. In short, it makes Twitter faster. Today we’re open sourcing Zipkin under the APLv2 license to share a useful piece of our infrastructure with the open source community and gather feedback.

Hmmm, tracing based on the Dapper paper, complete with a web-based UI for viewing requests. Hard to beat that!

Thinking more about the sampling issue, what if I were to sample a very large stream of proxies and decided to merge only a certain percentage, piping the rest to /dev/null?

For example, say I have a UPI feed and that is my base set of “news” proxies. I also have feeds from the various newspaper, radio, and TV outlets around the United States. If the proxies from the non-UPI feeds are within some distance of the UPI feed proxies, they are simply discarded.

True, I am losing the information of which newspapers carried the stories, or whose rewrites consisted of changing the order of the words or dumbing them down, but those may not fall under my requirements.

I would rather have a few dozen very good sources than, say, 70,000 sources that say the same thing.
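A toy sketch of that filtering idea. The distance measure (Jaccard similarity on word sets) and the threshold are my own choices for illustration, not anything from Zipkin:

```python
def similarity(a: str, b: str) -> float:
    """Jaccard similarity on word sets -- a crude stand-in for 'distance'."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

def filter_near_duplicates(base, others, threshold: float = 0.8):
    """Keep only other-feed stories that are NOT close to any base story."""
    return [s for s in others
            if all(similarity(s, b) < threshold for b in base)]
```

Stories close to a base-feed story fall below the threshold and are discarded; only genuinely different coverage survives the merge.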

If you were testing for news coverage or the spread of news stories, your requirements might be different.

I first saw this at Alex Popescu’s myNoSQL.

Dapper, a Large-Scale Distributed Systems Tracing Infrastructure [Data Sampling Lessons For “Big Data”]

Saturday, June 9th, 2012

Dapper, a Large-Scale Distributed Systems Tracing Infrastructure by Benjamin H. Sigelman, Luiz André Barroso, Mike Burrows, Pat Stephenson, Manoj Plakal, Donald Beaver, Saul Jaspan, and Chandan Shanbhag.

Abstract:

Modern Internet services are often implemented as complex, large-scale distributed systems. These applications are constructed from collections of software modules that may be developed by different teams, perhaps in different programming languages, and could span many thousands of machines across multiple physical facilities. Tools that aid in understanding system behavior and reasoning about performance issues are invaluable in such an environment.

Here we introduce the design of Dapper, Google’s production distributed systems tracing infrastructure, and describe how our design goals of low overhead, application-level transparency, and ubiquitous deployment on a very large scale system were met. Dapper shares conceptual similarities with other tracing systems, particularly Magpie [3] and X-Trace [12], but certain design choices were made that have been key to its success in our environment, such as the use of sampling and restricting the instrumentation to a rather small number of common libraries.

The main goal of this paper is to report on our experience building, deploying and using the system for over two years, since Dapper’s foremost measure of success has been its usefulness to developer and operations teams. Dapper began as a self-contained tracing tool but evolved into a monitoring platform which has enabled the creation of many different tools, some of which were not anticipated by its designers. We describe a few of the analysis tools that have been built using Dapper, share statistics about its usage within Google, present some example use cases, and discuss lessons learned so far.

A very important paper for anyone working with large and complex systems.

With lessons on data sampling as well:

we have found that a sample of just one out of thousands of requests provides sufficient information for many common uses of the tracing data.

You have to wonder, in “data in the petabyte range” cases, how many datasets could be reduced to gigabyte (or smaller) size with no loss in accuracy.
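Dapper-style head sampling is easy to sketch (the 1-in-1024 rate here is illustrative, not Google’s actual figure): the keep/drop decision is made once per trace id, so a trace is recorded whole or not at all.

```python
SAMPLE_ONE_IN = 1024   # illustrative rate, not Google's actual figure

def should_sample(trace_id: int, one_in: int = SAMPLE_ONE_IN) -> bool:
    # A pure function of the trace id: every span of a trace makes the
    # same decision, so whole traces are kept or dropped together.
    return trace_id % one_in == 0

def sampled_fraction(n_traces: int) -> float:
    kept = sum(should_sample(t) for t in range(n_traces))
    return kept / n_traces
```

Storage and analysis then scale with the sample, not the raw request volume, which is the paper’s point about overhead.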

Which would reduce storage requirements, increase analysis speed, decrease the complexity of analysis, etc.

Have you sampled your “big data” recently?

I first saw this at Alex Popescu’s myNoSQL.

Hawaii International Conference on System Sciences – Proceedings – TM Value-Add

Sunday, March 18th, 2012

Hawaii International Conference on System Sciences

The Hawaii International Conference on System Sciences (HICSS) is the sponsor of the Knowledge Economics conference I mentioned earlier today.

It has a rich history (see below) and, just as importantly, free access to its proceedings back to 2005, via the CS Digital Library (ignore the wording that says you need to log in).

I did have to locate the new page, which is: HICSS Proceedings 1995 –.

The proceedings illustrate why a topic map that captures prior experience can be beneficial.

For example, the entry for 1995 reads:

  • 28th Hawaii International Conference on System Sciences (HICSS’95)
  • 28th Hawaii International Conference on System Sciences (HICSS’95)
  • 28th Hawaii International Conference on System Sciences
  • 28th Hawaii International Conference on System Sciences (HICSS’95)
  • 28th Hawaii International Conference on System Sciences (HICSS’95)

Those are not duplicate entries. They all lead to unique content.

The entry for 2005 reads:

  • Volume No. 5 – Proceedings of the 38th Annual Hawaii International Conference on System Sciences (HICSS’05) – Track 5
  • Volume No. 7 – Proceedings of the 38th Annual Hawaii International Conference on System Sciences (HICSS’05) – Track 7
  • Volume No. 3 – Proceedings of the 38th Annual Hawaii International Conference on System Sciences (HICSS’05) – Track 3
  • Volume No. 6 – Proceedings of the 38th Annual Hawaii International Conference on System Sciences (HICSS’05) – Track 6
  • Volume No. 9 – Proceedings of the 38th Annual Hawaii International Conference on System Sciences (HICSS’05) – Track 9
  • Volume No. 2 – Proceedings of the 38th Annual Hawaii International Conference on System Sciences (HICSS’05) – Track 2
  • Volume No. 1 – Proceedings of the 38th Annual Hawaii International Conference on System Sciences (HICSS’05) – Track 1
  • Volume No. 8 – Proceedings of the 38th Annual Hawaii International Conference on System Sciences (HICSS’05) – Track 8
  • Volume No. 4 – Proceedings of the 38th Annual Hawaii International Conference on System Sciences (HICSS’05) – Track 4

Now that’s a much more useful listing. 😉

Don’t despair! That changed in 2009 and in the latest (2011), we find:

  • 2011 44th Hawaii International Conference on System Sciences

OK, so we follow that link to find (in part):

  • David J. Nickles, Daniel D. Suthers, “A Study of Structured Lecture Podcasting to Facilitate Active Learning,” hicss, pp.1-10, 2011 44th Hawaii International Conference on System Sciences, 2011
  • J. Lucca, R. Sharda, J. Ruffner, U. Shimp, D. Biros, A. Clower, “Ammunition Multimedia Encyclopedia (AME): A Case Study,” hicss, pp.1-10, 2011 44th Hawaii International Conference on System Sciences, 2011
(emphasis added)

Do you notice anything odd about the pagination numbers?

Just for grins, in 2008 all the articles are listed as one page long – pp. 28, pp. 29.

BTW, all the articles appear (I haven’t verified this) to have unique DOI entries.

I am deeply interested in the topics covered by HICSS, so if I organize the proceedings into a more useful form, how do I make that extensible by other researchers?

That is, I have no interest in duplicating the work already done for these listings, but rather in adding value to them while, at the same time, remaining open to more value being added to my work product.

Here are some observations to start a requirements process.

The pages with the HTML abstracts of the articles have helpful links (such as more articles by the same author and co-authors). I could duplicate the author/co-author links, but why? That would just add maintenance duty.

Conferences should have a one-page display, 1995 to the current year, with each conference expanding into tracks and each track into papers (with authors listed). Mostly for browsing purposes.

Should be searchable across these proceedings only. (A feature apparently not available at the CS Digital Library site.)

Search should include title, author, and exact phrase (as the CS Digital Library does), but also subjects.
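A toy sketch of that kind of search. The sample entry is taken from the 2011 listing above, but the subject tags and field layout are invented for illustration:

```python
papers = [
    {"year": 2011, "track": 1,
     "title": "A Study of Structured Lecture Podcasting to Facilitate Active Learning",
     "authors": ["David J. Nickles", "Daniel D. Suthers"],
     "subjects": ["education", "podcasting"]},   # subject tags are invented
]

def search(entries, text=None, author=None, subject=None):
    """Match on title text, author, and/or subject -- all filters optional."""
    def hit(p):
        return ((text is None or text.lower() in p["title"].lower())
                and (author is None or author in p["authors"])
                and (subject is None or subject in p["subjects"]))
    return [p for p in entries if hit(p)]
```

Subjects are the value-add: title and author search merely duplicate the CS Digital Library, while subject tags are where a topic map could contribute without redoing existing work.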

What am I missing? (Lots I know so be gentle. 😉 )

BTW, be aware of: HICSS Symposium and Workshop Reports and Monographs (also free for downloading).

More Google Cluster Data

Wednesday, November 30th, 2011

More Google Cluster Data

From the post:

Google has a strong interest in promoting high quality systems research, and we believe that providing information about real-life workloads to the academic community can help.

In support of this we published a small (7-hour) sample of resource-usage information from a Google production cluster in 2010 (research blog on Google Cluster Data). Approximately a dozen researchers at UC Berkeley, CMU, Brown, NCSU, and elsewhere have made use of it.

Recently, we released a larger dataset. It covers a longer period of time (29 days) for a larger cell (about 11k machines) and includes significantly more information, including:

I remember Robert Barta describing the use of topic maps for systems administration. This data set could give some insight into the design of a topic map for cluster management.

What subjects and relationships would you recognize, how and why?

If you are looking for employment, this might be a good way to attract Google’s attention. (Hint to Google: Releasing interesting data sets could be a way to vet potential applicants in realistic situations.)

CNetS

Saturday, November 26th, 2011

CNetS: Center for Complex Networks and Systems Research

Work of the Center:

The types of problems that we work on include mining usage and traffic patterns in technological networks such as the Web and the Internet; studying the interaction between social dynamics and online behaviors; modeling the evolution of complex social and technological networks; developing adaptive, distributed, collaborative, agent-based applications for Web search and recommendation; understanding complex biological networks and complex reaction in biochemistry; developing models for the spread of diseases; understanding how coordinated behavior arises from the dynamical interaction of nervous system, body, and environment; studying social human behavior; exploring reasons underlying species diversity; studying the interplay between self-organization and natural selection; understanding how information arises and is used in biological systems; and so on. All these examples are characterized by complex nonlinear feedback mechanisms and it is now being increasingly recognized that the outcome of such interactions can only be understood through mathematical and computational models.

Lots of interesting content. I will be calling some of it out in the future.