Archive for the ‘Erlang’ Category

DRAKON-Erlang: Visual Functional Programming

Saturday, October 20th, 2012


DRAKON is a visual programming language developed for the Buran Space Project.

I won’t repeat the surplus of adjectives used to describe DRAKON. Its long-term use in the Russian space program is enough to recommend a review of its visual techniques.

The DRAKON-Erlang project is an effort to combine DRAKON, as a flow language/representation, with Erlang.

A graphical notation for topic maps never caught on, but with the rise of big data, visual representations of merging algorithms could be quite useful.

I am not suggesting DRAKON-Erlang as a solution to those issues but as a data point to take into account.


Count unique items in a text file using Erlang

Wednesday, October 17th, 2012

Count unique items in a text file using Erlang by Paolo D’Incau.

From the post:

Many times during our daily programming routine, we have to deal with log files. Most of the log files I have seen so far are just text files where the useful information is stored line by line.

Let’s say you are implementing a super cool game backend in Erlang; probably you would end up with a bunch of servers implementing several actions (e.g. authentication, chat, store character progress etc etc); well I am pretty sure you would not store the characters’ info in a text file, but maybe (and I said maybe) you could find it useful to store in a text file some of the information that comes from the authentication server.

Unique in the sense you are thinking.

But that happens, even in topic maps.
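Paolo walks through his own pipeline in the post; as a rough sketch of the idea (the module and function names here are mine, not his):

```erlang
-module(count_unique).
-export([from_file/1]).

%% Read a text file and count the distinct lines in it.
from_file(Path) ->
    {ok, Bin} = file:read_file(Path),
    %% Split on newlines, dropping empty fragments.
    Lines = binary:split(Bin, <<"\n">>, [global, trim_all]),
    sets:size(sets:from_list(Lines)).
```

Reading the file whole is fine for modest logs; a very large file would call for processing it line by line instead.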

Disco [Erlang/Python – MapReduce]

Monday, October 1st, 2012


From the webpage:

Disco is a distributed computing framework based on the MapReduce paradigm. Disco is open-source, developed by Nokia Research Center to solve real problems in handling massive amounts of data.

Disco is powerful and easy to use, thanks to Python. Disco distributes and replicates your data, and schedules your jobs efficiently. Disco even includes the tools you need to index billions of data points and query them in real-time.

Install Disco on your laptop, cluster or cloud of choice and become a part of the Disco community!

I rather like the MapReduce graphic you will see at About.

I first saw this in Guido Kollerie’s post on the recent Python users meeting in the Netherlands. Guido details his 5 minute presentation on Disco.

Process group in erlang: some thoughts about the pg module

Wednesday, September 19th, 2012

Process group in erlang: some thoughts about the pg module by Paolo D’Incau.

From the post:

One of the most common ways to achieve fault tolerance in distributed systems consists in organizing several identical processes into a group that can be accessed by a common name. The key concept here is that whenever a message is sent to the group, all members of the group receive it. This is a really nice feature, since if one process in the group fails, some other process can take over for it and handle the message, doing all the operations required.

Process groups also allow abstraction: when we send a message to a group, we don’t need to know who the members are or where they are. In fact, process groups are anything but static. Any process can join an existing group or leave one at runtime; moreover, a process can be part of several groups at the same time.
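The group mechanics described above can be sketched with the pg module that ships with modern OTP (23+); the older experimental pg module Paolo discusses also provided a built-in send for the broadcast step. The group name and message shape here are invented:

```erlang
-module(group_demo).
-export([start/0, worker/0]).

start() ->
    {ok, _} = pg:start_link(),                       %% default pg scope
    Pids = [spawn(?MODULE, worker, []) || _ <- lists:seq(1, 3)],
    ok = pg:join(workers, Pids),                     %% all three join the group
    %% Broadcast: every current member of the group receives the message.
    [P ! {work, 42} || P <- pg:get_members(workers)],
    ok.

worker() ->
    receive
        {work, N} -> io:format("~p handled ~p~n", [self(), N])
    end.
```

If one worker dies, the others remain in the group and keep receiving broadcasts, which is the fault-tolerance property the post is after.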

Fault tolerance is going to be an issue if you are using topic maps and/or social media in an operational context.

Having really “cool” semantic capabilities isn’t worth much if the system fails at a critical point.

Elli (Erlang Web Server) [Lessons in Semantic Interoperability – Part 1]

Saturday, September 1st, 2012


From the post:

My name is Knut, and I want to show you something really cool that I built to solve some problems we are facing here at Wooga.

Having several very successful social games means we have a large number of users. In a single game, they can generate around ten thousand HTTP requests per second to our backend systems. Building and operating the software required to service these games is a big challenge that sometimes requires creative solutions.

As developers at Wooga, we are responsible for the user experience. We want to make our games not only fun and enjoyable but accessible at all times. To do this we need to understand and control the software and hardware we rely on. When we see an area where we can improve the user experience, we go for it. Sometimes this means taking on ambitious projects. An example of this is Elli, a webserver which has become one of the key building blocks of our successful backends.

Having used many of the big Erlang webservers in production with great success, we still found ourselves thinking of how we could improve. We want a simple and robust core with no errors or edge cases causing problems. We need to measure the performance to help us optimize our network and user code. Most importantly, we need high performance and low CPU usage so our servers can spend their resources running our games.

I started this post about Elli to point out the advantages of having a custom web server application when your needs aren’t met by one of the standard ones.

Something clicked and I realized that web servers, robust and fast as well as lame and slow, churn out semantically interoperable content every day.

For hundreds of millions of users.

Rather than starting from the perspective of the “semantic interoperability” we want, why not examine the “semantic interoperability” we have already, for clues on what may or may not work to increase it?

When I say “semantic interoperability” on the web, I am speaking of the interpretation of HTML markup, the <a>, <p>, <ol>, <ul>, <div>, <h1-6>, elements that make up most pages.

What characteristics do those markup elements share that might be useful in creating more semantic interoperability?

The first characteristic is simplicity.

You don’t need a lot of semantic overhead machinery or understanding to use any of them.

A plain text editor and knowledge that some text has a general presentation is enough.

It takes a user only a few minutes to learn enough HTML to produce meaningful (to them and others) results.

At least in the case of HTML, that simplicity has led to a form of semantic interoperability.

HTML was defined with interoperable semantics but unadopted interoperable semantics are like no interoperable semantics at all.

If HTML has simplicity of semantics, what else does it have that led to widespread adoption?

Erlang Cheat Sheet [And Cheat Sheets in General]

Monday, August 20th, 2012

Erlang Cheat Sheet

A fairly short (read: limited) cheat sheet on Erlang. The hosting site has a number of other cheat sheets and is in the process of creating a cheat sheet template.

Questions that come to mind:

  • Using a topic map to support a cheat sheet, what more would you expect to see? Links to fuller examples? Links to manuals? Links to sub-cheat sheets?
  • Have you seen any ontology cheat sheets? For coding consistency, that sounds like something that could be quite handy.
  • For existing ontologies, any research on frequency of use to support the creation of cheat sheets? (Would not waste space on “thing” for example. Too unlikely to bear mentioning.)

Riak 1.2 Webinar – 21st August 2012

Wednesday, August 8th, 2012

Riak 1.2 Webinar – 21st August 2012

  • 11:00 Pacific Daylight Time (San Francisco, GMT-07:00)
  • 14:00 Eastern Daylight Time (New York, GMT-04:00)
  • 20:00 Europe Summer Time (Berlin, GMT+02:00)

From the registration page:

Join Basho Technologies’ Engineer, Joseph Blomstedt, for an in-depth overview of Riak 1.2, the latest version of Basho’s flagship open source database. In this live webinar, you will see changes in Riak 1.2 open source and Enterprise versions, including:

  • New approach to cluster administration
  • Built-in capability negotiation
  • Repair Search or KV Partitions thru Riak Console
  • Enhanced Handoff Reporting
  • Protobuf API Support for 2i and Search indexes
  • New Packaging for FreeBSD, SmartOS, and Ubuntu
  • Stats Improvements
  • LevelDB Improvements

I would have included this with the Riak 1.2 release post but was afraid you would not get past the download link and would miss the webinar.

It’s on my calendar. How about yours?

Riak 1.2 Is Official!

Wednesday, August 8th, 2012

Riak 1.2 Is Official!

From the post:

Nearly three years ago to the day, from a set of green, worn couches in a modest office in Cambridge, Massachusetts, the Basho team announced Riak to the world. To say we’ve come a long way from that first release would be an understatement, and today we’re pleased to announce the release and general availability of Riak 1.2.

Here’s the tl;dr on what’s new and improved since the Riak 1.1 release:

  • More efficiently add multiple Riak nodes to your cluster
  • Stage and review, then commit or abort cluster changes for easier operations; plus smoother handling of rolling upgrades
  • Better visibility into active handoffs
  • Repair Riak KV and Search partitions by attaching to the Riak Console and using a one-line command to recover from data corruption/loss
  • More performant stats for Riak; the addition of stats to Riak Search
  • 2i and Search usage thru the Protocol Buffers API
  • Official Support for Riak on FreeBSD
  • In Riak Enterprise: SSL encryption, better balancing and more granular control of replication across multiple data centers, NAT support

If that’s all you need to know, download the new release or read the official release notes. Also, go register for RICON.

OK, but I have a question: What happened to the lucky “…green, worn couches…”? ;-)

Crash Course in Erlang

Sunday, May 20th, 2012

Crash Course in Erlang by Knut Hellan.

Knut writes:

This is a summary of a talk I held Monday May 14 2012 at an XP Meetup in Trondheim. It is meant as a teaser for listeners to play with Erlang themselves.

First, some basic concepts. Erlang has a form of constant called an atom that is defined on first use. They are typically used as enums or symbols are in other languages. Variables in Erlang are [im]mutable so assigning a new value to an existing variable is not allowed. (emphasis added)
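A minimal illustration of both points (the module is mine, not Knut’s):

```erlang
-module(basics).
-export([demo/0]).

demo() ->
    Status = ok,                       %% 'ok' is an atom, defined on first use
    Result = try
                 Status = error,       %% matching a bound variable against a
                 rebound               %% new value fails rather than rebinding
             catch
                 error:{badmatch, _} -> immutable
             end,
    {Status, Result}.
```

Calling `basics:demo()` returns `{ok, immutable}`: the attempted "reassignment" raises a badmatch instead of changing `Status`.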

Not so much an introduction as a tease to get you to learn more Erlang.

Some typos but look upon those as a challenge to verify what you are reading.

I may copy this post “as is” and use it as a “critical reading/research” assignment for my class.

Then have the students debate their corrections.

That could be a very interesting exercise in not taking everything you read on blind faith: how do you verify what you have read, and in the process evaluate that material as well?

Do you develop a sense of trust for some sources as being “better” than others? Are there ones you turn to by default?

Dempsy – a New Real-time Framework for Processing BigData

Friday, May 4th, 2012

Dempsy – a New Real-time Framework for Processing BigData by Boris Lublinsky.

From the post:

Real time processing of BigData seems to be one of the hottest topics today. Nokia has just released a new open-source project – Dempsy. Dempsy is comparable to Storm, Esper, Streambase, HStreaming and Apache S4. The code is released under the Apache 2 license.

Dempsy is meant to solve the problem of processing large amounts of "near real time" stream data with the lowest lag possible; problems where latency is more important than "guaranteed delivery." This class of problems includes use cases such as:

  • Real time monitoring of large distributed systems
  • Processing complete rich streams of social networking data
  • Real time analytics on log information generated from widely distributed systems
  • Statistical analytics on real-time vehicle traffic information on a global basis

The important properties of Dempsy are:

  • It is Distributed. That is to say a Dempsy application can run on multiple JVMs on multiple physical machines.
  • It is Elastic. That is, it is relatively simple to scale an application to more (or fewer) nodes. This does not require code or configuration changes but is done by dynamic insertion or removal of processing nodes.
  • It implements Message Processing. Dempsy is based on message passing. It moves messages between Message processors, which act on the messages to perform simple atomic operations such as enrichment, transformation, etc. In general, an application is intended to be broken down into many smaller, simpler processors rather than fewer large, complex processors.
  • It is a Framework. It is not an application container like a J2EE container, nor a simple library. Instead, like the Spring Framework, it is a collection of patterns, the libraries to enable those patterns, and the interfaces one must implement to use those libraries to implement the patterns.

Dempsy’s programming model is based on message processors communicating via messages and resembles a distributed actor framework. While not strictly speaking an actor framework in the sense of Erlang or Akka actors, where actors explicitly direct messages to other actors, Dempsy’s Message Processors are "actor like POJOs" similar to Processor Elements in S4 and to some extent Bolts in Storm. Message processors are similar to actors in that they operate on a single message at a time and need not deal with concurrency directly. Unlike actors, Message Processors are also relieved of the need to know the destination(s) for their output messages, as this is handled internally by Dempsy based on the message properties.

In short Dempsy is a framework to enable the decomposing of a large class of message processing problems into flows of messages between relatively simple processing units implemented as POJOs. 

The Dempsy Tutorial contains more information.

See the post for an interview with Dempsy’s creator, NAVTEQ Fellow Jim Carroll.

Will the “age of data” mean that applications and their code will also be viewed and processed as data? The capabilities you have are those you request for a particular data set? Would like to see topic maps on the leading (and not dragging) edge of that change.

Building Highly Available Systems in Erlang

Saturday, April 21st, 2012

Building Highly Available Systems in Erlang

From the description:


Joe Armstrong discusses highly available (HA) systems, introducing different types of HA systems and data, HA architecture and algorithms, 6 rules of HA, and how HA is done with Erlang.


Joe Armstrong is the principal inventor of Erlang and coined the term “Concurrency Oriented Programming”. At Ericsson he developed Erlang and was chief architect of the Erlang/OTP system. In 1998 he formed Bluetail, which developed all its products in Erlang. In 2003 he obtained his PhD from the Royal Institute of Technology, Stockholm. He is author of the book “Software for a concurrent world”.

Gives the six (6) rules for highly available systems and how Erlang meets those six (6) rules.

  • Isolation rule: Operations must be isolated
  • Concurrency: The world is concurrent
  • Must detect failures: If can’t detect, can’t fix
  • Fault Identification: Enough detail to do something.
  • Live Code Upgrade: Upgrade software while running.
  • Stable Storage: Must survive universal power failure.

Quotes: Why Computers Stop and What Can Be Done About It, Jim Gray, Technical Report 85.7, Tandem Computers 1985, for example.

Highly entertaining and informative.

What do you think of the notion of an evolving software system?

How would you apply that to a topic map system?

Modelling graphs with processes in Erlang

Wednesday, April 4th, 2012

Modelling graphs with processes in Erlang by Nick Gibson.

From the post:

One of the advantages of Erlang’s concurrency model is that creating and running new processes is much cheaper. This opens up opportunities to write algorithms in new ways. In this article, I’ll show you how you can implement a graph searching algorithm by modeling the domain using process interaction.

I’ll assume you’re more or less comfortable with Erlang, if you’re not you might want to go back and read through Builder AU’s previous guides on the subject.

First we need to write a function for the nodes in the graph. When we spawn a process for each node it will need to run a function that sends and receives messages. Each node needs two things: its own name, and the links it has to other nodes. To store the links, we’ll use a dictionary which maps each name to the node’s Pid. [I checked the links and they still work. Amazing for a five-year-old post.]
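A sketch along the lines Nick describes: one process per node, holding its name and a map of neighbour names to pids (he uses a dict; the message protocol below is my invention and, as written, would flood forever on a cyclic graph):

```erlang
-module(graph_node).
-export([start/1]).

%% Spawn a process for a graph node; links are added later by message.
start(Name) ->
    spawn(fun() -> loop(Name, #{}) end).

loop(Name, Links) ->
    receive
        {link, Neighbour, Pid} ->
            loop(Name, Links#{Neighbour => Pid});
        {find, Target, From} when Name =:= Target ->
            From ! {found, Name},          %% search reached this node
            loop(Name, Links);
        {find, Target, From} ->
            %% Not us: forward the search to every neighbour.
            [Pid ! {find, Target, From} || Pid <- maps:values(Links)],
            loop(Name, Links)
    end.
```

A real version would also track visited nodes to terminate on cycles, which is roughly where the article’s process-interaction approach gets interesting.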

In the graph reading club discussion today, it was suggested that we need to look at data structures more closely. There are a number of typical and not so typical data structures for graphs and/or graph databases.

I am curious whether it would be better to develop the requirements for data structures separate and apart from thinking of them as graph or graph database storage.

For example, we don’t want information about “edges,” but rather data items composed of two (or more) addresses (of other data items) per data item. Or an ordered list of such data items. And the addresses of the data items in question have specific characteristics.

Trying to avoid being influenced by the implied necessities of “edges,” at least until they are formally specified. At that point, we can evaluate data structures that meet all the previous requirements, plus any new ones.

Elixir – A modern approach to programming for the Erlang VM

Monday, April 2nd, 2012


From the homepage:

Elixir is a programming language built on top of the Erlang VM. As Erlang, it is a functional language built to support distributed, fault-tolerant, non-stop applications with hot code swapping.

Elixir is also dynamically typed but, unlike Erlang, it is also homoiconic, allowing meta-programming via macros. Elixir also supports polymorphism via protocols (similar to Clojure’s), dynamic records and provides a reference mechanism.

Finally, Elixir and Erlang share the same bytecode and data types. This means you can invoke Erlang code from Elixir (and vice-versa) without any conversion or performance hit. This allows a developer to mix the expressiveness of Elixir with the robustness and performance of Erlang.

If you want to install Elixir or learn more about it, check our getting started guide.

Quite possibly of interest to Erlang programmers.

Take a close look at the languages mentioned in the Wikipedia article on homoiconicity as other examples of homoiconic languages.

Question: The list contains “successful” and “unsuccessful” languages. Care to comment on possible differences that account for the outcomes?

Thinking a “successful” semantic mapping language will need to have certain characteristics. The question is, of course, which ones?

Intro to Distributed Erlang (screencast)

Sunday, April 1st, 2012

Intro to Distributed Erlang (screencast) by Bryan Hunter.

From the description:

Here’s an introduction to distribution in Erlang. This screencast demonstrates creating three Erlang nodes on a Windows box and one on a Linux box and then connecting them using the one-liner “net_adm:ping” to form a mighty compute cluster.

Topics covered:

  • Using erl to start an Erlang node (an instance of the Erlang runtime system).
  • How to use net_adm:ping to connect four Erlang nodes (three on Windows, one on Linux).
  • Using rpc:call to RickRoll a Linux box from an Erlang node running on a Windows box.
  • Using nl to load (deploy) a module from one node to all connected nodes.
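For flavor, the kind of session the screencast walks through looks roughly like this (node names, host and cookie are invented, and output is abbreviated):

```erlang
%% Started beforehand, in two separate terminals:
%%   erl -sname alpha -setcookie secret
%%   erl -sname beta  -setcookie secret

(alpha@myhost)1> net_adm:ping('beta@myhost').
pong
(alpha@myhost)2> nodes().
['beta@myhost']
(alpha@myhost)3> rpc:call('beta@myhost', erlang, node, []).
'beta@myhost'
```

The `pong` reply means the nodes share a cookie and are now connected; `rpc:call/4` then runs a function on the remote node as easily as a local call.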

Not the most powerful cluster but a good way to learn distributed Erlang.

Erlang as a Cloud Citizen

Saturday, March 31st, 2012

Erlang as a Cloud Citizen by Paolo Negri. (Erlang Factory San Francisco 2012)

From the description:

This talk wants to sum up the experience of designing, deploying and maintaining an Erlang application targeting the cloud and precisely AWS as hosting infrastructure.

As the application now serves a significantly large user base with a sustained throughput of thousands of games actions per second we’re able to analyse retrospectively our engineering and architectural choices and see how Erlang fits in the cloud environment, also comparing it to previous experiences of cloud deployments of other platforms.

We’ll discuss properties of Erlang as a language and OTP as a framework and how we used them to design a system that is a good cloud citizen. We’ll also discuss topics that are still open for a solution.

Interesting but you probably want to wait for the video. The slides are interesting, considering the argument for fractal-like engineering for scale, but not enough detail to be really useful.

Still, responding to 0.25 billion uncacheable reqs/day is a performance number you should not ignore. Depends on your use case.

Milking Performance from Riak Search

Thursday, March 22nd, 2012

Milking Performance from Riak Search by Gary William Flake.

From the post:

The primary backend store of Clipboard is built on top of Riak, one of the lesser known NoSQL solutions. We love Riak and are really happy with our experiences with it — both in terms of development and operations — but to get to where we are, we had to use some tricks. In this post I want to share with you why we chose Riak and also arm you with some of the best tricks that we learned along the way. Individually, these tricks gave us better than a 100x performance boost, so they may make a big difference for you too.

If you don’t know what Clipboard is, you should try it out. We’re in private beta now, but here’s a backdoor that will bypass the invitation system: Register at Clipboard.

Good discussion of term-based partitioning and its disadvantages. (Term-based partitioning being native to Riak.) Solved in part by judging likely queries in advance and precomputing inner joins. Not a bad method, depending on your confidence in your guesses about likely queries.

You will also have to determine if sorting on a primary key meets your needs, for a 10X to 100X performance gain.

A Peek Inside the Erlang Compiler

Thursday, February 9th, 2012

A Peek Inside the Erlang Compiler

From the post:

Erlang is a complex system, and I can’t do its inner workings justice in a short article, but I wanted to give some insight into what goes on when a module is compiled and loaded. As with most compilers, the first step is to convert the textual source to an abstract syntax tree, but that’s unremarkable. What is interesting is that the code goes through three major representations, and you can look at each of them.

Covers the following transformations:

  • Syntax trees to Core Erlang
  • Core Erlang to code for the register-based BEAM virtual machine (final output of compiler)
  • BEAM bytecode into threaded code (loader output)

Just in case you wanted to know more about Erlang than you found in the crash course. ;-)

A deeper understanding of any language is useful. Understanding “why” a construction works is the first step to writing a better one.

Crash Course in Erlang

Thursday, February 9th, 2012

Crash Course in Erlang (PDF file) by Roy Deal Simon.

“If your language is not functional, it’s dysfunctional baby.”

I suppose I look at Erlang (and other) intros just to see if the graphics/illustrations are different from other presentations. ;-) Not enough detail to really teach you much but sometimes the graphics are worth remembering.

Not any time soon but it would be interesting to review presentations for common illustrations. Perhaps even a way to find the ones that are the best to use with particular audiences. Something to think about.

Vector Clocks – Easy/Hard?

Friday, February 3rd, 2012

The Basho blog has a couple of very good posts on vector clocks:

Why Vector Clocks are Easy

Why Vector Clocks are Hard

The problem statement was as follows:

Alice, Ben, Cathy, and Dave are planning to meet next week for dinner. The planning starts with Alice suggesting they meet on Wednesday. Later, Dave discusses alternatives with Cathy, and they decide on Thursday instead. Dave also exchanges email with Ben, and they decide on Tuesday. When Alice pings everyone again to find out whether they still agree with her Wednesday suggestion, she gets mixed messages: Cathy claims to have settled on Thursday with Dave, and Ben claims to have settled on Tuesday with Dave. Dave can’t be reached, and so no one is able to determine the order in which these communications happened, and so none of Alice, Ben, and Cathy know whether Tuesday or Thursday is the correct choice.

Vector clocks are used to keep the order of communications clear. Something you will need in distributed systems, including those for topic maps.
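The idea is small enough to sketch. A toy vector clock as a map from actor to counter (Riak’s production implementation differs in its details):

```erlang
-module(vclock).
-export([increment/2, merge/2, descends/2]).

%% Bump this actor's counter, starting at 1 if the actor is new.
increment(Actor, Clock) ->
    maps:update_with(Actor, fun(N) -> N + 1 end, 1, Clock).

%% Merge two clocks by taking the per-actor maximum.
merge(A, B) ->
    maps:fold(fun(Actor, N, Acc) ->
                  maps:update_with(Actor, fun(M) -> max(M, N) end, N, Acc)
              end, A, B).

%% A descends B if A has seen everything B has seen.
descends(A, B) ->
    maps:fold(fun(Actor, N, Ok) ->
                  Ok andalso maps:get(Actor, A, 0) >= N
              end, true, B).
```

In the dinner story, the Dave-and-Cathy clock and the Dave-and-Ben clock would each fail to descend the other: that mutual failure is exactly how the conflict is detected rather than silently lost.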


Scalaris

Monday, January 23rd, 2012


From the webpage:

Scalaris is a scalable, transactional, distributed key-value store. It can be used for building scalable Web 2.0 services.

Scalaris uses a structured overlay with a non-blocking Paxos commit protocol for transaction processing with strong consistency over replicas. Scalaris is implemented in Erlang.

Further down I found:

Our work is similar to Amazon’s SimpleDB, but additionally supports full ACID properties. Dynamo, in contrast, restricts itself to eventual consistency only. As a test case, we chose Wikipedia, the free encyclopedia, that anyone can edit. Our implementation serves approx. 2,500 transactions per second with just 16 CPUs, which is better than the public Wikipedia.

Be forewarned that the documentation is in Google Docs, which does not like Firefox on Ubuntu.

Sigh, back to browser wars, again? Says it will work with Google Chrome.

Flake: A Decentralized, K-Ordered Unique ID Generator in Erlang

Wednesday, January 18th, 2012

Flake: A Decentralized, K-Ordered Unique ID Generator in Erlang

From the post:

At Boundary we have developed a system for unique id generation. This started with two basic goals:

  • Id generation at a node should not require coordination with other nodes.
  • Ids should be roughly time-ordered when sorted lexicographically. In other words they should be k-ordered [1, 2].

All that is required to construct such an id is a monotonically increasing clock and a location [3]. K-ordering dictates that the most-significant bits of the id be the timestamp. UUID-1 contains this information, but arranges the pieces in such a way that k-ordering is lost. Still other schemes offer k-ordering with either a questionable representation of ‘location’ or one that requires coordination among nodes.
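The construction is simple enough to sketch: timestamp in the most significant bits, then a worker/location id, then a per-millisecond sequence. The field sizes below follow the flake scheme, but the module and function names are invented for illustration:

```erlang
-module(kid).
-export([new/2]).

%% Build a 128-bit k-ordered id: 64-bit millisecond timestamp (most
%% significant), 48-bit worker/location id, 16-bit sequence counter.
new(WorkerId, Seq) ->
    Millis = erlang:system_time(millisecond),
    <<Millis:64, WorkerId:48, Seq:16>>.
```

Because the timestamp occupies the high bits, ids generated later compare greater, so sorting the binaries lexicographically gives the rough time ordering the post asks for.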

Just in case you are looking for a decentralized source of K-ordered unique IDs. ;-)

First seen at: myNoSQL as: Flake: A Decentralized, K-Ordered Unique ID Generator in Erlang.

A Basic Full Text Search Server in Erlang

Monday, October 10th, 2011

A Basic Full Text Search Server in Erlang

From the post:

This post explains how to build a basic full text search server in Erlang. The server has the following features:

  • indexing
  • stemming
  • ranking
  • faceting
  • asynchronous search results
  • web frontend using websockets

Familiarity with the OTP design principles is recommended.

Looks like a good way to become familiar with Erlang and text search issues.

Buckets of Sockets

Tuesday, October 4th, 2011

Buckets of Sockets

OK, so some of the stuff I have pointed to lately hasn’t been “hard core.” ;-)

This should give you some ideas about building communications (including servers) in connection with topic maps.

From the webpage:

So far we’ve had some fun dealing with Erlang itself, barely communicating to the outside world, if only by text files that we read here and there. As much as relationships with yourself might be fun, it’s time to get out of our lair and start talking to the rest of the world.

This chapter will cover three components of using sockets: IO lists, UDP sockets and TCP sockets. IO lists aren’t extremely complex as a topic. They’re just a clever way to efficiently build strings to be sent over sockets and other Erlang drivers.
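The IO list trick is worth a tiny example (the module here is mine, as a sketch): you nest binaries, byte integers and other iolists freely, and the runtime flattens the whole thing only once, at the point of writing to the socket.

```erlang
-module(io_demo).
-export([response/0]).

%% Assemble an HTTP-ish response without ever concatenating strings;
%% the nested structure is handed to the driver as-is.
response() ->
    Header = <<"HTTP/1.1 200 OK\r\n\r\n">>,
    Body = ["Hello, ", <<"world">>, $!],
    [Header, Body].   %% valid iolist, ready for gen_tcp:send/2
```

`iolist_to_binary(io_demo:response())` yields the flat bytes, but when sending over a socket even that step is unnecessary.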

Riak 1.0.0 RC 1

Thursday, September 22nd, 2011

Riak 1.0.0 RC 1

From the post:

We are pleased to announce the first release candidate for Riak 1.0.0 is now available.

The packages are available on our downloads page:

As a release candidate, we consider this to be a functionally complete representation of Riak 1.0.0. From now until the 1.0.0 release, only critical bug fixes will be merged into the repository. We would like to thank everyone who took the time to download, install, and run the pre-releases. The Riak community has always been one of the great strengths of Riak, and this release period has been no different with feedback and bug reports we’ve been given.


Leveling Up in The Process Quest

Wednesday, September 14th, 2011

Leveling Up in The Process Quest: The Hiccups of Appups and Relups

I won’t reproduce the image that “learn you some Erlang for great good” uses for a failed update; you will have to visit the blog page to see it for yourself.

I can quote the first couple of paragraphs that sets the background for it:

Doing some code hot-loading is one of the simplest things in Erlang. You recompile, make a fully-qualified function call, and then enjoy. Doing it right and safe is much more difficult, though.

There is one very simple challenge that makes code reloading problematic. Let’s use our amazing Erlang-programming brain and have it imagine a gen_server process. This process has a handle_cast/2 function that accepts one kind of argument. I update it to one that takes a different kind of argument, compile it, push it in production. All is fine and dandy, but because we have an application that we don’t want to shut down, we decide to load it on the production VM to make it run.
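The safe path for that change runs through the gen_server code_change/3 callback, which the release handler invokes during a hot upgrade so the old state can be converted to what the new code expects. A sketch, with invented state shapes:

```erlang
-module(upgrade_demo).
-export([code_change/3]).

%% Called during a hot upgrade. The old code kept {state, Count};
%% the new code also wants a timestamp, so we convert the state here.
%% (The state shapes are invented for illustration.)
code_change(_OldVsn, {state, Count}, _Extra) ->
    {ok, {state, Count, erlang:system_time(second)}}.
```

Writing this callback is the easy part; as the post goes on to show, orchestrating it correctly with appup and relup files is where the hiccups live.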

I suspect that Erlang or something close to it will become the norm in the not too distant future, mostly because there won’t be an opportunity to “catch up” on all the data streams after reloading an application. There may be buffering in the event of a reader failure, but not system-wide.

An Open Source Platform for Virtual Supercomputing

Wednesday, September 7th, 2011

An Open Source Platform for Virtual Supercomputing, Michael Feldman reports:

Erlang Solutions and Massive Solutions will soon launch a new cloud platform for high performance computing. Last month they announced their intent to bring a virtual supercomputer (VSC) product to market, the idea being to enable customers to share their HPC resources either externally or internally, in a cloud-like manner, all under the banner of open source software.

The platform will be based on Clustrx and Xpandrx, two HPC software operating systems that were the result of several years of work done by Erlang Solutions, based in the UK, and Massive Solutions, based in Gibraltar. Massive Solutions has been the driving force behind the development of these two OS’s, using Erlang language technology developed by its partner.

In a nutshell, Clustrx is an HPC operating system, or more accurately, middleware, which sits atop Linux, providing the management and monitoring functions for supercomputer clusters. It is run on its own small server farm of one or more nodes, which are connected to the compute servers that make up the HPC cluster. The separation between management and compute enables it to support all the major Linux distros as well as Windows HPC Server. There is a distinct Clustrx-based version of Linux for the compute side as well, called Compute Based Linux.

A couple of things to note from within the article:

The only limitation to this model is its dependency on the underlying capabilities of Linux. For example, although Xpandrx is GPU-aware, since GPU virtualization is not yet supported in any Linux distros, the VSC platform can’t support virtualization of those resources. More exotic HPC hardware technology would, likewise, be out of the virtual loop.

The common denominator for VSC is Erlang, not just the company, but the language, which is designed for programming massively scalable systems. The Erlang runtime has built-in support for things like concurrency, distribution and fault tolerance. As such, it is particularly suitable for HPC system software and large-scale interprocess communication, which is why both Clustrx and Xpandrx are implemented in the language.

As computing power and access to computing power increases, have you seen an increase in robust (in your view) topic map applications?

Erlang – 3 Slide decks

Monday, September 5th, 2011

I encountered three (3) slide decks on Erlang today:

Mohamed Samy presents two sessions on Erlang:

Erlang Session 1 – General introduction, sequential Erlang.

Erlang Session 2 – Concurrency, Actors

Despite the titles, there was no session 3.

While writing those up, I saw:

Concurrency Oriented Programming in Erlang, a more advanced view of Erlang and its possibilities.

Erlang Community Site

Thursday, August 25th, 2011

Erlang Community site:

Interesting collection of links to various Erlang resources.

Includes Try Erlang site, where you can try Erlang in your browser.

I have seen topic maps displayed in web browsers. I have seen fairly ugly topic map editors in web browsers. But no, I don’t think I have seen a “Try Topic Maps” type site. Have I just missed it?

Thoughts? Suggestions?

A practical introduction to MochiWeb

Sunday, July 24th, 2011

A practical introduction to MochiWeb

From the post:

Bob Ippolito, creator of MochiWeb, describes it as “an Erlang library for building lightweight HTTP servers”. It’s not a framework: it doesn’t come with URL dispatch, templating or data persistence. Despite not having an official website or narrative documentation, MochiWeb is a popular choice to build web services in Erlang. The purpose of this article is to help you to get started by gradually building a microframework featuring templates and URL dispatch. Persistence will not be covered.

Just in case you are interested in building web services in Erlang for your topic map application.

Build your own internet search engine

Tuesday, July 19th, 2011

Build your own internet search engine by Daniel Himmelein.

Uses Erlang but also surveys the Apache search stack.

Not that you have to roll your own search engine, but it will give you a different appreciation for the issues search engines face.

Update: Build your own internet search engine – Part 2

I ran across part 2 while cleaning up at year’s end. Enjoy!