From the post:
50 million messages per second on a single machine is mind blowing!
We have measured this for a micro benchmark of Akka 2.0.
As promised in Scalability of Fork Join Pool I will here describe one of the tuning settings that can be used to achieve even higher throughput than the amazing numbers presented previously. Using the same benchmark as in Scalability of Fork Join Pool and only changing the configuration we go from 20 to 50 million messages per second.
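The quoted post doesn't reproduce the setting here, but dispatcher tuning of this kind lives in Akka's application.conf. A hedged sketch of what a fork-join dispatcher block looks like in Akka 2.0's HOCON format — the setting names are real Akka configuration keys, but the values are illustrative, not the benchmark's actual numbers:

```
# Hypothetical dispatcher tuning in application.conf (Akka 2.0 HOCON).
# Setting names are Akka's; the values below are illustrative only.
my-dispatcher {
  type = Dispatcher
  executor = "fork-join-executor"
  fork-join-executor {
    parallelism-min = 8        # lower bound on pool threads
    parallelism-factor = 3.0   # threads ~ cores * factor, within the bounds
    parallelism-max = 64       # upper bound on pool threads
  }
  # how many messages an actor processes before yielding its thread;
  # raising this batch size is a common throughput knob
  throughput = 100
}
```

Actors are then assigned to the dispatcher by name, so a change like this really is configuration-only, as the post says.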
The micro benchmark uses pairs of actors sending messages to each other, classic ping-pong, all sharing the same fork join dispatcher.
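The ping-pong shape is easy to picture without Akka at all. A minimal sketch, assuming plain Java threads and blocking queues standing in for actors and mailboxes — none of Akka's dispatcher machinery, just the message flow being benchmarked:

```java
// Sketch of the ping-pong benchmark shape -- NOT Akka itself, just
// threads and queues standing in for a pair of actors and their mailboxes.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PingPong {
    // Runs n round trips between two thread "actors";
    // returns the total number of messages exchanged.
    static long runPingPong(int n) {
        BlockingQueue<String> pingBox = new ArrayBlockingQueue<>(1);
        BlockingQueue<String> pongBox = new ArrayBlockingQueue<>(1);

        // "pong" actor: echoes every message back to "ping"
        Thread pong = new Thread(() -> {
            try {
                for (int i = 0; i < n; i++) {
                    pongBox.take();
                    pingBox.put("pong");
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        pong.start();

        try {
            for (int i = 0; i < n; i++) {
                pongBox.put("ping");
                pingBox.take();
            }
            pong.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return 2L * n; // each round trip is two messages
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        long msgs = runPingPong(100_000);
        long ms = (System.nanoTime() - start) / 1_000_000;
        System.out.printf("%d messages in %d ms%n", msgs, ms);
    }
}
```

The real benchmark runs many such pairs concurrently on one shared fork-join dispatcher, which is where the throughput numbers come from; a single pair like this mostly measures hand-off latency.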
Fairly sure the web scale folks will just sniff and move on. It’s not like every Facebook user is sending individual messages to all of their friends and their friends’ friends, all at the same time.
On the other hand, at 50 million messages per second per machine, on enough machines, you are talking about a real pile of messages.
Are we approaching the point of data being responsible for processing itself and reporting the results? Or at least reporting itself to the nearest processor with the appropriate inputs, perhaps by broadcasting a message about itself?
Closer to home, could a topic map infrastructure be built using message passing that reports a TMDM-based data model, for use by query or constraint languages? That is, it would present a TMDM API, as it were, although behind the scenes the reported API would be the result of message passing and processing.
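One way to picture "an API that is really message passing underneath" is a topic lookup whose only interface is its message protocol. A minimal sketch — TopicService, GetNames, and the name map are all hypothetical names for illustration, not TMDM specification types:

```java
// Hedged sketch: a TMDM-style topic-name lookup exposed only as a message
// protocol. All names here (TopicService, GetNames) are hypothetical.
import java.util.*;
import java.util.concurrent.*;

public class TopicService {
    // The request message: a topic id plus a queue to reply on.
    // The "API" consists entirely of messages like this one.
    record GetNames(String topicId, BlockingQueue<List<String>> replyTo) {}

    private final BlockingQueue<GetNames> mailbox = new LinkedBlockingQueue<>();
    private final Map<String, List<String>> names = new HashMap<>();

    TopicService(Map<String, List<String>> seed) {
        names.putAll(seed);
        // Worker thread standing in for an actor: drains the mailbox
        // and answers each request on its reply queue.
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    GetNames msg = mailbox.take();
                    msg.replyTo().put(
                        names.getOrDefault(msg.topicId(), List.of()));
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // Synchronous convenience wrapper; behind it, everything is a message.
    List<String> namesOf(String topicId) {
        BlockingQueue<List<String>> reply = new ArrayBlockingQueue<>(1);
        try {
            mailbox.put(new GetNames(topicId, reply));
            return reply.take();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return List.of();
        }
    }
}
```

The caller sees what looks like a data-model API (`namesOf`), but the model it reports is just whatever the message protocol exposes — which is the point of the next paragraph.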
That would make the data model, or the API if you prefer, a matter of which message passing had been implemented.
More malleable and flexible than a relational database schema or a Cyc-based ontology. An enlightened data structure, for a new age.