When supercomputers meet the Semantic Web
Jack Park forwarded the link to this post.
It has descriptions like:
Everything about the hardware is optimised to churn through large quantities of data, very quickly, with vital statistics that soon become silly. A single processor “can sustain 128 simultaneous threads and is connected with up to 8 GB of memory.” The Cray XMT comes with at least 16 of those processors, and can scale to over 8,000 of them in order to handle over 1 million simultaneous threads with 64 TB of shared system memory. Should you want to, you could easily hold the entire Linked Data Cloud in main memory for rapid analysis without the usual performance bottleneck introduced by swapping data on and off disks.
Now, that’s computing!
Note the emphasis on graph processing.
I think Semantic Web and topic map fans would do well to pay attention to the big data movement mentioned in this article.
Imagine a topic map whose topics emerge from interaction with subject matter experts querying the data, rather than being statically authored.
The same goes for associations between subjects, and even for their association types.
Still topic maps, just a different way to think about authoring them.
I don’t have a Cray XMT, but it should be possible to practice emergent topic map authoring on a smaller device.
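To make the idea concrete, here is a minimal sketch of what "emergent" authoring might look like: topics and associations are created as a side effect of querying, rather than being authored up front. All class and method names here are hypothetical illustrations, not any existing topic map API.

```python
# A toy "emergent" topic map: topics, associations, and association
# types come into existence as experts query the data, instead of
# being statically authored. All names are hypothetical.

from collections import defaultdict

class EmergentTopicMap:
    def __init__(self):
        self.topics = {}                      # subject id -> topic record
        self.associations = defaultdict(set)  # assoc type -> {(subject, subject)}

    def query(self, subject, related_to=None, assoc_type="related"):
        """Look up a subject; create its topic on first contact."""
        topic = self.topics.setdefault(subject, {"id": subject, "hits": 0})
        topic["hits"] += 1  # frequently queried topics accumulate evidence
        if related_to is not None:
            # The association, and its type, also emerge from the query.
            self.topics.setdefault(related_to, {"id": related_to, "hits": 0})
            self.associations[assoc_type].add((subject, related_to))
        return topic

tm = EmergentTopicMap()
tm.query("Cray XMT", related_to="graph processing", assoc_type="optimised-for")
tm.query("Cray XMT")
print(sorted(tm.topics))
print(dict(tm.associations))
```

The point of the sketch is only that the map is still a topic map; nothing about the model changes, only when and how topics get written.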
I rather like that: emergent topic map authoring, ETMA.
Let me push that around a bit and I will post further notes about it.