Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

April 17, 2013

In-Memory Computing

Filed under: Computation, Computer Science, Programming — Patrick Durusau @ 1:23 pm

Why In-Memory Computing Is Cheaper And Changes Everything by Timo Elliott.

From the post:

What is the difference? Database engines today do I/O. So if they want to get a record, they read. If they want to write a record, they write, update, delete, etc. The application, which in this case is a DBMS, thinks that it’s always writing to disk. If that record that they’re reading and writing happens to be in flash, it will certainly be faster, but it’s still reading and writing. Even if I’ve cached it in DRAM, it’s the same thing: I’m still reading and writing.

What we’re talking about here is that the actual database is physically in memory. I’m doing a fetch to get data and not a read. So the logic of the database changes. That’s what in-memory is about as opposed to the traditional types of computing.
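To make the fetch-versus-read distinction concrete, here is a minimal Python sketch (hypothetical code, not from Timo's post): the disk-backed store runs a read/write code path with (de)serialization on every access, while the in-memory store just hands back the record it already holds.

```python
import json

# Disk-backed store: every access travels a read/write code path with
# (de)serialization, even if the bytes happen to be cached in flash or DRAM.
class DiskStore:
    def __init__(self, path):
        self.path = path

    def get(self, key):
        with open(self.path) as f:       # read
            table = json.load(f)         # deserialize
        return table[key]

    def put(self, key, value):
        try:
            with open(self.path) as f:
                table = json.load(f)
        except FileNotFoundError:
            table = {}
        table[key] = value
        with open(self.path, "w") as f:  # write back
            json.dump(table, f)


# In-memory store: the record already lives in DRAM, so "get" is a fetch,
# not a read; there is no I/O code path to optimize away.
class MemoryStore:
    def __init__(self):
        self.table = {}

    def get(self, key):
        return self.table[key]           # direct fetch

    def put(self, key, value):
        self.table[key] = value
```

Even a perfectly cached DiskStore still executes its read/write logic; MemoryStore has no such logic to begin with, which is the change in database engine design the quote describes.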

Why is it time for in-memory computing?

Why now? The most important thing is this: DRAM costs are dropping about 32% every 12 months. Things are getting bigger, and costs are getting lower. If you looked at the price of a Dell server with a terabyte of memory three years ago, it was almost $100,000 on their internet site. Today, a server with more cores — sixteen instead of twelve — and a terabyte of DRAM costs less than $40,000.
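Assuming the 32% annual drop compounds yearly, Elliott's two price points line up reasonably well; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the quoted figures: a 32% drop in DRAM cost
# every 12 months, compounded over three years.
start_price = 100_000   # approx. price of a 1 TB-of-memory server three years earlier
annual_drop = 0.32
years = 3

price_now = start_price * (1 - annual_drop) ** years
print(f"Projected price after {years} years: ${price_now:,.0f}")
# Projected price after 3 years: $31,443 -- the same ballpark as the
# "less than $40,000" quoted above (the newer server also gained cores).
```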

In-memory results in lower total cost of ownership

So the cost of this stuff is not outrageous. For those of you who don’t understand storage, I always get into this argument: the total cost of acquisition of an in-memory system is likely higher than a storage system. There’s no question. But the total cost of ownership (TCO) is lower – because you don’t need storage people to manage memory. There are no LUNs [logical unit numbers]: all the things your storage technicians do go away.

People cost more than hardware and software – a lot more. So the TCO is lower. And also, by the way, power: one study IBM did showed that memory uses 99% less power than spinning disks. So unless you happen to be an electric company, that’s going to mean a lot to you. Cooling is lower, everything is lower.
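The TCO argument reduces to simple arithmetic: a higher acquisition cost offset by lower staffing and power costs over the life of the system. The numbers in this toy comparison are invented placeholders, chosen only to show the shape of the calculation, not figures from the post:

```python
# Toy total-cost-of-ownership comparison. All figures are illustrative
# placeholders: in-memory costs more to acquire but saves on storage
# administration (no LUNs to manage) and on power/cooling.
def tco(acquisition, staff_per_year, power_per_year, years=3):
    return acquisition + years * (staff_per_year + power_per_year)

disk_based = tco(acquisition=40_000, staff_per_year=30_000, power_per_year=5_000)
in_memory  = tco(acquisition=80_000, staff_per_year=5_000,  power_per_year=500)

print(f"Disk-based TCO over 3 years: ${disk_based:,}")   # $145,000
print(f"In-memory TCO over 3 years:  ${in_memory:,}")    # $96,500
```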

Timo makes a good case for in-memory computing, but I have a slightly different question.

If both data and program are stored in memory, where is the distinction between program and data?

Or, in topic map terms: can’t we then speak about subject identities in the program, and even in the data, at particular points in the program?

That could be a very powerful tool for controlling program behavior and re-purposing data at different stages of processing.
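As one hedged sketch of what that might look like, assuming only that code and data are both ordinary objects living in the same memory (the subject identifier URLs below are invented for illustration), a record and the function that re-purposes it can carry subject identities in exactly the same way:

```python
# Speculative sketch: when program and data are both just objects in memory,
# both can be assigned subject identifiers. The identifier URLs are invented
# placeholders, not real published subject identifiers.
SUBJECT_IDS = {}   # id(object) -> set of subject identifiers

def identify(obj, *identifiers):
    """Attach one or more subject identifiers to any in-memory object."""
    SUBJECT_IDS.setdefault(id(obj), set()).update(identifiers)
    return obj

def subjects_of(obj):
    return SUBJECT_IDS.get(id(obj), set())

# A data record with a subject identity...
order = identify({"sku": "A-1", "qty": 3, "price": 100.0},
                 "http://example.org/subject/purchase-order")

# ...and a processing step with a subject identity of its own.
def apply_discount(record):
    record["price"] = round(record["price"] * 0.9, 2)
    return record

identify(apply_discount, "http://example.org/subject/discount-rule")

# Both the data and the code that re-purposes it can now be addressed in the
# same subject-centric terms at any point in processing.
print(subjects_of(order))            # {'http://example.org/subject/purchase-order'}
print(subjects_of(apply_discount))   # {'http://example.org/subject/discount-rule'}
```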

1 Comment

  1. […] Timo Elliott was speculating about entirely RAM-based computing in: In-Memory Computing. […]

    Pingback by Aerospike « Another Word For It — April 19, 2013 @ 1:02 pm
