Caching in HBase: SlabCache by Li Pi.
From the post:
The amount of memory available on a commodity server has increased drastically in tune with Moore's law. Today, it's very feasible to have up to 96 gigabytes of RAM on a mid-range commodity server. This extra memory is good for databases such as HBase, which rely on in-memory caching to boost read performance.
However, despite the availability of high-memory servers, the garbage collection algorithms available on production-quality JDKs have not caught up. Attempting to use large amounts of heap will result in the occasional stop-the-world pause that is long enough to cause stalled requests and timeouts, thus noticeably disrupting latency-sensitive user applications.
The post introduces SlabCache, an off-heap cache for those with the workloads and memory to justify enabling it.
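To make the off-heap idea concrete, here is a minimal sketch (my own illustration, not HBase's actual SlabCache code) of a slab-style cache backed by ByteBuffer.allocateDirect: one large buffer carved into fixed-size blocks, with the cached bytes living outside the garbage-collected heap so GC pauses no longer scale with cache size. All class and method names here are hypothetical.

import java.nio.ByteBuffer;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Toy slab cache: one large off-heap allocation, carved into
 * fixed-size blocks. Because the bytes live outside the Java heap,
 * the garbage collector never scans them -- the core idea behind
 * SlabCache. Illustrative only; this is not HBase's implementation.
 */
public class OffHeapSlab {
    private final ByteBuffer slab;   // off-heap backing store
    private final int blockSize;
    // key -> {offset, length} of the cached block within the slab
    private final Map<String, int[]> index = new ConcurrentHashMap<>();
    private int nextBlock = 0;

    public OffHeapSlab(int blockCount, int blockSize) {
        // allocateDirect places memory outside the GC-managed heap;
        // the JVM caps its total via -XX:MaxDirectMemorySize
        this.slab = ByteBuffer.allocateDirect(blockCount * blockSize);
        this.blockSize = blockSize;
    }

    public synchronized void put(String key, byte[] block) {
        if (block.length > blockSize)
            throw new IllegalArgumentException("block larger than slab block size");
        if (nextBlock * blockSize + blockSize > slab.capacity())
            nextBlock = 0; // naive wraparound reuse; a real cache evicts by policy
        int offset = nextBlock++ * blockSize;
        ByteBuffer view = slab.duplicate(); // independent position, shared storage
        view.position(offset);
        view.put(block);
        index.put(key, new int[] { offset, block.length });
    }

    public byte[] get(String key) {
        int[] loc = index.get(key);
        if (loc == null) return null; // cache miss
        byte[] out = new byte[loc[1]];
        ByteBuffer view = slab.duplicate();
        view.position(loc[0]);
        view.get(out);
        return out;
    }
}

Note that direct buffers are sized against the JVM's -XX:MaxDirectMemorySize limit rather than the -Xmx heap, which is why an off-heap cache lets you use a machine's spare RAM without enlarging the heap the collector has to manage.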
Quite interesting work, particularly if you are ignoring the naysayers on Hadoop and Cloud adoption in the coming year.
What the naysayers miss is that yes, unimaginative mid-level managers and admins have no interest in Hadoop or the Cloud. But Hadoop and the Cloud present opportunities that imaginative re-packaging and re-processing startups will use to provide new data streams and services.
You can't ask startups that don't exist yet why they chose Hadoop and the Cloud.
That goes unnoticed by unimaginative commentators who echo the opinions of uninformed managers, opinions those same columns then confirm. One of those feedback loops I mentioned earlier today.