Third Workshop on Massive Data Algorithmics (MASSIVE 2011)
From the website:
Tremendous advances in our ability to acquire, store and process data, as well as the pervasive use of computers in general, have resulted in a spectacular increase in the amount of data being collected. This availability of high-quality data has led to major advances in both science and industry. In general, society is becoming increasingly data driven, and this trend is likely to continue in the coming years.
The increasing number of applications processing massive data means that, in general, the focus on algorithm efficiency is increasing. However, the large size of the data, and/or the small size of many modern computing devices, also mean that issues such as memory hierarchy architecture often play a crucial role in algorithm efficiency. Thus the availability of massive data also means many new challenges for algorithm designers.
Forgive me for mentioning it, but what is the one thing all algorithms have in common, whether for massive data or not?
Ah, yes, some presumption about the identity of the subjects to be processed.
Would it not be rather difficult to process anything efficiently unless you knew where you were starting, and with what?
Making the subjects processed by algorithms efficiently interchangeable seems like a good thing to me.
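To make that concrete, here is a minimal C++ sketch of my own (the name accumulate_range and the containers are illustrative, nothing from the workshop): one algorithm written against iterators, so the subjects it processes are interchangeable, and interchangeable at no run-time cost, since the compiler specializes each use.

// A minimal sketch of "efficiently interchangeable subjects": one algorithm,
// written against iterators, runs unchanged over any container that can tell
// it where it starts and with what.
#include <iostream>
#include <list>
#include <vector>

// Generic accumulation: all it presumes is a half-open iterator range
// [first, last) and that the elements can be added to an initial value.
template <typename Iterator, typename T>
T accumulate_range(Iterator first, Iterator last, T init) {
    for (; first != last; ++first) {
        init = init + *first;
    }
    return init;
}

int main() {
    std::vector<int> v = {1, 2, 3, 4};
    std::list<double> l = {0.5, 1.5, 2.5};

    // The same algorithm, instantiated for two different containers and
    // element types.
    std::cout << accumulate_range(v.begin(), v.end(), 0) << "\n";   // 10
    std::cout << accumulate_range(l.begin(), l.end(), 0.0) << "\n"; // 4.5
}

Whether the data sits in a vector, a list, or (with a suitable iterator) something far too large for memory, the algorithm presumes only where it starts and with what.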