Can Extragalactic Data Be Standardized? by Ian Armas Foster.
From the post:
While lacking the direct practical applications that the study of genomics offers, astronomy is one of the more compelling use cases among big data-related areas of academic research.
The wealth of stars and other astronomical phenomena that one can identify and classify provides an intriguing challenge. The long-term goal is to eventually use the information from astronomical surveys to model the universe.
However, according to recent research by French computer scientists Nicolas Kamennoff, Sebastien Foucaud, and Sebastien Reybier, the gradual decline of Moore’s Law and the resulting shortfall in computing power, combined with the ever-expanding ability to see outside the Milky Way, are creating a significant bottleneck in astronomical research. In particular, software has yet to catch up to strides made in parallel processing.
This article is the first of two focused on an ambitious-sounding institute known as the Taiwan Extragalactic Astronomical Data Center (TWEA-DC). Here, the researchers identified three problems they hope to solve through the TWEA-DC: misuse of resources, a heterogeneous software ecosystem, and data transfer.
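To make the parallel-processing point concrete: the gains the researchers refer to come from more cores rather than faster ones, so survey software has to spread per-source work across those cores. Here is a minimal sketch of that pattern; the `classify_source` function and the toy catalog are my own invention, not anything from the TWEA-DC paper:

```python
from multiprocessing import Pool

def classify_source(source):
    """Hypothetical per-source work; stands in for a real pipeline step."""
    ra, dec, flux = source
    label = "bright" if flux > 1.0 else "faint"
    return (label, ra, dec)

if __name__ == "__main__":
    # Toy catalog of (ra_deg, dec_deg, flux); a real survey would stream
    # millions of such records rather than a three-element list.
    catalog = [(10.68, 41.27, 2.3), (83.82, -5.39, 0.4), (201.37, -43.02, 1.7)]
    with Pool() as pool:  # one worker per available core by default
        labels = pool.map(classify_source, catalog)
    print(labels)
```

The mapping itself is embarrassingly parallel; the hard part, as the researchers note, is that most existing astronomy software wasn’t written with this structure in mind.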
I guess this counts as one of my more “theory”-oriented posts on topic maps. 😉
Of particular interest is the recognition that heterogeneity isn’t limited to data. Heterogeneity exists between software systems as well.
Homogeneity, for both data and software, is an artifice constructed to make early digital computers possible.
Whether CS is now strong enough for the default case, heterogeneity of both data and software, remains to be seen.
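As a toy illustration of what data heterogeneity looks like in practice, here is a sketch of mapping two survey record formats onto one common schema. Every catalog name, field name, and unit convention below is hypothetical:

```python
# Two invented surveys that describe the same kind of object differently:
# survey A uses decimal degrees and Vega magnitudes, survey B uses
# right ascension in hours and AB magnitudes.

def normalize_survey_a(record):
    """Map a hypothetical survey-A record onto the common schema."""
    return {
        "source_id": record["id"],
        "ra_deg": record["ra"],               # already decimal degrees
        "dec_deg": record["dec"],
        "mag": record["vega_mag"],
        "mag_system": "Vega",
    }

def normalize_survey_b(record):
    """Map a hypothetical survey-B record onto the common schema."""
    return {
        "source_id": record["obj_name"],
        "ra_deg": record["ra_hours"] * 15.0,  # 24 h = 360 deg, so 1 h = 15 deg
        "dec_deg": record["dec_deg"],
        "mag": record["ab_mag"],
        "mag_system": "AB",
    }

if __name__ == "__main__":
    a = {"id": "A-001", "ra": 10.68, "dec": 41.27, "vega_mag": 3.4}
    b = {"obj_name": "B-042", "ra_hours": 0.712, "dec_deg": 41.27, "ab_mag": 4.1}
    for rec in (normalize_survey_a(a), normalize_survey_b(b)):
        print(rec)
```

The interesting work is in the normalizer functions themselves: each one encodes an interpretation of its source catalog, which is the same subject-identity problem topic maps are designed to make explicit.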
(On TWEA-DC proper, see: TaiWan Extragalactic Astronomical Data Center — TWEA-DC (website))