From the OpenStack wiki:
The OpenStack Open Source Cloud Mission: to produce the ubiquitous Open Source Cloud Computing platform that will meet the needs of public and private clouds regardless of size, by being simple to implement and massively scalable.
There are three core projects:
OPENSTACK COMPUTE: open source software and standards for large-scale deployments of automatically provisioned virtual compute instances.
OPENSTACK OBJECT STORAGE: open source software and standards for large-scale, redundant storage of static objects.
OPENSTACK IMAGE SERVICE: provides discovery, registration, and delivery services for virtual disk images.
Two new projects will be promoted to core in the next release:
OpenStack Identity: Code-named Keystone, the OpenStack Identity Service provides unified authentication across all OpenStack projects and integrates with existing authentication systems.
OpenStack Dashboard: Dashboard enables administrators and users to access and provision cloud-based resources through a self-service portal.
And a host of unofficial projects related to one or more OpenStack components. (OpenStack Projects)
So far as I could tell, there are no projects that deal with mapping between data sets in any re-usable way.
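To make the missing piece concrete, here is a minimal sketch of what a re-usable data-set mapping might look like: a declarative table that maps one schema's fields onto another's, including unit conversions. All field names and the data are hypothetical, invented purely for illustration.

```python
def f_to_c(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5.0 / 9.0

# Declarative mapping: target field -> (source field, conversion function).
# Because it is data rather than code, the same table can be reused,
# inspected, or shared between systems with clashing schemas.
MAPPING = {
    "temperature_c": ("temp_f", f_to_c),
    "city": ("location", lambda s: s.strip().title()),
}

def translate(record, mapping):
    """Translate a record from the source schema into the target schema."""
    return {target: convert(record[source])
            for target, (source, convert) in mapping.items()}

source_record = {"temp_f": 212.0, "location": " new york "}
print(translate(source_record, MAPPING))
# {'temperature_c': 100.0, 'city': 'New York'}
```

The point of the sketch is that the semantic decisions (which fields correspond, what conversion applies) live in one reusable table rather than being buried in ad hoc transformation code.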
Do you think cloud computing will make semantic impedance more or less obvious?
More obvious, because of the clash between the unknown semantics of different data sets.
Less obvious, because the larger the data sets, the greater the tendency to assume the answers, however curious, must be correct.
Which do you think it will be?