Enhancing Time Series Data by Applying Bitemporality (It’s not just what you know, it’s when you know it) by Jeffrey Shmain.
A “white paper” and all that implies, but it raises the interesting question of setting time boundaries for the validity of data.
From the context of the paper, “bitemporality” means tracking two kinds of time for a unit of data: the start and end times between which it is valid, and the time at which the system came to know (or later revised) it.
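To make that concrete, here is a minimal sketch of such a record in Python. It is not anything from the paper; the class and field names are my own illustration of a bitemporal record.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass(frozen=True)
class BitemporalFact:
    """One unit of data carrying two time dimensions.

    valid_from / valid_to: when the fact holds in the real world.
    recorded_at / superseded_at: when the system learned the fact
    and when (if ever) it revised that belief. None means open-ended.
    """
    value: str
    valid_from: datetime
    valid_to: Optional[datetime]
    recorded_at: datetime
    superseded_at: Optional[datetime] = None
```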
We all know the static view of the world presented by most data systems is false. But it works well enough in some cases.
The problem is that most data systems don’t allow you to choose static versus some other view of the world.
That is in part because getting a non-static view means either modifying your data system (often not a good idea) or migrating to another one (which is expensive and not risk free).
Jeffrey remarks in the paper that “all data is time series data” and he’s right. Data arrives at time X, is sent at time T, is logged at time Y, is seen by the CIO at time Z, etc. To say nothing of tracking changes to that data.
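Once every datum carries those timestamps, the question in the paper’s subtitle becomes answerable in code: what did we believe at one time about another time? A sketch of such an “as of” filter, assuming records shaped like the BitemporalFact above (again my illustration, not the paper’s API):

```python
def as_of(facts, valid_at, known_at):
    """Facts that were valid at `valid_at`, according to what had been
    recorded (and not yet superseded) by `known_at`."""
    return [
        f for f in facts
        if f.valid_from <= valid_at
        and (f.valid_to is None or valid_at < f.valid_to)
        and f.recorded_at <= known_at
        and (f.superseded_at is None or known_at < f.superseded_at)
    ]
```

Run the same query with two different `known_at` values and you can see exactly what a later correction changed, and when the system started believing it.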
Not all cases require that much detail, but if you need it, wouldn’t it be nice to have?
Your present system may limit you to static views, but topic maps can enhance that system where it stands, avoiding the dangers of modifying it in place or of migrating into unknown perils and hazards.
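One way to picture that kind of enhancement (purely my own illustration, not a description of any topic map engine): leave the existing records untouched and keep validity intervals in a separate overlay keyed by the identifiers the system already has.

```python
from datetime import datetime

# Hypothetical legacy records; the existing system is never modified.
legacy_records = {
    "cust-42": {"name": "ACME Corp", "tier": "gold"},
}

# Overlay maintained alongside the system: (valid_from, valid_to)
# per existing record id. A valid_to of None means still valid.
validity_overlay = {
    "cust-42": (datetime(2014, 1, 1), datetime(2015, 1, 1)),
}

def record_valid_at(record_id, when):
    """Is the legacy record valid at `when`, according to the overlay?"""
    interval = validity_overlay.get(record_id)
    if interval is None:
        return False  # no temporal annotation for this record
    start, end = interval
    return start <= when and (end is None or when < end)
```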
When did you know you needed time-based validity for your data?
For a bit more technical view of bitemporality, see the piece authored by Robbert van Dalen.