Lee makes a compelling argument for XML as the underlying mechanism for data integration when he writes:
…Perhaps the data in your relational databases is structured. What about your knowledge management systems, customer information systems, document systems, CMS, mail, etc.? How do you integrate that data with structured data to get a holistic view of all your data? What do you do when you want to bring a group of relational schemas from different systems together to get that elusive 360 view – which is being demanded by the world’s regulators and banks? Mergers and acquisitions drive this requirement too. How do you search across that data?
Sure, there are solution-stack answers. We’ve all seen whiteboards with an ever-growing number of boxes and those innocuous puny arrows between them that translate into teams of people, buckets of code, and test and operations teams. They all add up to ever-increasing costs, complexity, missed deadlines, and lost market share. Sound overly dramatic? Gartner calculated a worldwide spend of $5 billion on data integration software in 2015. How much did you spend … would you know where to start calculating that cost?
While pondering what you spend on a yearly basis for data integration, contemplate two more questions from Lee:
…So take a moment to think about how you treat the data format that underpins your intellectual property? First-class citizen or after-thought?…
If you are treating your XML elements as first-class citizens, can you tell me that you created subject identity tests for those subjects, so that a programmer new to your years of legacy XML will understand that <MFBM>, <MBFT> and <MBF> elements are all expressed in units of 1,000 board feet?
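To make the point concrete, here is a minimal sketch of what such a subject identity test might look like. The mapping table, the canonical subject name `thousand-board-feet`, and the sample document are all hypothetical illustrations; only the element names and their shared unit come from the text above.

```python
# Minimal sketch of a "subject identity" map: it records that the legacy
# elements <MFBM>, <MBFT> and <MBF> all name the same subject -- a
# quantity expressed in units of 1,000 board feet.
import xml.etree.ElementTree as ET

# Hypothetical canonical subject name; the three legacy tags are synonyms.
SUBJECT_IDENTITY = {
    "MFBM": "thousand-board-feet",
    "MBFT": "thousand-board-feet",
    "MBF":  "thousand-board-feet",
}

def canonical_subject(tag):
    """Return the canonical subject for a legacy element name, or None."""
    return SUBJECT_IDENTITY.get(tag)

def total_board_feet(xml_text):
    """Sum every element identified as thousand-board-feet,
    converted to plain board feet."""
    root = ET.fromstring(xml_text)
    total = 0.0
    for elem in root.iter():
        if canonical_subject(elem.tag) == "thousand-board-feet":
            total += float(elem.text) * 1000
    return total

# Illustrative document mixing all three synonymous element names.
doc = "<lumber><MFBM>2</MFBM><MBFT>1.5</MBFT><MBF>0.5</MBF></lumber>"
print(total_board_feet(doc))  # 4000.0 board feet
```

A new programmer who finds this map (and the tests that exercise it) learns in one glance what might otherwise take an archaeology dig through years of legacy markup.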
Reducing the cost of data integration tomorrow, next year and five years after that, requires investment in the here and now.
Perhaps that is why data integration costs continue to climb.
Why pay for today what can be put off until tomorrow? (Future conversion costs are a line item in some future office holder’s budget.)