I recently attended Informatica World and helped them celebrate 25 years! It is surprising to me that we, both Informatica and Right Triangle, have been doing this for so long, and I am amazed at what we are able to accomplish with our data. A funny moment occurred when one of the speakers in the Architects Summit uttered an acronym, SSDD: Same s**t, different decade. That drew a hearty laugh from many, including me, because in many ways that is what I always feel. The key is not what we are trying to do; that is the same. The key is how we apply improving technology to do it. Much of what follows is the blog post I began on my flight to Informatica World, which hits on the same theme: SSDD. It's probably why my laugh was the loudest in the room.
We at Right Triangle, like all other data professionals, continually evolve our approach to delivering enterprise information. In many ways, what's old is new again: the broad goal is still to deliver information to the thirsty consumers who want it ever faster and easier to engage with. Broadly, we are addressing the same problems that data warehousing began addressing 30 years ago. What's changed is the technology and, consequently, the techniques and tools.
Our fundamental methodology is unchanged, but we now must deliver streaming data and "Big" data while still delivering integrated, structured, time-variant data. Our architectural methodology has evolved to take advantage of the technology to deliver information to broader classes of users performing varying functions, which in turn requires varying application types. And we must do all of this faster, with greater context and greater self-service.
At Right Triangle we have always tried to separate data integration from BI/analytic application delivery, mostly because cramming all the rules for data quality and data integration AND all the rules for BI application delivery into one system made classic EDWs immovable objects. So years ago we began that separation in the relational world: integrate the data into an EDW, then build specific application repositories (using cubes, tabular models, even purchased software models) that use the EDW as a common source but uniquely satisfy the functional needs of the consuming application. This worked (and still does) well for relationally focused solutions providing internal corporate dashboards. However, that ship has long sailed, and organizations now need, want, and demand data differently.
Our architectural approach has evolved to support the needs of integrating core structured data, but it also increasingly drives work to the newer platforms for managing data. We've taken the functional separation to the next level. We need to loosen rules in data integration, but must still provide quality data. We need to put data in the hands of users more quickly, yet still provide context for consumption. This is why our SSDD now focuses on provisioning data (structured, unstructured, streaming) to what we call the Business Data Foundation. This includes lighter, more frictionless data processing that does the minimal amount of work needed to publish useful metadata about context, quality, trustworthiness, etc. to an Enterprise Data Catalog. This enables a more fundamental change in dealing with the different data types. Given an Enterprise Data Catalog that provides more clarity into the data, we instantly gain a more robust ability to access data for varying purposes through varying tools. We no longer tie the needs of one business user to the requirements of other business users, yet we still provide curated, vetted data sets they can use with confidence.
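To make the catalog idea concrete, here is a minimal sketch of what a lightweight catalog record and a self-service lookup might look like. This is purely illustrative: the `CatalogEntry` fields, the `quality_score` scale, and the `find_trusted` helper are hypothetical stand-ins, not the schema of any actual Enterprise Data Catalog product.

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    """A hypothetical catalog record: lightweight metadata describing a
    data set's context, quality, and trustworthiness."""
    name: str
    source_system: str
    data_type: str            # e.g. "structured", "streaming", "unstructured"
    quality_score: float      # 0.0-1.0, from lightweight profiling (assumed scale)
    curated: bool = False     # has this set been vetted for self-service use?
    tags: list = field(default_factory=list)

def find_trusted(catalog, min_quality=0.8):
    """Return curated data sets meeting a minimum quality threshold, so a
    consumer can pick a vetted set without routing through a monolithic EDW."""
    return [e for e in catalog if e.curated and e.quality_score >= min_quality]

catalog = [
    CatalogEntry("orders", "ERP", "structured", 0.95, curated=True, tags=["finance"]),
    CatalogEntry("clickstream", "web", "streaming", 0.60),
]

trusted = find_trusted(catalog)  # only the curated, high-quality "orders" set
```

The point of the sketch is the separation of concerns: the catalog carries just enough metadata for different users to find and trust data through whatever tool suits them, rather than forcing every consumer through one fully integrated repository.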
Completing the SSDD theme, we really are providing the same service: surface the data, provide context and integration, ensure quality, and enable interaction and analysis. The difference is that we aren't making every user go to one place where we force all data to conform to rigorous, time-consuming integration development and processing. Yes, it's SSDD, but on the always-evolving users' terms, not baked into the immovable EDW.