Since I was a child, I have been fascinated by science fiction and all of the technological promises it makes. Flying cars, interstellar travel, and of course, sentient machines that would either make the world a perfect place or simply kill off all the humans.

After (too many) years in the tech industry, I have developed a nasty habit of deconstructing why some of these ‘future features’ would never work - at least not with the technology we have today. I was reminded of that during an “X-Files” binge a few weeks ago. For those of you playing along at home, the referenced episode is “Ghost in the Machine” - season 1, episode 7. Mulder and Scully run into a ‘smart building’ that has become sentient and, of course, starts killing people. Also, don’t judge me.

In the episode, the killer AI (not in a good way) is shown accessing elevator controls, plumbing, security systems and telephones. OK, this is 1993, and there is a good chance that the management systems for all of these devices were custom written, and that all of the control data, sensor logs, and anything else an artificial intelligence would need was centrally stored and accessible.

But compare this to my own house. No elevator here, but my lights, thermostats, speakers and dimmers are all made by different companies - each with a proprietary interface. Sure, HomeKit and other technologies let things play nice together, but what if I wanted (for instance) my phone to monitor my TV’s volume and adjust it based on wherever I happened to be in the house? Or to pause my show when the doorbell rings? Building something that constantly monitors all of these devices, with enough intelligence to make that data actionable, is not unlike the challenge my customers face.
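To make that concrete, here is a minimal sketch of what the DIY version of that kind of whole-home awareness tends to look like: one adapter per vendor, each translating a proprietary interface into a common event shape. Everything below - class names, payloads, the lot - is invented for illustration; real device SDKs all look different, which is exactly the problem.

```python
from dataclasses import dataclass
import time

@dataclass
class Event:
    source: str   # which device produced the event
    kind: str     # normalized event type, e.g. "doorbell" or "volume"
    value: dict   # payload; shape varies by kind
    ts: float     # unix timestamp

class DoorbellAdapter:
    """Hypothetical wrapper around a doorbell vendor's proprietary API."""
    def poll(self) -> list[Event]:
        # A real adapter would call the vendor's SDK here.
        return [Event("front-door", "doorbell", {"pressed": True}, time.time())]

class TVAdapter:
    """Hypothetical wrapper around a TV vendor's proprietary API."""
    def poll(self) -> list[Event]:
        return [Event("living-room-tv", "volume", {"level": 35}, time.time())]

def run_once(adapters) -> None:
    # The "intelligence": pause the show when the doorbell rings.
    events = [e for a in adapters for e in a.poll()]
    if any(e.kind == "doorbell" for e in events):
        print("Pausing playback: doorbell pressed")

run_once([DoorbellAdapter(), TVAdapter()])
```

Two devices, two adapters. Add a thermostat, a lock and a camera, and the adapter count - and the maintenance burden - grows with every vendor.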

They are relying on data from enterprise systems (ServiceNow, Active Directory), data lakes (Elastic, Hadoop) and log collectors (Splunk, Syslog) to feed their machine learning and AI projects - and having a difficult time of it. They have two choices: either pick one technology and feed everything else into it, or transform all of the data into a common format.
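A toy example shows why the second choice gets painful fast. Each source needs its own mapping into the common schema; the records and field names below are simplified stand-ins for what real ServiceNow and Splunk payloads actually contain:

```python
# One mapping function per source system; every new source means another
# mapping to write, test and maintain as upstream schemas drift.
def from_servicenow(rec: dict) -> dict:
    return {"user": rec["assigned_to"],
            "action": rec["short_description"],
            "when": rec["sys_updated_on"]}

def from_splunk(rec: dict) -> dict:
    return {"user": rec["user"],
            "action": rec["event"],
            "when": rec["_time"]}

normalized = [
    from_servicenow({"assigned_to": "jdoe",
                     "short_description": "password reset",
                     "sys_updated_on": "2023-05-01T10:00:00Z"}),
    from_splunk({"user": "jdoe",
                 "event": "login_failure",
                 "_time": "2023-05-01T09:58:12Z"}),
]
print(normalized)  # the common format - and yet another copy of the data
```

Note that the output is itself a new dataset that has to live somewhere, which is how the transformation route quietly produces the next silo.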

Those who try to consolidate the data find that the licenses involved in doing so are prohibitively expensive, or that the complex lookups and scripting needed to make everything work together nicely require full-time expert resources to ensure things don’t break.

Those transforming the data bring in yet another old-school technology (ETL - extract, transform, load) or simply find themselves copying the data into yet another silo. How many silos must one build to solve the problem of data being locked in silos? How long will the tech industry keep solving one problem by creating another?

One of the reasons I am excited about Gemini’s Autonomous Data Cloud (ADC) is its concept of data availability. Imagine all of the data sources mentioned above (and many more) accessible with a simple, non-proprietary query language. Enterprises could finally let their data scientists do data science instead of wrangling data. Take that a step further and allow ADC to start making the connections between those data sources, and things get very, very interesting.
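For a flavor of why that matters, consider a question that today spans two silos: which departments are seeing failed logins? The in-memory lists below are toy stand-ins for Active Directory and Splunk, and this is plain Python rather than ADC’s actual query language - the point is simply that the cross-source join becomes one expression instead of an integration project:

```python
# Toy stand-ins for two silos a data scientist would normally have to bridge.
ad_users = [{"user_name": "jdoe", "department": "finance"},
            {"user_name": "asmith", "department": "engineering"}]
auth_logs = [{"user": "jdoe", "event": "login_failure", "ts": "09:58"},
             {"user": "asmith", "event": "login_success", "ts": "10:02"}]

# "Which departments are seeing failed logins?" - one question, two sources.
by_name = {u["user_name"]: u for u in ad_users}
for log in auth_logs:
    if log["event"] == "login_failure" and log["user"] in by_name:
        print(by_name[log["user"]]["department"], log["ts"])  # finance 09:58
```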

Maybe not killer elevator interesting, but definitely on the right path.

Want to learn more about Gemini’s Autonomous Data Cloud and how we are helping enterprises solve these integration and connection issues? Drop us a note. We are always happy to have those conversations.