All of us are healthcare consumers: patients in a long-term care facility and people spinning prayer wheels to ward off evil spirits alike. We have all heard stories of a future in which AI takes over every aspect of medical care, from diagnosis to treatment, prognostication and much more. As a healthcare consumer, you know that is not the case. Other industries are already there: AI conducts most trades on major stock exchanges, cars sense their surroundings and maintain a safe distance from hazards, cities are planned according to AI simulations of traffic flows; the list goes on.
If your only interaction with healthcare systems is as a consumer, or with a consumer, chances are you have never seen any indication that AI is used in healthcare. Perhaps that has even frustrated you. If you have ever thought about why that might be, chances are you attributed the lack of AI in healthcare to doctors being afraid of an AI that could replace them, to AI not yet attaining sufficient quality* for medical applications, and/or to healthcare administrators being too conservative to try something new. You would have been wrong, but you would not be alone.
Technologists, deep-tech venture capitalists, and futurists who are not intimately familiar with healthcare systems are the worst predictors of the penetration of AI into healthcare. Even worse, their over-optimistic, over-confident views lead them to believe that the right person could "crack" AI in healthcare and revolutionize the industry with nothing more than the determination to do so. They are not only wrong; they make investment decisions that end up costing them billions (see, for example, Theranos here or here).
You could also guess (or hope?) that behind the scenes, healthcare is full of AI helping doctors with any number of tasks, unbeknownst to consumers. That's not right either.
We can safely establish that the lack of AI in healthcare is not for lack of trying. The first demonstrations of AI in healthcare were made well over half a century ago: the first AI clinical decision support system was unveiled in the 1960s, and the first robotic surgery was presented in the 1970s. By the end of the 1980s, every doctor's office had a desktop computer. The scientific literature is full of applications of AI in medicine; the Journal of Biomedical Informatics was the first of many journals dedicated entirely to the topic. So what actually prevents AI in healthcare?
It's the data, baby.
If you want to understand healthcare data in the real world, think of a garbage tip. Now think of a war zone. Then think of a demolition site that was hit by an oil tanker and sits between the garbage tip and the war zone. Your clinical data is in there somewhere. Fortunately, the system that carefully put a piece of data under a burning tyre behind a concrete slab, next to a dead cat, usually remembers where it left it, so when you go to see your doctor, your doctor can usually retrieve the information. A different doctor in a different facility would have to work hard to find your information, and an AI would have the same problem.
AI works by finding patterns in data. The messier the data, the more of it an AI needs before there is enough repetition to identify a pattern. In healthcare, we don't actually know how much data that is. We do know that we can usually get access to terabytes of healthcare data. Sometimes, especially where images are concerned, we can access petabytes. We can envisage a future in which we can get access to exabytes of data. But for AI to find patterns in real-world healthcare data, it may well be that even zettabytes are not enough. That's a thousand times more data than we can ever foresee having access to, and far more than all the healthcare data that exists in the world, put together. But it's not just how much data that matters; other factors include how messy it is, how diverse the information represented in it is, and how much information is implicit or missing. So, for example, just adding exabytes of epigenomic data is not enough, because doing so increases not only the size but also the diversity of the data. An AI would still struggle to find patterns.
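To put those prefixes in perspective, here is a minimal sketch of the unit arithmetic. The byte counts are standard SI decimal definitions, not measurements of any particular healthcare dataset:

```python
# Standard SI (decimal) definitions of the data-size prefixes mentioned above.
SCALES = {
    "terabyte": 10**12,
    "petabyte": 10**15,
    "exabyte": 10**18,
    "zettabyte": 10**21,
}

# A zettabyte is a thousand times an exabyte -- the gap between the data
# we can foresee accessing and what pattern-finding may actually require.
ratio = SCALES["zettabyte"] // SCALES["exabyte"]
print(ratio)  # 1000
```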
That's why there's no AI in medicine: there's not enough data for it to process. It is certainly possible to create a dataset, develop and test an AI with it, and publish a scientific paper showing that the AI made better decisions than human clinicians. Of course it did; it had people helping it access better-quality data. But real-world data is not like that, so the AI doesn't work, and you don't find it in your hospital or clinic.
Evidentli has the world's first (and so far only) AI that can reliably sort through the junk, the dead cats and the burning tyres, and aggregate, clean and organize data in a standard format. The format is open, so anyone who can read a website can learn how the data is organized and how to use it. With data in this form, an AI requires far less data because patterns are easier to find. Without Evidentli, this process can take literally millions of work hours for a single data set. A large proportion of those hours require clinical knowledge and the rest require data science skills, both of which are prohibitively expensive.
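To see why one standard format makes patterns easier to find, consider a purely hypothetical sketch: two facilities record the same visit with different field names and date conventions. The field names below are invented for illustration and are not Evidentli's actual schema:

```python
# Hypothetical example: two source systems record the same visit differently.
source_a = {"pt_name": "Jane Doe", "dx": "I10", "seen": "12/03/2021"}
source_b = {"patient": "Jane Doe", "diagnosis_code": "I10",
            "visit_date": "2021-03-12"}

def normalize_a(rec):
    # Source A uses day/month/year; convert to ISO 8601 (year-month-day).
    day, month, year = rec["seen"].split("/")
    return {"patient_name": rec["pt_name"],
            "condition_code": rec["dx"],
            "visit_date": f"{year}-{month}-{day}"}

def normalize_b(rec):
    # Source B already matches the target schema; just rename fields.
    return {"patient_name": rec["patient"],
            "condition_code": rec["diagnosis_code"],
            "visit_date": rec["visit_date"]}

# After normalization, the two records are identical and directly comparable,
# so a pattern across facilities needs far fewer examples to emerge.
print(normalize_a(source_a) == normalize_b(source_b))  # True
```

The point of the sketch: without normalization, an AI must learn every site's quirks from data alone; with a shared schema, each record reinforces the same pattern.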
When we sort through the data automatically and cheaply, we unlock the potential of AI and innovation to enter healthcare. There, it will be welcomed by practitioners because it benefits patients.
*"AI quality" refers both to the accuracy of decisions and predictions made by an AI and to the marketing and other commercial aspects of the product.