In recent years, with the advancement of all things AI, there have been many calls for regulators, such as the FDA in the USA, to create a more permissive environment for medical devices that can rapidly self-improve using machine-learned algorithms.
On the other hand, machine learning studies have often failed to be reproducible, and ML has failed to be foreseeable: in some cases it outperforms people, in others it does not, and we cannot tell in advance which will happen. ML has also failed to deliver on the promise of "always improving". In medicine, using ML in the name of faster progress could have serious consequences.
Evidence-based medicine advocates that medicine be safe as well as effective. This means that progress should be measured by improved efficacy, and that adoption should depend on an informed understanding of the associated risks. Providing evidence of the risks and benefits requires expensive trials and often slows the availability of new technologies.
There are two different world views at play. Innovators want a health system that moves quickly (and may often break things), like the hare from the famous fable. Regulators favour the tortoise's approach: steady, incremental improvement over time. This is the difference between a disruptive innovation model and an incremental one.
When it comes to phones, cars, and video games, we know we prefer the hare. But where do you stand when it comes to healthcare: would you prefer the hare's approach or the tortoise's?