The European Union has made progress on the long-awaited European artificial intelligence (AI) law: in December, an agreement was reached on its consolidated text. In this regulation, the European body classifies medical devices that use this technology as high-risk artificial intelligence systems. But what should health regulation include? Ignacio J. Medrano, a neuroscientist and expert in this technology, summarizes the necessary elements in three points. First, he argues that AI should not be classified as a medical device, as it is in other regulations. “An algorithm created with the help of AI, with machine learning, does not have to be a medical device, but that is what was decided.”
“In my opinion, an error was made in the United States, later repeated in Great Britain: classifying artificial intelligence as a medical device. That is, since I have no regulatory box to put it in, I approve it as if it were a catheter or an EKG machine in order to launch a decision-support algorithm. I have to run a clinical trial and obtain the same FDA seal that a physical device would need before a hospital can use it. It makes no sense; an algorithm is information,” he explains. What is more, that path is neither sustainable nor scalable.
Therefore, the expert argues, Europe has the opportunity to “understand this better, thanks to having arrived at the party later.” This does not mean that AI algorithms should go uncontrolled: “Obviously, AI algorithms should be approved, monitored and regulated, but they should not go through CE marking as if they were physical devices, which greatly delays innovation,” he says.
A “fast track” to speed things up
In addition, he proposes creating a kind of “fast track,” or pre-certification of companies or institutions, so they can work faster. “It would be very important for the development of this technology, and there is an opportunity to do it.”
To these two elements he adds a third, related to the data minimization principle of the European data protection regulation (GDPR, by its English acronym). This regulation, he explains, advocates using the minimum possible data for each investigation. “The problem is that when you work with machine learning, you sometimes discover correlations between data that you did not know a priori. So, in fact, the more data, the better,” explains Ignacio J. Medrano.
“AI algorithms must be approved, monitored and regulated, but they should not be CE marked as if they were physical devices.”
That does not mean it is an excuse for companies to evade responsibility for the data, he says. “You can actually find associations in data that are unknown until the machine starts looking at them.” He therefore considers that a law on artificial intelligence and health would have to take this principle into account and make an “exception” so that it does not act as a drag.
The specialist acknowledges that this path is not free of difficulties, but insists it cannot come at the cost of slowing down innovation, since that affects how technologies reach patients. “There should be safeguards, but the problem with going beyond that is that we are not getting the algorithms in time into the hands of the people who actually need them, because they have a disease today, not in five years,” he says.