ChatGPT already diagnoses, but requires medical supervision

Ignacio Hernández Medrano, neurologist and expert in artificial intelligence.

Artificial intelligence (AI) is already present in healthcare, as seen in tools like ChatGPT, which require human supervision. In recent years there has been a boom in the use of these tools to diagnose diseases such as Covid-19 or disorders such as arrhythmias. Looking ahead, specialists say AI will help identify more serious diseases, but care must be taken so that it does not create inequalities between patients, and the mistakes these machines can make must be closely monitored.

“The evolution of ChatGPT is very fast, and in a few months we will have a version 500 times more powerful. Intelligent systems will begin to diagnose more serious pathologies, and it is logical that they will start with imaging and move on to other types of data, such as laboratory data,” explains Ignacio Hernández Medrano, neurologist and creator of Savana, an AI platform that reads and analyzes millions of clinical documents. “It is important to understand that this is a general AI, not an expert one, so when using it in medicine (whether for diagnosis or prediction) it must be fed with expert databases, built by people who understand the nuance of what, for example, progression in multiple myeloma means,” he adds.

What happens if the machinery of artificial intelligence fails? As the neurologist points out, just as clinical guidelines mark the path a doctor should follow based on scientific evidence, the same happens with AI. These systems will be coded on the basis of previous studies and will follow what has been defined so that the patient benefits.

“One applies what is probabilistically most likely according to the tool; if the patient turns out not to be predictable by the algorithm, then there is no liability (neither for the doctor nor for the tool). There is liability only for the professional if he deviates from what the algorithm says for no reason, or for the AI manufacturer if it cheats on validation or commits similar fraud,” he maintains.

Inequalities in healthcare

Although these devices may act as healthcare providers, they will never replace human beings, since their work is limited to “solving problems, but without conscience”. This is where the medical professional comes in with his ethics, closely tied to conscience: “They are tools that must be in the hands of a professional”.

In the words of Hernández, when machines and humans come together, “the best possible result” is obtained. Both parts are necessary, because the devices do not think counterfactually or contextually, so some variables escape them. At the same time, machines can associate a number of alterations that the human brain is not capable of detecting.

In this sense, there is a risk that healthcare will differ depending on the patient's economic level. “Perhaps we are beginning to move toward a world in which those with fewer resources have access to highly robotized healthcare, with few humans. This is what already happens with what we call low-quality services, which are handled by a machine, while the rich have the privilege of a doctor, a human being supported by machines, with a human interface,” says Hernández.

Regulations on artificial intelligence

However, one of the great challenges AI has faced is regulation. Currently, European data protection regulation allows many big data projects that were previously blocked to go forward. Now only two conditions must be met: the data must be anonymized, and its use must be non-commercial.

Despite this, the law invokes the data minimization principle; that is, it advocates using the least amount of data possible to carry out the research, which remains a problem for researchers.

“Our systems are based on exploring relationships between variables in immense amounts of data, and, a priori, you do not know what you are going to find. I find out when I enter all the data and explore it using artificial intelligence techniques,” argues Hernández, who is confident that this obstacle will be resolved as soon as possible, because it also represents “a huge competitive barrier relative to China or the United States.”

Hernández: “We have to make sure that we have tested, verified and updated artificial intelligence algorithms”

On the other hand, Europe also has the Artificial Intelligence Regulation, which covers the most important aspects, such as privacy and verification. However, as Hernández points out, some issues remain to be addressed. First, the demand for explainability from AI systems: “It is beginning to be understood that explainability cannot be demanded from machines, because they function as pattern-recognition systems and do not know how they arrived at their conclusions.”

Likewise, something this regulation does not address is the monitoring and updating of AI algorithms. “Just as we check in real life that what we do in the lab applies to patients, we have to make sure we have tested, verified and updated algorithms,” concludes the expert.

The information published in Redacción Médica contains statements, data and declarations from official institutions and health professionals. However, if you have any questions about your health, consult the appropriate health specialist.
