Artificial intelligences as reliable as doctors, really?

Autonomous cars, voice and image recognition, personal assistants, game-playing and chess-winning programs, software that predicts your purchases or natural disasters… AI is currently making giant leaps. And if there is one area where it stands out, it is health.

Some startups and researchers are now thinking very seriously about using AI and chatbots for the questioning phase of teleconsultations, on platforms such as Doctolib or Qare: a way to deal with minor ailments and relieve overcrowded emergency rooms, at the risk of making diagnostic errors. But when will we see robot doctors that handle everything from A to Z?

From diagnostic assistance to diagnosis

While computer-assisted diagnosis is already deployed in some hospital departments, some thinkers, transhumanists for the most part, believe that AI may eventually be able to identify disease risks on its own. And the idea is taken very seriously.

In his 2018 report on AI to the French government, mathematician Cédric Villani presents health as a "priority" field of application, one that could be revolutionized by big data and algorithms. Machine learning could thus allow a machine to analyze and decipher on its own, notably in medical imaging, elements that have escaped human eyes.

Thanks to deep learning, Watson, the IBM supercomputer, has been scanning thousands of X-rays and MRIs for cancerous tumors since 2017. Through its division specializing in medical AI, Watson Health Solutions, IBM trains its AI to recognize pathologies in images from MRIs or X-rays. The algorithm relies on the experience it has acquired, but also on statistical databases, to diagnose the symptoms of a disease early. Like the Swiss start-up Sophia Genetics, Watson also analyzes millions of DNA sequences from cancer clinics. The goal: to provide the medical profession with advice for "personalized medicine".
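To give a concrete idea of how such image-based systems are typically built, here is a minimal sketch of a convolutional classifier trained to flag suspicious scans. It is purely illustrative: the tiny network, the synthetic "scan" tensors and the tumor/no-tumor labels are all assumptions made for the example, not IBM's actual pipeline.

```python
# Illustrative only: a tiny CNN that labels synthetic "scans" as tumor / no tumor.
# This is not Watson's pipeline; real systems use far larger models and curated data.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # 1-channel grayscale scan
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 1),                  # one logit: "tumor present?"
)

# Synthetic stand-in data: 64 random 64x64 "scans" with random labels.
images = torch.randn(64, 1, 64, 64)
labels = torch.randint(0, 2, (64, 1)).float()

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):                            # a few passes over the toy data
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

# In practice the hard part is everything around this loop: expert-labelled images,
# validation on held-out patients, and calibration before any clinical use.
```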

But at the Cleveland Clinic (Ohio, USA), Watson is also used by doctors for something else: formulating diagnoses. To do so, it draws on the patient's medical record, the analysis of similar cases and the available scientific literature. At Google, another project, Patient Rescue, aims to analyze, via the AI of its DeepMind division (whose AlphaGo program is the undisputed champion of the game of Go), the data of 1.6 million patients in order to build an alert system for liver patients. Through DeepMind, Google/Alphabet has also partnered with several London hospitals to test its diagnostic software.

At the Pitié-Salpêtrière hospital in Paris, a "diagnostic aid" AI developed by Microsoft is also at work in the oncology department. On the iBiopsy platform, medical data is processed by algorithms that attempt to isolate "biomarkers" signalling the progression of certain cancers. According to Professor Olivier Lucidarme, head of the multipurpose radiology and oncology department at Pitié-Salpêtrière, using this platform has made it possible to "significantly improve the diagnostic capacity for rare diseases".

AI has also won over the general public and the world of startups, which design diagnostic-assistance applications such as Doctor Clic, HumanDx and Babylon Health – whose CEO, Ali Parsa, does not hesitate to claim that the machines are "neutral in their judgments, unlike doctors, and free from confirmation bias" (the tendency to unconsciously seek to confirm one's own assumptions).

Some tech companies are also designing AI-based systems to diagnose psychiatric conditions, presenting their chatbots as "robot shrinks". Some research projects aim to detect users' feelings precisely: this is the case of a study conducted at the University of Melbourne, whose purpose is to "analyze feelings" through AI in order to detect signs of schizophrenia in written conversations. At MIT, researchers are likewise using artificial intelligence to try to recognize the symptoms of depression in the way a patient talks: the system analyzes audio or textual data from interviews and compares it with data from people known to be ill, in order to establish similarities of language. This should ultimately allow a robot to "accurately predict whether the respondent is suffering from depression, without having to go through the questions and answers", one of the scientists told TechCrunch. For now, the AI's accuracy is 71%, but it should improve over time…
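As a rough illustration of the "similarities of language" approach described above, the sketch below trains a bag-of-words classifier to separate two small sets of statements. Everything in it is an assumption for the example – the toy sentences, the labels and the choice of a TF-IDF plus logistic-regression model – and it bears no relation to the actual MIT or Melbourne systems.

```python
# Illustrative only: a toy text classifier in the spirit of "finding similarities
# of language" between two groups, using scikit-learn. Not the MIT model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical, hand-written snippets standing in for interview transcripts.
texts = [
    "I feel tired all the time and nothing interests me anymore",
    "I can't sleep and I keep thinking everything is my fault",
    "I had a great week, we went hiking with friends",
    "Work is busy but I'm enjoying the new project",
]
labels = [1, 1, 0, 0]  # 1 = flagged group, 0 = control group (toy labels)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Probability that a new snippet resembles the flagged group's language.
print(model.predict_proba(["lately I just feel empty and exhausted"])[0][1])
```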

AIs as reliable as doctors, really?

So, tomorrow, would you be ready to be treated by Doctors Google, IBM and Microsoft? According to some researchers, thanks to its ability to digest millions of data points in the blink of an eye, AI could deliver more reliable medical diagnoses than human physicians. "Some medical disciplines are becoming obsolete: those based on the analysis of signals, such as radiology, pathology and even dermatology, could be largely automated," Antoine Geissbuhler, chief physician at the Geneva University Hospitals (HUG), told Maddyness, for example. The only obstacle: access to medical data, which is highly regulated in most countries but essential for an AI to work properly.

In a very recent study published in The Lancet Digital Health, British researchers at University Hospitals Birmingham NHS Foundation Trust report that the success rate of an AI now reaches 87%, compared with 86% for a human doctor. Alastair Denniston, co-author of the study, explains that he pooled data from 20,000 scientific studies on the detection of human diseases, then focused on the "promising results" of 14 of them, in order to arrive at this estimate of the efficiency and accuracy of deep learning in diagnosis.

According to the British researchers, AI gives its "green light" for appropriate treatment in 93% of cases, while doctors do so in 91% of cases. Promising results, therefore, for the diagnosis of rare diseases and of pathologies such as cancers or eye diseases.
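For readers wondering what figures like "87% versus 86%" actually measure, meta-analyses of this kind typically report sensitivity (the share of sick patients correctly flagged) and specificity (the share of healthy patients correctly cleared). The short calculation below shows how such percentages come out of a confusion matrix; the patient counts are invented for the illustration and are not taken from the Lancet study.

```python
# Toy confusion matrix for a hypothetical diagnostic test on 1,000 patients.
true_positives  = 174   # sick patients correctly flagged
false_negatives = 26    # sick patients missed
true_negatives  = 744   # healthy patients correctly cleared
false_positives = 56    # healthy patients wrongly flagged

sensitivity = true_positives / (true_positives + false_negatives)   # 174/200 = 0.87
specificity = true_negatives / (true_negatives + false_positives)   # 744/800 = 0.93

print(f"sensitivity: {sensitivity:.0%}")  # 87% - analogous to the "success rate"
print(f"specificity: {specificity:.0%}")  # 93% - analogous to the "green light" rate
```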

But other experts qualify these results. They estimate that the research was conducted on too few cases, because in theory, to perform well, an AI must learn from far more data. Xiaoxuan Liu, co-author of the NHS Foundation Trust study, himself admits in the Guardian that there is "obviously still a lot of grey area about the performance of artificial intelligence on medical diagnoses compared to humans". He also notes that there is still a lack of quality studies to say whether AI would really be effective at making a diagnosis from medical images. Before adding, in complete contradiction with himself: "but our message is very clear: AI is able to establish diagnoses in a qualitative way, even more qualitative than a human".

It should also be noted that in this study's comparisons, the doctors did not receive any "extra" patient information, which they would normally have obtained in the real world and which would have allowed them to deliver a more accurate diagnosis. And why would an AI not make mistakes?

The importance of human intuition

According to MIT researchers, AI will not replace doctors, because it will never have the health professional's "medical experience", nor their "human emotional intelligence", nor their empathy, nor their "intuition" and feelings – which make it possible to detect things that are invisible to algorithms – when facing patients who describe their symptoms differently depending on their personality and the context.

By analyzing doctors' written notes on intensive-care patients, US scientists found that physicians' "gut feelings" about the condition of a particular patient played an "important role in determining" the number of tests they ordered. "A doctor's experience, as well as their years of training and practice, allow them to know more precisely, beyond the list of symptoms, whether you are doing well or not," says Mohammad Ghassemi, a member of the MIT Institute for Medical Engineering and Science (IMES) team. According to this study, "it is clear that doctors are using something that is not in the data to guide part of their decision-making, and that some invisible things are detectable only by their intuition and instinct." And he wonders: "would it be possible for a machine to have instinctive intuitions and feelings?"

Of course, it remains possible to train an AI to imitate a physician's "gut feelings", using signals that are not in the raw data, such as a patient's speech or their gaze (on video). But as for claiming that they will one day be truly "instinctive", the risk of seeing them get it wrong for a long time to come is great. Moreover, if the NHS Foundation Trust researchers retained only 14 studies out of 20,000 (0.07%), it is mainly because the AI systems used made many errors of judgment.

Even those who design diagnostic-aid applications are aware of this limitation. Dr Arthur André, neurosurgeon and co-creator of Citizen Doc, a system that uses AI and questionnaires to find, "from a symptom", a "diagnostic solution", states that "health chatbots will above all help streamline the organizational problems present today in the medical world, such as appointment booking or information sharing", and that they will only ever be "excellent triage tools, allowing the doctor to offload part of the minor ailments". Something complementary, therefore – and only complementary – meant to suggest diagnostic "leads" to human doctors.

At the NHS Foundation Trust (financially supported by the UK government, to the tune of 282 million euros), researchers are also cautious, even though their somewhat excessive optimism helps feed the fantasy of a "robot doctor". Alastair Denniston thus speaks only of the (uncertain) potential of AI in the health sector to "help" sort medical images, leaving doctors more time to "interact" with their patients. Proof that even for British scientists, the human relationship is not about to become the preserve of machines. In any case, given the still small number of studies comparing AI to doctors, it is far too early to say that an algorithm would do better than a physician.