
The use of AI in medicine holds enormous promise, but presents equally large challenges.
In a classic case of balancing the costs and benefits of science, researchers are grappling with the question of how artificial intelligence in medicine can and should be applied to clinical patient care – despite knowing that there are examples where it puts patients’ lives at risk.
The question was central to a recent University of Adelaide seminar, part of the Research Tuesdays lecture series, titled “Antidote AI”.
As artificial intelligence grows in sophistication and usefulness, we have begun to see it appearing more and more in everyday life. From AI traffic control and ecological studies, to machine learning finding the origins of a Martian meteorite and reading Arnhem Land rock art, the possibilities for AI research seem endless.
Perhaps some of the most promising and controversial uses for artificial intelligence lie in the medical field.
The genuine excitement clinicians and artificial intelligence researchers feel at the prospect of AI assisting in patient care is palpable and honourable. Medicine is, after all, about helping people, and its ethical foundation is “do no harm”. AI is surely part of the equation for advancing our ability to treat patients in the future.
Khalia Primer, a PhD candidate at the Adelaide Medical School, points to many areas of medicine where AI is already making waves. “AI systems are discovering crucial health risks, detecting lung cancer, diagnosing diabetes, classifying skin disorders and determining the best drugs to fight neurological disease.
“We may not need to worry about the rise of the radiology machines, but what safety concerns need to be considered when machine learning meets medical science? What risks and potential harms should healthcare workers be aware of, and what solutions can we bring to the table to make sure this exciting field continues to develop?” Primer asks.
These challenges are compounded, Primer says, by the fact that “the regulatory environment has struggled to keep up” and “AI training for healthcare workers is virtually nonexistent”.
As both a clinician by training and an AI researcher, Dr Lauren Oakden-Rayner, Senior Research Fellow at the University of Adelaide’s Australian Institute for Machine Learning (AIML) and Director of Medical Imaging Research at the Royal Adelaide Hospital, balances the pros and cons of AI in medicine.
“How do we talk about AI?” she asks. One way is to highlight the fact that AI systems perform as well as, or even outperform, humans. The second way is to say AI is not intelligent.
“You might call these the AI ‘hype’ position and the AI ‘contrarian’ position,” Oakden-Rayner says. “People have made whole careers out of being in one of these positions now.”
Oakden-Rayner explains that both of these positions are true. But how can both be correct?
The problem, according to Oakden-Rayner, lies in the way we compare AI to humans. It’s a fairly understandable baseline, given that we are human, but the researcher insists that this only serves to confuse the AI-scape by anthropomorphising AI.
Oakden-Rayner points to a 2015 study in comparative psychology – the study of non-human intelligences. That study showed that, for a tasty treat, pigeons could be trained to spot breast cancer in mammograms. In fact, the pigeons took just two to three days to reach expert performance.
Of course, no one would claim for a second that pigeons are as smart as a trained radiologist. The birds have no idea what cancer is or what they are looking at. “Morgan’s Canon” – the principle that the behaviour of a non-human animal should not be interpreted in complex psychological terms if it can instead be interpreted with simpler concepts – says that we should not assume a non-human intelligence is doing something smart if there is a simpler explanation. This certainly applies to AI.
Oakden-Rayner also recounts an AI that looked at a picture of a cat and correctly identified it as a feline – before a tiny tweak to the image left it entirely certain it was looking at a picture of guacamole. That is how sensitive AI can be to patterns in its input. A hilarious cat/guacamole mix-up becomes much less funny when it is replicated in a medical setting.
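For readers curious about the mechanics, the sketch below is a toy illustration of that sensitivity – entirely synthetic data, nothing to do with the actual cat/guacamole system. It trains a simple logistic-regression classifier on fake “images”, then nudges every pixel slightly against the model’s own gradient (the classic fast-gradient-sign trick) and watches the verdict swing.

```python
# Toy illustration (assumed setup, synthetic data): why a pattern-hungry
# classifier can be flipped by a small, targeted change to its input.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 8x8 "images" (64 pixels): class 1 ("cat") has two slightly
# brighter centre pixels; class 0 ("guacamole") does not.
n, d = 2000, 64
X = rng.normal(0.0, 1.0, (n, d))
y = rng.integers(0, 2, n)
X[y == 1, 27:29] += 1.0  # the faint pattern the model latches onto

# Train a logistic-regression classifier by plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(class 1)
    g = p - y                               # gradient of log-loss w.r.t. logits
    w -= 0.1 * (X.T @ g) / n
    b -= 0.1 * g.mean()

logit = lambda v: float(v @ w + b)  # positive -> "cat", negative -> "guacamole"

# Take a "cat" image and push every pixel slightly against the gradient of
# the cat score (an FGSM-style perturbation). Each pixel moves by only 0.4,
# small next to the pixel noise, which has standard deviation 1.
x = X[y == 1][0]
x_adv = x - 0.4 * np.sign(w)

print(f"clean logit:     {logit(x):+.2f}")      # usually a clear "cat"
print(f"perturbed logit: {logit(x_adv):+.2f}")  # typically flips negative
```

A small shift in every pixel – one a human eye would shrug off – is enough to drag the model across its decision boundary, because the model cares only about the pattern, not the cat.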
This leads Oakden-Rayner to ask: “Does that put patients at risk? Does that introduce safety concerns?”
The answer is yes.
An early AI tool used in medicine was employed to look at mammograms, just like the pigeons. In the early 1990s, the tool was given the green light for use in detecting breast cancer in hundreds of thousands of women, based on laboratory experiments that showed radiologists improved their detection rates when using the AI. Great, right?
Twenty-five years later, a 2015 study looked at the real-world application of the program, and the results weren’t so good. In fact, women were worse off where the tool was in use. The takeaway for Oakden-Rayner is that “these technologies do not often work the way we expect them to”.
Additionally, Oakden-Rayner notes that there are some 350 AI systems on the market, but only about five have been subjected to clinical trials. And AI seems to perform worst for the patients who are most at risk – in other words, the patients who need the most care.
AI has also been shown to be problematic when it comes to different demographic groups. Commercially available facial recognition systems were found to perform poorly on black people. “The companies that actually took that on board went back and fixed their systems by training on more diverse data sets,” Oakden-Rayner notes. “And these systems are now much more equal in their output. No one thought about even trying to do that when they were building the systems originally and putting them on the market.”
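The first step in fixing that kind of disparity is simply measuring it. Here is a minimal sketch of such a subgroup audit – the numbers are entirely made up, not drawn from any real system – showing how a respectable headline accuracy can hide much worse performance for an under-represented group.

```python
# Minimal subgroup-audit sketch (hypothetical data): a single overall
# accuracy figure can mask poor performance on one demographic group.
import numpy as np

rng = np.random.default_rng(1)

# Imagine 1,000 patients from two demographic groups, where the model was
# trained mostly on group "A" data and generalises poorly to group "B".
group = rng.choice(["A", "B"], size=1000, p=[0.8, 0.2])
truth = rng.integers(0, 2, size=1000)          # true diagnosis (0 or 1)
acc = np.where(group == "A", 0.92, 0.70)       # assumed per-group accuracy
pred = np.where(rng.random(1000) < acc, truth, 1 - truth)

print(f"overall accuracy: {(pred == truth).mean():.1%}")  # ~87%, looks fine
for g in ("A", "B"):
    m = group == g
    print(f"group {g} accuracy: {(pred[m] == truth[m]).mean():.1%}")
```

Breaking the single number apart is exactly what revealed the facial recognition failures – and what made retraining on more diverse data an obvious fix.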
Even more concerning is an algorithm used in the US by judges to determine sentencing, bail and parole, and to predict the likelihood of recidivism in individuals. The system is still in use despite 2016 media reports that it was more likely to wrongly predict that a black person would reoffend.
So, where does this leave things for Oakden-Rayner?
“I’m an AI researcher,” she says. “I’m not just someone who pokes holes in AI. I really like artificial intelligence. And I know that the vast majority of my talk is about the harms and the risks. But the reason I’m like this is because I’m a clinician, and so we need to understand what can go wrong, so we can prevent it.”
Key to making AI safer, according to Oakden-Rayner, is putting standards of practice and guidelines in place for publishing clinical trials involving artificial intelligence. And, she believes, this is all very achievable.
Professor Lyle Palmer, a genetic epidemiology lecturer at the University of Adelaide and also a Senior Research Fellow at AIML, highlights the role South Australia is playing as a centre for AI research and development.
If there’s one thing you need for good artificial intelligence, he says, it’s data. Diverse data. And lots of it. South Australia is a prime location for large population studies given the large troves of medical history in the state, says Palmer. But he also echoes the sentiments of Oakden-Rayner that these studies have to include diverse samples to capture the differences between demographics.
“What a cool thing it would be if everyone in South Australia had their own homepage, where all of their medical results were posted and we could engage them in medical research, and a whole range of other activities around things like health promotion,” Palmer says excitedly. “This is all possible. We’ve had the technology to do this for ages.”
Palmer says this technology is particularly advanced in Australia – especially in South Australia.
This historical data can help researchers determine, for example, the lifetime course of a disease, to better understand what causes diseases to develop in different individuals.
For Palmer, AI will be critical in medicine given the “hard times” in healthcare, including in the drug development pipeline, which is seeing many treatments not reaching the people who need them.
AI can do wonderful things. But, as Oakden-Rayner warns, it’s a mistake to compare it to humans. The tools are only as good as the data we feed them and, even then, they can make bizarre mistakes because of their sensitivity to patterns.
Artificial intelligence will surely transform medicine – more slowly, it seems, than some have suggested in the past. But just as the technology itself is intended to care for patients, the human creators of the technology are required to ensure that the technology is itself safe and not doing more harm than good.
Evrim Yazgin
Evrim Yazgin has a Bachelor of Science majoring in mathematical physics and a Master of Science in physics, both from the University of Melbourne.