Artificial intelligence is revolutionizing healthcare. But how can this potential be used in the best possible way?
For example, if patients are worried about their health, they can quickly obtain an initial diagnosis with a symptom-checker app. The app also helps patients decide whether they should see a specialist and, if so, which one. The app's algorithm provides the doctor with valuable information for taking the patient's medical history, even in the case of a rare disease.
But how far can this go? Will it provide the best choice of treatment based on algorithms, guidelines, and a collective decision from doctors in ‘the hub’? Where will this hub be? Will healthcare professionals on the ground (i.e., in hospitals and clinics, practicing in real life) ever see the hub? Will the hub be made up of computerized AI ‘doctors’? (Think Doogie Howser crossed with the Terminator.)
How far can we take AI before it starts to replace us?
In radiology, for example, artificial intelligence is already showing its worth during MRI examinations. An MRI scan consists of hundreds of thousands of individual data points, in which neural networks recognize patterns and calculate the probabilities of certain diseases. To do this, the software is first trained on numerous MRI images and their associated diagnoses.
When a new image is loaded into the AI-supported assistance system, it suggests the most likely diagnosis based on previous cases, surfacing important ‘past examples’ for the monitoring doctor and allowing the physician to speed up the treatment process. For now, doctors, with their specialty training and experience, are still required to make the final decision in a reflective manner, but how long before the AI is so foolproof that it no longer requires monitoring? It can make the decision and go.
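The "past examples" idea above can be sketched in miniature. This is a toy nearest-neighbor retrieval, not a real diagnostic system: the feature vectors and diagnosis labels are hypothetical stand-ins for image features and confirmed findings, and a production system would use a trained neural network rather than raw distances.

```python
# Toy sketch: surface the most similar previously diagnosed cases for a
# new scan, the way an AI assistant might show precedents to the
# reviewing physician. Feature vectors and labels are hypothetical.
import math
from collections import Counter

def euclidean(a, b):
    # Straight-line distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def most_likely_diagnosis(new_scan, past_cases, k=3):
    """Rank past cases by similarity to the new scan; return the
    majority diagnosis among the k closest, plus those k cases."""
    ranked = sorted(past_cases, key=lambda case: euclidean(new_scan, case[0]))
    nearest = ranked[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0], nearest

# Hypothetical "training" set: (feature vector, confirmed diagnosis).
past_cases = [
    ((0.9, 0.1, 0.2), "lesion"),
    ((0.8, 0.2, 0.1), "lesion"),
    ((0.1, 0.9, 0.8), "normal"),
    ((0.2, 0.8, 0.9), "normal"),
]

diagnosis, examples = most_likely_diagnosis((0.85, 0.15, 0.2), past_cases)
print(diagnosis)  # → lesion
```

The point of the sketch is the workflow, not the math: the system does not replace the physician's judgment, it ranks precedents so the monitoring doctor can reach a reflective decision faster.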
AI requires data, as much of it as possible. If it were fed all the data held by the entire global medical community, surely it would be best placed to decide on optimal diagnoses and treatment plans, based on the past. Think of what this could mean for society: quick diagnoses, quick treatment plans; cut away the fat and you’re left with the machine.
Current clinical examples of Artificial Intelligence include:
- Reading mammograms, with an accuracy that now matches, and indeed exceeds, radiologists’ assessments.
- Examining retinas of diabetic patients
- Reading pathology slides: in skin cancer, for example, where the average dermatologist may examine around 12,000 lesions in a lifetime, AI can assess 250,000 lesions in one pass and instantly recommend which patients should undergo a biopsy.
- AI can collate and summarize millions of similar cases across all forms of illness (cancer, surgery, drugs, etc.) and can identify patients at risk of various cancers, helping secure an early diagnosis (which we know is key in any cancer).
- In immunotherapy, AI may identify patients who could benefit from targeted therapies based on rare biomarkers; it may also spot infection patterns and flag at-risk patients who are candidates for antibiotics.
What about non-clinical examples of Artificial Intelligence?
- AI can analyze large datasets: from a statistical point of view, for example, AI can be used to uncover health disparities (income, regional differences, lifestyle, available medication and treatment options, white vs. non-white patients, private vs. public care, etc.). Whatever the variable, AI can surface the data immediately so that healthcare and government systems can act on unfairness or inequality.
- AI can process EMR, PACS, DICOM, and similar data immediately, routing it to the right department or storing it in the cloud for instant referral to the relevant physicians. This would allow seamless communication between GP, hospital department, clinical commissioning group, medical device technology group, supply chain, guideline body, and government department, all the way back to the patient.
What about AI at home?
- There is a change in how doctors and patients interact, heightened during COVID-19, with virtual healthcare connecting doctors to patients over the internet. We already see virtual bots in all walks of life, so why not in healthcare? There are already AI systems, heart-rate, blood-pressure, and oxygen monitors, apps, and wearables helping patients with fully automated medication refills, drug-compliance monitoring, and result notifications.
Is the machine superseding the doctor?
Let us go back to the beginning.
What is Artificial Intelligence? (Just in case you haven’t been on the planet for the last two decades or so.)
Siri, Alexa, Cortana, Google Assistant: the list goes on. These are all forms of AI, but the concept extends far beyond voice-activated systems. AI is the simulation of human intelligence processes by machines, especially computer systems. Almost all businesses today employ some form of AI, some more sophisticated than others.
Artificial Intelligence can be categorized as weak or strong.
Weak AI is a system designed and trained for a particular task, like a voice-activated assistant. It can answer your questions or obey a programmed command, but it cannot work without human interaction.
Strong AI is a system with generalized human cognitive abilities, i.e., it can solve tasks and find solutions without human intervention. A self-driving car is often cited as a step in this direction: it uses a combination of computer vision, image recognition, and deep learning to pilot a vehicle while staying in a given lane and avoiding unexpected obstacles like pedestrians.
AI has made its way into a variety of industries that benefit both businesses and consumers, like education, finance, law, manufacturing, and healthcare. In fact, many technologies incorporate AI, including automation, machine learning, machine vision, natural language processing, and robotics.
The application of AI raises legal, ethical, and security concerns. For instance, if an autonomous vehicle is involved in an accident, liability is unclear. Now consider medical practice: if a robotic arm used in knee surgery makes a mistake, is the operating physician at fault? And if the error stems from a latency issue within the operating system, who would be liable then?
Hackers, meanwhile, are using sophisticated machine learning tools to gain access to sensitive systems. Now imagine this happening during a surgical procedure: the repercussions could be deadly. What is the failsafe in that case? And what about data breaches? If hackers gained access to medical data, the consequences could be severe; patients’ medical records could be spread across social media, for example.
Despite the risks, there are very few regulations governing the use of AI tools; it is an open field, and thousands of scientific engineering facilities throughout the world are building their own versions of AI. Experts assure us that AI will simply improve products and services for the benefit of mankind, but life does not run that smoothly. Due diligence is needed to monitor and manage all AI systems: even if a system had a 100% success rate over 10–20 years, it would take only one incident to create a catastrophe.
How much, in this case, will doctors and healthcare decision makers allow Artificial Intelligence to take over?
The statistics show that middle-aged and older physicians are retiring early. There is a global physician shortage and a growing baby-boomer population, but no increase in medical school places. Is it more cost-effective for hospitals to invest in AI technology? Will clinical judgment ever be fully replaced by data and algorithms? What about patients who refuse to be treated by a machine and want to feel touched and cared for by a human? If the AI had a 100% success rate, would the patient feel differently, especially if the procedure were, say, surgery? AI cannot provide empathy and understanding, but what if one day it can? The entire healthcare community would be serving computers, not the other way round… now where is John Connor?