Artificial Intelligence (AI) for Disease Diagnosis (2023)


AI and ML programs designed to diagnose disease are trained on data such as patient history, lab results, scans, symptoms, and images of confirmed and suspected cases; when run on unseen data, they can return faster and more accurate information about disease.


    Nouman Iftikhar

    Bioinformatician by profession. I am passionate about Artificial Intelligence. To me, AI has emerged as the only solution for the accuracy and swiftness of human life activities.


11 min read

(Video) How AI is making it easier to diagnose disease | Pratik Shah


Health is undoubtedly the most important factor in life. In 2019, 58.39 million people died worldwide, and 2020 was projected to claim 59.23 million lives. The top three causes of death globally are cardiovascular, respiratory, and neonatal disorders. Despite all scientific research and development, diseases are still a major threat to existence. As AI is transforming almost every aspect of our lives, researchers have felt a dire need to incorporate artificial intelligence into combating disease. Advances in medical imaging, together with scalable and accurate deep learning for electronic health records, have made it possible to reuse medical data for training such programs.

“Early diagnosis saves lives”

Since artificial intelligence and machine learning are all about data and information, training specialized programs on data such as patient history, lab test results, scan results, and symptoms can yield more efficient and more accurate information about disease. This article is an attempt to highlight the benefits of AI in the healthcare industry.
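As a minimal sketch of that idea, the snippet below trains a simple classifier on tabular patient records and checks it on held-out cases. The file name "patients.csv" and its columns are hypothetical placeholders rather than a real dataset, and the model choice is only one reasonable option.

```python
# Minimal sketch: training a disease classifier on tabular patient data.
# The feature names and "patients.csv" file are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("patients.csv")        # columns: age, bp, glucose, ..., diagnosis
X = df.drop(columns=["diagnosis"])      # patient history, lab results, symptom flags
y = df["diagnosis"]                     # 1 = disease confirmed, 0 = healthy

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)

# Evaluate on unseen patients before the model is trusted for decision support.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```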

Why Do We Need Artificial Intelligence in Disease Diagnosis

In 2017, there were 1.566 physicians per 1,000 people globally, with Germany among the highest at 4.3 physicians per 1,000 people. These figures indicate that there is immense population pressure on doctors, and providing proper medical facilities to the whole population is still a distant goal.

Training a medical professional takes laborious effort, resources, and time, so fulfilling the population's needs is not yet possible. Artificial intelligence is aiding doctors in swift diagnosis, saving consultation time and mental effort while enabling quick and accurate decisions. The use of AI in healthcare is a need of the era.

Image processing and NLP are among the top AI applications in healthcare. Diseases that require medical imaging can be assisted by image processing, a technique within computer vision. The algorithm is supplied with many thousands of images of confirmed and suspected cases and is then trained on this dataset to predict the condition from new image data. Let us discuss in detail which diseases have been diagnosed with the help of AI.
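For image-based diagnosis specifically, such a pipeline is often a small convolutional network trained on labeled scans. The sketch below, using Keras, assumes an illustrative folder layout ("scans/train", "scans/val") in which each subfolder holds one class of images; it is a toy outline of the approach, not a clinically validated model.

```python
# Sketch of a two-class medical image classifier (confirmed vs. suspected cases).
# Directory names, image size, and epochs are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

train_ds = tf.keras.utils.image_dataset_from_directory(
    "scans/train", image_size=(224, 224), batch_size=32)   # subfolders = class labels
val_ds = tf.keras.utils.image_dataset_from_directory(
    "scans/val", image_size=(224, 224), batch_size=32)

model = tf.keras.Sequential([
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # probability the scan shows disease
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```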

How Does Image Processing (Computer Vision) Aid in Medical Diagnosis?

Image processing is a way for AI to read images and extract information or patterns from large sets of image data. In medical diagnosis, image sources include MRI, CT scans, X-rays, ultrasound, and other techniques. Clinicians use these image data to determine the presence of a certain disease. At the intersection of healthcare and technology, researchers are increasingly focused on artificial intelligence (AI) for medical diagnosis.

A) Artificial Intelligence for Cancer Detection

Deep learning is making lung cancer diagnosis more accurate and swift

Cancer is the second most common cause of death worldwide; lung cancer alone took 1.8 million lives in 2018, with Hungary recording the highest lung cancer death rate. For early detection as well as successful diagnosis, researchers are training algorithms on scan data taken from patients. Early screening of lung cancer has been shown to reduce mortality by 20-43%, and this screening has been added to United States screening guidelines.

A study on the use of deep learning for medical diagnosis was published in the journal Nature Medicine. In this study, researchers trained an algorithm on computed tomography scans of patients taken before and after the manual diagnosis of lung cancer. The dataset comprised 6,716 cases submitted for screening to the National Lung Cancer Screening Trial and 1,139 confirmed cases of lung cancer.

(Video) Using Artificial Intelligence to Diagnose Disease

The algorithm competed with six top radiologists in diagnosing lung cancer from computed tomography (CT) scans. When no prior imaging was available, the algorithm outperformed the radiologists with 94% accuracy, with an 11% reduction in false positives and a 5% reduction in false negatives. This study opened new horizons for deep learning as an aid to medical diagnosis. The algorithm has not yet been deployed at a national level, as more robust and accurate models are needed before it can be fully relied upon.
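To make the reported numbers concrete, the short sketch below shows how accuracy and false-positive / false-negative rates are computed from predicted versus confirmed labels; the label arrays are invented toy data, not figures from the study.

```python
# How the headline metrics are derived from predictions vs. ground truth.
# The label arrays below are made-up toy data, not results from the study.
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 1])   # 1 = cancer confirmed
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 1])   # model's calls on the CT scans

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy:", accuracy_score(y_true, y_pred))
print("false positive rate:", fp / (fp + tn))   # healthy scans flagged as cancer
print("false negative rate:", fn / (fn + tp))   # cancers the model missed
```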

Image processing and deep learning help in diagnosing skin cancer

Skin cancer is one of the most common types of cancer, with the highest incidence in Australia and New Zealand. Nonmelanoma skin cancer affected 1,198,073 people worldwide and killed 63,731 in 2020. The most basic way to diagnose skin cancer is a physical examination of the skin followed by a biopsy. Skin cancer is characterized by brown spots with a dark circumference, a color-changing mole that bleeds, or small irregular lesions.

Pigmented moles and lesions are a key target for machine vision to distinguish cancerous from non-cancerous lesions. The typical appearance of lesions can be recorded as a training set for algorithms. If training is successful and prediction scores are promising, medical science can offer non-invasive diagnosis of skin cancer (without biopsy).
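One common way to build such a lesion classifier on a modest dataset is transfer learning from a network pretrained on general images. The sketch below assumes a hypothetical "lesions/train" folder with benign and malignant subfolders and reuses ImageNet features from MobileNetV2; it illustrates the technique, not any specific commercial system.

```python
# Sketch: transfer learning for benign vs. malignant lesion photos.
# Folder layout and hyperparameters are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras import layers

train_ds = tf.keras.utils.image_dataset_from_directory(
    "lesions/train", image_size=(224, 224), batch_size=32)

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                           # reuse ImageNet features as-is

model = tf.keras.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1),    # MobileNetV2 expects inputs in [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),       # probability the lesion is malignant
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
model.fit(train_ds, epochs=5)
```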

MelaFind, an artificial intelligence melanoma detection system developed by Mela Sciences (now discontinued), was a good attempt at detecting skin lesions for skin cancer. MelaFind was equipped with an advanced imaging system (a Fuji FinePix Pro SR camera with a third-generation dermoscopic attachment) that could scan lesions up to a few millimeters below the skin using infrared light. These images were then passed to an AI algorithm already trained on a large skin cancer dataset.

MelaFind was only helpful for pigmented lesions between 0.2 cm and 2.2 cm in size; other sizes were not supported. The FDA had approved MelaFind to assist dermatologists in detecting potentially cancerous lesions. However, the system was not accurate enough to correctly diagnose all types of skin cancer. Moreover, many dermatologists feared that such a system could replace expert physicians, although the manufacturer never claimed that the machine would replace humans.

SkinVision, an AI-supported mobile application, is a milestone in AI medical diagnostics. A person with a skin abnormality takes a picture with a phone camera, and the app uploads the image to the cloud, where it is compared against thousands of training images. Within 30 seconds, SkinVision tells the person the expected risk associated with that abnormality. Using the power of machine learning and image recognition, SkinVision has successfully identified 6,000 skin cancer cases in the UK since 2016.

SkinVision's dataset is also enriched by user input: a collection of 3.5 million photos of skin lesions uploaded by 1.5 million users. With early detection of skin cancer risks, 95% of skin cancers can be treated successfully. SkinVision pricing is quite affordable, with three subscription plans:

  • Single Skin Check= €6,99 per check
  • 1 Year = €49,99/year (unlimited checks)
  • 3 months = €29,99

Deep Learning can dig deep into Gene Expression Data for Cancer Detection

Researchers have been trying for decades to link gene expression data (the products of DNA) to the presence of cancer. The human genome has 3.2 billion nucleotides (A, T, G, C), which collectively encode the proteins that carry out the body's functions, many of them acting as enzymes. Finding culprit genes or gene mutations in such huge data is one of the greatest challenges in molecular biology and cancer research.

Researchers from Oregon State University used deep learning to detect signals in gene expression data that separate cancerous cells from normal cells. The algorithm they used was a stacked denoising autoencoder (SDAE). An autoencoder is trained to reconstruct its own input, which makes it a useful pre-training module; an SDAE is a neural network built from multiple autoencoder layers, in which the output of one layer's encoder feeds the input of the next.
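A minimal sketch of one such denoising autoencoder layer, the building block of an SDAE, is shown below in Keras. The number of genes, the hidden size, the noise level, and the random placeholder data are all illustrative assumptions; in practice each trained encoder's output would feed the next autoencoder in the stack, followed by a small cancer-versus-normal classifier.

```python
# Sketch of a single denoising autoencoder layer for gene-expression data,
# the building block of an SDAE. Dimensions, noise level, and data are illustrative.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

n_genes = 20000                                       # expression values per sample (assumed)
x = np.random.rand(500, n_genes).astype("float32")    # placeholder for real RNA-seq data

inputs = tf.keras.Input(shape=(n_genes,))
noisy = layers.GaussianNoise(0.1)(inputs)             # "denoising": corrupt the input during training
encoded = layers.Dense(500, activation="relu")(noisy)          # compressed representation
decoded = layers.Dense(n_genes, activation="sigmoid")(encoded)

autoencoder = Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x, x, epochs=5, batch_size=32)        # learn to reconstruct the clean input

# In an SDAE, the encoder output below becomes the input of the next autoencoder
# layer; after stacking, a small classifier head (cancer vs. normal) is fine-tuned.
encoder = Model(inputs, encoded)
features = encoder.predict(x)
```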

The researchers analyzed RNA-seq data (RNA is the expressed form of DNA) from The Cancer Genome Atlas (TCGA) database for both cancerous and healthy cells: 1,097 breast cancer samples and 113 normal samples. By applying the SDAE, a deep architecture, the team was able to detect genes associated with breast cancer, and the algorithm also efficiently identified candidate biomarkers for therapeutic applications. With more input data, the algorithm can be further fine-tuned and made more accurate.

(Video) Artificial Intelligence Can Change the future of Medical Diagnosis | Shinjini Kundu | TEDxPittsburgh

Artificial Intelligence can differentiate between different brain tumors

In 2016, brain and other central nervous system (CNS) cancers affected approximately 330,000 people worldwide, killing 227,000 of them. A brain tumor does not kill a person at once; it takes time. The five-year survival rate is the most commonly reported outcome for brain tumor patients, and it decreases with age. Successful treatment of brain or CNS cancer depends heavily on a proper diagnosis.

Neurophysicians face a particular challenge in differentiating between radiation necrosis (brain cells killed by radiation) and brain cancer recurrence. On MRI the two look similar and are hard for doctors to distinguish, yet their treatments are quite different. If the distinction can be made accurately through non-invasive methods, invasive biopsies are no longer required.

Researchers from Case Western Reserve University developed a program that uses radiomic features of MRI images to detect the key differences. As discussed above, both anomalies are hard to tell apart with the naked eye, but this program can look deep into image features that the eye does not reveal. Radiomics is an approach that mines multiple signatures from radiographic medical images using data-characterization algorithms; these features are typically too subtle for the naked eye. To train the algorithm, images of 58 patients were taken from gadolinium T1WI, T2WI, and FLAIR MRI imaging sequences. Of the 58 cases, 43 were used as the training dataset while the remaining 15 formed an independent test set. All the cases were reviewed by an expert neuropathologist before being fed into the algorithm.

With this radiomic characterization, the support vector machine (SVM) classifier correctly diagnosed 12 of the 15 independent cases, outperforming two expert neuropathologists who correctly identified the anomaly in only 7 and 8 of the 15 cases, respectively. This comparison motivated the researchers to incorporate more data and extract more meaningful insight from radiomic features in order to reduce error in brain cancer diagnosis.
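The sketch below shows the general shape of such a pipeline: an SVM trained on radiomic feature vectors and applied to held-out cases. The feature matrix here is synthetic noise standing in for real radiomic features, and the 43/15 split simply mirrors the case counts mentioned above.

```python
# Sketch: SVM separating radiation necrosis from tumor recurrence using radiomic
# feature vectors. The feature matrix is synthetic; in the study the features were
# mined from gadolinium T1WI, T2WI, and FLAIR MRI sequences.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X_train = rng.normal(size=(43, 100))     # 43 training cases, 100 radiomic features each
y_train = rng.integers(0, 2, size=43)    # 0 = radiation necrosis, 1 = tumor recurrence
X_test = rng.normal(size=(15, 100))      # 15 held-out cases

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print(clf.predict(X_test))               # predicted label for each held-out case
```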

Electrocardiogram + cardiac MRI images for heart disease

An electrocardiogram (ECG or EKG) is a noninvasive test that records the rhythm and electrical activity of the heart. Doctors recommend this test if a person feels chest pain, lightheadedness, irregular heartbeats, a rapid pulse, or shortness of breath. A computer connected to the ECG displays the data as a waveform.

Since medical health records are now routinely maintained, developed countries have rich patient data ready to be fed into a machine for learning. By training an algorithm on medical data from healthy and diseased people, a program may become capable of diagnosing disease susceptibility and occurrence on its own.

There are two striking studies on the use of machine learning to analyze ECG waves. The deep learning methodology is inspired by the brain's architecture: neurons in the brain are interconnected through synapses, and each neuron is represented as a node in a deep learning network. Nodes are activated by the ECG data, and the data flow is learned as a relationship between the input (the ECG) and the output (the patient's condition or the wave-pattern classification).

1. Cardiac Rhythm classification using Deep Neural Networks

Hannun and colleagues studied the classification of cardiac rhythms using deep neural networks. They utilized a dataset of 91,232 single-lead ECG recordings from 53,877 patients who wore ambulatory ECG monitors. Ambulatory recordings were used so that no other details of the patients' medical history were available to the model.

The major goal was to determine whether the algorithm could detect 12 classes of cardiac rhythm. Trained on the 91,232 ECG recordings, the algorithm showed surprising results, detecting all classes of cardiac rhythm on a held-out dataset it had never seen. This performance was validated against expert cardiologists, whom the algorithm outperformed.
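The network in the study was a deep convolutional model; the much smaller 1D CNN below only illustrates the general shape of the approach, taking a single-lead waveform in and producing a rhythm class out. The signal length, number of examples, and random data are invented for the sketch.

```python
# Toy-scale sketch of single-lead ECG rhythm classification with a 1D CNN.
# The real study used a far deeper network; sizes and data here are invented.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

n_samples, sig_len, n_classes = 256, 3000, 12     # e.g. ~10 s at 300 Hz, 12 rhythm classes
x = np.random.randn(n_samples, sig_len, 1).astype("float32")   # placeholder ECG waveforms
y = np.random.randint(0, n_classes, size=n_samples)            # placeholder rhythm labels

model = tf.keras.Sequential([
    layers.Conv1D(32, 16, strides=2, activation="relu", input_shape=(sig_len, 1)),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, 16, strides=2, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(n_classes, activation="softmax"),   # one probability per rhythm class
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=32)
```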

By training algorithms on all possible cardiac rhythm patterns, researchers hope to reduce cardiac events through early detection of heartbeat abnormalities. The Apple Watch Series 4 and later ship with an ECG app that can flag atrial fibrillation from the recorded heart rhythm, allowing continuous monitoring of cardiac rhythms.

(Video) How Viz.ai Is Using Artificial Intelligence To Detect Diseases Faster And Save More Lives | Forbes

2. Multi-lead ECG dataset training for identifying blood flow from the heart

Left ventricular systolic dysfunction (LVSD) is a life-threatening condition in which the heart's left ventricle is unable to pump blood normally. The normal left ventricular ejection fraction is between 55% and 70% of total blood volume. In LVSD, the ejection fraction falls to around 35% or lower, posing severe risks to life.

Attia and coworkers used the 12-lead ECG, which records not only cardiac rhythm but also information related to the ejection fraction. The study aimed to identify LVSD patients from ECG data by training a convolutional neural network. The network was trained on ECG data from 44,959 patients and then evaluated on data from 52,870 patients it had never seen. The results showed an AUC of 0.93 for detecting the abnormality from ECG patterns.
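The reported figure of 0.93 is the area under the ROC curve (AUC). The sketch below shows how that metric, and a simple operating threshold, can be computed for a binary LVSD screen; the scores and labels are simulated, not data from the study.

```python
# Sketch: evaluating a binary LVSD screen with ROC AUC and picking a threshold.
# Labels and model scores below are simulated, not figures from the study.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=1000)    # 1 = low ejection fraction (LVSD), 0 = normal
scores = (y_true * rng.normal(0.7, 0.2, 1000)
          + (1 - y_true) * rng.normal(0.3, 0.2, 1000))   # simulated model outputs

print("AUC:", roc_auc_score(y_true, scores))

fpr, tpr, thresholds = roc_curve(y_true, scores)
best = np.argmax(tpr - fpr)               # Youden's J: one simple way to pick a cutoff
print("operating threshold:", thresholds[best], "sensitivity:", tpr[best])
```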

These studies encourage researchers to incorporate AI and machine learning into medical diagnosis, which will help reduce mortality as well as permanent organ impairment.

B) AI for Diabetic Retinopathy Detection

As the name indicates, diabetic retinopathy (DR) is a condition of the eye's retina caused by diabetes: the blood vessels of the light-sensing tissue are damaged, resulting in vision loss. According to the Diabetes Atlas, there were 463 million people living with diabetes globally in 2019. Type 2 diabetes accounts for 90 to 95 percent of all cases, and among type 2 patients, 21% already have diabetic retinopathy at the time their diabetes is diagnosed.

Usually, DR is diagnosed through fluorescein angiography (cost: $479 to $597) or optical coherence tomography (OCT; cost: about $471 on average). In both tests, the physician observes the retina through dilated pupils and takes images. A dye is then injected intravenously and flows through the retinal blood vessels, which lets doctors see whether vessels are blocked or leaking. These diagnostic procedures are costly and require a detailed examination.

Artificial intelligence aims to aid medical diagnosis by using existing data (images or statistics) as the training set and then predicting a diagnosis for new data. The images from tests that have already been performed are of great value to scientists: by utilizing this rich data, researchers can design algorithms for accurate and swift DR diagnosis. A few examples are given below:

IDx-DR

Created by IDx Technologies, IDx-DR is AI-supported medical diagnostic software approved by the FDA for diagnosing diabetic retinopathy. The system is built around a fundus camera (also known as a retinal camera), a low-power microscope with an attached camera. The operator takes two images of each eye and submits them to the IDx-DR client, which analyzes the images against the training set and returns a result in less than one minute. Patients with a negative result or less-than-mild DR are advised to retest after 12 months; those with mild DR or worse are advised to visit an eye specialist.

In a study of 900 people, IDx-DR detected mild DR with 87% sensitivity and 90% specificity. The same prototype is also used for diagnosing age-related macular degeneration (AMD) and glaucoma.

EyeArt Eye Screening System

Created by EyeNuk, EyeArt is another strong AI technology for DR diagnosis. It functions much like IDx-DR, but it requires neither pupil dilation nor expert human professionals. EyeArt takes images with a fundus camera and feeds them into the algorithm; the EyeArt AI not only analyzes the images for DR but also grades the extent of the lesions using international clinical standard protocols. A PDF report is generated within one minute.

C) Natural Language Processing for symptoms and disease expectancy

As the name indicates, NLP (natural language processing) is a branch of artificial intelligence that learns the structure of human language in order to understand, interpret, and manipulate words and phrases. Disease diagnosis is largely based on symptoms, and these symptoms are usually described verbally to a doctor. If a computer program is fed symptom phrases together with their diagnoses, it can be trained to predict the likely disease for a new set of symptoms. This approach has been widely used in healthcare AI, with medical chatbots at the top of the list.

(Video) Artificial intelligence in healthcare: opportunities and challenges | Navid Toosi Saidy | TEDxQUT

These medical chatbots are intended to lessen the burden on hospitals and clinics when a person is not feeling well and thinks they need medical attention. Using a medical chatbot, a person enters their symptoms through a conversational interface. The conversational data (for example, sore throat, headaches, acidity) is then sent to a medical NLP model trained to suggest a diagnosis from symptoms. If the condition is judged to be a medical emergency, the person is advised to see a doctor for further assistance. Some AI chatbots also offer lifestyle advice such as diet and exercise, while others offer navigation to the nearest medical facility.
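A toy version of the underlying idea, mapping free-text symptom descriptions to a triage label with a bag-of-words model, is sketched below. The example phrases, labels, and model choice are invented for illustration; production chatbots rely on far larger datasets and more sophisticated NLP.

```python
# Sketch: mapping free-text symptom descriptions to a triage label with a simple
# bag-of-words pipeline. The tiny example phrases and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

symptoms = [
    "sore throat and mild headache",
    "crushing chest pain and shortness of breath",
    "acidity after meals",
    "sudden weakness on one side and slurred speech",
]
labels = ["self-care", "emergency", "self-care", "emergency"]

triage = make_pipeline(TfidfVectorizer(), LogisticRegression())
triage.fit(symptoms, labels)

# Predicted triage label for a new symptom description.
print(triage.predict(["tightness in chest and trouble breathing"]))
```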

Here are the names of top medical chatbots:

Chatbot Name   | Company        | Interface                     | Platforms                                                        | Subscription    | Domain
OneRemission   | keenethics     | Text messaging                | iOS                                                              |                 | Cancer
Youper         | Youper         | Text messaging                | iOS                                                              | Free (with ads) | Psychology
Babylon Health | Babylon Health | Text messaging, video calling | Android, iOS                                                     | £149 annually   | Medical
Florence       | Pact Care      | Text messaging                | Messenger, Skype, Kik                                            | Free            | Drug, medication
YourMD         |                | Instant messaging             | iOS, Android, Facebook Messenger, Slack, KIK, Telegram, browser  | Free            | Medical
Ada Health     | Ada            | Q&A, text messaging, MCQs     | Android                                                          | Free            | Medical
Molly          | Sensely        | Text, speech                  | Web, Android, iOS                                                | Free            | Medical

With advanced scanning technologies and information management approaches, the future of AI in healthcare seems brighter than before.

FAQs

Can AI predict diseases?

Artificial intelligence techniques are used to predict diseases based on available patient data. Medical diagnosis requires physicians and medical laboratories for testing, while artificial intelligence-based predictive systems are used for the early prediction of diseases.

How is AI used in pathology?

As in other domains, artificial intelligence is becoming increasingly important in medicine. In particular, deep learning-based pattern recognition methods can advance the field of pathology by incorporating clinical, radiologic, and genomic data to accurately diagnose diseases and predict patient prognoses.

Is AI used in medical imaging?

AI-based diagnosis reduces these risks. In fact, deep learning algorithms have become a methodology of choice for radiology image analysis. This includes different image modalities such as CT, MRI, PET, and ultrasonography, and different tasks such as tumor detection, segmentation, and disease prediction.

What type of AI is used in healthcare?

Machine learning is one of the most common forms of artificial intelligence in healthcare. It is a broad technique at the core of many approaches to AI and healthcare technology and there are many versions of it.

How does AI help early detection?

AI can provide many benefits in early detection by being able to show any risks that an individual may have. A study from 2017 involving patients at risk of stroke used AI algorithms based on symptoms and genetic history to place them into an early detection stage.

What types of predictions is AI being used for?

Artificial intelligence (AI) algorithms enable computers to predict certain outcomes based on large amounts of data. In construction, AI may be used to predict the risk of project cost overruns, the risk of on-site accidents, and the need for maintenance over time.

Will pathology be replaced by AI?

Although total replacement appears an impossibility, the inexorable progression of AI technology will undoubtedly alter the practice of pathology over the coming decades. The laboratory of the future may bear little resemblance to the laboratory of today.

Can pathology be automated?

Automated pathology systems bring immediate cost savings

So overall, improving workflows with workflow automation systems can translate into considerable savings in labor and materials and also reduce the risk of costly errors in patient care.

What is AI in radiology?

Talk of artificial intelligence (AI) has been running rampant in radiology circles. Sometimes referred to as machine learning or deep learning, AI, many believe, can and will optimize radiologists' workflows, facilitate quantitative radiology, and assist in discovering genomic markers.

How is AI used in MRI?

AI can have a significant role in this, in view of its potential to accelerate MRI scanning, image post-processing and reporting, introduce novel biomarkers and incorporate those in decision-making and prognostication models.

How accurate is AI in radiology?

Overall, it made the correct diagnosis 92 percent of the time, as opposed to the physicians' 77.5 percent of the time, making the machines 19 percent more accurate than the humans.

When did AI start in medicine?

AI History in Healthcare

AI can be traced to the 1950s when Alan Turing, a British mathematician, questioned whether machines could think. It wasn't until over a decade later that AI was incorporated into life sciences, and in the 1970s, it made its way into healthcare.

How does digital pathology work?

Digital pathology is the process of digitizing glass slides using a whole slide image scanner and then analyzing the digital images using an image viewer, typically on a computer monitor or mobile device.

What exactly does AI mean?

Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition and machine vision.

Videos

1. Artificial Intelligence In Healthcare | Examples Of AI In Healthcare | Edureka
(edureka!)
2. Medical Diagnosis With AI?
(Wall Street Journal)
3. AI in Healthcare: Top A.I. Algorithms In Healthcare - The Medical Futurist
(The Medical Futurist)
4. The application of AI in diagnostic Imaging.
(GE Healthcare)
5. Machine Learning AI Deep Patient Makes It Possible to Diagnose and Predict Diseases
(Ellen MacArthur Foundation)
6. Digital Diagnostics - Forging a New Path: Healthcare AI Done the Right Way
(WebsEdgeMedicine)