AI & Healthcare

From New Media Business Blog


Artificial Intelligence (AI) can support and complement human healthcare providers in many ways, including but not limited to providing faster service, decreasing wait times, providing higher precision in surgery, diagnosing illnesses, and analyzing data to identify trends or genetic information that would predispose someone to a particular disease. This is done through AI technology such as automation, machine learning, machine vision, natural language processing, and robotics. When saving minutes can mean saving lives, AI and machine learning can be transformative not only for healthcare but for every single patient.



AI & Dosage Error Reduction

Dosage errors and harmful drug interactions make up 37% and 11% of all preventable medication errors, respectively [1]. These errors can be avoided by employing AI technology to assist healthcare professionals when prescribing medication: calculating the proper dosage of each medication and ensuring there is no harmful interaction with other medications a patient may be taking. As doctors are only human, it would be beneficial to have AI provide an initial recommendation, with the doctor then giving a second, more considered opinion.

The Issue With Doctors

Normally, patients are issued prescriptions based on questioning and routine check-ups from healthcare professionals. However, this leaves room for human error and incorrect diagnoses. Doctors are busy and typically cannot spend enough time with each patient, as doctors' offices are full and visits must be accelerated to meet demand. With the introduction of AI assistants, a major portion of the doctor's time can be freed up for assessing patients and treating serious cases in more detail. Time spent searching patients' files, asking questions, and recalling proper dosages and other clinical details can instead be handled by an AI assistant that provides this information quickly. In addition, AI assistants are less likely to make mistakes than doctors, whose minds may wander or become distracted due to excessive work and fatigue.

How AI Can Help

To decrease human dosage error, AI can be used to store a large repository of patient medical histories along with information on illnesses, their causes, and treatments. The AI assistant can then generate a proposed treatment based on the inputted information, which the doctor reviews and finalizes as a treatment or prescription. This would reduce the chances of misdiagnosis and other unwanted medical errors.
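As a rough illustration of the kind of check such an assistant could run before a prescription reaches the patient, the sketch below compares a proposed order against a hypothetical interaction table and a weight-based dosing limit. The drug names, limits, and interaction pairs are illustrative assumptions, not clinical guidance.

    # Illustrative sketch only (hypothetical data and rules, not a real clinical system):
    # check a proposed prescription against a patient's current medications and a
    # weight-based daily dose limit, returning warnings for the reviewing clinician.
    INTERACTIONS = {frozenset({"warfarin", "ibuprofen"}),
                    frozenset({"simvastatin", "clarithromycin"})}
    MAX_DAILY_MG_PER_KG = {"ibuprofen": 40, "acetaminophen": 60}  # assumed limits

    def check_prescription(drug, daily_mg, weight_kg, current_meds):
        """Return a list of warnings for the reviewing clinician."""
        warnings = []
        for med in current_meds:
            if frozenset({drug, med}) in INTERACTIONS:
                warnings.append(f"{drug} may interact with {med}")
        limit = MAX_DAILY_MG_PER_KG.get(drug)
        if limit is not None and daily_mg > limit * weight_kg:
            warnings.append(f"{daily_mg} mg/day exceeds {limit} mg/kg/day for a {weight_kg} kg patient")
        return warnings

    print(check_prescription("ibuprofen", 3200, 70, ["warfarin"]))

In practice the doctor would still review every flagged order, which matches the "AI first, doctor second opinion" workflow described above.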

Artificial intelligence has the potential to lower medical errors by 30-40% and treatment expenses by as much as 50%, according to a 2016 study by Frost & Sullivan [2]. In addition, according to the World Health Organization, medical errors and faulty medical diagnoses are among the top ten causes of death worldwide[2]. Furthermore, 1 in every 10 hospital admissions leads to a medical error, resulting in death for 1 in 300 of those admitted[2]. Artificial intelligence is therefore an important development for healthcare professionals to adopt in order to eliminate these preventable medical errors and make healthcare more efficient.

Example: MedAware

Researchers from the Johns Hopkins University School of Medicine estimated in 2016 that medical errors may contribute to more than 250,000 deaths per year, making them the third leading cause of death in the United States[3].

MedAware has been designed to provide algorithmically mined data from millions of electronic medical records, flagging outliers in prescription data for healthcare providers in real time[3]. Much like IBM Watson, MedAware gives healthcare professionals technology that can sift through large amounts of data far faster than any human, producing output that supports better decision making; in this case, protection against prescription and dosage errors.
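MedAware's actual models are proprietary, but the general idea of flagging prescriptions that look unlike what is normally ordered for similar patients can be sketched with a generic anomaly detector such as scikit-learn's IsolationForest. The features and data below are invented for illustration.

    # Minimal sketch (not MedAware's system): flag prescription orders whose
    # feature combination is unusual relative to historical prescriptions.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)
    # Hypothetical historical prescriptions: [patient_age, daily_dose_mg]
    history = np.column_stack([rng.normal(55, 15, 1000),
                               rng.normal(500, 100, 1000)])

    detector = IsolationForest(contamination=0.01, random_state=0).fit(history)

    new_orders = np.array([[60, 480],    # typical order
                           [8, 5000]])   # unusually high dose for a child
    flags = detector.predict(new_orders)  # -1 marks an outlier to review
    for order, flag in zip(new_orders, flags):
        print(order, "FLAG FOR REVIEW" if flag == -1 else "ok")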

The Benefits of Using MedAware

MedAware saves patients' lives and improves safety by reducing preventable adverse effects from prescription errors. For hospitals and healthcare professionals, it reduces preventable medical costs by catching human errors and eliminating avoidable dosage mistakes.

The effectiveness of MedAware was demonstrated in a 2017 study by Harvard Medical School researchers published in the Journal of the American Medical Informatics Association. They used MedAware to mine five years of electronic health data with the aim of detecting medication errors[3]. In a sample of 300 charts out of 15,000, researchers found that 75% of alerts were correct in identifying probable medication errors[3]. MedAware is therefore a valuable tool for physicians, patients, and hospitals in preventing prescription dosage errors in real time.

Example: IBM Watson

IBM’s Watson for Health is helping healthcare organizations in far more ways than any human ever could. Watson applies cognitive technology to unlock an immense amount of health data to aid diagnosis and has the capability to review and store more medical information exponentially faster than any human[4]. This information includes every medical journal, symptom, and case study of treatment and response around the world[5].

IBM Watson has the aptitude to reveal connections and relationships among genes, drugs, diseases and other entities by analyzing multiple sets of healthcare related information[4]. In addition, Watson is able to sift through massive amounts of data and analyze it to come to conclusions more comprehensively and exponentially faster than any human research team or online tool can[4].

Watson is also cloud-based, so it is constantly updated to contain the latest information[4]. IBM Watson has established its ability to help humans in ways never before imagined, at remarkable speed, through its capacity to handle large amounts of information with ease.

Drug Discovery

Researchers using IBM Watson begin with a group of drugs they would like to analyze for testing. They then use Watson's cognitive platform, along with its healthcare and life sciences knowledge, to predict or define the relationships among them. Watson uses natural language processing to understand and make meaning of its vast amount of data and reveal connections. By doing so, it can generate new hypotheses by predicting potential relationships not previously known. Results are shown as interactive visualizations of the connections and relationships, allowing the user to make sense of vast amounts of data and generate new insight[4].
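Watson's pipeline itself is proprietary, but the underlying idea of surfacing candidate relationships from text can be illustrated with a toy co-occurrence count: entities that repeatedly appear together in the literature become candidate relationships to investigate. The abstracts and entity names below are made up.

    # Toy illustration (not Watson): count how often known entities co-occur
    # in abstracts; frequently co-occurring pairs become candidate relationships.
    from collections import Counter
    from itertools import combinations

    abstracts = [
        "drug_a reduced tumor growth in patients with disease_x",
        "gene_y expression is elevated in disease_x",
        "drug_a inhibits gene_y in vitro",
    ]
    entities = {"drug_a", "gene_y", "disease_x"}

    pair_counts = Counter()
    for text in abstracts:
        found = {e for e in entities if e in text}
        for pair in combinations(sorted(found), 2):
            pair_counts[pair] += 1

    # Pairs that co-occur most often are candidates for a researcher to explore
    for pair, count in pair_counts.most_common():
        print(pair, count)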

Watson's Drug Discovery feature is used to assist with new drug identification and drug repurposing by pharmaceutical companies, as well as biotechnology companies and academic institutions [4]. By cutting the time needed to explore life science and drug-related data to a fraction, researchers can allocate more time to understanding the relationships IBM Watson helps reveal, as well as the diseases, drugs, and other relevant information involved[4].

Virtual Assistants, Robots, and the Automation of Administrative Work

Virtual Assistants

Applications

There are many terms for virtual assistants in healthcare (including virtual nurses, virtual nursing assistants, and virtual healthcare assistants). While the definitions vary slightly with each term, their functionality shares the same foundation: providing care to patients through the use of AI-powered technology. According to an Accenture study, virtual assistants have the power to save the U.S. healthcare industry up to $20 billion annually[6].

Virtual assistants are used for different purposes depending on their user. They act as information sources for patients, educating them on their own conditions or on different areas of interest[7]. For many people, a virtual assistant is used as a self-monitoring tool at home. Through their voice-enabled technology, they can interact with the patient by asking questions or giving instructions that guide the patient through different tasks to gather relevant data, which can then be relayed back to their physician[8]. Another important function of virtual assistants is reminding patients when to take their medication and how much of it to take[7]. They can also aid in booking and notifying patients of upcoming appointments, even recommending that they make one if symptoms persist[8]. In some cases, virtual assistants have been suggested to provide companionship to their users [9]. This is more likely in cases where an avatar and “friendly voice” are used as part of the interface[9].

Benefits

Meet Molly, Your Virtual Assistant

Virtual assistants offer many benefits to both patients and healthcare providers. Some of these include:

  • 24/7 Availability: Virtual assistants offer convenience to patients by being available at all times, allowing them to gain access to information when they want it [10].
  • More Engagement: Information is easily accessible through a virtual assistant, allowing patients to become more engaged in their own care as well as the care of others [11].
  • Fewer Appointments: Virtual assistants help patients assess their symptoms from home, allowing them to better understand the urgency of their situation without needing to talk to a doctor or nurse directly [11]. Patients benefit from being able to access reliable information from the comfort of their own home, while physicians benefit from less time spent on non-urgent cases [11].
  • Streamline Appointments: Virtual assistants can help streamline appointments by gathering patients’ information and symptom data prior to the in-person visit [11]. With this information, they can also suggest potential treatment options that can be reviewed in advance [11].
  • Efficient Data Tracking: Data can be collected on a more regular basis by the patient, in their own home, through a series of simple instructions guided by the virtual assistant and use of wearables and other biometric devices [11].
  • Better Communication: With all of this data and insight being relayed to healthcare providers, virtual assistants allow for a greater understanding of the patient. These insights can help facilitate more enriched conversations between the two parties [10].

Example: Sensely

Sensely’s Molly is one example of a virtual assistant being used in the healthcare industry. Molly is an AI-powered nurse avatar who connects patients with the resources to better monitor and manage their health [12]. One article says to “[t]hink of her as Alexa, but with a medical degree and an expertly tuned bedside manner” [12].

Molly can help patients track their vitals using linked wearable devices and can directly connect with the patient’s physician to alert them of any emergency situations[12]. Molly is designed to deliver all of these functions without “compromising empathy” [13], even using machine learning to detect a patient’s emotions and respond more accurately over time [12]. She is able to speak to patients in 32 different languages and is used by hundreds of thousands of patients all over the world [12].

Artificially Intelligent Robots in Healthcare

Difference Between Robots and Artificially Intelligent Robots

Robotics and AI are used for very different purposes, but there is a common misconception that they are the same thing or that robotics is a subset of AI [14]. To clarify, robotics is a branch of technology dealing with the construction of robots, whereas AI is a branch of computer science involving the simulation of intelligent human behaviour in machines [15].

Therefore, there is a difference between a robot and an artificially intelligent robot. The definition of a robot on its own is a “programmable machine, which is usually able to carry out a series of actions autonomously, or semi-autonomously” [14]. There are many robots being used in healthcare that do not involve AI, like those that are designed to help patients walk or lift them into their wheelchairs [16]. An artificially intelligent robot is one that is controlled by AI algorithms, allowing it to perform much more complex tasks. AI algorithms can take on learning, perception, problem-solving, language-understanding, and even logical reasoning [14]. While there is a clear use for both in the healthcare industry, the points made in this section specifically regard the use of artificially intelligent robots.

Applications

There are a wide variety of uses for artificially intelligent robots in healthcare. Robots can act as transcribers and as an information source for stored medical records, which can help reduce the possibility of medical error through increased accuracy of information [17]. Robots are able to help doctors and nurses with patient diagnosis, monitoring, and data collection[17]. Data collection can be extremely helpful in emergency situations, such as cases where a patient is suffering from heart failure and information can be relayed to medical professionals so that the correct course of action is taken as soon as possible [17].

Many applications of artificially intelligent robots both directly and indirectly affect patient care. Like virtual healthcare assistants, robots are being used to remind patients of when to eat and drink, take their medication, exercise or when they have an appointment scheduled [17]. Robots are being increasingly used in nursing or care homes to provide companionship for those who may not have visitors or other sources of entertainment [17]. Other applications of robots are as support systems for nurses and other healthcare workers. Rather than for patient interaction, some robots are designed specifically to aid in carrying out logistical tasks to give nurses more time to spend directly caring for their patients [18]. These robots can perform tasks like selecting and delivering the required medical items from storage to various locations in a healthcare facility and aiding in setting up rooms for new incoming patients [18].

Example: Pepper

Pepper in Action

Pepper is the first artificially intelligent robot to be capable of recognizing faces and basic human emotions. Standing four feet tall and weighing 62 pounds, Pepper is equipped with three cameras, twenty engines, and a touchscreen tablet in its chest to display messages and encourage input. Pepper is designed for human interaction and is used across many industries, including finance, retail, tourism, and healthcare.

According to Softbank Robotics (Pepper’s creators), typical use cases in healthcare include the robot being used as a:

  • Health assistant
  • Communicator
  • Data generator
  • Edutainer
  • Receptionist
  • Brand ambassador
  • Sales assistant
  • Service provider[19]

In Canada, Humber River Hospital in Toronto was the first hospital to recruit Pepper, where it is used to greet, guide, and entertain people [20]. Pepper is also being used there in more challenging ways to help “provide support to sick children and their families as part of the Child Life Program” [20].

PARO with Senior

Example: PARO

PARO is a robotic version of animal therapy, commonly used in healthcare facilities where live animals are unable to visit due to logistical or hygienic issues [21]. PARO is made to resemble a baby seal and includes five different sensors (tactile, light, audition, temperature, and posture sensors) which enable it to respond to people as well as the environment it is in [21]. Using artificial intelligence, PARO is able to “learn” to behave in a way that its user favours. For example, its creators say, “if you stroke it every time you touch it, PARO will remember your previous action and try to repeat that action to be stroked. If you hit it, PARO remembers its previous action and tries not to do that action” [21].
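PARO's internal software is not public, but the behaviour described above amounts to simple preference learning: actions followed by a positive response (stroking) become more likely, while actions followed by a negative response (hitting) become less likely. The sketch below is only a minimal illustration of that idea, with an invented action set.

    # Minimal preference-learning sketch (not PARO's actual algorithm):
    # reinforce actions that earn a positive response, discourage the rest.
    import random

    weights = {"nuzzle": 1.0, "squeal": 1.0, "flap_flippers": 1.0}  # hypothetical actions

    def choose_action():
        total = sum(weights.values())
        return random.choices(list(weights), [w / total for w in weights.values()])[0]

    def feedback(action, response):
        """Adjust the last action's weight based on the user's response."""
        if response == "stroke":
            weights[action] = weights[action] * 1.2            # repeat this more often
        elif response == "hit":
            weights[action] = max(0.1, weights[action] * 0.5)  # avoid this action

    last = choose_action()
    feedback(last, "stroke")
    print(last, weights)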

Clinical studies have shown that PARO can reduce stress and anxiety, improve mood, boost sleep, and reduce pain perception in patients that use it [21]. It has also been suggested that it aids in facilitating emotional expression, social interaction, and speech fluency [22].

Automation of Administrative Work

The Importance of Automation

It is estimated that $91 billion is wasted annually on inefficient administrative work [23], and that up to $18 billion of that could be saved per year through the use of AI technology [6]. These savings could result in less time spent on inefficiently processing paperwork and more time spent on providing better patient care.

Labour is the healthcare industry's “single largest operating expense” [24]. However, rather than being spent interacting with patients, most of it goes toward carrying out administrative tasks on computers. One study claimed that six out of every 10 people who work in healthcare never interact with patients, and of the ones that do, sometimes as little as 27% of their time is spent directly with them [24]. Using artificially intelligent devices to support these workers with administrative tasks could therefore allow for a big shift in time devoted to real patient care.

Applications

There are many ways that AI is automating administrative work in healthcare. Some areas being transformed include:

  • Hospital Bed Assignments: AI is being used to help hospitals forecast demand for beds as well as assign them [24]. At Johns Hopkins Hospital, the AI-powered control centre has helped assign beds 30% faster and has reduced time spent in recovery rooms and waiting rooms [24]. These efficiencies have allowed the hospital to accept 60% more transfer patients [24].
  • Fax Categorization: Faxes remain the healthcare industry's main channel of communication, requiring a lot of time to import and sort the important data within them [23]. AI has helped cut the time needed to review and import a fax's data from two minutes and 36 seconds to less than half of that, and is on pace to reduce it further to under 30 seconds in the next year [23] (see the sketch after this list).
  • Documentation: AI-powered tools analyze both clinical and administrative patient data to offer real-time suggestions to doctors on how to comply with government guidelines for health record documentation [24]. These AI tools have been shown to make records 36% more compliant and have cut some hospitals' time spent on documentation by up to 45% [24].
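The fax-processing systems referenced above are not public; as a rough sketch of how the categorization step could work, incoming fax text (after OCR) can be routed with a standard TF-IDF plus linear classifier pipeline. The categories and example texts below are invented.

    # Rough sketch of automated fax routing (invented categories and examples).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    faxes = [
        "referral request for cardiology consult patient dob 1956",
        "prior authorization form for mri lumbar spine",
        "lab results attached hemoglobin a1c",
        "referral to orthopedics for knee pain",
    ]
    labels = ["referral", "prior_auth", "lab_result", "referral"]

    router = make_pipeline(TfidfVectorizer(), LogisticRegression())
    router.fit(faxes, labels)

    # Route a new incoming fax to the right administrative queue
    print(router.predict(["authorization needed for ct scan of chest"]))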

AI & Genome Editing

CRISPR Applications

Genome editing refers to the use of technology to make modifications to an organism's DNA. CRISPR-Cas9, short for Clustered Regularly Interspaced Short Palindromic Repeats and CRISPR-associated protein 9, is the leading system for genome editing. Public companies currently experimenting with this technology include, but are not limited to, CRISPR Therapeutics, Intellia Therapeutics, and Editas Medicine. The CRISPR system works by programming a guide RNA molecule to target a specific DNA sequence, which the Cas9 protein then cuts. A new DNA sequence can then be added in, or the break can simply be sealed.[25]

CRISPR genome editing technology could be a catalyst for transformation in industries such as healthcare and agriculture. Potential applications of CRISPR are numerous, and include modifying DNA to fight cancer cells, eliminate genetic diseases, and create drought-resistant crops.[26] The magnitude of these potential applications is critical to garnering interest and support for CRISPR genome editing technology. Specific to the healthcare industry, genome editing is a potential solution to many cases traditionally thought of as untreatable.

Map of Legal Restrictions on Genome Editing


Countries such as Australia, Brazil, and Canada have legislation imposing bans on genome editing. For example, under Canada's Assisted Human Reproduction Act of 2004, one may be incarcerated for up to ten years for performing prohibited genome alterations.[27] These penalties discourage researchers residing in such countries from helping advance genome editing technology. Moreover, negative government stances may deter institutional investors from allocating capital to genome editing companies. Therefore, for genome editing technology to reach a level of sophistication appropriate for commercial applications, legislation must change to accommodate further research.

The issue of restrictive legislation compounds the problem that traditional genome editing technology is not yet precise enough for commercial use. For instance, accidents can occur where CRISPR makes changes to DNA it was not intended to target. These off-target effects can be serious and life-threatening. In order to calibrate and fine-tune genome editing technology, researchers are experimenting with AI.




Microsoft's Elevation and Azimuth Projects

Microsoft is paving the path for applying AI to advance genome editing technology with its Elevation and Azimuth projects. The former project's objective is to help minimize the off-target effects of traditional CRISPR technology, while the latter's objective is to maximize the on-target effects. Ultimately, this two-pronged approach aims to improve both the accuracy and precision of genome editing technology so that it can be commercially applicable to healthcare.

Microsoft's Elevation project uses machine learning to fine-tune CRISPR technology.[28] Elevation runs CRISPR genome editing simulations in order to improve its capacity to predict off-target effects. The simulations return target scores for various guide RNAs; these scores are essentially machine-learning-based probabilities of each RNA's potential to induce off-target effects. Researchers can use these target scores to help guide CRISPR genome editing development.

Microsoft's Azimuth project is similar to the Elevation project in that it uses machine learning to help advance CRISPR technology.[29] However, Azimuth focuses on predicting on-target effects. The Azimuth machine-learning system runs CRISPR genome editing simulations to build its capacity to predict which part of a gene to target in a particular instance. For example, if one wanted to eliminate a certain genetic disease, Azimuth would help determine which part of the DNA sequence to target with CRISPR-Cas9.
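The actual Elevation and Azimuth models are trained on experimental CRISPR datasets; the sketch below only shows the general shape of guide-RNA scoring: turn a guide sequence into numeric features (here crude nucleotide counts, an assumption) and train a regression model to output an activity score. The sequences and scores are invented.

    # Minimal sketch of guide-RNA scoring (not Microsoft's models):
    # featurize each 20-nt guide and regress against measured activity.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    def featurize(guide):
        """Very crude features: counts of each nucleotide in the guide."""
        return [guide.count(n) for n in "ACGT"]

    # Hypothetical training data: guide sequences with invented activity labels
    guides = ["ACGTACGTACGTACGTACGT", "GGGGCCCCGGGGCCCCGGGG",
              "ATATATATATATATATATAT", "GCGCGCGCATATACGTACGT"]
    scores = [0.8, 0.3, 0.1, 0.6]

    X = np.array([featurize(g) for g in guides])
    model = GradientBoostingRegressor(random_state=0).fit(X, scores)

    candidate = "ACGTGGGGACGTACGTACGT"
    print("predicted activity:", model.predict([featurize(candidate)])[0])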

Ethical Implications

The benefits of functional genome editing described above could have a great impact on the healthcare industry. Practitioners would have access to the power of genome editing as a solution to unfortunate cases such as genetic diseases and cancers. However, the admirable pursuit of developing this technology carries various ethical implications. If genome editing technology is advanced to its full potential through the application of AI, it will be susceptible to misuse and exploitation. For example, genome editing might be used to physically enhance Olympic athletes without the use of detectable steroids. This example of exploiting the technology for non-therapeutic applications extends to individuals in society more broadly. To shape effective regulations in case genome editing technology is implemented commercially, public discussion amongst researchers, practitioners, legislators, and citizens should be actively supported.

AI & Surgical Systems

General AI-Assisted Surgery

Example: The da Vinci Surgical System

The da Vinci Surgical System

Using robotic technology, the da Vinci Surgical System assists surgeons by providing more precision and accuracy with small instruments when performing minimally invasive laparoscopic surgery[30]. The da Vinci System has been used in over 3 million surgeries since 2000.

The surgery is performed through several small, keyhole-sized incisions in the patient; a laparoscope (a magnifying camera and light on the end of a tube) projects a high-definition image to a screen[31]. Surgeons can then see more than what may be possible without the da Vinci System[30]. Surgeons control the surgical instruments, and each movement is steadied and translated into smaller, more precise movements, causing less damage to the surrounding tissue.

There are many advantages to using the da Vinci System compared to manual open surgery. Patients experience decreased pain and recovery time following the surgery. The da Vinci System's surgical instruments are not restricted by the movement limitations of human wrists, as they can bend and twist in ways that human wrists cannot. The laparoscope allows for improved visuals of areas that may not be easy or safe to access otherwise.

Example: Soft Tissue Autonomous Robot

The Soft Tissue Autonomous Robot (STAR) can perform partially autonomous incisions and suturing using computer imaging and a 3D tracking algorithm[31]. Though it still needs some assistance from surgeons, the STAR does not need instruction for every single movement, as it can rely on its own algorithm to decide, for example, where and when to place the needle.

The STAR is not currently approved by the United States Food and Drug Administration (FDA) and has only been tested on animals, in particular live anesthetized pigs undergoing intestinal anastomosis operations. Pigs are effective for demonstrating the STAR's capability because their anatomy and tissue closely resemble those of humans[32]. The operation involves repairing a severed intestine. If the sutures are too tight, they can tear the intestine; too loose, and leaking intestinal fluids can be fatal. The STAR's consistency in this operation has outperformed highly trained specialists and other forms of robot-assisted surgery. Its incisions have also been compared to those of surgeons, and the STAR showed fewer deviations from the intended cut.

The STAR can improve healthcare provision in areas lacking specialists in procedures such as intestinal anastomosis. It can also shorten surgical waitlists by automating repetitive steps.

Dental Surgery

Example: Yomi

Neocis' Yomi is a United States FDA-approved robot-assisted dental surgery system. Beginning with a computed tomography (CT) scan of the patient, the surgeon plans the dental implant procedure on Yomi[33]. During the surgery, Yomi's haptic guidance technology precisely angles the surgical instrument, which the surgeon then guides into the patient's mouth and tissue.

Haptic guidance technology gives the dental surgeon cues when movements begin to deviate from the plan. These cues include vibrations and resistance. For example, when using the drilling instrument, the dental surgeon simply needs to push the instrument into the tissue. When the appropriate depth is reached, Yomi will beep, indicate on the screen that the instrument has reached the bottom, and restrict further movement into the tissue[34]. Advantages of using Yomi include shorter surgery time, less pain felt by the patient, and shorter recovery time compared to regular dental surgery. Since Yomi allows the surgeon more precise use of the instruments, there is less disruption of the surrounding tissue, generally causing less pain and reducing recovery time for the patient. The haptic guidance technology also encourages the dental surgeon to stay on course with the planned surgery. The cues given by Yomi help reassure the surgeon that the instrument angles match the plan, which should also reduce surgery time.

Autonomous Dental Surgery with Supervision

Example: Autonomous Dental Surgery

A team in China has successfully completed the world's first autonomous dental surgery. The robot was designed by a team from Xi'an's Stomatological Hospital and the robot institute at Beijing's Beihang University[35].

The process began with a CT scan of the volunteer patient, which was used to plan and program a route through the mouth with a “special marking system”[36]. The two dental implants, created using 3D printing, were implanted in under an hour. The operation was supervised by two medical professionals, but they did not interfere or assist at any point. The robot was able to adjust to the movements of the patient, who was under local anesthesia[35]. China is currently experiencing a shortage of dental surgeons, and wide adoption of this autonomous machine could dramatically reduce waitlists for similar procedures. The robot also has less risk of human error, operating within a margin of error of only 0.2-0.3 mm.

Failure: Replacing Specialists

Sedasys

Anesthesiologists are specialist medical doctors who plan and administer anesthesia to patients of all ages to relieve pain before, during, and after surgery, among other duties[37]. Ethicon's Sedasys, a computer-assisted personalized sedation system, was designed to replace anesthesiologists for low-risk patients undergoing routine procedures such as colonoscopies. With only a nurse required to be present, the machine administered the sedative propofol and tracked the patient's vitals[38]. This is cost effective, as Sedasys cost around $150-$250 per procedure compared to $2,000 for a human anesthesiologist[39].

Only three years after its FDA approval[40], Ethicon (a subsidiary of Johnson & Johnson) discontinued Sedasys in 2016 due to poor sales. Its recall notice from late 2015, issued due to “degradation of the plastic material”, lists only 14 units sold[41]. Replacing an anesthesiologist with a machine, even for routine procedures, lacked consumer trust. With a team of trained specialists already on hand at each hospital, pain clinic, and similar facility, it is understandable why many may have passed on investing in the Sedasys system. This could have contributed to the fall of Sedasys, but others argue that poor design was also a factor.

Implications for the Healthcare Industry

Artificial intelligence is helping surgeons and other healthcare providers do their jobs with more efficiency and precision, and generally with less risk and fewer complications. The automation of suturing, for example, can significantly cut down on surgery time as it reduces repetitive tasks. With higher precision, there is less disruption of the tissue surrounding the surgical site, resulting in less pain and shorter recovery times for patients following the procedure. Even though most initial costs are high, AI-assisted and automated surgical systems commonly increase efficiency and will therefore be more cost-effective at the margin.

These machines have the potential to provide more care to patients in places with fewer specialists, such as developing countries or rural areas, because of the increased efficiency. We are in the midst of a doctor shortage, with more of its detrimental effects predicted to be felt in the next 10 to 15 years.

Delayed adoption can be due to lack of consumer trust, poor design, or high initial costs. As seen with the Sedasys system, even a great idea can fail for many reasons. The da Vinci Surgical System, for example, costs about $2 million. Even if hospitals adopt these assisted surgical systems, patients may refuse their use, as consumer trust lags behind the technology. What will convince a hospital's decision-makers to invest in a machine that would replace or complement trained specialists if they already have a staff of specialists on hand?

Deskilling is the main downside of these surgical systems. All of the autonomous or AI-assisted surgical systems mentioned require training, and more surgeons may choose to rely heavily on these systems rather than performing manual surgery. With systems like Yomi, which is just short of fully automated, deskilling is a real threat. Nevertheless, convenience may still drive the creation and adoption of more of these systems.

The Future of AI & Surgical Systems

Though the advancement of autonomous surgical systems is remarkable, their adoption likely requires the approval of those who may be replaced. Therefore, AI-assisted or semi-autonomous systems that complement the work of surgeons and other medical professionals are more likely to be widely accepted and adopted, although this will also contribute to the deskilling of surgeons and specialists.

As machine learning and other facets of AI advance, the creation of more automated surgical systems will be realized in the future. The goal would be for these surgical systems to learn and improve upon each surgical experience like a human surgeon. This may only be possible with more advancement of AI.

AI for Diagnostics

Approximately 10 percent of patient deaths are attributed to misdiagnoses.[42] A 2016 study estimated roughly 250,000 deaths per year in the U.S. due to medical errors, making them the third-leading cause of death.[43] Medical errors include, for example, a surgeon operating on the wrong body part, a doctor failing to properly diagnose a patient, or a pharmacist preparing the wrong medication.[44] The root cause of most misdiagnoses is human error, including failure to spot signs of illness, communication gaps between healthcare professionals and patients, and inefficient health IT systems. Researchers are experimenting with the use of AI for diagnostics in order to revolutionize the diagnostics function in healthcare.

The idea of AI diagnostics focuses on developing software which can be applied to quickly diagnose medical conditions with exceptional accuracy. Furthermore, based on the diagnosis, researchers want AI to be able to propose treatment plans and predict treatment outcomes. The potential benefits of AI diagnostics in the healthcare industry, besides timely and accurate diagnostics, include: better access to medical diagnostics in rural communities, lower marginal service costs for patients, and less stress on the diagnostics function in the healthcare system.

However, AI diagnostics technology is still in the early adoption stages. The development of the technology is limited by the small amount of experimental patient data available. Therefore, the natural solution to developing AI diagnostics is to leverage big data. Unsurprisingly, big technology companies such as Microsoft, IBM, and Google are leading the way for AI diagnostics; associated AI projects include Microsoft's Oxford, IBM's Watson, and Google's DeepMind. All three companies are developing their AI systems for a wide array of applications; however, Google's DeepMind project has made particular progress in the healthcare diagnostics function.

Global Eyesight Estimates

Google DeepMind

Google acquired DeepMind Technologies for $500 million in 2014.[45] DeepMind is a world-leader in AI research, development and application. The DeepMind project does not have a pre-defined scope in terms of a specific AI function. Instead, DeepMind is characteristic of Google in that it leverages big data to provide an assortment of consumer solutions in various industries. DeepMind is researching AI solutions in broad arenas such as healthcare, environment, and energy-efficiency.

Specific to AI in healthcare, DeepMind has partnered with various health institutions to leverage data stores for machine-learning experiments. In this manner, DeepMind combines the industry expertise of healthcare professionals with the big data analytics capabilities developed by Google to create an AI healthcare diagnostics system. For example, DeepMind partnered with Moorfields Eye Hospital in 2016 to begin developing AI healthcare applications for eye disease diagnostics.[46] The problem with traditional eye diagnostics techniques is that infections or diseases tend to be identified late. In a 2018 study, DeepMind claimed to have developed the capacity to instantly detect the type and severity of eye conditions with an accuracy matching that of expert doctors.[47]

The implications of AI diagnostics are critical in the field of eye disease diagnostics because of the magnitude of individuals affected by associated diseases. Eye disease can lead to temporary or permanent eyesight loss and blindness, which currently affects 285 million people. This number is projected to triple by 2050.[48] Projects such as Google’s DeepMind are actively working towards developing an AI diagnostics machine capable of reversing the current trend of increasing cases of eye disease.

Eye Disease

Google

Google is working towards improving computer vision to have similar or better accuracy than human vision. One application is the diagnosis of eye disease[49]. Many types of eye disease are highly treatable if caught early, but signs and symptoms may not always show up early. Diabetic retinopathy, for example, a common complication in people with diabetes, will lead to blindness if left untreated.

Typically, the test is done by an ophthalmologist examining a retinal scan. Google’s DeepMind has successfully programmed an algorithm to detect diabetic eye disease and up to 50 other different eye diseases from a retinal image[50]. Currently, it is just as accurate as board-certified ophthalmologists in recommending treatment. This automated diagnostic system can complement an ophthalmologist’s work or provide a higher standard of healthcare in areas with a shortage or lack of specialists.
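DeepMind's published system is far more sophisticated than what is shown here, and its architecture and training data are not reproduced; the sketch below only illustrates the basic building block, a convolutional network that maps a retinal image to a referral category. The image size, class labels, and dataset are assumptions.

    # Minimal sketch (not DeepMind's model): a small convolutional network
    # that classifies retinal images into assumed referral categories.
    import tensorflow as tf

    NUM_CLASSES = 4  # e.g. no referral, routine, semi-urgent, urgent (assumed)

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(224, 224, 3)),       # assumes pre-resized RGB scans
        tf.keras.layers.Rescaling(1.0 / 255),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_images, train_labels, validation_data=..., epochs=...)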

Airdoc

Airdoc Diagnoses

Similarly, the Chinese start-up Airdoc has used AI to develop an algorithm that can detect 30 diseases from a picture of the retina at the back of the eye[51]. It can detect and diagnose diseases such as hypertension, diabetes, and optic nerve disease by examining specks and discolouration in the retina scan. Airdoc has used Microsoft Azure, a cloud computing service, to leverage open source tools[52].

A chain of optical stores in China has installed 200 of these automated Airdoc machines for routine scans, with a goal of increasing that number to 1,200 within three years[51]. The patient sits down, places their chin onto a brace, and adjusts the angle of their head until a green light in the eyepiece comes into focus. There is a flash of light and the machine takes a high-resolution, medical-grade photo of one eye, which is sent to the cloud; the process is then repeated for the other eye. Many signs and symptoms may not reveal themselves until the eye disease is severe, as with diabetic retinopathy. It is estimated that over 110 million people in China have diabetes, but only 30 percent of them are aware of it[51]. The availability of Airdoc machines and making the automated retinal scan a routine check could save the vision of many.

Cancer

Dermatology

Malignant and Benign Skin Lesions

Stanford University researchers have trained an algorithm to diagnose skin cancer using deep learning. Using 130,000 images of skin lesions covering over 2,000 different diseases across more than 700 classifications, the algorithm can give recommendations with as much accuracy as board-certified dermatologists on biopsy-proven clinical images[53]. The recommendations are either to biopsy and treat, or simply to reassure the patient.

This follows the procedure normally used to diagnose skin cancer: it is typically done visually first during a clinical visit, followed by further noninvasive examinations and a biopsy if necessary. Since there is so much variation in skin lesions, a deep convolutional neural network (a type of algorithm used to recognize patterns[54]) was trained on a large data set that included clinical data from 18 online open-access sources and the Stanford University Medical Center[53].
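The Stanford team's exact model is not reproduced here; a common way to build this kind of classifier, sketched below, is transfer learning: start from a network pretrained on general images and fine-tune a small classification head on labelled lesion photos. The backbone choice, image size, and labels are assumptions.

    # Transfer-learning sketch (not the Stanford model itself): fine-tune a
    # pretrained ImageNet backbone to classify lesion images as benign or malignant.
    import tensorflow as tf

    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    base.trainable = False  # train only the new classification head at first

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # benign vs. malignant
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])
    # model.fit(train_ds, validation_data=val_ds, epochs=...)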

This capability could one day be available in a mobile app offering analyses and recommendations for patients at home, enabling more effective and efficient care.

Pathology

Pathology is the scientific field involving the study and diagnosis of disease through the testing of tissues, organs, or bodily fluids such as blood or urine[55]. The field has remained relatively unchanged for about a century, as it still involves manual visual examination under a microscope.

Researchers at Harvard Medical School and its teaching hospital, Beth Israel Deaconess Medical Centre, have used AI to improve the efficiency and accuracy of cancer diagnoses[56]. Using deep learning, machine learning, and neural networks, they have trained layers of algorithms to recognize and interpret complex patterns in pathology images.

Routinely, pathologists search through millions of normal cells to find a few cancer cells in a patient's lymph nodes and identify malignant cancer. The algorithms can do this well autonomously, with 92% successful diagnoses compared to human pathologists' 96% accuracy. Combining the pathologist's analysis with the automated method resulted in a remarkable 99.5% accuracy[56]. This significant reduction in errors can enable pathologists to work more efficiently when complemented by the machine.

Medical Codes and Records

Parkinson's

Parkinson’s disease is a neurodegenerative disorder with no known cause or cure[57]. It is hard to predict as there is a long, varied list of possible symptoms. Some affect motor skills like tremors and rigid muscles but other non-motor symptoms include dementia and pain[58]. Those with Parkinson’s may not get all of the symptoms, different symptoms will show as the disease progresses, and some individuals progress through the five stages of Parkinson’s faster than others. It is currently impossible to predict if someone has Parkinson’s before symptoms emerge.

A study from Washington University in St. Louis is attempting to scan medical records for symptoms of developing Parkinson's. Over 200,000 Medicare claims were used to train the algorithm: roughly 90,000 individuals had been diagnosed with the disease and almost 120,000 were used as a control group[59]. The algorithm achieved 73% accuracy in predicting Parkinson's from early symptoms. This is a step towards reducing the time to diagnosis, which is currently about a year and a half, during which the patient must go through a multitude of tests while symptoms worsen.
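The Washington University model itself is not reproduced here; the sketch below only illustrates the general approach of training a classifier on claims-derived symptom indicators and measuring held-out accuracy. The features and synthetic data are purely illustrative.

    # Minimal sketch (not the study's model): predict a later Parkinson's diagnosis
    # from binary claims-derived symptom indicators, using synthetic data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    n = 5000
    # Hypothetical binary features, e.g. tremor code, rigidity code, sleep disorder code
    X = rng.integers(0, 2, size=(n, 6))
    # Synthetic labels loosely correlated with the first two features
    y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.8, n) > 1.5).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    clf = LogisticRegression().fit(X_train, y_train)
    print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))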

Drugs meant for combatting Parkinson’s symptoms work better for some than others[58]. Speeding up diagnoses would help patients begin their journey of testing different drugs and methods of reducing the impact of their symptoms on their everyday lives.

Insurance Rates

Apixio has developed machine learning algorithms to mine patient record data for relevant information. Many Americans require risk adjustment, a determination of their health status and disease risk that translates into a corresponding price for their insurance plan[60]. Apixio's HCC Profiler sorts through patient charts and decides what information is relevant for risk adjustment. Medical codes are problematic for this purpose because they are typically used for billing; Apixio aims to ensure that insurance plan rates are based only on well-supported diagnoses[61]. Old, irrelevant diagnoses, such as breast cancer in remission for 15 years, still affect some but not all calculations, and the HCC Profiler can determine their relevancy. With the HCC Profiler and a small team of four auditors, Apixio estimates 50,000 clients can be processed in one week, an efficiency that is currently unmatched[60].
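The HCC Profiler is proprietary; the sketch below only illustrates the basic idea of scanning chart text for mentions that support a current diagnosis while skipping explicitly historical ones. The condition patterns and example note are invented, and a production system would use far richer clinical NLP.

    # Toy sketch of mining chart text for risk-adjustment-relevant conditions
    # (invented patterns; not Apixio's method).
    import re

    CONDITION_PATTERNS = {
        "diabetes": re.compile(r"\btype 2 diabetes\b", re.I),
        "chf": re.compile(r"\bcongestive heart failure\b", re.I),
    }
    HISTORICAL = re.compile(r"\b(history of|in remission|resolved)\b", re.I)

    def supported_conditions(chart_text):
        """Return conditions mentioned in a current (non-historical) context."""
        findings = []
        for sentence in re.split(r"[.\n]", chart_text):
            if HISTORICAL.search(sentence):
                continue  # skip e.g. "history of breast cancer, in remission"
            for name, pattern in CONDITION_PATTERNS.items():
                if pattern.search(sentence):
                    findings.append(name)
        return sorted(set(findings))

    note = ("Patient with type 2 diabetes, well controlled. "
            "History of breast cancer, in remission for 15 years.")
    print(supported_conditions(note))  # ['diabetes']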

While this is convenient, there is a risk that the algorithm is inaccurate and surfaces irrelevant information, which could have negative consequences such as higher insurance premiums for those being assessed. Codes and keywords entered incorrectly by medical professionals could cause similar outcomes.

Privacy Concerns

Privacy is one significant concern regarding AI in the healthcare industry. AI is getting increasingly sophisticated, and not everyone will want their health records following them. As with Apixio assisting in the calculation of insurance risk, one doctor's misdiagnosis or incorrect recording of a symptom can seriously harm a person's life; this already happens with eligibility for certain types of life insurance. We also do not know what these third parties may do with our most sensitive information in the long run.

Implications for the Healthcare Industry

The incorporation of artificial intelligence results in many implications that benefit both the patients and healthcare providers:

  • Lower marginal costs: AI technology is helping healthcare professionals increase the speed and accuracy of diagnostics, as in pathology, where a machine can reliably identify a few cancer cells out of a million normal cells. Some of these diagnostic applications may have a high initial cost but would result in lower marginal costs for healthcare providers, as more patients can be seen in the same period of time.
  • Increased accessibility: Seeking and receiving medical attention is a time-consuming process for patients. Diagnostic tools like Airdoc or Stanford's skin cancer diagnostic algorithm, which provide recommendations for further treatment, streamline the care provided by physicians. If accessible through a phone application, patients could be reassured from home about a skin lesion or other concerns, or be directed to see a physician.
  • Combatting the shortage of physicians: There is currently a shortage of healthcare providers, with the disparity only increasing over the next decade. Increased adoption of these diagnostic tools can provide better accessibility for all, but especially for those in areas with a shortage of, or entirely lacking, specially trained healthcare providers. Canadians are also fortunate to have the Canada Health Act covering accessibility for those insured, but this is not consistent across all countries.
  • Less stress on the diagnostics function: There can also be less stress on the diagnostics function and more emphasis on treating patients. As mentioned, the typical time to a Parkinson's diagnosis is a year and a half, during which symptoms worsen and various tests are run for a long list of signs and symptoms[59]. Though a skin cancer diagnosis at two weeks is quite short in comparison, waitlists for treatment vary, and a wait of over 60 days for first treatment can be standard[62]. Melanoma, one of the most common and deadly skin cancers, is highly treatable in its early stages, but if not caught until later, it can spread across the skin and eventually, fatally, reach the bloodstream and other parts of the body[63]. Providing this AI technology, especially in places lacking specialists or experiencing long waitlists, can save lives.

Risks of Using AI in Healthcare

Some of the risks that encompass the use of artificial intelligence in healthcare, whether it’s in relation to treatment or diagnosis and assisting doctors, include the following[64]:

Reliability and safety: AI can make errors which, if difficult to detect or accompanied by indirect effects, can result in serious consequences. This creates problems in integrating AI into healthcare. As AI technology matures, its reliability and safety are expected to increase; however, in a field such as healthcare, the tolerance for risk is minimal, as consequences can be critical and fatal. Therefore, this is the most important risk to mitigate before moving forward with AI in healthcare. If this risk is not mitigated, further implementation of AI in the healthcare industry becomes difficult or even infeasible.

Liability and accountability: Who is responsible when harm is caused by an AI system or machine? How will we determine who is liable when an AI assistant is at fault: the doctor using it or the programmer who made it? Laws protecting healthcare professionals and patients need to keep up with AI technology.

Patient trust: Acquiring trust from patients, especially the elderly who are used to traditional healthcare, can be difficult as humans generally like dealing with other humans rather than a machine. Delayed consumer trust may also come from lack of popularity or knowledge of the technology or product. Patient trust can only be gained with time and physicians' assurance that the technology is reliable and safe.

Automation and unemployment: Nurses and other medical professionals can feel threatened by the increased use of AI in healthcare; for example, virtual assistants and virtual nurses may decrease the demand for nurses on staff in a hospital. This may create resistance among healthcare professionals who see the introduction of such technology as a threat to their future employment. In addition, relying on AI technology risks deskilling medical professionals, similar to how drivers have become less diligent about manually checking their blind spot as their cars are equipped with lights that notify them when someone is in it.

Data privacy and security: How should we trade off private and secure health data for medical breakthroughs and innovations? As with most things that have sensitive and important data, this risk must be mitigated before the general population accepts AI as a viable tool in healthcare. Data privacy and security is becoming a more common concern among people, as big companies have been revealed as collecting sensitive personal data for their own corporate gains.

Data and quality inconsistencies: If data contains errors or inconsistencies, it can be rendered useless for accurate decision making. Therefore, data must remain consistent across all AI technology. The data must also be organized in a user-friendly manner while maintaining relevant, reliable, and useful information for it to be of use to healthcare professionals.

Internet and power outages: Because much of the AI technology used by healthcare professionals is operated and powered online through the cloud, outages are an important consideration when implementing AI in healthcare. Although this risk may rarely materialize, when it does, the consequences can be critical if reliance on AI technology during surgeries or treatment is high. Though hospitals are equipped with backup generators, if they experience internet crashes or major power outages, much of this technology will be rendered useless.

Transparency: The underlying logic behind the outputs generated by AI can be difficult or impossible to interpret and understand. For example, AI can come to conclusions but may not explain comprehensively how it reached them. Understanding how the AI got to a solution will be a challenge in some cases, and this may prove time consuming and even counter-productive if it takes more time to understand how the AI reached a particular solution than it would to come up with a solution manually.

Regulation: Regulating AI is an important but difficult risk to mitigate, as AI technology is advancing at a high rate. Regulations will determine the extent and freedom with which AI can be used, and government agencies need to regulate AI so that big companies do not abuse it. Elon Musk himself has urged government officials to regulate AI "before it is too late"[65]. AI poses a threat through its potential to be abused as a weapon, to replace jobs, and to intrude on people's privacy.

The risks highlighted above need to be mitigated or avoided before AI can be successfully integrated into healthcare. Having a flexible balance when introducing AI technology in healthcare is the most effective way to implement it moving forward. Although some argue that regulation would stifle the ability to innovate with AI, the need to regulate is more important, as companies or individuals may abuse AI's capabilities and the technology is advancing at a rapid rate.

The Future of AI in Healthcare

AI has already started to make big advances in solving process inefficiencies, repetitive and expensive procedures, reducing human error, and freeing up more time for patient care [66]. Despite seeing AI everywhere, “we have yet to realize the game-changing clinical, operational, and financial opportunities that await us in [the] healthcare [industry]” [66].

In order to realize these effects, our healthcare systems must first address some of the major challenges facing the industry in regard to AI, including:

  • Information Overload: Many of our health care providers are facing the challenge of being overloaded by the massive amounts of data available to them. According to a Cisco report, under 3% of the data being generated by hospitals each year is actionable, tagged, or analyzed [66]. Reasons for this include “data privacy concerns, high regulatory hurdles for new ‘black box’ technologies, lack of clear reimbursement pathways, and the difficulty of data curation” [66]. In order for this statistic to be improved, health care systems will need to have clearly defined workflows to be able to process this information more effectively.
  • Identifying the Right Opportunities: There are endless opportunities to apply AI to the healthcare industry, but which applications should we be spending our time and money designing and implementing? According to GE Healthcare, “AI’s potential to enhance human and machine capability must align with key organizational priorities, goals, and missions” [66]. A big challenge the healthcare industry is facing is determining which opportunities should be pursued and how they can be best integrated with the existing systems in place.
  • Disconnected Data Sources: With all of these new sources of data from devices like wearables and mobile health accessories, data is being moved away from the hospitals and into pockets of separated information sources [66]. A challenge being faced involves being able to connect all of these broken-up data sources together to be able to make use of this information in the most effective way [66].

Authors

Brittany Aoki, Amrit Dulai, Stefan Filipovic, and Carman Kwan
Beedie School of Business
Simon Fraser University
Burnaby, BC, Canada
baoki@sfu.ca | adulai@sfu.ca | stefanf@sfu.ca | carmank@sfu.ca

References

  1. https://www.nehi.net/bendthecurve/sup/documents/Medication_Errors_%20Brief.pdf/
  2. http://blog.myhealthvectors.com/index.php/use-of-artificial-intelligence-to-reduce-medical-errors//
  3. https://digit.hbs.org/submission/medaware-using-ai-to-eliminate-prescription-errors//
  4. https://www.ibm.com/products/watson-drug-discovery/
  5. https://www.pwc.com/gx/en/industries/healthcare/publications/ai-robotics-new-health/transforming-healthcare.html/
  6. https://www.accenture.com/t20170418T023006__w__/us-en/_acnmedia/PDF-49/Accenture-Health-Artificial-Intelligence.pdf#zoom=50/
  7. https://searchhealthit.techtarget.com/tip/Five-ways-a-healthcare-virtual-assistant-can-improve-patient-engagement/
  8. https://medicalfuturist.com/chatbots-health-assistants//
  9. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5496465//
  10. https://www.forbes.com/sites/ciocentral/2013/03/13/how-virtual-health-assistants-can-reshape-healthcare/#52ea56de2000/
  11. https://hbr.org/2018/04/virtual-health-care-could-save-the-u-s-billions-each-year/
  12. https://www.forbes.com/sites/insights-intelai/2018/09/21/the-ultimate-physicians-assistant/#6ad839b12747/
  13. http://www.sensely.com/#features-first/
  14. https://blog.robotiq.com/whats-the-difference-between-robotics-and-artificial-intelligence/
  15. https://www.merriam-webster.com/dictionary/
  16. http://www.riken.jp/en/pr/press/2015/20150223_2/
  17. https://corporatesocialresponsibilityblog.com/2018/06/26/robotic-nursing-care//
  18. https://www.digitaltrends.com/cool-tech/moxi-hospital-robot-wants-to-help-nurses/
  19. https://www.softbankrobotics.com/emea/en/industries/healthcare/
  20. https://globalnews.ca/news/4180025/pepper-canada-robot/
  21. http://www.parorobots.com//
  22. https://www.telegraph.co.uk/news/2018/03/02/roboticsealscould-introduced-dementia-wards-reduce-stress-anxiety//
  23. https://hbr.org/2018/03/how-ai-is-taking-the-scut-work-out-of-health-care//
  24. https://hbr.org/2018/11/can-ai-address-health-cares-red-tape-problem/
  25. https://ghr.nlm.nih.gov/primer/genomicresearch/genomeediting
  26. https://disruptionhub.com/9-amazing-applications-crispr/
  27. https://www.cbc.ca/natureofthings/m_features/gene-editing-in-canada
  28. https://blogs.microsoft.com/ai/crispr-gene-editing/
  29. https://blogs.microsoft.com/ai/molecular-biology-meets-computer-science-tools-in-new-system-for-crispr/
  30. https://www.davincisurgery.com/da-vinci-surgery/da-vinci-surgical-system/
  31. https://www.nhs.uk/conditions/laparoscopy/
  32. https://www.goshen.edu/academics/biology/pigbook/human-pig-comparisons/
  33. https://youtu.be/FfeN5jP6N5I
  34. https://vimeo.com/233019375
  35. https://www.scmp.com/news/china/article/2112197/chinese-robot-dentist-first-fit-implants-patients-mouth-without-any-human
  36. https://www.youtube.com/watch?v=DcKFLYPBLl8&feature=youtu.be
  37. https://www.cas.ca/English/who-are-anesthesiologists
  38. https://journals.lww.com/anesthesia-analgesia/fulltext/2017/02000/Failure_of_Sedasys___Destiny_or_Poor_Design_.43.aspx
  39. https://www.washingtonpost.com/news/the-switch/wp/2016/03/28/its-game-over-for-the-robot-intended-to-replace-anesthesiologists/?noredirect=on&utm_term=.aea03508a828
  40. https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpma/pma.cfm?id=P080009
  41. https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfres/res.cfm?id=138180
  42. http://www.nationalacademies.org/hmd/~/media/Files/Report%20Files/2015/Improving-Diagnosis/DiagnosticError_ReportBrief.pdf
  43. https://doi.org/10.1136/bmj.i2139
  44. https://www.techemergence.com/machine-learning-medical-diagnostics-4-current-applications/
  45. https://techcrunch.com/2014/01/26/google-deepmind/
  46. https://deepmind.com/applied/deepmind-health/working-partners/health-research-tomorrow/moorfields-eye-hospital-nhs-foundation-trust/
  47. https://deepmind.com/applied/deepmind-health/working-partners/health-research-tomorrow/moorfields-eye-hospital-nhs-foundation-trust/
  48. https://deepmind.com/applied/deepmind-health/working-partners/health-research-tomorrow/moorfields-eye-hospital-nhs-foundation-trust/
  49. https://ai.google/research/teams/brain/healthcare-biosciences
  50. https://www.forbes.com/sites/samshead/2018/08/13/google-deepminds-ai-can-detect-50-eye-disease-conditions-and-save-sight/#51324d9e27f3
  51. https://news.microsoft.com/apac/features/ai-and-preventative-healthcare-diagnosis-in-the-blink-of-an-eye/
  52. https://youtu.be/YlloKQPU-rs
  53. https://www.nature.com/articles/nature21056.epdf?author_access_token=8oxIcYWf5UNrNpHsUHd2StRgN0jAjWel9jnR3ZoTv0NXpMHRAJy8Qn10ys2O4tuPakXos4UhQAFZ750CsBNMMsISFHIKinKDMKjShCpHIlYPYUHhNzkn6pSnOCt0Ftf6
  54. https://skymind.ai/wiki/neural-network
  55. https://www.mcgill.ca/pathology/about/definition
  56. https://hms.harvard.edu/news/better-together
  57. https://parkinson.org/understanding-parkinsons/what-is-parkinsons
  58. https://www.webmd.com/parkinsons-disease/guide/parkinsons-disease-progression#1-2
  59. https://dzone.com/articles/using-ai-to-provide-early-diagnoses-of-parkinson
  60. https://www.apixio.com/wp-content/uploads/2017/03/Apixio_The-Risk-Adjustment-Digest.pdf
  61. https://www.apixio.com/press/apixio-launches-first-to-market-software-application-to-help-insurers-audit-their-medicare-advantage-reimbursement-coding/
  62. https://www.cancerresearchuk.org/health-professional/cancer-statistics/statistics-by-cancer-type/melanoma-skin-cancer/diagnosis-and-treatment#heading-One
  63. https://dermatology.ca/public-patients/skin/melanoma/
  64. http://nuffieldbioethics.org/wp-content/uploads/Artificial-Intelligence-AI-in-healthcare-and-research.pdf/
  65. https://www.nytimes.com/2017/09/01/opinion/artificial-intelligence-regulations-rules.html/
  66. https://www.gehealthcare.com//media/b3a5e32538454cf4a61a4c58bd775415.pdf

