Is the Healthcare Industry Well-Equipped to work with AI Technologies?

While AI has been making steady inroads in healthcare, it comes with a fair share of challenges, ranging from data unavailability to outdated regulatory policies. In part one of this series on AI in Healthcare, we explore the hurdles the industry faces in adopting digital technologies today.

Think about the last time you visited a doctor at a hospital. You would have noticed that everything from your medical records for that visit to the medicines the doctor prescribed was recorded in the hospital’s electronic health system. Now, imagine every doctor’s visit you’ve ever had being recorded in one system, so that the next time you visit, your entire medical history shows up in one place – think of what that could do for accuracy and efficiency in clinical diagnosis. This, in simple words, is what technology, and AI in particular, can do for the healthcare industry. It can use data and machine learning to bring greater accuracy and innovation not just to clinical diagnosis and patient care, but also to administrative management, post-care management, drug discovery, clinical trials, and pain management.

Is AI new to healthcare?

Not really. The industry has been exploring AI technologies since as early as the 1960s. For example, artificial neural networks (ANNs) have been used for years to mimic the way the neurons in our brain process signals, and they can help predict whether a patient is likely to develop a disease in the future. Similarly, surgical robots have been used in the U.S. since the early 2000s to assist surgeons in minimally invasive procedures such as stitching wounds and repairing neck injuries.
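To make the neural-network idea concrete, here is a minimal sketch of a single artificial neuron turning patient features into a disease-risk score. The feature names, weights, and bias below are entirely hypothetical, chosen only for illustration – a real clinical model would learn its weights from large volumes of patient records.

```python
import math

def sigmoid(x):
    """Squash a weighted sum into a 0-to-1 risk probability."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical hand-set weights, purely illustrative.
WEIGHTS = {"age": 0.04, "systolic_bp": 0.02, "smoker": 0.9}
BIAS = -6.0

def risk_score(patient):
    """One artificial neuron: weighted sum of inputs passed through a sigmoid."""
    z = BIAS + sum(WEIGHTS[k] * patient[k] for k in WEIGHTS)
    return sigmoid(z)

# An older patient with high blood pressure who smokes scores a higher risk
# than a younger, healthier one.
print(round(risk_score({"age": 62, "systolic_bp": 150, "smoker": 1}), 3))
print(round(risk_score({"age": 30, "systolic_bp": 110, "smoker": 0}), 3))
```

A real ANN stacks many such neurons in layers and tunes the weights automatically during training, but the core computation is the same as in this toy version.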

The question then arises – if tech has been such an integral part of healthcare for years, why has its adoption been slow, and largely restricted to administrative tasks and customer service?

Challenges of AI Adoption

– Lack of access to data: AI and ML systems need to be fed relevant, up-to-date data in order to be trained to identify and predict patterns with greater accuracy. But patient data today is skewed and scattered across silos, casting doubt on the accuracy and relevance of machine-generated predictions. For example, in clinical diagnosis, an ML system can make more accurate predictions when it has data on the patient’s past medical history, current course of treatment, and post-treatment care. But past records are often incomplete, and many patients don’t follow post-treatment processes and doctor visits judiciously, leaving room for error in ML predictions.

– Security & privacy concerns: Where there is data, there are concerns about its privacy. According to a report by the Protenus Breach Barometer, in 2019 alone, over 41 million patient records were breached. In 2020, especially once the pandemic broke out and doctors and administrative staff were overworked, healthcare organizations saw a spike in ransomware attacks. For instance, a ransomware attack forced the Champaign-Urbana Public Health District (Illinois) to shut down its computers for three days, and the district was forced to pay US $300,000 in ransom.

Cyberattacks can cost healthcare companies dearly. Understanding the financial threats surrounding data privacy, they closely guard their data in highly secure, compliant systems. Creating a single repository of all data to feed AI devices and generate unified predictions thus seems far-fetched unless these security concerns are addressed.

– Regulatory obstacles: Regulatory policy has not kept pace with technological advancement in healthcare. On one hand, accountability in healthcare is high; on the other, unlike a drug or a vaccine, healthcare software is updated regularly. Given the risk involved in using AI devices to diagnose and predict, every update may need regulatory approval, which can be tedious.

For example, the U.S. Food & Drug Administration has been taking several steps to create policies around the safe and effective use of digital healthcare technologies. In one instance, it developed a policy for device software functions and mobile medical applications, particularly those that pose a higher risk to patient safety. The policy seeks to approve the software, not the device itself. Hence, like any typical software, when the device is updated (say, when it is fed more data to generate better results), its technical function also changes, which could require FDA approval before release. Today, this is a cumbersome and time-consuming process, requiring developers to plan in advance for the versions they intend to incorporate into the device.

– Lack of personalization: Healthcare staff and patients believe that AI devices can’t replace doctors, whether in diagnosing or in interacting with patients. In 2019, Harvard Business Review covered a study published in the Journal of Consumer Research, which found that even when presented with evidence that AI can outperform doctors, patients strongly resisted healthcare provided by an AI device. The patients in the study believed that AI devices don’t take individual characteristics and idiosyncrasies into account when producing a diagnosis.

Seeking a doctor for a physical or mental ailment is a very personal process for most people. Replacing doctors, nurses, or even administrative staff with a chatbot or an ML device may not be a solution. On this front, AI’s functionality may be limited to working alongside healthcare staff, rather than replacing them, which addresses a huge concern around technology replacing workers in this sector.

What does this mean for the industry?

Aside from the challenges cited above, healthcare tech faces still more hurdles: the cost of incorporating digital technologies, the lack of skilled staff to monitor and operate them, and, most concerning of all, the ‘black box AI’ factor. AI systems generate self-directed predictions from the millions of data points fed into them, and there is still no reliable way to identify and interpret why a device arrived at a particular prediction – hence the term. In healthcare, where a wrong diagnosis can cost dearly, this opacity is especially serious.

That being said, AI has proved quite effective in areas such as telemedicine and pain management, especially since the pandemic began. A Deloitte study shows that 73% of healthcare companies increased their AI funding in 2020, and healthcare leaders have found AI effective in monitoring COVID-19 cases and in vaccine development and distribution.

AI has the potential to create value-based healthcare in the future, a topic we will explore in part two of this series.
