Professor Mohanbir Sawhney on Technology’s Disruption in Healthcare
Kellogg MBA student Michael Salna (1Y 2021) interviews Professor Mohanbir Sawhney, associate dean of digital innovation and clinical professor of marketing, on technology’s disruption in the healthcare industry, especially during the COVID-19 pandemic.
Telemedicine has been one of the most widely adopted technologies during the COVID-19 pandemic. This modality is not only more convenient for patients and providers but can also expand access to care. How do you think telemedicine has disrupted, and will continue to disrupt, both outpatient and inpatient medicine?
The pandemic clearly disrupted the interaction between care providers and patients, particularly for immunocompromised and elderly patients, for whom simply coming into the hospital became a transmission risk. Telemedicine has been growing over the past few years, but COVID has accelerated a change in attitude on the part of patients, providers, and payers, as growing acceptance on the part of governments is facilitating reimbursement. These factors forced the adoption of telehealth, and its popularity can be seen in the rising valuations of companies such as Teladoc. In April 2020, in the early days of the pandemic, almost 43% of Medicare primary care visits utilized telehealth because the in-person alternative was not available.
Telemedicine also extends to the use of wearable devices that allow you to monitor patients remotely. For example, there are now smart pill bottles that, through a Bluetooth-enabled pressure sensor, can detect when the bottle has been opened. The sensor transmits the data to your smartphone, which uploads it to the cloud, so you can now monitor adherence in real time. The FreeStyle Libre glucose monitor from Abbott and Omron’s wearable blood pressure monitors are other examples in the growing number of remote patient monitoring options available.
These technologies allow for ongoing monitoring and potentially real-time changes to management (e.g., adjusting blood pressure medication dosing based on months of continuous data rather than a handful of sporadic clinic visits). In turn, medication adherence can be monitored to confirm that treatment plans are being followed.
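To make the adherence idea concrete, here is a minimal sketch of how a backend might compute an adherence rate from pill-bottle open events. The event format, field names, and once-daily dosing assumption are illustrative, not any vendor’s actual API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class BottleEvent:
    """A single 'bottle opened' event relayed from the Bluetooth sensor."""
    patient_id: str
    opened_at: datetime

def adherence_rate(events, start, days, doses_per_day=1):
    """Fraction of scheduled doses with a matching bottle-open event.

    Counts at most `doses_per_day` openings per calendar day, so
    repeated openings in one day are not double-counted.
    """
    opens_per_day = {}
    for e in events:
        day = e.opened_at.date()
        opens_per_day[day] = opens_per_day.get(day, 0) + 1
    taken = sum(
        min(opens_per_day.get((start + timedelta(days=i)).date(), 0), doses_per_day)
        for i in range(days)
    )
    return taken / (days * doses_per_day)

# Example: a once-daily medication over three days, with one missed dose.
events = [
    BottleEvent("p1", datetime(2021, 3, 1, 8, 5)),
    BottleEvent("p1", datetime(2021, 3, 3, 8, 12)),
]
print(adherence_rate(events, datetime(2021, 3, 1), days=3))  # ~0.67
```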
The adoption of telemedicine is expected to be uneven across specialties — for example, primary care and mental health will have the highest adoption as these are typically visit-based rather than procedural. Fields such as gastroenterology, however, may have less adoption as patients are required to come in for screening procedures.
Information from these sources can now close the loop through integration into Electronic Health Records (EHRs).
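As an illustration of what “closing the loop” can look like: many modern EHRs expose an HL7 FHIR API for exactly this kind of device data. The sketch below posts a home blood-pressure reading as a FHIR Observation; the endpoint URL and patient ID are placeholders, and a real deployment would also handle OAuth2 authentication.

```python
import requests

# Placeholder endpoint and patient ID; a real EHR would require OAuth2.
FHIR_BASE = "https://ehr.example.com/fhir"

observation = {
    "resourceType": "Observation",
    "status": "final",
    "category": [{"coding": [{
        "system": "http://terminology.hl7.org/CodeSystem/observation-category",
        "code": "vital-signs"}]}],
    "code": {"coding": [{"system": "http://loinc.org",
                         "code": "85354-9",
                         "display": "Blood pressure panel"}]},
    "subject": {"reference": "Patient/example-123"},
    "effectiveDateTime": "2021-03-01T08:05:00Z",
    "component": [
        {"code": {"coding": [{"system": "http://loinc.org", "code": "8480-6",
                              "display": "Systolic blood pressure"}]},
         "valueQuantity": {"value": 132, "unit": "mmHg"}},
        {"code": {"coding": [{"system": "http://loinc.org", "code": "8462-4",
                              "display": "Diastolic blood pressure"}]},
         "valueQuantity": {"value": 84, "unit": "mmHg"}},
    ],
}

resp = requests.post(f"{FHIR_BASE}/Observation", json=observation,
                     headers={"Content-Type": "application/fhir+json"})
resp.raise_for_status()
print("Created:", resp.json().get("id"))
```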
What is your opinion on how this data will be monitored? How are our healthcare systems going to be able to deal with this exponential proliferation of data and organize it in such a way that it can be used to provide better care?
There are three issues that strain the system: data security, data standardization, and data processing capacity. If we look at the EHR space, we are still using systems that are antiquated and barely able to meet evolving demands.
We will need to build systems that are designed to deal with streaming data, not batch data. For example, some physicians use AI-based transcription applications such as Saykara: the physician dictates, and the application picks up keywords and auto-populates the appropriate fields in the EHR.
There may be an opportunity here to build the next generation of EHR platforms, designed on open-source technology and distributed file systems, that can process and analyze data in real time. Otherwise, we will be swamped by the volume of data flowing in, because our current systems cannot handle it.
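To illustrate the streaming-versus-batch distinction, here is a minimal sketch that maintains running vital-sign statistics one reading at a time (Welford’s online algorithm), so an alert can fire the moment an anomalous reading arrives rather than after a nightly batch job. The alert threshold and warm-up count are illustrative assumptions.

```python
import math

class StreamingVitals:
    """Running mean/variance of a vital sign, updated per reading (Welford)."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations from the running mean

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def stddev(self) -> float:
        return math.sqrt(self.m2 / (self.n - 1)) if self.n > 1 else 0.0

    def is_anomalous(self, x: float, z: float = 3.0) -> bool:
        """Flag a reading more than z standard deviations from the baseline."""
        return self.n > 10 and abs(x - self.mean) > z * self.stddev()

# Each reading is processed as it streams in; no batch recomputation needed.
monitor = StreamingVitals()
for systolic in [128, 131, 126, 133, 129, 130, 127, 132, 129, 131, 128, 178]:
    if monitor.is_anomalous(systolic):
        print(f"Alert: systolic {systolic} deviates from baseline {monitor.mean:.0f}")
    monitor.update(systolic)
```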
This will also mean dealing with concerns about data security and HIPAA compliance, especially when we are talking about sensors that collect data from patients’ homes. Any device connected to this rich Internet of Things is a potential vector for a cyberattack. Because remote patient monitoring devices are so small, they do not have much edge security built in and can become entry points through which hackers reach the entire system’s network.
Another concern is the pace of adoption. In healthcare in general, technology adoption has always been a conundrum. It is the industry in the US with the most potential, but “IT hesitancy” is still prevalent among physicians and hospitals who have been doing things a certain way for many years.
Is the failure to adopt because there is not sufficient financial incentive to do so?
There is a perception that the business case from a revenue standpoint is not compelling, but that is false for two reasons. The first is accountable care. As care models shift toward accountable care organizations, there is a strong case for adopting technology: if hospitals invest more in monitoring and prediction, they can save a great deal on readmissions and avoidable procedures.
Secondly, the revenue cycle is hugely dependent on our ability to process data in real time. For example, some practices manage the process from the day the patient clicks an advertisement to the doctor visit to the post-op visit by combining two very different universes that are generally not well integrated: Customer Relationship Management (CRM) and enterprise systems (e.g., Salesforce and the EHR). The CRM system tracks everything that happens until the customer becomes a patient, and then the EHR tracks everything that happens after. As the surgeon picks up a scalpel, the procedure code is entered into the system, and the patient gets billed almost in real time. While typical revenue cycles are 90 days, productivity enhancements such as these can bring the cycle time down to less than 20 days.
The automation of the back-end billing and administration can significantly drive down costs, providing another opportunity for higher margins.
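A toy sketch of that CRM-to-EHR hand-off might look like the following. The record shapes and the claim-submission step are hypothetical, standing in for a Salesforce lead on one side and an EHR billing module on the other.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class CrmLead:
    """What the CRM (e.g., Salesforce) knows before someone becomes a patient."""
    lead_id: str
    ad_clicked_at: datetime
    name: str

@dataclass
class EhrEncounter:
    """What the EHR records once care begins."""
    patient_id: str
    procedure_codes: list = field(default_factory=list)

def submit_claim(patient_id: str, cpt_code: str) -> None:
    # Hypothetical billing hook: in practice this would call a clearinghouse API.
    print(f"Claim submitted for {patient_id}: CPT {cpt_code}")

# The hand-off: the CRM lead is converted into an EHR patient record,
# and each procedure code entered triggers billing almost in real time.
lead = CrmLead("L-001", datetime(2021, 1, 4, 9, 30), "Jane Doe")
encounter = EhrEncounter(patient_id=f"P-{lead.lead_id}")

for cpt in ["99213", "66984"]:  # office visit, then cataract surgery
    encounter.procedure_codes.append(cpt)
    submit_claim(encounter.patient_id, cpt)  # billed as soon as it is coded
```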
A final topic I wanted to discuss is the application of artificial intelligence and machine learning in healthcare. How have we seen these technologies employed during the COVID-19 pandemic?
The COVID-19 pandemic has provided a target-rich environment for AI and machine learning applications.
In the initial phases of the pandemic, the use cases were around early prediction of outbreaks and disease spread. A lot of work was done on building prediction models that were iteratively improved: in the early phases with data from SARS, then incorporating data from Wuhan, South Korea, and Italy, and then through the construction of aggregate ensemble models (“models of models”). Machines can incorporate data as it is generated to continually produce better prediction models.
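As a sketch of the “models of models” idea: combine several forecasters, weighting each by its recent accuracy, so the ensemble adapts as new case data arrives. The three component models here are toy stand-ins, and the smoothing factor is an illustrative choice.

```python
# A toy "model of models": weight each forecaster by inverse recent error.
def linear_model(history):   # toy stand-in for one component model
    return history[-1] + (history[-1] - history[-2])

def growth_model(history):   # toy stand-in for another
    return history[-1] * (history[-1] / history[-2])

def mean_model(history):
    return sum(history[-3:]) / 3

MODELS = [linear_model, growth_model, mean_model]

def ensemble_forecast(history, errors):
    """Weighted average of component forecasts, weights = 1 / (error + eps)."""
    weights = [1.0 / (e + 1e-9) for e in errors]
    forecasts = [m(history) for m in MODELS]
    return sum(w * f for w, f in zip(weights, forecasts)) / sum(weights)

# Rolling evaluation: update each model's error as actual counts arrive.
cases = [100, 120, 150, 190, 240, 300]
errors = [1.0] * len(MODELS)  # start with uniform weights
for t in range(3, len(cases)):
    history, actual = cases[:t], cases[t]
    for i, m in enumerate(MODELS):
        # Exponentially smoothed absolute error per model.
        errors[i] = 0.7 * errors[i] + 0.3 * abs(m(history) - actual)
    print(f"day {t}: ensemble -> {ensemble_forecast(cases[:t+1], errors):.0f}")
```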
Within the management and treatment of the disease, a very interesting application of AI has been the optimization of healthcare capacity. A predictive model was built that can estimate the likelihood of a patient developing serious complications within 96 hours of being discharged from the ICU. Understanding whether a patient is in the “green zone” or “red zone” tells you whether the bed can be freed up or not. Similar approaches were used to optimize ventilator capacity and can even be used to develop prioritization algorithms for vaccine distribution.
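A minimal sketch of such a green-zone/red-zone model, using scikit-learn’s logistic regression on synthetic vitals: the features, decision threshold, and training data are all illustrative assumptions, not the model described above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: [age, oxygen saturation, respiratory rate].
X = np.column_stack([
    rng.normal(65, 12, 500),   # age (years)
    rng.normal(94, 3, 500),    # SpO2 (%)
    rng.normal(20, 4, 500),    # breaths per minute
])
# Synthetic labels: 1 = serious complication within 96h of ICU discharge.
risk = 0.05 * (X[:, 0] - 65) - 0.4 * (X[:, 1] - 94) + 0.2 * (X[:, 2] - 20)
y = (risk + rng.normal(0, 1, 500) > 0.5).astype(int)

model = LogisticRegression().fit(X, y)

def zone(patient, threshold=0.3):
    """'green' = bed can likely be freed; 'red' = keep monitoring."""
    p = model.predict_proba([patient])[0, 1]
    return ("red" if p >= threshold else "green"), round(p, 2)

print(zone([72, 89, 28]))  # older, hypoxic, tachypneic -> likely red
print(zone([55, 97, 16]))  # younger, well-oxygenated -> likely green
```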
A final interesting use case of machine learning is the reading of CT scans. By feeding a model thousands of CT scans graded on opacification of lung fields and other radiologic markers of severity, AI can learn to read these scans and assign a coronavirus severity score. This score can, in turn, inform treatment decisions, such as estimating the probability that a patient will require ICU admission. Machine learning techniques such as these can greatly expand the capacity and throughput of radiology as well as pathology. In the future, we will see tremendous expansion of AI into the everyday workflows and decisions of clinicians across healthcare.
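For a sense of what such a model looks like, here is a minimal PyTorch sketch of a convolutional network that maps a CT slice to a severity score. The architecture and tensor shapes are illustrative; a real system would train on thousands of radiologist-graded scans rather than the random stand-in data used here.

```python
import torch
import torch.nn as nn

class SeverityCNN(nn.Module):
    """Toy CNN: one CT slice in, one severity score in [0, 1] out."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 256 -> 128
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 128 -> 64
            nn.AdaptiveAvgPool2d(1),              # global average pool
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, x):
        return self.head(self.features(x))

model = SeverityCNN()
# One batch of 4 single-channel 256x256 CT slices (random stand-in data).
slices = torch.randn(4, 1, 256, 256)
scores = model(slices)
print(scores.squeeze(1))  # four untrained severity scores between 0 and 1
```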