This interview with Booz Allen Hamilton’s Chief Medical Officer Kevin Vigilante and Principal Joachim Roski looks at the term “AI Winter”; the opportunities for AI in Federal Health; the challenges to AI adoption; and next steps for industry.
Defining AI Winter
Artificial Intelligence is not a new concept; in fact, it has been discussed since the 1950s. Its progress, though, has not been steady, but has instead been burdened by periods of setback, or AI Winters, during which interest and investment wither. Some of this decline can be attributed to technology not meeting expectations, which leads to declines in funding, research, and applications.
There is also a hype cycle, explained well by Gartner, that most technologies go through, and it seems the greater the hype, the steeper the decline when things do not move forward as hoped. We fear we may be headed toward another AI Winter unless industry can get ahead of some of the emerging security and integration risks.
The Damage of an AI Winter
The one step forward, two steps back pattern ultimately gets us nowhere. A recent Institute of Medicine report on AI in health illustrated enormous innovation potential for patients and consumers, clinicians, Healthcare administrators, public health officials, researchers, and others. Use case after use case demonstrates that potential. If we think of robotic process automation (RPA), which is fairly low risk compared with AI that may guide decisions, we can demonstrate a short-term gain by taking the burden of repetitive work off humans, and start to build trust.
We must be realistic in setting expectations of trust and ensure we are developing AI solutions in an ethically responsible way. If we do not move forward responsibly, future scandals associated with cyber attacks and privacy breaches will set everything back. Consider the nuclear industry as an example: once trust is lost, it can be very difficult to reclaim ground.
The issue of security has been recognized by Governments and large entities around the globe, and many have formed strategic plans around the use of AI. There is an aura to AI that provokes anxiety in some people, perhaps because of its interesting characteristic of mimicking human intelligence, and we must recognize that.
There needs to be a mindful approach to AI innovation that understands the accompanying risks if something is poorly implemented.
We have identified at least 10 significant risks that span the AI lifecycle: identifying what is needed, and developing, implementing, and maintaining a solution. There are distinct risks in each phase, including privacy violations, biased algorithms, and insecure hardware. We have also identified at least 16 evidence-based practices in data science that can be implemented to mitigate these risks.
It is also important to understand that AI has a black-box problem of explainability. When we talk about clinicians and the potential for AI to provide decision support, we need to understand how to get those clinicians to trust AI when they do not understand how it reached a particular recommendation, especially when the decisions they make based on it will have a real and concrete impact.
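One way to make that black box less opaque is to use models whose outputs can be decomposed into per-feature contributions a clinician can inspect. The sketch below is a hypothetical illustration, not a real clinical model: the features, weights, and baseline values are invented, and it shows only the general idea of attributing a linear risk score to its inputs so a user can see why a recommendation was made.

```python
# A minimal sketch of feature attribution for a *linear* risk score.
# All features, weights, and baselines here are hypothetical, invented
# purely to illustrate the concept of an inspectable explanation.

WEIGHTS = {"heart_rate": 0.04, "temperature": 0.9, "lactate": 1.5}
BASELINES = {"heart_rate": 80.0, "temperature": 37.0, "lactate": 1.0}

def explain_score(patient):
    """Return each feature's contribution to the score, relative to a
    baseline patient, sorted so the biggest drivers appear first."""
    contributions = {
        name: WEIGHTS[name] * (patient[name] - BASELINES[name])
        for name in WEIGHTS
    }
    return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

patient = {"heart_rate": 120.0, "temperature": 39.0, "lactate": 3.5}
for feature, delta in explain_score(patient):
    print(f"{feature}: {delta:+.2f}")
```

Real clinical models are rarely this simple, but the same principle, surfacing which inputs drove a recommendation and by how much, is what lets a clinician sanity-check an AI suggestion rather than accept it blindly.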
The Potential of AI in Healthcare
There are so many potential applications for AI in Healthcare. Think of patients and their families and what they can do outside of the hospital with devices and wearables. From the physical and mental health view there are opportunities to monitor responses and to have those responses provide feedback. The potential for clinical decisions based on multiple devices and various inputs, with more robust guidance based on trends and potential problems is incredible.
From the public health view, we have seen work related to COVID trends and prediction. While some of the initial modeling was off, in part because of our inexperience with the infection and the unpredictability of human behavior, COVID represented a rich data environment, with data signals coming in from across the globe at breathtaking speed. AI allowed us to process that data in a way we could not have before.
Within the hospital setting robotic surgery can at times achieve results that would be much more difficult for humans. Consider the potential for computer vision in which a patient at risk of a fall could be monitored remotely. Think about self-driving cars and how AI, operating within a specified context, can change the course of action based on what it encounters and potential implications for Healthcare.
The Data Premise
In order for AI to be useful, you have to have enough data to make use of the technology. The Department of Defense’s (DOD) Advana, an open architecture platform, enables the integration of multiple streams of data and digests a critical mass of data to the point that deploying AI becomes more productive. The Joint Artificial Intelligence Center (JAIC) is DOD’s AI Center of Excellence and was established to accelerate the Department’s adoption of AI.
There is a huge opportunity here to learn what we can from what DOD is doing in terms of fundamental techniques, to learn from the investment being made in the civilian space and to harness the power of big data to really employ AI.
Agencies that have not yet started to think about what AI might mean for them need to get started. It is the future and cannot be ignored. Those in this position should approach it from the top down, ensuring the right leadership is engaged in the decisions. Key steps include determining what is important for the organization, how to prioritize, how to govern individual projects, and how to ensure everything is handled appropriately within the laws and regulations in place.
It is also important to understand that there are different types of AI solutions. Some, like RPA, are simple and straightforward. These can be applied discretely to back office functions and provide quick ROI.
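As a hypothetical illustration of why RPA-style automation is comparatively low risk, the sketch below routes back-office claim records by fixed, auditable rules. The field names and thresholds are invented; the point is that, unlike a learned model, every decision path is explicit and can be inspected.

```python
# A minimal sketch of the kind of rule-based back-office task RPA
# automates: routing claim records to work queues by fixed rules.
# Fields and thresholds are hypothetical, purely illustrative.

def route_claim(claim):
    """Apply fixed, auditable rules to decide which queue a claim goes to."""
    if claim.get("missing_fields"):
        return "manual_review"     # incomplete forms go back to a human
    if claim["amount"] > 10_000:
        return "senior_adjuster"   # high-value claims get extra scrutiny
    return "auto_approve"          # routine claims are processed directly

claims = [
    {"id": 1, "amount": 250, "missing_fields": []},
    {"id": 2, "amount": 25_000, "missing_fields": []},
    {"id": 3, "amount": 90, "missing_fields": ["diagnosis_code"]},
]
for c in claims:
    print(c["id"], route_claim(c))
```

Because the rules are deterministic and human-readable, this class of automation earns trust quickly, which is exactly the short-term win described above.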
Finally, there must be a long-term commitment. Implementing AI will be a multi-year effort that increases in intensity as projects move along and uncover what needs to be implemented.
Challenges to AI
There are a number of companies in the industry now marketing solutions that may work in one sector but not another. There must be a way to see behind the hype to understand proven performance, which means solutions that can be implemented and that repeatedly produce results.
Trust is among the biggest challenges and the hardest to overcome as we read more about ransomware and privacy breaches. Collectively, we need to put our shoulders behind all AI focused efforts to ensure we don’t continue to erode trust in AI because of specific solutions. To that same end, we must recognize that the ethical stakes in Healthcare can be higher than in other industries. If AI sends me on a bad vacation, I learn a lesson. As a physician, and as the agent of a patient, clinical decisions based on AI must meet a much higher bar. Clinicians have to have an unwavering trust in the validity of AI and then AI must meet those expectations – every time.
An Infrastructure of Trust
Physicians order tests all of the time and generally have no idea how the laboratory equipment produces the results for those tests. They trust and accept the results because they are produced within an infrastructure that creates trust: production and manufacturing standards, QA standards, and regulatory bodies like the FDA. This is standard practice.
Building those kinds of parallels in AI, such as traceable capabilities, certification, and assessments intended to engender trust, can start to change the view of AI and set the standards for trust.
The Role of Industry
Now is the time for industry to come together. We must develop certification thresholds that are replicable and that provide reassurance to those who may not be sophisticated about AI.
Ethical guidelines have been published by a number of organizations, but without much convergence on standards or even on how to define AI. This means there is still an opportunity for industry to identify standards we can all believe in, along with independently verifiable mechanisms to hold us all accountable for meeting them. It may take only a small group, but once those standards and guidelines are proven, others will follow. Now is also the opportunity for industry to self-regulate, rather than having standards imposed from outside.
Such guidelines would also be an opportunity for Government to signal its support. Senators recently proposed training for Federal procurement professionals; a seal of approval for a well-developed solution would make those officials’ jobs much easier.
NIST recently issued recommended standards for AI. Across Government and industry, the trend is moving in this direction. We need the action of the right players to ensure we are moving in the right direction to continue the forward movement and to ensure we don’t see another AI winter.
About Joachim Roski
Joachim Roski has more than 20 years of experience delivering solutions in care transformation, Healthcare quality and safety improvement, Healthcare business and outcome analytics, and population health improvement. He uses artificial intelligence, machine learning, and data science to support better decision making and improvement in Healthcare management and patient outcomes.
About Dr. Kevin Vigilante
Executive Vice President Dr. Kevin Vigilante is a leader in Booz Allen’s health business, advising Government Healthcare clients at the Departments of Health and Human Services, Veterans Affairs, and the Military Health System. He currently leads a portfolio of work at the Department of Veterans Affairs.