By Mike Farahbakhshian, By Light
In this month’s article, Mike Farahbakhshian walks us through the greatest health IT challenge of the next decade: securing medical devices. With cyberattacks supplanting kinetic attacks, medical devices present a prominent target for enemy states, organized crime, industrial sabotage, and rogue actors. The consequences will be life or death. Find out what Federal Health IT thought leaders are doing to protect us all. Suggested drink pairing: dry cider. Reading time: 10 minutes, but you can check out any time you like.
There is No Such Thing as a Fish: Blurring Lines Between Information Technology and Medical Devices
Classifying things is never cut-and-dried. Basing your categorization on appearances leads to bizarre circumstances, as followers of Aristotle often found. A more scientific approach to classification doesn’t help, either. When I was a child, I wanted a pet raptor. Just because my macaw (whose IQ rivals that of cream cheese) is technically a theropod dinosaur doesn’t mean I’ve fulfilled the spirit of that dream.
Likewise, just because things share a common origin and some under-the-hood similarities doesn’t mean they are all that similar. There’s no easy catch-all group that encompasses what we think of as “fish.” There’s also no easy catch-all group that can truly explain where information technology stops and medical devices begin.
So, what is a medical device?
ISO Standard 11.040 describes medical equipment fairly well. Strictly speaking, a medical device can be anything involved in the provision of care: on the low-tech side, tongue depressors, scalpels, and implants; on the higher-tech side, network-aware infusion pumps, pacemakers, and MRI machines.
For the purposes of this article, we aren’t worried about plain old wooden tongue depressors (though I’m sure the Russians have found a way to hack those). Our concern lies with medical devices that are network aware. If it’s on the Internet, it can be compromised.
What happens when someone uses ransomware to shut down a piece of surgical equipment mid-operation? Will this be the year we find out? We do not have the security monitoring in place to prevent it.
Barbarians Smashing Gates or Thieves Picking Locks?
In general, people are terrible at estimating risk. That blind spot is exactly why terrorism works: 2,977 innocent people tragically died on 9/11, and the results included the PATRIOT Act, a controversial expansion of Government surveillance, two wars, and a new cabinet-level Department of Homeland Security. Meanwhile, in 2016 alone, 35,092 people died in traffic fatalities; in fact, since 1946, we haven’t seen fewer than 30,000 traffic fatalities in a year. Yet there has been no massive societal overhaul, despite the fact that you’re far more likely to meet your doom behind the wheel of a car than at the hands of a terrorist.
People are bad at estimating risk because the spectacular and the grandiose leave a deeper emotional impact than the mundane and the common. We literally become desensitized to experiences that do not traumatize us.
So it is with estimating cyber-risk. Most literature I see on the subject portrays the ultimate doom scenario: virulent worms indiscriminately destroying medical devices, leading to death and destruction! Massive breaches of PHI dumped onto the Internet! A “grey-goo” scenario where every networked pacemaker is fried until it is useless, like a slow-burning EMP!
Okay, these are spectacular and terrifying, but the far more insidious reality is that medical device breaches will most likely be used as a vector for extortion or gain. I’ll discuss three examples here that are far more likely than the Code Blue Apocalypse:
- bad actors may disclose vulnerabilities to manipulate stock prices;
- identities may be stolen for financial gain;
- medical devices may be used as a platform for attacks on non-medical systems.
Martin Shkreli, Meet Guccifer 2.0
Bad actors have already disclosed vulnerabilities to manipulate stock prices. I will say this again, because it bears repeating: this has already happened. In 2016, a security research firm, MedSec, discovered vulnerabilities in St. Jude Medical pacemakers. Before disclosing them, MedSec partnered with an investment firm (the extremely appropriately named Muddy Waters Capital) to short St. Jude’s stock.
Could it get any worse? Why yes, it can.
Your Identity Pays My Medical Bills
Medical records are a hot commodity, hotter than credit cards. In fact, they’re so hot that the price has been plummeting; a full medical record can be purchased for under $50 on the Darknet. For the price of a decent bottle of bourbon, you can have access to someone’s demographic information (including SSN), medical history, and notes such as allergies or chronic conditions.
The medical record alone can allow a bad actor to receive medical treatment and shunt the bill onto the poor victim. Worse, the record, especially the SSN, can be used as part of a “fullz,” or full identity package, which can be used to open (and exhaust) lines of credit.
Once again: this has already happened, repeatedly. In September 2014, Waco-based American Income Life Insurance was the subject of a medical records breach. These “fullz” were selling for $6.40 in bulk on the Darknet. In 2015, 78.8 million health records were stolen from Anthem and sold on the Darknet.
What were these stolen records used for? Let the hacker tell you in his or her own words:
Moreover, the Institute for Critical Infrastructure Technology has stated, in no uncertain terms:
The worst part of this? In stealing information for financial gain, hackers may inadvertently overwrite or delete entries like adverse drug reactions or allergies. Your wallet might be the target, but you might be the collateral.
How can this get any worse? I’m glad you asked.
Instruments of Destruction
I predict that the use of medical devices as platforms for attacks on other systems will be the most common threat in 2017 and beyond. As grim as it sounds, there are only so many hackers, and after a certain point there will be enough stolen identities to saturate the market. (The recent price drop on the Darknet reflects this.) When Social Security Numbers seemingly grow on trees, it’s easier to buy one than to go to the effort of hacking a medical device.
However, medical devices, especially ones that are not on isolated medical networks, can be used as stepping stones for attacks on other services. If you’re hacking a hospital device, these targets can include financial/billing servers, industrial control systems, national security systems piggybacking on dark fiber/SCADA, and many more. If you’re hacking a commodity medical device (a FitBit, a smartphone health app, etc.), you can leverage a CloudBleed-like vulnerability to attack … well, any other public-facing resource, from Government servers to banks to your best friend’s Facebook page.
Once again, this method of attack has been done before, by many actors: Anonymous, individual hackers, and even state-sponsored agents. An entire category of worms called “Tilded,” including Duqu and Stuxnet, were developed by state actors to disable industrial control systems. Stuxnet was used to disable Iranian nuclear plants by damaging their centrifuges. While I will shed no tears over Iran’s nuclear program being set back, I do worry about our critical infrastructure being attacked in a similar way. We already know our power grid is woefully vulnerable, but to have a hospital network be its assailant adds insult to injury.
Is There a Solution?
Of course there is. The human species has always lived on the precipice of doom, well beyond our shaky relationship with automobiles and power grid security. We barely survived the Toba supereruption. Eventually, either the Yellowstone supervolcano will blow or an asteroid on the scale of the K-T extinction will hit us. Even worse, disco may yet return. Yet we don’t hang our heads in despair: we are an ingenious little species, and while we cannot eliminate risk, we can mitigate it.
In this sense, medical device vulnerabilities are like drug side effects: we must be aware of the risks, acknowledge them, and accept them, but only if the benefits outweigh them.
Who will help mitigate these risks?
The short answer: It’s going to take everyone, working together.
While the FDA regulates medical device security, it cannot do so in a vacuum. Medical devices, after all, are critical infrastructure as important as the power and telephony grids. Executive Order 13636 provides a framework for collaboratively sharing insights and working together to secure all critical infrastructure.
This collaboration includes industry-Government partnerships such as the National Health Information Sharing and Analysis Center (NH-ISAC) and the Medical Device Innovation, Safety and Security Consortium (MDISS). These two organizations have joined forces to create the Medical Device Vulnerability Intelligence Program for Evaluation and Response (MD-VIPER), a clearinghouse for cybersecurity vulnerability disclosure and reporting.
However, none of this will work without collaboration from manufacturers, and here lies the problem: when sharing vulnerability information about medical devices, how do we address concerns about liability on the part of manufacturers and healthcare providers? My suggestion: we need tort reform, in the form of liability relief, to encourage information sharing. But what about lost sales, or a stock price drop, if information about a vulnerability gets out? This muddies the waters (pun intended): how do you fight that?
What do we do?
Assuming that manufacturers want to play ball, here are some best practices.
- Manufacturers must ensure that the three “deadly sins” (hard-coded or default passwords, poor or vulnerable encryption, and lack of patching) are mitigated.
- Manufacturers – led by Philips – are pushing for a Software Bill of Materials (SBOM), a manifest of all software packages and versions, for medical devices.
- These vendors should use the Manufacturer Disclosure Statement for Medical Device Security (MDS2) and the Medical Device Risk Assessment Platform (MDRAP) to properly assess and categorize risks. Where possible, more stringent questionnaires (such as the Mayo Clinic’s and Kaiser’s) should augment the MDS2.
- Risk assessments before purchase are ideal, as they allow the vendor to commit to fixes for identified problems. You probably won’t get the fix before delivery, but at least the vendor can commit to a timetable.
- While physician input is key, user requirements must be kept in a separate stream. Why? Physicians and other end users are not going to ask about, or care about, information security; they are focused on ease of use, which often conflicts with security. Someone else needs to develop the security requirements.
- Medical devices get reused and recycled, so vendors must build in security from the design stage: for example, the ability to cleanse devices of patient information when they are returned or resold. In this sense, ISO 80001-2-8:2016 is a good guideline.
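To make the SBOM idea above concrete, here is a minimal sketch of what an SBOM-driven audit could look like. The device name, component versions, and advisory feed are all invented for illustration (loosely CycloneDX-flavored) and do not reflect any vendor’s real format:

```python
# Illustrative only: a minimal Software Bill of Materials (SBOM) for a
# fictional infusion pump, and a naive check against an advisory list.

# Hypothetical SBOM: every software package and version on the device.
sbom = {
    "device": "ExamplePump 3000",  # fictional device name
    "firmware": "4.2.1",
    "components": [
        {"name": "busybox", "version": "1.21.0"},
        {"name": "openssl", "version": "1.0.1f"},
        {"name": "custom-pump-ctl", "version": "2.3"},
    ],
}

# Toy advisory feed mapping (name, version) to a vulnerability ID.
known_vulnerable = {
    ("openssl", "1.0.1f"): "CVE-2014-0160 (Heartbleed)",
}

def audit(sbom, advisories):
    """Return the SBOM components that match known advisories."""
    findings = []
    for comp in sbom["components"]:
        key = (comp["name"], comp["version"])
        if key in advisories:
            findings.append((comp["name"], comp["version"], advisories[key]))
    return findings

for name, version, advisory in audit(sbom, known_vulnerable):
    print(f"{sbom['device']}: {name} {version} -> {advisory}")
```

In practice the advisory feed would come from a source like the NVD or a manufacturer’s disclosure program; the point is that a machine-readable manifest makes this kind of check automatable across an entire device inventory.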
Best Practices? Lame. You’re a futurist. Talk about Big Data or something.
Fine, fine. In addition to this common-sense approach, I recommend the following tech-forward approaches to solving the problem.
For example, software-defined networks can help identify a “fleet” of like devices and manage them in a uniform way. However, a software-defined network is itself vulnerable to malware, so equal rigor must go into securing it. Here the VA is actually a leader, with its Medical Device Isolation Architecture (MDIA); most healthcare networks, by contrast, are too flat and vulnerable.
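As a toy illustration of the “fleet” idea (all device data here is invented), grouping an inventory by a (vendor, model, firmware) fingerprint is the kind of bookkeeping an SDN policy engine could use to give each fleet its own isolated segment and shared policy:

```python
# Illustrative sketch: group a device inventory into like "fleets" by
# (vendor, model, firmware) fingerprint, so each fleet can be assigned
# one network segment and one policy instead of ad-hoc flat networking.
from collections import defaultdict

# Invented inventory records for the example.
inventory = [
    {"id": "pump-01", "vendor": "Acme", "model": "Pump-X", "firmware": "1.4"},
    {"id": "pump-02", "vendor": "Acme", "model": "Pump-X", "firmware": "1.4"},
    {"id": "mri-01", "vendor": "Orion", "model": "MRI-9", "firmware": "7.0"},
]

def fleets(devices):
    """Map each (vendor, model, firmware) fingerprint to its device IDs."""
    groups = defaultdict(list)
    for d in devices:
        groups[(d["vendor"], d["model"], d["firmware"])].append(d["id"])
    return dict(groups)

for fingerprint, ids in fleets(inventory).items():
    print(fingerprint, "->", ids)
```

The payoff is operational: a patch, a quarantine rule, or a monitoring profile applies to a whole fleet at once, which matters when a hospital runs hundreds of identical pumps.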
Machine learning can also help identify potential threats that are not intuitive. One such technique, Non-Obvious Relationship Analysis (NORA), was originally developed to identify casino fraud. I strongly recommend that thought leaders in the industry use NORA, and other analytic tools such as Watson, to predict and neutralize threats before they become catastrophic.
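To show the flavor of a NORA-style pass (with wholly invented hosts and connection logs), the sketch below flags devices that share a non-obvious link, in this case a common external endpoint, with a known-compromised device:

```python
# Toy sketch of non-obvious relationship analysis: flag devices linked
# to a known-compromised device through a shared external endpoint.
# All device names, IPs, and logs are invented for illustration.
from collections import defaultdict

connections = [
    ("infusion-pump-7", "203.0.113.50"),
    ("ct-scanner-2", "203.0.113.50"),  # shares an endpoint with the pump
    ("ct-scanner-2", "198.51.100.9"),
    ("ekg-cart-1", "192.0.2.77"),
]
known_compromised = {"infusion-pump-7"}

def linked_devices(conns, bad_devices):
    """Return devices that contacted any endpoint a bad device also used."""
    by_endpoint = defaultdict(set)
    for device, endpoint in conns:
        by_endpoint[endpoint].add(device)
    flagged = set()
    for devices in by_endpoint.values():
        if devices & bad_devices:
            flagged |= devices - bad_devices
    return flagged

print(linked_devices(connections, known_compromised))
```

A real deployment would correlate far richer attributes (accounts, certificates, timing patterns) across many data sources, but even this two-hop link is the kind of relationship a flat log review would miss.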
Finally, I’d recommend more Cooperative Research and Development Agreements (CRADAs) with Federal agencies. For example, Underwriters Laboratories (UL) has a CRADA with the Department of Veterans Affairs to improve standards for medical device security certification. I applaud the effort, but these CRADAs must grow in number and scope to include manufacturers, Cloud providers, networking and utility vendors, and every Federal agency or standards body that deals with healthcare.
Give a Man a Fish, and He Eats for a Day: But If There Is No Such Thing as a Fish, Does He Eat At All?
Hopefully, this long walk through the valley of the shadow of death has lit a fire under the seats of thought leaders and decision makers. At the very least, this article may be printed and used to line the cages of well-meaning but not-too-bright theropod dinosaurs.
However, it’s not enough to simply take action: we must take intelligent action. As long as the line between medical device and commodity IT is blurred; as long as there are conflicting incentives to disclose information (for public safety) and withhold information (to protect stock prices and liability); and as long as there are multiple disparate groups with their own agendas as part of this dialogue, we can’t expect an easy fix. Much like anything important in life, it will require a soft touch and hard work.
Yet this daunting task must be done. Hardworking civil servants and insightful members of industry all want to see medical devices made safer. We must stay vigilant and remind ourselves that it is in all of our interests to see our medical devices secured.