For years, many healthcare organizations tended to be skeptical and resistant (if not outright hostile) to the idea of storing their data, particularly protected health information, in the cloud. IT and security decision-makers had deep reservations about stashing such sensitive data anywhere but their own on-premises servers, safe under their own watchful eyes.
But that changed not long ago, and it changed quickly. To the surprise of many, over the past few years healthcare providers have been getting markedly more comfortable putting their trust in the cloud.
"If you had asked me in 2011, I would have predicted that healthcare would still be one of the slower moving industries," said Jason McKay, chief technology officer of Logicworks, a managed hosting company that helps organizations in many sectors build and manage cloud infrastructure. "We were surprised at the uptake."
Part of that is due to the obvious benefits of speed and agility that remote hosting has to offer. Part of it has to do with the recent attention paid to healthcare's very specialized needs by giants such as Amazon Web Services, IBM, Google and Microsoft Azure, notably their willingness, only in the past several years, to sign HIPAA business associate agreements and gain HITRUST certification.
Yet another reason seems to be that, as the relentlessness and creativity of malware, ransomware, spyware and other cybersecurity exploits have ramped up in recent years, these major cloud players have been upping their own games – rolling out advanced artificial intelligence and machine learning capabilities to combat those threats to protect their clients' hosted data.
Now the answer seems obvious to many: Who's more likely to have a handle on the myriad threats to sensitive patient data? A small hospital with a dozen or so capable but overmatched IT staffers? Or a global hosting company with hundreds of security and AI experts, laser-focused on protecting information assets?
The trust level in the cloud has evolved to the point where, as Beth Israel Deaconess Medical Center CIO John Halamka, MD, put it: "I predict that five years from now none of us will have data centers. We're going to go out to the cloud to find EHRs, clinical decision support, analytics."
Automating detection of anomalous behavior
For his part, McKay – who, as CTO of Logicworks, works with major cloud companies – has some advice for hospital CIOs and CISOs looking to avail themselves of some of the recent AI-driven innovations in healthcare security.
It's not just a matter of adopting new technologies such as Amazon's Macie and GuardDuty, he said. As useful as those tools are in their approach to stopping threats, it's important to have good processes in place for enterprise-wide infrastructure security, because often those capabilities will still require some tough decisions about the risks they sniff out.
With Macie, Amazon deploys artificial intelligence to automate the discovery, classification and protection of sensitive data in the AWS cloud. The tool can detect sensitive data such as protected health information or Social Security numbers and, with dashboards and alerts, offers visibility into how the data is being accessed or moved in the cloud. The technology looks out for anomalies and can issue alerts when it finds unauthorized access or data leaks.
GuardDuty, meanwhile, is described by Amazon as a managed threat detection service that scans continually for any malicious or unauthorized behavior. Its threat intelligence uses machine learning to find anomalies in the account and workload activity – looking for unusual API calls, for instance, or potentially unauthorized deployments that could point toward an account compromise. When a potential threat is detected, the service delivers a detailed security alert to the GuardDuty console and AWS CloudWatch Events to help make alerts more actionable.
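For teams that want to act on those findings rather than just read them in the console, that CloudWatch Events integration is the hook. Below is a minimal sketch in Python with boto3 that routes GuardDuty findings to an SNS topic for the security team; the rule and topic names are illustrative, and the topic policy a real deployment needs to let CloudWatch Events publish is omitted.

```python
# Minimal sketch: forward GuardDuty findings to an SNS topic via a
# CloudWatch Events rule. Names are illustrative; error handling and the
# topic policy that lets CloudWatch Events publish to SNS are omitted.
import boto3

events = boto3.client("events")
sns = boto3.client("sns")

# Hypothetical topic the security team subscribes to.
topic_arn = sns.create_topic(Name="guardduty-findings")["TopicArn"]

# Match every GuardDuty finding; a real rule might filter on severity.
events.put_rule(
    Name="guardduty-findings-to-sns",
    EventPattern='{"source": ["aws.guardduty"], "detail-type": ["GuardDuty Finding"]}',
    State="ENABLED",
)

events.put_targets(
    Rule="guardduty-findings-to-sns",
    Targets=[{"Id": "security-team-topic", "Arn": topic_arn}],
)
```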
Microsoft has also been innovating on its own cloud-based machine learning tools, of course, such as adaptive application controls in Azure Security Center, which can "analyze the behavior of Azure virtual machines, create a baseline of applications, group the VMs and decide if they are good candidates for application whitelisting, as well as recommend and automatically apply the appropriate whitelisting rules," as Ben Kliger, senior product manager at Azure Security Center, explained in a blog post.
The tool helps surface apps that can be exploited to bypass a whitelisting solution, and provides full management and monitoring capabilities through which clients can change an existing whitelist and be alerted on violations of it, Kliger added.
IBM, meanwhile, updated its Resilient security platform last month with orchestration capabilities that combine machine learning and human intelligence to enhance incident response. And at its Google I/O 2018 conference last week, Google revealed new artificial intelligence-powered security capabilities.
Human intelligence is a key first step
When asked what he'd advise healthcare decision-makers to be thinking about when they consider cloud hosting and the automatic threat detection it enables, McKay's first two suggestions have more to do with the gray matter of carbon-based life forms – clear communication between hospital employees and IT staff – than with any artificial intelligence capabilities.
In his consulting work, "the first thing that comes up early – and if it doesn't, we introduce it early – is a clear understanding of responsibility for all parties," said McKay.
"Now you're on a public cloud platform, so there's at a minimum one extra player who is responsible for something that, when it was hosted in-house, was entirely in the purview of the institution hosting the data," he explained. "So it's important to understand the shared security model of either AWS or Azure."
That means understanding who's responsible for what, with regard to both day-to-day and long-term application level security components, he said.
"And then the other one we look for, and raise the red flag if we don't have a clear understanding on the part of the customer, is a knowledge of the classification of your data," said McKay. "That's knowing what your sensitive data is, and then knowing where you're putting it and how it's used by your applications."
For example, AWS has a business associate agreement that "stipulates which services can be used in what manner to comply with their BAA," he said. "You could very easily run afoul of that if you don't know where your data is and you haven't classified it properly."
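One lightweight way to keep that classification knowledge attached to the data itself, at least on AWS, is to record it as resource tags that both auditors and automation can read. A minimal sketch with boto3, using a hypothetical bucket name and tag scheme:

```python
# Minimal sketch: record a data-classification decision as S3 bucket tags
# so audits and automation can check what a bucket is allowed to hold.
# The bucket name and tag scheme here are hypothetical.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_tagging(
    Bucket="example-phi-bucket",
    Tagging={
        "TagSet": [
            {"Key": "DataClassification", "Value": "PHI"},
            {"Key": "BAAScope", "Value": "true"},
        ]
    },
)
```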
With those people and process measures in place, it will be easier to enjoy the advances afforded by tools such as Macie and GuardDuty, said McKay.
Macie's data classification alerting, based on the ability to "crawl the necessary bucket full of data and look for things" – Social Security numbers, say, or PHI, or credit card numbers for PCI enforcement – is "incredibly useful," he said.
"And GuardDuty is an even more interesting one to me in that it's a broader approach," he added. "It's a machine learning-based, security-focused anomaly detection engine. When it's turned on for a given environment, they're going to be looking both for things that are matching a security violation flag."
For instance, McKay explained, "there are known databases of botnet endpoints; if it sees one of your instances communicating with a botnet in China, they can generate an alert about that. If they see patterns of access to resources falling outside of a norm – if it's an outlier in some way – they can generate an alert."
Of particular interest, he said, is that, "because it's an AWS service, it's very easy to tie that programmatically with other capabilities in AWS."
As an example: "If you're running an application where a potential security breach is more expensive to you than the loss of some application availability, you can do some things like, based on a GuardDuty classification of a compromised host speaking to a botnet, you could have that call a lambda function, which will take a snapshot of the instance for forensic purposes and then either shut it down or terminate it programmatically – meanwhile alerting those who need to know that this just happened," said McKay.
Enabling programmatic action
It may sound complex, but that sort of "programmatic security response" is one of the biggest selling points of cloud machine learning tools such as these, he said.
Once their ins and outs are sufficiently understood – and hospital IT staff are well-versed in how to use them the right way – they're going to be "far, far better than a human being responding to that alert, doing diagnostics, taking some action," said McKay. "Even if they're well-trained, no one is going to do it as efficiently and quickly as a programming pipeline."
When hospitals are doing classification with Macie, for instance, if misplaced data is detected – "let's say an app developer made a mistake and they pushed some data that was supposed to go into a secured S3 bucket into an S3 bucket that was publicly available" – in a situation like that, "you might have your data identified by Macie as sensitive data in a public location," he explained.
"You can take programmatic action to apply a security policy to that bucket to prohibit access. Rather than wait for the notice to come and someone to go make the change, you just automatically drop it off of the net."
The same holds true for any misplaced credentials that might be found, said McKay. "AWS Trusted Advisor will scan public software repositories such as GitHub for anything that matches the regex of an access key into an environment. You can programmatically disable those keys when they're found in a public space."
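The disable step itself is a one-liner against IAM; a minimal sketch, with placeholder user and key values:

```python
# Minimal sketch: deactivate an access key reported as exposed, e.g. by the
# Trusted Advisor exposed-access-keys check. User and key ID are placeholders.
import boto3

iam = boto3.client("iam")


def disable_exposed_key(user_name, access_key_id):
    # Marking the key Inactive stops it from authenticating immediately;
    # it can be re-enabled or rotated once the impact is understood.
    iam.update_access_key(
        UserName=user_name,
        AccessKeyId=access_key_id,
        Status="Inactive",
    )


disable_exposed_key("example-app-user", "AKIAEXAMPLEKEY123456")
```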
Judgment calls to make
But the fact that the technology can easily do it doesn't mean that the response is an easy call to make. And that's why hospitals need to be thinking hard about their security processes beyond just whiz-bang technology.
"If your application was using those keys in such a way that disabling them breaks the application, that's a decision the app owner has to make in terms of impact," he said.
Such downtime poses challenges for operations, "but it could be a multi-million-dollar compliance issue if you end up with a breach – a few minutes of downtime to prevent this breach might be worth it."
For all the advances AI and machine learning have brought to cloud security, ultimately hospitals need to understand that it's about making those sorts of judgment calls: programmatic action vs. impact on the application vs. avoidance of the cost of a breach.
"Those are all questions that have to be asked and answered," said McKay.
Twitter: @MikeMiliardHITN
Email the writer: [email protected]