How not to handle a data breach, brought to you by Uber, Equifax and many others

Without question, security incidents and breaches happen on a regular basis. But it’s the major cybersecurity events that thrust policies and procedures into the spotlight, giving other organizations a chance to learn from those mistakes and apply the lessons to their own businesses.

For the organizations caught up in such events, however, the response as a whole can either make or break the company in the eyes of the public.

Equifax: A prime example of what not to do

Case in point: Equifax’s handling of its 2017 breach reporting. Officials discovered on July 29, 2017, that data on about 143 million Americans had been breached. The hackers had actually gotten into the system in May 2017 by exploiting an Apache Struts vulnerability that Equifax failed to patch, even though Apache had released a fix in March of the same year.

To make matters worse, it was the company’s second breach that year. Hackers had access to Equifax’s TALX payroll division between April 2016 and March 2017. The culprits easily changed the 4-digit PINs used by managers.

The icing on the cake? The official Equifax Twitter account sent users to a phishing site instead of the legitimate webpage where they could sign up for a year of free credit monitoring. Yet another Equifax site, which offered free credit freezes to customers, failed to load properly, forcing customers to submit paper requests with ID documents attached.

Equifax added to the problem with its initial estimate of the number of people impacted, which officials had to revise upward by about 2.5 million a few weeks later. Down the line they would add millions more victims to the final tally, and report another breach on its South American site.

The good news, if there can be good news here, is that Equifax bungled its response on so many levels that the healthcare industry can learn from those failures and avoid them when reporting security incidents and breaches. But what if a failed response isn’t so overtly egregious?

Uber: hiding the truth

In November 2016, hackers quietly informed Uber that they had downloaded the personal information of 57 million riders and drivers, 25 million of whom were located in the U.S. From driver’s license numbers to email addresses, the breach was severe. So severe, in fact, that Uber paid the hackers $100,000 to keep it secret.

In 2017, Uber’s board of directors launched an investigation into its security team because of litigation over another issue entirely. In the process, the law firm it hired discovered the hush-money payment, and the breach went public. The company just settled with all 50 states and Washington, D.C. for a cool $148 million.

Companies that hide attacks don’t just risk disclosure later on; the practice carries a wide range of other risks as well.

As CynergisTek CEO Mac McMillan told Healthcare IT News in September 2017, ransomware and other hacks can’t be swept under the rug. In every such case, a hacker has found some weakness in the environment and exploited it.

Further, hackers talk to each other on the dark web. McMillan explained that if you’ve been hacked and paid, other hackers will find out and may attempt to hit your company with another attack down the line. But an even greater effect is on staff, who see how you’ve handled security and will get the message that security is not important.

Peachtree Neurological Clinic provides a positive example of the importance of owning a security incident or breach, and of the need for a forensic investigation after an attack.

The Atlanta provider discovered a 15-month breach during an investigation into a ransomware attack. Had the provider adopted Uber’s mentality and attempted to avoid the repercussions of the hack, it would never have discovered another vulnerability that was putting patient data at risk.

Allscripts: The harm of misinformation

EHR vendor Allscripts has the unfortunate honor of being included in this crowd, mostly because its users were so frustrated, and allegedly harmed, by the company’s January 2018 breach that they sued. Allscripts has asked for the case to be dismissed, but so far it remains pending.

Early this year, Allscripts went down after being hit with the notorious SamSam ransomware. The network was mostly offline for roughly 1,500 clients for up to a week, though some functions came back online periodically during that time. The majority of services were restored about a week later.

But for some clients the platform remained slow and plagued by log-in errors. In fact, several Allscripts users emailed Healthcare IT News, frustrated by the outage and perplexed that the EHR vendor was telling the public the issue had been resolved when these small providers were still struggling with access.

Shortly after, Boynton Beach, Florida-based Surfside Non-Surgical Orthopedics filed a lawsuit on behalf of all affected clients for “significant business interruption and disruption and lost revenues.”

The breach highlights the need for transparency.

Transparency takes strategy

The right way to approach this is to plan your reporting around what you believe is a realistic estimate of the individuals affected, and then, as a second step, to go back in and adjust that number, explained Mike Kijewski, Medcrypt CEO.

To Kijewski, LabCorp, which went down in July after hackers got in through an RDP brute-force attack, did this correctly. LabCorp may not have provided as much detail as the public would have hoped, but it was transparent about what occurred and the extent of the impact.

“Rather than downplay it, take ownership: Here’s what happened, here’s the range or scope of people affected, and then release more information going forward,” Kijewski said. “A company needs to provide frequent updates and use candor.”

“Take responsibility and precautions with the security of data — then if and when breached, be transparent and the industry will respond candidly,” he added.

A similar situation happened when Nuance fell victim to the global Petya cyberattack in June 2017. Clyde Hewitt, vice president of security strategy at CynergisTek, explained that the level of transparency varied between what the organizations shared through public channels and what the victims received directly.

“Information disclosed through private channels to clients was reported to contain more detail and addressed the concerns of many organizations,” said Hewitt. “One may conclude that a higher level of transparency, even if delivered under a confidentiality agreement, helps to reduce tension and increase the level of trust.”

“Rapid and thorough communications to all stakeholders is the key to making the right decisions,” he added. “Everyone’s input is valuable, and the collective wisdom can identify and address issues early.”

Learning from mistakes

So what’s the best way to handle the inevitable breach? On a case-by-case basis, according to Lee Kim, director of Privacy and Security for HIMSS North America.

“But don’t sit on it,” said Kim. “Make sure you have your teams coordinated in advance of a breach: this includes your communications and legal teams and others.”

To Hewitt, an organization must avoid assuming the attack response should be managed entirely by IT. The compliance team must be involved in any incident involving the unauthorized acquisition of, or access to, sensitive data. In fact, it should be a key stakeholder, coordinating a cross-functional response.

“The compliance team, consisting of the privacy officer, compliance officer, and general counsel, should be engaged at the onset,” said Hewitt. “In today’s hostile computing environment, it may be necessary to collect all evidence in a forensically sound way to help with the investigation, analysis and ultimately the root cause analysis.”

At the same time, those actions can “prove that the organization acted in a reasonable manner,” he added.

Organizations should also avoid assuming a cyberattack on infrastructure will only affect clinical systems, Hewitt explained. “In fact, non-clinical functions such as back-office business operations can also be adversely impacted. In these instances, it may be necessary to initiate downtime procedures.”

Defense must-haves: business continuity and disaster recovery

“The best defense is to develop a solid privacy and security incident response plan and test it often so that all stakeholders know their role,” said Hewitt.

Part of that plan, and of a broader business continuity program, is disaster recovery. Hewitt explained that DR plans provide guidance and procedures for bringing systems, including networks, servers, applications, data and end-user workstations, back online after an outage. They also cover actions by critical vendors.

“Key elements of all DR plans include a list of team members along with decision and authority responsibilities, staff notification plans, and contact information for all internal and external key stakeholders,” he added.

Also critical to the response are business continuity plans, which help support patient care delivery and operations during downtime. Hewitt said they should also address recovery after systems are restored.

“Key elements of BC plans should include detailed downtime procedures, responsibilities matrix, and recovery procedures once systems are restored. It also defines the steps to return to normal operations following an outage,” Hewitt said.

“Organizations should practice both their DR and BC plans frequently, but no less than annually, so that all staff are very familiar with the local procedures,” he added. “This should be supplemented with required reading, specifically case studies that detail how other organizations detected, responded, and recovered from their event.”

Incident response policies are required by HIPAA and are more than just good practice. Hewitt explained that the policies should “define terms, specifically differentiating between an event, incident, and breach and outline the governance structure to respond to an incident.”

“The incident response process leaders should engage the compliance team and legal very early after a suspected event is confirmed to be an incident,” said Hewitt. “Everyone should remember that not all security or privacy events or incidents result in a breach, but all incidents should be managed.”

“I think by now we’re starting to understand as an industry to some extent that these breaches are inevitable,” said Kijewski. “And it’s our goal to adjust the scope, and impact — and make the harm as minimal as possible.”

Twitter: @JF_Davis_
Email the writer: [email protected]
