Last week, a federal judge approved the largest data-breach settlement to date, one that is historic and yet falls flat: Anthem, the Indianapolis-based insurer, has agreed to pay a total of $115 million to settle all charges related to its 2015 data breach.
The breach, strongly believed to have been perpetrated by actors with ties to the Chinese government, began with a phishing attack. By the time the electronic dust settled, the information of 79 million people (including 12 million minors) had been stolen, including names, birth dates, medical IDs and/or Social Security numbers, street addresses, and email addresses.
Needless to say, this information can be used to perpetrate all types of fraud.
And while the judge overseeing the case has found the settlement to be “fair, adequate, and reasonable,” critics have noted that the victims get only $51 million of the total, which works out to about 65 cents per person. The rest goes to lawyers and consultants.
What’s surprising about this story is not that the victims are getting shafted; or that the lawyers are getting an ethically dubious portion of the settlement; or even that Anthem settled out of court, once an unthinkable action. Courts, after all, are warming up to the idea that victims of a data breach have suffered an injury that is redressable by law. (Chances are that if this lawsuit had been filed ten years ago, the defending corporation would have successfully argued to have it tossed from court.)
What is surprising is that all of this happened despite Anthem having had what experts called “reasonable” security measures at the time of the breach.
What exactly is “reasonable” security? Is it tantamount to “good” security? Or perhaps it doesn’t reach the level of good, but it’s better than “bad” security, which in turn is better than no security? And what would its converse, unreasonable security, look like?
What constitutes “reasonable” security is not fleshed out in detail anywhere. But we do know this: per the settlement, Anthem has to triple its data security budget. Which is weird, because (a) if you have to triple your security budget, maybe it wasn’t reasonable to begin with, and (b) the flashpoint of the data breach – an employee clicking on a phishing email that surreptitiously installed malware, which may or may not have been flagged by antivirus software – can hardly be prevented by spending more money.
But even weirder is this:
“The [California Department of Insurance examination] team noted Anthem’s exploitable vulnerabilities, worked with Anthem to develop a plan to address those vulnerabilities, and conducted a penetration test exercise to validate the strength of Anthem’s corrective measures,” the department said in its statement. “As a result, the team found Anthem’s improvements to its cybersecurity protocols and planned improvements were reasonable.” [healthitsecurity.com]
There’s that “reasonable” word again. The company had reasonable security, got hacked, corrective measures were taken, and now the improvements are reasonable?
If you’re being hacked by what could potentially be the intelligence arm of a foreign state, perhaps you’d like something that’s more than reasonable. Hopefully, the choice of words to describe what was implemented does not accurately reflect the effort, planning, and technical expertise that actually went into it.
At the same time, it’s hard to ignore the fact that data breaches like this are the perfect moral hazard:
- The information that is stolen is tied to individuals. Any misuse of the data will affect these people, not the company.
- A rotating cast of executives means that you don’t necessarily plan for the long term. Especially if you’re paid very well for being fired because of a data breach.
- Financial penalties become meaningless if (a) they can be used to offset taxes, (b) they amount to a drop in the bucket (Anthem’s 2017 revenue was $90 billion), and (c) the cost can be passed on to customers.