Data Encryption And Security: Should We Be Concerned About Breach Fatigue?

In many corners of the world, the loss of a portable computer holding sensitive data leads to a data breach notification, especially in cases where proper security, such as laptop encryption software like AlertBoot, was not used.


At the rate things are going, breach fatigue (the desensitization that comes from receiving too many breach notifications) is going to be a real problem, if it isn't already.  The question is, should there be a harm threshold to combat breach fatigue?  Up to now I've been sitting on the fence, but today I read a couple of blog posts that have me leaning over to the "no threshold, report everything" side.  Obviously, this position is my own and does not reflect the views of my employer.


Breach Fatigue is Real



Before I go into why I was persuaded to join the "send breach notifications for all breaches, no matter how small or 'uneventful'" party, I should note that breach fatigue is real.  You can read more about it at bankinfosecurity.com.  Although the article is geared towards corporations (financial companies, in line with the site's name), there is plenty to learn from it.


Why I Stopped Straddling the Fence



In a post titled "How the Epsilon Breach Hurts Consumers," Adam Shostack goes into personal detail about how the breach at Epsilon could have affected him if he had not been alerted to it.  If you'll recall, the only things stolen in the Epsilon hack were "email addresses and/or customer names."  As data breaches go, it's pretty uneventful except for its size (millions affected, by some estimates).


I hadn't mentioned it in the Epsilon case, but anyone who's been following this blog knows that I sometimes muse about the potential ramifications (secondary, tertiary) of scams using not-particularly-sensitive data, such as the stock market scam described by John Allen Paulos: all you need is a name and an address (the scammers used a phone directory, if I'm not mistaken).  The scam is illegal on Wall Street today, but that doesn't stop scammers from using an on-line version of it to pump up penny stocks and leave victims holding the (empty) bag when the conmen dump their shares.


Coming back to Epsilon, a similar scam could be pulled off because the hackers who breached Epsilon have three pieces of information, not two: email addresses, names, and which companies those people are customers of.  That last one is key, because it's what allows on-line conmen to tailor their messages to increase a fraud's chances of success (a practice known today as phishing).


Granted, being a little proactive and aware of the dangers of the internet could prevent such scams from succeeding.  Shostack, however, shows an instance where someone (like him) could fall victim precisely because he was proactive and aware of what lurks on-line.


Essentially, Shostack has set up a different email address for each company he deals with, so that only one company holds any given address.  For example, if he is a Wells Fargo customer, he gives the bank the address this_is_my_wells_fargo_account_7592828283@example.com.  Only correspondence coming from the bank should reach that inbox, unless there is a breach at Wells Fargo.  If he finds an email purporting to be from Wells Fargo in any other account, he knows he can ignore it as a phishing attempt, since all official Wells Fargo correspondence will come to him at this_is_my_wells_fargo_account_7592828283@example.com.
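To make the scheme concrete, here is a minimal Python sketch of the idea.  This is my own illustration, not Shostack's actual setup; the secret, domain, and function names are all hypothetical.

```python
import hmac
import hashlib

# Sketch of a per-company address scheme: each company gets exactly one
# address, derived from a private secret so only you can generate it.
SECRET = b"my-private-secret"  # hypothetical; known only to you
DOMAIN = "example.com"         # hypothetical catch-all mail domain

def address_for(company: str) -> str:
    """Return the one address ever handed out to `company`."""
    tag = hmac.new(SECRET, company.lower().encode(), hashlib.sha256).hexdigest()[:10]
    return f"{company.lower()}_{tag}@{DOMAIN}"

def is_suspicious(claimed_company: str, delivered_to: str) -> bool:
    """Mail claiming to come from a company but delivered to any other
    address is, by construction, a likely phishing attempt."""
    return delivered_to.lower() != address_for(claimed_company)

print(address_for("wellsfargo"))                      # the address you give the bank
print(is_suspicious("wellsfargo", "me@example.com"))  # True: wrong inbox, ignore it
```

The HMAC only makes the tag unguessable by a third party; any scheme that keeps a one-to-one mapping between companies and addresses works the same way.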


However, if Wells Fargo has a data breach, Shostack has no way of knowing that he is about to be victimized unless his bank goes public the way Epsilon did.  You can claim, "well, he should be vigilant," and the answer to that is: he is vigilant.  He's so vigilant that he set up a plan to lower his chances of getting scammed on-line, a plan predicated upon Wells Fargo doing what it promised to do: keep its customers' data safe.


So, if (or, if you prefer, when) Wells Fargo or one of its affiliates fails to keep that promise, they should make it public, because neither Wells Fargo, nor data security experts, nor anyone else knows who is protecting himself in what way:



The Limits of Expertise


This is one example of the limits of experts to understand the impact of breaches on consumers. There are doubtless others, and we should be willing to question the limits of our expertise to fully understand the impacts of breaches on everyone. We should also question our expertise to decide for them what’s best for others. [newschoolsecurity.com]


When you consider how many people are aware of on-line dangers (my mother is in her 70s and calls me for help if she thinks she has a phishing email in her inbox, instead of clicking things willy-nilly), it shouldn't come as a surprise that a lot of people have developed their own ways of making sure they don't get scammed.


A rudimentary method is to hover over links to see where they point, but who's to say that other methods, including Shostack's above, aren't just as effective, or even better?  For example, can your eyes discern the difference between "l and 1" or "l and I" (depending on how your browser renders letters)?  A visual check would fail there; Shostack's method wouldn't, assuming no breach of data, if only because he never has to examine the email at all: it's in the wrong inbox!
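To illustrate the point, here is a toy Python sketch (my own, using a made-up lookalike domain) of how characters that render nearly identically can defeat a visual check even though the strings differ:

```python
# Common lookalike substitutions an attacker might use in a spoofed domain.
HOMOGLYPHS = {"1": "l", "I": "l", "0": "o", "rn": "m"}

def skeleton(domain: str) -> str:
    """Normalize lookalike characters so visually-confusable domains collide."""
    d = domain
    for fake, real in HOMOGLYPHS.items():
        d = d.replace(fake, real)
    return d.lower()

legit = "paypal.com"
spoof = "paypa1.com"  # hypothetical spoof: digit '1' instead of letter 'l'

print(legit == spoof)                      # False: the strings differ...
print(skeleton(legit) == skeleton(spoof))  # True: ...but they look the same
```

A hover-and-look check is only as good as your eyes and your browser's font; Shostack's scheme sidesteps the visual comparison entirely.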


So: no thresholds for data breach notifications, because, essentially, who's to say whether a given breach is low risk or not?  The only "threshold" should be whether proper data security was used.  If it wasn't, no one gets to judge whether a breach is low-risk or otherwise.


What About the Fatigue Setting In?



Of course, going with the no-thresholds stance means we could be inundated with breach notifications.  Shostack has a reply to that:


Now, some people will argue that there’s “breach fatigue”, and that that means we should select for others which incidents they’ll hear about and which they won’t. While I agree that there’s breach fatigue, that’s a weak argument in a free society. People should be able to learn about incidents which may have an effect on them, so that they (and I) can make good risk management decisions. We don’t argue against telling people that there’s lead paint on Chinese toys even though much of the damage will already have been done by the paint that’s flaked off. We don’t argue against telling the public about stock trades by insiders even though only a few experts look at it. We as a free society encourage the free flow of information, knowing that it will be put to a plethora of uses.

I also read this post by Dissent at databreaches.net, which lists even more reasons why a harm threshold should not exist:



Yet other professionals and privacy advocates argue that every breach involving personal information should result in notification.  For some, the rationale is that it is a matter of trust and ethics – that if a company promised to protect privacy and security and failed to do so, they need to let the consumer know.  Others argue that entities that have been breached have a bias in determining the risk to the consumer and may underplay the risk so that they do not have the expense and potential reputation harm of having to notify consumers.  And yet others – this privacy advocate included – argue that the consumer has a need to and right to make their own decisions about whether there is anything they need to do and they cannot make that decision if information is withheld from them.


Last night on Twitter, I tweeted, “So if 1m ppl might get confused/numbed, does that mean I don’t have to be notified when I want to be informed?”  Somewhat stunningly, one of the discussants answered,  “Yes.”


Even if 99% of the public doesn’t care if they are notified, even if many or even most people might suffer breach notice fatigue,  I do not consent to my individual right to be waived by others.   Paternalistic arguments about protecting me from breach notice fatigue fall into the same category as arguments for censoring content because children need protection and the government knows best.


I'm not about to stand on a soapbox and discuss free society, paternalism, ethics, etc.  All of these contribute, in some way, to why I've decided to take a stand on the issue.  However, my reasoning is much more practical:




  1. A threshold doesn't really serve to protect anyone but the companies that experienced the breach.  While companies will definitely benefit from such a threshold, it's arguable whether the people whose data was breached will also benefit just because someone decides the situation is "low risk."


  2. There are many ways that people deal with the fact that the on-line world is anything but safe.  Some of these methods of protection rely on knowing when something down the line is "broken," as in a breached database.  The only way to fix those methods is to learn that something broke and make the necessary changes.  With a harm threshold, you're looking to short-change those who are interested in protecting themselves in order to "protect" those who'd rather ignore the situation.  Why would you want to do that?


Related Articles and Sites:
http://newschoolsecurity.com/2011/06/how-the-epsilon-breach-hurts-consumers/
http://www.databreaches.net/?p=18607


