Canada has introduced a bill that, if passed, would amend the Personal Information Protection and Electronic Documents Act (PIPEDA) by making data breach notifications mandatory. It appears that no safe harbor is being given specifically for the use of security solutions such as AlertBoot’s managed disk encryption software. Nevertheless, safety may be found in the language that, ironically, could also hamstring the well-meaning legislation.
Good Things about the (Upcoming) Law
The site canadianlawyermag.com notes that Canada’s neighbor to the south issues fines as “an established punishment for data breaches,” and that the same could soon be true in the land of maple syrup and hockey. Bill S-4, introduced in April of this year, would be the vehicle for doing so.
It would require organizations to notify both individuals and the privacy commissioner in the event of a breach of security involving personal information, and to keep a record of every breach. Breaches could also incur fines of up to $100,000.
A former interim privacy commissioner is quoted as saying that the bill gets many things right:
- Mandatory breach notification.
- Requiring the notification “as soon as possible” but not within a specified timeline is “well-thought-through.”
- “the notification would occur ‘only in cases of significant harm,’ which includes ‘physical and moral’ harm” (that’s a direct quote from canadianlawyermag.com).
I’ve got to disagree a bit here. The first bullet makes sense: if you don’t make it mandatory, odds are that the companies that need to fess up the most will not do so. However, the other two?
A “specified timeline,” as it was put, is definitely a good idea. While it’s true that companies need time to determine what happened – and the duration of an investigation is bound to differ from case to case – it’s also true that there is such a thing as taking too long. Forensic investigations can sometimes take over six months, perhaps over a year. The problem with “as soon as possible” is that it means different things to different people, and a breached entity can always argue, in a logical and reasonable manner, why it couldn’t notify people sooner.
The point of notification is to ensure that affected individuals can protect themselves, meaning they should be alerted while they can still do something about it. HIPAA, the US Health Insurance Portability and Accountability Act, sets a 60-calendar-day limit for exactly this reason.
Risk of Harm Threshold
Another idea that’s being flagged as a good thing, but is not one in my opinion, is the “significant harm” clause. It reads suspiciously similar to HIPAA’s own “risk of harm” threshold, which essentially put the breached entities in charge of deciding whether a data breach would result in harm to individuals.
The position has been compared to putting the fox in charge of the chicken coop. The wolves in charge of the sheep. Children in charge of the candy store. You get the idea…it’s a bad idea.
Of course, a perfectly logical reason is given for why there should be a significant harm clause: constant notification over something that usually turns out to be nothing (the loss of a USB drive, which happens often enough but usually leads to no harm, is mentioned) will just serve to unnecessarily scare people. In the long run, it could lead to a “breach notification tolerance” where people ignore such things.
This could very well be true, but it still doesn’t resolve the conflict of interest. Furthermore, what if the breached organization makes the wrong call (no nefarious intentions involved)? Victimized individuals wouldn’t know why, when, or how they became data breach victims. Worse, they may associate their troubles with another organization that did notify people but wasn’t the root cause of their problems.
There’s a reason why laws are being rewritten, updated, and modified to remove harm thresholds.