The site databreaches.net notes in one posting that two companies, Fandango and Credit Karma, have agreed to settle charges with the Federal Trade Commission (FTC). Both companies were taken to task for misleading consumers: they promised to protect customer data and then failed to do so. Both cases revolved around the use of mobile data encryption (technically, the implementation of Secure Sockets Layer, SSL); however, how they ended up breaching their customers' trust differs a bit.
As it turns out, Fandango's mobile app had a problem: it wasn't validating SSL certificates. This is a bad thing because it enables "man-in-the-middle" (MITM) attacks, in which someone positioned between the app and the server can intercept its traffic. The classic example is a user connecting to the internet via a hotspot provided by a third party. That third party could be a good Samaritan providing free internet access…or he could be recording your data as it passes through his equipment, including credit card numbers, passwords, etc.
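Neither FTC complaint includes code, but Python's standard `ssl` module makes the distinction easy to sketch. The `secure_ctx` and `insecure_ctx` names below are mine, purely for illustration: the first context is what a correctly built TLS client uses, and the second reproduces, in spirit, the validation-off state the apps shipped with.

```python
import ssl

# A properly configured TLS client context: the system trust store is
# loaded, the server's certificate chain is verified, and the hostname
# must match the certificate.
secure_ctx = ssl.create_default_context()
print(secure_ctx.verify_mode)     # VerifyMode.CERT_REQUIRED
print(secure_ctx.check_hostname)  # True

# The flaw at issue, roughly: validation switched off entirely. A MITM
# can now present any self-signed certificate and the client will
# happily complete the handshake.
insecure_ctx = ssl.create_default_context()
insecure_ctx.check_hostname = False   # must be disabled before verify_mode
insecure_ctx.verify_mode = ssl.CERT_NONE
print(insecure_ctx.verify_mode)   # VerifyMode.CERT_NONE
```

Note that the dangerous version takes *extra* work: you have to deliberately flip two settings away from their safe defaults, which is exactly why regulators viewed shipping it as avoidable.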
The FTC made the argument that Fandango should have known better than to release the app in this state, given that free and inexpensive tools exist that seek out exactly these kinds of vulnerabilities.
Furthermore, Fandango didn't have an adequate method for receiving vulnerability reports. Apparently, a security researcher had attempted to contact the company about the security lapse, but because his email contained the word "password," it was flagged and logged as a password reset request, triggering an automated password-reset reply. So, even if someone wanted to help them out, there was no way to do so.
(I get the feeling that the security researcher must have informed the FTC of the vulnerability, because…how else would the FTC have found out? The FTC complaint makes it clear that they were the ones who ultimately informed Fandango of the issue.)
Credit Karma ran into the same problem as Fandango: SSL certificates were not being validated. Thankfully, Credit Karma had the appropriate channels for receiving news of such a security vulnerability: a user alerted them to the problem.
It turns out that the company had outsourced the development of its Apple iOS mobile app and, during development, had authorized the contractor to,
use code that disabled SSL certificate validation “in testing only,” but failed to ensure this code’s removal from the production version of the application.
When Credit Karma released their Android app, they ran into the same problem again.
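The complaint doesn't show the offending code, but the pattern it describes is a familiar one. The sketch below is hypothetical (`make_tls_context` is my invention, not from either app); it shows how a "testing only" switch typically works, and why forgetting to flip it back is so easy: nothing visibly breaks when validation is off, since connections simply succeed against any certificate.

```python
import ssl

def make_tls_context(testing: bool = False) -> ssl.SSLContext:
    """Build a TLS client context; certificate checks are skipped in test mode.

    The bug is not the flag itself but shipping a production build with
    it still set to True (or hard-coded somewhere upstream).
    """
    ctx = ssl.create_default_context()
    if testing:
        # Acceptable against a local mock server with a self-signed cert;
        # fatal if it reaches the production build.
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    return ctx

prod_ctx = make_tls_context()              # validation on
test_ctx = make_tls_context(testing=True)  # validation off: test builds only
```

A release checklist item (or an automated check that scans the shipped binary for `CERT_NONE`-style configuration) is the usual guard against exactly this mistake.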
One may feel inclined to blame the outside developers. The thing is, in both instances, “in-house security engineers” looked into the issue and released an update for the app. In other words, Credit Karma had the manpower to actually test things out (which they probably did, at least for those areas that the company deemed important).
Missing it the first time around is understandable. But the second time around? Fool me once, fool me twice….
Then there is the further issue concerning authentication tokens; the gist of this leg of the story is that a security review by the in-house engineers uncovered it. Why didn't they uncover it earlier? Possibly because no such review took place prior to the apps' original release.
Second, would Credit Karma have been protected if they didn’t have any in-house software engineers who could have reviewed the code when contacted about the security failings? After all, it’s not unusual to find an app developer who’s outsourced all of the coding and is the “developer” in name only.
The answer is probably "no." Even if Credit Karma didn't have in-house security engineers, the responsibility would ultimately rest with them. The FTC would have gone after Credit Karma regardless and, as is usually the case in such matters, Credit Karma would have the option of going after its outsourced developers. It's all about kicking the can down Responsibility Lane.
Indeed, who's to say they haven't? After all, the contractor should have known better, and Credit Karma had authorized the shortcut for testing purposes only.