The explosive Twitter whistleblower complaint made public yesterday — in which the company’s former head of security, Peiter “Mudge” Zatko, details a raft of damning allegations spanning security, privacy and data protection (among other issues) — contained references to European regulators, along with claims that the social media firm had misled, or intended to mislead, regional oversight bodies over its compliance with local laws.
Two national data protection authorities in the EU, in Ireland and France, have confirmed to TechCrunch that they are following up on the whistleblower complaint.
Ireland, which is Twitter’s lead supervisor for the bloc’s General Data Protection Regulation (GDPR) — and previously led a GDPR investigation of a separate security incident that resulted in a $550k fine for Twitter — said it is “engaging” with the company in the wake of the publicity around the complaint.
“We became aware of the issues when we read the media stories [yesterday] and have engaged with Twitter on the matter,” the regulator’s deputy commissioner, Graham Doyle, told us.
France’s DPA, meanwhile, said it is examining the allegations made in the complaint.
“The CNIL is currently investigating the complaint filed in the US. For the moment we are not in a position to confirm or deny the accuracy of the alleged breaches,” a spokesperson for the French watchdog told us. “If the accusations are true, the CNIL could carry out checks that could lead to an order to comply or a sanction if breaches are found. In the absence of a breach, the procedure would be terminated.”
Machine learning concerns
Ireland’s Data Protection Commission (DPC) and France’s national equivalent, the CNIL, were both cited in the ‘Mudge report’ — in one instance in relation to Zatko’s suspicion that Twitter intended to mislead them in relation to enquiries about data-sets used to train its machine learning algorithms in a similar way to how the complaint alleges Twitter misled the FTC years earlier over the issue.
In a section of the complaint given the title “misleading regulators in multiple countries”, Zatko asserts that the FTC had asked Twitter questions about the training material used to build its machine learning models.
“Twitter realized that truthful answers would implicate the company in extensive copyright / intellectual property violations,” runs the complaint, before asserting that Twitter’s strategy (which he says executives “explicitly acknowledged was deceptive”) was to decline to provide the FTC with the requested training material and instead point it to “particular models that would not expose Twitter’s failure to acquire appropriate IP rights”.
The two European regulators come into the picture because Zatko suggests they were poised to make similar enquiries this year — and he says he was told by a Twitter staffer that the company intended to try to use the same tactic it had deployed in response to earlier FTC enquiries on the issue, to derail regulatory scrutiny.
“In early 2022, the Irish-DPC and French-CNIL were expected to ask similar questions, and a senior privacy employee told Mudge that Twitter was going to attempt the same deception,” the complaint states. “Unless circumstances have changed since Mudge was fired in January, then Twitter’s continued operation of many of its basic products is most likely unlawful and could be subject to an injunction, which could take down most or all of the Twitter platform.”
Neither the Irish nor French watchdog responded to questions about the specific claims being made. So it’s not clear what enquiries the EU data protection agencies may have made — or be planning to make — of Twitter in relation to its machine learning training data-sets.
One possibility — and perhaps the most likely one, given EU data protection law — could be they have concerns or suspicions that Twitter processed personal data to build its AI models without having a proper legal basis for the processing.
In a separate example, the controversial facial recognition firm Clearview AI has in recent months faced a raft of regional enforcements from DPAs linked to its use of personal data to train its facial recognition models. The personal data in that case — selfies/facial biometrics — is, however, among the most protected ‘sensitive’ classes of data under EU law, meaning it carries the strictest requirements for legal processing (and it’s not clear whether Twitter was using similarly sensitive data-sets to train its AI models).
Cookies out of control?
The Mudge complaint also makes a direct claim that Twitter misled the CNIL over a separate issue — related to improper separation of cookie functions — after the French watchdog ordered it to amend its processes to come into compliance with relevant laws in December 2021.
Zatko alleges that up until Q2/Q3 of 2021 Twitter lacked sufficient understanding of how it was deploying cookies and what they were used for — and also that Twitter cookies were being used for multiple functions, such as ad tracking and security sessions.
“It was apparent Twitter was in violation of international data requirements across many regions of the world,” the complaint asserts.
A key tenet of European Union data protection law that applies here is ‘purpose limitation’ — i.e. the principle that personal data must only be used for the stated (legitimate) purpose it was collected for, and that distinct uses of the data should not be bundled together. So if Twitter was mingling cookie functions for distinctly different purposes, such as marketing and security — as the complaint claims — that would create clear legal problems for it in the EU.
According to the complaint, the CNIL got wind of a cookie function problem at Twitter and ordered the company to fix it at the end of last year, presumably relying on its competence under the EU’s ePrivacy Directive (which regulates use of tracking technologies like cookies).
Zatko writes that a new privacy engineering team at Twitter had worked “tirelessly” to disentangle cookie function in order to permit “some form of user choice and control” — to, for example, deny tracking cookies but accept security-related cookies — as would be required under EU law. And he says this fix was rolled out, exclusively in France, on December 31, 2021, but was immediately rolled back and disabled after Twitter encountered a problem — an ops SNAFU he seizes on to heap more blame on Twitter for failing to have a separate testing environment.
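As a purely illustrative aside, here is a minimal browser-side sketch (in TypeScript) of what purpose-separated cookie handling can look like in principle: each cookie is tagged with a single purpose, non-essential purposes are only set after an opt-in, and strictly necessary security cookies are always allowed. The names and structure are assumptions for the example, not a description of Twitter’s actual code.

```typescript
// Hypothetical sketch only, not Twitter's implementation. It illustrates the
// kind of purpose separation described above: each cookie carries exactly one
// purpose, and non-essential purposes are only set once the user has opted in.

type CookiePurpose = "security" | "functional" | "analytics" | "advertising";

interface ConsentState {
  // "security" is strictly necessary and never needs consent;
  // the other purposes default to false until the user opts in.
  functional: boolean;
  analytics: boolean;
  advertising: boolean;
}

function maySetCookie(purpose: CookiePurpose, consent: ConsentState): boolean {
  if (purpose === "security") return true; // strictly necessary, always allowed
  return consent[purpose];                 // everything else requires an opt-in
}

function setCookie(name: string, value: string, purpose: CookiePurpose, consent: ConsentState): void {
  if (!maySetCookie(purpose, consent)) {
    return; // skip cookies the user has not consented to
  }
  document.cookie = `${name}=${encodeURIComponent(value)}; path=/; SameSite=Lax; Secure`;
}

// Example: the session cookie is always set; the ad-tracking cookie is skipped
// because the user has not opted in to advertising cookies.
const consent: ConsentState = { functional: true, analytics: false, advertising: false };
setCookie("session_id", "abc123", "security", consent);
setCookie("ad_tracking_id", "xyz789", "advertising", consent);
```

The point of a separation like this is that a security or session cookie keeps working even when a user refuses tracking, which is the kind of user choice and control Zatko says the fix was meant to enable.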
But while he writes that the bug was fixed “in a matter of hours”, he claims Twitter product and legal decision-makers blocked rolling it out for another month — until January 31, 2022 — “in order to extract maximum profit from French users before rolling out the fix”.
“Mudge challenged executives to claim this was anything other than an effort to prioritize incremental profits over user privacy and legal data privacy requirements,” the complaint also asserts, adding: “The senior leaders in that meeting confessed that Mudge was correct.”
Zatko makes a further claim that Twitter launched “proactive” legal action — in which he says the company was “attempting to claim that all cookies were by definition critical and required, because the platform is powered by advertisements” — before going on to allege that during internal conversations he heard product staff stating the argument was “false and made in bad faith”.
Twitter was contacted for a response to the specific claims referenced in cited portions of the whistleblower’s report but at the time of writing it had not responded. The company did, however, put out a general response to the Mudge report yesterday — dismissing the complaint as a “false narrative” by a disgruntled former employee which it also claimed was “riddled with inconsistencies and inaccuracies”.
Regardless, the whistleblower complaint is already sparking fresh regulatory scrutiny of Twitter’s claims.
It’s not clear what penalties the company could face in the EU if, after following up on Mudge’s complaint, regulators decide on closer inspection that it has breached regional requirements.
The GDPR allows for penalties that scale up to 4% of annual global turnover — although Twitter’s prior GDPR penalty, for a separate security-related breach, fell far short of that. However, enforcements are supposed to factor in the scale and extent (and indeed intent) of any violations — and the extensive failings alleged by Mudge could, if stood up by formal regulatory investigation, eventually lead to a far more substantial penalty.
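For a rough sense of scale, the sketch below works through the arithmetic of that 4% ceiling. The turnover figure is an assumption for the sake of the example, in the region of Twitter’s publicly reported 2021 revenue; it is not a prediction of any actual fine.

```typescript
// Back-of-the-envelope illustration of the GDPR's theoretical maximum penalty
// (up to 4% of annual worldwide turnover under Article 83(5)).
// The turnover figure is an assumption for this example, not an official number.

const assumedAnnualTurnoverUsd = 5_000_000_000; // assumed ~$5B, roughly Twitter's reported 2021 revenue
const gdprMaxRate = 0.04;                       // statutory ceiling: 4% of turnover

const theoreticalMaxFineUsd = assumedAnnualTurnoverUsd * gdprMaxRate;
console.log(`Theoretical GDPR ceiling: ~$${(theoreticalMaxFineUsd / 1_000_000).toFixed(0)}M`);
// -> Theoretical GDPR ceiling: ~$200M
```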
The ePrivacy Directive, which gives the CNIL competence to regulate Twitter’s cookies, empowers DPAs to issue “effective, proportionate and dissuasive” sanctions — so it’s hard to predict what that might mean in hard financial terms if the regulator deems a fine is justified. But in recent years the French watchdog has issued a series of multi-million dollar fines to tech giants for cookie-related failures.
This includes two beefy penalties for Google — a $170M fine in January over deceptive cookie consent banners; and a separate $120M fine in December 2020 for dropping tracking cookies without consent — as well as a $68M fine for Facebook back in January (also for deceptive cookies), and a $42M fine for Amazon at the end of 2020, also for dropping tracking cookies without consent.