The Athens-based Hellenic data protection authority has fined the controversial facial recognition firm €20 million and banned it from collecting and processing the personal data of people living in Greece. The authority has also ordered the company to delete any data it has already collected on Greek citizens.
Since late last year, national DPAs in the U.K., Italy and France have also issued similar decisions sanctioning Clearview — effectively freezing its ability to sell its services in their markets since any local customers would be putting themselves at risk of being fined.
Last year, privacy regulators in Canada and Australia also concluded Clearview’s activities fall foul of local laws — in earlier blows to its ability to scale internationally.
More recently, in May, Clearview agreed to major restrictions on its services domestically, inside the U.S., in exchange for settling a 2020 lawsuit from the American Civil Liberties Union (ACLU), which had accused it of breaking an Illinois state law that bans the use of individuals’ biometric data without consent.
The European Union’s data protection framework, the General Data Protection Regulation (GDPR), sets a similarly high bar for legal use of biometric data to identify individuals — a standard that extends across the bloc, as well as to some non-member states (including the U.K.); so around 30 countries in all.
Under the GDPR, such a sensitive purpose for personal data (i.e., facial recognition for an ID-matching service) would — at a minimum — require explicit consent from the data subjects to process their biometric data.
In its 23-page decision, the Hellenic DPA said Clearview had breached the lawfulness and transparency principles of the GDPR, finding violations of Articles 5(1)(a), 6 and 9, as well as breaches of its obligations under Articles 12, 14, 15 and 27.
The Greek DPA’s decision follows a May 2021 complaint by the local human rights advocacy group Homo Digitalis, which has trumpeted the win in a press release, saying the €20 million penalty sends a “strong signal against intrusive business models of companies that seek to make money through the illegal processing of personal data.”
The advocacy organization also suggested the fine sends “a clear message to law enforcement authorities working with companies of this kind that such practices are illegal and grossly violate the rights of data subjects.” (In an even clearer message last year, Sweden’s DPA fined the local police authority €250,000 for unlawful use of Clearview it said breached the country’s Criminal Data Act.)
At the current count, the company has been fined — on paper — close to €50 million by regulators in Europe. It’s less clear whether it has paid any of the fines yet, though, given potential appeals and the overarching challenge regulators face in enforcing local laws against a U.S.-based entity if it decides not to cooperate.
The U.K.’s DPA told us Clearview is appealing its sanction in that market.
“We have received notification that Clearview AI has appealed. Clearview AI are not required to comply with the Enforcement Notice or pay the Penalty Notice until the appeal is determined. We will not be commenting further on this case whilst the legal process is ongoing,” the ICO’s spokesperson said.
Italy’s data protection watchdog declined to provide an update when we asked whether the fine had been paid or not.
Clearview’s responses to earlier GDPR penalties have suggested it is not currently doing business in the affected markets. But it remains to be seen whether the enforcements will work to permanently shut it out of the region — or whether it might seek to circumvent sanctions by adapting its product in some way.
In the U.S., it spun its settlement with the ACLU as a “huge win” for its business — claiming it would not be impacted because it would still be able to sell its algorithm (rather than access to its database) to private companies in the U.S.
The U.S. lawsuit settlement also included an exception for government contractors — suggesting Clearview can continue to work with U.S. federal agencies such as Homeland Security and the FBI — while imposing a five-year ban on providing its software to any government contractors or state or local government entities in Illinois itself.
It is certainly notable that European DPAs have not — so far — ordered the destruction of Clearview’s algorithm, despite multiple regulators concluding it was trained on unlawfully obtained personal data.
As we’ve reported before, legal experts have suggested there is a grey area over whether the GDPR empowers oversight bodies to order the deletion of AI models trained on improperly obtained data — not just the deletion of the data itself, as appears to have happened so far in this Clearview saga.
But incoming EU AI legislation could be set to empower regulators to go further: The (still draft) Artificial Intelligence Act contains powers for market surveillance authorities to ‘take all appropriate corrective actions’ to bring an AI system into compliance — including withdrawing it from the market (which essentially amounts to commercial destruction) — depending on the nature of the risk it poses.
If the AI Act that’s finally adopted by EU co-legislators retains this provision, it suggests any wiggle room for commercial entities to operate unlawfully trained AI models inside the bloc could be headed for some hard-edged legal clarity soon.
In the meantime, if Clearview obeys all these international orders to delete and stop processing citizens’ data, it will be unable to keep its AI models updated with fresh biometric data on people from the countries where it’s banned — implying that the utility of its product will gradually degrade with each fully enforced ban.