Meta has confirmed that it will pause its plans to start training its AI systems using data from its users in the European Union and the United Kingdom.
The move follows pushback from the Irish Data Protection Commission (DPC), Meta's lead regulator in the EU, which acts on behalf of several data protection authorities across the bloc. The UK's Information Commissioner's Office (ICO) also requested that Meta pause its plans until it could address the concerns the ICO had raised.
“The DPC welcomes Meta’s decision to pause its plans to train its large language model using public content shared by adults on Facebook and Instagram across the EU/EEA,” the DPC said in a statement on Friday. “This decision followed intensive engagement between the DPC and Meta. The DPC, in cooperation with its fellow EU data protection authorities, will continue to engage with Meta on this issue.”
While Meta is already tapping user-generated content to train its AI in markets such as the US, Europe's stringent General Data Protection Regulation (GDPR) has created obstacles for Meta, and for other companies, looking to improve their AI systems, including large language models, with user-generated training material.
However, last month Meta began notifying users of an upcoming change to its privacy policy, one it said would give it the right to use public content on Facebook and Instagram to train its AI, including content from comments, interactions with companies, status updates, and photos and their associated captions. The company argued that it needed to do this to reflect “the diverse languages, geography and cultural references of the people of Europe.”
These changes were due to come into effect on June 26, 12 days from now. But the plans spurred not-for-profit privacy activist organization noyb (“none of your business”) to file 11 complaints with EU member states, arguing that Meta is contravening various facets of the GDPR. One of those complaints concerns the issue of opt-in versus opt-out: where personal data processing takes place, users should be asked for their permission first, rather than being required to take action to refuse.
For its part, Meta was relying on a provision in the GDPR called “legitimate interests” to contend that its actions were compliant with the regulations. This is not the first time Meta has leaned on this legal basis in its defence, having previously done so to justify processing European users’ data for targeted advertising.
It always seemed likely that regulators would at least put a stay on Meta’s planned changes, particularly given how difficult the company had made it for users to “opt out” of having their data used. The company says it sent out more than 2 billion notifications informing users of the upcoming changes, but unlike other important public messaging that is plastered to the top of users’ feeds, such as prompts to go out and vote, these notifications appeared alongside users’ standard notifications: friends’ birthdays, photo tag alerts, group announcements and more. So if someone doesn’t regularly check their notifications, it was all too easy to miss this one.
And those who did see the notification wouldn’t automatically know there was a way to object or opt out, as it simply invited users to click through to find out how Meta would use their information. There was nothing to suggest a choice was on offer.
Moreover, users technically weren’t able to “opt out” of having their data used. Instead, they had to complete an objection form setting out their arguments for why they didn’t want their data to be processed; it was entirely at Meta’s discretion whether to honour the request, though the company said it would honour each one.
![Facebook "objection" form](https://techcrunch.com/wp-content/uploads/2024/06/ABABABABA.jpg?w=659)
Although the objection form was linked from the notification itself, anyone proactively looking for it in their account settings had their work cut out.
On Facebook, they first had to click their profile photo at the top right; hit Settings & privacy; tap Privacy Center; scroll down and click on the Generative AI at Meta section; then scroll down again, past a bunch of links, to a section titled More resources. The first link under this section is called “How Meta uses information for generative AI models,” and they needed to read through roughly 1,100 words before arriving at a discrete link to the company’s “right to object” form. It was a similar story in Facebook’s mobile app.
![Link to "right to object" form](https://techcrunch.com/wp-content/uploads/2024/06/6.RightToObject.png?w=680)
When asked earlier this week why this process required users to file an objection rather than opt in, Meta’s policy communications manager, Matt Pollard, pointed TechCrunch to the company’s existing blog post, which says: “We believe this legal basis [‘legitimate interests’] is the most appropriate balance for processing public data at the scale necessary to train AI models, while respecting people’s rights.”
To translate this: making the process opt-in likely wouldn’t generate enough “scale” in terms of people willing to offer up their data. So the best way around that was to issue a solitary notification in among users’ other notifications; hide the objection form behind half a dozen clicks for those seeking it out independently; and then make users justify their objection, rather than give them a straight opt-out.
In an updated blog post on Friday, Stefano Fratta, Meta’s global engagement director for privacy policy, said the company was “disappointed” by the request it had received from the DPC.
“This is a step backwards for European innovation, competition in AI development and further delays bringing the benefits of AI to people in Europe,” Fratta wrote. “We remain highly confident that our approach complies with European laws and regulations. AI training is not unique to our services, and we’re more transparent than many of our industry peers.”
The AI arms race
None of this is new, though, and the AI arms race has shone a giant spotlight on the vast troves of data that Big Tech companies hold on all of us.
Earlier this year, Reddit revealed that it is contracted to make north of $200 million in the coming years in exchange for licensing its data to companies such as OpenAI, the maker of ChatGPT, and Google. And Google is already facing huge fines for leaning on copyrighted news content to train its generative AI models.
But these efforts also highlight the lengths to which companies will go to ensure that they can leverage this data within the constraints imposed by existing legislation; “opt-in” is rarely on the agenda, and the opt-out process is often needlessly arduous. Just last month, someone spotted some questionable language in Slack’s existing privacy policy suggesting that it would be able to leverage user data to train its AI systems, with users able to opt out only by emailing the company.
Last year, Google finally gave online publishers a way to opt their websites out of training its models, by enabling them to inject a piece of code into their sites. OpenAI, for its part, is building a dedicated tool to allow content creators to opt out of training its generative AI; this should be ready by 2025.
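For context, the "piece of code" Google introduced works through the standard robots.txt mechanism: site owners can disallow the Google-Extended crawler token, which governs use of a site's content for Google's AI models without affecting normal search indexing. A minimal sketch of such a robots.txt file:

```
# robots.txt — opt this site out of Google's AI training
# by disallowing the Google-Extended token, while leaving
# ordinary search crawling (Googlebot) untouched.
User-agent: Google-Extended
Disallow: /

User-agent: Googlebot
Allow: /
```

Because robots.txt directives are honoured voluntarily by crawlers, this is an opt-out signal rather than a technical block; crawlers that ignore the protocol are unaffected by it.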
While Meta’s attempts to train its AI on users’ public content in Europe are on ice for now, the plans will likely rear their head again in another form following consultation with the DPC and ICO, hopefully with a different user-permission process in tow.
“In order to get the most out of generative AI and the opportunities it brings, it is crucial that the public can trust that their privacy rights will be respected from the outset,” Stephen Almond, the ICO’s executive director of regulatory risk, said in a statement on Friday. “We will continue to monitor major developers of generative AI, including Meta, to review the safeguards they have put in place and ensure the information rights of UK users are protected.”