Facebook’s parent company Meta has been forced to delay the rollout of its AI service in Europe due to privacy concerns and regulatory hurdles. The news emerged in a blog post originally published to defend the company’s AI plans against opposition.
Meta was in the process of introducing a new privacy policy covering how it uses people’s data to train AI models. The company had planned to use Instagram and Facebook user data collected since 2007 to train those models, but it faced legal action in 11 European countries and strong opposition from authorities, including the Irish Data Protection Commission.
Meta cited “legitimate interests” under the General Data Protection Regulation (GDPR) as its legal basis. The company argued that explicit consent was not required because the data it was using was content users had chosen to make public on their social media profiles.
Now the company has been forced to stop collecting this data, saying that without this local information “we’re only giving people a second-rate experience.”
“We are disappointed that our lead regulator, the Irish Data Protection Commission, on behalf of European DPAs, has asked us to delay training our large language models (LLMs) with public content shared by adults on Facebook and Instagram, particularly as we have taken regulators’ feedback into account and notified European Data Protection Authorities (DPAs) since March,” Meta wrote in a blog post.
The company said the move was a “setback for European innovation and competition in AI development” and would “further delay bringing the benefits of AI to Europeans.”
Meta’s competitors use data to train AI
Meta pointed out that competitors such as Google and OpenAI also use data to train AI models, claiming that “AI training is not specific to our services” and that the company is “more transparent than many others in the industry.”
Facebook’s parent company has a point: training AI models requires vast amounts of data regardless of who provides the solution. But Meta holds vast amounts of historical data from billions of users, and it also has a reputation for violating data privacy.
Jake Moore, global cybersecurity advisor at ESET, said artificial intelligence algorithms require “enormous amounts of data” to process and produce output for human consumption.
“Meta has access to vast amounts of personal data, but data regulations may prescribe delays if users have not given permission for their data to be analysed for this purpose,” Moore said.
Moore said that every time we interact with an AI program, it is “likely to collect, analyze and fine-tune what it does with whatever information is available to it,” so it is important to stay vigilant about how easily your personal data can be used.
Will Meta AI be launched in Europe?
It would be a mistake to think that Meta won’t roll out its AI services in Europe: the company says it is “committed to bringing Meta AI and the models that power it to more people around the world, including in Europe.”
Meta also said it needs the data to deliver a functional product: “Without training our models on public content, such as public posts and comments that Europeans share on our service and other services, our models and the AI capabilities they power will not be able to accurately understand important regional language, culture and trending topics on social media,” Meta said.
Meta believes that “complexity, inconsistency and uncertainty” in the application of regulations in the EU “risk Europeans falling even further behind other countries in the adoption of new technologies.”
The company also stressed its “transparent approach” and said it remains committed to “putting AI in the hands of more people around the world, including in Europe.”
But this raises questions for Facebook users in other parts of the world. Europe and the UK have very strict data protection regulations that prioritize consumer privacy. The US is starting to embrace this ethos, but it still lags far behind the EU in data protection.