Citing regulatory concerns, Meta, formerly known as Facebook, has announced that it will not release its upcoming multimodal AI models to consumers in the European Union. The decision comes against a backdrop of evolving data privacy legislation and the company's concerns about complying with EU standards.
These multimodal AI models are designed to process text, images, and audio to enhance AI capabilities across Meta's platforms, including its Ray-Ban smart glasses. According to Axios, Meta decided against releasing these cutting-edge models in the EU because of the unpredictability of the regulatory landscape.
In response to questions from Axios, Meta said, “We will release a multimodal Llama model over the coming months, but not in the EU due to the unpredictable nature of the European regulatory environment.”
Meta's decision follows a similar move by Apple, which recently said it would not roll out its Apple Intelligence features in Europe due to regulatory concerns. Margrethe Vestager, the European Union's competition commissioner, denounced Apple's move, characterising it as a tactic to stifle competition in areas where the tech giant already holds substantial sway.
Withholding these multimodal AI models could hinder European innovation and technological progress. Businesses that intended to integrate Meta's AI models into their products and services will now be unable to offer these features in the EU market.
Meta made it clear that it still plans to launch its next text-only model, Llama 3, in the European Union. The main reason for the company's reluctance is the difficulty of collecting data from European users to train AI models while complying with the General Data Protection Regulation (GDPR), the EU's strict data protection law.
When Meta attempted in May to train its AI models on publicly accessible data from Facebook and Instagram users in Europe, it met resistance from regulators. In response to concerns raised by European data privacy authorities about the ethics and legality of such data usage, Meta suspended those efforts to comply with GDPR.
In a blog post justifying its position on data usage, Meta stressed, “We believe that Europeans will be ill-served by AI models that are not informed by Europe’s rich cultural, social, and historical contributions.”
Despite these obstacles, Meta remains committed to introducing its multimodal AI models in the UK, a country with GDPR-like data protection regulations. The company expressed frustration with what it sees as Europe's slower regulatory adaptation compared with other regions, which it says is limiting its ability to innovate and quickly deploy cutting-edge technologies.
Meta's decision to withhold multimodal AI models from the EU underscores how legal uncertainty can shape tech companies' plans and product rollouts in one of the world's largest markets, even as the company navigates the complexities of global data legislation.