X hit with 9 GDPR complaints after hoovering European user data to train AI
The episode is likely to be part of a larger trend as AI companies adapt to data privacy law in Europe, experts say.
xAI launched its chatbot Grok in November. / Adobe Stock
Elon Musk's social media company X was hit Monday with nine complaints over its nonconsensual use of European user data to train its AI chatbot, Grok.
Noyb, the privacy law nonprofit behind the new complaints (its name stands for ‘none of your business’), claims that X's data-harvesting scheme represents a blatant violation of the EU's General Data Protection Regulation (GDPR). The law requires tech companies to obtain explicit consent from users before gathering personal information and to have a valid legal basis for all personal data collection.
“If just a small number of Twitter's 60 million users consented to the training of its AI systems, [X] would have more than enough training data for any new AI model,” Noyb wrote in a blog post published today. “But asking people for permission is not Twitter's current approach, instead they just take user data without information to users or permission from them.”
Noyb has filed its complaints with the data protection authorities (DPAs) in Austria, Belgium, France, Greece, Ireland, Italy, the Netherlands, Poland and Spain.
Last month, an X user pointed out that the platform appeared to have surreptitiously introduced a default setting which allowed it to gather user data to train Grok, a chatbot launched in November by Musk's AI start-up, xAI.
Twitter just activated a setting by default for everyone that gives them the right to use your data to train grok. They never announced it. You can disable this using the web but it's hidden. You can't disable using the mobile app
Direct link: https://t.co/lvinBlQoHC pic.twitter.com/LqiO0tyvZG
— Kimmy Bestie of Bunzy, Co-CEO Execubetch™️ (@EasyBakedOven) July 26, 2024
The post went viral and soon afterwards caught the attention of the Irish Data Protection Commission (DPC), the primary GDPR watchdog for X, which last Tuesday filed proceedings in the Irish High Court against X.
Two days later, X said it would pause its efforts to train Grok on data from European users, a practice that had been in place since early May, according to Politico.
The unfolding case surrounding X and its alleged GDPR infringements has “implications for the entire AI industry, not just for X, and more acutely when it comes to the availability of data to lawfully train models,” says Gabriela Zanfir-Fortuna, vice-president for global privacy at the Future of Privacy Forum, a think tank focused on data privacy. “The fundamental question here is: How can publicly available personal data be collected and used to train AI models in a way that respects all requirements of data protection law in the EU, from having a lawful ground to process the data in the first place, to transparency, to ensuring opt-outs, to ensuring privacy safeguards are baked into the systems being built from their design stage, and so on.”
X isn’t the first big tech company to face privacy-related scrutiny over its AI efforts. In June, Meta announced that it would put a hold on its plan to harvest Instagram and Facebook user data in Europe for AI-training purposes following a series of GDPR violation concerns.
“Many American companies have hit roadblocks in launching AI products in the EU because of issues with GDPR compliance,” says Jennifer Huddleston, a senior fellow in tech policy at the Cato Institute. “GDPR’s static and heavily regulatory approach to data protection has many compliance requirements particularly around consent and deletion that can make it difficult on AI products.”
Some experts predict that regulatory probes into AI developers’ privacy practices will only continue to gain momentum. "There is a fundamental conflict between the development of certain AI tools, which require vast amounts of data, and EU privacy laws that impose data minimization and purpose limitation standards as well as strict transparency obligations," says Jessica Lee, an attorney specializing in data privacy law. "I would not be surprised to see more privacy complaints and regulatory probes over the next few months. We are in the middle of an AI data race, and as companies look to ingest vast amounts of data to power their AI products, there will likely be increased scrutiny on how companies obtain consent, ensure transparency, and comply with laws like the GDPR."
Musk has positioned Grok as a less politically correct alternative to leading chatbots like OpenAI’s ChatGPT and Google’s Gemini. The driving mission behind xAI, according to the company’s website, is to “understand the universe.”
Beyond the alleged privacy violations, X is also under the microscope for its content moderation practices. Today, Thierry Breton, the European Commissioner for Internal Market and Services, posted a letter on X addressed to Musk, reminding the billionaire to ensure proper content moderation and adherence to European tech policy, including the Digital Services Act (DSA), ahead of the broadcast of his conversation with Donald Trump planned for tonight.
“With great audience comes greater responsibility,” Breton wrote in his post.