LONDON: Ireland’s data protection watchdog has opened a formal investigation into the artificial intelligence chatbot Grok, developed by Elon Musk’s AI venture xAI and integrated into social media platform X, over concerns that it may have violated European privacy laws by generating and facilitating the spread of sexually explicit deepfake images.
The inquiry was announced by Ireland’s Data Protection Commission (DPC), which serves as the lead regulator for X in the European Union because the company’s EU headquarters is in Dublin. Under the EU’s General Data Protection Regulation (GDPR), Ireland has the authority to examine whether companies operating across the bloc are complying with strict rules governing the collection, processing and safeguarding of personal data.
The investigation follows mounting reports that Grok’s image-generation and editing tools were being used to create manipulated images of real individuals, including non-consensual “undressing” or sexualised deepfakes. Some reports suggested that minors were also targeted in such content, heightening concerns among regulators and child-protection advocates.
Deepfakes — hyper-realistic images, audio or videos generated using artificial intelligence — have emerged as a growing global problem. When used maliciously, they can damage reputations, spread misinformation and violate privacy rights.

In the case of Grok, European regulators are examining whether X put in place adequate safeguards to prevent the misuse of its AI tools and whether personal data was processed lawfully in training or deploying the system.

Under GDPR, companies must ensure that personal data is handled transparently, securely and only for legitimate purposes. If the DPC finds that X failed to comply, it has the power to impose hefty fines — potentially up to 4% of the company’s global annual turnover. Beyond financial penalties, regulators can also order changes to how services operate within the EU.
The probe adds to a broader wave of regulatory scrutiny facing major technology companies over generative AI tools. European authorities have been particularly vigilant about AI systems that can produce harmful or unlawful content, especially where minors or non-consensual intimate imagery are involved. The EU has recently strengthened its digital oversight framework through measures such as the Digital Services Act and the upcoming AI Act, both of which aim to hold platforms accountable for the risks posed by emerging technologies.