Spain has asked prosecutors to examine major social media platforms over concerns about the spread of AI-generated child sexual abuse material, in a move that signals growing pressure on technology firms across Europe.
The investigation request, confirmed by government sources and reported by Reuters and The Guardian, focuses on whether platforms including X, Meta, and TikTok have allowed the distribution of illegal synthetic images. The issue has moved rapidly up the political agenda as governments across the EU grapple with the rise of artificial intelligence tools capable of generating realistic but fabricated content.
Spain’s initiative marks one of the most high-profile national responses so far. It reflects increasing concern that existing moderation systems may not be equipped to detect or remove AI-generated abuse material quickly enough.
A government response to emerging technology risks
Officials say the request to prosecutors follows technical assessments warning that deepfake technology is being used to create and circulate illegal content involving minors. Although the images are artificially generated, Spanish law treats such material as criminal when it depicts sexual abuse.
The government has framed the move as part of a broader effort to strengthen online child protection. Spain is already weighing tighter age-verification rules and greater accountability for platforms operating in the country.
Prosecutors will now review whether there is sufficient evidence to open formal criminal proceedings or request further information from the companies involved. The process is likely to take time, and no charges have been filed at this stage.
Pressure growing across Europe
Spain’s action comes amid wider European debate about how to regulate artificial intelligence and protect children online. Several countries are exploring stricter rules on social media access for minors, as well as clearer legal responsibilities for platforms hosting user-generated content.
The European Union is already implementing the Digital Services Act, which requires large platforms to remove illegal material and assess systemic risks. However, the rapid development of AI tools has created new challenges for enforcement and detection.
Experts say synthetic imagery can be harder to identify than traditional illegal content, particularly when it is generated and shared quickly across multiple platforms. This has prompted calls for stronger monitoring systems and clearer legal frameworks.
Legal review process
The request to prosecutors does not automatically mean charges will follow. Instead, it begins a legal review to determine whether offences may have occurred and whether companies met their obligations under Spanish and European law.
Any formal investigation would likely involve requests for platform data, moderation policies, and internal safeguards. Technology firms could also face regulatory scrutiny even if criminal charges are not pursued.
For now, the focus is on establishing the scale of the problem and the adequacy of existing protections. The outcome may influence how Spain — and potentially other EU countries — approach the regulation of AI-generated content in the months ahead.
As artificial intelligence becomes more widely available, governments are under increasing pressure to respond quickly to its misuse. Spain’s move places the issue firmly on the national agenda and signals that enforcement efforts around online child protection are likely to intensify.