Spain is advancing new regulations to make artificial intelligence systems and social media platforms safer and more transparent [1, 2, 3, 4].

The move signals a growing conflict between European governments and global technology firms over the governance of algorithmic design and user protection. By targeting practices that specifically affect minors, Spain aims to establish a legal framework that prevents corporate profit motives from overriding public safety [1, 2, 3].

Digital Transformation Minister Óscar López announced the push this week and said the government will not yield to pressure from the tech industry [5, 6]. The regulations focus on curbing design practices that target children and on increasing platform accountability for how their AI systems operate [1, 2, 4].

López addressed the influence of major technology firms during the announcement. He said, "The profit of four tech companies cannot come at the expense of the rights of millions" [1, 7].

Government officials said the rules are necessary to protect citizens' rights in an era of rapid AI integration [1, 2, 3]. The administration intends to increase transparency about how data is used and how algorithms influence user behavior [1, 4].

Despite intense lobbying from major technology companies, the Spanish government remains committed to its timeline for these rules [2, 5]. López reaffirmed this stance in a recent interview: "We will not be swayed by lobbying; Spain will push ahead with these rules to protect our citizens, especially children" [6].

The proposed measures include stricter oversight of AI-driven content delivery and mandates for platforms to disclose the mechanisms used to engage young users [1, 4]. This approach aligns Spain with broader European efforts to regulate the digital economy and protect consumer privacy [2].
Spain's determination to implement these rules despite industry pressure suggests a shift toward more aggressive national enforcement of digital safety. If successful, this framework could serve as a blueprint for other EU member states to challenge the dominance of big tech firms and prioritize the protection of minors over platform engagement metrics.