Desiree Sainthrope is a legal expert whose mastery of global compliance and international trade agreements makes her a pivotal voice in the discussion surrounding data privacy. As the European Union navigates the treacherous waters of the Digital Omnibus proposal, her ability to dissect the tension between innovation and regulation provides essential clarity for practitioners. This conversation explores the shifting legal sands of “legitimate interest” for AI training, the conflicting opinions of European data protection authorities, and the potential for a regulatory landscape defined more by uncertainty than by clear mandates. We delve into how the December 2024 and February 2026 opinions have reshaped the accountability expectations for companies aiming to process data at scale.
How has the recent back-and-forth between the European Commission and the Council regarding the Digital Omnibus proposal impacted the perceived stability of legitimate interest as a legal ground for AI development?
The regulatory environment in Brussels has shifted from relative optimism to deep frustration for those on the front lines. Initially, the December 2024 opinion from the European Data Protection Board offered a measure of security, suggesting that “legitimate interest” could serve as a workable legal basis for training artificial intelligence, provided strict accountability was maintained. However, the Council of the European Union’s recent pivot toward potentially scrapping this provision has introduced a thick layer of legal fog. Instead of seeing these practices enshrined in primary law, we are left wondering whether member states are simply being cautious or whether they fundamentally disagree with current practice. This uncertainty forces companies to rely on non-binding guidance, which feels like building a skyscraper on shifting sand rather than on a solid legal foundation.
What are the primary concerns regarding the accountability of organizations that might utilize these legal pathways to process data on a massive scale without explicit user consent?
The joint opinion released by the EDPB and the European Data Protection Supervisor in February 2026 highlights a significant anxiety about potential loopholes. There is a palpable fear among civil society and certain regulators that relying on legitimate interest could effectively bypass the need for user consent in the large-scale processing of personal information. To prevent this, the 2026 opinion insists that if this legal basis remains, the conditions for its application must be far more rigorous and transparent than what is currently proposed. We are looking at a high-stakes environment in which a “legitimate interest assessment” is no longer a checkbox exercise but a substantive analysis that must demonstrate the innovation’s value outweighs the privacy risks to individuals. The stakes are extremely high: failing to meet these accountability bars could expose companies to crippling compliance challenges under the current, more rigid regime.
In what ways would the removal of specific GDPR amendments from the Omnibus proposal hinder the ability of European companies to remain competitive and bold in the global AI market?
Removing this language from the proposal essentially forecloses the chance for legislators to provide a “how-to” guide for applying the GDPR in real-world AI scenarios. Without coverage in primary law, European firms are left in a state of paralysis, unsure whether their massive investments in model training will be retroactively deemed unlawful under a shifting interpretation of non-binding guidance. This debate is fundamental because its outcome will dictate exactly how bold a company can afford to be when competing with tech giants from regions with fewer restrictions. We are missing the opportunity to see how the law survives contact with complex machine learning use cases, which is a serious blow to long-term innovation strategy. If we return to the status quo, the legal friction of navigating these unwritten rules will likely slow the deployment of home-grown European AI.
What is your forecast for the future of AI regulatory compliance in Europe over the next several years?
I anticipate a period defined by “regulatory exhaustion” as companies struggle to reconcile the conflicting signals coming from the Commission, the Council, and the various data protection authorities. We will likely see a surge in enforcement actions and legal challenges that will force the Court of Justice of the European Union to step in where legislators have hesitated. Until then, the absence of a codified “legitimate interest” ground in the Digital Omnibus will mean that only the most well-funded companies can afford the legal risks of training cutting-edge models. It is a bittersweet outlook: while the focus on privacy remains a core European value, the lack of a simplified path forward may drive AI talent toward more predictable jurisdictions. The next five years will be a grueling test of whether Europe can balance its high privacy standards with the reality of a fast-moving, data-hungry technological revolution.
