EU Kicks Off 2026 With a Digital Privacy Law Surge

As a leading legal expert with deep experience in drafting and analyzing international trade agreements, Desiree Sainthrope has a unique vantage point on the intricate web of EU digital policy. Her interests span intellectual property and the real-world implications of AI, and she offers critical insights into the EU's ambitious regulatory landscape. In this conversation, we explore the practical consequences of major legislative developments in 2026, from new GDPR enforcement deadlines and the balancing act of AI governance to the persistent ambiguities of the Data Act and the vital mission of protecting children's privacy online.

The new GDPR Procedural Regulation introduces a 15-month timeframe for resolving cross-border cases, taking effect in April 2027. How might this deadline affect the thoroughness of investigations, and what new procedural rights should both complainants and companies under investigation be most aware of?

The introduction of this 15-month deadline is a direct attempt to tackle the notorious delays in cross-border GDPR enforcement. On one hand, it's a welcome push for efficiency. On the other, there's a tangible risk that data protection authorities, pressed for time, might conduct less thorough investigations simply to close the case file. The key here is the built-in flexibility: the option to extend the deadline by 12 months for highly complex cases. What's truly transformative, though, is the new set of clarified procedural rights. For both the person lodging the complaint and the company under investigation, the rules of engagement are now harmonized across the EU. This means a more predictable process, from the admissibility of a complaint to the rights each party has during the investigation, which should bring much-needed clarity to what has often been a frustratingly opaque system.

EU privacy bodies recently expressed concern that the Digital Omnibus on AI proposal could weaken transparency obligations for high-risk systems. What are the biggest trade-offs between fostering AI innovation and protecting individual rights, and how could this impact public trust in AI deployment?

This is the central tension in all AI governance. The EU wants to encourage innovation, and measures like AI regulatory sandboxes are a great example of that. But the recent joint opinion from the EDPB and EDPS makes it clear they feel the current proposal tips the scales too far away from fundamental rights. The biggest trade-off is the delay of key obligations, such as transparency requirements for high-risk systems. The thinking is that this gives developers breathing room, but privacy authorities see it as a critical failure to protect individuals from the outset. If the public perceives that powerful AI systems are being rolled out without robust safeguards and clear accountability, trust will inevitably erode. That could create a public backlash that ultimately hinders AI adoption far more than any regulation would. The authorities' specific warnings, urging that AI literacy obligations not be weakened and that the use of sensitive data for bias correction stay tightly limited, show just how high the stakes are.

With the EU Data Act now largely applicable, many businesses find key definitions like ‘data holder’ and ‘product data’ unclear. What are the practical, day-to-day operational risks for companies navigating this ambiguity, and what specific steps can they take now while awaiting further guidance?

The operational risks are immense and immediate. For a company, not knowing definitively whether you are a 'data holder' or what exactly constitutes 'product data' creates a state of legal paralysis. Do you build the infrastructure to share data? Who do you grant access to? Answering these questions incorrectly could lead to non-compliance, disputes, and significant financial risk. Day-to-day, this means engineers, product managers, and legal teams are stuck: they can't design systems or draft contracts with confidence. While we await official guidance from the Commission, which held a workshop in January to address these very issues, companies should be proactive. They need to conduct internal audits to map their data flows and make a good-faith interpretation of these terms based on their business model. Documenting this process meticulously will be their best defense, and their best preparation for when the definitions are eventually clarified.

The Cyprus presidency is continuing its focus on protecting children online, a priority also highlighted on Data Protection Day. Beyond compliance with new regulations, what innovative technical or policy solutions should platforms proactively implement to better protect minors’ privacy in a tangible way?

Compliance is the floor, not the ceiling, especially when it comes to protecting children. Platforms need to move beyond check-the-box exercises and embrace privacy by design as a core product philosophy. This means implementing more robust and intuitive age verification mechanisms that don’t collect excessive personal data. It also involves designing user interfaces where the most privacy-protective settings are the default for any user identified as a minor. Imagine platforms developing “privacy dashboards” specifically for young users and their parents, using simple language and visuals to explain what data is being collected and why. Proactively investing in these tangible, user-centric solutions not only offers better protection but also builds trust with families, which is a far more sustainable strategy than simply reacting to the next regulation, be it the Digital Services Act guidelines or the Digital Fairness Act.

What is your forecast for EU digital policy over the next 18 months, especially concerning the interplay between the Data Act, the Digital Omnibus package, and AI governance?

My forecast is for a period of intense, and at times messy, legislative convergence. We're not looking at separate laws operating in silos anymore. The Digital Omnibus package, for instance, directly proposes amendments to the Data Act, meaning businesses trying to comply with the Data Act today are aiming at a moving target. The real friction will be where these legislative projects overlap. How will the data-sharing obligations of the Data Act interact with the transparency requirements for AI systems that are trained on that very data? The EU institutions are trying to build a cohesive digital rulebook, but the pace is frantic. I expect we'll see significant lobbying and sharp debates as the specifics are hammered out, particularly around the Digital Omnibus on AI, which needs to be fast-tracked to delay certain obligations that would otherwise kick in this August. The next 18 months will be defined by clarification, amendment, and the challenging process of making these ambitious laws work together in practice.
