Welcome to an insightful conversation with Desiree Sainthrope, a distinguished legal expert with deep expertise in drafting and analyzing trade agreements, and a recognized authority in global compliance. With a keen interest in intellectual property and the evolving implications of emerging technologies like AI, Desiree brings a unique perspective to the complex world of cybersecurity regulations. In this interview, we dive into the pressing need for stricter cybersecurity laws, the impact of recent legislative developments in the UK and Europe, the question of accountability for breaches, and the importance of collaboration between security teams and leadership. Join us as we explore these critical topics through Desiree’s expert lens.
How do you view the sentiment among security professionals that current cybersecurity laws aren’t strict enough, and what’s your take on whether tougher regulations are needed?
When 69% of security professionals say current laws aren't strict enough, as recent surveys have found, that reflects a real frustration with the pace of regulatory evolution compared to the speed of cyber threats. I agree that in many areas, laws haven’t kept up with the sophistication of attacks or the scale of damage they can cause. For instance, many existing frameworks lack specific mandates for emerging risks like AI-driven threats. That said, stricter isn’t always better: laws need to be smart, adaptable, and enforceable. Without that balance, you risk overburdening organizations without actually improving security.
What are some of the most glaring gaps in today’s cybersecurity regulations that you believe should be prioritized for reform?
One major gap is the lack of harmonization across jurisdictions. Companies operating globally often face a patchwork of rules that create compliance headaches and leave loopholes for attackers to exploit. Another issue is the insufficient focus on proactive measures—most regulations are reactive, kicking in after a breach rather than incentivizing prevention through things like mandatory stress testing or investment in resilience. Lastly, there’s a lag in addressing new tech like AI, where the potential for misuse isn’t yet fully accounted for in legal frameworks.
Looking at recent regulations like the EU AI Act, DORA, and NIS2, which do you think will reshape the cybersecurity landscape the most, and why?
I’d point to NIS2 as having the most transformative potential. It builds on the original NIS Directive by widening the scope of critical sectors and imposing stricter requirements for incident reporting and risk management. What’s really groundbreaking is the personal liability it places on senior management for non-compliance. That’s a game-changer because it forces cybersecurity to become a boardroom priority, not just an IT issue. While the EU AI Act is critical for emerging tech and DORA strengthens financial sector resilience, NIS2’s broader reach and accountability focus give it the edge in terms of immediate impact.
How do you anticipate the UK’s Cyber Security and Resilience Bill will influence businesses once it’s fully implemented?
The Bill, still in progress, is set to have a profound effect, especially with its push for mandatory incident reporting and penalties for non-compliance. For the roughly 1,000 UK firms it will apply to, particularly in critical infrastructure, it’s going to demand a significant uplift in transparency and preparedness. The ban on ransomware payments for certain sectors is also a bold move, though controversial—it could deter attacks but might also leave organizations in a tough spot during a crisis. Overall, I expect it to drive a cultural shift where cybersecurity becomes a core business function, though smaller firms might struggle with the resource burden.
When it comes to accountability for cyber breaches, who do you believe should bear the primary responsibility, and what’s your reasoning?
I lean toward the view that the board should hold primary responsibility, aligning with the overwhelming sentiment from security professionals. Cybersecurity isn’t just a technical issue; it’s a strategic one. Boards set the tone for risk management and resource allocation, so when a breach happens, it often reflects failures at that level—whether it’s underfunding security or ignoring known risks. Holding CISOs solely accountable, as only a minority suggest, misses the bigger picture. They can’t operate effectively without executive support and budget. It’s about shared responsibility, but the ultimate accountability has to rest at the top.
There’s a debate about whether individual employees who violate policies should face consequences for breaches. What’s your perspective on balancing personal accountability with organizational responsibility?
I think personal accountability for employees needs to be handled with care. While it’s true that human error or negligence can open the door to breaches, pinning the blame on individuals often ignores systemic issues like inadequate training or unclear policies. I believe consequences should focus more on education and correction rather than punishment, unless there’s clear malice or gross negligence. The organization’s responsibility is to create an environment where mistakes are minimized through robust processes. Blaming individuals can erode trust and discourage reporting of smaller issues before they escalate.
With over half of surveyed professionals supporting sanctions or fines for senior management in serious cyber incidents, do you think personal liability for leaders is a fair approach, and could it drive better security practices?
Personal liability for senior management, as seen in laws like NIS2 and DORA, is a fair approach in principle because it aligns accountability with authority. Leaders who make strategic decisions should face consequences if their negligence leads to catastrophic breaches. I do think it can drive better practices: when personal fines or sanctions are on the table, cybersecurity gets prioritized in a way that vague corporate penalties don’t achieve. However, it must be balanced with clear guidelines on what constitutes negligence versus unavoidable risk, or you risk deterring capable leaders from taking on these roles.
How can security teams foster stronger collaboration with senior management to ensure cybersecurity is a shared priority at the board level?
Security teams need to step out of the technical silo and engage the board on their terms. This means translating cyber risks into business impacts—dollars lost, reputational damage, or legal liabilities—rather than jargon about vulnerabilities. Regular briefings, tabletop exercises, and involving board members in incident response planning can build that partnership. It’s also about building trust; security professionals should position themselves as enablers of business goals, not just gatekeepers. When leadership sees cybersecurity as a competitive advantage, collaboration follows naturally.
What strategies can security professionals use to better communicate complex cyber risks to non-technical stakeholders like board members?
The key is storytelling with data. Instead of diving into technical details, security teams should frame risks in relatable scenarios, such as “here’s what could happen if we’re hit by ransomware during peak sales season,” and back them up with stats on downtime costs or customer loss. Visual aids, like risk heatmaps, can also simplify complex ideas. Another tactic is benchmarking: showing how peers or competitors handle similar threats can make the stakes clearer. Ultimately, it’s about making the invisible threat of cybercrime tangible to those who don’t live in the tech world.
Looking ahead, what is your forecast for the future of cybersecurity regulations over the next few years, especially with the rapid evolution of technology?
I foresee a wave of more prescriptive and globally coordinated regulations over the next few years, driven by the borderless nature of cyber threats and technologies like AI. We’ll likely see frameworks that mandate specific security controls rather than just broad principles, especially for critical infrastructure and data-heavy sectors. There’s also going to be a stronger push for international alignment—think cross-border agreements on incident reporting or AI governance—to close gaps that attackers exploit. But the challenge will be balancing innovation with regulation; if laws become too rigid, they could stifle tech advancements. It’s a tightrope, and I expect a lot of trial and error as policymakers catch up to the threat landscape.