With an extensive background in drafting and analyzing international trade and compliance agreements, Desiree Sainthrope has become a leading voice on the legal and political challenges of regulating emerging technologies. Her work at the intersection of intellectual property, AI, and global policy provides a unique lens through which to view the escalating tensions between sovereign nations and multinational tech platforms. This conversation explores the diplomatic maneuvering behind the UK’s push for online safety, the practical realities of enforcing new laws against tech giants, the philosophical divides that complicate international cooperation, and the shadowy influence of personal politics on corporate compliance. We delve into the specifics of a high-level meeting between UK and U.S. officials concerning AI-generated harmful content and what it signals for the future of online regulation.
When a senior UK official raises concerns about AI-generated content with a U.S. counterpart like JD Vance, what are the immediate goals? Can you walk us through the typical next steps for international cooperation on tech regulation following such a high-level discussion?
The immediate goal in a meeting like the one between Deputy Prime Minister Lammy and Vice President Vance is, first and foremost, to elevate the issue beyond a technical or regulatory dispute into a matter of high-level diplomatic importance. It’s about signaling that this isn’t just a concern for a regulator like Ofcom; it’s a priority for the entire government, right up to the Prime Minister. By directly raising the flood of AI-generated sexualized images and explicitly stating the UK’s position, the goal is to get a read on the U.S. administration’s receptiveness and put them on notice. The fact that Vance was reportedly “receptive” is a crucial first step. Following this, the process typically cascades down. Diplomatic staff will follow up, sharing more detailed briefings and evidence. Regulators like Ofcom will engage with their U.S. counterparts, and you’ll see a series of technical and legal discussions aimed at finding common ground, even if formal policy alignment is difficult.
Regulators like Ofcom are now investigating platforms under new laws like the Online Safety Act. When a platform responds by making a problematic AI feature a paid service, what message does that send, and what further enforcement mechanisms can regulators realistically use?
That kind of response sends a deeply cynical message. It suggests the platform views the creation of harmful, potentially unlawful material not as a fundamental safety failure to be rectified, but as a commercial feature to be managed. The UK government’s spokesperson hit the nail on the head by calling it a move that “simply turns an AI feature that allows the creation of unlawful images into a premium service.” It’s an attempt to appear responsive while doing the bare minimum and, frankly, it’s an insult to the regulator’s intelligence. For Ofcom, this doesn’t end the matter. They’ve already initiated an “expedited assessment,” and this response will be factored into their investigation. Under the Online Safety Act, they have significant enforcement powers. This could escalate to formal notices demanding specific changes, and if the platform remains non-compliant, it could face fines of up to £18 million or 10% of qualifying worldwide revenue, whichever is greater, and, in the most serious cases, court-ordered business disruption measures that can restrict access to the service in the UK. The platform is essentially daring the regulator to act, and Ofcom, with full government backing, is now positioned to set a major precedent.
The U.S. has historically expressed concerns that the UK’s online safety laws could limit free expression. How can officials bridge this philosophical divide when discussing a specific harm like AI deepfakes, and what practical compromises might be on the table?
This is the central challenge in U.S.-UK tech diplomacy. The U.S. sees a potential threat to free expression, while the UK sees an urgent need to protect citizens from clear and present harms. The way to bridge this divide is to frame the issue not as a debate about speech, but as a fight against unequivocally illegal activity. When you’re talking about AI generating child sexual abuse material, you’ve moved far beyond a philosophical discussion on expression. The UK’s approach, as articulated by Prime Minister Starmer, is to label this content “disgraceful,” “disgusting,” and “unlawful.” This language is deliberate; it frames the problem as a law enforcement issue. A practical compromise might involve focusing regulations very narrowly on the functionality of AI tools that enable the creation of illegal images, rather than on broader content moderation policies. The goal would be to demonstrate that the aim is to disable a criminal tool, not to police legitimate speech.
Evidence now suggests that AI chatbots are generating child sexual abuse material that is then circulated on the dark web. Beyond platform-level content removal, what technical and law enforcement steps are most effective in disrupting this supply chain, and what are the biggest hurdles?
Once this material moves to the dark web, the problem becomes exponentially more difficult and requires a completely different set of tools. Platform-level removal is a crucial first step, but it’s like plugging one leak in a dam that’s already burst. To disrupt the supply chain, you need a two-pronged approach. On the technical side, it involves deep forensic work to analyze the AI-generated images, looking for digital watermarks or artifacts that can trace them back to the specific model, like Grok, that created them. On the law enforcement side, it requires intense international cooperation between agencies that specialize in navigating the dark web. The findings from groups like the Internet Watch Foundation are vital, as they provide the initial intelligence. The biggest hurdles are the anonymity that the dark web provides, the sheer volume of content, and the jurisdictional nightmare of pursuing criminals who could be operating from anywhere in the world.
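To make that forensic first pass a little more concrete, here is a minimal, illustrative Python sketch (an editorial addition, not a tool Sainthrope describes) that surfaces whatever generator metadata an image openly declares in its PNG text chunks or EXIF fields. Real provenance work goes much further, into invisible watermark detection and model-specific artifact analysis, which this sketch does not attempt.

```python
from PIL import Image  # pip install pillow

def inspect_provenance(path: str) -> dict:
    """Collect openly declared generator metadata from an image file.

    Many AI image tools write their name or generation parameters into
    PNG text chunks or EXIF fields; this only reads what is present and
    does not detect invisible (steganographic) watermarks.
    """
    clues = {}
    with Image.open(path) as img:
        # PNG text chunks (e.g. "parameters", "Software") appear in img.info
        for key, value in img.info.items():
            if isinstance(value, str):
                clues[f"info:{key}"] = value
        # EXIF tag 0x0131 = Software, 0x010E = ImageDescription
        exif = img.getexif()
        for tag in (0x0131, 0x010E):
            if tag in exif:
                clues[f"exif:{hex(tag)}"] = str(exif[tag])
    return clues

if __name__ == "__main__":
    import json, sys
    print(json.dumps(inspect_provenance(sys.argv[1]), indent=2))
```

In practice, distributors routinely strip this kind of self-declared metadata, which is why investigators lean on the harder signals described above rather than on fields like these.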
The political relationships of tech leaders, who sometimes advise presidents, can be a major factor. To what extent do these personal dynamics influence a platform’s compliance with international regulations, and how do governments navigate that complex reality during negotiations?
These personal dynamics are enormously influential and create a shadow diplomacy that runs parallel to official channels. When a platform’s owner has a history as a presidential adviser and maintains relationships with powerful political figures—like the connection between Musk, Trump, and Vance—it can foster a sense of immunity or at least a belief that political pressure can be applied to soften a regulatory blow. It complicates negotiations because you’re no longer just dealing with a corporate entity; you’re dealing with a politically connected individual. Governments navigate this by building a robust, undeniable case based on law and evidence, as the UK is doing with the Online Safety Act. They also work to build a coalition of international allies to demonstrate that the concerns are widespread, not just the agenda of one country. This multilateral pressure makes it much harder for one individual, no matter how well-connected, to ignore their legal obligations.
What is your forecast for the regulation of generative AI on major social media platforms over the next two years?
I forecast a period of intense and unavoidable conflict. The era of self-regulation is definitively over. Over the next two years, we’re going to see regulators like Ofcom use their new powers to make an example of non-compliant platforms. This will likely involve a series of high-profile investigations, ultimatums, and potentially the first major fines levied under these new online safety regimes. Platforms will continue to test the boundaries, employing tactics like paywalling problematic features or making superficial changes, but regulators are now equipped and mandated to push back hard. The legislative frameworks are in place; the next two years will be about enforcement and precedent-setting. It will be a messy, litigious, and politically charged process, but it will fundamentally reshape the responsibilities of platforms regarding the AI tools they deploy.