Victoria Councillor Uses Deepfake to Seek AI Regulation

Victoria City Councillor Jeremy Caradonna recently drew the attention of the public and his fellow policymakers by releasing a sophisticated deepfake video of himself to demonstrate the growing dangers of unregulated synthetic media. In the video, a fabricated version of the councillor explains how easily artificial intelligence can generate lifelike but entirely false content. The demonstration was not merely a technical showcase but a deliberate warning that such tools can be weaponized by malicious actors to spread targeted disinformation and erode democratic institutions. By allowing his own likeness to be manipulated, Caradonna underscored an unsettling reality: voters and elected officials can no longer implicitly trust the visual or audio information they encounter online. That poses a serious challenge for representative government, since functional public discourse depends on citizens sharing a common basis of fact.

The Balancing Act: Innovation Versus National Security

While the councillor acknowledged that artificial intelligence offers real benefits in critical sectors such as healthcare diagnostics, advanced manufacturing, and autonomous transportation, he argued that its unregulated advancement poses a direct threat to national security. Because these systems can generate content with a convincing aura of truthfulness, he called for immediate, coordinated action across every tier of Canadian government. Caradonna's primary objective is to secure council approval of a formal motion urging the Union of BC Municipalities and the Federation of Canadian Municipalities to work closely with provincial and federal authorities on a framework of reasonable, enforceable regulations that protect democratic institutions from foreign and domestic interference. Standardized oversight of these technologies, he contends, would let beneficial AI development flourish while mitigating the systemic risks they currently pose to national sovereignty and social cohesion.

The proposed motion moved forward for review at the April 2, 2026, committee meeting, with the goal of presenting it at major regional and national government conventions later this year. Local leaders agreed that the risks of AI-driven misinformation were too severe to ignore and required a unified, whole-of-government response. Proposed measures include mandatory watermarking of AI-generated media, public awareness campaigns to improve digital literacy among the electorate, and rapid-response protocols for debunking deepfakes during active election cycles. These guardrails are intended to balance technological progress with the need to preserve public trust in the information ecosystem. The initiative also offers a blueprint for how municipalities might shape broader national policy through provocative, real-world demonstrations of emerging digital threats.
