A CBC news video experiment created with Veo 3 demonstrates how AI video technology can undermine digital trust. The breaking-news clip reports on a wildfire spreading in Alberta. An anchor calmly delivers the news while, behind her, a map shows flames spreading toward central Canada; the broadcast then cuts to a reporter at the scene as an emergency alert sounds in the background. Yet everything on screen is fake. The video was created with Google’s AI video generation tool, ‘Veo 3’, and is not an actual news report.

CBC created the clip as an experiment to demonstrate the power of AI video technology; the purpose was to show how sophisticated fabricated information has become. The video looked real to viewers, with none of the glitches or defects common in earlier AI content. Experts noted that Veo 3 can realistically simulate physical elements such as voices, sound effects, shadows, and textures. Tools that can render falsehood as reality are now open to the general public.
Videos more real than the real thing
Veo 3 videos spread online immediately after the tool’s release. In its first week alone, fake celebrity death announcements, fabricated political press conferences, and manipulated election scenes were created in multiple languages. These videos reached tens of thousands of people in a short time, and some were mistaken for real news by media outlets and on social media.

The UK’s Alan Turing Institute recently reported that “AI video blurs the line between fact and falsehood, and its effects are not one-off but cumulative.” The report pointed to cases in which AI-produced parody videos were confused with media reports or candidate statements during actual elections. Professor Angela Misri of Toronto Metropolitan University in Canada warned, “If AI-created false realities keep repeating, people will eventually trust nothing.”
Limitations of detection technology
Warnings about deepfake technology are not new, but existing detection systems are increasingly powerless against sophisticated AI content. Physical errors, mismatched lip movements, and unrealistic backgrounds were once the telltale clues for identifying manipulated video. Veo 3 overcomes these weaknesses and neutralizes the detection criteria themselves.
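To make those older detection criteria concrete, here is a toy sketch of one such physical-error cue: blink-rate analysis, a widely cited early heuristic (early deepfakes often under-blinked). This is an illustrative example, not any detector actually deployed against Veo 3; it assumes the opencv-python package and its bundled Haar cascade files, and the file name 'clip.mp4' is hypothetical.

```python
# Toy artifact-based heuristic: early deepfakes often under-blinked, so a
# suspiciously low rate of closed-eye frames was once a crude detection cue.
# Illustrative sketch only; assumes the opencv-python package is installed.
import cv2

def closed_eye_ratio(video_path: str, sample_every: int = 5) -> float:
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")
    cap = cv2.VideoCapture(video_path)
    sampled = eyes_closed = frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame_idx += 1
        if frame_idx % sample_every:  # only analyze every Nth frame
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            sampled += 1
            # No detectable eyes inside a detected face ~ "eyes closed" sample.
            eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
            if len(eyes) == 0:
                eyes_closed += 1
    cap.release()
    return eyes_closed / sampled if sampled else 0.0

# A near-zero ratio was once a weak hint of synthesis ('clip.mp4' is a placeholder).
print(f"closed-eye frame ratio: {closed_eye_ratio('clip.mp4'):.3f}")
```

Modern generators reproduce natural blinking, shadows, and lip sync, so a cue like this no longer separates real from fake, which is precisely the weakness described above.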
Professor Nina Brown of Syracuse University in the United States said, “AI-created videos deceive the viewer’s senses and even dull the critical thinking of media consumers,” adding, “Repeated fake videos leave the public confused about what to believe.”
Regulation is slow, technology moves fast
The US Congress passed the ‘Take It Down Act’ in April 2025, criminalizing non-consensual deepfake sexual content. However, there is still no comprehensive regulation covering the manipulation of public information in politics, society, and health. The European Union likewise requires transparency for AI content under the Digital Services Act (DSA), but actual enforcement is slow and varies by region.
The Ada Lovelace Institute, a technology watchdog, concluded that “current technical protective measures alone are insufficient to prevent the spread of misinformation.” Researcher Julia Smekman said, “AI video works on emotion, image, and sound at once, so existing keyword filtering and algorithmic warnings have clear limits.”
How to stop AI fake news
Solutions to stop the spread of fake videos are also under discussion. The key is restoring ‘digital trust’. One proposal is ‘digital watermarking’: automatically labeling video content to indicate AI generation. Meta, Microsoft, and others are introducing a joint standard for attaching ‘origin tags’ to AI content, though it is not yet mandatory and is not applied across all platforms.
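To illustrate what an ‘origin tag’ does, here is a minimal conceptual sketch: a provenance label cryptographically bound to the video file so that tampering or relabeling becomes detectable. This is a simplified stand-in, not the companies’ actual joint standard (real provenance frameworks such as C2PA use certificate-based signatures and richer manifests); the key and field names are hypothetical.

```python
# Conceptual "origin tag": bind a provenance label to a video file with a
# signature so that altering the file or the label is detectable.
# Simplified illustration only; real standards use asymmetric certificates.
import hashlib
import hmac
import json

SECRET_KEY = b"publisher-signing-key"  # hypothetical shared key for the sketch

def make_origin_tag(video_bytes: bytes, generator: str) -> dict:
    payload = {
        "sha256": hashlib.sha256(video_bytes).hexdigest(),  # binds tag to exact bytes
        "generator": generator,
        "ai_generated": True,
    }
    msg = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
    return payload

def verify_origin_tag(video_bytes: bytes, tag: dict) -> bool:
    claimed = {k: v for k, v in tag.items() if k != "signature"}
    if claimed["sha256"] != hashlib.sha256(video_bytes).hexdigest():
        return False  # file was altered after tagging
    msg = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag["signature"])

clip = b"...video bytes..."                  # placeholder for real file contents
tag = make_origin_tag(clip, "Veo 3")
print(verify_origin_tag(clip, tag))          # True: tag matches content
print(verify_origin_tag(clip + b"x", tag))   # False: content changed
```

The design point is that the tag travels with a hash of the exact bytes: re-encode or edit the clip and verification fails. This is also why such labels only help if platforms actually check them at the distribution stage.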
The role of the press and social media platforms is also crucial. What is needed is a system that detects AI-generated content at the distribution stage and clearly presents fact-checking results to viewers. Canada’s CBC is running viewer education programs alongside its AI video experiments, teaching students, teachers, and the general public to identify misinformation and encouraging open discussion about the dual nature of AI technology.
Alongside technical defenses, restoring trust in the press and strengthening citizens’ ability to interpret information must be pursued. Professor Angela Misri emphasized, “To respond to the threats created by AI, citizen education, media ethics, and technological regulation must work in concert.”
Fake news is not a new phenomenon. But artificial intelligence now makes the fake look real and the real look fake. More than ever, a balance of technology, regulation, and citizen vigilance is needed to protect the truth.