27.05.2025 • Whitepaper

Deepfakes and Digital Trust: Why we need Authentication Technology to Secure Video Evidence

Generative AI promises to transform security systems, delivering new levels of efficiency and accuracy and deepening the data these systems can capture. These positive advancements dominate our industry's conversations about AI's impact on security.

Photo: Leo Levit, Chairman, Onvif Steering Committee © Onvif

However, not all of generative AI's influences are positive; some pose serious threats to the integrity of the industry's core technology: video surveillance. One of the most pressing areas of concern is the growing prevalence of manipulated or "deepfake" videos, made possible by the mainstream availability of video alteration tools based on generative AI. Creating deepfake videos or altering existing footage used to require expert skills and expensive equipment, but can now be done with everyday apps, many of which are free to download and use.

While some deepfakes or manipulated videos are easy to spot - think a dinosaur wandering around an office lobby - other alterations can be invisible to the viewer because they involve no obvious changes to the footage. For example, altering a clip's timestamp to a different day or time misrepresents when the event occurred. Removing specific frames from a clip can erase an event or a person of interest, so the clip no longer accurately represents what happened. More extreme examples include substituting one person's face for another's in a scene, or using generative AI to put a firearm in an individual's hand.

The threat of deepfakes has moved from theoretical concern to practical reality in recent years. In 2024, a multinational company in Hong Kong was tricked into wiring $25 million to fraudsters after participating in a video call with a deepfake of the company’s chief financial officer. Law enforcement agencies have also reported instances where manipulated surveillance footage was submitted as evidence in criminal cases, with timestamps and content altered to create false alibis.

This ability to alter video can ultimately pose significant challenges to organizational trust in video evidence and the industry’s ability to maintain the authenticity of surveillance footage, which can have severe consequences in many areas. Video is one of the most crucial pieces of evidence used in criminal investigations, court proceedings, and internal corporate security investigations. 

In many countries, a robust chain-of-custody process is required as part of law enforcement investigations and the admission of video as evidence in court. Public distrust of video can easily raise reasonable doubt in the eyes of a jury or sway a judicial ruling, in both court proceedings and corporate investigations.

If the current legal precedents about the admissibility of video evidence are undermined by AI manipulation, courts may be forced to establish entirely new standards for this type of evidence. This could potentially exclude video evidence in cases where authentication cannot be established. 

For corporate security, the stakes are equally high. Internal investigations rely heavily on surveillance footage to resolve incidents ranging from workplace safety violations to theft and harassment claims. Human resources departments and corporate legal teams often base critical decisions on video evidence. If this evidence is in doubt, organizations face increased liability risks, higher settlement costs, and greater difficulty in fairly resolving disputes in the workplace. Insurance companies have also begun expressing concern about the ability to verify claims in an era of manipulable video, with some policies now specifically addressing digital evidence reliability.

The impacts extend beyond the courtroom and corporate settings. Public safety organizations, transportation systems, critical infrastructure protection, and national security applications all rely on verified video for both real-time decision-making and after-action reviews.

As these threats continue to grow, traditional forensic techniques to safeguard video footage will not be enough to protect against generative AI’s ability to covertly and overtly alter surveillance video. This growing need for new solutions highlights the importance for industry collaboration and a standardized way to preserve the integrity of video and institutional trust in the footage as an accurate view of a situation.

Finding a Solution with Media Signing

As a global standards organization, Onvif is working on a method of video authentication called media signing, which provides proof that the video has not been altered since it left the specific camera sensor that captured it. Securing the video at its earliest point, when the camera's sensor captures it, is key to ensuring the authenticity and trustworthiness of the footage from camera to court.

On a technical level, the method involves a camera holding a unique signing key that is used to sign a group of video frames, where each frame is accounted for. The signature is then embedded in the video. When the video is played through a media player (such as a standalone player or a video management client) that supports media signing and holds a trusted root certificate from the camera manufacturer, the player can verify that the video data originated directly from that specific camera and has not been tampered with. If pixels in a video frame have been altered, or frames have been removed or reordered, the signature verification will fail and the player will signal that the video is not valid.
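The scheme described above can be sketched in a few lines of Python. This is an illustrative simplification, not the Onvif implementation: the function names and data layout are invented here, and an HMAC with a symmetric key stands in for the asymmetric camera key and manufacturer certificate chain that real media signing would use. The point it demonstrates is that signing the ordered list of per-frame hashes makes any pixel change, frame removal, or frame reordering detectable.

```python
import hashlib
import hmac

# Hypothetical per-camera secret. Real media signing uses an asymmetric
# key pair whose public half chains to a manufacturer root certificate;
# HMAC stands in so this sketch runs with the standard library alone.
CAMERA_KEY = b"per-camera-secret-key"


def sign_frame_group(frames: list[bytes]) -> bytes:
    """Hash each frame, then sign the ordered concatenation of hashes,
    so every frame is accounted for, in order."""
    digest_list = b"".join(hashlib.sha256(f).digest() for f in frames)
    return hmac.new(CAMERA_KEY, digest_list, hashlib.sha256).digest()


def verify_frame_group(frames: list[bytes], signature: bytes) -> bool:
    """Recompute the signature from the received frames and compare."""
    digest_list = b"".join(hashlib.sha256(f).digest() for f in frames)
    expected = hmac.new(CAMERA_KEY, digest_list, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)


frames = [b"frame-0", b"frame-1", b"frame-2"]
sig = sign_frame_group(frames)

assert verify_frame_group(frames, sig)                  # untouched clip verifies
assert not verify_frame_group(frames[:2], sig)          # dropped frame fails
assert not verify_frame_group(frames[::-1], sig)        # reordered frames fail
```

Because each frame's hash feeds the group signature in order, a verifier needs only the frames and the embedded signature; it does not need to trust any intermediate system that handled the file.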

Simplifying Authentication for Law Enforcement

Standardizing video authentication through Onvif gives recipients a common way to verify the authenticity of the video they receive. This can streamline processes for video users such as law enforcement and other criminal justice personnel, who deal with footage generated by systems from many different manufacturers that may use a variety of methods for protecting video. In addition, because the video is secured right at the camera that captured it, there is no need to prove the chain of custody for the video: authenticity can be verified at every step, from the camera to a person viewing the exported recording. With authentication provided at the point of capture, the video can be traced back to the device that recorded it.

Open Source Release

Onvif is planning to release its implementation of media signing as an open-source project. Opening the implementation to the developer community will add transparency to the Onvif method and make it easier for a wide range of developers to use, helping the standard gain broader adoption in the security industry. That transparency in the technical implementation, in turn, preserves trust in the authentication process and in the integrity of the video itself.

Standardizing this process for the security industry, and for other sectors that rely on camera footage, will provide consistency and reliability in establishing the authenticity of video. Onvif believes that video authentication at the source, through media signing at the camera, will provide the assurances needed to preserve trust in surveillance video.


Author: Leo Levit, Chairman, Onvif Steering Committee


ONVIF
2400 Camino Ramon, Suite 375
San Ramon, CA 94583
US
