How to Protect Your Brand and Audiences from Fake Press Releases, Imposter Content and Deepfakes
In our discussions with clients at Tauth Labs, one idea is resonating strongly at a time of rising digital fraud: adding a layer of digital security to content so audiences can determine whether it is authentic or has been manipulated.
A large company with an extremely robust user-verification process noted that once content was shared, journalists, analysts and clients had no way to confirm its authenticity without going back to the source.
In a world where the whole point of content is for it to be shared, this is a major issue.
Large technology consulting firms and technology leaders are waving red flags, warning that generative AI makes it easier both to create and to share fake or imposter content. A quick search for fake press releases turns up companies that have seen wild swings in their stock prices and highlights the real damage being done.
Generative AI tools can now do things that were not possible a few years ago: content generation at scale, instant creation of websites, vishing (voice phishing), and agentic (automated) distribution of content.
Although these tools make it easier to build businesses and business models, they also make it easier to spread misinformation and disinformation and to perpetrate fraud. By some estimates, the total cost of cybercrime in the U.S. alone is expected to exceed $1 trillion in 2025. And because state actors with almost unlimited budgets are involved, fake content designed to look like it comes from reputable media organizations and to influence opinion is part of this toxic mix.
As recognition grows of the need for tools that protect companies, their clients and consumers, sophisticated software has been developed to identify fraud and disinformation in the digital world. Combining this technology with human evaluation gives companies the ability to spot disinformation campaigns, imposter content, manipulated content and deepfakes once they have been posted or shared.
Addressing fake or manipulated content once it has been posted is not cheap, in terms of both dollars and time. In a world where stock trades are executed in milliseconds based on signals from the news or from what may look like company announcements, a lot of damage can be done in the seconds, minutes and hours it takes to address the impact of a malicious piece of content.
Can the risks be reduced by making it easier for audiences to know what is real? The answer is yes.
Content authentication, a technology standard developed by the world’s largest technology and media companies, gives companies a way to let audiences know whether content can be trusted. Content credentials, in the form of robust digital watermarks embedded in documents, images, audio and video, prove that a piece of content actually comes from the company or individual it appears to come from, show whether changes have been made, and record information about how it was produced.
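To make the mechanism concrete, the sketch below shows, in simplified form, how a credential of this kind can be checked: a publisher signs a small manifest (a content hash, the issuer, production notes) with its private key, and anyone holding the matching public key can verify both the signature and that the content has not been altered since it was signed. This is a conceptual illustration only, not the actual C2PA manifest format or Tauth Labs’ implementation; the field names, keys and example press release are hypothetical.

```python
# Conceptual sketch of content-credential signing and verification.
# Not the real C2PA format; assumes the publisher's Ed25519 public key
# is distributed to audiences out of band.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def make_manifest(content: bytes, issuer: str, private_key: Ed25519PrivateKey):
    """Publisher side: bind a content hash and provenance info, then sign it."""
    manifest = {
        "issuer": issuer,                                   # hypothetical field names
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "produced_with": "newsroom CMS v3",                 # illustrative production note
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return manifest, private_key.sign(payload)


def verify_manifest(content: bytes, manifest: dict, signature: bytes, public_key) -> bool:
    """Audience side: check the signature, then check the content hash still matches."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(signature, payload)               # raises if the manifest was forged
    except InvalidSignature:
        return False
    return manifest["content_sha256"] == hashlib.sha256(content).hexdigest()


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    release = b"ACME Corp. announces third-quarter results ..."
    manifest, sig = make_manifest(release, "ACME Corp.", key)

    print(verify_manifest(release, manifest, sig, key.public_key()))                    # True
    print(verify_manifest(release + b" (edited)", manifest, sig, key.public_key()))     # False: content changed
```

A real content credential travels inside the file itself and chains back to a certificate authority rather than a single shared key, but the basic idea is the same: the check can be run by anyone, anywhere the content ends up, without going back to the source.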
Authentication is an ounce of prevention that saves a pound of cure: it makes authenticated content, rather than fake content, more likely to be acted on by audiences, protecting both them and companies from potential fraud. It also addresses a number of other challenges, including declining trust in content, proof of authorship, and the incorporation of opt-in and opt-out language around AI training.
It is not a magic bullet. We cannot wish digital fraud away. It is here to stay. But in the same way that security certificates (the move from http to https) provided a new layer of digital security for websites, authentication promises to do the same for content.
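For readers who want to see the website-level analogue, the short snippet below retrieves a site’s TLS certificate, the credential a browser validates automatically before showing the padlock; the host name is only an example. Content credentials aim to give individual pieces of content a comparably verifiable trust anchor.

```python
# Fetch the TLS certificate a browser would validate before trusting a site.
# The host below is only an example.
import ssl

pem_cert = ssl.get_server_certificate(("www.example.com", 443))
print(pem_cert.splitlines()[0])   # "-----BEGIN CERTIFICATE-----"
```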
In the new AI world, we need content authentication alongside an “all of the above” approach: tools that identify disinformation, check facts and keep up with the whack-a-mole fraudsters damaging reputations and bottom lines.
Simon Erskine Locke is co-founder & CEO of Tauth Labs, which provides trusted content authentication to the communications industry based on the C2PA standard. He is also founder & CEO of CommunicationsMatch™, a search platform for communications agencies, and a former head of communications functions at Prudential Financial, Morgan Stanley and Deutsche Bank.