OAN’s James Meyers
11:50 AM – Thursday, February 8, 2024
Tech giant Google announced it is joining other tech companies, including Adobe, Intel and Microsoft, in an effort to identify when a piece of media has been created or altered by artificial intelligence.
Google said it will be using a project from the software company Adobe called Content Credentials, which gives creators the opportunity to attach a small “CR” symbol to AI-touched media.
The “CR” symbol links to information about when, where and how the media was edited. Essentially, the symbol functions as a form of metadata that discloses AI editing and allows viewers to verify videos, images, audio and documents.
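Conceptually, a Content Credentials record travels with the media and answers “when, where and how was this edited.” The Python sketch below is a simplified illustration of that idea, assuming a hypothetical manifest structure; the real C2PA manifest is a richer, cryptographically signed format embedded in the file.

```python
# Illustrative sketch only: a hypothetical, simplified stand-in for the kind
# of provenance record Content Credentials attaches to a piece of media.
# Field names and action labels here are assumptions, not the C2PA schema.

def summarize_manifest(manifest: dict) -> str:
    """Build a short human-readable summary of when and how a piece of
    media was edited, from a simplified manifest dictionary."""
    tool = manifest.get("tool", "unknown tool")
    when = manifest.get("edited_at", "unknown time")
    actions = ", ".join(manifest.get("actions", [])) or "no recorded actions"
    return f"Edited with {tool} at {when}: {actions}"

example = {
    "tool": "ExamplePhotoEditor 1.0",          # hypothetical editing tool
    "edited_at": "2024-02-08T11:50:00Z",
    "actions": ["ai_generated_fill", "crop"],  # hypothetical action labels
}

print(summarize_manifest(example))
# prints: Edited with ExamplePhotoEditor 1.0 at 2024-02-08T11:50:00Z: ai_generated_fill, crop
```

A viewer clicking the “CR” symbol would see a summary along these lines, letting them judge whether AI was involved in producing the content.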
However, the Coalition for Content Provenance and Authenticity (C2PA) doesn’t support enforcing such measures on all content; instead, it has proposed a way for platforms such as social media sites and news organizations to share trusted digital media.
“The way we think we’re trying to solve the problem is first, we want to have you have the ability to prove as a creator what’s true,” said Dana Rao, who leads Adobe’s legal, security and policy organization and co-founded the coalition. “And then we want to teach people that if somebody is trying to tell you something that is true, they will have gone through this process and you’ll see the ‘CR,’ almost like a ‘Good Housekeeping’ seal of approval.”
The development of AI has opened the door to creative possibilities as well as to disinformation and sexual abuse.
This has prompted calls to rein in the technology or to make it clearer when something has actually been created by AI. One proposed idea is watermarking, which embeds signals in content to help distinguish real media from fake.
Meanwhile, Google has released multiple consumer AI products, such as its Bard chatbot and AI editing tools.
“At Google, a critical part of our responsible approach to AI involves working with others in the industry to help increase transparency around digital content,” said Laurie Richardson, vice president of trust and safety at Google, in a press release about Google joining the C2PA.
“This is why we are excited to join the committee and incorporate the latest version of the C2PA standard. It builds on our work in this space — including Google DeepMind’s SynthID, Search’s About this Image and YouTube’s labels denoting content that is altered or synthetic — to provide important context to people, helping them make more informed decisions.”
These advances in AI technology have also had consequences. Non-consensual sexually explicit “deepfake” images of celebrity women can be found through both Microsoft’s and Google’s search engines. “Deepfake” material includes real photos that have been AI-edited as well as videos that use AI to “swap” faces and clone voices.