U.S. And Other Countries Develop New Global Guidelines For AI Security

A photo taken on November 23, 2023 shows the logo of the ChatGPT application developed by US artificial intelligence research organization OpenAI on a laptop screen (R) and the letters AI on a smartphone screen in Frankfurt am Main, western Germany. Sam Altman's shock return as chief executive of OpenAI late on November 22 -- days after being sacked -- caps a chaotic period that highlighted deep tensions at the heart of the Artificial Intelligence community. The board that fired Altman from his role as CEO of the ChatGPT creator has been almost entirely replaced following a rebellion by employees, cementing his position at the helm of the firm. (Photo by Kirill KUDRYAVTSEV / AFP) (Photo by KIRILL KUDRYAVTSEV/AFP via Getty Images)

OAN’s James Meyers
8:15 AM – Monday, November 27, 2023

The United States and multiple other countries have come to an agreement on guidelines for the use of Artificial Intelligence (AI). 


In a 20-page document unveiled on Sunday, 18 countries agreed on how AI should be developed and used to keep customers and the public safe from misuse.

The agreement is non-binding and offers only general recommendations, such as monitoring AI systems for abuse, vetting software suppliers, and protecting data from tampering.

The director of the U.S. Cybersecurity and Infrastructure Security Agency, Jen Easterly, said it was important for countries to place safety guidelines on AI systems.

“This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs,” Easterly told Reuters, saying the guidelines represent “an agreement that the most important thing that needs to be done at the design phase is security.”

The agreement aims to keep AI technology from being hijacked by hackers and to ensure proper security testing is completed before a new product is released.

Critics of AI believe the technology could be used to disrupt elections, take jobs, or turbocharge fraud.

The agreement is broken down into four key areas: secure design, secure development, secure deployment, and secure operation and maintenance, with suggested practices to help improve security in each.
