Leading AI Companies Commit to Safer Technology with Watermarks on AI-Generated Content

In a significant development for the AI industry, the White House announced on Friday that major players, including OpenAI, have pledged to enhance the safety and trustworthiness of their AI technology. The commitments reflect a shared focus on three core principles essential to the future of AI: safety, security, and trust. The move comes as concerns about the misuse of AI-generated imagery and audio for fraud and misinformation have escalated, especially ahead of the 2024 US presidential election.

The companies involved in this initiative, namely Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI, are set to join President Joe Biden in unveiling their commitments. These include the development of robust technical mechanisms, such as watermarking systems, to help users identify AI-generated content, a safeguard aimed at preventing people from being deceived by seemingly authentic fakes.

A White House official revealed that the goal is to establish an easy-to-use system for recognizing AI-created content, encompassing both visual and audio material. The companies have agreed to subject their AI systems to independent testing, specifically assessing risks related to biosecurity, cybersecurity, and societal effects.
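The announcement does not spell out how such watermarks would work in practice. Purely as an illustration, one approach described in public research on text watermarking is to bias a model toward a pseudo-random "green list" of tokens during generation and later test whether a passage contains a suspiciously high share of them. The toy Python sketch below makes that idea concrete; every name and parameter in it is hypothetical, assumed for illustration, and it does not represent any of the named companies' actual systems.

import hashlib
import math

# Toy illustration of a "green list" statistical text watermark detector.
# All names and parameters are hypothetical; real systems are far more involved.

GREEN_FRACTION = 0.5  # assumed share of the vocabulary favored at each generation step

def is_green(prev_token: str, token: str) -> bool:
    # Pseudo-randomly assign `token` to the green list, seeded by the previous token.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 255.0 < GREEN_FRACTION

def green_rate(tokens: list[str]) -> float:
    # Fraction of tokens that land in the green list given their predecessor.
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

def z_score(tokens: list[str]) -> float:
    # How far the observed green rate deviates from chance (about GREEN_FRACTION).
    n = max(len(tokens) - 1, 1)
    return (green_rate(tokens) - GREEN_FRACTION) / math.sqrt(GREEN_FRACTION * (1 - GREEN_FRACTION) / n)

if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog".split()
    print(f"green rate: {green_rate(sample):.2f}, z-score: {z_score(sample):.2f}")

In this sketch, text produced by a generator that favored green tokens would show a high z-score, while ordinary human-written text, like the sample above, should hover near chance.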

While organizations like Common Sense Media laud the White House for taking the initiative to regulate AI technology, there remains skepticism about whether tech companies will genuinely uphold their voluntary pledge to act responsibly and support stringent regulations.

To complement these efforts, President Biden is also working on an executive order to ensure the safety and trustworthiness of AI applications. Additionally, the White House aims to collaborate with international allies in formulating a comprehensive framework to govern the development and use of AI globally.

This move towards enhanced AI safety and transparency was a topic of discussion between EU Commissioner Thierry Breton and OpenAI CEO Sam Altman during a June meeting in San Francisco. Breton expressed eagerness to continue those conversations, particularly on watermarking technology, which Altman hinted would be demonstrated soon.

In conclusion, the commitments made by leading AI companies underscore the critical steps being taken to promote responsible AI. With watermarks and other technical safeguards in place, users will have a means to distinguish AI-generated content, instilling greater confidence in the technology’s integrity and fostering a safer digital landscape. As collaboration extends internationally, the future of AI promises to be more secure, trustworthy, and beneficial for society at large.

Source: Tuko