Author Topic: Tech giants like Microsoft and Apple join the U.S. safety consortium  (Read 82 times)

Offline javajolt

Tech giants like Microsoft and Apple join the U.S. safety consortium to tackle risks of AI
Anushe Fawaz · Feb 8, 2024 09:34 EST
[Image: an AI-powered humanoid in thought]

The U.S. government is taking steps to make the development of generative AI safer. In a statement today, U.S. Secretary of Commerce Gina Raimondo revealed that tech giants such as Apple, OpenAI, and Microsoft are joining the first-ever U.S. consortium dedicated to AI safety standards.

These leading companies are among more than 200 entities that have agreed to join the U.S. AI Safety Institute Consortium (AISIC). The tech giants include OpenAI, Google, Meta, Apple, Anthropic, Microsoft, Amazon, Nvidia, Intel, IBM, and HP. The full list can be found here.

The consortium will bring together academics, government and industry researchers, and AI developers to ensure that AI is developed and deployed responsibly. The agency's work follows President Biden's Executive Order on AI, which calls for stronger consumer protection, increased competition, improved risk management, and the advancement of equity and civil rights.

Because AI has advanced so rapidly, security protocols for its safe deployment remain insufficient. Our editorial also highlights the threats generative AI poses to user privacy and security.

Recently, a deepfake of President Biden's voice circulated in the media, and in another case deepfakes were used for financial fraud. The Federal Trade Commission even launched a challenge offering a $25,000 prize for methods that can detect whether a given audio clip is real or AI-generated. AISIC has been established to address such issues.

Gina Raimondo elaborated on the role of AISIC, stating:

“The U.S. government has a significant role in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence. President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do.”

AISIC will focus on handling "red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content."

The consortium's efforts can already be seen in action: Meta and OpenAI recently added watermarks that help identify images as AI-generated, and Meta now requires AI-generated content on Instagram, Facebook, and Threads to be labeled.
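As an aside, the article does not specify how these watermarks work, and the production systems at Meta and OpenAI rely on techniques (such as C2PA provenance metadata and proprietary invisible watermarks) that are far more robust than anything shown here. Purely to illustrate the general idea of embedding a hidden mark in image data, here is a classic least-significant-bit (LSB) sketch in Python; the function names and the toy "image" are invented for this example:

```python
# Toy LSB watermark: hides bytes in the least significant bits of pixel
# values. Illustrative only -- NOT the scheme Meta or OpenAI actually use.

def embed_watermark(pixels, mark):
    """Hide each bit of `mark` (bytes) in the LSB of successive pixel values."""
    bits = [(byte >> i) & 1 for byte in mark for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for watermark")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract_watermark(pixels, length):
    """Read back `length` bytes from the pixel LSBs."""
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (pixels[b * 8 + i] & 1)
        data.append(byte)
    return bytes(data)

pixels = [200, 201, 202, 203] * 16   # toy grayscale "image" of 64 pixels
marked = embed_watermark(pixels, b"AI")
assert extract_watermark(marked, 2) == b"AI"
```

Because only the lowest bit of each pixel changes, the marked image is visually indistinguishable from the original; the trade-off is that simple LSB marks are easily destroyed by compression or resizing, which is exactly why real deployments use more robust schemes.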

source