
Top IT Companies Agree to New AI Safety Commitments


The governments of South Korea and the UK have announced that major artificial intelligence companies have committed to a new set of voluntary measures aimed at improving AI safety. Prominent technology firms including Amazon, Google, Meta, Microsoft, OpenAI, xAI, and the Chinese developer Zhipu AI have agreed to publish frameworks describing how they will assess the risks posed by their advanced artificial intelligence models.

Ahead of today’s opening of the global AI summit in Seoul, the firms committed to refrain from developing or deploying AI models whose serious risks cannot be adequately mitigated. This builds on the principles of the Bletchley Declaration, agreed at the first AI Safety Summit hosted by UK Prime Minister Rishi Sunak in November.

“These pledges guarantee that the top AI firms globally will offer openness and responsibility regarding their strategies for creating secure AI,” said Sunak. “It establishes a precedent for international AI safety standards that will unlock this game-changing technology’s benefits.”

According to the agreement, AI companies must evaluate the risks associated with their frontier models before, during, and after training and deployment. They must lay out mitigation plans and clearly define the boundaries of unacceptable risk. Anna Makanju, OpenAI’s vice-president of global affairs, stressed the need to develop safety protocols in tandem with scientific breakthroughs and pledged to work collaboratively to ensure AI benefits humanity as a whole.

This announcement echoes similar voluntary pledges made by several of these firms at the White House last year. The statement emphasizes public transparency, except where disclosure could increase risk or reveal sensitive commercial information, but the accountability mechanisms behind these pledges remain unclear.

Michelle Donelan, the UK’s science secretary, pointed to the achievements of earlier voluntary agreements and underlined the need for continued cooperation between industry and government. She also noted the UK’s position that more government-funded research into the dangers of AI is needed, rather than rushing into legislation to ensure its safety.

As the summit progresses, global attention remains focused on building robust frameworks that manage the risks of AI technology while safely realizing its potential.
