# News

Scientists' Open Letter To Control AI Before It's Too Late

Date: September 17, 2024

A group of renowned and influential AI scientists has issued an open letter urging governments worldwide to guard against catastrophic risks to humanity.

The International Dialogue on AI Safety in Venice sparked fresh collective concern among some of the world's most prominent AI pioneers. The scientists addressed their open letter to governments worldwide, urging them to create a global oversight and control system before AI development slips beyond human control.

The Venice dialogue concluded with a focus on building AI for the greater good of humanity. The scientists published the open letter on September 16, outlining collective steps nations must take to prevent AI-driven catastrophes.

“Loss of human control or malicious use of these AI systems could lead to catastrophic outcomes for all of humanity,” the statement read. “Unfortunately, we have not yet developed the necessary science to control and safeguard the use of such advanced intelligence.”

The open letter calls for three pillars of AI monitoring and oversight: emergency preparedness agreements and institutions, a safety assurance framework, and independent global AI safety and verification research.

More than 30 signatories from the United States, Canada, China, Britain, Singapore, and other countries backed a global contingency plan specifying immediate actions in case of emergencies. AI researchers from top institutions and universities noted that scientific exchange on AI advances between the superpowers is shrinking, largely because of growing distrust between the US and China.

“In the longer term, states should develop an international governance regime to prevent the development of models that could pose global catastrophic risks,” said the statement.

This was the third dialogue on AI safety convened by the Safe AI Forum, a US nonprofit research group. In early September, the US, UK, and EU signed the world's first legally binding international AI treaty, which prioritizes human safety, rights, and wellbeing over AI innovation and sets concrete guidelines placing accountability for AI regulation on its makers. Tech corporations and leading AI companies have argued that over-regulation will weaken innovation, especially in the EU. Nevertheless, the EU and other nations have strongly supported AI tools for productivity, education, and other pro-human applications.

By Arpit Dubey
