
MANILA, Philippines – More than 200 signatories, including Nobel Peace Prize laureate and Rappler CEO Maria Ressa, urged the United Nations General Assembly on Monday, September 22, to establish artificial intelligence “red lines” — limits that AI should not cross, backed by robust enforcement mechanisms — by the end of 2026.
The call for AI red lines by AI experts, Nobel Prize laureates, and former heads of state and ministers underscores AI’s potential as well as its risks.
The AI Red Lines initiative stated that “AI could soon far surpass human capabilities and escalate risks such as engineered pandemics, widespread disinformation, large-scale manipulation of individuals including children, national and international security concerns, mass unemployment, and systematic human rights violations.”
Without proper checks on the development of AI, the signatories warned, “it will become increasingly difficult to exert meaningful human control in the coming years.”
“History teaches us that when confronted with irreversible, borderless threats, cooperation is the only rational way to pursue national interests,” Ressa said.
In addition to Ressa, signatories include OpenAI cofounder Wojciech Zaremba, Anthropic CISO Jason Clinton, Google DeepMind research scientist Ian Goodfellow, British-Canadian computer scientist Geoffrey Hinton, Nobel laureate in Economics Joseph Stiglitz, and others.
The Verge, in its report, noted that some regional AI red lines already exist, namely the European Union’s AI Act, which bans certain “unacceptable” uses of AI within the EU, and the agreement between the US and China that nuclear weapons should remain under human control.
Aside from an international treaty on AI red lines and the translation of such pledges into national law, the AI Red Lines initiative recommends setting up an independent international technical body with standardized auditing protocols to verify that AI systems are developed according to agreed-upon rules and do not cross internationally defined boundaries. It added, “The International Network of AI Safety and Security Institutes is well-positioned to play a role in this process.”
More information on the AI Red Lines initiative is available on its website. – Rappler.com