ChatGPT creator OpenAI says it takes the full spectrum of AI safety risks seriously and is launching its “Preparedness” team as planned.
OpenAI, the artificial intelligence (AI) research and deployment firm behind ChatGPT, is launching a new initiative to assess a broad range of AI-related risks.
OpenAI is building a new team dedicated to tracking, evaluating, forecasting and protecting against potentially catastrophic risks stemming from AI, the firm announced on Oct. 25.
Called “Preparedness,” OpenAI’s new division will specifically focus on potential catastrophic AI risks related to chemical, biological, radiological and nuclear threats, as well as individualized persuasion, cybersecurity, and autonomous replication and adaptation.
Led by Aleksander Madry, the Preparedness team will try to answer questions such as how dangerous frontier AI systems are when misused and whether malicious actors could deploy stolen AI model weights.
“We believe that frontier AI models, which will exceed the capabilities currently present in the most advanced existing models, have the potential to benefit all of humanity,” OpenAI wrote, while acknowledging that AI models also pose “increasingly severe risks.” The firm added:
“We take seriously the full spectrum of safety risks related to AI, from the systems we have today to the furthest reaches of superintelligence. […] To support the safety of highly-capable AI systems, we are developing our approach to catastrophic risk preparedness.”
According to the blog post, OpenAI is now seeking talent with different technical backgrounds for its new Preparedness team. Additionally, the firm is launching an AI Preparedness Challenge for catastrophic misuse prevention, offering $25,000 in API credits to its top 10 submissions.
In July 2023, OpenAI said it was planning to form a new team dedicated to addressing potential AI threats.
Related: CoinMarketCap launches ChatGPT plugin
The potential risks associated with artificial intelligence have been frequently highlighted, along with fears that AI could become more intelligent than any human. Despite acknowledging these risks, companies like OpenAI have continued to actively develop new AI technologies, which has in turn sparked further concerns.
In May 2023, the nonprofit organization Center for AI Safety released an open letter on AI risk, urging the community to treat mitigating the risk of extinction from AI as a global priority alongside other societal-scale risks, such as pandemics and nuclear war.
Magazine: How to protect your crypto in a volatile market — Bitcoin OGs and experts weigh in