
Makers of ChatGPT call for regulation to safeguard humanity from the risks posed by fast-developing AI

The creators of ChatGPT, OpenAI, have called for the regulation of ‘superintelligent’ artificial intelligence (AI) systems, asserting that an international regulatory body similar to the International Atomic Energy Agency should be established to safeguard humanity from the risks associated with rapidly advancing AI.

In a note published on OpenAI’s website, co-founders Greg Brockman and Ilya Sutskever, along with CEO Sam Altman, urged the formation of an international regulator that would develop methods to inspect AI systems, conduct audits, enforce safety standards, and impose restrictions on deployment and security levels. The goal is to mitigate the potential “existential risk” posed by such systems.

According to the note, it is conceivable that within the next decade, AI systems will surpass human expertise in most domains and carry out productive activity on the scale of today's large corporations. The power of superintelligent AI is expected to exceed that of any previous technology, offering both significant benefits and potential downsides. The OpenAI leaders emphasize the need for proactive management of these risks and advocate for coordination among organizations involved in AI research to ensure responsible development aligned with societal values and safety.

The Center for AI Safety (CAIS) in the US, which focuses on reducing risks associated with AI, outlines eight categories of catastrophic and existential risks related to AI development.

The creators of ChatGPT believe that democratic decision-making is necessary to establish boundaries and defaults for AI systems, but they acknowledge the challenge of designing an effective mechanism for this purpose.

While the note recognizes the potential for a better world through AI, with examples in education, creative work, and productivity, it also highlights the decreasing cost and increasing number of actors involved in AI development, making regulation essential. Halting AI progress would require extreme measures like a global surveillance regime, which is not guaranteed to be effective.

During a panel discussion with US lawmakers, OpenAI CEO Sam Altman stressed the critical importance of regulating increasingly powerful AI models to mitigate their risks. Altman specifically addressed concerns about AI’s potential impact on election integrity.

OpenAI was founded on the belief that AI has the potential to improve various aspects of human life while acknowledging the associated risks. The proliferation of sophisticated AI models in the market has raised concerns about their potential to exacerbate societal issues such as misinformation and prejudice.

Altman also highlighted the positive contributions of AI, suggesting that OpenAI’s generative AI could one day help address major challenges such as climate change and cancer treatment. Given the risks involved, however, he maintained that regulatory intervention by governments is crucial.
