OpenAI Will Only Deploy Its Latest Technology if It Is Deemed Safe



Artificial intelligence firm OpenAI laid out a framework to address safety in its most advanced models, including allowing the board to reverse safety decisions, according to a plan published on its website Monday.

Microsoft-backed OpenAI will only deploy its latest technology if it is deemed safe in specific areas such as cybersecurity and nuclear threats. The company is also creating an advisory group to review safety reports and send them to the company's executives and board. While executives will make decisions, the board can reverse those decisions.

Since ChatGPT's launch a year ago, the potential dangers of AI have been top of mind for both AI researchers and the general public. Generative AI technology has dazzled users with its ability to write poetry and essays, but it has also sparked safety concerns over its potential to spread disinformation and manipulate humans.

In April, a group of AI industry leaders and experts signed an open letter calling for a six-month pause in developing systems more powerful than OpenAI's GPT-4, citing potential risks to society. A May Reuters/Ipsos poll found that more than two-thirds of Americans are concerned about the possible negative effects of AI, and 61 percent believe it could threaten civilization.

Earlier this month, OpenAI delayed the launch of its custom GPT store until early 2024. OpenAI had introduced the custom GPTs and store during its first developer conference in November. The company is reportedly continuing to "make improvements" to GPTs based on customer feedback.

Last month, OpenAI also underwent turmoil when its board fired CEO Sam Altman. Altman returned to the company days after his ouster, while the board was revamped.

© Thomson Reuters 2023




