
Superintelligence got you down? OpenAI is on the case

OpenAI wants to build an "automated alignment researcher" to help humans control a hypothetical superintelligent AI

Artificial intelligence (AI) that can help you write emails, plan trips, photoshop pictures, adjust your written tone, and much more has grown rapidly more capable in recent years.

Some people are asking: will AI eventually be out of our control? OpenAI, the people behind ChatGPT, has formed a new team that hopes to find ways to “steer and control AI systems much smarter than us.”

A superintelligence is an AI that is smarter than the most gifted human individuals. “We believe it could arrive this decade,” OpenAI says on its blog.

“Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.”

The plan starts with alignment research. That’s the field that helps to keep AI “aligned with human values” and following “human intent.” OpenAI proposes building an “automated alignment researcher” that could help humans manage superintelligence and keep it on the straight and narrow. It argues that the current tools we use to keep AI in line won’t work on an AI that is smarter than us because of how much they rely on human supervision.

OpenAI’s new team to build the automated alignment researcher will be headed by Jan Leike and Ilya Sutskever. Leike is the alignment team lead and a research associate at the Future of Humanity Institute at Oxford. Sutskever is OpenAI’s co-founder and Chief Scientist.

This decision comes as an increasing number of world leaders are starting to recognize the possibilities and dangers inherent in AI and are struggling to regulate an ever-evolving technology. OpenAI’s Sam Altman visited the White House in May for a meeting about AI with Vice President Kamala Harris, as Congress discussed potential regulations.

In Canada, Bill C-27 proposes new regulations that would protect Canadians’ privacy and regulate AI, including a brand new role: an AI and Data Commissioner. At the time of this writing, Bill C-27 is still in the House of Commons; it went through its second reading in April. ChatGPT is also under investigation by Canadian privacy officials for its use of user data.

Tech experts and AI researchers have raised numerous concerns about the development of large-scale AI. Notable people like Steve Wozniak, Elon Musk, Emad Mostaque and others signed an open letter in March asking labs to pause this work. In May, more industry leaders issued a renewed warning, including Sam Altman, Demis Hassabis, and Geoffrey Hinton.

Advanced AI has created, and will continue to create, ethical questions in all sectors of life. In long conversations, Microsoft’s Bing Chat AI can become erratic, inappropriate, and outright racist, sexist, or homophobic. Research from Stanford shows that essays written by non-native English speakers are more likely to be flagged as AI-generated, even when they are original. AI has also been shown to be a powerful tool for spreading misinformation.

At the same time, this technology has been able to put a lot of good into the world. Consider the MIT scientists who were able to create a new antibiotic with AI assistance, or the AI tool that can revitalize old family photos.

Source: OpenAI Via: Engadget

