On Wednesday, former OpenAI Chief Scientist Ilya Sutskever announced he is forming a new company called Safe Superintelligence, Inc. (SSI), which aims to safely build “superintelligence,” a hypothetical form of artificial intelligence that surpasses human intelligence, possibly by a great deal.
“We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product,” wrote Sutskever on X. “We will do it through revolutionary breakthroughs produced by a small cracked team.”
Sutskever was a founding member of OpenAI and formerly served as the company’s chief scientist. Two others are joining Sutskever at SSI initially: Daniel Levy, who previously led the Optimization Group at OpenAI, and Daniel Gross, an AI investor who worked on machine learning at Apple between 2013 and 2017. The three posted a statement announcing the company on its website.
Sutskever and several colleagues resigned from OpenAI in May, six months after Sutskever played a key role in the ouster of OpenAI CEO Sam Altman, who later returned. While Sutskever did not publicly criticize OpenAI after his departure (and OpenAI executives such as Altman wished him well on his new venture), another departing member of OpenAI’s Superalignment team, Jan Leike, publicly complained that at OpenAI, “safety culture and processes [had] taken a backseat to shiny products.” Leike joined OpenAI competitor Anthropic later in May.
A vague idea
OpenAI currently aims to develop AGI, or artificial general intelligence, which could match human intelligence at performing a wide variety of tasks without specific training. Sutskever hopes to leap beyond that in a straight moonshot attempt, with no distractions along the way.
“This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then,” Sutskever said in an interview with Bloomberg. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race.”
In his previous work at OpenAI, Sutskever was part of the “Superalignment” team, which studied how to “align” (shape the behavior of) this hypothetical form of AI, sometimes called “ASI” for “artificial superintelligence,” to benefit humanity.
As you can imagine, it’s difficult to align something that does not yet exist, so Sutskever’s quest has met skepticism at times. On X, University of Washington computer science professor (and frequent OpenAI critic) Pedro Domingos wrote, “Ilya Sutskever’s new company is guaranteed to succeed, because superintelligence that is never achieved is guaranteed to be safe.”
Like AGI, superintelligence is a nebulous term. Since the mechanics of human intelligence are still poorly understood, and since human intelligence is difficult to quantify or define because there is no single set type of human intelligence, identifying superintelligence when it arrives may be tricky.
Computers already far surpass humans in many forms of information processing (such as basic math), but are they superintelligent? Many proponents of superintelligence imagine a sci-fi scenario of an “alien intelligence” with a form of sentience that operates independently of humans, and that is more or less what Sutskever hopes to achieve and control safely.
“You’re talking about a giant super data center that’s autonomously developing technology,” he told Bloomberg. “That’s crazy, right? It’s the safety of that that we want to contribute to.”