TL;DR
- Safe Superintelligence Inc. (SSI), co-founded by former OpenAI scientist Ilya Sutskever, raised $1 billion to develop safe AI systems.
- The company, valued at $5 billion, is backed by major investors like Andreessen Horowitz and Sequoia Capital.
- SSI plans to use the funds for computing power and talent acquisition, focusing on AI safety and alignment with human values.
- CEO and co-founder Daniel Gross emphasizes the importance of investors who support SSI’s plan to spend years on research and development before bringing a product to market.
Safe Superintelligence Inc. (SSI), a new artificial intelligence company, has raised $1 billion to advance its mission of developing safe AI systems. Co-founded by former OpenAI chief scientist Ilya Sutskever, along with Daniel Gross and Daniel Levy, SSI aims to create AI systems that are not only highly intelligent but also safe and aligned with human values.
The funding round, which values the three-month-old company at approximately $5 billion, was led by prominent venture capital firms including Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel. An investment partnership run by Nat Friedman and SSI’s Chief Executive Daniel Gross, known as NFDG, also participated in the funding.
SSI’s primary goal is to develop ‘safe superintelligence’: AI systems with reasoning abilities on par with or exceeding human intelligence that do not pose risks to humanity. The company plans to use the funds to acquire computing power and hire top talent, building a small, highly trusted team of researchers and engineers split between Palo Alto, California, and Tel Aviv, Israel.
Ilya Sutskever, a key figure in the AI field, left OpenAI in May 2024, months after a controversial attempt to remove CEO Sam Altman. His departure was followed by the dismantling of OpenAI’s “Superalignment” team, which focused on ensuring AI systems remain aligned with human values. Sutskever’s new venture, SSI, seeks to address these alignment challenges by prioritizing safety in AI development.
Daniel Gross, SSI’s CEO, emphasized the importance of having investors who understand and support the company’s mission. He stated, “It’s important for us to be surrounded by investors who understand, respect, and support our mission, which is to make a straight shot to safe superintelligence and in particular to spend a couple of years doing R&D on our product before bringing it to market.”
The focus on AI safety comes amid growing concerns about the potential risks of AI systems, including fears that rogue AI could act against human interests or even cause human extinction. A California bill seeking to impose safety regulations on AI companies has divided the industry: OpenAI and Google oppose it, while Anthropic and Elon Musk’s xAI support it.
SSI’s approach to AI development is distinct from that of companies like OpenAI, which prioritize fast-paced commercial advancement. SSI’s “scaling in peace” methodology emphasizes ensuring AI safety before enhancing capabilities, allowing the company to pursue long-term goals without the pressure of immediate commercial success.
As SSI continues to build its team and infrastructure, it plans to partner with cloud providers and chip companies to meet its computing needs. However, the company has not yet disclosed which firms it will collaborate with. Sutskever’s experience and vision for safe AI development position SSI as a potentially transformative player in the AI industry, aiming to set new standards for safety and alignment in artificial intelligence.
Source: Reuters