OpenAI Co-Founder Sutskever Launches New AI Venture

OpenAI co-founder Ilya Sutskever has unveiled his new AI company, Safe Superintelligence (SSI).

Having left the ChatGPT maker last month, Sutskever wasted no time in launching his new AI venture, announcing it on social media.

On June 19, he wrote on X that Daniel Gross, who previously led Apple’s AI and search efforts, and Daniel Levy, another OpenAI alumnus, will join him. SSI will have offices in Palo Alto, California, and Tel Aviv, Israel.

In a letter posted on X, Ilya Sutskever, Daniel Gross, and Daniel Levy stated, “We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence. It’s called Safe Superintelligence Inc.”

The trio further wrote, “We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.” The letter also read that the team is “dedicated to focusing on SSI and nothing else”.

Sutskever also spoke to Bloomberg in detail about the new company, but he did not discuss its financing structure or any valuation attributed to SSI.

According to those involved with the startup, raising capital will not be an issue. SSI is a for-profit company by design, unlike OpenAI in its early stages.

On May 15, Sutskever posted on X that he was working on an as-yet-unnamed project that is “very personally meaningful” to him.

About Ilya Sutskever

The 37-year-old Israeli-Canadian computer scientist played a key role at OpenAI, contributing both to the safety aspects of its advancing technology and to the development of “superintelligent” systems.

He worked with Jan Leike on the latter. Both parted ways with the company after a fallout over AI safety, a dispute that has acted as a catalyst for the work SSI will now take up, while Leike has assumed a leadership position at rival AI firm Anthropic.

Sutskever clashed with CEO Sam Altman over how quickly to develop AI, a technology prominent scientists have warned could harm humanity if allowed to advance without built-in safeguards, for instance against misinformation. Jan Leike, who co-led the so-called superalignment team with Sutskever, also resigned; his duties included investigating ways to constrain AI’s potential harm.

In his resignation tweet, he wrote, “After almost a decade, I have made the decision to leave OpenAI. The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the excellent research leadership of @merettm. It was an honor and a privilege to have worked together, and I will miss everyone dearly. So long, and thanks for everything.”
