Article source: Silicon Rabbit Jun
Image source: Generated by AI
How much is the “former OpenAI employee” label worth in the market?
On February 25 local time, Business Insider reported that Thinking Machines Lab, the new company just announced by Mira Murati, former chief technology officer of OpenAI, is raising US$1 billion at a US$9 billion valuation.
Thinking Machines Lab has not yet disclosed a timetable or any specifics of its products or technologies. The company’s only public information is a team of more than 20 former OpenAI employees and their vision: to build a future where “everyone can access knowledge and tools to make AI serve people’s unique needs and goals.”
Mira Murati and Thinking Machines Lab
The capital appeal of OpenAI alumni has snowballed. Before Murati, SSI, founded by former OpenAI chief scientist Ilya Sutskever, had already reached a US$30 billion valuation on the strength of its OpenAI pedigree and a concept alone.
Since Musk quit OpenAI in 2018, former OpenAI employees have founded more than 30 new companies, raising more than US$9 billion in total. Together they form a complete ecosystem spanning AI safety (Anthropic), infrastructure (xAI), and vertical applications (Perplexity).
This recalls the wave of Silicon Valley entrepreneurship set off by Musk, Peter Thiel, and the other founders who left after eBay acquired PayPal in 2002, forming the “PayPal Mafia.” That wave produced legendary companies such as Tesla, LinkedIn, and YouTube. OpenAI’s departing employees are now forming their own “OpenAI Mafia.”
The “OpenAI Mafia” script is even more aggressive: the “PayPal Mafia” took 10 years to produce two billion-dollar companies, while the “OpenAI Mafia” produced five billion-dollar companies within just two years of ChatGPT’s launch. Among them, Anthropic is valued at US$61.5 billion, Ilya Sutskever’s SSI at US$30 billion, and Musk’s xAI at US$24 billion. Within the next three years, the “OpenAI Mafia” may well produce unicorns valued in the hundreds of billions of dollars.
The “OpenAI Mafia” has set off a new round of talent fission that is rippling through Silicon Valley and reshaping the global AI power landscape.
OpenAI’s fission path
Of OpenAI’s 11 co-founders, only Sam Altman and Wojciech Zaremba, head of the language and code generation team, remain at the company.
2024 marked the peak of departures from OpenAI. Over the course of the year, Ilya Sutskever (who resigned in May 2024), John Schulman (who resigned in August 2024), and others left one after another. The OpenAI safety team shrank from 30 to 16 people, a 47% reduction. Among executives, key figures such as Chief Technology Officer Mira Murati and Chief Research Officer Bob McGrew departed in succession. On the technical side, core talents such as Alec Radford, chief designer of the GPT series, and Tim Brooks, head of Sora (who joined Google), left; deep learning expert Ian Goodfellow joined Google, and Andrej Karpathy left for the second time to found an education company.
“Together, a blazing fire; scattered, a sky full of stars.”
More than 45% of the core technical staff who joined OpenAI before 2018 have chosen to strike out on their own. These new ventures have disassembled and recombined OpenAI’s technical gene pool into three major strategic camps.
The first camp is the “direct lineage” that carries on OpenAI’s DNA: a group of ambitious people building, in effect, OpenAI 2.0.
Mira Murati’s Thinking Machines Lab has almost completely transplanted OpenAI’s R&D structure: John Schulman is responsible for the reinforcement learning framework, Lilian Weng leads the AI safety system, and even the neural architecture diagram of GPT-4 serves directly as the technical blueprint for new projects.
Their “Open Science Manifesto” takes direct aim at OpenAI’s closed turn in recent years, with plans to build a “more transparent AGI R&D path” through the ongoing release of technical blogs, papers, and code. This has already sent ripples through the AI industry: three top researchers from Google DeepMind jumped ship to join, bringing the Transformer-XL architecture with them.
Ilya Sutskever’s Safe Superintelligence Inc. (SSI) chose another path. Sutskever co-founded the company with two other researchers, Daniel Gross and Daniel Levy. They abandoned all short-term commercialization goals to focus on building “irreversibly safe superintelligence,” a technical framework that is almost a philosophical proposition. Almost as soon as the company was established, a16z, Sequoia Capital, and other institutions decided to invest US$1 billion to bankroll Sutskever’s ideals.
Ilya Sutskever and SSI
The second camp comprises the “subversives” who left before ChatGPT even existed.
Anthropic, founded by Dario Amodei, has evolved from an “OpenAI opposition” into its most dangerous competitor. Its Claude 3 series matches GPT-4 in many benchmarks. Anthropic has also established an exclusive partnership with Amazon AWS, gradually eroding OpenAI’s footing in computing power; the chips Anthropic and AWS are jointly developing may further weaken OpenAI’s bargaining power in NVIDIA GPU purchases.
Another representative figure in this camp is Musk. Although Musk left OpenAI in 2018, some founding members of his xAI also worked at OpenAI, including Igor Babuschkin and Kyle Kosic (who later returned to OpenAI). Backed by Musk’s formidable resources, xAI threatens OpenAI on talent, data, and computing power alike. By integrating the real-time social data stream of Musk’s X platform, xAI’s Grok-3 can instantly capture trending events on X to generate answers, whereas ChatGPT’s training data cuts off in 2023, leaving a significant freshness gap. This data closed loop is something OpenAI, which relies on the Microsoft ecosystem, finds hard to replicate.
However, Musk does not position xAI as OpenAI’s disruptor so much as a reclamation of the original “open” intent of OpenAI. xAI pursues a “maximum open source” strategy: the Grok-1 model, for example, is open-sourced under the Apache 2.0 license, attracting developers worldwide to its ecosystem. This contrasts sharply with OpenAI’s closed-source tendency in recent years (GPT-4, for instance, is available only through an API).
The third camp consists of the “disruptors” who are reconstructing the logic of the industry.
Perplexity, founded by former OpenAI research scientist Aravind Srinivas, was among the first companies to reinvent the search engine with large AI models, using AI to generate answers directly in place of a page of links. Today it handles more than 20 million searches a day and has raised more than US$500 million at a US$9 billion valuation.
Adept was founded by David Luan, former vice president of engineering at OpenAI, who worked on language, supercomputing, and reinforcement learning research as well as safety and policy for the GPT-2, GPT-3, CLIP, and DALL-E projects. Adept focuses on AI agents, aiming to help users automate complex tasks (such as generating compliance reports and design drawings) by combining large models with tool-invocation capabilities. Its ACT-1 model can directly operate office software, Photoshop, and other applications. The company’s core founding team, including David Luan, has since moved to Amazon’s AGI team.
Covariant is an embodied-intelligence startup valued at US$1 billion. Its founding team all came from OpenAI’s disbanded robotics team, and its technical DNA stems from their experience developing GPT models. The company concentrates on foundation models for robots, aiming to achieve autonomous robot operation through multimodal AI, with a particular focus on warehousing and logistics automation. However, three members of its core founding team, Pieter Abbeel, Peter Chen, and Rocky Duan, have since joined Amazon.
Some “OpenAI Mafia” startups
Source: public information; compiled by: Flagship
The transition of AI from tool to factor of production has given rise to three types of industrial opportunity: substitution scenarios (such as disrupting traditional search engines), incremental scenarios (such as the intelligent transformation of manufacturing), and reconstruction scenarios (such as foundational breakthroughs in the life sciences). These scenarios share common traits: the potential to build a data flywheel (user interaction data feeding back into the model), deep interaction with the physical world (robot action data, biological experiment data), and gray areas in ethical regulation.
OpenAI’s technology spillover is supplying the underlying impetus for this industrial transformation. Its early open source strategy (such as the partial open-sourcing of GPT-2) created a dandelion effect of technology diffusion; but once technological breakthroughs entered deep water, closed-source commercialization became the inevitable choice.
This contradiction has produced two phenomena: on the one hand, departing talent migrates technologies such as the Transformer architecture and reinforcement learning into vertical scenarios (such as manufacturing and biotech), building barriers with scenario data; on the other hand, the giants stake out technology positions through talent acquisitions, forming a closed loop of technology harvesting.
When the moat becomes a watershed
The “OpenAI Mafia” is advancing in leaps and bounds, while the old club, OpenAI itself, is struggling.
On technology and products, the release of GPT-5 has been repeatedly postponed, and the market widely believes that the pace of innovation in the flagship ChatGPT product line can no longer keep up with the industry.
In the market, latecomer DeepSeek has begun to close the gap with OpenAI: its model performance approaches ChatGPT’s while its training cost is only 5% of GPT-4’s. This low-cost replication path is dismantling OpenAI’s technical barriers.
In fact, much of the “OpenAI Mafia’s” rapid growth can be traced to OpenAI’s internal contradictions.
OpenAI’s core research team has, by now, all but fallen apart. Of the 11 co-founders, only Sam Altman and Wojciech Zaremba remain, and 45% of the core researchers have left.
Wojciech Zaremba
Co-founder Ilya Sutskever left to create SSI, founding member Andrej Karpathy has publicly shared his experience with Transformer optimization, and Tim Brooks, head of the Sora video generation project, moved to Google DeepMind. On the technical team, more than half of the authors of the early GPT versions have departed, most of them joining OpenAI’s competitors.
At the same time, OpenAI’s own hiring priorities appear to have shifted, according to data compiled by Lightcast, a firm that tracks job postings. In 2021, 23% of the company’s job postings were for general research positions; by 2024, general research accounted for only 4.4%, reflecting the changing status of research talent inside OpenAI.
The organizational culture clash brought on by commercial transformation has grown increasingly visible. While headcount has expanded 225% in three years, the early hacker spirit has gradually given way to a KPI system; some researchers say bluntly that they have been forced to shift from exploratory research to product iteration.
This strategic wobble leaves OpenAI in a double bind: it must keep producing breakthrough technologies to sustain its valuation, while facing competitive pressure from former employees who can rapidly replicate its results using its own methodology.
The decisive contest in the AI industry lies not in parameter breakthroughs in the laboratory, but in who can inject technological genes into the capillaries of industry: reconstructing the underlying logic of business in the answer streams of search engines, the motion trajectories of robotic arms, and the molecular dynamics of living cells.
Is Silicon Valley trying to split OpenAI?
The rapid rise of both the “OpenAI Mafia” and the “PayPal Mafia” owes much to California law.
Since California outlawed non-compete agreements in 1872, its unique legal environment has acted as a catalyst for Silicon Valley innovation. Under Section 16600 of the California Business and Professions Code, any contractual provision that restrains a person’s professional freedom is void. This institutional design directly promotes the free flow of skilled talent.
The average tenure of a Silicon Valley programmer is only 3 to 5 years, far shorter than at other technology hubs. This high-frequency mobility creates a knowledge spillover effect. Take Fairchild Semiconductor: its departing employees founded 12 semiconductor giants, including Intel and AMD, laying the industrial foundation of Silicon Valley.
Banning non-compete agreements may look like insufficient protection for innovative companies, but in practice it promotes innovation: the movement of technical talent accelerates the diffusion of technology and lowers the barriers to entry.
In 2024, the U.S. Federal Trade Commission (FTC) projected that its comprehensive ban on non-compete agreements, issued in April 2024, would further unleash American innovation: roughly 8,500 new companies in the first year of implementation, an additional 3,000 to 5,000 patents in the first year rising to 17,000 to 29,000 per year, and annual patent growth of 11% to 19% over the next decade.
Capital is also an important driving force for the rise of OpenAI.
Silicon Valley accounts for more than 30% of U.S. venture capital, and institutions such as Sequoia Capital and Kleiner Perkins have built a complete financing chain from seed rounds to IPO. This capital-intensive model has produced a dual effect.
First, capital is the engine of innovation. Angel investors provide not just funds but the integration of industry resources. When Uber was founded, it had only US$200,000 in seed money from its two founders and just three registered cars. After a US$1.25 million angel round, it entered rapid-fire fundraising, reaching a US$40 billion valuation by 2015.
Venture capital’s long-term focus on the technology industry has also driven its upgrading. Sequoia Capital invested in Apple in 1978 and Oracle in 1984, establishing its influence in semiconductors and computing; from 2020, it began placing deep bets on artificial intelligence, participating in cutting-edge projects such as OpenAI. The tens of billions of dollars that international capital (such as Microsoft) has poured into AI shortened the commercialization cycle of generative AI from years to months.
Capital also gives innovative companies greater tolerance for failure. For accelerators, the speed of screening out failed projects matters as much as picking winners. Startup analysis statistics put the global failure rate of startups at 90%, versus 83% in Silicon Valley. Succeeding as a startup is hard, but within the venture capital system, failed experience is quickly recycled as nutrients for new projects.
Photo source: startuptalky.com
However, capital has also reshaped these companies’ development paths to some extent.
Leading AI projects have commanded valuations above US$1 billion before releasing any product, which in turn makes it exponentially harder for small and medium-sized innovation teams to obtain resources. This structural imbalance is even starker geographically. Research from data firm Dealroom shows that the venture capital the U.S. Bay Area receives in a single quarter (US$24.7 billion) equals that of the world’s second- through fifth-largest venture capital hubs (London, Beijing, Bangalore, Berlin) combined. Meanwhile, although emerging markets such as India posted 133% financing growth, 97% of the funds flowed to unicorns valued above US$1 billion.
In addition, capital exhibits strong path dependence, preferring areas with quantifiable returns, which leaves many emerging basic science innovations without strong financial support. In quantum computing, for example, Guo Guoping, founder of the Chinese quantum computing startup Origin Quantum, sold his house to fund the company in its cash-strapped early days. When Guo first sought financing in 2015, data released by China’s Ministry of Science and Technology showed that the country’s total investment in scientific research was less than 2.2% of GDP, of which basic research accounted for only 4.7% of R&D spending.
Beyond this lack of support, big capital also dangles money to lock up top talent, effectively fixing CTO salaries at startups at seven figures (US dollars at American companies, RMB at Chinese ones), creating a cycle in which the giants monopolize talent and capital chases the giants.
Mira Murati’s and Ilya Sutskever’s companies both raised billions of dollars on little more than a concept. That reflects a trust premium on the technical capabilities of OpenAI’s top teams, but the trust carries two risks: whether AI technology can sustain exponential growth over the long run, and whether vertical-scenario data can form monopoly barriers. If these risks meet real-world setbacks (such as slowing breakthroughs in multimodal models or soaring costs of industry data acquisition), overheated capital could trigger an industry reshuffle.