
OpenAI’s renegade siblings join forces to build a US$61.5 billion unicorn

By | Alphabet List

The Amodei siblings, who carry the banner of OpenAI rebels, have landed yet another round of financing for Anthropic.

On March 4, Beijing time, Anthropic announced that it had closed a US$3.5 billion Series E round at a post-money valuation of US$61.5 billion, roughly three times its valuation a year earlier. Notably, Anthropic had originally planned to raise US$2 billion; the round ended up far exceeding that target.

By then, only four years had passed since the Amodei siblings struck out on their own.

In 2021, the siblings’ AI startup, Anthropic, was born. Imitating Altman’s move of binding OpenAI to Microsoft, Anthropic successively brought in Amazon and Google as its two major financial backers. Since then, Anthropic’s valuation has kept climbing.

According to CB Insights, the financing makes Anthropic the fifth most valuable startup in the United States, after SpaceX, OpenAI, Stripe, and Databricks, and the seventh most valuable in the world, after SpaceX, ByteDance, OpenAI, Stripe, Shein, and Databricks.

The two siblings, both latecomers to artificial intelligence, have ridden Anthropic’s rise to become ever more prominent stars of the large-model field.

01

Branded OpenAI rebels from birth, Anthropic started from a high point.

It was the end of 2020, two years before ChatGPT launched and further still from the wave of executive departures at OpenAI, when the Amodei siblings decided to make a fresh start.

At the time, Dario Amodei was OpenAI’s vice president of research, and his sister Daniela Amodei was its vice president of safety and policy.

It is rare for siblings to serve as executives at the same technology company at the same time.

The Amodei siblings come from an Italian-American family with a strong academic atmosphere and grew up in San Francisco. It is fair to say their lives started from a high baseline.

Dario majored in physics as an undergraduate. As graduation neared, he grew interested in artificial intelligence and turned to neuroscience for his graduate studies. He holds a bachelor’s degree in physics from Stanford University and a doctorate in physics from Princeton University, and he served as a postdoctoral scholar at Stanford Medical School.

Starting at the end of 2014, Dario changed employers three times in two years. He began his industry career at Baidu’s Silicon Valley AI Lab, working under Andrew Ng, then moved to Google. In 2016, he joined the newly founded OpenAI.

His sister Daniela studied English literature at the University of California, Santa Cruz. After graduation her career began in global health and politics, with little connection to artificial intelligence. Only after she joined the payments company Stripe did she begin to come into contact with the field.

Later, the siblings folded this interest in artificial intelligence into a more ambitious narrative: both felt they bore an enormous responsibility to make the world a better place.

In 2018, two years after Dario joined OpenAI, Daniela followed, and the siblings were reunited. Dario went on to contribute to GPT-2 and GPT-3 and eventually became vice president of research, while Daniela first managed the natural language generation team, then served as vice president of people, and eventually became vice president of safety and policy.

Meanwhile, differences were brewing within OpenAI.

In the very year the siblings reunited at work, OpenAI’s leadership, represented by Altman, realized that its computing power could not keep up and began to consider a transformation. Musk, one of the founders, eventually quit for a variety of reasons. OpenAI then set out on the road to commercialization, establishing the for-profit entity OpenAI LP, and in 2019 it signed a US$1 billion agreement with Microsoft. It was around this time that the Amodei siblings felt they could no longer stay.

Dario later recalled that while at OpenAI he discovered scaling laws: higher performance could be achieved simply by training AI with more data and computing power, rather than by relying on algorithmic breakthroughs. This frightened him: “It’s made of simple components. Anyone with enough money can build it, and we are developing a powerful and potentially dangerous technology.”
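For reference, the relationship Dario is describing was formalized in “Scaling Laws for Neural Language Models” (Kaplan et al., 2020), a paper co-authored by Dario Amodei and by Jared Kaplan, who later became Anthropic’s chief scientist. Its headline result is a simple power law; the fitted exponents below are approximate values from that paper:

L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad \alpha_N \approx 0.076

L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad \alpha_D \approx 0.095

Here L is the model’s loss (lower is better), N the number of parameters, and D the number of training tokens, with N_c and D_c fitted constants. The unsettling implication is exactly what Dario states: loss falls predictably as N and D grow, with no algorithmic breakthrough required.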

At the end of that year, the Amodei siblings left OpenAI together, taking seven senior OpenAI employees with them in one stroke. In 2021, Anthropic was established.

When asked why he left, Dario has always been diplomatic in front of the media: “We (the co-founders) were on the same page and trusted each other. We did this for the right reasons.” But when asked whether that implied he did not trust others at OpenAI, he declined to answer.

02

Actions are the best proof of whether one is truly an OpenAI rebel.

In terms of corporate structure, the Amodei siblings chose to establish Anthropic as a public benefit corporation (PBC), a form that can pursue social responsibility and profit at the same time. OpenAI, now seeking to restructure, also wants to become a PBC.

From the beginning, safety has been Anthropic’s signature keyword; in its early days the company even billed itself as an AI safety lab. In 2021, the notion that artificial intelligence could be dangerous was far from mainstream.

At first, Anthropic considered using other companies’ AI models for its safety research, but it quickly abandoned the idea and decided to build its own. The model was named Claude, with three core goals: helpful, harmless, and honest.

They even introduced the concept of “Constitutional AI”: give the model a written list of principles (a “constitution”) and require it to adhere to them as closely as possible. The sources of these principles include the United Nations Universal Declaration of Human Rights, Apple’s terms of service, and so on. A second model is then used to assess whether the first model’s behavior is “constitutional” and to correct it where necessary.

To put it bluntly, it lets the artificial intelligence supervise itself.
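Mechanically, the critique-and-revision loop described above can be sketched in a few lines of Python. Everything here is an illustrative assumption: the generate() placeholder, the prompt wording, and the two sample principles are stand-ins, not Anthropic’s actual constitution or code.

# Toy sketch of a Constitutional-AI-style critique-and-revision loop.
# generate() is a placeholder for any text-generation model call;
# the principles and prompts below are invented for illustration.

CONSTITUTION = [
    "Choose the response that best supports freedom, equality, and dignity.",
    "Choose the response least likely to be harmful or offensive.",
]

def generate(prompt: str) -> str:
    """Placeholder: wire this to a real model API or local inference."""
    raise NotImplementedError

def constitutional_revision(user_request: str) -> str:
    response = generate(user_request)  # first draft
    for principle in CONSTITUTION:
        # A second pass plays the 'judge': critique the draft against one
        # principle, then rewrite the draft to satisfy that principle.
        critique = generate(
            f"Request: {user_request}\nResponse: {response}\n"
            f"Critique this response against the principle: {principle}"
        )
        response = generate(
            f"Request: {user_request}\nResponse: {response}\n"
            f"Critique: {critique}\nRewrite the response to satisfy: {principle}"
        )
    return response

In Anthropic’s published description, a preference model trained on such AI-generated comparisons then supervises the final model; this sketch compresses that into a single inline loop.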

In the summer of 2022, the Amodei siblings faced their first difficult moment.

Anthropic had finished training Claude, and the whole team was amazed by its performance. Releasing Claude would surely bring more fame and fortune to a company barely a year old; yet they worried about the safety risks of releasing so powerful a model directly.

In the end, the Amodei siblings decided to keep Claude in internal safety testing. The pressure of that choice lasted for several months.

Meanwhile, OpenAI released ChatGPT at the end of that November. Like a boulder dropped into water, it sent out shockwaves. Judging from insider accounts in later media reports, OpenAI launched ChatGPT under considerable uncertainty of its own.

In other words, Altman and Amodei faced the same problem but made completely different choices.

At least judging from their public statements, the Amodei siblings never said they regretted it. Dario told Time magazine that he wanted to avoid triggering a race to build ever-bigger and perhaps more dangerous AI systems: “I think that was the right thing to do.”

He immediately added: “But it’s not entirely clear-cut.” The ambivalence is easy to understand: the race was going to start anyway, and Amodei’s decision may have cost Anthropic billions of dollars.

In any case, the decision earned Anthropic praise for a long time, especially once ChatGPT raised safety concerns among regulators and the public. Testifying before the U.S. Senate in 2023, Dario said he believed AI systems powerful enough to cause massive damage and shift the global balance of power could emerge as early as 2025.

Plenty of people were saying such things at the time, but the Amodei siblings, who had always played the safety card and were regarded as OpenAI rebels, naturally sounded more credible.

Even as Claude was finally released and iterated, Anthropic kept its safety-first character. New York Times reporter Kevin Roose, known for an article documenting his unsettling exchange with Microsoft’s new Bing chatbot (built on OpenAI’s technology), was invited by Anthropic to its headquarters.

It was the summer of 2023, a few weeks before Claude 2’s release, yet what Roose sensed was nervousness and anxiety rather than excitement and anticipation.

“What worries me most is whether the model will do terrible things we didn’t anticipate,” Dario said.

Nor was the sentiment confined to top management; engineers across the company seemed more interested in talking about safety risks than about technical capabilities. Over lunch, one employee told Roose that the probability of runaway AI destroying humanity in the next decade could be as high as 20%.

Roose described it as walking into an award-winning restaurant, ready to order, only to find the entire staff wanting to talk to you about food poisoning.

In chief scientist Jared Kaplan’s words, the pessimism was not manufactured; Anthropic simply attracted the kind of thoughtful people who take these long-term risks seriously.

The sentiment eventually seeped into the Claude product itself. The safety-oriented training made Claude an extremely cautious chatbot, mocked by critics as boring and preachy, the handiwork of AI doomsayers.

Yet Anthropic’s safety card and Claude’s caution have made the company increasingly attractive to enterprise customers and investors. The appeal to enterprises is easy to understand: B2B users differ from consumers in that what they want above all is safety and reliability.

After the original Claude’s release, Anthropic’s financing took off: first a US$450 million Series C that included Google, then Amazon stepped in, and the amounts kept climbing. By October 2023, Anthropic’s cumulative financing exceeded US$7.6 billion.

03

However, as Anthropic grew into one of the top players in the AI field, the OpenAI rebel began to attract the same skepticism as OpenAI itself.

Dario has had to explain, more and more often, how Anthropic balances safety against development.

The accepted founding story is that the Amodei siblings left to found Anthropic because of OpenAI’s commercial turn and their concern for AI safety. But as Anthropic, long the holder of the good-guy card, drew ever more investment and shipped one large model after another, a contradiction became impossible to ignore: Anthropic competes with OpenAI on model performance, financing scale, and profitability, while also claiming to put safety first.

Or, more bluntly: you are all AI unicorns and star startups alike, so why should we believe you? Just because you say you care more about the world and pay more attention to safety?

Last year, Anthropic released Claude 3.5 Sonnet, claiming it set new industry standards in reasoning, coding, and certain kinds of mathematics, beating OpenAI’s then-latest model, GPT-4o.

Around this time, Dario told Time magazine that there was a gap between the public’s perception of the company’s safety stance and its own thinking:

“I don’t think of us as an AI safety company so much as a company focused on the public interest. We are not a company that believes AI systems are bound to be dangerous; that is an empirical question.”

“I would rather live in an ideal world. Unfortunately, in the real world we live in, there are many economic pressures, not only competition between companies but also competition between countries. But if we can really prove the risks are real, and I hope we can, then maybe we can really get the world to stop and think.”

In other words, Dario still holds that powerful AI can pose enormous dangers, and that without building powerful AI you cannot discover what those dangers are, let alone stop them.

One message Dario keeps sending to the outside world is that Anthropic is not trying to trigger or join a race; rather, it wants to use its own safety-first standards and posture to force everyone to raise the bar on safety.

The Amodei siblings have always been believers in scaling laws. After all, Anthropic’s founding logic runs: scaling laws showed us that a better AI system can be built so long as enough resources are poured in, and the safety risks lurking behind that are too great, so we struck out on our own.

Under this narrative, they have kept a close eye on OpenAI and treated it as their biggest competitor.

But at the end of this January, the situation grew complicated after DeepSeek’s R1 was launched.

Almost immediately, Dario said DeepSeek’s performance was “basically the worst of all the models we had tested,” without indicating which DeepSeek model Anthropic had tested or offering further technical details about those tests.

Moreover, Dario personally published a broadside in support of stronger U.S. chip export controls, though some of the estimates it cited lacked factual basis.

Most pointedly, Dario suggested that DeepSeek take AI safety considerations seriously.

Had this been a few years earlier, around Anthropic’s founding, Dario’s safety card would have won more support. But as noted above, today’s Anthropic is itself caught in the balancing act between safety and development.

TechCrunch noted in its report on the matter that it remains to be seen whether such safety concerns will seriously slow DeepSeek’s rapid adoption. Meanwhile, AWS and Microsoft have publicly announced integrations of R1 into their cloud platforms; ironically, Amazon is Anthropic’s largest investor.

Then, at the end of February, Anthropic released its new-generation model, Claude 3.7 Sonnet. Notably, the outside world had long assumed that Anthropic’s next release would be Claude 4.

With this round, Anthropic has undoubtedly proved it is still attractive. A raise far above plan (from US$2 billion up to US$3.5 billion), a US$61.5 billion valuation, and a multiple of roughly 60 times revenue all signal that investors remain optimistic about Anthropic’s future.

Whether from leading players who believe in scaling laws or from upstarts who thrive on limited resources, new models will keep arriving one after another. In the month after DeepSeek upended the field, xAI, OpenAI, and Anthropic all released new-generation models.

In addition, OpenAI’s GPT-5 and DeepSeek’s R2 are likely to arrive soon.

How much longer the Amodei siblings’ safety card will keep working remains to be seen.

 
