How Dario Amodei Led Anthropic's Race To the Top
A deep dive into the CEO and Co-Founder of Anthropic and how he became one of the world's most influential AI leaders.
Hey, Justin here, and welcome to Just Go Grind, a newsletter sharing the lessons, tactics, and stories of world-class founders! Today’s deep dive is available for premium subscribers.

The Just Go Grind Podcast is finally back and, based on the last poll I did, I know some of you have already listened to the latest episode! Thank you!
For those of you who haven’t yet, what are you waiting for? 😜
The show features interviews with incredible founders and investors, is available on all podcasting platforms and YouTube, and new episodes will be released weekly.
Watch and listen to the Just Go Grind Podcast on your favorite platform.

I’m editing today’s deep dive after a 17-mile run on a beautiful sunny Saturday in Los Angeles.
I moved to LA in 2018 for business school at USC and it’s still one of the best decisions I’ve ever made.
One of the other decisions I’m happy to have made?
Hiring a full-time writer, Erika, to help me with Just Go Grind.
She’s been great so far and I found her through Athyna, which made the hiring process incredibly easy.
Today, we have the first deep dive written by Erika!
Let’s dive in.


Dario Amodei has always been certain about a few things. First, that he was passionate about math. Second, that he wanted, in some way, to make the world better.
Together, those two things defined the journey he would later embark on. Not just any journey, but one that led to a $60 billion company to his name.
That’s what happens when you have years of background in the AI field and a strong vision of how its future should look.
That’s the third thing he was certain of: that safety and reliability in AI mattered, and that they shouldn’t be taken for granted.
And so he took it upon himself to build the artificial intelligence empire that is now Anthropic, dedicated fully to safety and research.
Dario shook up the world of AI with his new approach. He became a leader in the field, going from employee at OpenAI to one of its biggest competitors. And all in the span of a decade.
Here’s how he did it.

KEY TAKEAWAYS FROM DARIO AMODEI
Don’t have time to read the whole piece?
Here’s what you need to know about Dario Amodei, the co-founder of Anthropic, tying his lessons to ones we’ve learned from other founders we’ve covered:
Dario learned that when you’re passionate about something and have a clear vision for how to do it, you stop wasting time trying to convince everyone else it’s the right way. If you just go and put it into action, the results will speak for themselves soon enough.
For AI to actually build a more productive future, safety research can’t be an afterthought; it has to be a priority. Otherwise, the risks won’t just hold back growth, they could pose a real threat to humanity.
Competition won’t do anything for the evolution of AI unless it’s approached responsibly. That’s why Anthropic’s ‘race to the top’ mentality aims to lay the foundation for a more ethically responsible field.
Dario embraces the desire to do things differently; he has felt it himself, after all. But with Anthropic, it was more a need than a desire: to bring a different outlook to the AI world and stop repeating the same approach.

San Francisco born and bred, Dario Amodei recalls growing up obsessed with math and ‘its sense of objectivity’ compared to ever-changing opinions.
He and his younger sister Daniela were already dreaming big as children, always wanting ‘to save the world together’.
We’ve always kind of had this sort of uniting top level goal of wanting to, you know, work on something that matters, something that’s important and meaningful.
This was no coincidence: they grew up in a family where social responsibility really mattered.
[My parents] really thought a lot about how to make things better. How do people who have been born in a fortunate position reflect their responsibilities and, you know, deliver their responsibilities to those who are less fortunate.
While Dario and Daniela were still students, that translated into small donations to global health organizations. Flash forward to 2021, and it translated into the two of them co-founding an AI company dedicated to safety and research.
But let’s not get ahead of ourselves.
Dario went to Stanford and earned his bachelor’s degree in physics. It’s safe to say his interest in math was not going anywhere.
During this time he got into the work of Ray Kurzweil and was inspired to dive deeper into the world of AI. So much so that he decided to switch his PhD from theoretical physics to biophysics and computational neuroscience at Princeton, which he considered the next best thing.
It didn't feel like AI was working yet. And so I wanted to study the closest thing to that that there was which was, you know, our brains. It’s a natural intelligence, so therefore the closest thing to an artificial intelligence that exists.
Later, as a postdoctoral scholar at the Stanford University School of Medicine, he got to see the work coming out of AI leader Andrew Ng’s machine learning group.
That was the deciding factor. By 2014 he was determined to join this evolving field, one way or another.
My reaction at the time was, ‘Oh my God, I’m so late to this area. The revolution has already happened.’ […] I was just like, ‘This tiny community of 50 people, they’re the giants of this field. It’s too late to get in. If I rush in maybe I can get some of the scraps.’ That was my mentality when I kind of entered the field.

FIRST DIVE INTO THE AI WORLD
He started working at Baidu in late 2014 alongside Andrew Ng, gaining a lot of experience with speech recognition systems.
His early insight was that models always performed better the larger you made them, the longer you trained them, and the more data you gave them. This was his first brush with the scaling hypothesis.
I think somewhere between 2014 and 2017 was when it really clicked for me, when I really got conviction that ‘hey, we’re gonna be able to do these incredibly wide cognitive tasks if we just scale up the models’.
After a year at Baidu, he spent another at Google Brain as a Senior Research Scientist. This is when Dario became interested in the safety and reliability of AI systems, an issue that forever changed his view of the field.

JOINING OPENAI
It’s impossible to comprehend Dario’s present success without acknowledging his days at OpenAI.
He was considered for the company well before it even existed, but he decided to join only after a few other ‘smart people’ did too.
Five years later, he had climbed all the way up to Vice President of Research.
He led the development of large language models such as GPT-2 and GPT-3, and also set the overall direction of the company toward long-term safety research, focused on powerful and more interpretable AI systems.
He learned some valuable lessons during that time that he would later carry with him to Anthropic.
One of the big themes of those 5 years was this idea of scaling. That you can put more data, more compute into the AI models and they just get better and better. I think that, you know, that thesis was really central.
And the second thesis that was central is: you don’t get everything that way. You can scale the models up but there are questions that are unanswered. It’s ultimately sort of the fact-value distinction.
You scale the model up, it learns more and more about the world, but you’re not telling it how to act, how to behave, what goals to pursue. And so that dangling thread, that free variable was the second thing.
And so those were really kind of the two lessons that I learned, and of course those ended up being the two things that Anthropic was really about.
There’s a general sense of fascination around the way Dario left OpenAI, only to later become one of its biggest competitors. But the truth is much simpler than whatever drama one might imagine.
All in all, he simply had different ideas about how to do things the right way.
Civilization is going down this path to very powerful AI. What’s the way to do it that’s cautious, straightforward, honest, that builds trust in the organizations and individuals?
How do we get from here to there? And how do we have a real vision for how to get it right? How can safety not just be something we say because it helps with recruiting?
I think at the end of the day, if you have a vision for that, forget about anyone else’s vision, I don’t wanna talk about anyone else’s vision. If you have a vision for how to do it, you should go off and you should do that vision. It is incredibly unproductive to try and argue with someone else’s vision.
The more Dario got into AI safety research, the less he could stand it not being treated as a priority.
So he took it upon himself to build a company that would work toward safer systems while generating reliable research on the risks and opportunities of AI.
I wanted to invent and discover things in some kind of beneficial way. That was how I came to it, and that led to working on AI, and AI required a lot of engineering, and eventually AI required a lot of capital.
But what I found was that if you don't do this in a way where you’re setting the environment, where you set up the company, then a lot of it gets done, a lot of it repeats the same mistakes that I found so alienating about the tech community. It's the same people. It's the same attitude. It’s the same pattern matching. And so at some point it just seemed inevitable that we do it a different way.

