How to think about careers: building aptitudes
(Doing Good Better) Better
Remixed from this excellent talk by Emma Abele from the Global Challenges Project. Views presented in this article are my own.
In this post, I present one approach to thinking about careers that I have found valuable. A lot of the examples refer to Effective Altruism, but are more generally applicable if you want to figure out what to do with your career.
The default approach to EA career planning
Most EAs are not approaching career planning in the right way.
Let's say you're an undergrad at Waterloo studying chemical engineering and specializing in thermodynamics. At the same time, you are really passionate about ending factory farming.
You might approach career planning as an exercise with LEGOs, and try to fit these two things together. "How could I use thermodynamics to end factory farming?"
This is the default approach to EA career planning: you look at all the careers that are somehow classified as "EA careers", which is to say everything mentioned by 80,000 Hours.
And only then do you think about where you fit in. What are your interests, and how do they relate to these things? How does your degree relate? You take the intersection of all those things, and that's the career for you.
This is a bad approach to career planning. Specifically, there are three main problems with this approach.
Problem #1: It leads to thinking that so long as a career "counts", it's good
Symptom: You think about whether your career sounds like an EA career, as opposed to thinking about whether you're actually trying to improve the world as much as you can.
When another EA asks you about your career, you want to say something that clearly sounds like an EA career.
You want to say that you work on “AI safety” or drop some big name EA org that you work at. But what really matters is not whether what you do can be called “global priorities research”, what matters is that you're actually trying to do the most good.
Kelsey Piper is a journalist at Vox who writes EA-related articles in the Future Perfect section.
Kelsey is someone who managed to avoid this trap. Even though journalism was not thought of as a mainstream EA career, she saw that it was a very high-impact thing for her to do, so she went for it rather than chasing a widely recognized EA career.
And she's had massive counterfactual impact because of this.
Problem #2: Overemphasizes your current interests, skills and knowledge
Your interests can change a lot more than you might think.
This means that your current interests are much less important than you think they are. If you aim for what you believe is most important and really lean into it, you genuinely can become interested in it.
So whatever you're interested in right now shouldn't play as major a role in your career decisions as it does under the default approach to EA career planning.
Don’t say:
❌ “What EA stuff are you interested in?”
Say:
✅ “What do you think is most important to change about the world?”
As well as your interests, the default approach also overemphasizes your current skills and knowledge. For example, you may think you can't contribute to AI safety because you didn't study computer science in undergrad.
Fun fact: Anthropic is one of the top companies working on AI safety today. Both the president of Anthropic, Daniela Amodei, and one of the co-founders, Jack Clark, studied English literature in undergrad.
Here's another example: Jason Matheny.
Jason studied art history at the University of Chicago, but he didn't look for a job relevant to art history when he graduated.
Instead, he thought that working on HIV prevention in India was important. So he did that and got a PhD on the economics of pharmaceutical development, but then … he didn't stick to global health either.
He founded New Harvest, the first nonprofit dedicated to cultivated meat research. He was one of the first to popularize cultivated meat. I’m sure you can guess the pattern by now — he decided not to stick to that either.
Jason went into x-risk research, first doing biosecurity work at the Center for Health Security at Johns Hopkins University, and then was the director of research at the Future of Humanity Institute at Oxford University.
And then he shifted again, to climbing the ranks of the US government to try to reduce x-risks. To that end, he went to IARPA and after six years became its director, and then he was the founding director of CSET, the Center for Security and Emerging Technology at Georgetown University.
And now he's in the White House in various important roles, including being the Deputy Director for National Security in the Office of Science and Technology Policy.
When you think about your career, think like Jason.
You don't need to be held back by your background.
If you think a problem is important and working on it would have a really high impact, you should go and try to work on it, whether or not your background is in that area.
Problem #3: What we most need is people who can figure out what to do
Today, EA is more talent constrained than it is funding constrained. It needs people who can figure out what to do.
What kind of person is that?
People with an entrepreneurial mindset: people who can grapple with an ill-defined problem, understand what's going on, and actually contribute without a lot of guidance.
How does this relate to the default 80,000 Hours career planning approach?
The default approach tells you what to do instead of getting you to start figuring out what to do by yourself.
And yet, figuring stuff out for themselves is what we most need young people to practice getting good at.
Imagine two people who both check out the 80k website and think that AI safety research seems like a very important thing to do.
Bob ends their investigation there, starts taking computer science classes, and applies to jobs on the 80k job board.
Alice spends hundreds of hours digging into AI safety stuff, trying to understand what's going on and trying to understand 80k’s reasoning process.
Alice is much more likely to help prevent AI risk because they're getting into the headspace and the practice of figuring things out for themselves: "What do I actually think we should do about AI risk?"
These are the three main problems with the default approach to EA career planning.
Solution: Building aptitudes and understanding
What is an approach that avoids the problems outlined above?
We have two missions:
1. building our aptitudes to get really good at something useful
2. building our understanding of how to improve the world
If you keep pursuing both of these missions, they'll come together and allow you to continually find ways to improve the world.
How do you actually do this?
Step 1: Check out this general list of aptitudes by Holden Karnofsky.
Step 2: Try to get very good at one of the "aptitudes".
One counter-intuitive result of this approach is that you want to choose internships and side projects more based on whether they help you build the desired aptitude than whether they're high impact.
This is somewhere a lot of students in EA are making a mistake.
If you want to get better at the "organization building, running, and boosting" aptitude, look for an organization that is growing quickly, has good organizational capacity, and has people you'll learn a lot from.
That's much more important than whether they happen to be working on something high-impact.
How do you get funding to do all this stuff?
Open Philanthropy provides early-career funding, e.g. if you want to do another degree, self-study, or attend a bootcamp that you think will put you in a better position to improve the long-term future (this encompasses AI safety, climate change, nuclear risks, etc.). The Long-Term Future Fund is another great option for funding.
Unlike the default career planning approach, this isn't something that you just do once, and then you have a clear path to follow.
You keep doing it and have it build up with time.
If you're going to do one thing after you finish reading this blog post, figure out how you're going to set aside time for this.
It could be every Sunday, or every day for an hour, or intensely for a couple of weeks over the summer. Whatever works best for you.
Whatever it is, I highly recommend you decide right now.
P.S. This is one approach to career planning. But surprise: there are other approaches! Stay tuned for part II. And let me know what you thought in the comments!