AI might literally kill everyone if anyone's allowed to make it superhumanly smart before it's known how to do that safely. To prevent extinction, we're building institutional support and public awareness for AI safety. It's working; we should do more of it.
We are convinced this is the most effective way to spend money right now.
01 — Strategy
Survival requires institutional buy-in
Explaining the problem to decision-makers creates allies who can advocate for treaties and legislation. Explaining it to the general public makes the threat salient and raises the political cost of interference by irresponsible actors.
We test ways to explain this problem effectively (targeted advertising, automated persuasion tools, idea-diffusion modeling), then scale what works. We also provide strategic and communications support to allied organizations like CAIS and MIRI.
02 — Evidence
Our approach is working
Automated x-risk-pilling
Our chatbot makes valid and rigorous arguments about why users should care about the threat that AI might literally kill everyone. People are convinced.
On average, users moved nearly halfway toward "Completely changed my mind"
Scale: 0 = "Not at all" → 10 = "Completely changed my mind"
Median: 5 | Average: 4.46 | Responses: 24
Responses to "How helpful was this?", asked after a conversational turn. One respondent who rated 0 added that they had already been convinced and just wanted to learn more.
Cost per click: as low as $0.10 (always under $0.50)
These numbers are extraordinary. Judging by comments, we're convincing people.
03 — Scaling
What we'll do with funding
Currently, no one receives a salary, even those working full-time. All funding goes to communications: ads, LLM inference, and copies of "If Anyone Builds It" for influential people. The data shows our approach works: we are already convincing people. Scaling what we have is a good idea, and we're excited to make it even more efficient.
With additional funding, we'll:
Spend substantially more on ads, iterating precisely across many narrow audiences
Have many more people interact with our x-risk-pilling tool
Pay salaries and hire talented people, including professional designers and communicators, get them up to speed on AI existential risk, and create even better content
Conduct many larger-scale experiments
Scale up campaigns that efficiently persuade people on the threat that AI might literally kill everyone, including in targeted ways (e.g., in districts where raising the problem's salience matters most strategically)
Provide more extensive support to allied organizations
04 — Donate
Support this work
We operate two nonprofit entities, both tax-exempt: