AI Safety and Governance Fund

Our Mission

As a nonpartisan 501(c)(4) nonprofit organization, we're dedicated to ensuring that AI and other technologies benefit humanity and stimulate the economy, and that they are developed safely, securely, and in alignment with human values.

Many of the people building AI systems warn that they could cause literal human extinction. This isn't science fiction; it's what the experts themselves are telling us.

Educational Resources

“The Problem” from MIRI

A detailed introduction to the technical and governance problems that need to be solved to develop smarter-than-human general AI systems safely.

Read now

A report commissioned by the US State Department

The report states that the most advanced AI systems could, in the worst case, “pose an extinction-level threat to the human species”.

Learn More

Why care? Ask our tool

Have questions or counterarguments about the risk that a future AI system might cause human extinction?

Chat with our tool

AI 2027

A forecasting scenario of how a race to superintelligence could unfold, informed by trend extrapolations, wargames, expert feedback, experience at OpenAI, and previous forecasting successes.

Take a Look

Get Involved

Your voice is crucial in shaping the future. Here's how you can make a difference:

Take Action

Donate