AI Safety and Governance Fund

Our Mission

As a nonpartisan 501(c)(4) organization, we're dedicated to:

Demystifying AI

Our research makes complex AI concepts accessible to the public and policymakers. We explain how AI systems are built and how they work, and we help scientists communicate their research and the state of the field. We also explain the safety and security of current AI systems and the risks posed by future smarter-than-human general AI systems.

Advocating for responsible AI development

We believe AI has enormous potential to benefit the public, and that responsible development of AI systems that don't pose catastrophic risks should be incentivized. We promote policies that harness AI's potential while safeguarding the public from the catastrophic risks posed by future general AI systems.

Maintaining U.S. leadership in AI development

We propose strategies for maintaining U.S. leadership in AI development. The U.S. holds a unique position and can leverage it to ensure that frontier AI systems around the world remain aligned with human values.

Educational Resources

We recommend a range of materials to keep you informed and engaged:

“The Problem” from MIRI

A detailed introduction to the technical and governance problems that need to be solved to develop smarter-than-human general AI systems safely.

Read Now

A report commissioned by the U.S. State Department

The report states that the most advanced AI systems could, in the worst case, “pose an extinction-level threat to the human species.”

Learn More

Introduction to AI Safety, Ethics, and Society

Read this freely available book by Dan Hendrycks, the executive director of the Center for AI Safety and xAI's AI safety advisor.

Read the Book

AI 2027

A forecasting scenario of how a race to superintelligence could unfold, informed by trend extrapolations, wargames, expert feedback, its authors' experience at OpenAI, and previous forecasting successes.

Take a Look

Get Involved

Your voice is crucial in shaping the future. Here's how you can make a difference:

Take Action Now