Our impact

Although we're just getting started, our members are gaining access to decision-makers everywhere from the halls of government to our local communities. The results are promising and keep us going.

We work across four areas to combat existential risk from AI. Each area has its own theory of change, but they reinforce each other.

Direct Institutional Plan (DIP)

Politicians, journalists, and civil servants shape policy. By engaging directly with these institutional actors, we inform the people who make decisions about AI.

What we've done

  • Hundreds of lawmakers contacted across multiple countries

  • Dozens of in-person meetings with politicians

  • 10+ community presentations and events

What we're building

  • Structured DIP training programme for new advocates

  • Resources to help citizens engage effectively with their representatives

Movement Building

Lasting change requires organised communities. We establish and support local groups so that advocacy can happen everywhere.

What we've built

  • PauseAI Germany established as a legal entity

  • Torchbearer Community Wisconsin established as a local group

What we're building

  • Organisation-level MicroCommit features to support coordination with other organisations

  • Building a network in Massachusetts that includes seven lawmaker offices

Infrastructure and Technology

We build tools that make it easier for people to take action on AI safety, reducing friction and enabling coordination at scale.

What we've built

  • MicroCommit.io – 1,500+ users, 8,500 requests reviewed, and 140,000 messages sent to lawmakers in partnership with ControlAI

What we're building

  • AI safety voter guides – helping voters see where candidates stand on AI regulation and ask them to take a position

  • Tools for tracking and coordinating outreach by citizens to their representatives

Public Communication

Public opinion shapes political will. We communicate AI existential risk to broader audiences, explaining the steps necessary for a better future.

What we're doing

  • Creating educational video content on AI X-risk and explainers on regulatory solutions

  • Publishing written material on existential risk from AI and our humanist approach to making the future go well