AI will be incredibly transformative, and we’re collectively unprepared for many of its worst risks.
To help solve this problem, CAIP drafted model legislation, advocated for bipartisan solutions, hosted events to foster discussion and information sharing, gave feedback on others’ policies, endorsed bills that would help protect against AI risk, and connected policymakers with leading experts in AI.
Unfortunately, CAIP has run out of funding and has ceased most of its active operations. The two exceptions are:
- Our grassroots [Policy Advocacy Network](link to Policy Advocacy Network page), which continues to train and support young AI safety leaders from around the country. To connect with the Policy Advocacy Network, please email ivan@aipolicy.us.
- Our legislative review service. If you would like confidential expert feedback on pending or draft legislation, please contact CAIP’s Executive Director at jason@aipolicy.us. We remain in contact with volunteer experts on the technical, legal, and policy details of AI safety, and we would be happy to share free advice on draft legislation and bill text.
If you would like to help revive CAIP, please contact our Executive Director at jason@aipolicy.us to discuss a donation. CAIP retains its status as a 501(c)(4) organization, and many of our key team members would be delighted to return if new funding becomes available.
In the meantime, this website preserves CAIP’s most important policy ideas, research papers, and press coverage. We also have recordings of our podcast episodes and panel briefings, and an archive of our company blog.

Whistleblower Protections for AI Employees
Whistleblower protections are a powerful tool for minimizing the risk of public harm from AI. Our latest research shows how these protections can be designed to avoid pitfalls such as the violation of trade secrets.

AI Agents: Governing Autonomy in the Digital Age
A report on policies to address the emerging risks of increasingly autonomous AI agents.

Building Resilience to AI's Disruptions to Emergency Response
An emergency response system overwhelmed with AI-generated incidents is a crisis in the making.
CAIP priorities
Our policy mission is simple: require safe AI.
To ensure powerful AI is safe, we need effective governance. That’s why our policy recommendations focus on ensuring the government has enough:
- Visibility and expertise to understand AI development
- Adeptness and authority to respond to rapidly evolving risks
- Infrastructure to support developers in innovating safely
Our Priorities
This work is collaborative and iterative. We take in ideas and feedback from our network of leading researchers and practitioners to make our recommendations both robust and practical.
- Build government capacity
- Safeguard development
- Mitigate extreme risk
As AI grows more capable, so do its risks. We must prepare governance now to keep pace.
Frequently asked questions
Why do we need AI policy now?
With AI advancing rapidly, we urgently need to build the government’s capacity to identify and respond to AI’s national security risks.
Who is CAIP?
We’re a small, DC-based team of former AI researchers and policy professionals. We work with a wide network of experts from industry, academia, think tanks, government, and nonprofits. You can read more about us here.
Why was CAIP founded?
We launched CAIP to address the urgent need for effective AI governance. In 2023, hundreds of AI experts warned that AI could cause catastrophic harm in the near future. At the time, very few researchers were sharing these safety challenges with policymakers or identifying concrete legislative solutions. CAIP worked to close that gap.
Who funds CAIP?
CAIP is grateful for the generous support of our donors. We’re supported primarily by mid-level and major individual donors who share our mission to improve AI governance. Several of these donors built their wealth during the dot-com boom or by working at hedge funds. To protect their privacy, we do not publish their names. We also received seed funding through an organization sponsored by Jaan Tallinn, a founding engineer of Skype. To maintain our independence, we do not accept funding from companies that design or build AI software or hardware. We are nonpartisan and focused squarely on the public interest.
How can I get involved?
We’re always looking for people to join our team and support our work. We also regularly engage with stakeholders across AI and policy, and we would love to hear from you if you have questions, feedback, or ideas for collaboration.