Our work
Advising Policymakers
We’re working with Congress and federal agencies to help them understand advanced AI development and effectively prepare for it. We create resources, host events, and connect policymakers with the stakeholders they need to hear from.
Developing solutions
We don't just talk about risks; we develop and advocate for solutions. We share policy proposals, draft model legislation, and give feedback on others' policies. This work is collaborative and iterative: we draw on ideas from our network of leading researchers and practitioners to make recommendations that are both robust and practical.
Report on AI's Workforce Impacts
Our research on AI's current and future effects on the labor market.
CAIP Statement on the Release of the Future of AI Innovation Act
CAIP welcomes the release of the bipartisan Future of AI Innovation Act
Public Support for AI Regulation
A majority of the American public supports government regulation of AI
Our priorities
Our policy mission is simple: require safe AI.
To ensure powerful AI is safe, we need effective governance. That’s why our policy recommendations focus on ensuring the government has enough:
- Visibility and expertise to understand AI development
- Adeptness and authority to respond to rapidly evolving risks
- Infrastructure to support developers in innovating safely
Our recommendations center on three priorities:
- Build government capacity
- Safeguard development
- Mitigate extreme risk
As AI grows more capable, so do its risks. We must prepare governance now to keep pace.
Frequently asked questions
With AI advancing rapidly, we urgently need to build the government's capacity to quickly identify and respond to AI's national security risks.
We’re a small, DC-based team of former AI researchers and policy professionals. We work with a wide network of experts from industry, academia, think tanks, government, and nonprofits. You can read more about us here.
We launched CAIP to address the urgent need for effective AI governance. Our founder Thomas previously researched technical methods to make advanced AI safe. At the time, very few researchers were sharing these safety insights with policymakers. CAIP is now helping close that gap.
We’re supported by donors who share our mission to improve AI governance. We are completely independent and nonpartisan, and we do not accept industry funding.
We’re always looking for people to join our team and support our work. We also regularly engage with stakeholders across AI and policy, and we would love to hear from you if you have questions, feedback, or ideas for collaborations.