The Center for AI Risk Management & Alignment (CARMA) is a research and policy think tank dedicated to more safely managing the progression and effects of rapid advances in artificial intelligence. Through rigorous analysis and strategic intervention, we work to help ensure that transformative AI technologies remain controllable, aligned with human values, trustworthy, and beneficial to society. CARMA brings together experts in artificial intelligence and the broader computer sciences, policy, infrastructure resilience, complex systems, mechanism design, and international technology governance to address both acute and systemic risks from increasingly powerful AI systems.
We integrate risk management, policy research, and technical safety into a unified approach to addressing AI challenges. Our work takes a systems-based view spanning governance frameworks, risk modeling, public safety preparedness plans, cooperation mechanisms, and technical safety research. By combining expertise across disciplines, we can identify systemic vulnerabilities and develop analyses, management frameworks, and solutions that help bridge the gap between safety assurance and practical governance measures. We focus on identifying, analyzing, preventing, and mitigating the most consequential risks from increasingly powerful AI systems. Because no single solution will suffice, we pursue multiple lines of defense, prevention, and mitigation.
Copyright © 2025 Center for AI Risk Management & Alignment - All Rights Reserved. The Center for AI Risk Management & Alignment is a project of Social & Environmental Entrepreneurs, Inc., a 501(c)(3) nonprofit public charity.