Richard Mallah founded and leads CARMA, and operates across its portfolio of projects in risk assessment, policy strategy, and technical safety. Richard also serves part-time as the Principal AI Safety Strategist at the Future of Life Institute, which he joined in 2014, and where he does research, analysis, advocacy, strategy, and field building on the technical, strategic, and policy aspects of transformative AI safety.
He’s taken AGI seriously since 2010, and has maintained a sense of urgency about societal risks from AGI since 2012.
Over the past decade he has studied, collaborated on, and contributed to work on scalable safety, computational ethics, pathways to AGI, risk types and paths, theories of change, and governance recommendations.
Mr. Mallah has also co-led the Fairness, Auditing, Transparency, and Externalities of AI Center of Excellence at management consultancy Keystone Strategy, which provided perspective on AI auditing and multibillion-dollar litigations. Heading enterprise risk management systems at BlackRock, the world’s largest asset manager, during the 2008 financial crisis also lent him an appreciation for the interplay among systemic tail risk, technology, multiscale foresight, risk reduction, and catalysts for systemic improvement.
Mr. Mallah has been working in machine learning and AI in industry for over twenty years, spanning roles in AI algorithms research, research management, development management, systems architecture, product management, CTO, chief scientist, management consulting, and strategy. He holds a degree in Intelligent Systems from Columbia University.
Kyle A. Kilian heads the Societal Defense Team at CARMA, which includes Public Security Policy, Offense/Defense Dynamics, and crosscutting concerns. He is an accomplished leader in intelligence analysis, multidisciplinary research, and technology modernization in the national security enterprise. He was recently the Deputy Director of the Transformative Futures Institute, where he focused on applying strategic foresight to anticipate risks from emerging technologies. Kyle has served for over a decade in the Defense and Intelligence Community (IC) in strategic, tactical, and joint operational environments. His research interests lie at the intersection of artificial intelligence (AI), complex systems, network modeling, and international security, with expertise in exploratory futures modeling and activity-based intelligence (ABI). Kyle is a senior research fellow at the Center for the Future Mind, a Mentor at the Foresight Institute, and a 2022 fellow with the Global Catastrophic Risk Institute. Kyle holds graduate degrees in Data Science and Cyber Intelligence from the National Intelligence University and International Affairs from the American University's School of International Service.
Anna Katariina Wisakanto is a researcher and strategist at CARMA, where she leads the Comprehensive Risk Assessment project. Anna has been working in AI since 2018 across industry, academia, and business, and leverages a background in philosophy, engineering physics, and complex adaptive systems to analyze and address the risks posed by advanced AI systems and their capabilities.
She contributes to the field of AI safety by focusing on evaluations, risk assessment, and developing a holistic understanding of the actual risks and limitations of AI systems. Anna's work on the Comprehensive Risk Assessment project involves utilizing novel analytical methods derived from first principles to model the pathways connecting AI capabilities to potential harms at a global scale and identifying those that pose the greatest risk.
Alongside her work at CARMA, Anna explores the intersection of philosophy of AI, complex adaptive systems, and cognitive science, investigating topics such as the impact of global AI systems on human cognitive and moral autonomy. This multifaceted perspective allows her to bring a unique blend of holistic thinking, creative problem-solving, and ethical considerations to the challenges of managing risks from highly transformative technological systems like AI. Anna holds an Engineering Physics degree from Chalmers University of Technology, where she wrote her thesis on quantum error correction.
Abra Ganz leads the Geostrategic Dynamics team at CARMA. In this role she investigates how transformative AI (TAI) will change the existing dynamics between states and companies, and how TAI can be used to help rather than hinder international cooperation. This work brings together tools from game theory, mechanism design, and political and organizational psychology to model the dynamics of multilateral competition and coopetition as well as using policy research to inform pragmatic solutions.
Prior to CARMA, Abra worked as a researcher at Yale University’s Digital Ethics Center where she focused on how physical infrastructure can be used to govern digital systems. She has also done technical AI safety research at ETH Zürich (on adversarial robustness) and MIT (on inverse reinforcement learning) and authored a chapter on Proxy Gaming in the 'AI Safety, Ethics, and Society' textbook. Abra holds an undergraduate degree in Classics from the University of Oxford and a Master’s in Logic from the Institute of Logic, Language, and Computation at the University of Amsterdam.
Daniel Kroth is a Senior Researcher at CARMA, where he leads streams in the Public Security Policy program. He also contributes to projects across the Offense/Defense Dynamics, Geostrategic Dynamics, and Comprehensive Risk Assessment programs. His prior experience spans international security and technology policy with appointments at Lawrence Livermore National Laboratory, the Special Competitive Studies Project, and the Wilson Center’s Science and Technology Innovation Program. A firm believer in the importance of mentorship and improving accessibility in technology policy and international affairs, he is a founding member of the Next Frontier Seminar, a nonprofit which supports outstanding undergraduate student research in those fields. Daniel remains active in political science research, particularly in international security and methodology.
Daniel holds a master’s degree with concentrations in International Security and Technology and Global Affairs from the Fletcher School of Law and Diplomacy, where he wrote his thesis on a theoretical framework for novel nuclear-conventional entanglement risks presented by emerging technologies. While at the Fletcher School, Daniel studied AI policy and cybersecurity at the Harvard Kennedy School of Government. Prior to his graduate studies, he undertook a year of study at Albert-Ludwigs-Universität Freiburg with support from the German Academic Exchange Service (DAAD) and additional study at Sichuan University. He holds a bachelor’s degree from Michigan State University.
Giulio Corsi is a Senior Researcher at CARMA, focusing on Offense/Defense Dynamics and the offense/defense balances generated by AI capabilities. With extensive experience in evaluating AI safety and the societal impacts of AI systems, Giulio has previously worked on the development of novel techniques for assessing risks in AI-mediated environments. His research, which applies quantitative methods and machine learning techniques to analyze AI risks, has been published in journals such as EPJ Data Science and the Harvard Kennedy School Misinformation Review. Giulio is also interested in exploring systemic and cascading AI risks across multiple domains, with a focus on interdisciplinary measurement and mapping approaches. Giulio's research has also informed high-impact policy contexts, and he has recently been contributing to a large-scale assessment, led by the International Atomic Energy Agency (IAEA), of how frontier AI can disrupt emergency responses during nuclear emergencies.
Alongside his work at CARMA, Giulio holds a position as a Research Associate at the University of Cambridge's Leverhulme Centre for the Future of Intelligence, where he leads research work on epistemic security, exploring how AI affects information circulation and public decision-making. Giulio holds an MPhil and PhD from the University of Cambridge.
Daniele (Dan) Palombi is a Mathematics and Computer Science researcher, specialising in topics spanning Algorithmic Game Theory, Mechanism Design, Multi-Agent Systems, Concurrent Systems, Category Theory, Type Theory, Programming Language Theory and Domain Theory.
Alongside his work at CARMA, Dan is a consultant (Mechanism Design, Market Design and Operations Research) at 20squares and coordinates its R&D efforts as a member of its core team, and a researcher at the Institute for Categorical Cybernetics (Programming Language Theory, Type Theory, applications of Category Theory to Game Theory and Multi-Agent Systems).
In the past, Dan has worked as a researcher in Concurrent Programming and Probabilistic Programming, as a developer and designer of programming languages, build systems and developer tools, and as an embedded developer for industry-scale manufacturing robots.
His work at CARMA focuses on combining his strong interest in AI safety research with his expertise in mechanism design and multi-agent systems, designing and implementing mechanisms and markets that promote positive-sum strategic interactions among intelligent agents, favouring alignment with human values and societal goals, and building mathematical models and simulations for complex strategic scenarios.
As a Senior Risk Assessment Associate with CARMA, Joe Rogero provides part-time support for the development of CARMA's risk assessment frameworks.
Joe is a former Reliability Engineer with a background in risk assessment, root cause analysis, incident investigation, technical writing, and data analysis. His prior engineering work included the use of failure modes and effects analysis, event tree risk assessment, multidisciplinary facilitation, and other tools and techniques to identify, quantify, aggregate, prioritize, and communicate safety and financial risk scenarios from the mundane to the catastrophic. More recently he served as a Teaching Fellow in the 2024 AI Safety Fundamentals course, guiding more than 50 participants through an introduction to AI safety with BlueDot Impact. He has also volunteered with the Centre for Effective Altruism's EA Virtual Programs and as a facilitator and career navigator for AI Safety Quest. In April, he began writing for the Communications team at the Machine Intelligence Research Institute.
Akash Wasil is an AI policy researcher whose work focuses on emergency preparedness, international AI governance, and safety standards. Akash is a Master's student in Georgetown's Security Studies Program, where he focuses on the intersection of AI policy and national security. He recently worked as a research manager through the University of Cambridge ERA AI fellowship, where he supervised five junior AI governance researchers. Before working in AI policy, Akash received a BA in psychology from Harvard University and an MA in clinical psychology from the University of Pennsylvania.
Akash has written papers about how the US and UK governments can better prepare for AI-related national security threats, how federal agencies can improve their ability to understand AI progress, how nations could one day verify compliance with potential international agreements relating to AI, and how regulators could evaluate safety cases from AI developers. His work frequently involves interacting with policymakers in Congress, the Executive Branch, and the UK Civil Service. At CARMA, Akash works on the public security policy team and contributes to the comprehensive risk assessment, offense/defense dynamics, and geostrategic dynamics teams.
As a Risk Assessment Research Associate at CARMA, Corin Katzke assists in clarifying aspects of functionality, agency, and risk, and helps structure scenario-oriented models.
Corin is also an AI scenario researcher at Convergence Analysis and writes the AI Safety Newsletter for the Center for AI Safety. Recently, he has written about scenario planning, the implications of AI agency for AI risk, theories of victory for AI governance, and how the US could respond to AI emergencies. Before that, he studied philosophy at Yale, working on the cognitive science of morality.
As a Research Assistant at CARMA, Avyay works on a wide range of projects across risk management and policy strategy.
He has a growing interest in AI governance, and has been a cohort mentor for the Axiom Futures Fellowship and the Center for AI Safety's "AI Safety Ethics and Society" course. His experience includes integrating AI systems for companies and conducting research at Columbia University's Software Systems lab.
He was previously a fellow at the AI Futures Fellowship, where he worked on a method to reduce jailbreaks in language models. He has since been involved in independent research forecasting AI chip technologies and their growth rates. This work aims to assess if compute monitoring of GPUs alone is viable for AI governance.
He has completed the AI Safety Fundamentals course in governance and has gained working knowledge of the EU AI Act through his involvement in understanding and analyzing standards from the Joint Technical Committee 21. His focus areas include scalable oversight methods, detecting deception in AI models, and understanding the implications of rapid compute progress.
His work aims to contribute to the development of safe and aligned AI systems, with an emphasis on reducing existential risks from advanced AI. He is keen to expand his contributions to AI governance by pursuing a long-term career helping to design governance systems and policies for safe AI.
Misty Rodriguez is an accomplished Executive Assistant with over two decades of experience in diverse professional environments. Misty is known for her keen ability to streamline processes, manage complex projects, and enhance organizational productivity. Her career has spanned roles in office management, process optimization, and executive support at dynamic organizations like CBS This Morning, Fox News, and True Search.
A passionate lifelong learner, Misty holds an Associate of Arts in Business Administration and has continued to expand her expertise through certifications in content writing, copywriting, and social media management. Beyond her professional accomplishments, Misty is an avid skydiver, dedicated writer, and has a deep interest in the cosmos. Misty’s unique blend of administrative expertise and creative flair makes her an invaluable asset to the CARMA team, where she’s excited to contribute to the forward-thinking mission of AI Risk Management and Alignment.
Anthony Aguirre is the Executive Director of the Future of Life Institute, an NGO examining the implications of transformative technologies, particularly AI. As a professor at UC Santa Cruz, his research spans foundational physics to AI policy. Aguirre co-founded Metaculus, a platform leveraging collective intelligence to forecast science and technology developments, and the Foundational Questions Institute, supporting fundamental physics research. With a PhD from Harvard and postdoctoral work at the Institute for Advanced Study, Aguirre's multidisciplinary background enables him to play a significant role in shaping the future of transformative technologies and their societal impact.
Eric Drexler, a pioneering researcher in nanotechnology and AI, is known for his seminal works “Engines of Creation” and “Nanosystems,” which laid the foundation for the field of molecular systems engineering. In recent years, he has focused on the potential development and implications of advanced AI systems. His “Comprehensive AI Services” model explores the emergence of general intelligence through diverse AI systems that perform distinct, interpretable tasks within role architectures. This perspective informs his analysis of the opportunities and challenges posed by AI, as well as strategies for managing risks and harnessing benefits.
Dr. Drexler's current work explores how advances in AI will expand general implementation capacity—the ability of humans to achieve broad goals by designing, developing, deploying, applying, and adapting complex sociotechnical systems at scale. Through his research and writing, he seeks to deepen understanding of the transformative potential of advanced AI capabilities and their implications for potential cooperative global strategies.
Shezaad J. Dastoor is the Program Manager for the Digital Transformation of United Nations Peacekeeping, where he leverages data, technology, and innovation to transform the UN's approach to peace and security. As a seasoned international civil servant working at the confluence of technology and geopolitics, he brings a unique set of skills and perspectives to CARMA's advisory board.
He has a proven track record of drafting analytical products, statements, and policy communications, incorporating elements of game theory and risk management to facilitate decision-making processes for UN leadership, including the Secretary-General. His prior roles include serving as Special Assistant and Advisor to the Chief of Staff for UN Peacekeeping, where he coordinated strategic initiatives, crisis management, and stakeholder engagement. He also served as an intelligence analyst in South Sudan and a liaison officer in Afghanistan, providing actionable insights and risk mitigation strategies in high-stakes environments. Before his tenure at the UN, Shezaad worked at the World Bank, where he focused on strategic planning and international development projects. He is also a co-founder of LINC Negotiation Architects, a firm specializing in negotiation analysis, training, and simulations. Additionally, he serves on OpenAI's Red Team Network, assessing risks and policy implications of AI models and systems.
Shezaad holds a master’s degree in international peace and conflict resolution from American University and a bachelor’s degree in political science from St. Xavier’s College, Mumbai. Fluent in English, Hindi, and Urdu, he is dedicated to advancing global stability through innovative approaches to AI risk management and strategic operations.
Center for AI Risk Management & Alignment
Copyright © 2024 Center for AI Risk Management & Alignment - All Rights Reserved.