Richard Mallah founded and leads CARMA, and works across its portfolio of projects in risk assessment, policy strategy, and technical safety. Richard also serves part-time as the Principal AI Safety Strategist at the Future of Life Institute, which he joined in 2014, and where he conducts research, analysis, advocacy, strategy, and field building across the technical, strategic, and policy aspects of transformative AI safety.
He’s taken AGI seriously since 2010, and has maintained a sense of urgency about societal risks from AGI since 2012.
Over the past decade he has studied, ideated on, and contributed to collaborative work on scalable safety, computational ethics, pathways to AGI, risk types and pathways, theories of change, and governance recommendations.
Mr. Mallah has also co-led the Fairness, Auditing, Transparency, and Externalities of AI Center of Excellence at the management consultancy Keystone Strategy, which gave him perspective on AI auditing and multibillion-dollar litigations. Heading enterprise risk management systems at BlackRock, the world's largest asset manager, during the 2008 financial crisis also lent him an appreciation for the interplay among systemic tail risk, technology, multiscale foresight, risk reduction, and catalysts for systemic improvement.
Mr. Mallah has been working in machine learning and AI in industry for over twenty years, spanning roles in AI algorithms research, research management, development management, systems architecture, product management, CTO, chief scientist, management consulting, and strategy. He holds a degree in Intelligent Systems from Columbia University.
Anna Katariina Wisakanto is a researcher and strategist at CARMA, where she leads the Comprehensive Risk Assessment project. Anna has been working in AI since 2018 across industry, academia, and business, and leverages a background in philosophy, engineering physics, and complex adaptive systems to analyze and address the risks posed by advanced AI systems and their capabilities.
She contributes to the field of AI safety by focusing on evaluations, risk assessment, and developing a holistic understanding of the actual risks and limitations of AI systems. Anna's work on the Comprehensive Risk Assessment project involves utilizing novel analytical methods derived from first principles to model the pathways connecting AI capabilities to potential harms at a global scale and identifying those that pose the greatest risk.
Alongside her work at CARMA, Anna explores the intersection of philosophy of AI, complex adaptive systems, and cognitive science, investigating topics such as the impact of global AI systems on human cognitive and moral autonomy. This multifaceted perspective allows her to bring a unique blend of holistic thinking, creative problem-solving, and ethical considerations to the challenges of managing risks from highly transformative technological systems like AI. Anna holds an Engineering Physics degree from Chalmers University of Technology, where she wrote her thesis on quantum error correction.
As a Senior Risk Assessment Associate with CARMA, Joe Rogero provides part-time support for the development of CARMA's risk assessment frameworks.
Joe is a former Reliability Engineer with a background in risk assessment, root cause analysis, incident investigation, technical writing, and data analysis. His prior engineering work included the use of failure modes and effects analysis, event tree risk assessment, multidisciplinary facilitation, and other tools and techniques to identify, quantify, aggregate, prioritize, and communicate safety and financial risk scenarios from the mundane to the catastrophic. More recently he served as a Teaching Fellow in the 2024 AI Safety Fundamentals course, guiding more than 50 participants through an introduction to AI Safety with BlueDot Impact. He has also volunteered with the Centre for Effective Altruism's EA Virtual Programs and as a facilitator and career navigator for AI Safety Quest. In April, he began writing for the Communications team at the Machine Intelligence Research Institute.
As a Risk Assessment Research Associate at CARMA, Corin Katzke assists in clarifying aspects of functionality, agency, and risk, and helps structure scenario-oriented models.
Corin is also an AI scenario researcher at Convergence Analysis and writes the AI Safety Newsletter for the Center for AI Safety. Recently, he has written about scenario planning, the implications of AI agency for AI risk, theories of victory for AI governance, and how the US could respond to AI emergencies. Before that, he studied philosophy at Yale, working on the cognitive science of morality.
Anthony Aguirre is the Executive Director of the Future of Life Institute, an NGO examining the implications of transformative technologies, particularly AI. A professor at UC Santa Cruz, he conducts research spanning foundational physics to AI policy. Aguirre co-founded Metaculus, a platform leveraging collective intelligence to forecast science and technology developments, and the Foundational Questions Institute, which supports fundamental physics research. With a PhD from Harvard and postdoctoral work at the Institute for Advanced Study, Aguirre brings a multidisciplinary background to shaping the future of transformative technologies and their societal impact.
Eric Drexler, a pioneering researcher in nanotechnology and AI, is known for his seminal works “Engines of Creation” and “Nanosystems,” which laid the foundation for the field of molecular systems engineering. In recent years, he has focused on the potential development and implications of advanced AI systems. His “Comprehensive AI Services” model explores the emergence of general intelligence through diverse AI systems that perform distinct, interpretable tasks within role architectures. This perspective informs his analysis of the opportunities and challenges posed by AI, as well as strategies for managing risks and harnessing benefits.
Dr. Drexler's current work explores how advances in AI will expand general implementation capacity—the ability of humans to achieve broad goals by designing, developing, deploying, applying, and adapting complex sociotechnical systems at scale. Through his research and writing, he seeks to deepen understanding of the transformative potential of advanced AI capabilities and their implications for potential cooperative global strategies.
Shezaad J. Dastoor is the Program Manager for the Digital Transformation of United Nations Peacekeeping, where he leverages data, technology, and innovation to transform the UN's approach to peace and security. As a seasoned international civil servant working at the confluence of technology and geopolitics, he brings a unique set of skills and perspectives to CARMA's advisory board.
He has a proven track record of drafting analytical products, statements, and policy communications that incorporate elements of game theory and risk management to facilitate decision-making for UN leadership, including the Secretary-General. His prior roles include serving as Special Assistant and Advisor to the Chief of Staff for UN Peacekeeping, where he coordinated strategic initiatives, crisis management, and stakeholder engagement. He also served as an intelligence analyst in South Sudan and a liaison officer in Afghanistan, providing actionable insights and risk mitigation strategies in high-stakes environments. Before his tenure at the UN, Shezaad worked at the World Bank, where he focused on strategic planning and international development projects. He is also a co-founder of LINC Negotiation Architects, a firm specializing in negotiation analysis, training, and simulations. Additionally, he serves on OpenAI's Red Team Network, assessing risks and policy implications of AI models and systems.
Shezaad holds a master’s degree in international peace and conflict resolution from American University and a bachelor’s degree in political science from St. Xavier’s College, Mumbai. Fluent in English, Hindi, and Urdu, he is dedicated to advancing global stability through innovative approaches to AI risk management and strategic operations.
As a project of Social & Environmental Entrepreneurs, CARMA benefits from the substantial operations, administration, human resources, payroll, finance, accounting, legal, and general nonprofit expertise and support that SEE provides.
Center for AI Risk Management & Alignment
Copyright © 2024 Center for AI Risk Management & Alignment - All Rights Reserved.