Lead legal research on potential U.S. federal and state-level authorities and policy interventions supporting public safety, security, and wellbeing for scenarios of highly multipolar AGI run amok.
Research, develop, measure, and write about particular technical architectural directions for safer general-purpose AI, involving metareasoning, pragmatics, multiobjective algorithms, certainty management, context management, and constraint satisfaction.
Please contact us to submit an expression of interest or a research plan you'd like to lead using the above elements.
CARMA is a virtual-first organization, but opportunities for in-person meetings tend to cluster around Berkeley, CA; Cambridge, MA; Washington, DC; and London, UK. CARMA/SEE is proud to be an Equal Opportunity Employer. We will not discriminate on the basis of race, ethnicity, sex, age, religion, gender reassignment, partnership status, maternity, or sexual orientation. We are, by policy and action, an inclusive organization and actively promote equal opportunities for all humans with the right mix of talent, knowledge, skills, attitude, and potential; hiring is based solely on individual merit for the role. Note that we are unable to sponsor visas at this time, but non-U.S. contractors may be considered.
The Center for AI Risk Management & Alignment is a project of Social & Environmental Entrepreneurs, Inc., a 501(c)(3) nonprofit public charity.