AI: Futures and Responsibility

We aim to shape the long-term impacts of AI in ways that are safe and beneficial for humanity.

To achieve this, we focus on:

Understanding impacts: how might developments in AI have long-lasting or extreme impacts on society?

Developing interventions: what can we do to shape those impacts in positive directions and mitigate risks?

Implementing policies: how can we work with partners in policy, industry, and other sectors to make these things happen in practice?


AI: Futures and Responsibility is a collaboration between the Leverhulme Centre for the Future of Intelligence and the Centre for the Study of Existential Risk at the University of Cambridge.

Understanding impacts

We aim to understand future trajectories of AI progress and how AI might impact society in particularly profound and long-lasting ways. We examine specific challenges, study the role of governance, and use participatory approaches to explore a broad range of future scenarios.

Developing interventions

We aim to identify interventions and actions that are likely to be robustly beneficial across a range of potential futures, given uncertainties about how AI systems and their impacts will develop over time.

Implementing policies

We work with partners across academia, policy, industry, and civil society to see high-priority policy recommendations implemented in practice.