The London Initiative for Safe AI (LISA)’s mission is to improve the safety of advanced AI systems by supporting and empowering individual researchers and small organisations. In addition to providing a supportive, productive, and collaborative research environment, we want to offer research fellowships to strong individual researchers advancing neglected but promising AI safety agendas. Each fellowship will provide twelve months of financial stability as well as significant research management and support. We expect alumni of the fellowship to continue as individual researchers (at LISA or elsewhere); to secure AI safety positions in industry, government, or academia; or to start their own AI safety organisations.

Why a research fellowship?

Why at LISA?

Who are we looking for?

The AI safety researcher pipeline.

This figure illustrates the progression from motivated and talented individuals to researchers producing high-impact AI safety work. LISA's fellowship is designed to address the critical bottleneck between Phase 3 and Phase 4, which talented individuals find difficult to solve for themselves. For example, at the end of MATS or highly relevant PhD programmes, many strong scholars [still struggle](https://www.lesswrong.com/posts/MhudbfBNQcMxBBvj8/there-should-be-more-ai-safety-orgs#:~:text=The%20core%20argument%20is%20that,Thus%2C%20more%20orgs%20are%20needed.) to secure AI safety roles in existing organisations.

This fellowship will provide a productive research environment, financial stability, recognition, and research management to individuals who have already shown evidence of high-impact AI safety research (as part of MATS, Astra, a PhD or postdoc, or otherwise). We expect most fellowships (~80%) to be allocated to “focused” fellows advancing agendas with well-defined theories of change. The remaining fellowships (~20%) will be allocated to “exploratory” fellows dedicated to less well-defined research agendas or seeking out new ones (e.g., the identification of singular learning theory as an AI safety agenda).

Concretely, we are looking for ‘T-shaped researchers’ with: