The mission of the London Initiative for Safe AI (LISA) is to improve the safety of advanced AI systems by supporting and empowering individual researchers and small organisations. In addition to providing a supportive, productive, and collaborative research environment, we want to offer research fellowships to strong individual researchers advancing neglected but promising AI safety agendas. The fellowships will offer individuals financial stability as well as significant research management and support for twelve months. We expect alumni of the fellowship to continue as individual researchers (at LISA or elsewhere); to secure AI safety positions in industry, government, or academia; or to start their own AI safety organisations.
Why a research fellowship?
- To improve the safety of advanced AI systems: With AI capabilities advancing at an unprecedented pace, there is an urgent need for corresponding insights in AI safety. The field of AI safety is pre-paradigmatic, so a diversity of research agendas ought to be explored and developed to decorrelate failures.
- To offer financial stability and recognition to proven researchers: Current AI safety grantmakers (e.g., Open Philanthropy, LTFF, Lightspeed) are capacity-constrained (at least in the short term). LISA, with its diverse ecosystem and connections to established mentoring programmes such as MATS, is well placed to identify and support promising AI safety researchers quickly and efficiently.
- To act as a “post-doc” for AI safety researchers: In academia, post-docs traditionally serve as a stopgap for strong researchers between PhD programmes and professorships. This research fellowship would serve a similar function for strong AI safety researchers (former AI safety PhD students, MATS/Astra scholars, etc.): a year of financial stability in a great environment before they take up full-time positions in industry, government, or academia.
- To leverage the UK’s talent pool, which is underutilised because of:
- Geographic limitations: Many capable UK-based researchers are unable or unwilling to relocate to the Bay Area, the current main hub of AI safety research, due to visas, partners, family, culture, etc.
- Limited job opportunities: Compared to the magnitude of AI safety challenges, there are currently still relatively few industry and academic roles dedicated to AI safety research.
- Independent research is daunting: The challenges and uncertainties of entering independent research can be formidable. LISA’s fellowship programme alleviates these concerns by providing financial stability, community support, and a collaborative environment, making independent research in AI safety a more viable and appealing option.
- Individual researchers working alone are often less productive:
- Collective synergy: A cohort of fellows fosters a collaborative and energetic atmosphere, with a higher cadence of iteration driven by adversarial challenge and collaboration.
- Diverse perspectives: At LISA, fellows have the opportunity to engage with a diverse group of researchers and member organisations, exposing their ideas to fresh perspectives.
- Structured research management: Individuals often lack the guidance and structured feedback that are crucial for accountability and progress. Fellows benefit from research management that provides direction, constructive critique, and mentorship, ensuring continual progress and development of their research endeavours.
Why at LISA?
- A highly supportive, productive, and collaborative research environment where ideas can be exchanged, refined, and challenged.
- Our office space is a “melting pot” of epistemically diverse AI safety researchers working on collaborative research projects. This setup provides fellows with a supportive network and community, potential research collaborators or co-founders for new organisations, and a culture of accountability that fosters rapid iteration and progress in their research endeavours.
- Access to a flourishing ecosystem and expertise: Current member organisations include Apollo Research, Leap Labs, BlueDot Impact, MATS, and ARENA. Both Apollo Research and Leap Labs were founded by former MATS scholars working in LISA’s former office, and both have since hired several researchers they met at LISA. Other researchers based at LISA are affiliated with Anthropic, MILA, and Conjecture, amongst many others. Fellows’ research will benefit from the presence of other LISA residents and vice versa.
- Comprehensive operational and research support:
- Fellows at LISA benefit from extensive operational support from the LISA team, as well as access to computational resources, workstations, and catering (amongst other benefits).
- Fellows will meet with LISA’s Research Director biweekly for mentorship, constructive critique, and direction.
- Recognition and networking opportunities: Being part of LISA, an emerging hub for AI safety research in London, provides fellows with invaluable AI safety career development and significant opportunities to expand their professional networks. Recently, LISA has hosted speakers and visiting researchers from CHAI, Google DeepMind, Anthropic, AI Safety Institute, Future of Humanity Institute, Cooperative AI Foundation, GovAI, and FAR AI, among others.
- Mission alignment: LISA recognises the importance of supporting research on a variety of AI safety agendas and guarding against groupthink to mitigate correlated risks. The fellowship is a strategic initiative to nurture the advancement of a variety of AI safety research agendas.
Who are we looking for?

Figure: The AI safety researcher pipeline.
This figure illustrates the progression from motivated and talented individuals to researchers producing high-impact AI safety work. LISA’s fellowship is designed to address the critical bottleneck between Phase 3 and Phase 4, which is difficult for talented individuals to overcome on their own. For example, at the end of MATS or highly relevant PhD programmes, many strong scholars [still struggle](https://www.lesswrong.com/posts/MhudbfBNQcMxBBvj8/there-should-be-more-ai-safety-orgs#:~:text=The%20core%20argument%20is%20that,Thus%2C%20more%20orgs%20are%20needed.) to secure AI safety roles in existing organisations.
This fellowship will provide a productive research environment, financial stability, recognition, and research management to individuals who have already shown evidence of high-impact AI safety research (as part of MATS, Astra, a PhD or postdoc, or otherwise). We expect most fellowships (~80%) to be allocated to “focused” fellows, advancing agendas with well-defined theories of change. The remaining fellowships (~20%) will be allocated to “exploratory” fellows dedicated to less well-defined research agendas or seeking out new ones (e.g., the emergence of singular learning theory as a new agenda).
Concretely, we are looking for “T-shaped researchers” with:
- Depth: specialist expertise in a few research directions/techniques, ideally at the frontier of science/engineering;
- Breadth: a broad overview of the entire research field, ideally encompassing adjacent fields that might allow knowledge transfer;
- Taste: the ability to develop new research directions and techniques, and to adapt existing techniques to new contexts.