AI Safety Bundle

The AI Safety Bundle aims to address potential risks from advanced artificial intelligence. Charities working in this area do things like:

  • Conducting technical AI safety research into techniques for building AI systems that are scalably safe or aligned with human values

  • Understanding how geopolitics, governance structures, and strategic trends shape the development or deployment of advanced AI systems

  • Building communities specifically focused on AI safety

AI safety is both important and, in many respects, highly neglected.

AI is one of the most dynamic, unpredictable, and potentially world-changing areas of science. Researchers working on AI safety believe there’s a nontrivial possibility that transformative AI will be developed in the next 20 years. Tech companies are pouring money into the development of advanced AI systems. While the upside could be enormous, there are significant potential risks if humanity ends up creating AI systems that are many times smarter than we are but do not share our goals. Unaligned AI is an example of a global catastrophic risk: a risk that could cause humanity to fail to reach its full potential.

While there is a substantial and growing field of artificial intelligence and machine learning research, most of it is not focused on reducing potential risks. At the same time, it seems that useful progress can be made today, including work on important technical problems and scoping out strategic and policy considerations.

The charities in the AI Safety Bundle pursue more speculative interventions than those in some other areas we support (like our Global Health and Development Bundle), so our recommendations here rely more heavily on the judgment of experts in the field.

Further reading