Information

What is the AI Safety Camp?

Would you like to work on AI safety or strategy research, and are you looking for a concrete way to get started? We are organizing this camp for aspiring AI safety and strategy researchers. At the camp, you:

  • build connections with others in the field
  • build your research portfolio
  • receive feedback on your research ideas and help others with theirs
  • make concrete progress on open AI safety research questions

Read more about the previous camps here.

What’s the structure of the camp?

The camp is preceded by 7 weeks of preparation in the form of an online study group of 3-4 people, followed by a 9-day intensive camp with the aim of creating and publishing a research paper, extensive blog post, or GitHub repository.

What will we work on?

Participants will work in groups on tightly-defined research projects, for example in the following areas:

  • Strategy and Policy
  • Agent Foundations (decision theory, subsystem alignment, embedded world models, MIRI-style)
  • Value Learning (IRL, approval-directed agents, wireheading, …)
  • Corrigibility / Interruptibility
  • Side Effects, Safe Exploration
  • Scalable & Informed Oversight
  • Robustness (distributional shift, adversarial examples)
  • Human Values (including philosophical and psychological approaches)

(See research summaries from the previous camps here.)

When and where?

The third AI Safety Camp will take place from 26 April to 5 May 2019, near Madrid, Spain.

Pricing

Attendance is free! 

More questions?

Check out the FAQ section of this website. If in doubt, contact us at contact@aisafetycamp.com.

Ready to apply?

Applications are open until 13 January 2019. Apply here.