“I read every major AI lab’s safety plan so you don’t have to” by sarahhw
EA Forum Podcast (All audio) - A podcast by the EA Forum Team
This is a link post. I recently completed the AI Safety Fundamentals governance course. For my project, which won runner-up in the Technical Governance Explainer category, I summarised the safety frameworks published by OpenAI, Anthropic and Google DeepMind, and offered some of my high-level thoughts. I'm posting it here in case it can be of any use to people!

A handful of tech companies are competing to build advanced, general-purpose AI systems that radically outsmart all of humanity. Each acknowledges that this will be a highly – perhaps existentially – dangerous undertaking. How do they plan to mitigate these risks? Three industry leaders have released safety frameworks outlining how they intend to avoid catastrophic outcomes: OpenAI's Preparedness Framework, Anthropic's Responsible Scaling Policy and Google DeepMind's Frontier Safety Framework. Despite having been an avid follower of AI safety issues for almost two years now, and having heard plenty about these safety [...]

---

Outline:
(02:04) What are they?
(03:14) Thresholds and triggers
(06:39) Evaluations
(10:06) Mitigations
(10:25) Security standards
(12:40) Deployment standards
(14:53) Development standards
(15:49) My thoughts and open questions
(16:09) These are not ‘plans’
(19:08) What are acceptable levels of risk?
(21:25) The get-out-of-RSP-free card

The original text contained 4 footnotes which were omitted from this narration. The original text contained 4 images which were described by AI.

---

First published: December 16th, 2024

Source: https://forum.effectivealtruism.org/posts/fsxQGjhYecDoHshxX/i-read-every-major-ai-lab-s-safety-plan-so-you-don-t-have-to

---

Narrated by TYPE III AUDIO.