“Cutting AI Safety down to size” by Holly Elmore ⏸️ 🔸
EA Forum Podcast (All audio) - A podcast by the EA Forum Team
This is more of a personal how-I-think post than an invitation to argue the details. This post discusses two sources of bloat, mental gymnastics, and panic in AI Safety:

1. p(doom)/timeline creep
2. LessWrong dogma accommodates embedded libertarian assumptions

I think there's been an inflation of the stakes of AI Safety, unchecked for a long time. I've felt it myself since I switched into AI Safety. When you're thinking about AI capabilities every day, it's easy to become more and more scared and convinced that doom is almost certain, SOON. My general take on timelines and p(doom) is that the threshold for trying to get AI paused should be pretty low (a 5% chance of extinction or catastrophe seems more than enough to me) and that, above that, differences in p(doom) have their uses to keep track of but generally aren't action-relevant. But the subjective sense of doom component [...]

---

First published: November 9th, 2024

Source: https://forum.effectivealtruism.org/posts/LcJ7zoQWv3zDDYFmD/cutting-ai-safety-down-to-size

---

Narrated by TYPE III AUDIO.