The Inside View
A podcast by Michaël Trazzi
54 Episodes
Curtis Huebner on Doom, AI Timelines and Alignment at EleutherAI
Published: 16/07/2023
Eric Michaud on scaling, grokking and quantum interpretability
Published: 12/07/2023
Jesse Hoogland on Developmental Interpretability and Singular Learning Theory
Published: 06/07/2023
Clarifying and predicting AGI by Richard Ngo
Published: 09/05/2023
Alan Chan And Max Kauffman on Model Evaluations, Coordination and AI Safety
Published: 06/05/2023
Breandan Considine on Neuro Symbolic AI, Coding AIs and AI Timelines
Published: 04/05/2023
Christoph Schuhmann on Open Source AI, Misuse and Existential risk
Published: 01/05/2023
Simeon Campos on Short Timelines, AI Governance and AI Alignment Field Building
Published: 29/04/2023
Collin Burns On Discovering Latent Knowledge In Language Models Without Supervision
Published: 17/01/2023
Victoria Krakovna–AGI Ruin, Sharp Left Turn, Paradigms of AI Alignment
Published: 12/01/2023
David Krueger–Coordination, Alignment, Academia
Published: 07/01/2023
Ethan Caballero–Broken Neural Scaling Laws
Published: 03/11/2022
Irina Rish–AGI, Scaling and Alignment
Published: 18/10/2022
Shahar Avin–Intelligence Rising, AI Governance
Published: 23/09/2022
Katja Grace on Slowing Down AI, AI Expert Surveys And Estimating AI Risk
Published: 16/09/2022
Markus Anderljung–AI Policy
Published: 09/09/2022
Alex Lawsen–Forecasting AI Progress
Published: 06/09/2022
Robert Long–Artificial Sentience
Published: 28/08/2022
Ethan Perez–Inverse Scaling, Language Feedback, Red Teaming
Published: 24/08/2022
Robert Miles–Youtube, AI Progress and Doom
Published: 19/08/2022
The goal of this podcast is to create a place where people discuss their inside views about existential risk from AI.