AXRP - the AI X-risk Research Podcast
A podcast by Daniel Filan
59 Episodes
46 - Tom Davidson on AI-enabled Coups
Published: 7 August 2025
45 - Samuel Albanie on DeepMind's AGI Safety Approach
Published: 6 July 2025
44 - Peter Salib on AI Rights for Human Safety
Published: 28 June 2025
43 - David Lindner on Myopic Optimization with Non-myopic Approval
Published: 15 June 2025
42 - Owain Evans on LLM Psychology
Published: 6 June 2025
41 - Lee Sharkey on Attribution-based Parameter Decomposition
Published: 3 June 2025
40 - Jason Gross on Compact Proofs and Interpretability
Published: 28 March 2025
38.8 - David Duvenaud on Sabotage Evaluations and the Post-AGI Future
Published: 1 March 2025
38.7 - Anthony Aguirre on the Future of Life Institute
Published: 9 February 2025
38.6 - Joel Lehman on Positive Visions of AI
Published: 24 January 2025
38.5 - Adrià Garriga-Alonso on Detecting AI Scheming
Published: 20 January 2025
38.4 - Shakeel Hashim on AI Journalism
Published: 5 January 2025
38.3 - Erik Jenner on Learned Look-Ahead
Published: 12 December 2024
39 - Evan Hubinger on Model Organisms of Misalignment
Published: 1 December 2024
38.2 - Jesse Hoogland on Singular Learning Theory
Published: 27 November 2024
38.1 - Alan Chan on Agent Infrastructure
Published: 16 November 2024
38.0 - Zhijing Jin on LLMs, Causality, and Multi-Agent Systems
Published: 14 November 2024
37 - Jaime Sevilla on AI Forecasting
Published: 4 October 2024
36 - Adam Shai and Paul Riechers on Computational Mechanics
Published: 29 September 2024
New Patreon tiers + MATS applications
Published: 28 September 2024
AXRP (pronounced axe-urp) is the AI X-risk Research Podcast where I, Daniel Filan, have conversations with researchers about their papers. We discuss the paper, and hopefully get a sense of why it's been written and how it might reduce the risk of AI causing an existential catastrophe: that is, permanently and drastically curtailing humanity's future potential. You can visit the website and read transcripts at axrp.net.
