Future of Life Institute Podcast
A podcast from the Future of Life Institute
215 Episodes
AI and Nuclear Weapons - Trust, Accidents, and New Risks with Paul Scharre and Mike Horowitz
Published: 28/09/2018
AIAP: Moral Uncertainty and the Path to AI Alignment with William MacAskill
Published: 18/09/2018
AI: Global Governance, National Policy, and Public Trust with Allan Dafoe and Jessica Cussins
Published: 31/08/2018
The Metaethics of Joy, Suffering, and Artificial Intelligence with Brian Tomasik and David Pearce
Published: 16/08/2018
Six Experts Explain the Killer Robots Debate
Published: 31/07/2018
AIAP: AI Safety, Possible Minds, and Simulated Worlds with Roman Yampolskiy
Published: 16/07/2018
Mission AI - Giving a Global Voice to the AI Discussion With Charlie Oliver and Randi Williams
Published: 29/06/2018
AIAP: Astronomical Future Suffering and Superintelligence with Kaj Sotala
Published: 14/06/2018
Nuclear Dilemmas, From North Korea to Iran with Melissa Hanham and Dave Schmerler
Published: 31/05/2018
What are the odds of nuclear war? A conversation with Seth Baum and Robert de Neufville
Published: 30/04/2018
AIAP: Inverse Reinforcement Learning and Inferring Human Preferences with Dylan Hadfield-Menell
Published: 25/04/2018
Navigating AI Safety -- From Malicious Use to Accidents
Published: 30/03/2018
AI, Ethics And The Value Alignment Problem With Meia Chita-Tegmark And Lucas Perry
Published: 28/02/2018
Top AI Breakthroughs and Challenges of 2017
Published: 31/01/2018
Beneficial AI And Existential Hope In 2018
Published: 21/12/2017
Balancing the Risks of Future Technologies With Andrew Maynard and Jack Stilgoe
Published: 30/11/2017
AI Ethics, the Trolley Problem, and a Twitter Ghost Story with Joshua Greene And Iyad Rahwan
Published: 31/10/2017
80,000 Hours with Rob Wiblin and Brenton Mayer
Published: 29/09/2017
Life 3.0: Being Human in the Age of Artificial Intelligence with Max Tegmark
Published: 29/08/2017
The Art Of Predicting With Anthony Aguirre And Andrew Critch
Published: 31/07/2017
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.