Future of Life Institute Podcast

A podcast by the Future of Life Institute

215 Episodes

  1. Dan Hendrycks on Why Evolution Favors AIs over Humans

    Published: 08/06/2023
  2. Roman Yampolskiy on Objections to AI Safety

    Published: 26/05/2023
  3. Nathan Labenz on How AI Will Transform the Economy

    Published: 11/05/2023
  4. Nathan Labenz on the Cognitive Revolution, Red Teaming GPT-4, and Potential Dangers of AI

    Published: 04/05/2023
  5. Maryanna Saenko on Venture Capital, Philanthropy, and Ethical Technology

    Published: 27/04/2023
  6. Connor Leahy on the State of AI and Alignment Research

    Published: 20/04/2023
  7. Connor Leahy on AGI and Cognitive Emulation

    Published: 13/04/2023
  8. Lennart Heim on Compute Governance

    Published: 06/04/2023
  9. Lennart Heim on the AI Triad: Compute, Data, and Algorithms

    Published: 30/03/2023
  10. Liv Boeree on Poker, GPT-4, and the Future of AI

    Published: 23/03/2023
  11. Liv Boeree on Moloch, Beauty Filters, Game Theory, Institutions, and AI

    Published: 16/03/2023
  12. Tobias Baumann on Space Colonization and Cooperative Artificial Intelligence

    Published: 09/03/2023
  13. Tobias Baumann on Artificial Sentience and Reducing the Risk of Astronomical Suffering

    Published: 02/03/2023
  14. Neel Nanda on Math, Tech Progress, Aging, Living up to Our Values, and Generative AI

    Published: 23/02/2023
  15. Neel Nanda on Avoiding an AI Catastrophe with Mechanistic Interpretability

    Published: 16/02/2023
  16. Neel Nanda on What is Going on Inside Neural Networks

    Published: 09/02/2023
  17. Connor Leahy on Aliens, Ethics, Economics, Memetics, and Education

    Published: 02/02/2023
  18. Connor Leahy on AI Safety and Why the World is Fragile

    Published: 26/01/2023
  19. Connor Leahy on AI Progress, Chimps, Memes, and Markets

    Published: 19/01/2023
  20. Sean Ekins on Regulating AI Drug Discovery

    Published: 12/01/2023

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work consists of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on AI governance, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
