96 Episodes

  1. Interviewing OLMo 2 leads: Open secrets of training language models

    Published: 22/01/2025
  2. DeepSeek R1's recipe to replicate o1 and the future of reasoning LMs

    Published: 21/01/2025
  3. Let me use my local LMs on Meta Ray-Bans

    Published: 15/01/2025
  4. (Voiceover) DeepSeek V3 and the actual cost of training frontier AI models

    Published: 09/01/2025
  5. The state of post-training in 2025

    Published: 08/01/2025
  6. Quick recap on the state of reasoning

    Published: 02/01/2025
  7. (Voiceover) 2024 Interconnects year in review

    Published: 31/12/2024
  8. (Voiceover) OpenAI's o3: The grand finale of AI in 2024

    Published: 20/12/2024
  9. (Voiceover) The AI agent spectrum

    Published: 18/12/2024
  10. (Voiceover) OpenAI's Reinforcement Finetuning and RL for the masses

    Published: 11/12/2024
  11. Interviewing Finbarr Timbers on the "We are So Back" Era of Reinforcement Learning

    Published: 05/12/2024
  12. (Voiceover) OpenAI's o1 using "search" was a PSYOP

    Published: 04/12/2024
  13. (Voiceover) OLMo 2 and building effective teams for training language models

    Published: 26/11/2024
  14. (Voiceover) Tülu 3: The next era in open post-training

    Published: 21/11/2024
  15. (Voiceover) Scaling realities

    Published: 14/11/2024
  16. (Voiceover) Saving the National AI Research Resource & my AI policy outlook

    Published: 13/11/2024
  17. Interviewing Tim Dettmers on open-source AI: Agents, scaling, quantization and what's next

    Published: 07/11/2024
  18. Interviewing Andrew Carr of Cartwheel on the State of Generative AI

    Published: 31/10/2024
  19. (Voiceover) Why I build open language models

    Published: 30/10/2024
  20. (Voiceover) Claude's agentic future and the current state of the frontier models

    Published: 23/10/2024

Audio essays about the latest developments in AI and interviews with leading scientists in the field. Breaking the hype, understanding what's under the hood, and telling stories. www.interconnects.ai