109 Episodes

  1. Managing frontier model training organizations (or teams)

Published: 19/03/2025
  2. Gemma 3, OLMo 2 32B, and the growing potential of open-source AI

Published: 13/03/2025
  3. Interviewing Eugene Vinitsky on self-play for self-driving and what else people do with RL

Published: 12/03/2025
  4. Elicitation, the simplest way to understand post-training

Published: 10/03/2025
  5. Where inference-time scaling pushes the market for AI companies

Published: 05/03/2025
  6. GPT-4.5: "Not a frontier model"?

Published: 28/02/2025
  7. Character training: Understanding and crafting a language model's personality

Published: 26/02/2025
  8. Claude 3.7 thonks and what's next for inference-time scaling

Published: 24/02/2025
  9. Grok 3 and an accelerating AI roadmap

Published: 18/02/2025
  10. An unexpected RL Renaissance

Published: 13/02/2025
  11. Deep Research, information vs. insight, and the nature of science

Published: 12/02/2025
  12. Making the U.S. the home for open-source AI

Published: 05/02/2025
  13. Why reasoning models will generalize

Published: 28/01/2025
  14. Interviewing OLMo 2 leads: Open secrets of training language models

Published: 22/01/2025
  15. DeepSeek R1's recipe to replicate o1 and the future of reasoning LMs

Published: 21/01/2025
  16. Let me use my local LMs on Meta Ray-Bans

Published: 15/01/2025
  17. (Voiceover) DeepSeek V3 and the actual cost of training frontier AI models

Published: 09/01/2025
  18. The state of post-training in 2025

Published: 08/01/2025
  19. Quick recap on the state of reasoning

Published: 02/01/2025
  20. (Voiceover) 2024 Interconnects year in review

Published: 31/12/2024


Audio essays about the latest developments in AI and interviews with leading scientists in the field. Breaking the hype, understanding what's under the hood, and telling stories. www.interconnects.ai
