Interconnects
A podcast by Nathan Lambert
109 Episodes
Managing frontier model training organizations (or teams)
Published: 19/03/2025
Gemma 3, OLMo 2 32B, and the growing potential of open-source AI
Published: 13/03/2025
Interviewing Eugene Vinitsky on self-play for self-driving and what else people do with RL
Published: 12/03/2025
Elicitation, the simplest way to understand post-training
Published: 10/03/2025
Where inference-time scaling pushes the market for AI companies
Published: 05/03/2025
GPT-4.5: "Not a frontier model"?
Published: 28/02/2025
Character training: Understanding and crafting a language model's personality
Published: 26/02/2025
Claude 3.7 thonks and what's next for inference-time scaling
Published: 24/02/2025
Grok 3 and an accelerating AI roadmap
Published: 18/02/2025
An unexpected RL Renaissance
Published: 13/02/2025
Deep Research, information vs. insight, and the nature of science
Published: 12/02/2025
Making the U.S. the home for open-source AI
Published: 05/02/2025
Why reasoning models will generalize
Published: 28/01/2025
Interviewing OLMo 2 leads: Open secrets of training language models
Published: 22/01/2025
DeepSeek R1's recipe to replicate o1 and the future of reasoning LMs
Published: 21/01/2025
Let me use my local LMs on Meta Ray-Bans
Published: 15/01/2025
(Voiceover) DeepSeek V3 and the actual cost of training frontier AI models
Published: 09/01/2025
The state of post-training in 2025
Published: 08/01/2025
Quick recap on the state of reasoning
Published: 02/01/2025
(Voiceover) 2024 Interconnects year in review
Published: 31/12/2024
Audio essays about the latest developments in AI and interviews with leading scientists in the field. Breaking the hype, understanding what's under the hood, and telling stories. www.interconnects.ai
