Interconnects
A podcast by Nathan Lambert
96 Episodes
State of play of AI progress (and related brakes on an intelligence explosion)
Published: 30/04/2025
Transparency and (shifting) priority stacks
Published: 28/04/2025
OpenAI's o3: Over-optimization is back and weirder than ever
Published: 19/04/2025
OpenAI's GPT-4.1 and separating the API from ChatGPT
Published: 14/04/2025
Llama 4: Did Meta just push the panic button?
Published: 07/04/2025
RL backlog: OpenAI's many RLs, clarifying distillation, and latent reasoning
Published: 05/04/2025
Gemini 2.5 Pro and Google's second chance with AI
Published: 26/03/2025
Managing frontier model training organizations (or teams)
Published: 19/03/2025
Gemma 3, OLMo 2 32B, and the growing potential of open-source AI
Published: 13/03/2025
Interviewing Eugene Vinitsky on self-play for self-driving and what else people do with RL
Published: 12/03/2025
Elicitation, the simplest way to understand post-training
Published: 10/03/2025
Where inference-time scaling pushes the market for AI companies
Published: 05/03/2025
GPT-4.5: "Not a frontier model"?
Published: 28/02/2025
Character training: Understanding and crafting a language model's personality
Published: 26/02/2025
Claude 3.7 thonks and what's next for inference-time scaling
Published: 24/02/2025
Grok 3 and an accelerating AI roadmap
Published: 18/02/2025
An unexpected RL Renaissance
Published: 13/02/2025
Deep Research, information vs. insight, and the nature of science
Published: 12/02/2025
Making the U.S. the home for open-source AI
Published: 05/02/2025
Why reasoning models will generalize
Published: 28/01/2025
Audio essays about the latest developments in AI and interviews with leading scientists in the field. Breaking down the hype, understanding what's under the hood, and telling stories. www.interconnects.ai