Interconnects
A podcast by Nathan Lambert
96 Episodes
Interviewing OLMo 2 leads: Open secrets of training language models
Published: 22/01/2025
DeepSeek R1's recipe to replicate o1 and the future of reasoning LMs
Published: 21/01/2025
Let me use my local LMs on Meta Ray-Bans
Published: 15/01/2025
(Voiceover) DeepSeek V3 and the actual cost of training frontier AI models
Published: 09/01/2025
The state of post-training in 2025
Published: 08/01/2025
Quick recap on the state of reasoning
Published: 02/01/2025
(Voiceover) 2024 Interconnects year in review
Published: 31/12/2024
(Voiceover) OpenAI's o3: The grand finale of AI in 2024
Published: 20/12/2024
(Voiceover) The AI agent spectrum
Published: 18/12/2024
(Voiceover) OpenAI's Reinforcement Finetuning and RL for the masses
Published: 11/12/2024
Interviewing Finbarr Timbers on the "We are So Back" Era of Reinforcement Learning
Published: 05/12/2024
(Voiceover) OpenAI's o1 using "search" was a PSYOP
Published: 04/12/2024
(Voiceover) OLMo 2 and building effective teams for training language models
Published: 26/11/2024
(Voiceover) Tülu 3: The next era in open post-training
Published: 21/11/2024
(Voiceover) Scaling realities
Published: 14/11/2024
(Voiceover) Saving the National AI Research Resource & my AI policy outlook
Published: 13/11/2024
Interviewing Tim Dettmers on open-source AI: Agents, scaling, quantization and what's next
Published: 07/11/2024
Interviewing Andrew Carr of Cartwheel on the State of Generative AI
Published: 31/10/2024
(Voiceover) Why I build open language models
Published: 30/10/2024
(Voiceover) Claude's agentic future and the current state of the frontier models
Published: 23/10/2024
Audio essays about the latest developments in AI and interviews with leading scientists in the field. Breaking the hype, understanding what's under the hood, and telling stories. www.interconnects.ai