Interconnects
A podcast by Nathan Lambert
96 Episodes
Model commoditization and product moats
Published: 13/03/2024
The koan of an open-source LLM
Published: 06/03/2024
Interviewing Louis Castricato of Synth Labs and Eleuther AI on RLHF, Gemini Drama, DPO, founding Carper AI, preference data, reward models, and everything in between
Published: 04/03/2024
How to cultivate a high-signal AI feed
Published: 28/02/2024
Google ships it: Gemma open LLMs and Gemini backlash
Published: 22/02/2024
10 Sora and Gemini 1.5 follow-ups: code-base in context, deepfakes, pixel-peeping, inference costs, and more
Published: 20/02/2024
Releases! OpenAI’s Sora for video, Gemini 1.5's infinite context, and a secret Mistral model
Published: 16/02/2024
Why reward models are still key to understanding alignment
Published: 14/02/2024
Alignment-as-a-Service: Scale AI vs. the new guys
Published: 07/02/2024
Open Language Models (OLMos) and the LLM landscape
Published: 01/02/2024
Model merging lessons in The Waifu Research Department
Published: 29/01/2024
Local LLMs, some facts some fiction
Published: 24/01/2024
Multimodal blogging: My AI tools to expand your audience
Published: 17/01/2024
Multimodal LM roundup: Unified IO 2, inputs and outputs, Gemini, LLaVA-RLHF, and RLHF questions
Published: 10/01/2024
Where 2024’s “open GPT-4” can’t match OpenAI’s
Published: 05/01/2024
Interviewing Tri Dao and Michael Poli of Together AI on the future of LLM architectures
Published: 21/12/2023
Audio essays about the latest developments in AI and interviews with leading scientists in the field. Breaking the hype, understanding what's under the hood, and telling stories. www.interconnects.ai