Interconnects
A podcast by Nathan Lambert
96 Episodes
Interviewing Arvind Narayanan on making sense of AI hype
Published: 17/10/2024
(Voiceover) Building on evaluation quicksand
Published: 16/10/2024
Interviewing Andrew Trask on how language models should store (and access) information
Published: 10/10/2024
How scaling changes model behavior
Published: 09/10/2024
[Article Voiceover] AI Safety's Crux: Culture vs. Capitalism
Published: 02/10/2024
Interviewing Riley Goodside on the science of prompting
Published: 30/09/2024
[Article Voiceover] Llama 3.2 Vision and Molmo: Foundations for the multimodal open-source ecosystem
Published: 27/09/2024
[Article Voiceover] Reverse engineering OpenAI's o1
Published: 17/09/2024
Futures of the data foundry business model
Published: 11/09/2024
A post-training approach to AI regulation with Model Specs
Published: 10/09/2024
OpenAI's Strawberry, LM self-talk, inference scaling laws, and spending more on inference
Published: 05/09/2024
OLMoE and the hidden simplicity in training better foundation models
Published: 04/09/2024
On the current definitions of open-source AI and the state of the data commons
Published: 28/08/2024
Nous Hermes 3 and exploiting underspecified evaluations
Published: 16/08/2024
Interviewing Ross Taylor on LLM reasoning, Llama fine-tuning, Galactica, agents
Published: 08/08/2024
A recipe for frontier model post-training
Published: 07/08/2024
Interviewing Sebastian Raschka on the state of open LLMs, Llama 3.1, and AI education
Published: 01/08/2024
GPT-4o-mini changed ChatBotArena
Published: 31/07/2024
Llama 3.1 405b, Meta's AI strategy, and the new open frontier model ecosystem
Published: 23/07/2024
SB 1047, AI regulation, and unlikely allies for open models
Published: 17/07/2024
Audio essays about the latest developments in AI and interviews with leading scientists in the field. Breaking through the hype, understanding what's under the hood, and telling stories. www.interconnects.ai