What Do LLMs Tell Us About the Nature of Language—And Ourselves? - Ep. 23 with Robin Sloan
AI and I - A podcast by Dan Shipper - Wednesdays
An interview with best-selling sci-fi novelist Robin Sloan

One of my favorite fiction writers, New York Times best-selling author Robin Sloan, just wrote the first novel I’ve seen that’s inspired by LLMs. The book is called Moonbound, and Robin originally wanted to write it with language models. He tried doing this in 2016 with a rudimentary model he built himself, and more recently with commercially available LLMs. Both times Robin found himself unsatisfied with the creative output generated by the models. AI couldn’t quite generate the fiction he was looking for—the kind that pushes the boundaries of literature. He did, however, find himself fascinated by the inner workings of LLMs.

Robin was particularly interested in how LLMs map language into math—the notion that each letter is represented by a unique series of numbers, allowing the model to understand human language in a computational way. He thinks of LLMs as language personified, given its first heady dose of autonomy.

Robin’s body of work reflects his deep understanding of technology, language, and storytelling. He’s the author of the novels Mr. Penumbra’s 24-Hour Bookstore and Sourdough, and has also written for publications like the New York Times, the Atlantic, and MIT Technology Review. Before going full-time on fiction writing, he worked at Twitter and in traditional media institutions. In Moonbound, Robin puts LLMs into perspective as part of a broader human story.

I sat down with Robin to unpack his fascination with LLMs, their nearly sentient nature, and what they reveal about language and ourselves. It was a wide-ranging discussion about technology, philosophy, ethics, and biology—and I came away more excited than ever about the possibilities the future holds. This is a must-watch for science fiction enthusiasts and anyone interested in the deep philosophical questions raised by LLMs and the way they function.

If you found this episode interesting, please like, subscribe, comment, and share!

Want even more? Sign up for Every to unlock our ultimate guide to prompting ChatGPT. It’s usually only for paying subscribers, but you can get it here for free.

To hear more from Dan Shipper:
Subscribe to Every: https://every.to/subscribe
Follow him on X: https://twitter.com/danshipper

Links to resources mentioned in the episode:
Robin Sloan: https://www.robinsloan.com/
Robin’s books: Mr. Penumbra’s 24-Hour Bookstore, Sourdough, and Moonbound
Dan’s first interview with Robin four years ago: https://every.to/superorganizers/tasting-notes-with-robin-sloan-25629085
Anthropic’s paper about how concepts are represented inside LLMs: https://www.anthropic.com/news/mapping-mind-language-model
Dan’s interview with Notion engineer Linus Lee: https://www.youtube.com/watch?v=OeKEXnNP2yA
Big Biology, the podcast Robin enjoys listening to: https://www.bigbiology.org/