🔍Gemma Scope: helping the safety community shed light on the inner workings of language models.

AI Unraveled: Latest AI News & Trends, ChatGPT, Gemini, Gen AI, LLMs, Prompting, AI Ethics & Bias - A podcast by Etienne Noumen


Explainable AI: one of the most requested capabilities for LLMs is understanding how they reach their internal decisions, and Gemma Scope is a big step towards interpretability. "This is a barebones tutorial on how to use Gemma Scope, Google DeepMind's suite of Sparse Autoencoders (SAEs) on every layer and sublayer of Gemma 2 2B and 9B. Sparse Autoencoders are an interpretability tool that act like a 'microscope' on language model activations. They let us zoom in on dense, compressed activations, and expand them to a larger but sparser and seemingly more interpretable form, which can be a very useful tool when doing interpretability research!" A minimal code sketch of this encode/decode idea follows at the end of these notes.

Listen to the episode on our podcast and support us by subscribing at https://podcasts.apple.com/ca/podcast/ai-unraveled-latest-ai-news-trends-gpt-gemini-generative/id1684415169

Enjoying these FREE AI updates without the clutter? Set yourself up for promotion or get a better job by acing the AWS Certified Data Engineer Associate Exam (DEA-C01) with the book or app below:

Get the book on Google Play at https://play.google.com/store/books/details?id=lzgPEQAAQBAJ or on Apple Books at https://books.apple.com/ca/book/ace-the-aws-certified-data-engineer-associate/id650457218

Download the Ace AWS DEA-C01 Exam App at https://apps.apple.com/ca/app/ace-the-aws-data-engineer-exam/id6566170013

Source: https://enoumen.com/2024/08/02/ai-innovations-in-august-2024/
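The quoted description boils down to a simple encode/decode pass: dense residual-stream activations are projected into a much wider feature space, thresholded so that only a few features fire, and then projected back to approximate the original activation. Below is a minimal Python sketch of that idea, assuming a JumpReLU-style SAE with parameters named W_enc, W_dec, b_enc, b_dec, and threshold; the Hugging Face repo id and file path are illustrative assumptions, not confirmed paths from the tutorial.

```python
# Rough sketch of loading one Gemma Scope SAE and running it on residual-stream
# activations. Repo id, file path, and parameter names are assumptions; check the
# official Gemma Scope release for the exact layout.
import numpy as np
import torch
import torch.nn as nn
from huggingface_hub import hf_hub_download


class JumpReLUSAE(nn.Module):
    """Encode dense activations into a wider, sparse feature space and decode back."""

    def __init__(self, d_model: int, d_sae: int):
        super().__init__()
        self.W_enc = nn.Parameter(torch.zeros(d_model, d_sae))
        self.W_dec = nn.Parameter(torch.zeros(d_sae, d_model))
        self.threshold = nn.Parameter(torch.zeros(d_sae))
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def encode(self, acts: torch.Tensor) -> torch.Tensor:
        # JumpReLU: a ReLU whose output is zeroed below a learned per-feature
        # threshold, which is what makes the expanded representation sparse.
        pre = acts @ self.W_enc + self.b_enc
        return torch.relu(pre) * (pre > self.threshold).float()

    def decode(self, feats: torch.Tensor) -> torch.Tensor:
        return feats @ self.W_dec + self.b_dec

    def forward(self, acts: torch.Tensor) -> torch.Tensor:
        return self.decode(self.encode(acts))


# Hypothetical repo/filename; the release stores parameters per layer/width/sparsity.
params_path = hf_hub_download(
    repo_id="google/gemma-scope-2b-pt-res",
    filename="layer_20/width_16k/average_l0_71/params.npz",
)
params = {k: torch.from_numpy(v) for k, v in np.load(params_path).items()}

sae = JumpReLUSAE(d_model=params["W_enc"].shape[0], d_sae=params["W_enc"].shape[1])
sae.load_state_dict(params)

# Toy residual-stream activations: 4 token positions, each d_model wide.
resid = torch.randn(4, params["W_enc"].shape[0])
features = sae.encode(resid)           # wide, mostly-zero feature activations
reconstruction = sae.decode(features)  # approximate reconstruction of the input
print(features.shape, (features > 0).float().mean().item())
```

In practice the input would come from hooking a Gemma 2 layer's activations rather than from random tensors; the sketch only illustrates why the expanded representation is larger but sparser than what went in.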
