AI Safety Fundamentals: Alignment
A podcast by BlueDot Impact
83 Episodes
Constitutional AI Harmlessness from AI Feedback
Published: 19/07/2024
Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Published: 19/07/2024
Illustrating Reinforcement Learning from Human Feedback (RLHF)
Published: 19/07/2024
Chinchilla’s Wild Implications
Published: 17/06/2024
Deep Double Descent
Published: 17/06/2024
Intro to Brain-Like-AGI Safety
Published: 17/06/2024
Eliciting Latent Knowledge
Published: 17/06/2024
Toy Models of Superposition
Published: 17/06/2024
Least-To-Most Prompting Enables Complex Reasoning in Large Language Models
Published: 17/06/2024
Discovering Latent Knowledge in Language Models Without Supervision
Published: 17/06/2024
ABS: Scanning Neural Networks for Back-Doors by Artificial Brain Stimulation
Published: 17/06/2024
Two-Turn Debate Doesn’t Help Humans Answer Hard Reading Comprehension Questions
Published: 17/06/2024
Imitative Generalisation (AKA ‘Learning the Prior’)
Published: 17/06/2024
An Investigation of Model-Free Planning
Published: 17/06/2024
Low-Stakes Alignment
Published: 17/06/2024
Gradient Hacking: Definitions and Examples
Published: 17/06/2024
Empirical Findings Generalize Surprisingly Far
Published: 17/06/2024
Compute Trends Across Three Eras of Machine Learning
Published: 13/06/2024
Worst-Case Thinking in AI Alignment
Published: 29/05/2024
Public by Default: How We Manage Information Visibility at Get on Board
Published: 12/05/2024
Listen to resources from the AI Safety Fundamentals: Alignment course! https://aisafetyfundamentals.com/alignment