Best AI papers explained
A podcast by Enoch H. Kang
515 Episodes
Understanding Prompt Tuning and In-Context Learning via Meta-Learning
Published: 11/10/2025
MLPs Learn In-Context on Regression and Classification tasks
Published: 11/10/2025
Is Pre-Training Truly Better than Meta-Learning?
Published: 11/10/2025
Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models
Published: 11/10/2025
Do LLMs Recognize Your Preferences? Evaluating Personalized Preference Following in LLMs
Published: 09/10/2025
Learning Dynamics of LLM Fine-Tuning
Published: 09/10/2025
Iterative Data Smoothing: Mitigating Reward Overfitting and Overoptimization in RLHF
Published: 09/10/2025
OpenAI Agent Builder and n8n: Orchestrating Reasoning Versus Automating Process
Published: 08/10/2025
Training Agents Inside of Scalable World Models
Published: 08/10/2025
Small Language Models are the Future of Agentic AI
Published: 07/10/2025
Activation Steering in Generative Settings via Contrastive Causal Mediation Analysis
Published: 06/10/2025
Eliciting Secret Knowledge from Language Models
Published: 06/10/2025
Temporal Difference Flows
Published: 06/10/2025
Personalized Reasoning: Just-in-Time Personalization and Why LLMs Fail at It
Published: 05/10/2025
Prompt Curriculum Learning for Efficient LLM Post-Training
Published: 05/10/2025
Personalizing Reinforcement Learning from Human Feedback with Variational Preference Learning
Published: 04/10/2025
Enhancing Personalized Multi-Turn Dialogue with Curiosity Reward
Published: 04/10/2025
Learning to Summarize User Information for Personalized Reinforcement Learning from Human Feedback
Published: 04/10/2025
Distributional Preference Learning: Understanding and Accounting for Hidden Context in RLHF
Published: 03/10/2025
LIMI: Less is More for Agency
Published: 01/10/2025
Cut through the noise. We curate and break down the most important AI papers so you don’t have to.
