Host
Show Notes
Source: Xiaoyuzhou
To teach AI to reason, should you hand it an encyclopedia, or slip it a "top student's scratch paper"? A new paper argues that watching how others think works better than rote memorization. We'll also explore how far AI still is from complex engineering on the scale of "building a whole house on its own," and use a "CT scan of its brain circuits" to see why your AI butler sometimes spouts nonsense with a perfectly straight face. Even more fun, we'll reveal how AI can use self-reflection to turn itself from a struggling student into a top one, and how a single "probe" can sketch out an entire invisible "treasure map." Ready? Let's dive in!
00:00:37 Does AI need "top-student notes" too?
00:05:42 How far is AI from "building a whole house on its own"?
00:10:46 Why does your AI butler sometimes "spout nonsense with a straight face"?
00:16:33 How do you use AI to draw an invisible treasure map?
00:21:30 AI's self-improvement: the secret of going from "struggling student" to "top student"
[IR] RAG over Thinking Traces Can Improve Reasoning Tasks
[UC Berkeley]
https://arxiv.org/abs/2605.03344
---
[AI] ProgramBench: Can Language Models Rebuild Programs From Scratch?
[Meta FAIR]
https://arxiv.org/abs/2605.03546
---
[AI] What Happens Inside Agent Memory? Circuit Analysis from Emergence to Diagnosis
[City University of Hong Kong & University of Toronto]
https://arxiv.org/abs/2605.03354
---
[LG] Flow Sampling: Learning to Sample from Unnormalized Densities via Denoising Conditional Processes
[FAIR at Meta & Weizmann Institute of Science]
https://arxiv.org/abs/2605.03984
---
[AI] Self-Improvement for Fast, High-Quality Plan Generation
[Amazon]
https://arxiv.org/abs/2605.03625