Episode Description
Source: 小宇宙
Have you ever wondered what it actually looks like inside an AI's "brain"? When it fails to answer a question, is the knowledge truly missing from its store, or has it merely "misplaced the key" for the moment? And when it produces a long chain of thought, how do we tell whether it is thinking deeply or just spinning its wheels? In this episode, we peek into the inner workings of AI through five recent papers: from giving language a "CT scan" to see how meaning bends and folds, to discovering that AI can "draw" a world map from language statistics alone, to a "Goldilocks" strategy that matches it with practice problems that are "just right." Get ready as we set off to explore the curious mechanics of how AI thinks!
00:00:40 The model knows the answer, so why won't it say it?
00:05:48 Is your effort real work or just busywork?
00:11:04 Giving language a CT scan: the bends and folds inside text
00:18:28 How the world map inside a large model's "brain" actually gets drawn
00:24:03 Where is the ceiling for AI grinding through practice problems?
Papers covered in this episode:
[CL] Empty Shelves or Lost Keys? Recall Is the Bottleneck for Parametric Factuality
[Google Research & Technion]
https://arxiv.org/abs/2602.14080
---
[CL] Think Deep, Not Just Long: Measuring LLM Reasoning Effort via Deep-Thinking Tokens
[Google & University of Virginia]
https://arxiv.org/abs/2602.13517
---
[LG] Text Has Curvature
[CMU & Meta]
https://arxiv.org/abs/2602.13418
---
[LG] Symmetry in language statistics shapes the geometry of model representations
[Google DeepMind & UC Berkeley & EPFL]
https://arxiv.org/abs/2602.15029
---
[LG] Goldilocks RL: Tuning Task Difficulty to Escape Sparse Rewards for Reasoning
[EPFL & Apple]
https://arxiv.org/abs/2602.14868