Duration: 27 minutes
Plays: 129
Published: 2 days ago
Have you ever wondered how we should get along with increasingly intelligent AI? In this episode, we take a fascinating journey into the AI mind. We'll walk through several recent papers and watch scientists work like surgeons, neuroscientists, and behavioral coaches as they probe the AI "brain": from installing a surgery-free "skill plug-in", to finding the "reward switch" buried deep inside; from curing the "attention deficit" that makes models dumber as they grow larger, to teaching them to crack hard problems "divide and conquer" style, like top experts. Ready? Let's lift the lid on the AI black box together.
00:00:37 Giving AI a "plug-in": smarter without surgery?
00:05:31 Dissecting the AI brain: we found its "reward switch"
00:11:35 Setting rules for AI: we finally have a "user manual"
00:16:41 AI's "attention deficit disorder"
00:21:56 Breaking problems down: a superpower we've been overlooking
Papers covered in this episode:
[LG] ReasonCACHE: Teaching LLMs To Reason Without Weight Updates
[FAIR at Meta & MIT CSAIL]
https://arxiv.org/abs/2602.02366
---
[CL] Sparse Reward Subsystem in Large Language Models
[Tsinghua University & Stanford University]
https://arxiv.org/abs/2602.00986
---
[LG] Interpreting and Controlling Model Behavior via Constitutions for Atomic Concept Edits
[Google DeepMind]
https://arxiv.org/abs/2602.00092
---
[LG] TQL: Scaling Q-Functions with Transformers by Preventing Attention Collapse
[Stanford University]
https://arxiv.org/abs/2602.01439
---
[CL] Training LLMs for Divide-and-Conquer Reasoning Elevates Test-Time Scalability
[University of California, Los Angeles & Microsoft]
https://arxiv.org/abs/2602.02477