Duration:
29 minutes
Plays:
26
Published:
10 hours ago
Description:
Have you ever wondered why AI models built at enormous cost can sometimes end up "learning themselves stupid"? And when an AI's "dictionary" has no word for "I was wrong," how do we teach it to reflect on its own mistakes? In this episode, we climb inside the AI's brain and, starting from several recent papers, look at how an AI can diagnose the internal "work stoppages" in its own circuitry, how it can become safer through an "infinite game," and whether, when it paints, it is genuinely creating or merely "reciting from memory."
00:00:30 The curse of scale: why do AI models "learn themselves stupid"?
00:06:29 There is no "I was wrong" in AI's language
00:11:35 Want safer AI? The answer may lie in an "infinite game"
00:16:13 How do we see through the rules of the world? AI offers a new approach
00:23:44 Inside AI image generation: where does its "copying" hide?
Papers covered in this episode:
[LG] Understanding Scaling Laws in Deep Neural Networks via Feature Learning Dynamics
[DePaul University & Iowa State University]
https://arxiv.org/abs/2512.21075
---
[CL] Reflection Pretraining Enables Token-Level Self-Correction in Biological Sequence Models
[Fudan University & Shanghai Artificial Intelligence Laboratory]
https://arxiv.org/abs/2512.20954
---
[LG] Safety Alignment of LMs via Non-cooperative Games
[FAIR at Meta & University of Tübingen]
https://arxiv.org/abs/2512.20806
---
[LG] Active inference and artificial reasoning
[University College London & VERSES]
https://arxiv.org/abs/2512.21129
---
[LG] Generalization of Diffusion Models Arises with a Balanced Representation Space
[University of Michigan]
https://arxiv.org/abs/2512.20963