Duration: 36 minutes
Plays: 152
Published: 4 days ago
Description:
In this episode, we challenge a few assumptions about AI that are often taken for granted: is it truly all-powerful, or just a pattern parrot? We'll see that AI can turn the tables and spot errors in human scientific papers, yet the data it predicts can itself be riddled with pitfalls. Going further, we'll examine the logical roots of the "ceiling" on machine innovation, and explore clever new ideas for giving AI "coordination" and "self-improvement".
00:00:28 AI as proofreader: how many bugs are in the top-conference papers we read?
00:05:55 Why does your AI keep "acting dumb"? What's missing isn't intelligence, it's "coordination"
00:12:48 A splash of cold water on the AI hype: why machines can't truly innovate
00:19:44 AI-predicted data: windfall or trap?
00:30:00 AI's self-improvement: how does it get smarter without a human teacher?
Papers covered in this episode:
[AI] To Err Is Human: Systematic Quantification of Errors in Published AI Papers via LLM Analysis
[Together AI & NEC Labs America]
https://arxiv.org/abs/2512.05925
---
[AI] The Missing Layer of AGI: From Pattern Alchemy to Coordination Physics
[Stanford University]
https://arxiv.org/abs/2512.05765
---
[AI] On the Computability of Artificial General Intelligence
[N/A]
https://arxiv.org/abs/2512.05212
---
[LG] Do We Really Even Need Data? A Modern Look at Drawing Inference with Predicted Data
[Fred Hutchinson Cancer Center & University of Washington]
https://arxiv.org/abs/2512.05456
---
[CV] Self-Improving VLM Judges Without Human Annotations
[FAIR at Meta]
https://arxiv.org/abs/2512.05145