In this episode, we dive straight into the deep waters of five brand-new papers to see what wonders have been happening in the world of AI. We'll explore the mysterious "DNA" map behind the slogan "brute force works miracles" — a chart of the laws by which AI grows; learn the most efficient art of "laziness," watching how AI achieves astonishing speedups by "copying its own homework"; fit AI's brain with a "private dictionary," so its knowledge can be not only retrieved but also "surgically" edited with precision; put on CT goggles to see whether a clever AI solving hard problems is reasoning rigorously or playing a high-dimensional guessing game; and finally, learn the art of resource management, watching AI act like a savvy project manager who puts the best steel on the blade's edge. Ready? Let's go!
00:00:50 An AI upgrade guide: is there a map behind "brute force works miracles"?
00:06:50 Why is "copying your own homework" the most efficient way to be lazy?
00:11:56 Fitting AI's brain with a "private dictionary"
00:16:39 Is your AI reasoning, or just guessing?
00:22:20 How does the AI world put its "best steel" on the blade's edge?
Papers covered in this episode:
[LG] On the origin of neural scaling laws: from random graphs to natural language
[Meta Superintelligence Lab & Axiom Math]
https://arxiv.org/abs/2601.10684
---
[LG] Single-Stage Huffman Encoder for ML Compression
[Google LLC]
https://arxiv.org/abs/2601.10673
---
[LG] STEM: Scaling Transformers with Embedding Modules
[Meta AI & CMU]
https://arxiv.org/abs/2601.10639
---
[LG] Are Your Reasoning Models Reasoning or Guessing? A Mechanistic Analysis of Hierarchical Reasoning Models
[Shanghai Qi Zhi Institute]
https://arxiv.org/abs/2601.10679
---
[CL] TRIM: Hybrid Inference via Targeted Stepwise Routing in Multi-Step Reasoning Tasks
[Amazon & CMU]
https://arxiv.org/abs/2601.10245