Ever wondered why AI sometimes insists a deer is a horse? Or why, faced with a hard problem, its neurons start "slacking off" en masse? In this episode, drawing on several recent papers, we give the AI brain a "CT scan" and a round of "gene sequencing", revealing the surprising laws underlying its perception, learning, reasoning, and efficiency.

00:00:26 AI's Achilles' heel: a curse of dimensionality
00:05:34 The evolution of AI image generation: why don't the masters need endless drills?
00:10:02 When AI thinks, should we laugh? No, its neurons are "slacking off"
00:15:44 How do you give AI a "CT scan" at 50x the efficiency?
00:21:34 AI's "impossible triangle": compute, speed, and intelligence

Papers covered in this episode:
[LG] Solving adversarial examples requires solving exponential misalignment [Stanford University & Aisle]
https://arxiv.org/abs/2603.03507
---
[LG] Generalization Properties of Score-matching Diffusion Models for Intrinsically Low-dimensional Data [University of Michigan & Google DeepMind & UC Berkeley]
https://arxiv.org/abs/2603.03700
---
[CL] Farther the Shift, Sparser the Representation: Analyzing OOD Mechanisms in LLMs [Rutgers University & Northwestern University & UKP Lab, TU Darmstadt]
https://arxiv.org/abs/2603.03415
---
[CL] Compressed Sensing for Capability Localization in Large Language Models [CMU]
https://arxiv.org/abs/2603.03335
---
[LG] Why Are Linear RNNs More Parallelizable? [Allen Institute for AI & Rheinland-Pfälzische Technische Universität]
https://arxiv.org/abs/2603.03612
Today we explore how to take AI from a chat companion that can only "talk" and turn it into an agent that can truly see, think, and act. We'll look at how recent papers let AI "open its eyes to the world", build an internal "navigation system" for predicting the future, and teach itself right from wrong using nothing but vast amounts of ordinary text. More importantly, when AI acts on our behalf, how does it learn to look before it leaps, walking the fine line between "helpful" and "safe"? Ready? Let's trace AI's journey from rookie to seasoned pro.

00:00:40 Why does AI need to "open its eyes to the world"?
00:07:16 Why do the experts come with a built-in "navigation system"?
00:13:19 AI's "permission to act": what does it think about before it moves?
00:19:12 Turning plain water into rich stock: how AI learns "good vs. bad" from ordinary text
00:24:47 How do you train a rookie AI into a seasoned pro?

Papers covered in this episode:
[CV] Beyond Language Modeling: An Exploration of Multimodal Pretraining [FAIR, Meta]
https://arxiv.org/abs/2603.03276
---
[LG] What Capable Agents Must Know: Selection Theorems for Robust Decision-Making under Uncertainty [CMU]
https://arxiv.org/abs/2603.02491
---
[LG] Learning When to Act or Refuse: Guarding Agentic Reasoning Models for Safe Multi-Step Tool Use [Microsoft Research]
https://arxiv.org/abs/2603.03205
---
[LG] Scaling Reward Modeling without Human Supervision [Harvard University & Cornell University]
https://arxiv.org/abs/2603.02225
---
[LG] Safety Training Persists Through Helpfulness Optimization in LLM Agents [UC Berkeley]
https://arxiv.org/abs/2603.02229
Today we're not talking about how big model parameters get; we're talking about how to make AI think better, sometimes in ways that defy common sense. For instance, why can cramming an AI with extra "tutoring" actually make it dumber? We'll also explore how to guide AI through hard problems the way a skilled teacher would, instead of just feeding it the answers. Going further, we'll see how to train AI to analyze code like a detective who "reasons things through", and how a whole system can learn to collaborate dynamically and find the most efficient way to "cut corners".

00:00:35 In the era of large AI models, how do you get more for less?
00:05:47 The "tutoring" trap: why does an AI get dumber the more it studies?
00:11:37 Why don't great tutors just hand over the answer?
00:16:48 Teaching AI to "reason things through": how a code-world detective is made
00:22:00 Teaching AI to "save time": a smarter kind of fast

Papers covered in this episode:
[LG] Rich Insights from Cheap Signals: Efficient Evaluations via Tensor Factorization [Google DeepMind & University of Michigan]
https://arxiv.org/abs/2603.02029
---
[LG] Theoretical Perspectives on Data Quality and Synergistic Effects in Pre- and Post-Training Reasoning Models [University of Southern California & University of California Los Angeles & Google Research]
https://arxiv.org/abs/2603.01293
---
[LG] Learn Hard Problems During RL with Reference Guided Fine-tuning [ByteDance Seed & UC Berkeley & CMU]
https://arxiv.org/abs/2603.01223
---
[LG] Agentic Code Reasoning [Meta]
https://arxiv.org/abs/2603.01896
---
[CL] Learning to Draft: Adaptive Speculative Decoding with Reinforcement Learning [Microsoft Research Asia & Peking University]
https://arxiv.org/abs/2603.01639
Have you ever considered that a smarter AI might need to learn not to remember everything, but to practice "selective amnesia"? The recent papers in this episode are full of such counterintuitive insights. We'll explore how AI detoxification can go beyond "watching its mouth" to deep "surgery" on its thoughts, how AI can dynamically evolve its own problem-solving methodology like a top performer, and even how it can acquire one of humanity's most precious traits: knowing what it doesn't know.

00:00:31 AI "detox": real surgery, or just painkillers?
00:04:49 AI's memory problem: what works besides rote memorization?
00:10:33 Your methods need to evolve too
00:16:14 Can AI's memory actually become its burden?
00:21:15 Too clever for its own good: AI needs "self-knowledge" too

Papers covered in this episode:
[LG] Detoxifying LLMs via Representation Erasure-Based Preference Optimization [McGill University & Google DeepMind]
https://arxiv.org/abs/2602.23391
---
[LG] Memory Caching: RNNs with Growing Memory [Google Research]
https://arxiv.org/abs/2602.24281
---
[LG] EvoX: Meta-Evolution for Automated Discovery [UC Berkeley]
https://arxiv.org/abs/2602.23413
---
[CL] Do LLMs Benefit From Their Own Words? [MIT & IBM Research]
https://arxiv.org/abs/2602.24287
---
[LG] RewardUQ: A Unified Framework for Uncertainty-Aware Reward Models [ETH Zurich]
https://arxiv.org/abs/2602.24040
Have you ever wondered why the more practice problems an AI grinds through, the more likely it is to stumble on easy ones? In this episode we dive into the inner world of AI models: how they fall into the trap of "teaching to the test", and how they get stuck in rock-paper-scissors-style logical loops. More importantly, we'll see how researchers use ideas like "mind reading" and a "grudge notebook" to teach AI to learn from failure and find its way out. Get ready for a deep dive into how AI learns and how we evaluate it.

00:00:35 Why does more problem drilling lower AI's first-try accuracy?
00:05:35 AI's "good memory" versus its "trusty notebook"
00:10:06 The "teaching to the test" trap for AI programmers
00:14:17 The "rock-paper-scissors" problem in the AI world
00:19:08 The robot coach's "mind reading"

Papers covered in this episode:
[LG] Why Pass@k Optimization Can Degrade Pass@1: Prompt Interference in LLM Post-training [Singapore University of Technology and Design & University of Maryland]
https://arxiv.org/abs/2602.21189
---
[LG] Exploratory Memory-Augmented LLM Agent via Hybrid On- and Off-Policy Optimization [Microsoft Research]
https://arxiv.org/abs/2602.23008
---
[LG] ISO-Bench: Can Coding Agents Optimize Real-World Inference Workloads? [Lossfunk]
https://arxiv.org/abs/2602.19594
---
[LG] Back to Blackwell: Closing the Loop on Intransitivity in Multi-Objective Preference Fine-Tuning [CMU]
https://arxiv.org/abs/2602.19041
---
[RO] TOPReward: Token Probabilities as Hidden Zero-Shot Rewards for Robotics [University of Washington & Amazon]
https://arxiv.org/abs/2602.19313
We've given AI powerful capabilities, only to find it sometimes behaves like an "agent of chaos", willing to burn down the whole house to keep one secret. We assumed top-tier AI needs full autonomy, yet a recent paper says a detailed "task checklist" actually helps it earn you more money. And when AI's output comes faster than we can verify it, how should we measure its value? In this episode, starting from several recent papers, we discuss how to harness these powerful tools whose capabilities outstrip their judgment, and even try translating AI's "intuition" into "formulas" we can understand.

00:00:35 Would you trust an AI as your assistant?
00:05:57 AI is racing ahead, but is your value "spinning idle"?
00:13:56 How do you translate AI's "intuition" into human "formulas"?
00:19:45 Want AI to make money for you? Don't let it "overthink"
00:24:33 A new engine for 3D reconstruction

Papers covered in this episode:
[AI] Agents of Chaos [Northeastern University]
https://arxiv.org/abs/2602.20021
---
[AI] Some Simple Economics of AGI [MIT & WashU & UCLA]
https://arxiv.org/abs/2602.20946
---
[LG] SymTorch: A Framework for Symbolic Distillation of Deep Neural Networks [University of Cambridge]
https://arxiv.org/abs/2602.21307
---
[AI] Toward Expert Investment Teams: A Multi-Agent LLM System with Fine-Grained Trading Tasks [Japan Digital Design, Inc & University of Oxford]
https://arxiv.org/abs/2602.23330
---
[CV] VGG-T³: Offline Feed-Forward 3D Reconstruction at Scale [NVIDIA]
https://arxiv.org/abs/2602.23361
We tend to assume AI progress comes from "brute force miracles", but today we're talking about something cooler: AI is learning to use finesse. Recent papers suggest that instead of having a model grind blindly through problems, we can lay down an efficient "semantic pipeline" for it, and even teach it to "reflect" on its own thinking the way humans do. Meanwhile, AI is also turning from a generalist into a super-specialist that can manage your private library, design chips, and even assemble new tools from old parts. Ready? Let's see how AI is evolving from "stronger" to "smarter".

00:00:35 Brute force miracles? AI's other shortcut
00:06:08 A private library, plus a librarian who instantly gets you
00:12:19 Building AI now comes with an "IKEA manual"?
00:17:49 AI doesn't just chat; now it designs chips
00:25:09 Teaching AI to reflect matters more than feeding it knowledge

Papers covered in this episode:
[LG] Semantic Tube Prediction: Beating LLM Data Efficiency with JEPA [Atlassian & NYU & Brown]
https://arxiv.org/abs/2602.22617
---
[IR] DS SERVE: A Framework for Efficient and Scalable Neural Retrieval [UC Berkeley & University of Illinois Urbana–Champaign]
https://arxiv.org/abs/2602.22224
---
[CL] dLLM: Simple Diffusion Language Modeling [UC Berkeley & UIUC]
https://arxiv.org/abs/2602.22661
---
[LG] ArchAgent: Agentic AI-driven Computer Architecture Discovery [Google & UC Berkeley]
https://arxiv.org/abs/2602.22425
---
[LG] Mirroring the Mind: Distilling Human-Like Metacognitive Strategies into Large Language Models [Seoul National University]
https://arxiv.org/abs/2602.22508
Today we talk about making AI faster, smarter, more frugal, and better at cooperation. We'll explore how a clever pruning method speeds up the construction of AI's information map by more than tenfold, and how a small "surgery" on the AI brain unblocks its multi-step reasoning. We'll also find that AI can teach itself to drive by watching "in-the-wild" YouTube videos, and can budget like a sharp project manager, spending every dollar where it counts. Finally, we'll watch AI learn "perspective taking" for the first time inside a game world, grasping a reality that we jointly construct.

00:00:40 Why is your AI "slow to respond"? The problem may be graph building
00:06:29 A small "surgery" on the AI brain
00:11:59 A new way for AI to learn driving: let YouTube be the free coach
00:17:32 AI's way to save money: spend where it counts
00:22:37 AI learned "perspective taking": what changes?

Papers covered in this episode:
[IR] PiPNN: Ultra-Scalable Graph-Based Nearest Neighbor Indexing [UMD & Google Research]
https://arxiv.org/abs/2602.21247
---
[LG] Interleaved Head Attention [Meta & UT Austin & MIT]
https://arxiv.org/abs/2602.21371
---
[CV] Learning to Drive is a Free Gift: Large-Scale Label-Free Autonomy Pretraining from Unposed In-The-Wild Videos [Applied Intuition & Stanford University & UC Berkeley]
https://arxiv.org/abs/2602.22091
---
[CL] Budget-Aware Agentic Routing via Boundary-Guided Training [University of Cambridge & M365 Research, Microsoft]
https://arxiv.org/abs/2602.21227
---
[CV] Solaris: Building a Multiplayer Video World Model in Minecraft [New York University]
https://arxiv.org/abs/2602.22208
Have you ever imagined an AI that can not only do mathematical research on its own, but also knows to stay honestly silent when it can't find a solution? In this episode we walk through several recent papers: how AI learns to review and reflect like an expert, and how a "wisdom handbook" turns a clumsy apprentice into a master overnight. We'll also discuss why robots trained in simulation get "culture shock" when they reach the real world, and, most sobering of all, why AI is becoming an "information parrot" with too good a memory and too loose a mouth. Ready? Let's go!

00:00:38 Can machines already do independent mathematical research?
00:04:23 What separates the "smart" from the rest: how they handle mistakes
00:10:32 The robot coach's secret playbook: why do robots that "graduate" from simulation get dumber in reality?
00:17:07 No brain transplant needed: how to turn a clumsy apprentice into a master
00:22:13 Is AI turning into a blabbermouth?

Papers covered in this episode:
[LG] Aletheia tackles FirstProof autonomously [Google DeepMind]
https://arxiv.org/abs/2602.21201
---
[LG] Learning from Trials and Errors: Reflective Test-Time Planning for Embodied LLMs [Stanford University & Northwestern University]
https://arxiv.org/abs/2602.21198
---
[RO] What Matters for Simulation to Online Reinforcement Learning on Real Robots [ETH Zurich & Google DeepMind]
https://arxiv.org/abs/2602.20220
---
[CL] Prompt-Level Distillation: A Non-Parametric Alternative to Model Fine-Tuning for Efficient Reasoning [Google]
https://arxiv.org/abs/2602.21103
---
[CL] Personal Information Parroting in Language Models [CMU & University of Washington]
https://arxiv.org/abs/2602.20580
Today's topic is a fun one: how do we give a clever AI genuine "wisdom"? Through several recent papers, we'll see AI evolving from "laborer" to "strategist". We'll watch AI learn to plan before it acts instead of rushing in; to handle complex tasks by "imagining ahead" rather than through brute force; even to spark its own reasoning by posing good "stepping-stone" questions; and to grasp that "less is more", where slowing down at the right moments actually improves efficiency. Ready? Let's explore how AI's wisdom evolves.

00:00:39 AI evolution: from "brute force miracles" to "plan, then act"
00:06:24 Teaching AI teamwork: what can we learn from it?
00:12:44 Why do masters open with a "dumb" question?
00:17:36 Great achievements take imagination, not brute force
00:23:00 The highest form of efficiency is knowing when to slow down

Papers covered in this episode:
[LG] K-Search: LLM Kernel Generation via Co-Evolving Intrinsic World Model [UC Berkeley]
https://arxiv.org/abs/2602.19128
---
[LG] AdaEvolve: Adaptive LLM Driven Zeroth-Order Optimization [UC Berkeley]
https://arxiv.org/abs/2602.20133
---
[LG] Asking the Right Questions: Improving Reasoning with Generated Stepping Stones [FAIR at Meta]
https://arxiv.org/abs/2602.19069
---
[LG] Compositional Planning with Jumpy World Models [FAIR at Meta & Mila – Québec AI Institute]
https://arxiv.org/abs/2602.19634
---
[LG] Less is More: Convergence Benefits of Fewer Data Weight Updates over Longer Horizon [Google Research & EPFL]
https://arxiv.org/abs/2602.19510
Have you ever wondered what a truly intelligent system wins with? Brute force, or something cleverer? In this episode we explore some counterintuitive "power moves" from the AI world: a "blindfolded" AI painter that produces masterpieces without a map, and models that stop obsessing over a single "optimal solution" and instead elegantly chart a whole "map of possibilities". We'll also see how, facing vast genomic texts and bloated models, AI learned intelligent compression that "grabs the key points" and a slimming technique that folds rather than cuts. Finally, why does a seemingly "dumb" method deliver a surprising speedup for large models? Get ready to uncover the insights hidden in these recent papers.

00:00:46 The secret of AI image generation: why don't the best need a map?
00:07:17 There's more than one optimal solution: how to elegantly "take them all"
00:13:52 An AI that grabs the key points: how to read volumes of genomic scripture
00:19:03 Slimming down AI models: cut, or fold?
00:23:27 Optimal is not always best: how a "dumb method" speeds up large models

Papers covered in this episode:
[LG] The Geometry of Noise: Why Diffusion Models Don't Need Noise Conditioning [Google]
https://arxiv.org/abs/2602.18428
---
[LG] MePoly: Max Entropy Polynomial Policy Optimization [University of Michigan & UC Berkeley]
https://arxiv.org/abs/2602.17832
---
[LG] GeneZip: Region-Aware Compression for Long Context DNA Modeling [Mila - Québec AI Institute]
https://arxiv.org/abs/2602.17739
---
[LG] Cut Less, Fold More: Model Compression through the Lens of Projection Geometry [Graz University of Technology]
https://arxiv.org/abs/2602.18116
---
[LG] Dual Length Codes for Lossless Compression of BFloat16 [Google]
https://arxiv.org/abs/2602.17849
Have you ever thought that AI could crack hard problems through hands-on trial and error, like a veteran craftsman, or learn to cooperate with peers by "getting out into the world", like a young person fresh out of school? Some recent papers suggest that the secret to smarter AI may not be simply piling on compute, but teaching it to "review its own games", helping it find the hidden map that steers its behavior, and even using "small beats big" tricks to leap ahead in efficiency. Today, let's explore how AI learns to think and grow the way people do.

00:00:32 "Deliberate practice" for AI: solving problems like a veteran craftsman
00:05:57 Want a kinder AI? Let it see more of the world
00:11:08 AI's "hidden map": why can you never quite steer it?
00:16:19 AI forecasting doesn't have to rely on brute force
00:21:29 How can AI avoid making the same mistake twice?

Papers covered in this episode:
[LG] FAMOSE: A ReAct Approach to Automated Feature Discovery [Amazon]
https://arxiv.org/abs/2602.17641
---
[LG] Multi-agent cooperation through in-context co-player inference [Google]
https://arxiv.org/abs/2602.16301
---
[LG] The Information Geometry of Softmax: Probing and Steering [University of Chicago & INSEAD]
https://arxiv.org/abs/2602.15293
---
[LG] Reverso: Efficient Time Series Foundation Models for Zero-shot Forecasting [MIT & Allen Institute for AI & Qube Research & Technologies]
https://arxiv.org/abs/2602.17634
---
[LG] Experiential Reinforcement Learning [University of Southern California & Microsoft & University of Pennsylvania]
https://arxiv.org/abs/2602.13949