In this episode, we dive into AI's brain to see how it actually thinks. We'll find that AI not only cultivates "inner strength" to punch above its weight, but also stages an elaborate "inner monologue" that makes it hard to tell real reasoning from performance. We'll also uncover its universal "learning formula", see why a clever AI can fall into a "cleverness trap", and watch it learn to run "trial and error" efficiently on our behalf.

00:00:32 AI's "inner strength": the secret to giving small models big wisdom (a minimal code sketch of the looping idea follows the paper list)
00:05:52 AI's "inner monologue": how much of the visible thinking is an act?
00:10:49 AI's "cleverness trap": why does knowing more make it easier to slip up?
00:16:26 Decoding AI's "learning formula": many methods, one underlying principle
00:21:32 Let AI run the "trial and error" for you: how much effort can we save?

Papers covered in this episode:
[CL] Scaling Latent Reasoning via Looped Language Models [ByteDance Seed] https://arxiv.org/abs/2510.25741
---
[LG] Can Aha Moments Be Fake? Identifying True and Decorative Thinking Steps in Chain-of-Thought [Northeastern University & UC Berkeley] https://arxiv.org/abs/2510.24941
---
[CL] Are Language Models Efficient Reasoners? A Perspective from Logic Programming [ETH Zürich & EPFL] https://arxiv.org/abs/2510.25626
---
[CL] Language Model Behavioral Phases are Consistent Across Architecture, Training Data, and Scale [MIT & UCSD] https://arxiv.org/abs/2510.24963
---
[LG] GPTOpt: Towards Efficient LLM-Based Black-Box Optimization [MIT] https://arxiv.org/abs/2510.25404
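As referenced in the first segment above: the looped-LM paper's headline idea, as the title suggests, is spending compute by reusing layers (looping) rather than adding parameters, so a small model can "think longer" in latent space. Below is a minimal PyTorch-style sketch of that looping pattern; the module, dimensions, and fixed loop count are our own illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class LoopedBlock(nn.Module):
    """Toy weight-tied loop: one transformer layer applied n_loops times.

    Illustrative sketch only -- the paper's real architecture, loop
    schedule, and exit criteria are more involved than this.
    """
    def __init__(self, d_model: int = 256, n_heads: int = 4, n_loops: int = 4):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.n_loops = n_loops  # more loops = more latent "thinking", same weights

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each pass refines the hidden state; parameters are shared across
        # passes, so effective depth grows without growing the model.
        for _ in range(self.n_loops):
            x = self.layer(x)
        return x

if __name__ == "__main__":
    x = torch.randn(2, 16, 256)    # (batch, sequence, hidden)
    print(LoopedBlock()(x).shape)  # torch.Size([2, 16, 256])
```

The design intuition: depth-via-loops trades parameter count for iteration count, which is one way a small model can punch above its weight.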
Does making AI better at solving problems always mean stronger and faster? Today we take a different angle and look at how AI can learn more cleverly. We'll explore how to turn AI from an "all-purpose architect" into a "specialized construction crew", and even teach it to read your mind from how long you hesitate when choosing. We'll also see that AI drug discovery is no longer working blind, now that models are given spatial intuition, while audio AI has finally learned to read MP3s directly instead of wastefully decoding them first. Get ready to enter the world of AI's clever tricks!

00:00:36 A new playbook for AI drug discovery: how do you put every atom in the right place?
00:06:12 Let AI be a good "construction crew" rather than the "chief architect"
00:11:42 A new idea for autonomous driving: drop the teacher and still top the class?
00:17:20 AI mind-reading: the faster you choose, the better it learns (a toy loss sketch follows the paper list)
00:22:34 A new AI paradigm: why is your MP3 "smarter" than raw audio?

Papers covered in this episode:
[LG] Pearl: A Foundation Model for Placing Every Atom in the Right Location [Genesis Molecular AI] https://arxiv.org/abs/2510.24670
---
[LG] TDFlow: Agentic Workflows for Test Driven Software Engineering [CMU & UC San Diego] https://arxiv.org/abs/2510.23761
---
[RO] ZTRS: Zero-Imitation End-to-end Autonomous Driving with Trajectory Scoring [Fudan University & NVIDIA] https://arxiv.org/abs/2510.24108
---
[LG] Preference Learning with Response Time: Robust Losses and Guarantees [Stanford University] https://arxiv.org/abs/2505.22820
---
[LG] Transformers from Compressed Representations [King Abdullah University of Science and Technology] https://arxiv.org/abs/2510.23665
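On the "mind-reading" segment: the response-time paper's premise is that how fast a person picks between options carries information about preference strength. As a concrete (and purely hypothetical) illustration, here is a Bradley-Terry preference loss where fast choices are up-weighted; the exponential weighting, the tau constant, and the function name are our inventions, not the paper's robust losses.

```python
import torch

def bt_loss_with_response_time(score_a, score_b, chose_a, rt_seconds, tau=5.0):
    """Toy Bradley-Terry loss weighted by response time.

    Hypothetical stand-in: a quick choice (small rt_seconds) is treated as
    a confident preference and weighted near 1; a slow, hesitant choice is
    down-weighted. The paper derives principled robust losses instead.
    """
    margin = torch.where(chose_a, score_a - score_b, score_b - score_a)
    nll = torch.nn.functional.softplus(-margin)  # equals -log sigmoid(margin)
    weight = torch.exp(-rt_seconds / tau)        # faster answer -> larger weight
    return (weight * nll).mean()

scores_a = torch.tensor([1.2, 0.3])    # reward model scores for option A
scores_b = torch.tensor([0.4, 0.9])    # reward model scores for option B
chose_a = torch.tensor([True, False])  # which option the human picked
rt = torch.tensor([0.8, 12.0])         # seconds of hesitation per choice
print(bt_loss_with_response_time(scores_a, scores_b, chose_a, rt))
```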
Today we tackle a fundamental question: what does a smart AI actually look like? Is it a problem-solving master that can sharply compress its thinking time, or a robot that evolves on its own inside an imagined sandbox? We'll also explore why AI is sometimes an all-knowing star student, sometimes a rote-memorizing bookworm, and sometimes spouts nonsense with a straight face. Finally, you'll find that the ultimate trick for making AI understand you may be teaching it to "play dumb" and ask clarifying questions first. Let's dissect AI's thinking core through several recent papers.

00:00:36 The secret of smarter AI: not knowing more, but thinking faster
00:06:29 Give robots a "sandbox" and let them evolve in imagination
00:12:37 Why is AI both a star student and a bookworm?
00:17:19 AI's "off days": one problem, two symptoms
00:23:35 How do you make machines understand you? Teach them to "play dumb" first

Papers covered in this episode:
[LG] AI Agents as Universal Task Solvers: It’s All About Time [AWS Agentic AI] https://arxiv.org/abs/2510.12066
---
[RO] Ctrl-World: A Controllable Generative World Model for Robot Manipulation [Stanford University & Tsinghua University] https://arxiv.org/abs/2510.10125
---
[LG] LLM Knowledge is Brittle: Truthfulness Representations Rely on Superficial Resemblance [FAIR at Meta & University of Zurich] https://arxiv.org/abs/2510.11905
---
[CL] Generation Space Size: Understanding and Calibrating Open-Endedness of LLM Generations [Stanford University] https://arxiv.org/abs/2510.12699
---
[LG] Asking Clarifying Questions for Preference Elicitation With Large Language Models [Google] https://arxiv.org/abs/2510.12015
The stronger the tools, the deeper the anxiety. When Sora can turn one sentence into a film and Claude can produce code in an instant, we seem to be holding legendary weapons, so why the panic?

"With AI this good, if I still can't make money, I'm worthless." Has that sentence haunted you like a ghost too?

In this episode, I'll help you step out of the spiral where stronger tools bring harsher self-judgment. We'll go from the history of the California Gold Rush to the survival rules of today's AI era, and from the spread of the camera to how scarcity in creativity shifts.

You'll hear:
* Why does AI lower the "execution barrier" while raising the "cognitive barrier"?
* From "gold digger" to "water seller": where does your value sit?
* How can probabilistic thinking help you find the 1% golden opportunity among 100 AI-generated options?

Don't let the best tools become the heaviest shackles. Your aesthetics, taste, and empathy are the core assets AI cannot price. Tune in to say goodbye to money anxiety and find your true coordinates of value in the AI era.
Want to know why teaching a robot to play with the "dumbest" toys can make it learn to grasp anything? In this episode, we'll explore how to turn the mysterious "alchemy" of AI training into a rigorous science, see how an AI master can learn to "speak plainly" and bring an AI novice along, and finally reveal that the many flashy tuning recipes all hide one and the same simple objective. Let's jump straight into today's frontier roundup!

00:00:28 A tuning guide for large models: from alchemy to science
00:05:39 Back to basics: the dumbest method may be the best one
00:11:25 Want smarter robots? First teach them to play with "dumb" toys
00:16:41 How can an AI master bring an AI novice along?
00:00 Tuning recipes for large models: all roads lead to Rome? (the shared objective is written out after the paper list)

Papers covered in this episode:
[LG] The Art of Scaling Reinforcement Learning Compute for LLMs [Meta & UT Austin & UC Berkeley] https://arxiv.org/abs/2510.13786
---
[RO] VLA-0: Building State-of-the-Art VLAs with Zero Modification [NVIDIA] https://arxiv.org/abs/2510.13054
---
[RO] Learning to Grasp Anything by Playing with Random Toys [UC Berkeley] https://arxiv.org/abs/2510.12866
---
[LG] Tandem Training for Language Models [Microsoft & EPFL & University of Toronto] https://arxiv.org/abs/2510.13551
---
[LG] What is the objective of reasoning with reinforcement learning? [University of Pennsylvania & UC Berkeley] https://arxiv.org/abs/2510.13651
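For the "all roads lead to Rome" segment: the final paper asks which objective RL-style reasoning training actually optimizes. As shared background (the standard formulation used across this literature, not necessarily the paper's specific result), most recipes are variants of one KL-anchored expected-reward objective:

```latex
% Generic RL post-training objective; beta controls the KL anchor strength.
\max_{\theta}\;
\mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi_\theta(\cdot \mid x)}\!\left[ r(x, y) \right]
\;-\;
\beta\,\mathbb{E}_{x \sim \mathcal{D}}\!\left[
  \mathrm{KL}\!\left( \pi_\theta(\cdot \mid x) \,\Vert\, \pi_{\mathrm{ref}}(\cdot \mid x) \right)
\right]
```

Here \(\pi_\theta\) is the model being trained, \(\pi_{\mathrm{ref}}\) a reference model, and \(r\) the task reward; the many tuning recipes differ mostly in how they estimate and optimize this quantity.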
Have you ever wondered what subtler methods, beyond feeding it more data, could make AI smarter? The recent papers in this episode reveal a "growth manual" for AI: they upgrade the view of training from "descending a hill" to "launching a rocket", design a university curriculum that moves from general education to a major, teach AI the foresight of predicting "future summaries", and give it the wisdom to "catch its breath" and think slowly at key moments. Today, let's see how this research is reshaping AI's "learning methodology".

00:00:33 Training AI: you thought it was hill climbing, but it's a rocket launch?
00:05:56 AI's growth manual: take an extra "major course"
00:11:26 The ultimate slimming plan for AI models: how can an elephant stay light yet smart?
00:16:53 AI's foresight: caring about more than the next word (a toy sketch follows the paper list)
00:21:10 AI's "moment of contemplation": the wisdom of fast and slow

Papers covered in this episode:
[LG] Optimal Control Theoretic Neural Optimizer: From Backpropagation to Dynamic Programming [Meta & Georgia Institute of Technology & Apple] https://arxiv.org/abs/2510.14168
---
[CL] Midtraining Bridges Pretraining and Posttraining Distributions [CMU] https://arxiv.org/abs/2510.14865
---
[LG] BitNet Distillation [Microsoft Research] https://arxiv.org/abs/2510.13998
---
[LG] Beyond Multi-Token Prediction: Pretraining LLMs with Future Summaries [FAIR at Meta & CMU] https://arxiv.org/abs/2510.14751
---
[CL] Catch Your Breath: Adaptive Computation for Self-Paced Sequence Production [Google DeepMind] https://arxiv.org/abs/2510.13879
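On the "foresight" segment: the future-summaries paper, per its title, pretrains with targets that summarize upcoming text rather than only the next token. The sketch below is our guess at the flavor of such an auxiliary objective; the linear head, mean-pooled target, and horizon are all assumptions for illustration, not the paper's recipe.

```python
import torch
import torch.nn as nn

class FutureSummaryHead(nn.Module):
    """Toy auxiliary objective in the spirit of 'future summaries'.

    Assumption (ours, not the paper's exact method): at each position t,
    a linear head predicts the mean embedding of the next `horizon`
    ground-truth tokens, so training looks beyond the single next word.
    """
    def __init__(self, d_model: int = 64, horizon: int = 8):
        super().__init__()
        self.head = nn.Linear(d_model, d_model)
        self.horizon = horizon

    def forward(self, hidden: torch.Tensor, token_emb: torch.Tensor) -> torch.Tensor:
        H = self.horizon
        T = hidden.shape[1]
        # Summary target: mean embedding of the next H tokens at each position.
        targets = torch.stack(
            [token_emb[:, t + 1 : t + 1 + H].mean(dim=1) for t in range(T - H)],
            dim=1,
        )
        preds = self.head(hidden[:, : T - H])
        return ((preds - targets) ** 2).mean()

aux = FutureSummaryHead()
hidden = torch.randn(2, 32, 64)     # backbone hidden states
token_emb = torch.randn(2, 32, 64)  # embeddings of ground-truth tokens
print(aux(hidden, token_emb))       # auxiliary loss added to next-token CE
```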
To make AI truly smart, should we invent a unifying "Esperanto" for it, or teach it to trade off wisely between a "bigger brain" and "longer thinking"? Or should we first put it on a healthy "information diet" so it avoids "brain rot", and cure the worldwide bias hidden in its curiosity? The latest papers even suggest that the key to higher intelligence is first teaching AI to be a "struggling student" who makes mistakes. In this episode, we explore the unexpected deeper logic behind AI intelligence through five recent papers.

00:00:39 AI's "Esperanto": one manual to unify them all
00:06:11 AI's "junk food" trap: why do even top models get "brain rot"?
00:10:58 Smarter AI: a bigger brain, or longer thinking?
00:17:18 AI's "curiosity" hides a worldwide bias
00:22:56 Why must the smartest AI first learn to be a "struggling student"?

Papers covered in this episode:
[LG] Tensor Logic: The Language of AI [University of Washington] https://arxiv.org/abs/2510.12269
---
[CL] LLMs Can Get "Brain Rot"! [Texas A&M University & University of Texas at Austin & Purdue University] https://arxiv.org/abs/2510.13928
---
[LG] Not All Bits Are Equal: Scale-Dependent Memory Optimization Strategies for Reasoning Models [KRAFTON & University of Wisconsin–Madison] https://arxiv.org/abs/2510.10964
---
[CL] The Curious Case of Curiosity across Human Cultures and LLMs [University of Michigan] https://arxiv.org/abs/2510.12943
---
[LG] Learning to Make MISTAKEs: Modeling Incorrect Student Thinking And Key Errors [MIT CSAIL] https://arxiv.org/abs/2510.11502
Have you ever wondered whether making AI smarter might take not more compute, but a cleverer way of guiding it? In this episode we explore some delightful findings from recent papers: a little "computational noise" can actually help AI learn better; we can watch the geometric trajectory of AI's thinking, almost like a CT scan; and, much as with raising children, we can teach AI to balance exploration against focus, even unlocking its hidden potential without spending a cent.

00:00:36 Upgrade your AI for free? Just change how you ask (a tiny demo follows the paper list)
00:05:39 AI parenting: teaching machines "just right" exploration
00:11:50 Training AI: does a little "noise" work better?
00:16:47 AI's "flow": watching the trajectory of thought
00:22:19 How do we get a smart AI to work smarter?

Papers covered in this episode:
[LG] Reasoning with Sampling: Your Base Model is Smarter Than You Think [Harvard University] https://arxiv.org/abs/2510.14901
---
[LG] Agentic Entropy-Balanced Policy Optimization [Kuaishou Technology & Renmin University of China] https://arxiv.org/abs/2510.14545
---
[LG] QeRL: Beyond Efficiency -- Quantization-enhanced Reinforcement Learning for LLMs [NVIDIA & MIT] https://arxiv.org/abs/2510.11696
---
[LG] The Geometry of Reasoning: Flowing Logics in Representation Space [Duke University] https://arxiv.org/abs/2510.09782
---
[CL] Demystifying Reinforcement Learning in Agentic Reasoning [National University of Singapore & Princeton University & University of Illinois at Urbana-Champaign] https://arxiv.org/abs/2510.11701
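For the "free upgrade" segment: the Harvard paper's starting point, as we read it, is drawing samples from a sharpened power distribution p^α of the base model instead of retraining it. A toy numpy demo of the sharpening effect, with the answer distribution and α values invented for illustration:

```python
import numpy as np

# Toy "power sampling": draw from p**alpha instead of p. If the base model
# already ranks the right answer highest, sharpening concentrates mass on
# it -- with no additional training at all.
p = np.array([0.45, 0.30, 0.15, 0.10])  # hypothetical answer distribution
for alpha in (1.0, 2.0, 4.0):
    q = p**alpha / (p**alpha).sum()
    print(f"alpha={alpha}: P(top answer) = {q[0]:.2f}")
# Prints roughly 0.45, 0.62, 0.82. Over whole sequences this normalization
# is intractable, which is why the paper resorts to an MCMC-style sampler.
```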
Have you ever suspected that our yardstick for AI might be off from the very start? Today we overturn a few pieces of received wisdom: a model's potential lies not in exam scores but in how rich its "imagination" is; the money-saving trick for training giant models may hide inside a simple square root; and the best way to get AI to generate perfect videos turns out to be letting it convene an internal "review panel" to find its own faults. Stranger still, the key to making AI truly understand you may be asking "pick one of three" rather than "pick one of two". Ready? Let's explore these counterintuitive yet insightful new ideas from the latest papers.

00:00:41 The inner craft of AI training: why isn't a "good learner" necessarily a "top scorer"?
00:07:18 Money-saving tips for training large models: the butterfly effect of a square root
00:12:05 Only by finding its own faults can AI make ever-better videos
00:17:25 Want to really understand me? Don't ask "pick one of two", try "pick one of three"
00:21:57 A "health code" for AI: a new way to detect unknown jailbreak attacks

Papers covered in this episode:
[LG] The Coverage Principle: How Pre-training Enables Post-Training [Microsoft Research & MIT & UIUC] https://arxiv.org/abs/2510.15020
---
[LG] Robust Layerwise Scaling Rules by Proper Weight Decay Tuning [MIT & UCLA] https://arxiv.org/abs/2510.15262
---
[CV] VISTA: A Test-Time Self-Improving Video Generation Agent [Google] https://arxiv.org/abs/2510.15831
---
[LG] Learning Correlated Reward Models: Statistical Barriers and Opportunities [MIT EECS] https://arxiv.org/abs/2510.15839
---
[CV] Learning to Detect Unknown Jailbreak Attacks in Large Vision-Language Models [Renmin University of China & Alibaba Group] https://arxiv.org/abs/2510.15430
In this episode we take on a core question: we tend to treat AI as a mysterious black box, but the latest research is prying it open, rather like a brain scan. We'll see how a simple "full marks or zero" rule can teach AI honesty, and how to send in an "AI detective" to catch maliciously fine-tuned models hiding in plain sight. Then we'll dig into AI's "thinking process": which matters more, a smart "brain" or a smart "search engine"; how AI can evolve toward correct answers by making mistakes; and how its complex reasoning can be broken into remotely steerable "thought building blocks". Ready? Let's head into AI's inner world.

00:00:41 AI's "no lying" training: full marks or zero
00:05:29 "Infernal Affairs" in AI: how do you catch a wolf in sheep's clothing?
00:10:39 Which matters more: a smart brain or a smart search engine?
00:16:14 Mistakes are fine, as long as your odds of "fixing" beat your odds of "breaking", even slightly (a small simulation follows the paper list)
00:21:22 Dissecting AI's brain: what is it thinking when it reasons?

Papers covered in this episode:
[CL] Train for Truth, Keep the Skills: Binary Retrieval-Augmented Reward Mitigates Hallucinations [University of Washington & Allen Institute for AI (Ai2)] https://arxiv.org/abs/2510.17733
---
[LG] Detecting Adversarial Fine-tuning with Auditing Agents [Anthropic] https://arxiv.org/abs/2510.16255
---
[LG] Prior Makes It Possible: From Sublinear Graph Algorithms to LLM Test-Time Methods [Toyota Technological Institute at Chicago & Columbia University & Google Research] https://arxiv.org/abs/2510.16609
---
[CL] Deep Self-Evolving Reasoning [Microsoft Research Asia & Peking University] https://arxiv.org/abs/2510.17498
---
[LG] Algorithmic Primitives and Compositional Geometry of Reasoning in Language Models [Columbia University & University of California Los Angeles & Harvey Mudd College] https://arxiv.org/abs/2510.15987
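The fourth segment's claim is worth making concrete: if each revision round fixes a wrong answer with somewhat higher probability than it breaks a right one, repeated self-revision drifts toward correctness. A tiny simulation of that biased two-state process, with the probabilities invented for illustration:

```python
import random

def self_revision(rounds=100, p_fix=0.30, p_break=0.05, trials=10_000):
    """Simulate iterative self-revision as a two-state Markov chain.

    Each round, a wrong answer gets fixed with probability p_fix and a
    correct answer gets broken with probability p_break. The numbers are
    invented for illustration; the paper's analysis is more general.
    """
    hits = 0
    for _ in range(trials):
        correct = False                # start from a wrong first attempt
        for _ in range(rounds):
            if correct:
                correct = random.random() >= p_break
            else:
                correct = random.random() < p_fix
        hits += correct
    return hits / trials

# Long-run accuracy tends to p_fix / (p_fix + p_break): being better at
# fixing than at breaking is enough for revision to beat one-shot answers.
print(self_revision())  # ~0.857 with the illustrative numbers above
```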
Today we set off on an AI expedition from macro to micro. We'll see how AI can scan the complex systems of the whole Earth like a CT machine, then dive inside its brain to watch it produce answers in two steps, "guess, then refine". We'll discuss how AI, like a martial-arts master, learns new skills through hands-on practice without forgetting old ones, and how it has learned to plan first and execute second, balancing speed with quality. But in closing we'll expose an "Infernal Affairs" plot in the AI world: what serious challenges arise once a clever AI learns "academic fraud"?

00:00:39 How do we give Earth a "full-body CT" with AI?
00:05:27 How many steps does a large model take to think?
00:11:25 AI wants it both ways: can it be fast and good at once?
00:16:00 Why do masters grow stronger with each lesson, while we forget as soon as we learn?
00:21:22 "Infernal Affairs" in the AI world: when a "bad scientist" meets a "gullible reviewer"

Papers covered in this episode:
[AI] Earth AI: Unlocking Geospatial Insights with Foundation Models and Cross-Modal Reasoning [Google Research] https://arxiv.org/abs/2510.18318
---
[CL] How Do LLMs Use Their Depth? [UC Berkeley & Georgia Institute of Technology] https://arxiv.org/abs/2510.18871
---
[LG] Planned Diffusion [University of California, Los Angeles & MIT CSAIL] https://arxiv.org/abs/2510.18087
---
[LG] Retaining by Doing: The Role of On-Policy Data in Mitigating Forgetting [Princeton University] https://arxiv.org/abs/2510.18874
---
[AI] BadScientist: Can a Research Agent Write Convincing but Unsound Papers that Fool LLM Reviewers? [University of Washington] https://arxiv.org/abs/2510.18003
Have you ever wondered which pieces of human wisdom a truly smart AI should possess? The recent papers in this episode try to teach AI some special skills: how to "see" a ten-thousand-word document as a single image and grasp it instantly, or how to assemble a "drafting team" to write at lightning speed. Going further, AI is even learning to remember its own thinking process, to resist the "loudest voice" when opinions conflict, and, at critical moments, to say "I don't know" outright. Behind these seemingly simple changes may lie the secret to more advanced intelligence.

00:00:36 How does AI read ten thousand words as a single image?
00:05:45 The "narrow gate" and "hidden passage" of AI writing
00:11:07 AI slow to answer? Assemble a "drafting team" for it (a toy sketch follows the paper list)
00:16:55 When training AI, don't listen to the loudest voice
00:21:20 Expert decision-making: why is "I don't know" the highest form of wisdom?

Papers covered in this episode:
[CV] DeepSeek-OCR: Contexts Optical Compression [DeepSeek-AI] https://arxiv.org/abs/2510.18234
---
[LG] Loopholing Discrete Diffusion: Deterministic Bypass of the Sampling Wall [KAIST & EPFL] https://arxiv.org/abs/2510.19304
---
[LG] Fast Inference via Hierarchical Speculative Decoding [Google Research & Tel Aviv University] https://arxiv.org/abs/2510.19705
---
[LG] Imbalanced Gradients in RL Post-Training of Multi-Task LLMs [Meta AI] https://arxiv.org/abs/2510.19178
---
[LG] Policy Learning with Abstention [Stanford University] https://arxiv.org/abs/2510.19672
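To make the "drafting team" segment concrete: in speculative decoding (the base technique the hierarchical paper builds on), a cheap drafter proposes tokens that an expensive verifier accepts with probability min(1, p/q), resampling from the residual on rejection, so the output exactly matches the big model. A character-level toy with made-up probability tables (the hierarchical variant stacks several drafter sizes; this shows one level, the building block):

```python
import random

VOCAB = "ab"

def draft_p(_ctx):   # cheap "drafter": a made-up, slightly-off distribution
    return {"a": 0.7, "b": 0.3}

def target_p(_ctx):  # expensive "verifier": the distribution we want to match
    return {"a": 0.5, "b": 0.5}

def speculative_step(ctx):
    """One draft-then-verify step with the standard acceptance rule."""
    q, p = draft_p(ctx), target_p(ctx)
    tok = random.choices(VOCAB, weights=[q[c] for c in VOCAB])[0]
    if random.random() < min(1.0, p[tok] / q[tok]):
        return tok  # verifier accepts the cheap draft
    # On rejection, resample from the residual max(0, p - q), renormalized;
    # this correction makes the output distribution exactly equal p.
    resid = {c: max(0.0, p[c] - q[c]) for c in VOCAB}
    z = sum(resid.values())
    return random.choices(VOCAB, weights=[resid[c] / z for c in VOCAB])[0]

samples = [speculative_step("") for _ in range(100_000)]
print(samples.count("a") / len(samples))  # ~0.5: matches the target model
```

The speedup comes from the verifier checking several drafted tokens in one parallel pass, so most tokens cost only a cheap-model forward.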