We marvel at how clever AI is, but have you ever wondered whether it has blind spots of its own, even making the kind of "dumb" mistakes that smart people make? In this episode we dive into AI's "inner world": we will explore how robots learn the physical world by "dreaming", and see how an AI can be a "Kepler" that only grasps surface patterns, yet be guided into a "Newton" that sees the underlying laws. We will also talk about how to train one AI to be another AI's "natural predator", and how to draw a "map of thought" for an AI and give it a thorough "check-up". Ready? Let's go!

00:00:37 Letting robots dream, so they can do their jobs better
00:06:01 A "smart" agent's "dumb" tricks: what can we learn from AI's failures?
00:12:11 Is your AI a "Newton" or a "Kepler"?
00:18:18 How do you train one AI to be another AI's "natural predator"?
00:23:44 AI's "map of thought": how do we give a large model a "check-up"?

Papers covered in this episode:
[RO] DreamDojo: A Generalist Robot World Model from Large-Scale Human Videos [NVIDIA]
https://arxiv.org/abs/2602.06949
---
[CL] Large Language Model Reasoning Failures [Stanford University & Carleton College]
https://arxiv.org/abs/2602.06176
---
[LG] From Kepler to Newton: Inductive Biases Guide Learned World Models in Transformers [Stanford University]
https://arxiv.org/abs/2602.06923
---
[CL] SEMA: Simple yet Effective Learning for Multi-Turn Jailbreak Attacks [Microsoft Research & University of Rochester]
https://arxiv.org/abs/2602.06854
---
[LG] Learning a Generative Meta-Model of LLM Activations [UC Berkeley]
https://arxiv.org/abs/2602.06964
Today we take on a particularly interesting question: how does AI learn and think? We are no longer content to know what AI can do; we want to know how it can do better. In this episode, through several recent papers, we will see how AI can have its own "personal trainer" system and co-evolve with it, how "suffering during training" buys us "one-shot" speed at inference time, how it can "apprentice itself to a master" when information is incomplete, and how, like an expert, it reasons with "global rollouts". Ready? Let's dive deep into AI's brain.

00:00:34 How do you build the perfect "AI personal trainer" system?
00:06:13 Why do the fastest AIs "suffer" during training?
00:11:23 How to become an expert without a "god's-eye view"?
00:15:52 Want smarter robots? Don't just teach them to "do the job"
00:21:13 The secret of AI reasoning: why are some models better at puzzles?

Papers covered in this episode:
[LG] RLAnything: Forge Environment, Policy, and Reward Model in Completely Dynamic RL System [Princeton University]
https://arxiv.org/abs/2602.02488
---
[LG] Generative Modeling via Drifting [MIT]
https://arxiv.org/abs/2602.04770
---
[LG] Privileged Information Distillation for Language Models [ServiceNow]
https://arxiv.org/abs/2602.04942
---
[RO] A Systematic Study of Data Modalities and Strategies for Co-training Large Behavior Models for Robot Manipulation [Toyota Research Institute]
https://arxiv.org/abs/2602.01067
---
[LG] Reasoning with Latent Tokens in Diffusion Language Models [CMU]
https://arxiv.org/abs/2602.03769
Have you ever imagined the dramas playing out inside an AI's "mind"? In this episode we dive into the AI brain: first, how it forms an "I can solve this" intuition before even finishing a problem, much like we do; then we hand it a "map" and watch it go from lost tourist to city planner, making sense of an entire complex software world; next, we witness a robot "learning on the sly", picking up basketball skills just by watching videos; finally, we discuss how top mathematicians are setting up a cheat-proof "closed-book exam" for AI, and how a well-intentioned but counterproductive "traffic rule" on the AI training ground got fixed.

00:00:40 AI's "sixth sense": how does it know it is about to get the answer right?
00:05:17 Give AI a map and let it understand the whole software world
00:10:47 The robot apprentice: how did it learn basketball just by watching videos?
00:18:33 A "closed-book exam" for AI: what are top mathematicians up to?
00:23:05 "Traffic rules" on the AI training ground: why good intentions backfire

Papers covered in this episode:
[CL] Sparse Reward Subsystem in Large Language Models [Tsinghua University & Stanford University]
https://arxiv.org/abs/2602.00986
---
[CL] Closing the Loop: Universal Repository Representation with RPG-Encoder [Microsoft Research Asia]
https://arxiv.org/abs/2602.02084
---
[RO] HumanX: Toward Agile and Generalizable Humanoid Interaction Skills from Human Videos [The Hong Kong University of Science and Technology]
https://arxiv.org/abs/2602.02473
---
[AI] First Proof [Stanford University & Columbia University & EPFL]
https://arxiv.org/abs/2602.05192
---
[LG] Rethinking the Trust Region in LLM Reinforcement Learning [Sea AI Lab & National University of Singapore]
https://arxiv.org/abs/2602.04879
Can a "total wipeout" break an AI's composure? In this episode we look at a recent paper that uses a "global view" to give AI a stable temperament. Then we reveal how AI learns to anticipate by building a "world model", truly moving from "knowing" to "acting". We will also talk about fitting AI image generators with a "future detector" to make them more obedient and more creative. Finally, we will see a "leverage" pattern of mentor-apprentice collaboration between large and small models, and dissect a counterintuitive finding: large models may get smart not through ever-deeper layers, but through the plain wisdom of "many hands make light work".

00:00:41 How do experts keep "one lost game" from wrecking their composure?
00:06:19 Helping AI "learn its lesson": how can it stop making "taking-things-for-granted" mistakes?
00:11:40 AI image generation won't listen? Fit it with a "future detector"
00:16:40 Leverage in the AI world: how can a small model gain big wisdom?
00:22:02 The secret of smarter large models: not depth, but "many hands make light work"?

Papers covered in this episode:
[LG] EBPO: Empirical Bayes Shrinkage for Stabilizing Group-Relative Policy Optimization [Meta AI]
https://arxiv.org/abs/2602.05165
---
[CL] Reinforcement World Model Learning for LLM-based Agents [Columbia University & Microsoft Research & Dartmouth College]
https://arxiv.org/abs/2602.05842
---
[LG] Diamond Maps: Efficient Reward Alignment via Stochastic Flow Maps [MIT CSAIL & CMU & TU Munich]
https://arxiv.org/abs/2602.05993
---
[CL] MentorCollab: Selective Large-to-Small Inference-Time Guidance for Efficient Reasoning [UIUC & University of Washington]
https://arxiv.org/abs/2602.05307
---
[LG] Inverse Depth Scaling From Most Layers Being Similar [MIT & Harvard University]
https://arxiv.org/abs/2602.05970
Have you ever wondered how we can turn AI from a "well-read straight-A student" into a "master who generalizes from a single example"? Today, starting from five recent papers, we reveal several "inner techniques" for making AI smarter: how just 13 parameters can steer an entire AI brain; how to train AI's "gaze" rather than its answers; and how AI can learn from each failure, "once bitten, twice shy", and even be shaped, without noticing, by an "invisible coach". Ready? Let's explore the new frontier of AI's ability to learn.

00:00:38 The secret to smarter AI: look at the "gaze", not just the answer
00:05:59 13 parameters to steer an AI brain
00:10:24 Why does your AI keep failing in the same place?
00:16:39 How do you train an AI with no "correct answer"?
00:00 There is an "invisible coach" hiding in your data

Papers covered in this episode:
[CL] Reinforced Attention Learning [Google & Google DeepMind & UC Davis]
https://arxiv.org/abs/2602.04884
---
[LG] Learning to Reason in 13 Parameters [FAIR at Meta]
https://arxiv.org/abs/2602.04118
---
[LG] Scaling In-Context Online Learning Capability of LLMs via Cross-Episode Meta-RL [Boston University & LinkedIn]
https://arxiv.org/abs/2602.04089
---
[CL] Likelihood-Based Reward Designs for General LLM Reasoning [Meta FAIR & University of Amsterdam]
https://arxiv.org/abs/2602.03979
---
[LG] Subliminal Effects in Your Data: A General Mechanism via Log-Linearity [UC Berkeley & Microsoft Research]
https://arxiv.org/abs/2602.04863
We all want AI to get smarter, but how exactly does it have its "aha" moments? In this episode we go deep into the AI brain: how it keeps its own "mistake notebook" and reflects during the exam, and how it breaks through learning plateaus with "self-suggestion". We also examine the hidden cost behind AI "thinking", and a smarter reward scheme that makes AI favor hard problems. Finally, we will see how all of this turns AI from a tool into a genuine "research partner".

00:00:32 Your mistake notebook: AI keeps one now too
00:05:36 Your next research partner may not be human
00:12:57 Why AI sometimes "plays dumb": the hidden cost of compute
00:19:22 AI stuck on a learning plateau? Let it give itself a hint
00:23:55 AI training's preference for the "struggling student"

Papers covered in this episode:
[CL] Test-time Recursive Thinking: Self-Improvement without External Feedback [Microsoft Research]
https://arxiv.org/abs/2602.03094
---
[CL] Accelerating Scientific Research with Gemini: Case Studies and Common Techniques [Google Research]
https://arxiv.org/abs/2602.03837
---
[LG] Reasoning about Reasoning: BAPO Bounds on Chain-of-Thought Token Complexity in LLMs [Microsoft Research & Netflix]
https://arxiv.org/abs/2602.02909
---
[LG] Self-Hinting Language Models Enhance Reinforcement Learning [Microsoft Research]
https://arxiv.org/abs/2602.03143
---
[LG] Maximum Likelihood Reinforcement Learning [CMU & Tsinghua University & Zhejiang University]
https://arxiv.org/abs/2602.02710
Have you ever wondered how we should get along with ever-smarter AI? In this episode we set off on a journey into the AI mind. Through several recent papers, we will watch scientists work like surgeons, neuroscientists, and behavioral coaches, going deep into the AI "brain": from installing a "skill plug-in" that needs no surgery, to finding the "reward switch" deep inside; from curing the "attention deficit" that makes models dumber as they grow, to teaching AI to crack hard problems "divide and conquer" style like a top expert. Ready? Let's lift the veil on the AI black box.

00:00:37 Install a "plug-in" for AI: smarter without surgery?
00:05:31 Dissecting the AI brain, we found its "reward switch"
00:11:35 Setting rules for AI: we finally have a "user manual"
00:16:41 Artificial intelligence's "attention deficit disorder"
00:21:56 Problem decomposition, a superpower we overlook

Papers covered in this episode:
[LG] ReasonCACHE: Teaching LLMs To Reason Without Weight Updates [FAIR at Meta & MIT CSAIL]
https://arxiv.org/abs/2602.02366
---
[CL] Sparse Reward Subsystem in Large Language Models [Tsinghua University & Stanford University]
https://arxiv.org/abs/2602.00986
---
[LG] Interpreting and Controlling Model Behavior via Constitutions for Atomic Concept Edits [Google DeepMind]
https://arxiv.org/abs/2602.00092
---
[LG] TQL: Scaling Q-Functions with Transformers by Preventing Attention Collapse [Stanford University]
https://arxiv.org/abs/2602.01439
---
[CL] Training LLMs for Divide-and-Conquer Reasoning Elevates Test-Time Scalability [University of California, Los Angeles & Microsoft]
https://arxiv.org/abs/2602.02477
Have you ever wondered what happens when AI finishes every workbook we have? And what wisdom hides in the "discarded drafts" it throws away? In this episode we explore how AI turns dross into gold like an alchemist, learns to foresee ten thousand risks from a hundred trials, and opens up the well-hidden "toolbox" it uses while thinking, including how it learns to "slack off" intelligently.

00:00:27 A "workbook" AI can never finish
00:05:50 AI's "discarded drafts" hide a shortcut to wisdom
00:11:28 The secret of large-model thinking: how many tricks does it have?
00:16:01 From a hundred trials, how do we foresee AI's ten-thousandth risk?
00:20:50 AI "cost cutting": a clever way to slack off

Papers covered in this episode:
[LG] Golden Goose: A Simple Trick to Synthesize Unlimited RLVR Tasks from Unverifiable Internet Text [NVIDIA]
https://arxiv.org/abs/2601.22975
---
[CL] Residual Context Diffusion Language Models [UC Berkeley]
https://arxiv.org/abs/2601.22954
---
[CL] Context Structure Reshapes the Representational Geometry of Language Models [Google DeepMind]
https://arxiv.org/abs/2601.22364
---
[LG] Statistical Estimation of Adversarial Risk in Large Language Models under Best-of-N Sampling [Microsoft Research]
https://arxiv.org/abs/2601.22636
---
[LG] EUGens: Efficient, Unified, and General Dense Layers [Seoul National University]
https://arxiv.org/abs/2601.22563
Have you imagined AI letting you not just "watch the movie" but step into the frame and "play the world"? Are you curious how AI, like a handwriting expert, can spot cheaters in games from the tiny trajectories of your mouse? In this episode we witness several key evolutions of AI: how an unreliable assistant becomes a great teammate by learning to ask questions proactively; how one all-purpose brain splits into a data-analysis team with a clear division of labor; and how, with precision "liposuction", we can remove knowledge we do not want it to acquire at the very start of learning. Ready? Let's go!

00:00:40 From "watching movies" to "playing worlds": what is AI's next stop?
00:07:25 Your aiming motion gives you away
00:13:27 Teaching AI to ask: how does an unreliable assistant become a great teammate?
00:18:42 Hire an AI data-analysis team for your business
00:23:51 A new idea for AI "weight loss": precision "liposuction"

Papers covered in this episode:
[CV] Advancing Open-source World Models [Robbyant Team]
https://arxiv.org/abs/2601.20540
---
[LG] XGuardian: Towards Explainable and Generalized AI Anti-Cheat on FPS Games [The University of Hong Kong]
https://arxiv.org/abs/2601.18068
---
[LG] Teaching LLMs to Ask: Self-Querying Category-Theoretic Planning for Under-Specified Reasoning [Stanford University]
https://arxiv.org/abs/2601.20014
---
[CL] Insight Agents: An LLM-Based Multi-Agent System for Data Insights [Amazon]
https://arxiv.org/abs/2601.20048
---
[LG] Shaping capabilities with token-level data filtering [Anthropic]
https://arxiv.org/abs/2601.21571
Today's topic stings a little, but it matters a lot. Will the AI "crutch" make us forget how to walk? When we quietly "outsource" our thinking and decisions to AI, who is actually in charge? Recent experiments suggest our brains may really be "rusting". But don't worry: we will also see AI itself evolving, as scientists teach it to "slack off" the way the human brain does, and even assign it a "coach", moving training from "brute-force miracles" to "working smart". In this episode, let's look at how AI is changing us, and how we can make AI smarter.

00:00:38 Is the AI "crutch" making you forget how to walk?
00:07:08 Large models have a "built-in flaw": how do we fix it?
00:13:47 Are we quietly outsourcing our brains?
00:19:09 Could large AI models learn the human brain's "lazy" wisdom?
00:25:25 AI training: from "brute-force miracles" to "working smart"

Papers covered in this episode:
[AI] How AI Impacts Skill Formation [Anthropic]
https://arxiv.org/abs/2601.20245
---
[CL] Zonkey: A Hierarchical Diffusion Language Model with Differentiable Tokenization and Probabilistic Attention [A Rozental]
https://arxiv.org/abs/2601.21768
---
[AI] Who's in Charge? Disempowerment Patterns in Real-World LLM Usage [Anthropic & ACS Research Group & University of Toronto]
https://arxiv.org/abs/2601.19062
---
[LG] Resonant Sparse Geometry Networks [University of Arkansas]
https://arxiv.org/abs/2601.18064
---
[LG] Value-Based Pre-Training with Downstream Feedback [CMU]
https://arxiv.org/abs/2601.22108
We keep marveling at how smart AI has become, but have you ever wondered how, from the foundations up, we could build an AI that learns faster, carries a lighter frame, perceives more keenly, is evaluated more scientifically, and can even improve itself? In this episode, five recent AI papers reveal these secrets in one go. We will find that AI learning speed comes in only four "gears"; discuss how to put large models "on a diet" without sacrificing performance; watch AI gain the superpower of "locating by sound"; learn how to rank the bewildering crowd of AI models scientifically; and finally, see how a "PhD student" AI hand-teaches a smarter "grade-schooler" AI. Ready? Let's set out and explore AI's underlying construction blueprint.

00:00:45 AI learning speed comes in only four gears
00:07:40 The AI diet: how to do the job well without spending more?
00:13:35 AI's "locating by sound": how far are we from the sophons of The Three-Body Problem?
00:19:43 Ranking large AI models: the leaderboard you trust may be using the wrong ruler
00:26:35 Letting AI teach itself: how do we raise a smarter model from the roots?

Papers covered in this episode:
[LG] A Theory of Universal Agnostic Learning [Purdue University & Technion and Google Research]
https://arxiv.org/abs/2601.20961
---
[CL] ECO: Quantized Training without Full-Precision Master Weights [Google Research & ISTA]
https://arxiv.org/abs/2601.22101
---
[AS] PhaseCoder: Microphone Geometry-Agnostic Spatial Audio Understanding for Multimodal LLMs [Google DeepMind & Google AR]
https://arxiv.org/abs/2601.21124
---
[LG] Nonparametric LLM Evaluation from Preference Data [LMU Munich & CMU & University of Cambridge]
https://arxiv.org/abs/2601.21816
---
[CL] Self-Improving Pretraining: using post-trained models to pretrain better models [FAIR at Meta]
https://arxiv.org/abs/2601.21343
Today we talk about AI's "inner world": we found a "master key" that unlocks every learning method, only to discover that an AI's "personality" can shift with the wind over a conversation. We tried to make it "evolve" like a living thing, and accidentally gave it "catastrophic forgetting". Facing ever-stronger AI, how can we "rookie referees" keep it honest? Finally, we will find that the secret to rapid AI growth may not be praise, but a detailed "error report".

00:00:32 Where is artificial intelligence's "master key" hidden?
00:06:34 Why does an AI's "personality" drift as the conversation goes on?
00:11:47 The AI "evolution" trap: why does it forget faster the more it learns?
00:16:47 How can a rookie referee keep a top player in check?
00:21:48 Thumbs-down or thumbs-up: neither beats a detailed "error report"

Papers covered in this episode:
[LG] Spectral Ghost in Representation Learning: from Component Analysis to Self-Supervised Learning [Google DeepMind & Harvard University]
https://arxiv.org/abs/2601.20154
---
[CL] Linear representations in language models can change dramatically over a conversation [Google DeepMind]
https://arxiv.org/abs/2601.20834
---
[LG] Evolutionary Strategies lead to Catastrophic Forgetting in LLMs [UC Berkeley]
https://arxiv.org/abs/2601.20861
---
[LG] Truthfulness Despite Weak Supervision: Evaluating and Training LLMs Using Peer Prediction [UC Berkeley]
https://arxiv.org/abs/2601.20299
---
[LG] Reinforcement Learning via Self-Distillation [ETH Zurich]
https://arxiv.org/abs/2601.20802