Ever wondered how to "etch" temporary instructions into an AI's brain so they become true muscle memory? And how do we teach an AI to "take a shortcut," generating a finished work in one step instead of building it up piece by piece? In this episode, we dig into the latest papers to explore how to make AI not just do the right thing but think the right way, and we expose some surprising pitfalls in AI training: habits we take for granted that can leave a model "paranoid" or even "split-minded."

00:00:28 How is an AI's "muscle memory" forged?
00:05:48 Creation: how do you take a shortcut?
00:11:04 An AI training guide: what you think you know isn't what you think
00:17:32 More important than doing the right thing: thinking the right way
00:22:45 An AI training guide: why the more you feed it, the more paranoid it may become

Papers covered in this episode:

[CL] On-Policy Context Distillation for Language Models [Microsoft Research]
https://arxiv.org/abs/2602.12275
---
[LG] Categorical Flow Maps [University of Amsterdam & University of Oxford]
https://arxiv.org/abs/2602.12233
---
[LG] The Magic Correlations: Understanding Knowledge Transfer from Pretraining to Supervised Fine-Tuning [Google DeepMind & Google Research]
https://arxiv.org/abs/2602.11217
---
[LG] Right for the Wrong Reasons: Epistemic Regret Minimization for Causal Rung Collapse in LLMs [Stanford University]
https://arxiv.org/abs/2602.11675
---
[LG] How Sampling Shapes LLM Alignment: From One-Shot Optima to Iterative Dynamics [PSL Research University & Northwestern University]
https://arxiv.org/abs/2602.12180

Have you ever considered that an AI capable of cracking olympiad math problems might get a grade-schooler's addition wrong? In this episode we go deep into the AI's "wiring" to see how it can explore the unknown on its own like a real mathematician, and evolve itself like an "AI engineer." We'll also reveal why teaching robots to "cut corners" and use "cheat sheets" is a key step toward bringing them into our physical world. Ready? Let's set off and explore the AI mind behind these latest papers, at once familiar and strange.

00:00:35 AI becomes a mathematician; then what?
00:05:49 AI: a little prodigy that solves olympiad problems but can't do column addition
00:11:13 Your phone is quietly hiring an AI engineer
00:17:57 Playing strategist for an AI: can we foresee the future without telling fortunes?
00:24:09 Your world really only needs a "cheat sheet"

Papers covered in this episode:

[LG] Towards Autonomous Mathematics Research [Google DeepMind]
https://arxiv.org/abs/2602.10177
---
[LG] AI-rithmetic [Google]
https://arxiv.org/abs/2602.10416
---
[LG] Self-Evolving Recommendation System: End-To-End Autonomous Model Optimization With LLM Agents [Google]
https://arxiv.org/abs/2602.10226
---
[LG] Configuration-to-Performance Scaling Law with Neural Ansatz [Tsinghua University & Stanford University]
https://arxiv.org/abs/2602.10300
---
[LG] Affordances Enable Partial World Modeling with LLMs [Google DeepMind]
https://arxiv.org/abs/2602.10390

Today's topic is an especially interesting one: how do we "see through" an AI and make it better? Through several recent papers, we'll surface some counterintuitive insights: sometimes making an AI a little "blind" actually makes it paint better, and the key to making it smarter may be teaching not more, but more cleverly. We'll also see that the highest form of attacking an AI may not be feeding it something bad, but performing an invisible "minimally invasive surgery" on the good data!

00:00:31 A new way to "poison" AI: don't plant bad data, turn the good data bad
00:07:00 The secret to a smarter AI is subtraction, not addition
00:11:29 AI's slimming problem: how to "grab the key points" efficiently?
00:17:14 AI's "slow-motion replay of thought": how do we read what it's thinking?
00:22:54 A new take on AI image generation: sometimes less is more

Papers covered in this episode:

[LG] Infusion: Shaping Model Behavior by Editing Training Data via Influence Functions [University of Oxford & UCL]
https://arxiv.org/abs/2602.09987
---
[CL] Effective Reasoning Chains Reduce Intrinsic Dimensionality [Google DeepMind & UNC Chapel Hill]
https://arxiv.org/abs/2602.09276
---
[LG] WildCat: Near-Linear Attention in Theory and Practice [Imperial College London & Microsoft Research]
https://arxiv.org/abs/2602.10056
---
[LG] Step-resolved data attribution for looped transformers [University of Potsdam & Technical University of Munich & Harvard University]
https://arxiv.org/abs/2602.10097
---
[LG] Blind denoising diffusion models and the blessings of dimensionality [Simons Foundation & Yale University]
https://arxiv.org/abs/2602.09639

Have you ever wondered what separates a true master's thinking from everyone else's? Today we look at how AI "studies under" masters of every trade. We'll see how it learns to prune with the precision of a master gardener, making the fewest yet most critical changes, and how, like a star student, it wins at the training "starting line" through imitation. It has even picked up two strategies we know well: drafting before finalizing, like a writer, and sticking mental "sticky notes" onto the text as it reads, the way we did in school. We'll also discuss how to build a reliable yet efficient automated exam for AI's "instruction manual" skills. Ready? Let's trace the evolution of AI thinking!

00:00:45 Expert tuning: why is "doing less" sometimes smarter than "doing more"?
00:05:50 The starting line of AI training: an overlooked "small move"
00:10:08 AI's "instruction manual" skills: how should we measure them?
00:16:29 How AI thinks like a master: draft first, then finalize
00:21:02 The secret of AI's "side thoughts": read and think at once, double the efficiency

Papers covered in this episode:

[LG] BONSAI: Bayesian Optimization with Natural Simplicity and Interpretability [Meta]
https://arxiv.org/abs/2602.07144
---
[LG] Mimetic Initialization of MLPs [CMU]
https://arxiv.org/abs/2602.07156
---
[LG] How2Everything: Mining the Web for How-To Procedures to Evaluate and Improve LLMs [Allen Institute for AI & University of Maryland]
https://arxiv.org/abs/2602.08808
---
[LG] iGRPO: Self-Feedback-Driven LLM Reasoning [NVIDIA]
https://arxiv.org/abs/2602.09000
---
[CL] Latent Reasoning with Supervised Thinking States [Google Research]
https://arxiv.org/abs/2602.08332

We marvel at how smart AI is, but have you considered that it has blind spots too, and even makes the kind of "dumb" mistakes smart people make? This episode takes us into the AI's "inner world": we'll explore how robots can come to understand the physical world by "dreaming," see how an AI can skim the surface like a "Kepler" yet be guided into a law-discovering "Newton," discuss how to train one AI to be another AI's "natural predator," and learn how to draw a "map of thought" for a large model to give it a full "checkup." Ready? Let's go!

00:00:37 Letting robots dream so they can work better
00:06:01 Smart people's "dumb" methods: what can we learn from AI's failures?
00:12:11 Is your AI a "Newton" or a "Kepler"?
00:18:18 How do you train one AI to be another AI's "natural predator"?
00:23:44 AI's "map of thought": how do we give a large model a "checkup"?

Papers covered in this episode:

[RO] DreamDojo: A Generalist Robot World Model from Large-Scale Human Videos [NVIDIA]
https://arxiv.org/abs/2602.06949
---
[CL] Large Language Model Reasoning Failures [Stanford University & Carleton College]
https://arxiv.org/abs/2602.06176
---
[LG] From Kepler to Newton: Inductive Biases Guide Learned World Models in Transformers [Stanford University]
https://arxiv.org/abs/2602.06923
---
[CL] SEMA: Simple yet Effective Learning for Multi-Turn Jailbreak Attacks [Microsoft Research & University of Rochester]
https://arxiv.org/abs/2602.06854
---
[LG] Learning a Generative Meta-Model of LLM Activations [UC Berkeley]
https://arxiv.org/abs/2602.06964

Today we take on a particularly interesting question: how does AI learn and think? We're no longer content with what AI can do; we want to know how it can do better. Through several recent papers, this episode reveals how AI can co-evolve with its own "personal trainer" system, how "suffering" during training buys one-step generation at inference time, how it can "apprentice" itself when information is incomplete, and how, like an expert, it runs "global simulations" while thinking. Ready? Let's dive deep into the AI brain.

00:00:34 How do you build the perfect "AI personal trainer" system?
00:06:13 Why do the fastest AIs "suffer" during training?
00:11:23 How do you become an expert without the "god's-eye view"?
00:15:52 Want smarter robots? Don't just teach them "the job"
00:21:13 The secret of AI thinking: why are some models better at solving puzzles?

Papers covered in this episode:

[LG] RLAnything: Forge Environment, Policy, and Reward Model in Completely Dynamic RL System [Princeton University]
https://arxiv.org/abs/2602.02488
---
[LG] Generative Modeling via Drifting [MIT]
https://arxiv.org/abs/2602.04770
---
[LG] Privileged Information Distillation for Language Models [ServiceNow]
https://arxiv.org/abs/2602.04942
---
[RO] A Systematic Study of Data Modalities and Strategies for Co-training Large Behavior Models for Robot Manipulation [Toyota Research Institute]
https://arxiv.org/abs/2602.01067
---
[LG] Reasoning with Latent Tokens in Diffusion Language Models [CMU]
https://arxiv.org/abs/2602.03769

Have you ever imagined the dramas playing out in an AI's "mind"? In this episode we descend into the AI brain: first, to see how, like us, it forms an "I've got this" intuition before it even solves a problem; then we hand it a "map" and watch it turn from a lost tourist into a city planner who can read an entire complex software world; next, we witness a robot "studying under a master," learning to play basketball just by watching videos; and finally, we chat about how top mathematicians are staging a cheat-proof "closed-book exam" for AI, and how a well-intentioned but counterproductive "traffic rule" on the AI training ground got fixed.

00:00:40 AI's "sixth sense": how does it know it's about to get the answer right?
00:05:17 Give AI a map, and let it read the whole software world
00:10:47 A robot's apprenticeship: how did it learn basketball just by watching videos?
00:18:33 A "closed-book exam" for AI: what are top mathematicians after?
00:23:05 "Traffic rules" on the AI training ground: why do good intentions backfire?

Papers covered in this episode:

[CL] Sparse Reward Subsystem in Large Language Models [Tsinghua University & Stanford University]
https://arxiv.org/abs/2602.00986
---
[CL] Closing the Loop: Universal Repository Representation with RPG-Encoder [Microsoft Research Asia]
https://arxiv.org/abs/2602.02084
---
[RO] HumanX: Toward Agile and Generalizable Humanoid Interaction Skills from Human Videos [The Hong Kong University of Science and Technology]
https://arxiv.org/abs/2602.02473
---
[AI] First Proof [Stanford University & Columbia University & EPFL]
https://arxiv.org/abs/2602.05192
---
[LG] Rethinking the Trust Region in LLM Reinforcement Learning [Sea AI Lab & National University of Singapore]
https://arxiv.org/abs/2602.04879

Can an AI's mindset collapse after a total wipeout? In this episode we look at how a recent paper uses a "global view" to give AI a stable mindset. Then we show how AI learns to anticipate by building a "world model," truly moving from "knowing" to "acting." We'll also talk about fitting an AI image model with a "future detector" to make it more obedient and more creative. Finally, we'll see a mentor-apprentice collaboration in which a small model leverages a large one to punch far above its weight, and dissect a counterintuitive finding: large models may owe their smarts not to layer-upon-layer depth, but to the plain wisdom of "many hands make light work."

00:00:41 Playing at a high level: how do you keep one lost point from wrecking your mindset?
00:06:19 Helping AI "learn its lesson": how can it stop making "taking things for granted" mistakes?
00:11:40 AI art won't behave? Fit it with a "future detector"
00:16:40 Leverage in the AI world: how can a small model gain big wisdom?
00:22:02 The secret of smarter large models: not depth upon depth, but strength in numbers?

Papers covered in this episode:

[LG] EBPO: Empirical Bayes Shrinkage for Stabilizing Group-Relative Policy Optimization [Meta AI]
https://arxiv.org/abs/2602.05165
---
[CL] Reinforcement World Model Learning for LLM-based Agents [Columbia University & Microsoft Research & Dartmouth College]
https://arxiv.org/abs/2602.05842
---
[LG] Diamond Maps: Efficient Reward Alignment via Stochastic Flow Maps [MIT CSAIL & CMU & TU Munich]
https://arxiv.org/abs/2602.05993
---
[CL] MentorCollab: Selective Large-to-Small Inference-Time Guidance for Efficient Reasoning [UIUC & University of Washington]
https://arxiv.org/abs/2602.05307
---
[LG] Inverse Depth Scaling From Most Layers Being Similar [MIT & Harvard University]
https://arxiv.org/abs/2602.05970

How exactly do we take an AI from a "well-read straight-A student" to a "grandmaster who generalizes from a single example"? Today, starting from five recent papers, we unpack several "inner techniques" for making AI smarter: how just 13 parameters can steer an entire AI brain; how to train an AI's "gaze" rather than its answers; and how an AI can learn from each failure, growing wiser with every setback, or even be shaped, without noticing, by an "invisible coach." Ready? Let's explore the new frontier of AI's capacity to learn.

00:00:38 The secret to a smarter AI: watch its "gaze," not just its answers
00:05:59 13 parameters to move an AI brain
00:10:24 Why does your AI keep failing in the same place?
00:16:39 How do you train an AI with no "standard answer"?
00:00 An "invisible coach" hiding in your data

Papers covered in this episode:

[CL] Reinforced Attention Learning [Google & Google DeepMind & UC Davis]
https://arxiv.org/abs/2602.04884
---
[LG] Learning to Reason in 13 Parameters [FAIR at Meta]
https://arxiv.org/abs/2602.04118
---
[LG] Scaling In-Context Online Learning Capability of LLMs via Cross-Episode Meta-RL [Boston University & LinkedIn]
https://arxiv.org/abs/2602.04089
---
[CL] Likelihood-Based Reward Designs for General LLM Reasoning [Meta FAIR & University of Amsterdam]
https://arxiv.org/abs/2602.03979
---
[LG] Subliminal Effects in Your Data: A General Mechanism via Log-Linearity [UC Berkeley & Microsoft Research]
https://arxiv.org/abs/2602.04863

We all want AI to keep getting smarter, but how exactly does it "click"? In this episode we go deep into the AI brain: how it keeps its own "error notebook" and reflects in the exam room, and how it breaks through learning plateaus by giving itself hints. We'll also examine the invisible costs behind AI "thinking," and a smarter reward scheme that makes AI favor the hard problems. Finally, we'll see how all of this turns AI from a tool into a genuine "research partner."

00:00:32 Your error notebook: AI keeps one now too
00:05:36 Your next research partner may not be human
00:12:57 Why AI sometimes "plays dumb": the hidden costs behind compute
00:19:22 AI's learning has stalled? Let it give itself a hint
00:23:55 AI training's rule of favoring the "struggling student"

Papers covered in this episode:

[CL] Test-time Recursive Thinking: Self-Improvement without External Feedback [Microsoft Research]
https://arxiv.org/abs/2602.03094
---
[CL] Accelerating Scientific Research with Gemini: Case Studies and Common Techniques [Google Research]
https://arxiv.org/abs/2602.03837
---
[LG] Reasoning about Reasoning: BAPO Bounds on Chain-of-Thought Token Complexity in LLMs [Microsoft Research & Netflix]
https://arxiv.org/abs/2602.02909
---
[LG] Self-Hinting Language Models Enhance Reinforcement Learning [Microsoft Research]
https://arxiv.org/abs/2602.03143
---
[LG] Maximum Likelihood Reinforcement Learning [CMU & Tsinghua University & Zhejiang University]
https://arxiv.org/abs/2602.02710

How should we get along with ever-smarter AI? This episode is a journey into the AI mind. Through several recent papers, we'll watch scientists work like surgeons, neuroscientists, and behavioral coaches as they go deep into the AI "brain": installing a "skill plug-in" that needs no surgery, discovering a "reward switch" deep inside, curing the "scattered attention" disorder that makes bigger models dumber, and teaching the model to "divide and conquer" hard problems like a top expert. Ready? Let's lift the veil on the AI black box.

00:00:37 Give AI a "plug-in": smarter without surgery?
00:05:31 Dissecting the AI brain, we found its "reward switch"
00:11:35 Setting rules for AI: we finally have a "user manual"
00:16:41 AI's "scattered attention" disorder
00:21:56 Breaking problems apart: a superpower we overlook

Papers covered in this episode:

[LG] ReasonCACHE: Teaching LLMs To Reason Without Weight Updates [FAIR at Meta & MIT CSAIL]
https://arxiv.org/abs/2602.02366
---
[CL] Sparse Reward Subsystem in Large Language Models [Tsinghua University & Stanford University]
https://arxiv.org/abs/2602.00986
---
[LG] Interpreting and Controlling Model Behavior via Constitutions for Atomic Concept Edits [Google DeepMind]
https://arxiv.org/abs/2602.00092
---
[LG] TQL: Scaling Q-Functions with Transformers by Preventing Attention Collapse [Stanford University]
https://arxiv.org/abs/2602.01439
---
[CL] Training LLMs for Divide-and-Conquer Reasoning Elevates Test-Time Scalability [University of California, Los Angeles & Microsoft]
https://arxiv.org/abs/2602.02477

What should we do when an AI has finished every workbook there is? And what wisdom hides in the "discarded drafts" it throws away? In this episode we explore how AI can turn dross into gold like an alchemist, learn to foresee ten thousand risks from a hundred trials, and open the well-hidden "toolbox" it uses while thinking, to see how it learns to "slack off" intelligently.

00:00:27 Giving AI a "workbook" it can never finish
00:05:50 Hidden in AI's "discarded drafts": a shortcut to wisdom
00:11:28 The secret of large-model thinking: how many tricks does it have?
00:16:01 With just a hundred trials, how do you foresee an AI's ten-thousand-trial risks?
00:20:50 AI's "cost-cutting": a clever way to slack off

Papers covered in this episode:

[LG] Golden Goose: A Simple Trick to Synthesize Unlimited RLVR Tasks from Unverifiable Internet Text [NVIDIA]
https://arxiv.org/abs/2601.22975
---
[CL] Residual Context Diffusion Language Models [UC Berkeley]
https://arxiv.org/abs/2601.22954
---
[CL] Context Structure Reshapes the Representational Geometry of Language Models [Google DeepMind]
https://arxiv.org/abs/2601.22364
---
[LG] Statistical Estimation of Adversarial Risk in Large Language Models under Best-of-N Sampling [Microsoft Research]
https://arxiv.org/abs/2601.22636
---
[LG] EUGens: Efficient, Unified, and General Dense Layers [Seoul National University]
https://arxiv.org/abs/2601.22563