00:00:37 The Lego Revolution in AI: How to Make Your Model "Live" in the Moment
00:05:41 AI's "Scalpel": How Do We Precisely "Excise" Its Bad Intentions?
00:10:22 Robots Learning Kung Fu: Take the Shortcut, or Put In the Slow, Honest Work?
00:15:03 How AI Serves Its Apprenticeship: Teaching Machines to Read the Boss's Expression
00:20:18 A "Minor Surgery" for AI to Cure Its "Decision Paralysis"
Papers covered in this episode:
[LG] SequenceLayers: Sequence Processing and Streaming Neural Networks Made Easy [Google DeepMind]
https://arxiv.org/abs/2507.23292
---
[LG] The Geometry of Harmfulness in LLMs through Subconcept Probing [Algoverse AI Research]
https://arxiv.org/abs/2507.21141
---
[LG] Retrieve-Augmented Generation for Speeding up Diffusion Policy without Additional Training [The University of Tokyo]
https://arxiv.org/abs/2507.21452
---
[LG] NPO: Learning Alignment and Meta-Alignment through Structured Human Feedback [Microsoft & Amrita Vishwa Vidyapeetham]
https://arxiv.org/abs/2507.21131
---
[LG] TokenBlowUp: Resolving Representational Singularities in LLM Token Spaces via Monoidal Transformations [University of Washington]
https://arxiv.org/abs/2507.19747
In your own life, is there also a piece of "life code" that needs rewriting? The "bug" that keeps draining you may well be hiding in your overly high expectations of the world.
00:00:37 Is Your AI Butler Reliable? A Security Report from the Future
00:04:40 AI "Going Mad"? Scientists Have Found Its "Personality Switch"
00:09:33 More Important than the Result Is the Process of "Thinking It Through"
00:14:09 AI's "Dimensionality Reduction Strike": A Simple Way to Live in a Complex World
00:18:23 AI's "Warm and Caring" Persona May Be a Trap?
Papers covered in this episode:
[LG] Security Challenges in AI Agent Deployment: Insights from a Large Scale Public Competition [Gray Swan AI]
https://arxiv.org/abs/2507.20526
---
[CL] Persona Vectors: Monitoring and Controlling Character Traits in Language Models [Anthropic Fellows Program & Constellation]
https://arxiv.org/abs/2507.21509
---
[LG] RLVMR: Reinforcement Learning with Verifiable Meta-Reasoning Rewards for Robust Long-Horizon Agents [Tencent]
https://arxiv.org/abs/2507.22844
---
[LG] Geometry of Neural Reinforcement Learning in Continuous State and Action Spaces [Brown University & Amazon Web Services]
https://arxiv.org/abs/2507.20853
---
[CL] Training language models to be warm and empathetic makes them less reliable and more sycophantic [University of Oxford]
https://arxiv.org/abs/2507.21919
---
[CL] On The Role of Pretrained Language Models in General-Purpose Text Embeddings: A Survey [Not explicitly stated, survey paper]
https://arxiv.org/abs/2507.20783
"What you think about determines the quality of your mind. Your soul takes on the color of your thoughts."
00:00:31 How Smart Is Your AI? The Key Is Whether It Can "Rehearse"
00:05:06 In the Age of AI Hyper-Competition, How Do You Find the "Chosen One"?
00:09:38 AI's "Self-Cultivation": How Do You Get a Machine to Teach Itself?
00:13:34 Give AI a Steering Wheel, and It Goes Wherever You Point
00:17:37 The "Sweeping Monk" Hidden Inside AI Giants: Without It, AI Turns into "Artificial Stupidity" in Seconds
The five papers covered in this episode:
[LG] SimuRA: Towards General Goal-Oriented Agent via Simulative Reasoning Architecture with LLM-Based World Model [Mohamed bin Zayed University of Artificial Intelligence & Samsung Research]
https://arxiv.org/abs/2507.237
---
[LG] Consensus-Driven Active Model Selection [MIT & UMass Amherst]
https://arxiv.org/abs/2507.23771
---
[CL] CoT-Self-Instruct: Building high-quality synthetic prompts for reasoning and non-reasoning tasks [FAIR at Meta & NYU]
https://arxiv.org/abs/2507.237
---
[CL] Model Directions, Not Words: Mechanistic Topic Models Using Sparse Autoencoders [Columbia University]
https://arxiv.org/abs/2507.23220
---
[CL] Unveiling Super Experts in Mixture-of-Experts Large Language Models [Meituan & Tsinghua University]
https://arxiv.org/abs/2507.23279
Perhaps real growth begins the moment you withdraw your expectations and dare to let certain people down.
00:00:38 AI Evolution: From "Worker" to "CEO"
00:03:57 The Secret to Taming AI: Where You Put a Sentence in the Prompt Makes a Big Difference
00:07:58 Is What You See Really the Truth? A New Warning from AI
00:12:40 In an Era of Scarce Data, How Do You Piece Together a Complete Map of the World?
00:17:23 AI Is Quietly Studying Psychology, but Seems to Be Learning It Wrong
The five papers covered in this episode:
[LG] MetaAgent: Automatically Constructing Multi-Agent Systems Based on Finite State Machines [University of Wisconsin - Madison]
https://arxiv.org/abs/2507.22606
---
[CL] Where to show Demos in Your Prompt: A Positional Bias of In-Context Learning [University of Maryland]
https://arxiv.org/abs/2507.22887
---
[LG] Representation biases: will we achieve complete understanding by analyzing representations? [Google DeepMind]
https://arxiv.org/abs/2507.22216
---
[LG] AlphaEarth Foundations: An embedding field model for accurate and efficient global mapping from sparse label data [Google DeepMind]
https://arxiv.org/abs/2507.22291
---
[LG] The Incomplete Bridge: How AI Research (Mis)Engages with Psychology [Johns Hopkins University & Rice University & Microsoft Research Asia]
https://arxiv.org/abs/2507.22847
We spend enormous effort designing AI that can beat humans, yet rarely think about how to design a system for ourselves that can beat "human nature".
00:00:33 What Separates Experts from Everyone Else Is How They Allocate Their "Memory Budget"
00:04:16 AI as "Newton": How Do We Find the Formulas by Which Things Grow?
00:08:26 Make AI Not Just Obedient, but Good at Asking Questions
00:12:09 The Art of AI Thinking: How Can It Be Both Fast and Good?
00:18:06 AI Reads People: Twenty Games of Chess and It "Sees Through" You
The five papers covered in this episode:
[LG] Capacity-Constrained Continual Learning [Google DeepMind]
https://arxiv.org/abs/2507.21479
---
[LG] EvoSLD: Automated Neural Scaling Law Discovery With Large Language Models [Peking University & Tsinghua University]
https://arxiv.org/abs/2507.21184
---
[LG] Teaching Language Models To Gather Information Proactively [Microsoft]
https://arxiv.org/abs/2507.21389
---
[LG] TriangleMix: A Lossless and Efficient Attention Pattern for Long Context Prefilling [Microsoft Research]
https://arxiv.org/abs/2507.21526
---
[LG] Learning to Imitate with Less: Efficient Individual Behavior Modeling in Chess [University of Toronto]
https://arxiv.org/abs/2507.21488
When it comes to success, there is always one variable that even the strongest AI struggles to compute, and that variable is hidden in today's story.
00:00:34 Giving AI a Better "Game Plan": More than One Road Leads to Rome
00:05:21 How Do You Make AI Smarter? The Wisdom of "Playing It Safe"
00:09:14 New Wisdom in Car Making: Computing the Most Fuel-Efficient Shape by "Moving Sand"
00:12:31 Does AI Suffer from "Path Dependence" Too? One Simple Move Lets the Old Tree Sprout New Shoots
00:17:01 AI Alchemy: How Do We "Teach" Machines to Obey the Rules of Chemistry?
00:21:50 AI Evolution: From "Rote Memorization" to "Self-Growth"
Papers covered in this episode:
[LG] Flow Matching Policy Gradients [UC Berkeley]
https://arxiv.org/abs/2507.21053
---
[CL] Geometric-Mean Policy Optimization [Microsoft Research]
https://arxiv.org/abs/2507.20673
---
[LG] Geometric Operator Learning with Optimal Transport [California Institute of Technology & Nvidia]
https://arxiv.org/abs/2507.20065
---
[LG] What Can Grokking Teach Us About Learning Under Nonstationarity? [Google DeepMind]
https://arxiv.org/abs/2507.20057
---
[LG] Enhancing Materials Discovery with Valence Constrained Design in Generative Modeling [MIT]
https://arxiv.org/abs/2507.19799
---
[LG] A Survey of Self-Evolving Agents: On Path to Artificial Super Intelligence
https://arxiv.org/abs/2507.21046
When you can clearly tell which "signal" deserves your full commitment, and can build a "clean room" for it to run deep, uninterrupted computation, you are no longer a prisoner of time but a partner dancing with it.