What truly traps us is the "mental prison" we build for ourselves with our own hands. As long as you remain locked inside it, then no matter how far you flee, how many jobs you change, or how many new people you meet, you are still a prisoner "carrying your cage along the road."
00:01:19 AI's "lopsided student" problem: does mastering math and science really take you anywhere?
00:05:08 Can AI "review its own games"? How to make machines reflect like experts
00:09:19 The "modeling clay" of language: how do we "mold" smarter AI?
00:13:57 A new playbook for AI scientists: don't guess answers, hunt for "surprises"
00:17:42 The secret of AI "long-form reading": how to make machines think in spirals

The five papers covered in this episode:

[LG] Does Math Reasoning Improve General LLM Capabilities? Understanding Transferability of LLM Reasoning [CMU & University of Washington & M-A-P]
https://arxiv.org/abs/2507.00432
---
[LG] ASTRO: Teaching Language Models to Reason by Reflecting and Backtracking In-Context [AI at Meta]
https://arxiv.org/abs/2507.00417
---
[LG] Flexible Language Modeling in Continuous Space with Transformer-based Autoregressive Flows [Apple]
https://arxiv.org/abs/2507.00425
---
[LG] Open-ended Scientific Discovery via Bayesian Surprise [University of Massachusetts Amherst & Allen Institute for AI]
https://arxiv.org/abs/2507.00310
---
[LG] HelixPipe: Efficient Distributed Training of Long Sequence Transformers with Attention Parallel Pipeline Parallelism [National University of Singapore]
https://arxiv.org/abs/2507.00394
Life has no true dead ends.
In this episode of "AI Frontiers for Everyone," we highlight five recent AI papers:

00:00:27 Masters at play: how does AI reach "enlightenment" through games?
00:04:31 "Slow-simmered" vs. "stir-fried" image generation: new approaches to AI art
00:08:51 Has AI learned to "act according to the situation"?
00:13:34 Using the right hammer: a proper instruction manual for AI tools
00:18:08 The wisdom of AI ordering: how to get big results on a small budget

Full paper details for reference:

[LG] SPIRAL: Self-Play on Zero-Sum Games Incentivizes Reasoning via Multi-Agent Multi-Turn Reinforcement Learning [National University of Singapore & A*STAR & Northeastern University]
https://arxiv.org/abs/2506.24119
---
[LG] Transition Matching: Scalable and Flexible Generative Modeling [Weizmann Institute of Science & FAIR at Meta]
https://arxiv.org/abs/2506.23589
---
[LG] Curious Causality-Seeking Agents Learn Meta Causal World [Chinese Academy of Sciences & Peking University]
https://arxiv.org/abs/2506.23068
---
[LG] Use Sparse Autoencoders to Discover Unknown Concepts, Not to Act on Known Concepts [Cornell Tech & UC Berkeley]
https://arxiv.org/abs/2506.23845
---
[LG] BEST-Route: Adaptive LLM Routing with Test-Time Optimal Compute [The University of British Columbia & Microsoft & Pennsylvania State University]
https://arxiv.org/abs/2506.22716
A change we scarcely notice is quietly reshaping how we relate to knowledge and to the world.
[LG] Transformers are Graph Neural Networks [University of Cambridge] arxiv.org
[LG] Why Neural Network Can Discover Symbolic Structures with Gradient-based Training: An Algebraic and Geometric Foundation for Neurosymbolic Reasoning [University of Texas at Austin] arxiv.org
[LG] Performance Prediction for Large Systems via Text-to-Text Regression [Google Research] arxiv.org
[CL] Sequential Diagnosis with Language Models [Microsoft AI] arxiv.org
[LG] Hierarchical Reasoning Model [Sapient Intelligence, Singapore] arxiv.org
[CL] OMEGA: Can LLMs Reason Outside the Box in Math? Evaluating Exploratory, Compositional, and Transformative Generalization [dmodel.ai & UC Berkeley] arxiv.org
[CL] LongWriter-Zero: Mastering Ultra-Long Text Generation via Reinforcement Learning [Singapore University of Technology and Design & Tsinghua University] arxiv.org