Each of our lives is filled with all kinds of "stimuli": a critical remark from the boss, a complaint from a partner, or even just a negative comment on social media. But what determines who we become and what kind of life we lead is not the stimuli themselves; it is how we choose to use that precious "space" to respond.
00:01:19 When masters face off, is it the "brain" that wins, or the "library"?
00:05:35 AI "self-reflection": how do we get a machine to think like a master?
00:10:31 AI's "sixth sense": hearing the unspoken meaning behind the clues
00:16:06 AI "crime scene investigation": can we reconstruct what you said to the AI?
00:20:50 Does your "teammate" truly understand, or just pretend to? A finding from AI debates
The five papers covered in this episode:
[CL] Frustratingly Simple Retrieval Improves Challenging, Reasoning-Intensive Benchmarks [Allen Institute for AI & University of Southern California & University of Washington] https://arxiv.org/abs/2507.01297
---
[LG] Test-Time Scaling with Reflective Generative Model [MetaStone-AI & USTC] https://arxiv.org/abs/2507.01951
---
[LG] GradMetaNet: An Equivariant Architecture for Learning on Gradients [University of Oxford & Technion] https://arxiv.org/abs/2507.01649
---
[LG] GPT, But Backwards: Exactly Inverting Language Model Outputs [University of Manchester & Mohamed bin Zayed University of Artificial Intelligence] https://arxiv.org/abs/2507.01693
---
[CL] The Thin Line Between Comprehension and Persuasion in LLMs [Microsoft & The University of York] https://arxiv.org/abs/2507.01936
What truly traps us is the "mental prison" we build for ourselves with our own hands. As long as you remain locked inside it, no matter how far you run, how many jobs you change, or how many new people you meet, you are still just a prisoner traveling with your cage.
00:01:19 AI's "lopsided student" problem: does acing math and science really prepare you for everything?
00:05:08 Can AI "review its own games"? On getting machines to think like masters
00:09:19 Language as "modeling clay": how do we "mold" a smarter AI?
00:13:57 The AI scientist's new playbook: don't guess answers, hunt for "surprises"
00:17:42 The secret of AI "long-form reading": how to make a machine think like a helix?
The five papers covered in this episode:
[LG] Does Math Reasoning Improve General LLM Capabilities? Understanding Transferability of LLM Reasoning [CMU & University of Washington & M-A-P] https://arxiv.org/abs/2507.00432
---
[LG] ASTRO: Teaching Language Models to Reason by Reflecting and Backtracking In-Context [AI at Meta] https://arxiv.org/abs/2507.00417
---
[LG] Flexible Language Modeling in Continuous Space with Transformer-based Autoregressive Flows [Apple] https://arxiv.org/abs/2507.00425
---
[LG] Open-ended Scientific Discovery via Bayesian Surprise [University of Massachusetts Amherst & Allen Institute for AI] https://arxiv.org/abs/2507.00310
---
[LG] HelixPipe: Efficient Distributed Training of Long Sequence Transformers with Attention Parallel Pipeline Parallelism [National University of Singapore] https://arxiv.org/abs/2507.00394
In life, there are no true dead ends.
In this episode of "AI Frontiers Everyone Can Understand", we highlight five recent AI papers:
00:00:27 Masters at play: how does AI reach "enlightenment" through games?
00:04:31 "Slow-simmered" vs. "stir-fried" images: new thinking in AI image generation
00:08:51 Has AI learned to "read the situation"?
00:13:34 Use the right hammer: a proper instruction manual for AI tools
00:18:08 The wisdom of smart ordering: getting big results on a small budget
Full paper details for reference:
[LG] SPIRAL: Self-Play on Zero-Sum Games Incentivizes Reasoning via Multi-Agent Multi-Turn Reinforcement Learning [National University of Singapore & A*STAR & Northeastern University] https://arxiv.org/abs/2506.24119
---
[LG] Transition Matching: Scalable and Flexible Generative Modeling [Weizmann Institute of Science & FAIR at Meta] https://arxiv.org/abs/2506.23589
---
[LG] Curious Causality-Seeking Agents Learn Meta Causal World [Chinese Academy of Sciences & Peking University] https://arxiv.org/abs/2506.23068
---
[LG] Use Sparse Autoencoders to Discover Unknown Concepts, Not to Act on Known Concepts [Cornell Tech & UC Berkeley] https://arxiv.org/abs/2506.23845
---
[LG] BEST-Route: Adaptive LLM Routing with Test-Time Optimal Compute [The University of British Columbia & Microsoft & Pennsylvania State University] https://arxiv.org/abs/2506.22716
A change we barely notice is reshaping the way we relate to knowledge and to the world.
[LG] Transformers are Graph Neural Networks [University of Cambridge] arxiv.org
[LG] Why Neural Network Can Discover Symbolic Structures with Gradient-based Training: An Algebraic and Geometric Foundation for Neurosymbolic Reasoning [University of Texas at Austin] arxiv.org
[LG] Performance Prediction for Large Systems via Text-to-Text Regression [Google Research] arxiv.org
[CL] Sequential Diagnosis with Language Models [Microsoft AI] arxiv.org
[LG] Hierarchical Reasoning Model [Sapient Intelligence, Singapore] arxiv.org