In this episode of "人人能懂的AI前沿" (AI Frontiers Everyone Can Understand), we highlight five recent AI papers:

00:00:27 Masters at play: how does AI reach "enlightenment" in games?
00:04:31 "Slow simmering" vs. "quick stir-frying" images: a new approach to AI image generation
00:08:51 Has AI learned to "act according to the situation"?
00:13:34 Using the right hammer: a proper user manual for AI tools
00:18:08 The wisdom of smart ordering: how AI gets big results on a small budget

Detailed paper information for reference:

[LG] SPIRAL: Self-Play on Zero-Sum Games Incentivizes Reasoning via Multi-Agent Multi-Turn Reinforcement Learning [National University of Singapore & A*STAR & Northeastern University]
https://arxiv.org/abs/2506.24119
---
[LG] Transition Matching: Scalable and Flexible Generative Modeling [Weizmann Institute of Science & FAIR at Meta]
https://arxiv.org/abs/2506.23589
---
[LG] Curious Causality-Seeking Agents Learn Meta Causal World [Chinese Academy of Sciences & Peking University]
https://arxiv.org/abs/2506.23068
---
[LG] Use Sparse Autoencoders to Discover Unknown Concepts, Not to Act on Known Concepts [Cornell Tech & UC Berkeley]
https://arxiv.org/abs/2506.23845
---
[LG] BEST-Route: Adaptive LLM Routing with Test-Time Optimal Compute [The University of British Columbia & Microsoft & Pennsylvania State University]
https://arxiv.org/abs/2506.22716
A change we scarcely notice is reshaping the way we relate to knowledge and to the world.
[LG] Transformers are Graph Neural Networks [University of Cambridge] arxiv.org
[LG] Why Neural Network Can Discover Symbolic Structures with Gradient-based Training: An Algebraic and Geometric Foundation for Neurosymbolic Reasoning [University of Texas at Austin] arxiv.org
[LG] Performance Prediction for Large Systems via Text-to-Text Regression [Google Research] arxiv.org
[CL] Sequential Diagnosis with Language Models [Microsoft AI] arxiv.org
The moment you give a "new answer," you'll find that the problem that tormented you over and over has simply vanished.
[LG] Robust Reward Modeling via Causal Rubrics [Google DeepMind] https://arxiv.org/abs/2506.16507
[LG] Latent Concept Disentanglement in Transformer-based Language Models [Purdue University & University of Southern California] https://arxiv.org/abs/2506.16975
[CL] When Does Divide and Conquer Work for Long Context LLM? A Noise Decomposition Framework [University of Chicago & Together AI] https://arxiv.org/abs/2506.16411
[LG] On the Theoretical Understanding of Identifiable Sparse Autoencoders and Beyond [Peking University & MIT] https://arxiv.org/abs/2506.15963
[CL] EvoLM: In Search of Lost Language Model Training Dynamics [Harvard & Stanford & EPFL] https://arxiv.org/abs/2506.16029