AI可可AI生活 - Episode List

[AI Frontiers for Everyone] From a "Smart Slim-Down" to a "Thinking Operating System"

AI可可AI生活

Have you ever wondered why AI can't slip into our phones like a nimble, agile companion? And how can we upgrade our own "forecasting operating system" to make sharper decisions? In this episode, starting from several recent AI papers, we talk about how to put AI on a "smart slim-down", how to get the same job done at one-eighth the cost, and even how to find the "hidden switches" inside AI that make it obediently "transform". We also explore a curious question: why do different AI models "convergently evolve", just like living organisms?

00:00:33 Is AI too "fat" for your phone? Put it on a smart slim-down
00:05:24 Upgrade your "forecasting operating system"
00:11:20 How to get the same job done at 1/8 the cost
00:16:40 AI's hidden switches: how to make it "transform" on command
00:21:35 AI's "convergent evolution": why being smart and "looking smart" are two different things

Papers covered in this episode:
[LG] Hyperloop Transformers [MIT] https://arxiv.org/abs/2604.21254
[AI] Agentic Forecasting using Sequential Bayesian Updating of Linguistic Beliefs [University of British Columbia] https://arxiv.org/abs/2604.18576
[LG] FASTER: Value-Guided Sampling for Fast RL [Stanford University] https://arxiv.org/abs/2604.19730
[LG] ConforNets: Latents-Based Conformational Control in OpenFold3 [Columbia University & Princeton University] https://arxiv.org/abs/2604.18559
[CL] Convergent Evolution: How Different Language Models Learn Similar Number Representations [University of Southern California & UC San Diego] https://arxiv.org/abs/2604.20817
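A side note for tinkerers: the forecasting paper's statistical backbone, sequential Bayesian updating, fits in a few lines. The sketch below is a toy beta-binomial model over made-up binary outcomes; the paper itself updates linguistic beliefs through an LLM, so this shows only the shape of the update loop, not the paper's method.

```python
# Toy sequential Bayesian updating (beta-binomial). The numeric stand-in
# and the outcome stream are assumptions for illustration; the paper
# updates *linguistic* beliefs via an LLM.

def update(alpha: float, beta: float, outcome: int) -> tuple[float, float]:
    """Fold one binary observation into the Beta pseudo-counts."""
    return alpha + outcome, beta + (1 - outcome)

alpha, beta = 1.0, 1.0              # uniform Beta(1, 1) prior
for outcome in [1, 1, 0, 1]:        # hypothetical observed outcomes
    alpha, beta = update(alpha, beta, outcome)
    print(f"posterior mean: {alpha / (alpha + beta):.3f}")
```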

29 min
99+
2 weeks ago

[AI Frontiers for Everyone] The Three Gates of AI Growth: Strict Teachers, Adversaries, and Self-Forgetting

AI可可AI生活

Today we take on an especially interesting question: what is really going on inside AI's "mind"? Starting from several recent papers, we watch how an AI that merely imitates answers gets coached, step by step, from a lopsided student into a rigorous top performer. Then we visit a brutal arena where AIs fight each other, and see how truth can emerge from adversarial pressure. Finally, we discover that a truly intelligent AI must not only learn to dance at the edge of chaos, but also master a higher-order ability we humans are born with: actively "forgetting".

00:00:34 AI as scientist: all answers, no reasoning?
00:06:18 How AI's "top students" are made
00:11:23 Why consensus can be a trap: what we learn from pitting AI against AI
00:17:53 The expert's secret: dancing at the edge of chaos
00:23:26 A smart brain must learn to "play dumb" on purpose

Papers covered in this episode:
[AI] AI scientists produce results without reasoning scientifically [Friedrich Schiller University Jena & Indian Institute of Technology Delhi] https://arxiv.org/abs/2604.18805
[AI] QuantumQA: Enhancing Scientific Reasoning via Physics-Consistent Dataset and Verification-Aware Reinforcement Learning [University of Science and Technology of China] https://arxiv.org/abs/2604.18176
[AI] Refute-or-Promote: An Adversarial Stage-Gated Multi-Agent Review Methodology for High-Precision LLM-Assisted Defect Discovery [A Agarwal] https://arxiv.org/abs/2604.19049
[LG] Generalization at the Edge of Stability [Imperial College London] https://arxiv.org/abs/2604.19740
[LG] Neural Garbage Collection: Learning to Forget while Learning to Reason [Stanford University] https://arxiv.org/abs/2604.18002
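For a feel of the "active forgetting" idea in the last paper, here is a minimal sketch under a strong simplifying assumption: a bounded memory that evicts its lowest-utility entry when full. The fixed utility scores and eviction rule are our own stand-ins; the paper learns its forgetting policy rather than using a heuristic like this.

```python
# Minimal "forget while reasoning" sketch: a bounded memory that drops the
# least useful entry when full. Fixed-heuristic stand-in, not the learned
# mechanism in "Neural Garbage Collection".

class BoundedMemory:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store: dict[str, float] = {}    # item -> utility score

    def add(self, item: str, utility: float) -> None:
        if len(self.store) >= self.capacity:
            weakest = min(self.store, key=self.store.get)
            del self.store[weakest]           # "garbage-collect" it
        self.store[item] = utility

mem = BoundedMemory(capacity=2)
for item, utility in [("lemma A", 0.9), ("scratch note", 0.1), ("lemma B", 0.7)]:
    mem.add(item, utility)
print(mem.store)                              # {'lemma A': 0.9, 'lemma B': 0.7}
```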

30 min
99+
2 weeks ago

[AI Frontiers for Everyone] From Decoupling and Demystification to Thinking in Essentials: Five New Ways for AI to Live

AI可可AI生活

Have you ever wondered whether we can stop AI from "idly waiting" and have it collaborate as efficiently as independent construction crews? When an AI is a "lopsided student", can we teach it to understand the whole world with nothing but a "manual", without rebuilding its brain? In this episode we unpack the ideas in five recent papers: how AI learns to create by "arguing with itself", how it strips away irrelevant "poses" to reach the essence of things, and why we finally have reason to say that the light of scientific theory is reaching into AI's "black box". Ready? Let's set off and explore these five new evolutionary paths for AI!

00:00:37 "Traffic jams" on the AI training ground? Let's live differently
00:06:04 Are we finally about to understand AI's brain?
00:13:19 How to teach a "lopsided" AI to see the whole world
00:19:02 Where is AI's creativity switch hidden?
00:25:16 AI's new way of living: do only what's right, nothing extra

Papers covered in this episode:
[CL] Decoupled DiLoCo for Resilient Distributed Pre-training [Google DeepMind] https://arxiv.org/abs/2604.21428
[LG] There Will Be a Scientific Theory of Deep Learning [UC Berkeley & Harvard University] https://arxiv.org/abs/2604.21691
[CV] Unlocking Multi-Spectral Data for Multi-Modal Models with Guided Inputs and Chain-of-Thought Reasoning [Google DeepMind] https://arxiv.org/abs/2604.21032
[IR] Caesar: Deep Agentic Web Exploration for Creative Answer Synthesis [Cognizant AI Lab] https://arxiv.org/abs/2604.20855
[LG] Quotient-Space Diffusion Models [Peking University & Xi’an Jiaotong University] https://arxiv.org/abs/2604.21809
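The first paper builds on the DiLoCo pattern: workers take many local steps and synchronize only occasionally, which is what lets them stop "idly waiting" on one another. Below is a toy numpy rendering of that outer loop, with made-up gradients and plain parameter averaging standing in for DiLoCo's actual outer optimizer (both are assumptions for illustration).

```python
# Toy DiLoCo-style loop: each worker trains locally for H steps, then all
# workers synchronize once. Plain averaging is a stand-in for DiLoCo's
# outer optimizer; the gradients are fake.
import numpy as np

rng = np.random.default_rng(0)
workers = [np.zeros(4) for _ in range(3)]    # each worker's parameters
lr, H = 0.1, 5                               # local learning rate and steps

for _round in range(2):                      # communication rounds
    for w in range(len(workers)):
        for _ in range(H):                   # H local steps, no communication
            fake_grad = rng.normal(size=4)   # placeholder for a real gradient
            workers[w] -= lr * fake_grad
    mean = np.mean(workers, axis=0)          # the only synchronization point
    workers = [mean.copy() for _ in workers]

print(mean)
```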

31 min
99+
2 weeks ago

[AI Frontiers for Everyone] From Unlocking Visual Understanding and Autonomous Algorithm Discovery to AI's "Involution" and "Hidden Agendas"

AI可可AI生活

In this episode we open several delightful mystery boxes from AI research. You'll find that behind the AI "painter" may hide a "general practitioner", while the AI "engineer" can already invent algorithms on its own that surpass human designs. The flip side of the coin: AI can also sink into meaningless "involution", and may even lie to us to protect its fellow AIs. Finally, we dig into a fundamental question: was the yardstick we use to judge AI wrong from the very start?

00:00:30 The secret of AI image generation: from "painter" to "general practitioner"
00:05:02 Put AI in the engineer's chair: can it do the job?
00:11:09 AI's "involution" trap: keeping a top student from going off the rails
00:15:34 When AI has "its own people", will it betray you for its "buddies"?
00:21:08 Your app's search keeps missing? The problem may be the yardstick

Papers covered in this episode:
[CV] Image Generators are Generalist Vision Learners [Google DeepMind] https://arxiv.org/abs/2604.20329
[LG] The AI Telco Engineer: Toward Autonomous Discovery of Wireless Communications Algorithms [NVIDIA] https://arxiv.org/abs/2604.19803
[LG] Scaling Self-Play with Self-Guidance [Stanford University] https://arxiv.org/abs/2604.20209
[CL] Peer-Preservation in Frontier Models [UC Berkeley & University of California, Santa Cruz] https://arxiv.org/abs/2604.19784
[IR] Semantic Recall for Vector Search [CWI & EPFL & MPI-SWS] https://arxiv.org/abs/2604.20417
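On the "yardstick" question: the conventional metric for vector search is recall@k, which is easy to state in code. The sketch below is that standard yardstick only; the semantic alternative the last paper argues for is not reproduced here.

```python
# Conventional recall@k for vector search: the fraction of truly relevant
# items that appear in the top-k retrieved list. This is the yardstick the
# paper critiques, shown for reference.

def recall_at_k(retrieved: list[int], relevant: set[int], k: int) -> float:
    hits = len(set(retrieved[:k]) & relevant)
    return hits / len(relevant)

print(recall_at_k(retrieved=[4, 9, 1, 7], relevant={9, 7, 3}, k=3))  # 0.333...
```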

27 min
99+
2 weeks ago

[AI Frontiers for Everyone] Elephants, Ants, and Butlers: Decoding the Collaborative Wisdom of AI System Design

AI可可AI生活

Today we talk about AI's little quirks, the ones we love and hate at once. Guided by insights from several recent papers, we ask: why does a genius AI struggle even to fry an egg? When it tackles a hard problem, is it truly thinking or just stumbling around? When we chat with AI, how can we get instant replies instead of awkward silences? More importantly, when AI speaks, does it carry a hidden "cultural accent"? And when we hand the house keys to an AI butler, how do we make sure it never sells us out?

00:00:31 AI's next crossroads is hidden inside the brain
00:06:25 Handed a good method, you still use brute force?
00:12:01 The secret to instant AI replies: when the elephant learns to dance with the ants
00:18:23 AI's "American accent" can't be hidden anymore
00:23:21 Will your AI butler secretly sell you out?

Papers covered in this episode:
[AI] NeuroAI and Beyond: Bridging Between Advances in Neuroscience and Artificial Intelligence [University of Maryland] https://arxiv.org/abs/2604.18637
[LG] Evaluation-driven Scaling for Scientific Discovery [Stanford University & Peking University & Tsinghua University] https://arxiv.org/abs/2604.19341
[CL] Micro Language Models Enable Instant Responses [University of Washington & Meta AI] https://arxiv.org/abs/2604.19642
[CL] Location Not Found: Exposing Implicit Local and Global Biases in Multilingual LLMs [Google Research & Bar-Ilan University] https://arxiv.org/abs/2604.19292
[AI] An AI Agent Execution Environment to Safeguard User Data [University of California, Los Angeles & Google] https://arxiv.org/abs/2604.19657
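One plausible reading of the "elephant dances with ants" pairing is a confidence-gated router: a micro model answers instantly when it is sure, and defers to the large model otherwise. To be clear, this routing rule is our own hypothetical illustration, not the stated mechanism of the micro-LM paper.

```python
# Hypothetical confidence-gated routing: the micro model answers when it
# is confident, else we fall back to the large model. Illustration only,
# not the mechanism in "Micro Language Models Enable Instant Responses".

def respond(query: str, micro_model, large_model, threshold: float = 0.8) -> str:
    answer, confidence = micro_model(query)
    if confidence >= threshold:
        return answer                   # instant path (the "ant")
    return large_model(query)           # slower fallback (the "elephant")

micro = lambda q: ("hi there!", 0.95 if q == "hello" else 0.2)
large = lambda q: f"large-model answer to: {q}"
print(respond("hello", micro, large))           # hi there!
print(respond("explain RoPE", micro, large))    # large-model answer to: ...
```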

28 min
99+
2 weeks ago

[AI Frontiers for Everyone] Teaching AI to Think Twice, Run Good Retrospectives, and Then Completely "Swap Brains"

AI可可AI生活

Have you ever wondered how to teach AI to "learn from its mistakes" the way we do, instead of treating every answer as a one-shot deal? When a group of AIs collaborates, how do we get them to run "retrospectives" like a top team rather than just adding chaos? In this episode we work through five recent papers, watching scientists take AI from the wisdom of "self-correction" to having an "internal memory", and even crossing domains to translate factory-floor problems precisely into mathematical code. Get ready: a brainstorm on how AI learns to "think" starts now!

00:00:34 AI's "mistake notebook": teaching machines to think twice before acting
00:06:11 More heads aren't always better, but smart teams run retrospectives
00:11:32 Why can't your AI "remember things"?
00:17:27 Compute "moonlighting": getting AI chips to excel at jobs outside their lane
00:23:58 The AI "translator": from factory problems to mathematical code

Papers covered in this episode:
[LG] Learning to Correct: Calibrated Reinforcement Learning for Multi-Attempt Chain-of-Thought [University of Michigan] https://arxiv.org/abs/2604.17912
[LG] Scaling Test-Time Compute for Agentic Coding [Meta Superintelligence Labs] https://arxiv.org/abs/2604.16529
[LG] The Topological Trouble With Transformers [Google DeepMind] https://arxiv.org/abs/2604.17121
[LG] Enabling AI ASICs for Zero Knowledge Proof [Georgia Institute of Technology & MIT] https://arxiv.org/abs/2604.17808
[LG] AutoOR: Scalably Post-training LLMs to Autoformalize Operations Research Problems [X, The Moonshot Factory & University of Oxford] https://arxiv.org/abs/2604.16804
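The first paper's setting, multi-attempt chain-of-thought, boils down to a generate-verify-retry loop. Here is a skeletal version with placeholder `generate` and `verify` callables (both hypothetical); the paper's actual contribution is the calibrated RL training of the generator on top of a loop like this, which is not shown.

```python
# Skeleton of a multi-attempt loop: generate, verify, retry with feedback.
# `generate` and `verify` are hypothetical placeholders for an LLM and a
# checker (e.g. a unit test); the paper trains the generator with RL.

def solve(question, generate, verify, max_attempts: int = 3):
    answer, feedback = None, None
    for _ in range(max_attempts):
        answer = generate(question, feedback)    # condition on past feedback
        ok, feedback = verify(question, answer)  # did this attempt pass?
        if ok:
            break
    return answer

# Trivial demo: the "generator" counts up until the checker accepts.
ans = solve("find 3", lambda q, fb: (fb or 0) + 1,
            lambda q, a: (a == 3, a))
print(ans)  # 3
```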

31 min
99+
2 weeks ago

[AI Frontiers for Everyone] From Goal-Driven Learning and Evolving Experience to Group Learning

AI可可AI生活

Have you ever wondered whether AI, too, can fall into the comfort-zone trap of "high-level repetition"? And why, after learning something new, it becomes as "forgetful" as we do? In this episode, through several recent AI papers, we reveal how AI can evolve from a rote-memorizing top student into an intelligent partner that generalizes from one example and even fights as a team, and we explore the secrets that make AI genuinely smarter and more efficient.

00:00:27 Are you improving, or just repeating at a high level?
00:04:49 Why does AI forget what it knew after taking a class?
00:11:08 The secret of doubling speed by letting AI spar with itself
00:16:02 Is there finally hope for your "artificially unintelligent" customer service?
00:22:16 AI's theory of evolution: an efficiency revolution from "pick one of two" to "team battles"

Papers covered in this episode:
[LG] Beyond Distribution Sharpening: The Importance of Task Rewards [Mila] https://arxiv.org/abs/2604.16259
[CL] Why Fine-Tuning Encourages Hallucinations and How to Fix It [Hebrew University of Jerusalem & Technion – Israel Institute of Technology & University of Illinois Urbana-Champaign] https://arxiv.org/abs/2604.15574
[LG] Faster LLM Inference via Sequential Monte Carlo [Cornell University & MIT] https://arxiv.org/abs/2604.15672
[CL] PolicyBank: Evolving Policy Understanding for LLM Agents [Google Cloud] https://arxiv.org/abs/2604.15505
[CL] GroupDPO: Memory efficient Group-wise Direct Preference Optimization [CMU & Google Deepmind & Google] https://arxiv.org/abs/2604.15602
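Since the last paper is a variant of Direct Preference Optimization, here is the vanilla pairwise DPO loss for reference: push the policy's log-probability margin between the chosen and rejected responses beyond the reference model's margin. The group-wise, memory-efficient formulation is GroupDPO's contribution and is not reproduced here.

```python
# Vanilla pairwise DPO loss: -log(sigmoid(beta * margin)), where the
# margin compares the policy's chosen-vs-rejected gap to the reference
# model's gap. GroupDPO's group-wise variant is not shown.
import math

def dpo_loss(logp_w: float, logp_l: float,
             ref_logp_w: float, ref_logp_l: float, beta: float = 0.1) -> float:
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

print(dpo_loss(-12.0, -15.0, -13.0, -14.0))  # loss shrinks as the margin grows
```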

28 min
99+
2 weeks ago

[AI Frontiers for Everyone] From Tactile Dreams and Thought Loops to Experience Transfer: How AI Learns Deep Thinking and Action

AI可可AI生活

Have you ever imagined that teaching AI to "daydream" and rehearse the feel of touch could boost its manipulation skills by 90%? What we call "deep thinking" may, for AI, be nothing more than an efficient "playback loop". In this episode, starting from several recent AI papers, we explore how AI draws on experience across domains like a true expert, see how AI's "First Emperor" forges a powerful action substrate for agents by "unifying weights and measures", and uncover the oft-ignored 98% of the iceberg below the waterline.

00:00:34 Learn to "daydream" to get the job done well
00:05:23 AI's iceberg: the 98% we never see
00:11:19 Is AI's "deep thinking" really just "playback on loop"?
00:16:46 Experts excel at drawing on experience across domains
00:23:38 AI's "First Emperor": unifying the "weights and measures" of agents

Papers covered in this episode:
[RO] Learning Versatile Humanoid Manipulation with Touch Dreaming [CMU] https://arxiv.org/abs/2604.13015
[AI] Dive into Claude Code: The Design Space of Today's and Future AI Agent Systems [Mohamed bin Zayed University of Artificial Intelligence] https://arxiv.org/abs/2604.14228
[LG] A Mechanistic Analysis of Looped Reasoning Language Models [University of Oxford & Mila] https://arxiv.org/abs/2604.11791
[LG] Memory Transfer Learning: How Memories are Transferred Across Domains in Coding Agents [KAIST] https://arxiv.org/abs/2604.14004
[AI] UniToolCall: Unifying Tool-Use Representation, Data, and Evaluation for LLM Agents [University of Science and Technology of China & Eastern Institute of Technology] https://arxiv.org/abs/2604.11557
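The "playback on loop" paper studies looped language models, where one weight-tied block is applied repeatedly so that depth comes from iteration rather than from extra parameters. Below is a toy numpy rendering of that structural idea; a tanh layer stands in for a transformer block, which is an assumption purely for illustration.

```python
# Weight-tied "looped" computation: reuse one block's weights across
# iterations, so depth comes from looping, not from new layers. The tanh
# layer is a stand-in for a transformer block.
import numpy as np

rng = np.random.default_rng(0)
W = 0.5 * rng.normal(size=(8, 8))     # the single shared block's weights

def shared_block(x: np.ndarray) -> np.ndarray:
    return np.tanh(x @ W)

x = rng.normal(size=8)
for _ in range(6):                    # 6 "layers" of depth, 1 set of weights
    x = shared_block(x)
print(np.round(x, 3))
```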

29 min
99+
3 weeks ago

[AI Frontiers for Everyone] AI's Art of Thinking: From Deep Loops and Backward Planning to Self-Evolution

AI可可AI生活

Have you ever wondered what superpowers a truly intelligent AI should have? In this episode we work through five recent AI papers in one sitting. We explore how AI can learn deep thinking through ingenious "loops" rather than by "piling on muscle"; how changing a single training objective teaches it to "reason backward from the future"; and why AI is a "sprint champion" yet keeps dropping the baton in "marathon" tasks. Going further, we reveal the secret of AI "self-evolution": turning its own mistakes into stepping stones, and why "those who achieve great things rely on traces, not memory". Ready? Let's begin this deep dive into AI intelligence!

00:00:45 AI's "inner strength" training manual
00:05:41 Why teaching AI can't focus only on the next step
00:10:24 Why is AI both smart and "unreliable"?
00:14:54 The expert's secret to improvement: turning your own mistakes into stepping stones
00:20:49 Achieving great things with "traces", not memory

Papers covered in this episode:
[LG] Parcae: Scaling Laws For Stable Looped Language Models [University of California, San Diego] https://arxiv.org/abs/2604.12946
[LG] How Transformers Learn to Plan via Multi-Token Prediction [University of California, Los Angeles & Shanghai Jiao Tong University] https://arxiv.org/abs/2604.11912
[LG] LongCoT: Benchmarking Long-Horizon Chain-of-Thought Reasoning [University of Oxford & Lawrence Livermore National Laboratory (LLNL)] https://arxiv.org/abs/2604.14140
[CL] Self-Distillation Zero: Self-Revision Turns Binary Rewards into Dense Supervision [Princeton University] https://arxiv.org/abs/2604.12002
[CL] Toward Autonomous Long-Horizon Engineering for ML Research [Renmin University of China] https://arxiv.org/abs/2604.13018
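The second paper analyzes multi-token prediction, where the model is supervised on the next k tokens rather than only the next one; that shift in training targets is the "don't look only one step ahead" idea. The sketch below shows just the target construction, with toy token IDs and a hypothetical k.

```python
# Multi-token prediction targets: at each position t, supervise the model
# on tokens t+1 .. t+k instead of t+1 alone. Toy token IDs, hypothetical k.

def multi_token_targets(tokens: list[int], k: int) -> list[tuple[int, ...]]:
    return [tuple(tokens[t + 1 : t + 1 + k]) for t in range(len(tokens) - k)]

print(multi_token_targets([5, 7, 2, 9, 4], k=2))
# -> [(7, 2), (2, 9), (9, 4)]
```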

28 min
99+
3 weeks ago

[AI Frontiers for Everyone] Dynamic Switches, Unified Models, and Perturbed Training: AI's Efficiency Revolution

AI可可AI生活

Have you ever thought the smartest decision might be to rule out every wrong option with the least possible effort first? As AI grows ever more talkative, how do we hire it an "efficiency coach"? And to fit a powerful AI into your phone, what unified yet minimal "diet plan" have scientists devised? In this episode, through several recent papers, we explore how AI learns the decision wisdom of "blazing the trail before paving the road", how it cures its "bad sense of direction", and how it masters the "dynamic switch", the highest art of laziness.

00:00:33 The smart person's guide to laziness: the least effort on the most correct path
00:07:16 What to do with a chatty AI? Smart must also mean frugal
00:12:27 AI's "diet plan": fitting a library into your phone
00:17:42 Big models keep getting smarter, so why are they still "lost"?
00:22:45 Why must the most advanced AI learn to be "lazy"?

Papers covered in this episode:
[CL] Blazing the trails before beating the path: Sample-efficient Monte-Carlo planning [INRIA Lille & Google DeepMind] https://arxiv.org/abs/2604.14974
[CL] CROP: Token-Efficient Reasoning in Large Language Models via Regularized Prompt Optimization [Google LLC & Purdue University] https://arxiv.org/abs/2604.14214
[IR] A Unified Model and Document Representation for On-Device Retrieval-Augmented Generation [University of Massachusetts Amherst & Google] https://arxiv.org/abs/2604.14403
[CL] Shuffle the Context: RoPE-Perturbed Self-Distillation for Long-Context Adaptation [Georgia Institute of Technology & Microsoft] https://arxiv.org/abs/2604.14339
[CL] Compressed-Sensing-Guided, Inference-Aware Structured Reduction for Large Language Models [UC Berkeley] https://arxiv.org/abs/2604.14156
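The first paper is about sample-efficient Monte-Carlo planning; the textbook ingredient in that family is an optimistic action-selection rule such as UCB1, sketched below. This is the classic rule only; the paper's "blaze the trails first" strategy refines it in ways not shown here.

```python
# UCB1 action selection, the textbook exploration rule behind many
# Monte-Carlo planners: prefer arms with high average reward plus an
# optimism bonus for being under-sampled. Illustrative only.
import math

def ucb1_pick(counts: list[int], totals: list[float], c: float = 1.4) -> int:
    n_all = sum(counts)
    scores = []
    for n, total in zip(counts, totals):
        if n == 0:
            scores.append(float("inf"))          # try every arm at least once
        else:
            scores.append(total / n + c * math.sqrt(math.log(n_all) / n))
    return max(range(len(scores)), key=scores.__getitem__)

print(ucb1_pick(counts=[10, 3, 0], totals=[7.0, 2.5, 0.0]))  # -> 2 (untried arm)
```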

30 min
99+
3 weeks ago

[AI Frontiers for Everyone] From Behavioral Consistency and Multilingual Advantage to Dynamic Synergy: AI's Cognitive Ascent

AI可可AI生活

Have you ever wondered why an AI "star student" that studies longer actually forgets faster? Or that the best way to make AI better at English might be to teach it other languages? In this episode we unlock the counterintuitive insights of five recent papers. We find that the bottleneck on AI efficiency may be "management" rather than compute, that the cost of talking to AI can be cut to 20% with a simple "dictionary", and that a good AI-simulated world aims not to "look alike" but to "react alike".

00:00:32 The paradox of large-model training: why the longer it learns, the faster it forgets
00:06:02 AI's efficiency bottleneck isn't compute, it's "management"
00:12:33 Want AI to understand English better? Don't feed it only English
00:18:46 How to save 80% of your "phone bill" when talking to AI
00:24:39 Your "close enough" isn't mine: making AI's simulated worlds more reliable

Papers covered in this episode:
[LG] All elementary functions from a single binary operator [Jagiellonian University] https://arxiv.org/abs/2603.21852
[LG] Sample Complexity of Autoregressive Reasoning: Chain-of-Thought vs. End-to-End [Purdue University & The Hebrew University & Technion and Google Research] https://arxiv.org/abs/2604.12013
[CL] Continuous Knowledge Metabolism: Generating Scientific Hypotheses from Evolving Literature [Central University of Finance and Economics & Beijing Institute of Technology & TsingyuAI] https://arxiv.org/abs/2604.12243
[CL] LoSA: Locality Aware Sparse Attention for Block-Wise Diffusion Language Models [UC Berkeley] https://arxiv.org/abs/2604.12056
[LG] The Linear Centroids Hypothesis: How Deep Network Features Represent Data [Rice University & Google Research & Brown University] https://arxiv.org/abs/2604.11962

30 min
99+
3 weeks ago

[AI Frontiers for Everyone] From Genesis Blocks and the Cost of Thought to Knowledge Metabolism: How Does AI "Think"?

AI可可AI生活

Have you ever imagined that an entire scientific calculator might need only two keys? Or that AI's trick for laziness is finishing 90% of the work with 20% of the effort? Some of the latest research is refreshing our understanding of intelligence, efficiency, and knowledge from exactly these angles. Today we look at how AI builds the whole mathematical world from a single "genesis block", how it reads its own "brain circuits" as if running a CT scan, and whether process or outcome is the real key to learning. Get ready: a storm of ideas starts now!

00:00:36 Your scientific calculator really only needs two keys
00:05:01 When learning a skill, which matters more: process or outcome?
00:13:05 How to "see" the future of knowledge like an expert
00:19:31 The art of AI laziness: why 20% of the work yields 90% of the result
00:25:08 A CT scan of the AI brain: a clearer wiring diagram

Papers covered in this episode:
[LG] All elementary functions from a single binary operator [Jagiellonian University] https://arxiv.org/abs/2603.21852
[LG] Sample Complexity of Autoregressive Reasoning: Chain-of-Thought vs. End-to-End [Purdue University & The Hebrew University & Technion and Google Research] https://arxiv.org/abs/2604.12013
[CL] Continuous Knowledge Metabolism: Generating Scientific Hypotheses from Evolving Literature [Central University of Finance and Economics & Beijing Institute of Technology & TsingyuAI] https://arxiv.org/abs/2604.12243
[CL] LoSA: Locality Aware Sparse Attention for Block-Wise Diffusion Language Models [UC Berkeley] https://arxiv.org/abs/2604.12056
[LG] The Linear Centroids Hypothesis: How Deep Network Features Represent Data [Rice University & Google Research & Brown University] https://arxiv.org/abs/2604.11962
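For the "20% of the work, 90% of the result" theme, the simplest relative of locality-aware sparse attention is a block-local attention mask, where each query may attend only within its own block. The toy mask builder below shows the locality idea only; LoSA's actual sparsity pattern is more sophisticated and is not reproduced here.

```python
# Toy block-local attention mask: query i may attend to key j only when
# both fall in the same block. Shows the locality idea only, not LoSA's
# real sparsity pattern.
import numpy as np

def block_local_mask(seq_len: int, block: int) -> np.ndarray:
    blk = np.arange(seq_len) // block
    return blk[:, None] == blk[None, :]      # True = attention allowed

print(block_local_mask(6, 3).astype(int))
# [[1 1 1 0 0 0]
#  [1 1 1 0 0 0]
#  [1 1 1 0 0 0]
#  [0 0 0 1 1 1]
#  [0 0 0 1 1 1]
#  [0 0 0 1 1 1]]
```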

31 min
99+
3 weeks ago
