Episode Description
Source: 小宇宙
Have you ever considered that a smarter AI might need to learn not to remember everything, but rather to practice "selective amnesia"? The latest papers we discuss in this episode are full of such counterintuitive insights. Together we'll explore how AI can evolve from merely "watching its mouth" to performing "detox surgery" deep within its own representations, how it can dynamically evolve its problem-solving methodology like a top expert, and even how it can acquire one of humanity's most precious qualities: the self-awareness to know what it doesn't know.
00:00:31 AI "detox": real surgery, or just painkillers?
00:04:49 AI's memory problem: beyond rote memorization, what better options are there?
00:10:33 Your methods need to evolve, too
00:16:14 When AI's memory becomes its burden
00:21:15 Too clever by half: AI needs "self-awareness" too
Papers featured in this episode:
[LG] Detoxifying LLMs via Representation Erasure-Based Preference Optimization
[McGill University & Google DeepMind]
https://arxiv.org/abs/2602.23391
---
[LG] Memory Caching: RNNs with Growing Memory
[Google Research]
https://arxiv.org/abs/2602.24281
---
[LG] EvoX: Meta-Evolution for Automated Discovery
[UC Berkeley]
https://arxiv.org/abs/2602.23413
---
[CL] Do LLMs Benefit From Their Own Words?
[MIT & IBM Research]
https://arxiv.org/abs/2602.24287
---
[LG] RewardUQ: A Unified Framework for Uncertainty-Aware Reward Models
[ETH Zurich]
https://arxiv.org/abs/2602.24040