The 29 papers in this episode:
[00:23] ⚡ Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention
[01:10] 🤖 Learning Getting-Up Policies for Real-World Humanoid Robots
[01:55] 🧠 ReLearn: Unlearning via Learning for Large Language Models
[02:35] 💻 SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?
[03:21] 🌐 HermesFlow: Seamlessly Closing the Gap in Multimodal Understanding and Generation
[03:58] 🧠 How Do LLMs Acquire New Knowledge? A Knowledge Circuits Perspective on Continual Pre-Training
[04:33] 🤖 SURGE: On the Potential of Large Language Models as General-Purpose Surrogate Code Executors
[05:12] 🔧 Diffusion-Sharpening: Fine-tuning Diffusion Models with Denoising Trajectory Sharpening
[05:55] 🧠 I Think, Therefore I Diffuse: Enabling Multimodal In-Context Reasoning in Diffusion Models
[06:38] 🔧 SAFE-SQL: Self-Augmented In-Context Learning with Fine-grained Example Selection for Text-to-SQL
[07:25] 🧠 CRANE: Reasoning with constrained LLM generation
[08:07] 🧠 Intuitive physics understanding emerges from self-supervised pretraining on natural videos
[08:46] 🐦 Cuckoo: An IE Free Rider Hatched by Massive Nutrition in LLM's Nest
[09:22] 🧠 Dyve: Thinking Fast and Slow for Dynamic Process Verification
[10:06] 🧠 PhysReason: A Comprehensive Benchmark towards Physics-Based Reasoning
[10:53] 🤖 System Message Generation for User Preferences using Open-Source Models
[11:38] 🎥 video-SALMONN-o1: Reasoning-enhanced Audio-visual Large Language Model
[12:33] 🧠 Building A Proof-Oriented Programmer That Is 64% Better Than GPT-4o Under Data Scarcity
[13:11] 🤖 Memory, Benchmark & Robots: A Benchmark for Solving Complex Tasks with Reinforcement Learning
[13:52] 🤖 MagicArticulate: Make Your 3D Models Articulation-Ready
[14:37] 🤖 Talk Structurally, Act Hierarchically: A Collaborative Framework for LLM Multi-Agent Systems
[15:21] 🧠 One Example Shown, Many Concepts Known! Counterexample-Driven Conceptual Reasoning in Mathematical LLMs
[16:03] 🤖 Can a Single Model Master Both Multi-turn Conversations and Tool Use? CALM: A Unified Conversational Agentic Language Model
[16:40] 🚀 Better Embeddings with Coupled Adam
[17:18] 🧐 Show Me the Work: Fact-Checkers' Requirements for Explainable Automated Fact-Checking
[17:56] 🧪 Towards Data-Efficient Pretraining for Atomic Property Prediction
[18:46] 🌀 The Mirage of Model Editing: Revisiting Evaluation in the Wild
[19:31] 🧮 Large Language Models and Mathematical Reasoning Failures
[20:11] 📊 Language Complexity Measurement as a Noisy Zero-Shot Proxy for Evaluating LLM Performance

The 21 papers in this episode:
[00:22] 🌐 Region-Adaptive Sampling for Diffusion Transformers
[01:05] 🎥 Step-Video-T2V Technical Report: The Practice, Challenges, and Future of Video Foundation Model
[01:48] 🌊 Large Language Diffusion Models
[02:31] 🧠 ZeroBench: An Impossible Visual Benchmark for Contemporary Large Multimodal Models
[03:15] 🌟 MM-RLHF: The Next Step Forward in Multimodal LLM Alignment
[03:58] 🖼 Precise Parameter Localization for Textual Generation in Diffusion Models
[04:40] 🧠 Diverse Inference and Verification for Advanced Reasoning
[05:22] 🧬 DarwinLM: Evolutionary Structured Pruning of Large Language Models
[06:02] 📈 AdaPTS: Adapting Univariate Foundation Models to Probabilistic Multivariate Time Series Forecasting
[06:40] 🖼 ImageRAG: Dynamic Image Retrieval for Reference-Guided Image Generation
[07:23] 🤖 We Can't Understand AI Using our Existing Vocabulary
[08:03] 📊 FoNE: Precise Single-Token Number Embeddings via Fourier Features
[08:53] 🌍 Small Models, Big Impact: Efficient Corpus and Graph-Based Adaptation of Small Multilingual Language Models for Low-Resource Languages
[09:41] 🔓 Jailbreaking to Jailbreak
[10:23] 🤖 STMA: A Spatio-Temporal Memory Agent for Long-Horizon Embodied Task Planning
[11:05] 📊 Text-guided Sparse Voxel Pruning for Efficient 3D Visual Grounding
[11:41] ⚡ MRS: A Fast Sampler for Mean Reverting Diffusion based on ODE and SDE Solvers
[12:26] 🚗 V2V-LLM: Vehicle-to-Vehicle Cooperative Autonomous Driving with Multi-Modal Large Language Models
[13:06] 🎵 CLaMP 3: Universal Music Information Retrieval Across Unaligned Modalities and Unseen Languages
[13:49] 🧩 Cluster and Predict Latents Patches for Improved Masked Image Modeling
[14:31] 🧬 Agentic End-to-End De Novo Protein Design for Tailored Dynamics Using a Language Diffusion Model

The 5 papers in this episode:
[00:54] TOP1 (🔥121) | 🤔 Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time Scaling
[03:41] TOP2 (🔥119) | 🚀 InfiniteHiP: Extending Language Model Context Up to 3 Million Tokens on a Single GPU
[06:11] TOP3 (🔥117) | 💼 Expect the Unexpected: FailSafe Long Context QA for Finance
[08:23] TOP4 (🔥104) | 🦜 The Stochastic Parrot on LLM's Shoulder: A Summative Assessment of Physical Concept Understanding
[10:40] TOP5 (🔥100) | 🧠 Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach

The 18 papers in this episode:
[00:21] 🚀 InfiniteHiP: Extending Language Model Context Up to 3 Million Tokens on a Single GPU
[01:07] 🖼 Skrr: Skip and Re-use Text Encoder Layers for Memory Efficient Text-to-Image Generation
[01:49] 🧠 An Open Recipe: Adapting Language-Specific LLMs to a Reasoning Model in One Day via Model Merging
[02:31] 📚 SelfCite: Self-Supervised Alignment for Context Attribution in Large Language Models
[03:14] 🐕 Can this Model Also Recognize Dogs? Zero-Shot Model Search from Weights
[03:56] 🌐 Exploring the Potential of Encoder-free Architectures in 3D LMMs
[04:39] 🎭 CoSER: Coordinating LLM-Based Persona Simulation of Established Roles
[05:26] 🌐 TripoSG: High-Fidelity 3D Shape Synthesis using Large-Scale Rectified Flow Models
[06:09] 🤖 EmbodiedBench: Comprehensive Benchmarking Multi-modal Large Language Models for Vision-Driven Embodied Agents
[07:00] 🌪 Typhoon T1: An Open Thai Reasoning Model
[07:54] 🤖 Logical Reasoning in Large Language Models: A Survey
[08:36] 🧠 MME-CoT: Benchmarking Chain-of-Thought in Large Multimodal Models for Reasoning Quality, Robustness, and Efficiency
[09:23] 🧠 CoT-Valve: Length-Compressible Chain-of-Thought Tuning
[10:11] 🤖 SQuARE: Sequential Question Answering Reasoning Engine for Enhanced Chain-of-Thought in Large Language Models
[10:52] 🌐 mmE5: Improving Multimodal Multilingual Embeddings via High-quality Synthetic Data
[11:36] 🦜 The Stochastic Parrot on LLM's Shoulder: A Summative Assessment of Physical Concept Understanding
[12:18] 🤖 DexTrack: Towards Generalizable Neural Tracking Control for Dexterous Manipulation from Human References
[13:00] 🔍 3CAD: A Large-Scale Real-World 3C Product Dataset for Unsupervised Anomaly Detection

The 20 papers in this episode:
[00:23] 🌍 BenchMAX: A Comprehensive Multilingual Evaluation Suite for Large Language Models
[01:08] 📄 TextAtlas5M: A Large-scale Dataset for Dense Text Image Generation
[01:48] 🎥 Light-A-Video: Training-free Video Relighting via Progressive Light Fusion
[02:36] 🎥 CineMaster: A 3D-Aware and Controllable Framework for Cinematic Text-to-Video Generation
[03:16] 🖥 WorldGUI: Dynamic Testing for Comprehensive Desktop GUI Automation
[04:06] ⚡ LASP-2: Rethinking Sequence Parallelism for Linear Attention and Its Hybrid
[04:45] 🧠 TransMLA: Multi-head Latent Attention Is All You Need
[05:31] 💼 Fino1: On the Transferability of Reasoning Enhanced LLMs to Finance
[06:23] 📏 Distillation Scaling Laws
[07:02] 🚀 Ignore the KL Penalty! Boosting Exploration on Critical Tokens to Enhance RL Fine-Tuning
[07:52] 🌍 SARChat-Bench-2M: A Multi-Task Vision-Language Benchmark for SAR Image Interpretation
[08:25] 🧠 LLM Pretraining with Continuous Concepts
[09:09] 🎭 Animate Anyone 2: High-Fidelity Character Image Animation with Environment Affordance
[09:52] 🔍 NoLiMa: Long-Context Evaluation Beyond Literal Matching
[10:39] 🧠 Mediator: Memory-efficient LLM Merging with Less Parameter Conflicts and Uncertainty Based Routing
[11:15] 📚 Towards Trustworthy Retrieval Augmented Generation for Large Language Models: A Survey
[11:58] 🎥 Next Block Prediction: Video Generation via Semi-Autoregressive Modeling
[12:43] 🔄 DPO-Shift: Shifting the Distribution of Direct Preference Optimization
[13:28] 🧠 LLM Modules: Knowledge Transfer from a Large to a Small Model using Enhanced Cross-Attention
[14:15] 🛡 MetaSC: Test-Time Safety Specification Optimization for Language Models

The 21 papers in this episode:
[00:25] 🧠 Competitive Programming with Large Reasoning Models
[01:03] 🧠 CodeI/O: Condensing Reasoning Patterns via Code Input-Output Prediction
[01:47] 🎥 Magic 1-For-1: Generating One Minute Video Clips within One Minute
[02:27] 🧠 Teaching Language Models to Critique via Reinforcement Learning
[03:09] 💼 Expect the Unexpected: FailSafe Long Context QA for Finance
[03:49] 🌍 Scaling Pre-training to One Hundred Billion Data for Vision Language Models
[04:24] 🧠 LLMs Can Easily Learn to Reason from Demonstrations. Structure, not content, is what matters!
[05:07] 📈 Enhancing Financial Time-Series Forecasting with Retrieval-Augmented Large Language Models
[05:50] 📄 Éclair -- Extracting Content and Layout with Integrated Reading Order for Documents
[06:34] 🛠 Hephaestus: Improving Fundamental Agent Capabilities of Large Language Models through Continual Pre-Training
[07:15] 🛠 CAD-Editor: A Locate-then-Infill Framework with Automated Training Data Synthesis for Text-Based CAD Editing
[08:10] 🎥 Enhance-A-Video: Better Generated Video for Free
[08:49] 🌍 NatureLM: Deciphering the Language of Nature for Scientific Discovery
[09:34] 🦎 Forget What You Know about LLMs Evaluations - LLMs are Like a Chameleon
[10:22] 🎥 VidCRAFT3: Camera, Object, and Lighting Control for Image-to-Video Generation
[11:01] 📹 CoS: Chain-of-Shot Prompting for Long Video Understanding
[11:42] 🧩 Mask-Enhanced Autoregressive Prediction: Pay Less Attention to Learn More
[12:28] 🎤 FocalCodec: Low-Bitrate Speech Coding via Focal Modulation Networks
[13:09] 🕵 Auditing Prompt Caching in Language Model APIs
[13:49] 💎 Gemstones: A Model Suite for Multi-Faceted Scaling Laws
[14:32] 🧠 Skill Expansion and Composition in Parameter Space

The 21 papers in this episode:
[00:25] 🤖 SynthDetoxM: Modern LLMs are Few-Shot Parallel Detoxification Data Annotators
[01:10] 🧠 Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning
[01:55] 🤔 Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time Scaling
[02:38] ⚡ Lossless Acceleration of Large Language Models with Hierarchical Drafting based on Temporal Locality in Speculative Decoding
[03:19] 🚀 Show-o Turbo: Towards Accelerated Unified Multimodal Understanding and Generation
[03:57] 🤖 Training Language Models for Social Deduction with Multi-Agent Reinforcement Learning
[04:38] 🧠 ReasonFlux: Hierarchical LLM Reasoning via Scaling Thought Templates
[05:28] 🌐 EVEv2: Improved Baselines for Encoder-Free Vision-Language Models
[06:11] 🧠 LM2: Large Memory Models
[06:57] 🧠 The Hidden Life of Tokens: Reducing Hallucination of Large Vision-Language Models via Visual Information Steering
[07:50] 🪆 Matryoshka Quantization
[08:35] 🎥 Lumina-Video: Efficient and Flexible Video Generation with Multi-scale Next-DiT
[09:22] 🎥 History-Guided Video Diffusion
[10:12] 🎥 CustomVideoX: 3D Reference Attention Driven Dynamic Adaptation for Zero-Shot Customized Video Diffusion Transformers
[10:59] ⚡ APE: Faster and Longer Context-Augmented Generation via Adaptive Parallel Encoding
[11:38] ⏱ Efficient-vDiT: Efficient Video Diffusion Transformers With Attention Tile
[12:21] 🤖 MetaChain: A Fully-Automated and Zero-Code Framework for LLM Agents
[13:03] 🚀 Steel-LLM: From Scratch to Open Source -- A Personal Journey in Building a Chinese-Centric LLM
[13:47] 🧠 The Curse of Depth in Large Language Models
[14:24] 🎨 DreamDPO: Aligning Text-to-3D Generation with Human Preferences via Direct Preference Optimization
[15:14] 🎨 Dual Caption Preference Optimization for Diffusion Models

The 21 papers in this episode:
[00:22] 🎥 VideoRoPE: What Makes for Good Video Rotary Position Embedding?
[01:07] 🎥 Fast Video Generation with Sliding Tile Attention
[01:54] 🎥 Goku: Flow Based Video Generative Foundation Models
[02:35] 🌍 AuraFusion360: Augmented Unseen Region Alignment for Reference-based 360° Unbounded Scene Inpainting
[03:19] 🔢 QuEST: Stable Training of LLMs with 1-Bit Weights and Activations
[03:57] 🛡 DuoGuard: A Two-Player RL-Driven Framework for Multilingual LLM Guardrails
[04:40] 🧠 Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach
[05:28] 🎯 Agency Is Frame-Dependent
[06:04] 🎥 FlashVideo: Flowing Fidelity to Detail for Efficient High-Resolution Video Generation
[06:46] 📊 Linear Correlation in LM's Compositional Generalization and Hallucination
[07:32] 🧠 Generating Symbolic World Models via Test-time Scaling of Large Language Models
[08:09] 📱 On-device Sora: Enabling Diffusion-Based Text-to-Video Generation for Mobile Devices
[08:51] ⚡ CMoE: Fast Carving of Mixture-of-Experts for Efficient LLM Inference
[09:32] 🧩 Scaling Laws in Patchification: An Image Is Worth 50,176 Tokens And More
[10:20] 🔄 Step Back to Leap Forward: Self-Backtracking for Boosting Reasoning of Language Models
[11:06] 🧠 CodeSteer: Symbolic-Augmented Language Models via Code/Text Guidance
[11:50] 🧩 No Task Left Behind: Isotropic Model Merging with Common and Task-Specific Subspaces
[12:39] 🌓 YINYANG-ALIGN: Benchmarking Contradictory Objectives and Proposing Multi-Objective Optimization based DPO for Text-to-Image Alignment
[13:20] 🌐 QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation
[14:02] 🧠 ARR: Question Answering with Large Language Models via Analyzing, Retrieving, and Reasoning
[14:48] 🤖 MEETING DELEGATE: Benchmarking LLMs on Attending Meetings on Our Behalf

The 5 papers in this episode:
[00:39] TOP1 (🔥162) | 🤖 OmniHuman-1: Rethinking the Scaling-Up of One-Stage Conditioned Human Animation Models
[02:42] TOP2 (🔥137) | 🤖 SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model
[04:42] TOP3 (🔥108) | 🤔 The Differences Between Direct Alignment Algorithms are a Blur
[06:27] TOP4 (🔥93) | 🧠 s1: Simple test-time scaling
[08:14] TOP5 (🔥53) | 💡 Process Reinforcement through Implicit Rewards

The 21 papers in this episode:
[00:24] 🔄 Analyze Feature Flow to Enhance Interpretation and Steering in Language Models
[01:03] 🤖 UltraIF: Advancing Instruction Following from the Wild
[01:40] 🎥 DynVFX: Augmenting Real Videos with Dynamic Content
[02:16] 🌐 Ola: Pushing the Frontiers of Omni-Modal Language Model with Progressive Modality Alignment
[02:51] 🏃 MotionLab: Unified Human Motion Generation and Editing via the Motion-Condition-Motion Paradigm
[03:31] 🤖 Great Models Think Alike and this Undermines AI Oversight
[04:07] 📚 MAGA: MAssive Genre-Audience Reformulation to Pretraining Corpus Expansion
[04:47] 🏆 Gold-medalist Performance in Solving Olympiad Geometry with AlphaGeometry2
[05:25] 🤖 ScoreFlow: Mastering LLM Agent Workflows via Score-based Preference Optimization
[06:07] 🎙 Llasa: Scaling Train-Time and Inference-Time Compute for Llama-based Speech Synthesis
[06:51] 🎥 MotionCanvas: Cinematic Shot Design with Controllable Image-to-Video Generation
[07:38] 📊 ChartCitor: Multi-Agent Framework for Fine-Grained Chart Visual Attribution
[08:18] 🧠 BOLT: Bootstrap Long Chain-of-Thought in Language Models without Distillation
[09:01] 🔄 Beyond Prompt Content: Enhancing LLM Performance via Content-Format Integrated Prompt Optimization
[09:45] 🌀 Weak-to-Strong Diffusion with Reflection
[10:26] 🤖 PlotGen: Multi-Agent LLM-based Scientific Data Visualization via Multimodal Feedback
[11:04] 🔧 Enhancing Code Generation for Low-Resource Languages: No Silver Bullet
[11:48] 🔓 Speak Easy: Eliciting Harmful Jailbreaks from LLMs with Simple Interactions
[12:22] 🤖 PILAF: Optimal Human Preference Sampling for Reward Modeling
[13:05] 🎥 Towards Physical Understanding in Video Generation: A 3D Point Regularization Approach
[13:47] 🤖 Learning Real-World Action-Video Dynamics with Heterogeneous Masked Autoregression

The 10 papers in this episode:
[00:26] 🤖 SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model
[01:08] 🌐 TwinMarket: A Scalable Behavioral and Social Simulation for Financial Markets
[01:45] 🧠 Demystifying Long Chain-of-Thought Reasoning in LLMs
[02:23] 🧠 LIMO: Less is More for Reasoning
[03:15] 🧠 Boosting Multimodal Reasoning with MCTS-Automated Structured Thinking
[04:04] 🧠 A Probabilistic Inference Approach to Inference-Time Scaling of LLMs using Particle-Based Monte Carlo Methods
[04:47] 🔓 Jailbreaking with Universal Multi-Prompts
[05:25] 🎨 LayerTracer: Cognitive-Aligned Layered SVG Synthesis via Diffusion Transformer
[06:27] 🧠 Token Assorted: Mixing Latent and Text Tokens for Improved Language Model Reasoning
[07:09] 🧠 On Teacher Hacking in Language Model Distillation

The 9 papers in this episode:
[00:25] ⚡ Inverse Bridge Matching Distillation
[01:02] 🎥 VideoJAM: Joint Appearance-Motion Representations for Enhanced Motion Generation in Video Models
[01:44] 🤖 ACECODER: Acing Coder RL via Automated Test-Case Synthesis
[02:25] 🧠 QLASS: Boosting Language Agent Inference via Q-Guided Stepwise Search
[03:09] 📉 Can LLMs Maintain Fundamental Abilities under KV Cache Compression?
[03:56] 🧠 Satori: Reinforcement Learning with Chain-of-Action-Thought Enhances LLM Reasoning via Autoregressive Search
[04:46] 🖼 Generating Multi-Image Synthetic Data for Text-to-Image Customization
[05:31] 🤔 Rethinking Mixture-of-Agents: Is Mixing Different Large Language Models Beneficial?
[06:13] 🎯 Concept Steerers: Leveraging K-Sparse Autoencoders for Controllable Generations

【Follow Us】
You can also find us on the following platform for more beyond the podcast:
小红书: AI速递