
Episode list: HuggingFace 每日AI论文速递 (HuggingFace Daily AI Paper Digest) - EarsOnMe - Curated podcasts, a perfect match at first listen

2025.12.24 | Semantic blueprints speed up video generation; layer-by-layer dissection forges strong policies

HuggingFace Daily AI Paper Digest

The 15 papers in this episode:
[00:19] 🎬 SemanticGen: Video Generation in Semantic Space
[01:01] 🔍 Bottom-up Policy Optimization: Your Language Model Policy Secretly Contains Internal Policies
[01:48] 🧠 SpatialTree: How Spatial Abilities Branch Out in MLLMs
[02:23] 🤖 LongVideoAgent: Multi-Agent Reasoning with Long Videos
[03:06] 🧠 MemEvolve: Meta-Evolution of Agent Memory Systems
[03:46] 🔍 Step-DeepResearch Technical Report
[04:22] 🎧 SAM Audio: Segment Anything in Audio
[05:00] 🚀 INTELLECT-3: Technical Report
[05:30] 🔍 FaithLens: Detecting and Explaining Faithfulness Hallucination
[06:07] 🧠 Reinforcement Learning for Self-Improving Agent with Skill Library
[06:53] 📊 QuantiPhy: A Quantitative Benchmark Evaluating Physical Reasoning Abilities of Vision-Language Models
[07:38] 🔊 Simulstream: Open-Source Toolkit for Evaluation and Demonstration of Streaming Speech-to-Text Translation Systems
[08:18] 🧠 Active Intelligence in Video Avatars via Closed-loop World Modeling
[08:55] 🔬 Multi-LLM Thematic Analysis with Dual Reliability Metrics: Combining Cohen's Kappa and Semantic Similarity for Qualitative Research Validation
[09:32] ⚠ Toxicity Ahead: Forecasting Conversational Derailment on GitHub
[Follow us] You can also find us on the platforms below for more beyond the podcast. Xiaohongshu (RED): AI速递

10 min
99+
1 month ago

2025.12.23 | A data factory boosts efficiency; the Prism Hypothesis unifies representations

HuggingFace Daily AI Paper Digest

The 15 papers in this episode:
[00:22] ⚙ DataFlow: An LLM-Driven Framework for Unified Data Preparation and Workflow Automation in the Era of Data-Centric AI
[01:04] 🔍 The Prism Hypothesis: Harmonizing Semantic and Pixel Representations via Unified Autoencoding
[01:50] 🎬 Region-Constraint In-Context Generation for Instructional Video Editing
[02:33] 🎥 Infinite-Homography as Robust Conditioning for Camera-Controlled Video Generation
[03:08] 🔍 QuCo-RAG: Quantifying Uncertainty from the Pre-training Corpus for Dynamic Retrieval-Augmented Generation
[03:58] 🤔 Can LLMs Estimate Student Struggles? Human-AI Difficulty Alignment with Proficiency Simulation for Item Difficulty Prediction
[04:35] 🧭 LoGoPlanner: Localization Grounded Navigation Policy with Metric-aware Visual Geometry
[05:13] 🎬 WorldWarp: Propagating 3D Geometry with Asynchronous Video Diffusion
[06:08] 🔍 UCoder: Unsupervised Code Generation by Internal Probing of Large Language Models
[06:45] 🧬 GenEnv: Difficulty-Aligned Co-Evolution Between LLM Agents and Environment Simulators
[07:22] 🎨 Reasoning Palette: Modulating Reasoning via Latent Contextualization for Controllable Exploration for (V)LMs
[07:56] ⚡ LoPA: Scaling dLLM Inference via Lookahead Parallel Decoding
[08:38] 📱 MobileWorld: Benchmarking Autonomous Mobile Agents in Agent-User Interactive, and MCP-Augmented Environments
[09:20] ⚖ Does It Tie Out? Towards Autonomous Legal Agents in Venture Capital
[10:00] 🎬 StoryMem: Multi-shot Long Video Storytelling with Memory
[Follow us] You can also find us on the platforms below for more beyond the podcast. Xiaohongshu (RED): AI速递

10 min
99+
1 month ago

2025.12.22 | PhysBrain teaches AI hands-on skills from egocentric video; LLMs are still far from scientist-level AI

HuggingFace Daily AI Paper Digest

The 15 papers in this episode:
[00:24] 🧠 PhysBrain: Human Egocentric Data as a Bridge from Vision Language Models to Physical Intelligence
[01:05] 🔬 Probing Scientific General Intelligence of LLMs with Scientist-Aligned Workflows
[01:34] 🧠 When Reasoning Meets Its Laws
[02:16] 🧠 Seed-Prover 1.5: Mastering Undergraduate-Level Theorem Proving via Learning from Experience
[03:02] 🧠 4D-RGPT: Toward Region-level 4D Understanding via Perceptual Distillation
[03:51] 🎨 Both Semantics and Reconstruction Matter: Making Representation Encoders Ready for Text-to-Image Generation and Editing
[04:30] ⚖ Are We on the Right Way to Assessing LLM-as-a-Judge?
[05:05] 📡 RadarGen: Automotive Radar Point Cloud Generation from Cameras
[05:54] 🔬 Physics of Language Models: Part 4.1, Architecture Design and the Magic of Canon Layers
[06:41] 🎬 HERBench: A Benchmark for Multi-Evidence Integration in Video Question Answering
[07:26] 🔍 GroundingME: Exposing the Visual Grounding Gap in MLLMs through Multi-Dimensional Evaluation
[08:06] ⚙ SWE-Bench++: A Framework for the Scalable Generation of Software Engineering Benchmarks from Open-Source Repositories
[08:39] 🧠 Turn-PPO: Turn-Level Advantage Estimation with PPO for Improved Multi-Turn RL in Agentic LLMs
[09:14] ⚡ StageVAR: Stage-Aware Acceleration for Visual Autoregressive Models
[09:48] 🤖 An Anatomy of Vision-Language-Action Models: From Modules to Milestones and Challenges
[Follow us] You can also find us on the platforms below for more beyond the podcast. Xiaohongshu (RED): AI速递

11 min
99+
1 month ago

2025.12.19 | Kling-Omni unifies video generation; LLaDA2.0 scales diffusion language models to 100B

HuggingFace Daily AI Paper Digest

The 14 papers in this episode:
[00:26] 🎬 Kling-Omni Technical Report
[01:02] 🚀 LLaDA2.0: Scaling Up Diffusion Language Models to 100B
[01:41] 🔮 Next-Embedding Prediction Makes Strong Vision Learners
[02:27] 👓 StereoPilot: Learning Unified and Efficient Stereo Conversion via Generative Priors
[02:58] 🎬 Seedance 1.5 pro: A Native Audio-Visual Joint Generation Foundation Model
[03:34] 🔭 Depth Any Panoramas: A Foundation Model for Panoramic Depth Estimation
[04:11] 📸 Generative Refocusing: Flexible Defocus Control from a Single Image
[04:56] 🤖 Adaptation of Agentic AI
[05:36] ⚗ Alchemist: Unlocking Efficiency in Text-to-Image Model Training via Meta-Gradient Data Selection
[06:12] 🛡 DeContext as Defense: Safe Image Editing in Diffusion Transformers
[06:58] 🧭 N3D-VLM: Native 3D Grounding Enables Accurate Spatial Reasoning in Vision-Language Models
[07:49] 🎨 The World is Your Canvas: Painting Promptable Events with Reference Images, Trajectories, and Text
[08:30] 🔧 AdaTooler-V: Adaptive Tool-Use for Images and Videos
[09:19] 🤔 Exploration v.s. Exploitation: Rethinking RLVR through Clipping, Entropy, and Spurious Reward
[Follow us] You can also find us on the platforms below for more beyond the podcast. Xiaohongshu (RED): AI速递

10 min
99+
1 month ago

2025.12.18 | Calibrated step-level rewards cut costs; diffusion drafts with autoregressive verification speed up decoding

HuggingFace Daily AI Paper Digest

The 14 papers in this episode:
[00:25] 🤖 Step-GUI Technical Report
[00:59] ⚡ DEER: Draft with Diffusion, Verify with Autoregressive Models
[01:31] ⚡ Fast and Accurate Causal Parallel Decoding using Jacobi Forcing
[02:10] 🚀 HyperVL: An Efficient and Dynamic Multimodal Large Language Model for Edge Devices
[02:48] 🎬 IC-Effect: Precise and Efficient Video Effects Editing via In-Context Learning
[03:30] 🔍 Skyra: AI-Generated Video Detection via Grounded Artifact Reasoning
[04:03] 🧠 Universal Reasoning Model
[04:45] 🔍 Robust and Calibrated Detection of Authentic Multimedia Content
[05:33] 🧭 Can LLMs Guide Their Own Exploration? Gradient-Guided Reinforcement Learning for LLM Reasoning
[06:14] 🌍 FiNERweb: Datasets and Artifacts for Scalable Multilingual Named Entity Recognition
[06:54] 📊 MMSI-Video-Bench: A Holistic Benchmark for Video-Based Spatial Intelligence
[07:47] 🔄 DiffusionVL: Translating Any Autoregressive Models into Diffusion Vision Language Models
[08:24] 🧠 SAGE: Training Smart Any-Horizon Agents for Long Video Reasoning with Reinforcement Learning
[09:02] 🎬 End-to-End Training for Autoregressive Video Diffusion via Self-Resampling
[Follow us] You can also find us on the platforms below for more beyond the podcast. Xiaohongshu (RED): AI速递

10 min
99+
1 month ago

2025.12.17 | MMGR exposes multimodal reasoning gaps; WorldPlay keeps real-time world modeling geometrically consistent

HuggingFace Daily AI Paper Digest

The 15 papers in this episode:
[00:23] 🧠 MMGR: Multi-Modal Generative Reasoning
[01:14] 🎮 WorldPlay: Towards Long-Term Geometric Consistency for Real-Time Interactive World Modeling
[01:47] 🤖 Video Reality Test: Can AI-Generated ASMR Videos fool VLMs and Humans?
[02:46] 🎨 Scone: Bridging Composition and Distinction in Subject-Driven Image Generation via Unified Understanding-Generation Modeling
[03:29] 🤖 RoboTracer: Mastering Spatial Trace with Reasoning in Vision-Language Models for Robotics
[04:13] 📊 OpenDataArena: A Fair and Open Arena for Benchmarking Post-Training Dataset Value
[04:50] 🎨 Vector Prism: Animating Vector Graphics by Stratifying Semantic Structure
[05:36] 🧊 Reveal Hidden Pitfalls and Navigate Next Generation of Vector Similarity Search from Task-Centric Views
[06:14] 🧠 RecGPT-V2 Technical Report
[07:04] 📊 ShowTable: Unlocking Creative Table Visualization with Collaborative Reflection and Refinement
[07:43] 🎬 MemFlow: Flowing Adaptive Memory for Consistent and Efficient Long Video Narratives
[08:22] 🧠 VersatileFFN: Achieving Parameter Efficiency in LLMs via Adaptive Wide-and-Deep Reuse
[09:04] 🎨 Feedforward 3D Editing via Text-Steerable Image-to-3D
[09:52] 🤖 A4-Agent: An Agentic Framework for Zero-Shot Affordance Reasoning
[10:26] 🎬 SS4D: Native 4D Generative Model via Structured Spacetime Latents
[Follow us] You can also find us on the platforms below for more beyond the podcast. Xiaohongshu (RED): AI速递

11 min
99+
1 month ago

2025.12.16 | A three-dimensional framework for agent memory; VTP sets a new generation record

HuggingFace Daily AI Paper Digest

The 15 papers in this episode:
[00:20] 🧠 Memory in the Age of AI Agents
[00:57] 🚀 Towards Scalable Pre-training of Visual Tokenizers for Generation
[01:42] 🎬 LongVie 2: Multimodal Controllable Ultra-Long Video World Model
[02:41] ⚡ ReFusion: A Diffusion Large Language Model with Parallel Autoregressive Decoding
[03:11] 🧪 NL2Repo-Bench: Towards Long-Horizon Repository Generation Evaluation of Coding Agents
[03:53] ⚡ Error-Free Linear Attention is a Free Lunch: Exact Solution from Continuous-Time Dynamics
[04:29] 🎬 KlingAvatar 2.0 Technical Report
[05:17] 🧠 QwenLong-L1.5: Post-Training Recipe for Long-Context Reasoning and Memory Management
[05:57] 🧠 MentraSuite: Post-Training Large Language Models for Mental Health Reasoning and Assessment
[06:35] 🤖 Openpi Comet: Competition Solution For 2025 BEHAVIOR Challenge
[07:14] 🤖 Spatial-Aware VLA Pretraining through Visual-Physical Alignment from Human Videos
[07:46] 🔍 V-REX: Benchmarking Exploratory Visual Reasoning via Chain-of-Questions
[08:30] 👁 Toward Ambulatory Vision: Learning Visually-Grounded Active View Selection
[09:14] 🌳 WebOperator: Action-Aware Tree Search for Autonomous Agents in Web Environment
[09:58] 🛡 VLSA: Vision-Language-Action Models with Plug-and-Play Safety Constraint Layer
[Follow us] You can also find us on the platforms below for more beyond the podcast. Xiaohongshu (RED): AI速递

10 min
99+
1 month ago

2025.12.15 | A small dental model punches above its weight; diffusion models drop the VAE

HuggingFace Daily AI Paper Digest

The 14 papers in this episode:
[00:22] 🦷 DentalGPT: Incentivizing Multimodal Complex Reasoning in Dentistry
[00:53] 🎨 SVG-T2I: Scaling Up Text-to-Image Latent Diffusion Model Without Variational Autoencoder
[01:41] 🎥 EgoX: Egocentric Video Generation from a Single Exocentric Video
[02:26] 🎬 V-RGBX: Video Editing with Accurate Controls over Intrinsic Properties
[03:03] 🔍 Sliding Window Attention Adaptation
[03:43] 🎬 PersonaLive! Expressive Portrait Image Animation for Live Streaming
[04:10] 🎬 Structure From Tracking: Distilling Structure-Preserving Motion for Video Generation
[04:41] 🎨 Exploring MLLM-Diffusion Information Transfer with MetaCanvas
[05:18] 🔄 MeshSplatting: Differentiable Rendering with Opaque Meshes
[06:02] 🤖 LEO-RobotAgent: A General-purpose Robotic Agent for Language-driven Embodied Operator
[06:39] ⚡ The N-Body Problem: Parallel Execution from Single-Person Egocentric Video
[07:11] 🧬 CheXmask-U: Quantifying uncertainty in landmark-based anatomical segmentation for X-ray images
[07:52] 🏆 Task adaptation of Vision-Language-Action model: 1st Place Solution for the 2025 BEHAVIOR Challenge
[08:32] 🚀 Sharp Monocular View Synthesis in Less Than a Second
[Follow us] You can also find us on the platforms below for more beyond the podcast. Xiaohongshu (RED): AI速递

9 min
99+
1 month ago

2025.12.12 | RL sets a new text-to-3D record; AI takes Olympiad silver

HuggingFace Daily AI Paper Digest

The 15 papers in this episode:
[00:25] 🤖 Are We Ready for RL in Text-to-3D Generation? A Progressive Investigation
[01:01] 🧠 Long-horizon Reasoning Agent for Olympiad-Level Mathematical Problem Solving
[01:36] 🚀 T-pro 2.0: An Efficient Russian Hybrid-Reasoning Model and Playground
[02:18] 🔍 OPV: Outcome-based Process Verifier for Efficient Long Chain-of-Thought Verification
[03:04] 🏆 Achieving Olympia-Level Geometry Large Language Model Agent via Complexity Boosting Reinforcement Learning
[04:06] 🎬 MoCapAnything: Unified 3D Motion Capture for Arbitrary Skeletons from Monocular Videos
[04:46] 🔬 From Macro to Micro: Benchmarking Microscopic Spatial Intelligence on Molecules via Vision-Language Models
[05:22] 🧠 Thinking with Images via Self-Calling Agent
[06:08] 🧩 VQRAE: Representation Quantization Autoencoders for Multimodal Understanding, Generation and Reconstruction
[06:48] 🤖 Evaluating Gemini Robotics Policies in a Veo World Simulator
[07:30] 🚀 Stronger Normalization-Free Transformers
[08:05] 📊 The FACTS Leaderboard: A Comprehensive Benchmark for Large Language Model Factuality
[08:36] 🎬 Tool-Augmented Spatiotemporal Reasoning for Streamlining Video Question Answering Task
[09:14] 🌀 MoRel: Long-Range Flicker-Free 4D Motion Modeling via Anchor Relay-based Bidirectional Blending with Hierarchical Densification
[09:50] 🤖 Confucius Code Agent: An Open-sourced AI Software Engineer at Industrial Scale
[Follow us] You can also find us on the platforms below for more beyond the podcast. Xiaohongshu (RED): AI速递

11 min
99
1 month ago

2025.12.11 | StereoWorld turns monocular video into stereo in seconds; BiCo composes new concepts across domains

HuggingFace Daily AI Paper Digest

The 15 papers in this episode:
[00:22] 🎥 StereoWorld: Geometry-Aware Monocular-to-Stereo Video Generation
[00:59] 🎨 Composing Concepts from Images and Videos via Concept-prompt Binding
[01:43] 🧠 BrainExplore: Large-Scale Discovery of Interpretable Visual Representations in the Human Brain
[02:20] 🎨 OmniPSD: Layered PSD Generation with Diffusion Transformer
[03:05] 🚀 InfiniteVL: Synergizing Linear and Sparse Attention for Highly-Efficient, Unlimited-Input Vision-Language Models
[03:47] ⚡ Fast-Decoding Diffusion Language Models via Progress-Aware Confidence Schedules
[04:31] 🚗 UniUGP: Unifying Understanding, Generation, and Planing For End-to-end Autonomous Driving
[05:06] 🧠 EtCon: Edit-then-Consolidate for Reliable Knowledge Editing
[05:56] 🤖 HiF-VLA: Hindsight, Insight and Foresight through Motion Representation for Vision-Language-Action Models
[06:46] 🔍 WonderZoom: Multi-Scale 3D World Generation
[07:23] 🤖 Learning Unmasking Policies for Diffusion Language Models
[07:53] 🔭 IF-Bench: Benchmarking and Enhancing MLLMs for Infrared Images with Generative Visual Prompting
[08:51] ⚡ Beyond Unified Models: A Service-Oriented Approach to Low Latency, Context Aware Phonemization for Real Time TTS
[09:31] 🎬 VideoSSM: Autoregressive Long Video Generation with Hybrid State-Space Memory
[10:16] 🛡 Pay Less Attention to Function Words for Free Robustness of Vision-Language Models
[Follow us] You can also find us on the platforms below for more beyond the podcast. Xiaohongshu (RED): AI速递

11 min
76
1 month ago