Episode Description
Source: 小宇宙
【Contents】
The 15 papers in this episode are as follows:
00:20 🔍 Elucidating the SNR-t Bias of Diffusion Probabilistic Models
01:00 💥 Maximal Brain Damage Without Data or Optimization: Disrupting Neural Networks via Sign-Bit Flips
01:45 🧠 PersonaVLM: Long-Term Personalized Multimodal LLMs
02:56 🧩 Web Retrieval-Aware Chunking (W-RAC) for Efficient and Cost-Effective Retrieval-Augmented Generation Systems
03:40 ✂ Cut Your Losses! Learning to Prune Paths Early for Efficient Parallel Reasoning
04:32 🚀 Qwen3.5-Omni Technical Report
05:17 🧱 Repurposing 3D Generative Model for Autoregressive Layout Generation
06:02 🔍 (1D) Ordered Tokens Enable Efficient Test-Time Search
06:55 📈 QuantCode-Bench: A Benchmark for Evaluating the Ability of Large Language Models to Generate Executable Algorithmic Trading Strategies
07:36 🧠 Learning Adaptive Reasoning Paths for Efficient Visual Reasoning
08:29 🔍 TIPSv2: Advancing Vision-Language Pretraining with Enhanced Patch-Text Alignment
09:33 💡 Can Large Language Models Reinvent Foundational Algorithms?
10:17 📊 GTA-2: Benchmarking General Tool Agents from Atomic Tool-Use to Open-Ended Workflows
11:10 ⚡ AccelOpt: A Self-Improving LLM Agentic System for AI Accelerator Kernel Optimization
11:55 🎭 Hierarchical Codec Diffusion for Video-to-Speech Generation
【Follow Us】
You can also find us on the following platforms for more content beyond the podcast:
小红书: AI速递