2024.10.09 Daily AI Papers | Evaluating long-context generation; instruction diversity drives generalization

This episode covers the following 9 papers:
[00:28] 📚 LongGenBench: Long-context Generation Benchmark
[01:11] 🌐 Only-IF: Revealing the Decisive Effect of Instruction Diversity on Generalization
[01:50] 📊 RevisEval: Improving LLM-as-a-Judge via Response-Adapted References
[02:35] 🌟 A Spark of Vision-Language Intelligence: 2-Dimensional Autoregressive Transformer for Efficient Finegrained Image Generation
[03:25] 🎥 Grounded-VideoLLM: Sharpening Fine-grained Temporal Grounding in Video Large Language Models
[04:00] 🎨 ControlAR: Controllable Image Generation with Autoregressive Models
[04:45] 🔍 Hyper-multi-step: The Truth Behind Difficult Long-context Tasks
[05:21] 🤖 MA-RLHF: Reinforcement Learning from Human Feedback with Macro Actions
[06:03] 📊 EBES: Easy Benchmarking for Event Sequences

7 min
88
7 months ago

2024.10.08 Daily AI Papers | Differential Transformer refines attention; LLM hallucination study reveals error patterns

This episode covers the following 21 papers:
[00:26] 🔍 Differential Transformer
[01:04] 🧠 LLMs Know More Than They Show: On the Intrinsic Representation of LLM Hallucinations
[01:50] 📹 VideoGuide: Improving Video Diffusion Models without Training Through a Teacher's Guide
[02:28] 📈 FAN: Fourier Analysis Networks
[03:05] 🏥 Named Clinical Entity Recognition Benchmark
[03:37] 🔬 ScienceAgentBench: Toward Rigorous Assessment of Language Agents for Data-Driven Scientific Discovery
[04:19] 🎶 UniMuMo: Unified Text, Music and Motion Generation
[04:55] 🔍 TLDR: Token-Level Detective Reward Model for Large Vision Language Models
[05:35] 🎵 Presto! Distilling Steps and Layers for Accelerating Music Generation
[06:08] 🖥 Navigating the Digital World as Humans Do: Universal Visual Grounding for GUI Agents
[06:49] 🖼 OmniBooth: Learning Latent Control for Image Synthesis with Multi-modal Instruction
[07:29] 🌀 MonST3R: A Simple Approach for Estimating Geometry in the Presence of Motion
[08:09] 🧠 LLaMA-Berry: Pairwise Optimization for O1-like Olympiad-Level Mathematical Reasoning
[08:50] 📊 MathHay: An Automated Benchmark for Long-Context Mathematical Reasoning in LLMs
[09:39] 📊 GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models
[10:34] 🤖 Autonomous Character-Scene Interaction Synthesis from Text Instruction
[11:12] 🧩 TurtleBench: Evaluating Top Language Models via Real-World Yes/No Puzzles
[12:00] 🤖 Grounding Language in Multi-Perspective Referential Communication
[12:48] 🎯 SePPO: Semi-Policy Preference Optimization for Diffusion Alignment
[13:25] 🧩 What Matters for Model Merging at Scale?
[14:02] 📊 SELECT: A Large-Scale Benchmark of Data Curation Strategies for Image Classification

15 min
99+
7 months ago

2024.10.07 Daily AI Papers | A new energy-saving algorithm for language models; vision-language model reasoning still needs improvement

This episode covers the following 12 papers:
[00:25] ⚡ Addition is All You Need for Energy-efficient Language Models
[01:03] 🧠 NL-Eye: Abductive NLI for Images
[01:40] 🔍 Selective Attention Improves Transformer
[02:17] ⚡ Accelerating Auto-regressive Text-to-Image Generation with Training-free Speculative Jacobi Decoding
[02:48] 🤖 Tutor CoPilot: A Human-AI Approach for Scaling Real-Time Expertise
[03:27] 🩺 A Comprehensive Survey of Mamba Architectures for Medical Image Analysis: Classification, Segmentation, Restoration and Beyond
[04:12] 🎨 RoCoTex: A Robust Method for Consistent Texture Synthesis with Diffusion Models
[04:59] 🧠 Erasing Conceptual Knowledge from Language Models
[05:37] 📈 MIGA: Mixture-of-Experts with Group Aggregation for Stock Market Prediction
[06:16] 🤖 CANVAS: Commonsense-Aware Navigation System for Intuitive Human-Robot Interaction
[06:54] 🌳 NRGBoost: Energy-Based Generative Boosted Trees
[07:37] 🤖 GenSim2: Scaling Robot Data Generation with Multi-modal and Reasoning LLMs

8 min
93
7 months ago

2024.10.04 Daily AI Papers | Caption types affect model performance; a breakthrough in long-video generation

This episode covers the following 19 papers:
[00:24] 🔄 Revisit Large-Scale Image-Caption Data in Pre-training Multimodal Foundation Models
[01:04] 🎥 Loong: Generating Minute-level Long Videos with Autoregressive Language Models
[01:39] 🎥 Video Instruction Tuning With Synthetic Data
[02:18] 🧐 LLaVA-Critic: Learning to Evaluate Multimodal Models
[02:56] 🔍 Contrastive Localized Language-Image Pre-Training
[03:31] 🌱 VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment
[04:07] 🌟 Depth Pro: Sharp Monocular Metric Depth in Less Than a Second
[04:51] 🔗 Large Language Models as Markov Chains
[05:26] 🧠 CLIP-MoE: Towards Building Mixture of Experts for CLIP with Diversified Multiplet Upcycling
[06:03] 🔄 Eliminating Oversaturation and Artifacts of High Guidance Scales in Diffusion Models
[06:51] 🔄 Training Language Models on Synthetic Edit Sequences Improves Code Synthesis
[07:36] ⚡ SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration
[08:14] 🌐 MVGS: Multi-view-regulated Gaussian Splatting for Novel View Synthesis
[08:54] 📚 L-CiteEval: Do Long-Context Models Truly Leverage Context for Responding?
[09:38] 🩺 MedVisionLlama: Leveraging Pre-Trained Large Language Model Layers to Enhance Medical Image Segmentation
[10:24] 🎥 Vinoground: Scrutinizing LMMs over Dense Temporal Reasoning with Short Videos
[11:01] 🗣 Distilling an End-to-End Voice Assistant Without Instruction Training Data
[11:46] ♟ Learning the Latent Rules of a Game from Data: A Chess Story
[12:29] 🎵 Synthio: Augmenting Small-Scale Audio Classification Datasets with Synthetic Data

13 min
92
7 months ago

2024.10.03 Daily AI Papers | Hierarchical debugging improves code correctness; multimodal models optimized for image tasks

This episode covers the following 20 papers:
[00:23] 🐞 From Code to Correctness: Closing the Last Mile of Code Generation with Hierarchical Debugging
[01:08] 📄 LEOPARD: A Vision Language Model For Text-Rich Multi-Image Tasks
[01:48] 📊 Is Preference Alignment Always the Best Option to Enhance LLM-Based Translation? An Empirical Analysis
[02:27] 🖼 ComfyGen: Prompt-Adaptive Workflows for Text-to-Image Generation
[03:08] 🧠 RATIONALYST: Pre-training Process-Supervision for Improving Reasoning
[03:45] 🧠 Not All LLM Reasoners Are Created Equal
[04:18] 📊 Quantifying Generalization Complexity for Large Language Models
[04:59] 🔍 3DGS-DET: Empower 3D Gaussian Splatting with Boundary Guidance and Box-Focused Sampling for 3D Object Detection
[05:45] 🔄 HelpSteer2-Preference: Complementing Ratings with Preferences
[06:25] 🗣 MOSEL: 950,000 Hours of Speech Data for Open-Source Speech Foundation Model Training on EU Languages
[07:03] 🤖 Closed-loop Long-horizon Robotic Planning via Equilibrium Sequence Modeling
[07:40] 🌐 EVER: Exact Volumetric Ellipsoid Rendering for Real-time View Synthesis
[08:22] 📄 FactAlign: Long-form Factuality Alignment of Large Language Models
[08:57] 📹 E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding
[09:37] 🌍 BordIRlines: A Dataset for Evaluating Cross-lingual Retrieval-Augmented Generation
[10:13] 🔊 SonicSim: A customizable simulation platform for speech processing in moving sound source scenarios
[10:53] 🔄 HarmoniCa: Harmonizing Training and Inference for Better Feature Cache in Diffusion Transformer Acceleration
[11:35] 🔍 Selective Aggregation for Low-Rank Adaptation in Federated Learning
[12:14] 📚 Old Optimizer, New Norm: An Anthology
[12:49] 📱 InfiniPot: Infinite Context Processing on Memory-Constrained LLMs

13 min
79
7 months ago

2024.10.02 Daily AI Papers | Cross-capability task performance is limited; efficient model deployment on edge devices

This episode covers the following 13 papers:
[00:26] 🔗 Law of the Weakest Link: Cross Capabilities of Large Language Models
[01:05] 🌐 TPI-LLM: Serving 70B-scale LLMs Efficiently on Low-resource Edge Devices
[01:46] 🌍 Atlas-Chat: Adapting Large Language Models for Low-Resource Moroccan Arabic Dialect
[02:22] 🎥 One Token to Seg Them All: Language Instructed Reasoning Segmentation in Videos
[02:59] 🌐 Flex3D: Feed-Forward 3D Generation With Flexible Reconstruction Model And Input View Curation
[03:46] 🎨 Illustrious: an Open Advanced Illustration Model
[04:22] 🚗 SyntheOcc: Synthesize Geometric-Controlled Street View Images through 3D Semantic MPIs
[05:00] 📸 Posterior-Mean Rectified Flow: Towards Minimum MSE Photo-Realistic Image Restoration
[05:47] 🎨 ACE: All-round Creator and Editor Following Instructions via Diffusion Transformer
[06:22] 🎥 Visual Context Window Extension: A New Perspective for Long Video Understanding
[07:05] 🤖 Helpful DoggyBot: Open-World Object Fetching using Legged Robots and Vision-Language Models
[07:46] 🎥 DressRecon: Freeform 4D Human Reconstruction from Monocular Video
[08:32] 🤖 What the Harm? Quantifying the Tangible Impact of Gender Bias in Machine Translation with a Human-centered Study

9 min
78
7 months ago

2024.10.01 Daily AI Papers | Multimodal models improve image understanding; length-control methods make generation more precise

This episode covers the following 11 papers:
[00:26] 🌐 MM1.5: Methods, Analysis & Insights from Multimodal LLM Fine-tuning
[01:04] 📏 Ruler: A Model-Agnostic Method to Control Generated Length for Large Language Models
[01:41] 🗣 DiaSynth -- Synthetic Dialogue Generation Framework
[02:22] 📊 Hyper-Connections
[02:57] 🤖 UniAff: A Unified Representation of Affordances for Tool Usage and Articulation with Vision-Language Models
[03:35] 🔍 Cottention: Linear Transformers With Cosine Attention
[04:10] 🤖 Scaling Proprioceptive-Visual Learning with Heterogeneous Pre-trained Transformers
[04:49] 🏋 Coffee-Gym: An Environment for Evaluating and Improving Natural Language Feedback on Erroneous Code
[05:29] 🖼 Image Copy Detection for Diffusion Models
[06:09] 🧠 Can Models Learn Skill Composition from Examples?
[06:43] 🎧 IDEAW: Robust Neural Audio Watermarking with Invertible Dual-Embedding

7 min
74
7 months ago

2024.09.30 Daily AI Papers | Emu3 simplifies multimodal design; MIO boosts video understanding

This episode covers the following 9 papers:
[00:24] 🧠 Emu3: Next-Token Prediction is All You Need
[00:53] 🌐 MIO: A Foundation Model on Multimodal Tokens
[01:26] 🔍 VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models
[02:21] 🎥 PhysGen: Rigid-Body Physics-Grounded Image-to-Video Generation
[03:05] 🔄 Modulated Intervention Preference Optimization (MIPO): Keep the Easy, Refine the Difficult
[03:46] 📄 MinerU: An Open-Source Solution for Precise Document Content Extraction
[04:24] 🤖 MSI-Agent: Incorporating Multi-Scale Insight into Embodied Agents for Superior Planning and Decision-Making
[05:01] 🤖 A Survey on the Honesty of Large Language Models
[05:45] 📊 LML: Language Model Learning a Dataset for Data-Augmented Prediction

6 min
61
7 months ago

2024.09.27 Daily AI Papers | Better 3D awareness with lower compute overhead

This episode covers the following 12 papers:
[00:27] 🌐 LLaVA-3D: A Simple yet Effective Pathway to Empowering LMMs with 3D-awareness
[01:10] 🧩 MaskLLM: Learnable Semi-Structured Sparsity for Large Language Models
[01:49] 🎭 EMOVA: Empowering Language Models to See, Hear and Speak with Vivid Emotions
[02:35] 🌸 Lotus: Diffusion-based Visual Foundation Model for High-quality Dense Prediction
[03:15] ⚡ Discovering the Gems in Early Layers: Accelerating Long-Context LLMs with 1000x Input Token Reduction
[03:58] 🖼 Pixel-Space Post-Training of Latent Diffusion Models
[04:36] 🔍 Reducing the Footprint of Multi-Vector Retrieval with Minimal Performance Impact via Token Pooling
[05:17] 🎭 Disco4D: Disentangled 4D Human Generation and Animation from a Single Image
[05:55] 🧠 Instruction Following without Instruction Tuning
[06:30] 📊 The Imperative of Conversation Analysis in the Era of LLMs: A Survey of Tasks, Techniques, and Trends
[07:07] 🤖 Robot See Robot Do: Imitating Articulated Object Manipulation with Monocular 4D Reconstruction
[07:43] ⚽ Enhancing Structured-Data Retrieval with GraphRAG: Soccer Data Case Study

8 min
87
7 months ago

2024.09.26 Daily AI Papers | Raising pre-training data quality; open-source multimodal model innovations

This episode covers the following 10 papers:
[00:32] 🤖 Programming Every Example: Lifting Pre-training Data Quality like Experts at Scale
[01:13] 🌐 Molmo and PixMo: Open Weights and Open Data for State-of-the-Art Multimodal Models
[01:51] 🩺 Boosting Healthcare LLMs Through Retrieved Context
[02:31] 📊 AIM 2024 Sparse Neural Rendering Challenge: Dataset and Benchmark
[03:12] 🎸 Synchronize Dual Hands for Physics-Based Dexterous Guitar Playing
[03:52] 🎭 DreamWaltz-G: Expressive 3D Gaussian Avatars from Skeleton-Guided 2D Diffusion
[04:28] 🚁 Game4Loc: A UAV Geo-Localization Benchmark from Game Data
[05:13] 🌐 Degradation-Guided One-Step Image Super-Resolution with Diffusion Priors
[05:53] 🎥 TalkinNeRF: Animatable Neural Fields for Full-Body Talking Humans
[06:32] 🤖 HyperAgent: Generalist Software Engineering Agents to Solve Coding Tasks at Scale

7 min
73
7 months ago