The 14 papers in this episode:
[00:26] 🤖 MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation
[01:07] 🛠 MTU-Bench: A Multi-granularity Tool-Use Benchmark for Large Language Models
[01:47] 📚 LLM×MapReduce: Simplified Long-Sequence Processing using Large Language Models
[02:25] 🛡 SecCodePLT: A Unified Platform for Evaluating the Security of Code GenAI
[03:01] 📹 LVD-2M: A Long-take Video Dataset with Temporally Dense Captions
[03:44] 🧠 What Matters in Transformers? Not All Attention is Needed
[04:18] 🌟 GS^3: Efficient Relighting with Triple Gaussian Splatting
[04:51] 🤯 Your Mixture-of-Experts LLM Is Secretly an Embedding Model For Free
[05:31] 🌍 Efficiently Democratizing Medical LLMs for 50 Languages via a Mixture of Language Family Experts
[06:08] 🚀 SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning
[06:43] 📊 Efficient Diffusion Models: A Comprehensive Survey from Principles to Practices
[07:14] 🤖 Towards Synergistic, Generalized, and Efficient Dual-System for Robotic Manipulation
[07:58] 🔄 Empirical Study of Mutual Reinforcement Effect and Application in Few-shot Text Classification Tasks via Prompt
[08:37] 🌍 Towards Natural Image Matting in the Wild via Real-Scenario Prior

【Follow Us】
You can also find us on the platform below for more content beyond the podcast.
Xiaohongshu: AI速递
