This episode covers the following 21 papers:
[00:24] 🤖 CompassJudger-1: All-in-one Judge Model Helps Model Evaluation and Evolution
[01:11] 🌲 SAM2Long: Enhancing SAM 2 for Long Video Segmentation with a Training-Free Memory Tree
[01:55] 🌐 PUMA: Empowering Unified MLLM with Multi-granular Visual Generation
[02:37] 🤖 AutoTrain: No-code Training for State-of-the-art Models
[03:10] ⚡ FrugalNeRF: Fast Convergence for Few-shot Novel View Synthesis without Learned Priors
[03:56] 📊 Baichuan Alignment Technical Report
[04:39] 🌍 Pangea: A Fully Open Multilingual Multimodal LLM for 39 Languages
[05:21] 🔍 RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style
[06:05] 📚 Meta-Chunking: Learning Efficient Text Segmentation via Logical Perception
[06:41] 🔍 Pre-training Distillation for Large Language Models: A Design Space Exploration
[07:16] 🔬 Alchemy: Amplifying Theorem-Proving Capability through Symbolic Mutation
[07:55] 🔄 SemiEvol: Semi-supervised Fine-tuning for LLM Adaptation
[08:31] 📚 Selecting Influential Samples for Long Context Alignment via Homologous Models' Guidance and Contextual Awareness Measurement
[09:11] 🤖 Zero-shot Model-based Reinforcement Learning using Large Language Models
[09:53] 🗣 Ichigo: Mixed-Modal Early-Fusion Realtime Voice Assistant
[10:28] 🧠 CBT-Bench: Evaluating Large Language Models on Assisting Cognitive Behavior Therapy
[11:12] 🛠 Router-Tuning: A Simple and Effective Approach for Enabling Dynamic-Depth in Transformers
[11:58] 🧠 Hallucination Detox: Sensitive Neuron Dropout (SeND) for Large Language Model Training
[12:45] 🌍 Cross-Lingual Auto Evaluation for Assessing Multilingual LLMs
[13:25] 🗣 DM-Codec: Distilling Multimodal Representations for Speech Tokenization
[14:17] 🧠 In-context Learning and Occam's Razor

【Follow Us】
You can also find us on the platform below for more content beyond the podcast:
Xiaohongshu: AI速递
