Hosts
Episode description
Source: 小宇宙
【Sponsor】
Listen to AI每周谈 on your commute. Every week, AI每周谈 recaps the past week's big AI news.
Link 🔗https://www.xiaoyuzhoufm.com/podcast/688a34636f5a275f1cba40fd
【Contents】
The 15 papers in this episode:
[00:35] 😊 PixelSmile: Toward Fine-Grained Facial Expression Editing
[01:27] 🚀 Intern-S1-Pro: Scientific Multimodal Foundation Model at Trillion Scale
[02:10] 🖼 RealRestorer: Towards Generalizable Real-World Image Restoration with Large-Scale Image Editing Models
[02:52] 🖼 MACRO: Advancing Multi-Reference Image Generation with Structured Long-Context Data
[03:42] ⚙ Calibri: Enhancing Diffusion Transformers via Parameter-Efficient Calibration
[04:25] 🗣 Voxtral TTS (an expressive, multilingual text-to-speech model with a hybrid architecture)
[05:03] 📉 SlopCodeBench: Benchmarking How Coding Agents Degrade Over Long-Horizon Iterative Tasks
[05:49] 🧠 MSA: Memory Sparse Attention for Efficient End-to-End Memory Model Scaling to 100M Tokens
[06:39] 🎬 AVControl: Efficient Framework for Training Audio-Visual Controls
[07:23] 🎨 Less Gaussians, Texture More: 4K Feed-Forward Textured Splatting
[08:10] 🔍 MuRF: Unlocking the Multi-Scale Potential of Vision Foundation Models
[09:12] 🔍 Representation Alignment for Just Image Transformers is not Easier than You Think
[10:06] ⚡ S2D2: Fast Decoding for Diffusion LLMs via Training-Free Self-Speculation
[10:46] 📊 FinMCP-Bench: Benchmarking LLM Agents for Real-World Financial Tool Use under the Model Context Protocol
[11:35] 🔬 BioVITA: Biological Dataset, Model, and Benchmark for Visual-Textual-Acoustic Alignment
【Follow us】
You can also find us on the platform below for more beyond the podcast.
小红书 (Xiaohongshu): AI速递