<!doctype html><html lang=zh-cn><head><meta charset=utf-8><meta name=viewport content="width=device-width,initial-scale=1"><title>2026-03-11 · AI Daily Brief · Indigo Floyd's Latent Garden</title><meta name=description content="Exploring latent space and cultivating sweet trips."><link rel=stylesheet href=../../css/site.css><link rel=icon href=../../favicon.ico type=image/x-icon><link rel=apple-touch-icon href=../../logo.png><link rel=preconnect href=https://fonts.googleapis.com><link rel=preconnect href=https://fonts.gstatic.com crossorigin><link href="https://fonts.googleapis.com/css2?family=Cormorant+Garamond:wght@400;500;600&display=swap" rel=stylesheet></head><body><header class=site-header><div class="wrap header-inner"><div><a class=site-title href=../../><img src=../../logo.png alt=Logo class=site-logo>
Indigo Floyd's Latent Garden</a><p class=site-tagline>Exploring latent space and cultivating sweet trips.</p></div><button class=menu-toggle aria-label=Menu aria-expanded=false>
<span></span>
<span></span>
<span></span></button><nav class=site-nav><a href=../../>Home</a>
<a href=../../ai-daily>AI Daily</a>
<a href=../../blog>Blog</a>
<a href=../../resume>Resume</a>
<a href=../../search>Search</a>
<a href=../../search class=search-link>🔍</a></nav></div></header><main class=wrap><article class="card article"><p class=meta><a href=../../ai-daily/>← Back to AI Daily Brief</a></p><h1>2026-03-11 · AI Daily Brief</h1><p class=meta>2026-03-11 18:36</p><p class=tags><a class=tag href=../../tags/robotdaily/#robotdaily>robotdaily</a><a class=tag href=../../tags/ai-daily/#ai-daily>ai-daily</a><a class=tag href=../../tags/embodied/#embodied>embodied</a><a class=tag href=../../tags/%E5%85%B7%E8%BA%AB%E6%99%BA%E8%83%BD/#%e5%85%b7%e8%ba%ab%e6%99%ba%e8%83%bd>Embodied Intelligence</a><a class=tag href=../../tags/representation/#representation>representation</a><a class=tag href=../../tags/%E8%A1%A8%E5%BE%81%E5%AD%A6%E4%B9%A0/#%e8%a1%a8%e5%be%81%e5%ad%a6%e4%b9%a0>Representation Learning</a><a class=tag href=../../tags/reinforcement/#reinforcement>reinforcement</a><a class=tag href=../../tags/%E5%BC%BA%E5%8C%96%E5%AD%A6%E4%B9%A0/#%e5%bc%ba%e5%8c%96%e5%ad%a6%e4%b9%a0>Reinforcement Learning</a><a class=tag href=../../tags/llm/#llm>llm</a></p><div class=prose><blockquote><p>Hugo archive edition, sourced from that day's RobotDaily Markdown brief.</p><p>RobotDaily 2026-03-11: 9 papers in total, including 3 on embodied intelligence, 3 on representation learning, and 3 on reinforcement learning.</p></blockquote><p>An application-oriented selection, organized by research direction into short card-style Markdown archive entries.</p><h2 id=具身智能3-篇>Embodied Intelligence (3 papers)</h2><h3 id=1-playworld-learning-robot-world-models-from-autonomous-play>1. PlayWorld: Learning Robot World Models from Autonomous Play</h3><blockquote><p>Keyword hits: real world, deployed, world model, scalable; application signals: real world, deployed, robot; innovation…</p></blockquote><ul><li>Authors: Tenny Yin, Zhiting Mei, Zhonghe Zheng, Miyu Yamane, and 7 others</li><li>Tags: <code>Embodied Intelligence</code> <code>Robotics</code> <code>Real-World Deployment</code> <code>Manipulation</code></li><li>Summary: [LLM temporarily unavailable; key points of the English abstract retained] Action-conditioned video models offer a promising path to building general-purpose robot simulators that can improve directly from data.
Yet, despite training on large-scale robot datasets, current s…</li><li>Links: <a href=https://doi.org/10.48550/arXiv.2603.09030>DOI</a> | <a href=https://arxiv.org/abs/2603.09030v1>arXiv</a> | <a href=https://arxiv.org/pdf/2603.09030v1>PDF</a></li></ul><h3 id=2-metaworld-x-hierarchical-world-modeling-via-vlm-orchestrated-experts-for-humanoid-loco-manipulation>2. MetaWorld-X: Hierarchical World Modeling via VLM-Orchestrated Experts for Humanoid Loco-Manipulation</h3><blockquote><p>Keyword hits: robot, robotic, world model; application signals: robot, robotic, system; innovation signals: world model; domain match…</p></blockquote><ul><li>Authors: Yutong Shen, Hangxu Liu, Penghui Liu, Jiashuo Luo, and 5 others</li><li>Tags: <code>Embodied Intelligence</code> <code>Robotics</code> <code>Real-World Deployment</code> <code>Manipulation</code></li><li>Summary: [LLM temporarily unavailable; key points of the English abstract retained] Learning natural, stable, and compositionally generalizable whole-body control policies for humanoid robots performing simultaneous locomotion and manipulation (loco-manipulation) remains a fundament…</li><li>Links: <a href=https://doi.org/10.48550/arXiv.2603.08572>DOI</a> | <a href=https://arxiv.org/abs/2603.08572v1>arXiv</a> | <a href=https://arxiv.org/pdf/2603.08572v1>PDF</a></li></ul><h3 id=3-embodied-human-simulation-for-quantitative-design-and-analysis-of-interactive-robotics>3. Embodied Human Simulation for Quantitative Design and Analysis of Interactive Robotics</h3><blockquote><p>Keyword hits: robot, robotic, scalable; application signals: robot, robotic, system; innovation signals: scalable; domain match: embo…</p></blockquote><ul><li>Authors: Chenhui Zuo, Jinhao Xu, Michael Qian Vergnolle, Yanan Sui</li><li>Tags: <code>Embodied Intelligence</code> <code>Robotics</code> <code>Real-World Deployment</code> <code>Manipulation</code></li><li>Summary: [LLM temporarily unavailable; key points of the English abstract retained] Physical interactive robotics, ranging from wearable devices to collaborative humanoid robots, require close coordination between mechanical design and control.
However, evaluating interactive dynami…</li><li>Links: <a href=https://doi.org/10.48550/arXiv.2603.09218>DOI</a> | <a href=https://arxiv.org/abs/2603.09218v1>arXiv</a> | <a href=https://arxiv.org/pdf/2603.09218v1>PDF</a></li></ul><h2 id=表征学习3-篇>Representation Learning (3 papers)</h2><h3 id=1-m2-occ-resilient-3d-semantic-occupancy-prediction-for-autonomous-driving-with-incomplete-camera-inputs>1. $M^2$-Occ: Resilient 3D Semantic Occupancy Prediction for Autonomous Driving with Incomplete Camera Inputs</h3><blockquote><p>Keyword hits: real-world, deployment, first; application signals: real-world, deployment, system; innovation signals: first;…</p></blockquote><ul><li>Authors: Kaixin Lin, Kunyu Peng, Di Wen, Yufan Chen, and 2 others</li><li>Tags: <code>Representation Learning</code> <code>Latent Space</code> <code>World Model</code> <code>Pretraining</code></li><li>Summary: [LLM temporarily unavailable; key points of the English abstract retained] Semantic occupancy prediction enables dense 3D geometric and semantic understanding for autonomous driving. However, existing camera-based approaches implicitly assume complete surround-view observat…</li><li>Links: <a href=https://doi.org/10.48550/arXiv.2603.09737>DOI</a> | <a href=https://arxiv.org/abs/2603.09737v1>arXiv</a> | <a href=https://arxiv.org/pdf/2603.09737v1>PDF</a></li></ul><h3 id=2-emerging-extrinsic-dexterity-in-cluttered-scenes-via-dynamics-aware-policy-learning>2. Emerging Extrinsic Dexterity in Cluttered Scenes via Dynamics-aware Policy Learning</h3><blockquote><p>Keyword hits: real-world, real world, world model; application signals: real-world, real world, deployment; innovation…</p></blockquote><ul><li>Authors: Yixin Zheng, Jiangran Lyu, Yifan Zhang, Jiayi Chen, and 7 others</li><li>Tags: <code>Representation Learning</code> <code>Latent Space</code> <code>World Model</code> <code>Pretraining</code></li><li>Summary: [LLM temporarily unavailable; key points of the English abstract retained] Extrinsic dexterity leverages environmental contact to overcome the limitations of prehensile manipulation.
However, achieving such dexterity in cluttered scenes remains challenging and underexplored…</li><li>Links: <a href=https://doi.org/10.48550/arXiv.2603.09882>DOI</a> | <a href=https://arxiv.org/abs/2603.09882v1>arXiv</a> | <a href=https://arxiv.org/pdf/2603.09882v1>PDF</a></li></ul><h3 id=3-from-semantics-to-pixels-coarse-to-fine-masked-autoencoders-for-hierarchical-visual-understanding>3. From Semantics to Pixels: Coarse-to-Fine Masked Autoencoders for Hierarchical Visual Understanding</h3><blockquote><p>Keyword hits: dataset, self-supervised, first; application signals: dataset; innovation signals: self-supervised, first; domain match…</p></blockquote><ul><li>Authors: Wenzhao Xiang, Yue Wu, Hongyang Yu, Feng Gao, and 2 others</li><li>Tags: <code>Representation Learning</code> <code>Latent Space</code> <code>World Model</code> <code>Pretraining</code></li><li>Summary: [LLM temporarily unavailable; key points of the English abstract retained] Self-supervised visual pre-training methods face an inherent tension: contrastive learning (CL) captures global semantics but loses fine-grained detail, while masked image modeling (MIM) preserves lo…</li><li>Links: <a href=https://doi.org/10.48550/arXiv.2603.09955>DOI</a> | <a href=https://arxiv.org/abs/2603.09955v1>arXiv</a> | <a href=https://arxiv.org/pdf/2603.09955v1>PDF</a></li></ul><h2 id=强化学习3-篇>Reinforcement Learning (3 papers)</h2><h3 id=1-spaars-safer-rl-policy-alignment-through-abstract-exploration-and-refined-exploitation-of-action-space>1. SPAARS: Safer RL Policy Alignment through Abstract Exploration and Refined Exploitation of Action Space</h3><blockquote><p>Keyword hits: robot, robotic; application signals: robot, robotic; domain match: reinforcement learning, policy gradie…</p></blockquote><ul><li>Authors: Swaminathan S K, Aritra Hazra</li><li>Tags: <code>Reinforcement Learning</code> <code>Policy Optimization</code> <code>Reward Design</code> <code>Offline RL</code></li><li>Summary: [LLM temporarily unavailable; key points of the English abstract retained] Offline-to-online reinforcement learning (RL) offers a promising paradigm for robotics by pre-training policies on safe, offline demonstrations and fine-tuning them via online interaction.
However, a…</li><li>Links: <a href=https://doi.org/10.48550/arXiv.2603.09378>DOI</a> | <a href=https://arxiv.org/abs/2603.09378v1>arXiv</a> | <a href=https://arxiv.org/pdf/2603.09378v1>PDF</a></li></ul><h3 id=2-robust-regularized-policy-iteration-under-transition-uncertainty>2. Robust Regularized Policy Iteration under Transition Uncertainty</h3><blockquote><p>Keyword hits: benchmark, unified; application signals: benchmark; innovation signals: unified; domain match: reinforcement learning,…</p></blockquote><ul><li>Authors: Hongqiang Lin, Zhenghui Fu, Weihao Tang, Pengfei Wang, and 3 others</li><li>Tags: <code>Reinforcement Learning</code> <code>Policy Optimization</code> <code>Reward Design</code> <code>Offline RL</code></li><li>Summary: [LLM temporarily unavailable; key points of the English abstract retained] Offline reinforcement learning (RL) enables data-efficient and safe policy learning without online exploration, but its performance often degrades under distribution shift. The learned policy may vis…</li><li>Links: <a href=https://doi.org/10.48550/arXiv.2603.09344>DOI</a> | <a href=https://arxiv.org/abs/2603.09344v1>arXiv</a> | <a href=https://arxiv.org/pdf/2603.09344v1>PDF</a></li></ul><h3 id=3-towards-batch-to-streaming-deep-reinforcement-learning-for-continuous-control>3.
Towards Batch-to-Streaming Deep Reinforcement Learning for Continuous Control</h3><blockquote><p>Keyword hits: benchmark, hardware, novel; application signals: benchmark, hardware, sim2real; innovation signals: novel; domain match…</p></blockquote><ul><li>Authors: Riccardo De Monte, Matteo Cederle, Gian Antonio Susto</li><li>Tags: <code>Reinforcement Learning</code> <code>Policy Optimization</code> <code>Reward Design</code> <code>Offline RL</code></li><li>Summary: [LLM temporarily unavailable; key points of the English abstract retained] State-of-the-art deep reinforcement learning (RL) methods have achieved remarkable performance in continuous control tasks, yet their computational complexity is often incompatible with the constrain…</li><li>Links: <a href=https://doi.org/10.48550/arXiv.2603.08588>DOI</a> | <a href=https://arxiv.org/abs/2603.08588v1>arXiv</a> | <a href=https://arxiv.org/pdf/2603.08588v1>PDF</a></li></ul></div></article></main><footer class="site-footer wrap"><p>© 2026 IndigoFloyd · Hugo personal site for AI briefs / blog / resume.</p></footer><script>document.addEventListener("DOMContentLoaded",function(){const e=document.querySelector(".menu-toggle"),t=document.querySelector(".site-nav");e.addEventListener("click",function(){const n=e.classList.toggle("active");t.classList.toggle("active"),e.setAttribute("aria-expanded",n)})})</script></body></html>