

Monocular 3D Pose Estimation and Tracking by Detection
Course URL: http://videolectures.net/cvpr2010_andriluka_m3de/  
Lecturer: Mykhaylo Andriluka
Institution: Darmstadt University of Technology (TU Darmstadt)
Date: 2010-07-19
Language: English
Course description: Automatic recovery of 3D human pose from monocular image sequences is a challenging and important research topic with numerous applications. Although current methods are able to recover 3D pose for a single person in controlled environments, they are severely challenged by real-world scenarios, such as crowded street scenes. To address this problem, we propose a three-stage process building on a number of recent advances. The first stage obtains an initial estimate of the 2D articulation and viewpoint of the person from single frames. The second stage allows early data association across frames based on tracking-by-detection. These two stages successfully accumulate the available 2D image evidence into robust estimates of 2D limb positions over short image sequences (= tracklets). The third and final stage uses those tracklet-based estimates as robust image observations to reliably recover 3D pose. We demonstrate state-of-the-art performance on the HumanEva II benchmark, and also show the applicability of our approach to articulated 3D tracking in realistic street conditions.
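
The description outlines a three-stage pipeline: per-frame 2D pose and viewpoint estimation, tracking-by-detection data association into tracklets, and tracklet-based 3D pose recovery. Below is a minimal structural sketch in Python of how such a pipeline could be organized; every function name and signature here is an illustrative placeholder assumed for this sketch, not the authors' actual implementation.

from typing import Any, Dict, List, Sequence

def estimate_2d_pose_and_viewpoint(frame: Any) -> Dict[str, Any]:
    # Stage 1 (placeholder): per-frame estimate of 2D limb configuration and viewpoint.
    raise NotImplementedError

def associate_into_tracklets(per_frame: Sequence[Dict[str, Any]]) -> List[List[Dict[str, Any]]]:
    # Stage 2 (placeholder): tracking-by-detection data association over short
    # image sequences, grouping consistent per-frame estimates into tracklets.
    raise NotImplementedError

def recover_3d_pose(tracklet: List[Dict[str, Any]]) -> List[Any]:
    # Stage 3 (placeholder): use the tracklet's accumulated 2D evidence as
    # robust image observations for 3D pose recovery.
    raise NotImplementedError

def monocular_3d_tracking(frames: Sequence[Any]) -> List[List[Any]]:
    per_frame = [estimate_2d_pose_and_viewpoint(f) for f in frames]  # stage 1
    tracklets = associate_into_tracklets(per_frame)                  # stage 2
    return [recover_3d_pose(t) for t in tracklets]                   # stage 3

The sketch only reflects the data flow stated in the description (frames -> per-frame 2D estimates -> tracklets -> 3D poses); the internals of each stage are the subject of the lecture itself.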
Keywords: 3D; monocular image sequences; articulated 3D tracking
Course source: 视频讲座网
Last reviewed: 2020-06-01 by 吴雨秋 (volunteer course editor)
Views: 255