PanoContext: A Whole-room 3D Context Model for Panoramic Scene Understanding |
|
Course URL: | http://videolectures.net/eccv2014_zhang_panoramic_scene/ |
Lecturer: | Yinda Zhang |
Institution: | Princeton University |
Date: | 2014-10-29 |
Language: | English |
Course description: | The field-of-view of standard cameras is very small, which is one of the main reasons that contextual information is not as useful as it should be for object detection. To overcome this limitation, we advocate the use of 360° full-view panoramas in scene understanding, and propose a whole-room context model in 3D. For an input panorama, our method outputs 3D bounding boxes of the room and all major objects inside, together with their semantic categories. Our method generates 3D hypotheses based on contextual constraints and ranks the hypotheses holistically, combining both bottom-up and top-down context information. To train our model, we construct an annotated panorama dataset and reconstruct the 3D model from single views using manual annotation. Experiments show that, based solely on 3D context and without any image-region category classifier, we achieve performance comparable to a state-of-the-art object detector. This demonstrates that when the FOV is large, context is as powerful as object appearance. All data and source code are available online. |
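The "holistic ranking" the abstract mentions, combining bottom-up image evidence with top-down 3D context, can be illustrated with a minimal sketch. The class, score names, and weights below are illustrative assumptions, not the paper's actual formulation:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """A whole-room hypothesis with two hypothetical context cues."""
    bottom_up_score: float   # assumed: agreement with image evidence (edges, segments)
    top_down_score: float    # assumed: plausibility of the 3D object layout
    label: str

def holistic_rank(hypotheses, w_bottom_up=0.5, w_top_down=0.5):
    """Rank hypotheses by a weighted sum of bottom-up and top-down scores."""
    return sorted(
        hypotheses,
        key=lambda h: w_bottom_up * h.bottom_up_score + w_top_down * h.top_down_score,
        reverse=True,
    )

rooms = [
    Hypothesis(0.9, 0.2, "bed against an implausible wall"),
    Hypothesis(0.7, 0.8, "bed with nightstands, plausible layout"),
]
best = holistic_rank(rooms)[0]
# The second hypothesis wins: strong 3D context outweighs slightly weaker image evidence.
```

In the paper itself the ranking is learned over richer features; this sketch only shows the structural idea of fusing the two information sources.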
Keywords: | 3D model; classifier; object detection |
Source: | VideoLectures.NET |
Data collected: | 2020-11-02: zyk |
Last reviewed: | 2020-11-02: zyk |
Views: | 53 |