Deep Neural Decision Forests
Course URL: http://videolectures.net/iccv2015_kontschieder_decision_forests/
Lecturer: Peter Kontschieder
Institution: Microsoft Research; University of Cambridge
Date: 2016-02-10
Language: English
Description: We present Deep Neural Decision Forests – a novel approach that unifies classification trees with the representation learning functionality known from deep convolutional networks, by training them in an end-to-end manner. To combine these two worlds, we introduce a stochastic and differentiable decision tree model, which steers the representation learning usually conducted in the initial layers of a (deep) convolutional network. Our model differs from conventional deep networks because a decision forest provides the final predictions, and it differs from conventional decision forests since we propose a principled, joint and global optimization of split and leaf node parameters. We show experimental results on benchmark machine learning datasets like MNIST and ImageNet and find on-par or superior results when compared to state-of-the-art deep models. Most remarkably, we obtain top-5 errors of only 7.84%/6.38% on ImageNet validation data when integrating our forests into a single-crop, single/seven-model GoogLeNet architecture, respectively. Thus, even without any form of training data set augmentation, we improve on the 6.67% error obtained by the best GoogLeNet architecture (7 models, 144 crops).
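
A minimal sketch of the idea described above, assuming PyTorch (the lecture does not prescribe a framework): a fixed-depth binary tree whose split decisions d_n(x) = sigmoid(w_n . x) are computed from CNN features, so the routing probabilities and the final prediction are differentiable and the tree can be trained end-to-end by backpropagation. The class name SoftDecisionTree and all parameter names are illustrative; unlike the paper, which updates the leaf distributions with a separate convex optimization step, this sketch simply learns them by gradient descent as well.

import torch
import torch.nn as nn

class SoftDecisionTree(nn.Module):
    """Differentiable decision tree head for a CNN feature vector (sketch)."""

    def __init__(self, in_features: int, depth: int, num_classes: int):
        super().__init__()
        self.depth = depth
        self.num_leaves = 2 ** depth
        num_inner = self.num_leaves - 1  # number of split (inner) nodes
        # One sigmoid split function per inner node: d_n(x) = sigma(w_n . x + b_n).
        self.splits = nn.Linear(in_features, num_inner)
        # Unnormalized leaf distributions; softmax yields pi_l for each leaf l.
        self.leaf_logits = nn.Parameter(torch.zeros(self.num_leaves, num_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        d = torch.sigmoid(self.splits(x))  # (batch, num_inner) split probabilities
        mu = x.new_ones(x.size(0), 1)      # prob. of reaching each node, level by level
        idx = 0                            # first node of the current level (breadth-first)
        for level in range(self.depth):
            n = 2 ** level                 # number of nodes on this level
            dn = d[:, idx:idx + n]
            # Each node routes left with probability d_n, right with 1 - d_n.
            mu = torch.stack((mu * dn, mu * (1 - dn)), dim=2).reshape(-1, 2 * n)
            idx += n
        pi = torch.softmax(self.leaf_logits, dim=1)  # (num_leaves, num_classes)
        return mu @ pi                               # mixture over leaves: (batch, num_classes)

# Illustrative usage: a depth-4 tree on 64-dimensional features, 10 classes.
# head = SoftDecisionTree(64, depth=4, num_classes=10)
# probs = head(torch.randn(8, 64))  # each row sums to 1

A forest would average the predictions of several such trees attached to the feature layer of the convolutional network, which is what the lecture's GoogLeNet/ImageNet experiments do at scale.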
Keywords: deep neural networks; decision models; decision forests
Source: VideoLectures.NET
Data collected: 2023-03-22: chenxin01
Last reviewed: 2023-05-17: liyy
Views: 23