


ShareBoost: Boosting for Multi-View Learning with Performance Guarantees
Course URL: http://videolectures.net/ecmlpkdd2011_palaniappan_shareboost/
Lecturer: Kannappan Palaniappan
Institution: University of Missouri
Date: 2011-10-03
Language: English
Course abstract: Algorithms combining multi-view information are known to exponentially quicken classification and have been applied in many fields. However, they lack the ability to mine the most discriminant information sources (or data types) for making predictions. In this paper, we propose a boosting-based algorithm to address these problems. The proposed algorithm builds base classifiers independently from each data type (view), each of which provides a partial view of an object of interest. Unlike AdaBoost, where each view has its own re-sampling weight, our algorithm uses a single re-sampling distribution for all views at each boosting round. This distribution is determined by the view whose training error is minimal. This shared sampling mechanism restricts noise to individual views, thereby reducing sensitivity to noise. Furthermore, to establish performance guarantees, we introduce a randomized version of the algorithm in which a winning view is chosen probabilistically. As a result, it can be cast within a multi-armed bandit framework, which allows us to show that, with high probability, the algorithm seeks out the most discriminant views of the data for making predictions. We provide experimental results showing its robustness to noise and its performance against competing techniques.
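The shared-sampling mechanism from the abstract can be sketched as follows. This is a minimal illustrative reconstruction, not the authors' implementation: it assumes binary labels in {-1, +1}, decision stumps as base learners, deterministic winner selection by minimal weighted training error (the non-randomized variant), and an AdaBoost-style weight update driven by the winning view. All function names are hypothetical.

```python
import numpy as np

def fit_stump(X, y, w):
    """Exhaustively fit a weighted decision stump on one view.
    Returns (weighted_error, feature, threshold, polarity)."""
    best = (np.inf, 0, 0.0, 1)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, j, thr, pol)
    return best

def stump_predict(stump, X):
    _, j, thr, pol = stump
    return np.where(pol * (X[:, j] - thr) >= 0, 1, -1)

def shareboost_fit(views, y, n_rounds=10):
    """Boost over multiple views with ONE shared example-weight distribution.

    views: list of (n, d_v) feature matrices, one per view.
    y: labels in {-1, +1}.
    Returns a list of (winning_view, stump, alpha) triples.
    """
    n = len(y)
    w = np.full(n, 1.0 / n)   # single distribution shared by all views
    ensemble = []
    for _ in range(n_rounds):
        # Train one base learner per view on the *same* weights, then keep
        # the view whose weighted training error is minimal.
        candidates = [(fit_stump(X, y, w), v) for v, X in enumerate(views)]
        best_stump, v_win = min(candidates, key=lambda c: c[0][0])
        err = np.clip(best_stump[0], 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = stump_predict(best_stump, views[v_win])
        # Shared update: every view sees weights re-shaped by the winner's
        # mistakes, which confines a noisy view's influence to rounds it wins.
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        ensemble.append((v_win, best_stump, alpha))
    return ensemble

def shareboost_predict(ensemble, views):
    """Weighted vote of the per-round winning stumps."""
    score = sum(alpha * stump_predict(s, views[v]) for v, s, alpha in ensemble)
    return np.sign(score)
```

On data where one view is informative and another is pure noise, the minimal-error selection makes the informative view win nearly every round, which is the behavior the bandit analysis in the paper formalizes for the randomized variant.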
Keywords: multi-view information; exponential speedup; re-sampling distribution
Source: VideoLectures.NET
Last reviewed: 2019-04-03:lxf
Views: 44