

Online-Batch Strongly Convex Multi Kernel Learning
Course URL: http://videolectures.net/cvpr2010_orabona_obsc/
Lecturer: Francesco Orabona
Institution: Toyota Technological Institute at Chicago
Date: 2010-07-19
Language: English
Description: Several object categorization algorithms use kernel methods over multiple cues, as they offer a principled approach to combining multiple cues and obtaining state-of-the-art performance. A general drawback of these strategies is the high computational cost during training, which prevents their application to large-scale problems. They also do not provide theoretical guarantees on their convergence rate. Here we present a Multiclass Multi Kernel Learning (MKL) algorithm that obtains state-of-the-art performance in considerably lower training time. We generalize the standard MKL formulation to introduce a parameter that allows us to control the level of sparsity of the solution. Thanks to this new setting, we can solve the problem directly in the primal formulation. We prove theoretically and experimentally that 1) our algorithm has a faster convergence rate as the number of kernels grows; 2) the training complexity is linear in the number of training examples; 3) very few iterations are enough to reach good solutions. Experiments on three standard benchmark databases support our claims.
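The abstract describes learning a weighted combination of kernels, with a parameter controlling how sparse the kernel weights are. The following is a minimal sketch of that idea only, not the lecture's algorithm: it builds several Gaussian kernels and combines them with weights normalized onto the unit l_p ball, where the exponent p stands in (as an illustrative assumption) for the sparsity-controlling parameter. All function names and the normalization step are hypothetical simplifications.

```python
import numpy as np

def gaussian_kernel(X, gamma):
    # Pairwise squared Euclidean distances -> RBF kernel matrix.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def combine_kernels(kernels, weights, p=2.0):
    # Normalize nonnegative weights onto the unit l_p ball, then form
    # the weighted sum of kernel matrices. Smaller p pushes the weight
    # vector toward sparsity (illustrative stand-in for the paper's
    # sparsity parameter, not the actual update rule).
    w = np.maximum(np.asarray(weights, dtype=float), 0.0)
    w = w / (np.sum(w ** p) ** (1.0 / p))
    return sum(wm * Km for wm, Km in zip(w, kernels)), w

# Toy data: 5 points in 3 dimensions, three candidate bandwidths.
X = np.random.RandomState(0).randn(5, 3)
Ks = [gaussian_kernel(X, g) for g in (0.1, 1.0, 10.0)]
K, w = combine_kernels(Ks, [1.0, 1.0, 1.0], p=1.5)
```

The combined matrix K stays symmetric and positive semidefinite because it is a nonnegative combination of valid kernels, which is what lets it be plugged into any kernel classifier downstream.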
Keywords: Computer Science; Computer Vision; Sparse and Convex Optimization
Source: VideoLectures.NET
Last reviewed: 2020-06-03 by Mao Daiqi (volunteer course editor)
Views: 53