

A Simpler Unified Analysis of Budget Perceptrons
Course URL: http://videolectures.net/icml09_sutskever_asua/
Lecturer: Ilya Sutskever
Institution: University of Toronto
Date: 2009-08-26
Language: English
Course description: The kernel Perceptron is an appealing online learning algorithm that has a drawback: whenever it makes an error it must increase its support set, which slows training and testing if the number of errors is large. The Forgetron and the Randomized Budget Perceptron algorithms overcome this problem by restricting the number of support vectors the Perceptron is allowed to have. These algorithms have regret bounds whose proofs are dissimilar. In this paper we propose a unified analysis of both of these algorithms by observing that the way in which they remove support vectors can be seen as types of $L_2$-regularization. By casting these algorithms as instances of online convex optimization problems and applying a variant of Zinkevich's theorem for noisy and incorrect gradients, we can bound the regret of these algorithms more easily than before. Our bounds are similar to the existing ones, but the proofs are less technical.
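To make the budget idea concrete, below is a minimal Python sketch of a budgeted kernel Perceptron in the spirit of the Randomized Budget Perceptron described above: on each mistake the example joins the support set, and if the budget is exceeded a uniformly random support vector is discarded. The RBF kernel, the budget size, and the synthetic data are illustrative assumptions, not details taken from the lecture or the paper.

```python
# Hedged sketch of a budgeted kernel Perceptron (Randomized-Budget-style rule).
# Kernel choice, budget, and data are illustrative assumptions.
import random
import numpy as np


def rbf_kernel(x, z, gamma=1.0):
    """Gaussian (RBF) kernel; any positive-definite kernel could be substituted."""
    diff = x - z
    return np.exp(-gamma * np.dot(diff, diff))


class BudgetKernelPerceptron:
    def __init__(self, budget=50, kernel=rbf_kernel):
        self.budget = budget      # maximum number of support vectors kept
        self.kernel = kernel
        self.support_x = []       # stored examples (support vectors)
        self.support_y = []       # their labels in {-1, +1}, used as coefficients

    def predict_score(self, x):
        # f(x) = sum_i y_i * K(x_i, x) over the current support set
        return sum(y * self.kernel(sx, x)
                   for sx, y in zip(self.support_x, self.support_y))

    def update(self, x, y):
        """Observe one (x, y) with y in {-1, +1}; return 1 if a mistake was made."""
        mistake = int(y * self.predict_score(x) <= 0)
        if mistake:
            # On a mistake the kernel Perceptron adds the example to its support set.
            self.support_x.append(x)
            self.support_y.append(y)
            # If the budget is exceeded, drop one support vector chosen uniformly
            # at random (the randomized removal rule sketched here).
            if len(self.support_x) > self.budget:
                j = random.randrange(len(self.support_x))
                self.support_x.pop(j)
                self.support_y.pop(j)
        return mistake


# Tiny usage example on synthetic data (illustrative only).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model = BudgetKernelPerceptron(budget=20)
    mistakes = 0
    for _ in range(500):
        x = rng.normal(size=2)
        y = 1 if x.sum() > 0 else -1
        mistakes += model.update(x, y)
    print("mistakes:", mistakes, "support set size:", len(model.support_x))
```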
Keywords: online learning algorithms; convex optimization problems; regularization
Course source: VideoLectures.NET
Last reviewed: 2019-04-24: lxf
Views: 38