The Intrinsic Geometries of Learning
Course URL: http://videolectures.net/etvc08_nock_tigol/
Lecturer: Richard Nock
Affiliation: Université des Antilles et de la Guyane
Date: 2008-12-05
Language: English
Course description: In a seminal paper, Amari (1998) proved that learning can be made more efficient when one uses the intrinsic Riemannian structure of the algorithms' spaces of parameters to point the gradient towards better solutions. In this paper, we show that many learning algorithms, including various boosting algorithms for linear separators, the most popular top-down decision-tree induction algorithms, and some on-line learning algorithms, are spawns of a generalization of Amari's natural gradient to some particular non-Riemannian spaces. These algorithms exploit an intrinsic dual geometric structure of the space of parameters in relationship with particular integral losses that are to be minimized. We unite some of them, such as AdaBoost, additive regression with the square loss, the logistic loss, and the top-down induction performed in CART and C4.5, as a single algorithm for which we show general convergence to the optimum and explicit convergence rates under very weak assumptions. As a consequence, many of the classification-calibrated surrogates of Bartlett et al. (2006) admit efficient minimization algorithms.
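Background (standard formulas, added for reference): the natural-gradient update of Amari (1998), which the abstract generalizes to non-Riemannian parameter spaces, preconditions the ordinary gradient with the Fisher information metric $G(\theta)$ of the parameter space,

\[
\theta_{t+1} \;=\; \theta_t \;-\; \eta\, G(\theta_t)^{-1}\, \nabla_\theta L(\theta_t) .
\]

The classification-calibrated surrogates mentioned (Bartlett et al., 2006) include the exponential loss $e^{-z}$ minimized by AdaBoost, the square loss $(1-z)^2$ of additive regression, and the logistic loss $\log(1+e^{-z})$.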
Keywords: Riemannian structure; learning algorithms; decision-tree induction algorithms
Source: VideoLectures.NET
Last reviewed: 2019-10-17 (cwx)
Views: 70