Semi-supervised Learning by Higher Order Regularization |
|
Course URL: | http://videolectures.net/aistats2011_zhou_learning/ |
Lecturer: | Xueyuan Zhou |
Institution: | University of Chicago |
Date: | 2011-05-06 |
Language: | English |
Abstract: | In semi-supervised learning, at the limit of infinite unlabeled points while fixing the labeled ones, the solutions of several graph Laplacian regularization based algorithms were shown by Nadler et al. (2009) to degenerate to constant functions with "spikes" at the labeled points in R^d for d ≥ 2. These optimization problems all use the graph Laplacian regularizer as a common penalty term. In this paper, we address this problem by using regularization based on an iterated Laplacian, which is equivalent to a higher order Sobolev semi-norm. Alternatively, it can be viewed as a generalization of the thin plate spline to an unknown submanifold in high dimensions. We also discuss relationships between Reproducing Kernel Hilbert Spaces and Green's functions. Experimental results support our analysis by showing consistently improved results using iterated Laplacians. |
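The method described in the abstract replaces the usual graph Laplacian penalty f^T L f with an iterated penalty f^T L^m f (m ≥ 2), which avoids the degenerate "spiky" solutions. A minimal numpy sketch of this idea, not code from the lecture; the Gaussian-weight graph construction, function names, and parameter values are illustrative assumptions:

```python
import numpy as np

def rbf_graph_laplacian(X, sigma=1.0):
    """Unnormalized graph Laplacian L = D - W with Gaussian edge weights."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(axis=1))
    return D - W

def iterated_laplacian_ssl(X, y_labeled, labeled_idx, m=2, gamma=1e-2, sigma=1.0):
    """Minimize sum_{i labeled} (f_i - y_i)^2 + gamma * f^T L^m f.
    m = 1 is standard graph Laplacian regularization; m >= 2 is the
    iterated Laplacian (a discrete higher-order Sobolev semi-norm)."""
    n = X.shape[0]
    L = rbf_graph_laplacian(X, sigma)
    Lm = np.linalg.matrix_power(L, m)
    # Diagonal indicator of labeled points and the label vector.
    J = np.zeros((n, n))
    b = np.zeros(n)
    for i, yi in zip(labeled_idx, y_labeled):
        J[i, i] = 1.0
        b[i] = yi
    # Normal equations of the quadratic objective: (J + gamma * L^m) f = b.
    return np.linalg.solve(J + gamma * Lm, b)
```

With one labeled point per cluster, the solution is approximately constant on each well-separated cluster and matches the labels there, so unlabeled points are classified by the sign of f.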
Keywords: | discrete infinite logic; random variables; Bayesian algorithms |
Source: | VideoLectures.NET |
Last reviewed: | 2019-12-20:lxf |
Views: | 39 |