Optimal Reverse Prediction: A Unified Perspective on Supervised, Unsupervised and Semi-Supervised Learning |
|
Course URL: | http://videolectures.net/icml09_xu_orp/ |
Lecturer: | Linli Xu |
Institution: | University of Alberta |
Date: | 2009-08-26 |
Language: | English |
Abstract: | Training principles for unsupervised learning are often derived from motivations that appear to be independent of supervised learning, causing a proliferation of semi-supervised training methods. In this paper we present a simple unification of several supervised and unsupervised training principles through the concept of optimal reverse prediction: predict the inputs from the target labels, optimizing both over model parameters and any missing labels. In particular, we show how supervised least squares, principal components analysis, k-means clustering and normalized graph-cut clustering can all be expressed as instances of the same training principle, differing only in the constraints placed on the target labels. Natural forms of semi-supervised regression and classification are then automatically derived, yielding semi-supervised learning algorithms for regression and classification that, surprisingly, are novel and improve the state of the art. These algorithms can all be combined with standard regularizers and made non-linear via kernels. |
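The unifying idea in the abstract is that least squares, PCA, and k-means all minimize the same reverse-prediction loss ||X - ZU||² and differ only in how the label matrix Z is constrained. A minimal NumPy sketch of that relationship (variable names and the random data are illustrative, not taken from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))  # n x d input matrix
k = 2                          # number of label dimensions / clusters

def reverse_loss(X, Z):
    # Optimal reverse model U* = pinv(Z) X; loss = ||X - Z U*||_F^2
    U = np.linalg.pinv(Z) @ X
    return np.sum((X - Z @ U) ** 2)

# k-means: Z constrained to one-hot cluster indicators,
# minimized by alternating between U (cluster means) and Z (assignments)
Z = np.eye(k)[rng.integers(0, k, size=len(X))]
for _ in range(20):
    U = np.linalg.pinv(Z) @ X                          # cluster means
    d2 = ((X[:, None, :] - U[None, :, :]) ** 2).sum(-1)
    Z = np.eye(k)[d2.argmin(1)]                        # reassign labels
loss_kmeans = reverse_loss(X, Z)

# PCA: Z unconstrained, so the optimal loss is the rank-k SVD residual
s = np.linalg.svd(X, compute_uv=False)
loss_pca = np.sum(s[k:] ** 2)

# Relaxing the constraint on Z can only lower the reverse-prediction loss
assert loss_pca <= loss_kmeans + 1e-9
```

The final assertion reflects the unification: a one-hot Z gives a rank-at-most-k reconstruction, so the unconstrained (PCA) solution is always at least as good under the shared objective. In the supervised case, Z would simply be fixed to the observed labels, recovering reverse least squares.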
Keywords: | unsupervised learning; optimal reverse prediction; least squares |
Source: | VideoLectures.NET |
Last reviewed: | 2019-04-24:lxf |
Views: | 32