Sparse Multiscale Gaussian Process Regression
Course URL: http://videolectures.net/icml08_walder_spg/
Lecturer: Christian Walder
Institution: Max Planck Institute
Date: 2008-07-29
Language: English
Course description: Most existing sparse Gaussian process (g.p.) models seek computational advantages by basing their computations on a set of m basis functions that are the covariance function of the g.p. with one of its two inputs fixed. We generalise this for the case of the Gaussian covariance function, by basing our computations on m Gaussian basis functions with arbitrary diagonal covariance matrices (or length scales). For a fixed number of basis functions and any given criterion, this additional flexibility permits approximations no worse and typically better than was previously possible. We perform gradient-based optimisation of the marginal likelihood, which costs O(m²n) time where n is the number of data points, and compare the method to various other sparse g.p. methods. Although we focus on g.p. regression, the central idea is applicable to all kernel-based algorithms, and we also provide some results for the support vector machine (s.v.m.) and kernel ridge regression (k.r.r.). Our approach outperforms the other methods, particularly in the case of very few basis functions, i.e. a very high sparsity ratio.
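
To make the central idea concrete, below is a minimal Python sketch (not the authors' implementation) of regression with m Gaussian basis functions, each with its own centre v_i and its own per-dimension length scales s_i, generalising the usual sparse-g.p. choice of basis functions k(., v_i). For simplicity it fits the weights with a ridge-regression-style m-by-m solve rather than the paper's marginal-likelihood optimisation; all names (gauss_basis, fit_weights, noise) are illustrative. Forming the m-by-m system from n data points is the O(m²n) step mentioned above.

# Minimal sketch, assuming a squared-exponential form for the basis
# functions; a generic basis-function regressor, not the paper's method.
import numpy as np

def gauss_basis(X, V, S):
    # Phi[j, i] = exp(-0.5 * sum_k ((X[j,k] - V[i,k]) / S[i,k])**2)
    # X: (n, d) inputs; V: (m, d) centres; S: (m, d) per-basis length scales.
    D = (X[:, None, :] - V[None, :, :]) / S[None, :, :]   # (n, m, d)
    return np.exp(-0.5 * np.sum(D ** 2, axis=2))          # (n, m)

def fit_weights(X, y, V, S, noise=0.1):
    # Ridge-style stand-in for the paper's marginal-likelihood fit.
    Phi = gauss_basis(X, V, S)                            # (n, m)
    A = Phi.T @ Phi + noise ** 2 * np.eye(V.shape[0])     # forming Phi^T Phi costs O(m^2 n)
    return np.linalg.solve(A, Phi.T @ y)                  # (m,)

def predict(Xs, V, S, w):
    return gauss_basis(Xs, V, S) @ w

# Usage: a 1-d toy problem with m = 5 basis functions of differing widths.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(2.0 * X[:, 0]) + 0.1 * rng.standard_normal(200)
V = np.linspace(-3.0, 3.0, 5)[:, None]                    # centres
S = np.linspace(0.3, 1.5, 5)[:, None]                     # one length scale per basis function
w = fit_weights(X, y, V, S)
print(predict(np.array([[0.0]]), V, S, w))

In the paper the extra flexibility comes precisely from such per-basis length scales, which are tuned by gradient-based optimisation of the marginal likelihood; the sketch above simply keeps them fixed.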
Keywords: sparse Gaussian process; covariance function; basis functions
Course source: VideoLectures.NET (视频讲座网)
Last reviewed: 2019-04-21 (lxf)
Views: 192