

Inductive Regularized Learning of Kernel Functions

Course URL: http://videolectures.net/nips2010_jain_irl/
Lecturer: Prateek Jain
Institution: Nuance Communications
Date: 2011-03-25
Language: English
Course description: In this paper we consider the fundamental problem of semi-supervised kernel function learning. We propose a general regularized framework for learning a kernel matrix, and then demonstrate an equivalence between our proposed kernel matrix learning framework and a general linear transformation learning problem. Our result shows that the learned kernel matrices parameterize a linear transformation kernel function and can be applied inductively to new data points. Furthermore, our result gives a constructive method for kernelizing most existing Mahalanobis metric learning formulations. To make our results practical for large-scale data, we modify our framework to limit the number of parameters in the optimization process. We also consider the problem of kernelized inductive dimensionality reduction in the semi-supervised setting. We introduce a novel method for this problem by considering a special case of our general kernel learning framework where we select the trace norm function as the regularizer. We empirically demonstrate that our framework learns useful kernel functions, improving the $k$-NN classification accuracy significantly in a variety of domains. Furthermore, our kernelized dimensionality reduction technique significantly reduces the dimensionality of the feature space while achieving competitive classification accuracies.
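
The abstract only names the ingredients; as a rough sketch, and only a sketch since the exact objective, loss terms, and constraint set are specified in the paper rather than here, a regularized kernel-matrix learning problem of this kind has the shape

\[
\min_{K \succeq 0} \; r(K) \quad \text{subject to} \quad \operatorname{tr}(C_i K) \le b_i, \quad i = 1, \dots, m,
\]

where $r$ is the regularizer (the trace norm, which equals $\operatorname{tr}(K)$ for $K \succeq 0$, in the dimensionality-reduction special case) and each $C_i$ encodes one semi-supervised similarity or dissimilarity constraint. The equivalence result then says the optimal $K$ is not just a matrix over the training sample: it parameterizes a linear transformation kernel of the form

\[
\kappa(\mathbf{x}, \mathbf{z}) = \kappa_0(\mathbf{x}, \mathbf{z}) + \mathbf{k}_{\mathbf{x}}^{\top} M \, \mathbf{k}_{\mathbf{z}},
\qquad
\mathbf{k}_{\mathbf{x}} = \big( \kappa_0(\mathbf{x}, \mathbf{x}_1), \dots, \kappa_0(\mathbf{x}, \mathbf{x}_n) \big)^{\top},
\]

with $\kappa_0$ the input kernel and $M$ an $n \times n$ matrix determined by the learned $K$; the exact map from $K$ to $M$ is given by the paper's equivalence theorem and not reproduced here. Because $\mathbf{k}_{\mathbf{x}}$ is computable for any point $\mathbf{x}$, this is precisely what makes the learned kernel inductive.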
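To make the inductive claim concrete, below is a minimal Python sketch of applying a learned kernel of the above form to unseen points for $k$-NN classification. Everything here is an illustrative assumption rather than the paper's implementation: the RBF base kernel, the function names, and especially the matrix `M`, which is fabricated at random where the actual method would obtain it from the regularized optimization.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Base kernel kappa_0 evaluated between rows of A and rows of B."""
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def learned_kernel(A, B, X_train, M, gamma=0.5):
    """Linear-transformation kernel
       kappa(x, z) = kappa_0(x, z) + k_x^T M k_z,
       where k_x = (kappa_0(x, x_1), ..., kappa_0(x, x_n)).
       Since k_x exists for any x, the kernel extends to unseen points."""
    Ka = rbf_kernel(A, X_train, gamma)   # rows are the vectors k_x
    Kb = rbf_kernel(B, X_train, gamma)   # rows are the vectors k_z
    return rbf_kernel(A, B, gamma) + Ka @ M @ Kb.T

def knn_predict(X_test, X_train, y_train, M, k=3, gamma=0.5):
    """k-NN in the learned-kernel feature space, using
       d^2(x, z) = kappa(x, x) + kappa(z, z) - 2 kappa(x, z)."""
    Ktt = learned_kernel(X_test, X_test, X_train, M, gamma)
    Ktr = learned_kernel(X_test, X_train, X_train, M, gamma)
    Krr = learned_kernel(X_train, X_train, X_train, M, gamma)
    d2 = np.diag(Ktt)[:, None] + np.diag(Krr)[None, :] - 2 * Ktr
    nn = np.argsort(d2, axis=1)[:, :k]          # k nearest training points
    votes = y_train[nn]                          # their labels
    return np.array([np.bincount(v).argmax() for v in votes])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(40, 5))
    y_train = (X_train[:, 0] > 0).astype(int)
    # Fabricated PSD "learned" matrix: a stand-in for the optimizer's output.
    A = 0.1 * rng.normal(size=(40, 40))
    M = A @ A.T
    X_test = rng.normal(size=(10, 5))
    print(knn_predict(X_test, X_train, y_train, M))
```

The point of the sketch is that predicting on a new point needs only $\kappa_0$, the training sample, and $M$; the optimization is never re-solved at test time.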
Keywords: kernel function; kernel matrix; linear transformation kernel function
Source: VideoLectures.NET
Last reviewed: 2020-06-01 by 吴雨秋 (volunteer course editor)
Views: 77