

Denoising and Dimension Reduction in Feature Space
Course URL: http://videolectures.net/acs07_muller_braun_ddr/
Lecturers: Klaus-Robert Müller; Mikio Braun
Institution: Technische Universität Berlin
Date: 2007-12-10
Language: English
Course Description: The talk presents recent work that interestingly complements our understanding of the VC picture in kernel-based learning. Our finding is that the relevant information of a supervised learning problem is contained, up to negligible error, in a finite number of leading kernel PCA components if the kernel matches the underlying learning problem. Thus, kernels not only transform data sets such that good generalization can be achieved using only linear discriminant functions, but this transformation is also performed in a manner which makes economic use of feature space dimensions. In the best case, kernels provide efficient implicit representations of the data for supervised learning problems. Practically, we propose an algorithm which enables us to recover the subspace and dimensionality relevant for good classification. Our algorithm can therefore be applied (1) to analyze the interplay of data set and kernel in a geometric fashion, (2) to aid in model selection, and (3) to denoise in feature space in order to yield better classification results. We complement our theoretical findings by reporting on applications of our method to data from gene finding and brain-computer interfacing. This is joint work with Claudia Sannelli and Joachim M. Buhmann.
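To make the idea concrete, below is a minimal Python sketch of denoising in feature space: keep only the leading kernel PCA components and train a linear classifier on them. The cross-validation loop over candidate dimensionalities is an illustrative stand-in for the estimation procedure described in the talk, not the authors' exact algorithm; the toy data set, the RBF kernel, and the gamma value are assumptions introduced here for demonstration.

```python
# Sketch: project data onto the d leading kernel PCA components, then
# classify with a linear model. The dimensionality at which accuracy
# saturates suggests how many components carry the task-relevant
# information; keeping only those components denoises the feature-space
# representation. Data, kernel, and gamma below are placeholder choices.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_moons(n_samples=300, noise=0.3, random_state=0)  # toy stand-in data

gamma = 1.0  # assumed RBF kernel width; in practice part of model selection
candidate_dims = [1, 2, 4, 8, 16, 32, 64]

scores = {}
for d in candidate_dims:
    # Truncate to the d leading kernel PCA components, then fit a linear classifier.
    model = make_pipeline(
        KernelPCA(n_components=d, kernel="rbf", gamma=gamma),
        LogisticRegression(max_iter=1000),
    )
    scores[d] = cross_val_score(model, X, y, cv=5).mean()

best_d = max(scores, key=scores.get)
print("cross-validated accuracy per dimensionality:", scores)
print("estimated relevant dimensionality:", best_d)
```

Running the loop over both candidate dimensionalities and candidate kernel parameters gives a simple, if brute-force, way to use the same projection for model selection as well as for denoising.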
Keywords: VC picture; leading kernel PCA components; data sets; denoising in feature space
Source: VideoLectures.NET
Last reviewed: 2020-06-13 by 邬启凡 (volunteer course editor)
Views: 77