GP-LVM for Data Consolidation
Course URL: http://videolectures.net/lms08_lawrence_gpdc/
Lecturer: Neil D. Lawrence
Institution: University of Sheffield
Date: 2008-12-20
Language: English
Course description: Many machine learning tasks involve transferring information from one representation to a corresponding representation, or tasks in which several different observations represent the same underlying phenomenon. A classical algorithm for feature selection using information from multiple sources or representations is Canonical Correlation Analysis (CCA). In CCA the objective is to select features in each observation space that are maximally correlated, in contrast to dimensionality reduction, where the objective is to re-represent the data in a more efficient form. We suggest a dimensionality reduction technique that builds on CCA. By extending the latent space with two additional spaces, each specific to a partition of the data, the model is capable of representing the full variance of the data. In this paper we suggest a generative model for shared dimensionality reduction analogous to that of CCA.
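
As a rough illustrative sketch (not code from the lecture): the snippet below implements classical CCA via an SVD of the whitened cross-covariance, and samples from a linear analogue of the shared/private latent model the description outlines, in which one latent space is shared between the two views and each view keeps a private latent space for its own variance. All names here (cca, sample_shared_private, and their parameters) are my own; the lecture's model uses Gaussian process (GP-LVM) mappings rather than the linear maps used in this sketch.

import numpy as np

def cca(X, Y, k=2, eps=1e-8):
    # Classical CCA via SVD of the whitened cross-covariance.
    # Rows of X (n, dx) and Y (n, dy) are paired observations.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / (n - 1) + eps * np.eye(X.shape[1])  # regularised for stability
    Cyy = Y.T @ Y / (n - 1) + eps * np.eye(Y.shape[1])
    Cxy = X.T @ Y / (n - 1)

    def inv_sqrt(C):
        # Inverse matrix square root via eigendecomposition (C is symmetric PD).
        vals, vecs = np.linalg.eigh(C)
        return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T

    Wxx, Wyy = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Wxx @ Cxy @ Wyy)
    # Columns of X @ Wx and Y @ Wy are the canonical variates;
    # s[:k] are the corresponding canonical correlations.
    return Wxx @ U[:, :k], Wyy @ Vt[:k].T, s[:k]

def sample_shared_private(n, dx=5, dy=4, k_shared=2, k_private=1, seed=0):
    # Linear analogue of the shared/private latent model described above:
    # a shared latent space carries the correlated variance, and each view
    # has a private latent space for its view-specific variance.
    rng = np.random.default_rng(seed)
    Z = rng.normal(size=(n, k_shared))    # shared latent coordinates
    Zx = rng.normal(size=(n, k_private))  # private to the X view
    Zy = rng.normal(size=(n, k_private))  # private to the Y view
    Ax, Bx = rng.normal(size=(k_shared, dx)), rng.normal(size=(k_private, dx))
    Ay, By = rng.normal(size=(k_shared, dy)), rng.normal(size=(k_private, dy))
    X = Z @ Ax + Zx @ Bx + 0.1 * rng.normal(size=(n, dx))
    Y = Z @ Ay + Zy @ By + 0.1 * rng.normal(size=(n, dy))
    return X, Y

X, Y = sample_shared_private(500)
Wx, Wy, rho = cca(X, Y)
print("canonical correlations:", rho)

On data drawn this way, the leading canonical correlations recovered by cca should be close to 1, reflecting the two shared latent dimensions, while directions driven only by the private spaces remain weakly correlated.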
Keywords: machine learning; dimensionality reduction; correlation analysis
Source: VideoLectures.NET
Last reviewed: 2020-01-13 (chenxin)
Views: 70