


Learning Overcomplete Latent Variable Models through Tensor Methods
Course URL: http://videolectures.net/colt2015_anandkumar_tensor_methods/
Lecturer: Animashree Anandkumar
Institution: University of California, Irvine
Date: 2015-08-20
Language: English
Summary: We provide guarantees for learning latent variable models, emphasizing the overcomplete regime, where the dimensionality of the latent space exceeds the observed dimensionality. In particular, we consider multiview mixtures, ICA, and sparse coding models. Our main tool is a new algorithm for tensor decomposition that works in the overcomplete regime. In the semi-supervised setting, we exploit label information to get a rough estimate of the model parameters, and then refine it using the tensor method on unlabeled samples. We establish learning guarantees when the number of components scales as $k=o(d^{p/2})$, where $d$ is the observed dimension, and $p$ is the order of the observed moment employed in the tensor method (usually $p=3,4$). In the unsupervised setting, a simple initialization algorithm based on SVD of the tensor slices is proposed, and the guarantees are provided under the stricter condition that $k\le \beta d$ (where the constant $\beta$ can be larger than $1$). For the learning applications, we provide tight sample complexity bounds through novel covering arguments.
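The abstract's two ingredients, tensor decomposition of a moment tensor and initialization from the SVD of tensor slices, can be sketched in the simpler orthogonal (non-overcomplete) special case. This is a toy illustration under assumed simplifications, not the talk's overcomplete algorithm: the function names `tensor_power_iteration` and `slice_svd_init` and the synthetic tensor are hypothetical, and the real method handles $k > d$ components with finer-grained guarantees.

```python
import numpy as np

def slice_svd_init(T, seed=0):
    """Sketch of slice-based initialization: contract the third mode of T
    with a random direction and take the top left singular vector of the
    resulting d x d matrix."""
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal(T.shape[2])
    M = np.einsum('ijk,k->ij', T, theta)  # random weighted sum of slices
    U, _, _ = np.linalg.svd(M)
    return U[:, 0]

def tensor_power_iteration(T, u0=None, n_iter=100, seed=0):
    """Recover one component of a symmetric 3rd-order tensor via the
    power update u <- T(I, u, u) / ||T(I, u, u)||."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(T.shape[0]) if u0 is None else u0.copy()
    u /= np.linalg.norm(u)
    for _ in range(n_iter):
        # v_i = sum_{j,k} T_ijk u_j u_k  (contraction along two modes)
        v = np.einsum('ijk,j,k->i', T, u, u)
        u = v / np.linalg.norm(v)
    # Estimated weight of the recovered component
    lam = np.einsum('ijk,i,j,k->', T, u, u, u)
    return lam, u

# Synthetic moment tensor T = sum_r w_r * a_r^{(x)3} with orthonormal a_r
# (the orthogonal case; the overcomplete regime instead has k > d).
d, k = 10, 3
rng = np.random.default_rng(1)
A, _ = np.linalg.qr(rng.standard_normal((d, d)))
A = A[:, :k]                       # orthonormal component vectors
w = np.array([3.0, 2.0, 1.0])      # component weights
T = np.einsum('r,ir,jr,kr->ijk', w, A, A, A)

u0 = slice_svd_init(T)
lam, u = tensor_power_iteration(T, u0=u0)
```

After convergence, `u` aligns with one of the columns of `A` and `lam` recovers the corresponding weight `w_r`; deflating (subtracting `lam * u⊗u⊗u`) and repeating recovers the remaining components.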
Keywords: tensor methods; latent variable models; sample complexity bounds
Source: VideoLectures.NET
Data collected: 2023-06-11: chenxin01
Last reviewed: 2023-06-11: chenxin01
Views: 15