

Guaranteed Non-convex Learning Algorithms through Tensor Factorization
Course URL: http://videolectures.net/iclr2016_anandkumar_nonconvex_learning/
Lecturer: Animashree Anandkumar
Institution: University of California
Date: 2016-05-27
Language: English
Course description: Modern machine learning involves massive datasets of text, images, videos, biological data, and so on. Most learning tasks can be framed as optimization problems which turn out to be non-convex and NP-hard to solve. This hardness barrier can be overcome by: (i) focusing on conditions which make learning tractable, (ii) replacing the given optimization objective with better behaved ones, and (iii) exploiting non-obvious connections that abound in learning problems. I will discuss the above in the context of: (i) unsupervised learning of latent variable models and (ii) training multi-layer neural networks, through a novel framework involving spectral decomposition of moment matrices and tensors. Tensors are rich structures that can encode higher order relationships in data. Despite being non-convex, tensor decomposition can be solved optimally using simple iterative algorithms under mild conditions. In practice, tensor methods yield enormous gains both in running times and learning accuracy over traditional methods for training probabilistic models such as variational inference. These positive results demonstrate that many challenging learning tasks can be solved efficiently, both in theory and in practice.
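The description states that, despite non-convexity, tensor decomposition can be solved optimally by simple iterative algorithms under mild conditions. As a minimal illustrative sketch (not code from the lecture), the following implements the standard tensor power iteration with greedy deflation for a symmetric, orthogonally decomposable 3-way tensor; the function names and parameter choices here are our own illustration:

```python
import numpy as np

def tensor_power_iteration(T, n_iter=100, seed=0):
    """Estimate one robust eigenpair of a symmetric 3-way tensor T
    via the power update v <- T(I, v, v) / ||T(I, v, v)||."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(T.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        # Contract T against v twice: v_new[j] = sum_{k,l} T[j,k,l] v[k] v[l]
        v = np.einsum('jkl,k,l->j', T, v, v)
        v /= np.linalg.norm(v)
    # Eigenvalue estimate lambda = T(v, v, v)
    lam = np.einsum('jkl,j,k,l->', T, v, v, v)
    return lam, v

def decompose(T, rank, n_iter=100):
    """Greedy deflation: extract `rank` components one at a time,
    subtracting lam * v (x) v (x) v after each extraction."""
    T = T.copy()
    lams, vecs = [], []
    for r in range(rank):
        lam, v = tensor_power_iteration(T, n_iter=n_iter, seed=r)
        lams.append(lam)
        vecs.append(v)
        T -= lam * np.einsum('j,k,l->jkl', v, v, v)
    return np.array(lams), np.array(vecs)
```

Under the orthogonality assumption the power update converges quadratically from a generic random start, which is one reason these "simple iterative algorithms" are fast in practice; the general conditions and guarantees are the subject of the lecture itself.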
Keywords: modern machine learning; optimization problems; probabilistic models
Source: VideoLectures.NET
Data collected: 2022-11-18:chenjy
Last reviewed: 2022-11-18:chenjy
Views: 25