


Escaping From Saddle Points --- Online Stochastic Gradient for Tensor Decomposition
Course URL: http://videolectures.net/colt2015_huang_tensor_decomposition/
Lecturer: Furong Huang
Institution: University of California, Irvine
Date: 2015-08-20
Language: English
Course description: We analyze stochastic gradient descent for optimizing non-convex functions. For non-convex functions it is often enough to find a reasonable local minimum, and the main concern is that gradient updates get trapped at saddle points. In this paper we identify a strict saddle property for non-convex problems that allows for efficient optimization, and we show that stochastic gradient descent converges to a local minimum in a polynomial number of iterations. To the best of our knowledge, this is the first work that gives global convergence guarantees for stochastic gradient descent on non-convex functions with exponentially many local minima and saddle points. Our analysis applies to orthogonal tensor decomposition, which is widely used for learning a rich class of latent variable models. We propose a new optimization formulation for the tensor decomposition problem that has the strict saddle property. As a result, we obtain the first online algorithm for orthogonal tensor decomposition with a convergence guarantee.
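Roughly, the strict saddle property asks that every stationary point which is not a local minimum have a direction of strictly negative curvature; the noise in stochastic gradient updates is what pushes the iterate off such saddles. Below is a minimal sketch of that idea, not the paper's exact algorithm or formulation: noisy projected gradient ascent recovering one component of a toy orthogonal tensor whose components are the standard basis vectors e_i. The function name noisy_projected_sgd and all step-size, noise, and iteration parameters are illustrative assumptions.

```python
import numpy as np

def noisy_projected_sgd(lams, iters=5000, eta=0.05, noise=0.01, seed=0):
    """Noisy projected gradient ascent for max_{|u|=1} sum_i lams[i]*u[i]**3,
    i.e. T(u, u, u) for the toy orthogonal tensor whose rank-1 terms are
    lams[i] times e_i tensor e_i tensor e_i.  Mixtures of components are
    saddle points of the constrained problem; the injected noise lets the
    iterate escape them (illustrative sketch, not the paper's algorithm)."""
    rng = np.random.default_rng(seed)
    d = len(lams)
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                         # start on the unit sphere
    for _ in range(iters):
        grad = 3.0 * lams * u**2                   # gradient of sum_i lam_i*u_i^3
        g = grad + noise * rng.standard_normal(d)  # stochastic perturbation
        u = u + eta * g                            # gradient step ...
        u /= np.linalg.norm(u)                     # ... then retract to the sphere
    return u

if __name__ == "__main__":
    lams = np.ones(4)
    u = noisy_projected_sgd(lams)
    print(np.round(u, 3))  # concentrates near a single basis vector e_i
```

At a symmetric saddle such as u with two equal nonzero coordinates, the projected gradient vanishes and a noiseless method can stall; the perturbation breaks the tie, and the iterate slides toward a single component, which is the behavior the paper's polynomial-iteration convergence guarantee formalizes.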
Keywords: tensor decomposition; stochastic gradient; non-convex functions
Source: VideoLectures.NET
Data collected: 2023-07-24: chenxin01
Last reviewed: 2023-07-24: chenxin01
Views: 9