


Algorithmic Strategies for Non-convex Optimization in Sparse Learning
Course URL: http://videolectures.net/smls09_zhang_asfnc/
Lecturer: Tong Zhang
Institution: Rutgers, The State University of New Jersey
Date: 2009-05-06
Language: English
Course description: We consider optimization formulations with non-convex regularization that are natural for learning sparse linear models. There are two approaches to this problem: 1. Heuristic methods such as gradient descent that only find a local minimum; a drawback of this approach is the lack of a theoretical guarantee that the local minimum gives a good solution. 2. Convex relaxation such as L1-regularization, which solves the problem under some conditions but often leads to sub-optimal sparsity in practice. This talk tries to remedy the above gap between theory and practice by presenting two algorithms for direct optimization of non-convex objectives in sparse learning, for which performance guarantees can be established. The first method is a multi-stage convex relaxation scheme that iteratively refines the L1-regularization. The second is a greedy procedure for solving non-convex sparse regularization. We show that under appropriate conditions, both procedures lead to desirable local solutions of sparse non-convex formulations that are superior to the global solution of L1-regularization.
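
The first method lends itself to a compact illustration: each stage solves a weighted L1 problem whose per-coefficient weights come from the previous stage's solution, so the first stage is an ordinary Lasso and later stages stop penalizing coefficients that are already large. The Python sketch below is a minimal illustration of that idea under an assumed capped-L1 penalty lam * sum_j min(|w_j|, theta), with a plain proximal-gradient (ISTA) inner solver; the function names and all parameter values are illustrative assumptions, not the talk's exact algorithm.

```python
import numpy as np

def weighted_lasso_ista(X, y, weights, n_iter=500):
    """Proximal gradient (ISTA) for 0.5*||Xw - y||^2 + sum_j weights[j]*|w_j|."""
    step = 1.0 / (np.linalg.norm(X, 2) ** 2)        # 1 / Lipschitz constant of the smooth part
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = w - step * (X.T @ (X @ w - y))          # gradient step on the least-squares term
        w = np.sign(z) * np.maximum(np.abs(z) - step * weights, 0.0)  # per-coordinate soft-threshold
    return w

def multi_stage_capped_l1(X, y, lam, theta, n_stages=5):
    """Illustrative multi-stage convex relaxation for the capped-L1 penalty
    lam * sum_j min(|w_j|, theta): each stage is a reweighted L1 problem whose
    weights are derived from the previous stage's solution (a sketch, not the
    talk's exact algorithm)."""
    w = np.zeros(X.shape[1])
    weights = np.full(X.shape[1], lam)              # stage 1 is an ordinary Lasso
    for _ in range(n_stages):
        w = weighted_lasso_ista(X, y, weights)
        weights = lam * (np.abs(w) < theta).astype(float)  # coefficients past the cap are no longer penalized
    return w
```

Calling multi_stage_capped_l1(X, y, lam=0.1, theta=0.5) on a data matrix X and response y returns a coefficient vector whose first stage coincides with plain L1 regularization and whose later stages reduce the shrinkage bias on the selected coefficients.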
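
The second method is described only as a greedy procedure, so the following is a hedged sketch of a forward-backward greedy selection in that spirit: forward steps add the feature that most reduces the squared error, and a backward step deletes a feature again when the resulting error increase is small relative to the most recent forward gain. The stopping threshold eps, the deletion factor nu, and the helper refit are assumptions made for illustration.

```python
import numpy as np

def refit(X, y, support):
    """Unrestricted least squares on the selected columns, zeros elsewhere."""
    w = np.zeros(X.shape[1])
    cols = sorted(support)
    if cols:
        w[cols] = np.linalg.lstsq(X[:, cols], y, rcond=None)[0]
    return w

def forward_backward_greedy(X, y, eps, nu=0.5, max_features=None):
    """Illustrative forward-backward greedy selection: forward steps add the
    feature giving the largest squared-error reduction; a backward step removes
    a feature again when the resulting error increase is below nu times the
    most recent forward gain. Stops once the forward gain drops below eps."""
    d = X.shape[1]
    max_features = d if max_features is None else max_features

    def err(v):
        return 0.5 * np.sum((X @ v - y) ** 2)

    support, w = set(), np.zeros(d)
    while len(support) < max_features:
        # forward step: pick the unused feature whose refit lowers the error most
        best_j = min(set(range(d)) - support, key=lambda j: err(refit(X, y, support | {j})))
        gain = err(w) - err(refit(X, y, support | {best_j}))
        if gain < eps:
            break
        support.add(best_j)
        w = refit(X, y, support)

        # backward steps: drop features whose removal barely increases the error
        while len(support) > 1:
            j_min = min(support, key=lambda j: err(refit(X, y, support - {j})))
            if err(refit(X, y, support - {j_min})) - err(w) >= nu * gain:
                break
            support.remove(j_min)
            w = refit(X, y, support)

    return w, sorted(support)
```

forward_backward_greedy(X, y, eps=1e-3) returns both the refitted coefficient vector and the selected support; the backward step is what distinguishes this from purely forward greedy selection, since it can undo early choices that later become redundant.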
Keywords: non-convex regularization; optimization formulations; sparse linear models
Course source: VideoLectures.NET
Last reviewed: 2019-09-22 (cwx)
Views: 57