

Primal-Dual Subgradient Methods for Huge-Scale Problems
Course URL: http://videolectures.net/roks2013_nesterov_methods/
Lecturer: Yurii Nesterov
Institution: Université catholique de Louvain
Date: 2013-08-26
Language: English
Abstract: We consider a new class of huge-scale problems: problems with sparse subgradients. The most important functions of this type are piecewise linear. For optimization problems whose linear operators have uniform sparsity, we suggest a very efficient implementation of subgradient iterations, whose total cost depends logarithmically on the dimension. The technique is based on a recursive update of the results of matrix/vector products and of the values of symmetric functions. It works well, for example, for matrices with few nonzero diagonals and for max-type functions. We show that the updating technique can be efficiently coupled with the simplest subgradient methods: the unconstrained minimization method of Polyak and the constrained minimization scheme of Shor. Similar results can be obtained for a new nonsmooth random variant of a coordinate descent scheme. We discuss an extension of this technique to conic optimization problems.
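The core computational idea in the abstract — recursively updating matrix/vector products and max-type function values instead of recomputing them — can be sketched roughly as follows. This is a hypothetical illustration, not Nesterov's actual implementation: the names `MaxTree` and `apply_sparse_step` are invented here. The sketch maintains y = Ax under a sparse change of x by touching only the affected columns of A, and keeps the max-type value f(x) = max_i (Ax)_i in a segment tree so each touched entry of y costs O(log m) to refresh, rather than the O(mn) of recomputing Ax from scratch.

```python
import math

class MaxTree:
    """Segment tree over y: point updates and the global max both cost O(log m)."""
    def __init__(self, y):
        self.m = len(y)
        self.size = 1 << (self.m - 1).bit_length()  # leaves padded to a power of two
        self.tree = [-math.inf] * (2 * self.size)
        for i, v in enumerate(y):
            self.tree[self.size + i] = v
        for i in range(self.size - 1, 0, -1):
            self.tree[i] = max(self.tree[2 * i], self.tree[2 * i + 1])

    def update(self, i, v):
        # Overwrite leaf i with v, then repair the maxima on the path to the root.
        i += self.size
        self.tree[i] = v
        i //= 2
        while i:
            self.tree[i] = max(self.tree[2 * i], self.tree[2 * i + 1])
            i //= 2

    def max(self):
        return self.tree[1]

def apply_sparse_step(y, tree, cols, delta):
    """Apply x <- x + delta, where delta = {j: d_j} is sparse, keeping y = A x.

    cols[j] lists the nonzeros of column j of A as (row, value) pairs, so only
    the rows touched by the changed coordinates are recomputed.
    """
    for j, d in delta.items():
        for i, a_ij in cols[j]:
            y[i] += a_ij * d
            tree.update(i, y[i])
```

A segment tree is used instead of a heap because a sparse step may decrease arbitrary entries of y, and a plain binary heap does not support overwriting an arbitrary element in O(log m) without extra bookkeeping.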
Keywords: sparse subgradient problems; piecewise linear; nonzero diagonals; nonsmooth random variant
Source: VideoLectures.NET
Data collected: 2022-11-23: chenjy
Last reviewed: 2022-11-23: chenjy
Views: 30