Modern Convex Optimization Methods for Large-scale Empirical Risk Minimization
Course URL: http://videolectures.net/icml2015_schmidt_risk_minimization/
Lecturer: Mark Schmidt
Institution: Department of Computer Science, University of British Columbia
Date: 2015-12-05
Language: English
Course description: This tutorial reviews recent advances in convex optimization for training (linear) predictors via (regularized) empirical risk minimization. We exclusively focus on practically efficient methods which are also equipped with complexity bounds confirming the suitability of the algorithms for solving huge-dimensional problems (a very large number of examples or a very large number of features). The first part of the tutorial is dedicated to modern primal methods (belonging to the stochastic gradient descent variety), while the second part focuses on modern dual methods (belonging to the randomized coordinate ascent variety). While we make this distinction, there are very close links between the primal and dual methods, some of which will be highlighted. We shall also comment on mini-batch, parallel and distributed variants of the methods as this is an important consideration for applications involving big data.
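For reference, the (regularized) empirical risk minimization problem the tutorial addresses can be written in the standard form below; the notation is generic and not taken from the lecture slides:

$$\min_{w \in \mathbb{R}^d} \; \frac{1}{n} \sum_{i=1}^{n} \ell\big(w^{\top} x_i,\, y_i\big) \; + \; \lambda R(w),$$

where $\ell$ is a convex loss, $R$ is a convex regularizer, and $\lambda > 0$; "a very large number of examples" means large $n$, and "a very large number of features" means large $d$. The sketch below illustrates, under these assumptions, one representative from each algorithm family named in the description: a stochastic gradient descent (primal) update for L2-regularized logistic regression, and a randomized dual coordinate-ascent update in the style of SDCA for ridge regression, whose coordinate step has a closed form. All function and variable names are illustrative and do not come from the course materials.

```python
import numpy as np

def sgd_logistic(X, y, lam, epochs=10, step=0.1, rng=None):
    """Primal method: SGD on
    (1/n) * sum_i log(1 + exp(-y_i * w.x_i)) + (lam/2) * ||w||^2,
    with labels y_i in {-1, +1}."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(epochs * n):
        i = rng.integers(n)                           # sample one training example
        margin = y[i] * X[i].dot(w)
        g = -y[i] * X[i] / (1.0 + np.exp(margin))     # gradient of the logistic loss term
        w -= step / np.sqrt(t + 1.0) * (g + lam * w)  # decaying step size
    return w

def dual_coordinate_ascent_ridge(X, y, lam, epochs=10, rng=None):
    """Dual method: randomized coordinate ascent on the dual of
    (1/2n) * sum_i (w.x_i - y_i)^2 + (lam/2) * ||w||^2.
    The primal iterate is maintained as w = X^T alpha / (lam * n)."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    sq_norms = (X ** 2).sum(axis=1)
    for _ in range(epochs * n):
        i = rng.integers(n)                           # pick a random dual coordinate
        delta = (y[i] - X[i].dot(w) - alpha[i]) / (1.0 + sq_norms[i] / (lam * n))
        alpha[i] += delta                             # exact maximization along coordinate i
        w += delta / (lam * n) * X[i]                 # keep w consistent with alpha
    return w
```

Both routines touch a single example per iteration, which is what makes them suitable for problems with very many examples; the dual coordinate step is also the unit that the mini-batch and parallel variants mentioned in the description build on.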
Keywords: empirical risk; complexity bounds; stochastic gradient; dual methods
Source: VideoLectures.NET
Data collected: 2023-04-24 by chenxin01
Last reviewed: 2023-05-18 by chenxin01
Views: 28