
Bounding Excess Risk in Machine Learning
Course URL: http://videolectures.net/mlss09us_koltchinskii_berml/
Lecturer: Vladimir Koltchinskii
Institution: Georgia Institute of Technology
Date: 2009-07-30
Language: English
Course description: We will discuss a general approach to the problem of bounding the excess risk of learning algorithms based on empirical risk minimization (possibly penalized). This approach has been developed in recent years by several authors (among others: Massart; Bartlett, Bousquet and Mendelson; Koltchinskii). It is based on powerful concentration inequalities due to Talagrand, as well as on a variety of tools from empirical process theory (comparison inequalities, entropy and generic chaining bounds on Gaussian, empirical and Rademacher processes, etc.). It provides a way to obtain sharp excess risk bounds in a number of problems, such as regression, density estimation and classification, and for many different classes of learning methods (kernel machines, ensemble methods, sparse recovery). It also provides a general way to construct sharp data-dependent bounds on excess risk that can be used in model selection and adaptation problems.
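For context, the central object in the description can be written down in a few lines. The following is a minimal background sketch in standard empirical-process notation; the notation and the crude factor-4 bound are conventional textbook material assumed here, not taken from the lecture itself:

% Background sketch (standard notation, assumed; not from the lecture page):
% X_1, ..., X_n i.i.d. with distribution P; loss class \mathcal{F};
% true risk Pf = E f(X); empirical risk P_n f = (1/n) \sum_{i=1}^n f(X_i).
\[
  \hat f_n \in \operatorname*{arg\,min}_{f \in \mathcal{F}} P_n f,
  \qquad
  \mathcal{E}(\hat f_n) = P \hat f_n - \inf_{f \in \mathcal{F}} P f .
\]
% A crude global bound via symmetrization with Rademacher signs \varepsilon_i:
\[
  \mathbb{E}\,\mathcal{E}(\hat f_n)
  \;\le\; 2\, \mathbb{E} \sup_{f \in \mathcal{F}} \bigl| P_n f - P f \bigr|
  \;\le\; 4\, \mathbb{E} \sup_{f \in \mathcal{F}}
      \Bigl| \frac{1}{n} \sum_{i=1}^n \varepsilon_i f(X_i) \Bigr| .
\]
% The approach described above obtains sharper (localized) bounds, roughly by
% restricting the supremum to functions with small excess risk and controlling
% the resulting empirical process with Talagrand's concentration inequality.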
Keywords: risk minimization; excess risk bounds; density estimation; sparse recovery; model selection
Source: VideoLectures.NET
Last reviewed: 2020-05-29 by 吴雨秋 (volunteer course editor)
Views: 326