Mistake bounds and risk bounds for on-line learning algorithms
Course URL: http://videolectures.net/mcslw04_bianchi_mbrbl/
Lecturer: Nicolò Cesa-Bianchi
Institution: University of Milan
Date: 2007-02-25
Language: English
Abstract: In statistical learning theory, risk bounds are typically obtained via the manipulation of suprema of empirical processes measuring the largest deviation of the empirical risk from the true risk in a class of models. In this talk we describe the alternative approach of deriving risk bounds for the ensemble of hypotheses obtained by running an arbitrary learning algorithm in an on-line fashion. This allows us to replace the uniform large deviation argument with a simpler argument based on the analysis of the empirical process engendered by the on-line learner. The large deviations of such empirical processes are easily controlled by a single application of Bernstein's inequality for martingales, and the resulting risk bounds exhibit strong data-dependence.
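The key tool mentioned in the abstract, Bernstein's inequality for martingales (often stated as Freedman's inequality), admits the following standard textbook form; constants vary slightly across sources, and this sketch is not the talk's specific statement:

```latex
% Bernstein's (Freedman's) inequality for martingales, standard form.
% Let X_1, ..., X_n be a martingale difference sequence with |X_t| <= M
% almost surely, and let V_n = \sum_t E[X_t^2 | F_{t-1}] denote the
% predictable quadratic variation.
\[
\Pr\!\left( \sum_{t=1}^{n} X_t \ge u \;\text{ and }\; V_n \le v \right)
\;\le\; \exp\!\left( -\,\frac{u^2}{2\bigl(v + Mu/3\bigr)} \right).
\]
% In the on-line-to-batch argument sketched in the abstract, X_t is the
% gap between the conditional risk of the hypothesis h_{t-1} held at
% round t and its observed loss on the fresh example Z_t, so a single
% application of the inequality relates the average risk of the
% ensemble (h_0, ..., h_{n-1}) to the on-line cumulative loss:
\[
\frac{1}{n}\sum_{t=1}^{n} \mathrm{risk}(h_{t-1})
\;\le\; \frac{1}{n}\sum_{t=1}^{n} \ell\bigl(h_{t-1}, Z_t\bigr)
\;+\; (\text{deviation term controlled by Freedman's inequality}).
\]
```

Because the variance term $V_n$ is data-dependent, the resulting bound inherits the strong data-dependence noted in the abstract.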
Keywords: risk bounds; data dependence; running arbitrary learning algorithms in an on-line fashion
Source: VideoLectures.NET
Last reviewed: 2019-05-16 (cjy)