AdaBoost is Universally Consistent
Course URL: http://videolectures.net/mlss06tw_bartlett_auc/
Lecturer: Peter L. Bartlett
Institution: University of California, Berkeley
Date: 2007-02-25
Language: English
Course description: We consider the risk, or probability of error, of the classifier produced by AdaBoost, and in particular the stopping strategy to be used to ensure universal consistency. (A classification method is universally consistent if the risk of the classifiers it produces approaches the Bayes risk, the minimal risk, as the sample size grows.) Several related algorithms, regularized versions of AdaBoost, have been shown to be universally consistent, but AdaBoost's universal consistency has not been established. Jiang has demonstrated that, for each probability distribution satisfying certain smoothness conditions, there is a stopping time t_n for sample size n, so that if AdaBoost is stopped after t_n iterations, its risk approaches the Bayes risk for that distribution. Our main result is that if AdaBoost is stopped after n^(1-ε) iterations, it is universally consistent, where n is the sample size and ε ∈ (0, 1).
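To make the stopping schedule concrete, the following is a minimal Python sketch (not code from the lecture) of AdaBoost on decision stumps, run for t_n = floor(n^(1-ε)) rounds in the spirit of the abstract's stopping rule. The stump learner, the toy data, and all names (stump_predict, fit_stump, adaboost, eps) are illustrative assumptions, not the authors' implementation.

import numpy as np

def stump_predict(X, feature, threshold, polarity):
    # Decision stump: predict +1 on one side of the threshold, -1 on the other.
    return polarity * np.where(X[:, feature] <= threshold, 1.0, -1.0)

def fit_stump(X, y, w):
    # Exhaustive search for the stump minimizing weighted 0-1 error.
    best, best_err = None, np.inf
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            for pol in (1.0, -1.0):
                err = np.sum(w[stump_predict(X, f, thr, pol) != y])
                if err < best_err:
                    best_err, best = err, (f, thr, pol)
    return best, best_err

def adaboost(X, y, eps=0.5):
    # Illustrative stopping rule: t_n = floor(n^(1 - eps)), eps in (0, 1),
    # matching the schedule described in the abstract.
    n = len(y)
    t_n = max(1, int(n ** (1.0 - eps)))
    w = np.full(n, 1.0 / n)                       # uniform initial example weights
    ensemble = []                                 # (vote weight, stump) pairs
    for _ in range(t_n):
        stump, err = fit_stump(X, y, w)
        err = min(max(err, 1e-12), 1.0 - 1e-12)   # keep the log well-defined
        alpha = 0.5 * np.log((1.0 - err) / err)   # standard AdaBoost vote weight
        w *= np.exp(-alpha * y * stump_predict(X, *stump))
        w /= w.sum()                              # renormalize the distribution
        ensemble.append((alpha, stump))
    def classify(X_new):
        score = sum(a * stump_predict(X_new, *s) for a, s in ensemble)
        return np.where(score >= 0, 1.0, -1.0)
    return classify

# Toy usage: n = 500 and eps = 0.5 gives t_n = floor(500^0.5) = 22 boosting rounds.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
clf = adaboost(X, y, eps=0.5)
print("training error:", np.mean(clf(X) != y))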
Keywords: classifier; Bayes risk; iteration
Source: VideoLectures.NET
Last reviewed: 2019-07-23: cwx
Views: 39