Safe Learning: bridging the gap between Bayes, MDL and statistical learning theory via empirical convexity
Course URL: http://videolectures.net/colt2011_grunwald_gap/
Lecturer: Peter Grünwald
Institution: Centrum Wiskunde & Informatica (CWI)
Date: 2011-08-02
Language: English
Course description: We extend Bayesian MAP and Minimum Description Length (MDL) learning by testing whether the data can be substantially more compressed by a mixture of the MDL/MAP distribution with another element of the model, and adjusting the learning rate if this is the case. While standard Bayes and MDL can fail to converge if the model is wrong, the resulting "safe" estimator continues to achieve good rates with wrong models. Moreover, when applied to the classification and regression models considered in statistical learning theory, the approach achieves optimal rates under, e.g., Tsybakov's conditions, and reveals new situations in which we can penalize by (-log prior)/n rather than √((-log prior)/n).
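The compression test in the description can be made concrete. Below is a minimal sketch, not the talk's actual algorithm: it assumes a finite model class given as a list of densities with prior weights, picks the MDL/MAP element by minimizing total codelength, and then checks whether a 50/50 mixture of that element with any other element compresses the data by more than a fixed margin (in nats); if so, the learning rate is reduced. The function name `safe_learning_rate`, the fixed mixture weight, the `margin` threshold, and the halving rule are all illustrative assumptions, not taken from the talk.

```python
import numpy as np
from scipy.stats import norm

def safe_learning_rate(data, model, prior, eta=1.0, margin=1.0):
    """Sketch of the compression test (illustrative assumptions only).

    model : list of density functions p(x) over the data
    prior : prior weights for the model elements
    """
    # Codelength of each element in nats: -log P(data | p) - log prior(p).
    loglik = np.array([np.sum(np.log(p(data))) for p in model])
    lengths = -loglik - np.log(prior)
    map_idx = int(np.argmin(lengths))

    # Test: does mixing the MDL/MAP element with another model element
    # compress the data by more than `margin` nats?
    for j in range(len(model)):
        if j == map_idx:
            continue
        mix_loglik = np.sum(np.log(0.5 * model[map_idx](data)
                                   + 0.5 * model[j](data)))
        if -mix_loglik < -loglik[map_idx] - margin:
            # Empirical non-convexity detected at the MAP point:
            # shrink the learning rate (halving is a placeholder rule).
            eta *= 0.5
            break
    return map_idx, eta

# Classic misspecification example: the model holds N(-1,1) and N(1,1),
# but the data come from N(0,1); the 50/50 mixture then compresses the
# data substantially better than either element, so eta is reduced.
model = [lambda x: norm.pdf(x, -1.0, 1.0), lambda x: norm.pdf(x, 1.0, 1.0)]
prior = np.array([0.5, 0.5])
data = np.random.default_rng(0).normal(0.0, 1.0, size=500)
print(safe_learning_rate(data, model, prior))
```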
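The two penalties contrasted at the end of the description can also be written out. A sketch in notation not taken from the talk (empirical risk R̂_n, prior π, sample size n, constant C): the standard statistical-learning penalty scales with the square root, while in the new situations the talk identifies, the faster linear-in-1/n penalty suffices.

```latex
% Standard "slow-rate" penalized estimator (square-root penalty):
\hat{f}_{\mathrm{slow}} = \arg\min_{f}\; \hat{R}_n(f) + C\sqrt{\frac{-\log \pi(f)}{n}}
% "Fast-rate" penalty available in the situations identified in the talk:
\hat{f}_{\mathrm{fast}} = \arg\min_{f}\; \hat{R}_n(f) + C\,\frac{-\log \pi(f)}{n}
```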
Keywords: Bayesian learning; computer science; machine learning; statistical learning
Source: VideoLectures.NET
Last reviewed: 2020-06-29 (heyf)
Views: 45