Learning without Concentration |
Course URL: | http://videolectures.net/colt2014_mendelson_learning/ |
Lecturer: | Shahar Mendelson |
Institution: | The Australian National University |
Date: | 2014-07-15 |
Language: | English |
Course description: | We obtain sharp bounds on the convergence rate of Empirical Risk Minimization performed in a convex class and with respect to the squared loss, without any boundedness assumptions on class members or on the target. Rather than resorting to a concentration-based argument, the method relies on a 'small-ball' assumption and thus holds for heavy-tailed sampling and heavy-tailed targets. Moreover, the resulting estimates scale correctly with the 'noise level' of the problem. When applied to the classical, bounded scenario, the method always improves the known estimates. |
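The estimator studied in the lecture can be illustrated with a minimal sketch: Empirical Risk Minimization with the squared loss over the convex class of linear predictors, which reduces to ordinary least squares. The data model below (Student-t noise as a stand-in for a heavy-tailed target) is a hypothetical choice for illustration only and is not taken from the lecture.

```python
import numpy as np

# Illustrative sketch: ERM with the squared loss over the convex class of
# linear predictors {x -> <w, x>}. The heavy-tailed noise below is a
# hypothetical choice to echo the "heavy-tailed target" setting.

rng = np.random.default_rng(0)
n, d = 500, 5
X = rng.standard_normal((n, d))           # i.i.d. sample points
w_true = np.arange(1.0, d + 1.0)          # hypothetical target coefficients
noise = rng.standard_t(df=3, size=n)      # heavy-tailed noise (df=3: finite variance, infinite higher moments)
y = X @ w_true + noise                    # heavy-tailed target Y

# ERM over linear predictors with squared loss:
#   w_hat = argmin_w (1/n) * sum_i (<w, x_i> - y_i)^2
# which is exactly ordinary least squares.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

empirical_risk = np.mean((X @ w_hat - y) ** 2)
print(w_hat)
print(empirical_risk)
```

Even with the heavy-tailed noise, the minimizer recovers the target coefficients to within sampling error, which is the kind of behavior the lecture's bounds quantify without boundedness assumptions.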
Keywords: | Risk minimization |
Source: | 视频讲座网 |
Data collected: | 2020-11-10:zyk |
Last reviewed: | 2020-11-10:zyk |
Views: | 43 |