Support vector machine loss with l1 penalty |
Course URL: | http://videolectures.net/nc04_geer_svmlp/ |
Lecturer: | Sara van de Geer |
Institution: | ETH Zürich |
Date: | 2007-02-25 |
Language: | English |
Abstract: | We consider an i.i.d. sample from (X, Y), where X is a feature and Y a binary label with values +1 or -1. We regress Y on X using a high-dimensional linear approximation and support vector machine loss with an l1 penalty on the regression coefficients. This procedure does not depend on the (unknown) noise level or on the (unknown) sparseness of approximations of the Bayes rule, yet its prediction error is smaller for smaller noise levels and/or sparser approximations. Thus, it adapts to unknown properties of the underlying distribution. In an example, we show that, up to terms logarithmic in the sample size, the procedure achieves minimax rates for the excess risk. |
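The estimator described in the abstract (hinge-type loss with a lasso penalty on high-dimensional linear coefficients) can be sketched as follows. This is a minimal illustration, not the lecture's own code: the synthetic data, the parameter values, and the use of scikit-learn's `LinearSVC` are all assumptions, and `LinearSVC` with `penalty="l1"` uses the squared hinge loss, a close stand-in for the plain hinge loss discussed in the talk.

```python
# Illustrative sketch (assumptions: scikit-learn, synthetic sparse data):
# an l1-penalized linear SVM, in the spirit of the estimator in the abstract.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n, p, s = 200, 50, 3                 # sample size, dimension, sparsity
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 1.0                        # sparse "true" coefficient vector
y = np.sign(X @ beta + 0.1 * rng.standard_normal(n))  # labels in {-1, +1}

# l1 penalty on the coefficients. Note: this solver requires the squared
# hinge loss and dual=False when penalty="l1" is used.
clf = LinearSVC(penalty="l1", loss="squared_hinge", dual=False, C=0.1)
clf.fit(X, y)

n_nonzero = int(np.sum(np.abs(clf.coef_) > 1e-8))
print("nonzero coefficients:", n_nonzero, "of", p)
```

The l1 penalty drives most coefficients to exactly zero, which is how the procedure can exploit sparse approximations of the Bayes rule without knowing the sparsity in advance.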
Keywords: | i.i.d.; Bayes rule; sample size |
Source: | VideoLectures.NET |
Last reviewed: | 2021-02-03:nkq |
Views: | 73 |