

Some results for the adaptive Lasso
Course URL: http://videolectures.net/smls09_geer_srftal/
Lecturer: Sara van de Geer
Institution: ETH Zürich
Lecture date: 2009-05-06
Language: English
Course description: We consider the high-dimensional linear regression model with n observations and p >> n variables. The adaptive Lasso uses least squares loss with a weighted l_1-penalty, where the weights are proportional to the inverse of an initial estimator of the coefficients. We show, for example, that if the initial estimator is obtained from the standard Lasso, then, under the restricted eigenvalue condition of Bickel et al. (2007), with large probability there will be no false positives, and the adaptive Lasso will detect all coefficients larger than a certain value c_n, provided the number of coefficients smaller than c_n is small. If we assume a (perhaps) stronger version of the restricted eigenvalue condition, the adaptive Lasso will in fact detect even smaller coefficients. In the limiting case, under the irrepresentable condition (Zhao and Yu (2006)), coefficients of order at least (log(p)/n)^{1/2} will be detected. These results can be obtained from an oracle inequality for the Lasso with general weights. We will present the results in a non-asymptotic form.
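
The following is a minimal sketch, in Python with scikit-learn, of the two-stage procedure described in the abstract: a standard Lasso fit gives the initial estimator, and the adaptive Lasso then solves a weighted l_1-penalized least squares problem with weights proportional to the inverse of that initial estimate. The weighted problem is solved here by rescaling columns, which works because substituting beta_j = gamma_j / w_j turns the weighted penalty sum_j w_j |beta_j| into an ordinary l_1 penalty on gamma. The penalty levels, simulated data, and variable names are illustrative assumptions, not part of the lecture.

```python
# A minimal sketch of a two-stage adaptive Lasso, assuming numpy and
# scikit-learn are available. Penalty levels and the simulated model
# are illustrative choices, not taken from the lecture.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Simulated high-dimensional data: n observations, p >> n variables,
# with only the first few coefficients nonzero.
n, p = 100, 500
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:5] = [3.0, -2.0, 1.5, 1.0, 0.5]
y = X @ beta_true + rng.standard_normal(n)

# Stage 1: initial estimator from the standard Lasso,
# with a penalty of order (log(p)/n)^{1/2}.
alpha_init = np.sqrt(np.log(p) / n)
beta_init = Lasso(alpha=alpha_init).fit(X, y).coef_

# Stage 2: weighted l_1 penalty with weights proportional to 1/|beta_init|.
# Variables with a zero initial estimate get an infinite weight and are
# excluded; the remaining columns are rescaled so an ordinary Lasso solves
# the weighted problem.
active = np.flatnonzero(beta_init != 0)
weights = 1.0 / np.abs(beta_init[active])
X_scaled = X[:, active] / weights            # column j divided by w_j

gamma = Lasso(alpha=alpha_init).fit(X_scaled, y).coef_

beta_adaptive = np.zeros(p)
beta_adaptive[active] = gamma / weights      # undo the rescaling

print("true support:    ", np.flatnonzero(beta_true))
print("adaptive support:", np.flatnonzero(beta_adaptive))
```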
Keywords: high-dimensional linear regression; adaptive Lasso; oracle inequality
Source: VideoLectures.NET
Last reviewed: 2019-09-21: cwx
Views: 102