

Stochastic Methods for L1 Regularized Loss Minimization
Course URL: http://videolectures.net/icml09_tewari_smrl/
Lecturer: Ambuj Tewari
Institution: Toyota Technological Institute at Chicago
Date: 2009-08-26
Language: English
Course abstract: We describe and analyze two stochastic methods for $\ell_1$ regularized loss minimization problems, such as the Lasso. The first method updates the weight of a single feature at each iteration, while the second method updates the entire weight vector but uses only a single training example at each iteration. In both methods, the feature/example is chosen uniformly at random. Our theoretical runtime analysis suggests that the stochastic methods should outperform state-of-the-art deterministic approaches, including their deterministic counterparts, when the size of the problem is large. We demonstrate the advantage of the stochastic methods in experiments on synthetic and natural data sets.
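The first of the two methods, which updates a single randomly chosen feature per iteration, can be sketched as a generic stochastic coordinate descent for the Lasso with squared loss. This is an illustrative sketch, not the lecture's exact algorithm or its analyzed variant; the function name `scd_lasso` and its parameters are assumptions made for the example.

```python
import numpy as np

def soft_threshold(u, t):
    """Soft-thresholding operator, the proximal map of t * |.|."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def scd_lasso(X, y, lam, n_iters=10000, seed=0):
    """Illustrative stochastic coordinate descent for the Lasso:
        min_w (1/(2m)) ||Xw - y||^2 + lam * ||w||_1.
    Each iteration picks one feature j uniformly at random and
    updates w_j by exact coordinate minimization (soft-thresholding)."""
    rng = np.random.default_rng(seed)
    m, d = X.shape
    w = np.zeros(d)
    r = X @ w - y                    # residual, maintained incrementally
    a = (X ** 2).sum(axis=0) / m     # per-coordinate curvature ||x_j||^2 / m
    for _ in range(n_iters):
        j = rng.integers(d)          # feature chosen uniformly at random
        if a[j] == 0.0:
            continue                 # all-zero feature: nothing to update
        g = X[:, j] @ r / m          # partial derivative of the smooth loss
        w_new = soft_threshold(w[j] - g / a[j], lam / a[j])
        r += (w_new - w[j]) * X[:, j]  # O(m) residual update
        w[j] = w_new
    return w
```

Keeping the residual `r = Xw - y` up to date makes each iteration cost O(m) rather than O(md), which is what makes the per-feature scheme attractive when the number of features is large.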
Keywords: regularization; weight vector; stochastic methods
Source: VideoLectures.NET
Last reviewed: 2019-04-24 (lxf)
Views: 37