An Efficient Sparse Metric Learning in High-Dimensional Space via L1-Penalized Log-Determinant Regularization
Course URL: http://videolectures.net/icml09_qi_esml/
Lecturer: Guo-Jun Qi
Institution: University of Illinois
Date: 2009-09-17
Language: English
Abstract: This paper proposes an efficient sparse metric learning algorithm in high-dimensional space via an $\ell_1$-penalized log-determinant regularization. Compared to most existing distance metric learning algorithms, the proposed algorithm exploits the sparsity nature underlying the intrinsic high-dimensional feature space. This sparsity prior on the learned distance metric serves to regularize the complexity of the distance model, especially in the "less example number $p$ and high dimension $d$" setting. Theoretically, by analogy to the covariance estimation problem, we find that the proposed distance learning algorithm is consistent, converging to the target distance matrix (with at most $m$ nonzeros per row) at rate $\mathcal{O}\left(\sqrt{\left(m^2 \log d\right)/n}\right)$. Moreover, from the implementation perspective, this $\ell_1$-penalized log-determinant formulation can be efficiently optimized in a block coordinate descent fashion, which is much faster than the standard semi-definite programming widely adopted in many other advanced distance learning algorithms. We compare this algorithm with other state-of-the-art ones on various datasets, and competitive results are obtained.
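The $\ell_1$-penalized log-determinant program at the heart of the abstract has the same form as sparse inverse covariance (graphical lasso) estimation: minimize tr(S M) - log det M + alpha * ||M||_1 over positive definite matrices M, which block coordinate descent solves one row/column at a time. Below is a minimal Python sketch of that connection using scikit-learn's graphical_lasso solver as a stand-in; the constraint matrix S built from same-class difference vectors is a hypothetical surrogate for illustration, not the exact construction from the paper.

import numpy as np
from sklearn.covariance import graphical_lasso

rng = np.random.default_rng(0)
n, d = 40, 15                        # few examples n, high dimension d
X = rng.standard_normal((n, d))      # toy data
y = rng.integers(0, 3, size=n)       # class labels defining similar pairs

# Hypothetical surrogate S: average outer product of difference vectors
# over same-class pairs, plus a small ridge so S is strictly positive
# definite as the solver requires (an illustrative choice, not the paper's).
S = np.zeros((d, d))
count = 0
for i in range(n):
    for j in range(i + 1, n):
        if y[i] == y[j]:
            diff = X[i] - X[j]
            S += np.outer(diff, diff)
            count += 1
S = S / max(count, 1) + 1e-2 * np.eye(d)

# Solve  min_{M > 0}  tr(S M) - log det M + alpha * ||M||_1 (off-diagonal)
# by block coordinate descent (mode="cd"); the sparse "precision" matrix
# returned plays the role of the Mahalanobis matrix M.
_, M = graphical_lasso(S, alpha=0.2, mode="cd")
print("nonzeros per row of M:", (np.abs(M) > 1e-8).sum(axis=1))

In this sketch, shrinking tr(S M) pulls same-class points together while -log det M keeps M from collapsing, and the $\ell_1$ term drives the rows of M toward the at-most-$m$-nonzeros sparsity that the stated consistency rate assumes.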
Keywords: high-dimensional space; log-determinant; sparse metric
Source: VideoLectures.NET
Last reviewed: 2020-07-13 (yumf)
Views: 92