On Primal and Dual Sparsity of Markov Networks
Course URL: http://videolectures.net/icml09_zhu_pdsm/
Lecturer: Jun Zhu
Institution: Tsinghua University
Date: 2009-09-17
Language: English
Course description: Sparsity is a desirable property in high dimensional learning. The $\ell_1$-norm regularization can lead to primal sparsity, while max-margin methods achieve dual sparsity; but achieving both in a single structured prediction model remains difficult. This paper presents an $\ell_1$-norm max-margin Markov network ($\ell_1$-M$^3$N), which enjoys both primal and dual sparsity, and analyzes its connections to the Laplace max-margin Markov network (LapM$^3$N), which inherits the dual sparsity of max-margin models but is pseudo-primal sparse. We show that $\ell_1$-M$^3$N is an extreme case of LapM$^3$N when the regularization constant is infinity. We also show an equivalence between $\ell_1$-M$^3$N and an adaptive M$^3$N, from which we develop a robust EM-style algorithm for $\ell_1$-M$^3$N. We demonstrate the advantages of the simultaneously (pseudo-) primal and dual sparse models over the ones which enjoy either primal or dual sparsity on both synthetic and real data sets.
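
Schematically, the $\ell_1$-M$^3$N objective described in the abstract replaces the $\ell_2$ regularizer of a standard max-margin Markov network with an $\ell_1$ penalty. The notation below ($\mathbf{f}$ for the joint feature map, $\Delta$ for the structured loss, $\lambda$ for the regularization constant) follows common M$^3$N conventions and is assumed here rather than taken from the paper's exact formulation:

$$\min_{\mathbf{w}}\;\lambda\,\|\mathbf{w}\|_1 \;+\; \sum_i \max_{\mathbf{y}}\Bigl[\Delta(\mathbf{y}_i,\mathbf{y}) - \mathbf{w}^\top\bigl(\mathbf{f}(\mathbf{x}_i,\mathbf{y}_i)-\mathbf{f}(\mathbf{x}_i,\mathbf{y})\bigr)\Bigr]$$

The $\ell_1$ penalty drives feature weights to exactly zero (primal sparsity), while the max over outputs means the solution is determined only by margin-violating examples (dual sparsity).

The two notions of sparsity can also be illustrated on a flat binary problem rather than the structured setting of the talk. A minimal scikit-learn sketch, assuming an $\ell_1$-penalized linear SVM for the primal side and a standard max-margin SVM for the dual side:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.svm import SVC, LinearSVC

    # Toy data: 200 examples, 50 features, only 5 of them informative.
    X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
                               n_redundant=0, random_state=0)

    # Primal sparsity: the l1 penalty drives most feature weights to exactly zero.
    primal = LinearSVC(penalty='l1', loss='squared_hinge', dual=False, C=0.1)
    primal.fit(X, y)
    print('nonzero weights:', int(np.sum(primal.coef_ != 0)), 'of', primal.coef_.size)

    # Dual sparsity: only the support vectors carry nonzero dual variables
    # in a max-margin model.
    dual = SVC(kernel='linear', C=1.0).fit(X, y)
    print('support vectors:', dual.support_.size, 'of', X.shape[0])

On this toy data most of the 50 weights come out exactly zero and only a fraction of the 200 points are support vectors; the paper's contribution is obtaining both effects simultaneously in a single structured prediction model.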
Keywords: sparsity; high-dimensional learning; $\ell_1$-norm regularization
Source: VideoLectures.NET