Learning Scale Free Networks by Reweighted L1 regularization
Course URL: http://videolectures.net/aistats2011_liu_learning/
Lecturer: Qiang Liu
Institution: University of California
Date: 2011-05-06
Language: English
Course description: Methods for L1-type regularization have been widely used in Gaussian graphical model selection tasks to encourage sparse structures. However, often we would like to include more structural information than mere sparsity. In this work, we focus on learning so-called "scale-free" models, a common feature that appears in many real-world networks. We replace the L1 regularization with a power law regularization and optimize the objective function by a sequence of iteratively reweighted L1 regularization problems, where the regularization coefficients of nodes with high degree are reduced, encouraging the appearance of hubs with high degree. Our method can be easily adapted to improve any existing L1-based methods, such as graphical lasso, neighborhood selection, and JSRM when the underlying networks are believed to be scale free or have dominating hubs. We demonstrate in simulation that our method significantly outperforms a baseline L1 method at learning scale-free networks and hub networks, and also illustrate its behavior on gene expression data.
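The description above can be illustrated with a minimal sketch, not the lecture's exact algorithm: reweighted L1 neighborhood selection, where each node's weight is set to the reciprocal of its current L1 "degree" (w_i = 1/(d_i + eps)), so high-degree nodes receive smaller penalties in the next round. The function name, parameter values, and the edge penalty form w_i + w_j are illustrative assumptions; the weighted lasso is implemented by rescaling columns of a standard lasso.

```python
import numpy as np
from sklearn.linear_model import Lasso

def reweighted_neighborhood_selection(X, lam=0.05, eps=0.1, n_iter=3):
    """Illustrative sketch of reweighted L1 neighborhood selection.

    Assumptions (not from the source): per-node weights w_i = 1/(d_i + eps)
    with d_i the L1 degree of node i, and an edge-level penalty w_i + w_j.
    """
    n, p = X.shape
    B = np.zeros((p, p))  # B[i, j]: coefficient of node j when regressing node i
    w = np.ones(p)        # start with uniform weights (plain L1)
    for _ in range(n_iter):
        for i in range(p):
            idx = [j for j in range(p) if j != i]
            pen = w[i] + w[idx]             # penalty weight for each edge (i, j)
            Xs = X[:, idx] / pen            # weighted lasso via column rescaling
            fit = Lasso(alpha=lam, max_iter=5000).fit(Xs, X[:, i])
            B[i, idx] = fit.coef_ / pen     # undo the rescaling
        d = np.abs(B).sum(axis=1)           # L1 "degree" of each node
        w = 1.0 / (d + eps)                 # hubs get smaller penalties next round
    return B, w
```

On synthetic data with one hub node driving all other variables, the hub ends up with the largest L1 degree and therefore the smallest penalty weight, which is the mechanism the description attributes to the method.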
Keywords: reweighted L1 regularization; scale-free networks
Source: VideoLectures.NET
Last reviewed: 2021-01-30:nkq
Views: 52