An l1 Regularization Framework for Optimal Rule Combination
Course URL: http://videolectures.net/ecmlpkdd09_han_l1rforc/
Lecturer: Yanjun Han
Institution: Chinese Academy of Sciences
Date: 2009-10-20
Language: English
Course description: In this paper, l1 regularization is introduced into relational learning to produce sparse rule combinations; in other words, the final rule set contains as few rules as possible. Furthermore, we design a rule complexity penalty to encourage rules with fewer literals. The resulting optimization problem has to be formulated in an infinite-dimensional space of Horn clauses $R_m$, each associated with its corresponding complexity $\mathcal{C}_m$. It is proved that if a locally optimal rule is generated at each iteration, the rule set finally obtained is globally optimal. The proposed meta-algorithm is applicable to any single-rule generator. We put forward two algorithms, namely $\ell_1$FOIL and $\ell_1$Progol. Empirical analysis is carried out on ten real-world tasks from bioinformatics and cheminformatics. The results demonstrate that our approach offers competitive prediction accuracy while remaining straightforward to interpret.
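As a reading aid only, one plausible form of the penalized objective described above is the following (the loss $L$, rule weights $\beta_m$, and trade-off parameter $\lambda$ are assumed notation, not taken from the lecture):

$$
\min_{\beta}\;\sum_{i=1}^{n} L\!\Big(y_i,\ \sum_{m}\beta_m R_m(x_i)\Big)\;+\;\lambda \sum_{m} \mathcal{C}_m\,\lvert\beta_m\rvert
$$

Here $R_m(x_i)\in\{0,1\}$ would indicate whether Horn clause $R_m$ covers example $x_i$; weighting the $\ell_1$ penalty by the clause complexity $\mathcal{C}_m$ (for instance, its number of literals) favours both few rules and short rules, and a single-rule generator such as FOIL or Progol would supply one locally optimal clause per iteration.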
Keywords: l1 regularization; relational learning; optimal rules
Source: VideoLectures.NET
Last reviewed: 2019-03-27: lxf
Views: 68