

Multiple Kernel Learning and the SMO Algorithm
Course URL: http://videolectures.net/nips2010_varma_mkl/
Lecturer: Manik Varma
Institution: Microsoft
Date: 2011-03-25
Language: English
Course description: Our objective is to train p-norm Multiple Kernel Learning (MKL) and, more generally, linear MKL regularised by the Bregman divergence, using the Sequential Minimal Optimization (SMO) algorithm. The SMO algorithm is simple, easy to implement and adapt, and scales efficiently to large problems. As a result, it has gained widespread acceptance, and SVMs are routinely trained using SMO in diverse real-world applications. Training using SMO has been a long-standing goal in MKL for the very same reasons. Unfortunately, the standard MKL dual is not differentiable, and therefore cannot be optimised using SMO-style coordinate ascent. In this paper, we demonstrate that linear MKL regularised with the p-norm squared, or with certain Bregman divergences, can indeed be trained using SMO. The resulting algorithm retains both simplicity and efficiency and is significantly faster than state-of-the-art specialised p-norm MKL solvers. We show that we can train on a hundred thousand kernels in approximately seven minutes and on fifty thousand points in less than half an hour on a single core.
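
As context for the abstract, here is a minimal sketch of the dual that makes SMO applicable, assuming a hinge-loss MKL primal whose kernel weights d_k are penalised by the squared p-norm (λ/2)(Σ_k d_k^p)^{2/p}; the notation H_k = Y K_k Y and the exact constants follow the standard derivation and should be checked against the paper itself.

% Dual of squared p-norm MKL (hinge loss assumed; H_k = Y K_k Y is the
% k-th kernel matrix with labels absorbed; q is the conjugate exponent
% of p, i.e. 1/p + 1/q = 1; the equality constraint appears only if a
% bias term is kept in the primal):
\max_{0 \le \alpha \le C,\; y^{\top}\alpha = 0} \quad
  \mathbf{1}^{\top}\alpha
  \;-\; \frac{1}{8\lambda}
  \Bigl( \sum_{k=1}^{n} \bigl( \alpha^{\top} H_k \alpha \bigr)^{q} \Bigr)^{2/q}

For p > 1 the conjugate exponent q is finite, so this objective is differentiable in α; this restores the smoothness that the standard (p = 1) MKL dual lacks and is what permits SMO-style pairwise coordinate ascent.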
Keywords: Sequential Minimal Optimization; Multiple Kernel Learning; divergence regularisation
Source: 视频讲座网 (videolectures.net)
Last reviewed: 2020-06-19: cxin
Views: 71