


Multi-Task Learning via Matrix Regularization
Course URL: http://videolectures.net/smls09_argyriou_mtlvmr/
Lecturer: Andreas Argyriou
Institution: École Centrale Paris
Date: 2009-05-06
Language: English
Course description: We present a method for learning representations shared across multiple tasks. Multi-task learning has recently become increasingly important in applications such as collaborative filtering, object detection, database integration, and signal processing. Our method addresses the problem of learning a low-dimensional subspace on which the task regression vectors lie. This non-convex problem can be relaxed to a trace (nuclear) norm regularization problem, which we solve with an alternating minimization algorithm. This algorithmic scheme can be shown to always converge to an optimal solution. Moreover, the method can easily be extended to use nonlinear feature maps as inputs via reproducing kernels. This is a consequence of optimality conditions known as representer theorems, for which we show a necessary and sufficient condition. Finally, we consider matrix regularization with more general spectral functions, such as the Schatten Lp norms, instead of the trace norm. We show that our algorithm and results apply in these cases as well.
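The alternating minimization scheme the abstract describes can be sketched as follows. This is an illustrative reconstruction, not the lecture's exact algorithm: it minimizes Σ_t ||X_t w_t − y_t||² + γ·tr(WᵀD⁻¹W) by alternating between per-task regularized regressions (given a shared positive-semidefinite matrix D) and the closed-form update D = (WWᵀ)^½ / tr((WWᵀ)^½); the ε-smoothing, function names, and fixed iteration count are assumptions for the sketch.

```python
import numpy as np

def psd_sqrt(M):
    """Symmetric square root of a positive-semidefinite matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(M)
    vals = np.clip(vals, 0.0, None)
    return (vecs * np.sqrt(vals)) @ vecs.T

def mtl_trace_norm(Xs, ys, gamma=1.0, eps=1e-6, n_iter=50):
    """Trace-norm regularized multi-task regression by alternating minimization.

    Xs: list of (n_t, d) design matrices, one per task.
    ys: list of (n_t,) target vectors.
    Returns W, a (d, T) matrix whose columns are the task regression vectors.
    """
    d = Xs[0].shape[1]
    T = len(Xs)
    D = np.eye(d) / d          # start from the uniform shared structure
    W = np.zeros((d, T))
    for _ in range(n_iter):
        # Step 1: with D fixed, each task is an independent regularized regression.
        D_inv = np.linalg.inv(D + eps * np.eye(d))
        for t in range(T):
            A = Xs[t].T @ Xs[t] + gamma * D_inv
            W[:, t] = np.linalg.solve(A, Xs[t].T @ ys[t])
        # Step 2: with W fixed, D has the closed-form minimizer
        # (W W^T)^{1/2} / tr((W W^T)^{1/2}); eps keeps it invertible.
        S = psd_sqrt(W @ W.T + eps * np.eye(d))
        D = S / np.trace(S)
    return W
```

For instance, with a few synthetic tasks whose true weight vectors lie in a common low-dimensional subspace, the learned D concentrates its trace on that subspace while each column of W fits its own task.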
Keywords: task sharing; low-dimensional subspace; optimal solution
Source: VideoLectures.NET
Last reviewed: 2019-10-17: cwx
Views: 99