

Algorithms and hardness results for parallel large margin learning
Course URL: http://videolectures.net/nips2011_long_learning/
Lecturer: Phil Long
Institution: Sentient Technologies (USA)
Date: 2012-09-06
Language: English
Course description: We study the fundamental problem of learning an unknown large-margin halfspace in the context of parallel computation. Our main positive result is a parallel algorithm for learning a large-margin halfspace that is based on interior point methods from convex optimization and fast parallel algorithms for matrix computations. We show that this algorithm learns an unknown gamma-margin halfspace over n dimensions using poly(n, 1/gamma) processors and runs in time ~O(1/gamma) + O(log n). In contrast, naive parallel algorithms that learn a gamma-margin halfspace in time that depends polylogarithmically on n have Omega(1/gamma^2) runtime dependence on gamma. Our main negative result deals with boosting, which is a standard approach to learning large-margin halfspaces. We give an information-theoretic proof that in the original PAC framework, in which a weak learning algorithm is provided as an oracle that is called by the booster, boosting cannot be parallelized: the ability to call the weak learner multiple times in parallel within a single boosting stage does not reduce the overall number of successive boosting stages that are required.
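To make the Omega(1/gamma^2) baseline concrete, below is a minimal sketch (not the lecture's or paper's algorithm) of the classic sequential Perceptron on unit-norm examples with margin gamma: by the standard mistake bound it performs at most 1/gamma^2 updates, which is exactly the runtime dependence on gamma that the parallel interior-point approach above improves to roughly ~O(1/gamma). The function name and the synthetic-data usage are hypothetical illustrations, not from the source.

```python
import numpy as np

def perceptron_halfspace(X, y, gamma):
    """Classic sequential Perceptron on unit-norm examples.

    If some unit vector w* satisfies y_i * <w*, x_i> >= gamma for every
    example, the Perceptron makes at most 1/gamma^2 mistakes, so the number
    of updates (and the sequential running time) scales as O(1/gamma^2).
    """
    m, n = X.shape
    w = np.zeros(n)
    max_passes = int(np.ceil(1.0 / gamma**2)) + 1  # mistake bound + one clean pass
    for _ in range(max_passes):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * np.dot(w, xi) <= 0:  # misclassified (or on the boundary)
                w += yi * xi             # standard additive Perceptron update
                mistakes += 1
        if mistakes == 0:                # consistent with all examples: done
            return w
    return w

# Hypothetical usage: synthetic unit-norm data with margin >= 0.2
# around a random unit-norm target vector w_star.
rng = np.random.default_rng(0)
w_star = rng.normal(size=5)
w_star /= np.linalg.norm(w_star)
X = rng.normal(size=(500, 5))
X /= np.linalg.norm(X, axis=1, keepdims=True)
margins = X @ w_star
keep = np.abs(margins) >= 0.2            # discard small-margin examples
w_hat = perceptron_halfspace(X[keep], np.sign(margins[keep]), gamma=0.2)
```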
Keywords: ensemble methods; classification; computer science; machine learning; statistical learning
Source: VideoLectures.NET open course
Last edited: 2020-06-04 (dingaq)
Views: 42