平均算法与分布式优化

Averaging algorithms and distributed optimization
课程网址: http://videolectures.net/nipsworkshops2010_tsitsiklis_aad/  
主讲教师: John N. Tsitsiklis
开课单位: 麻省理工学院
开课时间: 2011-01-13
课程语种: 英语
中文简介:
在分布式平均与一致性算法中,处理器通过与其邻居的值求局部平均来交换并更新某些值(或"估计"、"意见")。在合适的条件下,此类算法收敛于一致(每个处理器最终持有相同的值),甚至平均一致(在各处理器初始值的平均值上达成共识)。这类算法已被提议作为分布式优化方法的子程序,用于在主算法运行时合并不同处理器的结果。我们概述了平均算法的若干应用,重点是类似梯度的优化方法。随后,我们重点介绍一些新旧结果,着重于收敛率。最后,我们讨论了若干未解决的问题。
课程简介: In distributed averaging and consensus algorithms, processors exchange and update certain values (or "estimates", or "opinions") by forming a local average with the values of their neighbors. Under suitable conditions, such algorithms converge to consensus (every processor ends up holding the same value) or even average-consensus (consensus is achieved on the average of the initial values held by the processors). Algorithms of this type have been proposed as a subroutine of distributed optimization methods, used to combine the results of different processors while a master algorithm is running. We overview a few applications of averaging algorithms, with a focus on gradient-like optimization methods. We then proceed to highlight some results, old and new, with a focus on convergence rates. We finally discuss some open problems.
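The averaging iteration described in the abstract can be sketched in a few lines. The following is an illustrative example, not material from the lecture itself: each node repeatedly replaces its value with a weighted average of its neighbors' values. The Metropolis weight rule used here is one standard way (an assumption for this sketch) to obtain a doubly stochastic weight matrix on an undirected graph, which guarantees convergence to average-consensus, i.e. every node ends up at the mean of the initial values.

```python
# Sketch of distributed averaging (average-consensus) on a fixed
# undirected graph. Illustrative only; node values, graph, and the
# Metropolis weight rule are assumptions, not from the lecture.

def metropolis_weights(adj):
    """Doubly stochastic weights via the Metropolis rule:
    W[i][j] = 1 / (1 + max(deg_i, deg_j)) for each edge (i, j),
    with the diagonal chosen so each row sums to 1."""
    n = len(adj)
    deg = [sum(row) for row in adj]
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if adj[i][j]:
                W[i][j] = 1.0 / (1 + max(deg[i], deg[j]))
        W[i][i] = 1.0 - sum(W[i])  # self-weight absorbs the remainder
    return W

def consensus_step(W, x):
    """One round: every node forms a local weighted average
    of its own value and its neighbors' values."""
    n = len(x)
    return [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]

# 4-node path graph: 0 - 1 - 2 - 3
adj = [[0, 1, 0, 0],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [0, 0, 1, 0]]
x = [1.0, 2.0, 3.0, 10.0]   # initial values; their average is 4.0
W = metropolis_weights(adj)
for _ in range(200):
    x = consensus_step(W, x)
# all entries of x are now close to 4.0, the average of the initial values
```

In the gradient-like distributed optimization methods the abstract refers to, a consensus step of this kind is typically combined with a local gradient update (each node averages with its neighbors and then takes a small step along its own local gradient), so the averaging subroutine above is the building block being analyzed.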
关 键 词: 分布式平均; 一致性算法; 收敛率
课程来源: 视频讲座网
最后编审: 2019-09-07:lxf
阅读次数: 47