

Averaging algorithms and distributed optimization
Course URL: http://videolectures.net/nipsworkshops2010_tsitsiklis_aad/
Lecturer: John N. Tsitsiklis
Institution: Massachusetts Institute of Technology
Date: 2011-01-13
Language: English

Course description: In distributed averaging and consensus algorithms, processors exchange and update certain values (or "estimates", or "opinions") by forming a local average with the values of their neighbors. Under suitable conditions, such algorithms converge to consensus (every processor ends up holding the same value) or even average-consensus (consensus is achieved on the average of the initial values held by the processors). Algorithms of this type have been proposed as a subroutine of distributed optimization methods, used to combine the results of different processors while a master algorithm is running. We overview a few applications of averaging algorithms, with a focus on gradient-like optimization methods. We then proceed to highlight some results, old and new, with a focus on convergence rates. We finally discuss some open problems.
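The local-averaging update the description refers to can be sketched as follows. This is a minimal illustration, not material from the lecture: the 4-node path graph, the Metropolis weight rule, and the initial values are all assumptions chosen for the example. Because the resulting weight matrix is symmetric and doubly stochastic, each step preserves the average of the values, and on a connected graph the iteration converges to average-consensus.

```python
# Sketch of synchronous distributed averaging (consensus) on a fixed
# undirected graph. The graph, weights, and initial values below are
# illustrative assumptions, not taken from the lecture.

# Undirected graph: node -> set of neighbors (a 4-node path here).
neighbors = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
degree = {i: len(n) for i, n in neighbors.items()}

def metropolis_step(x):
    """One averaging step x <- W x with Metropolis weights.

    Off-diagonal weights are w_ij = 1 / (1 + max(deg_i, deg_j)) for
    neighbors i, j; the diagonal absorbs the rest. W is symmetric and
    doubly stochastic, so the average of the entries is preserved and,
    on a connected graph, every entry converges to that average.
    """
    new_x = {}
    for i in x:
        s = x[i]
        for j in neighbors[i]:
            w = 1.0 / (1 + max(degree[i], degree[j]))
            s += w * (x[j] - x[i])
        new_x[i] = s
    return new_x

x = {0: 4.0, 1: 0.0, 2: 2.0, 3: 10.0}  # initial values; average = 4.0
for _ in range(200):
    x = metropolis_step(x)

print(x)  # each entry is close to 4.0, the average of the initial values
```

In the gradient-like distributed optimization methods the abstract mentions, each processor typically interleaves an averaging step like this with a local gradient step on its own objective, so the averaging acts as the subroutine that combines the processors' intermediate results.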
Keywords: processors; averaging algorithms
Source: VideoLectures.NET
Data collected: 2020-12-03: zyk
Last reviewed: 2020-12-03: zyk
Views: 60