

1 Billion Instances, 1 Thousand Machines and 3.5 Hours
Course URL: http://videolectures.net/nipsworkshops09_walker_bitmh/
Lecturer: Daniel Walker
Institution: Brigham Young University
Date: 2010-01-19
Language: English
Course description: Training conditional maximum entropy models on massive data sets requires significant computational resources, but by distributing the computation, training time can be significantly reduced. Recent theoretical results have demonstrated that conditional maximum entropy models trained by mixing the weights of independently trained models converge at the same rate as traditional distributed schemes, but train significantly faster. This efficiency is achieved primarily by reducing network communication costs, a cost that is not usually considered but is actually quite crucial.
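The description above refers to training independent models on separate machines and then mixing their weights, which cuts network communication to a single exchange at the end. A minimal sketch of that idea, using binary logistic regression (a simple conditional maximum entropy model) with data shards standing in for machines; all function names and parameters here are illustrative, not from the lecture:

```python
# Sketch of the "mixture weight" idea: train a conditional maximum entropy
# (logistic regression) model independently on each data shard, then average
# the learned weight vectors. Illustrative only, not the lecture's code.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_shard(X, y, epochs=200, lr=0.1):
    """Gradient-descent training of binary logistic regression on one shard."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w -= lr * grad
    return w

# Synthetic data drawn from a known logistic model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])
X = rng.normal(size=(3000, 3))
y = (sigmoid(X @ true_w) > rng.random(3000)).astype(float)

# Split the data across "machines", train each shard independently (no
# communication during training), then mix the weights once at the end.
shards = np.array_split(np.arange(3000), 10)
weights = [train_shard(X[idx], y[idx]) for idx in shards]
w_mix = np.mean(weights, axis=0)  # uniform mixture of shard weights

acc = np.mean((sigmoid(X @ w_mix) > 0.5) == (y == 1))
print(f"mixture-weight accuracy: {acc:.3f}")
```

Contrast this with a traditional distributed scheme, where every gradient step requires gathering partial gradients from all machines over the network; here each shard trains in isolation and only the final weight vectors are communicated.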
Keywords: maximum entropy models; distributed computing; reducing network communication costs
Source: VideoLectures.NET
Last reviewed: 2020-06-01:wuyq
Views: 36