


Deep Learning with Limited Numerical Precision
Course URL: http://videolectures.net/icml2015_gupta_numerical_precision/
Lecturer: Suyog Gupta
Institution: IBM
Date: 2015-12-05
Language: English
Course description: Training of large-scale deep neural networks is often constrained by the available computational resources. We study the effect of limited-precision data representation and computation on neural network training. Within the context of low-precision fixed-point computations, we observe that the rounding scheme plays a crucial role in determining the network's behavior during training. Our results show that deep networks can be trained using only 16-bit wide fixed-point number representations when using stochastic rounding, and incur little to no degradation in classification accuracy. We also demonstrate an energy-efficient hardware accelerator that implements low-precision fixed-point arithmetic with stochastic rounding.
Keywords: deep neural networks; limited precision; energy-efficient hardware accelerator
Source: VideoLectures.NET
Data collected: 2022-11-29: chenjy
Last reviewed: 2022-11-29: chenjy
Views: 34
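
The stochastic rounding scheme highlighted in the description rounds a value up with probability equal to its fractional remainder on the fixed-point grid, so quantization is unbiased in expectation. A minimal sketch in NumPy, assuming an illustrative signed 16-bit word with 12 fractional bits (the function name and the word/fraction split are assumptions, not the exact format from the talk):

```python
import numpy as np

def stochastic_round_fixed_point(x, frac_bits=12, word_bits=16, seed=None):
    """Quantize x to a signed fixed-point grid with stochastic rounding.

    A value between two grid points is rounded up with probability equal
    to its fractional remainder, so the result is unbiased in expectation.
    The <word_bits, frac_bits> split is an illustrative choice.
    """
    rng = np.random.default_rng(seed)
    scale = 2.0 ** frac_bits
    scaled = np.asarray(x, dtype=np.float64) * scale
    lower = np.floor(scaled)
    # Round up with probability equal to the distance from the lower grid point.
    round_up = rng.random(lower.shape) < (scaled - lower)
    q = lower + round_up
    # Saturate to the representable range of a signed word.
    lo, hi = -(2 ** (word_bits - 1)), 2 ** (word_bits - 1) - 1
    return np.clip(q, lo, hi) / scale

# 0.3 lies between two multiples of 2**-12; each call picks one of them.
vals = stochastic_round_fixed_point(np.full(10000, 0.3), frac_bits=12, seed=0)
print(np.unique(vals))   # the two neighboring grid points around 0.3
print(vals.mean())       # close to 0.3 on average
```

Averaged over many samples the quantization error cancels, which is the property the lecture credits for making 16-bit training viable where round-to-nearest fails.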