


Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
Course URL: http://videolectures.net/iclr2016_han_deep_compression/
Lecturer: Song Han
Institution: Stanford University
Date: 2016-05-27
Language: English
Course description: Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce "deep compression", a three-stage pipeline: pruning, trained quantization, and Huffman coding, which work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing; finally, we apply Huffman coding. After the first two steps we retrain the network to fine-tune the remaining connections and the quantized centroids. Pruning reduces the number of connections by 9x to 13x; quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x, from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU, and mobile GPU, the compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.
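The three stages described above can be sketched on a toy weight vector. This is a minimal, self-contained illustration, not the paper's implementation: the pruning threshold (0.8), the pure-Python 1-D k-means, and the toy Gaussian weights are all assumptions chosen for brevity; the paper prunes iteratively with retraining and fine-tunes the shared centroids by gradient descent.

```python
import heapq
import random
from collections import Counter

random.seed(0)

# Toy "layer": 1000 float32-equivalent weights from a normal distribution.
weights = [random.gauss(0.0, 1.0) for _ in range(1000)]

# --- Stage 1: magnitude pruning (threshold is illustrative) ---------------
THRESHOLD = 0.8
pruned = [w for w in weights if abs(w) >= THRESHOLD]

# --- Stage 2: weight sharing via 1-D k-means ------------------------------
# Surviving weights are clustered into 2^5 = 32 centroids, so each weight
# is stored as a 5-bit cluster index plus a small shared codebook.
def kmeans_1d(xs, k, iters=20):
    lo, hi = min(xs), max(xs)
    # Linear initialization over the weight range.
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for x in xs:
            j = min(range(k), key=lambda j: abs(x - centroids[j]))
            buckets[j].append(x)
        centroids = [sum(b) / len(b) if b else centroids[j]
                     for j, b in enumerate(buckets)]
    return centroids

K = 32  # 5-bit codebook
centroids = kmeans_1d(pruned, K)
indices = [min(range(K), key=lambda j: abs(w - centroids[j])) for w in pruned]

# --- Stage 3: Huffman-code the cluster indices ----------------------------
# Skewed index distributions compress below the fixed 5 bits per index.
def huffman_lengths(symbols):
    freq = Counter(symbols)
    heap = [(n, i, [s]) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    length = {s: 0 for s in freq}
    while len(heap) > 1:
        n1, _, s1 = heapq.heappop(heap)
        n2, i, s2 = heapq.heappop(heap)
        for s in s1 + s2:
            length[s] += 1  # merged symbols sink one level deeper
        heapq.heappush(heap, (n1 + n2, i, s1 + s2))
    return length

lengths = huffman_lengths(indices)
huffman_bits = sum(lengths[s] for s in indices)

dense_bits = 32 * len(weights)   # original float32 storage
fixed_bits = 5 * len(pruned)     # 5-bit indices without Huffman coding
print(f"kept {len(pruned)}/{len(weights)} weights after pruning")
print(f"fixed 5-bit codes: {fixed_bits} bits; Huffman: {huffman_bits} bits")
print(f"compression vs. dense float32: {dense_bits / huffman_bits:.1f}x")
```

The index-overhead bookkeeping for the sparse positions is omitted here; the sketch only shows why the three stages compound: pruning shrinks the symbol stream, quantization shrinks each symbol, and Huffman coding exploits the non-uniform symbol frequencies left over.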
Keywords: deep compression; deep neural networks; pruning
Source: VideoLectures.NET
Data collected: 2020-11-27:yxd
Last reviewed: 2020-11-27:yxd
Views: 53