Compressing Neural Networks with the Hashing Trick
Course URL: http://videolectures.net/icml2015_weinberger_neural_networks/
Lecturer: Kilian Q. Weinberger
Institution: Cornell University
Date: 2015-12-05
Language: English
Description: As deep nets are increasingly used in applications suited for mobile devices, a fundamental dilemma becomes apparent: the trend in deep learning is to grow models to absorb ever-increasing data set sizes; however, mobile devices are designed with very little memory and cannot store such large models. We present a novel network architecture, HashedNets, that exploits inherent redundancy in neural networks to achieve drastic reductions in model sizes. HashedNets uses a low-cost hash function to randomly group connection weights into hash buckets, and all connections within the same hash bucket share a single parameter value. These parameters are tuned to adjust to the HashedNets weight sharing architecture with standard backprop during training. Our hashing procedure introduces no additional memory overhead, and we demonstrate on several benchmark data sets that HashedNets shrink the storage requirements of neural networks substantially while mostly preserving generalization performance.
Keywords: deep networks; neural networks; machine learning
Source: VideoLectures.NET
Data collected: 2023-12-14:wujk
Last reviewed: 2023-12-14:wujk
Views: 23