Neural Networks |
|
Course URL: | http://videolectures.net/deeplearning2016_larochelle_neural_netwo... |
Lecturer: | Hugo Larochelle |
Institution: | Twitter |
Date: | 2016-08-23 |
Language: | English |
Course description: | In this lecture, I will cover the basic concepts behind feedforward neural networks. The talk will be split into 2 parts. In the first part, I'll cover forward propagation and backpropagation in neural networks. Specifically, I'll discuss the parameterization of feedforward nets, the most common types of units, the capacity of neural networks and how to compute the gradients of the training loss for classification with neural networks. In the second part, I'll discuss the final components necessary to train neural networks by gradient descent and then discuss the more recent ideas that are now commonly used for training deep neural networks. I will thus present different variants of gradient descent algorithms, dropout, batch normalization and unsupervised pretraining. |
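The description mentions forward propagation, backpropagation, and training by gradient descent for classification. As a rough illustration of those three steps (this is a minimal NumPy sketch with hypothetical layer sizes and toy data, not code from the lecture):

import numpy as np

rng = np.random.default_rng(0)

# Toy data (hypothetical): 8 examples, 4 input features, 3 classes.
X = rng.normal(size=(8, 4))
y = rng.integers(0, 3, size=8)

# Parameters of a one-hidden-layer feedforward net (5 hidden units).
W1 = rng.normal(scale=0.1, size=(4, 5)); b1 = np.zeros(5)
W2 = rng.normal(scale=0.1, size=(5, 3)); b2 = np.zeros(3)

# Forward propagation: affine -> ReLU -> affine -> softmax.
a1 = X @ W1 + b1
h1 = np.maximum(a1, 0.0)                       # hidden ReLU activations
logits = h1 @ W2 + b2
logits -= logits.max(axis=1, keepdims=True)    # numerical stability
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Cross-entropy training loss for classification.
loss = -np.log(probs[np.arange(len(y)), y]).mean()

# Backpropagation: gradients of the loss w.r.t. each parameter.
dlogits = probs.copy()
dlogits[np.arange(len(y)), y] -= 1.0
dlogits /= len(y)
dW2 = h1.T @ dlogits
db2 = dlogits.sum(axis=0)
dh1 = dlogits @ W2.T
da1 = dh1 * (a1 > 0)                           # ReLU derivative
dW1 = X.T @ da1
db1 = da1.sum(axis=0)

# One gradient descent update (learning rate chosen arbitrarily).
lr = 0.1
W1 -= lr * dW1; b1 -= lr * db1
W2 -= lr * dW2; b2 -= lr * db2
print(f"loss before update: {loss:.4f}")

The lecture's second part covers refinements of this basic loop (gradient descent variants, dropout, batch normalization, unsupervised pretraining), which are not shown in the sketch above.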
Keywords: | feedforward neural networks; deep neural networks; dropout |
Source: | VideoLectures.NET |
Data collected: | 2021-06-03:liyy |
Last reviewed: | 2021-06-03:liyy |
Views: | 56 |