


Theoretical neuroscience and deep learning theory
Course URL: http://videolectures.net/deeplearning2016_ganguli_theoretical_neu...  
Lecturer: Surya Ganguli
Institution: Stanford University
Date: 2016-08-23
Language: English

Course abstract: Both neuroscience and machine learning are experiencing a renaissance in which fundamental technological changes are driving qualitatively new phases of conceptual progress. In neuroscience, new methods for probing and perturbing multi-neuronal dynamics during behavior have led to the ability to create complex neural network models for the emergence of behavior from the brain. In machine learning, new methods and computing infrastructure for training neural networks have led to the creation of deep neural networks capable of solving complex computational problems. These advances in each field are laying the groundwork for a new alliance between neuroscience and machine learning. A key dividend of this alliance would be the genesis of new unified theories underlying neural learning dynamics, expressive power, generalization capability, and interpretability of both biological and artificial networks. Ideally, such theories could yield scientific insight into the operation of biological and artificial neural networks, as well as provide engineering design principles for the creation of new artificial neural networks. Here we outline a roadmap for this new alliance, and discuss several vignettes from the beginnings of such an alliance, including how neural network learning dynamics can model infant semantic learning, how dynamically critical weight initializations can lead to rapid training, and how the expressive power of deep neural networks can have its origins in the theory of chaos. We also speculate on how several elements of neurobiological reality, as yet not extensively employed by neural network practitioners, could aid in the design of future artificial neural networks. Such elements include structured neural network architectures motivated by the canonical cortical microcircuit, nested neural loops with a diversity of time scales, and complex synapses with rich internal dynamics.
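The abstract mentions that "dynamically critical weight initializations can lead to rapid training." One common way to place a deep network at such a critical point is orthogonal initialization, which gives every singular value of each weight matrix the same magnitude so that signals neither explode nor vanish across layers. The sketch below is illustrative only (the function name and parameters are our own, not from the lecture), and shows a minimal NumPy construction of an orthogonal weight matrix via QR decomposition:

```python
import numpy as np

def orthogonal_init(n_out, n_in, gain=1.0, seed=None):
    """Sample an (n_out, n_in) weight matrix with orthonormal rows or columns.

    Every singular value of the result equals `gain`, so a deep stack of
    such layers preserves signal norms at initialization -- one concrete
    instance of a "dynamically critical" starting point.

    Illustrative sketch; `orthogonal_init` is a hypothetical helper,
    not an API from the lecture.
    """
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((max(n_out, n_in), min(n_out, n_in)))
    q, r = np.linalg.qr(a)          # q has orthonormal columns
    q *= np.sign(np.diag(r))        # fix signs for a uniform distribution
    if n_out < n_in:
        q = q.T                     # wide matrix: orthonormal rows instead
    return gain * q

W = orthogonal_init(64, 64, seed=0)
s = np.linalg.svd(W, compute_uv=False)
print(s.min(), s.max())  # all singular values equal 1
```

Compared with i.i.d. Gaussian initialization, whose singular-value spectrum spreads over a wide range in deep products, the flat spectrum here keeps gradients well-conditioned and is one reason such initializations can train faster.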
Keywords: neuroscience; machine learning; neurobiology
Source: VideoLectures.NET
Data collected: 2021-06-17:liyy
Last reviewed: 2021-06-17:liyy
Views: 33