Complex Inference in Neural Circuits with Probabilistic Population Codes and Topic Models
Course URL: http://videolectures.net/machine_beck_models/
Lecturer: Jeff Beck
Institution: University of Rochester
Date: 2013-01-14
Language: English
Course description: Recent experiments have demonstrated that humans and animals typically reason probabilistically about their environment. This ability requires a neural code that represents probability distributions and neural circuits that are capable of implementing the operations of probabilistic inference. The proposed probabilistic population coding (PPC) framework provides a statistically efficient neural representation of probability distributions that is both broadly consistent with physiological measurements and capable of implementing some of the basic operations of probabilistic inference in a biologically plausible way. However, these experiments and the corresponding neural models have largely focused on simple (tractable) probabilistic computations such as cue combination, coordinate transformations, and decision making. As a result, it remains unclear how to generalize this framework to more complex probabilistic computations. Here we address this shortcoming by showing that a very general approximate inference algorithm known as Variational Bayesian Expectation Maximization can be implemented within the linear PPC framework. We apply this approach to a generic problem faced by any given layer of cortex, namely the identification of latent causes of complex mixtures of spikes. We identify a formal equivalence between this spike pattern demixing problem and topic models used for document classification, in particular Latent Dirichlet Allocation (LDA). We then construct a neural network implementation of variational inference and learning for LDA that utilizes a linear PPC. This network relies critically on two non-linear operations: divisive normalization and super-linear facilitation, both of which are ubiquitously observed in neural circuits. We also demonstrate how online learning can be achieved using a variation of Hebb's rule and describe an extension of this work which allows us to deal with time-varying and correlated latent causes. (A minimal sketch of the underlying LDA update follows the metadata below.)
Keywords: probabilistic population coding; neural codes; divisive normalization; super-linear facilitation
Source: VideoLectures.NET
Last reviewed: 2019-05-15:cjy
Views: 66
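
The abstract's central claim is that the two nonlinearities it names map onto steps of LDA's variational update: per-word topic responsibilities are normalized divisively across topics, and the topic signal enters through a super-linear (exponential) nonlinearity. The sketch below shows the standard mean-field E-step for LDA (Blei et al., 2003) that these operations correspond to. It is a minimal illustration under those assumptions, not the paper's PPC network; the function name `lda_e_step` and its argument names are hypothetical.

```python
import numpy as np
from scipy.special import digamma

def lda_e_step(counts, beta, alpha, n_iters=50):
    """Mean-field variational E-step for one LDA 'document'.

    counts : (V,) word counts -- here read as a spike pattern over V inputs
    beta   : (K, V) per-topic (per-latent-cause) emission probabilities
    alpha  : (K,) Dirichlet prior over topic proportions

    Returns the variational Dirichlet parameters gamma (K,) and the
    per-word topic responsibilities phi (K, V).
    """
    K, V = beta.shape
    gamma = alpha + counts.sum() / K                # uniform initialization
    for _ in range(n_iters):
        # Super-linear facilitation: exponentiate the expected log topic
        # weight. The constant digamma(gamma.sum()) term is dropped because
        # it cancels in the divisive normalization below.
        boost = np.exp(digamma(gamma))[:, None]     # (K, 1)
        phi = beta * boost                          # (K, V), unnormalized
        phi /= phi.sum(axis=0, keepdims=True)       # divisive normalization
        gamma = alpha + phi @ counts                # accumulate evidence
    return gamma, phi

# Toy usage with illustrative sizes: 3 latent causes, 10 input channels.
rng = np.random.default_rng(0)
beta = rng.dirichlet(np.ones(10), size=3)           # (K, V) emission probs
counts = rng.integers(0, 5, size=10).astype(float)  # one spike pattern
gamma, phi = lda_e_step(counts, beta, alpha=np.ones(3))
```

An online, Hebb-like M-step would accumulate outer products of responsibilities and observed counts into `beta`; the talk's contribution is realizing these updates with linear PPCs, which this sketch does not attempt.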