Bayesian entropy estimation for binary spike train data using parametric prior knowledge
Course URL: http://videolectures.net/machine_archer_bayesian_entropy/
Lecturer: Evan W. Archer
Institution: University of Texas
Date: 2014-11-07
Language: English
Course description: Shannon's entropy is a basic quantity in information theory, and a fundamental building block for the analysis of neural codes. Estimating the entropy of a discrete distribution from samples is an important and difficult problem that has received considerable attention in statistics and theoretical neuroscience. However, neural responses have characteristic statistical structure that generic entropy estimators fail to exploit. For example, existing Bayesian entropy estimators make the naive assumption that all spike words are equally likely a priori, which makes for an inefficient allocation of prior probability mass in cases where spikes are sparse. Here we develop Bayesian estimators for the entropy of binary spike trains using priors designed to flexibly exploit the statistical structure of simultaneously-recorded spike responses. We define two prior distributions over spike words using mixtures of Dirichlet distributions centered on simple parametric models. The parametric model captures high-level statistical features of the data, such as the average spike count in a spike word, which allows the posterior over entropy to concentrate more rapidly than with standard estimators (e.g., in cases where the probability of spiking differs strongly from 0.5). Conversely, the Dirichlet distributions assign prior mass to distributions far from the parametric model, ensuring consistent estimates for arbitrary distributions. We devise a compact representation of the data and prior that allows for computationally efficient implementations of Bayesian least squares and empirical Bayes entropy estimators with large numbers of neurons. We apply these estimators to simulated and real neural data and show that they substantially outperform traditional methods.
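The estimators described above build on the posterior mean of Shannon entropy under a Dirichlet prior over spike-word probabilities. The minimal Python sketch below illustrates only that fixed-concentration building block (a single symmetric Dirichlet, not the lecture's mixture-of-Dirichlets priors centered on a parametric model); the function names and simulated data are hypothetical, chosen for illustration.

```python
import numpy as np
from scipy.special import digamma

def dirichlet_entropy_posterior_mean(counts, alpha=1.0):
    """Posterior mean of Shannon entropy (in bits) under a symmetric Dirichlet(alpha)
    prior, given counts over all K possible spike words (closed-form expression):
        E[H | n] = psi(A + 1) - sum_k (n_k + alpha) / A * psi(n_k + alpha + 1),
    where A = N + K * alpha and psi is the digamma function.
    """
    counts = np.asarray(counts, dtype=float)
    a = counts + alpha                      # posterior Dirichlet parameters
    A = a.sum()                             # total posterior concentration
    h_nats = digamma(A + 1.0) - np.sum((a / A) * digamma(a + 1.0))
    return h_nats / np.log(2.0)             # convert nats -> bits

def plugin_entropy(counts):
    """Naive plug-in (maximum-likelihood) entropy estimate in bits, for comparison."""
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Hypothetical example: 3 simultaneously recorded neurons, 200 time bins,
# sparse spiking, so 2**3 = 8 possible binary spike words.
rng = np.random.default_rng(0)
spikes = (rng.random((200, 3)) < 0.1).astype(int)
words = spikes @ (1 << np.arange(3))        # encode each binary word as an integer 0..7
counts = np.bincount(words, minlength=8)

print("plug-in estimate         :", plugin_entropy(counts))
print("Dirichlet posterior mean :", dirichlet_entropy_posterior_mean(counts, alpha=1.0))
```

As the abstract notes, the lecture's estimators replace the single fixed concentration used here with mixtures of Dirichlet distributions centered on simple parametric models of the spike words, so that prior mass is concentrated on sparse-spiking distributions rather than spread uniformly over all words.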
Keywords: neural codes; spike sparsity; computational efficiency
Source: VideoLectures.NET (视频讲座网)
Data collected: 2022-12-20: chenjy
Last reviewed: 2022-12-20: chenjy
Views: 12