

Deep Natural Language Understanding
Course URL: http://videolectures.net/deeplearning2016_cho_language_understand...  
Lecturer: Kyunghyun Cho
Institution: New York University
Date: 2016-08-23
Language: English
Course description: In this lecture, I start with a claim that natural language understanding can largely be approached as building a better language model and explain three widely adopted approaches to language modelling. They are n-gram language modelling, feedforward neural language modelling and recurrent language modelling. As I develop from the traditional n-gram language model toward the recurrent language model, I discuss the concepts of data sparsity and generalization via continuous space representations. I then continue on to the recent development of a novel paradigm in machine translation based on recurrent language modelling, often called neural machine translation. The lecture concludes with three new opportunities in natural language processing/understanding made possible by the introduction of continuous space representations in deep neural networks.
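
A minimal illustrative sketch (not taken from the lecture) of the data-sparsity problem mentioned in the description: a bigram language model estimated by relative frequencies assigns probability zero to any word pair never seen in training, which is the limitation that continuous space representations in neural language models are meant to address. The toy corpus and function name below are assumptions made purely for illustration.

from collections import Counter

# Toy corpus, made up for illustration only.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

unigram_counts = Counter(corpus)
bigram_counts = Counter(zip(corpus, corpus[1:]))

def bigram_prob(prev, word):
    # Maximum-likelihood estimate of P(word | prev): count of the bigram
    # divided by the count of the preceding word.
    if unigram_counts[prev] == 0:
        return 0.0
    return bigram_counts[(prev, word)] / unigram_counts[prev]

print(bigram_prob("the", "cat"))   # seen bigram   -> 0.25
print(bigram_prob("the", "bird"))  # unseen bigram -> 0.0 (data sparsity)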
Keywords: natural language; understanding; processing
Source: 视频讲座网 (VideoLectures.NET)
Data collected: 2020-11-27:yxd
Last reviewed: 2020-11-27:yxd
Views: 34