Deep Learning for Contrasting Meaning Representation and Composition
Course URL: http://videolectures.net/kdd2017_zhu_contrasting_meaning/
Lecturer: Xiaodan Zhu
Institution: National Research Council Canada
Date: 2017-12-01
Language: English
Abstract: Contrasting meaning is a basic aspect of semantics. Sentiment can be regarded as a special case of it. In this talk, we discuss our deep learning approaches to modeling two basic problems: learning representations for contrasting meaning at the lexical level, and performing semantic composition to obtain representations for larger text spans, e.g., phrases and sentences. We first present our neural network models for learning distributed representations that encode contrasting meaning among words. We discuss how the models utilize both distributional statistics and lexical resources to achieve state-of-the-art performance on the benchmark dataset of GRE “most contrasting word” questions. Building on lexical representations, the next basic problem is to learn representations for larger text spans through semantic composition. In the second half of the talk, we focus on deep learning models that learn composition functions by considering both compositional and non-compositional factors. These models can effectively obtain representations for phrases and sentences, and they demonstrate state-of-the-art performance on different sentiment analysis benchmarks, including the Stanford Sentiment Treebank and the datasets used in SemEval Sentiment Analysis in Twitter.
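The semantic composition described above, building phrase and sentence representations bottom-up from word vectors via a learned composition function, can be sketched minimally as follows. This is not the talk's actual architecture: the embeddings, the dimension `d`, and the single-matrix tanh composition are illustrative assumptions in the spirit of tree-structured composition models.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # illustrative embedding dimension (assumption)

# Toy word embeddings. In the talk these would be trained to encode
# contrasting meaning; here they are random placeholders.
vocab = {w: rng.normal(size=d) for w in ["not", "very", "good", "movie"]}

# Hypothetical learned parameters of one composition function:
# parent = tanh(W [left; right] + b)
W = rng.normal(scale=0.1, size=(d, 2 * d))
b = np.zeros(d)

def compose(left, right):
    """Combine two child vectors into a parent phrase vector."""
    return np.tanh(W @ np.concatenate([left, right]) + b)

# Compose "not good", then "(not good) movie", bottom-up along a parse tree.
phrase = compose(vocab["not"], vocab["good"])
sentence = compose(phrase, vocab["movie"])
print(sentence.shape)  # -> (4,)
```

The key design point is that the same function `compose` is applied recursively at every tree node, so a single set of parameters produces a fixed-size vector for any span; models that also handle non-compositional factors (as discussed in the talk) would depart from this uniform rule.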
Keywords: contrasting meaning; deep learning; semantic composition
Source: VideoLectures.NET
Data collected: 2023-03-26 (chenxin01)
Last reviewed: 2023-05-22 (chenxin01)
Views: 26