Joint multi-modal representations for e-commerce catalog search driven by visual attributes |
|
Lecture URL: | https://videolectures.net/videos/kdd2016_saha_visual_attributes |
Lecturer: | Amrita Saha |
Venue: | KDD 2016 Workshop |
Date: | 2016-10-12 |
Language: | English |
Abstract: | In many visual domains (like fashion, furniture, etc.) the search for products on online platforms is highly driven by visual attributes. Conventional search requires that all items in the catalog be manually tagged with all possible attribute values, which is not scalable. In this paper we propose a novel paradigm for multi-modal catalog search via joint representations. The user provides a search query in natural language (e.g., "pink floral top") and the returned results are of a different modality (i.e., the set of images of pink floral tops). Specifically, we use a correlational autoencoder based model to learn a joint representation for an image and its corresponding description, such that the two representations are embedded in the same space and lie as close to each other as possible. These representations are learnt over a large curated fashion dataset of over 700 thousand images crawled from multiple fashion e-commerce portals. Our experimental results show that these representations are a viable alternative for searching large fashion catalogs without manual tagging. The same representations can also be used for visual search, image tagging, and query expansion. |
Keywords: | visual attributes; novel paradigm; fashion dataset |
Source: | VideoLectures.NET |
Data collected: | 2025-01-08:liyq |
Last reviewed: | 2025-01-08:liyq |
Views: | 10 |
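The abstract names the technique (a correlational autoencoder embedding images and descriptions in one space, queried by nearest-neighbour retrieval) but gives no details. The following is a minimal NumPy sketch of that core idea only: project both modalities into a shared space, pull paired embeddings together, and answer a text query by ranking images on cosine similarity. The dimensions, the linear encoders, and all variable names are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not taken from the paper).
d_img, d_txt, d_joint = 8, 5, 3

# Linear "encoders" projecting each modality into the shared joint space.
W_img = rng.normal(size=(d_img, d_joint))
W_txt = rng.normal(size=(d_txt, d_joint))

def embed(x, W):
    """Project a batch of modality features into the joint space."""
    return np.tanh(x @ W)

# Toy batch: 4 catalog images and their 4 matching text descriptions.
x_img = rng.normal(size=(4, d_img))
x_txt = rng.normal(size=(4, d_txt))

z_img = embed(x_img, W_img)
z_txt = embed(x_txt, W_txt)

# Correlational objective: paired image/text embeddings should coincide,
# so a natural-language query can be answered with nearby image vectors.
alignment_loss = np.mean(np.sum((z_img - z_txt) ** 2, axis=1))

def cosine(a, b):
    """Cosine similarity between two joint-space vectors."""
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

# Cross-modal search: embed a text query, rank catalog images by similarity.
query = z_txt[0]
scores = np.array([cosine(query, z) for z in z_img])
best = int(np.argmax(scores))
```

In a trained model, `alignment_loss` would be minimized (jointly with per-modality reconstruction terms) so that `best` recovers the image paired with the query; here the weights are random, so the sketch only illustrates the data flow.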