Affiliation of Author(s):College of Computer Science and Technology / College of Artificial Intelligence / College of Software
Journal:IJCAI Int. Joint Conf. Artif. Intell. (Proceedings of the International Joint Conference on Artificial Intelligence)
Abstract:On many video websites, the recommendation is implemented as a prediction problem of video-user pairs, where the videos are represented by text features extracted from the metadata. However, the metadata is manually annotated by users and is usually missing for online videos. To train an effective recommender system with lower annotation cost, we propose an active learning approach to fully exploit the visual view of videos, while querying as few annotations as possible from the text view. On one hand, a joint model is proposed to learn the mapping from visual view to text view by simultaneously aligning the two views and minimizing the classification loss. On the other hand, a novel strategy based on prediction inconsistency and watching frequency is proposed to actively select the most important videos for metadata querying. Experiments on both classification datasets and real video recommendation tasks validate that the proposed approach can significantly reduce the annotation cost. © 2019 International Joint Conferences on Artificial Intelligence. All rights reserved.
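The abstract's query strategy combines prediction inconsistency between the visual and text views with watching frequency. The paper's exact scoring formula is not given here, so the sketch below is a hypothetical illustration: inconsistency is taken as the L1 distance between the two views' class-probability vectors, watching frequency is normalized, and the two are combined multiplicatively; the function name and the combination rule are assumptions, not the authors' method.

```python
import numpy as np

def select_queries(p_visual, p_text, watch_freq, k):
    """Rank unlabeled videos for metadata querying (illustrative sketch).

    p_visual, p_text : (n, c) class-probability arrays from the two views
    watch_freq       : (n,) watching frequencies of the videos
    k                : number of videos to query
    """
    # Prediction inconsistency: L1 distance between the two views' outputs
    inconsistency = np.abs(p_visual - p_text).sum(axis=1)
    # Normalize watching frequency to [0, 1]
    freq = watch_freq / watch_freq.max()
    # Assumed multiplicative combination of the two criteria
    score = inconsistency * freq
    # Indices of the k highest-scoring videos, most important first
    return np.argsort(score)[::-1][:k]
```

A video whose two views disagree strongly and which is watched often would be queried first under this scoring.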
ISSN No.:1045-0823
Translation or Not:no
Date of Publication:2019-01-01
Co-author:Cai, Jia-Jia; Tang, Jun; Chen, Qing-Guo; Hu, Yao; Wang, Xiaobo
Corresponding Author:Sheng-Jun Huang
Sheng-Jun Huang (黄圣君)
Gender:Male
Education Level:Nanjing University
Alma Mater:Nanjing University
Paper Publications
Multi-view active learning for video recommendation
Date of Publication:2019-01-01