Personal homepage: http://faculty.nuaa.edu.cn/huangsj/zh_CN/index.htm
Affiliation: College of Computer Science and Technology / College of Artificial Intelligence / College of Software
Published in: IJCAI (International Joint Conference on Artificial Intelligence)
Abstract: On many video websites, recommendation is formulated as a prediction problem over video-user pairs, where the videos are represented by text features extracted from the metadata. However, the metadata is manually annotated by users and is usually missing for online videos. To train an effective recommender system at lower annotation cost, we propose an active learning approach that fully exploits the visual view of videos while querying as few annotations as possible from the text view. On one hand, a joint model is proposed to learn the mapping from the visual view to the text view by simultaneously aligning the two views and minimizing the classification loss. On the other hand, a novel strategy based on prediction inconsistency and watching frequency is proposed to actively select the most important videos for metadata querying. Experiments on both classification datasets and real video recommendation tasks validate that the proposed approach can significantly reduce the annotation cost. © 2019 International Joint Conferences on Artificial Intelligence. All rights reserved.
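The selection strategy sketched in the abstract can be illustrated with a minimal example. This is an assumption-laden sketch, not the paper's actual method: the exact inconsistency measure and how it is combined with watching frequency are not given here, so a simple L1 distance between the two views' predicted label distributions, weighted multiplicatively by frequency, stands in for them.

```python
# Hypothetical sketch of the active selection step: score each unlabeled
# video by the inconsistency between its visual-view and text-view
# predictions, weight by watching frequency, and query the top-k videos
# for metadata annotation. The concrete formulas are assumptions.

def inconsistency(p_visual, p_text):
    """L1 distance between the two views' predicted label distributions."""
    return sum(abs(a - b) for a, b in zip(p_visual, p_text))

def select_queries(candidates, k):
    """candidates: list of (video_id, p_visual, p_text, watch_freq)."""
    scored = [
        (inconsistency(pv, pt) * freq, vid)
        for vid, pv, pt, freq in candidates
    ]
    scored.sort(reverse=True)          # highest score = most worth querying
    return [vid for _, vid in scored[:k]]

videos = [
    ("v1", [0.9, 0.1], [0.8, 0.2], 100),  # views agree, popular
    ("v2", [0.9, 0.1], [0.2, 0.8], 500),  # views disagree, popular
    ("v3", [0.6, 0.4], [0.4, 0.6], 10),   # views disagree, rarely watched
]
print(select_queries(videos, 2))  # → ['v2', 'v1']
```

The frequency weighting directs the annotation budget toward videos whose metadata, once obtained, affects the most users.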
ISSN: 1045-0823
Translated work: No
Date of publication: 2019-01-01
Co-authors: Cai, Jia-Jia; Tang, Jun; Chen, Qing-Guo; Hu, Yao; Wang, Xiaobo
Corresponding author: Sheng-Jun Huang (黄圣君)