Personal homepage: http://faculty.nuaa.edu.cn/yjb1/zh_CN/index.htm
Affiliation: College of Computer Science and Technology / College of Artificial Intelligence / College of Software
Published in: Lect. Notes Comput. Sci.
Abstract: Describing videos in human language is of vital importance in many applications, such as managing massive video collections online and providing descriptive video service (DVS) for blind people. To improve on existing video description frameworks, this paper presents an end-to-end deep learning model that combines Convolutional Neural Networks (CNNs) and Bidirectional Recurrent Neural Networks (BiRNNs) with a multimodal attention mechanism. First, the model produces richer video representations than comparable work, combining image, motion, and audio features. Second, a BiRNN encodes these features in both the forward and backward directions. Finally, an attention-based decoder translates the encoder's sequential outputs into a sequence of words. The model is evaluated on the Microsoft Research Video Description Corpus (MSVD). The results demonstrate the benefit of combining BiRNNs with a multimodal attention mechanism and show that the model outperforms other state-of-the-art methods on this dataset. © Springer Nature Switzerland AG 2018.
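The multimodal attention step described in the abstract can be sketched as follows: at each decoding step, the decoder state attends separately over the encoded image, motion, and audio features, and the per-modality context vectors are fused for word prediction. This is a minimal dependency-free sketch under assumptions not stated in the abstract (dot-product scoring and concatenation fusion; the paper's actual scoring and fusion functions may differ), not the authors' implementation.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attend(decoder_state, encoder_states):
    """Single-modality attention: weight encoder states by relevance
    to the current decoder state, then return their weighted sum.
    Dot-product scoring is an assumption for illustration."""
    weights = softmax([dot(decoder_state, h) for h in encoder_states])
    dim = len(encoder_states[0])
    context = [sum(w * h[i] for w, h in zip(weights, encoder_states))
               for i in range(dim)]
    return context

def multimodal_context(decoder_state, modality_states):
    """Run attention once per modality (e.g. image, motion, audio)
    and concatenate the resulting context vectors (fusion by
    concatenation is an assumption)."""
    return [c for states in modality_states
              for c in attend(decoder_state, states)]
```

In a full model, `multimodal_context` would be recomputed at every decoding step, so each generated word can focus on different frames and different modalities, which is what distinguishes this decoder from one conditioned on a single fixed video vector.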
ISSN: 0302-9743
Translation: No
Date published: 2018-01-01
Co-authors: Du, Xiaotong; 刘晖
Corresponding authors: Du, Xiaotong; 袁家斌