Abstract: Video description aims to automatically generate descriptive natural language for videos. Owing to its successful implementations and broad range of applications, many models based on Deep Neural Networks (DNNs) have been proposed by researchers. This paper takes inspiration from an image captioning model and develops an end-to-end video description model based on Long Short-Term Memory (LSTM). A single video feature is fed into the first unit of the LSTM decoder, and the subsequent words of the sentence are generated conditioned on the previously predicted words. Experimental results on two publicly available datasets demonstrate that the proposed model outperforms the baseline.
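As a rough illustration of the decoding scheme described above, the following PyTorch sketch feeds a single video feature to the first LSTM step and then generates words autoregressively from previously predicted tokens. All names, dimensions, and token ids here (`VideoCaptionDecoder`, `feat_dim`, `bos_id`, etc.) are hypothetical assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn

class VideoCaptionDecoder(nn.Module):
    """Minimal sketch: one video feature primes the LSTM, then words are
    generated one at a time, each conditioned on the previous prediction."""

    def __init__(self, feat_dim, embed_dim, hidden_dim, vocab_size):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, embed_dim)   # map video feature into the word-embedding space
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    @torch.no_grad()
    def generate(self, video_feat, bos_id, eos_id, max_len=20):
        # Step 0: the single video feature is the input to the first LSTM unit.
        x = self.feat_proj(video_feat).unsqueeze(1)       # (B, 1, embed_dim)
        _, state = self.lstm(x)

        prev = torch.full((video_feat.size(0),), bos_id,
                          dtype=torch.long, device=video_feat.device)
        words = []
        for _ in range(max_len):
            # Each subsequent word is predicted from the previously generated word.
            x = self.embed(prev).unsqueeze(1)             # (B, 1, embed_dim)
            h, state = self.lstm(x, state)
            prev = self.out(h.squeeze(1)).argmax(-1)      # greedy choice of next word
            words.append(prev)
            if (prev == eos_id).all():
                break
        return torch.stack(words, dim=1)                  # (B, T) token ids
```

Greedy decoding is used here only to keep the sketch short; beam search is a common alternative in practice.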