JOURNAL OF CHINA UNIVERSITIES OF POSTS AND TELECOM, 2016, Vol. 23, Issue (6): 1-7. DOI: 10.1016/S1005-8885(16)60063-8


Progressive framework for deep neural networks: from linear to non-linear


Received: 2016-09-08; Revised: 2016-12-21; Online: 2016-12-31; Published: 2016-12-30

Abstract: We propose a novel progressive framework for optimizing deep neural networks. The idea is to combine the stability of linear methods with the ability of deep learning methods to learn complex and abstract internal representations. We insert a linear loss layer between the input layer and the first hidden non-linear layer of a traditional deep model; the optimization objective is then a weighted sum of the linear loss of the added layer and the non-linear loss of the last output layer. For cross-modal retrieval tasks such as text-to-image and image-to-text search, we modify the structure of deep canonical correlation analysis (DCCA) by adding a third semantic view that regularizes text and image pairs, and embed this structure into our framework. Experimental results show that the modified model outperforms similar state-of-the-art approaches on the NUS-WIDE dataset from the National University of Singapore. To validate the generalization ability of our framework, we also apply it to RankNet, a ranking model optimized by stochastic gradient descent. Our method outperforms RankNet and converges more quickly, which indicates that the progressive framework can provide a better and faster solution for deep neural networks.
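The weighted objective described above can be made concrete with a short sketch. The following PyTorch code is a minimal illustration, not the authors' implementation: the names (ProgressiveNet, progressive_loss, linear_head), the auxiliary linear readout, the weight alpha, and the use of mean-squared error are all assumptions standing in for the paper's task-specific losses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProgressiveNet(nn.Module):
    """Deep model with an auxiliary linear branch attached before the
    first non-linearity (one plausible reading of the paper's design)."""

    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.first = nn.Linear(in_dim, hidden_dim)         # layer before the first non-linearity
        self.linear_head = nn.Linear(hidden_dim, out_dim)  # linear loss layer (assumed readout)
        self.rest = nn.Sequential(                         # the usual non-linear stack
            nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x):
        h = self.first(x)  # still purely linear at this point
        return self.linear_head(h), self.rest(h)

def progressive_loss(y_linear, y_deep, target, alpha=0.5):
    # Weighted sum of the linear loss (stable training signal) and the
    # non-linear loss (expressive); alpha is a hypothetical hyper-parameter.
    return alpha * F.mse_loss(y_linear, target) + (1.0 - alpha) * F.mse_loss(y_deep, target)

# Usage sketch with random data
model = ProgressiveNet(in_dim=128, hidden_dim=64, out_dim=10)
x, t = torch.randn(32, 128), torch.randn(32, 10)
loss = progressive_loss(*model(x), t)
loss.backward()
```

One natural reading of "progressive" and "from linear to non-linear" is to anneal alpha from large to small during training, so optimization starts close to a stable linear problem and gradually shifts weight to the non-linear objective; whether the paper does this is not stated in the abstract.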

Key words: framework, neural network, DCCA, semantic, RankNet