The Journal of China Universities of Posts and Telecommunications ›› 2016, Vol. 23 ›› Issue (6): 1-7. doi: 10.1016/S1005-8885(16)60063-8

• Artificial Intelligence •

Progressive framework for deep neural networks: from linear to non-linear

Shao Jie 1, Zhao Zhicheng 1, Su Fei 2, Cai Anni 1

  1. Beijing University of Posts and Telecommunications
  2. Beijing University of Posts and Telecommunications
  • Received: 2016-09-08; Revised: 2016-12-21; Online: 2016-12-31; Published: 2016-12-30
  • Corresponding author: Shao Jie, E-mail: shaojielyg@163.com
  • Supported by: the National Natural Science Foundation of China

Abstract: We propose a novel progressive framework for optimizing deep neural networks. The idea is to combine the stability of linear methods with the ability of deep learning methods to learn complex, abstract internal representations. We insert a linear loss layer between the input layer and the first hidden non-linear layer of a traditional deep model; the optimization objective is then a weighted sum of the linear loss of the added layer and the non-linear loss of the last output layer. For cross-modal retrieval tasks such as text-to-image and image-to-text search, we modify the structure of deep canonical correlation analysis (DCCA) by adding a third semantic view to regularize text-image pairs, and embed this structure into our framework. Experimental results show that the modified model outperforms comparable state-of-the-art approaches on the NUS-WIDE dataset from the National University of Singapore. To validate the generalization ability of our framework, we also apply it to RankNet, a ranking model optimized by stochastic gradient descent. Our method outperforms RankNet and converges more quickly, indicating that the progressive framework can provide a better and faster solution for deep neural networks.
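
To make the weighted-sum objective concrete, the following is a minimal sketch, not the authors' implementation: it assumes a small feed-forward regression model in PyTorch, with hypothetical layer sizes and a weighting coefficient alpha. The combined loss is L = alpha * L_linear + (1 - alpha) * L_non-linear, where L_linear is computed on a linear head attached before the first non-linearity and L_non-linear on the final output.

# Hedged sketch of the progressive loss idea (not the paper's code).
# Layer sizes, the regression task, MSE losses, and alpha are assumptions.
import torch
import torch.nn as nn

class ProgressiveNet(nn.Module):
    def __init__(self, d_in, d_hidden, d_out):
        super().__init__()
        self.first = nn.Linear(d_in, d_hidden)          # first (linear) layer
        self.linear_head = nn.Linear(d_hidden, d_out)   # added linear loss layer,
                                                        # placed before the first non-linearity
        self.rest = nn.Sequential(                      # non-linear part of the deep model
            nn.ReLU(),
            nn.Linear(d_hidden, d_hidden),
            nn.ReLU(),
            nn.Linear(d_hidden, d_out),                 # last output layer
        )

    def forward(self, x):
        h = self.first(x)
        return self.linear_head(h), self.rest(h)        # linear and non-linear predictions

def progressive_loss(y_lin, y_nonlin, target, alpha=0.5):
    # Weighted sum of the linear loss and the non-linear loss.
    mse = nn.functional.mse_loss
    return alpha * mse(y_lin, target) + (1.0 - alpha) * mse(y_nonlin, target)

# Toy usage: fit random data with plain stochastic gradient descent,
# the optimizer the paper uses for its RankNet experiment.
if __name__ == "__main__":
    torch.manual_seed(0)
    model = ProgressiveNet(d_in=8, d_hidden=16, d_out=1)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x, y = torch.randn(64, 8), torch.randn(64, 1)
    for step in range(100):
        opt.zero_grad()
        y_lin, y_nonlin = model(x)
        loss = progressive_loss(y_lin, y_nonlin, y, alpha=0.5)
        loss.backward()
        opt.step()

Here the linear head gives the optimizer a stable, convex-like signal early in training, while the non-linear term gradually dominates the representation learning; how alpha is scheduled is a design choice the abstract does not specify.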

Key words: framework, neural network, DCCA, semantic, RankNet
