The Journal of China Universities of Posts and Telecommunications ›› 2023, Vol. 30 ›› Issue (2): 61-72.doi: 10.19682/j.cnki.1005-8885.2023.0004


L2,1-norm robust regularized extreme learning machine for regression using CCCP method

Wu Qing, Wang Fan, Fan Jiulun, Hou Jing   

  • Received: 2022-03-31  Revised: 2022-11-07  Online: 2023-04-30  Published: 2023-04-27
  • Contact: Qing Wu  E-mail: xiyouwuq@126.com
  • Supported by:
    National Natural Science Foundation of China; Key Research Project of Shaanxi Province; Natural Science Foundation of Shaanxi Province of China; Special Scientific Research Plan Project of Shaanxi Province Education Department

Abstract:

As a way of training single hidden layer feedforward networks (SLFNs), the extreme learning machine (ELM) is rapidly gaining popularity due to its efficiency. However, ELM tends to overfit, which makes the model sensitive to noise and outliers. To solve this problem, the L2,1-norm is introduced into ELM and an L2,1-norm robust regularized ELM (L2,1-RRELM) is proposed. By replacing the least squares loss function with a non-convex loss function, L2,1-RRELM assigns constant penalties to outliers and thereby reduces their adverse effects. Since the resulting objective of L2,1-RRELM is non-convex, the concave-convex procedure (CCCP) is applied to solve the model. A convergence analysis of L2,1-RRELM is also given. To further verify its effectiveness, L2,1-RRELM is compared with three popular extreme learning algorithms on an artificial dataset and on University of California Irvine (UCI) datasets. Each algorithm is tested in different noise environments under two evaluation criteria: root mean square error (RMSE) and fitness. The simulation results indicate that L2,1-RRELM achieves smaller RMSE and greater fitness under different noise settings. Numerical analysis shows that L2,1-RRELM has better generalization performance, stronger robustness, and higher anti-noise ability and fitness.
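To make the setting concrete, the following is a minimal sketch of a conventional regularized ELM regressor of the kind the abstract takes as its starting point: random, fixed input weights for the hidden layer and closed-form ridge-style least squares for the output weights. This is the baseline formulation only, not the paper's L2,1-RRELM; the non-convex L2,1-norm loss and the CCCP solver are not reproduced here, and all names and the regularization parameter C are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50, C=1.0):
    """Train a single-hidden-layer feedforward network ELM-style:
    random fixed input weights/biases, closed-form output weights."""
    n_features = X.shape[1]
    W = rng.normal(size=(n_features, n_hidden))  # random input weights (never trained)
    b = rng.normal(size=n_hidden)                # random hidden biases (never trained)
    H = np.tanh(X @ W + b)                       # hidden-layer activation matrix
    # Ridge-regularized least squares for the output weights beta:
    # beta = (H^T H + I/C)^{-1} H^T y
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Usage: fit a noisy sine curve and measure training RMSE.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)
W, b, beta = elm_fit(X, y)
rmse = np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```

Because the least squares loss grows quadratically with the residual, a single large outlier can dominate the solution for beta, which is the sensitivity the paper's bounded non-convex loss is designed to remove.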

Key words:

extreme learning machine (ELM) | non-convex loss | L2,1-norm | concave-convex procedure (CCCP)
