The Journal of China Universities of Posts and Telecommunications ›› 2023, Vol. 30 ›› Issue (2): 61-72. doi: 10.19682/j.cnki.1005-8885.2023.0004

• Artificial Intelligence •

L2,1-norm robust regularized extreme learning machine for regression using CCCP method

Wu Qing 1, Wang Fan 1, Fan Jiulun 2, Hou Jing 1

  1. Xi'an University of Posts and Telecommunications
    2. Department of Information and Control, Xi'an Institute of Posts and Telecommunications
  • Received: 2022-03-31 Revised: 2022-11-07 Online: 2023-04-30 Published: 2023-04-27
  • Contact: Wu Qing E-mail: xiyouwuq@126.com
  • Supported by:
    National Natural Science Foundation of China; Key Research Project of Shaanxi Province; Natural Science Foundation of Shaanxi Province of China; Special Scientific Research Plan Project of Shaanxi Province Education Department



Abstract:

As a method for training single hidden layer feedforward networks (SLFNs), the extreme learning machine (ELM) is rapidly becoming popular due to its efficiency. However, ELM tends to overfit, which makes the model sensitive to noise and outliers. To solve this problem, the L2,1-norm is introduced into ELM and an L2,1-norm robust regularized ELM (L2,1-RRELM) is proposed. By replacing the least squares loss function with a non-convex loss function, L2,1-RRELM assigns constant penalties to outliers to reduce their adverse effects. In light of the non-convexity of L2,1-RRELM, the concave-convex procedure (CCCP) is applied to solve its model. The convergence of L2,1-RRELM is also established to show its robustness. To further verify the effectiveness of L2,1-RRELM, it is compared with three popular extreme learning algorithms on an artificial dataset and University of California Irvine (UCI) datasets. Each algorithm is tested in different noise environments with two evaluation criteria: root mean square error (RMSE) and fitness. The simulation results indicate that L2,1-RRELM achieves smaller RMSE and greater fitness under different noise settings. Numerical analysis shows that L2,1-RRELM has better generalization performance, stronger robustness, and higher anti-noise ability and fitness.
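To make the two ingredients of the abstract concrete, the following is a minimal sketch of a regularized ELM fit made robust via CCCP. It is not the paper's exact L2,1-RRELM formulation: a plain truncated squared loss min(r², θ²) stands in for the paper's non-convex loss, the sigmoid activation, dataset, and all parameter values (number of hidden nodes, C, θ) are illustrative assumptions. Linearizing the concave part of the truncated loss at the current residuals reduces each CCCP step to a 0/1-weighted regularized least-squares problem, so samples whose residual exceeds θ receive a constant penalty and stop influencing the fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x), with a few large outliers injected.
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X).ravel()
y[:5] += 5.0  # outliers

n_hidden, C, theta = 50, 1.0, 1.0  # hidden nodes, regularization, truncation level (illustrative)

# ELM: input weights and biases are drawn at random and stay fixed;
# only the output weights beta are learned.
W = rng.normal(size=(X.shape[1], n_hidden))
b = rng.normal(size=n_hidden)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # hidden-layer output matrix (sigmoid)

# CCCP for the truncated squared loss min(r^2, theta^2):
# each iteration solves a weighted ridge problem, then refreshes the
# 0/1 weights from the linearized concave part at the new residuals.
w = np.ones(len(y))  # start by treating every sample as an inlier
for _ in range(10):
    Hw = H * w[:, None]  # rows scaled by sample weights, i.e. diag(w) @ H
    beta = np.linalg.solve(Hw.T @ H + np.eye(n_hidden) / C, Hw.T @ y)
    r = y - H @ beta
    w = (np.abs(r) <= theta).astype(float)  # residuals beyond theta: constant penalty, zero gradient

rmse_inliers = np.sqrt(np.mean((y[5:] - (H @ beta)[5:]) ** 2))
```

After a few iterations the injected outliers end up with weight 0, so the final output weights are determined by the clean samples alone, which is the intuition behind the constant-penalty robustness claimed in the abstract.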

Key words:

extreme learning machine (ELM) | non-convex loss | L2,1-norm | concave-convex procedure (CCCP)
