[1] ESHTAY M, FARIS H, OBEID N. Improving extreme learning machine by competitive swarm optimization and its application for medical diagnosis problems. Expert Systems with Applications, 2018, 104: 134-152.
[2] LESHNO M, LIN V Y, PINKUS A, et al. Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. Neural Networks, 1993, 6(6): 861-867.
[3] HUANG G B, ZHU Q Y, SIEW C K. Extreme learning machine: theory and applications. Neurocomputing, 2006, 70(1/2/3): 489-501.
[4] HUANG G B, CHEN L, SIEW C K. Universal approximation using incremental constructive feedforward networks with random hidden nodes. IEEE Transactions on Neural Networks, 2006, 17(4): 879-892.
[5] FENG G R, QIAN Z X, ZHANG X P. Evolutionary selection extreme learning machine optimization for regression. Soft Computing, 2012, 16(9): 1485-1491.
[6] XU J T, ZHOU H M, HUANG G B. Extreme learning machine based fast object recognition. Proceedings of the 15th International Conference on Information Fusion, 2012, Jul 9-12, Singapore. Piscataway, NJ, USA: IEEE, 2012: 1490-1496.
[7] CAI W Q, NIAN R, HE B, et al. A fast sonar-based benthic object recognition model via extreme learning machine. Proceedings of the OCEANS 2015 - MTS/IEEE Washington, 2015, Oct 19-22, Washington, DC, USA. Piscataway, NJ, USA: IEEE, 2015: 1-4.
[8] ERGUL U, BILGIN G. HCKBoost: hybridized composite kernel boosting with extreme learning machines for hyperspectral image classification. Neurocomputing, 2019, 334: 100-113.
[9] ZHOU Y C, PENG J T, CHEN C L P. Extreme learning machine with composite kernels for hyperspectral image classification. IEEE Journal of Selected Topics in Applied Earth Observations & Remote Sensing, 2015, 8(6): 2351-2360.
[10] YANG J, YU H L, YANG X B, et al. Imbalanced extreme learning machine based on probability density estimation. Proceedings of the 9th International Workshop on Multi-disciplinary Trends in Artificial Intelligence (MIWAI'15), 2015, Nov 13-15, Fuzhou, China. LNAI 9426. Berlin, Germany: Springer, 2015: 160-167.
[11] GAO M, HONG X, CHEN S, et al. Probability density function estimation based over-sampling for imbalanced two-class problems. Proceedings of the 2012 International Joint Conference on Neural Networks (IJCNN'12), 2012, Jun 10-15, Brisbane, Australia. Piscataway, NJ, USA: IEEE, 2012: 1-8.
[12] LI X D, XIE H R, WANG R, et al. Empirical analysis: stock market prediction via extreme learning machine. Neural Computing and Applications, 2016, 27(1): 67-78.
[13] XIONG Z B. Stock price prediction based on sparse Bayesian extreme learning machine. Journal of Jiaxing University, 2018, 30(5): 106-113 (in Chinese).
[14] RUMELHART D E, HINTON G E, WILLIAMS R J. Learning representations by back-propagating errors. Nature, 1986, 323(6088): 533-536.
[15] MICHE Y, SORJAMAA A, BAS P, et al. OP-ELM: optimally pruned extreme learning machine. IEEE Transactions on Neural Networks, 2010, 21(1): 158-162.
[16] MICHE Y, VAN HEESWIJK M, BAS P, et al. TROP-ELM: a double-regularized ELM using LARS and Tikhonov regularization. Neurocomputing, 2011, 74(16): 2413-2421.
[17] DENG W Y, ZHENG Q H, CHEN L. Regularized extreme learning machine. Proceedings of the 2009 IEEE Symposium on Computational Intelligence and Data Mining (CIDM'09), 2009, Mar 30 - Apr 2, Nashville, TN, USA. Piscataway, NJ, USA: IEEE, 2009: 389-395.
[18] WANG Y B, LI D, DU Y, et al. Anomaly detection in traffic using L1-norm minimization extreme learning machine. Neurocomputing, 2015, 149(1): 415-425.
[19] ZHOU S H, LIU X W, LIU Q, et al. Random Fourier extreme learning machine with L2,1-norm regularization. Neurocomputing, 2016, 174(6): 143-153.
[20] HASTIE T, TIBSHIRANI R, FRIEDMAN J. The elements of statistical learning: data mining, inference and prediction. Berlin, Germany: Springer, 2003.
[21] SHEN X, NIU L F, QI Z Q, et al. Support vector machine classifier with truncated pinball loss. Pattern Recognition, 2017, 68: 199-210.
[22] WANG L, JIA H D, LI J. Training robust support vector machine with smooth Ramp loss in the primal space. Neurocomputing, 2008, 71(13/14/15): 3020-3025.
[23] ZHAO Y P, SUN J G. Robust support vector regression in the primal. Neural Networks, 2008, 21(10): 1548-1555.
[24] SINGH A, POKHAREL R, PRINCIPE J. The C-loss function for pattern classification. Pattern Recognition, 2014, 47(1): 441-453.
[25] ZHAO Y P, TAN J F, WANG J J, et al. C-loss based extreme learning machine for estimating power of small-scale turbojet engine. Aerospace Science and Technology, 2019, 89: 407-419.
[26] GUPTA D, HAZARIKA B B, BERLIN M. Robust regularized extreme learning machine with asymmetric Huber loss function. Neural Computing and Applications, 2020, 32(3/4): 12971-12998.
[27] BALASUNDARAM S, MEENA Y. Robust support vector regression in primal with asymmetric Huber loss. Neural Processing Letters, 2019, 49(3): 1399-1431.
[28] WANG K N, ZHONG P. Robust non-convex least squares loss function for regression with outliers. Knowledge-Based Systems, 2014, 71: 290-302.
[29] XU G B, HU B G, PRINCIPE J C. Robust C-Loss kernel classifiers. IEEE Transactions on Neural Networks and Learning Systems, 2018, 29(3): 510-522.
[30] YUILLE A L, RANGARAJAN A. The concave-convex procedure (CCCP). Neural Computation, 2003, 15(4): 915-936.
[31] DEMSAR J. Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research, 2006, 7(1): 1-30.