Personal Information
Title: Associate Research Fellow
Admissions disciplines:
Power Engineering and Engineering Thermophysics -- [Master's students] -- College of Energy and Power Engineering
Aeronautical and Astronautical Science and Technology -- [Doctoral and Master's students] -- College of Energy and Power Engineering
Energy and Power -- [Doctoral and Master's students] -- College of Energy and Power Engineering
Education: Nanjing University of Aeronautics and Astronautics
Degree: Doctor of Engineering
Affiliation: College of Energy and Power Engineering
Feature selection of generalized extreme learning machine for regression problems
Affiliated unit: College of Energy and Power Engineering
Journal: NEUROCOMPUTING
Keywords: Single hidden layer feedforward network; Extreme learning machine; Feature selection; Greedy learning; Iterative updating
Abstract: Recently a generalized single-hidden-layer feedforward network was proposed as an extension of the original extreme learning machine (ELM). Unlike the traditional ELM, this generalized ELM (GELM) uses p-order reduced polynomial functions of the complete input features as output weights. Empirical results suggest that some of the input features used to construct these polynomial output weights may be insignificant or redundant, yet no prior work has addressed selecting appropriate input features for constructing the output weights of GELM. Hence, this paper proposes two greedy learning algorithms to tackle this issue: a forward feature selection algorithm (FFS-GELM) and a backward feature selection algorithm (BFS-GELM). To reduce computational complexity, an iterative strategy is used in FFS-GELM, and its convergence is proved. In BFS-GELM, a decreasing iteration is applied to shrink the model, and an accelerating scheme is proposed to speed up the removal of insignificant or redundant features. Experiments on twelve benchmark data sets demonstrate that both FFS-GELM and BFS-GELM select appropriate input features for constructing the p-order reduced polynomial output weights of GELM, enhancing generalization performance while simultaneously reducing testing time compared with the original GELM. BFS-GELM outperforms FFS-GELM in sparsity ratio, testing time, and training time, at a slight cost in generalization performance.
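To illustrate the kind of greedy search the abstract describes, the sketch below implements generic forward feature selection wrapped around a plain ELM regressor (random hidden layer plus least-squares output weights). This is an assumption-laden illustration of the general technique, not the paper's FFS-GELM: it selects raw input features by validation error rather than selecting features for GELM's p-order polynomial output weights, and it omits the paper's iterative updating and convergence machinery.

```python
import numpy as np

def elm_fit_predict(X_tr, y_tr, X_te, n_hidden=30, seed=0):
    """Train a basic ELM (random hidden layer, least-squares output
    weights) and predict on X_te. A generic ELM, not the paper's GELM."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X_tr.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H_tr = np.tanh(X_tr @ W + b)                     # hidden-layer outputs
    beta, *_ = np.linalg.lstsq(H_tr, y_tr, rcond=None)
    return np.tanh(X_te @ W + b) @ beta

def forward_feature_selection(X_tr, y_tr, X_va, y_va, max_feats=None):
    """Greedy forward selection: repeatedly add the single feature whose
    inclusion most reduces validation MSE; stop when no feature helps."""
    n_feats = X_tr.shape[1]
    max_feats = max_feats or n_feats
    selected, best_mse = [], np.inf
    while len(selected) < max_feats:
        scores = {}
        for j in range(n_feats):
            if j in selected:
                continue
            cols = selected + [j]
            pred = elm_fit_predict(X_tr[:, cols], y_tr, X_va[:, cols])
            scores[j] = float(np.mean((pred - y_va) ** 2))
        j_best = min(scores, key=scores.get)
        if scores[j_best] >= best_mse:
            break                                    # no candidate improves
        best_mse = scores[j_best]
        selected.append(j_best)
    return selected, best_mse
```

A backward variant (in the spirit of BFS-GELM) would start from the full feature set and greedily remove the feature whose deletion least degrades validation error, which tends to yield sparser models at some cost in accuracy, matching the trade-off the abstract reports.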
ISSN: 0925-2312
Translated version: No
Publication date: 2018-01-31
Co-authors: Pan, Ying-Ting; Song, Fang-Quan; Sun, Liguo; Chen, Ting-Hao
Corresponding authors: Sun, Liguo; Zhao, Yong-Ping (赵永平)