Citation: WANG Chao, WANG Jian-hui, GU Shu-sheng, WANG Xiao, ZHANG Yu-xian. Improved incremental extreme learning machine based on multi-learning clonal selection algorithm [J]. Control Theory & Applications, 2016, 33(3): 368-379.
Improved incremental extreme learning machine based on multi-learning clonal selection algorithm
Received: 2015-07-24  Revised: 2015-10-21
DOI: 10.7641/CTA.2016.50640
Keywords: clonal selection algorithm; Baldwinian learning; Lamarckian learning; neural networks; incremental extreme learning machine; soft computing
Funding: Supported by the National Natural Science Foundation of China (61102124) and the Science and Technology Plan Project of Liaoning Province (JH2/101).
Authors (affiliation, e-mail):
WANG Chao* (Northeastern University), supper_king1018@163.com
WANG Jian-hui (Northeastern University)
GU Shu-sheng (Northeastern University)
WANG Xiao (Northeastern University)
ZHANG Yu-xian (Shenyang University of Technology)
Abstract (translated from the Chinese):
      In an incremental extreme learning machine (I-ELM), a large number of redundant hidden nodes can lower the learning efficiency of the algorithm and complicate the network structure. To address this problem, an improved I-ELM based on a clonal selection algorithm (CSA) optimized with multi-learning is proposed. A Baldwinian learning operator is used to reshape the search range of the antibody information, and a Lamarckian learning operator is combined with it to strengthen the search capability of the CSA. The improved algorithm effectively limits the number of hidden-layer nodes of the I-ELM, yielding a more compact network structure and higher accuracy. Simulation results show that the proposed multi-learning clonal selection I-ELM with kernel (MLCSI-ELMK) effectively simplifies the network structure while maintaining good generalization, strong learning ability, and online prediction capability.
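To illustrate the distinction between the two learning operators named in the abstract, the sketch below contrasts Baldwinian learning (learning changes only the evaluated fitness; the inherited genotype is unchanged) and Lamarckian learning (the learned genotype is written back) inside a generic clonal selection loop. The toy objective, all function names, and all parameter values are hypothetical illustrations, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def affinity(x):
    # Toy affinity (hypothetical objective): higher is better, peak at x = 1.
    return -np.sum((x - 1.0) ** 2)

def local_search(x, step=0.1, iters=10):
    # Simple hill climbing; the "learning" step used by both operators.
    best = x.copy()
    for _ in range(iters):
        cand = best + rng.normal(0.0, step, size=x.shape)
        if affinity(cand) > affinity(best):
            best = cand
    return best

def clonal_selection(dim=2, pop_size=8, n_clones=4, gens=40, lamarckian=True):
    # Antibodies are candidate solutions; clone, hypermutate, learn, select.
    pop = rng.uniform(-4.0, 4.0, size=(pop_size, dim))
    for _ in range(gens):
        for i in range(pop_size):
            clones = pop[i] + rng.normal(0.0, 0.5, size=(n_clones, dim))
            for c in clones:
                learned = local_search(c)
                if lamarckian:
                    # Lamarckian: the learned genotype itself is inherited.
                    cand, score = learned, affinity(learned)
                else:
                    # Baldwinian: fitness comes from the learned phenotype,
                    # but the unlearned clone is what enters the population.
                    cand, score = c, affinity(learned)
                if score > affinity(pop[i]):
                    pop[i] = cand
    return max(pop, key=affinity)
```

In this sketch the Lamarckian write-back converges faster on a smooth objective, while the Baldwinian variant keeps more genotypic diversity; the paper combines both effects rather than choosing one.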
Abstract:
      The large number of redundant nodes in an incremental extreme learning machine (I-ELM) may lower the learning efficiency of the algorithm and complicate the network structure. To deal with this problem, we propose an improved I-ELM with kernel (I-ELMK) based on a multi-learning clonal selection algorithm (MLCSA). The MLCSA combines Baldwinian learning, which explores the search space by exploiting information shared among antibodies, with Lamarckian learning, which reinforces the exploitation of each individual's own information. The proposed algorithm effectively limits the number of hidden-layer neurons, yielding a more compact network architecture. Simulations show that MLCSI-ELMK achieves higher prediction accuracy both online and offline, while providing better generalization than the compared algorithms.
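For reference, the standard I-ELM construction that the proposed method makes more compact can be sketched as follows: hidden nodes with random parameters are added one at a time, and each node's output weight has a closed form computed from the current residual. This is a minimal single-output sketch; the sigmoid activation, the stopping tolerance, and the function names are illustrative, and the kernel and clonal-selection components of MLCSI-ELMK are not shown.

```python
import numpy as np

rng = np.random.default_rng(42)

def i_elm(X, y, max_nodes=100, tol=1e-3):
    """Incremental ELM: add random sigmoid nodes one at a time; each
    output weight beta has a closed form from the current residual."""
    n, d = X.shape
    e = y.astype(float).copy()       # residual starts as the target
    nodes = []                       # (w, b, beta) per hidden node
    for _ in range(max_nodes):
        w = rng.uniform(-1.0, 1.0, d)            # random input weights
        b = rng.uniform(-1.0, 1.0)               # random bias
        h = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # node output on all samples
        beta = (e @ h) / (h @ h)                 # least-squares output weight
        e = e - beta * h                         # residual can only shrink
        nodes.append((w, b, beta))
        if np.linalg.norm(e) < tol:
            break
    return nodes, e

def predict(nodes, X):
    out = np.zeros(X.shape[0])
    for w, b, beta in nodes:
        out += beta / (1.0 + np.exp(-(X @ w + b)))
    return out
```

Because each beta minimizes the residual norm for its node, the training error is non-increasing as nodes are added; the redundancy the paper targets arises because many random nodes contribute very little reduction, which is what the clonal selection step is meant to prune.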