Cite this article: WEI Hai-kun, LI Qi, SONG Wen-zhong. Gradient learning dynamics of radial basis function networks [J]. Control Theory and Technology, 2007, 24(3): 356-360.
Gradient learning dynamics of radial basis function networks
Received: 2005-03-17  Revised: 2006-08-20
DOI: 10.7641/j.issn.1000-8152.2007.3.005
2007, 24(3): 356-360
Keywords: gradient method; RBF network; learning dynamics; neural networks; generalization ability
Funding:
Author affiliation:
WEI Hai-kun, LI Qi, SONG Wen-zhong — School of Automation, Southeast University, Nanjing 210096, Jiangsu, China
Abstract (translated from Chinese)
      Analyzing the dynamics of parameter changes during neural network learning is valuable for understanding the network's dynamical behavior and for improving its structure and performance. This paper discusses the changing dynamics of the hidden-unit parameters of an RBF network when the gradient method is used to minimize the sum-of-squared-error cost function, i.e., the possible values of the hidden-unit parameters after the algorithm converges. The main conclusions are: if the cost function is nonzero after convergence, each hidden unit will be located at a weighted cluster center of the sample inputs; if the cost function is zero, the redundant hidden units will shrink, decay, move outward, or overlap. Further experiments show that for oversized RBF networks, the shrinking, outward movement, decay, and overlapping of redundant hidden units occur frequently.
English abstract
      To understand the dynamic behavior and to improve the structure and performance of neural networks, it is important to investigate how their parameters change during learning. For radial basis function (RBF) networks trained with the gradient descent method to minimize the least-squares error cost function, this paper discusses the learning dynamics of the hidden-unit parameters, i.e., their possible values after learning. It is proved that if the cost function is not zero after the algorithm converges, then all hidden units move to the weighted cluster centers of the sample inputs. If the cost function is zero, then the redundant hidden units exhibit shrinking, decaying, outward-moving, or overlapping behavior. Further simulations show that these phenomena occur frequently in oversized RBF networks.
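The learning setup the abstract describes — an RBF network with Gaussian hidden units whose centers, widths, and output weights are all adapted by gradient descent on the sum-of-squared-error cost — can be sketched as below. This is a minimal illustrative script, not the authors' code: the toy target function, network size, learning rate, iteration count, and the width-clipping safeguard are all assumptions made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression problem (assumption: not from the paper)
X = np.linspace(-3.0, 3.0, 60)
y = np.sin(X)

M = 8                          # deliberately oversized hidden layer
c = rng.uniform(-3.0, 3.0, M)  # Gaussian centers
s = np.full(M, 1.0)            # Gaussian widths
w = rng.normal(0.0, 0.1, M)    # output weights

def forward(X, c, s, w):
    """phi[n, j] = exp(-(x_n - c_j)^2 / (2 s_j^2)); output = phi @ w."""
    phi = np.exp(-(X[:, None] - c[None, :]) ** 2 / (2.0 * s[None, :] ** 2))
    return phi, phi @ w

lr, n_iter = 0.05, 4000
for _ in range(n_iter):
    phi, yhat = forward(X, c, s, w)
    err = yhat - y               # residuals of E = 0.5 * sum(err^2)
    d = X[:, None] - c[None, :]
    gw = phi.T @ err                                        # dE/dw_j
    gc = w * (err[:, None] * phi * d / s ** 2).sum(0)       # dE/dc_j
    gs = w * (err[:, None] * phi * d ** 2 / s ** 3).sum(0)  # dE/ds_j
    w -= lr * gw / len(X)
    c -= lr * gc / len(X)
    s -= lr * gs / len(X)
    s = np.clip(s, 0.2, None)  # illustrative safeguard: keep widths positive

phi, yhat = forward(X, c, s, w)
mse = float(np.mean((yhat - y) ** 2))
print(f"final MSE: {mse:.4f}")
print("centers after training:", np.sort(np.round(c, 2)))
```

Inspecting `c`, `s`, and `w` after convergence is the kind of analysis the paper performs: with more hidden units than the target needs, some units end up shrunken (small width), decayed (near-zero weight), pushed outside the input range, or overlapping another unit's center.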