Cite this article: WANG Yin, WANG Yong-hua, YIN Ze-zhong, WAN Pin. Path planning of manipulator based on deep reinforcement learning and screw method[J]. Control Theory & Applications, 2023, 40(3): 516-524.
Path planning of manipulator based on deep reinforcement learning and screw method
Received: 2021-09-14  Revised: 2023-02-22
DOI: 10.7641/CTA.2022.10867
2023, 40(3): 516-524
Keywords: reinforcement learning; manipulator; screw method; data augmentation
Funding: Supported by the National Natural Science Foundation of China (61971147) and the Guangdong Province Graduate Education Innovation Plan Project (2020JGXM040).
Author          Affiliation                                                E-mail
WANG Yin        School of Automation, Guangdong University of Technology   2111904099@mail2.gdut.edu.cn
WANG Yong-hua*  School of Automation, Guangdong University of Technology   sjzwyh@163.com
YIN Ze-zhong    School of Automation, Guangdong University of Technology
WAN Pin         School of Automation, Guangdong University of Technology
Abstract
      The application of deep reinforcement learning to manipulator path planning still faces the problems of high sample demand and high sample-acquisition cost. To address these problems, this paper proposes a fusion algorithm of deep reinforcement learning and the screw method, based on the idea of data augmentation. The algorithm uses the screw method to replicate the natural trajectories obtained from interaction with the environment, which improves the sample utilization of deep reinforcement learning and the training efficiency of the algorithm. While the trajectories are replicated, environmental elements such as the controlled object and obstacles are replicated synchronously, which improves the generalization performance of the manipulator in unstructured environments. Finally, comparative experiments are carried out with a Fetch manipulator and a UR5 manipulator in unstructured environments on the MuJoCo simulation platform, which provides a physics engine. The results show that the proposed algorithm is feasible and effective in improving the sample utilization of deep reinforcement learning and the generalization performance of the manipulator model.
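To make the augmentation step concrete: by Chasles' theorem, any rigid-body displacement can be written as a screw motion, i.e. a rotation about an axis combined with a translation along that axis. Applying one such transform to every waypoint of a collected trajectory and, synchronously, to the goal and obstacle positions yields a new, geometrically consistent training episode. Below is a minimal sketch of that idea in Python with NumPy; the episode layout, the function names (screw_transform, replicate_episode), and the replay-buffer handling are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def skew(w):
    """Return the 3x3 skew-symmetric matrix [w] of a 3-vector w."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def screw_transform(axis, point, theta, d):
    """Homogeneous transform of a screw motion (Chasles' theorem):
    rotate by angle theta about the line through `point` with
    direction `axis`, then translate by distance d along that axis."""
    w = np.asarray(axis, dtype=float)
    w /= np.linalg.norm(w)
    W = skew(w)
    # Rodrigues' rotation formula
    R = np.eye(3) + np.sin(theta) * W + (1.0 - np.cos(theta)) * (W @ W)
    t = (np.eye(3) - R) @ np.asarray(point, dtype=float) + d * w
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def replicate_episode(episode, T):
    """Apply the same screw transform to the end-effector waypoints,
    the goal, and the obstacles, producing one consistent new episode."""
    move = lambda p: (T @ np.append(p, 1.0))[:3]
    return {
        "ee_path":   [move(p) for p in episode["ee_path"]],
        "goal":      move(episode["goal"]),
        "obstacles": [move(p) for p in episode["obstacles"]],
        # Goal-reaching rewards depend only on relative geometry,
        # so they carry over unchanged to the replicated episode.
        "rewards":   list(episode["rewards"]),
    }

# Replicate one collected episode by rotating 30 degrees about the
# manipulator base z-axis, then store both copies for off-policy training.
T = screw_transform(axis=[0.0, 0.0, 1.0], point=[0.0, 0.0, 0.0],
                    theta=np.pi / 6, d=0.0)
episode = {"ee_path":   [np.array([0.5, 0.0, 0.2]), np.array([0.6, 0.1, 0.2])],
           "goal":      np.array([0.6, 0.1, 0.2]),
           "obstacles": [np.array([0.55, 0.05, 0.2])],
           "rewards":   [-1.0, 0.0]}
replay_buffer = [episode, replicate_episode(episode, T)]
```

A replicated episode is only valid where the transformed waypoints remain inside the robot's reachable workspace, so in practice the sampled screw parameters would have to be restricted accordingly.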