Citation: P. Yi, Y. Hong. Stochastic sub-gradient algorithm for distributed optimization with random sleep scheme [J]. Control Theory and Technology, 2015, 13(4): 333–347.



Stochastic sub-gradient algorithm for distributed optimization with random sleep scheme
P. Yi, Y. Hong
(Key Lab of Systems and Control, Academy of Mathematics and Systems Science, Chinese Academy of Sciences)
Received: October 10, 2015    Revised: October 27, 2015
Abstract:
In this paper, we consider a distributed convex optimization problem for a multi-agent system in which the global objective function is the sum of the agents' individual objective functions. To solve this problem, we propose a distributed stochastic sub-gradient algorithm with a random sleep scheme: at each iteration, each agent independently and randomly decides whether to query the sub-gradient of its local objective function. The algorithm not only generalizes distributed algorithms with variable working nodes and multi-step consensus-based algorithms, but also extends some existing randomized convex set intersection results. We investigate the convergence properties of the algorithm under two types of stepsizes: a randomized diminishing stepsize that is heterogeneous and computed by each agent individually, and a fixed stepsize that is homogeneous across agents. Under the randomized diminishing stepsize, we prove that the agents' estimates reach consensus almost surely and in mean, and that the consensus point is an optimal solution with probability 1. Under the fixed homogeneous stepsize, we analyze the error bound of the algorithm and show how the error depends on the stepsize and the update rates.
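To make the random sleep scheme concrete, here is a minimal numerical sketch of one plausible realization (an illustrative assumption, not the paper's exact algorithm): at every iteration each agent first averages its neighbors' estimates through a doubly stochastic weight matrix W, and then, independently with its own wake probability, takes a sub-gradient step on its local objective. The function name, the Bernoulli wake probabilities gamma, and the 1/(k+1) stepsize rule are all assumptions chosen for brevity.

    import numpy as np

    def random_sleep_subgradient(subgrads, W, gamma, x0, num_iters=5000, seed=0):
        """Sketch of a distributed sub-gradient method with random sleep.

        subgrads -- list of callables; subgrads[i](x) returns a sub-gradient
                    of agent i's local objective at x (user-supplied assumption)
        W        -- (n, n) doubly stochastic consensus weight matrix (assumption)
        gamma    -- length-n array of Bernoulli wake probabilities
        x0       -- (n, d) array of initial estimates, one row per agent
        """
        rng = np.random.default_rng(seed)
        x = np.array(x0, dtype=float)
        n = x.shape[0]
        for k in range(num_iters):
            # Consensus step: every agent mixes its neighbors' estimates.
            x = W @ x
            # One simple homogeneous diminishing stepsize (illustrative).
            alpha = 1.0 / (k + 1)
            for i in range(n):
                # Random sleep: agent i queries its local sub-gradient only
                # with probability gamma[i]; otherwise it skips the update.
                if rng.random() < gamma[i]:
                    x[i] -= alpha * subgrads[i](x[i])
        return x

For example, with local objectives f_i(x) = |x - a_i| and subgrads[i] = lambda x, a=a[i]: np.sign(x - a), the agents' estimates should cluster near a median of the a_i, a minimizer of the sum. The sketch omits the projection onto a constraint set and the heterogeneous randomized stepsizes analyzed in the paper.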
Key words: distributed optimization, sub-gradient algorithm, random sleep, multi-agent systems, randomized algorithm