Chen, Xiaobiao and Yan, Kaixin and Gao, Yu and Xu, Xuefeng and Yan, Kang and Wang, Jing (2020) Push-Pull Finite-Time Convergence Distributed Optimization Algorithm. American Journal of Computational Mathematics, 10 (01). pp. 118-146. ISSN 2161-1203
Abstract
With the widespread deployment of distributed systems, many problems demand new solutions, and the design of distributed optimization strategies has become a research hotspot. This article focuses on the convergence rate of distributed convex optimization algorithms. Each agent in the network holds its own convex cost function, and the agents cooperate to minimize the sum of these costs. We consider a gradient-based distributed method built on the push-pull gradient algorithm. Inspired by existing multi-agent consensus protocols for distributed convex optimization, we propose and analyze a distributed convex optimization algorithm with finite-time convergence. Finally, for a fixed undirected network topology, we propose a fast-converging distributed cooperative learning method based on a linearly parameterized neural network. Unlike existing distributed convex optimization algorithms, which achieve at best exponential convergence, the proposed algorithm converges in finite time. Convergence is guaranteed by a Lyapunov argument, and simulation examples illustrate the effectiveness of the algorithm. Compared with other algorithms, it is competitive.
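The abstract describes a push-pull gradient method in which agents cooperatively minimize the sum of their local convex costs. The sketch below shows the standard push-pull gradient-tracking iteration on a toy quadratic problem; it is an illustration under assumed parameters (ring network, step size, quadratic local costs), not the paper's finite-time variant.

```python
# Minimal sketch of a standard push-pull gradient-tracking iteration,
# NOT the finite-time algorithm of the paper. Each agent i holds
# f_i(x) = 0.5*(x - b_i)^2; the network goal is to minimize sum_i f_i(x),
# whose optimum is mean(b).
import numpy as np

n = 5                                   # number of agents (assumed)
b = np.arange(1.0, n + 1.0)             # per-agent data; optimum is mean(b)
alpha = 0.1                             # step size (assumed)

# Mixing matrices from a ring graph: R is row-stochastic (pull step),
# C is column-stochastic (push step).
A = np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
A[0, -1] = A[-1, 0] = 1.0
R = A / A.sum(axis=1, keepdims=True)    # normalize rows
C = A / A.sum(axis=0, keepdims=True)    # normalize columns

grad = lambda x: x - b                  # gradient of each agent's local cost
x = np.zeros(n)                         # local estimates
y = grad(x)                             # gradient trackers, initialized to local gradients

for _ in range(200):
    x_next = R @ (x - alpha * y)        # "pull": mix estimates, take gradient step
    y = C @ y + grad(x_next) - grad(x)  # "push": track the average gradient
    x = x_next

print(x, "vs optimum", b.mean())        # all agents converge near mean(b)
```

In this classical scheme the estimates approach the optimum exponentially; the paper's contribution, per the abstract, is a protocol whose convergence terminates in finite time.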
| Item Type: | Article |
|---|---|
| Subjects: | Open Asian Library > Mathematical Science |
| Depositing User: | Unnamed user with email support@openasianlibrary.com |
| Date Deposited: | 15 Jun 2023 07:34 |
| Last Modified: | 04 Jun 2024 11:24 |
| URI: | http://publications.eprintglobalarchived.com/id/eprint/1547 |