Article

Adaptive Neural Consensus of Unknown Non-Linear Multi-Agent Systems with Communication Noises under Markov Switching Topologies

Shien-Ming Wu School of Intelligent Engineering, South China University of Technology, Guangzhou 511442, China
*
Author to whom correspondence should be addressed.
Mathematics 2024, 12(1), 133; https://doi.org/10.3390/math12010133
Submission received: 23 November 2023 / Revised: 21 December 2023 / Accepted: 27 December 2023 / Published: 31 December 2023
(This article belongs to the Section Engineering Mathematics)

Abstract: In this paper, the adaptive consensus problem of unknown non-linear multi-agent systems (MASs) with communication noises under Markov switching topologies is studied. Based on adaptive control theory, a novel distributed control protocol for non-linear multi-agent systems is designed. It consists of the local interfered relative information and an estimate of the unknown dynamic. Radial basis function neural networks (RBFNNs) approximate the nonlinear dynamic, and the estimated weight matrix is updated by utilizing the measurable state information. Then, using the stochastic Lyapunov analysis method, conditions for attaining consensus are derived on the consensus gain and the weight of the RBFNNs. The main findings of this paper are as follows: the consensus control of multi-agent systems under more complicated and practical circumstances, including unknown nonlinear dynamics, Markov switching topologies and communication noises, is discussed; the nonlinear dynamic is approximated based on the RBFNNs and the local interfered relative information; the consensus gain k must be small to guarantee the consensus performance; and, finally, the proposed algorithm is validated by numerical simulations.


1. Introduction

Multi-agent systems consist of multiple interacting agents with simple structures and limited abilities; collectively, they can complete complex tasks that a single agent or a monolithic system cannot accomplish alone. Because of their strong autonomy and low cost, multi-agent systems are widely applied in various fields, such as spacecraft [1,2], wireless sensor networks [3], biological systems [4], microgrids [5,6] and so on.
As the fundamental research issue of multi-agent systems, consensus control refers to designing a valid distributed protocol that prompts all agents to asymptotically reach an agreement at the state (position, velocity) level [7]. There have been numerous significant research studies focusing on the consensus control of multi-agent systems. Olfati-Saber et al. considered the consensus problems of several common linear continuous- and discrete-time multi-agent systems and derived the conditions for achieving consensus [8,9,10,11,12,13], but they assumed that the communications among agents are ideal, i.e., each agent receives accurate state information from its neighboring agents. However, because of various factors, such as measurement errors, channel fading, thermal noises and so on, the information each agent receives from its neighboring agents inevitably undergoes interference from communication noises. Systems with communication noises cannot achieve consensus by using the conventional state feedback protocols adopted in [8,9,10,11,12,13]. Based on stochastic approximation theory, Huang et al. [14] introduced the decreasing step size a(k) to reduce the effects of the communication noises and obtained the mean square consensus conditions of discrete-time systems in three cases: a symmetric network, a general network and a leader-following network. Following Huang's research, Li and Ren et al. [15,16,17] introduced the time-varying gain a(t) to attenuate the communication noises and established the sufficient and necessary conditions for achieving mean square consensus of integer-order continuous multi-agent systems.
Subsequently, numerous significant works emerged that focused on consensus control of multi-agent systems with communication noises in distinct scenarios, such as Markov switching topologies [18,19,20,21], time delays [22,23], event-triggered control [24,25], heterogeneous agents [26,27], cooperation–competition interactions [28,29] and so on.
The studies mentioned above all concern multi-agent systems with linear dynamics. However, because of the complexity and uncertainty of the actual environment, most practical systems are nonlinear, and their dynamics may even be unknown. Although there has been research concerning the consensus control of nonlinear multi-agent systems with communication noises, such as [30,31,32,33], the noises considered in these papers are multiplicative noises, the non-linear parts of the systems and the coefficients of the noises are usually assumed to satisfy the Lipschitz condition ||f(x_i) − f(x_j)|| ≤ L||x_i − x_j||, and the Lipschitz constant L is known, which means that information on the non-linear dynamic is available. In reference [34], although additive noises are considered, the Lipschitz constant is still essential knowledge before the non-linear dynamic can be established. For non-linear multi-agent systems with coupling parametric uncertainties and stochastic measurement noises, Huang [35] designed a distributed adaptive mechanism consisting of two parts: adaptive individual estimates of the uncertainties and the local relative information multiplied by a time-varying gain, which is guaranteed to solve the asymptotic consensus of non-linear multi-agent systems with parametric uncertainties. The results show that the proposed protocol prevents the inherent nonlinearities from incurring finite-time explosion and ensures almost sure consensus.
Currently, neural networks (NNs) have increasingly widespread applications in many fields, such as artificial intelligence, system control, image recognition, language processing and so on. In the field of system control, neural networks play an important role in model predictive control [36], fuzzy control [37] and adaptive control [38], and they have become powerful tools in the stability analysis of nonlinear multi-agent systems because of their excellent approximation abilities [38,39,40,41,42,43]. In ref. [38], Chen et al. proposed an NN control approach for nonlinear multi-agent systems with time delay. The nonlinear dynamic is approximated by RBFNNs and the time delay is compensated by a Lyapunov–Krasovskii functional. Ma et al. [39] considered nonlinear multi-agent systems with time delay and external noises. Meng et al. [40] proposed a robust consensus protocol which can alleviate the effects of external disturbances and the residual error generated by the approximated term, and which guarantees the uniformly ultimately bounded consensus of the system. Mo et al. [41] proposed an integer-order adaptive protocol for fractional-order multi-agent systems with unknown nonlinear dynamics and external disturbances, and the effectiveness of the proposed protocol is verified by constructing the Riemann–Liouville fractional-order integral. The finite-time tracking consensus problems of high-order nonlinear multi-agent systems were studied in refs. [42,43]. Although significant results have been obtained in the consensus control of multi-agent systems [38,39,40,41,42,43], these works did not take into account the existence of communication noises.
It is, therefore, the purpose of the current paper to resolve the consensus problems of multi-agent systems with unknown nonlinear dynamics and communication noises. Inspired by the works above, we consider the consensus problems of unknown nonlinear multi-agent systems with communication noises and Markov switching topologies. Firstly, the interfered local relative information is constructed to ensure the consensus performance, and the RBFNNs are used to model the nonlinearity. Finally, by using stochastic stability theory, conditions for reaching consensus are derived. The main contributions of this paper are as follows: (1) the non-linear dynamic is unknown, and it is approximated by the RBFNNs; (2) conditions for reaching consensus in terms of the consensus gain and the weight of the RBFNNs are obtained, and the consensus gain k needs to be small enough to guarantee the consensus performance; (3) this paper considers a communication environment with communication noises and Markov switching topologies, which is more practical, since communication noise is inevitable.
The rest of this paper is organized as follows. The problem description and preliminary knowledge, including graph theory, RBFNNs and the Markov process, are introduced in Section 2. Section 3 introduces how to design the protocol and conducts a detailed convergence analysis. Three simulation results are shown in Section 4 and conclusions are drawn in Section 5.
Notations: R denotes the set of real numbers and R^{N×N} denotes the set of N × N matrices; I_N denotes the N × N identity matrix; 0 represents the zero vector or matrix of corresponding dimension; λ(L) represents an eigenvalue of the matrix L; λ_max(L) and λ_min(L) represent the maximum and minimum eigenvalues of matrix L, respectively; ||·|| denotes the Frobenius norm; ⊗ denotes the Kronecker product; tr(P) denotes the trace of matrix P; o(x) represents an infinitesimal of the order of x.

2. Problem Description and Preliminaries

2.1. Graph Theory

The interaction and communication of agents can be characterized by an undirected graph G. An undirected graph G = (V, E, A) includes the vertex set V = {v_i | i = 1, …, N}, where v_i represents the i-th agent, the edge set E = {(v_j, v_i) | j ∈ N_i}, and the adjacency matrix A ∈ R^{N×N} of G, whose (i, j)-th element satisfies a_ij = 1 if edge (v_j, v_i) ∈ E and a_ij = 0 otherwise. Agent j is called a neighbor of agent i if i can obtain the state information of j, i.e., a_ij = 1, and N_i = {j | a_ij = 1} represents the neighbor set of i. The degree of i is denoted as d_i = Σ_{j∈N_i} a_ij, the degree matrix is D = diag{d_1, d_2, …, d_N}, and the Laplacian matrix L is defined as L = D − A. G is connected if, for each pair of agents j and i, there exists a path (j, j_1), (j_1, j_2), …, (j_s, i) [44].
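As a concrete illustration of these definitions, the adjacency, degree and Laplacian matrices can be assembled in a few lines. The sketch below (Python with NumPy, using a hypothetical 4-agent ring rather than any of this paper's topologies) also checks that L·1_N = 0:

```python
import numpy as np

# Hypothetical 4-agent undirected ring: a_ij = a_ji = 1 for each edge.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
N = 4

A = np.zeros((N, N))                 # adjacency matrix
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

D = np.diag(A.sum(axis=1))           # degree matrix D = diag{d_1, ..., d_N}
L = D - A                            # Laplacian L = D - A

# For a connected undirected graph, L is positive semidefinite,
# L 1_N = 0, and the second-smallest eigenvalue is positive.
eigvals = np.sort(np.linalg.eigvalsh(L))
```

For this ring the sorted spectrum is {0, 2, 2, 4}, consistent with the eigenvalue ordering in Lemma 1.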
According to the undirected property, the following lemma exists:
Lemma 1
([10]). For a connected undirected graph G, 1_N is the eigenvector of L associated with the eigenvalue 0, and all the eigenvalues of L can be ordered as λ_N(L) ≥ ⋯ ≥ λ_2(L) > λ_1(L) = 0. Moreover, there exists an orthogonal matrix T such that T^{−1} L T = diag{λ_1(L), λ_2(L), …, λ_N(L)}.

2.2. Problem Description

Consider unknown nonlinear multi-agent systems whose dynamics are described as follows:
ẋ_i(t) = h_i(x_i(t)) + u_i(t),
where x_i(t) ∈ R^n and u_i(t) ∈ R^n represent the state and input of agent i, respectively, h_i(·): R^n → R^n is the unknown nonlinear vector function, and i = 1, 2, …, N.
The state information of i-th agent received from neighbor agent j is:
ξ i j ( t ) = x j ( t ) + ψ i j w i j ( t ) ,
where ψ_ij = diag{ψ_ij1, ψ_ij2, …, ψ_ijn} ∈ R^{n×n} and w_ij = (w_ij1, w_ij2, …, w_ijn)^T ∈ R^n; w_ijk and ψ_ijk, k = 1, 2, …, n, i, j = 1, 2, …, N, are independent standard white noises and finite noise intensities, respectively.
Assumption 1.
The system (1) is measurable; that is, each agent can measure the state information of its neighbor agents, and interference is caused by the communication noises.
Definition 1.
If there exists a distributed protocol under which the following equation holds,
lim_{t→∞} E||x_i(t) − x_j(t)||² = 0,
for all i ≠ j, i, j = 1, 2, …, N, we say systems (1)–(2) can reach asymptotic mean square consensus under the protocol.
Remark 1.
Denote e_i(t) = Σ_{j∈N_i} a_ij(x_i(t) − x_j(t)), e(t) = (L ⊗ I_n)x(t), x(t) = (x_1^T(t), …, x_N^T(t))^T, e(t) = (e_1^T(t), …, e_N^T(t))^T. Obviously, Equation (3) is equivalent to lim_{t→∞} E||e_i(t)||² = 0. Basically, Definition 1 is a rigorous description of consensus. From the point of view of practical control, we can relax Definition 1 as follows.
Definition 2.
For any chosen ε > 0 , if there exists a distributed protocol that satisfies
lim_{t→∞} E||e_i(t)||² = lim_{t→∞} E||Σ_{j∈N_i} a_ij(x_i(t) − x_j(t))||² ≤ ε,
 the consensus is achieved.
Remark 2.
In [15], all agents converge to a common random variable, which is determined by the initial states of the agents. Unlike the mean square average consensus of linear multi-agent systems in [15], however, those results are not available for systems with a nonlinear dynamic. In this paper, we are committed to the final states of all agents achieving consensus without converging to a fixed random variable.
For an ergodic Markov process σ ( t ) with limited states ( s 1 , s 2 , , s m ) , which is defined in a complete probability space ( Ω , F , P ) , the state transition probability matrix is defined as:
P i j ( t ) = P { σ ( t + s ) = s j | σ ( s ) = s i } = P { σ ( t ) = s j | σ ( 0 ) = s i } ,
where i ≠ j, i, j = 1, …, m.
Since the Markov process is ergodic, there exists a stationary distribution ( π 1 , π 2 , , π m ) , and i = 1 m π i = 1 .
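The stationary distribution can be computed directly from the transition matrix as the normalized left eigenvector associated with eigenvalue 1. A small sketch (Python/NumPy; the matrix below is the one used later in Example 1 of this paper):

```python
import numpy as np

# Transition probability matrix from Example 1 (rows sum to 1).
P = np.array([[0.20, 0.50, 0.30],
              [0.50, 0.15, 0.35],
              [0.40, 0.10, 0.50]])

# Stationary distribution: pi P = pi with sum(pi) = 1, i.e. the left
# eigenvector of P (an eigenvector of P^T) for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(vals - 1.0))
pi = np.real(vecs[:, idx])
pi = pi / pi.sum()                   # normalize so that sum_i pi_i = 1
```

Since the chain is ergodic, all components of pi are strictly positive.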
This paper considers that the multi-agent system’s topologies switch among G 1 , G 2 , , G m , and the switching is dominated by an ergodic Markov process.
Assumption 2.
Each graph G_i, i = 1, 2, …, m, is disconnected, but their union graph Ḡ is connected.

2.3. RBFNNs

Neural networks have an excellent ability to approximate unknown functions, and RBFNNs are extraordinarily effective in the analysis of consensus control. An unknown nonlinear function h(x) = (h_1(x), h_2(x), …, h_n(x))^T ∈ R^n can be rewritten in the approximated form
h ( W , x ) = W T Φ ( x ) ,
where h(W, x) = (h_1(W, x), h_2(W, x), …, h_n(W, x))^T, W(t) ∈ R^{κ×n} is the weight matrix, Φ(x) = (ϕ_1(x), ϕ_2(x), …, ϕ_κ(x))^T is the basis function vector, and
ϕ_i(x) = exp[−(x − ϑ_i)^T(x − ϑ_i)/φ_i²],
where ϑ_i represents the center of the receptive field, x = (x_1, x_2, …, x_n)^T ∈ R^n is the input of the RBFNNs, and φ_i is the width of the Gaussian function. Figure 1 shows the structure of the RBFNNs, where x_i, ψ_j and h_i(W, x), i = 1, 2, …, n, j = 1, 2, …, κ, represent the input layer, neuron layer and output layer, respectively.
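The Gaussian basis vector Φ(x) defined above is straightforward to evaluate. A minimal sketch (Python/NumPy, with hypothetical random centers and unit widths; the paper's simulation settings differ):

```python
import numpy as np

def rbf_basis(x, centers, widths):
    """Gaussian basis vector: phi_i(x) = exp(-(x - v_i)^T (x - v_i) / w_i^2)."""
    diffs = x[None, :] - centers          # shape (kappa, n)
    sq_dist = np.sum(diffs * diffs, axis=1)
    return np.exp(-sq_dist / widths**2)

rng = np.random.default_rng(0)
centers = rng.uniform(-6.0, 6.0, size=(5, 2))  # kappa = 5 centers in R^2
widths = np.ones(5)                            # unit Gaussian widths

phi = rbf_basis(np.array([0.5, -1.0]), centers, widths)
```

Each entry of Φ(x) lies in (0, 1], and an input located exactly at a center ϑ_i yields ϕ_i = 1.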
It has been proved that any unknown nonlinear function can be approximated by a neural network to any accuracy over a compact set Ξ if the network contains enough neurons [45]. Therefore, for a given unknown nonlinear function h(x) and any approximation error ϵ, the following holds:
h(x) = W*^T Φ(x) + ϵ,
where W* is the ideal weight matrix and ϵ satisfies ||ϵ|| ≤ ϵ_N. In practical terms, the ideal weight matrix W* is unknown and needs to be estimated. Define Ŵ(t) as the estimate of W*; then h(x) can be approximated in the following form:
h(x) ≈ Ŵ^T(t)Φ(x),
where Ŵ^T(t) is updated online to approximate the ideal weight matrix by using the state information of x.
Lemma 2
([46]). For a continuous and bounded function V(t) > 0, if V̇(t) satisfies V̇(t) ≤ −αV(t) + β with α > 0, β > 0, then V(t) ≤ V(0)e^{−αt} + (β/α)(1 − e^{−αt}).
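Lemma 2 is the standard comparison principle; for completeness, a one-line derivation via an integrating factor:

```latex
\frac{d}{dt}\left(e^{\alpha t} V(t)\right)
  = e^{\alpha t}\left(\dot V(t) + \alpha V(t)\right)
  \le \beta e^{\alpha t}
\;\Longrightarrow\;
e^{\alpha t} V(t) - V(0) \le \frac{\beta}{\alpha}\left(e^{\alpha t} - 1\right)
\;\Longrightarrow\;
V(t) \le V(0)\,e^{-\alpha t} + \frac{\beta}{\alpha}\left(1 - e^{-\alpha t}\right).
```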

3. Consensus Analysis

In this paper, we aim to derive an appropriate adaptive control protocol based on the local interfered relative state information and the RBFNNs for each agent of the system, such that the effect of communication noises is reduced and the consensus of non-linear multi-agent systems (1) can be achieved simultaneously.
For convenience, we will omit ( t ) if there is no confusion in the following paper.
We denote the local consensus error of system (1) as
e_{σ,i} = Σ_{j∈N_i} a_{σ,ij}(x_i − x_j),
where σ represents the Markov process. Since the communication topology is switching and is dominated by the switching signal σ, a_{σ,ij} is adopted to indicate the internal communication among agents.
Next, we denote a positive semidefinite scalar function
V x = 1 2 x T ( L σ I n ) x ,
where x = ( x 1 T , x 2 T , , x N T ) T .
Since the communication topology G_σ is disconnected, according to graph theory and matrix theory, L_σ has at least two zero eigenvalues. If we assume the number of zero eigenvalues is μ, then the eigenvalues can be written as λ(L_σ) = diag{0, …, 0, λ_{μ+1}(L_σ), …, λ_N(L_σ)}, and the eigenvalues of (L_σ ⊗ I_n) can be written as λ(L_σ ⊗ I_n) = diag{0, …, 0, λ_{μ+1} I_n, …, λ_N I_n}, where λ_{μ+1} represents the minimum nonzero eigenvalue of L_σ; then
V_{σ,x} = (1/2) x^T (L_σ ⊗ I_n) x = (1/2) x^T T̄^{−1} Λ T̄ x = (1/2) x^T T̄^{−1} Λ Λ̂ Λ T̄ x = (1/2) x^T (T̄^{−1} Λ T̄)(T̄^{−1} Λ̂ T̄)(T̄^{−1} Λ T̄) x = (1/2) x^T (L_σ ⊗ I_n) T̄^{−1} Λ̂ T̄ (L_σ ⊗ I_n) x = (1/2) e_σ^T D_σ e_σ,
where T̄ = T_σ ⊗ I_n, Λ = diag{0, …, 0, λ_{μ+1}(L_σ)I_n, …, λ_N(L_σ)I_n}, Λ̂ = diag{0, …, 0, (1/λ_{μ+1}(L_σ))I_n, …, (1/λ_N(L_σ))I_n}, e_σ = (e_1^T, e_2^T, …, e_N^T)^T, and D_σ = T̄^{−1} Λ̂ T̄.
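The spectral structure used above is easy to verify numerically: the Laplacian of a disconnected undirected graph has one zero eigenvalue per connected component, hence at least two. A sketch (Python/NumPy, hypothetical 4-node graph with two components {0, 1} and {2, 3}, not one of the paper's topologies):

```python
import numpy as np

# Two disjoint edges: components {0, 1} and {2, 3}.
A = np.zeros((4, 4))
A[0, 1] = A[1, 0] = 1.0
A[2, 3] = A[3, 2] = 1.0

L = np.diag(A.sum(axis=1)) - A       # Laplacian of the disconnected graph
eigvals = np.sort(np.linalg.eigvalsh(L))
n_zero = int(np.sum(np.abs(eigvals) < 1e-10))   # multiplicity of eigenvalue 0
```

Here the multiplicity of the zero eigenvalue equals the number of components, two.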
If the dynamic h_i(x_i) were exactly known, the communication noises did not exist, and the communication topology of system (1) were connected and fixed, taking the derivative of V_x would give
V̇_x = ẋ^T(L ⊗ I_n)x = x^T(L ⊗ I_n)ẋ = Σ_{i=1}^N e_i^T ẋ_i = Σ_{i=1}^N e_i^T (h_i(x_i(t)) + u_i(t)),
and the consensus problem of system (1) could be solved by designing the simple control protocol u_i = −k e_i − h_i(x_i), k > 0; then V̇_x ≤ 0, and V̇_x = 0 only when e_i = 0, i = 1, 2, …, N. V_x can be regarded as a Lyapunov function, and since V̇_x < 0 whenever e ≠ 0, according to Lyapunov stability theory, system (1) achieves consensus as t → ∞.
However, due to the existence of the unknown dynamic h_i(x_i), communication noises and topology switching, this simple control protocol is no longer effective for the consensus control of system (1).
Based on the above analysis, the proposed distributed adaptive control scheme is as follows.
u_i = −k Σ_{j∈N_i} a_{σ,ij}(x_i − ξ_ij) − h_i(x_i).
Since h i ( x i ) are unknown nonlinear functions, protocol (9) cannot be used directly for the analysis. According to the description of Section 2.3, for any given constant ϵ i , h i ( x i ) can be rewritten as:
h_i(x_i) = W_i*^T Φ_i(x_i) + ϵ_i(x_i) ≈ Ŵ_i^T Φ_i(x_i),
where W_i*^T is the ideal weight matrix, Ŵ_i^T is the estimate of W_i*^T, Φ_i(x_i) = [ϕ_1(x_i), …, ϕ_κ(x_i)]^T, ϕ_ρ(x_i) = exp[−(x_i − ϑ_ρ)^T(x_i − ϑ_ρ)/φ_ρ²], κ represents the number of neurons, and ||ϵ_i|| ≤ ϵ_ci.
Remark 3.
In [30,32,33], the nonlinear dynamics are usually assumed to satisfy the Lipschitz condition ||f(x_i) − f(x_j)|| ≤ L||x_i − x_j||, which means that the Lipschitz constant is known, and the conditions for achieving consensus contain the Lipschitz constant L. However, because the nonlinear functions f_i(x_i(t)) here are unknown, the Lipschitz constant is also unknown, and the protocols used in these references cannot be applied to system (1). Therefore, the RBFNNs are adopted to approximate h_i(x_i(t)) in this paper because of their excellent approximation ability.
Let W ^ i ( t ) be the estimate of the ideal weight matrix W i * ; then, based on the RBFNNs, we proposed the following adaptive control protocol:
u_i = −k ê_{σ,i} − Ŵ_i^T Φ_i(x_i),
Ŵ̇_i = γ_i Φ_i(x_i) e_{σ,i}^T(t), if tr(Ŵ_i^T Ŵ_i) < W_i^max, or tr(Ŵ_i^T Ŵ_i) = W_i^max and e_{σ,i}^T Ŵ_i^T Φ_i(x_i) < 0;
Ŵ̇_i = γ_i Φ_i(x_i) e_{σ,i}^T − γ_i (e_{σ,i}^T Ŵ_i^T Φ_i(x_i) / tr(Ŵ_i^T Ŵ_i)) Ŵ_i, if tr(Ŵ_i^T Ŵ_i) = W_i^max and e_{σ,i}^T Ŵ_i^T Φ_i(x_i) ≥ 0,
where ê_{σ,i}(t) = Σ_{j∈N_i} a_{σ,ij}(x_i(t) − ξ_ij(t)), W_i^max > 0 is a predetermined constant, k = k_1 + k_2, and k_1, k_2, γ_i ∈ R are positive parameters to be determined.
Because the ideal weight matrix W_i*^T is unknown, the neural networks are introduced to estimate it, and the nonlinear dynamic h_i(x_i) is replaced by Ŵ_i^T Φ_i(x_i). The estimated matrix Ŵ_i^T is then updated by using the measurable state information.
In protocol (11), −k ê_{σ,i} is the feedback term, which is adopted to drive all agents to achieve consensus. It contains the state information of the i-th agent and its neighbor agents, which is the essence of distributed control. When consensus is eventually achieved, inequality (4) in Definition 2 holds.
The update law (12) drives Ŵ̇_i^T to zero as time tends to infinity, which means that Ŵ_i^T estimates the ideal matrix W_i*^T accurately; namely, Ŵ_i^T Φ_i(x_i) approximates the non-linear dynamic h_i(x_i) accurately. The term e_{σ,i}^T Ŵ_i^T Φ_i(x_i) can be seen as the scalar product of e_{σ,i} and Ŵ_i^T Φ_i(x_i): when e_{σ,i}^T Ŵ_i^T Φ_i(x_i) > 0, the angle between e_{σ,i} and Ŵ_i^T Φ_i(x_i) is smaller than 90°, and Ŵ_i^T should be decreased; otherwise, Ŵ_i^T should be increased. When the angle between e_{σ,i} and Ŵ_i^T Φ_i(x_i) is adjusted to 90°, Ŵ̇_i^T = 0, and the ideal weight matrix is estimated well; namely, the non-linear dynamic h_i(x_i) is approximated well.
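The projection-type update law (12) can be sketched as a single function returning Ŵ̇_i. This is our own illustrative rendering (Python/NumPy), not the authors' code, and the variable names are hypothetical; on the boundary tr(Ŵᵀ Ŵ) = W_max with an outward-pointing gradient, the projection term keeps d tr(Ŵᵀ Ŵ)/dt = 2 tr(Ŵᵀ Ŵ̇) at zero:

```python
import numpy as np

def weight_update(W_hat, phi, e, gamma, W_max):
    """Right-hand side of the projection-type update law (a sketch of (12)).

    W_hat: (kappa, n) current weight estimate
    phi:   (kappa,)  RBF basis vector Phi_i(x_i)
    e:     (n,)      local consensus error e_{sigma,i}
    """
    grad = gamma * np.outer(phi, e)          # gamma * Phi(x) e^T
    tr = np.trace(W_hat.T @ W_hat)
    s = e @ (W_hat.T @ phi)                  # e^T W_hat^T Phi(x)
    if tr >= W_max and s >= 0:
        # project the update back onto the ball tr(W^T W) <= W_max
        grad = grad - gamma * (s / tr) * W_hat
    return grad
```

Inside the ball, the update is the plain gradient-type term γ_i Φ_i(x_i) e_{σ,i}ᵀ; the projection branch only activates on the boundary.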
Substituting the adaptive control protocol (11)–(12) into system (1), we have
ẋ_i(t) = h_i(x_i) − k ê_{σ,i} − Ŵ_i^T Φ_i(x_i) = h_i(x_i) − k e_{σ,i} − Ŵ_i^T(t) Φ_i(x_i) − k Ψ_{σ,i} w_i,
where w_i = (w_i11(t), w_i12(t), …, w_i1n(t), …, w_iN1(t), …, w_iNn(t))^T and
Ψ_{σ,i} = [a_{σ,i1} diag{ψ_i11, …, ψ_i1n}, …, a_{σ,iN} diag{ψ_iN1, …, ψ_iNn}]. Then, according to the theory of stochastic differential equations, (13) can be rewritten as
dx_i = h_i(x_i)dt − k e_{σ,i}dt − Ŵ_i^T Φ_i(x_i)dt − k Ψ_{σ,i} dB_i,
where B i = ( B i 11 ( t ) , , B i 1 n ( t ) , , B i N 1 ( t ) , , B i N n ( t ) ) T , B i j k ( t ) ,   i , j = 1 , , N ,   k = 1 , , n , are standard Brownian motions.
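In simulation, this closed-loop stochastic differential equation is typically discretized by the Euler–Maruyama scheme, with the Brownian increment dB_i approximated by √Δt times a standard normal vector. A hedged sketch (Python/NumPy; the arguments stand in for the paper's terms and their dimensions are hypothetical):

```python
import numpy as np

def em_step(x, h_val, e_loc, nn_out, k, Psi, dt, rng):
    """One Euler-Maruyama step of dx = (h - k*e - W^T Phi) dt - k Psi dB.

    x, h_val, e_loc, nn_out: vectors in R^n; Psi: (n, N*n) noise-intensity
    matrix; dB is approximated by sqrt(dt) * standard normal increments.
    """
    drift = h_val - k * e_loc - nn_out
    dB = np.sqrt(dt) * rng.standard_normal(Psi.shape[1])
    return x + drift * dt - k * (Psi @ dB)
```

With Psi = 0 this reduces to the deterministic Euler step for the noise-free dynamics.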
Theorem 1.
For the nonlinear multi-agent systems described in (1)–(2) under the control protocol (11)–(12) and any consensus error ε > 0, under Assumptions 1–2, if the consensus gains satisfy k_1, k_2 = o(ε), the predetermined constants satisfy W_i^max = o(ε) and ϵ_ci = o(ε²), Ŵ(0) = 0, and the parameters take appropriate values, then all agents will reach a final consensus for any initial state.
Proof. 
Choosing the Lyapunov candidate:
V = E[V_x] + (1/2) E[Σ_{i=1}^N tr((1/γ_i) W̃_i^T W̃_i)],
V_l = E[V_x 1_{σ(t)=l}] + (1/2) E[Σ_{i=1}^N tr((1/γ_i) W̃_i^T W̃_i) 1_{σ(t)=l}],
where V_x = (1/2) x^T (L_σ ⊗ I_n) x, x = (x_1^T, x_2^T, …, x_N^T)^T, W̃_i = W_i* − Ŵ_i, and 1_{σ(t)=l} = 1 if σ(t) = l and 1_{σ(t)=l} = 0 otherwise. Obviously, V(t) = Σ_{l=1}^m V_l(t).
From (7), we obtain
(λ̄_min(D_σ)/2) Σ_{i=1}^N ||e_i||² ≤ V_x ≤ (λ_max(D_σ)/2) Σ_{i=1}^N ||e_i||²,
where λ̄_min(D_σ) and λ_max(D_σ) represent the minimum nonzero eigenvalue and the maximum eigenvalue of D_σ, respectively.
Substituting protocol (11)–(12) into (1):
dV_l/dt = E[(Σ_{i=1}^N (−k||e_{σ,i}||² + e_{σ,i}^T h_i(x_i) − e_{σ,i}^T Ŵ_i^T Φ_i(x_i)) + (k²/2) Σ_{i=1}^N tr(Ψ_{σ,i}^T Ψ_{σ,i}) − Σ_{i=1}^N tr((1/γ_i) W̃_i^T Ŵ̇_i)) 1_{σ(t)=l}] + E[V_x (d/dt) 1_{σ(t)=l}] + E[(1/2) Σ_{i=1}^N tr((1/γ_i) W̃_i^T W̃_i) (d/dt) 1_{σ(t)=l}]
≤ E[(Σ_{i=1}^N (−k||e_{σ,i}||² + e_{σ,i}^T h_i(x_i) − e_{σ,i}^T(t) Ŵ_i^T Φ_i(x_i)) + k² Σ_{i=1}^N tr(Ψ_{σ,i}^T Ψ_{σ,i}) − Σ_{i=1}^N tr((1/γ_i) W̃_i^T Ŵ̇_i)) 1_{σ(t)=l}] + Σ_{q=1}^m p_{ql} V_l.
Then,
dV/dt = E[Σ_{i=1}^N (−k||e_{σ,i}||² + e_{σ,i}^T h_i(x_i) − e_{σ,i}^T Ŵ_i^T Φ_i(x_i)) + k² Σ_{i=1}^N tr(Ψ_{σ,i}^T Ψ_{σ,i}) − Σ_{i=1}^N tr((1/γ_i) W̃_i^T Ŵ̇_i)] + Σ_{l=1}^m Σ_{q=1}^m p_{ql} V_l(t)
= E[Σ_{i=1}^N (−k||e_{σ,i}||² + e_{σ,i}^T W_i*^T Φ_i(x_i) + e_{σ,i}^T ϵ_i − e_{σ,i}^T Ŵ_i^T Φ_i(x_i)) + k² Σ_{i=1}^N tr(Ψ_{σ,i}^T Ψ_{σ,i}) − Σ_{i=1}^N tr((1/γ_i) W̃_i^T Ŵ̇_i)]
= E[Σ_{i=1}^N (−k||e_{σ,i}||² + e_{σ,i}^T W̃_i^T Φ_i(x_i) + e_{σ,i}^T(t) ϵ_i) + k² Σ_{i=1}^N tr(Ψ_{σ,i}^T Ψ_{σ,i}) − Σ_{i=1}^N tr((1/γ_i) W̃_i^T Ŵ̇_i)]
= E[Σ_{i=1}^N (−k||e_i||² + e_{σ,i}^T ϵ_i) + k² Σ_{i=1}^N tr(Ψ_{σ,i}^T Ψ_{σ,i}) − Σ_{i=1}^N tr(W̃_i^T ((1/γ_i) Ŵ̇_i − Φ_i(x_i) e_{σ,i}^T))]
= E[Σ_{i=1}^N (−k_1||e_i||² − k_2||e_i||² + e_{σ,i}^T ϵ_i) + k² Σ_{i=1}^N tr(Ψ_{σ,i}^T Ψ_{σ,i}) − Σ_{i=1}^N tr(W̃_i^T(t) ((1/γ_i) Ŵ̇_i − Φ_i(x_i) e_{σ,i}^T))].
With reference to Theorem 1 in [39], we can obtain
tr(W_i*^T W_i*) ≤ W_i^max,  tr(Ŵ_i^T Ŵ_i) ≤ W_i^max,
E[tr(W̃_i^T ((1/γ_i) Ŵ̇_i − Φ_i(x_i) e_{σ,i}^T))] ≥ 0.
According to Young's inequality, we obtain
−k_2||e_{σ,i}||² + e_{σ,i}^T ϵ_i ≤ ϵ_ci²/(4k_2).
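This bound follows by completing the square: since ‖ϵ_i‖ ≤ ϵ_ci,

```latex
-k_2\|e_{\sigma,i}\|^2 + e_{\sigma,i}^T \epsilon_i
\le -k_2\|e_{\sigma,i}\|^2 + \epsilon_{ci}\,\|e_{\sigma,i}\|
= -\left(\sqrt{k_2}\,\|e_{\sigma,i}\| - \frac{\epsilon_{ci}}{2\sqrt{k_2}}\right)^{\!2}
  + \frac{\epsilon_{ci}^2}{4k_2}
\le \frac{\epsilon_{ci}^2}{4k_2}.
```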
Since the Markov process is ergodic, it travels through all states and eventually tends to the stationary distribution π, and because the union topology of G_i, i = 1, 2, …, m, is connected, there exists an orthogonal matrix T such that T^{−1} L_u T = diag{λ_1(L_u), λ_2(L_u), …, λ_N(L_u)}, where L_u = Σ_{j=1}^m π_j L(G_j).
Thus, when t → ∞,
dV(t)/dt ≤ E[−k_1 Σ_{i=1}^N ||e_i||²] + Σ_{i=1}^N (ϵ_ci²/(4k_2) + k² tr(Ψ_i^T Ψ_i))
≤ −(2k_1/λ_max(D)) E[V_x] + Σ_{i=1}^N (ϵ_ci²/(4k_2) + k² tr(Ψ_i^T Ψ_i))
= −(2k_1/λ_max(D)) E[V_x] + Σ_{i=1}^N (ϵ_ci²/(4k_2) + k² tr(Ψ_i^T Ψ_i)) − Σ_{i=1}^N 4k_1 W_i^max/(γ_i λ_max(D)) + Σ_{i=1}^N 4k_1 W_i^max/(γ_i λ_max(D))
≤ −(2k_1/λ_max(D)) E[V_x + (1/2) Σ_{i=1}^N tr((1/γ_i) W̃_i^T W̃_i)] + Σ_{i=1}^N ϵ_ci²/(4k_2) + Σ_{i=1}^N k² tr(Ψ_i^T Ψ_i) + Σ_{i=1}^N 4k_1 W_i^max/(γ_i λ_max(D))
= −ϱV + Δ,
where ϱ = 2k_1/λ_max(D), Δ = Σ_{i=1}^N (ϵ_ci²/(4k_2) + 4k_1 W_i^max/(γ_i λ_max(D))) + Σ_{i=1}^N k² tr(Ψ_i^T Ψ_i), λ_max(D) = max{λ_max(D_σ), λ_max(D_u)}, tr(Ψ_i^T Ψ_i) = max{tr(Ψ_{σ,i}^T Ψ_{σ,i}), tr(Ψ_{u,i}^T Ψ_{u,i})}, x^T (L_u ⊗ I_n) x = e^T D_u e, and Ψ_{u,i} = Σ_{j=1}^m π_j Ψ_{σ(G_j),i}. From Lemma 2, we obtain
V ≤ V(0)e^{−ϱt} + (Δ/ϱ)(1 − e^{−ϱt}) ≤ V(0)e^{−ϱt} + Δ/ϱ.
Furthermore,
(λ̄_min(D)/2) Σ_{i=1}^N E||e_i||² ≤ E[V_x] ≤ V ≤ V(0)e^{−ϱt} + Δ/ϱ,
where λ̄_min(D) = min{λ̄_min(D_σ), λ_2(D_u)}.
Then,
Σ_{i=1}^N E||e_i||² ≤ (2/λ̄_min(D)) V(0) e^{−ϱt} + 2Δ/(λ̄_min(D) ϱ),
so lim_{t→∞} Σ_{i=1}^N E||e_i||² ≤ Σ_{i=1}^N (λ_max(D)/λ̄_min(D)) (ϵ_ci²/(4k_1k_2) + 4W_i^max/(γ_i λ_max(D)) + (k²/k_1) tr(Ψ_{u,i}^T Ψ_{u,i})), which means that V(x) is bounded as t → ∞. For any ε > 0, by choosing W_i^max = o(ε), k_1, k_2 = o(ε) and ϵ_ci = o(ε²) properly, and because of the arbitrariness of ε, we obtain lim_{t→∞} E||e||² < ε.
Furthermore,
lim_{t→∞} E||(L_u ⊗ I_n)x||² = lim_{t→∞} E||e||² ≤ ε,
which means that asymptotic consensus of system (1) is achieved. □
Remark 4.
The analysis of Theorem 1 shows that the consensus gain k determines the convergence speed and attenuates the communication noises simultaneously. It should be neither too small, in which case the control effect is not obvious, nor too big, in which case the communication noises cannot be attenuated. We can choose appropriate values of k, ϵ_ci and W_i^max to make a trade-off between the convergence rate and the noise reduction.
Remark 5.
The analysis in [16,17] is based on solving the stochastic differential equation directly; unfortunately, this is hard or perhaps even impossible for non-linear systems, and it is obvious that those control protocols are not applicable to system (1). Therefore, the RBFNNs are adopted to approximate the nonlinear dynamic and prove to be extraordinarily effective.
Remark 6.
In [38,39,40,42,43], nonlinear multi-agent systems without communication noises were studied and conditions for achieving consensus were obtained. Those analyses show that the bigger the consensus gain k, the faster the convergence rate. However, the analysis in Theorem 1 shows that, to attenuate the communication noises and achieve a better consensus performance, the consensus gain k should be sufficiently small.

4. Simulations

Example 1.
Consider a nonlinear multi-agent system consisting of six agents. The interaction topologies are shown in Figure 2; the switching of the topologies is dominated by a Markov process whose transition probability matrix is
P = [0.2 0.5 0.3; 0.5 0.15 0.35; 0.4 0.1 0.5],
and the switching frequency is 10/s.
The nonlinear dynamic is described as follows:
ẋ_i1(t) = −x_i1(t) + 2 tanh(r_i1 x_i1(t)) − 1.4 tanh(x_i2(t)) + u_i1(t),
ẋ_i2(t) = −x_i2(t) + 1.8 tanh(x_i1(t)) − 0.7 tanh(r_i2 x_i2(t)) + u_i2(t),
where r i 1 = ( 0.9 , 0.81 , 1.2 , 0.76 , 1.1 , 0.98 ) T , r i 2 = ( 0.95 , 1.1 , 0.85 , 0.7 , 0.9 , 0.6 ) T , i = 1 , 2 , 3 , 4 , 5 , 6 , the noise intensities are σ 12 = σ 21 = σ 65 = σ 56 = 0.5 , σ 23 = σ 32 = σ 45 = σ 54 = 1.5 , σ 25 = σ 52 = σ 14 = σ 41 = 1 , σ 16 = σ 61 = 0.4 , σ 34 = σ 43 = 1.1 . The initial states are x ( 0 ) = [ 2 , 2 , 5 , 2 , 1 , 5 ; 5.7 , 3 , 4 , 6 , 8 , 3 ] T .
In this paper, we choose RBFNNs with 12 neurons and randomly chosen centers distributed uniformly over the range [−6, 6] × [−6, 6], with Φ_i(x_i) = [ϕ_1(x_i), …, ϕ_12(x_i)]^T, ϕ_κ(x_i) = exp[−(x_i − ϑ_κ)^T(x_i − ϑ_κ)/2], κ = 1, 2, …, 12, γ_i = 2, ϵ_ci = 10, W_i^max = 2, i = 1, 2, …, 6.
Figure 3 shows the trajectories of all agents without any control input; Figure 4 and Figure 5 show the states of all agents and the consensus errors when k = 0.5, from which we can see that all agents achieve consensus and the consensus errors tend towards zero eventually. Figure 6 and Figure 7 show the state trajectories of all agents and the consensus errors when k = 6; they show that when k is not small enough, the communication noises are not attenuated and consensus cannot be achieved, which is consistent with the theoretical analysis.
Example 2.
Multi-manipulator systems have wide applications in industrial production, such as automobile assembly and freight handling. In this example, consider a multi-manipulator system consisting of four single-link manipulators. Each manipulator can be regarded as an agent. The communication topologies switch among G_4, G_5, G_6, which are shown in Figure 8, with transition probability matrix P, and the dynamic of each manipulator can be found in [47]:
q̇_i = [−q_i1 + 0.1 q_i2; −1.962 sin(q_i1) + 0.2 q_i2] + [0; 0.4] u_i,
Let all the noise intensities be 1. Choosing RBFNNs with six neurons, let the centers be distributed uniformly over the range [−4, 4] × [−4, 4], with Φ_i(q_i) = [ϕ_1(q_i), …, ϕ_6(q_i)]^T, ϕ_κ(q_i) = exp[−(q_i − ϑ_κ)^T(q_i − ϑ_κ)/2], κ = 1, 2, …, 6, γ_i = 1, ϵ_ci = 10, W_i^max = 2, i = 1, 2, 3, 4. The initial states are set as q(0) = [1, 2, 3, 6; 4, 5, 3, 2]^T.
Figure 9 and Figure 10 show the trajectories of q_i when k = 4 and k = 80, respectively. This shows that taking a smaller value of k yields a better consensus performance.
Example 3.
To further validate the effectiveness of the proposed protocol, we consider a multi-manipulator system consisting of four two-link manipulators, assuming that the communication topologies and the switching are the same as in Example 2. The dynamic of each manipulator can be found in [48]:
M i ( q i ) q ¨ i + V i ( q i , q ˙ i ) q ˙ i + G i ( q i ) = ζ i
where q_i = (q_i1, q_i2)^T represents the position of the i-th manipulator; M_i(q_i) ∈ R^{2×2} is the inertia matrix, and we choose M_i(q_i) = I_2; V_i(q_i, q̇_i) ∈ R^{2×2} is the centripetal–Coriolis matrix with V_i11 = −m_i2 r_i1 r_i2 sin(q_i2) q̇_i2, V_i12 = −m_i2 r_i1 r_i2 sin(q_i2) q̇_i2 − m_i2 r_i1 r_i2 sin(q_i2) q̇_i1, V_i21 = m_i2 r_i1 r_i2 sin(q_i2) q̇_i1, V_i22 = 0; G_i(q_i) ∈ R² is the gravitational vector with G_i1 = (m_i1 + m_i2) g r_i1 sin(q_i1) + m_i2 g l_i2 sin(q_i1 + q_i2), G_i2 = m_i2 g r_i2 sin(q_i1 + q_i2); and l_i1 = 1.5 m, l_i2 = 1 m, m_i1 = 2.5 kg, m_i2 = 1 kg, g = 9.8 m/s².
We denote q ˜ i = ( q i T , q ˙ i T ) T ; then, the system model can be transformed into the form of (1):
q ˜ ˙ i = h ˜ ( q ˜ i ) + u ˜ i .
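The stacking above can be sketched as follows; `vq_plus_g` is a hypothetical placeholder for the model-specific term V ( q i , q ˙ i ) q ˙ i + G i ( q i ) .

```python
import numpy as np

# Stacking x = (q^T, dq^T)^T puts the second-order model into the
# first-order form (1).  With M_i = I_2, q'' = zeta - V dq - G, so the
# stacked derivative is (dq^T, q''^T)^T.
def to_first_order(q, dq, zeta, vq_plus_g):
    ddq = zeta - vq_plus_g(q, dq)
    return np.concatenate([dq, ddq])

# With the nonlinear terms switched off, the positions simply integrate
# the velocities.
x_dot = to_first_order(np.zeros(2), np.ones(2), np.zeros(2),
                       lambda q, dq: np.zeros(2))
```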
Letting all the noise intensities be 2, we choose RBFNNs with six neurons with the same settings as described in Example 2, and γ i = 1 , c i = 10 , W i max = 4 , i = 1 , 2 , 3 , 4 . The initial states are also q ( 0 ) = [ 1 , 2 , 3 , 6 ; 4 , 5 , 3 , 2 ] T and q ˙ ( 0 ) = [ 0 , 0 , 0 , 0 ; 0 , 0 , 0 , 0 ] T .
Figure 11, Figure 12 and Figure 13 show the trajectories of q i , q ˙ i and e i ( t ) when k = 20 , and Figure 14, Figure 15 and Figure 16 show them when k = 160 . Clearly, the multi-manipulator system performs more stably with a smaller k.
The three examples validate the effectiveness of our protocol. Furthermore, although k needs to be small enough, the appropriate value of k depends on the system model: a smaller k performs better in terms of noise attenuation but sacrifices the convergence rate. Therefore, a trade-off should be made in practical applications.
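This trade-off can be illustrated on a toy two-agent example (not the paper's protocol: a scalar noisy consensus loop with Euler–Maruyama integration and hypothetical parameters):

```python
import numpy as np

# Toy two-agent illustration of the gain trade-off (not the paper's
# protocol): each agent applies u_i = k * (x_j + noise - x_i), where the
# neighbor's transmitted state is corrupted by white noise.
def mean_disagreement(k, steps=5000, dt=1e-3, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array([5.0, -5.0])
    err = np.empty(steps)
    for t in range(steps):
        noise = sigma * rng.standard_normal(2) / np.sqrt(dt)
        x = x + k * (x[::-1] + noise - x) * dt   # Euler-Maruyama step
        err[t] = abs(x[0] - x[1])
    return err[-500:].mean()   # steady-state disagreement

# A smaller k attenuates the noise better (smaller residual error)
# but makes the initial disagreement decay more slowly.
small_k = mean_disagreement(0.5)
large_k = mean_disagreement(20.0)
```

For this linear toy model, the disagreement e = x 1 − x 2 obeys an Ornstein–Uhlenbeck-type equation whose stationary variance grows with k, which is the same qualitative effect observed in the simulations above.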

5. Conclusions

In this paper, we consider the consensus control of unknown nonlinear multi-agent systems with communication noises under Markov switching topologies. The unknown dynamic is approximated by utilizing RBFNNs, and the estimated weight matrix is updated online using the local interfered state information. Based on the stochastic Lyapunov analysis method, consensus conditions on the consensus gain and the weight matrix of the NNs are derived, and the results show that the consensus gain should be small enough to guarantee consensus and attenuate the communication noise simultaneously. In the future, we will consider designing a better protocol that preserves the convergence rate while reducing the noise effect.
Additionally, this paper mainly focuses on the design of the distributed control protocol and the derivation of the consensus conditions, regardless of the specific form of the nonlinear dynamic. Although the NNs perform excellently in the control protocol, there exist uncertainties in the NN weights because of insufficient knowledge of the nonlinear dynamic, the Markov switching signal and the communication noises, which may degrade the performance of the protocol or even destabilize the system. Accordingly, quantifying these uncertainties (such as the work in ref. [49]) to help design more efficient control protocols will be another topic of interest that we will pursue in our future work.

Author Contributions

Conceptualization, L.X.; Methodology, S.G. and L.X.; Software, S.G.; Validation, S.G.; Formal analysis, S.G.; Writing—original draft, S.G.; Writing—review & editing, L.X.; Supervision, L.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 52075177, in part by the National Key Research and Development Program of China under Grant 2021YFB3301400, in part by the Research Foundation of Guangdong Province under Grant 2019A050505001 and Grant 2018KZDXM002, and in part by the Guangzhou Research Foundation under Grant 202002030324 and Grant 201903010028.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to the ongoing study.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MAs: Multi-agent systems
RBFNNs: Radial Basis Function Neural Networks
NN: Neural Network

References

1. Bae, J.H.; Kim, Y.D. Design of optimal controllers for spacecraft formation flying based on the decentralized approach. Int. J. Aeronaut. Space 2009, 10, 58–66.
2. Kankashvar, M.; Bolandi, H.; Mozayani, N. Multi-agent Q-Learning control of spacecraft formation flying reconfiguration trajectories. Adv. Space Res. 2023, 71, 1627–1643.
3. Kar, S.; Moura, J.M.F. Distributed consensus algorithms in sensor networks with imperfect communication: Link failures and channel noise. IEEE Trans. Signal Process. 2008, 57, 355–369.
4. Gancheva, V. SOA based multi-agent approach for biological data searching and integration. Int. J. Biol. Biomed. Eng. 2019, 13, 32–37.
5. Zhang, B.; Hu, W.; Ghias, A.M.Y.M. Multi-agent deep reinforcement learning based distributed control architecture for interconnected multi-energy microgrid energy management and optimization. Energy Convers. Manag. 2023, 277, 116647.
6. Fan, Z.; Zhang, W.; Liu, W. Multi-agent deep reinforcement learning based distributed optimal generation control of DC microgrids. IEEE Trans. Smart Grid 2023, 14, 3337–3351.
7. Jadbabaie, A.; Lin, J.; Morse, A.S. Coordination of groups of mobile autonomous agents using nearest neighbor rules. IEEE Trans. Autom. Contr. 2003, 48, 988–1001.
8. Olfati-Saber, R.; Murray, R.M. Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans. Autom. Contr. 2004, 49, 1520–1533.
9. Olfati-Saber, R.; Fax, J.A.; Murray, R.M. Consensus and cooperation in networked multi-agent systems. Proc. IEEE 2007, 95, 215–233.
10. Ren, W.; Beard, R.W. Consensus seeking in multiagent systems under dynamically changing interaction topologies. IEEE Trans. Autom. Contr. 2005, 50, 655–661.
11. Cao, Z.; Li, C.; Wang, X. Finite-time consensus of linear multi-agent system via distributed event-triggered strategy. J. Franklin Inst. 2018, 355, 1338–1350.
12. Ma, Q.; Xu, S. Intentional delay can benefit consensus of second-order multi-agent systems. Automatica 2023, 147, 110750.
13. Zhang, S.; Zhang, Z.; Cui, R. Distributed optimal consensus protocol for high-order integrator-type multi-agent systems. J. Franklin Inst. 2023, 360, 6862–6879.
14. Huang, M.; Manton, J.H. Coordination and consensus of networked agents with noisy measurements: Stochastic algorithms and asymptotic behavior. SIAM J. Control Optim. 2009, 48, 134–161.
15. Li, T.; Zhang, J.F. Mean square average-consensus under measurement noises and fixed topologies: Necessary and sufficient conditions. Automatica 2009, 45, 1929–1936.
16. Cheng, L.; Hou, Z.G.; Tan, M. Necessary and sufficient conditions for consensus of double-integrator multi-agent systems with measurement noises. IEEE Trans. Autom. Contr. 2011, 56, 1958–1963.
17. Cheng, L.; Hou, Z.G.; Tan, M. A mean square consensus protocol for linear multi-agent systems with communication noises and fixed topologies. IEEE Trans. Autom. Contr. 2013, 59, 261–267.
18. Miao, G.; Xu, S.; Zhang, B. Mean square consensus of second-order multi-agent systems under Markov switching topologies. IMA J. Math. Control Inf. 2014, 31, 151–164.
19. Ming, P.; Liu, J.; Tan, S. Consensus stabilization of stochastic multi-agent system with Markovian switching topologies and stochastic communication noise. J. Franklin Inst. 2015, 352, 3684–3700.
20. Li, M.; Deng, F.; Ren, H. Scaled consensus of multi-agent systems with switching topologies and communication noises. Nonlinear Anal. Hybrid. 2020, 36, 100839.
21. Guo, H.; Meng, M.; Feng, G. Mean square leader-following consensus of heterogeneous multi-agent systems with Markovian switching topologies and communication delays. Int. J. Robust Nonlin. 2023, 33, 355–371.
22. Zhang, Y.; Li, R.; Zhao, W. Stochastic leader-following consensus of multi-agent systems with measurement noises and communication time-delays. Neurocomputing 2018, 282, 136–145.
23. Zong, X.; Li, T.; Zhang, J.F. Consensus conditions of continuous-time multi-agent systems with time-delays and measurement noises. Automatica 2019, 99, 412–419.
24. Sun, F.; Shen, Y.; Kurths, J. Mean-square consensus of multi-agent systems with noise and time delay via event-triggered control. J. Franklin Inst. 2020, 357, 5317–5339.
25. Xing, M.; Deng, F.; Li, P. Event-triggered tracking control for multi-agent systems with measurement noises. Int. J. Syst. Sci. 2021, 52, 1974–1986.
26. Guo, S.; Mo, L.; Yu, Y. Mean-square consensus of heterogeneous multi-agent systems with communication noises. J. Franklin Inst. 2018, 355, 3717–3736.
27. Chen, W.; Ren, G.; Yu, Y. Mean-square output consensus of heterogeneous multi-agent systems with communication noises. IET Control Theory Appl. 2021, 15, 2232–2242.
28. Wang, C.; Liu, Z.; Zhang, A. Stochastic bipartite consensus for second-order multi-agent systems with communication noise and antagonistic information. Neurocomputing 2023, 527, 130–142.
29. Du, Y.; Wang, Y.; Zuo, Z. Bipartite consensus for multi-agent systems with noises over Markovian switching topologies. Neurocomputing 2021, 419, 295–305.
30. Zhou, R.; Li, J. Stochastic consensus of double-integrator leader-following multi-agent systems with measurement noises and time delays. Int. J. Syst. Sci. 2019, 50, 365–378.
31. Zhang, R.; Zhang, Y.; Zong, X. Stochastic leader-following consensus of discrete-time nonlinear multi-agent systems with multiplicative noises. J. Franklin Inst. 2022, 359, 7753–7774.
32. Zong, X.; Li, T.; Zhang, J.F. Consensus of nonlinear multi-agent systems with multiplicative noises and time-varying delays. In Proceedings of the 2018 IEEE Conference on Decision and Control (CDC), Miami Beach, FL, USA, 17–19 December 2018; pp. 5415–5420.
33. Chen, K.; Yan, C.; Ren, Q. Dynamic event-triggered leader-following consensus of nonlinear multi-agent systems with measurement noises. IET Control Theory Appl. 2023, 17, 1367–1380.
34. Tariverdi, A.; Talebi, H.A.; Shafiee, M. Fault-tolerant consensus of nonlinear multi-agent systems with directed link failures, communication noise and actuator faults. Int. J. Control 2021, 94, 60–74.
35. Huang, Y. Adaptive consensus for uncertain multi-agent systems with stochastic measurement noises. Commun. Nonlinear Sci. 2023, 120, 107156.
36. Bao, Y.; Chan, K.J.; Mesbah, A. Learning-based adaptive-scenario-tree model predictive control with improved probabilistic safety using robust Bayesian neural networks. Int. J. Robust Nonlin. 2023, 33, 3312–3333.
37. Ma, T.; Zhang, Z.; Cui, B. Impulsive consensus of nonlinear fuzzy multi-agent systems under DoS attack. Nonlinear Anal. Hybr. 2022, 44, 101155.
38. Chen, C.L.P.; Wen, G.X.; Liu, Y.J. Adaptive consensus control for a class of nonlinear multiagent time-delay systems using neural networks. IEEE Trans. Neur. Netw. Lear. Syst. 2014, 25, 1217–1226.
39. Ma, H.; Wang, Z.; Wang, D. Neural-network-based distributed adaptive robust control for a class of nonlinear multiagent systems with time delays and external noises. IEEE Trans. Syst. Man Cybern. Syst. 2015, 46, 750–758.
40. Meng, W.; Yang, Q.; Sarangapani, J. Distributed control of nonlinear multiagent systems with asymptotic consensus. IEEE Trans. Syst. Man Cybern. Syst. 2017, 47, 749–757.
41. Mo, L.; Yuan, X.; Yu, Y. Neuro-adaptive leaderless consensus of fractional-order multi-agent systems. Neurocomputing 2019, 339, 17–25.
42. Wen, G.; Yu, W.; Li, Z. Neuro-adaptive consensus tracking of multiagent systems with a high-dimensional leader. IEEE Trans. Cybern. 2016, 47, 1730–1742.
43. Liu, J.; Wang, C.; Cai, X. Adaptive neural network finite-time tracking control for a class of high-order nonlinear multi-agent systems with powers of positive odd rational numbers and prescribed performance. Neurocomputing 2021, 419, 157–167.
44. Godsil, C.; Royle, G.F. Algebraic Graph Theory; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2001.
45. Stone, M.H. The generalized Weierstrass approximation theorem. Math. Mag. 1948, 21, 237–254.
46. Ge, S.S.; Wang, C. Adaptive neural control of uncertain MIMO nonlinear systems. IEEE Trans. Neur. Netw. 2004, 15, 674–692.
47. Jiang, Y.; Liu, L.; Feng, G. Adaptive optimal control of networked nonlinear systems with stochastic sensor and actuator dropouts based on reinforcement learning. IEEE Trans. Neur. Netw. Lear. 2022, 22, 1–14.
48. Wang, D.; Ma, H.; Liu, D. Distributed control algorithm for bipartite consensus of the nonlinear time-delayed multi-agent systems with neural networks. Neurocomputing 2016, 174, 928–936.
49. Bao, Y.; Velni, J.M.; Shahbakhti, M. Epistemic uncertainty quantification in state-space LPV model identification using Bayesian neural networks. IEEE Control Syst. Lett. 2020, 5, 719–724.
Figure 1. Structure of the RBFNNs.
Figure 2. The communication topologies of Example 1.
Figure 3. State trajectories of x i 1 , x i 2 without protocol.
Figure 4. State trajectories of x i 1 , x i 2 when k = 0.5 .
Figure 5. The trajectories of consensus errors e i 1 , e i 2 when k = 0.5 .
Figure 6. State trajectories of x i 1 , x i 2 when k = 6 .
Figure 7. The performance of consensus errors e i 1 , e i 2 when k = 6 .
Figure 8. The communication topologies of Example 2 and Example 3.
Figure 9. The trajectories of q i 1 , q i 2 when k = 4 .
Figure 10. The trajectories of q i 1 , q i 2 when k = 80 .
Figure 11. State trajectories of q i 1 , q i 2 when k = 20 .
Figure 12. The trajectories of q ˙ i 1 , q ˙ i 2 when k = 20 .
Figure 13. The trajectories of consensus errors e i 1 , e i 2 when k = 20 .
Figure 14. State trajectories of q i 1 , q i 2 when k = 160 .
Figure 15. The trajectories of q ˙ i 1 , q ˙ i 2 when k = 160 .
Figure 16. The trajectories of consensus errors e i 1 , e i 2 when k = 160 .

Share and Cite

Guo, S.; Xie, L. Adaptive Neural Consensus of Unknown Non-Linear Multi-Agent Systems with Communication Noises under Markov Switching Topologies. Mathematics 2024, 12, 133. https://doi.org/10.3390/math12010133