Kolmogorov equations. Limiting probabilities of states

Let there be a physical system S with discrete states:

S1, S2, ..., Sn,

in which a Markov random process with continuous time occurs (a continuous Markov chain). The state graph is shown in Fig. 23.

Let us assume that all intensities of the event flows that transfer the system from state to state are constant; in other words, all event flows are simplest (stationary Poisson) flows.

By writing down the Kolmogorov system of differential equations for the state probabilities and integrating these equations under the given initial conditions, we obtain the state probabilities as functions of time, i.e. the n functions

p1(t), p2(t), ..., pn(t),

which for any t sum to one:

p1(t) + p2(t) + ... + pn(t) = 1.

Let us now pose the following question: what will happen to the system S as t → ∞? Will the functions p1(t), p2(t), ..., pn(t) tend to some limits? These limits, if they exist, are called the limiting (or "final") state probabilities.

The following general proposition can be proven. If the number of states of the system S is finite and one can pass from each state (in a certain number of steps) to every other, then the limiting probabilities of the states exist and do not depend on the initial state of the system.

Fig. 24 shows a state graph that satisfies the stated condition: from any state the system can sooner or later reach any other. By contrast, for the system whose state graph is shown in Fig. 25, the condition is not met. Indeed, if the initial state of such a system is S1, then, for example, state S6 can be reached as t → ∞, but if the initial state is S2, it cannot.

Let us assume that the stated condition is met and the limiting probabilities exist:

pi = lim pi(t) as t → ∞ (i = 1, 2, ..., n). (6.1)

We will denote the limiting probabilities by the same letters p1, p2, ..., pn as the state probabilities themselves, meaning by them this time not variable quantities (functions of time) but constant numbers.

Obviously, the limiting state probabilities, like the pre-limiting ones, must add up to one:

p1 + p2 + ... + pn = 1. (6.2)
Thus, as t → ∞ a certain limiting stationary regime is established in the system S: the system still randomly changes its states, but the probability of each of them no longer depends on time; each state occurs with a certain constant probability. What is the meaning of this probability? It is nothing other than the average relative time the system remains in the given state. For example, if the system S has three possible states S1, S2 and S3, and their limiting probabilities are 0.2, 0.3 and 0.5, this means that after reaching the steady state the system S will on average spend two tenths of the time in state S1, three tenths in state S2 and half of the time in state S3. The question arises: how are the limiting probabilities p1, p2, ..., pn calculated?
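This time-fraction interpretation is easy to check by simulation. Below is a minimal sketch for a hypothetical two-state system (the intensities 2 and 3 are illustrative only, not from the text): the sojourn time in each state is exponential, and the fraction of time spent in each state approaches its limiting probability, here p1 = 3/5 and p2 = 2/5.

```python
import random

# Hypothetical two-state system: S1 -> S2 with intensity 2,
# S2 -> S1 with intensity 3 (illustrative numbers).
random.seed(0)
rate_out = {1: 2.0, 2: 3.0}   # total intensity of flows leaving each state
next_state = {1: 2, 2: 1}

state, t_total = 1, 0.0
time_in = {1: 0.0, 2: 0.0}
for _ in range(200_000):
    # sojourn time in the current state is exponentially distributed
    dwell = random.expovariate(rate_out[state])
    time_in[state] += dwell
    t_total += dwell
    state = next_state[state]

print(round(time_in[1] / t_total, 2))  # ≈ 0.6 = limiting probability of S1
print(round(time_in[2] / t_total, 2))  # ≈ 0.4 = limiting probability of S2
```

The average dwell times are 1/2 and 1/3, so the time fractions are (1/2)/(1/2 + 1/3) = 0.6 and 0.4, matching the limiting probabilities.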

It turns out that to do this, in the system of Kolmogorov equations describing the probabilities of states, you need to set all left-hand sides (derivatives) equal to zero.

Indeed, in the limiting (steady) mode, all state probabilities are constant, which means their derivatives are equal to zero.

If all the left-hand sides of the Kolmogorov equations for the state probabilities are set equal to zero, the system of differential equations turns into a system of linear algebraic equations. Together with the condition

p1 + p2 + ... + pn = 1

(the so-called "normalization condition"), these equations make it possible to calculate all the limiting probabilities

p1, p2, ..., pn.

Example 1. The physical system S has the possible states S1, S2, S3, S4, whose labeled graph is given in Fig. 26 (each arrow carries the numerical value of the corresponding intensity). Calculate the limiting probabilities of the states p1, p2, p3, p4.

Solution. We write the Kolmogorov equations for the probabilities of states:

(6.3)

Setting the left-hand sides equal to zero, we obtain a system of algebraic equations for the limiting probabilities of states:

(6.4)

Equations (6.4) are so-called homogeneous equations (without a free term). As is known from algebra, such equations determine the quantities p1, p2, p3, p4 only up to a constant factor. Fortunately, we have the normalization condition:

p1 + p2 + p3 + p4 = 1, (6.5)

which, together with equations (6.4), makes it possible to find all the unknown probabilities.

Indeed, let us express from (6.4) all the unknown probabilities through one of them, say through p1. From the first equation:

p3 = 5p1.

Substituting into the second equation, we get:

p2 = 2p1 + 2p3 = 12p1.

The fourth equation gives:

p4 = (1/2)p2 = 6p1.

Substituting all these expressions for p2, p3, p4 into the normalization condition (6.5), we obtain

p1 + 12p1 + 5p1 + 6p1 = 1,

whence 24p1 = 1, p1 = 1/24, p2 = 12p1 = 1/2,

p3 = 5p1 = 5/24, p4 = 6p1 = 1/4.

Thus, the limiting state probabilities have been obtained; they equal

p1 = 1/24, p2 = 1/2, p3 = 5/24, p4 = 1/4. (6.6)

This means that in the limiting steady-state regime the system S will spend on average one twenty-fourth of the time in state S1, half of the time in state S2, five twenty-fourths in state S3 and a quarter of the time in state S4.

Note that when solving this problem we did not use one of the equations (6.4) at all, namely the third. It is easy to see that it is a consequence of the other three: adding all four equations yields identically zero. With equal success we could have discarded any one of the four equations (6.4) when solving the system.
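The hand computation above is easy to verify mechanically. The sketch below re-derives p1, ..., p4 with exact fractions, taking from the worked solution the relations p3 = 5p1, p2 = 2p1 + 2p3 and p4 = p2/2 (the underlying intensities of Fig. 26 are not reproduced here, so these relations are used as given):

```python
from fractions import Fraction

# Relations read off from the worked solution of system (6.4):
#   p3 = 5*p1,  p2 = 2*p1 + 2*p3,  p4 = p2/2
coef_p3 = Fraction(5)
coef_p2 = 2 + 2 * coef_p3          # p2 = 12*p1
coef_p4 = coef_p2 / 2              # p4 = 6*p1

# Normalization (6.5): p1*(1 + 12 + 5 + 6) = 1
p1 = Fraction(1) / (1 + coef_p2 + coef_p3 + coef_p4)
p2, p3, p4 = coef_p2 * p1, coef_p3 * p1, coef_p4 * p1

print(p1, p2, p3, p4)              # 1/24 1/2 5/24 1/4
assert p1 + p2 + p3 + p4 == 1      # probabilities sum to one
```

The exact arithmetic confirms the answers (6.6).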

The method we used to compose the algebraic equations for the limiting state probabilities was the following: first write the differential equations, and then set their left-hand sides equal to zero. However, the algebraic equations for the limiting probabilities can be written directly, without going through the differential stage. Let us illustrate this with an example.

Example 2. The system state graph is shown in Fig. 27. Write algebraic equations for the limiting probabilities of states.

Solution. Without writing the differential equations, we write down the corresponding right-hand sides directly and equate them to zero; to avoid dealing with negative terms, we immediately move them to the other side, changing the sign:

(6.7)

In order to immediately write such equations in the future, it is useful to remember the following mnemonic rule: “what flows in, flows out,” that is, for each state, the sum of the terms corresponding to the incoming arrows is equal to the sum of the terms corresponding to the outgoing ones; each term is equal to the intensity of the flow of events that moves the system along a given arrow, multiplied by the probability of the state from which the arrow emerges.

In what follows, in all cases we will use precisely this shortest way of writing equations for limiting probabilities.
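The "what flows in, flows out" rule translates directly into code. The sketch below (the function name and the two-state example rates are illustrative, not from the text) builds one balance equation per state from a labeled graph, replaces the redundant equation by the normalization condition, and solves the system exactly:

```python
from fractions import Fraction

def limiting_probabilities(n, rates):
    """Limiting state probabilities from a labeled state graph.

    rates: dict {(i, j): intensity} for each arrow i -> j.
    Builds one balance equation per state ("what flows in, flows out"),
    replaces the last (redundant) equation by the normalization
    condition, and solves by Gauss-Jordan elimination.
    """
    A = [[Fraction(0)] * n for _ in range(n)]
    b = [Fraction(0) for _ in range(n)]
    for (i, j), lam in rates.items():
        A[i][i] += Fraction(lam)   # arrow leaves state i
        A[j][i] -= Fraction(lam)   # the same arrow feeds state j
    A[n - 1] = [Fraction(1)] * n   # normalization: p0 + ... + p_{n-1} = 1
    b[n - 1] = Fraction(1)
    for col in range(n):           # Gauss-Jordan with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * c for a, c in zip(A[r], A[col])]
                b[r] -= f * b[col]
    return [b[i] / A[i][i] for i in range(n)]

# Hypothetical two-state example: S0 -> S1 with intensity 2,
# S1 -> S0 with intensity 3; the limits are p = [3/5, 2/5].
print(limiting_probabilities(2, {(0, 1): 2, (1, 0): 3}))
```

Exact fractions avoid any rounding questions; for large graphs a floating-point solver would serve equally well.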

Example 3. Write the algebraic equations for the limiting probabilities of the states of the system S whose state graph is shown in Fig. 28, and solve them.

Solution. We write the algebraic equations for the limiting state probabilities:

Normalization condition:

p 1 + p 2 + p 3 = 1. (6.9)

Using the first two equations (6.8), we express p2 and p3 in terms of p1, substitute these expressions into the normalization condition (6.9), and solve for p1; the remaining probabilities then follow from the same expressions. (The explicit values are determined by the intensities marked on the graph in Fig. 28.)

What will happen to the state probabilities as t → ∞? Will P1(t), P2(t), ... tend to some limits? If these limits exist and do not depend on the initial state of the system, they are called the final state probabilities. In the theory of random processes it is proven that if the number n of states of the system is finite and from each of them one can (in a finite number of steps) pass to any other, then the final probabilities exist (this condition is sufficient, but not necessary, for the existence of the final probabilities).

Let us assume that this condition is met and the final probabilities exist:

Pi = lim Pi(t) as t → ∞.

We will denote them by the same letters P1, P2, ... as the state probabilities themselves, but by them we mean not functions of time but constant numbers. Obviously, they also add up to one:

P0 + P1 + ... + Pn = 1. (4.10)

How should these final probabilities be understood? As t → ∞, a limiting stationary regime is established in the system S, during which the system randomly changes its states, but their probabilities no longer depend on time. The final probability of state Si can be understood as the average relative time the system remains in this state.

For example, if the system S has three states S1, S2, S3 and their final probabilities equal 0.2, 0.3, 0.5, this means that in the limiting stationary mode the system spends on average two tenths of the time in state S1, three tenths in state S2 and half the time in state S3.

How are the final probabilities calculated? If the probabilities P1, P2, ... are constant, their derivatives equal zero. Hence, to find the final probabilities, one sets all the left-hand sides of Kolmogorov's equations equal to zero and solves the resulting system of linear algebraic (rather than differential) equations. The system of algebraic equations can even be written immediately from the state graph. If the negative term of each equation is moved from the right-hand side to the left, we obtain a system of equations in which, on the left, the final probability Pi of the given state is multiplied by the total intensity of all flows leading out of this state, and, on the right, stands the sum of the products of the intensities of all flows entering the i-th state by the probabilities of the states from which these flows emanate.

Using this rule, we write the linear algebraic equations for the final probabilities of the states of the system whose state graph is shown in Fig. 4.9:

(λ1 + λ2)P0 = μ1P1 + μ2P2,
(λ2 + μ1)P1 = λ1P0 + μ2P3,
(λ1 + μ2)P2 = λ2P0 + μ1P3,
(μ1 + μ2)P3 = λ2P1 + λ1P2. (4.11)

This system of four equations with four unknowns P0, P1, P2, P3 can be solved with the help of the so-called normalization condition:

P0 + P1 + P2 + P3 = 1, (4.12)

in this case, one (any) of the equations can be discarded (it follows as a consequence of the others).

Let us set the numerical values of the intensities λ1 = 1, λ2 = 2, μ1 = 2, μ2 = 3 and solve system (4.11). Discarding the fourth equation and adding the normalization condition (4.12) instead, we obtain:

3P0 = 2P1 + 3P2,
4P1 = P0 + 3P3,
4P2 = 2P0 + 2P3,
P0 + P1 + P2 + P3 = 1. (4.13)

Solving them, we get P0 = 0.40, P1 = 0.20, P2 = 0.27, P3 = 0.13; i.e. in the limiting stationary mode the system S spends on average 40% of the time in state S0 (both nodes working), 20% in state S1 (the first node under repair, the second working), 27% in state S2 (the second node under repair, the first working) and 13% in state S3 of complete disrepair (both nodes under repair). Knowing these limiting probabilities helps estimate the average efficiency of the system and the workload of the repair facilities. Suppose that in state S0 the system brings in an income of 8 (conventional units) per unit time, in state S1 an income of 3, in state S2 an income of 5, and in state S3 no income at all. Then in the limiting stationary mode the average income per unit time is W = 8P0 + 3P1 + 5P2 ≈ 5.13. Now let us estimate the workload of the repair workers servicing nodes 1 and 2. Node 1 is under repair for a fraction of time P1 + P3 ≈ 0.33; node 2, for a fraction P2 + P3 ≈ 0.40.

Here the question of optimizing the solution may already arise. Suppose we can reduce the average repair time of one or the other unit (or both), but this costs money. Is the increase in income from faster repairs worth the increased repair costs? To answer this, one again solves a system of four equations with four unknown probabilities.
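The figures quoted in this example can be checked with exact fractions. The sketch below assumes the balance equations of the two-node system in the standard "outflow = inflow" form consistent with the percentages quoted above, verifies them, and recomputes the income and the repair workloads:

```python
from fractions import Fraction as F

# Intensities from the example: failures lam1, lam2; repairs mu1, mu2
lam1, lam2, mu1, mu2 = F(1), F(2), F(2), F(3)

# Exact limiting probabilities behind the rounded 40/20/27/13 percentages
P0, P1, P2, P3 = F(6, 15), F(3, 15), F(4, 15), F(2, 15)

# Balance equations of the two-node system ("what flows out = what flows in")
assert (lam1 + lam2) * P0 == mu1 * P1 + mu2 * P2
assert (lam2 + mu1) * P1 == lam1 * P0 + mu2 * P3
assert (lam1 + mu2) * P2 == lam2 * P0 + mu1 * P3
assert (mu1 + mu2) * P3 == lam2 * P1 + lam1 * P2
assert P0 + P1 + P2 + P3 == 1

# Average income per unit time (incomes 8, 3, 5, 0 in the four states)
income = 8 * P0 + 3 * P1 + 5 * P2
print(float(income))      # ≈ 5.13
# Fraction of time each node is under repair
print(float(P1 + P3))     # node 1: ≈ 0.33
print(float(P2 + P3))     # node 2: ≈ 0.40
```

The exact income is 77/15 ≈ 5.13 conventional units per unit time.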

Let there be a system S with discrete states in which a Markov process occurs in continuous time.

What will happen to the system S as t → ∞? Will the functions p1(t), p2(t), ..., pn(t) tend to some limits? These limits, if they exist, are called the limiting (or "final") probabilities of the states.

The following general proposition can be proven:

If the number of states of the system S is finite and one can move from each state (in a certain number of steps) to each other, then the limiting probabilities of the states exist and do not depend on the initial state of the system.

Let us assume that the stated condition is met and the limiting probabilities exist: pi = lim pi(t) as t → ∞.

Obviously, the limiting probabilities of the states, like the pre-limiting ones, must add up to one: p1 + p2 + ... + pn = 1.

Thus, as t → ∞ a certain limiting stationary regime is established: the system randomly changes its states, but the probability of each of them no longer depends on time. What is the meaning of this probability? It represents the average relative time the system remains in the given state.

How are the limiting probabilities calculated? In the Kolmogorov system of equations, we set all the derivatives equal to zero.

Example 1 . Calculate the limiting probabilities for the system:

Example 2. Write the equations for the limiting probabilities.


Example 3. Find the limiting probabilities for the system.

9. The process of “death and reproduction”.

A Markov process is called a "process of death and reproduction" if its state graph is stretched out into a chain, i.e. only λn,n+1 and λn,n-1 are nonzero; in other words, only the probability densities of transitions to neighboring states differ from zero.

Example 1. A technical device consists of three identical units; each of them can fail; a failed unit immediately begins to be restored. The states of the system are numbered by the number of faulty units.

In what follows, for the process of death and reproduction we will write λn,n+1 = λn and λn,n-1 = μn.

Let us outline a general solution scheme for processes of death and reproduction. We write the algebraic equations for the state probabilities.

For the first state S1 we have:

λ1p1 = μ2p2. (9.1)

For the second state we have:

λ2p2 + μ2p2 = λ1p1 + μ3p3.

But by virtue of (9.1) we can cancel the equal terms λ1p1 and μ2p2 on the two sides, obtaining λ2p2 = μ3p3; in exactly the same way, for the third state,

λ3p3 = μ4p4,

and in general, for all k:

λk pk = μk+1 pk+1, k = 1, 2, ..., n-1.

The solution of this system, together with the normalization condition

p1 + p2 + ... + pn = 1,

is pk = p1 (λ1 λ2 ... λk-1)/(μ2 μ3 ... μk) for k = 2, ..., n, after which p1 is found from the normalization condition.

Example 2 . Find the limiting probabilities of states for the process of death and reproduction with a graph

Example 3. A device consists of three units; the failure flow is simplest, and the average failure-free operation time of each unit equals T_v. A failed unit immediately begins to be repaired; the average repair (restoration) time of a unit is t_p, and the distribution law of this time is exponential (the restoration flow is simplest). Find the average productivity of the device if with three working units it equals 100%, with two units 50%, and with one unit or none the device does not work at all.
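A hedged sketch of a computation in the spirit of Example 3. It assumes (the text does not state this explicitly) that every failed unit is repaired in parallel, so with k units down the restoration intensity is k/t_p; the numbers T_v = 10 and t_p = 2 are illustrative only:

```python
from fractions import Fraction

# Assumed sample data (not from the text): mean time to failure T_v,
# mean repair time t_p; every failed unit is repaired in parallel.
T_v, t_p = Fraction(10), Fraction(2)
n = 3  # number of units; state k = number of failed units

lam = [Fraction(n - k) / T_v for k in range(n)]   # failure flow: (3-k)/T_v
mu = [Fraction(k + 1) / t_p for k in range(n)]    # repair flow: (k+1)/t_p

# Death-and-reproduction solution: p_k / p_0 = prod(lam_i / mu_{i+1})
ratio = [Fraction(1)]
for k in range(n):
    ratio.append(ratio[-1] * lam[k] / mu[k])
p0 = 1 / sum(ratio)                # from the normalization condition
p = [p0 * r for r in ratio]

# Productivity: 100% with 0 failed units, 50% with 1, 0 otherwise
productivity = p[0] + Fraction(1, 2) * p[1]
print(float(productivity))         # ≈ 0.75
```

With these sample numbers the stationary distribution is binomial in the ratio t_p/T_v, and the average productivity comes out to 325/432 ≈ 75%.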

Let us consider the mathematical description of a Markov process with discrete states and continuous time using the random process of the previous example, whose graph is shown in Fig. 15. We will assume that all transitions of the system from state Si to Sj occur under the influence of simplest event flows with intensities λij (i, j = 0, 1, 2, 3); thus, the transition of the system from state S0 to S1 occurs under the influence of the failure flow of the first node, and the reverse transition from S1 to S0 under the influence of the flow of repair completions of the first node, etc.

The state graph of the system with the intensities marked at the arrows will be called labeled (see Fig. 3.1). The system S under consideration has four possible states: S0, S1, S2, S3.

The probability of the i-th state is the probability pi(t) that at the moment t the system is in state Si. Obviously, for any moment t the sum of the probabilities of all states equals one:

p0(t) + p1(t) + p2(t) + p3(t) = 1. (3.1)

Kolmogorov’s system of differential equations for state probabilities:

(3.2)

Let us formulate the rule for composing the Kolmogorov equations. On the left-hand side of each of them stands the derivative of the probability of the i-th state. On the right-hand side stands the sum of the products of the probabilities of all states from which arrows lead into the given state by the intensities of the corresponding event flows, minus the total intensity of all flows leading out of the given state, multiplied by the probability of the given (i-th) state.

In system (3.2) there is one fewer independent equation than the total number of equations. Therefore, to solve the system it is necessary to add equation (3.1).

A peculiarity of solving differential equations is that one must specify the so-called initial conditions, in this case the probabilities of the system states at the initial moment t = 0. For example, it is natural to solve the system of equations (3.2) under the condition that at the initial moment both repair teams are free and the system is in state S0, i.e. under the initial conditions p0(0) = 1, p1(0) = 0, p2(0) = 0, p3(0) = 0.

Kolmogorov's equations make it possible to find all the state probabilities as functions of time. Of particular interest are the probabilities pi(t) in the limiting stationary mode, i.e. as t → ∞, which are called the limiting (or final) state probabilities.
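The approach of pi(t) to the limiting values can be seen by integrating the Kolmogorov equations numerically. The sketch below uses a hand-written Runge-Kutta step for the two-node repair system with the intensities λ1 = 1, λ2 = 2, μ1 = 2, μ2 = 3 used in the numerical example above (the balance structure of the graph is assumed as in that example), starting from p(0) = (1, 0, 0, 0):

```python
# Intensities of the two-node repair system (as in the example above)
lam1, lam2, mu1, mu2 = 1.0, 2.0, 2.0, 3.0

def deriv(p):
    """Right-hand sides of the Kolmogorov equations: inflow - outflow."""
    p0, p1, p2, p3 = p
    return (
        mu1 * p1 + mu2 * p2 - (lam1 + lam2) * p0,
        lam1 * p0 + mu2 * p3 - (lam2 + mu1) * p1,
        lam2 * p0 + mu1 * p3 - (lam1 + mu2) * p2,
        lam2 * p1 + lam1 * p2 - (mu1 + mu2) * p3,
    )

def rk4_step(p, h):
    """One classical 4th-order Runge-Kutta step of size h."""
    k1 = deriv(p)
    k2 = deriv([x + h / 2 * k for x, k in zip(p, k1)])
    k3 = deriv([x + h / 2 * k for x, k in zip(p, k2)])
    k4 = deriv([x + h * k for x, k in zip(p, k3)])
    return [x + h / 6 * (a + 2 * b + 2 * c + d)
            for x, a, b, c, d in zip(p, k1, k2, k3, k4)]

p, h = [1.0, 0.0, 0.0, 0.0], 0.01   # initial conditions p0(0) = 1
for _ in range(int(10 / h)):        # integrate to t = 10
    p = rk4_step(p, h)

print([round(x, 3) for x in p])     # ≈ [0.4, 0.2, 0.267, 0.133]
```

By t = 10 the transient has died out and the solution coincides with the limiting probabilities 0.40, 0.20, 0.27, 0.13, whatever the initial conditions.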

In the theory of random processes, it is proven that if the number of states of a system is finite and from each of them it is possible (in a finite number of steps) to go to any other state, then limiting probabilities exist.

The limiting probability of state Si has a clear meaning: it shows the average relative time the system remains in this state. For example, if the limiting probability of state S0 is p0 = 0.5, this means that on average the system spends half of the time in state S0.

Since the limiting probabilities are constant, replacing their derivatives in the Kolmogorov equations with zeros, we obtain a system of linear algebraic equations describing the stationary regime. For the system S with the state graph shown in Fig. 3.2, this system of equations has the form:

(3.3)

System (3.3) can be composed directly from the labeled state graph, guided by the rule that on the left-hand side of each equation stands the limiting probability pi of the given state multiplied by the total intensity of all flows leading out of this state, and on the right-hand side stands the sum of the products of the intensities of all flows entering the i-th state by the probabilities of the states from which these flows come.

Asymptotic estimates, in accordance with the well-known theorem of A. A. Markov, can be obtained for Markov chains that possess the ergodic property.

Definition 1. If the number of states of a system is finite and one can go from each state to any other in an arbitrary number of steps, then such a system is said to have the ergodic property.

Definition 2. Let the Markov process be characterized by the probabilities of transition from state i to state j in time t

pij(t) (0 ≤ i ≤ n; 0 ≤ j ≤ n).

A process is called transitive if there exists a t > 0 such that pij(t) > 0 (0 ≤ i ≤ n; 0 ≤ j ≤ n). From Definitions 1 and 2 it follows that processes in Markov chains with the ergodic property are transitive.

Markov's theorem. For any transitive Markov process the limit pj = lim pij(t) as t → ∞ exists and does not depend on the initial state i.

This means that as t → ∞ a certain limiting stationary regime is established in the system, characterized by a constant, time-independent probability of each of the states of the system. This probability represents the average relative time the system remains in the given state. Thus, if the operating time of the whole system is 100 hours and the probability of state S1 equals p1 = 0.15, then the system will be in state S1 for 15 hours on average.

The limits to which the probabilities of each of the states of a Markov chain with the ergodic property tend as t → ∞ are called limiting probabilities. When considering queueing systems we will deal only with ergodic Markov chains. Let V be a subset of the set of states of the system S, and V′ its complement in S. If the set V has the ergodic property and one cannot pass from any state of V to any state of V′, then V is called a closed, or ergodic, set. Ergodic systems consist of one single ergodic set (S = V, V′ = ∅) and are therefore called indecomposable. If in the system S there is a nonempty set V′, or if several ergodic sets S = V1 ∪ V2 ∪ ... ∪ Vn can be distinguished in it, then such a system is called decomposable. Examples of such systems are shown in Fig. 1.3.

Fig. 1.3,a shows a system with two ergodic sets V1 = {S2, S3, S4} and V2 = {S5, S6}. In Fig. 1.3,b the ergodic set consists of a single state (S4). If an ergodic set consists of only one state, this state is called absorbing: once the process enters it, it remains in the absorbing state forever. A characteristic feature of the state graph of an indecomposable ergodic Markov system is that every vertex of the graph is incident to arcs of both positive and negative incidence (i.e. every vertex has arcs directed both toward it and away from it; see, for example, Fig. 1.1 and 1.2).

The calculation of the limiting state probabilities for such systems is simplified: since all these probabilities are constants, their time derivatives equal zero (dpi/dt = 0 for all i). Therefore the left-hand sides of the Kolmogorov system of equations (1.7) vanish, and it turns into a system of linear algebraic equations.

A nontrivial solution of system (1.8) can be obtained only if the matrix Λ is degenerate. It was proven above that the transition probability density matrix Λ is indeed degenerate. System (1.8), with one of its equations discarded, is supplemented by the normalization condition

p1 + p2 + ... + pn = 1. (1.9)

Relations (1.8) and (1.9) make it possible to determine the limiting probabilities of the states. Since the terms corresponding to arcs of negative incidence are positive, and those corresponding to arcs of positive incidence are negative, each equation of system (1.8) can be composed using the mnemonic rule: for each state, the sum of the terms corresponding to the incoming arcs equals the sum of the terms corresponding to the outgoing arcs.

Example. For the system shown in Fig. 1.2, Kolmogorov's equations (1.7) yield

(λ12 + λ13)p1 = λ41p4,
λ25p2 = λ12p1 + λ32p3,
(λ32 + λ34)p3 = λ13p1 + λ53p5,
(λ41 + λ45)p4 = λ34p3,
λ53p5 = λ25p2 + λ45p4. (1.10)

To solve (1.10), any one of the five equations must be excluded (for example, the one containing the largest number of terms).

Limiting probabilities of states are used in queueing theory much more often than the solutions of Kolmogorov's equations. Nevertheless, knowing the solution of the Kolmogorov system, one can determine the moment when the transient process of the state probabilities ends. This makes it possible to calculate the period of time, starting from the moment the system is put into operation, after which the state probabilities reach their limiting values and estimates based on these values become valid. To conclude this section, let us consider one particular but practically very important class of Markov processes widely used in the study of queueing systems: the processes of "reproduction and death". These are Markov chains represented by a labeled graph consisting of an elongated chain of states, as shown in Fig. 1.4.

The matrix of transition probability densities of such a system is tridiagonal (a Jacobi matrix): its nonzero elements stand only on the main diagonal and the two diagonals adjacent to it.


Considering the initial state S0, we obtain in accordance with (1.8)

λ01p0 = λ10p1. (1.11)

For state S1 we have

λ01p0 + λ21p2 = λ10p1 + λ12p1. (1.12)

Subtracting equality (1.11) from (1.12), we obtain

λ21p2 = λ12p1. (1.13)

Continuing this process up to the n-th state inclusive, we obtain

λn,n-1 pn = λn-1,n pn-1.

From (1.11) we can now express p1 in terms of p0:

p1 = p0 (λ01/λ10). (1.14)

Substituting (1.14) into (1.13), we obtain

p2 = p0 (λ01 λ12 / λ10 λ21).

Obviously, for arbitrary k (1 ≤ k ≤ n) the expression

pk = p0 (λ01 λ12 ... λk-1,k) / (λ10 λ21 ... λk,k-1) (1.15)

will be valid.

In accordance with (1.15) and the labeled state graph presented in Fig. 1.4, we can formulate a rule for expressing the limiting probabilities of the states of the "reproduction and death" process through the probability of the initial state p0. The rule states: the probability of an arbitrary state pk (1 ≤ k ≤ n) equals the probability of the initial state p0 multiplied by a fraction whose numerator is the product of the transition probability densities along the arcs carrying the system from left to right, and whose denominator is the product of the transition probability densities along the arcs carrying it from right to left, taken from the initial state up to the k-th state inclusive.

The probability p0 is found from the normalization condition and expressions (1.15):

p0 = [1 + Σ (k = 1..n) (λ01 λ12 ... λk-1,k) / (λ10 λ21 ... λk,k-1)]^(-1). (1.16)

Expressions (1.15) and (1.16) completely determine the limiting probabilities of the process of “reproduction and death”.
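Formulas (1.15) and (1.16) are easy to exercise numerically. The sketch below uses hypothetical intensities for a four-state chain S0, ..., S3 (the numbers are not from the text, chosen only for illustration) and checks both the normalization and the relations λk-1,k pk-1 = λk,k-1 pk:

```python
from fractions import Fraction

# Hypothetical intensities for a four-state reproduction-and-death
# chain S0..S3 (illustrative values, not from the text):
up = {(0, 1): 3, (1, 2): 2, (2, 3): 1}     # lambda_{k-1,k}: left to right
down = {(1, 0): 1, (2, 1): 2, (3, 2): 3}   # lambda_{k,k-1}: right to left

# (1.15): p_k / p_0 = product of "up" intensities over "down" intensities
ratio = [Fraction(1)]
for k in range(1, 4):
    ratio.append(ratio[-1] * Fraction(up[(k - 1, k)], down[(k, k - 1)]))

# (1.16): p_0 from the normalization condition
p0 = 1 / sum(ratio)
p = [p0 * r for r in ratio]

# Check the balance relations lambda_{k-1,k} p_{k-1} = lambda_{k,k-1} p_k
for k in range(1, 4):
    assert up[(k - 1, k)] * p[k - 1] == down[(k, k - 1)] * p[k]
assert sum(p) == 1
print(p)
```

With these sample intensities the limiting distribution comes out to p = [1/8, 3/8, 3/8, 1/8].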
