The concept of a random process in mathematics: classification, mathematical expectation, and variance of a random process

Here we briefly consider the main questions of the systematization (classification) of random processes.

A random process occurring in any physical system represents random transitions of the system from one state to another. Depending on the set of these states and on the set of argument values, all random processes are divided into the following classes (groups):

1. Discrete process (discrete states) with discrete time.

2. Discrete process with continuous time.

3. Continuous process (continuous state) with discrete time.

4. Continuous process with continuous time.

In cases 1 and 3 the set of argument values is discrete, i.e. the argument t takes discrete values t1, t2, … . In case 1 the set of values of the random function is also discrete (finite or countable).

In case 3 this set of values is uncountable, i.e. a section of the random process at any moment of time is a continuous random variable.

In cases 2 and 4 the set of argument values is continuous; in case 2 the set of states of the system is finite or countable, while in case 4 it is uncountable.

Let us give some examples of random processes of classes 1-4, respectively:

1. During matches played at fixed moments of time (according to the game schedule), a hockey player may or may not score one or more goals against the opponent. The random process X(t) is the number of goals scored up to the moment t.

2. The random process X(t) is the number of film showings at the Zvezda cinema from the opening of the cinema up to the moment of time t.

3. At fixed moments of time t1, t2, … the temperature of a patient is measured in a treatment center. X(t) is a random process of continuous type with discrete time.

4. The indicator of the air humidity level during the day in city A is a random process of continuous type with continuous time.

Other more complex classes of random processes can also be considered. For each class of random processes, appropriate methods for studying them are developed.
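As an illustration of class 2 (discrete states, continuous time), the counting example above can be simulated. The sketch below assumes a Poisson-type flow of events with an illustrative rate lam and horizon t_end; nothing in the text fixes these details:

```python
import numpy as np

# Simulating one realization of a class-2 process (discrete states,
# continuous time): an assumed Poisson-type counting process, e.g. the
# number of showings counted from the cinema's opening. The rate lam
# and the horizon t_end are illustrative.
rng = np.random.default_rng(6)
lam, t_end = 2.0, 10.0

# event moments: cumulative sums of exponential inter-event times
gaps = rng.exponential(1.0 / lam, size=100)
events = np.cumsum(gaps)
events = events[events <= t_end]

def X(t):
    """Section of the process: the number of events up to the moment t."""
    return int(np.sum(events <= t))

assert X(0.0) == 0                     # the count starts at zero
assert X(t_end) == len(events)         # all events counted by t_end
assert all(X(s) <= X(s + 1.0) for s in np.linspace(0.0, 9.0, 10))  # non-decreasing
```

Each realization is a non-decreasing step function, while each section X(t) is a discrete random variable, which is exactly the class-2 combination.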

A number of varied and interesting examples of random flows can be found in the textbook by W. Feller [parts 1, 2] and in the monograph; here we confine ourselves to the examples above.

For random processes, simple functional characteristics are also introduced, depending on the parameter t and analogous to the basic numerical characteristics of random variables. Knowledge of these characteristics is sufficient for solving many problems (recall that a complete characterization of a random process is given by its multidimensional (finite-dimensional) distribution laws). In contrast to the numerical characteristics of random variables, which in the general case are numbers, these functional characteristics are certain functions of t.

4. Mathematical expectation and variance of a random process

The mathematical expectation of a random process X(t) is the non-random function m_x(t) which, for any fixed value of the argument t, equals the mathematical expectation of the corresponding section of the random process:

m_x(t) = M[X(t)]. (12)

For brevity, the notation M[X(t)] is also used for the mathematical expectation of a random process.

The function m_x(t) characterizes the behavior of the random process on average. Geometrically the mathematical expectation is interpreted as an "average curve" around which the realization curves lie (see Fig. 60).

Based on the properties of the mathematical expectation of a random variable, and taking into account that X(t) is a random process and φ(t) is a non-random function, we obtain the properties of the mathematical expectation of a random process:

1. The mathematical expectation of a non-random function equals the function itself: M[φ(t)] = φ(t).

2. A non-random multiplier (a non-random function) can be taken outside the sign of the mathematical expectation of a random process: M[φ(t)X(t)] = φ(t)M[X(t)].

3. The mathematical expectation of the sum (difference) of two random processes equals the sum (difference) of the mathematical expectations of the terms: M[X(t) ± Y(t)] = M[X(t)] ± M[Y(t)].
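Properties 2 and 3 can be checked numerically. The sketch below assumes an illustrative section X(t) = V cos t at t = 1 and particular distributions for V and an auxiliary variable W; none of these choices come from the text:

```python
import numpy as np

# Monte Carlo sanity check of properties 2 and 3 (illustrative sketch;
# the process X(t) = V*cos(t) and the distributions below are assumed).
rng = np.random.default_rng(0)
n = 200_000
v = rng.normal(loc=2.0, scale=1.0, size=n)   # random variable V, M(V) = 2
w = rng.uniform(0.0, 1.0, size=n)            # independent r.v. W, M(W) = 0.5

t = 1.0
phi = np.cos(t)                               # non-random function phi(t)

x_t = phi * v                                 # section X(t) = phi(t)*V
y_t = w                                       # section Y(t) = W

# Property 2: M[phi(t)*V] = phi(t)*M(V)
assert abs(x_t.mean() - phi * 2.0) < 0.02
# Property 3: M[X(t) + Y(t)] = M[X(t)] + M[Y(t)]
assert abs((x_t + y_t).mean() - (x_t.mean() + y_t.mean())) < 1e-9
```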

Note that if we fix the argument (parameter) t, we pass from the random process to a random variable (to a section of the random process), and we can find the mathematical expectation of the process at this fixed t. If the section of the random process for the given t is a continuous random variable with density f(x, t), then its mathematical expectation can be calculated by the formula

m_x(t) = ∫ x f(x, t)dx. (13)

Example 2. Let the random process be defined by a formula of the form X(t) = φ(t)X, where φ(t) is a non-random function and X is a random variable with known mathematical expectation M(X).

Find the mathematical expectation of this random process.

Solution. By property 2 we have

m_x(t) = M[φ(t)X] = φ(t)M(X).

Exercise. Calculate the mathematical expectation using the equalities above, and then, based on formula (13), calculate the integral and verify that the results coincide.

Note. Make use of the equality mentioned above.

Variance of a random process.

The variance of a random process X(t) is the non-random function D_x(t) whose value for each t equals the variance of the corresponding section of the random process:

D_x(t) = D[X(t)]. (14)

The variance of a random process characterizes the spread (scatter) of the possible values of the random process about its mathematical expectation.

Along with the variance of a random process one also considers the standard deviation (s.d. for short), defined by the equality

σ_x(t) = √(D_x(t)). (15)

The dimension of the function σ_x(t) equals the dimension of the random process X(t). The values of the realizations of the process at every t deviate from the mathematical expectation by an amount of the order of σ_x(t) (see Fig. 60).

Let us note the simplest properties of the variance of random processes.

1. The variance of a non-random function φ(t) equals zero: D[φ(t)] = 0.

2. The variance of a random process is non-negative: D_x(t) ≥ 0.

3. The variance of the product of a non-random function φ(t) and a random process X(t) equals the product of the square of the non-random function and the variance of the random process: D[φ(t)X(t)] = φ²(t)D_x(t).

4. The variance of the sum of a random process X(t) and a non-random function φ(t) equals the variance of the random process: D[X(t) + φ(t)] = D_x(t).

Example 3. Let the random process be defined, as in Example 2, by a formula of the form X(t) = φ(t)X, where the random variable X is distributed according to the normal law with known parameters.

Find the variance and the standard deviation of this random process.

Solution. We calculate the variance using the formula from property 3:

D_x(t) = D[φ(t)X] = φ²(t)D(X).

But D(X) = σ², where σ is the standard deviation of the normal random variable X; hence

D_x(t) = φ²(t)σ², i.e. σ_x(t) = |φ(t)|σ.
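Property 3, and with it the result of Example 3, can be checked by Monte Carlo. The concrete multiplier φ(t) = cos ωt and the parameter values below are assumed only for the demonstration:

```python
import numpy as np

# Monte Carlo check of property 3, D[phi(t)*X] = phi(t)^2 * D(X)
# (illustrative sketch; phi(t) = cos(w*t) and all parameter values are assumed).
rng = np.random.default_rng(1)
a, sigma, w, t = 1.0, 2.0, 3.0, 0.2
x = rng.normal(a, sigma, size=500_000)       # X ~ Normal(a, sigma)

phi = np.cos(w * t)
sample_var = np.var(phi * x)                 # variance of the section phi(t)*X
theory_var = phi**2 * sigma**2               # property 3 prediction

assert abs(sample_var - theory_var) < 0.05
```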

Ministry of Education and Science of the Russian Federation

Cherepovets State University

Institute of Engineering and Economics

The concept of a random process in mathematics

Performed by a student

Group 5 GMU-21

Ivanova Yulia

Cherepovets


Introduction

Main part

· Definition of a random process and its characteristics

· Markov random processes with discrete states

· Stationary random processes

Ergodic property of stationary random processes

Literature


Introduction

The concept of a random process was introduced in the 20th century and is associated with the names of A.N. Kolmogorov (1903-1987), A.Ya. Khinchin (1894-1959), E.E. Slutsky (1880-1948), N. Wiener (1894-1965).

Today this concept is one of the central ones not only in probability theory but also in natural science, engineering, economics, production organization, and communication theory. The theory of random processes belongs to the fastest-growing mathematical disciplines, and there is no doubt that this is largely determined by its deep connections with practice. The twentieth century could not be satisfied with the heritage received from the past: while the physicist, the biologist, and the engineer were interested in a process, i.e. the change in time of the phenomenon under study, probability theory offered them as a mathematical apparatus only means that studied stationary states.

To study changes over time, the probability theory of the late nineteenth and early twentieth centuries had no developed specific schemes, much less general techniques, and the need to create them was literally knocking at the doors of mathematical science. The study of Brownian motion in physics brought mathematics to the threshold of the theory of random processes.

I consider it necessary to mention two more important groups of studies, begun at different times and for different reasons.

First, the work of A.A. Markov (1856-1922) on the study of chain dependences. Second, the works of E.E. Slutsky (1880-1948) on the theory of random functions.

Both of these directions played a very significant role in the formation of the general theory of random processes. Considerable initial material had already been accumulated, and the need to build the theory seemed to be in the air. It remained to carry out a deep analysis of the existing works and of the ideas and results expressed in them, and on this basis to carry out the necessary synthesis.


Definition of a random process and its characteristics

Definition: A random process X(t) is a process whose value, for any value of the argument t, is a random variable.

In other words, a random process is a function that, as a result of a trial, can take one or another concrete form, unknown in advance. For a fixed t = t0, X(t0) is an ordinary random variable, i.e. a section of the random process at the moment t0.

Examples of random processes:

1. population of the region over time;

2. the number of requests received by the company’s repair service over time.

A random process can be written as a function of two variables X(t, ω), where ω ∈ Ω, t ∈ T, X(t, ω) ∈ Ξ; here ω is an elementary event, Ω is the space of elementary events, T is the set of values of the argument t, and Ξ is the set of possible values of the random process X(t, ω).

A realization of the random process X(t, ω) is the non-random function x(t) into which the random process X(t) turns as a result of a trial (for a fixed ω), i.e. the concrete form taken by the random process X(t), its trajectory.

Thus, the random process X(t, ω) combines the features of a random variable and of a function. If we fix the value of the argument t, the random process turns into an ordinary random variable; if we fix ω, then as a result of each trial it turns into an ordinary non-random function. In what follows we omit the argument ω, but it is assumed by default.
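This two-argument view can be made concrete in code. The model X(t) = V cos t below is assumed only for illustration: fixing ω yields a realization, fixing t yields a section:

```python
import numpy as np

# X(t, omega) as a function of two arguments (illustrative sketch; the
# model X(t) = V*cos(t) and the distribution of V are assumed).
rng = np.random.default_rng(7)
n_outcomes = 10_000
v = rng.normal(0.0, 1.0, size=n_outcomes)   # one value of V per outcome omega

t_grid = np.linspace(0.0, 2.0 * np.pi, 100)

# Fixing omega gives an ordinary non-random function of t (a realization).
realization = v[0] * np.cos(t_grid)
assert realization.shape == t_grid.shape

# Fixing t gives an ordinary random variable (a section of the process).
section = v * np.cos(1.0)
assert section.shape == (n_outcomes,)
```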

Figure 1 shows several realizations of a random process. Let the section of this process for a given t be a continuous random variable. Then the random process X(t) for the given t is determined entirely by the probability density φ(x, t). Obviously, the density φ(x, t) is not an exhaustive description of the random process X(t), because it does not express the dependence between its sections at different moments of time.

The random process X(t) is the collection of all sections for all possible values of t; therefore, to describe it, it is necessary to consider the multidimensional random variable (X(t1), X(t2), …, X(tn)) composed of the sections of this process. In principle there are infinitely many such sections, but to describe a random process one can often get by with a relatively small number of them.

A random process is said to have order n if it is completely determined by the joint distribution density φ(x1, x2, …, xn; t1, t2, …, tn) of n arbitrary sections of the process, i.e. by the density of the n-dimensional random variable (X(t1), X(t2), …, X(tn)), where X(ti) is the section of the random process X(t) at the moment ti, i = 1, 2, …, n.

Like a random variable, a random process can be described by numerical characteristics. If for a random variable these characteristics are constant numbers, for a random process they are non-random functions.

The mathematical expectation of a random process X(t) is the non-random function a_x(t) which, for any value of the variable t, equals the mathematical expectation of the corresponding section of the random process X(t), i.e. a_x(t) = M[X(t)].

The variance of a random process X(t) is the non-random function D_x(t) which, for any value of the variable t, equals the variance of the corresponding section of the random process X(t), i.e. D_x(t) = D[X(t)].

The standard deviation σ_x(t) of a random process X(t) is the arithmetic value of the square root of its variance, i.e. σ_x(t) = √(D_x(t)).

The mathematical expectation of a random process characterizes the average trajectory of all its possible realizations, while its variance or standard deviation characterizes the spread of the realizations about the average trajectory.

The characteristics of a random process introduced above turn out to be insufficient, since they are determined only by the one-dimensional distribution law. Suppose the random process X1(t) is characterized by a slow change of the realization values with a change of t, while for the random process X2(t) this change occurs much faster. Then the random process X1(t) is characterized by a close probabilistic dependence between its two sections X1(t1) and X1(t2), while for the random process X2(t) the dependence between the sections X2(t1) and X2(t2) is practically absent. This dependence between sections is characterized by the correlation function.

Definition: The correlation function of a random process X(t) is the non-random function

K_x(t1, t2) = M[(X(t1) – a_x(t1))(X(t2) – a_x(t2))] (1)

of two variables t1 and t2, which for each pair t1, t2 equals the covariance of the corresponding sections X(t1) and X(t2) of the random process.

Obviously, for the random process X1(t) the correlation function K_x1(t1, t2) decreases as the difference t2 – t1 grows much more slowly than K_x2(t1, t2) does for the random process X2(t).

The correlation function K_x(t1, t2) characterizes not only the degree of closeness of the linear dependence between two sections, but also the spread of these sections about the mathematical expectation a_x(t). Therefore the normalized correlation function of a random process is also considered.

The normalized correlation function of a random process X(t) is the function

ρ_x(t1, t2) = K_x(t1, t2) / (σ_x(t1)σ_x(t2)). (2)

Example #1

A random process is defined by the formula X(t) = X cosωt, where X is a random variable. Find the main characteristics of this process if M(X) = a, D(X) = σ 2.

SOLUTION:

Based on the properties of mathematical expectation and dispersion, we have:

a_x(t) = M(X cos ωt) = cos ωt · M(X) = a cos ωt,

D_x(t) = D(X cos ωt) = cos²ωt · D(X) = σ² cos²ωt.

We find the correlation function using formula (1):

K_x(t1, t2) = M[(X cos ωt1 – a cos ωt1)(X cos ωt2 – a cos ωt2)] =
= cos ωt1 cos ωt2 · M[(X – a)(X – a)] = cos ωt1 cos ωt2 · D(X) = σ² cos ωt1 cos ωt2.

We find the normalized correlation function using formula (2):

ρ_x(t1, t2) = σ² cos ωt1 cos ωt2 / ((σ cos ωt1)(σ cos ωt2)) ≡ 1.
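The result of this example can be verified by simulation; the parameter values below are assumed only for the demonstration:

```python
import numpy as np

# Monte Carlo check of Example 1: for X(t) = X*cos(w*t) with D(X) = sigma^2
# the correlation function is K_x(t1, t2) = sigma^2*cos(w*t1)*cos(w*t2).
# All parameter values below are assumed for the demonstration.
rng = np.random.default_rng(2)
a, sigma, w = 1.0, 0.5, 2.0
t1, t2 = 0.3, 1.1
x = rng.normal(a, sigma, size=400_000)   # the random variable X

x_t1 = x * np.cos(w * t1)                # section X(t1)
x_t2 = x * np.cos(w * t2)                # section X(t2)

k_est = np.mean((x_t1 - x_t1.mean()) * (x_t2 - x_t2.mean()))
k_theory = sigma**2 * np.cos(w * t1) * np.cos(w * t2)
assert abs(k_est - k_theory) < 0.01
```

Dividing k_est by the sample standard deviations of the two sections likewise reproduces the normalized value ±1, consistent with ρ_x ≡ 1.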

Random processes can be classified depending on whether the states of the system in which they occur change smoothly or abruptly, whether the set of these states is finite (countable) or infinite, etc. Among random processes, a special place belongs to the Markov random process.

Theorem. A random process X(t) is a Hilbert process if and only if R(t, t′) = M[X(t)X(t′)] exists for all (t, t′) ∈ T×T.

The theory of Hilbert random processes is called correlation theory.

Note that the set T can be discrete or continuous. In the first case the random process X(t) is called a process with discrete time, in the second, a process with continuous time. Accordingly, the sections of X(t) can be discrete or continuous random variables.

A random process X(t) is called sample continuous, differentiable, or integrable at ω ∈ Ω if its realization x(t) = x(t, ω) is, respectively, continuous, differentiable, or integrable.

A random process X(t) is called continuous almost surely if

P(A) = 1, A = {ω ∈ Ω : lim x(tn) = x(t)};

continuous in the mean square if

lim M[(X(tn) – X(t))²] = 0;

and continuous in probability if

for every δ > 0: lim P[|X(tn) – X(t)| > δ] = 0.

Mean-square convergence is also denoted by

X(t) = lim X(tn).

It turns out that sample continuity implies continuity almost surely, while continuity almost surely and continuity in the mean square imply continuity in probability.

Theorem. If X(t) is a Hilbert random process continuous in the mean square, then m_x(t) is a continuous function and the relation

lim M[X(tn)] = M[lim X(tn)] = M[X(t)]

holds.

Theorem. A Hilbert random process X(t) is mean-square continuous if and only if its covariance function R(t, t′) is continuous at the point (t, t).

A Hilbert random process X(t) is called mean-square differentiable if there exists a random function X′(t) = dX(t)/dt such that

X′(t) = dX(t)/dt = lim (X(t + ∆t) – X(t))/∆t (t ∈ T, t + ∆t ∈ T),

i.e. when

lim M[((X(t + ∆t) – X(t))/∆t – X′(t))²] = 0 as ∆t → 0.

The random function X′(t) is called the mean-square derivative of the random process X(t) at the point t or on T, respectively.

Theorem. A Hilbert random process X(t) is mean-square differentiable at the point t if and only if

∂²R(t, t′)/∂t∂t′ exists at the point (t, t′). In this case

R_X′(t, t′) = M[X′(t)X′(t′)] = ∂²R(t, t′)/∂t∂t′.

If a Hilbert random process is differentiable on T, then its mean-square derivative is also a Hilbert random process; if the sample trajectories of the process are differentiable on T with probability 1, then with probability 1 their derivatives coincide with the mean-square derivatives on T.

Theorem. If X(t) is a mean-square differentiable Hilbert random process, then

M[X′(t)] = (d/dt)M[X(t)] = dm_x(t)/dt.

Let (0, t) be a finite interval, 0 = t0 < t1 < … < tn = t a partition of it, and X(t) a Hilbert random process. Put

Yn = Σ X(ti)(ti – t(i-1)) (n = 1, 2, …).

Then the mean-square limit of the random variables Yn as

max(ti – t(i-1)) → 0

is called the mean-square integral of the process X(t) on (0, t) and is denoted by

Y(t) = ∫ X(τ)dτ.

Theorem. The mean-square integral Y(t) exists if and only if the covariance function R(t, t′) of the Hilbert process X(t) is continuous on T×T and the integral

R_Y(t, t′) = ∫∫ R(τ, τ′)dτdτ′

exists.

If the mean-square integral of the function X(t) exists, then

M[Y(t)] = ∫ M[X(τ)]dτ,

R_Y(t, t′) = ∫∫ R(τ, τ′)dτdτ′,

K_Y(t, t′) = ∫∫ K(τ, τ′)dτdτ′.

Here R_Y(t, t′) and K_Y(t, t′) are the covariance and the correlation functions of the random process Y(t).

Theorem. Let X(t) be a Hilbert random process with covariance function R(t, t′), let φ(t) be a real function, and let the integral

∫∫ φ(t)φ(t′)R(t, t′)dtdt′

exist. Then the mean-square integral

∫ φ(t)X(t)dt

exists as well.

Random processes of the form

Xi(t) = Vi φi(t) (i = 1, …, n),

where the φi(t) are given real functions and the Vi are random variables with the characteristics

M(Vi) = 0, D(Vi) = Di, M(Vi Vj) = 0 (i ≠ j),

are called elementary.

The canonical expansion of a random process X(t) is its representation in the form

X(t) = m_x(t) + Σ Vi φi(t) (t ∈ T),

where the Vi are the coefficients and the φi(t) the coordinate functions of the canonical expansion of the process X(t).

From the relations above it follows that

K(t, t′) = Σ Di φi(t)φi(t′).

This formula is called the canonical expansion of the correlation function of the random process.

If the process X(t) admits the canonical expansion

X(t) = m_x(t) + Σ Vi φi(t) (t ∈ T),

then the following formulas hold:

X′(t) = m_x′(t) + Σ Vi φi′(t),

∫ x(τ)dτ = ∫ m_x(τ)dτ + Σ Vi ∫ φi(τ)dτ.

Thus, if a process X(t) is represented by its canonical expansion, its derivative and integral can also be represented as canonical expansions.
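The canonical expansion of the correlation function can be checked numerically. The sketch below assumes a two-term expansion with illustrative coordinate functions and variances:

```python
import numpy as np

# Check of the canonical expansion of the correlation function,
# K(t, t') = D1*phi1(t)*phi1(t') + D2*phi2(t)*phi2(t'), by Monte Carlo
# (illustrative sketch; the two-term expansion below is assumed).
rng = np.random.default_rng(3)
n = 300_000
d1, d2 = 1.0, 0.25
v1 = rng.normal(0.0, np.sqrt(d1), size=n)   # V1: M = 0, D = d1
v2 = rng.normal(0.0, np.sqrt(d2), size=n)   # V2: M = 0, D = d2, uncorrelated with V1

phi1, phi2 = np.cos, np.sin                  # coordinate functions (assumed)
m = lambda t: 2.0 + t                        # non-random m_x(t) (assumed)

def X(t):
    """Canonical expansion X(t) = m_x(t) + V1*phi1(t) + V2*phi2(t)."""
    return m(t) + v1 * phi1(t) + v2 * phi2(t)

t, tp = 0.4, 1.3
k_est = np.mean((X(t) - m(t)) * (X(tp) - m(tp)))
k_theory = d1 * phi1(t) * phi1(tp) + d2 * phi2(t) * phi2(tp)
assert abs(k_est - k_theory) < 0.01
```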

Markov random processes with discrete states

A random process occurring in a certain system S with possible states S1, S2, S3, … is called a Markov process, or a random process without aftereffect, if for any moment t0 the probabilistic characteristics of the process in the future (at t > t0) depend only on its state at the given moment t0 and do not depend on when and how the system came to this state, i.e. do not depend on its behavior in the past (at t < t0).

An example of a Markov process: system S is a taxi meter. The state of the system at the moment t is characterized by the number of kilometers (tenths of kilometers) traveled by the car up to this moment. Let the counter show S0 at the moment t0. The probability that at a moment t > t0 the counter will show this or that number of kilometers (more precisely, the corresponding number of rubles) S1 depends on S0, but does not depend on the moments at which the meter readings changed before the moment t0.

Many processes can be considered approximately Markovian. For example, the process of playing chess: system S is a group of chess pieces. The state of the system is characterized by the number of the opponent's pieces remaining on the board at the moment t0. The probability that at a moment t > t0 the material advantage will be on the side of one of the opponents depends primarily on the state of the system at the moment t0, and not on when and in what sequence pieces left the board before the moment t0.

In some cases, the prehistory of the processes under consideration can simply be neglected and Markov models can be used to study them.

A Markov random process with discrete states and discrete time (or a Markov chain) is a Markov process in which the possible states S1, S2, S3, … can be listed in advance, and the transition from state to state occurs instantly (by a jump), but only at certain moments t0, t1, t2, …, called the steps of the process.

Let us denote by p_ij the transition probability of the random process (of system S) from state i to state j. If these probabilities do not depend on the number of the step, the Markov chain is called homogeneous.

Let the number of states of the system be finite and equal to m. Then the chain can be characterized by the transition matrix P1, which contains all the transition probabilities:

p11 p12 … p1m
p21 p22 … p2m
…
pm1 pm2 … pmm

Naturally, for each row Σ_j p_ij = 1, i = 1, 2, …, m.

Let us denote by p_ij(n) the probability that as a result of n steps the system passes from state i to state j. For n = 1 we have the transition probabilities that form the matrix P1, i.e. p_ij(1) = p_ij.

Knowing the transition probabilities p_ij, it is required to find p_ij(n), the probabilities of transition of the system from state i to state j in n steps. To this end we consider an intermediate (between i and j) state r: we assume that from the initial state i the system passes in k steps to the intermediate state r with probability p_ir(k), after which in the remaining n – k steps it passes from the intermediate state r to the final state j with probability p_rj(n – k). Then, by the total probability formula,

p_ij(n) = Σ_r p_ir(k) p_rj(n – k) (the Markov equality).

Let us verify that, knowing all the transition probabilities p_ij = p_ij(1), i.e. the matrix P1 of transition from state to state in one step, one can find the probabilities p_ij(2), i.e. the matrix P2 of transition in two steps; knowing the matrix P2, one can find the matrix P3 of transition in three steps, and so on.

Indeed, putting n = 2 in the Markov equality p_ij(n) = Σ_r p_ir(k) p_rj(n – k), i.e. k = 1 (an intermediate state between the steps), we get

p_ij(2) = Σ_r p_ir(1) p_rj(2 – 1) = Σ_r p_ir p_rj.

The resulting equality means that P2 = P1·P1 = P1^2. Assuming n = 3, k = 2, we similarly obtain P3 = P1·P2 = P1·P1^2 = P1^3, and in the general case Pn = P1^n.
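The Markov equality and the relation P2 = P1^2 can be checked directly; the one-step matrix below is assumed for illustration:

```python
# Direct check of the Markov equality p_ij(2) = sum_r p_ir*p_rj, i.e.
# P_2 = P_1*P_1, for a small assumed one-step transition matrix.
def mat_mul(a, b):
    """Multiply two square matrices given as lists of rows."""
    n = len(a)
    return [[sum(a[i][r] * b[r][j] for r in range(n)) for j in range(n)]
            for i in range(n)]

p1 = [[0.9, 0.1],
      [0.4, 0.6]]          # assumed one-step matrix P_1

p2 = mat_mul(p1, p1)       # two-step matrix P_2 = P_1 * P_1

# rows of a transition matrix must still sum to 1
for row in p2:
    assert abs(sum(row) - 1.0) < 1e-12

# p_11(2) computed by the Markov equality by hand
assert abs(p2[0][0] - (0.9 * 0.9 + 0.1 * 0.4)) < 1e-12
```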

Example

The totality of families in a certain region can be divided into three groups:

1. families who do not have a car and do not intend to buy one;

2. families who do not have a car, but intend to purchase one;

3. families with a car.

The statistical survey carried out showed that the transition matrix for an interval of one year has the form

0.8 0.1 0.1
0   0.7 0.3
0   0   1

(In the matrix P1 the element p33 = 1 is the probability that a family that has a car will still have one; for example, the element p23 = 0.3 is the probability that a family that does not have a car but has decided to purchase one will fulfil its intention during the next year, etc.)

Find the probability that:

1. a family that did not have a car and was not planning to buy one will be in the same situation in two years;

2. a family that did not have a car, but intends to buy one, will have a car in two years.

SOLUTION: Let us find the transition matrix P2 = P1^2 after two years:

0.8 0.1 0.1   0.8 0.1 0.1   0.64 0.15 0.21
0   0.7 0.3 × 0   0.7 0.3 = 0    0.49 0.51
0   0   1     0   0   1     0    0    1

That is, the probabilities sought in parts 1) and 2) are, respectively,

p11(2) = 0.64, p23(2) = 0.51.
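The matrix computation of this example can be reproduced as follows:

```python
import numpy as np

# Reproducing the example: P_2 = P_1 @ P_1 for the one-year matrix.
p1 = np.array([[0.8, 0.1, 0.1],
               [0.0, 0.7, 0.3],
               [0.0, 0.0, 1.0]])

p2 = p1 @ p1
assert abs(p2[0, 0] - 0.64) < 1e-12   # part 1): stays in group 1 after two years
assert abs(p2[1, 2] - 0.51) < 1e-12   # part 2): group 2 family owns a car in two years
```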

Next we consider a Markov random process with discrete states and continuous time, in which, unlike the Markov chain considered above, the moments of possible transitions of the system from state to state are not fixed in advance but are random.

When analyzing random processes with discrete states it is convenient to use a geometric scheme, the so-called state graph. Usually the states of the system are depicted by rectangles (or circles), and the possible transitions from state to state by arrows (oriented arcs) connecting the states.

Example. Construct the state graph of the following random process: a device S consists of two units, each of which can fail at a random moment of time, after which the repair of the unit begins immediately, continuing for a previously unknown random time.

SOLUTION. Possible states of the system: S0, both units are operational; S1, the first unit is being repaired, the second is operational; S2, the second unit is being repaired, the first is operational; S3, both units are being repaired.

An arrow directed, for example, from S0 to S1 means the transition of the system at the moment of failure of the first unit; from S1 to S0, the transition at the moment of completion of the repair of this unit.

There are no arrows from S0 to S3 or from S1 to S2 on the graph. This is explained by the fact that the failures of the units are assumed to be independent of each other, and, for example, the probability of simultaneous failure of both units (transition from S0 to S3) or of simultaneous completion of the repairs of both units (transition from S3 to S0) can be neglected.

Stationary random processes

A random process X(t) is called stationary in the narrow sense if

F(x1, …, xn; t1, …, tn) = F(x1, …, xn; t1 + ∆, …, tn + ∆)

for arbitrary

n ≥ 1; x1, …, xn; t1, …, tn; ∆; ti ∈ T, ti + ∆ ∈ T.

Here F(x 1, …, x n; t 1, …, t n) is the n-dimensional distribution function of the random process X(t).

A random process X(t) is called stationary in the broad sense if

m(t) = m(t + ∆), K(t, t′) = K(t + ∆, t′ + ∆)

(t ∈ T, t′ ∈ T, t + ∆ ∈ T, t′ + ∆ ∈ T).

It is obvious that stationarity in the narrow sense implies stationarity in the broad sense.

From these formulas it follows that for a process stationary in the broad sense one can write

m_x(t) = m_x(0) = const;

D_x(t) = K(t, t) = K(0, 0) = const;

K(t, t′) = K(t – t′, 0) = K(0, t′ – t).

Thus, for a process stationary in the broad sense, the mathematical expectation and the variance do not depend on time, and K(t, t′) is a function of the form

K(t, t′) = k(τ), τ = t′ – t.

It is seen that k(τ) is an even function, and the following relations hold:

k(0) = D = σ²; |k(τ)| ≤ k(0); Σ Σ αi αj k(ti – tj) ≥ 0.

Here D is the variance of the stationary process X(t), and the αi (i = 1, …, n) are arbitrary numbers.

The first equality of this system follows from the relation K(t, t′) = k(τ) = k(–τ), τ = t′ – t. The second, |k(τ)| ≤ k(0), is a simple consequence of the Schwarz inequality for the sections X(t), X(t′) of the stationary random process X(t). The last inequality is obtained as follows:

Σ Σ αi αj k(ti – tj) = Σ Σ K(ti, tj)αi αj = Σ Σ M[(αi Xi)(αj Xj)] = M[(Σ αi Xi)²] ≥ 0.

Taking into account the formula for the correlation function of the derivative dX(t)/dt of a random process, for a stationary random function X(t) we obtain

K1(t, t′) = M[(dX(t)/dt)(dX(t′)/dt′)] = ∂²K(t, t′)/∂t∂t′ = ∂²k(t′ – t)/∂t∂t′.

Since

∂k(t′ – t)/∂t = (dk(τ)/dτ)(∂τ/∂t) = –dk(τ)/dτ,

∂²k(t′ – t)/∂t∂t′ = –(d²k(τ)/dτ²)(∂τ/∂t′) = –d²k(τ)/dτ²,

it follows that K1(t, t′) = k1(τ) = –d²k(τ)/dτ², τ = t′ – t.

Here K1(t, t′) and k1(τ) denote the correlation function of the first derivative of the stationary random process X(t).

For the n-th derivative of a stationary random process, the formula of the correlation function has the form

kn(τ) = (–1)^n d^(2n)k(τ)/dτ^(2n).

Theorem. A stationary random process X(t) with correlation function k(τ) is mean-square continuous at a point t ∈ T if and only if

lim k(τ) = k(0) as τ → 0.

To prove it, let us write down an obvious chain of equalities:

M[|X(t+τ) – X(t)|²] = M[|X(t)|²] – 2M[X(t+τ)X(t)] + M[|X(t+τ)|²] = 2D – 2k(τ) = 2[k(0) – k(τ)].

Hence it is obvious that the condition of mean-square continuity of the process X(t) at the point t ∈ T,

lim M[|X(t+τ) – X(t)|²] = 0 as τ → 0,

holds if and only if lim k(τ) = k(0).

Theorem. If the correlation function k(τ) of a stationary random process X(t) is continuous at the point τ = 0, then it is continuous at every point τ ∈ R¹.

To prove it, we write down the obvious equalities:

k(τ+∆τ) – k(τ) = M[X(t+τ+∆τ)X(t)] – M[X(t+τ)X(t)] = M{X(t)[X(t+τ+∆τ) – X(t+τ)]}.

Then, applying the Schwarz inequality to the factors in the braces and taking into account the relations

K(t, t′) = k(τ) = k(–τ), τ = t′ – t; k(0) = D = σ²; |k(τ)| ≤ k(0),

we get

0 ≤ [k(τ+∆τ) – k(τ)]² ≤ M[|X(t)|²] M[|X(t+τ+∆τ) – X(t+τ)|²] = 2D[k(0) – k(∆τ)].

Passing to the limit as ∆τ → 0 and taking into account the condition of the theorem on the continuity of k(τ) at the point τ = 0, as well as the first equality of the system, k(0) = D = σ², we find

lim k(τ+∆τ) = k(τ) as ∆τ → 0.

Since τ is an arbitrary number here, the theorem is proved.

Ergodic property of stationary random processes

Let X(t) be a stationary random process on an interval of time [0, T] with the characteristics

M[X(t)] = 0, K(t, t′) = M[X(t)X(t′)] = k(τ), τ = t′ – t, (t, t′) ∈ T×T.

The ergodic property of a stationary random process is that from one sufficiently long realization of the process one can judge its mathematical expectation, variance, and correlation function.

More strictly, a stationary random process X(t) is called ergodic in mathematical expectation if

lim M{|(1/T)∫ X(t)dt|²} = 0 as T → ∞.

Theorem.

A stationary random process X(t) with the characteristics

M[X(t)] = 0, K(t, t′) = M[X(t)X(t′)] = k(τ), τ = t′ – t, (t, t′) ∈ T×T

is ergodic in mathematical expectation if and only if

lim (2/T) ∫ k(τ)(1 – τ/T)dτ = 0 as T → ∞.

To prove it, it is obviously enough to verify that the equality

M{|(1/T)∫ X(t)dt|²} = (2/T)∫ k(τ)(1 – τ/T)dτ

is true. Let us write down the obvious relations

C = M{|(1/T)∫ X(t)dt|²} = (1/T²)∫∫ k(t′ – t)dt′dt = (1/T²)∫ dt ∫ k(t′ – t)dt′.

Setting here τ = t′ – t, dτ = dt′ and taking into account the conditions (t′ = T) → (τ = T – t), (t′ = 0) → (τ = –t), we get an inner integral taken from –t to T – t. Splitting it into the parts from –t to 0 and from 0 to T – t, and substituting τ = –τ′, dτ = –dτ′ in the first part and τ = T – τ′, dτ = –dτ′ in the second, we find

C = (1/T²)∫ dt ∫ k(τ)dτ + (1/T²)∫ dt ∫ k(T – τ)dτ.

Applying Dirichlet's formula for double integrals, we write

C = (1/T²)∫ (T – τ)k(τ)dτ + (1/T²)∫ τ k(T – τ)dτ.

In the second term on the right we can put τ′ = T – τ, dτ = –dτ′, after which we obtain

C = (2/T)∫ k(τ)(1 – τ/T)dτ.

From this and from the definition of C it is clear that the equality

M{|(1/T)∫ X(t)dt|²} = (2/T)∫ k(τ)(1 – τ/T)dτ

is true.

Theorem.

If the correlation function k(τ) of a stationary random process X(t) satisfies the condition

lim (1/T)∫ |k(τ)|dτ = 0 as T → ∞,

then X(t) is ergodic in mathematical expectation.

Indeed, in view of the relation

M{|(1/T)∫ X(t)dt|²} = (2/T)∫ k(τ)(1 – τ/T)dτ,

we can write

0 ≤ (2/T)∫ (1 – τ/T)k(τ)dτ ≤ (2/T)∫ (1 – τ/T)|k(τ)|dτ ≤ (2/T)∫ |k(τ)|dτ.

From this it is clear that if the condition of the theorem is satisfied, then

lim (2/T)∫ (1 – τ/T)k(τ)dτ = 0 as T → ∞.

Now, taking into account the equality

C = (2/T)∫ (1 – τ/T)k(τ)dτ

and the condition lim M{|(1/T)∫ X(t)dt|²} = 0 of ergodicity in mathematical expectation of the stationary random process X(t), we find that the required assertion is proved.

Theorem.

If the correlation function k(τ) of a stationary random process X(t) is integrable and decreases without limit as τ → ∞, i.e. for arbitrary ε > 0 there exists a T₀ = T₀(ε) such that

|k(τ)| ≤ ε for all τ ≥ T₀,

then X(t) is a stationary random process ergodic in mathematical expectation.

Indeed, given this condition, for T ≥ T₀ we have

(1/T) ∫_0^T |k(τ)| dτ = (1/T) [∫_0^{T₀} |k(τ)| dτ + ∫_{T₀}^T |k(τ)| dτ] ≤ (1/T) ∫_0^{T₀} |k(τ)| dτ + ε(1 − T₀/T).

Passing to the limit as T → ∞, we find

0 ≤ lim_{T→∞} (1/T) ∫_0^T |k(τ)| dτ ≤ ε.

Since here ε > 0 is arbitrary and can be made as small as we please, it follows that lim_{T→∞} (1/T) ∫_0^T |k(τ)| dτ = 0, i.e. the condition of the preceding theorem is satisfied. Since this follows from the unlimited decrease of k(τ) alone, the theorem should be considered proven.

The proven theorems establish constructive criteria for the ergodicity of stationary random processes.
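As a numerical illustration of these criteria (not from the text; the correlation function and its parameters are assumed), take the common exponential model k(τ) = σ²e^(−ατ). For it, (1/T) ∫_0^T |k(τ)| dτ = σ²(1 − e^(−αT))/(αT) → 0 as T → ∞, so the sufficient condition of the first theorem holds:

```python
import numpy as np

# Assumed exponential correlation function k(tau) = sigma2 * exp(-alpha * tau)
sigma2, alpha = 2.0, 0.5

def criterion(T, n=100_000):
    # (1/T) * integral_0^T |k(tau)| dtau, midpoint rule on a uniform grid
    tau = (np.arange(n) + 0.5) * (T / n)
    return float(np.mean(sigma2 * np.exp(-alpha * tau)))

vals = [criterion(T) for T in (10.0, 100.0, 1000.0)]
# The criterion decreases toward 0 and matches sigma2*(1 - exp(-alpha*T))/(alpha*T)
assert vals[0] > vals[1] > vals[2]
assert abs(vals[2] - sigma2 / (alpha * 1000.0)) < 1e-4
```

For large T the value behaves like σ²/(αT), which is exactly the 1/T decay the criterion requires.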

Consider a stationary random process of the form

X(t) = m + X̊(t), m = const,

where X̊(t) is the centered process. Then M[X(t)] = m, and if X̊(t) is an ergodic stationary random process, then the ergodicity condition lim_{T→∞} M{|(1/T) ∫_0^T X̊(t) dt|²} = 0 after simple transformations can be represented as

lim_{T→∞} M{[(1/T) ∫_0^T X(t) dt − m]²} = 0.

It follows that if X(t) is a stationary random process ergodic in mathematical expectation, then its mathematical expectation can be approximately calculated using the formula

m ≈ (1/T) ∫_0^T x(t) dt.

Here T is a sufficiently long period of time;

x(t) is a realization of the process X(t) on the interval [0, T].
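A minimal numerical sketch of this time-average estimate, using a discretized AR(1) recursion as a stand-in for an ergodic stationary process (the model and all parameter values are illustrative assumptions, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
m, a, n = 3.0, 0.9, 200_000      # assumed mean, AR(1) coefficient, sample count

# x[i] - m follows a centered AR(1) process, which is ergodic in the mean
e = rng.normal(size=n)
x = np.empty(n)
x[0] = m
for i in range(1, n):
    x[i] = m + a * (x[i - 1] - m) + e[i]

m_hat = x.mean()                 # discrete analogue of (1/T) * integral of x(t) dt
assert abs(m_hat - m) < 0.1      # the time average approaches the ensemble mean m
```

The longer the realization, the closer the time average comes to m, which is exactly the content of the ergodicity condition above.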

We can consider the ergodicity of a stationary random process X(t) with respect to the correlation function.

A stationary random process X(t) is called ergodic in correlation function if

lim_{T→∞} M{[(1/T) ∫_0^T X̊(t) X̊(t + τ) dt − k(τ)]²} = 0,

where X̊(t) is the centered process.

It follows that for a stationary random process X(t) that is ergodic in correlation function we can set

k(τ) ≈ (1/T) ∫_0^T x̊(t) x̊(t + τ) dt, where x̊(t) is a centered realization,

at a sufficiently large T.
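A similar sketch for the correlation function, again with an assumed AR(1) test signal whose theoretical correlation function k(j) = a^j / (1 − a²) is known in closed form:

```python
import numpy as np

rng = np.random.default_rng(1)
a, n = 0.8, 200_000                  # assumed AR(1) coefficient and sample count

e = rng.normal(size=n)
x = np.zeros(n)
for i in range(1, n):
    x[i] = a * x[i - 1] + e[i]

xc = x - x.mean()                    # centered realization

def k_hat(lag):
    # discrete analogue of (1/T) * integral of xc(t) * xc(t + tau) dt
    return float(np.mean(xc[:n - lag] * xc[lag:]))

sigma2 = 1.0 / (1.0 - a * a)         # theoretical k(0) for this AR(1) model
assert abs(k_hat(0) - sigma2) < 0.1
assert abs(k_hat(5) - sigma2 * a**5) < 0.1
```

One long realization thus recovers the whole function k(τ), lag by lag, which is what ergodicity in correlation function guarantees.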

It turns out that the condition of boundedness of k(τ) is sufficient for a stationary normally distributed process X(t) to be ergodic in correlation function.

Note that a random process is called normally distributed if all of its finite-dimensional distribution functions are normal.

A necessary and sufficient condition for the ergodicity in correlation function of a stationary normally distributed random process is that, for every τ₀,

lim_{T→∞} (1/T) ∫_0^T (1 − τ/T) [k²(τ) + k(τ + τ₀) k(τ − τ₀)] dτ = 0.



When a random process is considered as a system of three, four, or more random variables, difficulties arise in expressing its distribution laws analytically. Therefore, in a number of cases one restricts attention to characteristics of a random process analogous to the numerical characteristics of random variables.

The characteristics of a random process, in contrast to the numerical characteristics of random variables, are non-random functions. Among them, the mathematical expectation and variance functions of a random process, as well as its correlation function, are widely used to evaluate a random process.

The mathematical expectation of a random process X(t) is a non-random function m_x(t) which, for each value of the argument t, is equal to the mathematical expectation of the corresponding section of the random process:

m_x(t) = M[X(t)].

From the definition of the mathematical expectation of a random process it follows that if the one-dimensional probability density f₁(x, t) is known, then

m_x(t) = ∫_{−∞}^{+∞} x f₁(x, t) dx. (6.3)

A random process X(t) can be represented as a sum of elementary random functions

X(t) = Σ_k V_k φ_k(t),

where each V_k φ_k(t) is an elementary random function (V_k an ordinary random variable, φ_k(t) a non-random function). For such a representation

m_x(t) = Σ_k m_{v_k} φ_k(t). (6.4)

If many realizations of a random process X(t) are given, then for a graphical representation of the mathematical expectation a series of sections is drawn; in each of them the corresponding mathematical expectation (mean value) is found, and a curve is then drawn through these points (Fig. 6.3).

Figure 6.3 – Graph of the mathematical expectation function

The more sections are made, the more accurately the curve will be constructed.

The mathematical expectation of a random process is thus a non-random function around which the realizations of the random process are grouped.

If the realizations of a random process are currents or voltages, then the mathematical expectation is interpreted as the average value of the current or voltage.
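The section-by-section construction of Fig. 6.3 can be sketched as follows; the process X(t) = V·sin t + noise, with random amplitude V, and all numerical values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 2 * np.pi, 50)            # the sections t_1, ..., t_50
n_real = 5_000                                 # number of realizations

V = rng.normal(2.0, 0.3, size=(n_real, 1))     # random amplitude, M[V] = 2
X = V * np.sin(t) + 0.1 * rng.normal(size=(n_real, t.size))

m_hat = X.mean(axis=0)                         # one average per section
m_true = 2.0 * np.sin(t)                       # m_x(t) = M[V] * sin t
assert np.max(np.abs(m_hat - m_true)) < 0.05   # the averages trace the m_x(t) curve
```

Plotting m_hat against t would reproduce the smooth curve of Fig. 6.3; more realizations per section make the estimate more accurate.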

The variance of a random process X(t) is a non-random function D_x(t) that, for each value of the argument t, is equal to the variance of the corresponding section of the random process:

D_x(t) = D[X(t)] = M[(X(t) − m_x(t))²].

From the definition of the variance of a random process it follows that if the one-dimensional probability density f₁(x, t) is known, then

D_x(t) = ∫_{−∞}^{+∞} (x − m_x(t))² f₁(x, t) dx, or D_x(t) = M[X²(t)] − m_x²(t). (6.5)

If a random process is represented in the form X(t) = Σ_k V_k φ_k(t) with mutually uncorrelated V_k, then

D_x(t) = Σ_k D_{v_k} φ_k²(t). (6.6)

The variance of a random process characterizes the spread (scatter) of its realizations relative to the mathematical expectation function.

If the realizations of a random process are currents or voltages, then the variance is interpreted as the difference between the power of the entire process and the power of its average component in a given section, i.e.

D_x(t) = M[X²(t)] − m_x²(t). (6.7)

In some cases, instead of the variance of a random process, the standard deviation of the random process is used:

σ_x(t) = √D_x(t).

The mathematical expectation and variance of a random process make it possible to identify the type of average function around which the realizations of a random process are grouped, and to estimate their spread relative to this function. However, the internal structure of the random process, i.e. the nature and degree of dependence (connection) between different sections of the process, remains unknown (Fig. 6.4).

Figure 6.4 – Realizations of random processes X(t) and Y(t)

To characterize the connection between sections of a random process, the concept of a mixed second-order moment function is introduced: the correlation function.

The correlation function of a random process X(t) is a non-random function K_x(t₁, t₂) which, for each pair of values t₁, t₂, is equal to the correlation moment of the corresponding sections of the random process:

K_x(t₁, t₂) = M[X̊(t₁) X̊(t₂)], where X̊(t₁) = X(t₁) − m_x(t₁), X̊(t₂) = X(t₂) − m_x(t₂).

The connection (see Fig. 6.4) between sections of the random process X(t) is stronger than between sections of the random process Y(t), i.e.

K_x(t₁, t₂) > K_y(t₁, t₂).

From the definition it follows that if the two-dimensional probability density f₂(x₁, x₂; t₁, t₂) of the random process X(t) is given, then

K_x(t₁, t₂) = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} (x₁ − m_x(t₁))(x₂ − m_x(t₂)) f₂(x₁, x₂; t₁, t₂) dx₁ dx₂.

The correlation function is the set of correlation moments of two random variables taken at the moments t₁ and t₂, considered for all possible pairs of values of the argument t of the random process. Thus, the correlation function characterizes the statistical relationship between instantaneous values at different points in time.

Properties of the correlation function.

1) If t₁ = t₂ = t, then K_x(t, t) = D_x(t). Consequently, the variance of a random process is a special case of the correlation function.
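Property 1) can be checked numerically on an elementary random function X(t) = V·φ(t), for which K_x(t₁, t₂) = D_v φ(t₁)φ(t₂); the choice φ(t) = cos t and the value of D_v are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
phi = lambda t: np.cos(t)                        # assumed non-random factor
D_v = 4.0
V = rng.normal(0.0, np.sqrt(D_v), size=100_000)  # centered random amplitude

t1, t2 = 0.5, 1.3
K12 = float(np.mean((V * phi(t1)) * (V * phi(t2))))  # ensemble correlation moment
assert abs(K12 - D_v * phi(t1) * phi(t2)) < 0.1      # K(t1,t2) = D_v*phi(t1)*phi(t2)

Ktt = float(np.mean((V * phi(t1)) ** 2))             # K(t, t) at t = t1 ...
assert abs(Ktt - D_v * phi(t1) ** 2) < 0.1           # ... equals the variance D_x(t1)
```

Setting t₁ = t₂ collapses the correlation moment to the section variance, as property 1) states.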

Random (stochastic) processes include external noise, fluctuation noise at the output of the discriminator and of other RAS devices, and internal disturbances in a RAS: instability of the PG frequency, instability of adjustable time-delay devices, etc.

The study of a RAS under random influences can, in principle, be carried out using conventional methods, determining the quality parameters of the RAS at the most unfavorable (maximum) values of the disturbance (worst case).

However, since the maximum value of a random variable is improbable and will be observed rarely, deliberately stringent requirements would be imposed on the RAS. More rational decisions can be obtained by considering the most likely value of the random variable.

The distribution law of the fluctuation components in a linear RAS can be considered normal (Gaussian). The normal distribution law is characteristic of internal disturbances. When a random process passes through a linear system, the normal distribution law remains unchanged. If at the input of the RAS, or at any other point (for example, at the output of the discriminator), there is a disturbance with a distribution law different from normal and having a wide spectrum S(ω), this perturbation is effectively normalized by the narrowband filter elements of the RAS.

A random process with a normal distribution law is completely determined by its mathematical expectation m(t) and correlation function R(τ).

The mathematical expectation of a random process x(t) is a regular function m_x(t) around which all realizations of this process are grouped (f₁(x, t) is the one-dimensional probability density). It is also called the average value over the set (ensemble):

m_x(t) = M{x(t)} = ∫_{−∞}^{+∞} x f₁(x, t) dx. (6.1)

A random process x̊(t) = x(t) − m_x(t) without the regular component m_x(t) is called centered.

To take into account the degree of scattering of a random process relative to its average value m_x(t), the concept of variance is introduced:

D_x(t) = M{(x̊(t))²} = ∫_{−∞}^{+∞} (x − m_x(t))² f₁(x, t) dx. (6.2)

The average value of the square of a random process is related to its expectation m_x(t) and variance D_x(t) by the formula M{x²(t)} = m_x²(t) + D_x(t).

In practice, it is convenient to evaluate a random process using the statistical characteristics x_rms(t) and σ_x(t), which have the same dimension as the process itself.

RMS value x_rms(t) of a random process:

x_rms(t) = √(m_x²(t) + D_x(t)). (6.3)

Standard deviation σ_x(t) of a random process:

σ_x(t) = √D_x(t). (6.4)

Expectation and variance do not provide sufficient insight into the nature of individual realizations of a random process. In order to take into account the degree of variability of a process, or the relationship between its values at different points in time, the concept of the correlation (autocorrelation) function is introduced.

The correlation function of a centered process x̊(t) is equal to

R(t₁, t₂) = M{x̊(t₁) x̊(t₂)} = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} x̊₁ x̊₂ f₂(x₁, x₂; t₁, t₂) dx₁ dx₂,

where x̊ᵢ = xᵢ − m_x(tᵢ) and f₂(x₁, x₂; t₁, t₂) is the two-dimensional probability density.

The correlation function is even: R(τ) = R(−τ).

If the distribution functions and probability densities of a process do not change when all time arguments are shifted by the same amount, such a random process is called stationary.

If for a stationary process the average over the set (ensemble) and the average over time coincide, such a random process is called ergodic.

Knowing R(τ), we can determine the variance of a stationary process: D_x = R(0).

The spectral density S_y(ω) of the output process y(t) of a linear system and the spectral density S_λ(ω) of the input influence are related by the relation

S_y(ω) = |W(iω)|² S_λ(ω), (6.7)

where W(iω) is the frequency transfer function of the system.

The correlation function R(τ) of a stationary random process and its spectral density S(ω) are related by the Fourier transform, so the analysis is often carried out in the frequency domain. Performing the inverse Fourier transform for (6.7), we obtain an expression for the correlation function of the output process R_y(τ):

R_y(τ) = (1/2π) ∫_{−∞}^{+∞} S_y(ω) e^{iωτ} dω. (6.8)

The spectral densities S_y(ω) and S_λ(ω) are bilateral, defined for −∞ < ω < +∞.

One can also introduce the one-sided spectral density N(f), which is defined only for positive frequencies (f ≥ 0).

Given the parity of R(τ) and Euler's formulas, (6.8) can be simplified:

R(τ) = (1/π) ∫_0^∞ S(ω) cos ωτ dω. (6.9)
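A sketch of formula (6.9) at work, assuming the standard bilateral density S(ω) = 2Dα/(α² + ω²), whose cosine transform is known in closed form to be R(τ) = D·e^(−α|τ|) (D and α are illustrative values):

```python
import numpy as np

D, alpha = 1.5, 2.0                  # assumed variance and bandwidth parameter
n, w_max = 1_000_000, 500.0          # truncate the infinite frequency integral
dw = w_max / n
w = (np.arange(n) + 0.5) * dw        # midpoint grid on [0, w_max]
S = 2.0 * D * alpha / (alpha**2 + w**2)

def R(tau):
    # R(tau) = (1/pi) * integral_0^inf S(w) * cos(w*tau) dw, truncated at w_max
    return float(np.sum(S * np.cos(w * tau)) * dw / np.pi)

assert abs(R(0.0) - D) < 1e-2                   # R(0) = D, the process variance
assert abs(R(1.0) - D * np.exp(-alpha)) < 1e-2  # matches D * exp(-alpha * tau)
```

The first assertion also illustrates the relation D_x = R(0) for a stationary process.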

The quality of RAS operation with respect to random signals and interference is characterized by the total root-mean-square error (RMSE).

Let us consider a generalized RAS, the diagram of which is shown in Fig. 2.11. We consider the influence λ(t) deterministic, and the disturbance ξ(t) at the output of the discriminator a random process. Using formulas (2.28)–(2.31), we determine the transfer functions for the error with respect to the influence and the disturbance.

In general, a correlation (connection) may exist between the influence and disturbance processes. In this case, in addition to the autocorrelation functions of the form (6.8) for each of the processes, the cross-correlation functions of the processes relative to each other must be taken into account. In terms of spectral densities, this connection enters the error expression as follows:

After substituting expression (6.11) into formula (6.8), we obtain the corresponding variance components:

If there is no correlation between the processes, then S_λx(ω) = S_xλ(ω) = 0 and also D_λx = D_xλ = 0, and formula (6.12) is simplified.

The mathematical expectation of the error x(t) is determined in the same way as in the steady-state regime.

If the spectral density S_x(ω) is described by a fractional rational function of ω, then to calculate D_x it is represented in the form

D_x = (1/2π) ∫_{−∞}^{+∞} B(iω) / (A(iω) A(−iω)) dω, (6.14)

where B(iω) is a polynomial containing even degrees of iω up to 2n−2 inclusive, and A(iω) is a polynomial of degree n whose roots lie in the upper half-plane of the complex variable ω.

Integrals of the form (6.14) can be calculated using formula (6.15), where D_n is the leading Hurwitz determinant of the form (4.7), composed of the coefficients a_j, and Q_n is a determinant of the same type as D_n in which the coefficients a_j in the first row are replaced by b_j.

For the integral (6.15) there are tables of values for n ≤ 7.

The values for n ≤ 4 are given by explicit formulas.
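These tabulated values are easy to sanity-check for the simplest case: for n = 1 (standard form B = b₀, A(iω) = a₀iω + a₁), the integral evaluates in closed form to I₁ = b₀/(2a₀a₁), which direct numerical integration confirms; the coefficient values below are arbitrary:

```python
import numpy as np

a0, a1, b0 = 0.5, 2.0, 3.0              # arbitrary positive coefficients
n, w_max = 2_000_000, 20_000.0          # truncate the infinite integral
dw = 2.0 * w_max / n
w = -w_max + (np.arange(n) + 0.5) * dw  # midpoint grid on [-w_max, w_max]

# I_1 = (1/2pi) * integral of b0 / |a0*i*w + a1|^2 dw
integrand = b0 / (a0**2 * w**2 + a1**2)
I1_num = float(np.sum(integrand) * dw / (2.0 * np.pi))

assert abs(I1_num - b0 / (2.0 * a0 * a1)) < 1e-3  # closed form b0/(2*a0*a1)
```

The same numerical approach can serve as a cross-check when applying the higher-order table entries.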

Example 6.1. Let us determine the standard deviation of the PLL system from Example 4.2.

Let the signal be λ(t) = 1 + 0.1t, and the disturbance ξ(t) white noise with level N₀ = 1 mV.

The error coefficients for this RAS have already been found in Example 5.1.

.

For the transfer function of the error with respect to the disturbance, from formula (2.30) after the change of variables p → iω we get (K₁ = S_d, k₀ = k₁S_d, k₁ = k_f k_i):

After substituting formula (6.17) into (6.13) (D_λ = 0) we get:

Comparing (6.18) with expression (6.14), we find the order and coefficients of the polynomials in (6.14): n = 3, b₂ = 0, b₁ = −(T₂)², b₀ = 1; a₃ = T_f T_d, a₂ = T_f + T_d, a₁ = 1 + k₀T₂, a₀ = k₀.

After substituting the numerical values, the result is:

m_x = 5×10⁻⁴ s⁻¹, D_x = 1.06×10⁻³ s⁻² (at k₀ = 200, S_d = 10, k₁ = 20), or

m_x = 5×10⁻⁴ s⁻¹, D_x = 0.66 s⁻² (at k₀ = 200, S_d = 0.4, k₁ = 500).

From (6.3), (6.4) it follows that x_rms ≈ σ_x = 0.032 s⁻¹ at S_d = 10, and at S_d = 0.4, x_rms ≈ σ_x = 0.81 s⁻¹.

Example 6.2. Let us determine the RMS deviation of the RAS from Example 4.5 for the same signals: λ(t) = 1 + 0.1t and white-noise disturbance ξ(t) with N₀ = 1 mV; λ′(t) = λ₁, λ″(t) = 0.

We find the error coefficients for the given RAS using formula (5.19):

ν = 0, d₁ = 0, d₀ = S_d, b₃ = T₁T₂T₃, b₂ = T₁T₂ + T₂T₃ + T₁T₃, b₁ = T₁ + T₂ + T₃, b₀ = 1.

From formulas (5.19)–(5.22) we obtain

For the transfer function of the error with respect to the disturbance, from formula (2.30) after replacing the variables p → iω in (6.20) we obtain:

After substituting formula (6.20) into (6.13) (D_λ = 0) we obtain:

Comparing (6.21) with expression (6.14), we find the coefficients of the polynomials in (6.14): n = 3, b₂ = b₁ = 0, b₀ = 1; a₃ = T₁T₂T₃, a₂ = T₁T₂ + T₂T₃ + T₁T₃, a₁ = T₁ + T₂ + T₃, a₀ = S_d + 1.

After substitution into formula (6.16) and transformations, we obtain:

After substituting the numerical values, the result is:

m_x = (9.2 + 0.9t)×10⁻², D_x = 4.2×10⁻⁴.

6.2. Graphic-analytical method for determining the variance.
