Linear dependence and linear independence of vectors. Basis of vectors. Affine coordinate system. Basis of the vector system

Lectures on algebra and geometry. Semester 1.

Lecture 9. Basis of vector space.

Summary: system of vectors; linear combination of a system of vectors; coefficients of a linear combination; basis on a line, in a plane, and in space; dimensions of the vector spaces on a line, in a plane, and in space; decomposition of a vector with respect to a basis; coordinates of a vector relative to a basis; the theorem on the equality of two vectors; linear operations with vectors in coordinate form; orthonormal triple of vectors; right and left triples of vectors; orthonormal basis; the fundamental theorem of vector algebra.

Chapter 9. Basis of a vector space and decomposition of a vector with respect to the basis.

Section 1. Basis on a line, in a plane, and in space.

Definition. Any finite set of vectors is called a system of vectors.

Definition. An expression of the form $\alpha_1\vec{a}_1 + \alpha_2\vec{a}_2 + \dots + \alpha_n\vec{a}_n$, where $\alpha_1, \dots, \alpha_n \in \mathbb{R}$, is called a linear combination of the system of vectors $\vec{a}_1, \vec{a}_2, \dots, \vec{a}_n$, and the numbers $\alpha_1, \dots, \alpha_n$ are called the coefficients of this linear combination.

Let L, P and S be a line, a plane and the space of points, respectively, and let $V_1$, $V_2$, $V_3$ denote the corresponding vector spaces of directed segments on the line L, in the plane P and in the space S.

Definition. A basis of the vector space $V_1$ is any nonzero vector of this space, i.e. any nonzero vector collinear to the line L: $\vec{e} \ne \vec{0}$ and $\vec{e} \parallel L$.

Notation: $\{\vec{e}\}$ — a basis of $V_1$.

Definition. A basis of the vector space $V_2$ is any ordered pair of noncollinear vectors of this space.

$\{\vec{e}_1, \vec{e}_2\}$, where $\vec{e}_1 \nparallel \vec{e}_2$, — a basis of $V_2$.

Definition. A basis of the vector space $V_3$ is any ordered triple of non-coplanar vectors (that is, vectors not lying in one plane) of this space.

$\{\vec{e}_1, \vec{e}_2, \vec{e}_3\}$ — a basis of $V_3$.

Remark. A basis of a vector space cannot contain the zero vector: in the space $V_1$ this holds by definition; in the space $V_2$ two vectors are collinear whenever at least one of them is zero; in the space $V_3$ three vectors are coplanar, i.e. lie in one plane, whenever at least one of the three is zero.

Section 2. Decomposition of a vector with respect to a basis.

Definition. Let $\vec{x}$ be an arbitrary vector and $\vec{a}_1, \vec{a}_2, \dots, \vec{a}_n$ an arbitrary system of vectors. If the equality

$\vec{x} = \alpha_1\vec{a}_1 + \alpha_2\vec{a}_2 + \dots + \alpha_n\vec{a}_n \qquad (1)$

holds, then one says that the vector $\vec{x}$ is represented as a linear combination of the given system of vectors. If the given system of vectors is a basis of the vector space, then equality (1) is called the decomposition of the vector $\vec{x}$ with respect to the basis. The coefficients $\alpha_1, \dots, \alpha_n$ of the linear combination are called in this case the coordinates of the vector $\vec{x}$ relative to the basis.

Theorem. (On the decomposition of a vector with respect to a basis.)

Any vector of a vector space can be decomposed with respect to its basis, and moreover in a unique way.

Proof. 1) Let L be an arbitrary line (or axis) and $\{\vec{e}\}$ a basis of $V_1$. Take an arbitrary vector $\vec{x} \in V_1$. Since both vectors $\vec{x}$ and $\vec{e}$ are collinear to the same line L, we have $\vec{x} \parallel \vec{e}$. Let us use the theorem on the collinearity of two vectors: since $\vec{e} \ne \vec{0}$, there exists a number $\alpha$ such that $\vec{x} = \alpha\vec{e}$, and thus we have obtained the decomposition of the vector $\vec{x}$ with respect to the basis $\{\vec{e}\}$ of the vector space $V_1$.

Now let us prove the uniqueness of such a decomposition. Assume the contrary: let there be two decompositions of the vector $\vec{x}$ with respect to the basis $\{\vec{e}\}$ of the vector space $V_1$:

$\vec{x} = \alpha\vec{e}$ and $\vec{x} = \beta\vec{e}$, where $\alpha \ne \beta$. Then, using the law of distributivity, we get

$\vec{0} = \vec{x} - \vec{x} = \alpha\vec{e} - \beta\vec{e} = (\alpha - \beta)\vec{e}.$

Since $\alpha - \beta \ne 0$, the last equality implies $\vec{e} = \vec{0}$, which contradicts $\vec{e} \ne \vec{0}$, q.e.d.

2) Now let P be an arbitrary plane and $\{\vec{e}_1, \vec{e}_2\}$ a basis of $V_2$. Let $\vec{x}$ be an arbitrary vector of this plane. Lay off all three vectors from one point of this plane and construct 4 lines: the line $L_1$ on which the vector $\vec{e}_1$ lies, and the line $L_2$ on which the vector $\vec{e}_2$ lies; then through the end of the vector $\vec{x}$ draw a line parallel to the vector $\vec{e}_1$ and a line parallel to the vector $\vec{e}_2$. These 4 lines cut out a parallelogram (see Fig. 3). By the parallelogram rule, $\vec{x} = \vec{x}_1 + \vec{x}_2$, where $\vec{x}_1 \parallel \vec{e}_1$ and $\vec{x}_2 \parallel \vec{e}_2$; here $\{\vec{e}_1\}$ is a basis of the vector space of the line $L_1$, and $\{\vec{e}_2\}$ is a basis of the vector space of the line $L_2$.

By what has already been proved in the first part of this proof, there exist numbers $\alpha_1$ and $\alpha_2$ such that

$\vec{x}_1 = \alpha_1\vec{e}_1$ and $\vec{x}_2 = \alpha_2\vec{e}_2$. From this we get

$\vec{x} = \alpha_1\vec{e}_1 + \alpha_2\vec{e}_2,$

and the possibility of decomposition with respect to the basis is proved.

Now let us prove the uniqueness of the decomposition with respect to the basis. Assume the contrary: let there be two decompositions of the vector $\vec{x}$ with respect to the basis $\{\vec{e}_1, \vec{e}_2\}$ of the vector space $V_2$:

$\vec{x} = \alpha_1\vec{e}_1 + \alpha_2\vec{e}_2$ and $\vec{x} = \beta_1\vec{e}_1 + \beta_2\vec{e}_2$. We get the equality

$(\alpha_1 - \beta_1)\vec{e}_1 + (\alpha_2 - \beta_2)\vec{e}_2 = \vec{0},$

whence $(\alpha_1 - \beta_1)\vec{e}_1 = (\beta_2 - \alpha_2)\vec{e}_2$. If $\alpha_1 - \beta_1 = 0$, then $\alpha_1 = \beta_1$, and since $\vec{e}_2 \ne \vec{0}$, also $\beta_2 - \alpha_2 = 0$, so the decomposition coefficients coincide: $\alpha_1 = \beta_1$, $\alpha_2 = \beta_2$. Now let $\alpha_1 - \beta_1 \ne 0$. Then $\vec{e}_1 = \lambda\vec{e}_2$, where $\lambda = \dfrac{\beta_2 - \alpha_2}{\alpha_1 - \beta_1}$. By the theorem on the collinearity of two vectors it follows that $\vec{e}_1 \parallel \vec{e}_2$. We have obtained a contradiction with the hypothesis of the theorem. Hence $\alpha_1 = \beta_1$ and $\alpha_2 = \beta_2$, q.e.d.

3) Now let $\{\vec{e}_1, \vec{e}_2, \vec{e}_3\}$ be a basis of $V_3$ and let $\vec{x}$ be an arbitrary vector. Let us carry out the following constructions.

Lay off all three basis vectors $\vec{e}_1, \vec{e}_2, \vec{e}_3$ and the vector $\vec{x}$ from one point and construct 6 planes: the plane in which the basis vectors $\vec{e}_1, \vec{e}_2$ lie, the plane of $\vec{e}_2, \vec{e}_3$, and the plane of $\vec{e}_1, \vec{e}_3$; then through the end of the vector $\vec{x}$ draw three planes parallel to the three planes just constructed. These 6 planes cut out a parallelepiped.

Using the rule for adding vectors, we obtain the equality

$\vec{x} = \vec{x}_1 + \vec{x}_2 + \vec{x}_3. \qquad (1)$

By construction $\vec{x}_1 \parallel \vec{e}_1$. Hence, by the theorem on the collinearity of two vectors, there exists a number $\alpha_1$ such that $\vec{x}_1 = \alpha_1\vec{e}_1$. Likewise, $\vec{x}_2 = \alpha_2\vec{e}_2$ and $\vec{x}_3 = \alpha_3\vec{e}_3$, where $\alpha_2, \alpha_3 \in \mathbb{R}$. Now, substituting these equalities into (1), we get

$\vec{x} = \alpha_1\vec{e}_1 + \alpha_2\vec{e}_2 + \alpha_3\vec{e}_3, \qquad (2)$

and the possibility of decomposition with respect to the basis is proved.

Let us prove the uniqueness of such a decomposition. Assume the contrary: let there be two decompositions of the vector $\vec{x}$ with respect to the basis $\{\vec{e}_1, \vec{e}_2, \vec{e}_3\}$:

$\vec{x} = \alpha_1\vec{e}_1 + \alpha_2\vec{e}_2 + \alpha_3\vec{e}_3$ and $\vec{x} = \beta_1\vec{e}_1 + \beta_2\vec{e}_2 + \beta_3\vec{e}_3$. Then

$(\alpha_1 - \beta_1)\vec{e}_1 + (\alpha_2 - \beta_2)\vec{e}_2 + (\alpha_3 - \beta_3)\vec{e}_3 = \vec{0}. \qquad (3)$

Note that by hypothesis the vectors $\vec{e}_1, \vec{e}_2, \vec{e}_3$ are non-coplanar and, consequently, pairwise non-collinear.

Two cases are possible: $\alpha_3 \ne \beta_3$ or $\alpha_3 = \beta_3$.

a) Let $\alpha_3 \ne \beta_3$; then from equality (3) it follows that

$\vec{e}_3 = \dfrac{\alpha_1 - \beta_1}{\beta_3 - \alpha_3}\vec{e}_1 + \dfrac{\alpha_2 - \beta_2}{\beta_3 - \alpha_3}\vec{e}_2. \qquad (4)$

From equality (4) it follows that the vector $\vec{e}_3$ decomposes with respect to the basis $\{\vec{e}_1, \vec{e}_2\}$, i.e. the vector $\vec{e}_3$ lies in the plane of the vectors $\vec{e}_1, \vec{e}_2$, and therefore the vectors $\vec{e}_1, \vec{e}_2, \vec{e}_3$ are coplanar, which contradicts the hypothesis.

b) There remains the case $\alpha_3 = \beta_3$, i.e. $\alpha_3 - \beta_3 = 0$. Then from equality (3) we obtain

$(\alpha_1 - \beta_1)\vec{e}_1 + (\alpha_2 - \beta_2)\vec{e}_2 = \vec{0}. \qquad (5)$

Since $\{\vec{e}_1, \vec{e}_2\}$ is a basis of the space of vectors lying in the plane of $\vec{e}_1, \vec{e}_2$, and we have already proved the uniqueness of the decomposition with respect to a basis of the plane, it follows from equality (5) that $\alpha_1 = \beta_1$ and $\alpha_2 = \beta_2$, q.e.d.

The theorem has been proven.

Corollary.

1) There is a one-to-one correspondence between the set of vectors of the vector space $V_1$ and the set of real numbers R.

2) There is a one-to-one correspondence between the set of vectors of the vector space $V_2$ and the Cartesian square $R^2$ of the set of real numbers R.

3) There is a one-to-one correspondence between the set of vectors of the vector space $V_3$ and the Cartesian cube $R^3$ of the set of real numbers R.

Proof. Let us prove the third statement. The first two are proven in a similar way.

Choose and fix in the space $V_3$ some basis $\{\vec{e}_1, \vec{e}_2, \vec{e}_3\}$ and define a mapping $f: V_3 \to R^3$ by the following rule:

$f(\vec{x}) = (\alpha_1, \alpha_2, \alpha_3), \quad \text{where } \vec{x} = \alpha_1\vec{e}_1 + \alpha_2\vec{e}_2 + \alpha_3\vec{e}_3, \qquad (6)$

i.e. to each vector we associate the ordered triple of its coordinates.

Since, for a fixed basis, each vector has a unique triple of coordinates, the correspondence given by rule (6) is indeed a mapping.

From the proof of the theorem it follows that distinct vectors have distinct coordinates relative to the same basis, i.e. mapping (6) is an injection.

Let $(\alpha_1, \alpha_2, \alpha_3) \in R^3$ be an arbitrary ordered triple of real numbers.

Consider the vector $\vec{x} = \alpha_1\vec{e}_1 + \alpha_2\vec{e}_2 + \alpha_3\vec{e}_3$. By construction this vector has the coordinates $(\alpha_1, \alpha_2, \alpha_3)$. Consequently, mapping (6) is a surjection.

A mapping that is both injective and surjective is bijective, i.e. one-to-one, q.e.d.

The corollary is proved.

Theorem. (On the equality of two vectors.)

Two vectors are equal if and only if their coordinates relative to the same basis are equal.

The proof follows immediately from the previous corollary.

Section 3. Dimension of a vector space.

Definition. The number of vectors in the basis of a vector space is called its dimension.

Notation: $\dim V$ — the dimension of the vector space V.

Thus, in accordance with this and previous definitions, we have:

1) $\dim V_1 = 1$, where $V_1$ is the vector space of vectors of the line L.

$\{\vec{e}\}$ — a basis of $V_1$, $\vec{e} \ne \vec{0}$, $\vec{x} \in V_1$, $\vec{x} = \alpha\vec{e}$ — the decomposition of the vector $\vec{x}$ with respect to the basis $\{\vec{e}\}$, $\alpha$ — the coordinate of the vector $\vec{x}$ relative to the basis $\{\vec{e}\}$.

2) $\dim V_2 = 2$, where $V_2$ is the vector space of vectors of the plane P.

$\{\vec{e}_1, \vec{e}_2\}$ — a basis of $V_2$, $\vec{e}_1 \nparallel \vec{e}_2$, $\vec{x} \in V_2$, $\vec{x} = \alpha_1\vec{e}_1 + \alpha_2\vec{e}_2$ — the decomposition of the vector $\vec{x}$ with respect to the basis $\{\vec{e}_1, \vec{e}_2\}$, $(\alpha_1, \alpha_2)$ — the coordinates of the vector $\vec{x}$ relative to this basis.

3) $\dim V_3 = 3$, where $V_3$ is the vector space of vectors of the space of points S.

$\{\vec{e}_1, \vec{e}_2, \vec{e}_3\}$ — a basis of $V_3$, $\vec{x} \in V_3$, $\vec{x} = \alpha_1\vec{e}_1 + \alpha_2\vec{e}_2 + \alpha_3\vec{e}_3$ — the decomposition of the vector $\vec{x}$ with respect to the basis, $(\alpha_1, \alpha_2, \alpha_3)$ — the coordinates of the vector $\vec{x}$ relative to this basis.

Remark. If $L \subset P \subset S$, then $V_1 \subset V_2 \subset V_3$, and a basis $\{\vec{e}_1, \vec{e}_2, \vec{e}_3\}$ of the space $V_3$ can be chosen so that $\{\vec{e}_1\}$ is a basis of $V_1$ and $\{\vec{e}_1, \vec{e}_2\}$ is a basis of $V_2$. Then for $\vec{x} \in V_1$ we have $\vec{x} = \alpha_1\vec{e}_1 = \alpha_1\vec{e}_1 + 0 \cdot \vec{e}_2 + 0 \cdot \vec{e}_3$, and for $\vec{x} \in V_2$ we have $\vec{x} = \alpha_1\vec{e}_1 + \alpha_2\vec{e}_2 + 0 \cdot \vec{e}_3$.

Thus, any vector of the line L, the plane P and the space S can be decomposed with respect to the basis $\{\vec{e}_1, \vec{e}_2, \vec{e}_3\}$.

Notation. By virtue of the theorem on the equality of vectors, we may identify any vector with the ordered triple of its coordinates and write $\vec{x} = (x_1, x_2, x_3)$. This is possible only when the basis $\{\vec{e}_1, \vec{e}_2, \vec{e}_3\}$ is fixed and there is no danger of confusion.

Definition. Writing a vector as an ordered triple of real numbers is called the coordinate form of writing a vector: $\vec{x} = (x_1, x_2, x_3)$.

Section 4. Linear operations with vectors in coordinate form.

Let $\{\vec{e}_1, \vec{e}_2, \vec{e}_3\}$ be a basis of the space $V_3$, and let $\vec{x}$ and $\vec{y}$ be two of its arbitrary vectors. Let $\vec{x} = (x_1, x_2, x_3)$ and $\vec{y} = (y_1, y_2, y_3)$ be these vectors written in coordinate form. Let, further, $\lambda$ be an arbitrary real number. In this notation the following theorem holds.

Theorem. (On linear operations with vectors in coordinate form.)

1) $\vec{x} + \vec{y} = (x_1 + y_1, x_2 + y_2, x_3 + y_3)$;

2) $\lambda\vec{x} = (\lambda x_1, \lambda x_2, \lambda x_3)$.

In other words, in order to add two vectors, you need to add their corresponding coordinates, and to multiply a vector by a number, you need to multiply each coordinate of a given vector by a given number.

Proof. Since, by the hypothesis of the theorem, $\vec{x} = x_1\vec{e}_1 + x_2\vec{e}_2 + x_3\vec{e}_3$ and $\vec{y} = y_1\vec{e}_1 + y_2\vec{e}_2 + y_3\vec{e}_3$, then, using the axioms of a vector space that govern the operations of vector addition and multiplication of a vector by a number, we obtain:

$\vec{x} + \vec{y} = (x_1 + y_1)\vec{e}_1 + (x_2 + y_2)\vec{e}_2 + (x_3 + y_3)\vec{e}_3.$

This implies $\vec{x} + \vec{y} = (x_1 + y_1, x_2 + y_2, x_3 + y_3)$.

The second equality is proved in a similar way.

The theorem has been proven.
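The theorem is easy to check numerically. Below is a minimal sketch in Python with NumPy (the basis and the coordinate values are arbitrary sample data, not from the lecture): vectors are rebuilt from their coordinates with respect to a basis, and addition and multiplication by a number are seen to act coordinatewise.

```python
import numpy as np

# Columns of E are the basis vectors e1, e2, e3 (an arbitrary, non-degenerate choice).
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]]).T

x_coords = np.array([2.0, -1.0, 3.0])   # (x1, x2, x3) relative to the basis
y_coords = np.array([0.5, 4.0, -2.0])   # (y1, y2, y3)
lam = 7.0

x = E @ x_coords                        # x = x1*e1 + x2*e2 + x3*e3
y = E @ y_coords

assert np.allclose(x + y, E @ (x_coords + y_coords))  # addition is coordinatewise
assert np.allclose(lam * x, E @ (lam * x_coords))     # and so is scaling
print("coordinate rules verified")
```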

Section 5. Orthogonal vectors. Orthonormal basis.

Definition. Two vectors are called orthogonal if the angle between them is a right angle, i.e. equals $90^\circ$.

Notation: $\vec{a} \perp \vec{b}$ — the vectors $\vec{a}$ and $\vec{b}$ are orthogonal.

Definition. A triple of vectors $\{\vec{e}_1, \vec{e}_2, \vec{e}_3\}$ is called orthogonal if these vectors are pairwise orthogonal, i.e. $\vec{e}_1 \perp \vec{e}_2$, $\vec{e}_1 \perp \vec{e}_3$, $\vec{e}_2 \perp \vec{e}_3$.

Definition. A triple of vectors $\{\vec{e}_1, \vec{e}_2, \vec{e}_3\}$ is called orthonormal if it is orthogonal and the lengths of all its vectors equal one: $|\vec{e}_1| = |\vec{e}_2| = |\vec{e}_3| = 1$.

Remark. From the definition it follows that an orthogonal, and hence an orthonormal, triple of vectors is non-coplanar.

Definition. An ordered non-coplanar triple of vectors laid off from one point is called right (right-oriented) if, when viewed from the end of the third vector toward the plane in which the first two vectors lie, the shortest rotation from the first vector to the second appears counterclockwise. Otherwise the triple of vectors is called left (left-oriented).

(Fig. 6 shows a right triple of vectors; Fig. 7 shows a left triple.)

Definition. A basis $\{\vec{i}, \vec{j}, \vec{k}\}$ of the vector space $V_3$ is called orthonormal if $\{\vec{i}, \vec{j}, \vec{k}\}$ is an orthonormal triple of vectors.

Notation. In what follows we will use a right orthonormal basis $\{\vec{i}, \vec{j}, \vec{k}\}$ (see the following figure).
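These definitions are straightforward to check numerically. A small sketch (the triple used is the standard one; any candidate triple of vectors could be substituted):

```python
import numpy as np

i = np.array([1.0, 0.0, 0.0])
j = np.array([0.0, 1.0, 0.0])
k = np.array([0.0, 0.0, 1.0])

E = np.column_stack([i, j, k])
print(np.allclose(E.T @ E, np.eye(3)))  # True: pairwise orthogonal, unit lengths
print(np.linalg.det(E) > 0)             # True: a right (right-oriented) triple
print(np.allclose(np.cross(i, j), k))   # True: for a right orthonormal triple, i x j = k
```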

In the article on n-dimensional vectors, we came to the concept of a linear space generated by a set of n-dimensional vectors. Now we have to consider equally important concepts, such as the dimension and basis of a vector space. They are directly related to the concept of a linearly independent system of vectors, so it is additionally recommended to remind yourself of the basics of this topic.

Let us introduce some definitions.

Definition 1

Dimension of a vector space — the number equal to the maximum number of linearly independent vectors in this space.

Definition 2

Basis of a vector space — an ordered set of linearly independent vectors of the space, equal in number to the dimension of the space.

Let's consider a certain space of n-dimensional vectors. Its dimension is correspondingly equal to n. Let's take a system of n unit vectors:

e(1) = (1, 0, …, 0), e(2) = (0, 1, …, 0), …, e(n) = (0, 0, …, 1)

We use these vectors as the rows of a matrix A: this is the identity matrix of size n by n. The rank of this matrix is n. Therefore, the system of vectors e(1), e(2), …, e(n) is linearly independent. Moreover, it is impossible to add a single vector to this system without destroying its linear independence.

Since the number of vectors in the system is n, the dimension of the space of n-dimensional vectors is n, and the unit vectors e(1), e(2), …, e(n) form a basis of this space.

From the resulting definition we can conclude: any system of n-dimensional vectors in which the number of vectors is less than n is not a basis of the space.

If we swap the first and second vectors, we get a system of vectors e (2) , e (1) , . . . , e (n) . It will also be the basis of an n-dimensional vector space. Let's create a matrix by taking the vectors of the resulting system as its rows. The matrix can be obtained from the identity matrix by swapping the first two rows, its rank will be n. System e (2) , e (1) , . . . , e(n) is linearly independent and is the basis of an n-dimensional vector space.

By rearranging other vectors in the original system, we obtain another basis.

We can take a linearly independent system of non-unit vectors, and it will also represent the basis of an n-dimensional vector space.
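The argument above can be mirrored in a few lines of NumPy (a sketch; n = 4 is an arbitrary choice):

```python
import numpy as np

# The n unit vectors form the identity matrix, whose rank is n; permuting
# them keeps the rank equal to n, so the permuted system is again a basis.
n = 4
E = np.eye(n)                       # rows: e(1), ..., e(n)
print(np.linalg.matrix_rank(E))     # n

P = E[[1, 0, 2, 3]]                 # swap the first two basis vectors
print(np.linalg.matrix_rank(P))     # still n -> still a basis
```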

Definition 3

A vector space of dimension n has as many bases as there are linearly independent systems consisting of n n-dimensional vectors.

The plane is a two-dimensional space: a basis for it is any pair of non-collinear vectors. A basis of three-dimensional space is any triple of non-coplanar vectors.

Let's consider the application of this theory using specific examples.

Example 1

Initial data: vectors

a = (3, -2, 1), b = (2, 1, 2), c = (3, -1, -2)

It is necessary to determine whether the specified vectors are the basis of a three-dimensional vector space.

Solution

To solve the problem, we study the given system of vectors for linear dependence. Let's create a matrix, where the rows are the coordinates of the vectors. Let's determine the rank of the matrix.

$A = \begin{pmatrix} 3 & -2 & 1 \\ 2 & 1 & 2 \\ 3 & -1 & -2 \end{pmatrix}$

$\det A = 3 \cdot 1 \cdot (-2) + (-2) \cdot 2 \cdot 3 + 1 \cdot 2 \cdot (-1) - 1 \cdot 1 \cdot 3 - (-2) \cdot 2 \cdot (-2) - 3 \cdot 2 \cdot (-1) = -25 \ne 0 \;\Rightarrow\; \operatorname{rank}(A) = 3$

Consequently, the vectors given in the problem are linearly independent, and their number equals the dimension of the vector space: they form a basis of the vector space.

Answer: the indicated vectors are the basis of the vector space.
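The check of Example 1 can be reproduced with NumPy (a sketch using the example's data; the determinant is computed in floating point, hence approximately):

```python
import numpy as np

A = np.array([[3, -2,  1],
              [2,  1,  2],
              [3, -1, -2]])        # rows: the vectors a, b, c

print(np.linalg.det(A))            # approx. -25.0, nonzero
print(np.linalg.matrix_rank(A))    # 3 -> the vectors form a basis
```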

Example 2

Initial data: vectors

a = (3, -2, 1), b = (2, 1, 2), c = (3, -1, -2), d = (0, 1, 2)

It is necessary to determine whether the specified system of vectors can be the basis of three-dimensional space.

Solution

The system of vectors specified in the problem statement is linearly dependent, because the maximum number of linearly independent three-dimensional vectors is 3. Thus, the indicated system of vectors cannot serve as a basis of a three-dimensional vector space. It is worth noting, though, that the subsystem a = (3, -2, 1), b = (2, 1, 2), c = (3, -1, -2) of the original system is a basis.

Answer: the indicated system of vectors is not a basis.

Example 3

Initial data: vectors

a = (1, 2, 3, 3), b = (2, 5, 6, 8), c = (1, 3, 2, 4), d = (2, 5, 4, 7)

Can they be the basis of four-dimensional space?

Solution

Let's create a matrix using the coordinates of the given vectors as rows

$A = \begin{pmatrix} 1 & 2 & 3 & 3 \\ 2 & 5 & 6 & 8 \\ 1 & 3 & 2 & 4 \\ 2 & 5 & 4 & 7 \end{pmatrix}$

Using the Gaussian method, we determine the rank of the matrix:

$A \sim \begin{pmatrix} 1 & 2 & 3 & 3 \\ 0 & 1 & 0 & 2 \\ 0 & 1 & -1 & 1 \\ 0 & 1 & -2 & 1 \end{pmatrix} \sim \begin{pmatrix} 1 & 2 & 3 & 3 \\ 0 & 1 & 0 & 2 \\ 0 & 0 & -1 & -1 \\ 0 & 0 & -2 & -1 \end{pmatrix} \sim \begin{pmatrix} 1 & 2 & 3 & 3 \\ 0 & 1 & 0 & 2 \\ 0 & 0 & -1 & -1 \\ 0 & 0 & 0 & 1 \end{pmatrix} \;\Rightarrow\; \operatorname{rank}(A) = 4$

Consequently, the system of the given vectors is linearly independent, and their number equals the dimension of the vector space: they form a basis of the four-dimensional vector space.

Answer: the given vectors are the basis of four-dimensional space.
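For larger systems the same Gaussian rank computation can be scripted. Below is a sketch of plain Gaussian elimination with partial pivoting (the function name and tolerance are our own choices), applied to the matrix of Example 3:

```python
import numpy as np

def row_echelon_rank(M: np.ndarray, tol: float = 1e-12) -> int:
    """Rank via plain Gaussian elimination with partial pivoting."""
    A = M.astype(float).copy()
    rows, cols = A.shape
    rank, r = 0, 0
    for c in range(cols):
        if r == rows:
            break
        pivot = r + np.argmax(np.abs(A[r:, c]))
        if abs(A[pivot, c]) < tol:
            continue                      # no pivot in this column
        A[[r, pivot]] = A[[pivot, r]]     # swap the pivot row up
        A[r + 1:] -= np.outer(A[r + 1:, c] / A[r, c], A[r])
        r += 1
        rank += 1
    return rank

A = np.array([[1, 2, 3, 3],
              [2, 5, 6, 8],
              [1, 3, 2, 4],
              [2, 5, 4, 7]])
print(row_echelon_rank(A))                # 4 -> the vectors form a basis
```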

Example 4

Initial data: vectors

a(1) = (1, 2, -1, -2), a(2) = (0, 2, 1, -3), a(3) = (1, 0, 0, 5)

Do they form the basis of a space of dimension 4?

Solution

The original system of vectors is linearly independent, but it contains only three vectors, which is not enough to be a basis of a four-dimensional space.

Answer: no, they don’t.

Decomposition of a vector into a basis

Let us assume that arbitrary vectors e(1), e(2), …, e(n) form a basis of an n-dimensional vector space. Let us add to them a certain n-dimensional vector x→: the resulting system of vectors becomes linearly dependent. The properties of linear dependence state that at least one vector of such a system can be linearly expressed through the others. Reformulating this statement, we can say that at least one vector of a linearly dependent system can be decomposed in terms of the remaining vectors.

Thus, we came to the formulation of the most important theorem:

Theorem (decomposition of a vector with respect to a basis)

Any vector of an n-dimensional vector space can be decomposed with respect to a basis, and in a unique way.

Proof

Let's prove this theorem:

let us take a basis of the n-dimensional vector space: e(1), e(2), …, e(n). Adding an n-dimensional vector x→ to this system makes it linearly dependent, and therefore the vector can be linearly expressed in terms of the original vectors e:

x = x 1 · e (1) + x 2 · e (2) + . . . + x n · e (n) , where x 1 , x 2 , . . . , x n - some numbers.

Now we prove that such a decomposition is unique. Let's assume that this is not the case and there is another similar decomposition:

x = x~1·e(1) + x~2·e(2) + … + x~n·e(n), where x~1, x~2, …, x~n are some numbers.

Let us subtract from the left and right sides of this equality, respectively, the left and right sides of the equality x = x 1 · e (1) + x 2 · e (2) + . . . + x n · e (n) . We get:

0 = (x~1 - x1)·e(1) + (x~2 - x2)·e(2) + … + (x~n - xn)·e(n)

The system of basis vectors e(1), e(2), …, e(n) is linearly independent; by the definition of linear independence of a system of vectors, the equality above is possible only when all the coefficients (x~1 - x1), (x~2 - x2), …, (x~n - xn) are equal to zero. From this it follows that x1 = x~1, x2 = x~2, …, xn = x~n. And this proves that the decomposition of a vector with respect to a basis is unique.

In this case, the coefficients x 1, x 2, . . . , x n are called the coordinates of the vector x → in the basis e (1) , e (2) , . . . , e (n) .

The theorem just proved makes clear the expression "an n-dimensional vector x = (x1, x2, …, xn) is given": a vector x→ of an n-dimensional vector space is considered, and its coordinates are specified in a certain basis. It is also clear that the same vector in another basis of the n-dimensional space will have different coordinates.

Consider the following example: suppose that in some basis of an n-dimensional vector space a system of n linearly independent vectors

e(1) = (e1(1), e2(1), …, en(1)), e(2) = (e1(2), e2(2), …, en(2)), …, e(n) = (e1(n), e2(n), …, en(n))

is given, and also a vector x = (x1, x2, …, xn) is given.

The vectors e(1), e(2), …, e(n) in this case themselves form a basis of this vector space.

Suppose it is necessary to determine the coordinates of the vector x→ in the basis e(1), e(2), …, e(n), denoted as x~1, x~2, …, x~n.

Vector x → will be represented as follows:

x = x ~ 1 e (1) + x ~ 2 e (2) + . . . + x ~ n e (n)

Let's write this expression in coordinate form:

(x1, x2, …, xn) = x~1·(e1(1), e2(1), …, en(1)) + x~2·(e1(2), e2(2), …, en(2)) + … + x~n·(e1(n), e2(n), …, en(n)) = (x~1 e1(1) + x~2 e1(2) + … + x~n e1(n), x~1 e2(1) + x~2 e2(2) + … + x~n e2(n), …, x~1 en(1) + x~2 en(2) + … + x~n en(n))

The resulting equality is equivalent to a system of n linear algebraic equations with n unknowns x~1, x~2, …, x~n:

x1 = x~1 e1(1) + x~2 e1(2) + … + x~n e1(n)
x2 = x~1 e2(1) + x~2 e2(2) + … + x~n e2(n)
⋮
xn = x~1 en(1) + x~2 en(2) + … + x~n en(n)

The matrix of this system has the form

$\begin{pmatrix} e_1^{(1)} & e_1^{(2)} & \cdots & e_1^{(n)} \\ e_2^{(1)} & e_2^{(2)} & \cdots & e_2^{(n)} \\ \vdots & \vdots & \ddots & \vdots \\ e_n^{(1)} & e_n^{(2)} & \cdots & e_n^{(n)} \end{pmatrix}$

Let this be a matrix A; its columns are the coordinate columns of the vectors of the linearly independent system e(1), e(2), …, e(n). The rank of the matrix is n, and its determinant is nonzero. This indicates that the system of equations has a unique solution, which can be found by any convenient method: for example, Cramer's method or the matrix method. This is how we can determine the coordinates x~1, x~2, …, x~n of the vector x→ in the basis e(1), e(2), …, e(n).
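Finding the coordinates in the new basis thus amounts to solving one linear system; a sketch with NumPy, borrowing the data of Example 6 below:

```python
import numpy as np

# Columns of A: the new basis vectors e(1), e(2), e(3) written in the old basis
# (data borrowed from Example 6 below).
A = np.column_stack([[1, -1, 1],
                     [3, 2, -5],
                     [2, 1, -3]])
x = np.array([6, 2, -7])        # coordinates of x in the old basis

x_new = np.linalg.solve(A, x)   # the coordinates x~1, x~2, x~3 in the new basis
print(np.round(x_new, 10))      # [1. 1. 1.]
```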

Let's apply the considered theory to a specific example.

Example 6

Initial data: vectors are specified in the basis of three-dimensional space

e(1) = (1, -1, 1), e(2) = (3, 2, -5), e(3) = (2, 1, -3), x = (6, 2, -7)

It is necessary to confirm the fact that the system of vectors e (1), e (2), e (3) also serves as the basis of a given space, and also to determine the coordinates of vector x in a given basis.

Solution

The system of vectors e (1), e (2), e (3) will be the basis of three-dimensional space if it is linearly independent. Let's find out this possibility by determining the rank of the matrix A, the rows of which are the given vectors e (1), e (2), e (3).

We use the Gaussian method:

$A = \begin{pmatrix} 1 & -1 & 1 \\ 3 & 2 & -5 \\ 2 & 1 & -3 \end{pmatrix} \sim \begin{pmatrix} 1 & -1 & 1 \\ 0 & 5 & -8 \\ 0 & 3 & -5 \end{pmatrix} \sim \begin{pmatrix} 1 & -1 & 1 \\ 0 & 5 & -8 \\ 0 & 0 & -\tfrac{1}{5} \end{pmatrix}$

Rank(A) = 3. Thus, the system of vectors e(1), e(2), e(3) is linearly independent and is a basis.

Let the vector x → have coordinates x ~ 1, x ~ 2, x ~ 3 in the basis. The relationship between these coordinates is determined by the equation:

x1 = x~1 e1(1) + x~2 e1(2) + x~3 e1(3)
x2 = x~1 e2(1) + x~2 e2(2) + x~3 e2(3)
x3 = x~1 e3(1) + x~2 e3(2) + x~3 e3(3)

Let's apply the values ​​according to the conditions of the problem:

x~1 + 3x~2 + 2x~3 = 6
-x~1 + 2x~2 + x~3 = 2
x~1 - 5x~2 - 3x~3 = -7

Let's solve the system of equations using Cramer's method:

$\Delta = \begin{vmatrix} 1 & 3 & 2 \\ -1 & 2 & 1 \\ 1 & -5 & -3 \end{vmatrix} = -1$

$\Delta_{\tilde{x}_1} = \begin{vmatrix} 6 & 3 & 2 \\ 2 & 2 & 1 \\ -7 & -5 & -3 \end{vmatrix} = -1, \quad \tilde{x}_1 = \frac{\Delta_{\tilde{x}_1}}{\Delta} = \frac{-1}{-1} = 1$

$\Delta_{\tilde{x}_2} = \begin{vmatrix} 1 & 6 & 2 \\ -1 & 2 & 1 \\ 1 & -7 & -3 \end{vmatrix} = -1, \quad \tilde{x}_2 = \frac{\Delta_{\tilde{x}_2}}{\Delta} = \frac{-1}{-1} = 1$

$\Delta_{\tilde{x}_3} = \begin{vmatrix} 1 & 3 & 6 \\ -1 & 2 & 2 \\ 1 & -5 & -7 \end{vmatrix} = -1, \quad \tilde{x}_3 = \frac{\Delta_{\tilde{x}_3}}{\Delta} = \frac{-1}{-1} = 1$

Thus, the vector x → in the basis e (1), e (2), e (3) has coordinates x ~ 1 = 1, x ~ 2 = 1, x ~ 3 = 1.

Answer: x = (1 , 1 , 1)
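Cramer's rule as used in Example 6 is straightforward to script; a sketch (determinants computed numerically, so the printed values are rounded):

```python
import numpy as np

# Verifying Example 6 by Cramer's rule: replace the k-th column of the
# coefficient matrix by the right-hand side and take determinants.
A = np.array([[ 1,  3,  2],
              [-1,  2,  1],
              [ 1, -5, -3]], dtype=float)
b = np.array([6, 2, -7], dtype=float)

d = np.linalg.det(A)                     # -1
x = []
for k in range(3):
    Ak = A.copy()
    Ak[:, k] = b                         # column substitution
    x.append(np.linalg.det(Ak) / d)
print(np.round(x, 10))                   # [1. 1. 1.]
```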

Relationship between bases

Let us assume that in some basis of n-dimensional vector space two linearly independent systems of vectors are given:

c(1) = (c1(1), c2(1), …, cn(1)), c(2) = (c1(2), c2(2), …, cn(2)), …, c(n) = (c1(n), c2(n), …, cn(n))

e(1) = (e1(1), e2(1), …, en(1)), e(2) = (e1(2), e2(2), …, en(2)), …, e(n) = (e1(n), e2(n), …, en(n))

These systems are also bases of a given space.

Let c~1(1), c~2(1), …, c~n(1) be the coordinates of the vector c(1) in the basis e(1), e(2), …, e(n); then the relationship between the coordinates is given by the system of linear equations:

c 1 (1) = c ~ 1 (1) e 1 (1) + c ~ 2 (1) e 1 (2) + . . . + c ~ n (1) e 1 (n) c 2 (1) = c ~ 1 (1) e 2 (1) + c ~ 2 (1) e 2 (2) + . . . + c ~ n (1) e 2 (n) ⋮ c n (1) = c ~ 1 (1) e n (1) + c ~ 2 (1) e n (2) + . . . + c ~ n (1) e n (n)

The system can be written in matrix form as follows:

$(c_1^{(1)}, c_2^{(1)}, \dots, c_n^{(1)}) = (\tilde{c}_1^{(1)}, \tilde{c}_2^{(1)}, \dots, \tilde{c}_n^{(1)}) \begin{pmatrix} e_1^{(1)} & e_2^{(1)} & \cdots & e_n^{(1)} \\ e_1^{(2)} & e_2^{(2)} & \cdots & e_n^{(2)} \\ \vdots & \vdots & \ddots & \vdots \\ e_1^{(n)} & e_2^{(n)} & \cdots & e_n^{(n)} \end{pmatrix}$

Analogous entries hold for the vectors c(2), …, c(n):

$(c_1^{(2)}, c_2^{(2)}, \dots, c_n^{(2)}) = (\tilde{c}_1^{(2)}, \tilde{c}_2^{(2)}, \dots, \tilde{c}_n^{(2)}) \begin{pmatrix} e_1^{(1)} & e_2^{(1)} & \cdots & e_n^{(1)} \\ \vdots & \vdots & \ddots & \vdots \\ e_1^{(n)} & e_2^{(n)} & \cdots & e_n^{(n)} \end{pmatrix}, \quad \dots, \quad (c_1^{(n)}, c_2^{(n)}, \dots, c_n^{(n)}) = (\tilde{c}_1^{(n)}, \tilde{c}_2^{(n)}, \dots, \tilde{c}_n^{(n)}) \begin{pmatrix} e_1^{(1)} & e_2^{(1)} & \cdots & e_n^{(1)} \\ \vdots & \vdots & \ddots & \vdots \\ e_1^{(n)} & e_2^{(n)} & \cdots & e_n^{(n)} \end{pmatrix}$

Let's combine the matrix equalities into one expression:

$\begin{pmatrix} c_1^{(1)} & c_2^{(1)} & \cdots & c_n^{(1)} \\ c_1^{(2)} & c_2^{(2)} & \cdots & c_n^{(2)} \\ \vdots & \vdots & \ddots & \vdots \\ c_1^{(n)} & c_2^{(n)} & \cdots & c_n^{(n)} \end{pmatrix} = \begin{pmatrix} \tilde{c}_1^{(1)} & \tilde{c}_2^{(1)} & \cdots & \tilde{c}_n^{(1)} \\ \tilde{c}_1^{(2)} & \tilde{c}_2^{(2)} & \cdots & \tilde{c}_n^{(2)} \\ \vdots & \vdots & \ddots & \vdots \\ \tilde{c}_1^{(n)} & \tilde{c}_2^{(n)} & \cdots & \tilde{c}_n^{(n)} \end{pmatrix} \begin{pmatrix} e_1^{(1)} & e_2^{(1)} & \cdots & e_n^{(1)} \\ e_1^{(2)} & e_2^{(2)} & \cdots & e_n^{(2)} \\ \vdots & \vdots & \ddots & \vdots \\ e_1^{(n)} & e_2^{(n)} & \cdots & e_n^{(n)} \end{pmatrix}$

It will determine the connection between the vectors of two different bases.

Using the same principle, it is possible to express all the basis vectors e(1), e(2), …, e(n) through the basis c(1), c(2), …, c(n):

$\begin{pmatrix} e_1^{(1)} & e_2^{(1)} & \cdots & e_n^{(1)} \\ e_1^{(2)} & e_2^{(2)} & \cdots & e_n^{(2)} \\ \vdots & \vdots & \ddots & \vdots \\ e_1^{(n)} & e_2^{(n)} & \cdots & e_n^{(n)} \end{pmatrix} = \begin{pmatrix} \tilde{e}_1^{(1)} & \tilde{e}_2^{(1)} & \cdots & \tilde{e}_n^{(1)} \\ \tilde{e}_1^{(2)} & \tilde{e}_2^{(2)} & \cdots & \tilde{e}_n^{(2)} \\ \vdots & \vdots & \ddots & \vdots \\ \tilde{e}_1^{(n)} & \tilde{e}_2^{(n)} & \cdots & \tilde{e}_n^{(n)} \end{pmatrix} \begin{pmatrix} c_1^{(1)} & c_2^{(1)} & \cdots & c_n^{(1)} \\ c_1^{(2)} & c_2^{(2)} & \cdots & c_n^{(2)} \\ \vdots & \vdots & \ddots & \vdots \\ c_1^{(n)} & c_2^{(n)} & \cdots & c_n^{(n)} \end{pmatrix}$

Let us give the following definitions:

Definition 5

The matrix $\begin{pmatrix} \tilde{c}_1^{(1)} & \tilde{c}_2^{(1)} & \cdots & \tilde{c}_n^{(1)} \\ \vdots & \vdots & \ddots & \vdots \\ \tilde{c}_1^{(n)} & \tilde{c}_2^{(n)} & \cdots & \tilde{c}_n^{(n)} \end{pmatrix}$ is called the transition matrix from the basis e(1), e(2), …, e(n) to the basis c(1), c(2), …, c(n).

Definition 6

The matrix $\begin{pmatrix} \tilde{e}_1^{(1)} & \tilde{e}_2^{(1)} & \cdots & \tilde{e}_n^{(1)} \\ \vdots & \vdots & \ddots & \vdots \\ \tilde{e}_1^{(n)} & \tilde{e}_2^{(n)} & \cdots & \tilde{e}_n^{(n)} \end{pmatrix}$ is called the transition matrix from the basis c(1), c(2), …, c(n) to the basis e(1), e(2), …, e(n).

From these equalities it is obvious that

$\begin{pmatrix} \tilde{c}_1^{(1)} & \cdots & \tilde{c}_n^{(1)} \\ \vdots & \ddots & \vdots \\ \tilde{c}_1^{(n)} & \cdots & \tilde{c}_n^{(n)} \end{pmatrix} \begin{pmatrix} \tilde{e}_1^{(1)} & \cdots & \tilde{e}_n^{(1)} \\ \vdots & \ddots & \vdots \\ \tilde{e}_1^{(n)} & \cdots & \tilde{e}_n^{(n)} \end{pmatrix} = \begin{pmatrix} \tilde{e}_1^{(1)} & \cdots & \tilde{e}_n^{(1)} \\ \vdots & \ddots & \vdots \\ \tilde{e}_1^{(n)} & \cdots & \tilde{e}_n^{(n)} \end{pmatrix} \begin{pmatrix} \tilde{c}_1^{(1)} & \cdots & \tilde{c}_n^{(1)} \\ \vdots & \ddots & \vdots \\ \tilde{c}_1^{(n)} & \cdots & \tilde{c}_n^{(n)} \end{pmatrix} = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix},$

i.e. the transition matrices are mutually inverse.
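The mutual inverseness of the transition matrices is easy to confirm numerically; a sketch that borrows the two bases of Example 7 below (rows of each matrix are the basis vectors, written in a common original basis):

```python
import numpy as np

C = np.array([[1, 2, 1],
              [2, 3, 3],
              [3, 7, 1]], dtype=float)   # rows: c(1), c(2), c(3)
E = np.array([[3, 1, 4],
              [5, 2, 1],
              [1, 1, -6]], dtype=float)  # rows: e(1), e(2), e(3)

T_c_to_e = E @ np.linalg.inv(C)   # expresses the e's through the c's
T_e_to_c = C @ np.linalg.inv(E)   # expresses the c's through the e's
print(np.allclose(T_c_to_e @ T_e_to_c, np.eye(3)))   # True: mutually inverse
```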

Let's look at the theory using a specific example.

Example 7

Initial data: it is necessary to find the transition matrix from the basis

c(1) = (1, 2, 1), c(2) = (2, 3, 3), c(3) = (3, 7, 1)

to the basis

e(1) = (3, 1, 4), e(2) = (5, 2, 1), e(3) = (1, 1, -6)

You also need to indicate the relationship between the coordinates of an arbitrary vector x → in the given bases.

Solution

1. Let T be the transition matrix, then the equality will be true:

$\begin{pmatrix} 3 & 1 & 4 \\ 5 & 2 & 1 \\ 1 & 1 & -6 \end{pmatrix} = T \cdot \begin{pmatrix} 1 & 2 & 1 \\ 2 & 3 & 3 \\ 3 & 7 & 1 \end{pmatrix}$

Multiply both sides of the equality on the right by

$\begin{pmatrix} 1 & 2 & 1 \\ 2 & 3 & 3 \\ 3 & 7 & 1 \end{pmatrix}^{-1}$

and we get:

$T = \begin{pmatrix} 3 & 1 & 4 \\ 5 & 2 & 1 \\ 1 & 1 & -6 \end{pmatrix} \cdot \begin{pmatrix} 1 & 2 & 1 \\ 2 & 3 & 3 \\ 3 & 7 & 1 \end{pmatrix}^{-1}$

2. Compute the transition matrix:

$T = \begin{pmatrix} 3 & 1 & 4 \\ 5 & 2 & 1 \\ 1 & 1 & -6 \end{pmatrix} \cdot \begin{pmatrix} -18 & 5 & 3 \\ 7 & -2 & -1 \\ 5 & -1 & -1 \end{pmatrix} = \begin{pmatrix} -27 & 9 & 4 \\ -71 & 20 & 12 \\ -41 & 9 & 8 \end{pmatrix}$

3. Let us define the relationship between the coordinates of the vector x → :

Suppose that in the basis c(1), c(2), c(3) the vector x→ has coordinates x1, x2, x3; then

$\vec{x} = (x_1, x_2, x_3) \begin{pmatrix} 1 & 2 & 1 \\ 2 & 3 & 3 \\ 3 & 7 & 1 \end{pmatrix},$

and in the basis e(1), e(2), e(3) it has coordinates x~1, x~2, x~3; then

$\vec{x} = (\tilde{x}_1, \tilde{x}_2, \tilde{x}_3) \begin{pmatrix} 3 & 1 & 4 \\ 5 & 2 & 1 \\ 1 & 1 & -6 \end{pmatrix}$

Since the left-hand sides of these equalities are equal, we can equate the right-hand sides as well:

$(x_1, x_2, x_3) \begin{pmatrix} 1 & 2 & 1 \\ 2 & 3 & 3 \\ 3 & 7 & 1 \end{pmatrix} = (\tilde{x}_1, \tilde{x}_2, \tilde{x}_3) \begin{pmatrix} 3 & 1 & 4 \\ 5 & 2 & 1 \\ 1 & 1 & -6 \end{pmatrix}$

Multiply both sides on the right by

$\begin{pmatrix} 1 & 2 & 1 \\ 2 & 3 & 3 \\ 3 & 7 & 1 \end{pmatrix}^{-1}$

and we get:

$(x_1, x_2, x_3) = (\tilde{x}_1, \tilde{x}_2, \tilde{x}_3) \begin{pmatrix} 3 & 1 & 4 \\ 5 & 2 & 1 \\ 1 & 1 & -6 \end{pmatrix} \begin{pmatrix} 1 & 2 & 1 \\ 2 & 3 & 3 \\ 3 & 7 & 1 \end{pmatrix}^{-1} \;\Leftrightarrow\; (x_1, x_2, x_3) = (\tilde{x}_1, \tilde{x}_2, \tilde{x}_3)\, T = (\tilde{x}_1, \tilde{x}_2, \tilde{x}_3) \begin{pmatrix} -27 & 9 & 4 \\ -71 & 20 & 12 \\ -41 & 9 & 8 \end{pmatrix}$

On the other hand,

$(\tilde{x}_1, \tilde{x}_2, \tilde{x}_3) = (x_1, x_2, x_3) \begin{pmatrix} -27 & 9 & 4 \\ -71 & 20 & 12 \\ -41 & 9 & 8 \end{pmatrix}^{-1}$

The last equalities show the relationship between the coordinates of the vector x → in both bases.

Answer: the transition matrix is

$T = \begin{pmatrix} -27 & 9 & 4 \\ -71 & 20 & 12 \\ -41 & 9 & 8 \end{pmatrix}$

The coordinates of the vector x→ in the given bases are related by

$(x_1, x_2, x_3) = (\tilde{x}_1, \tilde{x}_2, \tilde{x}_3) \begin{pmatrix} -27 & 9 & 4 \\ -71 & 20 & 12 \\ -41 & 9 & 8 \end{pmatrix}$

$(\tilde{x}_1, \tilde{x}_2, \tilde{x}_3) = (x_1, x_2, x_3) \begin{pmatrix} -27 & 9 & 4 \\ -71 & 20 & 12 \\ -41 & 9 & 8 \end{pmatrix}^{-1}$
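A numerical check of Example 7 with NumPy (a sketch; the test coordinates x_e are arbitrary):

```python
import numpy as np

C = np.array([[1, 2, 1], [2, 3, 3], [3, 7, 1]], dtype=float)   # rows: c(1..3)
E = np.array([[3, 1, 4], [5, 2, 1], [1, 1, -6]], dtype=float)  # rows: e(1..3)

T = E @ np.linalg.inv(C)
print(np.round(T))                 # [[-27. 9. 4.] [-71. 20. 12.] [-41. 9. 8.]]

# Coordinate relation: if x has coordinates x_c in basis c and x_e in basis e,
# then x_c = x_e @ T (coordinates as row vectors).
x_e = np.array([1.0, -2.0, 0.5])   # arbitrary test coordinates
x = x_e @ E                        # the vector itself, in the original basis
x_c = np.linalg.solve(C.T, x)      # its coordinates in basis c
print(np.allclose(x_c, x_e @ T))   # True
```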


An expression of the form λ1·A1 + λ2·A2 + … + λn·An is called a linear combination of the vectors A1, A2, …, An with coefficients λ1, λ2, …, λn.

Determination of linear dependence of a system of vectors

A system of vectors A1, A2, …, An is called linearly dependent if there exists a nonzero set of numbers λ1, λ2, …, λn for which the linear combination of vectors λ1·A1 + λ2·A2 + … + λn·An equals the zero vector, that is, if the system of equations A1x1 + A2x2 + … + Anxn = Θ has a nonzero solution.
A set of numbers λ1, λ2, …, λn is nonzero if at least one of the numbers λ1, λ2, …, λn is different from zero.

Determination of linear independence of a system of vectors

Vector system A 1 , A 2 ,...,A n called linearly independent, if the linear combination of these vectors λ 1 *A 1 +λ 2 *A 2 +...+λ n *A n equal to the zero vector only for a zero set of numbers λ 1, λ 2 ,...,λ n , that is, the system of equations: A 1 x 1 +A 2 x 2 +...+A n x n =Θ has a unique zero solution.
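In practice, the linear dependence of a concrete system is usually checked by comparing the rank of the coordinate matrix with the number of vectors; a minimal sketch (the sample vectors are our own):

```python
import numpy as np

# A system of vectors (as rows) is linearly independent exactly when the
# rank of the matrix equals the number of vectors.
vectors = np.array([[1, 2, 3],
                    [2, 4, 6],     # 2 * the first row -> dependence
                    [0, 1, 1]])

r = np.linalg.matrix_rank(vectors)
print("dependent" if r < len(vectors) else "independent")   # dependent
```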

Example 29.1

Check if a system of vectors is linearly dependent

Solution:

1. We compose a system of equations:

2. We solve it using the Gauss method. The Jordan transformations of the system are given in Table 29.1. When calculating, the right-hand sides of the system are not written down, since they are equal to zero and do not change under Jordan transformations.

3. From the last three rows of the table we write down a resolved system equivalent to the original one:

4. We obtain the general solution of the system:

5. Setting the value of the free variable x3 = 1 at our discretion, we obtain a particular nonzero solution X = (-3, 2, 1).

Answer: Thus, for the nonzero set of numbers (-3, 2, 1), the linear combination of the vectors equals the zero vector: -3A1 + 2A2 + 1·A3 = Θ. Hence, the system of vectors is linearly dependent.

Properties of vector systems

Property (1)
If a system of vectors is linearly dependent, then at least one of its vectors can be expressed in terms of the others; conversely, if at least one of the vectors of a system can be expressed in terms of the others, then the system of vectors is linearly dependent.

Property (2)
If any subsystem of vectors is linearly dependent, then the entire system is linearly dependent.

Property (3)
If a system of vectors is linearly independent, then any of its subsystems is linearly independent.

Property (4)
Any system of vectors containing a zero vector is linearly dependent.

Property (5)
A system of m-dimensional vectors is always linearly dependent if the number of vectors n exceeds their dimension (n > m).

Basis of the vector system

A basis of a system of vectors A1, A2, …, An is a subsystem B1, B2, …, Br (each of the vectors B1, B2, …, Br is one of the vectors A1, A2, …, An) that satisfies the following conditions:
1. B1, B2, …, Br is a linearly independent system of vectors;
2. every vector Aj of the system A1, A2, …, An can be linearly expressed through the vectors B1, B2, …, Br.

Here r is the number of vectors included in the basis.

Theorem 29.1 (on the unit basis of a system of vectors).

If a system of m-dimensional vectors contains m distinct unit vectors E1, E2, …, Em, then they form a basis of the system.

Algorithm for finding the basis of a system of vectors

In order to find a basis of the system of vectors A1, A2, …, An it is necessary to:

  • write down the homogeneous system of equations corresponding to the system of vectors: A1x1 + A2x2 + … + Anxn = Θ;
  • bring this system to step form; the vectors corresponding to the leading (basic) columns then form a basis of the system.

Definition of basis. A system of vectors forms a basis if:

1) it is linearly independent,

2) any vector of space can be linearly expressed through it.

Example 1. Space basis: .

2. In the vector system the basis is the vectors: , because linearly expressed in terms of vectors.

Comment. To find the basis of a given system of vectors you need to:

1) write the coordinates of the vectors into the matrix,

2) using elementary transformations, bring the matrix to a triangular form,

3) non-zero rows of the matrix will be the basis of the system,

4) the number of vectors in the basis is equal to the rank of the matrix.
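A sketch of this procedure using SymPy's exact row reduction (the data is that of Example 2 below; rref() also reports the pivot columns, which name the basis vectors when the vectors are written as columns):

```python
import sympy as sp

a1, a2, a3, a4 = [1, 2, 2, 4], [2, 3, 5, 1], [3, 4, 8, -2], [2, 5, 0, 3]
M = sp.Matrix([a1, a2, a3, a4]).T      # vectors as columns

rref, pivots = M.rref()
print(pivots)                          # (0, 1, 3): a1, a2, a4 form a basis
print(list(rref[:, 2]))                # [-1, 2, 0, 0], i.e. a3 = -a1 + 2*a2
```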

Kronecker-Capelli theorem

The Kronecker–Capelli theorem gives a complete answer to the question of the consistency of an arbitrary system of linear equations in several unknowns.

Kronecker–Capelli theorem. A system of linear algebraic equations is consistent if and only if the rank of the augmented matrix of the system equals the rank of its main (coefficient) matrix.

The algorithm for finding all solutions of a consistent system of linear equations follows from the Kronecker–Capelli theorem and the following theorems.

Theorem. If the rank of a consistent system equals the number of unknowns, then the system has a unique solution.

Theorem. If the rank of a consistent system is less than the number of unknowns, then the system has infinitely many solutions.

Algorithm for solving an arbitrary system of linear equations:

1. Find the ranks of the main and augmented matrices of the system. If they are not equal, then the system is inconsistent (has no solutions). If the ranks are equal, the system is consistent.

2. For a consistent system, find a minor whose order equals the rank of the matrix (such a minor is called basic). Compose a new system of equations in which the coefficients of the unknowns enter the basic minor (these unknowns are called the principal unknowns), and discard the remaining equations. Leave the principal unknowns with their coefficients on the left, and move the remaining unknowns (called the free unknowns) to the right-hand sides of the equations.

3. Express the principal unknowns in terms of the free ones; this gives the general solution of the system.

4. Assigning arbitrary values to the free unknowns, we obtain the corresponding values of the principal unknowns; in this way we find particular solutions of the original system of equations.
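Step 1 of this algorithm in code; a sketch with NumPy on a small sample system (the coefficients are our own illustration):

```python
import numpy as np

# Compare rank(A) with the rank of the augmented matrix [A | b].
A = np.array([[1, 2, -1],
              [2, 4, -2]])          # sample coefficients
b = np.array([3, 6])

rA = np.linalg.matrix_rank(A)
rAb = np.linalg.matrix_rank(np.column_stack([A, b]))

if rA != rAb:
    print("inconsistent: no solutions")
elif rA == A.shape[1]:
    print("consistent: unique solution")
else:
    print("consistent: infinitely many solutions")   # this case here (rank 1 < 3)
```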

Linear programming. Basic Concepts

Linear programming is a branch of mathematical programming that studies methods for solving extremal problems that are characterized by a linear relationship between variables and a linear criterion.

A necessary condition for posing a linear programming problem is the presence of restrictions on the availability of resources, the amount of demand, the production capacity of the enterprise, and other production factors.

The essence of linear programming is to find the points of the largest or smallest value of a certain function F under a certain set of restrictions imposed on its arguments; the system of restrictions, as a rule, has an infinite number of solutions. Each set of values of the variables (the arguments of the function F) that satisfies the system of constraints is called a feasible plan of the linear programming problem. The function F whose maximum or minimum is sought is called the objective function of the problem. A feasible plan on which the maximum or minimum of the function F is attained is called an optimal plan of the problem.

The system of restrictions that determines the set of plans is dictated by the production conditions. The linear programming problem (LP problem) consists in choosing the most profitable (optimal) plan from the set of feasible plans.

In its general formulation, the linear programming problem looks like this:

There are variables x = (x1, x2, …, xn) and a function of these variables f(x) = f(x1, x2, …, xn), which is called the objective function. The problem is posed: find the extremum (maximum or minimum) of the objective function f(x) provided that the variables x belong to some region G.

Depending on the type of the function f(x) and the region G, one distinguishes branches of mathematical programming: quadratic programming, convex programming, integer programming, and so on. Linear programming is characterized by the fact that
a) the function f(x) is a linear function of the variables x1, x2, …, xn;
b) the region G is determined by a system of linear equalities or inequalities.
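A minimal linear programming sketch with SciPy's linprog (the objective and constraints are invented sample data; linprog minimizes, so the objective is negated in order to maximize):

```python
from scipy.optimize import linprog

# maximize F = 3*x1 + 2*x2  subject to  x1 + x2 <= 4,  x1 + 3*x2 <= 6,  x >= 0
c = [-3, -2]                   # negated objective: linprog minimizes
A_ub = [[1, 1],
        [1, 3]]
b_ub = [4, 6]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)         # optimal plan and the maximum of F: [4. 0.] 12.0
```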

A linear combination of vectors a1, …, am is a vector of the form λ1a1 + λ2a2 + … + λmam, where λ1, …, λm are arbitrary coefficients.

A system of vectors a1, …, am is called linearly dependent if there exists a linear combination of it equal to the zero vector that has at least one nonzero coefficient.

A system of vectors a1, …, am is called linearly independent if in every linear combination of it equal to the zero vector all coefficients are zero.

A basis of a system of vectors a1, …, am is a non-empty linearly independent subsystem of it through which any vector of the system can be expressed.

Example 2. Find a basis of the system of vectors a1 = (1, 2, 2, 4), a2 = (2, 3, 5, 1), a3 = (3, 4, 8, -2), a4 = (2, 5, 0, 3) and express the remaining vectors through the basis.

Solution: We build a matrix in which the coordinates of the given vectors are arranged in columns and bring it to step form:

$\begin{pmatrix} 1 & 2 & 3 & 2 \\ 2 & 3 & 4 & 5 \\ 2 & 5 & 8 & 0 \\ 4 & 1 & -2 & 3 \end{pmatrix} \sim \begin{pmatrix} 1 & 2 & 3 & 2 \\ 0 & -1 & -2 & 1 \\ 0 & 1 & 2 & -4 \\ 0 & -7 & -14 & -5 \end{pmatrix} \sim \begin{pmatrix} 1 & 2 & 3 & 2 \\ 0 & -1 & -2 & 1 \\ 0 & 0 & 0 & -3 \\ 0 & 0 & 0 & -12 \end{pmatrix} \sim \begin{pmatrix} 1 & 2 & 3 & 2 \\ 0 & -1 & -2 & 1 \\ 0 & 0 & 0 & -3 \\ 0 & 0 & 0 & 0 \end{pmatrix}$

The basis of this system is formed by the vectors a1, a2, a4, which correspond to the leading elements of the rows (columns 1, 2 and 4). To express the vector a3, we solve the equation x1·a1 + x2·a2 + x4·a4 = a3. It reduces to a system of linear equations whose matrix is obtained from the original one by moving the column corresponding to a3 to the place of the column of free terms. Therefore, to solve the system we use the step matrix obtained above, making the necessary rearrangement in it.

We successively find:

-3x4 = 0, x4 = 0;

-x2 + x4 = -2, x2 = 2;

x1 + 2x2 + 2x4 = 3, x1 + 4 = 3, x1 = -1;

a3 = -a1 + 2a2.

Remark 1. If it is necessary to express several vectors through the basis, then for each of them a corresponding system of linear equations is constructed. These systems differ only in the columns of free terms. Therefore, to solve them, one can create a single matrix with several columns of free terms; each system is then solved independently of the others.

Remark 2. To express any vector, it is sufficient to use only the basis vectors of the system that precede it. In this case, there is no need to reformat the matrix; it is enough to put a vertical line in the right place.
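Remark 1 can be carried out numerically by solving one system with several right-hand sides at once; a sketch with NumPy using the basis columns of Example 2 (np.linalg.lstsq returns the exact coefficients here because the system is consistent):

```python
import numpy as np

B = np.array([[1, 2, 2],
              [2, 3, 5],
              [2, 5, 0],
              [4, 1, 3]], dtype=float)              # 4x3: columns a1, a2, a4
rhs = np.array([[3], [4], [8], [-2]], dtype=float)  # column a3 (more could be appended)

coeffs = np.linalg.lstsq(B, rhs, rcond=None)[0]
print(np.round(coeffs.ravel(), 10))   # [-1. 2. 0.] -> a3 = -a1 + 2*a2
```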

Exercise 2. Find the basis of the system of vectors and express the remaining vectors through the basis:

a) a1 = (1, 3, 2, 0), a2 = (3, 4, 2, 1), a3 = (1, -2, -2, 1), a4 = (3, 5, 1, 2);

b) a1 = (2, 1, 2, 3), a2 = (1, 2, 2, 3), a3 = (3, -1, 2, 2), a4 = (4, -2, 2, 2);

c) a1 = (1, 2, 3), a2 = (2, 4, 3), a3 = (3, 6, 6), a4 = (4, -2, 1), a5 = (2, -6, -2).

    1.3. Fundamental system of solutions

A system of linear equations is called homogeneous if all its free terms are equal to zero.

The fundamental system of solutions of a homogeneous system of linear equations is the basis of the set of its solutions.

Let us be given an inhomogeneous system of linear equations. A homogeneous system associated with a given one is a system obtained from a given one by replacing all free terms by zeros.

If an inhomogeneous system is consistent and indeterminate, then its arbitrary solution has the form f_n + λ1·f_o1 + … + λk·f_ok, where f_n is a particular solution of the inhomogeneous system and f_o1, …, f_ok is a fundamental system of solutions of the associated homogeneous system.

Example 3. Find a particular solution to the inhomogeneous system from Example 1 and the fundamental system of solutions to the associated homogeneous system.

Solution. Let's write the solution obtained in example 1 in vector form and decompose the resulting vector into a sum over the free parameters present in it and fixed numerical values:

(x1, x2, x3, x4) = (-2a + 7b - 2, a, -2b + 1, b) = (-2a, a, 0, 0) + (7b, 0, -2b, b) + (-2, 0, 1, 0) = a·(-2, 1, 0, 0) + b·(7, 0, -2, 1) + (-2, 0, 1, 0).

We get f n = (– 2, 0, 1, 0), f o1 = (-2, 1, 0, 0), f o2 = (7, 0, -2, 1).

Comment. The problem of finding a fundamental system of solutions to a homogeneous system is solved similarly.
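A sketch of finding a fundamental system of solutions with SymPy. The matrix below is not the original system of Example 1 (which is not reproduced in this text); it is a hypothetical system constructed to have the same solution set as in Example 3:

```python
import sympy as sp

# A fundamental system of solutions is a basis of the null space of the
# matrix of the homogeneous system.
A = sp.Matrix([[1, 2, 0, -7],
               [0, 0, 1, 2]])

for v in A.nullspace():          # basis of the solutions of A*x = 0
    print(list(v))               # [-2, 1, 0, 0] and [7, 0, -2, 1]

# A particular solution of A*x = (-2, 1) is (-2, 0, 1, 0); the general solution
# is this particular solution plus any combination of the null-space basis.
```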

Exercise 3.1 Find the fundamental system of solutions of a homogeneous system:

a)

b)

c) 2x1 - x2 + 3x3 = 0.

Exercise 3.2. Find a particular solution to the inhomogeneous system and a fundamental system of solutions to the associated homogeneous system:

a)

b)
