Elementary Matrix

An elementary matrix E is a square matrix that generates an elementary row operation on a matrix A (which need not be square) under the multiplication EA.

From: Linear Algebra (Third Edition), 2014

E

Fred E. Szabo PhD , in The Linear Algebra Survival Guide, 2015

Illustration

An elementary matrix resulting from the interchange of two rows

A = IdentityMatrix[3]

{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}

A = {A[[1]], A[[3]], A[[2]]}

{{1, 0, 0}, {0, 0, 1}, {0, 1, 0}}

An elementary matrix resulting from multiplication of a row by a nonzero constant

A = IdentityMatrix[3]

{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}

A[[2]] = 3 A[[2]];

A

{{1, 0, 0}, {0, 3, 0}, {0, 0, 1}}

An elementary matrix resulting from the addition of a multiple of a row to another row

A = IdentityMatrix[3]

{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}

A[[3]] = A[[3]] - 4 A[[1]];

A

{{1, 0, 0}, {0, 1, 0}, {-4, 0, 1}}
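The same three operations can be mirrored outside Mathematica; a minimal pure-Python sketch (helper names are my own) that builds each elementary matrix from the identity and applies it by left multiplication EA:

```python
def identity(n):
    # n x n identity matrix as a list of rows
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    # plain triple-loop matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# interchange of two rows: swap rows 2 and 3 of the identity
E_swap = identity(3)
E_swap[1], E_swap[2] = E_swap[2], E_swap[1]

# multiplication of a row by a nonzero constant: row 2 times 3
E_scale = identity(3)
E_scale[1] = [3 * x for x in E_scale[1]]

# addition of a multiple of a row to another row: row 3 minus 4 times row 1
E_add = identity(3)
E_add[2] = [a - 4 * b for a, b in zip(E_add[2], E_add[0])]

# left multiplication EA performs the row operation on any 3-row matrix A
A = [[1, 2], [3, 4], [5, 6]]
print(matmul(E_swap, A))   # [[1, 2], [5, 6], [3, 4]]
print(matmul(E_scale, A))  # [[1, 2], [9, 12], [5, 6]]
print(matmul(E_add, A))    # [[1, 2], [3, 4], [1, -2]]
```

Note that `E_add` comes out as {{1, 0, 0}, {0, 1, 0}, {−4, 0, 1}}, matching the third Mathematica illustration above.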


URL:

https://www.sciencedirect.com/science/article/pii/B9780124095205500126

Additional Applications

Stephen Andrilli , David Hecker , in Elementary Linear Algebra (4th Edition), 2010

Exercises for Section 8.6

1.

For each elementary matrix below, determine its corresponding row operation. Also, use the inverse operation to find the inverse of the given matrix.

⋆(a)

[ 1 0 0; 0 0 1; 0 1 0 ]

⋆(b)

[ 1 0 0; 0 2 0; 0 0 1 ]

(c)

[ 1 0 0; 0 1 0; 4 0 1 ]

(d)

[ 1 0 0 0; 0 6 0 0; 0 0 1 0; 0 0 0 1 ]

⋆(e)

[ 1 0 0 0; 0 1 0 0; 0 0 1 2; 0 0 0 1 ]

(f)

[ 0 0 0 1; 0 1 0 0; 0 0 1 0; 1 0 0 0 ]

2.

Express each of the following as a product of elementary matrices (if possible), in the manner of Example 5:

⋆(a)

[ 4 9; 3 7 ]

(b)

[ 3 2 1; 13 8 9; 1 1 2 ]

⋆(c)

[ 0 0 5 0; 3 0 0 2; 0 6 10 1; 3 0 0 3 ]

3.

Let A and B be m × n matrices. Prove that A and B are row equivalent if and only if B = PA for some nonsingular m × m matrix P.

4.

Prove that if U is an upper triangular matrix with all main diagonal entries nonzero, then U −1 exists and is upper triangular. (Hint: Show that the method for calculating the inverse of a matrix does not produce a row of zeroes on the left side of the augmented matrix. Also, show that for each row reduction step, the corresponding elementary matrix is upper triangular. Conclude that U −1 is the product of upper triangular matrices, and is therefore upper triangular (see Exercise 18(b) in Section 1.5).)

5.

If E is an elementary matrix, show that E T is also an elementary matrix. What is the relationship between the row operation corresponding to E and the row operation corresponding to E T ?

6.

Let F be an elementary n × n matrix. Show that the product AF T is the matrix obtained by performing a "column" operation on A analogous to one of the three types of row operations. (Hint: What is (AF T ) T ?)

►7.

Prove Corollary 8.9.

8.

Consider the homogeneous system AX = O, where A is an n × n matrix. Prove that this system has a nontrivial solution if and only if A cannot be expressed as the product of elementary n × n matrices.

9.

Let A and B be m × n and n × p matrices, respectively, and let E be an m × m elementary matrix.

(a)

Show that rank(EA) = rank(A).

(b)

Show that if A has k rows of all zeroes, then rank(A) ≤ m − k.

(c)

Show that if A is in reduced row echelon form, then rank(AB) ≤ rank(A). (Use part (b).)

(d)

Use parts (a) and (c) to prove that for a general matrix A, rank(AB) ≤ rank(A).

(e)

Compare this exercise with Exercise 18 in Section 2.3.

⋆10.

True or False:

(a)

Every elementary matrix is square.

(b)

If A and B are row equivalent matrices, then there must exist an elementary matrix E such that B = EA.

(c)

If E 1, …, E k are n × n elementary matrices, then the inverse of E 1 E 2 ⋯ E k is E k ⋯ E 2 E 1.

(d)

If A is a nonsingular matrix, then A −1 can be expressed as a product of elementary matrices.

(e)

If R is a row operation, E is its corresponding m × m matrix, and A is any m × n matrix, then the reverse row operation R −1 has the property R −1(A) = E −1 A.


URL:

https://www.sciencedirect.com/science/article/pii/B9780123747518000196

Volume 2

A. de Juan , ... R. Tauler , in Comprehensive Chemometrics, 2009

2.19.2.2 Resolution Based on Elementary Matrix Transformations (Gentle)

The resolution method based on elementary matrix transformations proposed by Manne and Grande, 34 hereafter called Gentle, works within the space of raw data. The key idea is that the true solutions C and S T can be obtained by matrix transformation from any pair of matrices, C o and S o T, fitting optimally the raw data matrix X. This starting point agrees with the model in Equation (2), where the link between the initial estimates C o and S o T and the desired chemically meaningful solutions, C and S T, happens via a transformation matrix R.

Thus, this method starts with an initial set of real spectra from the original matrix, S o T, obtained with the use of a pure variable selection method. 13 This matrix S o T contains good approximate spectra, which are linear combinations of the true spectra sought. Afterward, an initial C o matrix is calculated by least squares from X and S o T. The pair of matrices C o and S o T obtained in this way reproduces the original data set X with an optimal fit. The matrix C o is then transformed into the chemically meaningful matrix C via a series of elementary matrix transformation steps that each affects a small part of the data. Thus, the series of successive transformations can be expressed as

(12) C 1 = C o R o , C 2 = C 1 R 1 , … , C = C n R n

Or, in a more compact form,

(13) C = C o R o R 1 ⋯ R n

To preserve the fit of the original data matrix X, the spectra matrix S o T is successively transformed as

(14) S T = R n −1 ⋯ R 1 −1 R o −1 S o T

If R is defined as R = R o R 1 ⋯ R n , we can easily recall the expression in Equation (2). The particularity of this approach lies in the fact that the successive transformation matrices R o , R 1 , …, R n are elementary matrices. To understand the way these R i matrices act on small parts of the concentration profiles and spectra, we reformulate the bilinear CR model in a profilewise way. Let us assume that we have a two-component system. The related CR model, expressed as a sum of the contributions of each dyad of profiles, can be written as

(15) X = c 1 s 1 T + c 2 s 2 T

Adding and subtracting the term k c 2 s 1 T to the above expression and reorganizing the terms, we obtain the following two equations:

(16) X = c 1 s 1 T + c 2 s 2 T + k c 2 s 1 T − k c 2 s 1 T

(17) X = ( c 1 + k c 2 ) s 1 T + c 2 ( s 2 T − k s 1 T )

Figure 3 shows the relationship between the original bilinear model in Equation (15) and the transformed one in Equation (17) in matrix form.

Figure 3. Scheme of single-step modification of profiles in a two-component system by elementary matrix transformation.

Figures 3(a) and 3(c) show the original model in Equation (15) and the model in Equation (17), respectively. The link between the two is the transformation matrix R shown in Figure 3(b). This transformation matrix has the property of having ones along the diagonal and only one nonnull off-diagonal element. Matrices with this structure receive the name of elementary matrices. 35

The elementary matrix in Figure 3(b) is represented by the algebraic notation E 21(k), where the subindex indicates the nonnull element in the data matrix. The inverse of the elementary matrix E 21(k) is E 21(−k). Right multiplication of C by the elementary matrix E 21(k) modifies the first column of C by addition of k times the second profile, that is, c 1 → c 1 + k c 2 , and left multiplication of S T by E 21(−k) modifies the second row of S T by addition of −k times the first profile, that is, s 2 → s 2 − k s 1 .

Without loss of generality, for a system with n components, an elementary matrix transformation using matrix E ji (k) would lead to the replacements c i → c i + k c j and s j → s j − k s i . To represent a concentration profile c i as a linear combination of all of the others in a data set, the product of the elementary matrices Π E ji (k j ) will lead to the replacement c i → c i + Σ k j c j . As a result, all spectra with j ≠ i would be modified appropriately, as s j → s j − k j s i .
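A small numeric sketch of this pairwise action (all profile values are invented for illustration): right multiplication of C by E 21(k) and left multiplication of S T by E 21(−k) change the profiles but leave the product X = C S T intact.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# invented two-component system: 4 retention times x 2 components
C = [[1.0, 0.2], [2.0, 1.0], [0.5, 3.0], [0.1, 1.5]]
St = [[1.0, 0.0, 2.0], [0.0, 1.0, 1.0]]   # 2 components x 3 wavelengths

k = 0.7
E = [[1.0, 0.0], [k, 1.0]]       # E_21(k): ones on the diagonal, k off-diagonal
E_inv = [[1.0, 0.0], [-k, 1.0]]  # its inverse E_21(-k)

C_new = matmul(C, E)        # c1 -> c1 + k*c2
St_new = matmul(E_inv, St)  # s2 -> s2 - k*s1

X = matmul(C, St)
X_new = matmul(C_new, St_new)
# X_new equals X: the transformation pair preserves the fit of the data
```

The invariance is exactly the cancellation written out in Equations (16) and (17).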

Figure 3 and the explanations above have helped to visualize the two main ideas linked to elementary matrix transformations. First, the transformed concentration profiles and spectra are linear combinations of the starting profiles. Second, an elementary matrix transformation affects concentration profiles and spectra pairwise, hence the need of successive elementary matrix transformations to obtain a global modification of C and S T . The elementary matrix transformations presented are the ones used by the Gentle method to transform the initial concentration and spectra estimates into the real solutions sought in the resolution approach. The strategy used in practice and the conditions that the data set should fulfill to be analyzed are explained below in detail.

The application of the resolution method based on elementary matrix transformations will be shown for the chromatographic system (HPLC-DAD) with four components shown in Figure 1. Once the number of components in the system is known, the steps given below should be followed:

(a)

Generation of initial spectral estimates, S o T, from the original data matrix X, by pure variable selection methods and normalization to constant area.

(b)

Calculation of C o by least squares according to the bilinear resolution model, as C o = X ( S o T ) + .

(c)

Organization of concentration profiles according to elution order in C o.

(d)

Application of constraints to the elution profiles via elementary matrix transformations (nonnegativity in the elution profiles and preservation of the correct elution windows).

(e)

Modification of the pure spectra with the inverse of the elementary matrices used in step (d) to provide an optimal reproduction of the original data set.

Step (d) is the core of the algorithm, and highlights the fact that the application of constraints in Gentle differs from the methods used in other iterative approaches. Knowing that elementary matrix transformations produce linear combinations of existing profiles, it seems logical that the shape of a given concentration profile c i must be basically modified by combination with the closest left c i−1 and/or right c i+1 neighboring profiles. Therefore, the elementary matrix transformations to eliminate the negative parts of a given concentration profile c i follow the sequence given below:

1.

Choice of the profile with the lowest minimum (largest negative contribution) c i (m) , where m is the number of the iterative transformation step.

2.

See whether c i (m) needs to pass through a transformation. This will happen if

(a)

The negative minimum is lower than a preset negative threshold value.

(b)

One of the neighboring profiles, c i−1 (m) or c i+1 (m) , is the largest profile above the minimum.

(c)

The largest profile is larger than the preset positive threshold value (the absolute value of the threshold is usually 1% of the absolute maximum in C o ).

3.

If all conditions in step (2) are fulfilled, take the largest neighboring profile, for example, c i−1 (m) , and replace c i (m) by c i (m+1) = c i (m) + k c i−1 (m) , where k is chosen so that the lowest minimum in the profile c i (m+1) equals the preset negative threshold value. This is equivalent to performing C m+1 = C m E i,i−1 (k).

4.

Transform appropriately the connected spectral profiles, that is, s i−1 (m+1) = s i−1 (m) − k s i (m) . This is the elementary matrix transformation S T m+1 = E i,i−1 (−k) S T m . (For a better understanding of steps (3) and (4), see the matrix C o and the concentration profiles and related spectra after the first elementary transformation given in Figures 4(a) and 4(b).)

5.

Normalize the spectra in S T m+1 , rescale the profiles in C m+1 accordingly, and go back to step (2) until none of the concentration profiles fulfill the conditions to be transformed.
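A much-simplified sketch of one corrective pass through steps (1) to (4), with invented profiles, no thresholds, and no normalization; here k is chosen to raise the lowest minimum exactly to zero rather than to a preset threshold:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# invented profiles: component 1 has a negative dip where component 2 is large
C = [[1.0, 0.1], [0.8, 0.5], [0.2, 1.0], [-0.3, 0.9]]
St = [[1.0, 0.0, 0.5], [0.0, 1.0, 0.5]]
X_before = matmul(C, St)

i, j = 0, 1                 # profile to correct and its neighboring profile
t = min(range(len(C)), key=lambda r: C[r][i])   # time of the lowest minimum
k = -C[t][i] / C[t][j]      # raise that minimum exactly to zero

for row in C:               # c_i -> c_i + k*c_j   (C_{m+1} = C_m E)
    row[i] += k * row[j]
St[j] = [a - k * b for a, b in zip(St[j], St[i])]  # s_j -> s_j - k*s_i

X_after = matmul(C, St)
# the corrected c_i is nonnegative and the data fit X is unchanged
```

The compensating spectral update is what keeps the reproduction of X optimal, as in step (e) of the algorithm above.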

Figure 4. (a) Initial spectra estimates (S o T) and least-squares calculation of C o . The blue profile is the most negative profile to be transformed. The green profile is used to perform the corrective elementary matrix transformation. (b) Concentration profiles after the first elementary matrix transformation, C 1 . Related transformed spectra matrix, S 1 T. (c) Final solutions of C and S T after application of nonnegativity and correction of elution windows.

Once the concentration profiles are positive, it is possible to check whether the inverse transformations keep the related spectra positive. If the factor k is modified in a gradual manner, keeping the value slightly smaller than the exact optimum, this check can often be avoided.

This is the procedure followed to apply the nonnegativity constraint. After this step, the final solution is obtained modifying the concentration profiles so that they match their predetermined elution windows, that is, minimizing the concentration profile elements outside the preset elution window. The initial setting of these windows is done in an automated way after the nonnegativity correction, taking the values around the maximum that exceed the preset positive threshold. Once this is done, each concentration profile is modified by a product of elementary matrix transformations that will give

(18) c i (m) = c i (m−1) + ∑ j≠i c j (m−1) k j

where the k j elements are calculated by linear regression minimizing the deviation from zero of the elements of c i (m) outside the determined elution window. This calculation slightly modifies the initial elution windows and, in order to obtain more reliable solutions, this procedure is repeated a few times, modifying the definition of the elution windows according to the results of the last iterative cycle. Figure 4(c) shows the final profiles recovered for the data set of Figure 1 by elementary matrix transformations after suitable introduction of all needed constraints.

Quality checks of the results can be done by examining the meaningfulness of the resolved profiles; for example, lack of unimodality could be considered a serious fault and be likely related to a wrong estimation of the number of compounds. In addition, the presence of nonnegative elements in the R matrix is indicative of a successful result. An independent validation of the resolved spectra could be achieved by comparing the spectral shapes recovered with those obtained by subwindow factor analysis (SFA), 36 a resolution method mainly focused on the resolution of pure spectra (see the chapter on noniterative curve resolution methods for further explanation of SFA, Chapter 2.18). Although rotational ambiguity can still exist, if the two methods give the same solution, the results are very likely correct.

An important point to make is that Gentle is a suitable method for analyzing process data, where concentration profiles evolve in a sequential manner and elution windows can be easily determined. Data sets with less structured concentration profiles, such as environmental or spectroscopic images, cannot be easily solved using this approach.


URL:

https://www.sciencedirect.com/science/article/pii/B9780444527011000508

Matrices

Richard Bronson , ... John T. Saccoman , in Linear Algebra (Third Edition), 2014

Important Terms

augmented matrix

block diagonal matrix

coefficient matrix

cofactor

column matrix

component

consistent equations

derived set

determinant

diagonal element

diagonal matrix

dimension

directed line segment

element

elementary matrix

elementary row operations

equivalent directed line segments

expansion by cofactor

Gaussian elimination

homogeneous equations

identity matrix

inconsistent equations

inverse

invertible matrix

linear equation

lower triangular matrix

LU decomposition

main diagonal

mathematical induction

matrix

nonhomogeneous equations

nonsingular matrix

n-tuple

order

partitioned matrix

pivot

pivotal condensation

power of a matrix

row matrix

row-reduced form

scalar

singular matrix

skew-symmetric matrix

square

submatrix

symmetric matrix

transpose

trivial solution

upper triangular matrix

zero matrix


URL:

https://www.sciencedirect.com/science/article/pii/B9780123914200000019

Multirate and Wavelet Signal Processing

In Wavelet Analysis and Its Applications, 1998

2.2.1.5 Elementary operations

Elementary row (or column) operations on integer matrices are important because they allow the patterning of integer matrices into simpler forms, such as triangular and diagonal forms.

Definition 2.2.1.8

Any elementary row operation on an integer-valued matrix P is defined to be any of the following:

Type-1: Interchange two rows.

Type-2: Multiply a row by a nonzero integer constant c.

Type-3: Add an integer multiple of a row to another row.

These operations can be represented by premultiplying P with an appropriate square matrix called an elementary matrix. To illustrate these elementary operations, consider the following examples. (By convention, the rows and columns are numbered starting with zero rather than one.) The first example is a Type-1 elementary matrix that interchanges row 0 and row 3, which has the form

[ 0 0 0 1; 0 1 0 0; 0 0 1 0; 1 0 0 0 ] .

The second example is a Type-2 elementary matrix that multiplies elements in row 1 by c ≠ 0, which has the form

[ 1 0 0 0; 0 c 0 0; 0 0 1 0; 0 0 0 1 ] .

The third example is a Type-3 elementary matrix that replaces row 3 with row 3 + (a × row 0), which has the form

[ 1 0 0 0; 0 1 0 0; 0 0 1 0; a 0 0 1 ] .

All three types of elementary matrices are integer-valued unimodular matrices.
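A quick determinant check (my own sketch) on 4 × 4 instances of the three types: Type-1 and Type-3 always have determinant −1 and +1 respectively, while a Type-2 matrix has determinant c, so it is unimodular in the strict sense only when c = ±1.

```python
def det(M):
    # cofactor expansion along the first row (fine for small matrices)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

I4 = [[1 if i == j else 0 for j in range(4)] for i in range(4)]

T1 = [I4[3], I4[1], I4[2], I4[0]]          # Type-1: interchange rows 0 and 3
c = 5
T2 = [row[:] for row in I4]
T2[1][1] = c                               # Type-2: multiply row 1 by c
a = 7
T3 = [row[:] for row in I4]
T3[3][0] = a                               # Type-3: add a * row 0 to row 3

print(det(T1), det(T2), det(T3))   # -1 5 1
```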


URL:

https://www.sciencedirect.com/science/article/pii/S1874608X98800472

The inverse

Richard Bronson , Gabriel B. Costa , in Matrix Methods (Fourth Edition), 2021

3.1 Introduction

Definition 1

An inverse of an n × n matrix A is an n × n matrix B having the property that

(1) A B = B A = I .

Here, B is called an inverse of A and is normally denoted by A −1 . If a square matrix A has an inverse, it is said to be invertible or nonsingular. If A does not possess an inverse, it is said to be singular. Note that inverses are only defined for square matrices. In particular, the identity matrix is invertible and is its own inverse because

I I = I .

Example 1

Determine whether

B = [ 1 1/2; 1/3 1/4 ] or C = [ −2 1; 3/2 −1/2 ]

are inverses for

A = [ 1 2; 3 4 ] .

Solution B is an inverse if and only if AB = BA = I; C is an inverse if and only if AC = CA = I. Here,

AB = [ 1 2; 3 4 ] [ 1 1/2; 1/3 1/4 ] = [ 5/3 1; 13/3 5/2 ] ≠ [ 1 0; 0 1 ] ,

while

AC = [ 1 2; 3 4 ] [ −2 1; 3/2 −1/2 ] = [ 1 0; 0 1 ] = [ −2 1; 3/2 −1/2 ] [ 1 2; 3 4 ] = CA .

Thus, B is not an inverse for A, but C is. We may write A −1 = C. ■

Definition 1 is a test for checking whether a given matrix is an inverse of another given matrix. In the Final Comments to this chapter we prove that if AB = I for two square matrices of the same order, then A and B commute and BA = I. Thus, we can reduce the checking procedure by half. A matrix B is an inverse for a square matrix A if either AB = I or BA = I; each equality automatically guarantees the other for square matrices. We will show in Section 3.4 that an inverse is unique. If a square matrix has an inverse, it has only one.

Definition 1 does not provide a method for finding inverses. We develop such a process in the next section. Nonetheless, inverses for some matrices can be found directly.

The inverse of a diagonal matrix D having only nonzero elements on its main diagonal is also a diagonal matrix whose diagonal elements are the reciprocals of the corresponding diagonal elements of D. That is, if

D = diag( λ 1 , λ 2 , λ 3 , … , λ n ) ,

then,

D −1 = diag( 1/λ 1 , 1/λ 2 , 1/λ 3 , … , 1/λ n ) .

It is easy to show that if any diagonal element in a diagonal matrix is zero, then that matrix is singular. (See Problem 57.)
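This reciprocal rule is a one-liner in code; a minimal sketch (the function name is my own) that also rejects the singular case:

```python
def diag_inverse(diag):
    # reciprocals of the diagonal elements; a zero makes D singular
    if any(d == 0 for d in diag):
        raise ValueError("diagonal matrix with a zero diagonal element is singular")
    return [1.0 / d for d in diag]

print(diag_inverse([2.0, 4.0, 0.5]))   # [0.5, 0.25, 2.0]
```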

An elementary matrix E is a square matrix that generates an elementary row operation on a matrix A (which need not be square) under the multiplication EA. Elementary matrices are constructed by applying the desired elementary row operation to an identity matrix of appropriate order. The appropriate order for both I and E is a square matrix having as many columns as there are rows in A; then, the multiplication EA is defined. Because identity matrices contain many zeros, the process for constructing elementary matrices can be simplified still further. After all, nothing is accomplished by interchanging the positions of zeros, multiplying zeros by nonzero constants, or adding zeros to zeros.

(i)

To construct an elementary matrix that interchanges the ith row with the jth row, begin with an identity matrix of the appropriate order. First, interchange the unity element in the i-i position with the zero in the j-i position, and then interchange the unity element in the j-j position with the zero in the i-j position.

(ii)

To construct an elementary matrix that multiplies the ith row of a matrix by the nonzero scalar k, replace the unity element in the i-i position of the identity matrix of appropriate order with the scalar k.

(iii)

To construct an elementary matrix that adds to the jth row of a matrix k times the ith row, replace the zero element in the j-i position of the identity matrix of appropriate order with the scalar k.
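The three construction rules can be sketched directly in Python (helper names are my own; note the 0-based indices, unlike the 1-based rows in the text):

```python
def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def swap_rows_matrix(n, i, j):
    # rule (i): interchange the unity elements at i-i and j-j with the
    # zeros at j-i and i-j, i.e. swap rows i and j of the identity
    E = identity(n)
    E[i], E[j] = E[j], E[i]
    return E

def scale_row_matrix(n, i, k):
    # rule (ii): replace the unity element in the i-i position with k
    E = identity(n)
    E[i][i] = k
    return E

def add_row_matrix(n, j, i, k):
    # rule (iii): put k in the j-i position to add k times row i to row j
    E = identity(n)
    E[j][i] = k
    return E

# for example, the matrix that interchanges rows 2 and 4 of a 4-row matrix
print(swap_rows_matrix(4, 1, 3))
# [[1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]]
```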

Example 2

Find elementary matrices that when multiplied on the right by any 4 × 3 matrix A will (a) interchange the second and fourth rows of A, (b) multiply the third row of A by 3, and (c) add to the fourth row of A −5 times its second row.

Solution

(a) [ 1 0 0 0; 0 0 0 1; 0 0 1 0; 0 1 0 0 ] ,  (b) [ 1 0 0 0; 0 1 0 0; 0 0 3 0; 0 0 0 1 ] ,  (c) [ 1 0 0 0; 0 1 0 0; 0 0 1 0; 0 −5 0 1 ] .

Example 3

Find elementary matrices that when multiplied on the right by any 3 × 5 matrix A will (a) interchange the first and second rows of A, (b) multiply the third row of A by −0.5, and (c) add to the third row of A −1 times its second row.

Solution

(a) [ 0 1 0; 1 0 0; 0 0 1 ] ,  (b) [ 1 0 0; 0 1 0; 0 0 −0.5 ] ,  (c) [ 1 0 0; 0 1 0; 0 −1 1 ] .

The inverse of an elementary matrix that interchanges two rows is the matrix itself; it is its own inverse. The inverse of an elementary matrix that multiplies one row by a nonzero scalar k is obtained by replacing k by 1/k. The inverse of an elementary matrix that adds to one row a constant k times another row is obtained by replacing the scalar k by −k.
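These three rules are easy to verify numerically; a minimal sketch (helpers are my own) using the operations of Example 2:

```python
def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

n = 4
# interchange of rows 2 and 4: its own inverse
E1 = identity(n)
E1[1], E1[3] = E1[3], E1[1]
assert matmul(E1, E1) == identity(n)

# multiplication of row 3 by 3: inverse replaces 3 by 1/3
E2 = identity(n)
E2[2][2] = 3
E2_inv = identity(n)
E2_inv[2][2] = 1 / 3
assert matmul(E2, E2_inv) == identity(n)

# addition of -5 times row 2 to row 4: inverse uses +5
E3 = identity(n)
E3[3][1] = -5
E3_inv = identity(n)
E3_inv[3][1] = 5
assert matmul(E3, E3_inv) == identity(n)
```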

Example 4

Compute the inverses of the elementary matrices found in Example 2.

Solution

(a) [ 1 0 0 0; 0 0 0 1; 0 0 1 0; 0 1 0 0 ] ,  (b) [ 1 0 0 0; 0 1 0 0; 0 0 1/3 0; 0 0 0 1 ] ,  (c) [ 1 0 0 0; 0 1 0 0; 0 0 1 0; 0 5 0 1 ] .

Example 5

Compute the inverses of the elementary matrices found in Example 3.

Solution

(a) [ 0 1 0; 1 0 0; 0 0 1 ] ,  (b) [ 1 0 0; 0 1 0; 0 0 −2 ] ,  (c) [ 1 0 0; 0 1 0; 0 1 1 ] .

Finally, if A can be partitioned into the block diagonal form,

A = diag( A 1 , A 2 , A 3 , … , A n ) ,

then A is invertible if and only if each of the diagonal blocks A 1 , A 2 , …, A n is invertible and

A −1 = diag( A 1 −1 , A 2 −1 , A 3 −1 , … , A n −1 ) .

Example 6

Find the inverse of

A = [ 2 0 0 0 0 0 0
      0 5 0 0 0 0 0
      0 0 1 0 0 0 0
      0 0 4 1 0 0 0
      0 0 0 0 1 0 0
      0 0 0 0 0 0 1
      0 0 0 0 0 1 0 ] .

Solution Set

A 1 = [ 2 0; 0 5 ] , A 2 = [ 1 0; 4 1 ] , and A 3 = [ 1 0 0; 0 0 1; 0 1 0 ] ;

then A is in the block diagonal form

A = diag( A 1 , A 2 , A 3 ) .

Here A 1 is a diagonal matrix with nonzero diagonal elements, A 2 is an elementary matrix that adds to the second row four times the first row, and A 3 is an elementary matrix that interchanges the second and third rows; thus,

A 1 −1 = [ 1/2 0; 0 1/5 ] , A 2 −1 = [ 1 0; −4 1 ] , and A 3 −1 = [ 1 0 0; 0 0 1; 0 1 0 ] ,

and

A −1 = [ 1/2 0 0 0 0 0 0
         0 1/5 0 0 0 0 0
         0 0 1 0 0 0 0
         0 0 −4 1 0 0 0
         0 0 0 0 1 0 0
         0 0 0 0 0 0 1
         0 0 0 0 0 1 0 ] .
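The result of Example 6 can be checked numerically; a minimal sketch (pure Python, helper is my own) multiplying A by the claimed inverse:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2, 0, 0, 0, 0, 0, 0],
     [0, 5, 0, 0, 0, 0, 0],
     [0, 0, 1, 0, 0, 0, 0],
     [0, 0, 4, 1, 0, 0, 0],
     [0, 0, 0, 0, 1, 0, 0],
     [0, 0, 0, 0, 0, 0, 1],
     [0, 0, 0, 0, 0, 1, 0]]

A_inv = [[0.5, 0, 0, 0, 0, 0, 0],
         [0, 0.2, 0, 0, 0, 0, 0],
         [0, 0, 1, 0, 0, 0, 0],
         [0, 0, -4, 1, 0, 0, 0],
         [0, 0, 0, 0, 1, 0, 0],
         [0, 0, 0, 0, 0, 0, 1],
         [0, 0, 0, 0, 0, 1, 0]]

P = matmul(A, A_inv)
# P is the 7 x 7 identity: each diagonal block is inverted independently
```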


URL:

https://www.sciencedirect.com/science/article/pii/B9780128184196000034

Linear Transformations from a Geometric Viewpoint

J. Douglas Carroll , Paul E. Green , in Mathematical Tools for Applied Multivariate Analysis, 1997

4.7.1 Elementary Operations

Elementary operations play an essential role in the solution of sets of simultaneous equations. Illustratively taking the case of the rows of a transformation matrix, there are three basic operations, called elementary row operations, that can be used to transform one matrix into another. We may

1.

interchange any two rows;

2.

multiply any row by a nonzero scalar;

3.

add to any given row a scalar multiple of another row.

If we change some matrix A into another matrix B by the use of elementary row operations, we say that B is row equivalent to A.

Elementary row operations involve the multiplication of A by special kinds of matrices that effect the above transformations. However, we could just as easily talk about elementary column operations, the same kinds as those shown above, that are applied to the columns of A. A matrix so transformed would be called column equivalent to the original matrix. To simplify our discussion, we illustrate the ideas via elementary row operations. The reader should bear in mind, however, that the same approach is applicable to the columns of A.

To illustrate how elementary row operations can be applied to the general problem of solving a set of simultaneous equations, let us again consider the two equations described earlier:

I { 4x1 − 10x2 = −2 ;  3x1 + 7x2 = 13 }

Suppose we first multiply all members of the first equation by −3/4 and then add the result to those of the second equation:

II { 4x1 − 10x2 = −2 ;  (29/2)x2 = 29/2 }

Next, let us multiply the second equation by 2/29. If so, we obtain

III { 4x1 − 10x2 = −2 ;  x2 = 1 }

All these sets of equations are row equivalent in the sense that all three sets have the same solution:

x1 = 2 ,  x2 = 1

and each can be transformed to either of the others via elementary row operations. Nonetheless, it is clear that the solution to the third set of equations is most apparent and easily found.
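The same two row operations can be carried out on the augmented matrix in Python; a minimal sketch using the multipliers above:

```python
# augmented matrix [A | b] for system I
M = [[4.0, -10.0, -2.0],
     [3.0, 7.0, 13.0]]

# add -3/4 times row 1 to row 2 (giving system II)
m = -3.0 / 4.0
M[1] = [a + m * b for a, b in zip(M[1], M[0])]
# M[1] is now [0.0, 14.5, 14.5], i.e. (29/2) x2 = 29/2

# multiply row 2 by 2/29 (giving system III)
M[1] = [(2.0 / 29.0) * a for a in M[1]]
# M[1] is now approximately [0, 1, 1], i.e. x2 = 1

x2 = M[1][2]
x1 = (M[0][2] - M[0][1] * x2) / M[0][0]   # back-substitute into row 1
print(x1, x2)
```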

While elementary row operations are useful in the general job of solving sets of simultaneous equations, this is not their only desirable feature. A second, and major, attraction is the fact that elementary operations (row or column), as applied to a matrix A, do not alter its rank. Moreover, as will be shown, elementary operations transform the given matrix in such a way as to make its rank easy to determine by inspection. As it turns out, all three sets of equations above have the same rank (rank 2) since they are all equivalent in terms of elementary row operations.

Elementary row operations are performed by a special set of square, nonsingular matrices called elementary matrices. An elementary matrix is a nonsingular matrix that can be obtained from the identity matrix by an elementary row operation. For example, if we wanted to interchange two rows of a matrix, we could do so by means of the permutation matrix

[ 0 1; 1 0 ]

For example, if we take the point x = [ 1 2 ]′ in two dimensions, the premultiplication of x by the permutation matrix above would yield: 11

x* = [ 0 1; 1 0 ] [ 1 2 ]′ = [ 2 1 ]′

We note that the coordinates of x have, indeed, been permuted.

As mentioned above, elementary matrices are nonsingular. In the 2 × 2 matrix case, the set of elementary matrices consists of the following:

I.

Permutation

[ 0 1; 1 0 ]

II.

Stretches

[ k 0; 0 1 ] A stretch or compression of the plane that is parallel to the x axis; [ 1 0; 0 k ] A stretch or compression of the plane that is parallel to the y axis

III.

Shears

[ 1 c; 0 1 ] A shear parallel to the x axis; [ 1 0; c 1 ] A shear parallel to the y axis

Continuing with the numerical example involving x = [ 1 2 ]′, we have

Stretches

[ k 0; 0 1 ] [ 1 2 ]′ = [ k 2 ]′ ;  [ 1 0; 0 k ] [ 1 2 ]′ = [ 1 2k ]′

Shears

[ 1 c; 0 1 ] [ 1 2 ]′ = [ 1 + 2c 2 ]′ ;  [ 1 0; c 1 ] [ 1 2 ]′ = [ 1 c + 2 ]′

But there is nothing stopping us from applying, in some prescribed order, a series of premultiplications by elementary matrices. Furthermore, the product of a set of nonsingular matrices will itself be nonsingular.

The geometric character of permutations, stretches, and shears has already been illustrated in Section 4.5. Here we are interested in two major applications of elementary row operations and the matrices that represent them:

1.

determining the rank of a matrix, and

2.

finding the inverse of a matrix, when such inverse exists. Each application is described in turn.


URL:

https://www.sciencedirect.com/science/article/pii/B9780121609542500056

Numerical Analysis

John N. Shoosmith , in Encyclopedia of Physical Science and Technology (Third Edition), 2003

V.C Nonsymmetric Matrices

The problem of finding the eigenvalues of a nonsymmetric matrix is inherently less stable than the symmetric case; furthermore, some or all of the eigenvalues and eigenvectors may be complex.

The first stage is to "balance" the matrix by applying similarity transformations in order to make the maximum element in corresponding rows and columns approximately equal. Next, the matrix is reduced to "Hessenberg form" by applying stabilized elementary similarity transformations. An upper Hessenberg matrix has the form

H = [ a 1,1  a 1,2  a 1,3  ⋯  a 1,n
      a 2,1  a 2,2  a 2,3  ⋯  a 2,n
      0      a 3,2  a 3,3  ⋯  a 3,n
      0      0      a 4,3  ⋯  a 4,n
      ⋮                  ⋱    ⋮
      0      0      0    ⋯  a n,n−1  a n,n ]

The elementary matrices are similar in form to those used in the LU factorization (Gaussian elimination) procedure, except that the pivotal element is taken to be one element below the diagonal. As in LU factorization for general matrices, the rows are interchanged to maximize the pivotal element; however, a difference is that for each elementary matrix multiplying A on the left, its inverse is multiplied on the right, and each row interchange is accompanied by the corresponding column interchange, in order to preserve the eigenvalues.
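A pure-Python sketch of this reduction (my own implementation of the stabilized elementary similarity transformations described above, without the balancing stage): each left elimination is paired with its inverse on the right, and each row interchange with the matching column interchange, so the eigenvalues are preserved.

```python
def hessenberg(A):
    # reduce A in place to upper Hessenberg form by stabilized
    # elementary similarity transformations
    n = len(A)
    for k in range(n - 2):
        # partial pivoting: largest element in column k below the diagonal
        p = max(range(k + 1, n), key=lambda r: abs(A[r][k]))
        if A[p][k] == 0:
            continue
        # interchange rows k+1 and p, and the matching columns
        A[k + 1], A[p] = A[p], A[k + 1]
        for row in A:
            row[k + 1], row[p] = row[p], row[k + 1]
        # eliminate below the pivotal element A[k+1][k]
        for i in range(k + 2, n):
            m = A[i][k] / A[k + 1][k]
            A[i] = [a - m * b for a, b in zip(A[i], A[k + 1])]
            # apply the inverse on the right: column k+1 += m * column i
            for r in range(n):
                A[r][k + 1] += m * A[r][i]
    return A

H = hessenberg([[4.0, 1.0, 2.0, 3.0],
                [1.0, 3.0, 0.0, 2.0],
                [2.0, 0.0, 1.0, 1.0],
                [1.0, 2.0, 2.0, 0.0]])
# entries below the first subdiagonal are now (numerically) zero, and the
# trace, hence the eigenvalue sum, is unchanged
```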

Since real nonsymmetric matrices may have complex conjugate pairs of eigenvalues, an application of the QR algorithm with origin shift as described for symmetric matrices will involve calculating with complex numbers. To avoid this, the double-shift QR algorithm is usually applied to the Hessenberg matrix H (0) obtained in the first stage. The kth iteration of this method is

(H^{(k)} - \sigma_{k,1} I)(H^{(k)} - \sigma_{k,2} I) = Q^{(k)} R^{(k)}, \qquad
H^{(k+1)} = Q^{(k)\,T} H^{(k)} Q^{(k)}

The shifts σ_{k,1} and σ_{k,2} are taken to be the eigenvalues of the lowest 2 × 2 submatrix of H^{(k)}; however, when convergence of this submatrix is ascertained, the method is applied to the remaining upper submatrix in the same way. Eventually the sequence H^{(k)}, k = 0, 1, 2, …, will converge to a triangular matrix, except that there will be 2 × 2 blocks on the diagonal for each pair of complex eigenvalues.

As before, the transformations may be accumulated if all of the eigenvectors are required. If the eigenvector associated with a specific eigenvalue is required, inverse iteration may be used (albeit with complex arithmetic if the eigenvalue is complex).
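A short Python check (my own, under the assumption stated above that σ_{k,1} and σ_{k,2} are the eigenvalues of the trailing 2 × 2 block) makes the key point concrete: the shifts have real sum (the block's trace) and real product (its determinant), so (H − σ_{k,1}I)(H − σ_{k,2}I) = H² − (σ_{k,1} + σ_{k,2})H + σ_{k,1}σ_{k,2}I can be formed entirely in real arithmetic even when the shifts themselves are complex:

```python
def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def double_shift_matrix(h):
    """Return M = H^2 - s*H + p*I, where s and p are the trace and
    determinant of the trailing 2-by-2 submatrix of H (i.e. the sum and
    product of the two shifts).  All arithmetic is real."""
    n = len(h)
    s = h[n - 2][n - 2] + h[n - 1][n - 1]         # sigma_1 + sigma_2
    p = (h[n - 2][n - 2] * h[n - 1][n - 1]
         - h[n - 2][n - 1] * h[n - 1][n - 2])     # sigma_1 * sigma_2
    h2 = matmul(h, h)
    return [[h2[i][j] - s * h[i][j] + (p if i == j else 0.0)
             for j in range(n)] for i in range(n)]
```

In practice the full product is never formed; only its first column is needed to start an implicit double-shift (Francis) step. It is computed in full here purely for illustration.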

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B0122274105005056

SOLUTION OF EQUATIONS

M.V.K. Chari , S.J. Salon , in Numerical Methods in Electromagnetism, 2000

The LDL^T Decomposition

If A is symmetric, then the decomposition can be made in the more convenient form A = LDL^T, where L is unit lower triangular and D is a diagonal matrix. We see this as follows: We have shown that A can be expressed as A = LDU′. In this case A = A^T = (LDU′)^T = U′^T D^T L^T. Here U′^T is unit lower triangular, D^T is diagonal, and L^T is upper triangular. As U′^T D^T L^T is a decomposition of A and this decomposition is unique, we deduce that U′^T = L, and hence A = LDL^T. If A is positive definite, then the elements of D are positive.
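As an illustration (my own sketch, not part of the text), the LDL^T factors can be computed directly from these relations; exact rational arithmetic keeps the check clean:

```python
from fractions import Fraction

def ldlt(a):
    """LDL^T factorization of a symmetric matrix a (no square roots).
    Returns L (unit lower triangular) and the diagonal of D."""
    n = len(a)
    L = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    d = [Fraction(0)] * n
    for j in range(n):
        # D_j = a_jj - sum_{k<j} L_jk^2 D_k
        d[j] = Fraction(a[j][j]) - sum(L[j][k] ** 2 * d[k] for k in range(j))
        for i in range(j + 1, n):
            # L_ij = (a_ij - sum_{k<j} L_ik L_jk D_k) / D_j
            L[i][j] = (Fraction(a[i][j])
                       - sum(L[i][k] * L[j][k] * d[k] for k in range(j))) / d[j]
    return L, d
```

Because no square roots are taken, the factorization stays within the rationals whenever A is rational, and for a positive definite A all entries of D come out positive, as the text asserts.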

We will now show that Gauss elimination, triangular factorization, and LU decomposition involve the same operations. We will use the following example [125]:

(11.38) \quad
\begin{aligned}
2x_1 + x_2 + 3x_3 &= 6 \\
2x_1 + 3x_2 + 4x_3 &= 9 \\
3x_1 + 4x_2 + 7x_3 &= 14
\end{aligned}

In matrix notation

(11.39) \quad A = \begin{pmatrix} 2 & 1 & 3 \\ 2 & 3 & 4 \\ 3 & 4 & 7 \end{pmatrix}

Each step of the Gaussian elimination procedure is what we have defined as an elementary row operation. Each corresponds to the premultiplication of A by an elementary matrix. We can therefore express the Gaussian elimination as follows:

(11.40) \quad
\begin{bmatrix} 1 & -1/2 & -3/2 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & -1/2 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 4/5 \end{bmatrix}
\times
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1/2 & 0 \\ 0 & -5/4 & 1 \end{bmatrix}
\begin{bmatrix} 1/2 & 0 & 0 \\ -1 & 1 & 0 \\ -3/2 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 2 & 1 & 3 \\ 2 & 3 & 4 \\ 3 & 4 & 7 \end{bmatrix}

This process will transform the original matrix into an identity matrix. Consider the last two matrices. Premultiplying the original matrix by the row vector [1/2, 0, 0] divides the first row by 2, thus putting a 1 in the (1,1) place. Premultiplying by [−1, 1, 0] subtracts the first row from the second, putting a 0 in the (2,1) position. Premultiplying by the third row, [−3/2, 0, 1], puts a zero in the (3,1) place. The reader can confirm that the process results in the identity matrix. Note that all of the elementary matrices are triangular, that is,

(11.41) \quad U_1 U_2 L_3 L_2 L_1 A = I

Because the product of upper (lower) triangular matrices is an upper (lower) triangular matrix, we can write

(11.42) \quad U_\pi L_\pi A = I

Here the subscript π is used to indicate a product of matrices. It is also true that the inverse of an upper (lower) triangular matrix is an upper (lower) triangular matrix. Therefore

(11.43) \quad A = LU, \qquad L = L_\pi^{-1}, \qquad U = U_\pi^{-1}

The solution by Gauss elimination involves two stages: (1) forward elimination (premultiplication by L_π) brings the system into an upper triangular form (U = L_π A), and (2) back-substitution (premultiplication by U_π) transforms the system into an identity matrix (I = U_π L_π A).

Returning to Ax = y,

(11.44) \quad
\begin{array}{ll}
Ax = y & Ax = y \\
L_\pi A x = L_\pi y & A = LU, \; LUx = y \\
Ux = z, \; z = L_\pi y & Lz = y, \; Ux = z \\
x = U^{-1} z & x = U^{-1} z \\
\text{Gauss elimination} & \text{LU decomposition}
\end{array}

Therefore the two methods are equivalent.
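The chain of elementary matrices in Eq. (11.40) is easy to verify directly; the following Python sketch (my own check, using exact rationals) premultiplies A by the five elementary matrices in turn and confirms that the result is the identity, so that their accumulated product is A^{-1}:

```python
from fractions import Fraction as F

# Verification of Eq. (11.40) in exact arithmetic: the five elementary
# matrices reduce A to the identity, so their product is A^{-1}.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

A  = [[F(2), F(1), F(3)], [F(2), F(3), F(4)], [F(3), F(4), F(7)]]
L1 = [[F(1, 2), 0, 0], [-1, 1, 0], [F(-3, 2), 0, 1]]  # scale row 1, clear column 1
L2 = [[1, 0, 0], [0, F(1, 2), 0], [0, F(-5, 4), 1]]   # scale row 2, clear (3, 2)
L3 = [[1, 0, 0], [0, 1, 0], [0, 0, F(4, 5)]]          # scale row 3 to get a 1
U2 = [[1, 0, 0], [0, 1, F(-1, 2)], [0, 0, 1]]         # clear (2, 3)
U1 = [[1, F(-1, 2), F(-3, 2)], [0, 1, 0], [0, 0, 1]]  # clear (1, 2) and (1, 3)

M = A
for E in (L1, L2, L3, U2, U1):
    M = matmul(E, M)   # M ends as the 3-by-3 identity
```

Accumulating the same five matrices into a single product gives A^{-1}, and applying that product to the right-hand side (6, 9, 14) of Eq. (11.38) produces the solution x = (1, 1, 1).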

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780126157604500122

I

Fred E. Szabo PhD , in The Linear Algebra Survival Guide, 2015

Illustration

An invertible 2-by-2 matrix

MatrixForm[A1 = {{1, 2}, {3, 4}}]

1 2
3 4

MatrixForm[A2 = Inverse[A1]]

−2 1
3/2 −1/2

A1.A2 == IdentityMatrix[2] == A2.A1

True

Invertible 2-by-2 elementary matrices

EA1 = {{0, 1}, {1, 0}}: interchange of rows 1 and 2

EA2 = {{1, 0}, {0, s}}: multiplication of row 2 by a nonzero constant s

EA3 = {{s, 0}, {0, 1}}: multiplication of row 1 by a nonzero constant s

EA4 = {{1, s}, {0, 1}}: addition of s times row 2 to row 1

EA5 = {{1, 0}, {s, 1}}: addition of s times row 1 to row 2
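Each of these five matrices is invertible, and the inverse is again an elementary matrix of the same type: an interchange is its own inverse, scaling by s is undone by scaling by 1/s, and adding s times a row is undone by adding −s times it. A small Python check (my own sketch, not from the text) of all five pairs:

```python
from fractions import Fraction as F

def matmul2(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

s = F(7)  # stands in for any nonzero constant
pairs = [
    ([[0, 1], [1, 0]], [[0, 1], [1, 0]]),      # interchange: self-inverse
    ([[1, 0], [0, s]], [[1, 0], [0, 1 / s]]),  # scale row 2 by s, undo by 1/s
    ([[s, 0], [0, 1]], [[1 / s, 0], [0, 1]]),  # scale row 1 by s, undo by 1/s
    ([[1, s], [0, 1]], [[1, -s], [0, 1]]),     # add s*row2 to row1, undo with -s
    ([[1, 0], [s, 1]], [[1, 0], [-s, 1]]),     # add s*row1 to row2, undo with -s
]
```

Multiplying each pair in either order gives the 2-by-2 identity, which is why a product of elementary matrices is always invertible.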

Invertible 3-by-3 matrices

A = {{9, 3, 1}, {-1, 3, 0}, {4, 0, 9}};

Adding 9 times the second row to the first row

E1 = {{1, 9, 0}, {0, 1, 0}, {0, 0, 1}};

MatrixForm[A2 = E1.A]

0 30 1
−1 3 0
4 0 9

Adding 4 times the second row to the third row

E2 = {{1, 0, 0}, {0, 1, 0}, {0, 4, 1}};

MatrixForm[A3 = E2.A2]

0 30 1
−1 3 0
0 12 9

Interchanging the first and second rows

E3 = {{0, 1, 0}, {1, 0, 0}, {0, 0, 1}};

MatrixForm[A4 = E3.A3]

−1 3 0
0 30 1
0 12 9

Multiplying the first row by −1

E4 = {{-1, 0, 0}, {0, 1, 0}, {0, 0, 1}};

MatrixForm[A5 = E4.A4]

1 −3 0
0 30 1
0 12 9

Dividing the second row by 30

E5 = {{1, 0, 0}, {0, 1/30, 0}, {0, 0, 1}};

MatrixForm[A6 = E5.A5]

1 −3 0
0 1 1/30
0 12 9

Subtracting 12 times the second row from the third row

E6 = {{1, 0, 0}, {0, 1, 0}, {0, -12, 1}};

MatrixForm[A7 = E6.A6]

1 −3 0
0 1 1/30
0 0 43/5

Adding 3 times the second row to the first row

E7 = {{1, 3, 0}, {0, 1, 0}, {0, 0, 1}};

MatrixForm[A8 = E7.A7]

1 0 1/10
0 1 1/30
0 0 43/5

Multiplying the third row by 5/43

E8 = {{1, 0, 0}, {0, 1, 0}, {0, 0, 5/43}};

MatrixForm[A9 = E8.A8]

1 0 1/10
0 1 1/30
0 0 1

Subtracting 1/30 times the third row from the second row

E9 = {{1, 0, 0}, {0, 1, -1/30}, {0, 0, 1}};

MatrixForm[A10 = E9.A9]

1 0 1/10
0 1 0
0 0 1

Subtracting 1/10 times the third row from the first row

E10 = {{1, 0, -1/10}, {0, 1, 0}, {0, 0, 1}};

MatrixForm[A11 = E10.A10]

1 0 0
0 1 0
0 0 1

The product of the ten elementary matrices used to convert the matrix A to the 3-by-3 identity matrix is the inverse B of the matrix A.

(B = E10.E9.E8.E7.E6.E5.E4.E3.E2.E1) == Inverse[A]

True
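The same computation can be cross-checked outside Mathematica. The Python sketch below (my own; the helper name inverse is not from the text) applies the same three kinds of elementary row operations, interchange, scaling, and adding a multiple of one row to another, to the augmented matrix [A | I], leaving A^{-1} on the right half:

```python
from fractions import Fraction as F

def inverse(a):
    """Invert a square matrix by elementary row operations on [A | I].
    Assumes A is invertible; uses exact rational arithmetic."""
    n = len(a)
    m = [[F(a[i][j]) for j in range(n)] + [F(int(i == j)) for j in range(n)]
         for i in range(n)]                      # augmented matrix [A | I]
    for c in range(n):
        p = next(r for r in range(c, n) if m[r][c] != 0)
        m[c], m[p] = m[p], m[c]                  # row interchange
        piv = m[c][c]
        m[c] = [x / piv for x in m[c]]           # scale the pivot row
        for r in range(n):
            if r != c and m[r][c] != 0:          # clear the rest of column c
                f = m[r][c]
                m[r] = [x - f * y for x, y in zip(m[r], m[c])]
    return [row[n:] for row in m]
```

Running it on the matrix A of this illustration and multiplying the result by A recovers the 3-by-3 identity, matching the ten-step reduction above.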

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780124095205500163