Introduction to Dynamic Systems: Theory, Models, and Applications (Luenberger)

Harmonic Motion.

Thus, the roots of the characteristic polynomial are imaginary. It follows that the general solution is built from complex exponentials and is, in general, a complex value for each value of t. Indeed, the functions cos wt and sin wt form a fundamental set of solutions to the homogeneous equation. The pattern of solution is the pure harmonic motion illustrated in the figure.
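For reference, a minimal worked form of the harmonic equation (a sketch; the symbols y(t) for the displacement and ω for the natural frequency are assumptions of this sketch):

```latex
\ddot{y}(t) + \omega^2 y(t) = 0, \qquad
\lambda^2 + \omega^2 = 0 \;\Rightarrow\; \lambda = \pm i\omega, \qquad
y(t) = A\cos\omega t + B\sin\omega t .
```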

It consists of a pure sine or cosine wave. Variation of A and B acts only to change the height of the oscillations and the displacement of phase.

Beats. Suppose now that an oscillatory system is subjected to an additional oscillatory force whose frequency differs from the system's natural frequency. This corresponds roughly to the motion of a child's swing being pushed at other than the natural rate, or to one violin string being subjected to the force of air motion generated by the vigorous vibrations of a nearby string.
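A minimal sketch of the forced equation and one particular solution (the symbols ω0 for the natural frequency, ω for the forcing frequency, and a for the forcing amplitude are assumptions of this sketch):

```latex
\ddot{y}(t) + \omega_0^2\, y(t) = a\cos\omega t, \qquad
y_p(t) = \frac{a}{\omega_0^2 - \omega^2}\,\cos\omega t \quad (\omega \neq \omega_0).
```

When ω is close to ω0, the complete solution is the sum of two sinusoids of nearly equal frequency, and their slow alternation between reinforcement and cancellation is the beat phenomenon.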

We seek to characterize the general form of the induced vibration. The general solution to the whole equation is the sum of a particular solution and the general solution of the homogeneous equation. Find the general solution.

Intelligence Test Sequence.

Find the second-order linear homogeneous difference equation which generates the sequence 1, 2, 5, 12, 29, 70, ... What is the limiting ratio of consecutive terms?

Binomial Coefficients. The sequence 0, 1, 3, 6, 10, 15, 21, 28, 36, ...

The p(k−1) that appears in the supply equation can be considered to be an estimate of the future price. In other words, when planning at time k−1 how much to supply at time k, suppliers would really like to know what p(k) will be.

Since they cannot observe the actual price in advance, they do their planning on the basis of p(k−1), using it as an estimate of p(k). Price Extrapolation. Suppliers might instead estimate the future price from the trend of past prices; one such procedure is to use linear extrapolation, illustrated in the figure.
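A minimal numerical sketch of the cobweb iteration under the two estimation rules; the linear demand and supply curves, their coefficients, and the starting prices below are all illustrative assumptions, not values from the text:

```python
# Cobweb model sketch: supply is planned from an estimate p_hat of the coming price.
# Demand: d0 - a*p(k);  supply: s0 + b*p_hat;  the market-clearing price solves d = s.
def simulate(extrapolate, steps=20, a=1.0, b=0.8, d0=10.0, s0=1.0, p0=2.0, p1=2.5):
    prices = [p0, p1]
    for _ in range(steps):
        if extrapolate:                          # linear extrapolation of the price
            p_hat = 2 * prices[-1] - prices[-2]
        else:                                    # naive estimate: last observed price
            p_hat = prices[-1]
        p_next = (d0 - s0 - b * p_hat) / a       # market clearing for p(k)
        prices.append(p_next)
    return prices

print("naive estimate:      ", [round(p, 3) for p in simulate(False)[:8]])
print("linear extrapolation:", [round(p, 3) for p in simulate(True)[:8]])
```

With these illustrative numbers the naive scheme settles toward the equilibrium price, while the extrapolating scheme oscillates with growing amplitude.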

On the surface, it would seem that this "more sophisticated" estimation scheme might be better. The answer should be in the form of a difference equation. What are the characteristic values of the equation?

Information Theory. Messages are transmitted by first encoding them into a string of symbols (dots and dashes).

Each symbol requires some length of time for its transmission. Therefore, for a fixed total time duration only a finite number of different message strings is possible. Let N_t denote the number of different message strings of duration t. The capacity C = lim (log2 N_t)/t as t → ∞ is at most 1 bit per time unit. Note: dot and dash are the only two allowed symbols; a blank space is not allowed.

Repeated Roots.

Monte Carlo Roulette. In many European casinos, including Monte Carlo, a bet on red or black is not lost outright if the outcome is green.

(See Example 5.) Instead, the bet is "imprisoned" and play continues until the ball lands on either red or black. At that point the original bet is either returned to the player or lost, depending on whether the outcome matches the color originally selected. This is, on the average, equivalent to a probability of 1/4 that twice the bet will be returned.

Therefore, an "equivalent" game, with the same odds but having the standard form of Example 5, is obtained by an appropriate choice of the win probability. Show that although this equivalent game exactly matches the odds of Monte Carlo roulette, its ruin probabilities are not exactly the same as those found in part (a).
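A small simulation sketch comparing the two games. Everything here is an illustrative assumption: a single-zero wheel with 18 red, 18 black, and 1 green pocket; a gambler who bets one unit on red each spin, starts with 5 units, and quits at 10; and a win probability of 73/148 for the equivalent game, obtained by matching expected values as in the discussion above (whether this matches the book's own setting is an assumption):

```python
import random

RED, BLACK, GREEN = 18, 18, 1          # assumed single-zero wheel
TOTAL = RED + BLACK + GREEN
START, GOAL, TRIALS = 5, 10, 50_000    # assumed stakes and trial count

def spin():
    r = random.randrange(TOTAL)
    if r < RED:
        return "red"
    return "black" if r < RED + BLACK else "green"

def play_monte_carlo():
    """One session under the imprisonment rule; returns True if the gambler is ruined."""
    capital = START
    while 0 < capital < GOAL:
        outcome = spin()
        if outcome == "red":
            capital += 1                   # win one unit
        elif outcome == "black":
            capital -= 1                   # lose the stake
        else:                              # green: the stake is imprisoned
            outcome = spin()
            while outcome == "green":      # spin until red or black appears
                outcome = spin()
            if outcome == "black":
                capital -= 1               # imprisoned stake is lost
            # if red, the stake is merely returned: capital unchanged
    return capital == 0

def play_equivalent(p=73/148):
    """One session of the 'equivalent' standard game with win probability p."""
    capital = START
    while 0 < capital < GOAL:
        capital += 1 if random.random() < p else -1
    return capital == 0

for name, game in [("imprisonment rule", play_monte_carlo),
                   ("equivalent game  ", play_equivalent)]:
    ruins = sum(game() for _ in range(TRIALS))
    print(name, "estimated ruin probability:", ruins / TRIALS)
```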

Discrete Queue. A small business receives orders for work, and services those orders on a first-come, first-served basis.

In any given hour of the day there is a probability p (very small) that the business will receive an order. It almost never receives two orders in one hour. It never completes two orders in an hour. On the average, how many orders will there be waiting for service to be completed?

The average number of waiting orders is the sum of n·u(n) over all n, where u(n) denotes the steady-state probability that exactly n orders are waiting.

Geometric Forcing Term.

Numerical Solution of Differential Equations. Differential equations are often solved numerically by a discrete forward recursion method. To solve such an equation numerically, one considers the sequence of discrete points 0, s, 2s, 3s, ..., as illustrated in the figure; the resulting forward recursion is known as Euler's method. Alternate Method.

Radioactive Dating. Normal carbon has an atomic weight of 12. The radioisotope C14, however, is also present in atmospheric carbon dioxide.

Carbon dioxide is absorbed by plants, these plants are eaten by animals, and, consequently, all living matter contains radioactive carbon. The isotope C14 is unstable; by emitting an electron, it eventually disintegrates to nitrogen.

Since at death the carbon in plant and animal tissue is no longer replenished, the percentage of C14 in such tissue begins to decrease. It decreases exponentially with a fixed half-life; that is, after one half-life period, one half of the C14 originally present remains.
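A minimal worked relation for the dating estimate (a sketch; T denotes the half-life and f(t) the fraction of the original C14 remaining after t years):

```latex
f(t) = \left(\tfrac{1}{2}\right)^{t/T} = e^{-(\ln 2)\,t/T}
\qquad\Longrightarrow\qquad
t = T\,\log_2\!\frac{1}{f(t)} .
```

So, for example, a sample retaining one quarter of its original C14 is two half-lives old.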

Estimate the age of the ruins.

Newton Cooling. According to Newton's law of cooling, an object of higher temperature than its environment cools at a rate that is proportional to the difference in temperature. What is the outside temperature? You want to drink the coffee only after it cools to your favorite temperature. If you wish to get the coffee to the proper temperature as quickly as possible, should you add the cream immediately or should you wait awhile?
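A minimal sketch of the cooling law (assuming T(t) is the object's temperature, T_e the constant environment temperature, and k > 0 the cooling constant; these symbols are not from the text):

```latex
\frac{dT}{dt} = -k\,\bigl(T(t) - T_e\bigr), \qquad
T(t) = T_e + \bigl(T(0) - T_e\bigr)\,e^{-kt}.
```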

Equi-Dimensional Equation. The equation is an example of an equi-dimensional differential equation. Find a set of linearly independent solutions.

An Elementary Seismograph. A seismograph is an instrument that records sudden ground movements. The simplest kind of seismograph, measuring horizontal displacement, consists of a mass attached to the instrument frame by a spring.

The frame moves when hit by a seismic wave, whereas the mass, isolated by the spring, initially tends to remain still. A recording pen, attached to the mass, traces a displacement in a direction opposite to the displacement of the frame. The mass will of course soon begin to oscillate. In order to be able to faithfully record additional seismic waves, it is therefore desirable to suppress the oscillation of the mass by the addition of a damper, often consisting of a plunger in a viscous fluid.
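A minimal sketch of the governing equation (assuming x(t) denotes the displacement of the mass relative to the frame, y(t) the ground displacement, and m, c, k the mass, damping constant, and spring constant; the symbols are assumptions of this sketch):

```latex
m\ddot{x} + c\dot{x} + kx = -m\ddot{y}, \qquad
\lambda = \frac{-c \pm \sqrt{c^2 - 4mk}}{2m}.
```

The three cases referred to below correspond to c^2 > 4mk (two real roots, overdamped), c^2 = 4mk (a repeated root, critically damped), and c^2 < 4mk (a complex pair, underdamped).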

To be most effective the seismograph must have a proper combination of mass, spring, and damper. Find the general solutions for x(t) for all three cases.

Prove Theorems 1, 2, and 3.

Notes and References

The elementary theories of difference and differential equations are so similar that mastery of one essentially implies mastery of the other.

However, because there are many more texts on differential equations than difference equations, the reader interested in supplemental material may find it most convenient to study differential equations.

An excellent text on difference equations, which includes many examples, is Goldberg. See also Miller [M5]. Section 2. The cobweb model is an important classic model. For further discussion see Henderson and Quandt [H2].

It is also discussed further in Chapter 7 of this book. Information theory, as discussed briefly in Problem 9, is due to Shannon. See Shannon and Weaver [S4].

Linear Algebra

Linear algebra is a nearly indispensable tool for modern analysis. It provides both a streamlined notation for problems with many variables and a powerful format for the rich theory of linear analysis. This chapter is an introductory account of that portion of linear algebra that is needed for a basic study of dynamic systems.

In particular, the first three sections of the chapter are essential prerequisites for the next chapter, and the remaining sections are prerequisites for later chapters. Other results from linear algebra that are important in the analysis of dynamic systems are discussed in individual sections in later portions of the text. In some respects this chapter can be regarded as a kind of appendix on linear algebra.

As such it is suggested that the reader may wish to skim much of the material, briefly reviewing that part which is familiar, and spending at least some preliminary effort on the parts that are unfamiliar.

Many of the concepts presented here strictly from the viewpoint of linear algebra, particularly those related to eigenvectors, are reintroduced and elaborated on with applications in Chapter 5 in the context of dynamic systems.

Accordingly, many readers will find it advantageous to study this material by referring back and forth between the two chapters.

The values of the yi's are generally considered to be known or given, and the xi's are considered unknown.

Matrices and Vectors. In general a matrix is a rectangular array of elements.
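As a small illustrative sketch (the 2 × 3 shape is arbitrary and the symbols follow the usual convention; they are not taken from the text), a system of linear equations, the rectangular array of its coefficients, and the compact matrix form:

```latex
\begin{aligned}
y_1 &= a_{11}x_1 + a_{12}x_2 + a_{13}x_3\\
y_2 &= a_{21}x_1 + a_{22}x_2 + a_{23}x_3
\end{aligned}
\qquad
\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23} \end{bmatrix},
\qquad
\mathbf{y} = \mathbf{A}\mathbf{x}.
```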

Matrices are generally denoted by boldface capital letters, such as A. Elements of the matrix are denoted, correspondingly, by lower case letters with subscripts to indicate the position of the element. Vectors are usually denoted by lower case boldface letters, and their elements have but a single subscript.

Column vectors are used for most purposes, particularly in systems of equations, but row vectors also arise naturally.

Special Matrices. For any dimension, one special matrix is the matrix whose elements are all zero.

Such a matrix is denoted by 0, and is called the zero matrix. If all elements except possibly the diagonal elements are zero, the square matrix A is said to be diagonal. A very special case of a diagonal matrix is the n x n square matrix whose elements are zero, except on the diagonal where they are equal to one.

This matrix for any dimension n is denoted I, and called the identity matrix. If two matrices A and B are of the same dimension, then their sum can be defined and is a matrix C, also of the same dimension.

Scalar Multiplication. For any matrix A and any scalar (real or complex number) a, the product aA is the matrix obtained by multiplying every element of the matrix A by the factor a.

Multiplication of two matrices to obtain a third is perhaps the most important of the elementary operations.

This is the operation that neatly packages the bulky individual operations associated with defining and manipulating systems of linear algebraic equations.

First, it should be noted that matrix multiplication is consistent with the matrix notation for a system of linear equations, as described earlier. In general, however, AB ≠ BA, even if both products are defined. A special case of matrix multiplication is the dot or inner product of two vectors. This is just the product of an n-dimensional row vector, say r, and an n-dimensional column vector, say c. One common way that the inner product arises is when one vector represents quantities and another represents corresponding unit prices.
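A short numerical illustration of these points (the matrices, quantities, and prices are made-up values for this sketch):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

print(A @ B)                            # the product AB
print(B @ A)                            # BA differs: multiplication is not commutative
print(np.array_equal(A @ B, B @ A))     # False

# Inner product: quantities times corresponding unit prices gives total cost.
quantities = np.array([3, 0, 2])            # units purchased of each good
prices     = np.array([1.50, 4.00, 2.25])   # unit price of each good
print(quantities @ prices)                  # 3*1.50 + 0*4.00 + 2*2.25 = 9.0
```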

Thus, the transpose of a product is equal to the product of the transposes in the reverse order: (AB)ᵀ = BᵀAᵀ. If the elements of a matrix depend on a variable t, making the elements functions rather than constants, it is possible to consider differentiation of the matrix. Differentiation is simply defined by differentiating each element of the matrix individually.

In order to produce its product, each industry must have on hand various amounts of the products of other industries and perhaps some of its own.

For example, the automotive industry purchases steel from the steel industry and tires from the rubber industry, while the agriculture industry purchases tractors from the automotive industry and fertilizers from the chemical industry.

The constants aij are called technical coefficients; aij is the amount of product i needed to produce one unit of product j. Denote by xi the total amount of product i produced. Then the amount of product i required for this pattern of production is ai1x1 + ai2x2 + ... + ainxn. The total amount of product i produced goes in part to help produce other products as described above, and in part to consumers to meet their demand.

Thus, total production of a product exceeds the actual consumer demand because of the use of the product in various production processes. In matrix form the balance becomes (I − A)x = d, where x is the vector of total production levels and d the vector of consumer demands; this is a compact representation of the complex interrelations among industries. The coefficient matrix is the sum of the identity I and −A. If a given set of consumer demands is specified (as, for example, by a yearly forecast of demand), the required total level of production in each of the industries can be found by solving this system for x.
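A numerical sketch of the input-output computation (the three-industry technical coefficients and the demand forecast are invented for this illustration):

```python
import numpy as np

A = np.array([[0.1, 0.2, 0.0],    # a[i, j]: amount of product i needed
              [0.3, 0.1, 0.2],    # to produce one unit of product j
              [0.1, 0.0, 0.2]])
d = np.array([100.0, 50.0, 80.0]) # forecast consumer demand for each product

# Production balance: x = A x + d, i.e. (I - A) x = d.
x = np.linalg.solve(np.eye(3) - A, d)
print(np.round(x, 2))             # total production required in each industry
```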

The determinant of the general 2 × 2 matrix with elements a11, a12, a21, a22 is given by the formula a11·a22 − a12·a21.

Laplace's Expansion. The value of the determinant corresponding to a general n × n matrix can be found in terms of lower-order determinants through use of Laplace's expansion. This expansion is defined in terms of minors or cofactors of elements of the matrix. The minor of an element aij is the determinant of the array obtained by deleting row i and column j; thus, if A is an n × n matrix, each minor is an (n−1) × (n−1) determinant. The cofactor of aij is the minor multiplied by (−1)^(i+j); thus, the cofactors are identical to the minors, except for a possible change in sign.

In terms of Laplace's expansion, the determinant of a matrix A is

det A = ai1·ci1 + ai2·ci2 + ... + ain·cin  (for any fixed row i),

or, alternatively,

det A = a1j·c1j + a2j·c2j + ... + anj·cnj  (for any fixed column j),

where cij denotes the cofactor of aij. The first of these is called an expansion along the ith row, while the second is an expansion along the jth column. All such expansions yield identical values.

A Laplace expansion expresses an nth-order determinant as a combination of (n−1)th-order determinants. Each of the required (n−1)th-order determinants can itself be expressed, by a Laplace expansion, in terms of (n−2)th-order determinants, and so on, all the way down to first order if necessary. Therefore, this expansion together with the definition of the determinant for 1 × 1 matrices is sufficient to determine the value of any determinant. The determinant of a triangular matrix is equal to the product of its diagonal elements.

We can prove this easily using induction on the dimension n together with Laplace's expansion. The result is immediate for n = 1. Suppose then that it is true for dimension n − 1. For the lower triangular case, we would expand along the first row.
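A small computational sketch of the cofactor expansion just described (pure Python and purely illustrative; in practice one would call a library routine such as numpy.linalg.det), checked on a triangular matrix:

```python
def det(A):
    """Determinant by Laplace expansion along the first row."""
    n = len(A)
    if n == 1:                      # 1 x 1 matrix: the determinant is the entry itself
        return A[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]  # delete row 0 and column j
        total += A[0][j] * (-1) ** j * det(minor)         # cofactor = (-1)^j * minor
    return total

# The determinant of a triangular matrix equals the product of its diagonal elements.
T = [[2.0, 5.0, 1.0],
     [0.0, 3.0, 4.0],
     [0.0, 0.0, 0.5]]
print(det(T))          # 2 * 3 * 0.5 = 3.0
```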

In practice, the value of a determinant is most easily found by use of rules governing the change in the value of a determinant when rows or columns of its array are linearly combined. There are three basic row operations, and associated rules, from which the effect of any linear combination of rows on the value of a determinant can be deduced: (a) if all elements in one row are multiplied by a constant c, the value of the corresponding new determinant is c times the original value; (b) if two rows are interchanged, the value of the determinant changes sign; (c) if a multiple of one row is added to another row, the value of the determinant is unchanged.

Each of these rules can be easily deduced from Laplace's expansion. Moreover, since the determinant of the transpose of a matrix is equal to the determinant of the matrix itself, three identical rules hold for column operations.

The inverse of a square matrix A, when it exists, is the matrix denoted A⁻¹ for which the product of A⁻¹ and A is the identity matrix.

Not every square matrix has an inverse. Indeed, as discussed below, a square matrix has an inverse if and only if its determinant is nonzero. If the determinant is zero the matrix is said to be singular, and no inverse exists.

Cofactor Formula for Inverses. Perhaps the simplest way to prove that an inverse exists if the determinant is not zero is to display an explicit formula for the inverse.

There is a simple formula deriving from Cramer's rule for solving sets of linear equations, which is expressed in terms of the cofactors of the matrix. This formula can be verified using Laplace's expansion as follows.
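The formula in question can be stated as follows (a standard statement, written here with the cofactor notation cij used above):

```latex
\bigl[\mathbf{A}^{-1}\bigr]_{ij} \;=\; \frac{c_{ji}}{\det \mathbf{A}},
\qquad\text{equivalently}\qquad
\mathbf{A}^{-1} = \frac{1}{\det \mathbf{A}}\,[\,c_{ij}\,]^{T}.
```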

Clearly the determinant of this new matrix is zero. The value of this determinant is unchanged if we now add the kth column to the ith column, forming the matrix A.

Let us compute (AB)⁻¹ in terms of the inverses of the individual matrices; the result is (AB)⁻¹ = B⁻¹A⁻¹.

Homogeneous Linear Equations. One of the most fundamental results of linear algebra is concerned with the existence of nonzero solutions to a set of linear homogeneous equations.

Because of its importance, we display this result as a formal lemma, and give a complete proof.

Fundamental Lemma. Let A be an n × n matrix. Then the homogeneous equation Ax = 0 possesses a nonzero solution if and only if A is singular.

The "only if" portion is quite simple. To see this, suppose there is a nonzero solution x with Ax = 0. If A were nonsingular, multiplication by A⁻¹ would give x = A⁻¹0 = 0, a contradiction. Therefore, there can be a nonzero solution only if A is singular. The "if" portion is proved by induction on the dimension n.

Suppose that it is true for dimension n − 1. We must construct a nonzero solution. The determinant of the entire transformed n-dimensional set is exactly equal to the determinant of the original set, since the transformed set was obtained by subtracting multiples of the first row.

Laplace's expansion down the first column, however, shows that the value of the determinant of the transformed set is just a11 times the determinant of the (n−1)-dimensional system. Since the n × n original determinant is assumed to be zero and a11 ≠ 0, it follows that the determinant of the (n−1)-dimensional system is zero.

Matrix algebra simultaneously provides both a compact notational framework and a set of systematic procedures for what might otherwise be complicated operations.

For purposes of conceptualization, however, to most effectively explore new ideas related to multivariable systems, it is useful to take yet another step away from detail. The appropriate step is to introduce the concept of a vector space, where vectors are regarded simply as elements in a space, rather than as special one-dimensional arrays of coefficients. Vectors of this form can be visualized as points in n-dimensional space or as directed lines emanating from the origin, and indeed this vector space is equal to what is generally referred to as complex n-dimensional space.

The components of x are the amounts of the various n coordinate vectors that comprise x. This is illustrated in the figure.

For purposes of discussion and conceptualization, however, it is not really necessary to continually think about the coordinates and the components, for they clutter up our visualization.

Instead, one imagines the vector simply as an element in the space, as illustrated in the figure. Furthermore, vectors can be added together, or multiplied by a constant, without explicit reference to the components. In this view, a vector has a meaning, and can be conceptually manipulated, quite apart from its representation in terms of the coordinate system.

A set of vectors is linearly dependent if some linear combination of them, with at least one nonzero coefficient, equals the zero vector; a set of vectors is linearly independent if it is not linearly dependent.

In general, to be linearly independent, m vectors must "fill out" m dimensions. In E^n there is a simple test, based on evaluating a determinant, to check whether n given vectors are linearly independent. The validity of the test rests on the Fundamental Lemma for linear homogeneous equations. Stacking the n vectors a1, a2, ..., an side by side, one can form an n × n matrix A. A linear combination of these vectors with coefficients x1, x2, ..., xn is then just Ax. By the Fundamental Lemma, such a combination can be zero with not all coefficients zero if and only if A is singular; hence the vectors are linearly independent if and only if det A ≠ 0.
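A quick numerical version of the determinant test (the vectors are made up for this sketch; the third is deliberately the sum of the first two):

```python
import numpy as np

a1, a2, a3 = [1, 0, 2], [0, 1, 1], [1, 1, 3]   # a3 = a1 + a2

A = np.column_stack([a1, a2, a3])   # stack the vectors side by side
print(np.linalg.det(A))             # ~0.0: the three vectors are linearly dependent

B = np.column_stack([a1, a2, [0, 0, 1]])
print(np.linalg.det(B))             # nonzero: these three are linearly independent
```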

Rank. Suppose now that A is an arbitrary m × n matrix. The rank of A is the number of linearly independent columns in A. An important result, which we do not prove, is that the rank of the transpose Aᵀ is equal to the rank of A.

That means that the number of linearly independent rows of A is the same as the number of linearly independent columns. It is therefore apparent that the rank of an m x n matrix A can be at most equal to the smaller of the two integers m and n. Thus, a matrix with two rows can have rank at most equal to 2, no matter how many columns it has.

Basis. A basis for E^n is any set of n linearly independent vectors. An arbitrary vector can be represented as a linear combination of basis vectors. Suppose now that a new basis is introduced. This basis consists of a set of n linearly independent vectors, say p1, p2, ..., pn; let P be the matrix having these vectors as its columns. Then, since we are assured that P is nonsingular because the pi's are linearly independent, we can write z = P⁻¹x, where x holds the components of a vector in the original basis and z its components in the new basis. This equation gives the new components in terms of the old.
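A small numerical sketch of the relation z = P⁻¹x (the basis vectors and the point are illustrative choices, not the book's example):

```python
import numpy as np

P = np.array([[1.0, 1.0],          # columns of P are the new basis vectors
              [0.0, 1.0]])
x = np.array([3.0, 2.0])           # components in the standard basis

z = np.linalg.solve(P, x)          # new components: z = P^{-1} x
print(z)                           # [1. 2.]
print(P @ z)                       # reconstructs x: same point, different basis
```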

Both sets of components represent the same point in the vector space; they just define that point in terms of different bases. This process of changing basis is illustrated in the figure, where the vector x is shown as being defined both in terms of the standard basis and in terms of a new basis consisting of the two vectors p1 and p2.

Geometrically, if one visualizes a vector as a point in n-dimensional space, a transformation associates a new point with each point in the space.

One example of a transformation is a rotation, in which every vector is rotated through a fixed angle about the origin. Another is an elongation, where every vector is multiplied by a constant, such as 3, and thus moves further away from the zero vector. In general, a transformation is defined on the vector space itself and has a meaning that is independent of the method used for representing vectors. An n × n matrix A together with a specified basis defines a linear transformation. Thus, a matrix transforms vectors into vectors.

Example 1 (Book Rotations). Let us think of x1, x2, x3 as coordinates of a point in three-dimensional space. As a concrete visualization, one can hold a book vertically, and face the front cover.

The x1 direction is to the right, the x2 direction is upward, and x3 is a ray coming out toward the viewer. Rotation of the book counterclockwise corresponds to A. To verify that, we note that the vector u1, corresponding to the center of the right edge of the book, is transformed to u2.

Likewise, u2 is transformed to −u1, and so forth. If one holds the book and carries out these two rotations successively, it will be found that the result is the rotation BA. In general, one linear transformation followed by another corresponds to multiplication of the two associated matrices; and since matrix multiplication is not commutative, the order of the transformations is important.

Let us consider the effect of a change of the basis on the representation of the transformation.

The new basis introduces a new representation of vectors in terms of new components. We want to construct a matrix that, in terms of this basis, has the same effect on vectors as the original matrix.

Suppose the new basis consists of the columns of an n × n matrix P. The two sets of components are then related as shown above: z = P⁻¹x. Thus, in terms of the new basis, the matrix P⁻¹AP transforms the point represented by z into the point represented by w. For reference, we record the correspondence A → P⁻¹AP, which indicates how a transformation represented by the matrix A in the standard basis is represented in a new basis.

Let us review this important argument. Given the standard basis, vectors are defined as an array of n components. A given matrix A acts on these components, yielding a new array of n components, and correspondingly a new vector defined by these components. The result of this action of A defines a transformation on the vector space, transforming vectors into new vectors. If a basis other than the standard basis is introduced, vectors will have new representations; that is, new components.

It is to be expected that to define the same transformation as before, transforming vectors just as before, a new matrix must be derived. The appropriate new matrix is P⁻¹AP.

The new basis is selected so as to simplify the representation of a transformation. Consider the 2 × 2 matrix which, with respect to the standard basis, represents a counterclockwise rotation of 90°. Let us introduce the new basis used in the example of the last section. We easily calculate the representation P⁻¹AP of the rotation in this basis; on the other hand, the original vector is represented by a new set of components in the new basis.

In essence, the objective of this study is to find, for a given transformation, a new basis in which the transformation has a simple representation, perhaps as a diagonal matrix.

This topic forms the framework for much of the study of linear time-invariant systems that is a central subject of later chapters.

An eigenvalue of a square matrix A is a scalar λ for which there is a nonzero vector e satisfying Ae = λe; such a vector e is called an eigenvector of A. The terms characteristic value and characteristic vector are sometimes used for eigenvalue and eigenvector. The geometric interpretation of an eigenvector is that operation by A on the vector merely changes the length and perhaps the sign of the vector.

It does not rotate the vector to a new position. The value of det[A − λI] is a function of the variable λ. Indeed, it can be seen that det[A − λI], when expanded out, is a polynomial of degree n in the variable λ, with the coefficient of λⁿ being (−1)ⁿ. This polynomial p(λ) is called the characteristic polynomial of the matrix A.

From the discussion above, it is clear that there is a direct correspondence between roots of the characteristic polynomial and eigenvalues of the matrix A. Since every polynomial of degree n ≥ 1 has at least one (possibly complex) root, it follows that there is always at least one solution to the characteristic equation, and hence always at least one eigenvalue. To summarize:

Theorem. Every n × n matrix A possesses at least one eigenvalue and a corresponding nonzero eigenvector.

The roots of the characteristic polynomial, then, are the eigenvalues of the matrix.
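A quick numerical illustration using a standard library routine (the matrix is made up for this sketch; collecting the eigenvectors as columns anticipates the modal-matrix construction developed below):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, V = np.linalg.eig(A)    # columns of V are eigenvectors
print(eigenvalues)                   # 5.0 and 2.0 (order may vary)

# Each eigenvector satisfies A e = lambda e, up to rounding error.
for lam, e in zip(eigenvalues, V.T):
    print(np.allclose(A @ e, lam * e))          # True, True

# With the eigenvectors as a new basis, the transformation becomes diagonal.
print(np.round(np.linalg.inv(V) @ A @ V, 10))   # diagonal matrix of the eigenvalues
```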

It is a general property that eigenvectors are defined only to within a scalar multiple. If x is an eigenvector, then so is ax for any nonzero scalar a.

Example 2 (Complex Eigenvalues). Associated with each of these distinct eigenvalues there is at least one eigenvector. As stated below, a set of such eigenvectors, each corresponding to a different eigenvalue, is always a linearly independent set.

Let λ1, λ2, ..., λm be distinct eigenvalues of the matrix A. Then any set e1, e2, ..., em of corresponding eigenvectors is linearly independent. To prove this, suppose that the eigenvectors were linearly dependent. Then there would be a nonzero linear combination of these vectors that was equal to zero. From the possible such linear combinations, select one which has the minimum number of nonzero coefficients. Without loss of generality it can be assumed that these coefficients correspond to the first k eigenvectors, and that the first coefficient is unity.

It is important to note that this result on linear independence is true even if the eigenvalues of A are not all distinct.

Any set of eigenvectors, one for each of the distinct eigenvalues, will be an independent set.

Of special importance is the case in which the n × n matrix A has n distinct eigenvalues. In that case, as is shown in this section, the corresponding n eigenvectors serve as a convenient new set of basis vectors, and with respect to this basis the original transformation is represented by a diagonal matrix. Suppose, then, that the n × n matrix A has the n distinct eigenvalues λ1, λ2, ..., λn, with corresponding eigenvectors e1, e2, ..., en. These eigenvectors are linearly independent, and therefore they can serve as a basis for the vector space E^n. An arbitrary vector x can then be expressed as a linear combination of the eigenvectors; expressed in this form, it is quite easy to find the corresponding representation for Ax.

Indeed, if x = α1e1 + α2e2 + ... + αnen, it follows immediately that Ax = α1λ1e1 + α2λ2e2 + ... + αnλnen. Thus, the new coefficients of the basis vectors are just multiples of the old coefficients. There is no mixing among coefficients as there would be in an arbitrary basis. This simple but valuable idea can be translated into the mechanics of matrix manipulation, where it takes on a form directly suitable for computation.

Define the modal matrix of A to be the n × n matrix M = [e1 e2 ... en]; that is, M has the eigenvectors as its n columns. Any square matrix with distinct eigenvalues can be put in diagonal form by a change of basis: with Λ denoting the diagonal matrix of the eigenvalues, the defining relations Aei = λiei can be collected into the single matrix equation AM = MΛ. This is seen by viewing the matrix equation one column at a time. For example, the first column on the left-hand side of the equation is A times the first column in M; that is, A times the first eigenvector.

Correspondingly, the first column on the right-hand side of the equation is just λ1 times the first eigenvector. Identical interpretations apply to the other columns.

A left eigenvector of A is a nonzero row vector f satisfying fA = λf for some eigenvalue λ. This equation can be rewritten in column form by taking the transpose of both sides, yielding Aᵀfᵀ = λfᵀ. Therefore, a left eigenvector of A is really the same thing as an ordinary right eigenvector of Aᵀ. For most purposes, however, it is more convenient to work with left and right eigenvectors than with transposes.

The characteristic polynomial of Aᵀ is det[Aᵀ − λI], which, since the determinants of a matrix and its transpose are equal, is identical to the characteristic polynomial of A. Therefore, the right and left eigenvalues (though not the eigenvectors) are identical.

Suppose λi and λj are any two distinct eigenvalues of the matrix A. Let ei be a right eigenvector corresponding to λi, and let fj be a left eigenvector corresponding to λj. Then fj·ei = 0; that is, the inner product (dot product) of the vectors fj and ei is zero. The reader may wish to check this relation on the example above.
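A numerical check of this orthogonality, together with the equality of the eigenvalues of A and Aᵀ (the matrix is an illustrative choice; left eigenvectors are obtained here as right eigenvectors of the transpose, as described above):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

evals, E = np.linalg.eig(A)      # columns of E are right eigenvectors of A
lvals, F = np.linalg.eig(A.T)    # right eigenvectors of A^T = left eigenvectors of A

print(np.sort(evals), np.sort(lvals))   # same eigenvalues for A and its transpose

# For every pair belonging to *different* eigenvalues, f_j . e_i should vanish.
for i, lam_i in enumerate(evals):
    for j, lam_j in enumerate(lvals):
        if abs(lam_i - lam_j) > 1e-9:
            print(lam_i, lam_j, float(F[:, j] @ E[:, i]))   # approximately 0
```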

As a formal statement this result is expressed by the following theorem.

Theorem. For any two distinct eigenvalues of a matrix, the left eigenvector of one eigenvalue is orthogonal to the right eigenvector of the other.

For some matrices with multiple roots it may still be possible to find n linearly independent eigenvectors and use these as a new basis, leading to a diagonal representation.

The simplest example is the identity matrix I, which has 1 as an eigenvalue repeated n times. This matrix is, of course, already diagonal. Two important concepts for matrices with multiple roots, which help characterize the complexity of a given matrix, are the notions of algebraic and geometric multiplicity.

The algebraic multiplicity of an eigenvalue λi is the multiplicity determined by the characteristic polynomial: it is the number of times λi appears as a root of that polynomial. If the algebraic multiplicity is one, the eigenvalue is said to be simple. The geometric multiplicity of λi is the number of linearly independent eigenvectors that can be associated with λi. For any eigenvalue, the geometric multiplicity is always at least unity.

Also, the geometric multiplicity never exceeds the algebraic multiplicity.

Jordan Canonical Form. In the general case, when there is not a full set of eigenvectors, a matrix cannot be transformed to diagonal form by a change of basis. It is, however, always possible to find a basis in which the matrix is nearly diagonal, as defined below. Since the derivation of the general result is quite complex, and because the Jordan form is only of modest importance for the development in other chapters, we state the result without proof.
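A small sketch of these ideas using the sympy library (the matrix is a standard illustrative example of a defective matrix, not one from the text; eigenvects() reports each eigenvalue with its algebraic multiplicity and a basis of eigenvectors, and jordan_form() returns the transformation matrix together with the Jordan form):

```python
from sympy import Matrix

# Eigenvalue 2 has algebraic multiplicity 2 but only one independent eigenvector,
# so this matrix cannot be diagonalized.
A = Matrix([[2, 1],
            [0, 2]])

for eigenvalue, alg_mult, eigvecs in A.eigenvects():
    print(eigenvalue, alg_mult, len(eigvecs))   # 2, 2, 1  (geometric multiplicity 1)

P, J = A.jordan_form()     # A = P * J * P**-1
print(J)                   # Matrix([[2, 1], [0, 2]]): a single 2 x 2 Jordan block
```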

Problems

Prove that matrix multiplication is associative, and construct an example showing that it is not commutative.

Differentiation Formulas. Find a formula for d/dt [A(t)B(t)] in terms of the derivatives of the individual matrices.

Show that for any n, the n × n identity matrix I has determinant equal to unity.

Using Laplace's expansion, prove the linear combination properties of determinants. Find the inverses of the matrices of Problem 4.

Prove Theorem 4; that is, show that the ai's are unique.

Suppose that in the standard basis a vector x is given; find the representation of x with respect to the basis defined by P.

Prove, by induction on the dimension n, that det[A − λI] is a polynomial of degree n. Hint: consider the definition of the characteristic polynomial.

The trace of a square n × n matrix A is the sum of its diagonal elements. Hint: consider the coefficients of the characteristic polynomial.

Show that for an upper or lower triangular matrix the eigenvalues are equal to the diagonal elements.

Show that for a symmetric matrix (a) all eigenvalues are real; (b) if ei and ej are eigenvectors associated with λi and λj, where λi ≠ λj, then ei and ej are orthogonal.

Prove that similar matrices have the same characteristic polynomial.

The members of the basis associated with the Jordan Canonical Form are often referred to as occurring in chains, this terminology arising from the following interpretation.

If the geometric multiplicity of an eigenvalue λ is m, then m linearly independent eigenvectors are part of the basis. The original m eigenvectors generate m separate chains, which may have different lengths.
