ControlOptimo MathForEco Hoy Livernois Pag999 1014




    Optimal Control Theory

In this chapter we take up the problem of optimization over time. Such problems are common in economics. For example, in the theory of investment, firms are assumed to choose the time path of investment expenditures to maximize the (discounted) sum of profits over time. In the theory of savings, individuals are assumed to choose the time path of consumption and saving that maximizes the (discounted) sum of lifetime utility. These are examples of dynamic optimization problems. In this chapter, we study a new technique, optimal control theory, which is used to solve dynamic optimization problems.

It is fundamental in economics to assume optimizing behavior by economic agents such as firms or consumers. Techniques for solving static optimization problems have already been covered in chapters 6, 12, and 13. Why do we need to learn a new mathematical theory (optimal control theory) for handling dynamic optimization problems? To demonstrate the need, we consider the following economic example.

Static versus Dynamic Optimization: An Investment Example

Suppose that a firm's output depends only on the amount of capital it employs. Let

Q = q(K)

where Q is the firm's output level, q is the production function, and K is the amount of capital employed. Assume that there is a well-functioning rental market for the kind of capital the firm uses and that the firm is able to rent as much capital as it wants at the price R per unit, which it takes as given. To make this example more concrete, imagine that the firm is a fishing company that rents fully equipped units of fishing capital on a daily basis. (A unit of fishing capital would include boat, nets, fuel, crew, etc.) Q is the number of fish caught per day and K is the number of units of fishing capital employed per day. If p is the price of fish, then current profit depends on the amount of fish caught, which in turn depends on the amount of K used, and is given by the function π(K):

π(K) = pq(K) − RK


If the firm's objective is to choose K to maximize current profit, the optimal amount of K is given implicitly by the usual first-order condition:

π′(K) = pq′(K) − R = 0

But why should the firm care only about current profit? Why would it not take a longer-term view and also care about future profits? A more realistic assumption is that the firm's objective is to maximize the discounted sum of profits over an interval of time running from the present time (t = 0) to a given time horizon, T. This is given by the functional J[K(t)]

max J[K(t)] = ∫₀ᵀ e^(−ρt) π[K(t)] dt

where ρ is the firm's discount rate and e^(−ρt) is the continuous-time discounting factor. J[K(t)] is called a functional to distinguish it from a function. A function maps a single value for a variable like K (or a finite number of values if K is a vector of different types of capital) into a single number, like the amount of current profit. A functional maps a function like K(t), or a finite number of functions if there is more than one type of capital, into a single number, like the discounted sum of profits.

It appears we now have a dynamic optimization problem. The difference between this and the static optimization problem is that we now have to choose a path of K values, or in other words we have to choose a function of time, K(t), to maximize J, rather than having to choose a single value for K to maximize π(K). This is the main reason that we require a new mathematical theory. Calculus helps us find the value of K that maximizes a function π(K) because we can differentiate π(K) with respect to K to find the maximum of π(K). However, calculus is not, in general, suited to helping us find the function of time K(t) that maximizes the functional J[K(t)] because we cannot differentiate a functional J[K(t)] with respect to a function K(t).

It turns out, however, that we do not have a truly dynamic optimization problem in this example. As a result, calculus works well in solving this particular problem. The reason is that the amount of K rented in any period t affects only profits in that period and not in any other period. Thus it is fairly obvious that the maximum of the discounted sum of profits occurs by maximizing profits at each point in time. As a result, this dynamic problem is really just a sequence of static optimization problems. The solution therefore is just a sequence of solutions to a sequence of static optimization problems. Indeed, this is the justification for spending as much time as we do in economics on static optimization problems.

An optimization problem becomes truly dynamic only when the economic choices made in the current period affect not only current payoffs (profits) but also payoffs (profits) at a later date. The intuition is straightforward: if current output affects only current profit, then in choosing current output, we need only be concerned with its effect on current profit. Hence we choose current output to maximize current profit. But if current output affects current profit and profit at a later date, then in choosing current output, we need to be concerned about its effect on current and future profit. This is a dynamic problem.

To turn our fishing firm example into a truly dynamic optimization problem, let us drop the assumption that a rental market for fishing capital exists. Instead, we suppose that the firm must purchase its own capital. Once purchased, the capital lasts for a long time. Let I(t) be the amount of capital purchased (investment) at time t, and assume that capital depreciates at the rate δ. The amount (stock) of capital owned by the firm at time t is K(t) and changes according to the differential equation

K̇ = I(t) − δK(t)

which says that, at each point in time, the firm's capital stock increases by the amount of investment and decreases by the amount of depreciation.

Let c[I(t)] be a function that gives the cost of purchasing (investing) the amount I(t) of capital at time t; then profit at time t is

π[K(t), I(t)] = pq[K(t)] − c[I(t)]

The problem facing the fishing firm at each point in time is to decide how much capital to purchase. This is a truly dynamic problem because current investment affects current profit, since it is a current expense, and also affects future profits, since it affects the amount of capital available for future production. If the firm's objective is to maximize the discounted sum of profits from zero to T, it maximizes

max J[I(t)] = ∫₀ᵀ e^(−ρt) π[K(t), I(t)] dt

subject to K̇ = I(t) − δK(t)
K(0) = K₀

Once a path for I(t) is chosen, the path of K(t) is completely determined because the initial condition for the capital stock is given at K₀. Thus, the functional J depends on the particular path chosen for I(t).
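The capital accumulation equation K̇ = I(t) − δK(t) is easy to explore numerically. The following sketch (the parameter values δ = 0.1 and K₀ = 50 are illustrative assumptions, not taken from the text) simulates the stock under a given investment path with a forward Euler step, confirming that replacement investment I(t) = δK₀ holds the stock at its initial level while zero investment lets it decay toward K₀e^(−δT).

```python
import math

def simulate_capital(I, K0, delta, T, n=10_000):
    """Forward-Euler simulation of the capital accumulation
    equation K' = I(t) - delta*K(t) on [0, T]."""
    dt = T / n
    K = K0
    for step in range(n):
        t = step * dt
        K += (I(t) - delta * K) * dt
    return K

# Illustrative parameters (not from the text): delta = 0.1, K0 = 50.
delta, K0 = 0.1, 50.0

# Replacement investment I(t) = delta*K0 holds the stock at its initial level.
K_end = simulate_capital(lambda t: delta * K0, K0, delta, T=20.0)
print(round(K_end, 4))  # stays at 50.0

# Zero investment lets the stock decay toward K0*exp(-delta*T).
K_decay = simulate_capital(lambda t: 0.0, K0, delta, T=20.0)
print(round(K_decay, 2), round(K0 * math.exp(-delta * 20.0), 2))
```

The Euler approximation of the decaying path tracks the exact exponential solution closely for a small step size.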

There is an infinite number of paths, I(t), from which to choose. A few examples of feasible paths are as follows:

(i) I(t) = δK₀. This is a constant amount of investment, just enough to cover depreciation so that the capital stock remains intact at its initial level.


Figure 25.1  Optimal path of investment over time

(ii) I(t) = 0. This is the path of no investment.

(iii) I(t) = Ae^(at). This is a path of investment that starts with I(0) = A and then increases over time at the rate a, if a > 0, or decreases at the rate a, if a < 0.

These are just a few arbitrary paths that we mention for illustration. In fact, any function of t is a feasible path. The problem is to choose the path that maximizes J[I(t)]. Since we know absolutely nothing about what this function of time might look like, choosing the right path would seem to be a formidable task.

It turns out that in the special case in which T = ∞ and the function π[K(t), I(t)] takes the quadratic form

π[K(t), I(t)] = K − aK² − I²

the solution to the above problem is

I*(t) = (r + δ)(K₀ − K̄)e^(rt) + δK̄

where r is the negative root of the characteristic equation of the differential equation system that, as we shall see, results from solving this dynamic optimization problem, and K̄ is the steady-state level of the capital stock that the firm desires, and is given by

K̄ = 1 / (2[δ(δ + ρ) + a])

Figure 25.1 displays the optimal path of investment for the case in which K₀ < K̄. Along the optimal path, investment declines. In the limit as t → ∞, investment converges to a constant amount equal to δK̄ (since r < 0), so that in the long run the firm's investment is just replacement of depreciation.
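We can check this closed form numerically. The sketch below (the parameter values a = 0.5, δ = 0.1, ρ = 0.05, K₀ = 2 are illustrative assumptions) computes r as the negative characteristic root of the costate-capital system that arises later in the chapter, then verifies that the implied capital path satisfies K̇ = I* − δK and that I*(t) converges to replacement investment δK̄.

```python
import math

# Illustrative parameters (assumed, not from the text).
a, delta, rho, K0 = 0.5, 0.1, 0.05, 2.0

# Steady-state capital stock: K_bar = 1 / (2*(delta*(delta+rho) + a)).
K_bar = 1.0 / (2.0 * (delta * (delta + rho) + a))

# r is the negative root of the characteristic equation of the
# discounted lambda-K system: trace = rho, det = -(delta*(delta+rho) + a).
tr, det = rho, -(delta * (delta + rho) + a)
r = (tr - math.sqrt(tr * tr - 4.0 * det)) / 2.0
assert r < 0

def I_star(t):
    return (r + delta) * (K0 - K_bar) * math.exp(r * t) + delta * K_bar

def K_star(t):
    return (K0 - K_bar) * math.exp(r * t) + K_bar

# Check K' = I - delta*K along the path (central finite difference).
h = 1e-6
for t in (0.0, 1.0, 5.0):
    K_dot = (K_star(t + h) - K_star(t - h)) / (2.0 * h)
    assert abs(K_dot - (I_star(t) - delta * K_star(t))) < 1e-5

# In the long run investment is pure replacement: I*(t) -> delta*K_bar.
print(round(I_star(50.0), 6), round(delta * K_bar, 6))  # → 0.097087 0.097087
```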

How did we find this path? We found it using optimal control theory, which is the topic we turn to now.

25.1  The Maximum Principle

Optimal control theory relies heavily on the maximum principle, which amounts to a set of necessary conditions that hold only on optimal paths. Once you know how to apply these necessary conditions, then a knowledge of basic calculus and differential equations is all that is required to solve dynamic optimization problems like the one outlined above. In this section we provide a statement of the necessary conditions of the maximum principle and then provide a justification. In addition, we provide examples to illustrate the use of the maximum principle.



We begin with a definition of the general form of the dynamic optimization problem that we shall study in this section.

Definition 25.1  The general form of the dynamic optimization problem with a finite time horizon and a free endpoint in continuous-time models is

max J = ∫₀ᵀ f[x(t), y(t), t] dt    (25.1)

subject to ẋ = g[x(t), y(t), t]
x(0) = x₀ > 0 (given)

The term free endpoint means that x(T) is unrestricted, and hence is free to be chosen optimally. The significance of this is explored in more detail below.

In this general formulation, J is the value of the functional which is to be maximized, x(t) is referred to as the state variable, and y(t) is referred to as the control variable. As the name suggests, the control variable is the one directly chosen or controlled. Since the control variable and state variable are linked by a differential equation that is given, the state variable is indirectly influenced by the choice of the control variable.

In the fishing firm example posed above, the state variable is the amount of capital held by the firm; the control variable is investment. The example was a free-endpoint problem because there was no constraint placed on the final amount of the capital stock. As well, the integrand function, f[x(t), y(t), t], was equal to π[K(t), I(t)]e^(−ρt), and the differential equation for the state variable, g[x(t), y(t), t], was simply equal to I(t) − δK(t).

We will examine a number of important variations of this general specification in later sections. In section 25.3 we examine the fixed-endpoint version of this problem. This means that x(T), the final value of the state variable, is specified as an equality constraint to be satisfied. In section 25.4 we consider the case in which T is infinity. Finally, in section 25.6 we consider the case in which the time horizon, T, is also a free variable to be chosen optimally.

Suppose that a unique solution to the dynamic optimization problem in definition 25.1 exists. The solution is a path for the control variable, y(t). Once this is specified, the path for the state variable is automatically determined through the differential equation for the state variable, combined with its given initial condition. We assume that the control variable is a continuous function of time (we relax this assumption in section 25.5), as is the state variable. The necessary conditions that constitute the maximum principle are stated in terms of a Hamiltonian function, which is akin to the Lagrangean function used to solve constrained optimization problems. We begin by defining this function:



Definition 25.2  The Hamiltonian function, H, for the dynamic optimization problem in definition 25.1 is

H[x(t), y(t), λ(t), t] = f[x(t), y(t), t] + λ(t)g[x(t), y(t), t]

where λ(t), referred to as the costate variable, is akin to the Lagrange multiplier in constrained optimization problems.

Forming the Hamiltonian function is straightforward: take the integrand (the function under the integral sign), and add to it the equation for ẋ multiplied by an, as yet, unspecified function of time, λ(t).

We can now state the necessary conditions.

Theorem 25.1  The optimal solution path for the control variable, y(t), for the dynamic optimization problem in definition 25.1 must satisfy the following necessary conditions:

(i) The control variable is chosen to maximize H at each point in time: y(t) maximizes H[x(t), y(t), λ(t), t]. That is,

∂H/∂y = 0

(ii) The paths of x(t) and λ(t), state and costate variables, are given by the solution to the following system of differential equations:

λ̇ = −∂H/∂x
ẋ = g[x(t), y(t), t]

(iii) The two boundary conditions used to solve the system of differential equations are given by

x(0) = x₀,  λ(T) = 0

In writing the first necessary condition, we have assumed that the Hamiltonian function is strictly concave in y. This assumption implies that the maximum of H with respect to y will occur as an interior solution, so it can be found by setting the derivative of H with respect to y equal to zero at each point in time. In section 25.5 we relax this assumption.


The second set of necessary conditions is a system of differential equations. The first one is obtained by taking the derivative of the Hamiltonian function with respect to the state variable, x, and setting λ̇ equal to the negative of this derivative. The second is just the differential equation for the state variable that is given as part of the optimization problem.

Necessary conditions (i) and (ii) comprise the maximum principle. The necessary conditions in (iii) are typically referred to as boundary conditions. In free-endpoint problems, one boundary condition is given, x(0) = x₀, and the other is provided by a transversality condition, λ(T) = 0. A justification for this transversality condition is provided later in the chapter; for now, we will just say that this is a necessary condition for determining the optimal value of x(T), when x(T) is free to be chosen optimally.

The maximum principle provides the first-order conditions. What are the second-order conditions in optimal control theory? In other words, when are the necessary conditions also sufficient to ensure the solution path maximizes the objective functional in equation (25.1)? Although it is beyond our scope to prove it, we state the answer as

Theorem 25.2  The necessary conditions stated in theorem 25.1 are also sufficient for the maximization of J in equation (25.1) if the following conditions are satisfied:

(i) f(x, y, t) is differentiable and jointly concave in x and y.
(ii) One of the following is true:
  g(x, y, t) is linear in (x, y)
  g(x, y, t) is concave in (x, y) and λ(t) ≥ 0 for t ∈ [0, T]
  g(x, y, t) is convex in (x, y) and λ(t) ≤ 0 for t ∈ [0, T]

The sufficiency conditions are satisfied for all of the problems examined in this chapter. As a result, we need look no further than the necessary conditions to solve the dynamic maximization problems.

Example 25.1  Solve the following problem:

max J = ∫₀¹ (x − y²) dt

subject to ẋ = y
x(0) = 2


Solution

Step 1  Form the Hamiltonian function:

H = x − y² + λy

Step 2  Apply the maximum principle. Since the Hamiltonian is strictly concave in the control variable y and there are no constraints on the choice of y, we can find the maximum of H with respect to y by applying the first-order condition:

∂H/∂y = −2y + λ = 0

This gives

y(t) = λ(t)/2    (25.2)

Step 3  The differential equation for λ(t) is

λ̇ = −∂H/∂x = −1

We now have a system of two differential equations which, after using equation (25.2), is

λ̇ = −1    (25.3)
ẋ = λ/2    (25.4)

Step 4  We obtain the boundary conditions. This is a free-endpoint problem because the value for x(1) is not specified in the problem. Therefore the boundary conditions are

x(0) = 2,  λ(1) = 0

Step 5  Solve or analyze the system of differential equations. In this example we have a system of linear differential equations, so we proceed by obtaining explicit solutions. Because the first differential equation, (25.3), does not depend on x, we can solve it directly and then substitute the solution into the second equation.


Solving equation (25.3) gives λ(t) = C₁ − t, where C₁ is an arbitrary constant of integration, the value of which is determined by using the boundary condition λ(1) = 0. This gives 0 = C₁ − 1, for which the solution is C₁ = 1. Therefore we have λ(t) = 1 − t.

Substituting this solution into equation (25.4) gives

ẋ = (1 − t)/2

to which the solution is

x(t) = t/2 − t²/4 + C₂

where C₂ is an arbitrary constant of integration. Its value is determined from the boundary condition x(0) = 2. This gives C₂ = 2. The solution then becomes

x(t) = t/2 − t²/4 + 2

To complete the solution to this maximization problem, we substitute the solutions to the differential equations back into equation (25.2). Doing this gives

y(t) = (1 − t)/2

as the solution path for the control variable. At t = 0, y(0) = 1/2. It then declines over time and finishes at t = 1 with y(1) = 0.
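A quick numerical check supports this solution. The sketch below (pure Python; the step size and the alternative controls are arbitrary choices for illustration) approximates J = ∫₀¹ (x − y²) dt with an Euler step for ẋ = y and a Riemann sum for the integral, and compares the candidate control y(t) = (1 − t)/2 against a few other feasible controls.

```python
def J(y_path, x0=2.0, n=20_000):
    """Approximate J = integral_0^1 (x - y^2) dt with x' = y, x(0) = x0,
    using an Euler step for the state and a left Riemann sum for J."""
    dt = 1.0 / n
    x, total = x0, 0.0
    for k in range(n):
        t = k * dt
        y = y_path(t)
        total += (x - y * y) * dt
        x += y * dt            # Euler step for the state equation
    return total

y_opt = lambda t: (1.0 - t) / 2.0   # candidate from the maximum principle

# Exact value of J along the candidate path is 2.25 - 1/6.
assert abs(J(y_opt) - (2.25 - 1.0 / 6.0)) < 1e-3

# A few alternative feasible controls all do worse.
for y_alt in (lambda t: 0.0, lambda t: 0.5, lambda t: 1.0 - t):
    assert J(y_alt) < J(y_opt)
```

Each of the alternative controls listed yields J ≈ 2.0, below the candidate path's value of about 2.083.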

An Investment Problem

Suppose that a firm's only factor of production is its capital stock, K, and that its production function is given by the relation

Q = K − aK²

where Q is the quantity of output produced. Assuming that capital depreciates at the rate δ > 0, the change in the capital stock is equal to the firm's investment, I, less depreciation, δK:

K̇ = I − δK


If the price of the firm's output is a constant $1, and the cost of investment is equal to I² dollars, then the firm's profits at a point in time are

π = K − aK² − I²

The optimization problem we now consider is to maximize the integral sum of profits over a given interval of time (0, T). A more realistic objective would be to maximize the present-valued integral sum of profits, but we postpone treatment of this problem to the next section.

max ∫₀ᵀ (K − aK² − I²) dt

subject to K̇ = I − δK
K(0) = K₀ (given)

To solve this, we take the following steps:

Step 1  Form the Hamiltonian:

H = K − aK² − I² + λ(I − δK)

Step 2  Apply the maximum principle: since the Hamiltonian is strictly concave in the control variable I, we look for the I that maximizes the Hamiltonian by using the first-order condition

∂H/∂I = −2I + λ = 0    (25.5)

Since ∂²H/∂I² = −2 is negative, this gives a maximum. The solution is

I(t) = λ(t)/2    (25.6)

Step 3  Form the system of differential equations. λ must obey the differential equation

λ̇ = −∂H/∂K = −(1 − 2aK − λδ)


λ̇ = λδ + 2aK − 1    (25.7)

Using equation (25.6) to substitute for I(t), the system is

K̇ = λ/2 − δK    (25.8)

Step 4  Obtain the boundary conditions. The boundary condition for K(t) is given by the initial condition K(0) = K₀. The boundary condition for λ(t) is λ(T) = 0.

Step 5  Solve or analyze the system of differential equations. If the system is linear, as it is in this example, use the techniques of chapter 24 to obtain an explicit solution. We do this next. If the system is nonlinear, it is probably not possible to obtain an explicit solution. In that case, use the techniques of chapter 24 to undertake a qualitative analysis, preferably with the aid of a phase diagram. In either case, keep in mind that the system of differential equations obtained from employing optimal control theory provides the solution to the optimization problem.

An explicit solution to the system of differential equations (25.7) and (25.8) is obtained using the techniques shown in chapter 24. The homogeneous form of this system, written in matrix form, is

[λ̇]   [δ     2a] [λ]
[K̇] = [1/2   −δ] [K]    (25.9)

The determinant of the coefficient matrix of the homogeneous system is (−δ² − a), which is negative. We therefore know immediately that the steady-state equilibrium is a saddle point.

By theorem 24.2, the solutions to the system of differential equations in (25.7) and (25.8) are

λ(t) = C₁e^(r₁t) + C₂e^(r₂t) + λ̄    (25.10)

K(t) = C₁ (r₁ − δ)/(2a) e^(r₁t) + C₂ (r₂ − δ)/(2a) e^(r₂t) + K̄    (25.11)

where r₁ and r₂ are the eigenvalues or roots of the coefficient matrix in equation (25.9), C₁ and C₂ are arbitrary constants of integration, and λ̄ and K̄ are the steady-state values of the system, and serve as particular solutions in finding the complete solutions.


If A denotes the coefficient matrix in equation (25.9), its characteristic roots (eigenvalues) are given by the equation

r₁, r₂ = tr(A)/2 ± (1/2)√(tr(A)² − 4|A|)

where tr(A) denotes the trace of A (the sum of the diagonal elements). The roots of equation (25.9) then are

r₁, r₂ = ±√(δ² + a)

The steady-state values of λ and K are found by setting λ̇ = 0 and K̇ = 0. Doing this and simplifying yields

λ = (1 − 2aK)/δ
K = λ/(2δ)

Solving these for λ and K gives the steady-state values

λ̄ = δ/(δ² + a)
K̄ = 1/(2(δ² + a))

Because the steady state is a saddle point, it can be reached only along the saddle path and only if the exogenously specified time horizon, T, is large enough to permit it to be reached.
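These formulas are easy to verify numerically. In the sketch below the parameter values a = 0.3 and δ = 0.2 are arbitrary illustrative assumptions; the code checks the roots against the general trace-determinant formula and confirms that the steady state makes both λ̇ and K̇ vanish.

```python
import math

# Illustrative parameter values (assumptions, not from the text).
a, delta = 0.3, 0.2

# Roots of the characteristic equation: tr(A) = 0, |A| = -(delta^2 + a).
r1 = math.sqrt(delta**2 + a)
r2 = -r1

# Verify against the general formula r = tr/2 ± sqrt(tr^2 - 4|A|)/2.
tr, det = 0.0, -(delta**2 + a)
assert abs(r1 - (tr + math.sqrt(tr**2 - 4 * det)) / 2) < 1e-12

# Steady state from lambda' = 0 and K' = 0.
lam_bar = delta / (delta**2 + a)
K_bar = 1.0 / (2.0 * (delta**2 + a))
assert abs(lam_bar * delta + 2 * a * K_bar - 1.0) < 1e-12   # lambda' = 0
assert abs(lam_bar / 2.0 - delta * K_bar) < 1e-12           # K' = 0
print(round(r1, 6), round(lam_bar, 6), round(K_bar, 6))
# → 0.583095 0.588235 1.470588
```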

This leaves only the values of the arbitrary constants of integration to be determined. As usual, they are determined using the boundary conditions K(0) = K₀ and λ(T) = 0. First, requiring the solution for K(t) to satisfy its initial condition gives

K₀ = C₁ (r₁ − δ)/(2a) + C₂ (r₂ − δ)/(2a) + K̄

After simplifying, this gives

C₁ = [2a(K₀ − K̄) − (r₂ − δ)C₂] / (r₁ − δ)

Next, requiring the solution for λ(t) to satisfy its terminal condition gives

0 = C₁e^(r₁T) + C₂e^(r₂T) + λ̄


Figure 25.2  Solution path I₁(t) for investment when K₀ < K̄; solution path I₂(t) for investment when K₀ > K̄

from which we get an equation for C₂ in terms of C₁:

C₂ = −(C₁e^(r₁T) + λ̄)e^(−r₂T)

Substituting this into the expression for C₁ and simplifying gives the solution for C₁:

C₁ = [2a(K₀ − K̄) + λ̄(r₂ − δ)e^(−r₂T)] / [(r₁ − δ) − (r₂ − δ)e^((r₁−r₂)T)]

Substituting this solution into the equation for C₂ and simplifying gives the explicit solution for C₂:

C₂ = [−2a(K₀ − K̄)e^((r₁−r₂)T) − λ̄(r₁ − δ)e^(−r₂T)] / [(r₁ − δ) − (r₂ − δ)e^((r₁−r₂)T)]

This completes the solution.

The optimal path of investment is obtained using equation (25.6). If we denote the solution for λ(t) in equation (25.10) as λ*(t), then the solution for investment, denoted I*(t), is

I*(t) = λ*(t)/2

This solution gives the path of investment that maximizes total profits over the planning horizon. Figure 25.2 shows two possible solution paths. When K₀ < K̄, the solution is a path like I₁(t) that starts high and declines monotonically to 0 at time T. When K₀ > K̄, the solution is a path of disinvestment like I₂(t) that stays negative from zero to T.
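Putting the pieces together, the following sketch computes the constants of integration and verifies the two boundary conditions. The parameter values (a = 0.3, δ = 0.2, K₀ = 1, T = 10) are arbitrary illustrative assumptions; with K₀ < K̄, initial investment exceeds replacement investment δK̄, consistent with a path like I₁(t) in figure 25.2.

```python
import math

# Illustrative parameters (assumed): the text leaves a, delta, K0, T symbolic.
a, delta, K0, T = 0.3, 0.2, 1.0, 10.0

r1 = math.sqrt(delta**2 + a)            # positive root
r2 = -r1                                # negative root
lam_bar = delta / (delta**2 + a)        # steady-state costate
K_bar = 1.0 / (2.0 * (delta**2 + a))    # steady-state capital

# Constants of integration from K(0) = K0 and lambda(T) = 0.
E = math.exp((r1 - r2) * T)
C2 = (-2 * a * (K0 - K_bar) * E - lam_bar * (r1 - delta) * math.exp(-r2 * T)) \
     / ((r1 - delta) - (r2 - delta) * E)
C1 = (2 * a * (K0 - K_bar) - (r2 - delta) * C2) / (r1 - delta)

lam = lambda t: C1 * math.exp(r1 * t) + C2 * math.exp(r2 * t) + lam_bar
K = lambda t: (C1 * (r1 - delta) * math.exp(r1 * t)
               + C2 * (r2 - delta) * math.exp(r2 * t)) / (2 * a) + K_bar
I_star = lambda t: lam(t) / 2.0         # optimal investment, equation (25.6)

# The boundary conditions pin down the path.
assert abs(K(0.0) - K0) < 1e-6          # initial condition K(0) = K0
assert abs(lam(T)) < 1e-6               # transversality condition lambda(T) = 0
assert I_star(0.0) > delta * K_bar      # starts above replacement (K0 < K_bar)
print(round(I_star(0.0), 6))
```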

An Economic Interpretation of λ and the Hamiltonian

We introduced λ(t) as a sequence or path of Lagrange multipliers. It turns out that there is a natural economic interpretation of this costate variable. Intuitively, λ(t) can be interpreted as the marginal (imputed) value or shadow price of the state variable x(t). This interpretation follows informally from the Lagrange multiplier analogy. But it also follows more formally from a result that is proved in the appendix to the chapter. There it is shown that λ(0) is the amount by which J* (the maximum value function) would increase if x(0) (the initial value of the state variable) were to increase by a small amount. Therefore λ(0) is the value of a marginal increase in the state variable at time t = 0 and therefore can be


interpreted as the most we would be willing to pay (the shadow price) to acquire a bit more of it at time t = 0. By extension, λ(t) can be interpreted as the shadow price or imputed value of the state variable at any time t.

In the investment problem just examined, λ(t) gives the marginal (imputed) value or shadow price of the firm's capital stock at time t. Armed with this interpretation, the first-order condition (25.5) makes economic sense: it says that at each moment of time, the firm should carry out the amount of investment that satisfies the following equality:

2I(t) = λ(t)

The left-hand side is the marginal cost of investment; the right-hand side is the marginal (imputed) value of capital and, as such, gives the marginal benefit of investment. Thus the first-order condition of the maximum principle leads to a very simple investment rule: invest up to the point that marginal cost equals marginal benefit.

The Hamiltonian function too can be given an economic interpretation. In general, H measures the instantaneous total economic contribution made by the control variable toward the integral objective function. In the context of the investment problem, H is the sum of total profits earned at a point in time and the accrual of capital that occurs at that point in time valued at its shadow price. Therefore H is the instantaneous total contribution made by the control variable to the integral of profits, J. It makes sense then to choose the control variable so as to maximize H at each point in time. This, of course, is what the maximum principle requires.

EXERCISES

1. Solve

max ∫₀ᵀ (bx − ay²) dt

subject to ẋ = x − y
x(0) = x₀

where a, b are positive constants.


2. Solve

subject to ẋ = y
x(0) = x₀

3. Solve

max ∫₀ᵀ −(ay² + bx² + cx) dt

subject to ẋ = αx + βy
x(0) = x₀

4. Solve

subject to ẋ = x + y
x(0) = x₀

5. Solve

subject to ẋ = x + y
x(0) = x₀

6. In equations (25.7) and (25.8) the differential equation system was written in terms of λ and K. For the same model, transform the differential equation system into a system in I and K. Solve this system of equations for I(t) and K(t).


7. Assume that price is a constant, p, and the cost of investment is bI², where b is a positive constant. Then solve the following:

max ∫₀ᵀ [p(K − aK²) − bI²] dt

subject to K̇ = I − δK

K(0) = K₀

25.2  Optimization Problems Involving Discounting

Discounting is a fundamental feature of dynamic optimization problems in economic dynamics. In the remainder of this chapter, we assume that ρ is the going rate of return in the economy, that there is no uncertainty about this rate of return, and that it is constant over time. Recall from chapter 3 that y₀ = y(t)e^(−ρt) is the discounted value (or present value) of y(t). In all of the subsequent models and examples examined in the chapter, firms and consumers will be assumed to maximize the discounted value (present value) of future streams of revenues or benefits net of costs.
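As a quick reminder of how continuous discounting behaves, the sketch below (the stream y = 100 and rate ρ = 0.05 are arbitrary illustrative numbers) approximates the present value of a constant stream with a midpoint Riemann sum and recovers the standard result ∫₀^∞ y e^(−ρt) dt = y/ρ.

```python
import math

# Present value of a constant stream y(t) = y under discount rate rho:
# integral_0^inf y*e^(-rho*t) dt = y/rho.  Illustrative numbers (assumed).
rho, y = 0.05, 100.0

def present_value(horizon, n=200_000):
    """Midpoint-rule approximation of the discounted integral on [0, horizon]."""
    dt = horizon / n
    return sum(y * math.exp(-rho * (k + 0.5) * dt) * dt for k in range(n))

pv = present_value(horizon=400.0)   # a horizon of 400 is numerically "infinite" here
print(round(pv, 2), y / rho)        # → 2000.0 2000.0
```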

The General Form of Autonomous Optimization Problems

Most dynamic optimization problems in economics involve discounting. As a result, time enters the objective function explicitly through the term e^(−ρt). However, if this is the only way the variable t explicitly enters the dynamic optimization problem, the system of differential equations can be made autonomous. The importance of this fact is that autonomous differential equations (ones in which t is not an explicit variable) are much easier to solve than nonautonomous differential equations.

We specified the general form of the integrand function in definition 25.1 as f(x, y, t). If this reduces to some function of just x and y multiplied by the term e^(−ρt), say F(x, y)e^(−ρt), and if the differential equation given for the state variable does not depend explicitly on t (is autonomous), so that g(x, y, t) specializes to G(x, y), then we may state