Linear–quadratic regulator

Linear optimal control technique

The theory of optimal control is concerned with operating a dynamic system at minimum cost. The case where the system dynamics are described by a set of linear differential equations and the cost is described by a quadratic function is called the LQ problem. One of the main results in the theory is that the solution is provided by the linear–quadratic regulator (LQR), a feedback controller whose equations are given below.

The LQR can be run repeatedly with a receding horizon; this is a form of model predictive control.

The LQR is also an important part of the solution to the LQG (linear–quadratic–Gaussian) problem. Like the LQR problem itself, the LQG problem is one of the most fundamental problems in control theory.

General description

The settings of a (regulating) controller governing either a machine or process (like an airplane or chemical reactor) are found by using a mathematical algorithm that minimizes a cost function with weighting factors supplied by a human (engineer). The cost function is often defined as a sum of the deviations of key measurements, like altitude or process temperature, from their desired values. The algorithm thus finds those controller settings that minimize undesired deviations. The magnitude of the control action itself may also be included in the cost function.

The LQR algorithm reduces the amount of work done by the control systems engineer to optimize the controller. However, the engineer still needs to specify the cost function parameters, and compare the results with the specified design goals. Often this means that controller construction will be an iterative process in which the engineer judges the "optimal" controllers produced through simulation and then adjusts the parameters to produce a controller more consistent with design goals.

The LQR algorithm is essentially an automated way of finding an appropriate state-feedback controller. As such, it is not uncommon for control engineers to prefer alternative methods, like full state feedback, also known as pole placement, in which there is a clearer relationship between controller parameters and controller behavior. Difficulty in finding the right weighting factors limits the application of LQR-based controller synthesis.

Versions

Finite-horizon, continuous-time

For a continuous-time linear system, defined on $t \in [t_0, t_1]$, described by:

$$\dot{x} = Ax + Bu$$

where $x \in \mathbb{R}^n$ (that is, $x$ is an $n$-dimensional real-valued vector) is the state of the system and $u \in \mathbb{R}^m$ is the control input. Given a quadratic cost function for the system, defined as:

$$J = x^T(t_1) F(t_1) x(t_1) + \int_{t_0}^{t_1} \left( x^T Q x + u^T R u + 2 x^T N u \right) dt$$

the feedback control law that minimizes the value of the cost is:

$$u = -Kx$$

where K {\displaystyle K} is given by:

$$K = R^{-1} \left( B^T P(t) + N^T \right)$$

and $P$ is found by solving the continuous-time Riccati differential equation:

$$A^T P(t) + P(t) A - \left( P(t) B + N \right) R^{-1} \left( B^T P(t) + N^T \right) + Q = -\dot{P}(t)$$

with the boundary condition:

$$P(t_1) = F(t_1).$$
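As a concrete illustration, this Riccati differential equation can be integrated backwards in time from the boundary condition with a standard ODE solver. The following is a minimal sketch using SciPy; the double-integrator matrices $A$, $B$, the weights $Q$, $R$, $N$, $F$, and the horizon are illustrative assumptions, not values from the text.

```python
# Sketch: solve the finite-horizon Riccati differential equation backwards
# in time with SciPy, then recover the time-varying gain K(t).
# All matrices and the horizon below are assumed example values.
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # assumed double-integrator plant
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
N = np.zeros((2, 1))
F = np.eye(2)                            # terminal weight, P(t1) = F(t1)
t0, t1 = 0.0, 5.0

def riccati_rhs(t, p_flat):
    """Right-hand side of P'(t) = -(A'P + PA - (PB+N)R^{-1}(B'P+N') + Q)."""
    P = p_flat.reshape(2, 2)
    Pdot = -(A.T @ P + P @ A
             - (P @ B + N) @ np.linalg.solve(R, B.T @ P + N.T)
             + Q)
    return Pdot.ravel()

# Integrate backwards from the boundary condition P(t1) = F.
sol = solve_ivp(riccati_rhs, (t1, t0), F.ravel(), dense_output=True)

def gain(t):
    """Time-varying feedback gain K(t) = R^{-1}(B'P(t) + N')."""
    P = sol.sol(t).reshape(2, 2)
    return np.linalg.solve(R, B.T @ P + N.T)

print(gain(0.0))  # gain at the initial time, for u = -K(t) x
```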

The first-order conditions for the minimum of $J$ are:

1) State equation

$$\dot{x} = Ax + Bu$$

2) Co-state equation

$$-\dot{\lambda} = Qx + Nu + A^T \lambda$$

3) Stationarity equation

$$0 = Ru + N^T x + B^T \lambda$$

4) Boundary conditions

$$x(t_0) = x_0$$

and

$$\lambda(t_1) = F(t_1) x(t_1)$$
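Solving the stationarity equation for $u$ and substituting the standard ansatz $\lambda(t) = P(t)\,x(t)$ (a step not spelled out above, included here for clarity) recovers the feedback law:

$$u = -R^{-1}\left(N^T x + B^T \lambda\right) = -R^{-1}\left(B^T P(t) + N^T\right) x = -Kx.$$

Substituting this $u$ and $\lambda = P(t)x$ back into the co-state equation, and requiring the result to hold for all $x$, yields the Riccati differential equation above.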

Infinite-horizon, continuous-time

For a continuous-time linear system described by:

$$\dot{x} = Ax + Bu$$

with a cost function defined as:

$$J = \int_0^\infty \left( x^T Q x + u^T R u + 2 x^T N u \right) dt$$

the feedback control law that minimizes the value of the cost is:

$$u = -Kx$$

where K {\displaystyle K} is given by:

$$K = R^{-1} \left( B^T P + N^T \right)$$

and $P$ is found by solving the continuous-time algebraic Riccati equation:

$$A^T P + P A - (P B + N) R^{-1} (B^T P + N^T) + Q = 0$$

This can also be written as:

$$\mathcal{A}^T P + P \mathcal{A} - P B R^{-1} B^T P + \mathcal{Q} = 0$$

with

$$\mathcal{A} = A - B R^{-1} N^T, \qquad \mathcal{Q} = Q - N R^{-1} N^T$$
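Numerically, the algebraic Riccati equation is usually handed to a library solver. The sketch below uses SciPy's `solve_continuous_are`, which accepts the cross-weighting term $N$ through its `s` argument; the plant and weight matrices are illustrative assumptions.

```python
# Sketch: infinite-horizon continuous-time LQR via the algebraic Riccati
# equation, using SciPy.  A, B, Q, R, N below are assumed example values.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # assumed double-integrator plant
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
N = np.zeros((2, 1))                     # cross-weighting term

# Solve A'P + PA - (PB+N)R^{-1}(B'P+N') + Q = 0 for P.
P = solve_continuous_are(A, B, Q, R, s=N)

# Feedback gain K = R^{-1}(B'P + N'), giving the control law u = -K x.
K = np.linalg.solve(R, B.T @ P + N.T)
print(K)
```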

Finite-horizon, discrete-time

For a discrete-time linear system described by: [1]

$$x_{k+1} = A x_k + B u_k$$

with a performance index defined as:

$$J = x_{H_p}^T Q x_{H_p} + \sum_{k=0}^{H_p - 1} \left( x_k^T Q x_k + u_k^T R u_k + 2 x_k^T N u_k \right)$$

where $H_p$ is the time horizon;

the optimal control sequence minimizing the performance index is given by:

$$u_k = -F_k x_k$$

where:

$$F_k = \left( R + B^T P_{k+1} B \right)^{-1} \left( B^T P_{k+1} A + N^T \right)$$

and $P_k$ is found iteratively backwards in time by the dynamic Riccati equation:

$$P_{k-1} = A^T P_k A - \left( A^T P_k B + N \right) \left( R + B^T P_k B \right)^{-1} \left( B^T P_k A + N^T \right) + Q$$

from the terminal condition $P_{H_p} = Q$.[2] Note that $u_{H_p}$ is not defined, since $x$ is driven to its final state $x_{H_p}$ by $A x_{H_p - 1} + B u_{H_p - 1}$.
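The backward recursion is straightforward to implement directly. A minimal sketch, with assumed matrices and horizon:

```python
# Sketch: finite-horizon discrete-time LQR by the backward Riccati
# recursion.  A, B, Q, R, N and the horizon Hp are assumed example values.
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # assumed discrete double integrator
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
N = np.zeros((2, 1))
Hp = 50                                   # time horizon

# Backward pass: P[Hp] = Q, then iterate the dynamic Riccati equation.
P = [None] * (Hp + 1)
F = [None] * Hp
P[Hp] = Q
for k in range(Hp, 0, -1):
    S = R + B.T @ P[k] @ B
    F[k - 1] = np.linalg.solve(S, B.T @ P[k] @ A + N.T)
    P[k - 1] = (A.T @ P[k] @ A
                - (A.T @ P[k] @ B + N) @ F[k - 1]
                + Q)

# Forward simulation with the time-varying feedback u_k = -F_k x_k.
x = np.array([[1.0], [0.0]])
for k in range(Hp):
    u = -F[k] @ x
    x = A @ x + B @ u
print(x.ravel())  # state after Hp steps
```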

Infinite-horizon, discrete-time

For a discrete-time linear system described by:

$$x_{k+1} = A x_k + B u_k$$

with a performance index defined as:

$$J = \sum_{k=0}^{\infty} \left( x_k^T Q x_k + u_k^T R u_k + 2 x_k^T N u_k \right)$$

the optimal control sequence minimizing the performance index is given by:

$$u_k = -F x_k$$

where:

$$F = \left( R + B^T P B \right)^{-1} \left( B^T P A + N^T \right)$$

and $P$ is the unique positive-definite solution to the discrete-time algebraic Riccati equation (DARE):

$$P = A^T P A - \left( A^T P B + N \right) \left( R + B^T P B \right)^{-1} \left( B^T P A + N^T \right) + Q.$$

This can also be written as:

$$P = \mathcal{A}^T P \mathcal{A} - \mathcal{A}^T P B \left( R + B^T P B \right)^{-1} B^T P \mathcal{A} + \mathcal{Q}$$

with:

$$\mathcal{A} = A - B R^{-1} N^T, \qquad \mathcal{Q} = Q - N R^{-1} N^T.$$

Note that one way to solve the algebraic Riccati equation is by iterating the dynamic Riccati equation of the finite-horizon case until it converges.
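A sketch of both approaches follows: iterating the dynamic Riccati equation to a fixed point, as just noted, and comparing against SciPy's direct `solve_discrete_are` solver (cross term passed via `s`). The matrices are illustrative assumptions.

```python
# Sketch: the infinite-horizon DARE solved two ways -- fixed-point
# iteration of the finite-horizon recursion, and SciPy's direct solver.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # assumed example system
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
N = np.zeros((2, 1))

# Iterate the dynamic Riccati equation until it converges.
P = Q.copy()
for _ in range(10_000):
    S = R + B.T @ P @ B
    P_next = (A.T @ P @ A
              - (A.T @ P @ B + N) @ np.linalg.solve(S, B.T @ P @ A + N.T)
              + Q)
    if np.max(np.abs(P_next - P)) < 1e-10:
        P = P_next
        break
    P = P_next

# Direct solution for comparison.
P_direct = solve_discrete_are(A, B, Q, R, s=N)
print(np.allclose(P, P_direct, atol=1e-6))

# Constant feedback gain for u_k = -F x_k.
F = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A + N.T)
```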

Constraints

In practice, not all values of $x_k, u_k$ may be allowed. One common constraint is the linear one:

$$C \mathbf{x} + D \mathbf{u} \leq \mathbf{e}.$$

The finite-horizon version of this is a convex optimization problem, and so the problem is often solved repeatedly with a receding horizon. This is a form of model predictive control.[3][4]
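As an illustration of this receding-horizon use, the sketch below poses one constrained finite-horizon step as a convex program with CVXPY. The system matrices, horizon, and input bound are assumptions made for the example.

```python
# Sketch: one receding-horizon step of constrained LQR as a convex QP,
# using CVXPY.  All matrices, bounds, and the horizon T are assumed values.
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
T = 20                       # prediction horizon
x0 = np.array([5.0, 0.0])    # current measured state
u_max = 1.0                  # example input constraint |u_k| <= u_max

x = cp.Variable((2, T + 1))
u = cp.Variable((1, T))

cost = 0
constraints = [x[:, 0] == x0]
for k in range(T):
    cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(u[:, k]) <= u_max]

prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()

# In receding-horizon (MPC) fashion, apply only the first input and
# re-solve at the next step from the newly measured state.
print(u.value[:, 0])
```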

Quadratic-quadratic regulator

If the state equation is quadratic then the problem is known as the quadratic-quadratic regulator (QQR). The Al'Brekht algorithm can be applied to reduce this problem to one that can be solved efficiently using tensor based linear solvers.[5]

Polynomial-quadratic regulator

If the state equation is polynomial then the problem is known as the polynomial-quadratic regulator (PQR). Again, the Al'Brekht algorithm can be applied to reduce this problem to a large linear one which can be solved with a generalization of the Bartels-Stewart algorithm; this is feasible provided that the degree of the polynomial is not too high.[6]

References

  1. ^ Chow, Gregory C. (1986). Analysis and Control of Dynamic Economic Systems. Krieger Publ. Co. ISBN 0-89874-969-7.
  2. ^ Shaiju, A. J.; Petersen, Ian R. (2008). "Formulas for discrete time LQR, LQG, LEQG and minimax LQG optimal control problems". IFAC Proceedings Volumes. Vol. 41. Elsevier. pp. 8773–8778.
  3. ^ "Ch. 8 - Linear Quadratic Regulators". underactuated.mit.edu. Retrieved 20 August 2022.
  4. ^ https://minds.wisconsin.edu/bitstream/handle/1793/10888/file_1.pdf;jsessionid=52A001EAADF4C22B901290B594BFDA8E?sequence=1. Retrieved 20 August 2022.
  5. ^ Borggaard, Jeff; Zietsman, Lizette (July 2020). "The Quadratic-Quadratic Regulator Problem: Approximating feedback controls for quadratic-in-state nonlinear systems". 2020 American Control Conference (ACC). pp. 818–823. doi:10.23919/ACC45564.2020.9147286. Retrieved 20 August 2022.
  6. ^ Borggaard, Jeff; Zietsman, Lizette (1 January 2021). "On Approximating Polynomial-Quadratic Regulator Problems". IFAC-PapersOnLine. pp. 329–334. doi:10.1016/j.ifacol.2021.06.090. Retrieved 20 August 2022.
  • Kwakernaak, Huibert; Sivan, Raphael (1972). Linear Optimal Control Systems. First Edition. Wiley-Interscience. ISBN 0-471-51110-2.
  • Sontag, Eduardo (1998). Mathematical Control Theory: Deterministic Finite Dimensional Systems. Second Edition. Springer. ISBN 0-387-98489-5.

External links

  • MATLAB function for Linear Quadratic Regulator design
  • Mathematica function for Linear Quadratic Regulator design

Source: https://en.wikipedia.org/wiki/Linear%E2%80%93quadratic_regulator
