H_{2} filtering, also known as Kalman filtering, is an

estimation method which minimizes “average” estimation error.

More precisely, the Kalman filter minimizes the variance of the

estimation error. But there are a couple of serious limitations

to the Kalman filter.

- The Kalman filter assumes that the noise properties are known. What if we don't know anything about the system noise?
- The Kalman filter minimizes the “average” estimation error. What if we would prefer to minimize the worst-case estimation error?

These limitations gave rise to H∞ filtering, also known as

minimax filtering. Minimax filtering minimizes the “worst-case”

estimation error. More precisely, the minimax filter minimizes

the maximum singular value of the transfer function from the

noise to the estimation error. While the Kalman filter requires

knowledge of the noise statistics of the filtered process,

the minimax filter requires no such knowledge. The Kalman

filter dates back to the late 1950s while the minimax filter

has its roots in the late 1980s.

Consider the problem of estimating the variables of some

system. In dynamic systems (that is, systems which vary with

time) the system variables are often denoted by the term “state

variables”. Assume that the system variables, represented by

the vector x, are governed by the equation x_{k+1} =

Ax_{k} + w_{k} where w_{k} is random

process noise, and the subscripts on the vectors represent the

time step. Now suppose we can measure some combination of the

states. Then our measurement can be represented by the equation

z_{k} = Hx_{k} + v_{k} where

v_{k} is random measurement noise.
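As a concrete sketch, the two model equations can be simulated with NumPy. The A and H below (a toy position-velocity system) are illustrative assumptions, not values from the article:

```python
import numpy as np

# Sketch of the state and measurement models:
#   x_{k+1} = A x_k + w_k,   z_k = H x_k + v_k
# A and H are made-up example matrices (hypothetical, for illustration).
rng = np.random.default_rng(0)

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])   # state transition: position + 0.1*velocity
H = np.array([[1.0, 0.0]])   # we measure only the first state (position)

def simulate(n_steps, x0, w_std=0.05, v_std=0.2):
    """Generate true states x_k and noisy measurements z_k."""
    x = np.array(x0, dtype=float)
    xs, zs = [], []
    for _ in range(n_steps):
        z = H @ x + rng.normal(0.0, v_std, size=1)   # z_k = H x_k + v_k
        xs.append(x.copy())
        zs.append(z)
        x = A @ x + rng.normal(0.0, w_std, size=2)   # x_{k+1} = A x_k + w_k
    return np.array(xs), np.array(zs)

xs, zs = simulate(50, x0=[0.0, 1.0])
print(xs.shape, zs.shape)  # (50, 2) (50, 1)
```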

Now suppose we want to find an estimator for the state x

based on the measurements z and our knowledge of the system

equation. The estimator structure is assumed to be in the

following predictor-corrector form:

x̂_{k+1} = Ax̂_{k} + K_{k}(z_{k+1} − HAx̂_{k})   (1)

where K_{k} is some gain which we need to determine.

If we want to minimize the 2-norm (the variance) of the

estimation error, then we will choose K_{k} based on

the Kalman filter. However, if we want to minimize the ∞-norm

(the “worst-case” value) of the estimation error, then we will

choose K_{k} based on the minimax filter.
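The predictor-corrector recursion of equation (1) takes only a few lines of NumPy. The A, H, and constant gain K here are placeholder values purely to show the structure; in practice K would come from the Kalman or minimax design:

```python
import numpy as np

# Predictor-corrector form of equation (1):
#   xhat_{k+1} = A xhat_k + K_k (z_{k+1} - H A xhat_k)
# A, H, and K are illustrative placeholders, not a designed filter.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
K = np.array([[0.5],
              [0.1]])        # hypothetical constant gain, n x m

def filter_step(xhat, z_next):
    """One predictor-corrector update of the state estimate."""
    x_pred = A @ xhat                  # predict: A xhat_k
    innovation = z_next - H @ x_pred   # z_{k+1} - H A xhat_k
    return x_pred + K @ innovation     # correct with gain K

xhat = np.zeros(2)
for z in [np.array([0.9]), np.array([1.1]), np.array([1.0])]:
    xhat = filter_step(xhat, z)
print(xhat.shape)  # (2,)
```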

Several minimax filtering formulations have been proposed. The one we will consider here is the following: find a filter gain K_{k} such that the maximum singular value of the transfer function from the noise to the estimation error is less than g. This is a way of bounding the worst-case estimation error: the smaller a feasible g we can find, the smaller the worst-case error. The problem will have a solution for some values of g, but not for values of g which are too small. If we choose a g for which the stated problem has a solution, then the minimax filtering problem can be solved by a constant gain K, which is found by solving the following simultaneous equations:

K = (I + P/g)^{-1}PH^{T}   (2)

P^{-1} = M^{-1} − I/g + H^{T}H   (3)

M = APA^{T} + I   (4)

In the above equations, the superscript -1 indicates matrix

inversion, the superscript T indicates matrix transposition, and

I is the identity matrix. The simultaneous solution of these

three equations is a problem in itself, but once we have a

solution, the matrix K gives the minimax filtering solution. If

g is too small, then the equations

will not have a solution.
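These coupled equations can be attacked numerically by simple fixed-point iteration over (2)-(4). A minimal sketch in NumPy follows, where A, H, and the bound g are illustrative assumptions; g must be chosen large enough that P^{-1} in equation (3) stays invertible (i.e., the problem is feasible):

```python
import numpy as np

# Fixed-point iteration for the constant minimax gain, a sketch of
# solving equations (2)-(4) simultaneously:
#   K = (I + P/g)^{-1} P H^T        (2)
#   P^{-1} = M^{-1} - I/g + H^T H   (3)
#   M = A P A^T + I                 (4)
# A, H, and g are illustrative choices, not from the article.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
H = np.array([[1.0, 0.0]])
g = 100.0                  # must be large enough for feasibility
n = A.shape[0]
I = np.eye(n)

P = I.copy()
for _ in range(1000):
    M = A @ P @ A.T + I                                        # equation (4)
    P_new = np.linalg.inv(np.linalg.inv(M) - I / g + H.T @ H)  # equation (3)
    if np.max(np.abs(P_new - P)) < 1e-12:                      # converged
        P = P_new
        break
    P = P_new

K = np.linalg.inv(I + P / g) @ P @ H.T                         # equation (2)
print(K.shape)  # (2, 1)
```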

One method to solve the three simultaneous equations is to

use an iterative approach. A more analytical approach is as

follows:

- Form the following 2n × 2n matrix Z (equation (5)).
- Find the eigenvectors of Z. Denote those eigenvectors corresponding to eigenvalues outside the unit circle as c_{i} (i = 1, . . . , n).
- Form the following matrix (equation (6)), where X_{1} and X_{2} are n × n matrices.
- Compute M = X_{2}X_{1}^{-1}.

This method only works if X_{1} has an inverse. If

X_{1} does not have an inverse, that means that the

chosen value of g is too small.
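The eigenvector-partition mechanics above can be sketched in NumPy. The true Z comes from equation (5) and depends on the problem data; the Z below is a synthetic stand-in, built so that exactly n of its eigenvalues lie outside the unit circle, purely to show the steps:

```python
import numpy as np

# Mechanics of the eigenvector method: pick the n eigenvectors of the
# 2n x 2n matrix Z whose eigenvalues lie outside the unit circle, stack
# them as columns [X1; X2], and compute M = X2 X1^{-1}.
# This Z is a synthetic example (eigenvalues 2, 3, 0.5, 1/3), NOT the
# matrix of equation (5).
rng = np.random.default_rng(1)
n = 2
V = rng.standard_normal((2 * n, 2 * n))
Z = V @ np.diag([2.0, 3.0, 0.5, 1.0 / 3.0]) @ np.linalg.inv(V)

vals, vecs = np.linalg.eig(Z)
outside = np.abs(vals) > 1.0        # eigenvalues outside the unit circle
C = np.real(vecs[:, outside])       # eigenvector columns c_i, i = 1..n
X1, X2 = C[:n, :], C[n:, :]         # n x n blocks of the stacked matrix

if np.linalg.matrix_rank(X1) < n:
    raise ValueError("X1 is singular: the chosen g is too small")
M = X2 @ np.linalg.inv(X1)
print(M.shape)  # (2, 2)
```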

At this point we see that both Kalman and minimax filtering

have their pros and cons. The Kalman filter assumes that the

noise statistics are known. The minimax filter does not make

this assumption, but instead assumes that absolutely nothing is

known about the noise. Suppose that although the noise

statistics are not perfectly known, we have a rough idea about

these statistics. Further suppose that we want to minimize some

combination of the 2-norm and the ∞-norm of the estimation

error. What could be done? Perhaps some combination of Kalman

and minimax filtering could be used.


© 1998–2001 Innovatia Software. All Rights Reserved.

