tutorialpoint.org

# Linear Estimation

## 1. Kalman Filter Tutorial (Cont'd...)

### 1.4 Properties of Kalman filter (cont'd...)

#### Linear estimator

The best estimate $\hat{X}_{k|k}$ at step $k$ is formed as a linear combination of the prior estimate $\hat{X}_{k|k-1}$ and the measurement $Y_{k}$:
$$\label{4linear} \hat{X}_{k|k}=L_{k}\hat{X}_{k|k-1}+K_{k}Y_{k},$$
where $L_{k}$ and $K_{k}$ are matrices to be determined. Substituting this posterior estimate into the definition of the estimation error $e_{k|k}=\hat{X}_{k|k}-X_{k}$, we obtain
\begin{equation*} e_{k|k}=L_{k}\hat{X}_{k|k-1}+K_{k}Y_{k}-X_{k}, \end{equation*}
or, using $\hat{X}_{k|k-1}=e_{k|k-1}+X_{k}$ and the measurement model $Y_{k}=H_{k}X_{k}+v_{k}$,
\begin{equation*} e_{k|k}=L_{k}(e_{k|k-1}+X_{k})+K_{k}(H_{k}X_{k}+v_{k})-X_{k}. \end{equation*}
Rearranging, we obtain
$$\label{4pp} e_{k|k}=L_{k}e_{k|k-1}+(L_{k}+K_{k}H_{k}-I)X_{k}+K_{k}v_{k},$$
where $I$ is the $n\times n$ identity matrix. Applying the expectation operator,
\begin{equation*} E[e_{k|k}]=L_{k}E[e_{k|k-1}]+(L_{k}+K_{k}H_{k}-I)E[X_{k}]+K_{k}E[v_{k}]. \end{equation*}
Since the prior estimate is unbiased ($E[e_{k|k-1}]=0$) and the measurement noise is zero-mean ($E[v_{k}]=0$), an unbiased posterior estimate ($E[e_{k|k}]=0$) requires
\begin{equation*} (L_{k}+K_{k}H_{k}-I)E[X_{k}]=0. \end{equation*}
Because $E[X_{k}]$ is not zero in general, we must have
$$\label{4oo} L_{k}=I-K_{k}H_{k}.$$
Substituting this value of $L_{k}$, we obtain
$$\label{4p} e_{k|k}=(I-K_{k}H_{k})e_{k|k-1}+K_{k}v_{k}.$$
This is the final equation of the estimation error. The posterior estimate then becomes
$$\label{4q1} \begin{split} \hat{X}_{k|k}&=(I-K_{k}H_{k})\hat{X}_{k|k-1}+K_{k}Y_{k}\\ &=\hat{X}_{k|k-1}+K_{k}(Y_{k}-H_{k}\hat{X}_{k|k-1})\\ &=\hat{X}_{k|k-1}+K_{k}Z_{k}, \end{split}$$
where $Z_{k}=Y_{k}-H_{k}\hat{X}_{k|k-1}$. The term $Z_{k}$ is known as the innovation; it carries the new information brought by each measurement.
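The innovation form of the update, $\hat{X}_{k|k}=\hat{X}_{k|k-1}+K_{k}Z_{k}$, can be sketched numerically. The snippet below is a minimal illustration, not the full filter: the two-state example, the measurement matrix $H$, the gain $K$, and all numbers are assumptions chosen for demonstration (in a complete Kalman filter $K_{k}$ would come from the covariance recursion, which this section has not yet derived).

```python
import numpy as np

def measurement_update(x_prior, K, H, y):
    """Posterior estimate via the innovation:
    x_post = x_prior + K (y - H x_prior)."""
    z = y - H @ x_prior      # innovation Z_k = Y_k - H_k X_prior
    return x_prior + K @ z   # \hat{X}_{k|k} = \hat{X}_{k|k-1} + K_k Z_k

# Illustrative 2-state example: state = [position, velocity],
# with only the position measured.
x_prior = np.array([1.0, 0.5])   # prior estimate \hat{X}_{k|k-1}
H = np.array([[1.0, 0.0]])       # measurement matrix H_k
K = np.array([[0.6], [0.1]])     # an assumed (not derived) gain K_k
y = np.array([1.4])              # measurement Y_k

x_post = measurement_update(x_prior, K, H, y)
print(x_post)  # -> [1.24 0.54]
```

Note that the update moves the estimate toward the measurement by a fraction of the innovation $z = 1.4 - 1.0 = 0.4$; equivalently, it applies $L_{k}=I-K_{k}H_{k}$ to the prior, matching the equivalence derived above.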