2.5. Linear propagation laws of mean and covariance#

Linear function of two random variables#

Consider a linear function of two random variables

\[ X = q(Y)=a_1 Y_1+ a_2 Y_2 + c \]

We can now show, using our Taylor approximations, that \(\mathbb{E}(q(Y))= a_1 \mathbb{E}(Y_1)+a_2 \mathbb{E}(Y_2)+c\). The first-order partial derivatives follow as

\[ \frac{\partial q}{\partial Y_1}= a_1, \; \frac{\partial q}{\partial Y_2}= a_2 \]

All higher-order derivatives are zero, and consequently all higher-order terms in the Taylor series vanish. The expectation of \(q(Y)\) therefore follows as

\[ \mathbb{E}(q(Y))= q(\mu_1,\mu_2)=a_1 \mu_1 + a_2\mu_2 + c \]

which is exact (i.e., not an approximation anymore).
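This exactness is easy to verify numerically. The sketch below uses arbitrary illustrative coefficients and distributions (not values from the text): the sample mean of \(X = a_1 Y_1 + a_2 Y_2 + c\) matches \(a_1\mu_1 + a_2\mu_2 + c\) up to sampling noise, regardless of the distributions of \(Y_1\) and \(Y_2\).

```python
import numpy as np

rng = np.random.default_rng(42)

# Arbitrary illustrative values (assumptions, not from the text)
a1, a2, c = 2.0, -1.0, 3.0
mu1, mu2 = 5.0, 1.0

# Any distributions with the given means will do
Y1 = rng.normal(mu1, 2.0, size=1_000_000)
Y2 = rng.exponential(mu2, size=1_000_000)

X = a1 * Y1 + a2 * Y2 + c

print(np.mean(X))                # sample mean, close to the exact value
print(a1 * mu1 + a2 * mu2 + c)   # exact: 2*5 - 1*1 + 3 = 12
```

Note that no assumption of normality is needed: the propagation law for the mean holds for any distributions with the given means.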

Exercise

In a similar fashion derive the variance of \(X\), which is also an exact result.

Linear functions of \(n\) random variables#

Note that the linear function of two random variables can also be written as \(q(Y) = \begin{bmatrix} a_1 & a_2\end{bmatrix}\begin{bmatrix}Y_1 \\ Y_2\end{bmatrix}+c\). We will now generalize to the case where we have \(m\) linear functions of \(n\) variables, which can be written as a linear system of equations:

\[\begin{split} X= \begin{bmatrix} X_1\\ X_2 \\ \vdots \\ X_m \end{bmatrix}= \begin{bmatrix} a_{11}&a_{12}&\dots&a_{1n}\\a_{21}&a_{22}&\dots&a_{2n} \\ \vdots&\vdots&\ddots&\vdots \\ a_{m1}&a_{m2}&\dots&a_{mn} \end{bmatrix} \begin{bmatrix} Y_1\\ Y_2 \\ \vdots \\ Y_n \end{bmatrix} +\begin{bmatrix} c_1\\ c_2 \\ \vdots \\ c_m \end{bmatrix}=\mathrm{A}Y+\mathrm{c} \end{split}\]

with known \(\mathbb{E}(Y)\) and covariance matrix \(\Sigma_Y\), and \(\mathrm{c}\) a vector of deterministic constants.

The linear propagation laws of the mean and covariance matrix are given by

\[ \mathbb{E}(X) = \mathrm{A}\mathbb{E}(Y)+\mathrm{c} \]
\[ \Sigma_{X} =\mathrm{A}\Sigma_Y \mathrm{A}^T \]

These are exact results, since for linear functions the higher-order terms of the Taylor approximation become zero and thus the approximation error is zero.
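The two propagation laws translate directly into matrix operations. The sketch below applies them with illustrative values for \(\mathrm{A}\), \(\mathrm{c}\), \(\mathbb{E}(Y)\), and \(\Sigma_Y\) (assumed for the example, not taken from the text):

```python
import numpy as np

# Illustrative values (assumptions, not from the text)
A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, -1.0]])        # m x n coefficient matrix (m=3, n=2)
c = np.array([1.0, 0.0, 2.0])      # deterministic offset vector
mu_Y = np.array([4.0, -2.0])       # E(Y)
Sigma_Y = np.array([[2.0, 0.5],
                    [0.5, 1.0]])   # covariance matrix of Y

mu_X = A @ mu_Y + c                # E(X) = A E(Y) + c
Sigma_X = A @ Sigma_Y @ A.T        # Sigma_X = A Sigma_Y A^T

print(mu_X)     # [ 1. -2. 16.]
print(Sigma_X)
```

Note that \(\mathrm{c}\) drops out of the covariance: adding a deterministic vector shifts the mean but does not change the spread, which is why \(\Sigma_X\) depends only on \(\mathrm{A}\) and \(\Sigma_Y\).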

Exercise

Consider the linear system of equations

\[\begin{split} X=\begin{bmatrix}1&1 \\ 1&-2\end{bmatrix}\begin{bmatrix}Y_1 \\ Y_2\end{bmatrix} \end{split}\]

with

\[\begin{split} \mu_Y = \begin{bmatrix}0 \\ 0\end{bmatrix},\; \Sigma_Y= \begin{bmatrix}3&0 \\ 0&3\end{bmatrix} \end{split}\]

Apply the linear propagation laws to find \(\mathbb{E}(X)=\mu_X\) and \(\Sigma_X\).
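After working the exercise by hand, the few lines of NumPy below can serve as a self-check (they simply evaluate the two propagation laws for the given \(\mathrm{A}\), \(\mu_Y\), and \(\Sigma_Y\); there is no offset vector in this exercise):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, -2.0]])
mu_Y = np.zeros(2)
Sigma_Y = 3.0 * np.eye(2)

mu_X = A @ mu_Y                # E(X) = A E(Y), no offset here
Sigma_X = A @ Sigma_Y @ A.T    # Sigma_X = A Sigma_Y A^T

print(mu_X)
print(Sigma_X)
```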

Exercise buckets of concrete

You want to measure out 1.5\(l\) of concrete, but only have buckets with lines indicating 5\(l\), 2\(l\), and 0.5\(l\); these buckets are named \(a\), \(b\), and \(c\), respectively. To achieve this you take the 5\(l\) bucket, take out 2\(l\) using bucket \(b\), and then three times 0.5\(l\) using bucket \(c\): \(Y_1 = V_a - V_b - 3V_c\) (where \(V_i\) is the volume of bucket \(i\)). However, you can’t read the lines on the buckets perfectly, and the variance of your pouring skills is 1/100th of the volume of the bucket. Assume the volumes to be independent.

You do something similar to achieve 4.5\(l\) (\(Y_2 = V_a - V_c\)) and 1\(l\) (\(Y_3 = V_a-2V_b\)). Compute the covariance matrix of \([Y_1 \ Y_2 \ Y_3]^T\).

Hint: first find the \(\mathrm{A}\) matrix of the linear system \(Y=\mathrm{A}\cdot V\).
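Once you have derived \(\mathrm{A}\) yourself, you can check your answer numerically. The sketch below assumes the variance of each \(V_i\) is 1/100th of the corresponding bucket volume (so \(0.05\), \(0.02\), and \(0.005\,l^2\)), collected in a diagonal \(\Sigma_V\) since the volumes are independent:

```python
import numpy as np

# Coefficients of Y1, Y2, Y3 in terms of (Va, Vb, Vc)
A = np.array([[1.0, -1.0, -3.0],   # Y1 = Va - Vb - 3Vc
              [1.0,  0.0, -1.0],   # Y2 = Va - Vc
              [1.0, -2.0,  0.0]])  # Y3 = Va - 2Vb

# Variance of each volume: 1/100th of the bucket volume (independent volumes)
Sigma_V = np.diag([5.0 / 100, 2.0 / 100, 0.5 / 100])

Sigma_Y = A @ Sigma_V @ A.T        # covariance propagation law

print(Sigma_Y)
```

Despite the volumes \(V_a\), \(V_b\), \(V_c\) being independent, the resulting \(Y_1\), \(Y_2\), \(Y_3\) are correlated, because they share the same underlying pours.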

Attribution

This chapter was written by Sandra Verhagen.