# 5.4. Monte Carlo simulations for uncertainty propagation
In the previous parts we learned how a transformation of random variables leads to random outputs with a different distribution. We also saw how to propagate means and (co)variances via linearization, including the exact linear case. Alternatively, Monte Carlo (MC) simulation is an effective tool when the model is highly non-linear, when the inputs are non-Gaussian, or when we need the full output distribution rather than just its principal moments.
The Monte Carlo method traces its roots to early probability puzzles (e.g., Buffon’s needle), but it consolidated during the 1940s at Los Alamos for neutron-transport calculations in the Manhattan Project. Stanislaw Ulam conceived the idea after card-game thought experiments; practical algorithms and early random-number routines were later developed by John von Neumann and Nicholas Metropolis. The name “Monte Carlo”, suggested by Metropolis, nods to the Monaco casino — randomness as a computational tool. Foundationally, the MC approach relies on the law of large numbers to estimate expectations by sampling; variance-reduction ideas such as importance sampling followed in the 1950s–60s. Since then, faster computers and better random-number generators have made Monte Carlo a standard tool in science and engineering.
## Simulating mean and variance of a transformed variable
**Goal.** Given a model \(X = q(Y)\) and a univariate input distribution \(p_Y\), estimate the mean and variance of \(X\) by random sampling.
From the general analytical expressions provided for the Expectation law and the Variance law, we observe that we can replace the equalities by numerical approximations such as

$$\mathbb{E}\big(q(Y)\big) \approx \hat{\mu} = \frac{1}{N}\sum_{i=1}^{N} q(Y_i),$$

$$\mathbb{D}\big(q(Y)\big) \approx \hat{\sigma}^2 = \frac{1}{N-1}\sum_{i=1}^{N} \big(q(Y_i) - \hat{\mu}\big)^2,$$

which are the sample mean and the sample variance, respectively. The standard error of the sample mean decreases as \(1/\sqrt{N}\). Observe that the second expression has \(N-1\) in the denominator: since we do not know the true mean \(\mathbb{E}(q(Y))\), we first estimate it by the sample mean of the data \(\{Y_1, \ldots, Y_N\}\). This uses one degree of freedom, and an unbiased estimator of the variance is therefore obtained by dividing by \(N-1\) instead of \(N\).
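As a minimal sketch of these estimators, consider a hypothetical non-linear model \(q(Y) = Y^2\) with \(Y \sim \mathcal{N}(2, 0.5^2)\); the model, distribution parameters, and sample size below are illustrative choices, not from the text. For this case the exact mean is known, \(\mathbb{E}(Y^2) = \mu^2 + \sigma^2 = 4.25\), which lets us check the simulation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical non-linear model q(Y) = Y^2, with Y ~ N(2, 0.5^2)
def q(y):
    return y**2

N = 100_000
Y = rng.normal(loc=2.0, scale=0.5, size=N)  # draw N samples from p_Y
X = q(Y)                                    # transformed samples q(Y_i)

mu_hat = X.mean()               # sample mean: (1/N) * sum of q(Y_i)
var_hat = X.var(ddof=1)         # unbiased sample variance (N-1 denominator)
se_mean = np.sqrt(var_hat / N)  # standard error of the mean, ~ 1/sqrt(N)

print(f"mean ≈ {mu_hat:.4f} (exact 4.25), variance ≈ {var_hat:.4f}, "
      f"standard error ≈ {se_mean:.4f}")
```

Doubling \(N\) by a factor of 100 would shrink the standard error by a factor of 10, illustrating the \(1/\sqrt{N}\) convergence noted above.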
NOTE: The expressions for multivariate functions and distributions follow directly by adopting the same definitions in \(\mathbb{R}^n\), with the sample covariance matrix taking the place of the sample variance.
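A brief sketch of the multivariate case, under assumed inputs: a bivariate Gaussian \(Y \sim \mathcal{N}(\mu, \Sigma)\) and an illustrative vector-valued model \(q(Y) = (Y_1 Y_2,\; Y_1 + Y_2^2)\), neither of which comes from the text. The sample covariance matrix is obtained with `np.cov`:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed bivariate input distribution (illustrative values)
mu = np.array([1.0, 2.0])
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])

N = 200_000
Y = rng.multivariate_normal(mu, Sigma, size=N)  # shape (N, 2)

# Illustrative non-linear map q: R^2 -> R^2
X = np.column_stack([Y[:, 0] * Y[:, 1],
                     Y[:, 0] + Y[:, 1]**2])

mean_hat = X.mean(axis=0)           # componentwise sample mean
cov_hat = np.cov(X, rowvar=False)   # 2x2 sample covariance (N-1 denominator)

print("mean:", mean_hat)
print("covariance:\n", cov_hat)
```

The same \(1/\sqrt{N}\) behaviour applies componentwise to the entries of the estimated mean vector.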
Attribution
This chapter was written by Sandra Verhagen and Lotfi Massarweh. Find out more here.