
A brief introduction to uncertainty analysis

In practice one can never measure a quantity exactly. The goal of uncertainty analysis is to determine a best estimate $\overline{x}$ from a finite number of measurements and to assign an uncertainty $\Delta x$ to it.

Have a look at John R. Taylor's “An Introduction to Error Analysis” for a very approachable introduction to uncertainties and their propagation.

Systematic uncertainties

A systematic uncertainty (or error) is a deviation of the measured value from the true value in one particular direction, for example a scale that is not zeroed correctly and therefore always reads too high.

Statistical (unsystematic) uncertainties

In every measurement you are interested in the true value of a physical quantity. Systematic uncertainties will (systematically) shift the measured value in one direction. Random uncertainties make it equally likely to measure a slightly smaller or a slightly larger value. We have to account for both, and in practice statistical and systematic uncertainties are quoted separately, $x\pm \Delta x_\text{syst.}\pm \Delta x_\text{stat.}$.

Even though historically the two terms have often been used interchangeably, we have to distinguish between uncertainties and errors: errors are definite deviations or hard bounds on a measurement, while uncertainties follow a statistical distribution, for example a random one.

Uncertainty propagation

The best estimate for a derived quantity is obtained by inserting the averages of the measured quantities into the functional relation. For example, a density depends on volume $V$ and mass $m$ through $f(V,m)=m/V$, so the best estimate is $\overline\rho=f(\overline V,\overline m)$, where $\overline V$ and $\overline m$ are the averages of the measured values.

The standard deviation of a derived quantity is calculated as the square root of the sum of squares (the quadrature sum) of the weighted standard deviations of the individually measured quantities. For the density this reads: $$\Delta \rho = \sqrt{ \left( \frac{\partial f}{\partial V} \Delta V \right)^2 + \left(\frac{\partial f}{\partial m} \Delta m \right)^2 }\,. $$
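
As a concrete illustration, here is a minimal Python sketch of this quadrature propagation for the density example. The measured values, their uncertainties, and the units are made up purely for illustration.

  import math

  # Hypothetical best estimates and uncertainties (illustrative values only)
  m, dm = 25.3e-3, 0.1e-3   # mass in kg
  V, dV = 10.2e-6, 0.2e-6   # volume in m^3

  # Best estimate: insert the averages into f(V, m) = m / V
  rho = m / V

  # Partial derivatives of f(V, m) = m / V
  df_dm = 1.0 / V
  df_dV = -m / V**2

  # Quadrature sum, valid for independent and random uncertainties
  drho = math.sqrt((df_dV * dV)**2 + (df_dm * dm)**2)
  print(f"rho = {rho:.0f} +/- {drho:.0f} kg/m^3")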

Note that this formula generalizes straightforwardly to arbitrary functions $f(x_1,x_2,x_3,\ldots)$ depending on many quantities $x_i$. Also note that this formula is only valid when assuming independent and random uncertainties! 1)
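
To illustrate the generalization, here is a possible sketch of a generic helper. The function name propagate and the use of central finite differences for the partial derivatives are choices made for this sketch, not prescribed by the text.

  import math

  def propagate(f, values, uncertainties, eps=1e-6):
      """Quadrature propagation for f(x1, x2, ...), assuming independent and
      random uncertainties; partial derivatives via central finite differences."""
      variance = 0.0
      for i, (x, dx) in enumerate(zip(values, uncertainties)):
          step = eps * abs(x) if x != 0 else eps
          up, down = list(values), list(values)
          up[i], down[i] = x + step, x - step
          dfdx = (f(*up) - f(*down)) / (2.0 * step)
          variance += (dfdx * dx) ** 2
      return math.sqrt(variance)

  # Reproduces the density example: f(V, m) = m / V
  drho = propagate(lambda V, m: m / V, [10.2e-6, 25.3e-3], [0.2e-6, 0.1e-3])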

An error propagation that always works (assuming a first-order Taylor expansion is applicable) and gives an upper error bound is: $$ \Delta \rho = \left| \frac{\partial f}{\partial V} \right| \Delta V + \left|\frac{\partial f}{\partial m}\right| \Delta m\,.$$ You can use this formula if you are not sure whether your uncertainties are independent and random.
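
For comparison, a short sketch of this upper-bound (linear-sum) propagation for the same hypothetical density values; by construction it is never smaller than the quadrature result.

  import math

  # Same hypothetical values as above
  m, dm = 25.3e-3, 0.1e-3   # mass in kg
  V, dV = 10.2e-6, 0.2e-6   # volume in m^3

  # Linear sum of absolute contributions: an upper bound on the error
  drho_bound = abs(-m / V**2) * dV + abs(1.0 / V) * dm

  # Quadrature result for comparison; the linear sum is never smaller
  drho_quad = math.hypot((m / V**2) * dV, (1.0 / V) * dm)
  assert drho_bound >= drho_quad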

Estimates of individually measured quantities

Multiple measurements with random uncertainties, e.g. measurement values $l_j$ for $j=1,\ldots,N$, can be combined into an improved estimate by taking the arithmetic average: $$ \overline l = \frac{1}{N} \sum_{j=1}^N{l_j}\,. $$

The standard deviation $\Delta l$ can be obtained from the individual measurements as 2) $$ \Delta l = \sqrt{\frac{1}{N} \sum_{j=1}^N ( l_j - \overline l)^2}\,. $$ The standard deviation indicates how strongly the individual measurements scatter around the mean.
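
As a minimal numerical sketch (the five length values are made up for illustration), the mean and the standard deviation with the $1/N$ convention used above can be computed as:

  import math

  l = [20.1, 19.8, 20.3, 20.0, 19.9]   # hypothetical repeated length measurements in cm
  N = len(l)

  l_bar = sum(l) / N                                    # arithmetic mean
  dl = math.sqrt(sum((x - l_bar)**2 for x in l) / N)    # standard deviation (1/N convention)
  print(f"l = {l_bar:.2f} +/- {dl:.2f} cm")

Note that numpy.std uses the same $1/N$ convention by default (ddof=0); the sample estimator with $1/(N-1)$ is obtained with ddof=1.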

More advanced and beyond the scope of this lab: in many practical measurements one has to distinguish between a data sample and the full data population, since often only a sample of the data is available. The standard error of the mean, $\Delta l / \sqrt{N}$, is the standard deviation of the theoretical distribution of the sample mean. It indicates how much the sample mean is likely to deviate from the mean of the full population.
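
A self-contained sketch of the standard error of the mean, reusing the same hypothetical measurements as above:

  import math

  l = [20.1, 19.8, 20.3, 20.0, 19.9]   # same hypothetical measurements in cm
  N = len(l)
  l_bar = sum(l) / N
  dl = math.sqrt(sum((x - l_bar)**2 for x in l) / N)

  sem = dl / math.sqrt(N)   # standard error of the mean: shrinks as N grows, unlike dl itself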

1)
Read about the Gaussian or Normal distribution. Also note that this is based on the first-order derivatives of $f$, and is therefore a good estimate of the standard deviation of $\rho$ or $f$ as long as the input standard deviations ($\Delta V$ and $\Delta m$) are small enough.
2)
These quantities are estimates of the parameters of a Gaussian (normal) distribution.