## A brief introduction to uncertainty analysis

In practice one can never measure something exactly. The goal of uncertainty analysis is to determine an estimate $\overline{x}$ from a finite number of measurements and to give an uncertainty $\Delta x$.

Have a look at John R. Taylor's “An Introduction to Error Analysis” for a very approachable introduction to uncertainties and their propagation.

#### Systematic uncertainties

- E.g. a ruler/tape measure does not have the stated length, or its divisions are wrong.
- Voltage of a power supply, or value of a resistor, … is too large or too small.

A systematic error is the misrepresentation of a measured quantity in one direction.

#### Statistical (unsystematic) uncertainties

- When repeating a measurement results in slight variations, this is usually a random uncertainty. It can be due to fluctuations in the experimental setup or environment, friction in mechanical measurement devices (e.g. the needle of an analog voltmeter), …

In every measurement you are interested in the true value of a physical quantity. Systematic uncertainties will (systematically) shift it in one direction. Random uncertainties make a somewhat smaller or larger measured value equally likely. We are interested in both, and in practice statistical and systematic uncertainties are quoted separately, $x\pm \Delta x_\text{syst.}\pm \Delta x_\text{stat.}$.

Even though historically not much distinction has been made, we have to distinguish between uncertainties and errors: errors are definite deviations or hard bounds on a measurement, while uncertainties follow a statistical distribution, for example they can be random.

### Uncertainty propagation

The best estimate for a derived quantity is obtained by inserting the averages into the functional relation. For example, a density $\rho$ depends on volume $V$ and mass $m$ via $f(V,m)=m/V$, so the best estimate is $\overline\rho=f(\overline V,\overline m)$, where $\overline x$ denotes an average.

The standard deviation of a derived quantity is calculated as the square root of the sum of squares of the weighted standard deviations of the individually measured quantities. For the density this would be: $$\Delta \rho = \sqrt{ \left( \frac{\partial f}{\partial V} \Delta V \right)^2 + \left(\frac{\partial f}{\partial m} \Delta m \right)^2 }\,. $$

Note that this formula generalizes straightforwardly to arbitrary functions $f(x_1,x_2,x_3,\ldots)$ depending on many quantities $x_i$. Also note that this formula is only valid when **assuming independent and random uncertainties**! ^{1)}

An error propagation that always works (assuming a first-order Taylor expansion is applicable) and yields an upper bound on the error is given by: $$ \Delta \rho = \left| \frac{\partial f}{\partial V} \right| \Delta V + \left|\frac{\partial f}{\partial m}\right| \Delta m\,.$$ Use this formula if you are not sure whether your uncertainties are independent and random.
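Both propagation formulas can be sketched in a few lines of Python for the density example; the numerical values for mass and volume below are illustrative, not from an actual measurement:

```python
import math

def density_uncertainty(m, dm, V, dV):
    """Propagate uncertainties of mass m and volume V into rho = f(V, m) = m / V.

    Returns (rho, gaussian, upper_bound):
    - gaussian: quadrature sum, valid for independent random uncertainties
    - upper_bound: sum of absolute first-order terms, always an upper bound
    """
    rho = m / V
    # Partial derivatives of f(V, m) = m / V:
    df_dm = 1.0 / V       # d(rho)/dm
    df_dV = -m / V**2     # d(rho)/dV
    gaussian = math.sqrt((df_dV * dV) ** 2 + (df_dm * dm) ** 2)
    upper_bound = abs(df_dV) * dV + abs(df_dm) * dm
    return rho, gaussian, upper_bound

# Illustrative values: m = (100.0 ± 0.5) g, V = (40.0 ± 0.4) cm^3
rho, d_gauss, d_max = density_uncertainty(100.0, 0.5, 40.0, 0.4)
```

By the triangle inequality the quadrature sum can never exceed the linear sum, so `d_gauss <= d_max` always holds.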

### Estimates of individually measured quantities

To combine multiple measurements with random uncertainties, e.g. measurement values $l_i$ for $i=1,\ldots,N$, into an improved estimate, you can take the arithmetic average: $$ \overline l = \frac{1}{N} \sum_{i=1}^N{l_i}\,. $$

The standard deviation $\Delta l$ can be obtained from the individual measurements as^{2)}
$$ \Delta l = \sqrt{\frac{1}{N} \sum_{i=1}^N ( l_i - \overline l)^2}\,. $$
The standard deviation measures the spread of the individual measurements around the mean.

More advanced and beyond the scope of this lab: in many practical measurements one has to distinguish between a data sample and the full data population. In some cases only a sample of the data can be used. Then the standard error of the mean, $\Delta l / \sqrt{N}$, is the standard deviation of the theoretical distribution of the sample mean; it indicates how far the sample mean is likely to deviate from the mean of the larger data population.
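A minimal Python sketch of the arithmetic mean, the standard deviation in the $1/N$ form above, and the standard error of the mean; the measurement values are made up for illustration:

```python
import math

def mean_and_std(values):
    """Arithmetic mean, standard deviation (1/N form),
    and standard error of the mean for repeated measurements."""
    N = len(values)
    mean = sum(values) / N
    # Standard deviation with the 1/N normalization used above
    std = math.sqrt(sum((x - mean) ** 2 for x in values) / N)
    # Standard error of the mean: std / sqrt(N)
    sem = std / math.sqrt(N)
    return mean, std, sem

# Illustrative repeated length measurements in cm
lengths = [10.1, 9.9, 10.2, 10.0, 9.8]
l_mean, l_std, l_sem = mean_and_std(lengths)
```

Note that the standard error of the mean shrinks with $1/\sqrt N$, so repeating a measurement more often improves the estimate of the mean even though the spread of the individual values stays the same.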

^{1)}

^{2)}