Linear Error Combination
The approach to uncertainty analysis that has been followed up to this point in the discussion has been what is called a top-down approach: uncertainty components are estimated from direct repetitions of the measurement result. To contrast this with a propagation of error approach, consider the simple example where we estimate the area of a rectangle from replicate measurements of length and width. The area $$ area = length \cdot width $$ can be computed from each replicate. The standard deviation of the reported area is
estimated directly from the replicates of area.

Advantages of the top-down approach

This approach has the following advantages:

- proper treatment of covariances between measurements of length and width
- proper treatment of unsuspected sources of error that would emerge if measurements covered a range of operating conditions and a sufficiently long time period
- independence from the propagation of error model

Propagation of error approach combines estimates from individual auxiliary measurements

The formal propagation of error approach is to compute:

- the standard deviation from the length measurements
- the standard deviation from the width measurements

and combine the two into a standard deviation for area using the approximation for products of two variables (ignoring a possible covariance between length and width),

$$ s_{area} = \sqrt{width^2 \cdot s_{length}^2 + length^2 \cdot s_{width}^2} $$

Exact formula

Goodman (1960) derived an exact formula for the variance of the product of two random variables. Given two random variables \(x\) and \(y\) (corresponding to width and length in the approximate formula above), the exact formula for the variance is:

$$ V(\bar{x} \bar{y}) = \frac{1}{n} \left[ X^2 V(y) + Y^2 V(x) + 2XY E_{11} + 2X\frac{E_{12}}{n} + 2Y\frac{E_{21}}{n} + \frac{V(x) V(y)}{n} + \frac{Cov\left((\Delta x)^2, (\Delta y)^2\right) - E_{11}^2}{n^2} \right] $$

with

- \(X = E(x)\) and \(Y = E(y)\) (corresponding to width and length, respectively, in the approximate formula)
- \(V(x)\) the variance of \(x\) and \(V(y)\) the variance of \(y\) (corresponding to \(s_{width}^2\) and \(s_{length}^2\), respectively)
- \(E_{ij} = E\left[(\Delta x)^i (\Delta y)^j\right]\), where \(\Delta x = x - X\) and \(\Delta y = y - Y\)
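The two routes above can be compared numerically. The sketch below uses made-up measurement noise (all numbers are illustrative, not from the text): it computes the top-down standard deviation directly from replicate areas, and the propagation-of-error estimate from the per-variable standard deviations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical replicates: true length 10, true width 5, with
# independent Gaussian measurement noise (illustrative values).
n = 10_000
length = rng.normal(10.0, 0.1, size=n)
width = rng.normal(5.0, 0.2, size=n)

# Top-down approach: compute the area for every replicate, then
# take the standard deviation of the areas directly.
area = length * width
s_area_topdown = area.std(ddof=1)

# Propagation of error: combine the per-variable standard deviations
# with the product approximation, ignoring any covariance between
# length and width.
s_len = length.std(ddof=1)
s_wid = width.std(ddof=1)
s_area_prop = np.sqrt(width.mean() ** 2 * s_len ** 2
                      + length.mean() ** 2 * s_wid ** 2)

print(s_area_topdown, s_area_prop)  # the two estimates agree closely
```

Because the simulated length and width are independent and the noise is small relative to the means, the approximate product formula and the direct replicate standard deviation coincide to within sampling noise.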
Linear Error Propagation

This example shows how ALGOPY can be used for linear error propagation. Consider the error model \[y = x + \epsilon\] where \(x\) is a vector and \(\epsilon\) is a random vector that is normally distributed with zero mean and covariance matrix
\(\Sigma^2\). Here \(y\) is the observed quantity and \(x\) is a real vector representing the "true" value. One defines
some estimator \(\hat x\) for \(x\), e.g. the arithmetic mean \(\hat x = \frac{1}{N_m} \sum_{i=1}^{N_m} y_i\). We assume that the confidence region of the estimate \(\hat x\) is known and is described by its covariance matrix \[\Sigma^2 = \mathbb E[(\hat x - \mathbb E[\hat x])(\hat x - \mathbb E[\hat x])^T] \;.\] The question is: what can we say about the confidence region of the function \(f(y)\) when the confidence region of \(y\) is described by the covariance matrix \(\Sigma^2\)? \[\begin{split}f: \mathbb R^N \rightarrow \mathbb R^M \\ y \mapsto z = f(y)\end{split}\] For affine (linear) functions \[z = f(y) = Ay + b\] the approach is described in the Wikipedia article http://en.wikipedia.org/wiki/Propagation_of_uncertainty . Nonlinear functions are simply linearized about the estimate \(\hat y\) of \(\mathbb E[y]\). In the vicinity of \(\hat y\), the linear model often approximates the nonlinear function quite well. To linearize the function, the Jacobian \(J(\hat y)\) of \(f\) has to be computed, i.e.: \[z \approx f(\hat y) + J(\hat y) (y - \hat y)\] The covariance matrix of \(z\) is then approximately \[C = \mathbb E[(z - \mathbb E[z])(z - \mathbb E[z])^T] \approx J \Sigma^2 J^T \;.\] That means that if we know \(J(\hat y)\), we can approximately compute the confidence region of \(z\), provided \(f\) is sufficiently linear near \(\hat y\). To compute the Jacobian one can use the forward or the reverse mode of AD.
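For the affine case \(z = Ay + b\), the propagation rule \(C = A \Sigma^2 A^T\) is exact, so it can be verified directly against a Monte Carlo sample. A minimal pure-NumPy sketch with made-up values for \(A\), \(b\), and \(\Sigma^2\):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative affine map z = A y + b and covariance of y
# (all values are made up for this sketch).
A = np.array([[1.0, 2.0],
              [0.5, -1.0],
              [3.0, 0.0]])
b = np.array([1.0, 0.0, -2.0])
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])

# Exact propagated covariance for an affine function.
C_exact = A @ Sigma @ A.T

# Monte Carlo cross-check: sample y, push through the map,
# and estimate the covariance of z empirically.
y = rng.multivariate_normal(mean=[0.0, 0.0], cov=Sigma, size=200_000)
z = y @ A.T + b
C_mc = np.cov(z, rowvar=False)

print(np.max(np.abs(C_mc - C_exact)))  # small: sampling noise only
```

For a nonlinear \(f\), the same check would show an additional discrepancy coming from the linearization error, which shrinks as the noise becomes small relative to the curvature of \(f\) near \(\hat y\).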
```python
import numpy
from algopy import CGraph, Function, UTPM, dot, qr, eigh, inv, zeros

def f(y):
    retval = zeros((3, 1), dtype=y)
    retval[0, 0] = numpy.log(dot(y.T, y))
    retval[1, 0] = numpy.exp(dot(y.T, y))
    retval[2, 0] = numpy.exp(dot(y.T, y)) - numpy.log(dot(y.T, y))
    return retval

D, Nm = 2, 40
P = Nm

# Propagate Nm directional derivatives at once (forward mode):
# degree-D truncated polynomials in all coordinate directions.
y = UTPM(numpy.zeros((D, P, Nm)))
y.data[0, :] = numpy.random.rand(Nm)
y.data[1, :] = numpy.eye(Nm)

# The first-order UTPM coefficients yield the Jacobian J(y).
J = f(y).data[1, :, :, 0]
print('Jacobian J(y) = \n', J)

# The source text breaks off after "C_epsilon = 0.3*numpy"; the
# completion below assumes a diagonal covariance matrix and
# propagates it through the Jacobian as C = J C_epsilon J^T.
C_epsilon = 0.3 * numpy.eye(Nm)
C = dot(J, dot(C_epsilon, J.T))
print('covariance matrix of z:\n', C)
```
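As a cross-check that does not require ALGOPY, the Jacobian of this particular \(f\) can also be derived by hand and compared against a central finite-difference estimate. The sketch below reduces the dimension from \(N_m = 40\) to 4 purely to keep \(\exp(y^T y)\) numerically small; the structure of \(f\) is unchanged and the point \(y_0\) is illustrative.

```python
import numpy as np

def f(y):
    # Same f as in the ALGOPY example, in plain NumPy.
    s = y @ y
    return np.array([np.log(s), np.exp(s), np.exp(s) - np.log(s)])

def jacobian_analytic(y):
    # Hand-derived: grad(y^T y) = 2y, then apply the chain rule
    # to log(s) and exp(s).
    s = y @ y
    g = 2.0 * y
    return np.vstack([g / s, np.exp(s) * g, np.exp(s) * g - g / s])

def jacobian_fd(y, h=1e-7):
    # Central finite differences, one coordinate direction at a time.
    n = y.size
    J = np.empty((3, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        J[:, i] = (f(y + e) - f(y - e)) / (2.0 * h)
    return J

y0 = np.array([0.3, 0.1, 0.2, 0.4])  # illustrative evaluation point
J_a = jacobian_analytic(y0)
J_f = jacobian_fd(y0)
print(np.max(np.abs(J_a - J_f)))  # agreement up to finite-difference error
```

Finite differences are a useful sanity check, but AD (as in the ALGOPY example) gives the Jacobian to working precision and scales better with dimension.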