Error Propagation Matrix
How do I propagate error values through a matrix diagonalization?

In the general case, if I have a symmetric, invertible (real-valued) matrix where each entry has a separate error associated with it (or, in the more general case, where each entry has a separate positive and negative error associated with it), how can I rigorously propagate these error values through the eigendecomposition to get error values for each of the eigenvalues and for each entry of the eigenvector matrix? For my current application, it is as critical that I avoid overestimating the error as it is that I avoid underestimating it.

(asked Oct 21 '11 by David Hollman on Cross Validated; tags: matrix, matrix-decomposition, error-propagation)

Accepted answer: The propagation will depend on the diagonalization algorithm, which might be a black box, as well as on the multivariate distribution of the errors. Pursuing an analytical solution therefore looks unpromising. Why not just compute an empirical distribution? That is, draw a large number of variants of the original matrix from the hypothesized error distribution and diagonalize them, then study the output distribution of the eigenvalues and eigenvectors. There are some subtleties, because there will not be a definite matching among the lists of eigenvalues and eigenvectors obtained from different draws.
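A minimal NumPy sketch of the suggested Monte Carlo approach, assuming independent Gaussian errors on the matrix entries (the symmetrization of each draw and the sort-based matching of eigenvalues are illustrative choices, not prescribed by the answer):

import numpy

def eig_mc(A, sigma, n_draws=10000, seed=0):
    # Empirical eigenvalue distribution of a symmetric matrix A whose
    # entries carry independent Gaussian errors with std dev sigma[i, j].
    rng = numpy.random.default_rng(seed)
    n = A.shape[0]
    samples = numpy.empty((n_draws, n))
    for k in range(n_draws):
        E = rng.normal(0.0, sigma)                   # one realization of the entry errors
        E = 0.5 * (E + E.T)                          # keep the perturbed matrix symmetric
        samples[k] = numpy.linalg.eigvalsh(A + E)    # eigenvalues in ascending order
    return samples

A = numpy.array([[2.0, 0.3],
                 [0.3, 1.0]])
sigma = numpy.full_like(A, 0.05)                     # one error per entry
w = eig_mc(A, sigma)
print(w.mean(axis=0), w.std(axis=0))                 # central values and spreads

Sorting matches eigenvalues across draws by rank, which is exactly the matching subtlety the answer warns about: near-degenerate eigenvalues can swap order between draws, and eigenvector signs are arbitrary, so a more careful matching (for example, by maximum overlap with the unperturbed eigenvectors) is needed when eigenvector errors matter.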
Linear Error Propagation

(source: https://pythonhosted.org/algopy/examples/error_propagation.html)

This example shows how ALGOPY can be used for linear error propagation. Consider the error model

\[y = x + \epsilon,\]

where \(x\) is a vector and \(\epsilon\) is a random vector that is normally distributed with zero mean and covariance matrix \(\Sigma^2\). Here \(y\) is the observed quantity and \(x\) is a real vector representing the "true" value. One defines some estimator \(\hat x\) for \(x\), e.g. the arithmetic mean \(\hat x = \frac{1}{N_m} \sum_{i=1}^{N_m} y_i\). We assume that the confidence region of the estimate \(\hat x\) is known and is described by its covariance matrix

\[\Sigma^2 = \mathbb E[(\hat x - \mathbb E[\hat x])(\hat x - \mathbb E[\hat x])^T].\]

The question is: what can we say about the confidence region of a function of \(y\),

\[f: \mathbb R^N \rightarrow \mathbb R^M, \quad y \mapsto z = f(y),\]

when the confidence region of \(y\) is described by the covariance matrix \(\Sigma^2\)? For affine (linear) functions

\[z = f(y) = Ay + b,\]

the approach is described in the Wikipedia article http://en.wikipedia.org/wiki/Propagation_of_uncertainty . Nonlinear functions are simply linearized about the estimate \(\hat y\) of \(\mathbb E[y]\); in the vicinity of \(\hat y\), the linear model often approximates the nonlinear function quite well. To linearize the function, the Jacobian \(J(\hat y)\) of \(f\) has to be computed:

\[z = f(y) \approx f(\hat y) + J(\hat y)\,(y - \hat y).\]

The covariance matrix of \(z\) is then

\[C = \mathbb E[(z - \mathbb E[z])(z - \mathbb E[z])^T] \approx J \Sigma^2 J^T.\]

That means that if we know \(J(\hat y)\), we can approximate the confidence region of \(z = f(y)\) whenever \(f\) is sufficiently linear near \(\hat y\). To compute the Jacobian one can use the forward or the reverse mode of AD.

import numpy
from algopy import CGraph, Function, UTPM, dot, qr, eigh, inv, zeros

def f(y):
    retval = zeros((3, 1), dtype=y)
    retval[0, 0] = numpy.log(dot(y.T, y))
    retval[1, 0] = numpy.exp(dot(y.T, y))
    retval[2, 0] = numpy.exp(dot(y.T, y)) - numpy.log(dot(y.T, y))
    return retval

D, Nm = 2, 40
P = Nm
y = UTPM(numpy.zeros((2, P, Nm)))
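To carry the propagation formula \(C \approx J \Sigma^2 J^T\) through to a concrete result, here is a self-contained sketch that computes \(J\) by forward differences instead of AD (the flat-vector variant of f, the step size h, and the diagonal \(\Sigma^2\) are illustrative assumptions, not part of the ALGOPY example):

import numpy

def f_flat(y):
    # Same three outputs as f above, written for a flat input vector.
    v = y @ y
    return numpy.array([numpy.log(v), numpy.exp(v), numpy.exp(v) - numpy.log(v)])

def jacobian_fd(func, y_hat, h=1e-6):
    # Forward-difference Jacobian: J[i, j] = d func_i / d y_j at y_hat.
    f0 = func(y_hat)
    J = numpy.empty((f0.size, y_hat.size))
    for j in range(y_hat.size):
        yp = y_hat.copy()
        yp[j] += h
        J[:, j] = (func(yp) - f0) / h
    return J

Nm = 40
y_hat = numpy.random.rand(Nm)      # estimate of E[y]
Sigma2 = 0.01 * numpy.eye(Nm)      # assumed covariance matrix of y

J = jacobian_fd(f_flat, y_hat)     # shape (3, Nm)
C = J @ Sigma2 @ J.T               # approximate 3 x 3 covariance of z = f(y)
print(C)

With forward-mode AD, as in the ALGOPY example above, \(J\) would be exact to machine precision rather than a finite-difference approximation, but the final propagation step \(C = J \Sigma^2 J^T\) is identical.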