Gaussian Error Distribution
Normal distribution

[Figures: probability density function (the red curve is the standard normal distribution) and cumulative distribution function.]

Notation: N(μ, σ²)
Parameters: μ ∈ R, the mean (location); σ² > 0, the variance (squared scale)
Support: x ∈ R
PDF: (1/√(2πσ²)) exp(−(x − μ)²/(2σ²))
CDF: ½[1 + erf((x − μ)/(σ√2))]
Quantile: μ + σ√2 erf⁻¹(2F − 1)
Mean: μ
Median: μ
Mode: μ
Variance: σ²
Skewness: 0
Ex. kurtosis: 0
Entropy: ½ ln(2πσ²e)
MGF: exp(μt + ½σ²t²)
CF: exp(iμt − ½σ²t²)
Fisher information: (1/σ², 0; 0, 1/(2σ⁴))

In probability theory, the normal (or Gaussian) distribution is a very common continuous probability distribution. Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known.[1][2] The normal distribution is useful because of the central limit theorem. In its most general form, under some conditions (which include finite variance), it states that averages of independently drawn random variables converge in distribution to the normal; that is, they become normally distributed when the number of random variables is sufficiently large. Physical quantities that are expected to be the sum of many independent processes (such as measurement errors) often have distributions that are nearly normal.[3] Moreover, many
on the real line. Both families add a shape parameter to the normal distribution. To distinguish the two families, they are referred to below as "version 1" and "version 2". However this is not a
standard nomenclature.

Contents
1 Version 1
1.1 Parameter estimation
1.1.1 Maximum likelihood estimator
1.2 Applications
1.3 Properties
2 Version 2
2.1 Parameter estimation
2.2 Applications
3 Other distributions related to the normal
4 See also
5 References

Version 1

Generalized normal (version 1)

Parameters: μ, location (real); α > 0, scale (real); β > 0, shape (real)
Support: x ∈ (−∞, +∞)
PDF: β/(2αΓ(1/β)) · exp(−(|x − μ|/α)^β), where Γ denotes the gamma function
CDF: ½ + sgn(x − μ) · γ[1/β, (|x − μ|/α)^β] / (2Γ(1/β)), where γ denotes the lower incomplete gamma function
Mean: μ
Median: μ
Mode: μ
Variance: α²Γ(3/β)/Γ(1/β)
Skewness: 0
Ex. kurtosis: Γ(5/β)Γ(1/β)/Γ(3/β)² − 3
Entropy: 1/β − log[β/(2αΓ(1/β))] [1]

Known also as the exponential power distribution, or the generalized error distribution, this is a parametric family of symmetric distributions.
It includes all Laplace distributions, and as limiting cases it includes all continuous uniform distributions on bounded intervals of the real line.
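The PDF above is simple to evaluate numerically. The sketch below (the function name gen_normal_pdf is mine, not from any particular library) implements it with the standard-library gamma function, and checks the two special cases the text mentions: β = 2 with α = σ√2 recovers the normal distribution, and β = 1 recovers the Laplace distribution with scale α.

```python
import math

def gen_normal_pdf(x, mu=0.0, alpha=1.0, beta=2.0):
    """PDF of the generalized normal distribution (version 1):
    beta / (2 * alpha * Gamma(1/beta)) * exp(-(|x - mu| / alpha)**beta)."""
    norm = beta / (2.0 * alpha * math.gamma(1.0 / beta))
    return norm * math.exp(-((abs(x - mu) / alpha) ** beta))

x, sigma = 0.7, 1.0

# beta = 2, alpha = sigma * sqrt(2): should match the normal PDF.
normal = math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))
print(abs(gen_normal_pdf(x, alpha=sigma * math.sqrt(2.0)) - normal) < 1e-12)

# beta = 1, alpha = 1: should match the Laplace PDF exp(-|x|) / 2.
laplace = 0.5 * math.exp(-abs(x))
print(abs(gen_normal_pdf(x, beta=1.0) - laplace) < 1e-12)
```

Both checks hold because Γ(1/2) = √π makes the β = 2 normalization collapse to 1/(σ√(2π)), and Γ(1) = 1 makes the β = 1 case collapse to 1/(2α).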
Appendix C: Gaussian Distribution

Gaussian distribution. The Gaussian (normal) distribution was historically called the law of errors. It was used by Gauss to model errors in astronomical observations, which is why it is usually referred to as the Gaussian distribution. The probability density function for the standard Gaussian distribution (mean 0 and standard deviation 1) and for the Gaussian distribution with mean μ and standard deviation σ is given by the following formulas:

φ(x) = e^(−x²/2) / √(2π)
φ(x; μ, σ) = e^(−(x−μ)²/(2σ²)) / (σ√(2π))

The cumulative distribution function for the standard Gaussian distribution and for the Gaussian distribution with mean μ and standard deviation σ is given by the following formulas:

Φ(z) = ∫₋∞^z φ(x) dx
Φ(z; μ, σ) = Φ((z − μ)/σ)
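These formulas translate directly into code. The sketch below (the names pdf and cdf are mine) evaluates the density in closed form and the cumulative distribution via the standard identity Φ(z) = ½[1 + erf(z/√2)], which avoids numerical integration:

```python
import math

def pdf(x, mu=0.0, sigma=1.0):
    """Gaussian probability density with mean mu and standard deviation sigma."""
    z = (x - mu) / sigma
    return math.exp(-z * z / 2.0) / (sigma * math.sqrt(2.0 * math.pi))

def cdf(x, mu=0.0, sigma=1.0):
    """Gaussian cumulative distribution, via the error function identity
    Phi(z) = (1 + erf(z / sqrt(2))) / 2."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Probability mass within k standard deviations of the mean:
for k in (1, 2, 3):
    print(k, cdf(k) - cdf(-k))
```

The loop reproduces the 68-95-99.7 rule: the three probabilities come out to about 0.6827, 0.9545, and 0.9973.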
As the figure above illustrates, 68% of the values lie within 1 standard deviation of the mean; 95% lie within 2 standard deviations; and 99.7% lie within 3 standard deviations. Central limit theorem. Under generous technical conditions, the distribution of the sum of a large number of independent random variables has (approximately) a normal distribution. In science and engineering, it is often reasonable to treat the error of an observation as the result of many small, independent errors. This enables us to apply the central limit theorem and treat the errors as Gaussian.
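This effect is easy to see in simulation. The sketch below (all parameter choices are mine, for illustration) models each observation's error as a sum of 48 small independent uniform errors, then checks that the resulting totals already follow the Gaussian 68% rule even though the individual errors are uniform, not Gaussian:

```python
import random
import statistics

random.seed(1)

def total_error(n_small=48, half_width=0.1):
    """One observation's error: a sum of many small, independent uniform errors."""
    return sum(random.uniform(-half_width, half_width) for _ in range(n_small))

errors = [total_error() for _ in range(100_000)]
mu = statistics.fmean(errors)
sigma = statistics.pstdev(errors)

# Fraction of totals within one standard deviation of the mean:
within1 = sum(abs(e - mu) <= sigma for e in errors) / len(errors)
print(round(within1, 2))  # close to 0.68, as for a Gaussian
```

Increasing n_small drives the empirical fractions at 2 and 3 standard deviations toward 95% and 99.7% as well.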
more common than the rates (one time in twenty and one time in four hundred) predicted by the Gaussian, or "normal", error distribution. As you recall, we justified the whole least-squares approach by the fact that it gives the most likely set of answers if the distribution of random errors is Gaussian. The official reason why people always assume a Gaussian error distribution goes back to something called the Central Limit Theorem. The Central Limit Theorem says that whenever a measurement is subject to a very large number of very small errors, the probability distribution for the total error is driven toward the Gaussian distribution. This is true regardless of the form of the original probability distributions of the individual errors. A proof - and it is a pretty one - can be found in any book on the theory of statistics. The real reason why people always assume a Gaussian error distribution is that, having made that assumption, we can then easily derive (and have derived!) exact mathematical formulae which allow us to compute directly the "best" values for the unknown parameters. This is not necessarily possible for other probability distributions. What would happen if, for instance, the error distribution for your data were not Gaussian, but Lorentzian? With the Gaussian, when you go to maximize the likelihood you discover that you must minimize the sum of the squares of the residuals. This leads to a very simple and straightforward set of simultaneous linear equations. With the Lorentz function, you get no such simplification: try differentiating the Lorentzian likelihood with respect to each of the unknown parameters, and see where it gets you. Pretending that the error distribution is Gaussian even if it isn't makes life a lot simpler. The fact is, with real data you don't know