Error Function Probability Integral
The error function has a long history in probability theory, where it was expressed through the integral $\int_0^z e^{-t^2}\,dt$. Later C. Kramp (1799) used this integral for the definition of the complementary error function $\operatorname{erfc}(z)$. P.-S. Laplace (1812) derived an asymptotic expansion of the error function. The probability integrals were so named because they are widely applied in the theory of probability, in both normal and limit distributions. To obtain, say, a normally distributed random variable from a uniformly distributed random variable, the inverse of the error function, namely $\operatorname{erf}^{-1}(z)$, is needed. The inverse was systematically investigated in the second half of the twentieth century, especially by J. R. Philip (1960) and A. J. Strecok (1968).
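To make the last remark concrete, here is a minimal sketch (not part of the original text) of generating normal variates from uniform ones via the inverse error function, assuming NumPy and SciPy are available:

```python
# Sketch: inverse-transform sampling with the inverse error function.
import numpy as np
from scipy.special import erfinv

rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)            # U(0, 1) samples

# If Phi is the standard normal CDF, then Phi^{-1}(u) = sqrt(2) * erfinv(2u - 1),
# so x below is (approximately) N(0, 1) distributed.
x = np.sqrt(2.0) * erfinv(2.0 * u - 1.0)

print(x.mean(), x.std())                 # close to 0 and 1
```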
Definitions of probability integrals and inverses

The probability integral (error function) $\operatorname{erf}(z)$, the generalized error function $\operatorname{erf}(z_1, z_2)$, the complementary error function $\operatorname{erfc}(z)$, the imaginary error function $\operatorname{erfi}(z)$, the inverse error function $\operatorname{erf}^{-1}(z)$, the inverse of the generalized error function $\operatorname{erf}^{-1}(z_1, z)$, and the inverse complementary error function $\operatorname{erfc}^{-1}(z)$ are defined through the following formulas:
$$\operatorname{erf}(z) = \frac{2}{\sqrt{\pi}} \int_0^z e^{-t^2}\,dt, \qquad \operatorname{erf}(z_1, z_2) = \frac{2}{\sqrt{\pi}} \int_{z_1}^{z_2} e^{-t^2}\,dt,$$
$$\operatorname{erfc}(z) = \frac{2}{\sqrt{\pi}} \int_z^{\infty} e^{-t^2}\,dt = 1 - \operatorname{erf}(z), \qquad \operatorname{erfi}(z) = \frac{2}{\sqrt{\pi}} \int_0^z e^{t^2}\,dt,$$
with the inverse functions defined implicitly by $\operatorname{erf}\!\left(\operatorname{erf}^{-1}(z)\right) = z$, $\operatorname{erf}\!\left(z_1, \operatorname{erf}^{-1}(z_1, z)\right) = z$, and $\operatorname{erfc}\!\left(\operatorname{erfc}^{-1}(z)\right) = z$. These seven functions are typically called probability integrals and their inverses. Instead of using definite integrals, the three univariate error functions can be defined through the following infinite series:
$$\operatorname{erf}(z) = \frac{2}{\sqrt{\pi}} \sum_{k=0}^{\infty} \frac{(-1)^k z^{2k+1}}{k!\,(2k+1)}, \qquad \operatorname{erfc}(z) = 1 - \frac{2}{\sqrt{\pi}} \sum_{k=0}^{\infty} \frac{(-1)^k z^{2k+1}}{k!\,(2k+1)}, \qquad \operatorname{erfi}(z) = \frac{2}{\sqrt{\pi}} \sum_{k=0}^{\infty} \frac{z^{2k+1}}{k!\,(2k+1)}.$$
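As a quick numerical cross-check of these definitions and of the series (an illustrative sketch, assuming SciPy's erf, erfc, and erfi are available):

```python
# Sketch: checking the integral definitions and the series for erf numerically.
import math
from scipy.special import erf, erfc, erfi
from scipy.integrate import quad

z = 0.8

# Integral definitions
erf_int  = 2.0 / math.sqrt(math.pi) * quad(lambda t: math.exp(-t * t), 0.0, z)[0]
erfc_int = 2.0 / math.sqrt(math.pi) * quad(lambda t: math.exp(-t * t), z, math.inf)[0]
erfi_int = 2.0 / math.sqrt(math.pi) * quad(lambda t: math.exp(t * t), 0.0, z)[0]

# Series: erf(z) = 2/sqrt(pi) * sum_{k>=0} (-1)^k z^(2k+1) / (k! (2k+1))
erf_series = 2.0 / math.sqrt(math.pi) * sum(
    (-1) ** k * z ** (2 * k + 1) / (math.factorial(k) * (2 * k + 1)) for k in range(30)
)

print(erf(z), erf_int, erf_series)   # all agree
print(erfc(z), erfc_int)             # erfc(z) = 1 - erf(z)
print(erfi(z), erfi_int)
```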
A quick look at the probability integrals and inverses

Here is a quick look at the graphics for the probability integrals and inverses along the real axis. [Plots of the error functions and their inverses on the real axis; not reproduced here.]

Connections within the group of probability integrals and inverses and with other function groups

Representations through more general functions

The probability integrals $\operatorname{erf}(z)$, $\operatorname{erf}(z_1,z_2)$, $\operatorname{erfc}(z)$, and $\operatorname{erfi}(z)$ are particular cases of two more general functions: the hypergeometric and Meijer G functions. For example, they can be represented through the confluent hypergeometric functions ${}_1F_1$ and $U$; in particular,
$$\operatorname{erf}(z) = \frac{2z}{\sqrt{\pi}}\, {}_1F_1\!\left(\tfrac{1}{2}; \tfrac{3}{2}; -z^2\right).$$
Representations of these probability integrals through classical Meijer G functions are also rather simple; a common prefactor in those formulas can be removed by passing from the classical Meijer G function to the generalized one. The probability integrals $\operatorname{erf}(z)$, $\operatorname{erf}(z_1,z_2)$, $\operatorname{erfc}(z)$, and $\operatorname{erfi}(z)$ are likewise particular cases of the incomplete gamma function, the regularized incomplete gamma function, and the exponential integral; for example, for $z \ge 0$,
$$\operatorname{erf}(z) = \frac{\gamma\!\left(\tfrac{1}{2}, z^2\right)}{\sqrt{\pi}}, \qquad \operatorname{erfc}(z) = \frac{\Gamma\!\left(\tfrac{1}{2}, z^2\right)}{\sqrt{\pi}}.$$

Representations through related equivalent functions

The probability integrals $\operatorname{erf}(z)$, $\operatorname{erfc}(z)$, and $\operatorname{erfi}(z)$ can also be represented through the Fresnel integrals. (The full set of formulas is available at http://functions.wolfram.com/GammaBetaErf/Erf/introductions/ProbabilityIntegrals/ShowAll.html.)
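The following sketch (added for illustration, assuming SciPy's hyp1f1 and regularized incomplete gamma functions) numerically confirms the confluent hypergeometric and incomplete gamma representations quoted above:

```python
# Sketch: verifying two representations of erf/erfc numerically.
import math
from scipy.special import erf, erfc, hyp1f1, gammainc, gammaincc

z = 1.3

# Confluent hypergeometric representation: erf(z) = (2z/sqrt(pi)) * 1F1(1/2; 3/2; -z^2)
erf_hyp = 2.0 * z / math.sqrt(math.pi) * hyp1f1(0.5, 1.5, -z * z)

# Regularized incomplete gamma representation (z >= 0):
# erf(z) = P(1/2, z^2), erfc(z) = Q(1/2, z^2)
erf_gamma  = gammainc(0.5, z * z)    # regularized lower incomplete gamma
erfc_gamma = gammaincc(0.5, z * z)   # regularized upper incomplete gamma

print(erf(z), erf_hyp, erf_gamma)    # all three agree
print(erfc(z), erfc_gamma)
```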
Why is the error function defined as it is?
(Mathematics Stack Exchange: http://math.stackexchange.com/questions/37889/why-is-the-error-function-defined-as-it-is)

$\newcommand{\erf}{\operatorname{erf}}$ This may be a very naïve question, but here goes. The error function $\erf$ is defined by $$\erf(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}dt.$$ Of course, it is closely related to the normal cdf $$\Phi(x) = P(N < x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x e^{-t^2/2}dt$$ (where $N \sim N(0,1)$ is a standard normal) by the expression $\erf(x) = 2\Phi(x \sqrt{2})-1$. My question is: why is it natural or useful to define $\erf$ normalized in this way? I may be biased: as a probabilist, I think much more naturally in terms of $\Phi$. However, anytime I want to compute something, I find that my calculator or math library only provides $\erf$, and I have to go check a textbook or Wikipedia to remember where all the $1$s and $2$s go. Being charitable, I have to assume that $\erf$ was invented for some reason other than to cause me annoyance, so I would like to know what it is. If nothing else, it might help me remember the definition.

Wikipedia says: "The standard normal cdf is used more often in probability and statistics, and the error function is used more often in other branches of mathematics." So perhaps a practitioner of one of these mysterious "other branches of mathematics" would care to enlighten me. The most reasonable expression I've found is that $$P(|N| < x) = \erf(x/\sqrt{2}).$$ This at least gets rid of all but one of the apparently spurious constants, but still has a peculiar $\sqrt{2}$ floating around.

Tags: probability, statistics, special-functions, normal-distribution. Asked May 8 '11 by Nate Eldredge.

Comment: I had assumed it was because you can expand both $\erf(x)$ and $\erf^{-1}(x)$ in a Taylor series about $0$, while ...
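A short numerical check of the relations between erf and $\Phi$ quoted in the question (an illustrative sketch, assuming SciPy):

```python
# Sketch: erf(x) = 2*Phi(x*sqrt(2)) - 1 and P(|N| < x) = erf(x/sqrt(2)).
import math
from scipy.special import erf
from scipy.stats import norm

for x in (0.3, 1.0, 2.5):
    print(erf(x), 2.0 * norm.cdf(x * math.sqrt(2.0)) - 1.0)      # erf(x) vs 2*Phi(x*sqrt(2)) - 1
    print(norm.cdf(x) - norm.cdf(-x), erf(x / math.sqrt(2.0)))   # P(|N| < x) vs erf(x/sqrt(2))
```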
Integral over error function and normal distribution
(MathOverflow: http://mathoverflow.net/questions/60805/integral-over-error-function-and-normal-distribution)

Help me understand why $$\int_{-\infty}^{\infty}\frac{1}{2}\left[1+\operatorname{erf}\left(\frac{\theta-x}{\sqrt{2q^2}}\right)\right]\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)dx \approx \frac{1}{2}\left[1+\operatorname{erf}\left(\frac{\theta-\mu}{\sqrt{2(q^2+\sigma^2)}}\right)\right].$$ This transformation is used by Mark E. Glickman in "Parameter estimation in large dynamic paired comparison experiments" [1], but I can't find why it is so.

Tags: pr.probability, special-functions, probability-distributions, st.statistics. Asked Apr 6 '11, edited Apr 6 '11, by Sklavit.

Accepted answer. This is too long to be a comment. Let $X$ and $Y$ be independent ${\rm N}(\mu,\sigma^2)$ and ${\rm N}(0,q^2)$ rv's, respectively. Since $X+Y \sim {\rm N}(\mu,q^2+\sigma^2)$, it is equal in distribution to $Z + \mu$, where $Z \sim {\rm N}(0,q^2+\sigma^2)$. Hence, $$ {\rm P}(X + Y \le \theta ) = {\rm P}(Z \le \theta - \mu ) = \frac{1}{\sqrt{2\pi (\sigma^2 + q^2)}}\int_{-\infty}^{\theta - \mu} e^{-z^2/[2(\sigma^2 + q^2)]}\,{\rm d}z. $$ On the other hand, by the law of total probability (conditioning on $X$), we have $$ {\rm P}(X + Y \le \theta ) = \int_{-\infty}^{\infty} {\rm P}(Y \le \theta - x)\,\frac{1}{\sqrt{2\pi \sigma^2}}\, e^{-(x - \mu)^2/(2\sigma^2)} \,{\rm d}x. $$ Therefore, $$ {\rm P}(X + Y \le \theta ) = \int_{-\infty}^{\infty} \bigg[\int_{-\infty}^{\theta - x} \frac{1}{\sqrt{2\pi q^2}}\, e^{-y^2/(2q^2)}\,{\rm d}y\bigg] \frac{1}{\sqrt{2\pi \sigma^2}}\, e^{-(x - \mu)^2/(2\sigma^2)}\,{\rm d}x. $$ Writing the inner integral as $\frac{1}{2}\big[1+\operatorname{erf}\big(\frac{\theta-x}{\sqrt{2q^2}}\big)\big]$ and the first displayed probability as $\frac{1}{2}\big[1+\operatorname{erf}\big(\frac{\theta-\mu}{\sqrt{2(\sigma^2+q^2)}}\big)\big]$, then comparing the two expressions for ${\rm P}(X+Y \le \theta)$, gives the identity in the question; in fact it holds with equality, not merely approximately.
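The identity can also be checked numerically; the following sketch (not part of the original answer; the parameter values are arbitrary illustrative choices) compares the two sides with SciPy quadrature:

```python
# Sketch: smoothing the erf-based CDF with a normal density gives another erf-based CDF.
import math
from scipy.special import erf
from scipy.integrate import quad

theta, mu, sigma, q = 0.7, 0.2, 1.1, 0.8

def integrand(x):
    cdf_term = 0.5 * (1.0 + erf((theta - x) / math.sqrt(2.0 * q * q)))
    density  = math.exp(-(x - mu) ** 2 / (2.0 * sigma * sigma)) / math.sqrt(2.0 * math.pi * sigma * sigma)
    return cdf_term * density

lhs = quad(integrand, -math.inf, math.inf)[0]
rhs = 0.5 * (1.0 + erf((theta - mu) / math.sqrt(2.0 * (q * q + sigma * sigma))))
print(lhs, rhs)   # agree to quadrature accuracy: the "approximation" is exact
```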