The Delta Method and the Standard Error of a Transformed Parameter
What is the delta method and how is it used to estimate the standard error of a transformed parameter?

Title: Explanation of the delta method
Author: Alan H. Feiveson, NASA

The delta method, in its essence, expands a function of a random variable about its mean, usually with a one-step Taylor approximation, and then takes the variance. For example, if we want to approximate the variance of $G(X)$, where $X$ is a random variable with mean $\mu$ and $G(\cdot)$ is differentiable, we can write

$$G(X) \approx G(\mu) + (X - \mu)\,G'(\mu),$$

so that

$$\operatorname{Var}(G(X)) \approx \operatorname{Var}(X)\,[G'(\mu)]^2,$$

where $G'(\cdot) = dG/dX$. This is a good approximation only if $X$ has a high probability of being close enough to its mean $\mu$ that the Taylor approximation is still accurate.

This idea extends easily to vector-valued functions of random vectors:

$$\operatorname{Var}(G(X)) \approx G'(\mu)\,\operatorname{Var}(X)\,[G'(\mu)]^T,$$

and that, in fact, is the basis for deriving the asymptotic variance of maximum-likelihood estimators. In the above, $X$ is an $m \times 1$ column vector; $\operatorname{Var}(X)$ is its $m \times m$ variance–covariance matrix; $G(\cdot)$ is a vector function returning an $n \times 1$ column vector; $G'(\cdot)$ is its $n \times m$ matrix of first derivatives (the Jacobian); and $T$ is the transpose operator. $\operatorname{Var}(G(X))$ is the resulting $n \times n$ variance–covariance matrix of $G(X)$.
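To make both approximations concrete, here is a minimal Python sketch, not Stata code. The transformation $G(X) = e^X$, the example means and (co)variances, and the finite-difference step `h` are illustrative assumptions, not values from the FAQ; in Stata itself, commands such as nlcom report delta-method standard errors of this kind. The Monte Carlo check shows the delta-method figure is an approximation, close to but not equal to the simulated variance.

```python
import numpy as np

# --- Univariate case: Var(G(X)) ~= Var(X) * [G'(mu)]^2 ---
mu, var_x = 0.5, 0.04        # assumed mean and variance of X
G = np.exp                   # assumed transformation G(X) = exp(X)

h = 1e-6                     # central-difference step for G'(mu)
g_prime = (G(mu + h) - G(mu - h)) / (2 * h)
var_delta = g_prime**2 * var_x

# Monte Carlo check: simulate X ~ N(mu, var_x) and compare variances
rng = np.random.default_rng(0)
x = rng.normal(mu, np.sqrt(var_x), size=1_000_000)
print(f"delta method: {var_delta:.5f}   simulation: {G(x).var():.5f}")

# --- Multivariate case: Var(G(X)) ~= G'(mu) Var(X) [G'(mu)]^T ---
mu_vec = np.array([1.0, 2.0])                        # assumed mean vector, m = 2
V = np.array([[0.05, 0.01],                          # assumed m x m Var(X)
              [0.01, 0.08]])
Gv = lambda v: np.array([v[0] * v[1], v[0] / v[1]])  # assumed G: R^2 -> R^2

# Jacobian G'(mu) by central differences, one column per input coordinate
J = np.empty((2, 2))
for j in range(2):
    e = np.zeros(2)
    e[j] = h
    J[:, j] = (Gv(mu_vec + e) - Gv(mu_vec - e)) / (2 * h)

print(J @ V @ J.T)   # resulting n x n variance-covariance matrix of G(X)
```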
In statistics, the delta method is a result concerning the approximate probability distribution for a function of an asymptotically normal statistical estimator, given knowledge of the limiting variance of that estimator.

Univariate delta method

While the delta method generalizes easily to a multivariate setting, careful motivation of the technique is more easily demonstrated in univariate terms. Roughly, if there is a sequence of random variables $X_n$ satisfying

$$\sqrt{n}\,[X_n - \theta] \xrightarrow{D} \mathcal{N}(0, \sigma^2),$$

where $\theta$ and $\sigma^2$ are finite-valued constants and $\xrightarrow{D}$ denotes convergence in distribution, then

$$\sqrt{n}\,[g(X_n) - g(\theta)] \xrightarrow{D} \mathcal{N}(0, \sigma^2\,[g'(\theta)]^2)$$

for any function $g$ such that $g'(\theta)$ exists and is non-zero valued.

Proof in the univariate case

Demonstration of this result is fairly straightforward under the assumption that $g'$ is continuous at $\theta$. To begin, we use the mean value theorem (i.e., the first-order approximation of a Taylor series using Taylor's theorem):

$$g(X_n) = g(\theta) + g'(\tilde{\theta})(X_n - \theta),$$

where $\tilde{\theta}$ lies between $X_n$ and $\theta$. Since $X_n \xrightarrow{P} \theta$ and $\tilde{\theta}$ lies between $X_n$ and $\theta$, it must be that $\tilde{\theta} \xrightarrow{P} \theta$, and since $g'$ is continuous at $\theta$, applying the continuous mapping theorem yields

$$g'(\tilde{\theta}) \xrightarrow{P} g'(\theta),$$

where $\xrightarrow{P}$ denotes convergence in probability. Rearranging the terms and multiplying by $\sqrt{n}$ gives

$$\sqrt{n}\,[g(X_n) - g(\theta)] = g'(\tilde{\theta})\,\sqrt{n}\,[X_n - \theta].$$

Since $g'(\tilde{\theta}) \xrightarrow{P} g'(\theta)$ and $\sqrt{n}\,[X_n - \theta] \xrightarrow{D} \mathcal{N}(0, \sigma^2)$, Slutsky's theorem yields

$$\sqrt{n}\,[g(X_n) - g(\theta)] \xrightarrow{D} \mathcal{N}(0, \sigma^2\,[g'(\theta)]^2),$$

which completes the proof.
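A short simulation can illustrate the limiting statement; the particular choices below are assumptions for the sketch, not taken from the article. Taking $X_n$ to be the sample mean of $n$ i.i.d. Exponential(1) draws gives $\theta = 1$ and $\sigma^2 = 1$, and with $g(x) = x^2$ the limiting standard deviation of $\sqrt{n}\,[g(X_n) - g(\theta)]$ should be $\sigma\,|g'(\theta)| = 2$.

```python
import numpy as np

# Sketch of the univariate limit theorem under assumed choices: X_n is the
# sample mean of n i.i.d. Exponential(1) draws (theta = 1, sigma^2 = 1) and
# g(x) = x^2, so sqrt(n)[g(X_n) - g(theta)] should have sd near |g'(1)| = 2.
rng = np.random.default_rng(1)
n, reps = 1_000, 10_000
theta, sigma = 1.0, 1.0
g = lambda x: x**2
g_prime = lambda x: 2.0 * x

xbar = rng.exponential(1.0, size=(reps, n)).mean(axis=1)
z = np.sqrt(n) * (g(xbar) - g(theta))

print(f"empirical sd: {z.std():.4f}")                      # close to 2
print(f"delta method: {sigma * abs(g_prime(theta)):.4f}")  # exactly 2
```

The agreement improves as n grows, which is exactly the asymptotic character of the result: the delta method describes the limiting distribution, not the finite-sample one.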