As Sample Size Increases, the Standard Error of the Mean Decreases
The standard error of the mean measures the variability of the mean from sample to sample.
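As a point of reference (a standard result, stated here for convenience rather than taken from the thread): for a sample of size $n$ drawn from a population with standard deviation $\sigma$, the standard error of the sample mean is

$$\operatorname{SE}(\bar{x}) = \frac{\sigma}{\sqrt{n}},$$

so, other things being equal, quadrupling the sample size halves the standard error.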
Why does increasing the sample size lower the variance?
Big picture: I'm trying to understand how increasing the sample size increases the power of an experiment. My lecturer's slides explain this with a picture of two normal distributions, one for the null hypothesis and one for the alternative hypothesis, with a decision threshold c between them. They argue that increasing the sample size will lower the variance and thereby make the curves narrower and more sharply peaked, reducing the shared area under the curves and so the probability of a Type II error.

Small picture: I don't understand how a bigger sample size will lower the variance. I assume you just calculate the sample variance and use it as a parameter in a normal distribution.

I tried googling, but most accepted answers have 0 upvotes or are merely examples. I also tried thinking it through: by the law of large numbers, every value should eventually stabilize around its probable value under the normal distribution we assume, and the sample variance should therefore converge to the variance of that assumed normal distribution. But what is the variance of that normal distribution, and is it a minimum value, i.e. can we be sure our sample variance decreases to that value?

Tags: variance, sampling, power

asked Dec 21 '14 at 0:01 – user2740

Your thought experiment concerned normally distributed data, but it also applies to data drawn from many other distributions (as noted by @Aksakal, not all! The Cauchy is a commonly cited example of such bad behaviour). For binomial data there is a good discussion of how power and standard error vary with sample size at stats.stackexchange.com/q/87730/22228 – Silverfish Dec 21 '14 at 2:26

As you are new to Cross Validated, allow me to point out that if you received what you consider a satisfactory answer, you should consider marking it as "accepted" by clicking the green tick to the left of it. This provides additional reputation for the answerer and also marks the question as resolved. – Silverfish
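To make the "small picture" question concrete, here is a minimal simulation sketch in Python (the population parameters, mean 0 and SD 8, are made-up numbers for illustration). It draws repeated samples of increasing size and shows that the empirical spread of the sample mean tracks $\sigma/\sqrt{n}$:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical population: normal with mean 0 and SD 8 (both assumed).
pop_mean, pop_sd = 0.0, 8.0

for n in (5, 20, 80, 320):
    # Draw 10,000 samples of size n and record each sample's mean.
    sample_means = rng.normal(pop_mean, pop_sd, size=(10_000, n)).mean(axis=1)
    # The spread of the sample means shrinks like pop_sd / sqrt(n).
    print(f"n={n:4d}  empirical SD of means={sample_means.std(ddof=1):.3f}  "
          f"theory={pop_sd / np.sqrt(n):.3f}")
```

Note that the variance of the individual observations does not shrink; it is the variance of the *sample mean* that falls as $n$ grows, which is what narrows the two curves in the lecturer's picture.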
A typical experiment involves comparing samples between one regime and another (which may be a control). Sample size is important because larger samples increase the chance of finding a significant difference, but larger samples cost more money.

Why does a larger sample size help? The sample size is chosen to maximise the chance of uncovering a specific mean difference that is also statistically significant. Please note that "specific difference" and "statistically significant" are two quite different ideas. The specific difference is chosen by the researcher in terms of the outcome measure of the experiment: for instance, a 3 kg mean weight change in a diet experiment, or a 10% mean improvement in a teaching-method experiment. Statistical significance is a probability statement telling us how likely it is that the observed difference was due to chance alone.

The reason larger samples increase your chance of significance is that they more reliably reflect the population mean. Imagine we are doing a trial on whether a particular diet regime helps with weight loss. A random sample of people is chosen and each person is weighed before and after the diet, giving us their weight changes. Finally, we work out the mean weight change of the entire sample. To get a statistically significant result we want a result that is unlikely to have happened if the diet makes no difference (the null hypothesis).

Imagine a scenario where one researcher has a sample size of 20 and another has 40, both drawn from the same population, and both happen to get a mean weight change of 3 kg. How likely is it that a 3 kg weight change will be statistically significant in these two scenarios? To help us here, we'll show a distribution curve from each scenario.

[Figure: two distributions of possible sample means, one for n = 20 and one for n = 40, both drawn from the same population, with a sample mean weight change of 3 kg superimposed on each.]

The curves are both centred on zero to indicate the null hypothesis of "no difference" (i.e. that the diet has no effect). The 3 kg change is more likely to be significant when n = 40 because that distribution curve is narrower, so 3 kg is more extreme in relation to it than it is in the n = 20 case.
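A rough sketch of the arithmetic behind this picture (assuming, purely for illustration, that individual weight changes have a population SD of 8 kg; scipy is used for the normal tail probability):

```python
import math
from scipy.stats import norm

# Hypothetical numbers: a 3 kg observed mean change and an assumed
# population SD of 8 kg for individual weight changes.
mean_change, pop_sd = 3.0, 8.0

for n in (20, 40):
    se = pop_sd / math.sqrt(n)   # standard error of the mean
    z = mean_change / se         # distance from the null (0 kg) in SE units
    p = 2 * norm.sf(z)           # two-sided p-value under the null
    print(f"n={n}: SE = {se:.2f} kg, z = {z:.2f}, p = {p:.4f}")
```

With these made-up numbers, the same 3 kg change gives roughly p ≈ 0.09 at n = 20 but p ≈ 0.02 at n = 40: the larger sample narrows the sampling distribution, so the same observed difference becomes more extreme relative to it.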