Experiment-wise Error Rate
If c independent comparisons are each made at the per-comparison significance level αpc, then the experimentwise error rate is:

αew = 1 − (1 − αpc)^c

where αew is the experimentwise error rate, αpc is the per-comparison error rate, and c is the number of comparisons. For example, if 5 independent comparisons were each to be done at the .05 level, then the probability that at least one of them would result in a Type I error is 1 − (1 − .05)^5 = 0.226. If the comparisons are not independent, then the experimentwise error rate is less than 1 − (1 − αpc)^c. Finally, regardless of whether the comparisons are independent,

αew ≤ (c)(αpc)

For this example, .226 < (5)(.05) = 0.25.
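The two formulas above can be checked directly. This is a minimal sketch (the function names are mine, not from the source):

```python
# Experiment-wise error rate for c independent comparisons, each made at the
# per-comparison level alpha_pc:  alpha_ew = 1 - (1 - alpha_pc)**c.
# The bound alpha_ew <= c * alpha_pc holds whether or not the comparisons
# are independent.

def experimentwise_error(alpha_pc: float, c: int) -> float:
    """Exact alpha_ew for c independent comparisons."""
    return 1 - (1 - alpha_pc) ** c

def bonferroni_bound(alpha_pc: float, c: int) -> float:
    """Upper bound on alpha_ew, valid even for dependent comparisons."""
    return c * alpha_pc

# The example from the text: 5 independent comparisons at the .05 level.
print(round(experimentwise_error(0.05, 5), 3))  # 0.226
print(round(bonferroni_bound(0.05, 5), 2))      # 0.25
```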
We could have conducted the analysis for Example 1 of Basic Concepts for ANOVA by conducting multiple two-sample tests, e.g. to decide whether or not to reject the following null hypothesis:

H0: μ1 = μ2 = μ3

we can use the following three separate null hypotheses:

H0: μ1 = μ2
H0: μ2 = μ3
H0: μ1 = μ3

If any of these null hypotheses is rejected, then the original null hypothesis is rejected. Note, however, that if you set α = .05 for each of the three sub-analyses, then the overall alpha value is about .14, since 1 − (1 − α)^3 = 1 − (1 − .05)^3 = 0.142625.
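The inflation to roughly .14 can also be seen by simulation. The sketch below treats the three tests as independent (as the 1 − (1 − α)^3 calculation does); under each null hypothesis a p-value is uniform on (0, 1), so each test falsely rejects with probability .05:

```python
# Monte Carlo estimate of P(at least one Type I error) for three independent
# tests at alpha = .05, compared against the exact value 1 - (1 - .05)**3.
import random

random.seed(0)
alpha, n_tests, n_sims = 0.05, 3, 200_000

hits = 0
for _ in range(n_sims):
    # One simulated "experiment": three independent null p-values.
    if any(random.random() < alpha for _ in range(n_tests)):
        hits += 1

exact = 1 - (1 - alpha) ** n_tests
print(round(exact, 6))        # 0.142625
print(hits / n_sims)          # simulated estimate, close to 0.14
```

In the actual three pairwise tests the comparisons share data and are not independent, so .142625 is an approximation; the Bonferroni bound 3 × .05 = .15 still applies.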
Researchers are often interested in more than the simple question posed by an analysis of variance: do at least two treatment means differ? It may be that embedded in a group of treatments there is only one "control" treatment to which every other treatment should be compared, and comparisons among the non-control treatments may be uninteresting. One may also, after performing an analysis of variance and rejecting the null hypothesis of equality of treatment means, want to know exactly which treatments or groups of treatments differ. To answer these kinds of questions requires careful consideration of the hypotheses of interest both before and after an experiment is conducted, the Type I error rate selected for each hypothesis, the power of each hypothesis test, and the Type I error rate acceptable for the group of hypotheses as a whole.

Comparisons or Contrasts

If we let x̄i represent a treatment mean and ci a weight associated with the ith treatment mean, then a comparison or contrast can be represented as:

L = Σ ci x̄i, where Σ ci = 0

It can be seen that this contrast is a linear combination of treatment means (other contrasts, such as quadratic and cubic, are also possible). Expressions such as x̄1 − x̄2 or x̄1 − (x̄2 + x̄3)/2 are possible comparisons because they are weighted linear combinations of treatment means and the weights sum to zero. For example, previously we have performed comparisons between two treatment means using the t statistic:

t = (x̄1 − x̄2) / √(sp^2 (1/n1 + 1/n2))

with (n1 + n2) − 2 degrees of freedom. This statistic is a "contrast." The numerator of this expression follows the general form of the contrast outlined above, with the weights c1 and c2 equal to 1 and −1, respectively:

(1)x̄1 + (−1)x̄2 = x̄1 − x̄2

However, we also see that this contrast is divided by the pooled within-cell or within-group variation. So, a contrast statistic is actually the ratio of a linear combination of weighted means to an estimate of the pooled within-cell or error variation in the experiment:

t = Σ ci x̄i / √(MSerror Σ (ci^2 / ni))

with dferror degrees of freedom (the error degrees of freedom from the ANOVA). For a non-directional null hypothesis, t could be replaced by F = t^2, with 1 and dferror degrees of freedom.
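The claim that the two-sample t statistic is itself a contrast with weights 1 and −1 can be verified numerically. This is a sketch with hypothetical numbers; the function names are mine:

```python
# Check that the pooled two-sample t statistic equals the contrast form
# with weights c1 = 1, c2 = -1.
import math

def pooled_t(xbar1, xbar2, s1_sq, s2_sq, n1, n2):
    """Two-sample t with pooled variance; (n1 + n2) - 2 degrees of freedom."""
    sp_sq = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)
    return (xbar1 - xbar2) / math.sqrt(sp_sq * (1 / n1 + 1 / n2))

def contrast_form(means, weights, ns, sp_sq):
    """Same statistic written as sum(c_i * xbar_i) over its standard error."""
    num = sum(c * m for c, m in zip(weights, means))
    den = math.sqrt(sp_sq * sum(c**2 / n for c, n in zip(weights, ns)))
    return num / den

# Hypothetical data: two groups of 10, both with sample variance 4.0.
sp_sq = ((10 - 1) * 4.0 + (10 - 1) * 4.0) / (10 + 10 - 2)   # pooled variance
t1 = pooled_t(12.0, 10.0, 4.0, 4.0, 10, 10)
t2 = contrast_form([12.0, 10.0], [1, -1], [10, 10], sp_sq)
print(round(t1, 4), round(t2, 4))  # the two forms agree
```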
In general, a contrast statistic is the ratio of a linear combination of weighted means to the square root of the mean square within cells times the sum of the squared weights assigned to each mean, each divided by the sample size within cells:

t = (Σ ci x̄i) / √(MSerror Σ (ci^2 / ni))

where the ci's are the weights assigned to each treatment mean, ni is the number of observations in each cell, and MSerror is the within-cell variation pooled from the entire experiment.
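As a worked instance of this general formula, the sketch below (hypothetical numbers, my own function name) compares a "control" mean against the average of two treatment means with weights 1, −½, −½:

```python
# General contrast t statistic:
#   t = sum(c_i * xbar_i) / sqrt(MS_error * sum(c_i**2 / n_i))
import math

def contrast_t(means, weights, ns, ms_error):
    """t for the contrast sum(c_i * xbar_i); the weights must sum to zero."""
    assert abs(sum(weights)) < 1e-9, "not a contrast: weights must sum to 0"
    num = sum(c, ) if False else sum(c * m for c, m in zip(weights, means))
    den = math.sqrt(ms_error * sum(c**2 / n for c, n in zip(weights, ns)))
    return num / den

# Control (group 1) vs the average of treatments 2 and 3.
t = contrast_t(means=[10.0, 7.0, 8.5], weights=[1, -0.5, -0.5],
               ns=[8, 8, 8], ms_error=4.0)
print(round(t, 4))  # 2.5981
```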