Peter Ralph
Advanced Biological Statistics
Linear models…
Say we have \(n\) observations coming from combinations of two factors, so that the \(k\)th observation in the \(i\)th group of factor \(A\) and the \(j\)th group of factor \(B\) is \[\begin{equation} X_{ijk} = \mu + \alpha_i + \beta_j + \gamma_{ij} + \epsilon_{ijk} , \end{equation}\] where \(\mu\) is the overall mean, \(\alpha_i\) is the effect of the \(i\)th group of factor \(A\), \(\beta_j\) is the effect of the \(j\)th group of factor \(B\), \(\gamma_{ij}\) is their interaction, and \(\epsilon_{ijk}\) is the residual.
In words, \[\begin{equation} \begin{split} \text{(value)} &= \text{(overall mean)} + \text{(A group mean)} \\ &\qquad {} + \text{(B group mean)} + \text{(AB group mean)} + \text{(residual)} \end{split}\end{equation}\]
We’re looking at how mean pumpkin weight depends on both fertilizer level and late watering. So, we draw the pictures.
Ignore any “plant” and “plot” effects.
(e.g., only one pumpkin per vine and one plot per combination of conditions)
Say that \(i=1, 2, 3\) indexes fertilizer levels (low to high), and \(j=1, 2\) indexes late watering (no or yes), and \[\begin{equation}\begin{split} X_{ijk} &= \text{(weight of $k$th pumpkin in plot with conditions $i$, $j$)} \\ &= \mu + \alpha_i + \beta_j + \gamma_{ij} + \epsilon_{ijk} , \end{split}\end{equation}\] where \(\mu\) is the overall mean weight, \(\alpha_i\) is the effect of the \(i\)th fertilizer level, \(\beta_j\) is the effect of watering, \(\gamma_{ij}\) is their interaction, and \(\epsilon_{ijk}\) is the residual for the \(k\)th pumpkin.
A good way to get a more concrete understanding of something is by simulating it –
by writing code to generate a (random) dataset that you design to look, more or less, like what you expect the real data to look like.
This lets you explore statistical power, choose sample sizes, etcetera… but it also makes you realize things you hadn’t thought of previously.
pumpkins <- expand.grid(
    fertilizer=c("low", "medium", "high"),
    water=c("no water", "water"),
    plot=1:4,
    plant=1:5,
    weight=NA)
head(pumpkins)
## fertilizer water plot plant weight
## 1 low no water 1 1 NA
## 2 medium no water 1 1 NA
## 3 high no water 1 1 NA
## 4 low water 1 1 NA
## 5 medium water 1 1 NA
## 6 high water 1 1 NA
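The loop below uses a list `params` holding the model parameters \(\mu\), \(\alpha\), \(\beta\), \(\gamma\), and the residual SD; its definition is not shown here. Here is one possible version, with made-up illustrative values (not necessarily the ones that generated the output below):

```r
# simulation parameters (illustrative values; the names must match the
# factor levels in `pumpkins`, and gamma names use paste(f, w, sep=","))
params <- list(
    mu = 5,                                         # overall mean weight
    alpha = c(low = -0.5, medium = 0.5, high = 1),  # fertilizer effects
    beta = c("no water" = 0, "water" = 1),          # watering effects
    gamma = c("high,water" = -1),                   # interactions; missing combinations count as 0
    sigma = 1.5                                     # residual standard deviation
)
```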
# compute each row's expected weight from the model parameters
pumpkins$mean_weight <- NA
for (j in 1:nrow(pumpkins)) {
    f <- as.character(pumpkins$fertilizer[j])
    w <- as.character(pumpkins$water[j])
    fw <- paste(f, w, sep=",")
    # interaction term: zero for combinations not listed in params$gamma
    if (fw %in% names(params$gamma)) {
        gamma <- params$gamma[fw]
    } else {
        gamma <- 0
    }
    pumpkins$mean_weight[j] <- (
        params$mu
        + params$alpha[f]
        + params$beta[w]
        + gamma
    )
}
Or, equivalently,
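The same per-row computation can be done without a loop. This is a sketch, assuming the `pumpkins` data frame and the `params` list used in the loop above:

```r
# vectorized: look up each row's effects by name; combinations absent
# from params$gamma contribute zero to the mean
f <- as.character(pumpkins$fertilizer)
w <- as.character(pumpkins$water)
fw <- paste(f, w, sep=",")
gamma <- ifelse(fw %in% names(params$gamma), params$gamma[fw], 0)
pumpkins$mean_weight <- params$mu + params$alpha[f] + params$beta[w] + gamma
```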
Finally, we can draw the random values:
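For instance (a sketch, assuming `pumpkins$mean_weight` was computed as above and that the residual standard deviation is stored as `params$sigma`):

```r
# draw each pumpkin's weight as Gaussian noise around its plot-combination mean
pumpkins$weight <- rnorm(nrow(pumpkins), mean=pumpkins$mean_weight, sd=params$sigma)
```

Fitting the additive model to the simulated data with `summary(lm(weight ~ fertilizer + water, data=pumpkins))` then produces output like the following.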
##
## Call:
## lm(formula = weight ~ fertilizer + water, data = pumpkins)
##
## Residuals:
## Min 1Q Median 3Q Max
## -4.1705 -0.8935 -0.0265 1.1688 2.6672
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 5.4992 0.2663 20.649 < 2e-16 ***
## fertilizerlow -0.5081 0.3262 -1.558 0.12199
## fertilizermedium 0.2538 0.3262 0.778 0.43805
## waterwater 1.0439 0.2663 3.920 0.00015 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.459 on 116 degrees of freedom
## Multiple R-squared: 0.1534, Adjusted R-squared: 0.1315
## F-statistic: 7.009 on 3 and 116 DF, p-value: 0.0002254
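The analysis-of-variance table below matches the format of `summary()` applied to an `aov()` fit of the same additive model (a sketch; the exact call is not shown in the source):

```r
# ANOVA table for the additive model (assumes the simulated `pumpkins` data)
summary(aov(weight ~ fertilizer + water, data=pumpkins))
```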
## Df Sum Sq Mean Sq F value Pr(>F)
## fertilizer 2 12.04 6.02 2.83 0.06311 .
## water 1 32.69 32.69 15.37 0.00015 ***
## Residuals 116 246.81 2.13
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Or equivalently,
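That is, applying `anova()` to the linear-model fit gives the same table (assuming the same `pumpkins` data):

```r
# the same ANOVA table, via anova() on the lm fit
anova(lm(weight ~ fertilizer + water, data=pumpkins))
```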
## Analysis of Variance Table
##
## Response: weight
## Df Sum Sq Mean Sq F value Pr(>F)
## fertilizer 2 12.042 6.021 2.8298 0.0631062 .
## water 1 32.694 32.694 15.3660 0.0001503 ***
## Residuals 116 246.811 2.128
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Note: this table assumes a balanced design, with exactly \(n\) observations in every cell.
Which pairs of groups actually differ? John Tukey has a method for that.
> ?TukeyHSD
TukeyHSD package:stats R Documentation
Compute Tukey Honest Significant Differences
Description:
Create a set of confidence intervals on the differences between
the means of the levels of a factor with the specified family-wise
probability of coverage. The intervals are based on the
Studentized range statistic, Tukey's ‘Honest Significant
Difference’ method.
...
When comparing the means for the levels of a factor in an analysis
of variance, a simple comparison using t-tests will inflate the
probability of declaring a significant difference when it is not
in fact present. This is because the intervals are calculated with a
given coverage probability for each interval but the
interpretation of the coverage is usually with respect to the
entire family of intervals.
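Applying it to our additive fit (the call below matches the `Fit:` line in the output):

```r
# Tukey's honest significant differences for each factor
TukeyHSD(aov(weight ~ fertilizer + water, data=pumpkins))
```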
## Tukey multiple comparisons of means
## 95% family-wise confidence level
##
## Fit: aov(formula = weight ~ fertilizer + water, data = pumpkins)
##
## $fertilizer
## diff lwr upr p adj
## low-high -0.5081166 -1.2824907 0.2662575 0.2681118
## medium-high 0.2538116 -0.5205625 1.0281857 0.7171494
## medium-low 0.7619282 -0.0124459 1.5363023 0.0548354
##
## $water
## diff lwr upr p adj
## water-no water 1.043934 0.5164675 1.571401 0.0001503
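Next, the model with an interaction term. The table below matches the format of `summary()` applied to an `aov()` fit with `fertilizer * water` (the call itself is not shown in the source):

```r
# now include the fertilizer-by-water interaction
summary(aov(weight ~ fertilizer * water, data=pumpkins))
```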
## Df Sum Sq Mean Sq F value Pr(>F)
## fertilizer 2 12.04 6.02 2.994 0.0540 .
## water 1 32.69 32.69 16.257 0.0001 ***
## fertilizer:water 2 17.55 8.77 4.363 0.0149 *
## Residuals 114 229.26 2.01
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Call:
## lm(formula = weight ~ fertilizer * water, data = pumpkins)
##
## Residuals:
## Min 1Q Median 3Q Max
## -3.7109 -0.8085 0.0077 0.9694 2.5657
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 6.00577 0.31710 18.940 < 2e-16 ***
## fertilizerlow -1.10405 0.44845 -2.462 0.01532 *
## fertilizermedium -0.66999 0.44845 -1.494 0.13794
## waterwater 0.03078 0.44845 0.069 0.94540
## fertilizerlow:waterwater 1.19187 0.63421 1.879 0.06276 .
## fertilizermedium:waterwater 1.84760 0.63421 2.913 0.00431 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.418 on 114 degrees of freedom
## Multiple R-squared: 0.2136, Adjusted R-squared: 0.1791
## F-statistic: 6.194 on 5 and 114 DF, p-value: 4.075e-05
Or equivalently,
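This table has the format of `anova()` applied to the interaction fit:

```r
# ANOVA table for the model with the interaction term
anova(lm(weight ~ fertilizer * water, data=pumpkins))
```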
## Analysis of Variance Table
##
## Response: weight
## Df Sum Sq Mean Sq F value Pr(>F)
## fertilizer 2 12.042 6.021 2.9939 0.0540477 .
## water 1 32.694 32.694 16.2569 0.0001003 ***
## fertilizer:water 2 17.547 8.774 4.3626 0.0149398 *
## Residuals 114 229.264 2.011
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Me: Hey, I made our model more complicated, and look, it fits better!
You: Yeah, of course it does. How much better?
Me: How can we tell?
You: Well, does it reduce the residual variance more than you’d expect by chance?
To compare two models, \[\begin{aligned} F &= \frac{\text{(explained variance)}}{\text{(residual variance)}} \\ &= \frac{\text{(mean square model)}}{\text{(mean square residual)}} \\ &= \frac{\frac{\text{RSS}_1 - \text{RSS}_2}{p_2-p_1}}{\frac{\text{RSS}_2}{n-p_2}} \end{aligned}\]
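Plugging in the residual sums of squares printed in the tables above gives a quick sanity check of this formula against R's output:

```r
# F statistic comparing the additive model (1) to the interaction model (2),
# using the residual sums of squares from the ANOVA tables above
rss1 <- 246.811   # RSS of weight ~ fertilizer + water (p1 = 4 coefficients)
rss2 <- 229.264   # RSS of weight ~ fertilizer * water (p2 = 6 coefficients)
n <- 120; p1 <- 4; p2 <- 6
F_stat <- ((rss1 - rss2) / (p2 - p1)) / (rss2 / (n - p2))
round(F_stat, 4)  # 4.3626, the fertilizer:water F value
```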
anova(
    lm(weight ~ water, data=pumpkins),
    lm(weight ~ fertilizer + water, data=pumpkins),
    lm(weight ~ fertilizer * water, data=pumpkins)
)
## Analysis of Variance Table
##
## Model 1: weight ~ water
## Model 2: weight ~ fertilizer + water
## Model 3: weight ~ fertilizer * water
## Res.Df RSS Df Sum of Sq F Pr(>F)
## 1 118 258.85
## 2 116 246.81 2 12.042 2.9939 0.05405 .
## 3 114 229.26 2 17.547 4.3626 0.01494 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Do a stepwise model comparison of nested linear models, including plant and plot in the analysis. Think about what order to do the comparison in. Make sure plot is nested within treatment!
Data: pumkpins.tsv
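One possible shape for the comparison (a sketch only: the column names, and the `fertilizer:water:factor(plot)` term for nesting plot within treatment, are assumptions to check against the actual data):

```r
# read the data, then compare a sequence of nested models, smallest to largest;
# fertilizer:water:factor(plot) gives each plot its own effect,
# nested within its fertilizer-water combination
pumpkins <- read.table("pumkpins.tsv", header=TRUE, sep="\t")
anova(
    lm(weight ~ fertilizer + water, data=pumpkins),
    lm(weight ~ fertilizer * water, data=pumpkins),
    lm(weight ~ fertilizer * water + fertilizer:water:factor(plot), data=pumpkins)
)
```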