
Multivariate ANOVA

Peter Ralph

20 October – Advanced Biological Statistics

Outline

Linear models…

  1. with categorical variables (multivariate ANOVA)
  2. with continuous variables (least-squares regression)
  3. and likelihood (where does “least-squares” come from?).

Multivariate ANOVA

The factorial ANOVA model

Say we have \(n\) observations coming from combinations of two factors, so that the \(k\)th observation in the \(i\)th group of factor \(A\) and the \(j\)th group of factor \(B\) is \[\begin{equation} X_{ijk} = \mu + \alpha_i + \beta_j + \gamma_{ij} + \epsilon_{ijk} , \end{equation}\] where

  • \(\mu\): overall mean
  • \(\alpha_i\): mean deviation of group \(i\) of factor A from \(\mu\), averaged over the levels of B,
  • \(\beta_j\): mean deviation of group \(j\) of factor B from \(\mu\), averaged over the levels of A,
  • \(\gamma_{ij}\): mean deviation of the combination of group \(i\) of A and group \(j\) of B from \(\mu + \alpha_i + \beta_j\), and
  • \(\epsilon_{ijk}\): what’s left over (“error”, or “residual”)

In words, \[\begin{equation} \begin{split} \text{(value)} &= \text{(overall mean)} + \text{(A group mean)} \\ &\qquad {} + \text{(B group mean)} + \text{(AB group mean)} + \text{(residual)} \end{split}\end{equation}\]
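
In R’s model formula notation, a factorial model like this is written with a * between the two factors: A * B expands to A + B + A:B, i.e., the two main effects plus their interaction. Here is a minimal sketch with made-up data (the names df, A, B, and X are placeholders, not part of the pumpkin example below):

df <- expand.grid(A=factor(1:3), B=factor(1:2), k=1:10)  # two crossed factors, 10 reps per combination
df$X <- rnorm(nrow(df), mean=2)                          # made-up response
lm(X ~ A * B, data=df)   # same fit as lm(X ~ A + B + A:B, data=df)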

Example: pumpkin pie

We’re looking at how mean pumpkin weight depends on both

  • fertilizer input, and
  • late-season watering

So, we

  1. divide a large field into many plots
  2. randomly assign plots to either “high”, “medium”, or “low” fertilizer, and
  3. independently, assign plots to either “no late water” or “late water”; then
  4. plant a fixed number of plants per plot,
  5. grow pumpkins and measure their weight.

Questions:

  1. How does mean weight depend on fertilizer?
  2. … or, on late-season water?
  3. Does the effect of fertilizer depend on late-season water?
  4. How much does mean weight differ between different plants in the same conditions?
  5. … and, between plots of the same conditions?
  6. How much does weight of different pumpkins on the same plant differ?

draw the pictures

First, a simplification

⇒ Ignore any “plant” and “plot” effects.

(e.g., only one pumpkin per vine and one plot per combination of conditions)

Say that \(i=1, 2, 3\) indexes fertilizer levels (low to high), and \(j=1, 2\) indexes late watering (no or yes), and \[\begin{equation}\begin{split} X_{ijk} &= \text{(weight of $k$th pumpkin in plot with conditions $i$, $j$)} \\ &= \mu + \alpha_i + \beta_j + \gamma_{ij} + \epsilon_{ijk} , \end{split}\end{equation}\] where

  • \(\mu\):
  • \(\alpha_i\):
  • \(\beta_j\):
  • \(\gamma_{ij}\):
  • \(\epsilon_{ijk}\):

Making it real with simulation

A good way to get a concrete understanding of something is to simulate it:

that is, to write code that generates a (random) dataset designed to look, more or less, like what you expect the real data to look like.

This lets you explore statistical power, choose sample sizes, et cetera… but it also makes you realize things you hadn’t thought of beforehand.
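
For instance, here is a rough sketch of estimating power by simulation (the effect size, cell size, and function below are made up for illustration, not the class example): simulate many datasets with a known interaction effect and count how often a standard ANOVA detects it at the 5% level.

one_sim <- function (n_per_cell=20, effect=2, sd=1.2) {
    # simulate one two-factor dataset with a non-additive effect in one cell
    d <- expand.grid(fert=c("low", "medium", "high"),
                     water=c("no", "yes"), k=1:n_per_cell)
    means <- 20 + ifelse(d$fert == "low" & d$water == "yes", -effect, 0)
    d$weight <- rnorm(nrow(d), mean=means, sd=sd)
    # return the p-value for the interaction term
    anova(lm(weight ~ fert * water, data=d))["fert:water", "Pr(>F)"]
}
pvals <- replicate(200, one_sim())
mean(pvals < 0.05)   # estimated power: proportion of significant interactions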

First, make up some numbers

  • \(\mu\):
  • \(\alpha_i\):
  • \(\beta_j\):
  • \(\gamma_{ij}\):
  • \(\epsilon_{ijk}\):

Next, a data format

head( expand.grid(
          fertilizer=c("low", "medium", "high"),
          water=c("no water", "water"),
          plot=1:4,
          plant=1:5,
          weight=NA))
##   fertilizer    water plot plant weight
## 1        low no water    1     1     NA
## 2     medium no water    1     1     NA
## 3       high no water    1     1     NA
## 4        low    water    1     1     NA
## 5     medium    water    1     1     NA
## 6       high    water    1     1     NA

Exercise: simulation (IN CLASS)

pumpkins <- expand.grid(
              fertilizer=c("low", "medium", "high"),
              water=c("no water", "water"),
              plot=1:4,
              plant=1:5,
              weight=NA)

# true values
mu <- 20
alpha <- c('high'=0, 'medium'=-6, 'low'=-12)
beta <- c('no water'=0, 'water'=0)
gamma <- c('high.no water'=0,
           'high.water'=0,
           'medium.no water'=0,
           'medium.water'=2,
           'low.no water'=0,
           'low.water'=-2)
weight_sd <- 1.2
pumpkins$mean_weight <- (mu
    + alpha[as.character(pumpkins$fertilizer)]
    + beta[as.character(pumpkins$water)]
    + gamma[paste(pumpkins$fertilizer, pumpkins$water, sep='.')])
pumpkins$weight <- rnorm(nrow(pumpkins),
                         mean=pumpkins$mean_weight,
                         sd=weight_sd)
write.table(pumpkins, file="data/pumpkins.tsv", sep="\t", row.names=FALSE)

in class

ggplot(pumpkins) + geom_boxplot(aes(x=fertilizer:water, y=weight, fill=water))

[Figure: boxplots of simulated pumpkin weight for each fertilizer and water combination]

The simulated dataset is available at data/pumpkins.tsv.
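
To follow along, something like this should read it back in (assuming the file sits at that relative path):

pumpkins <- read.table("data/pumpkins.tsv", header=TRUE, sep="\t",
                       stringsAsFactors=TRUE)   # keep fertilizer and water as factors
str(pumpkins)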

Questions that (linear models and) ANOVA can answer

What are (estimates of) the coefficients?

summary(lm(weight ~ fertilizer + water, data=pumpkins))
## 
## Call:
## lm(formula = weight ~ fertilizer + water, data = pumpkins)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -3.5293 -1.0821  0.1251  1.0006  3.0004 
## 
## Coefficients:
##                  Estimate Std. Error t value Pr(>|t|)    
## (Intercept)        6.8595     0.2682  25.580   <2e-16 ***
## fertilizermedium   8.2095     0.3284  24.997   <2e-16 ***
## fertilizerhigh    13.0065     0.3284  39.603   <2e-16 ***
## waterwater         0.3018     0.2682   1.126    0.263    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.469 on 116 degrees of freedom
## Multiple R-squared:  0.9326, Adjusted R-squared:  0.9309 
## F-statistic: 535.2 on 3 and 116 DF,  p-value: < 2.2e-16
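
As a reminder of what these coefficients mean under R’s default treatment contrasts (here, “low” fertilizer and “no water” are the reference levels), a short sketch translating them into fitted group means:

fit <- lm(weight ~ fertilizer + water, data=pumpkins)
coef(fit)   # (Intercept) is the fitted mean for low fertilizer with no water;
            # the other coefficients are differences from that baseline
newdata <- expand.grid(fertilizer=c("low", "medium", "high"),
                       water=c("no water", "water"))
cbind(newdata, fitted_mean=predict(fit, newdata))   # fitted mean for each combination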

Do different fertilizer levels differ? water?

summary(aov(weight ~ fertilizer + water, data=pumpkins))
##              Df Sum Sq Mean Sq F value Pr(>F)    
## fertilizer    2   3461  1730.5 802.183 <2e-16 ***
## water         1      3     2.7   1.267  0.263    
## Residuals   116    250     2.2                   
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Or equivalently,

anova(lm(weight ~ fertilizer + water, data=pumpkins))
## Analysis of Variance Table
## 
## Response: weight
##             Df Sum Sq Mean Sq  F value Pr(>F)    
## fertilizer   2 3461.0 1730.51 802.1833 <2e-16 ***
## water        1    2.7    2.73   1.2668 0.2627    
## Residuals  116  250.2    2.16                    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

What are all those numbers?

Quinn & Keough, table 9.8

What do they mean?

Quinn & Keough, table 9.9
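
For a concrete connection to our example, here is a sketch (not from the textbook) of where the fertilizer row of the ANOVA tables above comes from, using the fact that the simulated design is balanced:

grand_mean <- mean(pumpkins$weight)
fert_means <- tapply(pumpkins$weight, pumpkins$fertilizer, mean)
n_per_level <- table(pumpkins$fertilizer)              # 40 pumpkins per fertilizer level
SS_fert <- sum(n_per_level * (fert_means - grand_mean)^2)
MS_fert <- SS_fert / (length(fert_means) - 1)          # "Mean Sq" for fertilizer
fit <- lm(weight ~ fertilizer + water, data=pumpkins)
MS_resid <- sum(residuals(fit)^2) / fit$df.residual    # residual "Mean Sq"
c(SS=SS_fert, MS=MS_fert, F=MS_fert / MS_resid)        # compare to the table above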

Which levels are different from which other ones?

John Tukey has a method for that.

> ?TukeyHSD

TukeyHSD                 package:stats                 R Documentation

Compute Tukey Honest Significant Differences

Description:

     Create a set of confidence intervals on the differences between
     the means of the levels of a factor with the specified family-wise
     probability of coverage.  The intervals are based on the
     Studentized range statistic, Tukey's ‘Honest Significant
     Difference’ method.

...

     When comparing the means for the levels of a factor in an analysis
     of variance, a simple comparison using t-tests will inflate the
     probability of declaring a significant difference when it is not
     in fact present.  This because the intervals are calculated with a
     given coverage probability for each interval but the
     interpretation of the coverage is usually with respect to the
     entire family of intervals.

Example

TukeyHSD(aov(weight ~ fertilizer + water, data=pumpkins))
##   Tukey multiple comparisons of means
##     95% family-wise confidence level
## 
## Fit: aov(formula = weight ~ fertilizer + water, data = pumpkins)
## 
## $fertilizer
##                  diff       lwr       upr p adj
## medium-low   8.209543  7.429806  8.989279     0
## high-low    13.006493 12.226757 13.786229     0
## high-medium  4.796951  4.017215  5.576687     0
## 
## $water
##                    diff       lwr      upr     p adj
## water-no water 0.301814 -0.229305 0.832933 0.2626956

Does the effect of fertilizer depend on water?

summary(aov(weight ~ fertilizer * water, data=pumpkins))
##                   Df Sum Sq Mean Sq  F value   Pr(>F)    
## fertilizer         2   3461  1730.5 1344.508  < 2e-16 ***
## water              1      3     2.7    2.123    0.148    
## fertilizer:water   2    104    51.8   40.211 6.09e-14 ***
## Residuals        114    147     1.3                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
summary(lm(weight ~ fertilizer * water, data=pumpkins))
## 
## Call:
## lm(formula = weight ~ fertilizer * water, data = pumpkins)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -2.96871 -0.64682  0.00617  0.77549  2.05257 
## 
## Coefficients:
##                             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                   7.9214     0.2537  31.226  < 2e-16 ***
## fertilizermedium              5.9472     0.3588  16.577  < 2e-16 ***
## fertilizerhigh               12.0832     0.3588  33.680  < 2e-16 ***
## waterwater                   -1.8219     0.3588  -5.078 1.50e-06 ***
## fertilizermedium:waterwater   4.5246     0.5074   8.918 9.11e-15 ***
## fertilizerhigh:waterwater     1.8465     0.5074   3.639 0.000412 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.135 on 114 degrees of freedom
## Multiple R-squared:  0.9605, Adjusted R-squared:  0.9588 
## F-statistic: 554.3 on 5 and 114 DF,  p-value: < 2.2e-16

Or equivalently,

anova(lm(weight ~ fertilizer * water, data=pumpkins))
## Analysis of Variance Table
## 
## Response: weight
##                   Df Sum Sq Mean Sq   F value    Pr(>F)    
## fertilizer         2 3461.0 1730.51 1344.5075 < 2.2e-16 ***
## water              1    2.7    2.73    2.1232    0.1478    
## fertilizer:water   2  103.5   51.76   40.2115 6.095e-14 ***
## Residuals        114  146.7    1.29                        
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Model comparison with ANOVA

The idea

Me: Hey, I made our model more complicated, and look, it fits better!

You: Yeah, of course it does. How much better?

Me: How can we tell?

You: Well, does it reduce the residual variance more than you’d expect by chance?

The \(F\) statistic

To compare two models, \[\begin{aligned} F &= \frac{\text{(explained variance)}}{\text{(residual variance)}} \\ &= \frac{\text{(mean square model)}}{\text{(mean square residual)}} \\ &= \frac{\frac{\text{RSS}_1 - \text{RSS}_2}{p_2-p_1}}{\frac{\text{RSS}_2}{n-p_2}} \end{aligned}\]
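
Here is a sketch checking this formula against R, using two of the nested pumpkin models (the residual sums of squares and parameter counts are computed by hand):

fit1 <- lm(weight ~ fertilizer + water, data=pumpkins)   # smaller model, p1 parameters
fit2 <- lm(weight ~ fertilizer * water, data=pumpkins)   # larger model, p2 parameters
RSS1 <- sum(residuals(fit1)^2); RSS2 <- sum(residuals(fit2)^2)
p1 <- length(coef(fit1)); p2 <- length(coef(fit2)); n <- nrow(pumpkins)
F_stat <- ((RSS1 - RSS2) / (p2 - p1)) / (RSS2 / (n - p2))
c(F=F_stat, p=pf(F_stat, df1=p2 - p1, df2=n - p2, lower.tail=FALSE))
anova(fit1, fit2)   # the F and Pr(>F) columns should match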

Nested model analysis

anova(
      lm(weight ~ water, data=pumpkins),
      lm(weight ~ fertilizer + water, data=pumpkins),
      lm(weight ~ fertilizer * water, data=pumpkins)
)
## Analysis of Variance Table
## 
## Model 1: weight ~ water
## Model 2: weight ~ fertilizer + water
## Model 3: weight ~ fertilizer * water
##   Res.Df    RSS Df Sum of Sq        F    Pr(>F)    
## 1    118 3711.3                                    
## 2    116  250.2  2    3461.0 1344.507 < 2.2e-16 ***
## 3    114  146.7  2     103.5   40.212 6.095e-14 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Your turn

Do a stepwise model comparison of nested linear models, including plant and plot in the analysis. Think about what order to do the comparison in. Make sure they are nested!

Data: data/pumpkins.tsv
