Peter Ralph
Advanced Biological Statistics
If \(X\) is an \(n \times k\) matrix and \(b\) is a vector of length \(k\), then “\(X b\)” is shorthand for the \(n\)-vector \[ (Xb)_i = \sum_{j=1}^k X_{ij} b_j . \]
In R, this matrix multiplication is written X %*% b.
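For example (a made-up \(X\) and \(b\), just to illustrate):
X <- matrix(1:6, nrow = 3, ncol = 2)   # a 3 x 2 matrix
b <- c(10, 1)
X %*% b          # each entry is sum_j X[i, j] * b[j]
sum(X[1, ] * b)  # the first entry, computed by hand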
Let’s say we want to predict apple yield using leaf_area, tree_age, and fertilizer:
\[\begin{aligned}
\text{yield}_i &= \beta_0 + \beta_1 \times \text{leaf_area}_i \\
&\qquad {} + \beta_2 \times \text{tree_age}_i
+ \beta_3 \times \text{fertilizer}_i + \epsilon_i.
\end{aligned}\]
If we set the columns of \(X\) to be a column of 1s (for the intercept), leaf_area, tree_age, and fertilizer,
and \(\beta = (\beta_0, \beta_1, \beta_2, \beta_3),\) then this is \[ \text{yield}_i = (X \beta)_i + \epsilon_i . \]
Or, \[ \text{yield} = X \beta + \epsilon . \]
Recall that the model.matrix() function gives you back an \(X\) matrix appropriate for factors as well.
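For instance (a hypothetical data frame, just to show the idea), if fertilizer is recorded as a factor:
# hypothetical apple data, with fertilizer as a factor
apples <- data.frame(leaf_area  = c(1.2, 0.8, 1.5),
                     tree_age   = c(5, 3, 7),
                     fertilizer = factor(c("low", "high", "low")))
model.matrix(~ leaf_area + tree_age + fertilizer, data = apples)
# the first column is all 1s (the intercept); the factor becomes indicator column(s)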
A generalized linear model has three ingredients:
the probability distribution (the “family”; describes the output)
the linear predictor (connects input to output)
the link function (connects linear predictor to probability distribution)
Linear: \[ Y \sim \Normal(X \beta, \sigma) .\]
Binomial (“Logistic”): \[ Y \sim \Binom(n, \logistic(X\beta)) .\]
Poisson: \[ Y \sim \Poisson(\exp(X\beta)) .\]
Gamma: \[ Y \sim \Gam(\text{scale}=\exp(X\beta)/k, \text{shape}=k) .\]
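As a concrete sketch (an invented example, not from these notes), simulating from the Poisson version looks like this in R:
# sketch: simulate from a Poisson GLM with log link (made-up n, x, and beta)
n <- 200
x <- runif(n, 0, 2)
X <- cbind(1, x)                  # design matrix: intercept plus one predictor
beta <- c(0.5, 1.0)
y <- rpois(n, exp(drop(X %*% beta)))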
glm(y ~ x, family=binomial)
family {stats}
Family objects provide a convenient way to specify the details of the models
used by functions such as glm.
binomial(link = "logit")
gaussian(link = "identity")
Gamma(link = "inverse")
inverse.gaussian(link = "1/mu^2")
poisson(link = "log")
Arguments
link: a specification for the model link function. This can be a
name/expression, a literal character string, a length-one character vector, or
an object of class "link-glm" (such as generated by make.link) provided it is
not specified via one of the standard names given next.
The gaussian family accepts the links (as names) identity, log and inverse; the
binomial family the links logit, probit, cauchit, (corresponding to logistic,
normal and Cauchy CDFs respectively) log and cloglog (complementary log-log);
the Gamma family the links inverse, identity and log; the poisson family the
links log, identity, and sqrt; and the inverse.gaussian family the links
1/mu^2, inverse, identity and log.
Note that the link function is the inverse of what we’d write down in the model: for instance, if the mean of the response is the exponential of the linear predictor, i.e., if \[ \E[Y] = \exp(X\beta), \] then we have link = "log".
Other inverses:
logistic has inverse logit
identity has inverse identity
x^2 (“squared”) has inverse sqrt
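You can look these up directly with make.link() (mentioned in the help page above), which returns a link function and its inverse:
lg <- make.link("log")
lg$linkfun(10)                 # the link: log(10)
lg$linkinv(lg$linkfun(10))     # its inverse: exp(log(10)) = 10
make.link("logit")$linkinv(0)  # logistic(0) = 0.5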
100 trials, where the probability of success depends on \(x\):
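The simulation code isn’t shown in these notes; a sketch like the following (variable names and true coefficients are guesses, picked to roughly match the fits below) produces similar data:
# assumed simulation sketch, not the original code
set.seed(23)
n <- 100
df <- data.frame(x = runif(n, 0, 10))
df$z <- rbinom(n, size = 1, prob = 1 / (1 + exp(-(-10 + 2 * df$x))))  # logistic(-10 + 2x)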
glm()
##
## Call:
## glm(formula = zz ~ df$x, family = "binomial")
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.79584 -0.14355 0.01418 0.07181 2.58115
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -9.7512 2.4701 -3.948 7.89e-05 ***
## df$x 1.9531 0.4849 4.028 5.62e-05 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 138.269 on 99 degrees of freedom
## Residual deviance: 28.807 on 98 degrees of freedom
## AIC: 32.807
##
## Number of Fisher Scoring iterations: 7
brms::brm()
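The call itself isn’t reproduced here, but judging from the summary below it was something along these lines (a sketch, not the exact code):
fit <- brms::brm(z | trials(1) ~ x, data = df, family = "binomial")
summary(fit)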
## Only 2 levels detected so that family 'bernoulli' might be a more efficient choice.
## Compiling Stan program...
## Start sampling
## Family: binomial
## Links: mu = logit
## Formula: z | trials(1) ~ x
## Data: df (Number of observations: 100)
## Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
## total post-warmup draws = 4000
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept -11.03 2.82 -17.66 -6.60 1.00 2249 1891
## x 2.20 0.55 1.34 3.50 1.00 2051 1891
##
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
Stan code:
data {
  int<lower=0> N;
  vector[N] X;
  vector[N] Y;
  vector<lower=0>[N] Z;
}
parameters {
  vector[2] beta;
}
model {
  Z ~ gamma(1, exp(- beta[1] * X - beta[2] * Y));
}
What is
the probability distribution? (describes the output)
the linear predictor? (connects input to output)
the link function? (connects linear predictor to probability distribution)
Then, simulate from it.