## 7.4 Likelihood Ratio

The likelihood ratio (LLR) test is used to assess the overall significance of the model. That is, it is a statistical test that answers the question of whether the model as a whole predicts the response variable better than would be expected by chance or, equivalently, whether at least one of the predictors in the model contributes significantly.

$\begin{aligned} &H_{0}: \beta_{1}=\ldots=\beta_{K}=0\\ &H_{1}: \text { at least one } \beta_{k} \neq 0 \end{aligned}$

To carry out this test, the probability of obtaining the observed values (the log-likelihood) under the fitted model ($$M_1$$) is compared with that obtained under a model without predictors (the null model, $$M_0$$).

The fitted model is considered useful if it is able to show an improvement in explaining the observations with respect to the null model (without predictors).

- The likelihood ratio test assesses the significance of the difference in deviance (residual deviance) between the fitted model $$M_1$$ and the null model $$M_0$$.

$LLR = -2\log \left[ \dfrac{L(M_0)}{L(M_1)} \right ] = 2 \left [ \log {L}(M_1) - \log L(M_0) \right ]$

The statistic follows a chi-square distribution, $$\chi^2,$$ with degrees of freedom equal to the difference in the degrees of freedom of the two models. When comparing against the null model, the degrees of freedom equal the number of predictors, $$K$$.
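The computation above can be sketched directly from the two log-likelihoods. This is a minimal illustration, not the chapter's actual code; the function name and the log-likelihood values are hypothetical:

```python
from scipy.stats import chi2

def likelihood_ratio_test(loglik_null, loglik_full, df):
    """LR statistic and p-value comparing a fitted model M1 to the null model M0.

    df: difference in degrees of freedom; against the null model,
    this is the number of predictors K.
    """
    llr = 2.0 * (loglik_full - loglik_null)   # LLR = 2 [log L(M1) - log L(M0)]
    p_value = chi2.sf(llr, df)                # right-tail chi-square probability
    return llr, p_value

# Hypothetical log-likelihoods for a model with K = 2 predictors:
llr, p = likelihood_ratio_test(loglik_null=-120.7, loglik_full=-111.5, df=2)
print(f"LLR = {llr:.2f}, p = {p:.4g}")
```

A significant (small) p-value rejects $$H_0$$ and indicates that at least one coefficient differs from zero.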

If the test is significant, it implies that the model is useful, but not that it is the best one: some of the independent variables may still not be statistically significant.

- In our example:

$\text{Likelihood ratio test: } \chi^2_{(2)} = 18.41 \; [p = 0.0001],$

indicates that the fitted model, with all the independent variables included, fits significantly better than the null model.
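As a quick sanity check on the reported result: for 2 degrees of freedom the chi-square survival function has the closed form $$e^{-x/2}$$, so the bracketed p-value can be verified with the standard library alone (the statistic 18.41 is taken from the example above):

```python
import math

# For df = 2, P(chi-square > x) = exp(-x / 2), so the p-value for the
# reported statistic 18.41 follows directly:
p = math.exp(-18.41 / 2)
print(round(p, 4))  # -> 0.0001, matching the bracketed p-value
```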