Taylor series approximation to a Bayesian posterior distribution


When we have a reasonable amount of data with which to calculate the likelihood function, the posterior distribution tends to come out looking approximately Normally distributed. In this section we will examine why that is, and provide a shorthand method to determine the approximating Normal distribution directly without needing to go through a complete Bayesian analysis.

Our best estimate θ₀ of the value of a parameter θ is the value for which the posterior distribution f(θ) is at its maximum. Mathematically, this equates to the condition:

$$\left.\frac{df(\theta)}{d\theta}\right|_{\theta=\theta_0} = 0 \qquad (1)$$

That is to say, θ₀ occurs where the gradient of f(θ) is zero. Strictly speaking, we also require that the gradient of f(θ) is going from positive to negative for θ₀ to be a maximum, i.e.:

$$\left.\frac{d^2 f(\theta)}{d\theta^2}\right|_{\theta=\theta_0} < 0$$

The second condition is only of any importance if the posterior distribution has two or more peaks, for which a Normal approximation to the posterior distribution would be inappropriate anyway. Taking the first and second derivatives of f(θ) assumes that θ is a continuous variable, but the principle applies equally to discrete variables, in which case we are simply looking for the value of θ for which the posterior distribution has the highest value.
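For example, in the discrete case the best estimate is just an argmax over the parameter's support. Here is a minimal Python sketch with entirely hypothetical numbers (5 observed successes with known p = 0.3, and a uniform prior over the unknown number of binomial trials N):

```python
import numpy as np
from scipy.stats import binom

# Hypothetical discrete example: the parameter is N, an unknown number of
# binomial trials, given 5 observed successes with known p = 0.3 and a
# uniform prior over N = 5..40.
N = np.arange(5, 41)
posterior = binom.pmf(5, N, 0.3)   # likelihood x flat prior (unnormalized)
posterior /= posterior.sum()       # normalize over the discrete support
N0 = N[np.argmax(posterior)]       # the value of N with the highest posterior
print(N0)
```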

The Taylor series expansion of a function allows one to produce a polynomial approximation to some function f(x) about some value x₀ that usually has a much simpler form than the original function. The Taylor series expansion says:

$$f(x) = \sum_{m=0}^{\infty} \frac{f^{(m)}(x_0)}{m!}(x - x_0)^m$$

where $f^{(m)}(x)$ represents the mth derivative of f(x) with respect to x.
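As a quick illustration, the truncated expansion can be generated symbolically. This is a minimal Python sketch using sympy, with ln(x) and the expansion point x₀ = 1 chosen arbitrarily for the demonstration:

```python
import sympy as sp

x = sp.symbols("x")

# Taylor polynomial of ln(x) about x0 = 1, keeping terms up to (x - 1)^2,
# i.e. the m = 0, 1, 2 terms of the expansion
taylor2 = sum(sp.diff(sp.log(x), x, m).subs(x, 1) / sp.factorial(m) * (x - 1)**m
              for m in range(3))
print(sp.expand(taylor2))  # quadratic approximation near x = 1: -x**2/2 + 2*x - 3/2
```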

To make the next calculation a little easier to manage, we first define the log of the posterior distribution, L(θ) = logₑ[f(θ)]. Since L(θ) increases with f(θ), the maximum of L(θ) occurs at the same value of θ as the maximum of f(θ). We now apply the Taylor series expansion to L(θ) about θ₀, keeping the first three terms:

$$L(\theta) \approx L(\theta_0) + L'(\theta_0)(\theta - \theta_0) + \frac{L''(\theta_0)}{2}(\theta - \theta_0)^2$$

The first term in this expansion is just a constant value (k) and tells us nothing about the shape of L(θ); the second term equals zero from Equation 1, since L′(θ) = f′(θ)/f(θ) is also zero at θ₀. We are therefore left with the simplified form:

$$L(\theta) \approx k + \frac{L''(\theta_0)}{2}(\theta - \theta_0)^2$$
This approximation will be good provided the higher-order terms (m = 3, 4, etc.) are much smaller than the m = 2 term here.

We can now take the exponential of L(θ) to get back to f(θ):

$$f(\theta) \approx K \exp\left[\frac{L''(\theta_0)}{2}(\theta - \theta_0)^2\right]$$

where K (= eᵏ) is a normalizing constant. Now, the Normal(μ, σ) distribution has probability density function f(x) given by:

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left[-\frac{(x - \mu)^2}{2\sigma^2}\right]$$
Comparing the above two equations, and remembering that L″(θ₀) is negative at a maximum, we can see that f(θ) has the same functional form as a Normal distribution where:

$$\mu = \theta_0 \qquad \textrm{and} \qquad \sigma = \frac{1}{\sqrt{-L''(\theta_0)}}$$

and we can thus often approximate the Bayesian posterior distribution by the following Normal distribution:

$$f(\theta) \approx \textrm{Normal}\left(\theta_0,\; \frac{1}{\sqrt{-L''(\theta_0)}}\right)$$
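This recipe, sometimes called the Laplace approximation, is easy to carry out numerically: find the mode θ₀, estimate L″(θ₀), and form the Normal distribution above. Below is a minimal Python sketch under assumed data (s = 14 successes in n = 50 trials with a uniform prior, so the exact posterior is Beta(15, 37) for comparison); the mode is found numerically and L″(θ₀) is estimated by a central finite difference:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import beta

# Hypothetical data: s successes in n trials with a uniform Beta(1,1) prior,
# so the exact posterior is Beta(s + 1, n - s + 1) for comparison.
s, n = 14, 50

def log_posterior(theta):
    # L(theta) = log f(theta) up to an additive constant; the normalizing
    # constant drops out of both derivatives, so it can be ignored
    return s * np.log(theta) + (n - s) * np.log(1 - theta)

# Step 1: locate the posterior mode theta_0 numerically
res = minimize_scalar(lambda t: -log_posterior(t),
                      bounds=(1e-6, 1 - 1e-6), method="bounded")
theta0 = res.x

# Step 2: estimate L''(theta_0) with a central finite difference
h = 1e-5
L2 = (log_posterior(theta0 + h) - 2 * log_posterior(theta0)
      + log_posterior(theta0 - h)) / h**2

# Step 3: the approximating Normal(theta_0, 1/sqrt(-L''(theta_0)))
sigma = 1.0 / np.sqrt(-L2)
print(f"Normal approximation: mean = {theta0:.4f}, sd = {sigma:.4f}")

# Compare with the exact Beta posterior
exact = beta(s + 1, n - s + 1)
print(f"Exact Beta posterior: mean = {exact.mean():.4f}, sd = {exact.std():.4f}")
```

For these assumed data the approximation reproduces the exact posterior mean and standard deviation to within a few percent, and the agreement improves as the amount of data grows.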
We illustrate this Normal (or quadratic) approximation with two simple examples:

Normal approximation to the Beta posterior distribution

Bayesian estimate of the mean of a Normal distribution with unknown standard deviation
