Recall Bayes' rule in Bayesian parameter learning:

\begin{equation} P\left(\theta | D\right) = \frac{P\left(D | \theta\right) p \left(\theta\right)}{\int_{\theta}P\left(D | \theta\right) p \left(\theta\right) \dd{\theta}} \end{equation}

We can't easily compute the denominator without evaluating the integral analytically; instead, we can sample from the unnormalized posterior. If you want an analytical form, you should hope that your prior is conjugate to the likelihood, which allows us to update the prior to the posterior in closed form.
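As a sketch of both routes, consider the Beta-Bernoulli pair (Beta is conjugate to the Bernoulli likelihood, so the posterior update is just adding counts), alongside a minimal Metropolis sampler that only evaluates the unnormalized numerator \(P(D \mid \theta)\,p(\theta)\). The function names, step size, and sample counts here are illustrative choices, not part of the original note:

```python
import math
import random

def beta_bernoulli_update(alpha, beta, data):
    """Conjugate update: Beta(alpha, beta) prior + Bernoulli likelihood.

    Posterior is Beta(alpha + successes, beta + failures), no integral needed.
    """
    successes = sum(data)
    failures = len(data) - successes
    return alpha + successes, beta + failures

def log_unnorm_posterior(theta, data):
    """log of P(D | theta) * p(theta) with a uniform prior (log p = 0)."""
    if not 0.0 < theta < 1.0:
        return -math.inf
    s = sum(data)
    n = len(data)
    return s * math.log(theta) + (n - s) * math.log(1.0 - theta)

def metropolis(data, n_samples=20000, step=0.1, seed=0):
    """Random-walk Metropolis: samples theta without ever normalizing."""
    rng = random.Random(seed)
    theta = 0.5
    samples = []
    for _ in range(n_samples):
        proposal = theta + rng.gauss(0.0, step)
        # Accept with probability min(1, posterior ratio); the normalizing
        # constant cancels in the ratio, which is the whole point.
        log_ratio = log_unnorm_posterior(proposal, data) - log_unnorm_posterior(theta, data)
        if math.log(rng.random()) < log_ratio:
            theta = proposal
        samples.append(theta)
    return samples

data = [1, 1, 0, 1]
print(beta_bernoulli_update(1, 1, data))   # closed-form posterior: Beta(4, 2)
samples = metropolis(data)
print(sum(samples[2000:]) / len(samples[2000:]))  # should be near 4/6
```

With a uniform (Beta(1, 1)) prior and three successes out of four, the conjugate route gives Beta(4, 2) exactly, and the sampler's mean should land near the analytic posterior mean 4/6.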
