The prior is the probability distribution over the parameter; it represents what was believed before seeing the data.
The posterior represents what is believed given both the prior information and the data just seen.
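In symbols (θ is the parameter, Y the data), Bayes' rule ties the two together:

    P(θ | Y) = P(Y | θ) × P(θ) / P(Y)
    posterior = likelihood × prior / evidence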
Data and hypotheses…
We have two hypotheses: H0 (the null) and H1 (the alternative)
We have data (Y)
We want to check whether our model (H1) fits the data (reject H0 in favour of H1) or not (retain H0)
What is the probability that we can reject H0 in favour of H1 at some significance level (α, via the p-value)?
These decisions are made a priori, before we know what the data will be or how they will behave.
The data give us evidence for the model (the "likelihood"), and we can even compare the likelihoods of different models, as in the sketch below
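As a minimal sketch of what "comparing likelihoods" means, here is a toy Python example; the data and the two candidate means are made up purely for illustration:

    import numpy as np
    from scipy.stats import norm

    # Made-up data, for illustration only
    y = np.array([0.8, 1.2, 0.5, 1.1, 0.9])

    # Two competing models of the mean, both with unit variance:
    # H0 says the mean is 0, H1 says the mean is 1
    logL_H0 = norm.logpdf(y, loc=0.0, scale=1.0).sum()
    logL_H1 = norm.logpdf(y, loc=1.0, scale=1.0).sum()

    # Likelihood ratio: how much better H1 explains these data than H0
    print(np.exp(logL_H1 - logL_H0))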
Where does Bayes' rule come in handy?
In diagnostic cases, where we are trying to calculate P(Disease | Symptom), we often know P(Symptom | Disease), the probability of having the symptom given the disease, because this has been collected from previously confirmed cases.
In scientific cases, where we want to know P(Hypothesis | Result), the probability that a hypothesis is true given some relevant result, we may know P(Result | Hypothesis), the probability that we would obtain that result given that the hypothesis is true; this is often statistically calculable, as when we have a p-value.
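A worked sketch of the diagnostic case, with made-up numbers for the disease prevalence and the symptom rates:

    # Made-up numbers, for illustration only
    p_disease = 0.01                 # prior: P(Disease)
    p_symptom_given_disease = 0.90   # P(Symptom | Disease), from confirmed cases
    p_symptom_given_healthy = 0.10   # P(Symptom | no Disease)

    # Evidence: P(Symptom), by the law of total probability
    p_symptom = (p_symptom_given_disease * p_disease
                 + p_symptom_given_healthy * (1 - p_disease))

    # Bayes' rule: P(Disease | Symptom)
    print(p_symptom_given_disease * p_disease / p_symptom)  # ~0.083

Note how low the posterior stays despite the symptom: the rare prior (1% prevalence) dominates.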
Applicability to (f)MRI
Let’s take fMRI as a relevant example
Y = Xβ + ε
Measured data: Y
Model (design matrix): X
Model estimates: β (effects), ε (error/variance)
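A minimal sketch of fitting such a GLM by ordinary least squares in Python; the design matrix and data below are simulated, not real fMRI:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100                                    # number of scans (time points)
    X = np.column_stack([rng.normal(size=n),   # regressor for condition 1
                         rng.normal(size=n),   # regressor for condition 2
                         np.ones(n)])          # constant term
    beta_true = np.array([1.5, 0.5, 2.0])      # made-up "true" effects
    y = X @ beta_true + rng.normal(size=n)     # Y = X*beta + epsilon

    # Least-squares estimates of beta and the residual error
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    eps_hat = y - X @ beta_hat
    print(beta_hat, eps_hat.var())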
What do we get with inferential statistics?
T-statistics on the betas (β = (β1, β2, …)), taking the error into account: for a specific voxel we would ONLY get the chance (e.g. < 5%) of seeing data this extreme if there were NO effect (e.g. no β1 > β2); see the sketch after this list
But what about the probability of the model itself?
What is the probability that β1 > β2 at some voxel or region?
Could we get some quantitative measure of that?
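A sketch of the classical answer on the simulated GLM from above (rebuilt here so the snippet runs on its own): a t-test on the contrast β1 − β2. Note what the p-value does and does not tell us:

    import numpy as np
    from scipy import stats

    # Rebuild the toy GLM from the previous sketch
    rng = np.random.default_rng(0)
    n = 100
    X = np.column_stack([rng.normal(size=n), rng.normal(size=n), np.ones(n)])
    y = X @ np.array([1.5, 0.5, 2.0]) + rng.normal(size=n)
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Classical t-statistic for the contrast beta1 - beta2
    c = np.array([1.0, -1.0, 0.0])
    df = n - np.linalg.matrix_rank(X)            # residual degrees of freedom
    resid = y - X @ beta_hat
    sigma2 = resid @ resid / df                  # residual variance estimate
    se = np.sqrt(sigma2 * c @ np.linalg.inv(X.T @ X) @ c)
    t = c @ beta_hat / se
    p = stats.t.sf(t, df)
    print(t, p)  # p = chance of a t this large if there were NO effect,
                 # NOT the probability that beta1 > beta2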
What do we get with Bayesian statistics?
Here, the Bayesian idea is to use our post-hoc knowledge (the observed data) to estimate the model directly, also allowing us to compare hypotheses (models) and see which fits our data best
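A minimal sketch of both ideas on the same toy GLM as before (a toy under a flat, noninformative prior, not a full fMRI Bayesian analysis): under that prior the posterior of the contrast c'β is a t-distribution centred on the estimate, so P(β1 > β2 | Y) can be read off directly, and BIC serves as a rough stand-in for the model evidence when comparing models:

    import numpy as np
    from scipy import stats

    # Same toy GLM as in the previous sketches
    rng = np.random.default_rng(0)
    n = 100
    X = np.column_stack([rng.normal(size=n), rng.normal(size=n), np.ones(n)])
    y = X @ np.array([1.5, 0.5, 2.0]) + rng.normal(size=n)
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat
    df = n - np.linalg.matrix_rank(X)
    c = np.array([1.0, -1.0, 0.0])
    se = np.sqrt((resid @ resid / df) * c @ np.linalg.inv(X.T @ X) @ c)

    # Under a flat prior, the posterior of c'beta is a t-distribution centred
    # on c @ beta_hat, so the probability that beta1 > beta2 given the data is:
    print(stats.t.cdf(c @ beta_hat / se, df))

    # Comparing models: BIC as a rough stand-in for the model evidence
    def bic(y, X):
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ b
        return len(y) * np.log(r @ r / len(y)) + X.shape[1] * np.log(len(y))

    print(bic(y, X), bic(y, X[:, 1:]))  # lower BIC = better-supported model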