In Bayesian probability theory, if, given a likelihood function $p(x \mid \theta)$, the posterior distribution $p(\theta \mid x)$ is in the same probability distribution family as the prior probability distribution $p(\theta)$, the prior and posterior are then called conjugate distributions with respect to that likelihood function, and the prior is called a conjugate prior for the likelihood function $p(x \mid \theta)$.
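A standard example is the Beta distribution as a conjugate prior for the Bernoulli likelihood (a minimal worked sketch; the symbols $\alpha$, $\beta$, $s$, and $n$ are introduced here for illustration). With prior
$$
p(\theta) = \mathrm{Beta}(\theta \mid \alpha, \beta) \propto \theta^{\alpha-1}(1-\theta)^{\beta-1}
$$
and likelihood $p(x \mid \theta) = \theta^{x}(1-\theta)^{1-x}$ for $x \in \{0,1\}$, the posterior after $n$ observations containing $s$ successes is
$$
p(\theta \mid x_{1:n}) \propto \theta^{\alpha+s-1}(1-\theta)^{\beta+n-s-1} = \mathrm{Beta}(\theta \mid \alpha+s,\ \beta+n-s),
$$
which is again a Beta distribution, so the prior and posterior belong to the same family.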
A conjugate prior is an algebraic convenience, giving a closed-form expression for the posterior; otherwise, numerical integration may be necessary. Further, conjugate priors may give intuition by more transparently showing how a likelihood function updates a prior distribution.
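To illustrate both points, here is a minimal Python sketch of the Beta-Bernoulli update above, in which the posterior is obtained in closed form simply by adding the observed counts to the prior parameters as "pseudo-counts" (the coin-flip data and variable names are hypothetical, chosen for illustration):

from scipy.stats import beta

# Hypothetical data: 7 heads observed in 10 coin tosses.
heads, tosses = 7, 10

# Beta(alpha, beta) prior; alpha = beta = 1 is the uniform prior.
alpha_prior, beta_prior = 1.0, 1.0

# Conjugacy gives the posterior in closed form: the observed
# successes and failures are added to the prior parameters.
alpha_post = alpha_prior + heads
beta_post = beta_prior + (tosses - heads)

posterior = beta(alpha_post, beta_post)
print(posterior.mean())  # posterior mean of theta: 8/12 = 0.666...

The pseudo-count reading makes the update transparent: the prior acts as if $\alpha - 1$ successes and $\beta - 1$ failures had already been seen, and the data simply add to those tallies.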
The concept, as well as the term "conjugate prior", was introduced by Howard Raiffa and Robert Schlaifer in their work on Bayesian decision theory. A similar concept had been discovered independently by George Alfred Barnard.