In statistics, the matrix variate Dirichlet distribution is a generalization of the matrix variate beta distribution and of the Dirichlet distribution.
Suppose $U_1,\ldots,U_r$ are $p\times p$ positive definite matrices such that $I_p-\sum_{i=1}^{r}U_i$ is also positive definite, where $I_p$ is the $p\times p$ identity matrix. Then we say that the $U_i$ have a matrix variate Dirichlet distribution, $\left(U_1,\ldots,U_r\right)\sim D_p\left(a_1,\ldots,a_r;a_{r+1}\right)$, if their joint probability density function is
$$\left\{\beta_p\left(a_1,\ldots,a_r;a_{r+1}\right)\right\}^{-1}\prod_{i=1}^{r}\det\left(U_i\right)^{a_i-(p+1)/2}\det\left(I_p-\sum_{i=1}^{r}U_i\right)^{a_{r+1}-(p+1)/2},$$

where $a_i>(p-1)/2$ for $i=1,\ldots,r+1$, and $\beta_p(\cdots)$ is the multivariate beta function.
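As a concrete check on this density, a minimal numerical sketch is given below (Python with NumPy and SciPy; the function names are my own, and it assumes the standard identity $\beta_p(a_1,\ldots,a_{r+1})=\prod_{i=1}^{r+1}\Gamma_p(a_i)/\Gamma_p\!\left(\sum_{i=1}^{r+1}a_i\right)$ expressing the multivariate beta function through the multivariate gamma function $\Gamma_p$). It evaluates the log-density by combining the log normalising constant with the log-determinant terms.

    import numpy as np
    from scipy.special import multigammaln

    def log_multivariate_beta(a, p):
        # log beta_p(a_1, ..., a_{r+1}) = sum_i log Gamma_p(a_i) - log Gamma_p(sum_i a_i)
        a = np.asarray(a, dtype=float)
        return sum(multigammaln(x, p) for x in a) - multigammaln(a.sum(), p)

    def matrix_dirichlet_logpdf(U, a):
        # U: list of the r positive definite p x p matrices U_1, ..., U_r
        # a: the r+1 shape parameters (a_1, ..., a_r, a_{r+1})
        p = U[0].shape[0]
        U_last = np.eye(p) - sum(U)                    # U_{r+1} = I_p - sum_i U_i
        log_dets = [np.linalg.slogdet(M)[1] for M in list(U) + [U_last]]
        kernel = sum((ai - (p + 1) / 2) * ld for ai, ld in zip(a, log_dets))
        return kernel - log_multivariate_beta(a, p)

Setting $r=1$ recovers the log-density of the matrix variate beta distribution mentioned above.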
If we write $U_{r+1}=I_p-\sum_{i=1}^{r}U_i$, then the PDF takes the simpler form

$$\left\{\beta_p\left(a_1,\ldots,a_{r+1}\right)\right\}^{-1}\prod_{i=1}^{r+1}\det\left(U_i\right)^{a_i-(p+1)/2},$$
on the understanding that $\sum_{i=1}^{r+1}U_i=I_p$.
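Beyond the density itself, a common way such matrices arise (not derived in this article, so treat the following as an illustrative sketch) is from independent Wishart matrices: if $S_i\sim W_p(2a_i,\Sigma)$ for $i=1,\ldots,r+1$ and $S=\sum_{i=1}^{r+1}S_i$, then under the standard construction $U_i=S^{-1/2}S_iS^{-1/2}$ the matrices $(U_1,\ldots,U_r)$ follow $D_p(a_1,\ldots,a_r;a_{r+1})$. The Python helper below (its name is mine; it uses scipy.stats.wishart, whose sampler imposes its own degrees-of-freedom restrictions, and it assumes $p\geq 2$) implements that recipe.

    import numpy as np
    from scipy.stats import wishart
    from scipy.linalg import sqrtm

    def sample_matrix_dirichlet(a, p, sigma=None):
        # One draw of (U_1, ..., U_r) ~ D_p(a_1, ..., a_r; a_{r+1}) via the
        # Wishart construction U_i = S^{-1/2} S_i S^{-1/2}, S = S_1 + ... + S_{r+1};
        # the degrees of freedom 2*a_i correspond to a_i > (p-1)/2 above.
        sigma = np.eye(p) if sigma is None else sigma   # the common scale matrix cancels
        S_parts = [wishart.rvs(df=2 * ai, scale=sigma) for ai in a]
        S = sum(S_parts)
        S_inv_root = np.linalg.inv(np.real(sqrtm(S)))   # symmetric inverse square root
        U_all = [S_inv_root @ Si @ S_inv_root for Si in S_parts]
        return U_all[:-1]                               # U_{r+1} = I_p minus the sum of these

A draw can be sanity-checked against matrix_dirichlet_logpdf above, or by verifying that each returned $U_i$ and $I_p-\sum_{i=1}^{r}U_i$ are positive definite.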