The cumulants of a random variable X are defined using the cumulant-generating function K(t), which is the natural logarithm of the moment-generating function:

$$K(t) = \log \operatorname{E}\left[e^{tX}\right].$$
The cumulants $\kappa_n$ are obtained from a power series expansion of the cumulant-generating function:

$$K(t) = \sum_{n=1}^{\infty} \kappa_n \frac{t^n}{n!} = \kappa_1 \frac{t}{1!} + \kappa_2 \frac{t^2}{2!} + \kappa_3 \frac{t^3}{3!} + \cdots = \mu t + \sigma^2 \frac{t^2}{2} + \cdots.$$
This expansion is a Maclaurin series, so the nth cumulant can be obtained by differentiating the above expansion n times and evaluating the result at zero:[1]

$$\kappa_n = K^{(n)}(0).$$
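As a concrete illustration, the following sketch (assuming the SymPy library is available; the Poisson family is an arbitrary choice for this example) recovers cumulants by differentiating $K(t) = \log \operatorname{E}[e^{tX}]$ at $t = 0$. For a Poisson(λ) variable every cumulant equals λ.

```python
# Minimal sketch: cumulants of Poisson(lam) by repeated differentiation of K(t) at t = 0.
import sympy as sp

t = sp.symbols('t', real=True)
lam = sp.symbols('lam', positive=True)
M = sp.exp(lam * (sp.exp(t) - 1))      # moment-generating function of Poisson(lam)
K = sp.log(M)                          # cumulant-generating function

for n in range(1, 5):
    kappa_n = sp.diff(K, t, n).subs(t, 0)
    print(n, sp.simplify(kappa_n))     # every line should print lam
```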
If the moment-generating function does not exist, the cumulants can be defined in terms of the relationship between cumulants and moments discussed later.
Some writers[2][3] prefer to define the cumulant-generating function as the natural logarithm of the characteristic function, which is sometimes also called the second characteristic function:[4][5]

$$H(t) = \log \operatorname{E}\left[e^{itX}\right] = \sum_{n=1}^{\infty} \kappa_n \frac{(it)^n}{n!} = \mu it - \sigma^2 \frac{t^2}{2} + \cdots$$
An advantage of H(t) (in some sense the function K(t) evaluated for purely imaginary arguments) is that E[e^{itX}] is well defined for all real values of t, even when E[e^{tX}] is not well defined for all real values of t, as can occur when there is "too much" probability that X has a large magnitude. Although the function H(t) will be well defined, it will nonetheless mimic K(t) in terms of the length of its Maclaurin series, which may not extend beyond (or, rarely, even to) linear order in the argument t; in particular, the number of cumulants that are well defined will not change. Nevertheless, even when H(t) does not have a long Maclaurin series, it can be used directly in analyzing and, particularly, adding random variables. Both the Cauchy distribution (also called the Lorentzian) and, more generally, stable distributions (related to the Lévy distribution) are examples of distributions for which the power-series expansions of the generating functions have only finitely many well-defined terms.
The $n$th cumulant $\kappa_n(X)$ of (the distribution of) a random variable $X$ enjoys the following properties:

Shift invariance: for $n \geq 2$ and any constant $c$, $\kappa_n(X + c) = \kappa_n(X)$, while $\kappa_1(X + c) = \kappa_1(X) + c$.

Homogeneity: the $n$th cumulant is homogeneous of degree $n$, that is, $\kappa_n(cX) = c^n \kappa_n(X)$ for any constant $c$.

Additivity (the cumulative property): if $X$ and $Y$ are independent random variables, then $\kappa_n(X + Y) = \kappa_n(X) + \kappa_n(Y)$.
The cumulative property follows quickly by considering the cumulant-generating function:

$$\begin{aligned}
K_{X_1+\cdots+X_m}(t) &= \log \operatorname{E}\left[e^{t(X_1+\cdots+X_m)}\right] \\
&= \log\left(\operatorname{E}\left[e^{tX_1}\right]\cdots\operatorname{E}\left[e^{tX_m}\right]\right) \\
&= \log \operatorname{E}\left[e^{tX_1}\right] + \cdots + \log \operatorname{E}\left[e^{tX_m}\right] \\
&= K_{X_1}(t) + \cdots + K_{X_m}(t),
\end{aligned}$$

so that each cumulant of a sum of independent random variables is the sum of the corresponding cumulants of the addends. That is, when the addends are statistically independent, the mean of the sum is the sum of the means, the variance of the sum is the sum of the variances, the third cumulant (which happens to be the third central moment) of the sum is the sum of the third cumulants, and so on for each order of cumulant.
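This additivity can also be checked numerically. The sketch below (an illustration, not from the text) uses SciPy's k-statistics, which are unbiased estimators of the first four cumulants, to verify that the sample cumulants of a sum of independent samples approximately equal the sums of the individual sample cumulants; the gamma and exponential distributions used here are arbitrary choices.

```python
# Hedged numerical check of cumulant additivity for independent summands.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.gamma(shape=2.0, scale=1.5, size=200_000)
y = rng.exponential(scale=0.7, size=200_000)

for n in (1, 2, 3, 4):
    lhs = stats.kstat(x + y, n)                  # cumulant estimate of the sum
    rhs = stats.kstat(x, n) + stats.kstat(y, n)  # sum of the individual estimates
    print(n, round(lhs, 3), round(rhs, 3))       # the two columns should nearly agree
```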
A distribution with given cumulants κn can be approximated through an Edgeworth series.
All of the higher cumulants are polynomial functions of the central moments, with integer coefficients, but only in degrees 2 and 3 are the cumulants actually central moments.
Introducing the variance-to-mean ratio

$$\varepsilon = \mu^{-1}\sigma^2 = \kappa_1^{-1}\kappa_2,$$

the above probability distributions get a unified formula for the derivative of the cumulant-generating function:

$$K'(t) = \left(1 + (e^{-t} - 1)\varepsilon\right)^{-1}\mu.$$

The second derivative is

$$K''(t) = \left(\varepsilon - (\varepsilon - 1)e^{t}\right)^{-2}\mu\varepsilon e^{t},$$

confirming that the first cumulant is $\kappa_1 = K'(0) = \mu$ and the second cumulant is $\kappa_2 = K''(0) = \mu\varepsilon$.
The constant random variables X = μ have ε = 0.
The binomial distributions have ε = 1 − p so that 0 < ε < 1.
The Poisson distributions have ε = 1.
The negative binomial distributions have ε = p−1 so that ε > 1.
Note the analogy to the classification of conic sections by eccentricity: circles ε = 0, ellipses 0 < ε < 1, parabolas ε = 1, hyperbolas ε > 1.
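The unified formula can be checked symbolically for one of these families. The sketch below (assuming SymPy; the binomial cumulant-generating function $K(t) = N\log(1 - p + pe^t)$ is standard, with $\mu = Np$ and $\varepsilon = 1 - p$) verifies that its derivative matches $K'(t) = (1 + (e^{-t}-1)\varepsilon)^{-1}\mu$.

```python
# Symbolic check of the unified K'(t) formula for the binomial family.
import sympy as sp

t, p, N = sp.symbols('t p N', positive=True)
K = N * sp.log(1 - p + p * sp.exp(t))        # binomial cumulant-generating function
mu, eps = N * p, 1 - p
unified = mu / (1 + (sp.exp(-t) - 1) * eps)  # unified expression for K'(t)
print(sp.simplify(sp.diff(K, t) - unified))  # simplifies to 0
```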
The cumulant-generating function $K(t)$, if it exists, is infinitely differentiable and convex, and passes through the origin. Its first derivative ranges monotonically in the open interval from the infimum to the supremum of the support of the probability distribution, and its second derivative is strictly positive everywhere it is defined, except for the degenerate distribution of a single point mass. The cumulant-generating function exists if and only if the tails of the distribution are majorized by an exponential decay, that is (see Big O notation),

$$\exists\, c>0,\;\; F(x)=O(e^{cx}),\; x\to -\infty; \qquad \text{and} \qquad \exists\, d>0,\;\; 1-F(x)=O(e^{-dx}),\; x\to +\infty,$$

where $F$ is the cumulative distribution function. The cumulant-generating function will have vertical asymptote(s) at the negative supremum of such $c$, if such a supremum exists, and at the supremum of such $d$, if such a supremum exists; otherwise it will be defined for all real numbers.
If the support of a random variable $X$ has finite upper or lower bounds, then its cumulant-generating function $y = K(t)$, if it exists, approaches asymptote(s) whose slope is equal to the infimum or supremum of the support,

$$y = (t+1)\inf \operatorname{supp} X - \mu(X), \qquad \text{and} \qquad y = (t-1)\sup \operatorname{supp} X + \mu(X),$$

respectively, lying above both these lines everywhere. (The integrals

$$\int_{-\infty}^{0}\left[\inf \operatorname{supp} X - K'(t)\right]\,dt, \qquad \int_{\infty}^{0}\left[\sup \operatorname{supp} X - K'(t)\right]\,dt$$

yield the y-intercepts of these asymptotes, since $K(0)=0$.)
For a shift of the distribution by $c$, $K_{X+c}(t) = K_X(t) + ct$. For a degenerate point mass at $c$, the cumulant-generating function is the straight line $K_c(t) = ct$, and more generally, $K_{X+Y} = K_X + K_Y$ if and only if $X$ and $Y$ are independent and their cumulant-generating functions exist (subindependence and the existence of second moments sufficing to imply independence[6]).
The natural exponential family of a distribution may be realized by shifting or translating K(t), and adjusting it vertically so that it always passes through the origin: if $f$ is the pdf with cumulant-generating function $K(t) = \log M(t)$, and $f\mid\theta$ is its natural exponential family, then

$$f(x\mid\theta) = \frac{1}{M(\theta)}\, e^{\theta x} f(x), \qquad \text{and} \qquad K(t\mid\theta) = K(t+\theta) - K(\theta).$$
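A small symbolic check of the tilting identity, assuming SymPy and using the Poisson family for illustration: tilting a Poisson(λ) density by θ yields a Poisson(λe^θ) density, whose cumulant-generating function should equal $K(t+\theta) - K(\theta)$.

```python
# Check K(t | theta) = K(t + theta) - K(theta) for exponential tilting of Poisson(lam).
import sympy as sp

t, theta, lam = sp.symbols('t theta lam', positive=True)
K = lambda s: lam * (sp.exp(s) - 1)                    # K(t) for Poisson(lam)
K_tilted = lam * sp.exp(theta) * (sp.exp(t) - 1)       # K(t) for Poisson(lam * e^theta)
print(sp.simplify(K_tilted - (K(t + theta) - K(theta))))   # prints 0
```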
If $K(t)$ is finite for a range $t_1 < \operatorname{Re}(t) < t_2$ with $t_1 < 0 < t_2$, then $K(t)$ is analytic and infinitely differentiable for $t_1 < \operatorname{Re}(t) < t_2$. Moreover, for real $t$ with $t_1 < t < t_2$, $K(t)$ is strictly convex and $K'(t)$ is strictly increasing.
Given the results for the cumulants of the normal distribution, it might be hoped to find families of distributions for which $\kappa_m = \kappa_{m+1} = \cdots = 0$ for some $m > 3$, with the lower-order cumulants (orders 3 to $m-1$) being non-zero. There are no such distributions.[7] The underlying result here is that the cumulant-generating function cannot be a finite-order polynomial of degree greater than 2.
The moment-generating function is given by:

$$M(t) = 1 + \sum_{n=1}^{\infty} \frac{\mu'_n t^n}{n!} = \exp\left(\sum_{n=1}^{\infty} \frac{\kappa_n t^n}{n!}\right) = \exp(K(t)).$$

So the cumulant-generating function is the logarithm of the moment-generating function,

$$K(t) = \log M(t).$$
The first cumulant is the expected value; the second and third cumulants are respectively the second and third central moments (the second central moment is the variance); but the higher cumulants are neither moments nor central moments, but rather more complicated polynomial functions of the moments.
The moments can be recovered in terms of cumulants by evaluating the $n$th derivative of $\exp(K(t))$ at $t=0$,

$$\mu'_n = M^{(n)}(0) = \left.\frac{\mathrm{d}^n \exp(K(t))}{\mathrm{d}t^n}\right|_{t=0}.$$

Likewise, the cumulants can be recovered in terms of moments by evaluating the $n$th derivative of $\log M(t)$ at $t=0$,

$$\kappa_n = K^{(n)}(0) = \left.\frac{\mathrm{d}^n \log M(t)}{\mathrm{d}t^n}\right|_{t=0}.$$
The explicit expression for the nth moment in terms of the first n cumulants, and vice versa, can be obtained by using Faà di Bruno's formula for higher derivatives of composite functions. In general, we have

$$\mu'_n = \sum_{k=1}^{n} B_{n,k}(\kappa_1, \ldots, \kappa_{n-k+1}),$$

$$\kappa_n = \sum_{k=1}^{n} (-1)^{k-1}(k-1)!\, B_{n,k}(\mu'_1, \ldots, \mu'_{n-k+1}),$$

where $B_{n,k}$ are incomplete (or partial) Bell polynomials.
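A sketch of these conversions, assuming SymPy (whose bell function implements the incomplete Bell polynomials $B_{n,k}$); the printed expansions reproduce the explicit moment and cumulant polynomials listed below.

```python
# Moments from cumulants and cumulants from moments via incomplete Bell polynomials.
import sympy as sp
from sympy import bell, factorial

kappa = sp.symbols('kappa1:5')   # kappa1, ..., kappa4 (formal cumulants)
mu = sp.symbols('mu1:5')         # mu1, ..., mu4 (formal raw moments)

for n in range(1, 5):
    moment_n = sum(bell(n, k, kappa[:n - k + 1]) for k in range(1, n + 1))
    cumulant_n = sum((-1)**(k - 1) * factorial(k - 1) * bell(n, k, mu[:n - k + 1])
                     for k in range(1, n + 1))
    print('moment', n, '=', sp.expand(moment_n))
    print('cumulant', n, '=', sp.expand(cumulant_n))
```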
In like manner, if the mean is given by $\mu$, the central moment generating function is given by

$$C(t) = \operatorname{E}\left[e^{t(X-\mu)}\right] = e^{-\mu t}M(t) = \exp(K(t) - \mu t),$$

and the $n$th central moment is obtained in terms of cumulants as

$$\mu_n = C^{(n)}(0) = \left.\frac{\mathrm{d}^n}{\mathrm{d}t^n}\exp(K(t)-\mu t)\right|_{t=0} = \sum_{k=1}^{n} B_{n,k}(0,\kappa_2,\ldots,\kappa_{n-k+1}).$$
Also, for $n > 1$, the $n$th cumulant in terms of the central moments is

$$\kappa_n = K^{(n)}(0) = \left.\frac{\mathrm{d}^n}{\mathrm{d}t^n}\left(\log C(t) + \mu t\right)\right|_{t=0} = \sum_{k=1}^{n}(-1)^{k-1}(k-1)!\,B_{n,k}(0,\mu_2,\ldots,\mu_{n-k+1}).$$
The nth moment μ′n is an nth-degree polynomial in the first n cumulants. The first few expressions are:
$$\begin{aligned}
\mu'_1 &= \kappa_1 \\
\mu'_2 &= \kappa_2 + \kappa_1^2 \\
\mu'_3 &= \kappa_3 + 3\kappa_2\kappa_1 + \kappa_1^3 \\
\mu'_4 &= \kappa_4 + 4\kappa_3\kappa_1 + 3\kappa_2^2 + 6\kappa_2\kappa_1^2 + \kappa_1^4 \\
\mu'_5 &= \kappa_5 + 5\kappa_4\kappa_1 + 10\kappa_3\kappa_2 + 10\kappa_3\kappa_1^2 + 15\kappa_2^2\kappa_1 + 10\kappa_2\kappa_1^3 + \kappa_1^5 \\
\mu'_6 &= \kappa_6 + 6\kappa_5\kappa_1 + 15\kappa_4\kappa_2 + 15\kappa_4\kappa_1^2 + 10\kappa_3^2 + 60\kappa_3\kappa_2\kappa_1 + 20\kappa_3\kappa_1^3 + 15\kappa_2^3 + 45\kappa_2^2\kappa_1^2 + 15\kappa_2\kappa_1^4 + \kappa_1^6.
\end{aligned}$$

The "prime" distinguishes the moments $\mu'_n$ from the central moments $\mu_n$. To express the central moments as functions of the cumulants, just drop from these polynomials all terms in which $\kappa_1$ appears as a factor:

$$\begin{aligned}
\mu_1 &= 0 \\
\mu_2 &= \kappa_2 \\
\mu_3 &= \kappa_3 \\
\mu_4 &= \kappa_4 + 3\kappa_2^2 \\
\mu_5 &= \kappa_5 + 10\kappa_3\kappa_2 \\
\mu_6 &= \kappa_6 + 15\kappa_4\kappa_2 + 10\kappa_3^2 + 15\kappa_2^3.
\end{aligned}$$

Similarly, the $n$th cumulant $\kappa_n$ is an $n$th-degree polynomial in the first $n$ non-central moments. The first few expressions are:

$$\begin{aligned}
\kappa_1 &= \mu'_1 \\
\kappa_2 &= \mu'_2 - {\mu'_1}^2 \\
\kappa_3 &= \mu'_3 - 3\mu'_2\mu'_1 + 2{\mu'_1}^3 \\
\kappa_4 &= \mu'_4 - 4\mu'_3\mu'_1 - 3{\mu'_2}^2 + 12\mu'_2{\mu'_1}^2 - 6{\mu'_1}^4 \\
\kappa_5 &= \mu'_5 - 5\mu'_4\mu'_1 - 10\mu'_3\mu'_2 + 20\mu'_3{\mu'_1}^2 + 30{\mu'_2}^2\mu'_1 - 60\mu'_2{\mu'_1}^3 + 24{\mu'_1}^5 \\
\kappa_6 &= \mu'_6 - 6\mu'_5\mu'_1 - 15\mu'_4\mu'_2 + 30\mu'_4{\mu'_1}^2 - 10{\mu'_3}^2 + 120\mu'_3\mu'_2\mu'_1 - 120\mu'_3{\mu'_1}^3 + 30{\mu'_2}^3 - 270{\mu'_2}^2{\mu'_1}^2 + 360\mu'_2{\mu'_1}^4 - 120{\mu'_1}^6.
\end{aligned}$$
In general,[8] the cumulant is the determinant of a matrix:

$$\kappa_l = (-1)^{l+1} \begin{vmatrix}
\mu'_1 & 1 & 0 & 0 & 0 & 0 & \cdots & 0 \\
\mu'_2 & \mu'_1 & 1 & 0 & 0 & 0 & \cdots & 0 \\
\mu'_3 & \mu'_2 & \binom{2}{1}\mu'_1 & 1 & 0 & 0 & \cdots & 0 \\
\mu'_4 & \mu'_3 & \binom{3}{1}\mu'_2 & \binom{3}{2}\mu'_1 & 1 & 0 & \cdots & 0 \\
\mu'_5 & \mu'_4 & \binom{4}{1}\mu'_3 & \binom{4}{2}\mu'_2 & \binom{4}{3}\mu'_1 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \ddots & \vdots \\
\mu'_{l-1} & \mu'_{l-2} & \cdots & \cdots & \cdots & \cdots & \ddots & 1 \\
\mu'_l & \mu'_{l-1} & \cdots & \cdots & \cdots & \cdots & \cdots & \binom{l-1}{l-2}\mu'_1
\end{vmatrix}$$
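A sketch of this determinant formula, assuming SymPy: build the $l \times l$ matrix with binomial-coefficient entries and check that $(-1)^{l+1}$ times its determinant reproduces $\kappa_l$ as a polynomial in the raw moments. The function and symbol names are illustrative.

```python
# Cumulant from raw moments via the determinant formula.
import sympy as sp

def cumulant_from_moments(l, mu):
    """mu is a list with mu[0] = 1 (playing the role of mu'_0) and mu[i] the i-th raw moment."""
    A = sp.zeros(l, l)
    for i in range(1, l + 1):
        A[i - 1, 0] = mu[i]                                   # first column: mu'_i
        for j in range(2, l + 1):
            if i >= j - 1:
                A[i - 1, j - 1] = sp.binomial(i - 1, j - 2) * mu[i - j + 1]
    return (-1)**(l + 1) * A.det()

m = sp.symbols('m0:5')                                        # m1..m4 are formal moments
mu = [sp.Integer(1)] + list(m[1:])
print(sp.expand(cumulant_from_moments(3, mu)))   # matches kappa_3 = m3 - 3 m2 m1 + 2 m1^3
print(sp.expand(cumulant_from_moments(4, mu)))   # matches the kappa_4 polynomial above
```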
To express the cumulants $\kappa_n$ for $n > 1$ as functions of the central moments, drop from these polynomials all terms in which $\mu'_1$ appears as a factor:

$$\begin{aligned}
\kappa_2 &= \mu_2 \\
\kappa_3 &= \mu_3 \\
\kappa_4 &= \mu_4 - 3\mu_2^2 \\
\kappa_5 &= \mu_5 - 10\mu_3\mu_2 \\
\kappa_6 &= \mu_6 - 15\mu_4\mu_2 - 10\mu_3^2 + 30\mu_2^3.
\end{aligned}$$
The cumulants can be related to the moments by differentiating the relationship $\log M(t) = K(t)$ with respect to $t$, giving $M'(t) = K'(t)\,M(t)$, which conveniently contains no exponentials or logarithms. Equating the coefficient of $t^{n-1}/(n-1)!$ on the left and right sides and using $\mu'_0 = 1$ gives the following formulas for $n \geq 1$:[9]

$$\begin{aligned}
\mu'_1 &= \kappa_1 \\
\mu'_2 &= \kappa_1\mu'_1 + \kappa_2 \\
\mu'_3 &= \kappa_1\mu'_2 + 2\kappa_2\mu'_1 + \kappa_3 \\
\mu'_4 &= \kappa_1\mu'_3 + 3\kappa_2\mu'_2 + 3\kappa_3\mu'_1 + \kappa_4 \\
\mu'_5 &= \kappa_1\mu'_4 + 4\kappa_2\mu'_3 + 6\kappa_3\mu'_2 + 4\kappa_4\mu'_1 + \kappa_5 \\
\mu'_6 &= \kappa_1\mu'_5 + 5\kappa_2\mu'_4 + 10\kappa_3\mu'_3 + 10\kappa_4\mu'_2 + 5\kappa_5\mu'_1 + \kappa_6 \\
\mu'_n &= \sum_{m=1}^{n-1}\binom{n-1}{m-1}\kappa_m\mu'_{n-m} + \kappa_n.
\end{aligned}$$

These allow either $\kappa_n$ or $\mu'_n$ to be computed from the other using knowledge of the lower-order cumulants and moments. The corresponding formulas for the central moments $\mu_n$ for $n \geq 2$ are formed from these formulas by setting $\mu'_1 = \kappa_1 = 0$ and replacing each $\mu'_n$ with $\mu_n$ for $n \geq 2$:

$$\mu_2 = \kappa_2, \qquad \mu_3 = \kappa_3, \qquad \mu_n = \sum_{m=2}^{n-2}\binom{n-1}{m-1}\kappa_m\mu_{n-m} + \kappa_n.$$
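The recursion lends itself to a direct implementation. The following sketch (plain Python, with illustrative function names) computes raw moments from cumulants and, by rearranging the same identity, cumulants from raw moments.

```python
# mu'_n = sum_{m=1}^{n-1} C(n-1, m-1) kappa_m mu'_{n-m} + kappa_n, and its inverse.
from math import comb

def moments_from_cumulants(kappa):
    """kappa[1..n] -> raw moments mu'[1..n]; index 0 of both lists is a placeholder."""
    n = len(kappa) - 1
    mu = [1.0] + [0.0] * n
    for j in range(1, n + 1):
        mu[j] = kappa[j] + sum(comb(j - 1, m - 1) * kappa[m] * mu[j - m]
                               for m in range(1, j))
    return mu

def cumulants_from_moments(mu):
    """Raw moments mu[1..n] -> cumulants kappa[1..n], inverting the same recursion."""
    n = len(mu) - 1
    kappa = [0.0] * (n + 1)
    for j in range(1, n + 1):
        kappa[j] = mu[j] - sum(comb(j - 1, m - 1) * kappa[m] * mu[j - m]
                               for m in range(1, j))
    return kappa

# Example: a normal distribution with mean 1 and variance 2 has cumulants (1, 2, 0, 0),
# so its raw moments mu'_1..mu'_4 are 1, 3, 7, 25.
print(moments_from_cumulants([0, 1, 2, 0, 0]))
```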
These polynomials have a remarkable combinatorial interpretation: the coefficients count certain partitions of sets. A general form of these polynomials is

$$\mu'_n = \sum_{\pi\,\in\,\Pi}\prod_{B\,\in\,\pi}\kappa_{|B|},$$

where $\Pi$ is the set of all partitions of a set of size $n$, and $B$ runs through the list of all blocks of the partition $\pi$.
Thus each monomial is a constant times a product of cumulants in which the sum of the indices is $n$ (e.g., in the term $\kappa_3\kappa_2^2\kappa_1$, the sum of the indices is 3 + 2 + 2 + 1 = 8; this appears in the polynomial that expresses the 8th moment as a function of the first eight cumulants). Each term corresponds to a partition of the integer $n$. The coefficient in each term is the number of partitions of a set of $n$ members that collapse to that partition of the integer $n$ when the members of the set become indistinguishable.
Further connection between cumulants and combinatorics can be found in the work of Gian-Carlo Rota, where links to invariant theory, symmetric functions, and binomial sequences are studied via umbral calculus.[10]
The joint cumulant $\kappa$ of several random variables $X_1, \ldots, X_n$ is defined as the coefficient $\kappa_{1,\ldots,1}(X_1,\ldots,X_n)$ in the Maclaurin series of the multivariate cumulant-generating function (see Section 3.1 of [11]),

$$G(t_1,\dots,t_n) = \log \operatorname{E}\left(e^{\sum_{j=1}^{n} t_j X_j}\right) = \sum_{k_1,\ldots,k_n}\kappa_{k_1,\ldots,k_n}\frac{t_1^{k_1}\cdots t_n^{k_n}}{k_1!\cdots k_n!}.$$

Note that

$$\kappa_{k_1,\dots,k_n} = \left.\left(\frac{\mathrm{d}}{\mathrm{d}t_1}\right)^{k_1}\cdots\left(\frac{\mathrm{d}}{\mathrm{d}t_n}\right)^{k_n} G(t_1,\dots,t_n)\right|_{t_1=\cdots=t_n=0},$$

and, in particular,

$$\kappa(X_1,\ldots,X_n) = \left.\frac{\mathrm{d}^n}{\mathrm{d}t_1\cdots\mathrm{d}t_n}G(t_1,\dots,t_n)\right|_{t_1=\cdots=t_n=0}.$$

As with a single variable, the generating function and cumulant can instead be defined via

$$H(t_1,\dots,t_n) = \log \operatorname{E}\left(e^{\sum_{j=1}^{n} i t_j X_j}\right) = \sum_{k_1,\ldots,k_n}\kappa_{k_1,\ldots,k_n}\, i^{k_1+\cdots+k_n}\frac{t_1^{k_1}\cdots t_n^{k_n}}{k_1!\cdots k_n!},$$

in which case

$$\kappa_{k_1,\dots,k_n} = (-i)^{k_1+\cdots+k_n}\left.\left(\frac{\mathrm{d}}{\mathrm{d}t_1}\right)^{k_1}\cdots\left(\frac{\mathrm{d}}{\mathrm{d}t_n}\right)^{k_n} H(t_1,\dots,t_n)\right|_{t_1=\cdots=t_n=0},$$

and

$$\kappa(X_1,\ldots,X_n) = (-i)^n\left.\frac{\mathrm{d}^n}{\mathrm{d}t_1\cdots\mathrm{d}t_n}H(t_1,\dots,t_n)\right|_{t_1=\cdots=t_n=0}.$$

Observe that $\kappa_{k_1,\dots,k_n}(X_1,\ldots,X_n)$ can also be written as

$$\kappa_{k_1,\dots,k_n} = \left.\frac{\mathrm{d}^{k_1}}{\mathrm{d}t_{1,1}\cdots\mathrm{d}t_{1,k_1}}\cdots\frac{\mathrm{d}^{k_n}}{\mathrm{d}t_{n,1}\cdots\mathrm{d}t_{n,k_n}}G\left(\sum_{j=1}^{k_1}t_{1,j},\dots,\sum_{j=1}^{k_n}t_{n,j}\right)\right|_{t_{i,j}=0},$$

from which we conclude that

$$\kappa_{k_1,\dots,k_n}(X_1,\ldots,X_n) = \kappa_{1,\ldots,1}(\underbrace{X_1,\dots,X_1}_{k_1},\ldots,\underbrace{X_n,\dots,X_n}_{k_n}).$$

For example,

$$\kappa_{2,0,1}(X,Y,Z) = \kappa(X,X,Z), \qquad \text{and} \qquad \kappa_{0,0,n,0}(X,Y,Z,T) = \kappa_n(Z) = \kappa(\underbrace{Z,\dots,Z}_{n}).$$

In particular, the last equality shows that the cumulants of a single random variable are the joint cumulants of multiple copies of that random variable.
The joint cumulant of random variables can be expressed as an alternating sum of products of their mixed moments (see Equation (3.2.7) in [12]):

$$\kappa(X_1,\dots,X_n) = \sum_{\pi}(|\pi|-1)!\,(-1)^{|\pi|-1}\prod_{B\in\pi}\operatorname{E}\left(\prod_{i\in B}X_i\right),$$

where $\pi$ runs through the list of all partitions of {1, ..., n}, $B$ runs through the list of all blocks of the partition $\pi$, and $|\pi|$ is the number of parts in the partition.
For example,

$$\kappa(X) = \operatorname{E}(X)$$

is the expected value of $X$,

$$\kappa(X,Y) = \operatorname{E}(XY) - \operatorname{E}(X)\operatorname{E}(Y)$$

is the covariance of $X$ and $Y$, and

$$\kappa(X,Y,Z) = \operatorname{E}(XYZ) - \operatorname{E}(XY)\operatorname{E}(Z) - \operatorname{E}(XZ)\operatorname{E}(Y) - \operatorname{E}(YZ)\operatorname{E}(X) + 2\operatorname{E}(X)\operatorname{E}(Y)\operatorname{E}(Z).$$

For zero-mean random variables $X_1,\ldots,X_n$, any mixed moment of the form $\prod_{B\in\pi}\operatorname{E}\left(\prod_{i\in B}X_i\right)$ vanishes if $\pi$ is a partition of $\{1,\ldots,n\}$ which contains a singleton $B=\{k\}$. Hence, the expression of their joint cumulant in terms of mixed moments simplifies. For example, if X, Y, Z, W are zero-mean random variables, we have

$$\kappa(X,Y,Z) = \operatorname{E}(XYZ),$$

$$\kappa(X,Y,Z,W) = \operatorname{E}(XYZW) - \operatorname{E}(XY)\operatorname{E}(ZW) - \operatorname{E}(XZ)\operatorname{E}(YW) - \operatorname{E}(XW)\operatorname{E}(YZ).$$

More generally, any coefficient of the Maclaurin series can also be expressed in terms of mixed moments, although there are no concise formulae. Indeed, as noted above, one can write it as a joint cumulant by repeating random variables appropriately, and then apply the above formula to express it in terms of mixed moments. For example,

$$\kappa_{201}(X,Y,Z) = \kappa(X,X,Z) = \operatorname{E}(X^2 Z) - 2\operatorname{E}(XZ)\operatorname{E}(X) - \operatorname{E}(X^2)\operatorname{E}(Z) + 2\operatorname{E}(X)^2\operatorname{E}(Z).$$
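The partition formula can be evaluated mechanically. The sketch below (illustrative; the moment function passed in, and the use of simulated data, are assumptions of this example) enumerates all set partitions, applies the alternating-sum formula, and checks that the joint cumulant of two variables reproduces their covariance.

```python
# Joint cumulant as an alternating sum of products of mixed moments over set partitions.
from math import factorial

def partitions(elements):
    """Yield all partitions of a list, each partition being a list of blocks."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for partial in partitions(rest):
        # place `first` into each existing block, or into a new singleton block
        for i in range(len(partial)):
            yield partial[:i] + [[first] + partial[i]] + partial[i + 1:]
        yield [[first]] + partial

def joint_cumulant(labels, moment):
    """labels: variable names; moment(subset) returns E[product of those variables]."""
    total = 0.0
    for pi in partitions(list(range(len(labels)))):
        term = (-1.0) ** (len(pi) - 1) * factorial(len(pi) - 1)
        for block in pi:
            term *= moment([labels[i] for i in block])
        total += term
    return total

# Check against the covariance: kappa(X, Y) = E(XY) - E(X)E(Y).
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=100_000)
Y = X + rng.normal(size=100_000)          # cov(X, Y) = var(X) = 1
data = {'X': X, 'Y': Y}
emp_moment = lambda labels: float(np.mean(np.prod([data[l] for l in labels], axis=0)))
print(joint_cumulant(['X', 'Y'], emp_moment))   # close to 1
```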
If some of the random variables are independent of all of the others, then any cumulant involving two (or more) independent random variables is zero.
The combinatorial meaning of the expression of mixed moments in terms of cumulants is easier to understand than that of cumulants in terms of mixed moments (see Equation (3.2.6) in [13]):

$$\operatorname{E}(X_1\cdots X_n) = \sum_{\pi}\prod_{B\in\pi}\kappa(X_i : i\in B).$$

For example:

$$\operatorname{E}(XYZ) = \kappa(X,Y,Z) + \kappa(X,Y)\kappa(Z) + \kappa(X,Z)\kappa(Y) + \kappa(Y,Z)\kappa(X) + \kappa(X)\kappa(Y)\kappa(Z).$$

Another important property of joint cumulants is multilinearity:

$$\kappa(X+Y, Z_1, Z_2, \dots) = \kappa(X, Z_1, Z_2, \ldots) + \kappa(Y, Z_1, Z_2, \ldots).$$

Just as the second cumulant is the variance, the joint cumulant of just two random variables is the covariance. The familiar identity

$$\operatorname{var}(X+Y) = \operatorname{var}(X) + 2\operatorname{cov}(X,Y) + \operatorname{var}(Y)$$

generalizes to cumulants:

$$\kappa_n(X+Y) = \sum_{j=0}^{n}\binom{n}{j}\kappa(\underbrace{X,\dots,X}_{j}, \underbrace{Y,\dots,Y}_{n-j}).$$
Main article: law of total cumulance
The law of total expectation and the law of total variance generalize naturally to conditional cumulants. The case n = 3, expressed in the language of (central) moments rather than that of cumulants, says

$$\mu_3(X) = \operatorname{E}(\mu_3(X\mid Y)) + \mu_3(\operatorname{E}(X\mid Y)) + 3\operatorname{cov}(\operatorname{E}(X\mid Y), \operatorname{var}(X\mid Y)).$$

In general,[14]

$$\kappa(X_1,\dots,X_n) = \sum_{\pi}\kappa\bigl(\kappa(X_{\pi_1}\mid Y),\dots,\kappa(X_{\pi_b}\mid Y)\bigr),$$

where $\pi$ runs through the list of all partitions of {1, ..., n}, with blocks $\pi_1, \ldots, \pi_b$, and $\kappa(X_{\pi_j}\mid Y)$ denotes the joint conditional cumulant, given $Y$, of the random variables whose indices lie in the block $\pi_j$.

For certain settings, a derivative identity can be established between the conditional cumulant and the conditional expectation. For example, suppose that Y = X + Z where Z is standard normal independent of X; then for any X it holds that[15]

$$\kappa_{n+1}(X\mid Y=y) = \frac{\mathrm{d}^n}{\mathrm{d}y^n}\operatorname{E}(X\mid Y=y), \qquad n\in\mathbb{N},\; y\in\mathbb{R}.$$

The results can also be extended to the exponential family.[16]
In statistical physics, many extensive quantities (that is, quantities that are proportional to the volume or size of a given system) are related to cumulants of random variables. The deep connection is that in a large system an extensive quantity like the energy or number of particles can be thought of as the sum of (say) the energy associated with a number of nearly independent regions. The fact that the cumulants of these nearly independent random variables will (nearly) add makes it reasonable that extensive quantities should be expected to be related to cumulants.
A system in equilibrium with a thermal bath at temperature T has a fluctuating internal energy E, which can be considered a random variable drawn from a distribution $E\sim p(E)$. The partition function of the system is

$$Z(\beta) = \sum_i e^{-\beta E_i},$$

where β = 1/(kT), k is the Boltzmann constant, and the notation $\langle A\rangle$ is used rather than $\operatorname{E}[A]$ for the expectation value, to avoid confusion with the energy E. Hence the first and second cumulants of the energy E give the average energy and the heat capacity:

$$\langle E\rangle_c = \frac{\partial \log Z}{\partial(-\beta)} = \langle E\rangle, \qquad \langle E^2\rangle_c = \frac{\partial\langle E\rangle_c}{\partial(-\beta)} = kT^2\frac{\partial\langle E\rangle}{\partial T} = kT^2 C.$$
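As a toy illustration (not taken from the text), the following SymPy sketch applies these relations to a two-level system with energies 0 and Δ, checking that the first cumulant of E obtained from log Z is the mean energy and that the second cumulant equals the energy variance $\langle E\rangle(\Delta - \langle E\rangle)$ of this two-level system.

```python
# Energy cumulants of a two-level system from derivatives of log Z with respect to beta.
import sympy as sp

beta, Delta = sp.symbols('beta Delta', positive=True)
Z = 1 + sp.exp(-beta * Delta)            # partition function for energies {0, Delta}
logZ = sp.log(Z)

mean_E = -sp.diff(logZ, beta)            # <E> = -d log Z / d beta
var_E = sp.diff(logZ, beta, 2)           # second cumulant = d^2 log Z / d beta^2
print(sp.simplify(mean_E))                              # <E> = Delta / (exp(beta*Delta) + 1)
print(sp.simplify(var_E - mean_E * (Delta - mean_E)))   # prints 0
```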
The Helmholtz free energy expressed in terms of

$$F(\beta) = -\beta^{-1}\log Z(\beta)$$

further connects thermodynamic quantities with the cumulant-generating function for the energy. Thermodynamic properties that are derivatives of the free energy, such as the internal energy, entropy, and specific heat capacity, can all be readily expressed in terms of these cumulants. Other free energies can be functions of other variables such as the magnetic field or chemical potential $\mu$, e.g.

$$\Omega = -\beta^{-1}\log\left(\langle\exp(-\beta E - \beta\mu N)\rangle\right),$$

where N is the number of particles and $\Omega$ is the grand potential. Again the close relationship between the definition of the free energy and the cumulant-generating function implies that various derivatives of this free energy can be written in terms of joint cumulants of E and N.
The history of cumulants is discussed by Anders Hald.[17][18]
Cumulants were first introduced by Thorvald N. Thiele in 1889, who called them semi-invariants.[19] They were first called cumulants in a 1932 paper by Ronald Fisher and John Wishart.[20] Fisher was publicly reminded of Thiele's work by Neyman, who also notes previous published citations of Thiele brought to Fisher's attention.[21] Stephen Stigler has said that the name cumulant was suggested to Fisher in a letter from Harold Hotelling. In a paper published in 1929, Fisher had called them cumulative moment functions.[22]
The partition function in statistical physics was introduced by Josiah Willard Gibbs in 1901. The free energy is often called Gibbs free energy. In statistical mechanics, cumulants are also known as Ursell functions relating to a publication in 1927.
More generally, the cumulants of a sequence $\{m_n : n = 1, 2, 3, \ldots\}$, not necessarily the moments of any probability distribution, are, by definition,

$$1 + \sum_{n=1}^{\infty}\frac{m_n t^n}{n!} = \exp\left(\sum_{n=1}^{\infty}\frac{\kappa_n t^n}{n!}\right),$$

where the values of $\kappa_n$ for n = 1, 2, 3, ... are found formally, i.e., by algebra alone, in disregard of questions of whether any series converges. All of the difficulties of the "problem of cumulants" are absent when one works formally. The simplest example is that the second cumulant of a probability distribution must always be nonnegative, and is zero only if all of the higher cumulants are zero. Formal cumulants are subject to no such constraints.
In combinatorics, the nth Bell number is the number of partitions of a set of size n. All of the cumulants of the sequence of Bell numbers are equal to 1. The Bell numbers are the moments of the Poisson distribution with expected value 1.
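This can be checked formally. The sketch below (assuming SymPy) builds the exponential generating function of the Bell numbers, $1 + \sum_n B_n t^n/n!$, takes its logarithm as a truncated series, and confirms that every formal cumulant equals 1.

```python
# Formal cumulants of the Bell-number sequence, computed from a truncated power series.
import sympy as sp
from sympy import bell, factorial

t = sp.symbols('t')
N = 6
egf = 1 + sum(bell(n) * t**n / factorial(n) for n in range(1, N + 1))
K = sp.log(egf).series(t, 0, N + 1).removeO()          # formal cumulant-generating function
print([sp.simplify(K.coeff(t, n) * factorial(n)) for n in range(1, N + 1)])   # [1, 1, 1, 1, 1, 1]
```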
For any sequence $\{\kappa_n : n = 1, 2, 3, \ldots\}$ of scalars in a field of characteristic zero, considered as formal cumulants, there is a corresponding sequence $\{\mu'_n : n = 1, 2, 3, \ldots\}$ of formal moments, given by the polynomials above. For those polynomials, construct a polynomial sequence in the following way. Out of the polynomial

$$\mu'_6 = \kappa_6 + 6\kappa_5\kappa_1 + 15\kappa_4\kappa_2 + 15\kappa_4\kappa_1^2 + 10\kappa_3^2 + 60\kappa_3\kappa_2\kappa_1 + 20\kappa_3\kappa_1^3 + 15\kappa_2^3 + 45\kappa_2^2\kappa_1^2 + 15\kappa_2\kappa_1^4 + \kappa_1^6,$$

make a new polynomial in these plus one additional variable x:

$$p_6(x) = \kappa_6\,x + (6\kappa_5\kappa_1 + 15\kappa_4\kappa_2 + 10\kappa_3^2)\,x^2 + (15\kappa_4\kappa_1^2 + 60\kappa_3\kappa_2\kappa_1 + 15\kappa_2^3)\,x^3 + (45\kappa_2^2\kappa_1^2)\,x^4 + (15\kappa_2\kappa_1^4)\,x^5 + (\kappa_1^6)\,x^6,$$

and then generalize the pattern. The pattern is that the numbers of blocks in the aforementioned partitions are the exponents on x. Each coefficient is a polynomial in the cumulants; these are the Bell polynomials, named after Eric Temple Bell.
This sequence of polynomials is of binomial type. In fact, no other sequences of binomial type exist; every polynomial sequence of binomial type is completely determined by its sequence of formal cumulants.
In the above moment-cumulant formula

$$\operatorname{E}(X_1\cdots X_n) = \sum_{\pi}\prod_{B\,\in\,\pi}\kappa(X_i : i\in B)$$

for joint cumulants, one sums over all partitions of the set {1, ..., n}. If instead one sums only over the noncrossing partitions, then, by solving these formulae for the $\kappa$ in terms of the moments, one gets free cumulants rather than the conventional cumulants treated above. These free cumulants were introduced by Roland Speicher and play a central role in free probability theory.[23][24] In that theory, rather than considering independence of random variables, defined in terms of tensor products of algebras of random variables, one considers instead free independence of random variables, defined in terms of free products of algebras.[25]
The ordinary cumulants of degree higher than 2 of the normal distribution are zero. The free cumulants of degree higher than 2 of the Wigner semicircle distribution are zero.[26] This is one respect in which the role of the Wigner distribution in free probability theory is analogous to that of the normal distribution in conventional probability theory.
Weisstein, Eric W. "Cumulant". From MathWorld – A Wolfram Web Resource. http://mathworld.wolfram.com/Cumulant.html

Kendall, M. G., Stuart, A. (1969) The Advanced Theory of Statistics, Volume 1 (3rd Edition). Griffin, London. (Section 3.12)

Lukacs, E. (1970) Characteristic Functions (2nd Edition). Griffin, London. (Page 27)

Lukacs, E. (1970) Characteristic Functions (2nd Edition). Griffin, London. (Section 2.4)

Aapo Hyvarinen, Juha Karhunen, and Erkki Oja (2001) Independent Component Analysis, John Wiley & Sons. (Section 2.7.2)

Hamedani, G. G.; Volkmer, Hans; Behboodian, J. (2012-03-01). "A note on sub-independent random variables and a class of bivariate mixtures". Studia Scientiarum Mathematicarum Hungarica. 49 (1): 19–25. doi:10.1556/SScMath.2011.1183.

Lukacs, E. (1970) Characteristic Functions (2nd Edition), Griffin, London. (Theorem 7.3.5)

Bazant, Martin (February 4, 2005). "MIT 18.366 | Fall 2006 | Graduate | Random Walks and Diffusion, Lecture 2: Moments, Cumulants, and Scaling". MIT OpenCourseWare. Archived from the original on 2022-10-07. Retrieved 2023-09-03. https://ocw.mit.edu/courses/18-366-random-walks-and-diffusion-fall-2006/resources/lec02/

Smith, Peter J. (May 1995). "A Recursive Formulation of the Old Problem of Obtaining Moments from Cumulants and Vice Versa". The American Statistician. 49 (2): 217–218. doi:10.2307/2684642. JSTOR 2684642.

Rota, G.-C.; Shen, J. (2000). "On the Combinatorics of Cumulants". Journal of Combinatorial Theory, Series A. 91 (1–2): 283–304. doi:10.1006/jcta.1999.3017.

Peccati, Giovanni; Taqqu, Murad S. (2011). "Wiener Chaos: Moments, Cumulants and Diagrams". Bocconi & Springer Series. 1. doi:10.1007/978-88-470-1679-8. ISBN 978-88-470-1678-1. ISSN 2039-1471.

Brillinger, D. R. (1969). "The Calculation of Cumulants via Conditioning". Annals of the Institute of Statistical Mathematics. 21: 215–218. doi:10.1007/bf02532246. S2CID 122673823.

Dytso, Alex; Poor, H. Vincent; Shamai Shitz, Shlomo (2023). "Conditional Mean Estimation in Gaussian Noise: A Meta Derivative Identity with Applications". IEEE Transactions on Information Theory. 69 (3): 1883–1898. doi:10.1109/TIT.2022.3216012. S2CID 253308274.

Dytso, Alex; Cardone, Martina; Zieder, Ian (2023). "Meta Derivative Identity for the Conditional Expectation". IEEE Transactions on Information Theory. 69 (7): 4284–4302. doi:10.1109/TIT.2023.3249163. S2CID 257247930.

Hald, A. (2000) "The early history of the cumulants and the Gram–Charlier series" International Statistical Review, 68 (2): 137–153. (Reprinted in Lauritzen, Steffen L., ed. (2002). Thiele: Pioneer in Statistics. Oxford U. P. ISBN 978-0-19-850972-1.)

Hald, Anders (1998). A History of Mathematical Statistics from 1750 to 1930. New York: Wiley. ISBN 978-0-471-17912-2.

H. Cramér (1946) Mathematical Methods of Statistics, Princeton University Press, Section 15.10, p. 186.

Fisher, R. A.; Wishart, J. (1932) The derivation of the pattern formulae of two-way partitions from those of simpler patterns, Proceedings of the London Mathematical Society, Series 2, v. 33, pp. 195–208. doi:10.1112/plms/s2-33.1.195

Neyman, J. (1956): 'Note on an Article by Sir Ronald Fisher', Journal of the Royal Statistical Society, Series B (Methodological), 18, pp. 288–294.

Fisher, R. A. (1929). "Moments and Product Moments of Sampling Distributions". Proceedings of the London Mathematical Society. 30: 199–238. doi:10.1112/plms/s2-30.1.199. hdl:2440/15200. https://digital.library.adelaide.edu.au/dspace/bitstream/2440/15200/1/74pt2.pdf

Speicher, Roland (1994). "Multiplicative functions on the lattice of non-crossing partitions and free convolution". Mathematische Annalen. 298 (4): 611–628. doi:10.1007/BF01459754. S2CID 123022311.

Novak, Jonathan; Śniady, Piotr (2011). "What Is a Free Cumulant?". Notices of the American Mathematical Society. 58 (2): 300–301. ISSN 0002-9920.