The saddlepoint approximation method, initially proposed by Daniels (1954), is a specific example of the mathematical saddlepoint technique applied to statistics, in particular to the distribution of the sum of $N$ independent random variables. It provides a highly accurate approximation to the probability density function (PDF) or probability mass function of a distribution, based on its moment generating function. There is also a corresponding formula for the cumulative distribution function (CDF), proposed by Lugannani and Rice (1980).
Definition
If the moment generating function of a random variable $X = \sum_{i=1}^{N} X_i$ is written as $M(t) = E\left[e^{tX}\right] = E\left[e^{t\sum_{i=1}^{N} X_i}\right]$ and the cumulant generating function as $K(t) = \log(M(t)) = \sum_{i=1}^{N} \log E\left[e^{tX_i}\right]$, then the saddlepoint approximation to the PDF of $X$ is defined as:
$$\hat{f}_X(x) = \frac{1}{\sqrt{2\pi K''(\hat{s})}} \exp\left(K(\hat{s}) - \hat{s}x\right)\left(1 + \mathcal{R}\right),$$
where $\mathcal{R}$ contains higher-order terms that refine the approximation, and the saddlepoint approximation to the CDF is defined as:
$$\hat{F}_X(x) = \begin{cases} \Phi(\hat{w}) + \phi(\hat{w})\left(\dfrac{1}{\hat{w}} - \dfrac{1}{\hat{u}}\right) & \text{for } x \neq \mu, \\[1.5ex] \dfrac{1}{2} + \dfrac{K'''(0)}{6\sqrt{2\pi}\, K''(0)^{3/2}} & \text{for } x = \mu, \end{cases}$$
where $\hat{s}$ is the solution to $K'(\hat{s}) = x$, $\hat{w} = \operatorname{sgn}(\hat{s})\sqrt{2\left(\hat{s}x - K(\hat{s})\right)}$, $\hat{u} = \hat{s}\sqrt{K''(\hat{s})}$, $\Phi(\cdot)$ and $\phi(\cdot)$ are the cumulative distribution function and the probability density function of the standard normal distribution, respectively, and $\mu$ is the mean of the random variable $X$:
$$\mu \triangleq E[X] = \int_{-\infty}^{+\infty} x f_X(x)\,\mathrm{d}x = \sum_{i=1}^{N} E[X_i] = \sum_{i=1}^{N} \int_{-\infty}^{+\infty} x_i f_{X_i}(x_i)\,\mathrm{d}x_i.$$
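As a concrete illustration, the following sketch evaluates both approximations for a case where everything is available in closed form: the sum of $N$ i.i.d. Exp(1) variables, for which $K(t) = -N\log(1-t)$, the saddlepoint is $\hat{s} = 1 - N/x$, and the exact distribution is Gamma($N$, 1). This is a minimal Python sketch under those assumptions; the function names are illustrative and the higher-order correction $\mathcal{R}$ is omitted.

```python
# Minimal sketch: saddlepoint PDF and Lugannani-Rice CDF approximations for a
# sum X of N i.i.d. Exp(1) variables, so X ~ Gamma(N, 1) and the exact density
# is available for comparison.  The CGF is K(t) = -N*log(1 - t) for t < 1,
# giving a closed-form saddlepoint s_hat = 1 - N/x.
import math

N = 5  # number of summands (illustrative choice)

def K(t):  return -N * math.log(1.0 - t)   # cumulant generating function
def K1(t): return N / (1.0 - t)            # K'(t)
def K2(t): return N / (1.0 - t) ** 2       # K''(t)

def norm_pdf(z): return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
def norm_cdf(z): return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def saddlepoint_pdf(x):
    # Leading-order approximation, omitting the correction term R.
    s = 1.0 - N / x                        # solves K'(s) = x
    return math.exp(K(s) - s * x) / math.sqrt(2.0 * math.pi * K2(s))

def lugannani_rice_cdf(x):
    mu = K1(0.0)                           # mean of X
    if math.isclose(x, mu):
        K3_0 = 2.0 * N                     # K'''(0) for this CGF
        return 0.5 + K3_0 / (6.0 * math.sqrt(2.0 * math.pi) * K2(0.0) ** 1.5)
    s = 1.0 - N / x
    w = math.copysign(math.sqrt(2.0 * (s * x - K(s))), s)
    u = s * math.sqrt(K2(s))
    return norm_cdf(w) + norm_pdf(w) * (1.0 / w - 1.0 / u)

# Compare with the exact Gamma(N, 1) density at a few points.
for x in (2.0, 5.0, 9.0):
    exact_pdf = x ** (N - 1) * math.exp(-x) / math.gamma(N)
    print(x, saddlepoint_pdf(x), exact_pdf, lugannani_rice_cdf(x))
```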
When the distribution is that of a sample mean, Lugannani and Rice's saddlepoint expansion for the cumulative distribution function $F(x)$ may be differentiated to obtain Daniels' saddlepoint expansion for the probability density function $f(x)$ (Routledge and Tsao, 1997). This result establishes the derivative of a truncated Lugannani and Rice series as an alternative asymptotic approximation for the density function $f(x)$. Unlike the original saddlepoint approximation for $f(x)$, this alternative approximation in general does not need to be renormalized, as illustrated in the sketch below.
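The analytic derivative studied by Routledge and Tsao is not reproduced here, but the relationship can be illustrated numerically: differentiating a truncated Lugannani and Rice CDF approximation (below by a central difference with an arbitrary step size h) yields a density approximation that can be compared directly with the exact density. The sketch again assumes the Gamma($N$, 1) example used above.

```python
# Numerical illustration (not Routledge and Tsao's analytic expansion):
# a central-difference derivative of the truncated Lugannani-Rice CDF
# approximation serves as an alternative density approximation.
import math

N = 5  # number of summands; X ~ Gamma(N, 1) as in the sketch above

def K(t):  return -N * math.log(1.0 - t)   # cumulant generating function
def K2(t): return N / (1.0 - t) ** 2       # K''(t)

def lugannani_rice_cdf(x):
    s = 1.0 - N / x                        # saddlepoint, K'(s) = x (x != mean)
    w = math.copysign(math.sqrt(2.0 * (s * x - K(s))), s)
    u = s * math.sqrt(K2(s))
    return 0.5 * (1.0 + math.erf(w / math.sqrt(2.0))) \
        + math.exp(-0.5 * w * w) / math.sqrt(2.0 * math.pi) * (1.0 / w - 1.0 / u)

h = 1e-5  # arbitrary step size for this illustration
def density_from_lr_cdf(x):
    return (lugannani_rice_cdf(x + h) - lugannani_rice_cdf(x - h)) / (2.0 * h)

for x in (2.0, 7.0, 9.0):                  # points away from the mean N
    exact_pdf = x ** (N - 1) * math.exp(-x) / math.gamma(N)
    print(x, density_from_lr_cdf(x), exact_pdf)
```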
References

- Butler, Ronald W. (2007), Saddlepoint approximations with applications, Cambridge: Cambridge University Press, ISBN 9780521872508
- Daniels, H. E. (1954), "Saddlepoint Approximations in Statistics", The Annals of Mathematical Statistics, 25 (4): 631–650, doi:10.1214/aoms/1177728652
- Daniels, H. E. (1980), "Exact Saddlepoint Approximations", Biometrika, 67 (1): 59–63, doi:10.1093/biomet/67.1.59, JSTOR 2335316
- Lugannani, R.; Rice, S. (1980), "Saddle Point Approximation for the Distribution of the Sum of Independent Random Variables", Advances in Applied Probability, 12 (2): 475–490, doi:10.2307/1426607, JSTOR 1426607, S2CID 124484743
- Reid, N. (1988), "Saddlepoint Methods and Statistical Inference", Statistical Science, 3 (2): 213–227, doi:10.1214/ss/1177012906
- Routledge, R. D.; Tsao, M. (1997), "On the relationship between two asymptotic expansions for the distribution of sample mean and its applications", Annals of Statistics, 25 (5): 2200–2209, doi:10.1214/aos/1069362394