In information theory, the binary entropy function, denoted $\operatorname{H}(p)$ or $\operatorname{H}_{\text{b}}(p)$, is defined as the entropy of a Bernoulli process (i.i.d. binary variable) with probability $p$ of one of two values, and is given by the formula:
$$\operatorname{H}(p) = -p \log p - (1 - p) \log(1 - p).$$
The base of the logarithm corresponds to the choice of units of information; base $e$ corresponds to nats and is mathematically convenient, while base 2 (binary logarithm) corresponds to shannons and is conventional (as shown in the graph); explicitly:
$$\operatorname{H}_{\text{b}}(p) = -p \log_2 p - (1 - p) \log_2(1 - p).$$
Note that the values at 0 and 1 are given by the limit $0 \log 0 := \lim_{x \to 0^{+}} x \log x = 0$ (by L'Hôpital's rule), and that "binary" refers to the two possible values of the variable, not the units of information.
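A minimal sketch of how the formula above can be evaluated numerically, applying the convention $0 \log 0 = 0$ at the endpoints; the function name `binary_entropy` and its `base` parameter are illustrative choices, not part of any standard library:

```python
import math

def binary_entropy(p, base=2):
    """Entropy H(p) of a Bernoulli(p) variable.

    By the convention 0*log(0) = 0, H(0) = H(1) = 0.
    base=2 gives shannons (bits); base=math.e gives nats.
    """
    if p < 0 or p > 1:
        raise ValueError("p must lie in [0, 1]")
    if p == 0 or p == 1:
        # Endpoint values fixed by the limit 0*log(0) := 0.
        return 0.0
    return -(p * math.log(p, base) + (1 - p) * math.log(1 - p, base))
```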
When $p = 1/2$, the binary entropy function attains its maximum value, 1 shannon (1 binary unit of information); this is the case of an unbiased coin flip. When $p = 0$ or $p = 1$, the binary entropy is 0 (in any units), corresponding to no information, since there is no uncertainty in the variable.
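Using the sketch above, these special values can be checked directly; the expected outputs shown in the comments follow from the formula itself:

```python
print(binary_entropy(0.5))                        # 1.0 shannon: fair coin, maximal uncertainty
print(binary_entropy(0.0), binary_entropy(1.0))   # 0.0 0.0: no uncertainty at the endpoints
print(binary_entropy(0.5, base=math.e))           # ~0.693 nats, i.e. ln 2
```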