In probability theory, conditional dependence is a relationship between two or more events that are dependent when a third event occurs. For example, if $A$ and $B$ are two events that individually increase the probability of a third event $C$, and do not directly affect each other, then initially (when it has not been observed whether or not the event $C$ occurs)
$$\operatorname{P}(A \mid B) = \operatorname{P}(A) \quad\text{and}\quad \operatorname{P}(B \mid A) = \operatorname{P}(B)$$
($A$ and $B$ are independent).
But suppose that now $C$ is observed to occur. If event $B$ occurs, then the probability of occurrence of the event $A$ will decrease: $B$ already accounts for the occurrence of $C$, so $A$ is less necessary as an explanation (similarly, event $A$ occurring will decrease the probability of occurrence of $B$). Hence, the two events $A$ and $B$ are now conditionally negatively dependent on each other, because the probability of occurrence of each is negatively dependent on whether the other occurs. We have
$$\operatorname{P}(A \mid C \text{ and } B) < \operatorname{P}(A \mid C).$$
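This effect (often called "explaining away" in the literature on Bayesian networks) can be checked by exact enumeration. The following is a minimal sketch assuming a toy model not given in the article: $A$ and $B$ are independent fair coin flips, and $C$ occurs exactly when $A$ or $B$ occurs, so each of $A$ and $B$ individually raises the probability of $C$ without directly affecting the other.

```python
from itertools import product

# Toy model (an illustrative assumption): A and B are independent fair
# coin flips, and C occurs exactly when A or B occurs.
outcomes = list(product([0, 1], repeat=2))  # all (a, b) pairs, equally likely

def prob(event):
    """P(event) under the uniform distribution on (A, B)."""
    return sum(1 for a, b in outcomes if event(a, b)) / len(outcomes)

def cond(event, given):
    """Conditional probability P(event | given)."""
    return prob(lambda a, b: event(a, b) and given(a, b)) / prob(given)

A = lambda a, b: a == 1
B = lambda a, b: b == 1
C = lambda a, b: a == 1 or b == 1

print(cond(A, B))                                 # P(A | B)       = 0.5, equal to P(A)
print(cond(A, C))                                 # P(A | C)       = 0.666...
print(cond(A, lambda a, b: B(a, b) and C(a, b)))  # P(A | C and B) = 0.5 < P(A | C)
```

Before $C$ is observed, conditioning on $B$ leaves the probability of $A$ unchanged at $0.5$; once $C$ is observed, learning that $B$ occurred drops the probability of $A$ from $2/3$ back to $1/2$, exactly the inequality above.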
Conditional dependence of $A$ and $B$ given $C$ is the logical negation of conditional independence, $\neg((A \perp\!\!\!\perp B) \mid C)$. In conditional independence, two events (which may be dependent or not) become independent given the occurrence of a third event.
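Spelled out with the standard product-rule formulation of conditional independence (consistent with the notation above):
$$(A \perp\!\!\!\perp B) \mid C \iff \operatorname{P}(A \cap B \mid C) = \operatorname{P}(A \mid C)\,\operatorname{P}(B \mid C),$$
and $A$ and $B$ are conditionally dependent given $C$ exactly when this equality fails. In the toy model above, $\operatorname{P}(A \cap B \mid C) = \tfrac{1}{3}$ while $\operatorname{P}(A \mid C)\,\operatorname{P}(B \mid C) = \tfrac{4}{9}$, so the equality fails and the events are conditionally dependent given $C$.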