In vector calculus, an invex function is a differentiable function $f \colon \mathbb{R}^{n} \to \mathbb{R}$ for which there exists a vector-valued function $\eta \colon \mathbb{R}^{n} \times \mathbb{R}^{n} \to \mathbb{R}^{n}$ such that

$$f(x) - f(u) \;\geq\; \eta(x, u)^{\mathrm{T}} \, \nabla f(u)$$

for all $x$ and $u$.
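For instance, choosing $\eta(x, u) = x - u$ in the invexity condition recovers the usual first-order characterization of convexity, so every differentiable convex function is invex:

```latex
% Substituting \eta(x,u) = x - u into the invexity condition
% f(x) - f(u) \ge \eta(x,u)^{T} \nabla f(u)
% yields the standard first-order convexity inequality:
f(x) - f(u) \;\ge\; \nabla f(u)^{\mathrm{T}} (x - u)
\qquad \text{for all } x, u \in \mathbb{R}^{n}.
```

The converse fails: invex functions form a strictly larger class than convex functions.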
Invex functions were introduced by Hanson as a generalization of convex functions. Ben-Israel and Mond provided a simple proof that a function is invex if and only if every stationary point is a global minimum, a theorem first stated by Craven and Glover.
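The stationary-point characterization can be illustrated numerically. The sketch below uses $f(x, y) = x^{2} y^{2}$, a standard example of a function that is invex but not convex (the choice of function and the checks are illustrative, not taken from the cited papers): its gradient vanishes exactly on the coordinate axes, where $f$ attains its global minimum $0$, so every stationary point is a global minimum.

```python
import numpy as np

# f(x, y) = x^2 y^2: invex but not convex. Its gradient (2xy^2, 2x^2 y)
# vanishes exactly where x = 0 or y = 0, and f = 0 there -- the global
# minimum -- so every stationary point is a global minimum, which by the
# Ben-Israel--Mond characterization means f is invex.

def f(p):
    x, y = p
    return x**2 * y**2

def grad_f(p):
    x, y = p
    return np.array([2 * x * y**2, 2 * x**2 * y])

# f is not convex: both endpoints of the segment from (1, 0) to (0, 1)
# have value 0, yet f rises above the chord at the midpoint.
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
mid = 0.5 * (a + b)
print(f(mid) > 0.5 * (f(a) + f(b)))   # True: the convexity inequality fails

# A stationary point such as (0, 3) attains the global minimum value 0.
p = np.array([0.0, 3.0])
print(np.allclose(grad_f(p), 0.0), f(p) == 0.0)   # True True
```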
Hanson also showed that if the objective and the constraints of an optimization problem are invex with respect to the same function $\eta(x, u)$, then the Karush–Kuhn–Tucker conditions are sufficient for a global minimum.
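Concretely, for a problem of the form $\min f(x)$ subject to $g_{i}(x) \le 0$, the Karush–Kuhn–Tucker conditions at a feasible point $x^{*}$ with multipliers $\lambda_{i}$ read as follows (this is the standard statement of the conditions; the inequality-constrained form is chosen here for illustration):

```latex
% KKT conditions at a feasible point x* for  min f(x)  s.t.  g_i(x) <= 0:
\nabla f(x^{*}) + \sum_{i} \lambda_{i} \, \nabla g_{i}(x^{*}) = 0,
\qquad \lambda_{i} \ge 0,
\qquad \lambda_{i} \, g_{i}(x^{*}) = 0 \quad \text{for all } i.
% Hanson's result: if f and every g_i are invex with respect to the same
% \eta, then any feasible point satisfying these conditions is a global
% minimum.
```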