Barrier function
Continuous function whose value increases to infinity

In constrained optimization, a barrier function is a continuous function whose value increases to infinity as its argument approaches the boundary of the feasible region. Such functions, also called interior penalty functions, are used to replace inequality constraints by a penalizing term in the objective function, which keeps iterates strictly inside the feasible region. The two main types are inverse and logarithmic barrier functions; the latter have enjoyed renewed interest because of their connection with primal-dual interior point methods.


Motivation

Consider the following constrained optimization problem:

minimize f(x) subject to x ≤ b

where b is some constant. If one wishes to remove the inequality constraint, the problem can be reformulated as

minimize f(x) + c(x), where c(x) = ∞ if x > b, and zero otherwise.

This problem is equivalent to the first. It gets rid of the inequality, but introduces the issue that the penalty function c, and therefore the objective function f(x) + c(x), is discontinuous, preventing the use of calculus to solve it.

A barrier function, now, is a continuous approximation g to c that tends to infinity as x approaches b from below. Using such a function, a new optimization problem is formulated, viz.

minimize f(x) + μg(x)

where μ > 0 is a free parameter. This problem is not equivalent to the original, but as μ approaches zero, it becomes an ever-better approximation.[3]
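To make this limit concrete, here is a minimal sketch in Python (an illustration of the idea, not taken from any reference): it minimizes f(x) = (x − 2)² subject to x ≤ 1, whose constrained solution is x = 1, using the logarithmic barrier g(x) = −log(1 − x) defined in the next section. The barrier-augmented objective is strictly convex on x < 1, so its minimizer x(μ) is the unique zero of its derivative, which bisection finds reliably.

```python
def x_of_mu(mu, lo=-10.0, hi=1.0 - 1e-12, tol=1e-10):
    # Minimize (x - 2)**2 + mu * (-log(1 - x)) over x < 1.
    # The derivative 2*(x - 2) + mu/(1 - x) is strictly increasing
    # on x < 1, so bisection isolates its unique zero.
    dphi = lambda x: 2.0 * (x - 2.0) + mu / (1.0 - x)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dphi(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for mu in [1.0, 0.1, 0.01, 0.001]:
    print(f"mu = {mu:<6}  x(mu) = {x_of_mu(mu):.6f}")
# x(mu) climbs toward the constrained optimum x = 1 as mu shrinks.
```

Interior point methods exploit exactly this behavior, re-solving the barrier problem for a decreasing sequence of μ values and warm-starting each solve from the previous solution.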

Logarithmic barrier function

For logarithmic barrier functions, g(x, b) is defined as −log(b − x) when x < b and ∞ otherwise (in one dimension; see below for a definition in higher dimensions). This essentially relies on the fact that log t tends to negative infinity as t tends to 0.

This introduces a gradient to the function being optimized which favors less extreme values of x (in this case values lower than b), while having relatively low impact on the function away from these extremes.
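A small numeric check (an illustration with b = 1, not from the article) makes this concrete: the barrier −log(b − x) and its derivative 1/(b − x) are nearly flat far from the boundary and blow up as x approaches b.

```python
import math

b = 1.0
g  = lambda x: -math.log(b - x)  # log barrier, finite only for x < b
dg = lambda x: 1.0 / (b - x)     # its derivative grows without bound as x -> b

for x in [-9.0, 0.0, 0.9, 0.99, 0.999]:
    print(f"x = {x:7.3f}   g(x) = {g(x):9.4f}   g'(x) = {dg(x):10.2f}")
```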

Logarithmic barrier functions may be favored over the computationally cheaper inverse barrier functions, such as 1/(b − x), depending on the function being optimized.

Higher dimensions

Extending to higher dimensions is simple, provided each dimension is independent. For each variable xᵢ which should be limited to be strictly lower than bᵢ, add −log(bᵢ − xᵢ) to the objective function.
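As a sketch (the function name is mine, not standard), the resulting barrier for the upper bounds xᵢ < bᵢ is just a sum of one-dimensional logarithmic barriers:

```python
import math

def box_log_barrier(x, b):
    # Sum of per-coordinate log barriers for the constraints x[i] < b[i];
    # returns +inf at infeasible points, mirroring the definition above.
    if any(xi >= bi for xi, bi in zip(x, b)):
        return math.inf
    return sum(-math.log(bi - xi) for xi, bi in zip(x, b))

print(box_log_barrier([0.0, 0.5], [1.0, 1.0]))  # finite: strictly feasible
print(box_log_barrier([0.0, 1.0], [1.0, 1.0]))  # inf: the boundary is excluded
```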

Formal definition

Minimize cᵀx subject to aᵢᵀx ≤ bᵢ, i = 1, …, m.

Assume the problem is strictly feasible: {x | Ax < b} ≠ ∅.

Define the logarithmic barrier

g(x) = Σᵢ₌₁ᵐ −log(bᵢ − aᵢᵀx) if Ax < b, and g(x) = +∞ otherwise.
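A direct transcription of this piecewise definition (a sketch assuming NumPy; the function name is mine):

```python
import numpy as np

def log_barrier(x, A, b):
    # g(x) = sum_i -log(b_i - a_i^T x) on the open polyhedron Ax < b,
    # and +inf otherwise, matching the definition above.
    slack = b - A @ x                # slack_i = b_i - a_i^T x
    if np.any(slack <= 0.0):
        return np.inf
    return float(-np.log(slack).sum())

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 1.0, 1.5])
print(log_barrier(np.array([0.2, 0.3]), A, b))  # finite: Ax < b holds strictly
print(log_barrier(np.array([1.0, 1.0]), A, b))  # inf: point is not strictly feasible
```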


References

  1. Nesterov, Yurii (2018). Lectures on Convex Optimization (2nd ed.). Cham, Switzerland: Springer. p. 56. ISBN 978-3-319-91577-7.

  2. Nocedal, Jorge; Wright, Stephen (2006). Numerical Optimization (2nd ed.). New York, NY: Springer. p. 566. ISBN 0-387-30303-0.

  3. Vanderbei, Robert J. (2001). Linear Programming: Foundations and Extensions. Kluwer. pp. 277–279.