In mathematical optimization, the perturbation function is any function which relates to primal and dual problems. The name comes from the fact that any such function defines a perturbation of the initial problem. In many cases this takes the form of shifting the constraints.[1] In some texts the value function is called the perturbation function, and the perturbation function is called the bifunction.[2]
Given two dual pairs of separated locally convex spaces [math]\displaystyle{ \left(X,X^*\right) }[/math] and [math]\displaystyle{ \left(Y,Y^*\right) }[/math], and a function [math]\displaystyle{ f: X \to \mathbb{R} \cup \{+\infty\} }[/math], we can define the primal problem by

[math]\displaystyle{ \inf_{x \in X} f(x). }[/math]
If there are constraint conditions, these can be built into the function [math]\displaystyle{ f }[/math] by letting [math]\displaystyle{ f \leftarrow f + I_\mathrm{constraints} }[/math], where [math]\displaystyle{ I }[/math] is the characteristic (indicator) function, equal to 0 on the constraint set and [math]\displaystyle{ +\infty }[/math] off it. Then [math]\displaystyle{ F: X \times Y \to \mathbb{R} \cup \{+\infty\} }[/math] is a perturbation function if and only if [math]\displaystyle{ F(x,0) = f(x) }[/math].[1][3]
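The defining condition [math]\displaystyle{ F(x,0) = f(x) }[/math] and the constraint-shifting construction can be checked numerically. A minimal Python sketch (the toy problem, minimize x² subject to x ≥ 1, and all names in it are illustrative assumptions, not from the source):

```python
# Sketch: encode the constraint g(x) <= 0 into f via the indicator function,
# and build a perturbation F that shifts the constraint to g(x) <= y.
INF = float("inf")

def indicator_nonneg(t):
    # I_{R_+}(t): 0 if t >= 0, +infinity otherwise
    return 0.0 if t >= 0 else INF

def make_perturbation(f, g):
    # F(x, y) = f(x) + I_{R_+}(y - g(x)), i.e. g(x) <= 0 relaxed to g(x) <= y
    def F(x, y):
        return f(x) + indicator_nonneg(y - g(x))
    return F

# toy data (assumption): minimize x^2 subject to 1 - x <= 0, i.e. x >= 1
f = lambda x: x * x
g = lambda x: 1.0 - x
F = make_perturbation(f, g)

# F(x, 0) recovers the constrained objective f + I_constraints
assert F(2.0, 0.0) == f(2.0)   # feasible point: same value as f
assert F(0.5, 0.0) == INF      # infeasible point: +infinity
assert F(0.5, 0.6) == f(0.5)   # positive y shifts the constraint, so x = 0.5 becomes feasible
```

The indicator is what lets a constrained problem be written as an unconstrained infimum of [math]\displaystyle{ F(\cdot,0) }[/math].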
The duality gap is the difference between the right and left hand sides of the inequality

[math]\displaystyle{ \sup_{y^* \in Y^*} -F^*(0,y^*) \leq \inf_{x \in X} F(x,0), }[/math]

where [math]\displaystyle{ F^* }[/math] is the convex conjugate in both variables.[3][4]
For any choice of perturbation function F, weak duality holds. There are a number of conditions which, if satisfied, imply strong duality.[3] For instance, if F is proper, jointly convex, and lower semi-continuous with [math]\displaystyle{ 0 \in \operatorname{core}({\Pr}_Y(\operatorname{dom}F)) }[/math] (where [math]\displaystyle{ \operatorname{core} }[/math] is the algebraic interior and [math]\displaystyle{ {\Pr}_Y }[/math] is the projection onto Y defined by [math]\displaystyle{ {\Pr}_Y(x,y) = y }[/math]) and X, Y are Fréchet spaces, then strong duality holds.[1]
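As a concrete check of a zero duality gap, consider the toy convex problem of minimizing x² subject to x ≥ 1 (an illustrative assumption, not from the source). With F(x, y) = x² + I_{R+}(y − (1 − x)), the dual objective −F*(0, y*) works out to −y* − (y*)²/4 for y* ≤ 0 and −∞ otherwise; that closed form is a worked assumption for this example. A Python sketch comparing primal and dual values on grids:

```python
# Toy instance (assumption): minimize x^2 subject to x >= 1.
# Primal: inf_x F(x, 0); dual: sup_{y*} -F*(0, y*), using the worked
# closed form -F*(0, y*) = -y* - (y*)^2 / 4 for y* <= 0.

def primal_value():
    xs = [i / 1000.0 for i in range(-3000, 3001)]
    # F(x, 0) is finite exactly when 1 - x <= 0
    return min(x * x for x in xs if 1.0 - x <= 0.0)

def dual_value():
    ystars = [-i / 1000.0 for i in range(0, 4001)]   # y* in [-4, 0]
    return max(-ystar - ystar * ystar / 4.0 for ystar in ystars)

gap = primal_value() - dual_value()
assert abs(gap) < 1e-6   # both sides equal 1: strong duality for this convex problem
```

Here F is proper, jointly convex, and lower semi-continuous, so the sufficient condition above applies and the gap vanishes.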
Let [math]\displaystyle{ (X,X^*) }[/math] and [math]\displaystyle{ (Y,Y^*) }[/math] be dual pairs. Given a primal problem (minimize f(x)) and a related perturbation function (F(x,y)), the Lagrangian [math]\displaystyle{ L: X \times Y^* \to \mathbb{R} \cup \{\pm\infty\} }[/math] is the negative conjugate of F with respect to y (i.e. the concave conjugate). That is, the Lagrangian is defined by

[math]\displaystyle{ L(x,y^*) = \inf_{y \in Y} \left\{ F(x,y) - y^*(y) \right\}. }[/math]
In particular, the weak duality minmax equation can be shown to be

[math]\displaystyle{ \sup_{y^* \in Y^*} -F^*(0,y^*) = \sup_{y^* \in Y^*} \inf_{x \in X} L(x,y^*) \leq \inf_{x \in X} \sup_{y^* \in Y^*} L(x,y^*) = \inf_{x \in X} F(x,0). }[/math]
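The minmax relation can be sanity-checked on the same toy problem (minimize x² subject to x ≥ 1, an illustrative assumption, not from the source). For this instance the Lagrangian reduces to L(x, y*) = x² − y*(1 − x) for y* ≤ 0, a worked closed form; restricting y* to [−4, 0] keeps the grids finite:

```python
def L(x, ystar):
    # Worked closed form (assumption) for the toy problem:
    # L(x, y*) = inf_y { F(x, y) - y*(y) } = x^2 - y*(1 - x) when y* <= 0
    return x * x - ystar * (1.0 - x)

xs = [i / 100.0 for i in range(-300, 301)]        # x in [-3, 3]
ystars = [-i / 100.0 for i in range(0, 401)]      # y* in [-4, 0]

sup_inf = max(min(L(x, ystar) for x in xs) for ystar in ystars)
inf_sup = min(max(L(x, ystar) for ystar in ystars) for x in xs)

assert sup_inf <= inf_sup + 1e-9   # weak duality: sup inf <= inf sup
```

For this convex instance both sides equal the primal value 1, so the inequality holds with equality.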
If the primal problem is given by

[math]\displaystyle{ \inf_{x: g(x) \leq 0} f(x) = \inf_{x \in X} \tilde{f}(x), }[/math]

where [math]\displaystyle{ \tilde{f}(x) = f(x) + I_{\mathbb{R}^d_+}(-g(x)) }[/math], and the perturbation relaxes the constraint to [math]\displaystyle{ g(x) \leq y }[/math], i.e.

[math]\displaystyle{ \inf_{x: g(x) \leq y} f(x), }[/math]

then the perturbation function is

[math]\displaystyle{ F(x,y) = f(x) + I_{\mathbb{R}^d_+}(y - g(x)). }[/math]
Thus the connection to Lagrangian duality can be seen: L can easily be shown to be

[math]\displaystyle{ L(x,y^*) = \begin{cases} f(x) - y^*(g(x)) & \text{if } y^* \in \mathbb{R}^d_-, \\ -\infty & \text{otherwise}, \end{cases} }[/math]

which, under the identification [math]\displaystyle{ \lambda = -y^* \in \mathbb{R}^d_+ }[/math], is the classical Lagrangian [math]\displaystyle{ f(x) + \lambda^{\mathsf{T}} g(x) }[/math].
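A brute-force check of this Lagrangian for a scalar constraint (a sketch; f, g, the grids, and the closed form f(x) − y*·g(x) for y* ≤ 0 are illustrative, worked assumptions, not from the source). The infimum over y is approximated on a finite grid:

```python
INF = float("inf")

def F(x, y, f, g):
    # F(x, y) = f(x) + I_{R_+}(y - g(x))
    return f(x) if y - g(x) >= 0 else INF

f = lambda x: x * x        # toy objective (assumption)
g = lambda x: 1.0 - x      # toy constraint g(x) <= 0, i.e. x >= 1 (assumption)

def L_bruteforce(x, ystar, y_grid):
    # L(x, y*) = inf_y { F(x, y) - y*(y) }, infimum taken over a finite grid
    return min(F(x, y, f, g) - ystar * y for y in y_grid)

y_grid = [i / 100.0 for i in range(-500, 501)]   # y in [-5, 5]
for x in [-1.0, 0.5, 2.0]:
    for ystar in [-3.0, -1.0, 0.0]:
        # for y* <= 0 the infimum is attained at y = g(x), giving f(x) - y* g(x)
        assert abs(L_bruteforce(x, ystar, y_grid) - (f(x) - ystar * g(x))) < 1e-6
```

For y* with a positive component the infimum runs off to −∞ (larger y only decreases the objective), which is the −∞ branch of the case formula.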
Let [math]\displaystyle{ (X,X^*) }[/math] and [math]\displaystyle{ (Y,Y^*) }[/math] be dual pairs. Assume there exists a linear map [math]\displaystyle{ T: X \to Y }[/math] with adjoint operator [math]\displaystyle{ T^*: Y^* \to X^* }[/math]. Assume the primal objective function [math]\displaystyle{ f(x) }[/math] (including the constraints by way of the indicator function) can be written as [math]\displaystyle{ f(x) = J(x,Tx) }[/math] such that [math]\displaystyle{ J: X \times Y \to \mathbb{R} \cup \{+\infty\} }[/math]. Then the perturbation function is given by

[math]\displaystyle{ F(x,y) = J(x, Tx - y). }[/math]
In particular if the primal objective is [math]\displaystyle{ f(x) + g(Tx) }[/math] then the perturbation function is given by [math]\displaystyle{ F(x,y) = f(x) + g(Tx - y) }[/math], which is the traditional definition of Fenchel duality.[5]
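A numeric sketch of this Fenchel perturbation with T the identity and the illustrative choices f(x) = ½x², g(u) = |u − 1| (assumptions, not from the source). The dual value −F*(0, y*) = y* − (y*)²/2 on |y*| ≤ 1, and −∞ elsewhere, is a computation worked out for this particular instance:

```python
# Fenchel-style toy instance (assumptions): minimize f(x) + g(Tx) with T = id.
f = lambda x: 0.5 * x * x           # smooth part
g = lambda u: abs(u - 1.0)          # nonsmooth part
F = lambda x, y: f(x) + g(x - y)    # perturbation F(x, y) = f(x) + g(Tx - y)

xs = [i / 1000.0 for i in range(-3000, 3001)]
primal = min(F(x, 0.0) for x in xs)            # inf_x F(x, 0)

# worked conjugate for this instance: -F*(0, y*) = y* - (y*)^2 / 2 on |y*| <= 1
ystars = [i / 1000.0 for i in range(-1000, 1001)]
dual = max(ystar - 0.5 * ystar * ystar for ystar in ystars)

assert abs(primal - dual) < 1e-6   # strong duality: both sides equal 1/2
```

Setting y = 0 in F recovers the primal objective f(x) + g(Tx), exactly as the traditional Fenchel definition requires.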