Polynomial Optimization 3: Why do we need to generalize?

It has been a while since the last post. Let us recall what we have done. We study the unconstrained polynomial optimization problem

(\mathcal{U})          \displaystyle \inf_{x\in \mathbb{R}^n} p(x)

where p is a real polynomial. This problem is equivalent to

(\mathcal{N})          \sup \ \rho\text{ subject to } p - \rho \geq 0 \text{ on }\mathbb{R}^n.

Since any sum-of-squares (SOS) polynomial is nonnegative, the optimal value of (\mathcal{N}) is bounded below by the value of the following problem:

\sup \ \rho\text{ subject to } p - \rho \text{ is SOS on }\mathbb{R}^n.

This is called the SOS relaxation of (\mathcal{U}). We studied how different nonnegative polynomials and SOS polynomials can be, and mentioned that whether a polynomial is SOS can be verified by semidefinite programming (SDP). The SOS relaxation above can be reformulated as an SDP as well. It can then be solved by hand (for small instances), by SeDuMi, through the modeling toolbox YALMIP, or with any SDP solver of your choice, in time polynomial in the number of constraints and the sizes of the matrices involved in the SDP. Hence the SOS relaxation is a natural tractable relaxation of (\mathcal{U}).
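To make the SOS-to-SDP connection concrete, here is a minimal numerical sketch (the polynomial p(x) = x^2 + 2x + 2 and the matrix below are illustrative choices, not taken from the post): a polynomial p is SOS if and only if p(x) = z(x)^T Q z(x) for some positive semidefinite Gram matrix Q, where z(x) is a vector of monomials. Matching coefficients for this p in the basis z(x) = (1, x) yields a candidate Q, and checking Q ⪰ 0 certifies that p is SOS.

```python
import numpy as np

# A polynomial p is SOS iff p(x) = z(x)^T Q z(x) for some PSD matrix Q,
# where z(x) is a vector of monomials.  For the illustrative example
# p(x) = x^2 + 2x + 2 with monomial basis z(x) = (1, x), matching
# coefficients gives the Gram matrix candidate:
Q = np.array([[2.0, 1.0],
              [1.0, 1.0]])

# Q is PSD iff its smallest eigenvalue is nonnegative; this certifies p is SOS.
min_eig = np.linalg.eigvalsh(Q).min()
print(min_eig >= 0)  # True: p is a sum of squares

# Sanity check: z^T Q z reproduces p at a few sample points.
for x in [-2.0, 0.0, 1.5]:
    z = np.array([1.0, x])
    assert abs(z @ Q @ z - (x**2 + 2*x + 2)) < 1e-9
```

In a real SOS program the entries of Q are decision variables constrained by the coefficient-matching equations, and the SDP solver searches over the PSD cone; here Q was found by hand for a one-variable example.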

However, the SOS relaxation may fail to approximate the optimal value of (\mathcal{U}); its optimal value can be arbitrarily far from that of (\mathcal{U}). The following example illustrates such a situation.

Example 1.          Consider \displaystyle \inf_{(x_1,x_2)\in \mathbb{R}^2} p(x_1,x_2) where p(x_1,x_2)=x_1^2 x_2^2 (x_1^2 + x_2^2 - 3) + 1 is the Motzkin polynomial. The minimum value is 0, attained at the four points (x_1,x_2)=(\pm 1,\pm 1), while the optimal value of the SOS relaxation is -\infty because p(x_1,x_2)-\alpha is not SOS for any \alpha\in\mathbb{R} (by Part 1).
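A quick numerical sanity check of Example 1 (illustrative only; evaluating on a grid does not prove nonnegativity, which Part 1 establishes):

```python
import numpy as np

def motzkin(x1, x2):
    # The Motzkin polynomial from Example 1.
    return x1**2 * x2**2 * (x1**2 + x2**2 - 3) + 1

# The minimum value 0 is attained at the four points (+-1, +-1).
for s1 in (-1, 1):
    for s2 in (-1, 1):
        assert abs(motzkin(s1, s2)) < 1e-12

# p stays nonnegative (up to rounding) on a sample grid.
xs = np.linspace(-2, 2, 101)
X1, X2 = np.meshgrid(xs, xs)
print(motzkin(X1, X2).min() >= -1e-9)  # True on this grid
```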

This leads to a new question: Is there a better relaxation for (\mathcal{U})? More precisely, and hopefully, is there a sequence of tractable relaxations whose optimal values converge to that of (\mathcal{U})?

Recall from calculus that any optimal solution of (\mathcal{U}) is a critical point of p, i.e., a point where the gradient vanishes: \nabla p=0. Thus, if we assume the infimum of (\mathcal{U}) is attained, then (\mathcal{U}) is equivalent to the equality-constrained polynomial optimization problem

\inf \ p(x)\text{ subject to } \nabla p(x)=0.
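As a quick check of this first-order condition on Example 1 (a hand-computed sketch): expanding the Motzkin polynomial as p = x_1^4 x_2^2 + x_1^2 x_2^4 - 3 x_1^2 x_2^2 + 1 and differentiating term by term, the gradient indeed vanishes at the minimizer (1, 1).

```python
def grad_motzkin(x1, x2):
    # Partial derivatives of p = x1^4 x2^2 + x1^2 x2^4 - 3 x1^2 x2^2 + 1,
    # computed by hand term by term.
    dp_dx1 = 4*x1**3 * x2**2 + 2*x1 * x2**4 - 6*x1 * x2**2
    dp_dx2 = 2*x1**4 * x2 + 4*x1**2 * x2**3 - 6*x1**2 * x2
    return (dp_dx1, dp_dx2)

print(grad_motzkin(1, 1))  # (0, 0): (1, 1) is a critical point of p
```

The same check succeeds at the other three minimizers (±1, ±1), as the symmetry of p suggests.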

That means we need to know how to deal with equality-constrained problems. This is what we will do next time.

This entry was posted in Algebra, Applied mathematics, Optimization.
