It has been a while since the last post. Let us recall what we have done. We study the unconstrained polynomial optimization problem

$$ f^{*} \,:=\, \inf_{x \in \mathbb{R}^n} f(x) \tag{1} $$

where $f \in \mathbb{R}[x_1, \dots, x_n]$ is a real polynomial. This problem is equivalent to

$$ \sup_{\lambda \in \mathbb{R}} \ \lambda \quad \text{s.t.} \quad f(x) - \lambda \ge 0 \ \text{ for all } x \in \mathbb{R}^n. \tag{2} $$

Since any sum-of-squares (SOS) polynomial is nonnegative, the optimal value $f^{*}$ is lower bounded by the value of the following problem:

$$ f^{\mathrm{sos}} \,:=\, \sup_{\lambda \in \mathbb{R}} \ \lambda \quad \text{s.t.} \quad f - \lambda \ \text{is SOS}, $$

so that $f^{\mathrm{sos}} \le f^{*}$.

This is called the *SOS relaxation* of (1). We studied how different nonnegative polynomials and SOS polynomials are, and mentioned that whether a polynomial is SOS can be verified by semidefinite programming (SDP). The above SOS relaxation can be reformulated as an SDP as well, and can then be solved by SeDuMi, YALMIP, or any SDP solver of your choice. The complexity is polynomial in the number of constraints and the size of the matrices involved in the SDP. Hence the SOS relaxation is a natural relaxation of (1).
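To make the SDP reformulation concrete, here is a minimal sketch (the polynomial $f(x) = x^4 + 2x^2 + 1$ and the hand-picked Gram matrix are my own illustrative choices, and no actual SDP solver is invoked). Writing $f(x) = z^{\top} Q z$ with the monomial vector $z = (1, x, x^2)$, any symmetric positive semidefinite $Q$ satisfying the linear coefficient-matching conditions is an SOS certificate; those linear conditions plus $Q \succeq 0$ are exactly the constraints of the SDP.

```python
import numpy as np

# Candidate Gram matrix for f(x) = x^4 + 2x^2 + 1 with z = [1, x, x^2].
# (Hand-picked for illustration; an SDP solver would search for such a Q.)
Q = np.array([[1.0, 0.0, 1.0],
              [0.0, 0.0, 0.0],
              [1.0, 0.0, 1.0]])

# Coefficient matching: z^T Q z = Q00 + 2*Q01*x + (2*Q02 + Q11)*x^2
#                                 + 2*Q12*x^3 + Q22*x^4.
coeffs = [float(Q[0, 0]),
          float(2 * Q[0, 1]),
          float(2 * Q[0, 2] + Q[1, 1]),
          float(2 * Q[1, 2]),
          float(Q[2, 2])]
print(coeffs)  # [1.0, 0.0, 2.0, 0.0, 1.0], the coefficients of f

# Q is PSD (smallest eigenvalue is 0 up to rounding), so f is SOS:
# here f(x) = (x^2 + 1)^2.
print(np.linalg.eigvalsh(Q).min())
```

The coefficient-matching equations are linear in the entries of $Q$, which is why searching over all valid Gram matrices is a semidefinite program.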

However, the SOS relaxation may not approximate the solution of (1) well: its optimal value can be very far from that of (1). The following example illustrates such a situation.

**Example 1.** Consider $f = M$ where $M(x,y) = x^4 y^2 + x^2 y^4 - 3x^2 y^2 + 1$ is the Motzkin polynomial. The minimum value is $0$, attained at $(\pm 1, \pm 1)$, while the optimal value of the SOS relaxation is $-\infty$ because $M - \lambda$ is not SOS for any $\lambda \in \mathbb{R}$ (by Part 1).
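As a quick numerical sanity check of the example (a sketch only, not a proof — the nonnegativity of $M$ follows from AM-GM on the monomials $x^4 y^2$, $x^2 y^4$, $1$):

```python
import numpy as np

def motzkin(x, y):
    # Motzkin polynomial: nonnegative on all of R^2, yet not SOS.
    return x**4 * y**2 + x**2 * y**4 - 3 * x**2 * y**2 + 1

# The minimum value 0 is attained at the four points (+-1, +-1).
print(motzkin(1, 1), motzkin(-1, 1))  # 0 0

# Sample a grid: no sampled value drops below the minimum (up to rounding).
xs = np.linspace(-2, 2, 201)
X, Y = np.meshgrid(xs, xs)
print(motzkin(X, Y).min())
```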

This leads to a new question: is there a better relaxation for (1)? More precisely, is there a sequence of tractable relaxations whose optimal values converge to that of (1)?

Recall from calculus that any optimal solution of (1) is a critical point of $f$, i.e., a point where the gradient vanishes: $\nabla f(x) = 0$, or equivalently $\frac{\partial f}{\partial x_i}(x) = 0$ for $i = 1, \dots, n$. Thus, if we assume the infimum of (1) is attained, then (1) is equivalent to the *equality-constrained polynomial optimization problem*

$$ \min_{x \in \mathbb{R}^n} \ f(x) \quad \text{s.t.} \quad \frac{\partial f}{\partial x_1}(x) = \cdots = \frac{\partial f}{\partial x_n}(x) = 0. \tag{3} $$

That means we need to know how to deal with equality-constrained problems, which is what we will do next time.