Polynomial Optimization 2: SOS and SDP

In this article we describe the Gram-matrix method, which decomposes a polynomial into a sum of squares (SOS). Throughout, the notation X\succeq 0 means that X is a symmetric positive semidefinite matrix.

Proposition 1.          Let p \in \mathbb{R}[{\bf x}], \displaystyle p = \sum_{\alpha\in \mathbb{N}^n_{2d}}p_\alpha {\bf x}^\alpha, be a polynomial of degree at most 2d, where \mathbb{N}^n_d \triangleq \{\alpha\in\mathbb{N}^n : |\alpha|\leq d\}, and let {\bf z}_d \triangleq ({\bf x}^\alpha: |\alpha|\leq d) denote the vector of monomials of degree at most d. Then the following are equivalent:

(a) p is SOS

(b) p = {\bf z}_d^T X{\bf z}_d for some positive semidefinite matrix X. Such an X is called a Gram matrix of the SOS decomposition of p (it need not be unique, as Example 2 below shows).

(c) The following system in the matrix variable X = (X_{\alpha,\beta})_{\alpha,\beta\in \mathbb{N}^n_{d}} is feasible:

(1)           \begin{cases}X&\succeq 0\\ \displaystyle\sum_{\substack{\beta,\gamma\in \mathbb{N}^n_d\\\beta+\gamma=\alpha}}X_{\beta,\gamma}&= p_\alpha \qquad (|\alpha|\leq 2d)\end{cases}

Before proving it, let us do some counting. First, the cardinality of \mathbb{N}^n_d is \dbinom{n+d}{d}. Indeed, introducing the slack variable \alpha_{n+1} \triangleq d - |\alpha|, the elements of \mathbb{N}^n_d correspond to the nonnegative integer solutions (\alpha_1,\cdots,\alpha_n,\alpha_{n+1}) of

\alpha_1 + \cdots + \alpha_n+\alpha_{n+1} =d.

By the stars-and-bars argument, the number of such solutions is \dbinom{(n+1)+d-1}{d} = \dbinom{n+d}{d}. Thus X is a \dbinom{n+d}{d}\times\dbinom{n+d}{d} matrix. By the same reasoning, (1) has \dbinom{n+2d}{2d} equations. So (1) has size polynomial in n when d is fixed.
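For instance, with n = 2 and d = 2 the \dbinom{4}{2} = 6 monomials are 1, x, y, x^2, xy, y^2. Here is a minimal Python sketch that enumerates \mathbb{N}^n_d and checks the count (the helper name is mine, for illustration only):

    from itertools import product
    from math import comb

    def monomial_exponents(n, d):
        """All alpha in N^n with |alpha| <= d."""
        return [a for a in product(range(d + 1), repeat=n) if sum(a) <= d]

    # n = 2, d = 2: the 6 monomials 1, x, y, x^2, xy, y^2
    exps = monomial_exponents(2, 2)
    print(len(exps), comb(2 + 2, 2))  # prints: 6 6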

Proof of Proposition 1. For polynomials u_j \in \mathbb{R}[{\bf x}]_d, define \text{vec}(u_j) as the vector of coefficients of u_j in the monomial basis of \mathbb{R}[{\bf x}]_d, so that u_j = \text{vec}(u_j)^T {\bf z}_d. It follows that \displaystyle \sum_j u_j^2 = {\bf z}_d^T \left( \sum_j \text{vec}(u_j)\text{vec}(u_j)^T\right){\bf z}_d, and the matrix \displaystyle\sum_j \text{vec}(u_j)\text{vec}(u_j)^T is positive semidefinite. Conversely, if p = {\bf z}_d^T X{\bf z}_d with X\succeq 0, then writing X = \sum_j v_j v_j^T (e.g. from the spectral decomposition of X) gives p = \sum_j (v_j^T {\bf z}_d)^2, a sum of squares.

Thus, p is SOS if and only if p = {\bf z}_d^T X{\bf z}_d for some positive semidefinite matrix X. The system (1) comes from equating the coefficients of the polynomials p and {\bf z}_d^T X {\bf z}_d. \square
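To see the identity \sum_j u_j^2 = {\bf z}_d^T \left(\sum_j \text{vec}(u_j)\text{vec}(u_j)^T\right){\bf z}_d concretely, here is a minimal numerical check in Python (univariate with d = 2, numpy only; the two sample polynomials are arbitrary choices of mine):

    import numpy as np

    # Two sample polynomials in the basis z_2 = (1, x, x^2):
    # u1 = 2x + x^2 and u2 = sqrt(2) + sqrt(2) x
    u1 = np.array([0.0, 2.0, 1.0])
    u2 = np.array([np.sqrt(2), np.sqrt(2), 0.0])

    # Gram matrix X = sum_j vec(u_j) vec(u_j)^T, PSD by construction
    X = np.outer(u1, u1) + np.outer(u2, u2)

    # Coefficient of x^k in z_2^T X z_2 is the sum of X[i, j] over i + j = k ...
    from_X = [sum(X[i, k - i] for i in range(3) if 0 <= k - i <= 2) for k in range(5)]

    # ... which matches the coefficients of u1^2 + u2^2 computed by convolution
    direct = np.convolve(u1, u1) + np.convolve(u2, u2)
    print(np.allclose(from_X, direct))  # True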

Proposition 1 is useful because computing an SOS decomposition of p is equivalent to finding X such that (1) holds. This feasibility problem is a semidefinite program (SDP), which can be solved to any fixed accuracy in polynomial time. Let’s recall what an SDP is.

We first equip the space of matrices \mathbb{R}^{n\times n} with the Frobenius inner product, given by

\displaystyle\langle A,B\rangle \triangleq \text{Tr}(A^T B) = \sum_{i,j=1}^n a_{ij} b_{ij}
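As a quick numerical sanity check of this identity (numpy only, arbitrary sample matrices):

    import numpy as np

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    B = np.array([[5.0, 6.0], [7.0, 8.0]])
    print(np.trace(A.T @ B), (A * B).sum())  # both print 70.0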

Denote by \text{Sym}_n the vector space of symmetric n\times n matrices. Given C,A_1,\cdots,A_m\in \text{Sym}_n and b\in \mathbb{R}^m, a (primal) semidefinite program (SDP) has the form

(2)        p^* = \displaystyle\sup_{X\in \text{Sym}_n} \langle C,X\rangle subject to X\succeq 0, \langle A_j,X\rangle = b_j for 1\leq j \leq m

where the only decision variable is the matrix X.

By setting C=0 and choosing the A_j and b_j appropriately (what are they?), (2) becomes exactly the problem of finding X so that (1) holds; one concrete construction is sketched below.
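Here is one answer, sketched in Python for the univariate case (the multivariate case is analogous; the function name and coefficient convention are mine): take C = 0, index the constraints by \alpha, let A_\alpha be the 0/1 matrix whose (\beta,\gamma) entry is 1 exactly when \beta+\gamma=\alpha, and set b_\alpha = p_\alpha.

    import numpy as np

    def sos_constraint_data(p_coeffs):
        """For univariate p of degree 2d, with p_coeffs[k] the coefficient of x^k,
        build the pairs (A_alpha, b_alpha) of the SDP (2); C = 0 for feasibility."""
        two_d = len(p_coeffs) - 1
        d = two_d // 2
        data = []
        for alpha in range(two_d + 1):
            A = np.zeros((d + 1, d + 1))
            for beta in range(d + 1):
                gamma = alpha - beta
                if 0 <= gamma <= d:
                    A[beta, gamma] = 1.0  # <A, X> then sums X over beta + gamma = alpha
            data.append((A, p_coeffs[alpha]))
        return data

With this choice, \langle A_\alpha, X\rangle = b_\alpha is precisely the linear equation \sum_{\beta+\gamma=\alpha} X_{\beta,\gamma} = p_\alpha of (1).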

Let us illustrate Proposition 1 using examples.

Example 1.          Consider the degree 4 univariate polynomial p(x) = x^4 + 4x^3 + 6x^2 + 4x + 5.  Is p SOS? Let’s try to find a decomposition. Set X = \begin{bmatrix}a&b&c\\b&d&e\\c&e&f \end{bmatrix}. By equating coefficients for p = {\bf z}_2^T X {\bf z}_2 where {\bf z}_2 = \begin{bmatrix}1\\x\\x^2\end{bmatrix}, system (1) is:

X\succeq 0

x^4: 1 = f

x^3: 4 = 2e

x^2: 6 = d + 2c

x: 4 = 2b

1: 5 = a

Then, by an SDP solver or by hand, one solution is

X = \begin{bmatrix} 5 & 2 & 0 \\ 2 & 6 & 2 \\ 0 & 2 & 1\end{bmatrix} = V^T V where V = \begin{bmatrix} 0 & 2 & 1\\ \sqrt{2} & \sqrt{2} & 0 \\ \sqrt{3} & 0 & 0 \end{bmatrix}.

Thus, if we set u_1 = 2x + x^2, u_2 = \sqrt{2} + \sqrt{2}x and u_3 = \sqrt{3} then

p(x) = u_1^2+u_2^2+u_3^2 = (x^2+2x)^2 + 2(1+x)^2 + 3.
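This computation can be reproduced with an SDP modeling tool; here is a sketch assuming cvxpy with an SDP-capable solver (e.g. SCS) is installed. A generic solver will typically return a different feasible X than the hand-picked one above, but any factorization X = V^T V of its output yields a valid SOS decomposition:

    import numpy as np
    import cvxpy as cp  # assumes cvxpy + an SDP solver such as SCS

    # Gram matrix in the basis z_2 = (1, x, x^2)
    X = cp.Variable((3, 3), symmetric=True)
    constraints = [
        X >> 0,                       # X PSD
        X[2, 2] == 1,                 # x^4: f = 1
        2 * X[1, 2] == 4,             # x^3: 2e = 4
        X[1, 1] + 2 * X[0, 2] == 6,   # x^2: d + 2c = 6
        2 * X[0, 1] == 4,             # x:   2b = 4
        X[0, 0] == 5,                 # 1:   a = 5
    ]
    cp.Problem(cp.Minimize(0), constraints).solve()

    # Any factorization X = V^T V gives an SOS decomposition; Cholesky works
    # up to numerical tolerance (shift by a tiny multiple of I for safety)
    V = np.linalg.cholesky(X.value + 1e-9 * np.eye(3)).T
    print(np.round(V, 4))  # row j holds the coefficients of u_j in (1, x, x^2)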

Example 2.         Consider the degree 4 bivariate polynomial p(x,y) = x^4 + 2 x^3 y + 3x^2 y^2 + 2xy^3 + 2y^4. Since p is homogeneous of degree 4, it suffices to use only the degree-2 monomials in the monomial vector, so we want to find X\succeq 0 such that

p = \begin{bmatrix} x^2 & xy & y^2 \end{bmatrix} X \begin{bmatrix} x^2\\xy\\y^2\end{bmatrix} where X =\begin{bmatrix}a&b&c\\b&d&e\\c&e&f\end{bmatrix}.

Equating coefficients, (1) becomes:

X\succeq 0

x^4: 1 = a

x^3 y : 2 = 2b

x^2 y^2: 3= d+2c

xy^3: 2 = 2e

y^4: 2 = f

Thus X = \begin{bmatrix} 1 & 1 & c\\1 & 3-2c & 1\\c & 1 & 2\end{bmatrix}. One can check that X\succeq 0 if and only if c\in [-1,1]. Setting c=-1 and c=0 and factorizing the corresponding X, we obtain two SOS decompositions of p:

c = -1: p(x,y) = (x^2+xy-y^2)^2 + (y^2 + 2xy)^2

\displaystyle c = 0: p(x,y) = (x^2+xy)^2 + \frac{3}{2}(xy+y^2)^2 + \frac{1}{2}(xy-y^2)^2
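A quick numerical confirmation of the interval c\in[-1,1] (numpy only): the smallest eigenvalue of X as a function of c is nonnegative exactly on [-1,1].

    import numpy as np

    def X_of(c):
        return np.array([[1.0, 1.0,         c  ],
                         [1.0, 3.0 - 2 * c, 1.0],
                         [c,   1.0,         2.0]])

    # negative at c = -1.5 and 1.5, ~0 at the endpoints, positive at c = 0
    for c in [-1.5, -1.0, 0.0, 1.0, 1.5]:
        print(c, round(float(np.linalg.eigvalsh(X_of(c)).min()), 6))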

(End)
