The World of Complexity

(Sept 17, 2013)

Our little plan is to write a series of posts discussing a sequence of optimization problems. Before we dive into them, it is worthwhile to go through a rigorous treatment of the standard complexity notions P, NP, NP-hard, and so forth. That is how this article arose.

The material below comes from the book by Grötschel, Lovász and Schrijver (GLS).

There are two types of problems we are concerned with. The first is optimization problems, for example the linear programming (LP) problem, which is to compute x^* = \text{argmin}\{c^T x :Ax\leq b\}. Here the inputs (or instances) are two vectors b,c and a matrix A, while the output (or solution) is the minimizer x^*. The second type is decision problems, in which the output must be either “Yes” or “No”. For example: given a graph G=(V,E), does there exist a Hamiltonian cycle? This is a decision problem.

Decision problems and optimization problems are closely related, as we will see later.

We need an encoding scheme that converts every instance or solution into a finite 0-1 string. The encoding scheme of interest here is binary representation. Let \{0,1\}^* be the set of all finite 0-1 strings. For every \sigma\in \{0,1\}^*, let |\sigma| denote its length.

First we define the meaning of a problem. Like the examples above, the key ingredients of a problem should be inputs (usually called  \sigma) and outputs (usually called \tau).

Definition 1.          A problem is a subset \Pi of \{0,1\}^* \times \{0,1\}^* such that for any \sigma\in \{0,1\}^* there exists \tau \in \{0,1\}^* such that (\sigma, \tau)\in \Pi.

Notice that “No solution” is also a solution to a problem.

The next step is to define an “algorithm” that can solve a problem. We shall use the words “algorithm” and “Turing machine” (or simply machine) interchangeably. We are going to define a t-tape Turing machine. A tape is an infinite 0-1-blank string. A t-tape machine has t tapes and t heads; each head starts at the first cell of its tape, and can read, write or erase an entry, move to the left or right, or stay put. A machine also includes a central unit of states, in particular a beginning state B and an end state E. The machine starts at state B and runs until it reaches state E. Before stating the definition, let us see an example of how a machine works.

Example 1.          Here is a Turing machine T which solves the problem of adding 1 to a binary number. The states and actions are:

B: Search to the right for a blank. After reaching a blank, move the head to the left. Go to state C.

C: On reading 0, replace it by 1 and go to E. On reading 1, replace it by 0 and stay at C. On reading “blank”, replace it by 1 and go to E. In every case, move the head to the left.


Suppose we want to compute 101+1, so the input is 101. At state B the head points to the leftmost 1. The machine searches to the right until it reaches the blank next to the rightmost 1, then moves the head to the left and changes the state to C. Now the rightmost 1 is replaced by 0 and the middle 0 by 1. Then the machine goes to state E and stops. The result is 110, as expected.
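The two-state machine above can be simulated directly. Below is a minimal sketch, assuming the input number is written most-significant-bit first and the tape is padded with blank cells ('*') on both sides; state names B, C, E match Example 1.

```python
def increment(bits: str) -> str:
    """Simulate the Turing machine of Example 1: add 1 to a binary number."""
    tape = ['*'] + list(bits) + ['*']   # pad with blanks on both sides
    pos, state = 1, 'B'                 # head starts at the first input cell
    while state != 'E':
        if state == 'B':
            # B: search right for a blank; on finding it, step left, go to C
            if tape[pos] == '*':
                pos -= 1
                state = 'C'
            else:
                pos += 1
        else:  # state == 'C'
            if tape[pos] == '1':
                tape[pos] = '0'         # carry: 1 -> 0, stay in C
            elif tape[pos] == '0':
                tape[pos] = '1'         # absorb the carry: 0 -> 1, halt
                state = 'E'
            else:
                tape[pos] = '1'         # blank: the input was all 1s, halt
                state = 'E'
            pos -= 1                    # move left in every case
    return ''.join(tape).strip('*')

print(increment('101'))   # the worked example: 101 + 1 = 110
```

The all-ones input (e.g. 111 + 1 = 1000) exercises the blank branch of state C, which is why the tape needs a blank cell to the left of the input.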

(Sept 21, 2013)

Definition 2.          Denote by \{0,1,*\} the alphabet of tape symbols: 0, 1, and blank (*). A Turing machine is a 6-tuple T=(X,B,E,\Phi,\Psi,\Xi) (\Xi is the upper case of \xi) such that:

X is a finite set of states

B,E\in X

\Phi:X\times\{0,1,*\}^t \rightarrow X specifies the change of states.

\Psi: X\times\{0,1,*\}^t \rightarrow \{0,1,*\}^t specifies the change of the letters in each tape.

\Xi: X\times\{0,1,*\}^t \rightarrow \{-1,0,1\}^t specifies the movement of the heads (left, stay, right).

Definition 3.          We say T solves the problem \Pi if the following holds: for any input \sigma, write \sigma_1 \triangleq \sigma and \sigma_2 = \cdots = \sigma_t \triangleq \emptyset on the t tapes of T at the beginning state B; then, after a finite number of moves of the read-write heads, T stops at state E with \sigma_1=\tau such that (\sigma,\tau)\in \Pi.

The next task is to define a “polynomial time algorithm”. Given a problem \Pi, suppose T solves \Pi. For any input \sigma, define time(\sigma) as the number of steps in which T reaches state E from state B. The running time function for T solving \Pi is the function f:\mathbb{N}\rightarrow \mathbb{N} (\mathbb{N}=\{1,2,\cdots\}) given by:

f(n) = \max\{time(\sigma): |\sigma|\leq n\}.

Since there are only finitely many 0-1 strings of length not exceeding n, the maximum in the above definition is well defined.

Definition 4.          We say T is a polynomial time algorithm solving \Pi if there is a polynomial p for which f(n)\leq p(n) for all n.

(Oct 9, 2013)

Now we restrict our discussion to decision problems, that is, “Yes” or “No” problems. The classes P, NP, etc. are defined in terms of decision problems.

Definition 5.         A decision problem (DP) \Pi is a problem such that for any \sigma\in \{0,1\}^*, exactly one of (\sigma,0)\in \Pi and (\sigma,1)\in \Pi holds; moreover, if (\sigma,\tau)\in \Pi, then \tau = 0 or \tau=1.

We define

P = \{\text{DP }\Pi: \Pi \text{ can be solved by some polynomial time algorithm}\}.

As we may know, NP problems are those decision problems for which, whenever the answer is “Yes”, there is a polynomial-length certificate verifying this “Yes”. To be precise,

NP=\{\text{DP }\Pi:\text{ there exist }\Sigma\in P\text{ and a polynomial } \Phi \text{ such that }(\sigma,1)\in \Pi \text{ if and only if there exists } \tau\in \{0,1\}^* \text{ with }((\sigma,\tau),1)\in \Sigma \text{ and } |\tau|\leq \Phi(|\sigma|)\}.

The polynomial \Phi in the above definition ensures that the certificate \tau cannot be too long: its length is controlled by that of \sigma.

By taking \Sigma to be \Pi (ignoring the certificate), one sees that P \subseteq NP. One of the Millennium Prize Problems asks if P = NP. Simply put: is every decision problem in NP also in P? This is still unknown.

(Nov 9, 2013)

Intuitively, NP-complete problems are the hardest decision problems in NP, in the sense that all NP problems can be transformed to them in polynomial time.

Definition 6.         Given two DPs \Pi and \Pi'. A polynomial transformation from \Pi to \Pi' is an algorithm T such that, given an instance \sigma of \Pi, T produces in polynomial time an instance \sigma' of \Pi' such that

(\sigma,1)\in \Pi if and only if (\sigma',1)\in \Pi'.

Definition 7.         A DP \Pi\in NP is said to be NP-complete if any problem \Pi'\in NP can be polynomially transformed to \Pi.

Therefore if an NP-complete problem is in P, then P = NP. Since people do not believe P = NP, once we recognize a DP as NP-complete we stop looking for a polynomial time algorithm for it.

Example 2 (Linear Programming).          We can naturally associate a decision problem to a given optimization problem \min \{f(x): x\in S\} as follows: given q\in \mathbb{Q}, is there some x^*\in S such that f(x^*)\leq q? These two problems are essentially equivalent: once we can solve the decision problem, we can use binary search to solve the optimization problem. Clearly the decision problem of LP is in NP. In 1979 Khachiyan discovered the Ellipsoid Method, which shows that LP is actually in P. Before 1979, the Simplex Method had been discovered by George Dantzig for solving LP. The Simplex Method cannot be used to show LP is in P, because one can construct artificial examples on which it does not run in polynomial time. For your information, the Interior Point Method is the state of the art for solving LP (and SDP); the Ellipsoid Method works in theory but is not practical.
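The binary search reduction above can be sketched as follows. This is a hedged illustration, not a real LP solver: we assume the objective takes integer values, that an a-priori range [lo, hi] for the optimum is known, and `decide` stands for a hypothetical “Yes/No” oracle answering “is there x in S with f(x) <= q?”.

```python
def optimize_via_decision(decide, lo: int, hi: int) -> int:
    """Find the optimal value using only the decision oracle.

    Invariant: decide(hi) is True (the optimum is at most hi) and the
    optimum lies in [lo, hi]; each oracle call halves the interval.
    """
    while lo < hi:
        mid = (lo + hi) // 2
        if decide(mid):      # optimum <= mid: shrink from above
            hi = mid
        else:                # optimum > mid: shrink from below
            lo = mid + 1
    return lo

# Toy usage: minimize f(x) = x^2 over S = {-3, ..., 7}; the optimum is 0.
S = range(-3, 8)
decide = lambda q: any(x * x <= q for x in S)
print(optimize_via_decision(decide, -100, 100))
```

The number of oracle calls is logarithmic in the interval length, i.e. polynomial in the encoding length of the bounds, which is why the two formulations are polynomially equivalent.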

Example 3 (Integer Factorization).         Deciding if an integer is prime is a problem in P, while the following integer factorization problem is an NP problem: given integers n and k, is there an integer f such that 1<f<k and f divides n? If the answer is Yes, then letting \tau=f and \Sigma = “does one number divide another?”, one can verify the correctness in polynomial time. Clearly the length of f is at most the length of k.
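The verifier \Sigma from this example is one line of arithmetic: given the certificate \tau = f, checking the range condition and the divisibility takes time polynomial in the bit-lengths of n, k and f. A minimal sketch:

```python
def verify_factor(n: int, k: int, f: int) -> bool:
    """Polynomial-time verifier for the factorization decision problem:
    accept iff the certificate f satisfies 1 < f < k and f divides n."""
    return 1 < f < k and n % f == 0

print(verify_factor(91, 12, 7))   # 91 = 7 * 13 and 1 < 7 < 12
```

Note the verifier is fast even though nobody knows how to *find* such an f in polynomial time; that gap is exactly what membership in NP captures.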

Example 4 (Partition Problem).         This is a popular NP-complete problem: given a sequence a_1,\cdots,a_n of positive integers, are there x_1,\cdots,x_n\in\{\pm 1\} such that x_1 a_1 + \cdots +x_n a_n = 0? We will come back to this problem when we discuss polynomial optimization.
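For small instances the problem can be decided by brute force over all 2^n sign vectors; the sketch below is exponential time and for illustration only (no polynomial time algorithm is known, as the problem is NP-complete).

```python
from itertools import product

def has_zero_signing(a):
    """Decide the Partition problem by trying all signs x in {-1, +1}^n."""
    return any(sum(x * ai for x, ai in zip(signs, a)) == 0
               for signs in product((-1, 1), repeat=len(a)))

print(has_zero_signing([1, 2, 3]))   # +1 + 2 - 3 = 0, so Yes
print(has_zero_signing([1, 2, 4]))   # total 7 is odd, so No
```

A Yes certificate is just the sign vector itself, which shows immediately that the problem is in NP.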

Example 5 (Hamiltonian Cycle Problem).          Given a graph G = (V,E), does a Hamiltonian cycle exist? This problem is in NP, and is also known to be NP-complete.   We can also associate a counting problem to it: how many Hamiltonian cycles does the graph contain? We expect this counting problem to be harder than deciding existence.
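As with Partition, small instances can be decided by brute force, here over vertex orderings (factorial time, for illustration only). Graphs are assumed undirected and given as a vertex list plus an edge list.

```python
from itertools import permutations

def has_hamiltonian_cycle(vertices, edges):
    """Brute-force decision: is there a cycle visiting every vertex once?"""
    E = {frozenset(e) for e in edges}
    first, *rest = list(vertices)      # fix a start vertex; cycles are rotations
    for order in permutations(rest):
        cycle = [first, *order, first]
        if all(frozenset(pair) in E for pair in zip(cycle, cycle[1:])):
            return True
    return False

# The 4-cycle has a Hamiltonian cycle; the star K_{1,3} does not.
print(has_hamiltonian_cycle([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)]))
print(has_hamiltonian_cycle([0, 1, 2, 3], [(0, 1), (0, 2), (0, 3)]))
```

A Yes certificate is the vertex ordering, checkable in polynomial time, so the problem is in NP; counting all such cycles instead of finding one is the associated (harder) counting problem.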

Example 6 (Derangement Problem).          This is a problem from combinatorics. A derangement is a permutation w\in S_n (the symmetric group) such that w(i)\neq i for all i = 1,\cdots,n. Given a subgroup G\subset S_n, is there a derangement in G? This is known to be an NP-complete problem.
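Checking whether one given permutation is a derangement is trivial; the hardness lies in searching through a subgroup G, which is typically given compactly by a set of generators rather than by listing its elements. A minimal sketch of the easy part (permutations encoded 0-indexed as tuples):

```python
def is_derangement(w):
    """Check that the permutation w (0-indexed tuple) has no fixed point."""
    return all(w[i] != i for i in range(len(w)))

print(is_derangement((1, 2, 0)))   # a 3-cycle: no fixed point
print(is_derangement((0, 2, 1)))   # fixes the point 0
```

So the certificate for a Yes instance is a single derangement in G (verifiable in polynomial time, given a polynomial-time membership test for G), placing the problem in NP.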

(Nov 15, 2013)

Next we define Turing reduction. Before that, we need to allow an algorithm to call subroutines (black-box processes). The academic name for such a subroutine is an oracle.

Definition 8.          An oracle is a pair (\Pi,\Phi) where \Pi is a problem and \Phi is a polynomial such that for any instance \sigma, there exists \tau such that (\sigma,\tau) \in \Pi and |\tau|\leq \Phi(|\sigma|).

Also, we denote \widetilde{\Pi}\triangleq \{(\sigma, \tau) \in \Pi: |\tau| \leq \Phi(|\sigma|)\}.

In words, we require that the problem \Pi in the oracle can be solved by some algorithm, but we place NO requirement on its running time (so it really is a black box). However, we do need control on the length of the solutions of \Pi.

Then we can define a Turing machine which is able to call k oracles.

Definition 9.          Let T be a (t-tape) Turing machine. A k-oracle Turing machine is a (k+1)-tuple (T;\Pi_1,\cdots,\Pi_k) such that:

(1) The last k tapes are called oracle tapes and their heads oracle heads. The i-th oracle tape has an oracle (\Pi_i,\Phi_i) associated to it.

(2) The set of states X contains distinguished states x_1,\cdots,x_k.

(3) For each i, if the central unit is in the state x_i, then the content \sigma of the i-th oracle tape is erased and miraculously replaced by some \tau such that (\sigma,\tau)\in \Pi_i and |\tau|\leq \Phi_i(|\sigma|).

As before, we can define the meaning of an oracle Turing machine solving a problem \Pi (just include the possibility of calling oracles).

Definition 10.  

(1)  For any oracle Turing machine solving some problem, given an instance \sigma, we define time(\sigma) to be the number of steps from state B to state E, plus the number of times the oracles are called in between.

(2) The running time function of an oracle Turing machine (T;\Pi_1,\cdots,\Pi_k) solving some problem is f:\mathbb{N} \times \{\text{polynomials }p:\mathbb{N}\rightarrow\mathbb{N}\}^k \rightarrow \mathbb{N} given by:

f(n;\Phi_1,\cdots,\Phi_k) = \max\{time(\sigma)\text{ performed by }(T; \widetilde{\Pi}_1,\cdots,\widetilde{\Pi}_k): |\sigma|\leq n\}.

(3)  An oracle Turing machine solving some problem is polynomial if for any polynomials \Phi_1,\cdots,\Phi_k:\mathbb{N}\rightarrow\mathbb{N}, there exists a polynomial p= p_{\Phi_1,\cdots,\Phi_k} such that f(n;\Phi_1,\cdots,\Phi_k)\leq p(n) for all n.

(4) A Turing reduction from problem \Pi to \Pi' is a polynomial 1-oracle Turing machine (T;\Pi') solving \Pi. A problem \Pi is said to be NP-easy if there exists \Pi'\in NP such that \Pi can be Turing reduced to \Pi'. A problem \Pi is said to be NP-hard if there is an NP-complete problem \Pi' such that \Pi' can be Turing reduced to \Pi.

Therefore NP-hard problems are at least as hard as every problem in NP. Examples include the Traveling Salesman Problem (TSP) and Polynomial Optimization.

(Maybe the End, or more things coming)

This entry was posted in Applied mathematics, Optimization.