Exercises in real analysis

In this post I type up solutions to some interesting questions from real analysis qualifying examinations. Most questions will be from UW. This post will be updated continuously. Of course, you are encouraged to think about each question before looking at the solution, and it may happen that a solution is not correct. Some motivation and comments are also included in the solutions.

Actually, many questions in such exams are not really difficult, and they usually do not involve advanced theorems. But you need to identify the relevant tools and use them in a natural way.

**********

1. Let K be a continuous function on [0, 1] \times [0, 1]. For f \in L^2[0, 1] and x \in [0, 1], define Tf(x) = \int_0^1 K(x, y)f(y)dy.

(a) Show that \| Tf \|_{\sup} \leq \| K \|_{\sup} \| f \|_2 for all f \in L^2[0, 1], and that Tf is continuous.

(b) Show that if \{f_n\} is a bounded sequence in L^2[0, 1], then the sequence \{Tf_n\} contains a uniformly convergent subsequence.

(c) Assume that T is one-to-one. Show that T does not map L^2[0, 1] onto C[0, 1].

Solution. General thoughts: We are studying an integral operator T on L^2[0, 1], and the kernel K is continuous. Functional analysis!

(a) This shows that T is a bounded linear operator from L^2[0, 1] to C[0, 1] equipped with the sup norm. The solution is standard. For any x \in [0, 1],

|Tf(x)| \leq \int_0^1 |K(x, y)| |f(y)| dy.

Since we need the L^2-norm of f in the estimate, we use the Cauchy-Schwarz inequality. We get

|Tf(x)| \leq \|K(x, \cdot)\|_2 \|f\|_2 \leq \|K\|_{\sup} \|f\|_2.

Continuity is also easy to check.  For x_1, x_2 \in [0, 1],

|Tf(x_2) - Tf(x_1)| \leq \int_0^1 |K(x_2, y) - K(x_1, y)| |f(y)| dy.

Since K is continuous on [0, 1] \times [0, 1], it is uniformly continuous. Thus, given \varepsilon > 0, we may find \delta > 0 such that

|K(x_1, y) - K(x_2, y)| < \varepsilon for all y \in [0, 1]

whenever |x_1 - x_2| < \delta. Using the Cauchy-Schwarz inequality again, we have

|Tf(x_1) - Tf(x_2)| \leq \varepsilon \|f\|_2.  (*)

So Tf is continuous.
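As a quick sanity check of (a), here is a short numerical sketch (my own addition, not part of the solution; the kernel K(x, y) = e^{xy}, the function f, and the grid sizes are arbitrary choices). It approximates Tf by a Riemann sum and compares \|Tf\|_{\sup} with \|K\|_{\sup} \|f\|_2.

    import numpy as np

    # Discretize [0, 1] and approximate Tf(x) = int_0^1 K(x, y) f(y) dy by a midpoint Riemann sum.
    m = 2000
    y = (np.arange(m) + 0.5) / m              # midpoints of m cells of [0, 1]
    x = np.linspace(0.0, 1.0, 200)

    K = np.exp(np.outer(x, y))                # K(x, y) = e^{xy}, continuous on the square
    f = np.sign(np.sin(10 * np.pi * y))       # an L^2 function which is not continuous

    Tf = K @ f / m                            # Riemann sum approximation of Tf(x)

    sup_Tf = np.max(np.abs(Tf))
    sup_K = np.max(np.abs(K))
    norm_f = np.sqrt(np.sum(f ** 2) / m)      # approximate L^2 norm of f

    print(sup_Tf, sup_K * norm_f)             # the first number should not exceed the second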

(b) We need to prove that T is a compact linear operator. Thus we need a certain compactness result in C[0, 1]. This is provided by the Arzelà–Ascoli theorem: \{Tf_n\} has a uniformly convergent subsequence if \{Tf_n\} is uniformly bounded and equicontinuous.

By (a), \|Tf_n\|_{sup} \leq \|K\|_{sup} \|f_n\|_2. Since \{f_n\} is bounded in L^2, we see that \{Tf_n\} is uniformly bounded. To check equicontinuity, note that by (*),

|T f_n(x_1) - T f_n(x_2) | \leq \varepsilon \| f_n\|_2 \leq \varepsilon \sup_n \|f_n\|_2

whenever |x_1 - x_2| < \delta. This is enough since \delta depends only on K but not on the f_n.
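To see the compactness in (b) in a concrete case (again my own numerical experiment; the kernel K(x, y) = \cos(xy) and the sequence f_n(y) = \sqrt{2}\sin(n\pi y) are arbitrary choices), note that in this particular example the whole sequence Tf_n converges uniformly to 0, so every subsequence certainly converges uniformly.

    import numpy as np

    m = 4000
    y = (np.arange(m) + 0.5) / m              # midpoints of m cells of [0, 1]
    x = np.linspace(0.0, 1.0, 200)
    K = np.cos(np.outer(x, y))                # a smooth kernel K(x, y) = cos(xy)

    for n in [1, 2, 5, 10, 50, 200]:
        f_n = np.sqrt(2) * np.sin(n * np.pi * y)   # ||f_n||_2 = 1 for every n
        Tf_n = K @ f_n / m                         # Riemann sum for Tf_n(x)
        print(n, np.max(np.abs(Tf_n)))             # the sup norms shrink as n grows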

(c) We need to show that if T is one-to-one, then it is not onto. In such an abstract setting, it is hopeless to construct an explicit function in C[0, 1] outside the range of T. We shall proceed by contradiction.

Suppose on the contrary that T: L^2[0, 1] \rightarrow C[0, 1] is one-to-one and onto. Then T is bijective and T^{-1} is well defined. In such a situation, you should be able to recall the Open Mapping Theorem, which implies that T^{-1} is also bounded. Thus, there exists a constant C > 0 such that

\|T^{-1}g\|_2 \leq C \|g\|_{sup} for all g \in C[0, 1].  (**)

Now we have used the assumption that T is one-to-one and onto, and we have also applied a big theorem, so we should be almost there!

It takes a little thought to derive a contradiction from (**). A hint is given by (b): perhaps we should construct a sequence of continuous functions which does not have a uniformly convergent subsequence. For example, consider g_n \in C[0, 1] defined by g_n(x) = 0 on [0, \frac{1}{2} - \frac{1}{n}], g_n(x) = 1 on [\frac{1}{2} + \frac{1}{n}, 1], and linear in between. As is well known, this sequence converges pointwise to a discontinuous function. (So no subsequence converges uniformly, since a uniform limit of continuous functions is continuous.)

Since T is assumed to be onto, there exist functions f_n \in L^2[0, 1] such that Tf_n = g_n. Now by (**), we have

\|f_n\|_2 \leq C \|g_n\|_{sup} = C \cdot 1 = C

for all n. Hence \{f_n\} is bounded in L^2[0, 1], and so by the compactness result in (b), \{Tf_n\} = \{g_n\} has a uniformly convergent subsequence. This contradicts the previous paragraph. \Box
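Here is a small numerical illustration of the key point above (my own addition): the sup-distance from g_n to the discontinuous pointwise limit stays near 1/2, so the convergence is not uniform; since any uniformly convergent subsequence would have to converge to this pointwise limit, no subsequence converges uniformly.

    import numpy as np

    def g(n, x):
        # g_n: 0 on [0, 1/2 - 1/n], 1 on [1/2 + 1/n, 1], linear in between
        return np.clip((x - (0.5 - 1.0 / n)) * n / 2.0, 0.0, 1.0)

    x = np.linspace(0.0, 1.0, 100001)
    limit = (x >= 0.5).astype(float)          # the pointwise limit (its value at 1/2 is immaterial here)

    for n in [4, 16, 64, 256, 1024]:
        print(n, np.max(np.abs(g(n, x) - limit)))   # stays near 1/2 for every n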

**********

2. Let {\mathcal{K}} be the family of all non-empty compact subsets of {\mathbb{R}}. For A, B \in {\mathcal{K}}, define

d(A, B) = \sup_{x \in A} \inf_{y \in B} |x - y| + \sup_{y \in B} \inf_{x \in A} |x - y|.

(a) Prove that ({\mathcal{K}}, d) is a metric space.

(b) Prove that ({\mathcal{K}}, d) is separable.

Solution. First, let us understand the definition of the proposed metric by drawing a picture (here I draw it for the plane):

[Figure omitted. The distance is the sum of the lengths of the two segments.]
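To get a feel for the definition, here is a small numerical sketch (my own addition, not part of the original post). Finite sets are compact, so d can be computed directly for them.

    import numpy as np

    def d(A, B):
        # d(A, B) = sup_{x in A} inf_{y in B} |x - y| + sup_{y in B} inf_{x in A} |x - y|
        A, B = np.asarray(A, dtype=float), np.asarray(B, dtype=float)
        dist = np.abs(A[:, None] - B[None, :])        # |x - y| for all pairs
        return dist.min(axis=1).max() + dist.min(axis=0).max()

    A = [0.0, 1.0]
    B = [0.0, 0.4, 1.0]
    C = [2.0, 3.0]
    print(d(A, A))                        # 0: every set is at distance 0 from itself
    print(d(A, B), d(B, A))               # the metric is symmetric by construction
    print(d(A, C) <= d(A, B) + d(B, C))   # one instance of the triangle inequality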

(a) The only difficulty is to prove the triangle inequality. Let A, B and C be non-empty compact sets. Consider

d(A, C) = \sup_{x \in A} \inf_{z \in C} |x - z| + \sup_{z \in C} \inf_{x \in A} |x - z|.

We do not expect that any special trick is involved; the inequality should follow from the triangle inequality in {\mathbb{R}}. To make something in B appear, we insert a point of B; the one subtlety is that this point is allowed to depend on x. Fix x \in A. For every y \in B and z \in C we have |x - z| \leq |x - y| + |y - z|, and hence

\inf_{z \in C} |x - z| \leq |x - y| + \inf_{z \in C} |y - z| \leq |x - y| + \sup_{y' \in B} \inf_{z \in C} |y' - z|.

Taking the infimum over y \in B and then the supremum over x \in A, we get

\sup_{x \in A} \inf_{z \in C} |x - z| \leq \sup_{x \in A} \inf_{y \in B} |x - y| + \sup_{y \in B} \inf_{z \in C} |y - z|.

Interchanging the roles of A and C, the same argument gives

\sup_{z \in C} \inf_{x \in A} |x - z| \leq \sup_{z \in C} \inf_{y \in B} |y - z| + \sup_{y \in B} \inf_{x \in A} |x - y|.

Adding the two inequalities, we obtain

d(A, C) \leq d(A, B) + d(B, C).

(b) Now we need to prove that the metric space is separable. That is, we need to find a countable dense subset. Note that we are working on the real line; there are not many reasonable choices for the countable set.

Let D = \bigcup_{n \in {\mathbb{N}}} \{\frac{k}{2^n}: k \in {\mathbb{Z}}\} be the dyadic rationals. Given k and n, let

I_{n, k} = [\frac{k}{2^n}, \frac{k+1}{2^n}]

be the corresponding dyadic interval. Let S be the set of all finite unions of intervals with endpoints in D. Clearly, S is countable and is a subset of {\mathcal{K}}. We claim that S is dense in {\mathcal{K}}.

To show this, let K be any non-empty compact subset of {\mathbb{R}}. Define

K_n = \bigcup_{k: I_{n, k} \cap K \neq \emptyset} I_{n, k}.

That is, we cover K by small dyadic intervals of level n. Since K is bounded, only finitely many intervals I_{n, k} meet K, so K_n \in S. Clearly, K \subset K_n for all n. Moreover, for every x \in K_n, there exists y \in K such that |x - y| \leq \frac{1}{2^n}. Now we compute

d(K_n, K) = \sup_{y \in K} \inf_{x \in K_n} |x - y| + \sup_{x \in K_n} \inf_{y \in K} |x - y|.

Since K \subset K_n, the first term is 0. And by the second property of K_n mentioned above, for all x \in K_n we have

\inf_{y \in K} |x - y| \leq \frac{1}{2^n}.

Hence d(K_n, K) \leq \frac{1}{2^n} \rightarrow 0 as n \rightarrow \infty. \Box
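Here is a rough numerical version of this approximation (my own addition; the compact set K = \{0\} \cup [1/3, 2/3] and the grid sizes are arbitrary choices). It builds the dyadic cover K_n and checks that d(K_n, K) is of order 2^{-n}.

    import numpy as np

    def d(A, B):
        # the metric of the problem, for finite point sets
        dist = np.abs(A[:, None] - B[None, :])
        return dist.min(axis=1).max() + dist.min(axis=0).max()

    # K = {0} union [1/3, 2/3], approximated by a fine grid of points
    grid = np.linspace(0.0, 1.0, 8001)
    K = np.concatenate(([0.0], grid[(grid >= 1/3) & (grid <= 2/3)]))

    for n in [2, 4, 6, 8]:
        ks = np.unique(np.floor(K * 2 ** n).astype(int))   # those k with I_{n,k} meeting K
        Kn = np.concatenate([np.linspace(k / 2 ** n, (k + 1) / 2 ** n, 17) for k in ks])
        print(n, d(Kn, K), 2.0 ** (-n))                    # d(K_n, K) is of order 2^{-n}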

**********

3. Let f be a positive, continuously differentiable function on (0, \infty) satisfying f'(x) > 0 for all x \in (0, \infty). Suppose that for some constant C > 0,

f(x) \leq Cx^2, \ \ \ x \geq 1.

Show that

\int_0^{\infty} \frac{1}{f'(x)}dx = \infty.

(Hint: Use the fact that if \int_0^{\infty} \frac{1}{f'(x)}dx < \infty, then \lim_{a \rightarrow \infty} \int_a^{\infty} \frac{1}{f'(x)}dx = 0.)

Solution. We need to show that \int_0^{\infty} \frac{1}{f'(x)}dx = \infty, i.e., that the derivative f' cannot be too large too often. However, it is impossible to obtain a pointwise estimate of f' in terms of x (draw a few pictures to see this!), and this is where analysis (rather than mere calculus) comes into play.

Suppose on the contrary that \int_0^{\infty} \frac{1}{f'(x)}dx < \infty. To use the hint, let \epsilon > 0 be arbitrary. Then we may find a > 0 such that

\int_a^{\infty} \frac{1}{f'(t)}dt < \epsilon.

The idea is that we may use this to obtain an estimate of the distribution function of f'. We use m to denote the Lebesgue measure. By the Markov inequality, for any \lambda > 0, we have

m(\{t \geq a: \frac{1}{f'(t)} > \lambda\}) \leq \frac{\epsilon}{\lambda}.

(This method is also called the first moment estimate.) Rearranging, we get

m(\{t \in [a, x]: f'(t) \geq \frac{1}{\lambda}\}) \geq (x - a) - \frac{\epsilon}{\lambda}

for x \geq a. This shows that f' cannot be too small in a measure-theoretic sense. Consider

f(x) = f(0^+) + \int_0^x f'(t) dt.

(Fundamental theorem of calculus; here f(0^+) := \lim_{t \rightarrow 0^+} f(t) exists and is \geq 0 because f is positive and increasing.) For x \geq a,

\int_0^x f'(t) dt \geq \int_a^x f'(t) dt \geq \int_{[a, x] \cap \{f' \geq \frac{1}{\lambda}\}} f'(t)dt \geq \frac{1}{\lambda}\left( (x - a) - \frac{\epsilon}{\lambda} \right).

Maximizing the last expression over \lambda > 0 (calculus: the maximum is attained at \lambda = \frac{2\epsilon}{x - a} and equals \frac{(x-a)^2}{4\epsilon}), we get

f(x) \geq f(0^+) + \frac{(x - a)^2}{4 \epsilon} \geq \frac{(x - a)^2}{4 \epsilon}

for x > a. Now choose \epsilon < \frac{1}{4C}. Then f(x) > Cx^2 for all sufficiently large x, contradicting the hypothesis f(x) \leq Cx^2. This contradiction establishes that \int_0^{\infty} \frac{1}{f'(x)}dx = \infty. \Box

Remark: We also observe that the exponent 2 in the bound f(x) \leq C x^2 is sharp: if f(x) = (1 + x)^{2 + \epsilon} for some \epsilon > 0 (so that f(x) \leq Cx^{2+\epsilon} for x \geq 1, but f(x) \leq Cx^2 fails), then \int_0^{\infty} \frac{1}{f'(x)}dx < \infty. KKK may want to think about how this exercise relates to isoperimetric inequalities.
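A quick numerical check of this remark (my own addition; \epsilon = 1/2 and the cutoffs are arbitrary choices): the partial integrals of 1/f' stabilize for f(x) = (1 + x)^{2 + \epsilon}, but keep growing for f(x) = x^2, which satisfies the hypothesis of the problem.

    import numpy as np

    eps = 0.5
    # f(x) = (1 + x)^(2 + eps): satisfies f(x) <= C x^(2 + eps) for x >= 1, but not f(x) <= C x^2
    fprime_sharp = lambda x: (2 + eps) * (1 + x) ** (1 + eps)
    # f(x) = x^2: satisfies f(x) <= C x^2
    fprime_quad = lambda x: 2 * x

    for R in [10.0, 100.0, 1000.0, 10000.0]:
        x = np.arange(0.005, R, 0.01)                  # midpoint Riemann sum on (0, R)
        I_sharp = np.sum(1.0 / fprime_sharp(x)) * 0.01
        I_quad = np.sum(1.0 / fprime_quad(x)) * 0.01
        print(R, I_sharp, I_quad)
    # I_sharp approaches 1/(eps * (2 + eps)) = 0.8, while I_quad grows without bound.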


8 Responses to Exercises in real analysis

  1. Ken Leung says:

    I think there are some typos in the chain of inequalities after (Fundamental Theorem of Calculus) in Q3.

  2. Leonard Wong says:

    Corrected! Common typos when typing indefinite integrals. Thank you.

    • KKK says:

      I think the third term in this chain of inequalities is a typo. More precisely, I think the correct one should be:
      \displaystyle \int_0 ^x f'(t)dt \geq \int_ a^x f'(t)dt \geq \int_{[a,x]\cap \{f'\geq\frac{1}{\lambda}\}} f'(t)dt \geq \frac{1}{\lambda} \left( (x-a)- \frac{\epsilon}{\lambda}\right).

      And also, a very minor mistake: after maximizing the function (in \lambda) \frac{1}{\lambda} \left( (x-a)- \frac{\epsilon}{\lambda}\right), we should have f(x)-f(0)\geq\frac{(x-a)^2}{4\epsilon}. This of course doesn’t affect your conclusion.

  3. Edward Fan says:

    I have thought of a more elementary proof of question 3; see if I am correct.
    Fix any a > 1. For x > a, by the Cauchy-Schwarz inequality,

    (x-a)^2 = (\int^x_a dt )^2 \leq (\int^x_a \sqrt{f'(t)}^2dt) (\int^x_a \frac{1}{\sqrt{f'(t)}^2} dt) = (f(x) - f(a) ) \int^x_a \frac{1}{f'(t)} dt.

    Since f' > 0, f is strictly increasing and f(x) - f(a) > 0 for x > a, and so the above inequality can be rewritten as

    \int^x_a \frac{1}{f'(t)} dt \geq \frac{(x-a)^2}{f(x)-f(a)}.

    Now since f(x) \leq C x^2 for x \geq 1, for sufficiently large x > a we have Cx^2 - f(a) > 0 and

    \int^x_a \frac{1}{f'(t)} dt \geq \frac{(x-a)^2}{Cx^2 -f(a)},

    Now take x \rightarrow \infty, we have

    \int^\infty_a \frac{1}{f'(t)} dt \geq \frac{1}{C}.

    Now take a \rightarrow \infty, we have

    \displaystyle\lim_{a \rightarrow \infty} \int^\infty_a \frac{1}{f'(t)} dt \geq \frac{1}{C} >0.

    The result then follows from the hint.

    [Nice proof. Hope you don’t mind me doing some typesetting in your comment \underset{\smile}{\wedge\;\wedge} -KKK]

  4. Edward Fan says:

    Sorry for the typo, the last formula is
    \lim_{a \rightarrow \infty} \int^\infty_a \frac{1}{f'(t)} dt \geq \frac{1}{C} >0.

  5. paullailai says:

    Q1 : the Theorem should hold true if K\in L^2(X\times X).
    This is called the Hilbert-Schmidt integral operator.

    Q2: I didn’t think about it seriously, but is it the Hausdorff metric?

    • KKK says:

      Paullai: for Q2, this metric is equivalent to the Hausdorff distance:
      \displaystyle d_H(A,B)= \max \{\sup_{a\in A} \inf_{b\in B}|a-b|,  \sup_{b\in B} \inf_{a\in A}|a-b|\},

      by the elementary inequality \max\{a,b\}\leq a+b\leq 2\max\{a,b\} for a,b\geq0.

  6. KKK says:

    I haven’t read your solution of Q2(b) in detail (and I don’t know dyadic intervals), but can we do it this way?

    Let S be the collection of all subsets of \mathbb{R} which consist of only finitely many rational numbers. Clearly S is countable. Given a compact K and \epsilon>0, we then cover K by finitely many balls of radius \epsilon, whose centers are in K. We choose in each ball a rational point and let A be the collection of all such points, then it is easy to see that d(A,K)<3\epsilon.
