Thursday 2 June 2011

Cotlar-Stein lemma

A basic problem in harmonic analysis (as well as in linear algebra, random matrix theory, and high-dimensional geometry) is to estimate the operator norm {\|T\|_{op}} of a linear map {T: H \rightarrow H'} between two Hilbert spaces, which we will take to be complex for sake of discussion. Even the finite-dimensional case {T: {\bf C}^m \rightarrow {\bf C}^n} is of interest, as this operator norm is the same as the largest singular value {\sigma_1(A)} of the {n \times m} matrix {A} associated to {T}.

In general, this operator norm is hard to compute precisely, except in special cases. One such special case is that of a diagonal operator, such as that associated to an {n \times n} diagonal matrix {D = \hbox{diag}(\lambda_1,\ldots,\lambda_n)}. In this case, the operator norm is simply the supremum norm of the diagonal coefficients: \displaystyle \|D\|_{op} = \sup_{1 \leq i \leq n} |\lambda_i|. \ \ \ \ \ (1)
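Purely as a numerical sanity check of (1) (not needed for anything below), one can compare the two sides in numpy; the test diagonal here is of course arbitrary:

    import numpy as np

    lam = np.array([3.0, -1.0 + 2.0j, 0.5])   # diagonal coefficients lambda_i
    D = np.diag(lam)

    # numpy's matrix 2-norm is the operator (spectral) norm
    print(np.linalg.norm(D, 2))   # 3.0 (up to rounding)
    print(np.abs(lam).max())      # 3.0, in agreement with (1)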

A variant of (1) is Schur’s test, which for simplicity we will phrase in the setting of finite-dimensional operators {T: {\bf C}^m \rightarrow {\bf C}^n} given by a matrix {A = (a_{ij})_{1 \leq i \leq n; 1 \leq j \leq m}} via the usual formula

\displaystyle T (x_j)_{j=1}^m := ( \sum_{j=1}^m a_{ij} x_j )_{i=1}^n.

A simple version of this test is as follows: if all the absolute row sums and column sums of {A} are bounded by some constant {M}, thus \displaystyle \sum_{j=1}^m |a_{ij}| \leq M \ \ \ \ \ (2)

for all {1 \leq i \leq n} and \displaystyle \sum_{i=1}^n |a_{ij}| \leq M \ \ \ \ \ (3)

for all {1 \leq j \leq m}, then \displaystyle \|T\|_{op} = \|A\|_{op} \leq M \ \ \ \ \ (4)

(note that this generalises (the upper bound in) (1).) Indeed, to see (4), it suffices by duality and homogeneity to show that \displaystyle |\sum_{i=1}^n (\sum_{j=1}^m a_{ij} x_j) y_i| \leq M

whenever {(x_j)_{j=1}^m} and {(y_i)_{i=1}^n} are sequences with {\sum_{j=1}^m |x_j|^2 = \sum_{i=1}^n |y_i|^2 = 1}; but this easily follows from the arithmetic mean-geometric mean inequality \displaystyle |a_{ij} x_j y_i| \leq \frac{1}{2} |a_{ij}| |x_j|^2 + \frac{1}{2} |a_{ij}| |y_i|^2

and (2), (3).
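For readers who like to experiment, here is a minimal numpy sketch of Schur's test (4) on an arbitrary random matrix; the names row_sums, col_sums, and M below are just ad hoc labels for the quantities in (2), (3):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 30)) + 1j * rng.standard_normal((40, 30))

    row_sums = np.abs(A).sum(axis=1)   # the sums in (2), one per row i
    col_sums = np.abs(A).sum(axis=0)   # the sums in (3), one per column j
    M = max(row_sums.max(), col_sums.max())

    print(np.linalg.norm(A, 2) <= M)   # True: the operator norm is at most M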

Schur’s test (4) (and its many generalisations to weighted situations, or to Lebesgue or Lorentz spaces) is particularly useful for controlling operators in which the role of oscillation (as reflected in the phase of the coefficients {a_{ij}}, as opposed to just their magnitudes {|a_{ij}|}) is not decisive. However, it is of limited use in situations that involve a lot of cancellation. For this, a different test, known as the Cotlar-Stein lemma, is much more flexible and powerful. It can be viewed in a sense as a non-commutative variant of Schur’s test (4) (or of (1)), in which the scalar coefficients {\lambda_i} or {a_{ij}} are replaced by operators instead.

To illustrate the basic flavour of the result, let us return to the bound (1), and now consider instead a block-diagonal matrix \displaystyle A = \begin{pmatrix} \Lambda_1 & 0 & \ldots & 0 \\ 0 & \Lambda_2 & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & \Lambda_n \end{pmatrix} \ \ \ \ \ (5)

where each {\Lambda_i} is now an {m_i \times m_i} matrix, and so {A} is an {m \times m} matrix with {m := m_1 + \ldots + m_n}. Then we have \displaystyle \|A\|_{op} = \sup_{1 \leq i \leq n} \|\Lambda_i\|_{op}. \ \ \ \ \ (6)

Indeed, the lower bound is trivial (as can be seen by testing {A} on vectors which are supported on the {i^{th}} block of coordinates), while to establish the upper bound, one can make use of the orthogonal decomposition \displaystyle {\bf C}^m \equiv \bigoplus_{i=1}^n {\bf C}^{m_i} \ \ \ \ \ (7)

to decompose an arbitrary vector {x \in {\bf C}^m} as

\displaystyle x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}

with {x_i \in {\bf C}^{m_i}}, in which case we have \displaystyle Ax = \begin{pmatrix} \Lambda_1 x_1 \\ \Lambda_2 x_2 \\ \vdots \\ \Lambda_n x_n \end{pmatrix}

and the upper bound in (6) then follows from a simple computation.
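Again purely as an illustration, one can assemble a block-diagonal matrix in numpy (say via scipy.linalg.block_diag) and compare the two sides of (6) directly; the block sizes below are arbitrary:

    import numpy as np
    from scipy.linalg import block_diag

    rng = np.random.default_rng(1)
    blocks = [rng.standard_normal((m, m)) for m in (3, 5, 2)]   # the Lambda_i
    A = block_diag(*blocks)

    lhs = np.linalg.norm(A, 2)
    rhs = max(np.linalg.norm(L, 2) for L in blocks)
    print(np.isclose(lhs, rhs))   # True: ||A||_op = sup_i ||Lambda_i||_op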

The operator {T} associated to the matrix {A} in (5) can be viewed as a sum {T = \sum_{i=1}^n T_i}, where each {T_i} corresponds to the {\Lambda_i} block of {A}, in which case (6) can also be written as \displaystyle \|T\|_{op} = \sup_{1 \leq i \leq n} \|T_i\|_{op}. \ \ \ \ \ (8)

When {n} is large, this is a significant improvement over the triangle inequality, which merely gives

\displaystyle \|T\|_{op} \leq \sum_{1 \leq i \leq n} \|T_i\|_{op}.

The reason for this gain can ultimately be traced back to the “orthogonality” of the {T_i}; that they “occupy different columns” and “different rows” of the range and domain of {T}. This is obvious when viewed in the matrix formalism, but can also be described in the more abstract Hilbert space operator formalism via the identities \displaystyle T_i^* T_j = 0 \ \ \ \ \ (9)

and \displaystyle T_i T_j^* = 0 \ \ \ \ \ (10)

whenever {i \neq j}. (The first identity asserts that the ranges of the {T_i} are orthogonal to each other, and the second asserts that the coranges of the {T_i} (the ranges of the adjoints {T_i^*}) are orthogonal to each other.) By replacing (7) with a more abstract orthogonal decomposition into these ranges and coranges, one can in fact deduce (8) directly from (9) and (10).
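One concrete way to realise the {T_i} as operators on all of {{\bf C}^m} is to keep the {\Lambda_i} block of (5) and zero out everything else; the following numpy sketch (whose only assumption beyond the text is that particular embedding) checks (9) and (10) numerically:

    import numpy as np
    from scipy.linalg import block_diag

    rng = np.random.default_rng(2)
    blocks = [rng.standard_normal((m, m)) for m in (3, 5, 2)]

    def component(i):
        # T_i: keep the i-th diagonal block, zero out all the others
        return block_diag(*[B if k == i else np.zeros_like(B)
                            for k, B in enumerate(blocks)])

    T = [component(i) for i in range(len(blocks))]
    print(np.allclose(T[0].conj().T @ T[1], 0))   # (9): T_i^* T_j = 0 for i != j
    print(np.allclose(T[0] @ T[1].conj().T, 0))   # (10): T_i T_j^* = 0 for i != j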

The Cotlar-Stein lemma is an extension of this observation to the case where the {T_i} are merely almost orthogonal rather than orthogonal, in a manner somewhat analogous to how Schur’s test (partially) extends (1) to the non-diagonal case. Specifically, we have

Lemma 1 (Cotlar-Stein lemma) Let {T_1,\ldots,T_n: H \rightarrow H'} be a finite sequence of bounded linear operators from one Hilbert space {H} to another {H'}, obeying the bounds \displaystyle \sum_{j=1}^n \| T_i T_j^* \|_{op}^{1/2} \leq M \ \ \ \ \ (11)

and \displaystyle \sum_{j=1}^n \| T_i^* T_j \|_{op}^{1/2} \leq M \ \ \ \ \ (12)

for all {i=1,\ldots,n} and some {M > 0} (compare with (2), (3)). Then one has \displaystyle \| \sum_{i=1}^n T_i \|_{op} \leq M. \ \ \ \ \ (13)

Note from the basic {TT^*} identity \displaystyle \|T\|_{op} = \|TT^* \|_{op}^{1/2} = \|T^* T\|_{op}^{1/2} \ \ \ \ \ (14)

that the hypothesis (11) (or (12)) already gives the bound \displaystyle \|T_i\|_{op} \leq M \ \ \ \ \ (15)

on each component {T_i} of {T}, which by the triangle inequality gives the inferior bound

\displaystyle \| \sum_{i=1}^n T_i \|_{op} \leq nM;

the point of the Cotlar-Stein lemma is that the dependence on {n} in this bound is eliminated in (13), which in particular makes the bound suitable for extension to the limit {n \rightarrow \infty} (see Remark 1 below).
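To get a quantitative feel for the gain over the triangle inequality, here is a small numpy experiment in the genuinely orthogonal block-diagonal setting of (5), where the cross terms in (11), (12) vanish and the bound (13) is in fact attained; the number and size of the blocks below are arbitrary test choices:

    import numpy as np
    from scipy.linalg import block_diag

    rng = np.random.default_rng(3)
    blocks = [rng.standard_normal((4, 4)) for _ in range(10)]
    T = [block_diag(*[B if k == i else np.zeros_like(B) for k, B in enumerate(blocks)])
         for i in range(len(blocks))]

    def op(X):
        return np.linalg.norm(X, 2)   # operator (spectral) norm

    # a valid M for both (11) and (12)
    M = max(max(sum(op(Ti @ Tj.conj().T) ** 0.5 for Tj in T) for Ti in T),
            max(sum(op(Ti.conj().T @ Tj) ** 0.5 for Tj in T) for Ti in T))

    print(op(sum(T)))                # the true norm of T = sum_i T_i
    print(M)                         # Cotlar-Stein bound (13): equal here, up to rounding
    print(sum(op(Ti) for Ti in T))   # triangle inequality bound: roughly n times larger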

The Cotlar-Stein lemma was first established by Cotlar in the special case of commuting self-adjoint operators, and then independently by Cotlar and Stein in full generality, with the proof appearing in a subsequent paper of Knapp and Stein.

The Cotlar-Stein lemma is often useful in controlling operators such as singular integral operators or pseudo-differential operators {T} which “do not mix scales together too much”, in that such operators map functions “that oscillate at a given scale {2^{-i}}” to functions that still mostly oscillate at the same scale {2^{-i}}. In that case, one can often split {T} into components {T_i} which essentially capture the scale {2^{-i}} behaviour, and understanding {L^2} boundedness properties of {T} then reduces to establishing the boundedness of the simpler operators {T_i} (and establishing sufficient decay in products such as {T_i^* T_j} or {T_i T_j^*} when {i} and {j} are separated from each other). In some cases, one can use Fourier-analytic tools such as Littlewood-Paley projections to generate the {T_i}, but the true power of the Cotlar-Stein lemma comes from situations in which the Fourier transform is not suitable, such as when one has a complicated domain (e.g. a manifold or a non-abelian Lie group), or very rough coefficients (which would then have a badly behaved Fourier transform). One can then select the decomposition {T = \sum_i T_i} in a fashion that is tailored to the particular operator {T}, and is not necessarily dictated by Fourier-analytic considerations.

Once one is in the almost orthogonal setting, as opposed to the genuinely orthogonal setting, the previous arguments based on orthogonal projection seem to fail completely. Instead, the proof of the Cotlar-Stein lemma proceeds via an elegant application of the tensor power trick (or perhaps more accurately, the power method), in which the operator norm of {T} is understood through the operator norm of a large power of {T} (or more precisely, of its self-adjoint square {TT^*} or {T^* T}). Indeed, from an iteration of (14) we see that for any natural number {N}, one has \displaystyle \|T\|_{op}^{2N} = \| (TT^*)^N \|_{op}. \ \ \ \ \ (16)
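Both (14) and (16) are exact identities, and can be checked numerically on arbitrary test data; a minimal sketch:

    import numpy as np

    rng = np.random.default_rng(4)
    T = rng.standard_normal((30, 40)) + 1j * rng.standard_normal((30, 40))
    S = T @ T.conj().T                   # the self-adjoint square TT^*

    def op(X):
        return np.linalg.norm(X, 2)

    for N in (1, 2, 5):
        # (16): ||T||_op^{2N} = ||(TT^*)^N||_op, with no loss for any N
        print(np.isclose(op(T) ** (2 * N), op(np.linalg.matrix_power(S, N))))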

To estimate the right-hand side of (16), we expand {(TT^*)^N} as a sum of products {T_{i_1} T_{j_1}^* T_{i_2} T_{j_2}^* \ldots T_{i_N} T_{j_N}^*} and apply the triangle inequality to bound it by \displaystyle \sum_{i_1,j_1,\ldots,i_N,j_N \in \{1,\ldots,n\}} \| T_{i_1} T_{j_1}^* T_{i_2} T_{j_2}^* \ldots T_{i_N} T_{j_N}^* \|_{op}. \ \ \ \ \ (17)

Recall that when we applied the triangle inequality directly to {T}, we lost a factor of {n} in the final estimate; it will turn out that we will lose a similar factor here, but this factor will eventually be attenuated into nothingness by the tensor power trick.

To bound (17), we use the basic inequality {\|ST\|_{op} \leq \|S\|_{op} \|T\|_{op}} in two different ways. If we group the product {T_{i_1} T_{j_1}^* T_{i_2} T_{j_2}^* \ldots T_{i_N} T_{j_N}^*} in pairs, we can bound the summand of (17) by

\displaystyle \| T_{i_1} T_{j_1}^* \|_{op} \ldots \| T_{i_N} T_{j_N}^* \|_{op}.

On the other hand, we can group the product by pairs in another way, to obtain the bound of \displaystyle \| T_{i_1} \|_{op} \| T_{j_1}^* T_{i_2} \|_{op} \ldots \| T_{j_{N-1}}^* T_{i_N}\|_{op} \| T_{j_N}^* \|_{op}.

We bound {\| T_{i_1} \|_{op}} and {\| T_{j_N}^* \|_{op}} crudely by {M} using (15). Taking the geometric mean of the above bounds, we can thus bound (17) by \displaystyle M \sum_{i_1,j_1,\ldots,i_N,j_N \in \{1,\ldots,n\}} \| T_{i_1} T_{j_1}^* \|_{op}^{1/2} \| T_{j_1}^* T_{i_2} \|_{op}^{1/2} \ldots \| T_{j_{N-1}}^* T_{i_N}\|_{op}^{1/2} \| T_{i_N} T_{j_N}^* \|_{op}^{1/2}.

If we then sum this series first in {j_N}, then in {i_N}, then moving back all the way to {i_1}, using (11) and (12) alternately, we obtain a final bound of \displaystyle n M^{2N}

for (16). Taking {2N^{th}} roots, we obtain \displaystyle \|T\|_{op} \leq n^{1/(2N)} M.

Sending {N \rightarrow \infty}, we obtain the claim.
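It may be worth pausing to see numerically just how quickly the loss {n^{1/(2N)}} is attenuated; even for a million summands the factor is already negligible for quite moderate {N} (a trivial computation, included only for concreteness):

    n = 10 ** 6
    for N in (1, 10, 100, 1000):
        print(N, n ** (1 / (2 * N)))
    # 1 1000.0
    # 10 1.995...
    # 100 1.071...
    # 1000 1.0069...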

Remark 1 As observed by Comech, the Cotlar-Stein lemma can be extended to infinite sums {\sum_{i=1}^\infty T_i} (with the obvious changes to the hypotheses (11), (12)). Indeed, one can show that for any {f \in H}, the sum {\sum_{i=1}^\infty T_i f} is unconditionally convergent in {H'} (and furthermore has bounded {2}-variation), and the resulting operator {\sum_{i=1}^\infty T_i} is a bounded linear operator with an operator norm bound of {M}.

Remark 2 If we specialise to the case where all the {T_i} are equal, we see that the bound in the Cotlar-Stein lemma is sharp, at least in this case. Thus we see how the tensor power trick can convert an inefficient argument, such as that obtained using the triangle inequality or crude bounds such as (15), into an efficient one.
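This sharpness is also easy to confirm in numpy: if the {T_i} are {n} copies of a single operator {T_1}, then the quantity {M} in (11), (12) is exactly {n \|T_1\|_{op}}, which is precisely the norm of the sum; the test operator and the value of {n} below are arbitrary:

    import numpy as np

    rng = np.random.default_rng(5)
    T1 = rng.standard_normal((6, 4))
    n = 7

    # M from (11)/(12) when all T_i equal T1: n copies of ||T1 T1^*||^{1/2} = ||T1||
    M = sum(np.linalg.norm(T1 @ T1.conj().T, 2) ** 0.5 for _ in range(n))
    print(np.isclose(np.linalg.norm(n * T1, 2), M))   # True: (13) is attained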

Remark 3 One can prove Schur’s test by a similar method. Indeed, starting from the inequality \displaystyle \|A\|_{op}^{2N} \leq \hbox{tr}( (AA^*)^N )

(which follows easily from the singular value decomposition), we can bound {\|A\|_{op}^{2N}} by \displaystyle \sum_{i_1,\ldots,j_N \in \{1,\ldots,n\}} a_{i_1,j_1} \overline{a_{j_1,i_2}} \ldots a_{i_N,j_N} \overline{a_{j_N,i_1}}.

Taking absolute values, estimating the final factor {\overline{a_{i_1 j_N}}} crudely by {M} (any single entry of {A} is at most {M} in magnitude, by (2)), and then repeatedly summing the remaining indices one at a time as before, using (2) and (3) alternately and summing the last index {i_1} freely, we obtain \displaystyle \|A\|_{op}^{2N} \leq n M^{2N}

and the claim follows from the tensor power trick as before. On the other hand, in the converse direction, I do not know of any way to prove the Cotlar-Stein lemma that does not basically go through the tensor power argument.
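The attenuation in Remark 3 can likewise be observed numerically: the quantity {\hbox{tr}((AA^*)^N)^{1/(2N)}} exceeds {\|A\|_{op}} by a factor of at most {n^{1/(2N)}}, and so converges to the operator norm from above as {N} grows. A short sketch on arbitrary test data:

    import numpy as np

    rng = np.random.default_rng(6)
    A = rng.standard_normal((50, 50))
    S = A @ A.conj().T

    print(np.linalg.norm(A, 2))      # the exact operator norm, for comparison
    for N in (1, 2, 5, 20):
        est = np.trace(np.linalg.matrix_power(S, N)) ** (1 / (2 * N))
        print(N, est)                # decreases towards ||A||_op as N grows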
