In a few weeks, Princeton University will host a conference in Analysis and Applications in honour of the 80th birthday of Elias Stein (though, technically, Eli’s 80th birthday was actually in January). As one of Eli’s students, I was originally scheduled to be one of the speakers at this conference; but unfortunately, for family reasons I will be unable to attend. In lieu of speaking at this conference, I have decided to devote some space on this blog for this month to present some classic results of Eli from his many decades of work in harmonic analysis, ergodic theory, several complex variables, and related topics. My choice of selections here will be a personal and idiosyncratic one; the results I present are not necessarily the “best” or “deepest” of his results, but are ones that I find particularly elegant and appealing. (There will also inevitably be some overlap here with Charlie Fefferman’s article “Selected theorems by Eli Stein”, which not coincidentally was written for Stein’s 60th birthday conference in 1991.)
In this post I would like to describe one of Eli Stein’s very first results that is still used extremely widely today, namely his interpolation theorem from 1956 (and its refinement, the Fefferman-Stein interpolation theorem from 1972). This is a deceptively innocuous, yet remarkably powerful, generalisation of the classic Riesz-Thorin interpolation theorem, which uses methods from complex analysis (and in particular, the Lindelöf theorem or the Phragmén-Lindelöf principle) to show that if a linear operator $T$ from one ($\sigma$-finite) measure space to another obeys the estimates

$\displaystyle \| Tf \|_{L^{q_0}} \leq A_0 \| f \|_{L^{p_0}} \ \ \ \ \ (1)$

for all $f \in L^{p_0}$ and

$\displaystyle \| Tf \|_{L^{q_1}} \leq A_1 \| f \|_{L^{p_1}} \ \ \ \ \ (2)$

for all $f \in L^{p_1}$, where $1 \leq p_0,p_1,q_0,q_1 \leq \infty$ and $A_0, A_1 > 0$, then one automatically also has the interpolated estimates

$\displaystyle \| Tf \|_{L^{q_\theta}} \leq A_0^{1-\theta} A_1^\theta \| f \|_{L^{p_\theta}} \ \ \ \ \ (3)$

for all $f \in L^{p_\theta}$ and $0 \leq \theta \leq 1$, where the quantities $p_\theta, q_\theta$ are defined by the formulae

$\displaystyle \frac{1}{p_\theta} = \frac{1-\theta}{p_0} + \frac{\theta}{p_1} \qquad \hbox{and} \qquad \frac{1}{q_\theta} = \frac{1-\theta}{q_0} + \frac{\theta}{q_1}.$
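In concrete terms, the exponent numerology of (3) can be scripted; the snippet below (a sketch, with helper names of my own choosing) computes the interpolated exponents and operator norm bound, including the $L^2 \rightarrow L^2$ / $L^1 \rightarrow L^\infty$ pair that will reappear later in this post.

```python
# Sketch of the Riesz-Thorin exponent numerology: 1/p_theta = (1-theta)/p0 + theta/p1.
# Convention: an infinite exponent is represented by math.inf (so 1/inf = 0).
import math

def interpolate(p0, p1, theta):
    """Return p_theta with 1/p_theta = (1-theta)/p0 + theta/p1."""
    inv = (1 - theta) / p0 + theta / p1
    return math.inf if inv == 0 else 1 / inv

def interpolated_bound(A0, A1, theta):
    """The interpolated operator norm bound A_0^{1-theta} A_1^theta from (3)."""
    return A0 ** (1 - theta) * A1 ** theta

# Interpolating between an L^2 -> L^2 bound and an L^1 -> L^infty bound at
# theta = 1/3 gives the L^{3/2} -> L^3 exponent pair used later in this post.
p_theta = interpolate(2, 1, 1/3)          # 1.5
q_theta = interpolate(2, math.inf, 1/3)   # 3.0
```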
The Riesz-Thorin theorem is already quite useful (it gives, for instance, by far the quickest proof of the Hausdorff-Young inequality for the Fourier transform, to name just one application), but it requires the same linear operator $T$ to appear in (1), (2), and (3). Eli Stein realised, though, that due to the complex-analytic nature of the proof of the Riesz-Thorin theorem, it was possible to allow different linear operators to appear in (1), (2), (3), so long as the dependence was analytic. A bit more precisely: if one has a family $T_s$ of operators which depend in an analytic manner on a complex variable $s$ in the strip $\{ s \in {\bf C}: 0 \leq \hbox{Re}(s) \leq 1 \}$ (thus, for any test functions $f, g$, the inner product $\langle T_s f, g \rangle$ would be analytic in $s$) which obeys some mild regularity assumptions (which are slightly technical and are omitted here), and one has the estimates

$\displaystyle \| T_{it} f \|_{L^{q_0}} \leq A_0(t) \| f \|_{L^{p_0}}$

and

$\displaystyle \| T_{1+it} f \|_{L^{q_1}} \leq A_1(t) \| f \|_{L^{p_1}}$

for all $t \in {\bf R}$ and some quantities $A_0(t), A_1(t)$ that grow at most exponentially in $t$ (actually, any growth rate significantly slower than the double-exponential $\exp(\exp(\pi |t|))$ would suffice here), then one also has the interpolated estimates

$\displaystyle \| T_\theta f \|_{L^{q_\theta}} \leq A_\theta \| f \|_{L^{p_\theta}}$

for all $0 \leq \theta \leq 1$, where $A_\theta$ is a constant depending only on $\theta$ and on the growth rate of $A_0(t), A_1(t)$.
In Fefferman’s survey, he notes that the proof of the Stein interpolation theorem can be obtained from that of the Riesz-Thorin theorem simply “by adding a single letter of the alphabet”. Indeed, the way the Riesz-Thorin theorem is proven is to study an expression of the form

$\displaystyle F(z) := \int (T f_z) g_z\ d\mu,$

where $f_z, g_z$ are functions depending on $z$ in a suitably analytic manner, for instance taking $f_z := |f|^{p_\theta (\frac{1-z}{p_0} + \frac{z}{p_1})} \hbox{sgn}(f)$ for some test function $f$, and similarly for $g_z$. If $f_z, g_z$ are chosen properly, $F$ will depend analytically on $z$ as well, and the two hypotheses (1), (2) give bounds on $F(it)$ and $F(1+it)$ for $t \in {\bf R}$ respectively. The Lindelöf theorem then gives bounds on intermediate values of $F$, such as $F(\theta)$; and the Riesz-Thorin theorem can then be deduced by a duality argument. (This is covered in many graduate real analysis texts; I myself covered it here.)
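As a quick sanity check on the standard choice $f_z := |f|^{p_\theta (\frac{1-z}{p_0} + \frac{z}{p_1})} \hbox{sgn}(f)$ (the discretisation and the specific exponents below are my own, purely for illustration): on the boundary line $\hbox{Re}(z) = 0$ one has $|f_z| = |f|^{p_\theta/p_0}$, so that $\|f_z\|_{L^{p_0}}^{p_0} = \|f\|_{L^{p_\theta}}^{p_\theta}$ uniformly in $t$, which is what makes the hypothesis (1) usable on that line.

```python
# Sketch: verify that f_z := |f|^{p_theta((1-z)/p0 + z/p1)} sgn(f) obeys
# ||f_z||_{p0}^{p0} = ||f||_{p_theta}^{p_theta} on the line Re(z) = 0.
import numpy as np

p0, p1, theta = 1.0, 2.0, 2/3                    # illustrative exponents
p_theta = 1 / ((1 - theta) / p0 + theta / p1)    # = 3/2 for these choices

rng = np.random.default_rng(0)
f = rng.standard_normal(1000)                    # discrete stand-in for a test function

def f_z(z):
    # complex power of |f|, multiplied by the sign of f
    return np.abs(f) ** (p_theta * ((1 - z) / p0 + z / p1)) * np.sign(f)

z = 1j * 0.37                                    # an arbitrary point with Re(z) = 0
lhs = np.sum(np.abs(f_z(z)) ** p0)               # ||f_z||_{p0}^{p0}
rhs = np.sum(np.abs(f) ** p_theta)               # ||f||_{p_theta}^{p_theta}
```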
The Stein interpolation theorem proceeds by instead studying the expression

$\displaystyle F(z) := \int (T_z f_z) g_z\ d\mu.$
One can then repeat the proof of the Riesz-Thorin theorem more or less verbatim to obtain the Stein interpolation theorem.
The ability to vary the operator makes the Stein interpolation theorem significantly more flexible than the Riesz-Thorin theorem. We illustrate this with the following sample result:
Proposition 1 For any (test) function $f: {\bf R}^2 \rightarrow {\bf C}$, let $Tf$ be the average of $f$ along an arc of a parabola:

$\displaystyle Tf(x_1,x_2) := \int_{\bf R} f(x_1 - t, x_2 - t^2) \psi(t)\ dt,$

where $\psi$ is a bump function supported on (say) $[-1,1]$. Then $T$ is bounded from $L^{3/2}({\bf R}^2)$ to $L^3({\bf R}^2)$, thus

$\displaystyle \| Tf \|_{L^3({\bf R}^2)} \leq C \| f \|_{L^{3/2}({\bf R}^2)}. \ \ \ \ \ (4)$
There is nothing too special here about the parabola; the same result in fact holds for convolution operators on any arc of a smooth curve with nonzero curvature (and there are many extensions to higher dimensions, to variable-coefficient operators, etc.). We will however restrict attention to the parabola for sake of exposition. One can view $Tf$ as a convolution $Tf = f * \mu$, where $\mu$ is the measure on the parabola arc $\{ (t,t^2): -1 \leq t \leq 1 \}$ defined by $\int_{{\bf R}^2} g\ d\mu := \int_{\bf R} g(t,t^2) \psi(t)\ dt$. We will also be somewhat vague about what “test function” means in this exposition in order to gloss over some minor technical details.
By testing $T$ (and its adjoint) on the indicator function of a small ball of some radius $\delta > 0$ (or of small rectangles such as $[0,\delta] \times [0,\delta^2]$), one sees that the exponents $p = 3/2$, $q = 3$ here are best possible.
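For instance, here is the back-of-the-envelope computation for the rectangle test (implicit constants ignored):

```latex
% Take f := 1_{[0,\delta] \times [0,\delta^2]}, so that
\| f \|_{L^{3/2}({\bf R}^2)} = (\delta \cdot \delta^2)^{2/3} = \delta^2 .
% For (x_1,x_2) in a set of measure \sim \delta \times \delta^2 near the origin,
% the integral defining Tf(x_1,x_2) ranges over a t-interval of length \sim \delta:
Tf \sim \delta \hbox{ on a set of measure } \sim \delta^3,
\qquad \hbox{hence} \qquad
\| Tf \|_{L^3({\bf R}^2)} \gtrsim \delta \cdot (\delta^3)^{1/3} = \delta^2 .
% Both sides of (4) are thus comparable to \delta^2, so this family of examples
% saturates (4), and improvements of the exponents fail in the limit \delta \to 0.
```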
This proposition was first proven by Littman in 1973 using the Stein interpolation theorem. To illustrate the power of this theorem, it should be noted that for almost two decades this was the only known proof of this result; a proof based on multilinear interpolation (exploiting the fact that the exponent $q = 3$ in (4) is an integer) was obtained by Oberlin, and a fully combinatorial proof was only obtained in 2008 in an unpublished note of Christ (see also the recent papers of Stovall and of Dendrinos-Laghi-Wright for further extensions of the combinatorial argument).
To motivate the Stein interpolation argument, let us first try using the Riesz-Thorin interpolation theorem. The exponent pair $(p,q) = (3/2, 3)$ is an interpolant between $(2,2)$ and $(1,\infty)$, so a first attempt to proceed here would be to establish the bounds

$\displaystyle \| Tf \|_{L^2({\bf R}^2)} \leq C \| f \|_{L^2({\bf R}^2)} \ \ \ \ \ (5)$

and

$\displaystyle \| Tf \|_{L^\infty({\bf R}^2)} \leq C \| f \|_{L^1({\bf R}^2)} \ \ \ \ \ (6)$

for all (test) functions $f$.
The bound (5) is an easy consequence of Minkowski’s integral inequality (or Young’s inequality, noting that $\mu$ is a finite measure). On the other hand, because the measure $\mu$ is not absolutely continuous, let alone arising from an $L^\infty$ function, the estimate (6) is very false. For instance, if one applies $T$ to the indicator function $1_{[0,\delta] \times [0,\delta^2]}$ for some small $\delta > 0$, then the $L^1$ norm of this indicator function is $\delta^3$, but the $L^\infty$ norm of its image under $T$ is comparable to $\delta$, contradicting (6) as one sends $\delta$ to zero.
To get around this, one first notes that there is a lot of “room” in (5) due to the smoothing properties of the measure $\mu$. Indeed, from Plancherel’s theorem one has

$\displaystyle \| f \|_{L^2({\bf R}^2)} = \| \hat f \|_{L^2({\bf R}^2)}$

and

$\displaystyle \| Tf \|_{L^2({\bf R}^2)} = \| \hat \mu \hat f \|_{L^2({\bf R}^2)}$

for all test functions $f$, where

$\displaystyle \hat f(\xi_1,\xi_2) := \int_{{\bf R}^2} f(x_1,x_2) e^{-2\pi i (x_1 \xi_1 + x_2 \xi_2)}\ dx_1 dx_2$

is the Fourier transform of $f$, and

$\displaystyle \hat \mu(\xi_1,\xi_2) := \int_{\bf R} e^{-2\pi i (t \xi_1 + t^2 \xi_2)} \psi(t)\ dt.$

It is clear that $\hat \mu$ is uniformly bounded, which already gives (5). But a standard application of the method of stationary phase reveals that one in fact has a decay estimate

$\displaystyle |\hat \mu(\xi_1,\xi_2)| \leq \frac{C}{(1+|\xi_2|)^{1/2}} \ \ \ \ \ (7)$

for some $C > 0$. This shows that $Tf$ is not just in $L^2$, but is somewhat smoother as well; in particular, one has

$\displaystyle \| D_2^{1/2} T f \|_{L^2({\bf R}^2)} \leq C \| f \|_{L^2({\bf R}^2)}$

for a (fractional) differential operator $D_2^{1/2}$ of order $1/2$ in the $x_2$ variable. (Here we adopt the usual convention that the constant $C$ is allowed to vary from line to line.)
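The decay (7) can be sanity-checked numerically. In the sketch below the bump function $\psi$ is a specific choice of mine (any smooth function supported on $[-1,1]$ would do), and $\hat\mu$ is evaluated on the $\xi_2$ axis by a Riemann sum; stationary phase predicts $|\hat\mu(0,\lambda)| \approx \psi(0)/\sqrt{2\lambda}$, so the rescaled quantity below should be roughly constant for large $\lambda$.

```python
# Numerical check of |mu-hat(0, xi_2)| <~ |xi_2|^{-1/2}, where
# mu-hat(0, lam) = int exp(-2 pi i lam t^2) psi(t) dt for a bump psi on [-1,1].
import numpy as np

def psi(t):
    # smooth bump supported on [-1,1], vanishing to all orders at t = +-1
    out = np.zeros_like(t)
    inside = np.abs(t) < 1
    out[inside] = np.exp(-1.0 / (1.0 - t[inside] ** 2))
    return out

t = np.linspace(-1, 1, 1 << 17)          # fine grid; resolves the oscillation

def mu_hat(lam):
    integrand = np.exp(-2j * np.pi * lam * t ** 2) * psi(t)
    return np.sum(integrand) * (t[1] - t[0])   # Riemann sum

# If (7) holds with square-root decay, these rescaled values are ~ psi(0)/sqrt(2).
scaled = [abs(mu_hat(lam)) * np.sqrt(lam) for lam in (50, 100, 200, 400)]
```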
Using the numerology of the Stein interpolation theorem, this suggests that if we can somehow obtain the counterbalancing estimate

$\displaystyle \| D_2^{-1} T f \|_{L^\infty({\bf R}^2)} \leq C \| f \|_{L^1({\bf R}^2)}$

for some differential operator $D_2^{-1}$ of order $-1$ in the $x_2$ variable, then we should be able to interpolate and obtain the desired estimate (4). And indeed, we can take an antiderivative in the $x_2$ direction, giving the operator

$\displaystyle D_2^{-1} T f(x_1,x_2) := \int_{\bf R} \int_0^\infty f(x_1 - t, x_2 - t^2 - u)\ du\ \psi(t)\ dt,$

and a simple change of variables does indeed verify that this operator is bounded from $L^1({\bf R}^2)$ to $L^\infty({\bf R}^2)$.
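Explicitly, the change of variables is $(t,u) \mapsto (y_1,y_2) := (x_1-t, x_2-t^2-u)$, which has Jacobian $1$:

```latex
% With y_1 := x_1 - t and y_2 := x_2 - t^2 - u one has dt\,du = dy_1\,dy_2, and
% the constraint u \geq 0 becomes x_2 - (x_1-y_1)^2 - y_2 \geq 0, so
D_2^{-1} T f(x_1,x_2)
  = \int_{{\bf R}^2} f(y_1,y_2)\, \psi(x_1-y_1)\,
    1_{x_2 - (x_1-y_1)^2 - y_2 \geq 0}\ dy_1\, dy_2 .
% The kernel is bounded in magnitude by \sup_t |\psi(t)|, giving
\| D_2^{-1} T f \|_{L^\infty({\bf R}^2)}
  \leq \Big( \sup_t |\psi(t)| \Big)\, \| f \|_{L^1({\bf R}^2)} .
```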
Unfortunately, the above argument is not rigorous, because we need an analytic family of operators in order to invoke the Stein interpolation theorem, rather than just the two operators $D_2^{1/2} T$ and $D_2^{-1} T$. This turns out to require some slightly tricky complex analysis: after some trial and error, one finds that one can use the family $T_s$ defined for $\hbox{Re}(s) > 0$ by the formula

$\displaystyle T_s f(x_1,x_2) := \frac{1}{\Gamma(s)} \int_{\bf R} \int_0^\infty f(x_1-t, x_2-t^2-u) u^{s-1}\ du\ \psi(t)\ dt,$

where $\Gamma$ is the Gamma function, and extended to the rest of the complex plane by analytic continuation. The Gamma factor is a technical one, needed to compensate for the divergence of the weight $u^{s-1}$ as $\hbox{Re}(s)$ approaches $0$; it also makes the Fourier representation of $T_s f$ cleaner (indeed, $T_s$ is morally $D_2^{-s} T$). It is then easy to verify the estimates
$\displaystyle \| T_{1+it} f \|_{L^\infty({\bf R}^2)} \leq C(t) \| f \|_{L^1({\bf R}^2)} \ \ \ \ \ (8)$

for all $t \in {\bf R}$ (with $C(t)$ growing at a controlled rate in $t$), while from Fourier analysis one also can show that

$\displaystyle \| T_{-1/2+it} f \|_{L^2({\bf R}^2)} \leq C(t) \| f \|_{L^2({\bf R}^2)} \ \ \ \ \ (9)$

for all $t \in {\bf R}$. Finally, one can verify that $T_0 = T$, and (4) then follows from the Stein interpolation theorem (applied after an affine reparametrisation of the strip $\{ -1/2 \leq \hbox{Re}(s) \leq 1 \}$ to the standard strip $\{ 0 \leq \hbox{Re}(s) \leq 1 \}$).
It is instructive to compare this result with what can be obtained by real-variable methods. One can perform a smooth dyadic partition of unity in the $\xi_2$ variable,

$\displaystyle 1 = \hat \varphi(\xi_2) + \sum_{j=1}^\infty \hat \psi(\xi_2/2^j),$

for some bump function $\varphi$ (of total mass $1$, so that $\hat \varphi(0) = 1$) and bump function $\psi$ (of total mass zero, so that $\hat \psi(0) = 0$), which (formally, at least) leads to the decomposition

$\displaystyle Tf = T_0 f + \sum_{j=1}^\infty T_j f,$

where $T_0$ is a harmless smoothing operator (which certainly maps $L^{3/2}({\bf R}^2)$ to $L^3({\bf R}^2)$) and $T_j f := f * \mu_j$, with $\mu_j$ the convolution of $\mu$ with $2^j \psi(2^j \cdot)$ in the $x_2$ variable (so that $\mu_j$ has frequencies $|\xi_2| \sim 2^j$).
It is not difficult to show that

$\displaystyle \| T_j f \|_{L^\infty({\bf R}^2)} \leq C 2^j \| f \|_{L^1({\bf R}^2)}, \ \ \ \ \ (10)$

while a Fourier-analytic computation (using (7)) reveals that

$\displaystyle \| T_j f \|_{L^2({\bf R}^2)} \leq C 2^{-j/2} \| f \|_{L^2({\bf R}^2)}, \ \ \ \ \ (11)$

which interpolates (by, say, the Riesz-Thorin theorem, or the real-variable Marcinkiewicz interpolation theorem) to

$\displaystyle \| T_j f \|_{L^3({\bf R}^2)} \leq C \| f \|_{L^{3/2}({\bf R}^2)},$

which is close to (4). Unfortunately, we still have to sum in $j$, and this creates a “logarithmic divergence” that just barely fails to recover (4). (With a slightly more refined real interpolation argument, one can at least obtain a restricted weak-type estimate from $L^{3/2,1}({\bf R}^2)$ to $L^{3,\infty}({\bf R}^2)$ this way, but one can concoct abstract counterexamples to show that the estimates (10), (11) are by themselves insufficient to obtain an $L^{3/2}$ to $L^3$ bound on $T$.)
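The bookkeeping behind this divergence is easily made explicit (a toy computation, with all constants normalised to $1$): each dyadic piece contributes the same interpolated constant, so the cost of summing the pieces grows linearly in the number of scales in play.

```python
# Toy version of the summation problem: each dyadic piece obeys
#   ||T_j f||_inf <= 2^j ||f||_1    and    ||T_j f||_2 <= 2^{-j/2} ||f||_2,
# and interpolating at theta = 1/3 (weight 1/3 on the L^1 -> L^infty bound,
# weight 2/3 on the L^2 -> L^2 bound) gives the L^{3/2} -> L^3 constant per piece.
theta = 1 / 3

def piece_constant(j):
    # interpolated L^{3/2} -> L^3 bound for the j-th dyadic piece
    return (2.0 ** j) ** theta * (2.0 ** (-j / 2)) ** (1 - theta)

consts = [piece_constant(j) for j in range(40)]

# Every piece contributes constant 1, so summing the J pieces relevant at
# frequency scale 2^J costs a factor of J: the "logarithmic divergence" above.
total = sum(consts)   # grows linearly in the number of scales
```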
The key difference is that the inputs (8), (9) used in the Stein interpolation theorem are more powerful than the inputs (10), (11) in the real-variable method. Indeed, (8) is roughly equivalent to the assertion that

$\displaystyle \| \sum_{j=1}^\infty 2^{-j} 2^{-ijt} T_j f \|_{L^\infty({\bf R}^2)} \leq C(t) \| f \|_{L^1({\bf R}^2)}$

for all $t \in {\bf R}$, and (9) is similarly equivalent to the assertion that

$\displaystyle \| \sum_{j=1}^\infty 2^{j/2} 2^{-ijt} T_j f \|_{L^2({\bf R}^2)} \leq C(t) \| f \|_{L^2({\bf R}^2)}$

for all $t \in {\bf R}$.
A Fourier averaging argument (in the $t$ variable) shows that these estimates imply (10) and (11), but not conversely. If one unpacks the proof of Lindelöf’s theorem (which is ultimately powered by an integral representation, such as that provided by the Cauchy integral formula) and hence of the Stein interpolation theorem, one can interpret Stein interpolation in this case as using a clever integral representation of the quantity $\int (Tf) g$ in terms of expressions such as $T_{1+it} f_t$ and $T_{-1/2+it} f_t$, where the $f_t$ (and corresponding dual functions $g_t$) are various nonlinear transforms of $f$ (and $g$). Technically, it would then be possible to rewrite the Stein interpolation argument as a real-variable one, without explicit mention of Lindelöf’s theorem; but the proof would then look extremely contrived; the complex-analytic framework is much more natural (much as it is in analytic number theory, where the distribution of the primes is best handled by a complex-analytic study of the Riemann zeta function).
Remark 1 A useful strengthening of the Stein interpolation theorem is the Fefferman-Stein interpolation theorem, in which the endpoint spaces $L^1$ and $L^\infty$ are replaced by the Hardy space $H^1$ and the space $BMO$ of functions of bounded mean oscillation respectively. These spaces are more stable with respect to various harmonic analysis operators, such as singular integrals (and in particular, with respect to Marcinkiewicz-type multiplier operators such as the imaginary powers $(-\Delta)^{it}$, which come up frequently when attempting to use the complex method), which makes the Fefferman-Stein theorem particularly useful for controlling expressions derived from these sorts of operators.