Variants of the Schwarz lemma

Let $f$ be a holomorphic self-map of the unit disk $\mathbb{D}$. If $f(0) = 0$, then $g(z) = f(z) / z$ has a removable singularity at $0$. On $|z| = r$, $|g(z)| \leq 1 / r$, and applying the maximum principle and letting $r \to 1$, we derive $|f(z)| \leq |z|$ everywhere. In particular, if $|f(z)| = |z|$ at any interior point, then $g$ attains its maximum modulus inside the disk, so by the maximum principle $g$ is constant and $f(z) = \lambda z$, where $|\lambda| = 1$. With the singularity removed, $g(0) = f'(0)$, so again by the maximum principle, $|f'(0)| = 1$ forces $g$ to be a constant of modulus $1$. Consequently, if $f$ is not an automorphism, we cannot have $|f(z)| = |z|$ anywhere off the origin, and in that case $|f'(0)| < 1$.
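As a numeric sanity check of the lemma, here is a short sketch using a test map of my own choosing: $f(z) = z(z+1)/2$ is a holomorphic self-map of $\mathbb{D}$ with $f(0) = 0$ that is not an automorphism, so we expect $|f(z)| \leq |z|$ on the disk and $|f'(0)| < 1$ (indeed $f'(0) = 1/2$).

```python
import numpy as np

# Hypothetical example map: f(z) = z*(z+1)/2 sends the unit disk into itself,
# fixes 0, and is not an automorphism.
def f(z):
    return z * (z + 1) / 2

# Sample the disk and check the Schwarz bound |f(z)| <= |z|.
rng = np.random.default_rng(0)
z = rng.uniform(-0.99, 0.99, 2000) + 1j * rng.uniform(-0.99, 0.99, 2000)
z = z[(np.abs(z) > 1e-3) & (np.abs(z) < 0.99)]   # stay inside, away from 0
ratios = np.abs(f(z)) / np.abs(z)
print(ratios.max())          # strictly below 1

# f'(0) = 1/2 by hand; estimate it with a central finite difference.
h = 1e-6
fp0 = (f(h) - f(-h)) / (2 * h)
print(abs(fp0))              # strictly below 1, since f is not a rotation
```

The strict inequalities are exactly what the lemma predicts for a non-automorphism.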

Cauchy’s integral formula in complex analysis

I took a graduate course in complex analysis a while ago as an undergraduate. However, I did not actually understand it well at all, as attested by how quickly much of the knowledge vanished. It pleases me that now, following some intellectual maturation, relearned theorems seem to stick more permanently, with the main ideas behind the proofs clear rather than mind-disorienting, the latter of which I experienced far too much in my early days. I must have been on drugs or something back then, because frankly the way I approached certain things was quite weird, and in retrospect, I was in many ways an animal-like creature trapped within the confines of an addled consciousness, oblivious and uninhibited. Almost certainly I will never experience anything like that again. Now, I can only mentally rationalize the conscious experience of a mentally inferior creature; it cannot be experienced for real. It is almost like how an evangelical cannot imagine what it is like not to believe in God, and even goes as far as to hold the pagan in contempt. Exaltation and exhilaration were concomitant with the leap of consciousness, until not long after it established its normalcy.

Now, the last of non-mathematical writing in this post will be on the following excerpt from Grothendieck’s Récoltes et Semailles:

In those critical years I learned how to be alone. [But even] this formulation doesn’t really capture my meaning. I didn’t, in any literal sense learn to be alone, for the simple reason that this knowledge had never been unlearned during my childhood. It is a basic capacity in all of us from the day of our birth. However these three years of work in isolation [1945–1948], when I was thrown onto my own resources, following guidelines which I myself had spontaneously invented, instilled in me a strong degree of confidence, unassuming yet enduring, in my ability to do mathematics, which owes nothing to any consensus or to the fashions which pass as law….By this I mean to say: to reach out in my own way to the things I wished to learn, rather than relying on the notions of the consensus, overt or tacit, coming from a more or less extended clan of which I found myself a member, or which for any other reason laid claim to be taken as an authority. This silent consensus had informed me, both at the lycée and at the university, that one shouldn’t bother worrying about what was really meant when using a term like “volume,” which was “obviously self-evident,” “generally known,” “unproblematic,” etc….It is in this gesture of “going beyond,” to be something in oneself rather than the pawn of a consensus, the refusal to stay within a rigid circle that others have drawn around one—it is in this solitary act that one finds true creativity. All other things follow as a matter of course.

Since then I’ve had the chance, in the world of mathematics that bid me welcome, to meet quite a number of people, both among my “elders” and among young people in my general age group, who were much more brilliant, much more “gifted” than I was. I admired the facility with which they picked up, as if at play, new ideas, juggling them as if familiar with them from the cradle—while for myself I felt clumsy, even oafish, wandering painfully up an arduous track, like a dumb ox faced with an amorphous mountain of things that I had to learn (so I was assured), things I felt incapable of understanding the essentials or following through to the end. Indeed, there was little about me that identified the kind of bright student who wins at prestigious competitions or assimilates, almost by sleight of hand, the most forbidding subjects.

In fact, most of these comrades who I gauged to be more brilliant than I have gone on to become distinguished mathematicians. Still, from the perspective of thirty or thirty-five years, I can state that their imprint upon the mathematics of our time has not been very profound. They’ve all done things, often beautiful things, in a context that was already set out before them, which they had no inclination to disturb. Without being aware of it, they’ve remained prisoners of those invisible and despotic circles which delimit the universe of a certain milieu in a given era. To have broken these bounds they would have had to rediscover in themselves that capability which was their birthright, as it was mine: the capacity to be alone.

I first learned of Grothendieck, dimwit that I was, in a later stage of high school. At that time, I was still culturally under the idiotic and shallow social constraints of an American high school, and though already visibly different, I was unable to detach much from it, either intellectually or psychologically. There was an element of what I can now, in recollection and with the benefit of hindsight, characterize as a harbinger of unusual aesthetic discernment, one exercised and already vaguely sensed back then, though lacking reinforcement in social support and confidence, and most of all, in ability. For at that time, I was still very much a species in mental bondage, more often than not driven by awe as opposed to reason. In particular, I awed and despaired at many a contemporary of very similar range to myself who on the surface appeared so much more endowed and quick to grasp and compute, in an environment where judgment of an individual’s capability is dominated far more by scores and metrics than by substance, not that I had any of the latter either.

Vaguely, I recall seeing the above passage once in high school, articulated with a verbal richness that would have overwhelmed and intimidated me at the time. I could not understand how Grothendieck, a man considered by many the greatest mathematician of the 20th century, could have actually felt dumb. Though I felt very dumb myself, I never fully lost confidence, sensing a spirit in me that saw quite differently from others, one far less inclined to lose itself in “those invisible and despotic circles” than most around me. Now, for the first time, I can at least subjectively identify with Grothendieck, and perhaps I am still misinterpreting his message to some extent, though I surely feel far less at sea with respect to it now than before.

Later I had the fortune to know personally one who gave a name to this implicit phenomenon: aesthetic discernment. The notion has been met with ridicule, as the self-congratulation of one of lesser formal achievement, a concoction of a failure in self-denial. Yet on the other hand, I have witnessed that most people are so carried away in today’s excessively and artificially credentialist society that they lose sight of what is fundamentally meaningful, and sadly, those unperturbed by this ill are fewer and fewer. Finally, I have reflected on the question of what good knowledge is if too few can rightly perceive it. Science is always there, and much of it of value remains unknown to anyone who has inhabited this planet; I will conclude at that.

So, one of the theorems in that class was of course Cauchy’s integral formula, one of the most central tools in complex analysis. Formally,

Let $D$ be a bounded domain with piecewise smooth boundary. If $f(z)$ is analytic on $D$, and $f(z)$ extends smoothly to the boundary of $D$, then

$f(z) = \frac{1}{2\pi i}\int_{\partial D} \frac{f(w)}{w-z}dw,\qquad z \in D. \ \ \ \ (1)$

This theorem was actually somewhat elusive to me. I would learn it, find it deceptively obvious, and then eventually forget it, repeating this cycle. I now ask how one would conceive of this theorem. First, by continuity, the average of $f$ on a circle tends to its value at the center as the radius goes to zero. With $w = z + \epsilon e^{i\theta}$ and $dw = i\epsilon e^{i\theta}d\theta$, the $w - z$ in the denominator cancels the $\epsilon$, leaving exactly such an average of $f(z + \epsilon e^{i\theta})$. From this, we have the result when $D$ is a sufficiently small circle about $z$. Implicit in extending this is Cauchy’s integral theorem, the one which states that the integral of a holomorphic function over a closed curve is zero: by it, the integral over $\partial D$ equals the integral over a small circle about $z$, since the integrand is holomorphic in between, and so the formula extends to any bounded domain with piecewise smooth boundary.
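The circle-average picture is easy to check numerically. Below is a sketch that discretizes (1) on the unit circle for a test function of my own choosing, $f(w) = e^w$, and an interior point $z$; the Riemann sum should reproduce $f(z)$ to high accuracy.

```python
import numpy as np

# Discretize the Cauchy integral formula on |w| = 1 for f(w) = exp(w)
# (an arbitrary test function) and an interior point z.
z = 0.3 + 0.2j
n = 2000
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
w = np.exp(1j * theta)           # points on the unit circle
dw = 1j * w * (2 * np.pi / n)    # dw = i e^{i theta} dtheta
integral = np.sum(np.exp(w) / (w - z) * dw) / (2j * np.pi)
print(abs(integral - np.exp(z)))   # tiny discretization error
```

The periodic trapezoid rule converges extremely fast for analytic integrands, which is why so few sample points already suffice.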

Cauchy’s integral formula is powerful when the integrand is bounded. We have already seen this in Montel’s theorem. In another, even simpler case, Riemann’s theorem on removable singularities, an upper bound $M$ on the function gives the bound $M / r^{n}$ on the coefficient $a_n$ in the Laurent series about the point; for $n < 0$ this bound tends to $0$ as $r \to 0$, establishing $a_n = 0$.

This integral formula extends to all derivatives by differentiating under the integral sign. Inductively, with uniform convergence of the relevant difference quotients of the integrand, one can show that

$f^{(m)}(z) = \frac{m!}{2\pi i}\int_{\partial D} \frac{f(w)}{(w-z)^{m+1}}dw, \qquad z \in D, m \geq 0$.

An application of this for a bounded entire function is to contour integrate along an arbitrarily large circle of radius $R$ to derive the upper bound $n!M / R^n$ on the $n$th derivative, which goes to $0$ as $R \to \infty$ for $n \geq 1$. By the Taylor series, this gives us Liouville’s theorem, which states that bounded entire functions are constant.
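The Cauchy estimate behind this is easy to verify numerically. A sketch with assumptions of my own choosing: $f = \exp$, $m = 3$, and a circle of radius $R = 5$, so $M = e^R$ and the true value is $f'''(0) = 1$, comfortably below $3!\,M/R^3$.

```python
import math
import numpy as np

# Check the Cauchy estimate |f^(m)(0)| <= m! M / R^m for f = exp, m = 3.
R, m = 5.0, 3
n = 4000
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
w = R * np.exp(1j * theta)
dw = 1j * w * (2 * np.pi / n)
# discretized version of the derivative formula above
deriv = math.factorial(m) / (2j * np.pi) * np.sum(np.exp(w) / w**(m + 1) * dw)
bound = math.factorial(m) * math.exp(R) / R**m
print(abs(deriv), bound)   # |f'''(0)| = 1, well below the bound
```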

Weierstrass products

A long time ago, when I was a clueless kid about to finish 10th grade of high school, I first learned about Euler’s determination of $\zeta(2) = \frac{\pi^2}{6}$. The technique he used was of course the factorization of $\sin z / z$ via its infinitely many roots into

$\displaystyle\prod_{n=1}^{\infty} \left(1 - \frac{z}{n\pi}\right)\left(1 + \frac{z}{n\pi}\right) = \displaystyle\prod_{n=1}^{\infty} \left(1 - \frac{z^2}{n^2\pi^2}\right)$.

Equating the coefficient of $z^2$ in this product, $-\displaystyle\sum_{n=1}^{\infty}\frac{1}{n^2\pi^2}$, with the coefficient of $z^2$ in the well-known Maclaurin series of $\sin z / z$, $-1/6$, gives that $\zeta(2) = \frac{\pi^2}{6}$.
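A quick numeric sketch of this coefficient comparison: the partial sums of $\sum 1/n^2$ should approach $\pi^2/6$, with the tail beyond $N$ bounded by $\int_N^\infty dx/x^2 = 1/N$.

```python
import math

# Partial sums of sum 1/n^2 converge to pi^2/6; the tail past N is below 1/N.
N = 100000
partial = sum(1.0 / n**2 for n in range(1, N + 1))
print(partial, math.pi**2 / 6)
```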

This felt, to me who knew almost no math, so spectacular at the time. It was also a result of great historical significance. The problem was first posed by Pietro Mengoli in 1644, and it baffled the greatest mathematicians of the day until 1734, when Euler finally stunned the mathematical community with his simple yet ingenious solution, found while he was in St. Petersburg. On that, I shall note that from this we can see how Russia had a rich mathematical and scientific tradition that began quite early on, one which must have deeply influenced the scientific preeminence of Tsarist Russia and later the Soviet Union, despite their being in practical terms quite backward compared to the advanced countries of Western Europe, like the UK and France; that tradition was of course instrumental to the Soviet Union’s later rapid catching up in industry and technology.

I had learned of this result more or less concurrently with learning on my own (independently of the silly American public school system) what constituted a rigorous proof. I remember back then I was still not accustomed to the cold, precise, and austere rigor expected in mathematics and had much difficulty restraining myself in that regard, often content with intuitive solutions. From this, one can guess that I was not quite aware that Euler’s solution was in fact not rigorous by modern standards, despite the book from which I read it noting as much. Now I am aware that what Euler constructed was in fact a Weierstrass product, and in this article I will explain how one can construct such products in a way that guarantees uniform convergence on compact sets.

Given a finite number of points in the complex plane, one can easily construct an analytic function with zeros or poles there, with any combination of (finite) multiplicities. For a countably infinite set of points, one can try the same, but how does one know that the resulting series or product doesn’t blow up? There is quite some technical machinery to ensure this.

We begin with the restricted case of simple poles and arbitrary residues. This is a special case of what is now known as Mittag-Leffler’s theorem.

Theorem 1.1 (Mittag-Leffler) Let $z_1,z_2,\ldots \to \infty$ be a sequence of distinct complex numbers satisfying $0 < |z_1| \leq |z_2| \leq \ldots$. Let $m_1, m_2,\ldots$ be any sequence of non-zero complex numbers. Then there exists a (not unique) sequence $p_1, p_2, \ldots$ of non-negative integers, depending only on the sequences $(z_n)$ and $(m_n)$, such that the series $f (z)$

$f(z) = \displaystyle\sum_{n=1}^{\infty} \left(\frac{z}{z_n}\right)^{p_n} \frac{m_n}{z - z_n} \ \ \ \ (1.1)$

is totally convergent, and hence absolutely and uniformly convergent, in any compact set $K \subset \mathbb{C} \setminus \{z_1,z_2,\ldots\}$. Thus the function $f(z)$ is meromorphic, with simple poles $z_1, z_2, \ldots$ having respective residues $m_1, m_2, \ldots$.

Proof: Total convergence, in case forgotten, means domination by a convergent series of constants, as in the Weierstrass M-test. That said, it suffices to establish

$\left|\left(\frac{z}{z_n}\right)^{p_n}\frac{m_n}{z-z_n}\right| < M_n$,

where $\sum_{n=1}^{\infty} M_n < \infty$. For total convergence on any compact set, we again use the classic technique of disks centered at the origin with radii $r_n$ increasing monotonically to $\infty$ and satisfying $r_n < |z_n|$. This way, for $|z| \leq r_n$, we have

$\left|\left(\frac{z}{z_n}\right)^{p_n}\frac{m_n}{z-z_n}\right| \leq \left(\frac{r_n}{|z_n|}\right)^{p_n}\frac{|m_n|}{|z_n|-r_n} < M_n$.

With $r_n < |z_n|$, we can for any $M_n$ (say $M_n = 2^{-n}$) choose $p_n$ large enough to satisfy this. This makes clear that the factor $\left(\frac{z}{z_n}\right)^{p_n}$ is our mechanism for constraining the magnitude of the terms, which we can do to an arbitrary degree.
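A concrete instance of this bound, with data of my own choosing rather than from the text: poles $z_n = n$ with residues $m_n = 1$ and convergence exponents $p_n = n$. For $|z| \leq R$ and $n > R$, the $n$th term of (1.1) is bounded by $(R/n)^n/(n-R)$, which is summable.

```python
# Poles z_n = n, residues m_n = 1, exponents p_n = n (illustrative choice).
# On |z| <= R the n-th term of the series is bounded by (R/n)^n / (n - R).
R = 3.0

def term_bound(n, R):
    return (R / n)**n / (n - R)

tail = sum(term_bound(n, R) for n in range(int(R) + 2, 200))
print(tail)   # small: the convergence factors tame the series
```

The super-geometric decay of $(R/n)^n$ is what makes the choice of $p_n$ so forgiving.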

The rest of the proof is more or less routine. For any compact $K$, pick some $r_N$ whose disk contains it; the bound above then handles all $n \geq N$. For $n < N$, we can bound the finitely many remaining terms by $\displaystyle\max_{z \in K}\left|\left(\frac{z}{z_n}\right)^{p_n}\frac{m_n}{z-z_n}\right|$, which is finite by continuity on a compact set (now one sees why we must omit the poles from our domain).     ▢

Lemma 1.1 Let the functions $u_n(z)$ $(n = 1, 2,\ldots)$ be regular in a compact set $K \subset \mathbb{C}$, and let the series $\displaystyle\sum_{n=1}^{\infty} u_n(z)$ be totally convergent in $K$. Then the infinite product $\displaystyle\prod_{n=1}^{\infty} \exp (u_n(z)) = \exp\left(\displaystyle\sum_{n=1}^{\infty} u_n(z)\right)$ is uniformly convergent in $K$.

Proof: Technical exercise left to the reader.     ▢

Now we present a lemma that allows us to take the result of Mittag-Leffler (Theorem 1.1) to meromorphic functions with zeros and poles at arbitrary points, each with its prescribed multiplicity.

Lemma 1.2 Let $f(z)$ be a meromorphic function. Let $z_1,z_2,\ldots \neq 0$ be the poles of $f(z)$, all simple, with respective residues $m_1, m_2,\ldots \in \mathbb{Z}$. Then the function

$\phi(z) = \exp \int_0^z f (t) dt \ \ \ \ (1.2)$

is meromorphic. The zeros (resp. poles) of $\phi(z)$ are the points $z_n$ such that $m_n > 0$ (resp. $m_n < 0$), and the multiplicity of $z_n$ as a zero (resp. pole) of $\phi(z)$ is $m_n$ (resp. $-m_n$).

Proof: Taking the exponential of that integral has the function of turning it into a one-valued function. Take two paths $\gamma$ and $\gamma'$ from $0$ to $z$, neither passing through any of the poles. By the residue theorem,

$\int_{\gamma} f(z)dz = \int_{\gamma'} f(z)dz + 2\pi i R$,

where $R$ is the sum of the residues of $f(t)$ between $\gamma$ and $\gamma'$. Because the $m_i$s are integers, $R$ must be an integer, from which it follows that our exponential is a one-valued function. Since the exponential function is analytic and never zero, $\phi$ is analytic and non-vanishing on $\mathbb{C} \setminus \{z_1, z_2, \ldots\}$. We can remove the pole at $z_1$ with $f_1(z) = f(z) - \frac{m_1}{z - z_1}$; this $f_1$ is analytic at $z_1$ and keeps the remaining poles $z_2, \ldots$. From this, we derive

\begin{aligned} \phi(z) &= \exp \int_0^z f(t)\,dt \\ &= \exp \int_0^z \left(f_1(t) + \frac{m_1}{t-z_1}\right)dt \\ &= \left(1 - \frac{z}{z_1}\right)^{m_1}\exp \int_0^z f_1(t)\,dt. \end{aligned}

We can continue this process for the remainder of the $z_i$s.      ▢

Theorem 1.2 (Weierstrass) Let $F(z)$ be meromorphic, and regular and $\neq 0$ at $z = 0$. Let $z_1,z_2, \ldots$ be the zeros and poles of $F(z)$ with respective multiplicities $|m_1|, |m_2|, \ldots$, where $m_n > 0$ if $z_n$ is a zero and $m_n < 0$ if $z_n$ is a pole of $F(z)$. Then there exist integers $p_1, p_2,\ldots \geq 0$ and an entire function $G(z)$ such that

$F(z) = e^{G(z)}\displaystyle\prod_{n=1}^{\infty}\left(1 - \frac{z}{z_n}\right)^{m_n}\exp\left(m_n\displaystyle\sum_{k=1}^{p_n}\frac{1}{k}\left(\frac{z}{z_n}\right)^k\right), \ \ \ \ (1.3)$

where the product converges uniformly in any compact set $K \subset \mathbb{C} \setminus \{z_1,z_2,\ldots\}$.

Proof: Let $f(z)$ be the function in (1.1) with $p_i$s such that the series is totally convergent, and let $\phi(z)$ be the function in (1.2). By Theorem 1.1 and Lemma 1.2, $\phi(z)$ is meromorphic, with zeros $z_n$ of multiplicities $m_n$ if $m_n > 0$, and with poles $z_n$ of multiplicities $|m_n|$ if $m_n < 0$. Thus $F(z)$ and $\phi(z)$ have the same zeros and poles with the same multiplicities, whence $F(z)/\phi(z)$ is entire and $\neq 0$. Therefore $\log (F(z)/\phi(z)) = G(z)$ is an entire function, and

$F(z) = e^{G(z)} \phi(z). \ \ \ \ (1.4)$

Uniform convergence along the path of integration from $0$ to $z$ (one avoiding the poles) enables term-by-term integration. Thus, from (1.2), we have

\begin{aligned} \phi(z) &= \exp \displaystyle\sum_{n=1}^{\infty} \int_0^z \left(\frac{t}{z_n}\right)^{p_n} \frac{m_n}{t - z_n}dt \\ &= \displaystyle\prod_{n=1}^{\infty}\exp \int_0^z \left(\frac{m_n}{t - z_n} + \frac{m_n}{z_n}\frac{(t/z_n)^{p_n} -1}{t/z_n - 1}\right)dt \\ &= \displaystyle\prod_{n=1}^{\infty}\exp \int_0^z \left(\frac{m_n}{t - z_n} + \frac{m_n}{z_n}\displaystyle\sum_{k=1}^{p_n}\left(\frac{t}{z_n}\right)^{k-1}\right)dt \\ &= \displaystyle\prod_{n=1}^{\infty}\exp \left(m_n\log\left(1 - \frac{z}{z_n}\right) + m_n\displaystyle\sum_{k=1}^{p_n}\frac{1}{k}\left(\frac{z}{z_n}\right)^k\right) \\ &= \displaystyle\prod_{n=1}^{\infty}\left(1 - \frac{z}{z_n}\right)^{m_n} \exp \left(m_n\displaystyle\sum_{k=1}^{p_n}\frac{1}{k}\left(\frac{z}{z_n}\right)^k\right).\end{aligned}

With this, (1.3) follows from (1.4). Moreover, in a compact set $K$, we can always bound the length of the path of integration, whence, by Theorem 1.1, the series

$\displaystyle\sum_{n=1}^{\infty}\int_0^z \left(\frac{t}{z_n}\right)^{p_n}\frac{m_n}{t - z_n}dt$

is totally convergent in $K$. Finally, invoke Lemma 1.1 to conclude that the product of exponentials is uniformly convergent in $K$ as well, from which it follows that (1.3) is too, as desired.     ▢

If our function has a zero or pole at $0$, we can easily regularize it by multiplying by $z^{-m}$, where $m$ is the order there (positive for a zero, negative for a pole). This yields

$F(z) = z^me^{G(z)}\displaystyle\prod_{n=1}^{\infty}\left(1 - \frac{z}{z_n}\right)^{m_n}\exp\left(m_n\displaystyle\sum_{k=1}^{p_n}\frac{1}{k}\left(\frac{z}{z_n}\right)^k\right)$

as the Weierstrass factorization formula in this case.

Overall, we see that we transform Mittag-Leffler (Theorem 1.1) into Weierstrass factorization (Theorem 1.2) through integration and exponentiation. In complex analysis, one quite often integrates a term of order $-1$ to derive a logarithm, which once exponentiated gives a linear polynomial raised to the power of the residue, useful for generating zeros and poles. Once this is observed, it is strongly hinted that one can go from the former to the latter with some technical manipulations, and one observes without much difficulty that the statements of Lemma 1.1 and Lemma 1.2 are exactly what is needed for this.

References

• Carlo Viola, An Introduction to Special Functions, Springer International Publishing, Switzerland, 2016, pp. 15-24.

Riemann mapping theorem

I am going to make an effort to understand the proof of the Riemann mapping theorem, which states that there exists a conformal map from any simply connected region that is not the entire plane onto the unit disk. As for its significance, I learned that in combination with the Poisson integral formula it can be used to solve essentially any Dirichlet problem where the region in question is simply connected.

Involved in this is Montel’s theorem, which I will now state and prove.

Definition A normal family of continuous functions is one in which every sequence has a subsequence converging uniformly on compact sets.

Montel’s theorem A family $\mathcal{F}$ of holomorphic functions on a domain $D$ which is locally uniformly bounded is a normal family.

Proof: It turns out that holomorphy alongside local uniform boundedness is enough for us to establish local equicontinuity via the Cauchy integral formula. On any compact set $K \subset D$, we can find some $r > 0$ such that for every point $z_0 \in K$, $\overline{B(z_0, 2r)} \subset D$. By local boundedness, we have some $M > 0$ (uniform over $K$, by compactness) such that $|f(z)| \leq M$ on all of $B(z_0, 2r)$ for every $f$ in the family. Thus, for any $w \in K$ and any $z \in B(w, r)$, we can use Cauchy’s integral formula over $\partial B(w, 2r)$. In the following, the gap between the radii $r$ and $2r$ is what bounds the product in the denominator below by $2r^2$.

\begin{aligned} |f(z) - f(w)| &= \frac{1}{2\pi}\left| \oint_{\partial B(w, 2r)} \frac{f(\zeta)}{\zeta - z} - \frac{f(\zeta)}{\zeta - w}\,d\zeta \right| \\ & \leq \frac{|z-w|}{2\pi} \oint_{\partial B(w, 2r)} \left| \frac{f(\zeta)}{(\zeta - z)(\zeta - w)} \right| |d\zeta| \\ & \leq \frac{|z-w|}{2\pi}\cdot\frac{4\pi r M}{2r^2} = \frac{M}{r}\,|z-w|. \end{aligned}

This shows the family is locally Lipschitz with a uniform constant, and thus locally equicontinuous. To choose the $\delta$, we can divide our $\epsilon$ by that Lipschitz constant, alongside enforcing $\delta < r$ so as to stay inside $B(w, r)$.

With this we can finish off with the Arzela-Ascoli theorem.     ▢

Now take the family $\mathcal{F}$ of analytic, injective functions from a simply connected region $\Omega$ into the unit disk $\mathbb{D}$ which take $z_0$ to $0$. On this we have the following.

Proposition If $f \in \mathcal{F}$ is such that for all $g \in \mathcal{F}$, $|f'(z_0)| \geq |g'(z_0)|$, then $f$ surjects onto $\mathbb{D}$.

Proof: We prove the contrapositive. In order to do so, it suffices, for any $f$ that misses some $w \in \mathbb{D}$, to write $f = s \circ g$, where $s, g$ are analytic with $g(z_0) = 0$ and $s$ a self-map of $\mathbb{D}$ that fixes $0$ and is not an automorphism. In that case, we can deduce from the Schwarz lemma that $|s'(0)| < 1$, and thereby from the chain rule $f'(z_0) = s'(0)\,g'(z_0)$ that $|g'(z_0)| > |f'(z_0)|$.

Recall that we have the automorphisms of $\mathbb{D}$ given by $T_w(z) = \frac{z-w}{1-\bar{w}z}$ for each $w \in \mathbb{D}$, and that their inverses are also automorphisms. Let’s take $0$ to $w$ via $T_w^{-1}$, then $w$ to $w^2$ via $p(z) = z^2$, and finally $w^2$ to $0$ via $T_{w^2}$. With this, we have a working $s = T_{w^2} \circ p \circ T_w^{-1}$.     ▢
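The construction is easy to sanity-check numerically. A sketch, with the arbitrary choice $w = 0.5$: $s$ should fix $0$, map the disk into itself, and have $|s'(0)| < 1$, as the Schwarz lemma demands of a non-automorphism.

```python
import numpy as np

# s = T_{w^2} o p o T_w^{-1} with w = 0.5 (illustrative choice).
w = 0.5
T    = lambda a: (lambda z: (z - a) / (1 - np.conj(a) * z))   # T_a
Tinv = lambda a: (lambda z: (z + a) / (1 + np.conj(a) * z))   # T_a^{-1}
s = lambda z: T(w**2)(Tinv(w)(z)**2)

print(abs(s(0)))                      # fixes the origin
rng = np.random.default_rng(1)
z = rng.uniform(-0.7, 0.7, 1000) + 1j * rng.uniform(-0.7, 0.7, 1000)
print(np.max(np.abs(s(z))) < 1)       # maps sample disk points into the disk
h = 1e-6
sp0 = (s(h) - s(-h)) / (2 * h)        # finite-difference estimate of s'(0)
print(abs(sp0))                       # strictly below 1
```

By hand, the chain rule gives $s'(0) = 2w(1-|w|^2)/(1-|w|^4) = 0.8$ here, matching the estimate.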

Nonemptiness of family

It is not difficult to construct an analytic injective map from $\Omega$ into $\mathbb{D}$ that sends $z_0$ to $0$. The part about sending $z_0$ to $0$ is in fact trivial with the $T_w$s. For the rest, it suffices to map $\Omega$ injectively into $\mathbb{C} \setminus \overline{\mathbb{D}}$, as after that we can invert via $z \mapsto 1/z$.

Since $\Omega$ is not the entire complex plane, there is some $a \notin \Omega$; by translation, we may assume $a = 0$. Because the region is simply connected and omits $0$, there is an analytic branch of $\log z$ on it, and hence an analytic branch of the square root. If $w$ is hit by that branch, then $-w$ is not, since squaring would then violate injectivity. By the open mapping theorem, the image of the square root contains some ball $B(w, \delta)$, so the ball $B(-w, \delta)$ lies entirely outside the image. Translating and dilating so that this avoided ball becomes the unit disk, we obtain a map of $\Omega$ into $\mathbb{C} \setminus \overline{\mathbb{D}}$, as needed.
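Here is a hypothetical concrete run of this argument, with a region of my own choosing rather than from the text: on the slit plane $\Omega = \mathbb{C} \setminus (-\infty, 0]$, which omits $a = 0$, the principal square root maps injectively into the right half-plane, so every image point is at distance greater than $1$ from $-1$; hence $g(z) = 1/(\sqrt{z} + 1)$ maps $\Omega$ injectively into the unit disk.

```python
import numpy as np

# g composes the principal square root (injective on the slit plane, image in
# the right half-plane) with inversion about the avoided point -1.
def g(z):
    return 1.0 / (np.sqrt(z) + 1)

rng = np.random.default_rng(2)
z = rng.uniform(-5, 5, 2000) + 1j * rng.uniform(-5, 5, 2000)
z = z[(z.real > 0) | (np.abs(z.imag) > 1e-6)]   # keep points inside Omega
vals = g(z)
print(np.max(np.abs(vals)))   # stays below 1
```

NumPy's `sqrt` on complex input uses the principal branch (cut along the negative real axis), which is exactly the branch the argument needs here.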

Construction of limit to surjection

We can see now that if we can construct a sequence of functions in our family converging, uniformly on compact sets, to an analytic injective function vanishing at $z_0$ with maximal derivative (in absolute value) there, we are finished. Specifically, let $\{f_n\}$ be a sequence from $\mathcal{F}$ such that

$\lim_n |f'_n(z_0)| = \sup_{f \in \mathcal{F}} \{|f'(z_0)|\}$.

This can be done by taking functions whose derivatives at $z_0$ come arbitrarily close to the supremum, which is finite by the Cauchy estimate on a small disk about $z_0$. With Montel’s theorem applied to our obviously locally uniformly bounded family, we know that the family is normal, and thus by definition we can extract some subsequence that converges uniformly on compact sets. Now it remains to show that the limit function is analytic and injective.

The injective part follows from a corollary of Hurwitz’s theorem, which we now state.

Hurwitz’s theorem (corollary of) If $f_n$ is a sequence of injective analytic functions which converge uniformly on compact sets to $f$, then $f$ is constant or injective.

Proof: Recall that Hurwitz’s theorem states that if $f$, not identically $0$, has a zero of multiplicity $m$ at some point $z_0$, then for any sufficiently small $\epsilon > 0$ there is an $N$ past which every $f_n$, $n > N$, has exactly $m$ zeros in $B(z_0, \epsilon)$, counted with multiplicity. To see that a non-constant $f$ can hit any value $a$ at most once, translate all the $f_n$ (and $f$) by $a$ to turn it into a zero. If $f - a$ had two zeros, or a double zero, then by Hurwitz’s theorem the injective functions $f_n - a$ would eventually have at least two zeros counted with multiplicity, a contradiction.     ▢

To show analyticity, we can use Weierstrass’s theorem.

Weierstrass’s theorem Take a sequence $\{f_n\}$ of analytic functions and suppose it converges uniformly on compact sets to $f$. Then the following hold:
a. $f$ is analytic.
b. $\{f'_n\}$ converges to $f'$ uniformly on compact sets.

Proof: This is a more standard theorem, so I will only sketch the proof. Recall the definition of compactness: every open cover has a finite subcover. This is so powerful because, given any collection of balls centered at the points of the set, we can find finitely many of them that cover the entire set, and finiteness allows us to take a maximum or minimum of finitely many $N$s or $\delta$s to uniformize some limit.

We can do the same here. For every $z$ in a compact set, express $f_n(z)$ as the integral of $\frac{f_n(\zeta)}{\zeta - z}$ via Cauchy’s integral formula over the boundary of some ball centered at $z$. Uniform convergence of $\frac{f_n(\zeta)}{\zeta - z}$ on the boundary to $\frac{f(\zeta)}{\zeta-z}$ allows us to pass the limit inside the integral, which represents $f$ via Cauchy’s integral formula and hence shows it is analytic. The same can be done for the $\{f'_n\}$, using the integral formula for derivatives.

Again, we can use two radii, as done in the proof of Montel’s theorem, to obtain the uniform convergence on a smaller ball.     ▢
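An illustration of the theorem with an example of my own choosing: taking $f_n$ to be the degree-$n$ Taylor polynomials of $\exp$, both $f_n$ and $f_n'$ converge uniformly on the compact set $|z| \leq 2$.

```python
import numpy as np
from math import factorial

# Degree-n Taylor polynomial of exp and its derivative.
def taylor_exp(z, n):
    return sum(z**k / factorial(k) for k in range(n + 1))

def taylor_exp_prime(z, n):
    return sum(k * z**(k - 1) / factorial(k) for k in range(1, n + 1))

# By the maximum principle it suffices to check the boundary circle |z| = 2.
theta = np.linspace(0, 2 * np.pi, 400)
z = 2 * np.exp(1j * theta)
err  = np.max(np.abs(taylor_exp(z, 30) - np.exp(z)))
errp = np.max(np.abs(taylor_exp_prime(z, 30) - np.exp(z)))
print(err, errp)   # both tiny: f_n -> f and f_n' -> f' uniformly
```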

Finally, our candidate conformal map to $\mathbb{D}$ satisfies $f(z_0) = 0$: if not, convergence would fail at $z_0$, since $f_n(z_0) = 0$ for all $n$.

This gives us existence. There is also a uniqueness aspect of the Riemann mapping theorem, which comes when one additionally imposes $f'(z_0) > 0$. This is very elementary to prove and will be left to the reader.

More mathematical struggles

Math is hard. It wrecks my self-esteem, and at times it makes me feel an utter loser, who simply isn’t smart enough, who is a league if not multiple leagues away from the big-name mathematicians who come up with much if not most of the most original results in mathematics. There are times when the formalism within the mathematics looks, perhaps superficially out of lack of perception on the part of its viewer, so excruciatingly complex and dry that one is inclined to simply go: this is too hard, give up. I’ve felt that, and I think just about everyone, no matter how smart, has, to some extent. Over time, I’ve come to realize that the dirty details tend to be a natural product of a few main ideas behind the proof, and once such ideas are grasped, every detail can easily be seen to have its rightful place within the entire construction. There was a time when I felt demoralized, or slightly baffled, upon seeing this answer of Ron Maimon’s, which can totally come across as intellectually presumptuous, from a guy too smart ever to have struggled like all us ordinary folks, who takes for granted as routine what is a slog for most, without being metacognitively aware enough to appreciate that he is of a totally different beast. In it, the following quote stood out:

You need to learn to “unpack” proofs into the construction that is involved, to know what the proof is saying really. It is no good to memorize the proof, you need to understand the construction, and this will motivate the proof.

What he means by this, as far as I can tell, is that one should try to reverse engineer the source of the proof, the path or motivation that brought its discoverer to it. This, per the convention of terseness in mathematical literature, is usually obfuscated, and the reader is expected to uncover it himself. In any case, one finds that in mathematics, or any deep intellectual discipline, it is largely up to the learner himself to form the right mental picture, something no form of explanation can do for too dull a pupil who lacks the inner drive.

I was disappointed yesterday, struggling to solve a problem that I had failed to solve back in 2014, the solution of which I had back then read, and even written up for myself, but which had evanesced entirely from my unretentive memory. It is my hope that this doesn’t happen again.

Apparently, there is a universal entire function $f$ such that for any entire $g$, any compact set, and any $\epsilon > 0$, there is a $c > 0$ such that $|f(z+c) - g(z)| < \epsilon$ on that compact set.

This looks initially entirely elusive, but once one realizes that entire functions can be approximated to arbitrary precision on compact sets by a countable set of polynomials, namely $(\mathbb{Q}+i\mathbb{Q})[z]$, it is hinted that it suffices to approximate each of these polynomials arbitrarily closely on arbitrarily large disks. This lends itself to taking a sequence from that dense set of polynomials (call it $\{P_j\}_{j=0}^{\infty}$), with each distinct element occurring an infinite number of times, and constructing an everywhere uniformly convergent series such that the $j$th partial sum, translated some $c_j > 0$ to the right, approximates the $j$th polynomial of the sequence to a degree of closeness that goes to zero as $j \to \infty$. Denote this series by $f(z) = \displaystyle\sum_{i=0}^\infty Q_i(z)$. The individual $Q_i$s we can obtain via Runge’s theorem.
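The enumeration-with-repetition step can be sketched concretely. The construction below is my own (and, for brevity, uses real rational coefficients only): at stage $k$, list every coefficient tuple whose length and coefficient "height" are at most $k$; since each fixed polynomial is re-listed at every sufficiently large stage, it occurs infinitely often in the overall sequence.

```python
from fractions import Fraction
from itertools import count

def rationals(bound):
    # fractions p/q with |p| <= bound, 1 <= q <= bound (duplicates harmless)
    return [Fraction(p, q) for q in range(1, bound + 1)
            for p in range(-bound, bound + 1)]

def polynomials(stage):
    # coefficient tuples (a_0, ..., a_d) of every length 1..stage
    qs = rationals(stage)
    polys, layer = [], [()]
    for _ in range(stage):
        layer = [p + (c,) for p in layer for c in qs]
        polys += layer
    return polys

def repeated_enumeration():
    # stage k re-lists all polynomials of height <= k, so each one recurs
    for k in count(1):
        yield from polynomials(k)

gen = repeated_enumeration()
first = [next(gen) for _ in range(10)]
print(first)
```

Extending the coefficients to $\mathbb{Q}+i\mathbb{Q}$ is a matter of pairing two such rationals per coefficient.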

Runge’s theorem states that for any compact subset $K$ of $\mathbb{C}$ and any function $f$ holomorphic on some open set containing $K$, and for any set $A$ containing at least one element in each connected component of $(\mathbb{C} \cup \{\infty\}) \setminus K$, there is a sequence of rational functions with all poles in $A$ that converges uniformly to $f$ on $K$. This is shown partly by taking Riemann sums of the integral associated with Cauchy’s integral formula on some closed piecewise-linear contour $\Gamma$ in the open set that contains $K$ in its interior.

We associate each $P_j$ in our sequence with a disk $D_j$ of center $c_j > 0$ and radius $j$, with no intersections between the disks. One observes that, owing to the infinite recurrence of each element of our dense set in the sequence, we can approximate each such polynomial on arbitrarily large disks (the disks among $\{D_j\}_{j=0}^{\infty}$ corresponding to a given polynomial are infinite in number and thus unbounded in radius) and with arbitrary precision (by having the $\epsilon$ of approximation go to $0$ as $j \to \infty$).

We also take disks $E_j$ about the origin containing $D_i$ for $0 \leq i \leq j$ and not containing any $D_i$ with $i > j$, with $E_j \subset E_{j+1}$, and require $|Q_j| < \frac{1}{2^j}$ on $E_{j-1}$ for $j \geq 1$. This way, the tail $\displaystyle\sum_{j=n}^{\infty} Q_j$ is bounded on $E_{n-1}$ by $\displaystyle\sum_{j=n}^{\infty} \frac{1}{2^j}$, which goes to $0$ as $n \to \infty$, so the series converges uniformly on each $E_n$. The containment relation with respect to the $D_j$s also forces the radii of the $E_j$ to go to $\infty$, necessary for coverage of all of $\mathbb{C}$.

We first let $Q_0 = P_0$ and then obtain $Q_j$ via Runge’s theorem so that, for any $\epsilon > 0$, $|Q_j(z) - (P_j(z-c_j) - \displaystyle\sum_{i=0}^{j-1} Q_i(z))| < \epsilon$ on $D_j$, in addition to the aforementioned $|Q_j| < \frac{1}{2^j}$ on $E_{j-1}$. Here the function being approximated is $P_j(z-c_j) - \displaystyle\sum_{i=0}^{j-1} Q_i(z)$ on $D_j$ and $0$ on $E_{j-1}$, defined separately on these two disjoint sets. Note that the rational approximant from Runge’s theorem may still have poles, which is problematic, as our universal function must be entire. This is easy to resolve by replacing each $Q_j$ with a polynomial that uniformly approximates it on $D_j \cup E_{j-1}$ (by taking a partial sum of a series expansion analytic on that region).

Now take any entire $g$, some $r > 0$, and $\epsilon > 0$. By density, some element of our dense set differs from $g$ by less than $\epsilon$ uniformly on $|z| \leq r$; since it occurs infinitely often in our sequence, it occurs at some index $j$ large enough that $\displaystyle\sum_{i=j}^{\infty} \frac{1}{2^i} < \epsilon$ and $j \geq r$ (so that $|z| \leq r$ translates into $D_j$). In sum, we have for some such $j$, on $|z| \leq r$,

$|g(z) - P_j(z)| < \epsilon$,

$|\displaystyle\sum_{i=0}^j Q_i(z+c_j) - P_j(z)| < \epsilon$,

and

$|f(z+c_j) - \displaystyle\sum_{i=0}^j Q_i(z+c_j)| < \displaystyle\sum_{i=j}^{\infty} \frac{1}{2^i} < \epsilon$.

Combining the three with the triangle inequality yields $|f(z+c_j) - g(z)| < 3\epsilon$ on our desired disk.

An unpacking of Hurwitz’s theorem in complex analysis

Let’s first state it.

Theorem (Hurwitz’s theorem). Suppose $\{f_k(z)\}$ is a sequence of analytic functions on a domain $D$ that converges normally on $D$ to $f(z)$, and suppose that $f(z)$ has a zero of order $N$ at $z_0$. Then for every small enough $\rho > 0$, for all sufficiently large $k$, $f_k(z)$ has exactly $N$ zeros in the disk $\{|z - z_0| < \rho\}$, counting multiplicity, and these zeros converge to $z_0$ as $k \to \infty$.

As a refresher, normal convergence on $D$ is uniform convergence on every closed disk contained in $D$. We know that the argument principle comes in handy for counting zeros within a domain. By it, the number of zeros of $f_k$ in $|z - z_0| < \rho$, $\rho$ arbitrarily small, goes to the number of zeros of $f$ inside the same circle, provided that

$\frac{1}{2\pi i}\int_{|z - z_0| = \rho} \frac{f'_k(z)}{f_k(z)}dz \longrightarrow \frac{1}{2\pi i}\int_{|z - z_0| = \rho} \frac{f'(z)}{f(z)}dz$.

Showing that boils down to a few technicalities. First, let $\rho > 0$ be small enough that the closed disk $\{|z - z_0| \leq \rho\}$ is contained in $D$ and $f(z) \neq 0$ everywhere in it except at $z_0$. Since $f_k(z)$ converges to $f(z)$ uniformly on that closed disk, $f_k(z)$ is nonzero on its boundary, the circle integrated over, for sufficiently large $k$. Further, since $f_k \to f$ uniformly there (and likewise $f'_k \to f'$, via the Cauchy integral formula for derivatives), $f'_k / f_k \to f' / f$ uniformly on the boundary circle, which lets us interchange the limit with the integral. With $\rho$ arbitrarily small, the zeros of $f_k(z)$ must accumulate at $z_0$.
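The zero-counting integral is easy to check numerically. Here is a toy illustration of my own, taking $f_k(z) = z^2 + 1/k$ as the sequence: the limit $f(z) = z^2$ has a double zero at $0$, and for large $k$ the two simple zeros $\pm i/\sqrt{k}$ of $f_k$ lie inside the circle $|z| = 1/2$, so the argument-principle count should be $2$.

```python
import cmath

def zero_count(f, fprime, radius=0.5, N=4096):
    # (1/2πi) ∮ f'/f dz over |z| = radius; with dz = iz dθ this becomes
    # (1/2π) ∫ (f'/f) z dθ, evaluated by the periodic trapezoid rule,
    # i.e. a plain mean over equally spaced nodes on the circle.
    total = 0
    for m in range(N):
        z = radius * cmath.exp(2j * cmath.pi * m / N)
        total += fprime(z) / f(z) * z
    return total / N

k = 100
count = zero_count(lambda z: z * z + 1 / k, lambda z: 2 * z)
```

The trapezoid rule is spectrally accurate here because the integrand is analytic in an annulus around the contour, so `count` lands essentially exactly on the integer zero count.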

Principal values (of integrals)

I’ve been looking through my copy of Gamelin’s Complex Analysis quite a bit lately. I’ve solved some exercises, which I’ve written up in private. I was just going over the section on principal values, which had a very neat calculation. I’ll give a sketch of that one here.

Take an integral $\int_a^b f(x)dx$ whose integrand has a singularity at some $x_0 \in (a,b)$, such as $\int_{-1}^1 \frac{1}{x}dx$. The principal value of that is defined as

$PV \int_a^b f(x)dx = \lim_{\epsilon \to 0}\left(\int_a^{x_0 - \epsilon} + \int_{x_0 + \epsilon}^b\right)f(x)dx$.
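The definition is easy to watch in action. In this toy check of mine, for the example integrand $1/x$ on $[-1,1]$, each one-sided integral diverges like $\mp\log\epsilon$, but the symmetric sum cancels for every $\epsilon > 0$, so the principal value is $0$.

```python
def simpson(f, a, b, n=10_000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

# Symmetric excision around the singularity at 0, shrinking eps:
pvs = [simpson(lambda x: 1 / x, -1, -eps) + simpson(lambda x: 1 / x, eps, 1)
       for eps in (1e-1, 1e-2, 1e-3)]
```

Each individual piece grows in magnitude as `eps` shrinks, yet every entry of `pvs` stays at zero up to rounding, which is exactly the point of the symmetric limit.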

The example the book presented was

$PV\int_{-\infty}^{\infty} \frac{dx}{x^3 - 1} = -\frac{\pi}{\sqrt{3}}$.

Its calculation invokes both the residue theorem and the fractional residue theorem. Our integrand, viewed as a function of a complex variable, has one singularity in the upper half plane, at $e^{2\pi i / 3}$, with residue $\frac{1}{3z^2}\big|_{z = e^{2\pi i / 3}} = \frac{e^{2\pi i / 3}}{3}$, which one can arrive at with the so-called Rule 4 in the book, or more from first principles, l’Hopital’s rule. That is the residue picked up by an arbitrarily large half-disk in the upper half plane. However, with our pole at $1$ on the real axis, we must indent the contour there. The integral along the large arc obviously vanishes. The integral along the infinitesimal arc spawned by the indentation can be calculated by the fractional residue theorem, with angle $-\pi$, the minus sign accounting for the clockwise direction. This time the residue is at $1$, with $\frac{1}{3z^2}\big|_{z = 1} = \frac{1}{3}$. So that integral, no matter how small $\epsilon$ is, equals $-\frac{\pi}{3}i$. Subtracting this extra piece from $2\pi i$ times the first residue yields $-\frac{\pi}{\sqrt{3}}$ for the principal value we wished to calculate.
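The contour answer can be confirmed by brute force. In this sketch of mine, the symmetric $\epsilon$-windows around the pole at $1$ are folded into one bounded integrand: pairing $x = 1 \pm t$ makes the $\frac{1}{3(x-1)}$ parts cancel, leaving $g(t) = f(1+t) + f(1-t)$ with the finite limit $g(0) = -\frac{2}{3}$.

```python
import math

def simpson(f, a, b, n):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

f = lambda x: 1.0 / (x**3 - 1.0)

def g(t):
    # f(1+t) + f(1-t); the simple-pole parts cancel, limit -2/3 at t = 0
    return -2.0 / 3.0 if t == 0 else f(1 + t) + f(1 - t)

R = 1000.0  # truncation; the two tails are O(1/R^2) and nearly cancel
pv = (simpson(f, -R, 0.5, 200_000)
      + simpson(g, 0.0, 0.5, 2_000)
      + simpson(f, 1.5, R, 200_000))

target = -math.pi / math.sqrt(3)  # the book's value
```

The three pieces cover $(-R, 1-a]$, the folded window $[1-a, 1+a]$ with $a = \frac{1}{2}$, and $[1+a, R)$, and the total matches $-\pi/\sqrt{3}$ to several digits.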

Let’s generalize. Complex analysis provides machinery to compute integrals not easily handled by real means. Canonical is having the integral along an arc go to naught as the arc becomes arbitrarily large, and equating the real integral with $2\pi i$ times the sum of the residues inside. We’ve done that here. It turns out that if the integrand explodes somewhere on the domain of integration, we can make a dent there, and subtract off the integral along the corresponding small arc.