Big Picard theorem

I’ve been asked to prove the Big Picard theorem, assuming the fundamental normality test. Assuming the latter, it is a very short proof, and I could half-ass it that way. But I don’t like writing up stuff that I don’t actually understand for the sake of doing so. There’s little point, and if I’m going to actually write up a proof of it, I’ll do so for real, which means going over the fundamental normality test in its entirety.

First some preliminaries.

Theorem 2.28 (Riemann mapping theorem). Let \Omega \subset \mathbb{C} be simply-connected and \Omega \neq \mathbb{C}. Then there exists a conformal homeomorphism f : \Omega \to \mathbb{D} onto the unit disk \mathbb{D}.

Proof: Linked here.

Theorem 2.30. Suppose \Omega is bounded, simply-connected, and regular. Then any conformal homeomorphism as in Theorem 2.28 extends to a homeomorphism \bar{\Omega} \to \bar{\mathbb{D}}.

Schwarz reflection principle. Suppose that f is an analytic function which is defined in the upper half-disk \{|z|^2 < 1, \text{Im } z > 0\}. Further suppose that f extends to a continuous function on the real axis, and takes on real values on the real axis. Then f can be extended to an analytic function on the whole disk by the formula

f(\bar{z}) = \overline{f(z)}

and the values for z reflected across the real axis are the reflections of f(z) across the real axis.

We begin by presenting the standard “geometric” procedure by which the covering map \pi : \mathbb{D} \to \mathbb{C} \setminus \{p_1, p_2\} may be obtained. Here p_1, p_2 are distinct points. This then leads naturally to the “little” and “big” Picard theorems, which are fundamental results of classical function theory.

Figure 4.4

The construction takes place in the Poincaré disk. In the above figure, the circle C_2 intersects C_1 perpendicularly, and we reflect C_2 across C_1. The intersection points must be fixed, and the reflection must preserve the orthogonality. Moreover, reflection preserves geodesics, and under the hyperbolic metric geodesics are generalized circles. From this, we can deduce that C_2 goes to itself, with its two arcs on either side of C_1 interchanged.

To construct the map, we start with a triangle \Delta_0 inside the unit circle consisting of circular arcs that intersect the unit circle at right angles. Reflect \Delta_0 across each of its sides and one gets three more triangles with circular arcs intersecting the unit circle at right angles.

Figure 5.5

The above figure shows how the unit disk is partitioned into triangles by iterating these reflections indefinitely. To obtain the sought-after covering map, we start from the Riemann mapping theorem, which gives us a conformal isomorphism f : \Delta_0 \to \mathbb{H}, the upper half-plane. This map extends as a homeomorphism to the boundary by Theorem 2.30, and we may arrange that the three circular arcs of \Delta_0 get mapped to the intervals [-\infty, 0], [0,1], [1,\infty], respectively. By the Schwarz reflection principle, the map extends analytically to the triangles obtained by reflecting \Delta_0 across each of its sides, with the reflected triangles mapping to the lower half-plane (reflection corresponds to complex conjugation on the image, as in the Schwarz reflection principle). The vertices of \Delta_0 lie on the unit circle, so the values 0, 1, \infty to which they correspond are never attained. Continuing the reflections through the whole tessellation of the disk, we obtain a map defined on the entire unit disk onto \mathbb{C} \setminus \{0, 1\} that is a local isomorphism and in fact a covering map.

Theorem 4.18. Every entire function which omits two values is constant.

Proof. Indeed, if f is such a function, we may assume that it takes its values in \mathbb{C} \setminus \{0, 1\}. But then we can lift f to the universal cover of \mathbb{C} \setminus \{0, 1\} to obtain an entire function F into \mathbb{D}. By Liouville’s theorem, F is constant.     ▢

Theorem 4.19 (Fundamental normality test). Any family of functions \mathcal{F} in \mathcal{H}(\Omega) which omits the same two distinct values in \mathbb{C} is a normal family.

Theorem 4.20. If f has an isolated essential singularity at z_0, then in any small neighborhood of z_0 the function f attains every complex value infinitely often, with one possible exception.

Proof. Suppose without loss of generality that z_0 = 0 and define f_n(z) = f(2^{-n}z) for integers n \geq 1; for n sufficiently large, f_n is analytic on 0 < |z| < 2. Suppose, for contradiction, that f omits two values in some punctured neighborhood of z_0. Then every f_n, being defined via f, omits the same two values. Thus, by the fundamental normality test, some subsequence f_{n_k}(z) \to F(z) uniformly on 1/2 \leq |z| \leq 1, where either F is analytic or F \equiv \infty, by Weierstrass’s theorem (see here). In the former case, the maximum principle shows that f is bounded near z = 0, which means the singularity is removable. In the latter case, convergence to \infty implies that z = 0 is a pole. Either way we contradict that f has an essential singularity there.     ▢
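
As a sanity check on the statement (not part of the proof), one can watch Theorem 4.20 in action for the standard essential singularity e^{1/z}: for any target w \neq 0, the preimages z_k = 1/(\log w + 2\pi i k) accumulate at 0. A minimal Python illustration, with an arbitrarily chosen target:

```python
import cmath

# f(z) = exp(1/z) has an essential singularity at 0; for any w != 0 the points
# z_k = 1 / (Log w + 2*pi*i*k) satisfy f(z_k) = w and accumulate at 0.
w = 2 + 3j                      # arbitrary nonzero target value
for k in range(1, 6):
    z_k = 1 / (cmath.log(w) + 2j * cmath.pi * k)
    print(abs(z_k), abs(cmath.exp(1 / z_k) - w))   # |z_k| shrinks, error stays ~ 0
```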

References

  • Schlag, W., A Course in Complex Analysis and Riemann Surfaces, American Mathematical Society, 2014, pp. 70–72, 81, 160–164.

On grad school, science, academia, and also a problem on Riemann surfaces

I like mathematics a ton and I am not bad at it. In fact, I am probably better than many math graduate students at math, though surely, they will have more knowledge than I do in some respects, or maybe even not that, because frankly, the American undergrad math major curriculum is often rather pathetic, well maybe largely because the students kind of suck. In some sense, you have to be pretty clueless to be majoring in just pure math if you’re not a real outlier at it, enough to have a chance at a serious academic career. Of course, math professors won’t say this. So we have now an excess of people who really shouldn’t be in science (because they much lack the technical power or an at least reasonable scientific taste/discernment, or more often both) adding noise to the job market. On this, Katz in his infamous Don’t Become a Scientist piece writes:

If you are in a position of leadership in science then you should try to persuade the funding agencies to train fewer Ph.D.s. The glut of scientists is entirely the consequence of funding policies (almost all graduate education is paid for by federal grants). The funding agencies are bemoaning the scarcity of young people interested in science when they themselves caused this scarcity by destroying science as a career. They could reverse this situation by matching the number trained to the demand, but they refuse to do so, or even to discuss the problem seriously (for many years the NSF propagated a dishonest prediction of a coming shortage of scientists, and most funding agencies still act as if this were true). The result is that the best young people, who should go into science, sensibly refuse to do so, and the graduate schools are filled with weak American students and with foreigners lured by the American student visa.

Even he believes that now the Americans who go into science are often the ones who are too dumb or clueless to realize that they basically have no future there. I can surely attest to how socially inept, or at least clueless, many math grad students are, as I interact with them much more now. The epidemic described by Katz is accentuated by the fact that professors in science are not encouraging of students who seek a plan B, which everyone should given the way the job market is right now, and even go as far as to create an atmosphere wherein even to express a desire to leave academia is a no-no. I am finding that this type of environment is even corroding my interest in mathematics itself, which is sad. In any case, I sort of disagree with Katz in that I feel like the very top scientific talent of my generation still mostly ends in top or at least good graduate schools, though surely there are many who feel alienated or don’t find the risk worth taking, and end up leaving science. I myself am thinking of forgetting about mathematics altogether. So that I can concentrate my motivation and time and energy on developing expertise in some area of software engineering that is in demand, for the money and (relative) job security, and hopefully also find it a sufficiently fulfilling experience. There are a lot of morons in tech of course, but certain corners of it do provide refuge. I had always thought of mathematics as being a field with a much higher threshold cognitively in its content, enough to filter out most of the uninteresting people, but that’s, to my disappointment, less so than I expected. I do have reason to be scared, because one of the smartest and most interesting people I know took like five years following his math PhD to make his way into full employment, in a programming/data science heavy role of course, despite being arguably much better at programming than most industry software engineers with a computer science degree, which he lacked, an indicator of the perverse extent to which our society now runs on risk-aversion and (artificial) credential signaling. I can only consider myself fortunate that I do have a computer science degree from a reputable place, and with that, I have already made a modest pot of gold, despite being frankly quite mediocre at real computer stuff, which I have had difficulty becoming as interested in as I have been in mathematics. Maybe I was even fortunate to have not been all that gifted in the first place, which in some sense compelled me to be more realistic, as there is arguably nothing worse than becoming an academic loser, which academia is full of nowadays, sadly. This type of thing can happen to real geniuses too. Look at Yitang Zhang for instance, the most prominent case to come to mind. Except he actually made it afterwards, spectacularly and miraculously, with his dogged belief in himself and perseverance under adversity. For every one of him, I would expect like 10 real geniuses (in ability) who were under-nurtured, under-recognized, or even screwed, left to fade into obscurity.

I’ll transition now to a problem that I’ve been asked to solve. Its statement is the following:

Let f be holomorphic on a simply-connected Riemann surface M, and assume that f never vanishes. Show that there exists F holomorphic on M such that f = e^F. Show also that harmonic functions on M have conjugate harmonic functions.

Every p_0 \in M has an open connected neighborhood U = \{p : |f(p) - f(p_0)| < |f(p_0)|\} (take the connected component containing p_0 if necessary); on such a U the values of f stay inside a disk not containing 0, so a continuous branch of the logarithm of f exists there. Let \{U_{\alpha}\} be the system consisting of these neighborhoods, and (\log f)_{\alpha} a continuous branch of the logarithm of f in U_{\alpha}. From this arises a family F_{\alpha} = \{(\log f)_{\alpha} + 2n\pi i : n \in \mathbb{Z}\}.

In Schlag, there is the following lemma.

Lemma 5.5. Suppose M is a simply-connected Riemann surface and

\{D_{\alpha} \subset M : \alpha \in A\}

is a collection of domains (connected, open). Assume further that these sets form an open cover M = \bigcup_{\alpha \in A} D_{\alpha} such that for each \alpha \in A there is a family F_{\alpha} of analytic functions f : D_{\alpha} \to N, where N is some other Riemann surface, with the following properties: if f \in F_{\alpha} and p \in D_{\alpha} \cap D_{\beta}, then there is some g \in F_{\beta} so that f = g near p. Then given \gamma \in A and some f \in F_{\gamma} there exists an analytic function \psi_{\gamma} : M \to N so that \psi_{\gamma} = f on D_{\gamma}.

Using the families of analytic functions F_{\alpha} given above: near a point p \in U_{\alpha} \cap U_{\beta}, any two continuous branches of \log f differ by an integer multiple of 2\pi i, so for (\log f)_{\alpha} + 2n_{\alpha}\pi i \in F_{\alpha} there is a choice of n_{\beta} \in \mathbb{Z} with (\log f)_{\alpha} + 2n_{\alpha}\pi i = (\log f)_{\beta} + 2n_{\beta}\pi i near p, which means the hypothesis of Lemma 5.5 is satisfied by the above families.

I’ll present the proof of the above lemma here, to consolidate my own understanding and because it is the essential ingredient in the construction of a global holomorphic function matching some member of each family. The lemma is of course stated in generality, whereas in the problem we are trying to solve it is applied to a specific case.

Proof. Let

\mathcal{U} = \{(p, f) | p \in D_{\alpha}, f \in F_{\alpha}, \alpha \in A\} / \sim

where (p, f) \sim (q, g) iff p = q and f = g in a neighborhood of p. Let [p, f] denote the equivalence class of (p, f). As usual, \pi([p, f]) = p. For each f \in F_{\alpha}, let

D'_{\alpha, f} = \{[p, f] | p \in D_{\alpha}\}.

Clearly, \pi : D_{\alpha, f}' \to D_{\alpha} is bijective. We define a topology on \mathcal{U} as follows: \Omega \subset D_{\alpha, f}' is open iff \pi(\Omega) \subset D_{\alpha} is open for each \alpha, f \in F_{\alpha}. This does indeed define open sets in \mathcal{U}: since \pi(D'_{\alpha, f} \cap D'_{\beta, g}) is the union of connected components of D_{\alpha} \cap D_{\beta} by the uniqueness theorem (if it is not empty), it is open in M as needed. With this topology, \mathcal{U} is a Hausdorff space since M is Hausdorff (we use this if the base points differ) and because of the uniqueness theorem (which we use if the base points coincide). Note that by construction, we have made the fibers indexed by the functions in F_{\alpha} discrete in the topology of \mathcal{U}.

The main point is now to realize that if \widetilde{M} is a connected component of \mathcal{U}, then \pi : \widetilde{M} \to M is onto and in fact is a covering map. Let us check that it is onto. First, we claim that \pi(\widetilde{M}) \subset M is open. Thus, let [p, f] \in \widetilde{M} and pick D_{\alpha} with p \in D_{\alpha} and f \in F_{\alpha}. Clearly, D'_{\alpha, f} \cap \widetilde{M} \neq \emptyset and since D_{\alpha}, and thus also D'_{\alpha, f}, is open and connected, the connected component \widetilde{M} has to contain D'_{\alpha, f} entirely. Therefore, D_{\alpha} \subset \pi(\widetilde{M}) as claimed.

Next, we need to check that M \setminus \pi(\widetilde{M}) is open. Let p \in M \setminus \pi(\widetilde{M}) and pick D_{\beta} so that p \in D_{\beta}. If D_{\beta} \cap \pi(\widetilde{M}) = \emptyset, then we are done. Otherwise, let q \in D_{\beta} \cap \pi(\widetilde{M}) and pick D_{\alpha} containing q and some f \in F_{\alpha} with D'_{\alpha, f} \subset \widetilde{M} (using the same “nonempty intersection implies containment” argument as above). But now we can find g \in F_{\beta} with the property that f = g on a component of D_{\alpha} \cap D_{\beta}. As before, this implies that \widetilde{M} would have to contain D'_{\beta, g} which is a contradiction.

To see that \pi : \widetilde{M} \to M is a covering map, one verifies that

\pi^{-1}(D_{\alpha}) = \bigcup_{f \in F_{\alpha}} D'_{\alpha, f}.

The sets on the right-hand side are disjoint and in fact they are connected components of \pi^{-1}(D_{\alpha}).

Since M is simply-connected, \widetilde{M} is homeomorphic to M (proof given in the appendix). We thus infer the existence of a globally defined analytic function which agrees with some f \in F_{\alpha} on each D_{\alpha}. By picking the connected component that contains any given D_{\alpha, f}' one can fix the “sheet” locally on a given D_{\alpha}.     ▢

By this, we can construct an analytic F such that for all \alpha,

F_{|U_{\alpha}} = (\log f)_{\alpha} + n_{\alpha} \cdot 2\pi i, \qquad n_{\alpha} \in \mathbb{Z},

from which e^F = f follows.
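
In the planar special case the lemma machinery can be bypassed: on a simply connected domain where f never vanishes, F(z) = \log f(z_0) + \int_{z_0}^{z} f'(t)/f(t)\, dt along any path in the domain satisfies e^F = f. The following Python sketch is my own illustration of that formula, under the hypothetical choice f(z) = z^2 + 1 on the right half-plane, which is simply connected and avoids the zeros \pm i:

```python
import cmath

# On a simply connected planar domain where f never vanishes,
# F(z) = log f(z0) + integral of f'/f from z0 to z (any path) satisfies exp(F) = f.
# Here f(z) = z**2 + 1 on Re z > 0, z0 = 1, straight-line paths.
def f(z):  return z * z + 1
def fp(z): return 2 * z

def F(z, z0=1.0, n=20000):
    # trapezoid rule for the integral of f'/f along the segment [z0, z]
    total, prev = 0.0 + 0.0j, fp(z0) / f(z0)
    for k in range(1, n + 1):
        t = z0 + (z - z0) * k / n
        cur = fp(t) / f(t)
        total += (prev + cur) / 2 * (z - z0) / n
        prev = cur
    return cmath.log(f(z0)) + total

for z in (2 + 1j, 0.5 - 3j, 4 + 4j):
    print(abs(cmath.exp(F(z)) - f(z)))   # small, limited by the quadrature error
```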

For the existence of harmonic conjugates, we proceed similarly. Take a connected open cover \{U_{\alpha}\} of M where each U_{\alpha} is conformally equivalent to the unit disc, and let v_{\alpha} be a harmonic conjugate of u in U_{\alpha} (which exists, and is unique up to an additive constant, on the unit disc). Let F_{\alpha} = \{v_{\alpha} + c, \quad c \in \mathbb{R}\}. Then by the same lemma, there exists v such that for all \alpha,

v_{|U_{\alpha}} = v_{\alpha} + c_{\alpha}, \quad \text{some } c_{\alpha} \in \mathbb{R}

which is harmonic and conjugate to u, since it is a harmonic conjugate of u on every element of the cover, again with the c_{\alpha}'s chosen so that the local definitions match on intersections of cover elements.


Elliptic functions

I am writing this as a way to go through in detail the section on elliptic functions in Schlag’s book.

Proposition 4.14.  Let \Lambda = \{m\omega_1 + n\omega_2 | m,n \in \mathbb{Z}\} and set \Lambda^* = \Lambda \setminus \{0\}. For any integer n \geq 3, the series

f(z) = \displaystyle\sum_{w \in \Lambda} (z+w)^{-n} \qquad (4.16)

defines a function f \in \mathcal{M}(M) with deg(f) = n. Furthermore, the Weierstrass function

\wp(z) = \frac{1}{z^2} + \displaystyle\sum_{w \in \Lambda^*} [(z+w)^{-2} - w^{-2}] ,\qquad (4.17)

is an even elliptic function of degree two with \Lambda as its group of periods. The poles of \wp are precisely the points in \Lambda and they are all of order 2.

Proof.  It suffices to prove that f(z) = \displaystyle\sum_{w \in \Lambda} (z+w)^{-n} converges absolutely and uniformly on every compact set K \subset \mathbb{C} \setminus \Lambda. Periodicity allows us to restrict to the closure of any fundamental region. There exists C > 0 such that for all x,y \in \mathbb{R},

C^{-1}(|x|+|y|) \leq |x\omega_1 + y\omega_2|.

Hence, when z \in \{x\omega_1 + y\omega_2 | 0 \leq x, y \leq 1\}, then

|z + (k_1\omega_1 + k_2\omega_2)| \geq C^{-1}(|k_1| + |k_2|) - |z| \geq (2C)^{-1}(|k_1| + |k_2|)

provided |k_1| + |k_2| is sufficiently large. In

\displaystyle\sum_{|k_1|+|k_2|>0} (|k_1| + |k_2|)^{-n},

there are O(\ell) pairs (k_1, k_2) with |k_1| + |k_2| = \ell, which means the above converges when n > 2, and this, with the above bound, means f \in \mathcal{H}(\mathbb{C} \setminus \Lambda). Periodicity implies f \in \mathcal{M}(M). Moreover, the degree of (4.16) is determined by noting that inside a fundamental region the series has a unique pole, of order n.

For the second part, we note that when |w| > 2|z|,

\left|(z+w)^{-2} - w^{-2}\right| = \frac{|z||2w+z|}{|w|^2|z+w|^2} \leq \frac{C|z|}{|w|^3},

which means the series defining \wp, which is clearly even, converges absolutely and uniformly on compact subsets of \mathbb{C} \setminus \Lambda. For the periodicity of \wp, note that \wp' is periodic relative to the same lattice \Lambda. Thus, for every w \in \Lambda,

\wp(z+w) - \wp(z) = C(w) \quad \forall z \in \mathbb{C}

with some constant C(w). Setting z = -\omega_j/2 and using evenness gives

C(\omega_j) = \wp(\omega_j/2) - \wp(-\omega_j/2) = 0

for j = 1, 2, and hence C(w) = 0 for all w \in \Lambda.

Another way to go about it is to define \sigma such that

\zeta(z) = \frac{d \log \sigma(z)}{dz} = \frac{1}{z} + \displaystyle\sum_{\omega \in \Lambda^*} \left[\frac{1}{z-\omega} + \frac{1}{\omega} + \frac{z}{\omega^2}\right],

so that \wp = -\zeta', from which by periodicity, we have

\zeta(z+\omega) - \zeta(z) = C(\omega).

Integrating and exponentiating, one obtains

\sigma(z+\omega_j) = -\sigma(z)e^{\eta_j(z+\omega_j/2)}, \qquad (4.20)

where \eta_j = C(\omega_j) for j = 1,2.
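
To see (4.17) concretely, one can truncate the lattice sum. The following Python sketch is my own check, with the hypothetical choice \omega_1 = 1, \omega_2 = i; evenness is exact under a symmetric truncation, while periodicity holds only up to truncation error here:

```python
# Truncated version of (4.17) with omega1 = 1, omega2 = i.
def wp(z, N=100):
    s = 1 / z**2
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            if m or n:
                w = m + n * 1j
                s += 1 / (z + w)**2 - 1 / w**2
    return s

z = 0.3 + 0.2j
print(abs(wp(z) - wp(-z)))       # evenness: exactly 0 under the symmetric cutoff
print(abs(wp(z + 1) - wp(z)))    # small; the truncation error decays as N grows
print(abs(wp(z + 1j) - wp(z)))   # same for the other period
```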

Lemma 4.15.  With \wp as before, one has

(\wp'(z))^2 = 4(\wp(z) - e_1)(\wp(z) - e_2)(\wp(z) - e_3) \qquad (4.21)

where e_1 = \wp(\omega_1/2), e_2 = \wp(\omega_2/2), and e_3 = \wp((\omega_1+\omega_2)/2) are pairwise distinct. Furthermore, one has e_1 + e_2 + e_3 = 0 so that (4.21) can be written in the form

(\wp'(z))^2 = 4(\wp(z))^3 - g_2\wp(z) - g_3 \qquad (4.22)

with constants g_2 = -4(e_1e_2 + e_1e_3 + e_2e_3) and g_3 = 4e_1e_2e_3.

View the torus as

S = \{x\omega_1 + y\omega_2 | -1/2 \leq x,y \leq 1/2\}.

\wp'(z) is odd and has a pole of order 3 at z = 0 but no other poles in S, which means \wp'(z) has degree 3.

Oddness with periodicity applied at \omega_1/2 and \omega_2/2 yields that

\frac{1}{2}\omega_1, \quad \frac{1}{2}\omega_2, \quad \frac{1}{2}(\omega_1+\omega_2)

are the three zeros of \wp', each simple, and thus also the unique points where \wp has valency 2 apart from z = 0. The e_j are distinct, because if not \wp would assume such a value four times, impossible when the degree is 2.

Denoting the RHS of (4.21) by F(z), we have that

\frac{(\wp'(z))^2}{F(z)} \in \mathcal{H}(M)

with the zeros cancelled out, and thus equal to a constant.

At z = 0, the highest pole of (\wp'(z))^2 is one of order 3\cdot 2 = 6 with coefficient (-2)^2 = 4. In F(z), we have essentially a cubic in \wp(z) with leading coefficient 4, and \wp(z) has pole of order 2 with coefficient 1. In taking the limit towards zero, we only need to consider the 4\wp(z)^3 term, which has the highest order pole, which is also of order 6 with coefficient 4. That means our constant function is 1.

The final statement follows by examining the Laurent series around zero. Expanding \left(\frac{1/w}{1+(z/w)}\right)^2 - \frac{1}{w^2} in powers of z/w and summing over the lattice gives

\wp(z) = \frac{1}{z^2} + \displaystyle\sum_{k = 1}^{\infty} (k+1)(-1)^{k}z^k\displaystyle\sum_{w \in \Lambda^*} \frac{1}{w^{k+2}}.

Because \wp is even, the odd coefficients must vanish. So we have

\wp(z) = \frac{1}{z^2} + \displaystyle\sum_{k=1}^{\infty} (2k+1)z^{2k} \displaystyle\sum_{w \in \Lambda^*} \frac{1}{w^{2k+2}}.

For now, let

G_k = \displaystyle\sum_{w \in \Lambda^*} \frac{1}{w^k}.

\begin{aligned}\wp(z) & = & \frac{1}{z^2} + 3G_4z^2 + 5G_6z^4 + \cdots, \\ \wp'(z) & = & \frac{-2}{z^3} + 6G_4z + 20G_6z^3 + \cdots, \\ (\wp(z))^3 & = & \frac{1}{z^6} + 9\frac{G_4}{z^2} + \cdots, \\ (\wp'(z))^2 & = & \frac{4}{z^6} - \frac{24G_4}{z^2} + \cdots. \end{aligned}

What we want is to find the g_2 such that (\wp'(z))^2 - 4(\wp(z))^3 + g_2\wp(z) becomes analytic and thus constant, and to do that we must cancel all the poles at 0. The z^{-6} coefficient tells us to multiply (\wp(z))^3 by 4. After that, we have from the z^{-2} coefficient that -24G_4 - 9\cdot 4\, G_4 + g_2 = 0, which means

g_2 = 60G_4 = -4(e_1e_2 + e_1e_3 + e_2e_3).
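
One can test (4.22) numerically with truncated lattice sums. In the sketch below (my own check, again with \omega_1 = 1, \omega_2 = i), g_2 = 60G_4 as just derived, while g_3 = 140G_6 is taken from the analogous computation for the constant term, which is not carried out above:

```python
# Numerical check of (4.22) with truncated lattice sums, for omega1 = 1, omega2 = i.
N = 80
lattice = [m + n * 1j for m in range(-N, N + 1) for n in range(-N, N + 1)
           if (m, n) != (0, 0)]

def wp(z):        # truncated Weierstrass p-function (4.17)
    return 1 / z**2 + sum(1 / (z + w)**2 - 1 / w**2 for w in lattice)

def wp_prime(z):  # its derivative, term by term
    return -2 / z**3 - sum(2 / (z + w)**3 for w in lattice)

G4 = sum(w**-4 for w in lattice)
G6 = sum(w**-6 for w in lattice)
g2, g3 = 60 * G4, 140 * G6       # g3 = 140*G6: constant-term analogue of the above

z = 0.11 + 0.07j
p, dp = wp(z), wp_prime(z)
lhs, rhs = dp**2, 4 * p**3 - g2 * p - g3
print(abs(lhs - rhs) / abs(lhs))  # relative error; shrinks as the cutoff N grows
```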

Proposition 4.16.  Every f \in \mathcal{M}(M) is a rational function of \wp and \wp'. If f is even, then it is a rational function of \wp alone.

Proof.  Suppose that f is non-constant and even. Then for all but finitely many values of w \in \mathbb{C}_{\infty}, the equation f(z) - w = 0 has only simple zeros (since there are only finitely many zeros of f'). Pick two such w \in \mathbb{C} and denote them by c,d. Moreover, we can ensure that the zeros of f - c and f - d are distinct from the branch points of \wp. Thus, since f is even and with 2n = deg(f), one has:

\begin{aligned}\{z \in M : f(z) - c = 0\} & = \{a_j, -a_j\}_{j=1}^n, \\ \{z \in M : f(z) - d = 0\} & = \{b_j, -b_j\}_{j=1}^n. \end{aligned}

The elliptic functions

g(z) = \frac{f(z) - c}{f(z) - d}

and

h(z) = \displaystyle\prod_{j=1}^n \frac{\wp(z) - \wp(a_j)}{\wp(z) - \wp(b_j)}

have the same zeros and poles which are all simple. It follows that g = \alpha h for some \alpha \neq 0. Solving this relation for f yields the desired conclusion.

If f is odd, then f/\wp' is even so f = \wp'R(\wp) where R is rational. Finally, if f is any elliptic function, then

f(z) = \frac{1}{2}(f(z) + f(-z)) + \frac{1}{2}(f(z) - f(-z))

is a decomposition into even/odd elliptic functions whence

f(z) = R_1(\wp) + \wp'R_2(\wp)

with rational R_1, R_2 as claimed.     ▢

We conclude with the following question: given disjoint finite sets of distinct points \{z_j\} and \{\zeta_k\} in M as well as positive integers n_j for z_j and \nu_k for \zeta_k, respectively, is there an elliptic function with precisely these zeros and poles and of the given orders? In the case of \mathbb{C}_\infty the answer is yes iff \sum_{j} n_j = \sum_{k} \nu_k, since a meromorphic function there attains every value, 0 and \infty included, the same number of times (the degree is constant).

For the tori, we first observe that by the residue theorem one has

\frac{1}{2\pi i}\oint_{\partial P} z\frac{f'(z)}{f(z)}dz = \sum_j n_jz_j - \sum_k \nu_k \zeta_k. \qquad (4.25)

where \partial P is the boundary of a fundamental region P such that no zero or pole lies on the boundary. Second, comparing parallel sides of the fundamental region and using the periodicity shows that the left-hand side of (4.25) is of the form m_1\omega_1 + m_2\omega_2 with m_1, m_2 \in \mathbb{Z} and thus equals 0 modulo \Lambda; this is condition (4.24), namely \sum_j n_jz_j - \sum_k \nu_k \zeta_k \in \Lambda, while (4.23) is the degree condition \sum_j n_j = \sum_k \nu_k. (This follows from the fact that over an edge pair, \int_{\gamma} \frac{f'(z)}{f(z)}dz is the difference of logarithms of f at points where f takes the same value, which regardless of branch must be an integer multiple of 2\pi i.)

Now consider the edges in \partial P given by \gamma_1(t) = \{t\omega_1 | 0 \leq t \leq 1\} and \gamma_2(t) = \{\omega_2 + t\omega_1 | 0 \leq t \leq 1\}, respectively. By \omega_2-periodicity of \frac{f'(z)}{f(z)} we infer that

\int_{\gamma_1} z\frac{f'(z)}{f(z)}dz + \int_{\gamma_2} z \frac{f'(z)}{f(z)}dz = -\omega_2\int_{\gamma_1}d \log f(z).

The branch of logarithm here is irrelevant, since the arbitrary constant is differentiated away. By periodicity applied to the difference in this integral,

\omega_2 \frac{1}{2\pi i} \int_{\gamma_1} d\log f(z) \in \omega_2 \mathbb{Z}.

The other edge pair gives an element of \omega_1 \mathbb{Z}, whence (4.24).

Theorem 4.17.  Suppose (4.23) and (4.24) hold. Then there exists an elliptic function which has precisely these zeros and poles with the given orders. This function is unique up to a nonzero complex multiplicative constant.

Proof.  Listing the points z_j and \zeta_k expanded out with their respective multiplicities, we obtain sequences z_j' and \zeta_k' of the same length, say n. Shifting the z_j's and \zeta_k's by lattice elements if needed, one has

\sum_{j=1}^n z_j' = \sum_{k=1}^n \zeta_k'.

Take

f(z) = \displaystyle\prod_{j=1}^n \frac{\sigma(z - z_j')}{\sigma(z - \zeta_j')}

using the \sigma in (4.20). Then

\begin{aligned} \frac{f(z+\omega_i)}{f(z)} & = & \displaystyle\prod_{j=1}^n \frac{\sigma(z-z_j' + \omega_i)}{\sigma(z - z_j')}\cdot \frac{\sigma(z - \zeta_j')}{\sigma(z - \zeta_j' + \omega_i)} \\ & = & \displaystyle\prod_{j=1}^n e^{\eta_i\left[(z - z_j' + \omega_i/2) - (z - \zeta_j' + \omega_i/2)\right]} \\ & = & e^{\eta_i\sum_{j=1}^n(\zeta_j' - z_j')} \\ & = & 1, \end{aligned}

which shows periodicity.    ▢

Finally, we observe how we can solve (4.22) by integrating

\frac{d\wp(z)}{\sqrt{4(\wp(z))^3 - g_2\wp(z) - g_3}} = dz

where we choose some branch of the root, which yields

z - z_0 = \int_{\wp(z_0)}^{\wp(z)} \frac{d\zeta}{\sqrt{4\zeta^3 - g_2\zeta - g_3}}. \qquad (4.30)

In other words, the Weierstrass function \wp is the inverse of an elliptic integral. The integration path in (4.30) needs to be chosen to avoid the zeros and poles of \wp', and the branch of the root is determined by \wp'.

Analogously, \int_{w_0}^w \frac{d\zeta}{\sqrt{1 - \zeta^2}} = z - z_0 is satisfied by w = \sin z, with similar restrictions on the path and the choice of branch. In that case, though, the inverse is a periodic function with a single period, whereas in (4.30) there are two periods.

References

  • Schlag, W., A Course in Complex Analysis and Riemann Surfaces, American Mathematical Society, 2014, pp. 153-157.

Construction of Riemann surfaces as quotients

There is a theorem in Chapter 4 Section 5 of Schlag’s complex analysis text. I went through it a month ago, but only half understood it, and it is my hope that passing through it again, this time with writeup, will finally shed light, after having studied in detail some typical examples of such Riemann surfaces, especially tori, the conformal equivalence classes of which can be represented by the fundamental region of the modular group, which arise from quotienting out by lattices on the complex plane, as well as Fuchsian groups.

In the text, the theorem is stated as follows.

Theorem 4.12.  Let \Omega \subset \mathbb{C}_{\infty} and G < \mathrm{Aut}(\mathbb{C}_{\infty}) with the property that

  • g(\Omega) \subset \Omega for all g \in G,
  • for all g \in G, g \neq \mathrm{id}, all fixed points of g in \mathbb{C}_{\infty} lie outside of \Omega,
  • for all K \subset \Omega compact, the cardinality of \{g \in G | g(K) \cap K \neq \emptyset\} is finite.

Under these assumptions, the natural projection \pi : \Omega \to \Omega / G is a covering map which turns \Omega/G canonically into a Riemann surface.

The properties essentially say that we have a Fuchsian group G acting on \Omega \subset \mathbb{C}_{\infty} without fixed points, excepting the identity. To show that the quotient space is a Riemann surface, we need to construct charts. For this, notice that in the absence of fixed points there is, for each z \in \Omega, a small pre-compact open neighborhood K_z \subset \Omega of z, so that

g(\overline{K_z}) \cap \overline{K_z} = \emptyset \qquad \forall g \in G, g \neq \mathrm{id}.

So no two points of K_z are identified under G, which means the projection \pi restricted to K_z is injective, hence a homeomorphism onto its image \pi(K_z), and therefore we can use the inverses of these restrictions as charts. The g's, as Möbius transformations, are open maps which take the K_z's to open sets. In other words, \pi^{-1}(\pi(K_z)) = \bigcup_{g \in G} g^{-1}(K_z) with pairwise disjoint open sets g^{-1}(K_z). From this, the \pi(K_z)'s are open sets in the quotient topology. In this scheme, the g's are the transition maps.

Finally, we verify that this topology is Hausdorff. Suppose \pi(z_1) \neq \pi(z_2) and define for all n \geq 1,

A_n = \left\{z \in \Omega | |z-z_1| < \frac{r}{n}\right\} \subset \Omega

B_n = \left\{z \in \Omega | |z-z_2| < \frac{r}{n}\right\} \subset \Omega

where r > 0 is sufficiently small. Define K = \overline{A_1} \cup \overline{B_1} and suppose that \pi(A_n) \cap \pi(B_n) \neq \emptyset for all n \geq 1. Then for some a_n \in A_n and g_n \in G we have

g_n(a_n) \in B_n \qquad \forall n \geq 1.

Since \{g \in G \mid g(K) \cap K \neq \emptyset\} has finite cardinality (by the third assumption), there are only finitely many possibilities for g_n and one of them, say g, therefore occurs infinitely often. Passing to the limit n \to \infty along that subsequence gives g(z_1) = z_2, that is, \pi(z_1) = \pi(z_2), a contradiction.


Variants of the Schwarz lemma

Take some holomorphic self-map f of the unit disk \mathbb{D}. If f(0) = 0, then g(z) = f(z) / z has a removable singularity at 0. On |z| = r we have |g(z)| \leq 1 / r, and with the maximum principle as r \to 1, we derive |f(z)| \leq |z| everywhere. In particular, if |f(z)| = |z| anywhere other than the origin, the maximum principle forces g to be constant, so f(z) = \lambda z with |\lambda| = 1. With the removable singularity removed, g(0) = f'(0), so again by the maximum principle, |f'(0)| = 1 means g is a constant of modulus 1. Moreover, if f is not an automorphism, we cannot have |f(z)| = |z| at any point other than the origin, so in that case |f'(0)| < 1.
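
A quick numerical illustration of these inequalities (my own sketch): f(z) = z(z - a)/(1 - \bar{a}z) with |a| < 1 is a holomorphic self-map of \mathbb{D} fixing 0 that is not an automorphism, so we should see |f(z)| \leq |z| on the disk and |f'(0)| = |a| < 1.

```python
import cmath, random

# Schwarz lemma check for the non-automorphism self-map of the disk
# f(z) = z * (z - a) / (1 - conj(a) * z) with |a| < 1 and f(0) = 0.
a = 0.4 + 0.3j

def f(z):
    return z * (z - a) / (1 - a.conjugate() * z)

random.seed(0)
for _ in range(5):                      # |f(z)| <= |z| at random points of the disk
    r, t = random.random(), random.uniform(0, 2 * cmath.pi)
    z = r * cmath.exp(1j * t)
    print(abs(f(z)) <= abs(z) + 1e-12)

h = 1e-6                                # |f'(0)| via a difference quotient
print(abs((f(h) - f(0)) / h), abs(a))   # both ~ 0.5 < 1
```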

Cauchy’s integral formula in complex analysis

I took a graduate course in complex analysis a while ago as an undergraduate. However, I did not actually understand it well at all, a testament to which is that much of the knowledge vanished very quickly. It pleases me though that now, following some intellectual maturation, after relearning certain theorems, they seem to stick more permanently, with the main ideas behind the proofs more easily and clearly understandable than mind-disorienting, the latter of which was experienced by me too much in my early days. Shall I say that before I must have been on drugs or something, because the way in which I approached certain things was frankly quite weird, and in retrospect, I was in many ways an animal-like creature trapped within the confines of an addled consciousness, oblivious and uninhibited. Almost certainly never again will I experience anything like that. Now, I can only mentally rationalize the conscious experience of a mentally inferior creature but such cannot be experienced for real. It is almost like how an evangelical cannot imagine what it is like not to believe in God, and even goes as far as to hold the pagan in contempt. Exaltation and exhilaration were concomitant with the leap of consciousness till it not long after established its normalcy.

Now, the last of non-mathematical writing in this post will be on the following excerpt from Grothendieck’s Récoltes et Semailles:

In those critical years I learned how to be alone. [But even] this formulation doesn’t really capture my meaning. I didn’t, in any literal sense learn to be alone, for the simple reason that this knowledge had never been unlearned during my childhood. It is a basic capacity in all of us from the day of our birth. However these three years of work in isolation [1945–1948], when I was thrown onto my own resources, following guidelines which I myself had spontaneously invented, instilled in me a strong degree of confidence, unassuming yet enduring, in my ability to do mathematics, which owes nothing to any consensus or to the fashions which pass as law….By this I mean to say: to reach out in my own way to the things I wished to learn, rather than relying on the notions of the consensus, overt or tacit, coming from a more or less extended clan of which I found myself a member, or which for any other reason laid claim to be taken as an authority. This silent consensus had informed me, both at the lycée and at the university, that one shouldn’t bother worrying about what was really meant when using a term like “volume,” which was “obviously self-evident,” “generally known,” “unproblematic,” etc….It is in this gesture of “going beyond,” to be something in oneself rather than the pawn of a consensus, the refusal to stay within a rigid circle that others have drawn around one—it is in this solitary act that one finds true creativity. All others things follow as a matter of course.

Since then I’ve had the chance, in the world of mathematics that bid me welcome, to meet quite a number of people, both among my “elders” and among young people in my general age group, who were much more brilliant, much more “gifted” than I was. I admired the facility with which they picked up, as if at play, new ideas, juggling them as if familiar with them from the cradle—while for myself I felt clumsy, even oafish, wandering painfully up an arduous track, like a dumb ox faced with an amorphous mountain of things that I had to learn (so I was assured), things I felt incapable of understanding the essentials or following through to the end. Indeed, there was little about me that identified the kind of bright student who wins at prestigious competitions or assimilates, almost by sleight of hand, the most forbidding subjects.

In fact, most of these comrades who I gauged to be more brilliant than I have gone on to become distinguished mathematicians. Still, from the perspective of thirty or thirty-five years, I can state that their imprint upon the mathematics of our time has not been very profound. They’ve all done things, often beautiful things, in a context that was already set out before them, which they had no inclination to disturb. Without being aware of it, they’ve remained prisoners of those invisible and despotic circles which delimit the universe of a certain milieu in a given era. To have broken these bounds they would have had to rediscover in themselves that capability which was their birthright, as it was mine: the capacity to be alone.

Grothendieck was first known to me the dimwit in a later stage of high school. At that time, I was still culturally under the idiotic and shallow social constraints of an American high school, though already visibly different, unable to detach too much from it both intellectually and psychologically. There is quite an element of what I now in recollection with benefit of hindsight can characterize as a harbinger of unusual aesthetic discernment, one exercised and already vaguely sensed back then though lacking in reinforcement in social support and confidence, and most of all, in ability. For at that time, I was still much of a species in mental bondage, more often than not driven by awe as opposed to reason. In particular, I awed and despaired at many a contemporary of very fine range of myself who on the surface appeared to me so much more endowed and quick to grasp and compute, in an environment where judgment of an individual’s capability is dominated so much more so by scores and metrics, as opposed to substance, not that I had any of the latter either.

Vaguely, I recall seeing the above passage once in high school articulated with so much of verbal richness of a height that would have overwhelmed and intimidated me at the time. It could not be understood by me how Grothendieck, this guy considered by many as greatest mathematician of the 20th century, could have actually felt dumb. Though I felt very dumb myself, I never fully lost confidence, sensing a spirit in me that saw quite differently from others, that was far less inclined to lose himself in “those invisible and despotic circles” than most around me. Now, for the first time, I can at least subjectively feel identification with Grothendieck, and perhaps I am still misinterpreting his message to some extent, though I surely feel far less at sea with respect to that now than before.

Later I had the fortune to know personally one who gave a name to this implicit phenomenon, aesthetic discernment. It has been met with ridicule as self-congratulatory artificialized by one of lesser formal achievement, a concoction of a failure in self-denial. Yet on the other hand, I have witnessed that most people are too carried away in today’s excessively artificially institutionally credentialist society that they lose sight of what is fundamentally meaningful, and sadly, those unperturbed by this ill are few and fewer. Finally, I have reflected on the question of what good is knowledge if too few can rightly perceive it. Science is always there and much of it of value remains unknown to any who has inhabited this planet, and I will conclude at that.

So, one of the theorems in that class was of course Cauchy’s integral formula, one of the most central tools in complex analysis. Formally,

Let D be a bounded domain with piecewise smooth boundary. If f(z) is analytic on D, and f(z) extends smoothly to the boundary of D, then

f(z) = \frac{1}{2\pi i}\int_{\partial D} \frac{f(w)}{w-z}dw,\qquad z \in D. \ \ \ \ (1)

This theorem was actually somewhat elusive to me. I would learn it, find it deceptively obvious, and then eventually forget it, repeating this cycle. I now ask how one would conceive of this theorem. First observe that, by continuity, the average of f over a circle centered at z tends to f(z) as the radius goes to zero. Parametrizing w = z + \epsilon e^{i\theta}, we have dw = i\epsilon e^{i\theta}d\theta, and the factor w - z = \epsilon e^{i\theta} in the denominator cancels it, leaving \frac{1}{2\pi}\int_0^{2\pi} f(z + \epsilon e^{i\theta})\,d\theta. From this, we have the result when \partial D is a sufficiently small circle about z. Implicit here is Cauchy's integral theorem, which states that the integral of a holomorphic function over a closed curve (bounding a region where it is holomorphic) is zero; using it to deform contours, we can extend the formula to any bounded domain with piecewise smooth boundary.
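
The formula is also easy to test numerically by discretizing the contour integral. The following Python sketch is an illustration of mine, with the hypothetical choices f = \exp and a circle of radius 2 about the origin:

```python
import cmath

# Numerical check of (1): recover f(z) for f = exp by discretizing the contour
# integral over the circle |w| = 2, with z inside.
def cauchy(f, z, center=0.0, radius=2.0, n=2000):
    total = 0
    for k in range(n):
        theta = 2 * cmath.pi * k / n
        w = center + radius * cmath.exp(1j * theta)
        dw = 1j * radius * cmath.exp(1j * theta) * (2 * cmath.pi / n)
        total += f(w) / (w - z) * dw
    return total / (2j * cmath.pi)

z = 0.3 + 0.4j
print(abs(cauchy(cmath.exp, z) - cmath.exp(z)))   # ~ 0
```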

Cauchy’s integral formula is powerful when the integrand is bounded. We have already seen this in Montel’s theorem. In another even simpler case, Riemann’s theorem on removable singularities, we can, with an upper bound M on the integrand, establish the bound |a_n| \leq M / r^n on the Laurent coefficients about the point, which for n < 0 tends to 0 with r, so that a_n = 0 for n < 0.

This integral formula extends to all derivatives by differentiating. Inductively, with uniform convergence of the integrand, one can show that

f^{(m)}(z) = \frac{m!}{2\pi i}\int_{\partial D} \frac{f(w)}{(w-z)^{m+1}}dw, \qquad z \in D, m \geq 0.

An application of this for a bounded entire function would be to contour integrate along an arbitrarily large circle of radius R to derive the upper bound n!M / R^n on the nth derivative, which for n \geq 1 goes to 0 as R \to \infty. This gives us Liouville’s theorem, which states that bounded entire functions are constant, via the Taylor series.


Weierstrass products

Long time ago, when I was a clueless kid about to finish 10th grade of high school, I first learned about Euler’s determination of \zeta(2) = \frac{\pi^2}{6}. The technique he used was of course factorization of \sin z / z via its infinitely many roots into

\displaystyle\prod_{n=1}^{\infty} \left(1 - \frac{z}{n\pi}\right)\left(1 + \frac{z}{n\pi}\right) = \displaystyle\prod_{n=1}^{\infty} \left(1 - \frac{z^2}{n^2\pi^2}\right).

Equating the coefficient of z^2 in this product, -\displaystyle\sum_{n=1}^{\infty}\frac{1}{n^2\pi^2}, with the coefficient of z^2 in the well-known Maclaurin series of \sin z / z, -1/6, gives that \zeta(2) = \frac{\pi^2}{6}.
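
For what it's worth, the computation is easy to check numerically: the z^2 coefficient of the truncated product is -\sum_{n \leq N} \frac{1}{n^2\pi^2}, and matching it against -1/6 recovers \zeta(2) = \pi^2/6. A small Python check of my own, with an arbitrary cutoff N:

```python
import math

# Euler's matching of z^2 coefficients, truncated at N terms:
# the product's coefficient is -sum(1/(n^2 pi^2)), the series' is -1/6.
N = 1_000_000
partial = sum(1.0 / n**2 for n in range(1, N + 1))
print(partial, math.pi**2 / 6)            # both ~ 1.6449...
print(-partial / math.pi**2, -1.0 / 6)    # truncated product coefficient vs -1/6
```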

This felt to me, who knew almost no math, so spectacular at that time. It was also one of great historical significance. The problem was first posed by Pietro Mengoli in 1644, and had baffled the most genius of mathematicians of that day until 1734, when Euler finally stunned the mathematical community with his simple yet ingenious solution. This was done when Euler was in St. Petersburg. On that, I shall note that from this, we can easily see how Russia had a rich mathematical and scientific tradition that began quite early on, which must have deeply influenced the preeminence in science of Tsarist Russia and later the Soviet Union despite their being in practical terms quite backward compared to the advanced countries of Western Europe, like UK and France, which of course was instrumental towards the rapid catching up in industry and technology of the Soviet Union later on.

I had learned of this result more or less concurrently with learning on my own (independent of the silly American public school system) what constituted a rigorous proof. I remember back then I was still not accustomed to the cold, precise, and austere rigor expected in mathematics and had much difficulty restraining myself in that regard, often content with intuitive solutions. From this, one can guess that I was not quite aware of how Euler’s solution was in fact not a rigorous one by modern standards, despite its having been noted from the book from which I read this. However, now I am aware that what Euler constructed was in fact a Weierstrass product, and in this article, I will explain how one can construct those in a way that guarantees uniform convergence on compact sets.

Given a finite number of points on the complex plane, one can easily construct an analytic function with zeros or poles there for any combination of (finite) multiplicities. For a countably infinite number of points, one can attempt the same, but how can one know that the result, being of a series (or infinite product) nature, doesn’t blow up? There is quite some technical machinery to ensure this.

We begin with the restricted case of simple poles and arbitrary residues. This is a special case of what is now known as Mittag-Leffler’s theorem.

Theorem 1.1 (Mittag-Leffler) Let z_1,z_2,\ldots \to \infty be a sequence of distinct complex numbers satisfying 0 < |z_1| \leq |z_2| \leq \ldots. Let m_1, m_2,\ldots be any sequence of non-zero complex numbers. Then there exists a (not unique) sequence p_1, p_2, \ldots of non-negative integers, depending only on the sequences (z_n) and (m_n), such that the series f (z)

f(z) = \displaystyle\sum_{n=1}^{\infty} \left(\frac{z}{z_n}\right)^{p_n} \frac{m_n}{z - z_n} \ \ \ \ (1.1)

is totally convergent, and hence absolutely and uniformly convergent, in any compact set K \subset \mathbb{C} \setminus \{z_1,z_2,\ldots\}. Thus the function f(z) is meromorphic, with simple poles z_1, z_2, \ldots having respective residues m_1, m_2, \ldots.

Proof: Total convergence, in case forgotten, refers to the Weierstrass M-test. That said, it suffices to establish

\left|\left(\frac{z}{z_n}\right)^{p_n}\frac{m_n}{z-z_n}\right| < M_n,

where \sum_{n=1}^{\infty} M_n < \infty. For total convergence on any compact set, we again use the classic technique of monotonically increasing disks to \infty centered at the origin with radii r_n \leq |z_n|. This way for |z| \leq r_n, we have

\left|\left(\frac{z}{z_n}\right)^{p_n}\frac{m_n}{z-z_n}\right| \leq \left(\frac{r_n}{|z_n|}\right)^{p_n}\frac{|m_n|}{|z_n|-r_n} < M_n.

With r_n < |z_n| we can for any M_n choose large enough p_n to satisfy this. This makes clear that the \left(\frac{z}{z_n}\right)^{p_n} is our mechanism for constraining the magnitude of the values attained, which we can do to an arbitrary degree.

The rest of the proof is more or less trivial. For any compact K, pick some r_N whose disk contains it. For n < N, we can bound the term by \displaystyle\max_{z \in K}\left|\left(\frac{z}{z_n}\right)^{p_n}\frac{m_n}{z-z_n}\right|, which is finite since a continuous function on a compact set is bounded (now you can see why we must omit the poles from our domain).     ▢
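
As a sanity check of my own on (1.1): take simple poles at z_n = \pm 1, \pm 2, \ldots with residues m_n = 1 and p_n = 1, so that each term is (z/z_n)\cdot\frac{1}{z - z_n} = \frac{1}{z-n} + \frac{1}{n}. The resulting function is the classical \pi\cot(\pi z) - \frac{1}{z}, which the sketch below uses as a reference value:

```python
import cmath

# Instance of (1.1): poles at the nonzero integers, residues 1, p_n = 1.
# Each term (z/n)/(z - n) equals 1/(z - n) + 1/n, and the sum is the classical
# pi*cot(pi*z) - 1/z, used here only as a check of the convergence device.
def ml_series(z, N=100000):
    return sum((z / n) / (z - n) for n in range(-N, N + 1) if n != 0)

z = 0.3 + 0.2j
target = cmath.pi / cmath.tan(cmath.pi * z) - 1 / z
print(abs(ml_series(z) - target))   # small, up to truncation
```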

Lemma 1.1 Let the functions u_n(z) (n = 1, 2,\ldots) be regular in a compact set K \subset \mathbb{C}, and let the series \displaystyle\sum_{n=1}^{\infty} u_n(z) be totally convergent in K. Then the infinite product \displaystyle\prod_{n=1}^{\infty} \exp (u_n(z)) = \exp\left(\displaystyle\sum_{n=1}^{\infty} u_n(z)\right) is uniformly convergent in K.

Proof: Technical exercise left to the reader.     ▢

Now we present a lemma that allows us to take the result of Mittag-Leffler (Theorem 1.1) to meromorphic functions with zeros and poles at arbitrary points, each with its prescribed multiplicity.

Lemma 1.2 Let f(z) be a meromorphic function. Let z_1,z_2,\ldots \neq 0 be the poles of f(z), all simple with respective residues m_1, m_2,\ldots \in \mathbb{Z}. Then the function

\phi(z) = \exp \int_0^z f (t) dt \ \ \ \ (1.2)

is meromorphic. The zeros (resp. poles) of \phi(z) are the points z_n such that m_n > 0 (resp. m_n < 0), and the multiplicity of z_n as a zero (resp. pole) of \phi(z) is m_n (resp. -m_n).

Proof: Taking the exponential of that integral serves to make it single-valued. Take two paths \gamma and \gamma' from 0 to z, neither passing through any of the poles. By the residue theorem,

\int_{\gamma} f(z)dz = \int_{\gamma'} f(z)dz + 2\pi i R,

where R is the sum of the residues of f(t) between \gamma and \gamma'. Because the m_i's are integers, R must be an integer, from which it follows that our exponential is a one-valued function. Being locally the exponential of an analytic function, it is analytic away from the poles, and it is non-zero on \mathbb{C} \setminus \{z_1, z_2, \ldots\} since the exponential never vanishes. We can remove the pole at z_1 with f_1(z) = f(z) - \frac{m_1}{z - z_1}. This f_1 is analytic near z_1 and retains the poles z_2, z_3, \ldots. From this, we derive

\begin{aligned} \phi(z) &= \exp \int_{\gamma} f(t)dt \\ &= \exp \int_{\gamma} \left(f_1(t) + \frac{m_1}{t-z_1}\right)dt \\ &= \left(1 - \frac{z}{z_1}\right)^{m_1}\exp \int_0^z f_1(t) dt. \end{aligned}

We can continue this process for the remainder of the z_is.      ▢

Theorem 1.2 (Weierstrass) Let F(z) be meromorphic, and regular and \neq 0 at z = 0. Let z_1,z_2, \ldots be the zeros and poles of F(z) with respective multiplicities |m_1|, |m_2|, \ldots, where m_n > 0 if z_n is a zero and m_n < 0 if z_n is a pole of F(z). Then there exist integers p_1, p_2,\ldots \geq 0 and an entire function G(z) such that

F(z) = e^{G(z)}\displaystyle\prod_{n=1}^{\infty}\left(1 - \frac{z}{z_n}\right)^{m_n}\exp\left(m_n\displaystyle\sum_{k=1}^{p_n}\frac{1}{k}\left(\frac{z}{z_n}\right)^k\right), \ \ \ \ (1.3)

where the product converges uniformly in any compact set K \subset \mathbb{C} \setminus \{z_1,z_2,\ldots\}.

Proof: Let f(z) be the function in (1.1) with p_is such that the series is totally convergent, and let \phi(z) be the function in (1.2). By Theorem 1.1 and Lemma 1.2, \phi(z) is meromorphic, with zeros z_n of multiplicities m_n if m_n > 0, and with poles z_n of multiplicities |m_n| if m_n < 0. Thus F(z) and \phi(z) have the same zeros and poles with the same multiplicities, whence F(z)/\phi(z) is entire and \neq 0. Therefore \log (F(z)/\phi(z)) = G(z) is an entire function, and

F(z) = e^{G(z)} \phi(z). \ \ \ \ (1.4)

Uniform convergence along path of integration from 0 to z (not containing the poles) enables term-by-term integration. Thus, from (1.2), we have

\begin{aligned} \phi(z) &= \exp \displaystyle\sum_{n=1}^{\infty} \int_0^z \left(\frac{t}{z_n}\right)^{p_n} \frac{m_n}{t - z_n}dt \\ &= \displaystyle\prod_{n=1}^{\infty}\exp \int_0^z \left(\frac{m_n}{t - z_n} + \frac{m_n}{z_n}\frac{(t/z_n)^{p_n} -1}{t/z_n - 1}\right)dt \\ &= \displaystyle\prod_{n=1}^{\infty}\exp \int_0^z \left(\frac{m_n}{t - z_n} + \frac{m_n}{z_n}\displaystyle\sum_{k=1}^{p_n}\left(\frac{t}{z_n}\right)^{k-1}\right)dt \\ &= \displaystyle\prod_{n=1}^{\infty}\exp \left(\log\left(1 - \frac{z}{z_n}\right)^{m_n} + m_n\displaystyle\sum_{k=1}^{p_n}\frac{1}{k}\left(\frac{z}{z_n}\right)^k\right) \\ &= \displaystyle\prod_{n=1}^{\infty}\left(1 - \frac{z}{z_n}\right)^{m_n} \exp \left(m_n\displaystyle\sum_{k=1}^{p_n}\frac{1}{k}\left(\frac{z}{z_n}\right)^k\right).\end{aligned}

With this, (1.3) follows from (1.4). Moreover, in a compact set K, we can always bound the length of the path of integration, whence, by Theorem 1.1, the series

\displaystyle\sum_{n=1}^{\infty}\int_0^z \left(\frac{t}{z_n}\right)^{p_n}\frac{m_n}{t - z_n}dt

is totally convergent in K. Finally, invoke Lemma 1.1 to conclude that the product of the exponentials is uniformly convergent in K as well, from which it follows that (1.3) is too, as desired.     ▢

If our function has a zero or pole at 0, we can multiply by z^{-m}, with m its order there (negative for a pole), to regularize it. This yields

F(z) = z^me^{G(z)}\displaystyle\prod_{n=1}^{\infty}\left(1 - \frac{z}{z_n}\right)^{m_n}\exp\left(m_n\displaystyle\sum_{k=1}^{p_n}\frac{1}{k}\left(\frac{z}{z_n}\right)^k\right)

as the Weierstrass factorization formula in this case.

Overall, we see that we transform Mittag-Leffler (Theorem 1.1) into Weierstrass factorization (Theorem 1.2) through integration and exponentiation. In complex analysis it comes up quite often that integrating a term of order -1 (a simple pole) yields a logarithm, which once exponentiated gives a linear factor raised to the power of the residue, useful for generating zeros and poles. Once this is observed, that one can go from the former to the latter with some technical manipulations is strongly hinted at, and one can observe without much difficulty that the statements of Lemma 1.1 and Lemma 1.2 are needed for this.
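
The role of the exponential convergence factors in (1.3) can also be seen numerically: with zeros at z_n = n and m_n = 1, the bare product \prod (1 - z/n) does not settle down (its logarithm drifts like -z\log N), while the genus-one product \prod (1 - z/n)e^{z/n} stabilizes. A small Python sketch of my own:

```python
import cmath

# Why the exponential factors matter: partial products with and without them.
def bare(z, N):
    p = 1
    for n in range(1, N + 1):
        p *= (1 - z / n)
    return p

def with_factor(z, N):
    p = 1
    for n in range(1, N + 1):
        p *= (1 - z / n) * cmath.exp(z / n)
    return p

z = 0.3 + 0.2j
for N in (10**3, 10**4, 10**5):
    # |bare| keeps shrinking (like N**(-Re z)); with_factor settles to a limit
    print(N, abs(bare(z, N)), with_factor(z, N))
```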

References

  • Carlo Viola, An Introduction to Special Functions, Springer International Publishing, Switzerland, 2016, pp. 15-24.

Implicit function theorem and its multivariate generalization

The implicit function theorem for a single output variable can be stated as follows:

Single equation implicit function theorem. Let F(\mathbf{x}, y) be a function of class C^1 on some neighborhood of a point (\mathbf{a}, b) \in \mathbb{R}^{n+1}. Suppose that F(\mathbf{a}, b) = 0 and \partial_y F(\mathbf{a}, b) \neq 0. Then there exist positive numbers r_0, r_1 such that the following conclusions are valid.

a. For each \mathbf{x} in the ball |\mathbf{x} - \mathbf{a}| < r_0 there is a unique y such that |y - b| < r_1 and F(\mathbf{x}, y) = 0. We denote this y by f(\mathbf{x}); in particular, f(\mathbf{a}) = b.

b. The function f thus defined for |\mathbf{x} - \mathbf{a}| < r_0 is of class C^1, and its partial derivatives are given by

\partial_j f(\mathbf{x}) = -\frac{\partial_j F(\mathbf{x}, f(\mathbf{x}))}{\partial_y F(\mathbf{x}, f(\mathbf{x}))}.

Proof. For part (a), assume without loss of generality that \partial_y F(\mathbf{a}, b) > 0. By continuity of that partial derivative, it stays positive in some neighborhood of (\mathbf{a}, b); in particular, for some r_1 > 0, F(\mathbf{a}, b - r_1) < 0 < F(\mathbf{a}, b + r_1), and by continuity of F this persists for |\mathbf{x} - \mathbf{a}| < r_0 with r_0 > 0 small enough. The intermediate value theorem then produces, for each such \mathbf{x}, a y with |y - b| < r_1 and F(\mathbf{x}, y) = 0, and monotonicity of F in y (from the positivity of \partial_y F) makes it unique. This defines the function y = f(\mathbf{x}).

To show that f has partial derivatives, we must first show that it is continuous. To do so, we can let r_1 be our \epsilon and use the same process to arrive at our \delta, which corresponds to r_0.

For part (b), to show that its partial derivatives exist and are equal to what we desire, we perturb \mathbf{x} with an \mathbf{h} that we let WLOG be

\mathbf{h} = (h, 0, \ldots, 0).

Then with k = f(\mathbf{x}+\mathbf{h}) - f(\mathbf{x}), we have F(\mathbf{x} + \mathbf{h}, y+k) = F(\mathbf{x}, y) = 0. From the mean value theorem, we can arrive at

0 = h\partial_1F(\mathbf{x}+t\mathbf{h}, y + tk) + k\partial_y F(\mathbf{x}+t\mathbf{h}, y+tk)

for some t \in (0,1). Rearranging, dividing by h, and taking h \to 0 (noting that k \to 0 by continuity of f) gives us

\partial_j f(\mathbf{x}) = -\frac{\partial_j F(\mathbf{x}, y)}{\partial_y F(\mathbf{x}, y)}.

The foregoing generalizes to systems of equations, with k implicit functions and k constraints.     ▢
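
Before moving on to systems, here is a quick numerical check of part (b) in the simplest setting, with a hypothetical example of my own: F(x, y) = x^2 + y^2 - 1 near (\mathbf{a}, b) = (0.6, 0.8), where the implicit function is f(x) = \sqrt{1 - x^2}.

```python
import math

# Check the implicit-derivative formula f'(x) = -F_x/F_y for F(x,y) = x^2 + y^2 - 1.
def f(x):
    return math.sqrt(1 - x**2)      # the implicit function near (0.6, 0.8)

x, h = 0.6, 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)   # finite-difference derivative
formula = -(2 * x) / (2 * f(x))             # -F_x / F_y evaluated at (x, f(x))
print(numeric, formula)                     # both ~ -0.75
```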

Implicit function theorem for systems of equations. Let \mathbf{F}(\mathbf{x}, \mathbf{y}) be an \mathbb{R}^k-valued function of class C^1 on some neighborhood of a point (\mathbf{a}, \mathbf{b}) \in \mathbb{R}^{n+k} and let B_{ij} = (\partial F_i / \partial y_j)(\mathbf{a}, \mathbf{b}). Suppose that \mathbf{F}(\mathbf{a}, \mathbf{b}) = \mathbf{0} and \det B \neq 0. Then there exist positive numbers r_0, r_1 such that the following conclusions are valid.

a. For each \mathbf{x} in the ball |\mathbf{x} - \mathbf{a}| < r_0 there is a unique \mathbf{y} such that |\mathbf{y} - \mathbf{b}| < r_1 and \mathbf{F}(\mathbf{x}, \mathbf{y}) = 0. We denote this \mathbf{y} by \mathbf{f}(\mathbf{x}); in particular, \mathbf{f}(\mathbf{a}) = \mathbf{b}.

b. The function \mathbf{f} thus defined for |\mathbf{x} - \mathbf{a}| < r_0 is of class C^1, and its partial derivatives \partial_j \mathbf{f} can be computed by differentiating the equations \mathbf{F}(\mathbf{x}, \mathbf{f}(\mathbf{x})) = \mathbf{0} with respect to x_j and solving the resulting linear system of equations for \partial_j f_1, \ldots, \partial_j f_k.

Proof: For this we will be using Cramer’s rule: one can solve a linear system Ax = y (provided of course that A is non-singular) by letting x_j be the determinant of the matrix obtained from substituting the jth column of A with y, divided by the determinant of A.

From this, we are somewhat hinted that induction on k is in order. If B is invertible, then expanding \det B along its last row shows that at least one of its (k-1) \times (k-1) submatrices obtained by deleting the last row and some column is invertible; after relabeling the y_j's we may assume WLOG that it is the one obtained by deleting the kth row and kth column, denoted M^{kk} below. With this in mind, we can via our inductive hypothesis have

F_1(\mathbf{x}, \mathbf{y}) = F_2(\mathbf{x}, \mathbf{y}) = \cdots = F_{k-1}(\mathbf{x}, \mathbf{y}) = 0

determine y_j = g_j(\mathbf{x}, y_k) for j = 1,2,\ldots,k-1. Here we are making y_k an independent variable and we can totally do that because we are inducting on the number of outputs (and also constraints). Substituting this into the F_k constraint, this reduces to the single variable case, with

G(\mathbf{x}, y_k) = F_k(\mathbf{x}, \mathbf{g}(\mathbf{x}, y_k), y_k) = 0.

It suffices now to show via our \det B \neq 0 hypothesis that \frac{\partial G}{\partial y_k} \neq 0. Routine application of the chain rule gives

\frac{\partial G}{\partial y_k} = \displaystyle\sum_{j=1}^{k-1} \frac{\partial F_k}{\partial y_j} \frac{\partial g_j}{\partial y_k} + \frac{\partial F_k}{\partial y_k} = \displaystyle\sum_{j=1}^{k-1} B_{kj} \frac{\partial g_j}{\partial y_k} + B_{kk}. \ \ \ \ (1)

The \frac{\partial g_j}{\partial y_k}s are the solution to the following linear system:

\begin{pmatrix} \frac{\partial F_1}{\partial y_1}  & \dots & \frac{\partial F_1}{\partial y_{k-1}} \\ \vdots & \ddots & \vdots \\ \frac{\partial F_{k-1}}{\partial y_1} & \dots & \frac{\partial F_{k-1}}{\partial y_{k-1}} \end{pmatrix} \begin{pmatrix} \frac{\partial g_1}{\partial y_k} \\ \vdots \\ \frac{\partial g_{k-1}}{\partial y_k} \end{pmatrix} = \begin{pmatrix} -\frac{\partial F_1}{\partial y_k} \\ \vdots \\ -\frac{\partial F_{k-1}}{\partial y_k} \end{pmatrix} .

Let M^{ij} denote the (k-1) \times (k-1) submatrix of B obtained by deleting the ith row and jth column. We see then that in the replacement step of Cramer’s rule, we arrive at what is M^{kj} but with its last column swapped to the left k-j-1 times so that it lands in the jth column, and also with a negative sign, which means

\frac{\partial g_j}{\partial y_k}(\mathbf{a}, b_k) = (-1)^{k-j} \frac{\det M^{kj}}{\det M^{kk}}.

Now, we substitute this into (1) to get

\begin{aligned}\frac{\partial G}{\partial y_k}(\mathbf{a}, b_k) &= \displaystyle\sum_{j=1}^{k-1} (-1)^{k-j}B_{kj}\frac{\det M^{kj}}{\det M^{kk}} + B_{kk} \\ &= \frac{\sum_{j=1}^k (-1)^{j+k} B_{kj}\det M^{kj}}{\det M^{kk}} \\ &= \frac{\det B}{\det M^{kk}} \\ &\neq 0. \end{aligned}

Finally, we apply the implicit function theorem for one variable for the y_k that remains.     ▢
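
A small numerical check of the system case, with a hypothetical example of my own: F_1(x, y_1, y_2) = y_1^2 + y_2^2 - x and F_2(x, y_1, y_2) = y_1 - y_2^2 near (\mathbf{a}, \mathbf{b}) = (2, (1, 1)). Differentiating \mathbf{F}(x, \mathbf{f}(x)) = \mathbf{0} gives B\,\mathbf{f}'(x) = -\partial_x \mathbf{F}, which the sketch solves by Cramer's rule (as in the proof) and compares against a finite-difference derivative of the implicit function computed by Newton iteration:

```python
# F1 = y1^2 + y2^2 - x, F2 = y1 - y2^2; solution branch through (x, y1, y2) = (2, 1, 1).
def F(x, y1, y2):
    return (y1**2 + y2**2 - x, y1 - y2**2)

def solve_y(x, y1=1.0, y2=1.0, iters=50):
    # crude Newton iteration for F(x, y) = 0; enough for a sanity check
    for _ in range(iters):
        f1, f2 = F(x, y1, y2)
        b11, b12, b21, b22 = 2*y1, 2*y2, 1.0, -2*y2   # the matrix B at (y1, y2)
        det = b11*b22 - b12*b21
        y1 -= ( f1*b22 - f2*b12) / det
        y2 -= (-f1*b21 + f2*b11) / det
    return y1, y2

h = 1e-6                                  # finite-difference derivative at x = 2
(y1p, y2p), (y1m, y2m) = solve_y(2 + h), solve_y(2 - h)
print((y1p - y1m) / (2*h), (y2p - y2m) / (2*h))

det_B = 2*(-2) - 2*1                      # Cramer's rule with B = [[2, 2], [1, -2]],
print((1*(-2) - 0*2) / det_B, (2*0 - 1*1) / det_B)   # rhs = (1, 0): gives (1/3, 1/6)
```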

References

  • Gerald B. Folland, Advanced Calculus, Prentice Hall, Upper Saddle River, NJ, 2002, pp. 114–116, 420–422.


A nice consequence of Baire category theorem

In a complete metric space X, we call a point x for which \{x\} is open an isolated point. If X is countable and has no isolated points, we can take \displaystyle\cap_{x \in X} (X \setminus \{x\}) = \emptyset, with each of the X \setminus \{x\} open and dense, to violate the Baire category theorem. From that, we arrive at the proposition that in a complete metric space, having no isolated points implies that the space is uncountable, and equivalently, that being countable implies there is an isolated point.


Riemann mapping theorem

I am going to make an effort to understand the proof of the Riemann mapping theorem, which states that there exists a conformal map from any simply connected region that is not the entire plane to the unit disk. I learned of its significance: combined with the Poisson integral formula, it can be used to solve basically any Dirichlet problem where the region in question is simply connected.

Involved in this is Montel’s theorem, which I will now state and prove.

Definition A normal family of continuous functions is one for which every sequence in it has a subsequence converging uniformly on compact sets.

Montel’s theorem A family \mathcal{F} of holomorphic functions on a domain D which is locally uniformly bounded is a normal family.

Proof: It turns out that holomorphy alongside local uniform boundedness is enough to establish local equicontinuity via the Cauchy integral formula. On any compact set K \subset D, we can find some r > 0 for which, for every point z_0 \in K, \overline{B(z_0, 2r)} \subset D. By local uniform boundedness (and compactness of K) we have some M > 0 such that |f(z)| \leq M on all such balls B(z_0, 2r), for every f \in \mathcal{F}. Thus, for any w \in K and any z \in B(w, r), we can use Cauchy’s integral formula over \partial B(w, 2r). In that, the radius r versus 2r is used to bound the denominator below by 2r^2.

\begin{aligned} |f(z) - f(w)| &= \left| \frac{1}{2\pi i}\oint_{\partial B(w, 2r)} \left(\frac{f(\zeta)}{\zeta - z} - \frac{f(\zeta)}{\zeta - w}\right)d\zeta \right| \\ & \leq  \frac{|z-w|}{2\pi} \oint_{\partial B(w, 2r)} \left| \frac{f(\zeta)}{(\zeta - z)(\zeta - w)} \right| |d\zeta| \\ & \leq  \frac{|z-w| \cdot 4\pi r}{2\pi \cdot 2r^2} M = \frac{M}{r}|z-w|. \end{aligned}

This shows the family is locally uniformly Lipschitz and thus locally equicontinuous. To choose the \delta we can divide our \epsilon by that Lipschitz constant M/r, alongside requiring \delta < r so that the estimate applies.

With this we can finish off with the Arzela-Ascoli theorem.     ▢

Now take the family \mathcal{F} of analytic, injective functions from the simply connected region \Omega into the unit disk \mathbb{D} which take z_0 to 0. On this we have the following.

Proposition If f \in \mathcal{F} is such that for all g \in \mathcal{F}, |f'(z_0)| \geq |g'(z_0)|, then f surjects onto \mathbb{D}.

Proof: We prove the contrapositive. In order to do so, it suffices, for any f \in \mathcal{F} that misses some point of \mathbb{D}, to write f = s \circ g, where g \in \mathcal{F} and s is an analytic self-map of \mathbb{D} that fixes 0 and is not an automorphism. In that case, we can deduce from the Schwarz lemma that |s'(0)| < 1 and thereby from the chain rule that |g'(z_0)| > |f'(z_0)|.

Recall that we have, for each w \in \mathbb{D}, the automorphism T_w(z) = \frac{z-w}{1-\bar{w}z} of \mathbb{D}, and that their inverses are also automorphisms. Write the omitted value as -w^2 for some w \in \mathbb{D} (every nonzero point of \mathbb{D} is of this form, and 0 is attained by f). Let’s take 0 to w via T_w^{-1}, then w to w^2 via p(z) = z^2, and finally w^2 to 0 via T_{w^2}; this gives s = T_{w^2} \circ p \circ T_w^{-1}, a self-map of \mathbb{D} fixing 0 that is not an automorphism. Since T_{w^2}^{-1} \circ f omits 0 on the simply connected \Omega, it has an analytic square root h there; choosing the branch with h(z_0) = w, the function g = T_w \circ h lies in \mathcal{F} and satisfies f = s \circ g.     ▢
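
A quick numerical sanity check of this step (my own sketch): for a sample w, the map s = T_{w^2} \circ p \circ T_w^{-1} fixes 0, maps sample points of \mathbb{D} into \mathbb{D}, and has |s'(0)| = 2|w|/(1 + |w|^2) < 1, consistent with the Schwarz lemma argument above.

```python
w = 0.5 + 0.2j                     # sample parameter with |w| < 1

def T(a, z):                       # the disk automorphism T_a(z) = (z - a)/(1 - conj(a) z)
    return (z - a) / (1 - a.conjugate() * z)

def T_inv(a, z):                   # its inverse
    return (z + a) / (1 + a.conjugate() * z)

def s(z):                          # s = T_{w^2} o p o T_w^{-1}, p(z) = z^2
    return T(w * w, T_inv(w, z) ** 2)

print(abs(s(0)))                                            # 0: s fixes the origin
h = 1e-6
print(abs((s(h) - s(0)) / h), 2 * abs(w) / (1 + abs(w)**2)) # both ~ 0.835 < 1
for z in (0.9, -0.8j, 0.5 + 0.5j):                          # stays inside the disk
    print(abs(s(z)) < 1)
```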

Nonemptiness of family

It is not difficult to construct an analytic injective map from \Omega into \mathbb{D} that sends z_0 to 0. The part about sending z_0 to 0 is in fact trivial with the T_w's once the image lies in \mathbb{D}. For the rest, it suffices to map \Omega injectively into the complement of some closed disk, as after that, we can invert (and translate and dilate) to land inside \mathbb{D}.

Since \Omega is not the entire complex plane, there is some a \notin \Omega. By translation, we can assume that a = 0. Because the region is simply connected and omits 0, there is an analytic branch g of the square root on \Omega. For any w that gets hit by g, the value -w does not (if g(z_1) = w and g(z_2) = -w then z_1 = w^2 = z_2, a contradiction for w \neq 0). By the open mapping theorem, g(\Omega) contains a ball about some point w, so there is a ball centered at -w lying entirely outside g(\Omega). Inverting about that ball, and translating and dilating accordingly, we land inside the unit disk.

Construction of limit to surjection

We can see now that if we can construct a sequence of functions in our family that converges to an analytic one with the same zero at z_0 with maximal derivative (in absolute value) there, we are finished. Specifically, let \{f_n\} be a sequence from \mathcal{F} such that

\lim_n |f'_n(z_0)| = \sup_{f \in \mathcal{F}} \{|f'(z_0)|\}.

This can be done by taking functions whose derivatives at z_0 increase to the supremum. With Montel’s theorem applied to our obviously locally uniformly bounded family, we know that the family is normal, and thus by definition, we can extract some subsequence that is uniformly convergent on compact sets. Now it remains to show that the function converged to is analytic and injective.

The injective part follows from a corollary of Hurwitz’s theorem, which we now state.

Hurwitz’s theorem (corollary of) If f_n is a sequence of injective analytic functions with converge uniformly on compact sets to f, then f is constant or injective.

Proof: Recall that Hurwitz’s theorem states that if f has a zero of some multiplicity m at some point z_0, then for any \epsilon > 0 we will, past some N in the index of the sequence, have m zeros (counted with multiplicity) of f_n within B(z_0, \epsilon) for all n > N, provided f is not identically 0. To see that a non-constant f can hit any given value at most once, subtract that value from f and from all the f_n, turning it into a zero; if f hit it at two distinct points, Hurwitz’s theorem applied to small disjoint balls around them would force some injective f_n to have two zeros, a contradiction.     ▢

To show analyticity, we can use Weierstrass’s theorem.

Weierstrass’s theorem Take a sequence \{f_n\} of analytic functions and suppose it converges uniformly on compact sets to f. Then the following hold:
    a. f is analytic.
    b. \{f'_n\} converges to f' uniformly on compact sets.

Proof: This is a more standard theorem, so I will only sketch the proof. Recall the definition of compactness as the property that every open cover has a finite subcover. This is so powerful because, for any collection of balls centered at the points of the set, we can find finitely many of them that cover the entire set, and finiteness allows us to take a maximum or minimum of finitely many N's or \delta's to uniformize some limit.

We can do the same here. For every z on a compact set, express f_n(z) as the integral of \frac{f_n(\zeta)}{\zeta - z} via Cauchy’s integral formula over the boundary of some ball centered at z. Uniform convergence of \frac{f_n(\zeta)}{\zeta - z} on the boundary to \frac{f(\zeta)}{\zeta-z} allows us to pass the limit inside the integral, which represents f via Cauchy’s integral formula and in particular shows it is analytic. The same can be done for the \{f'_n\}.

Again we can use two radii as done in the proof of Montel’s theorem to impose uniform convergence on a smaller ball.     ▢

Finally, our candidate conformal map to \mathbb{D} satisfies f(z_0) = 0, since f_n(z_0) = 0 for all n and the sequence converges at z_0.

This gives us existence. There is also a uniqueness aspect of the Riemann mapping theorem that comes when one imposes f'(z_0) > 0. This is very elementary to prove and will be left to the reader.