Vector fields, flows, and the Lie derivative

Let M be a smooth real manifold. A smooth vector field V on M can be viewed as a map from C^{\infty}(M) to C^{\infty}(M): since V associates a tangent vector to every point p \in M, it takes a function f : M \to \mathbb{R} to the function whose value at p is the directional derivative of f along the tangent vector at p. Moreover, this value varies smoothly with p.

Along any vector field, starting from any point, we can trace out a path that follows the vector field. Imagine the velocity field of a body of water that does not change with time. Place a point particle anywhere at any time, and we can deterministically predict its path both forward in time and backward in time. Such a path is called an integral curve, and it is easy to see that the images of maximal integral curves partition M into equivalence classes.

On a manifold M, in a chart (U, \varphi), an integral curve of a vector field V satisfies

\frac{\mathrm{d}x^{\mu}(t)}{\mathrm{d}t} = V^{\mu}(x(t)), \qquad (1)

where x^{\mu}(t) is the \mu-th component of \varphi(x(t)) and V = V^{\mu}\partial / \partial x^{\mu}. This is an ODE, which is guaranteed to have a unique solution at least locally, and we assume for now that the parameter t can be maximally extended.
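As a concrete sanity check (a minimal numerical sketch, not part of the formalism above), one can integrate (1) for the rotation field V(x, y) = (-y, x) on \mathbb{R}^2, whose integral curves are circles:

```python
import math

# Hypothetical example: the vector field V(x, y) = (-y, x) on R^2.
# Its integral curves are circles; we solve dx/dt = V(x) numerically
# with a 4th-order Runge-Kutta step and check that the curve starting
# at (1, 0) returns there after time 2*pi.

def V(p):
    x, y = p
    return (-y, x)

def rk4_step(p, dt):
    def shift(a, b, s):
        return (a[0] + s * b[0], a[1] + s * b[1])
    k1 = V(p)
    k2 = V(shift(p, k1, dt / 2))
    k3 = V(shift(p, k2, dt / 2))
    k4 = V(shift(p, k3, dt))
    return (p[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            p[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def flow(p, t, n=10000):
    dt = t / n
    for _ in range(n):
        p = rk4_step(p, dt)
    return p

p = flow((1.0, 0.0), 2 * math.pi)
```

Running the flow for time 2\pi returns the starting point, reflecting that the integral curve through (1, 0) is the unit circle traversed counterclockwise.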

If we attach the initial condition that at t = 0 the integral curve is at x_0, and denote the coordinates by \sigma^{\mu}(t, x_0), then (1) becomes

\frac{\mathrm{d}\sigma^{\mu}(t, x_0)}{\mathrm{d}t} = V^{\mu}(\sigma(t, x_0)).

Here, \sigma : \mathbb{R} \times M \to M is called a flow generated by V, which necessarily satisfies

\sigma(t, \sigma(s, x_0)) = \sigma(t+s, x_0)

for any s, t \in \mathbb{R}.

Within this is the structure of a one-parameter family where

(i) \sigma_{s+t} = \sigma_s \circ \sigma_t or \sigma_{s+t}(x_0) = \sigma_s(\sigma_t(x_0)).
(ii) \sigma_0 is the identity map.
(iii) \sigma_{-t} = (\sigma_t)^{-1}.
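The three one-parameter group properties can be spot-checked on an explicit flow. For V = x\,d/dx on \mathbb{R}, the flow is \sigma_t(x) = x e^t; the following sketch (with arbitrary sample values) checks (i) through (iii) numerically:

```python
import math

# Sketch: the flow of V = x d/dx on R is sigma_t(x) = x * exp(t).
# We check the one-parameter group properties at a sample point.

def sigma(t, x):
    return x * math.exp(t)

x0, s, t = 1.7, 0.3, -1.2
lhs = sigma(s, sigma(t, x0))    # (i)  sigma_s o sigma_t ...
rhs = sigma(s + t, x0)          #      ... equals sigma_{s+t}
ident = sigma(0.0, x0)          # (ii) sigma_0 is the identity
inv = sigma(-t, sigma(t, x0))   # (iii) sigma_{-t} inverts sigma_t
```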

We now ask how a smooth vector field W changes along a smooth vector field V. If our manifold were simply \mathbb{R}^n (with a single identity chart, globally), then at any point p we would have a direction along V, and under an infinitesimal displacement along it, W would change as well. In that case it is easy to represent tangent vectors with indexed coordinates. Naively, we could take the displacement in W, divide by the amount of displacement along V, and take the limit. However, we have not defined addition of tangent vectors lying in different tangent spaces. To do so, we would need some meaningful correspondence between the two tangent spaces. Why can we not simply do vector addition? Recall that tangent space elements are defined in terms of how they act on smooth functions from M to \mathbb{R}, not directly as arrows. It is only because they act linearly with respect to any given such function that we can use coordinate vectors to represent them.

We resolve this in a more general fashion by defining the induced map on tangent spaces T_pM and T_{f(p)}N for smooth f : M \to N between manifolds. Recall that an element of a tangent space is a map D : C^{\infty}(M) \to \mathbb{R} (that also satisfies the Leibniz property: D(fg) = Df \cdot g + f \cdot Dg). If g \in C^{\infty}(N), then g \circ f \in C^{\infty}(M). We define the induced map

\Phi_{f, p} : T_p M \to T_{f(p)} N

in the following manner. If D \in T_p(M), then \Phi_{f, p}(D) = D', where D'[g] = D[g \circ f].
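To see the definition in coordinates, one can compute D'[g] = D[g \circ f] symbolically and compare it with the Jacobian of f acting on the components of D. A sketch with sympy, for the hypothetical map f(u, v) = (uv, u + v), the tangent vector D = \partial/\partial u at p = (2, 3), and a concrete test function g:

```python
import sympy as sp

# Pushforward sketch: f : R^2 -> R^2, f(u, v) = (u*v, u + v),
# D = d/du at p = (2, 3), test function g(x, y) = x^2 * y.

u, v, x, y = sp.symbols('u v x y')
f = sp.Matrix([u * v, u + v])
p = {u: 2, v: 3}
g = x**2 * y

# By definition: D'[g] = D[g o f], i.e. differentiate g(f(u, v)) in u at p.
Dprime_g = sp.diff(g.subs({x: f[0], y: f[1]}), u).subs(p)

# Equivalently: the Jacobian of f at p applied to D's components (1, 0)
# gives D' in the basis d/dx, d/dy at f(p) = (6, 5).
J = f.jacobian([u, v]).subs(p)
comp = J * sp.Matrix([1, 0])
direct = (comp[0] * sp.diff(g, x) + comp[1] * sp.diff(g, y)).subs({x: 6, y: 5})
```

Both routes give the same number, which is the coordinate statement that the induced map is the Jacobian.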

We now apply this to the maps \sigma_t : M \to M in our construction of the Lie derivative \mathcal{L}_V W of a vector field W with respect to vector field V. Since the flow is along V,

\sigma_{-t}^{\mu}(p) = x^{\mu}(p) - tV^{\mu}(p) + O(t^2). \qquad (2)

We consider the induced map of \sigma_{-t} at the point \sigma_t(p),

\Phi_{\sigma_{-t}, \sigma_t(p)} : T_{\sigma_t(p)} M \to T_p M.

If \Phi_{\sigma_{-t}, \sigma_t(p)}(W) = W', then by definition,

W'[f](p) = W[f \circ \sigma_{-t}](\sigma_t(p)).

That means

\mathcal{L}_V W[f](p) = \left(\displaystyle\lim_{t \to 0}\frac{W'(p) - W(p)}{t}\right)[f] = \displaystyle\lim_{t \to 0}\frac{W'[f](p) - W[f](p)}{t}. \qquad (3)

Using that by the chain rule,

\frac{\partial}{\partial x^{\nu}}(f \circ \sigma_{-t})(\sigma_t(p)) = \frac{\partial \sigma_{-t}^{\mu}}{\partial x^{\nu}}(\sigma_t(p)) \frac{\partial f}{\partial x^{\mu}}(p),

we arrive at

\begin{aligned} W'[f](p) & = W^{\nu}(\sigma_t(p)) \frac{\partial}{\partial x^{\nu}}[f \circ \sigma_{-t}](\sigma_t(p)) \\ & = W^{\nu}(\sigma_t(p)) \frac{\partial \sigma_{-t}^{\mu}}{\partial x^{\nu}}(\sigma_t(p))\frac{\partial f}{\partial x^{\mu}}(p). \qquad (4) \end{aligned}

Using the power series of \sigma_t(p) at p, we get

W^{\nu}(\sigma_t(p)) = W^{\nu}(p) + tV^{\rho}(p) \frac{\partial W^{\nu}}{\partial x^{\rho}}(p) + O(t^2). \qquad (5)

Moreover, by (2),

\frac{\partial \sigma_{-t}^{\mu}}{\partial x^{\nu}}(\sigma_t(p)) = \delta_{\nu}^{\mu} - t \frac{\partial V^{\mu}}{\partial x^{\nu}}(p) + O(t^2). \qquad (6)

Substituting (5) and (6) into (4) yields

\begin{aligned} W'[f](p) & = \left(W^{\nu}(p) + tV^{\rho}(p) \frac{\partial W^{\nu}}{\partial x^{\rho}}(p) + O(t^2)\right)\left(\delta_{\nu}^{\mu} - t \frac{\partial V^{\mu}}{\partial x^{\nu}}(p) + O(t^2)\right)\frac{\partial f}{\partial x^\mu}(p) \\ & = \left(W^{\mu}(p) + t\left(V^{\rho}(p) \frac{\partial W^{\mu}}{\partial x^{\rho}}(p) - W^{\nu}(p) \frac{\partial V^{\mu}}{\partial x^{\nu}}(p)\right) + O(t^2)\right)\frac{\partial f}{\partial x^\mu}(p) \\ & = \left(W^{\mu}(p) + t\left(V^{\nu}(p) \frac{\partial W^{\mu}}{\partial x^{\nu}}(p) - W^{\nu}(p) \frac{\partial V^{\mu}}{\partial x^{\nu}}(p)\right) + O(t^2)\right)\frac{\partial f}{\partial x^\mu}(p). \qquad (7) \end{aligned}

There is a constant term, a first order term, and an O(t^2). In (3), the constant term is subtracted out, and the O(t^2) contributes nothing to the limit. This means that the Lie derivative is equal to the first order term, with

(\mathcal{L}_V W)^{\mu}(p) = V^{\nu}(p) \frac{\partial W^{\mu}}{\partial x^{\nu}}(p) - W^{\nu}(p) \frac{\partial V^{\mu}}{\partial x^{\nu}}(p). \qquad (8)

Notice how in (4) there is a factor \frac{\partial f}{\partial x^{\mu}} that we have omitted in (8). This is because (8) gives the components of the Lie derivative in the basis \partial/\partial x^\mu of the tangent space, and it is through this basis that the resulting tangent vector acts on f \in C^{\infty}(M).

We recognize in (8) the \mu-th component of the Lie bracket [V,W], where

[V,W]^{\mu} = V^{\nu} \frac{\partial W^{\mu}}{\partial x^{\nu}} - W^{\nu} \frac{\partial V^{\mu}}{\partial x^{\nu}}. \qquad (9)
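Formula (9) is straightforward to implement symbolically. A sketch with sympy on \mathbb{R}^2, checking that the rotation field and the radial (Euler) field commute, and that [\partial/\partial x, x\,\partial/\partial x] = \partial/\partial x:

```python
import sympy as sp

# Sketch of formula (9) in coordinates on R^2.

x, y = sp.symbols('x y')
coords = [x, y]

def lie_bracket(V, W):
    """[V, W]^mu = V^nu dW^mu/dx^nu - W^nu dV^mu/dx^nu."""
    return [sum(V[n] * sp.diff(W[m], coords[n]) -
                W[n] * sp.diff(V[m], coords[n]) for n in range(2))
            for m in range(2)]

# Rotation field V = (-y, x) and radial field W = (x, y): rotations
# commute with scaling, so the bracket should vanish.
bracket = [sp.simplify(c) for c in lie_bracket([-y, x], [x, y])]

# A nonvanishing example: [d/dx, x d/dx] = d/dx.
bracket2 = [sp.simplify(c)
            for c in lie_bracket([sp.Integer(1), sp.Integer(0)],
                                 [x, sp.Integer(0)])]
```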

 

Sheaves of holomorphic functions

I can sense vaguely that the sheaf is a central definition in the (superficially) horrendously abstract language of modern mathematics. There really does seem to be quite a distance, crudely speaking, between pre-1950 and post-1950 mainstream mathematics in terms of the level of abstraction typically employed. It is my hope that I will eventually accustom myself to the latter instead of viewing it as a very much alien language. It is difficult though, and there are in fact definitions which take me quite a while to grasp (by this, I mean being able to visualize them so clearly that I feel like I won’t ever forget them), which is expected given how long it has taken historically to condense to certain definitions golden in hindsight. In the hope of a step forward in my goal to understand sheaves, I’ll write up the associated definitions in this post.

Definition 1 (Presheaf). Let (X, \mathcal{T}) be a topological space. A presheaf of vector spaces on X is a family \mathcal{F} = \{\mathcal{F}(U)\}_{U \in \mathcal{T}} of vector spaces together with a collection of associated linear maps, called restriction maps,

\rho = \{\rho_V^U : \mathcal{F}(U) \to \mathcal{F}(V) \mid V,U \in \mathcal{T} \text{ and } V \subseteq U\}

such that

\rho_U^U = \text{id}_{\mathcal{F}(U)} \text{ for all } U \in \mathcal{T}

\rho_W^V \circ \rho_V^U = \rho_W^U \text{ for all } U,V,W \in \mathcal{T} \text{ such that } W \subseteq V \subseteq U.

Given U,V \in \mathcal{T} such that V \subseteq U and f \in \mathcal{F}(U), one often writes f|_V rather than \rho_V^U(f).

Definition 2 (Sheaf). Let \mathcal{F} be a presheaf on a topological space X. We call \mathcal{F} a sheaf on X if for all open sets U \subseteq X and collections of open sets \{U_i \subseteq U\}_{i \in I} such that \cup_{i \in I} U_i = U, \mathcal{F}(U) satisfies the following properties:

  1. For f, g \in \mathcal{F}(U) such that f|_{U_i} = g|_{U_i} for all i \in I, it is given that f = g.    (2.1)
  2. For all collections \{f_i \in \mathcal{F}(U_i)\}_{i \in I} such that f_i |_{U_i \cap U_j} = f_j |_{U_i \cap U_j} for all i, j \in I, there exists f \in \mathcal{F}(U) such that f |_{U_i} = f_i for all i \in I.    (2.2)

In more concrete terms, (2.1) says that a section is determined by its local data (for holomorphic functions, this is in the spirit of the identity theorem), and (2.2) is a gluing statement, in the spirit of analytic continuation.
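The two axioms can be illustrated on a toy example, far simpler than holomorphic functions: the sheaf of all real-valued functions on a three-point set, with sections stored as dicts (all names here are illustrative):

```python
# Toy sheaf of all real-valued functions on X = {1, 2, 3}.
# A section over a subset V is a dict with key set V.

U1, U2 = {1, 2}, {2, 3}
U = U1 | U2

def restrict(f, V):
    return {p: f[p] for p in V}

# Axiom (2.1): sections agreeing on every piece of the cover are equal.
f = {1: 0.5, 2: -1.0, 3: 2.0}
g = {1: 0.5, 2: -1.0, 3: 2.0}
agree_locally = (restrict(f, U1) == restrict(g, U1) and
                 restrict(f, U2) == restrict(g, U2))

# Axiom (2.2): compatible local sections glue to a global section.
f1 = {1: 3.0, 2: 7.0}
f2 = {2: 7.0, 3: -4.0}
assert restrict(f1, U1 & U2) == restrict(f2, U1 & U2)  # compatibility
glued = {**f1, **f2}   # the glued section over U
```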

Definition 3 (Sheaf of holomorphic functions \mathcal{O}). Let X be a Riemann surface. The presheaf \mathcal{O} of holomorphic functions on X is made up of complex vector spaces of holomorphic functions. For all open sets U \subseteq X, \mathcal{O}(U) is the vector space of holomorphic functions on U. The restrictions are the usual restrictions of functions.

Proposition 4. If X is a Riemann surface, then \mathcal{O} is a sheaf on X.

Proof. As \mathcal{O} is a presheaf, it suffices to show properties (2.1) and (2.2). (2.1) follows directly from the definition of restriction of a function: if two functions agree on every set in a cover of U, they agree on all of U.

For (2.2), take some collection \{f_i \in \mathcal{O}(U_i)\}_{i \in I} such that f_i |_{U_i \cap U_j} = f_j |_{U_i \cap U_j} for all i, j \in I. For x \in U, define f(x) = f_i(x), where i \in I is such that x \in U_i. Whenever x \in U_i \cap U_j, we have f_i(x) = f_j(x) by hypothesis on the f_i, so f is well-defined. Given any x \in U, there exists some neighborhood U_i on which f coincides with the holomorphic function f_i. From this it follows that f is holomorphic, which means f \in \mathcal{O}(U).     ▢

Definition 5 (Direct limit of algebraic objects). Let \langle I, \leq \rangle be a directed set. Let \{A_i : i \in I\} be a family of objects indexed by I and f_{ij}: A_i \rightarrow A_j be a homomorphism for all i \leq j with the following properties:

  1. f_{ii} is the identity of A_i, and
  2. f_{ik} = f_{jk} \circ f_{ij} for all i \leq j \leq k.

Then the pair \langle A_i, f_{ij} \rangle is called a direct system over I.

The direct limit of the direct system \langle A_i, f_{ij} \rangle is denoted by \varinjlim A_i and is defined as follows. Its underlying set is the disjoint union of the A_i modulo a certain equivalence relation \sim:

\varinjlim A_i = \bigsqcup_i A_i \bigg / \sim.

Here, if x_i \in A_i and x_j \in A_j, then x_i \sim x_j iff there is some k \in I with i \leq k, j \leq k such that f_{ik}(x_i) = f_{jk}(x_j).

More concretely, in the sheaf of holomorphic functions on a Riemann surface, the indices correspond to open sets, with i \leq j meaning U \supseteq V, and f_{ij} : A_i \to A_j is the restriction \rho_V^U : \mathcal{F}(U) \to \mathcal{F}(V). Two holomorphic functions defined on U and V, represented by x_i and x_j, are considered equivalent iff they are equal when restricted to some open W \subseteq U \cap V.

Fix a point x \in X and require that the open sets in consideration be neighborhoods of it. The direct limit in this case is called the stalk of \mathcal{F} at x, denoted \mathcal{F}_x. For each neighborhood U of x, the canonical morphism \mathcal{F}(U) \to \mathcal{F}_x associates to a section s of \mathcal{F} over U an element s_x of the stalk \mathcal{F}_x, called the germ of s at x.

Dually, there is the inverse limit, which in our concrete context is the more abstract language for analytic continuation.

Definition 6 (Inverse limit of algebraic objects). Let \langle I, \leq \rangle be a directed set. Let \{A_i : i \in I\} be a family of objects indexed by I and f_{ij}: A_j \rightarrow A_i be a homomorphism for all i \leq j with the following properties:

  1. f_{ii} is the identity of A_i, and
  2. f_{ik} = f_{ij} \circ f_{jk} for all i \leq j \leq k.

Then the pair ((A_i)_{i \in I}, (f_{ij})_{i \leq j \in I}) is an inverse system of groups and morphisms over I, and the morphisms f_{ij} are called the transition morphisms of the system.

We define the inverse limit of the inverse system ((A_i)_{i \in I}, (f_{ij})_{i \leq j \in I}) as a particular subgroup of the direct product of the A_i:

A = \displaystyle\varprojlim_{i \in I} A_i = \left\{\left.\vec{a} \in \prod_{i \in I} A_i\; \right|\;a_i = f_{ij}(a_j) \text{ for all } i \leq j \text{ in } I\right\}.

What we have essentially are families of holomorphic functions over open sets, and we glue them together via a direct product indexed by open sets, under the restriction that there must be agreement in values where the open sets overlap. This gives us the space of holomorphic functions over the union of the open sets, which is of course a subgroup of the direct product under both addition and multiplication. We have here again the common theme of patching up local pieces to create a global structure.
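A finite-stage sketch of an inverse system, using residues rather than holomorphic functions: the system \mathbb{Z}/2 \leftarrow \mathbb{Z}/4 \leftarrow \mathbb{Z}/8 \leftarrow \cdots with transition maps given by reduction mod 2^i. Any integer determines a compatible tuple, i.e., an element of the (truncated) inverse limit:

```python
# Inverse-system sketch: stages Z/2^1, ..., Z/2^N with transition maps
# f_ij : Z/2^j -> Z/2^i (i <= j) given by reduction mod 2^i.

N = 8  # truncate the system at stage N

def f(i, j, a):
    """Transition map f_ij : Z/2^j -> Z/2^i for i <= j."""
    assert i <= j
    return a % 2**i

def element(n):
    """The compatible tuple (a_1, ..., a_N) induced by the integer n."""
    return [n % 2**i for i in range(1, N + 1)]

a = element(1234567)
# Membership in the inverse limit: a_i = f_ij(a_j) for all i <= j.
compatible = all(f(i, j, a[j - 1]) == a[i - 1]
                 for i in range(1, N + 1) for j in range(i, N + 1))
```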

Urysohn metrization theorem

The Urysohn metrization theorem gives conditions which guarantee that a topological space is metrizable. A topological space (X, \mathcal{T}) is metrizable if there is a metric on X that induces the topology \mathcal{T}. The conditions are that the space is regular and second-countable. Regular means that any closed subset and any point not in it can be separated by disjoint open sets, and second-countable means there is a countable basis.

Metrization is established by embedding the topological space into a metrizable one (every subspace of a metrizable space is metrizable). Here, we construct a metric on \mathbb{R}^{\mathbb{N}}, which restricts to [0,1]^{\mathbb{N}}, and use that for the embedding. We first prove that regular and second-countable implies normal, which is a hypothesis of Urysohn’s lemma. We then use Urysohn’s lemma to construct the embedding.

Lemma Every regular, second-countable space is normal.

Proof: Let B_1, B_2 be the disjoint closed sets we want to separate. By regularity, each point of B_1 has an open neighborhood whose closure does not intersect B_2; by second-countability, the union of these neighborhoods can be written as a union of countably many basis sets, which yields a countable open cover \{U_i\} of B_1 whose closures do not intersect B_2. Do the same for B_2 to get a similar cover \{V_i\}.

Now we wish to trim our covers so that the resulting open sets are disjoint. We modify the U_i and V_i so that no set of one family meets the closures of the earlier sets of the other: set U_i' = U_i \setminus \bigcup_{j=1}^i \overline{V_j} and V_i' = V_i \setminus \bigcup_{j=1}^i \overline{U_j}. Then U' = \bigcup_i U_i' and V' = \bigcup_i V_i' are disjoint open sets containing B_1 and B_2 respectively.     ▢

Urysohn’s lemma Let A and B be disjoint closed sets in a normal space X. Then, there is a continuous function f : X \to [0,1] such that f(A) = \{0\} and f(B) = \{1\}.

Proof: Observe that if for all dyadic fractions (those with denominator a power of 2) r \in (0,1), we assign open subsets U(r) of X such that

  1. U(r) contains A and is disjoint from B for all r
  2. r < s implies that \overline{U(r)} \subset U(s)

and set f(x) = 1 if x \notin U(r) for any r and f(x) = \inf \{r : x \in U(r)\} otherwise, we are mostly done. Obviously, f(A) = \{0\} and f(B) = \{1\}. To show that f is continuous, it suffices to show that the preimages of [0, a) and (a, 1] are open for any a. For [0, a), the preimage is the union of the U(r) over r < a: if f(x) = a' < a, then since a' is an infimum, there must be an s \in (a', a) such that x \in U(s). Now, suppose f(x) \in (a, 1] and take s \in (a, f(x)). Then X \setminus \overline{U(s)} is an open neighborhood of x that maps into (a, 1]. Indeed, x \in X \setminus \overline{U(s)}: otherwise x \in \overline{U(s)} \subset U(s') for every s' > s, so that f(x) \leq s < f(x), a contradiction. Moreover, since any y \notin \overline{U(s)} lies outside U(r) for all r \leq s, we have f(y) \geq s > a, so nothing in this neighborhood maps to a value at or below a.

Now we proceed with the aforementioned assignment of subsets. In the process, we construct another assignment V. Initialize U(1) = X \setminus B and V(0) = X \setminus A. Let U(1/2) and V(1/2) be disjoint open sets containing A and B respectively (this is where we need our normality hypothesis). Notice how normality is used: if B_1 and B_2 are disjoint closed sets separated by disjoint open sets U_1 \supseteq B_1 and U_2 \supseteq B_2, then X \setminus U_1 is a closed set containing U_2 (hence \overline{U_2}) and disjoint from B_1, so we can run the same separation process on B_1 and X \setminus U_1. With this, we can construct U(1/4), U(3/4), V(1/4), V(3/4) and the relations

X \setminus V(0) \subset U(1/4) \subset X \setminus V(1/4) \subset U(1/2),

X \setminus U(1) \subset V(3/4) \subset X \setminus U(3/4) \subset V(1/2).

Inductively, we can continue this process on X \setminus V(a/2^n) and X \setminus U((a+1)/2^n) for each a = 0,1,\ldots,2^n-1: given U and V on all dyadics with denominator 2^n, we fill in the ones with denominator 2^{n+1}. One can draw a picture to help visualize this process and to see that it satisfies the required aforementioned conditions for U.     ▢
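It is worth noting that in a metric space, the function whose existence Urysohn’s lemma guarantees can be written down in closed form as f(x) = d(x, A)/(d(x, A) + d(x, B)). A sketch for two disjoint closed subsets of \mathbb{R} (finite point sets here, for simplicity):

```python
# Urysohn-type function in a metric space via distance functions:
# f(x) = d(x, A) / (d(x, A) + d(x, B)), which is 0 on A and 1 on B.

def d_to_set(x, points):
    return min(abs(x - p) for p in points)

A = [0.0]          # the closed set {0}
B = [1.0, 2.0]     # the closed set {1, 2}

def urysohn(x):
    da, db = d_to_set(x, A), d_to_set(x, B)
    # Well-defined: A and B are disjoint and closed, so da + db > 0.
    return da / (da + db)

vals = (urysohn(0.0), urysohn(1.0), urysohn(2.0), urysohn(0.5))
```

The general lemma is exactly what replaces this formula when no metric is available.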

Now we will find a metric on the product space \mathbb{R}^{\mathbb{N}}. Remember that a basic open set of the product topology is a product of open sets of which all but finitely many are the full space \mathbb{R} (due to closure under only finite intersection). Thus our metric must be such that every \epsilon-ball contains a basic open set in which all but finitely many of the indices project to \mathbb{R}. The value of |x - y| for x,y \in \mathbb{R} is unbounded, so we first cap the per-coordinate distance at some finite value, say 1. We also need that for any \epsilon > 0, all but finitely many indices contribute less than \epsilon. For this, we tighten the upper bound on the ith coordinate to 1/i, and instead of summing (what would be a series), we take a \sup; then for all i > N where 1/N < \epsilon, the ith coordinate is unconstrained within the \epsilon-ball, as desired. We let our metric be

D(\mathbf{x}, \mathbf{y}) = \sup\left\{\frac{\min(|x_i-y_i|, 1)}{i} : i \in \mathbb{N}\right\}.

That this satisfies the axioms of a metric is mechanical to verify.
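For sequences that agree beyond finitely many coordinates, the sup in D is attained among the listed indices, so D is easy to compute directly. A sketch (the sample points are arbitrary):

```python
# The metric D on sequences, for inputs that are zero beyond their
# listed entries (so the sup reduces to a max over the listed indices).

def D(x, y):
    n = max(len(x), len(y))
    xs = list(x) + [0.0] * (n - len(x))
    ys = list(y) + [0.0] * (n - len(y))
    return max(min(abs(a - b), 1.0) / (i + 1)
               for i, (a, b) in enumerate(zip(xs, ys)))

p = (0.0, 0.0, 0.0)
q = (0.3, 5.0, 0.1)
r = (0.0, 1.0, 0.0)

dpq = D(p, q)   # max(0.3/1, min(5,1)/2, 0.1/3) = 0.5
tri = D(p, q) <= D(p, r) + D(r, q)   # triangle inequality spot-check
```

Note how the large second-coordinate gap (5.0) is capped at 1 and then damped by the 1/i factor.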

Proposition The metric D induces the product topology on \mathbb{R}^{\mathbb{N}}.

Proof: For \epsilon \leq 1, the \epsilon-ball about a point \mathbf{x} is exactly

(x_1 - \epsilon, x_1 + \epsilon) \times (x_2 - 2\epsilon, x_2 + 2\epsilon) \times \cdots \times (x_n - n\epsilon, x_n + n\epsilon) \times \mathbb{R} \times \cdots \times \mathbb{R} \times \cdots,

where n is the largest integer such that n\epsilon \leq 1: the condition \min(|x_i - y_i|, 1)/i < \epsilon constrains only the coordinates with i\epsilon \leq 1, where it reads |x_i - y_i| < i\epsilon. This is itself a basic open set of the product topology.

Conversely, take any open set; we may assume WLOG that it is a basic open set of the product topology. Then only a finite set of indices I project not to the entire space but to intervals, whose lengths we may assume to be at most 1; I has a maximum, which we call n. Given a point \mathbf{x} in this set, let \epsilon be the minimum over i \leq n of the distance from x_i to the boundary of the ith interval, divided by i; the \epsilon-ball about \mathbf{x} is then contained in the set.     ▢

Now we need to construct a homeomorphism from our second-countable, regular (and thereby normal) space onto a subspace of \mathbb{R}^\mathbb{N}. A homeomorphism is injective by definition. How do we arrange that? Provide a countable collection of continuous functions to \mathbb{R} such that at least one of them takes different values whenever two points differ. Here normality comes in handy. Given any two distinct points, take two non-intersecting closed sets around them and invoke Urysohn’s lemma to construct a continuous function, which is 0 at one and 1 at the other. Since our space is second-countable, countably many such functions suffice: for every pair of basis elements B_n, B_m with \overline{B_n} \subset B_m, we apply Urysohn’s lemma to \overline{B_n} and X \setminus B_m.

Proposition The map constructed above is a homeomorphism onto its image in [0,1]^{\mathbb{N}}.

Proof: Call our function f. Each of its component functions is continuous, so the map into the product is also continuous. It remains to show the other direction: that U open in the domain implies f(U) is open in the image. For that it is enough to take z_0 = f(x_0) for any x_0 \in U and find some open neighborhood of z_0 in the image contained in f(U). U contains some basis element around x_0, and thus there is a component (call it f_n) that sends all of X \setminus U to 0 and x_0 not to 0. This essentially partitions X into 0 versus not 0, with the latter portion lying inside U, which means that \pi_n^{-1}((0, \infty)) \cap f(X) is contained in f(U). The projections on a product space are continuous, so that set is open in the image. This suffices to show that f(U) is open.     ▢

With this, we’ve shown our arbitrary regular, second-countable space to be homeomorphic to a subspace of a space we directly metrized, which means of course that any regular, second-countable space is metrizable, the very statement of the Urysohn metrization theorem.

Path lifting lemma and fundamental group of circle

I’ve been reading some algebraic topology lately. It is horrendously abstract, at least for me at my current stage. Nonetheless, I’ve managed to make a little progress. On that note, I’ll say that the path lifting lemma, a beautiful fundamental result in the field, makes more sense to me now at the formal level; as I perceive it right now, the difficulty lies largely in the formalisms.

Path lifting lemma:    Let p : \tilde{X} \to X be a covering projection and \gamma : [0,1] \to X be a path such that for some x_0 \in X and \tilde{x}_0 \in \tilde{X},

\gamma(0) = x_0 = p(\tilde{x_0}). \ \ \ \ (1)

Then there exists a unique path \tilde{\gamma} : [0,1] \to \tilde{X} such that

p \circ \tilde{\gamma} = \gamma, \qquad \tilde{\gamma}(0) = \tilde{x_0}. \ \ \ \ (2)

How to prove this at a high level? First, we apply the Lebesgue number lemma to the open cover of [0,1] pulled back from a cover of X by evenly covered open sets, partitioning [0,1] into intervals of length 1/n < \eta, with \eta the Lebesgue number; this cuts the path in X into n pieces, each lying in some evenly covered open set. Because every such set is evenly covered, each piece has a uniquely determined continuous lift (by the homeomorphism onto a sheet of the covering plus the boundary condition). Glue the lifts together, via the gluing lemma, to get the lifted path.

Let \mathcal{O} be our cover of X by evenly covered open sets. Let \eta > 0 be a Lebesgue number for \gamma^{-1}(\mathcal{O}), with n such that 1/n < \eta.

Let \gamma_j be \gamma restricted to [\frac{j}{n}, \frac{j+1}{n}]. At j = 0, the piece \gamma_0([0,\frac{1}{n}]) lies in an evenly covered open set, whose preimage under p consists of disjoint sheets, each mapped homeomorphically onto it. We pick the sheet that contains \tilde{x_0} and let q_0 denote the associated inverse homeomorphism into \tilde{X}, so that p \circ (q_0 \circ \gamma_0) = \gamma_0, with \tilde{\gamma_0} = q_0 \circ \gamma_0.

We continue like this for j up to n-1, using the value imposed on the boundary (which we have by induction) to determine the sheet homeomorphism associated with the covering projection that keeps the path continuous, which we call q_j. With this, we have

\tilde{\gamma_j} = q_j \circ \gamma_j.

A continuous path \tilde{\gamma} is obtained by applying the gluing lemma to these. That

p \circ \tilde{\gamma} = \gamma

is satisfied because it is satisfied on sets the union of which is the entire domain, namely \{[\frac{j}{n}, \frac{j+1}{n}] : j = 0,1,\ldots,n-1\}.

A canonical example of path lifting is that of lifting a path on the unit circle to a path on the real line. The map t \mapsto (\cos t, \sin t) projects the real line onto the unit circle, and it is not hard to verify that this is in fact a covering space. By the path lifting lemma, a loop on the circle based at (1,0) lifts to a unique path on the real line starting at 0, which ends at some integer multiple of 2\pi, call it 2\pi n; that lifted path is homotopic to the straight path from 0 to 2\pi n via the linear homotopy. Applying the projection to that homotopy shows that our loop on the circle, which we call f, is homotopic to the loop that winds around the circle n times counterclockwise, which we call \omega_n.
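The lifting in this example can be carried out discretely: sample the loop, take principal angles, and at each step shift by the multiple of 2\pi that keeps the lift continuous. A sketch (the sampling resolution and winding number are arbitrary choices):

```python
import math

# Discrete path lifting for the covering R -> S^1, t -> (cos t, sin t):
# lift a sampled loop to R and read off the winding number.

def lift(points):
    """Lift a sampled circle path to R, starting at the principal angle."""
    lifted = [math.atan2(points[0][1], points[0][0])]
    for (x, y) in points[1:]:
        raw = math.atan2(y, x)
        # Shift raw by the multiple of 2*pi that lands nearest the
        # previous lifted value; valid when steps are smaller than pi.
        k = round((lifted[-1] - raw) / (2 * math.pi))
        lifted.append(raw + 2 * math.pi * k)
    return lifted

n_true, samples = 3, 1000   # a loop winding 3 times counterclockwise
loop = [(math.cos(2 * math.pi * n_true * t / samples),
         math.sin(2 * math.pi * n_true * t / samples))
        for t in range(samples + 1)]

lifted = lift(loop)
winding = round((lifted[-1] - lifted[0]) / (2 * math.pi))
```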

The n for which f is homotopic to \omega_n is unique. If on the other hand \omega_n were homotopic to \omega_m for m \neq n, then we could lift the homotopy to the real line, thereby yielding a contradiction, as there the endpoints would not be the same.

This requires a homotopy lifting lemma. The proof of that is similar to that of path lifting, but it is more complicated, since there is an additional homotopy parameter, by convention within [0,1], alongside the path parameter. Again, we use the Lebesgue number lemma, but this time on the grid [0,1] \times [0,1], and again for each grid square there is a unique way to select the local homeomorphism such that it agrees with its neighboring squares, the shared parameter region here being an edge common to two adjacent squares.

With every loop on the circle homotopy-equivalent to a unique \omega_n, we have that the fundamental group of the circle is \mathbb{Z}, since clearly \omega_m * \omega_n \simeq \omega_{m+n}, where * is the path concatenation operation.

Another characterization of compactness

The canonical definition of compactness of a topological space X is that every open cover has a finite subcover. Via contraposition, this translates to: every family of open sets with no finite subfamily that covers X is not a cover. By de Morgan’s laws, a family of open sets fails to cover X exactly when the complements (which are all closed sets) have nonempty intersection; likewise, no finite subfamily covers X exactly when the complements have the finite intersection property (every finite subfamily of them has nonempty intersection). The product is:

A topological space is compact iff for every family of closed sets with the finite intersection property, the intersection of that family is non-empty.
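The de Morgan bookkeeping behind this translation, that a family of open sets covers X exactly when its family of closed complements has empty intersection, can be brute-force checked on a small finite space (the bound of three sets per family is just to keep the search small):

```python
from itertools import combinations

# Check, over all families of up to 3 subsets of a 4-point set, that
# "the open sets cover X" is equivalent to "the complements have
# empty intersection".

X = frozenset(range(4))
subsets = [frozenset(s) for r in range(len(X) + 1)
           for s in combinations(X, r)]

def covers(family):
    return frozenset().union(*family) == X

ok = True
for r in range(1, 4):
    for fam in combinations(subsets, r):
        complements = [X - U for U in fam]
        empty_total = frozenset.intersection(*complements) == frozenset()
        ok = ok and (covers(fam) == empty_total)
```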

Grassmannian manifold

We all know of real projective space \mathbb{R}P^n. It is in fact a special case of the Grassmannian manifold G_{k,n}(\mathbb{R}), the set of k-dimensional subspaces of \mathbb{R}^n. Such subspaces can be represented as the row spaces of the k \times n matrices of rank k, k \leq n. Multiplying such a matrix on the left by any g \in GL(k, \mathbb{R}) leaves the row space unchanged. Partitioning by row space, we introduce the equivalence relation \sim by \bar{A} \sim A if there exists g \in GL(k, \mathbb{R}) such that \bar{A} = gA. The Grassmannian can thus be identified with the rank-k matrices in M_{k,n}(\mathbb{R}) modulo GL(k, \mathbb{R}).

Now we find the charts of it. A rank-k matrix must have some k \times k minor with nonzero determinant. We can assume without loss of generality (as permuting columns amounts to permuting the coordinates of \mathbb{R}^n) that the minor formed by the first k columns is one such, for the convenience of writing A = (A_1, \tilde{A_1}), where \tilde{A_1} is k \times (n-k). We get

A_1^{-1}A = (I_k, A_1^{-1}\tilde{A_1}).

Thus the degrees of freedom are given by the k \times (n-k) matrix on the right, so the dimension is k(n-k). If that submatrix differs between two matrices brought to this normal form, they cannot represent the same point, as applying any non-identity element of GL(k, \mathbb{R}) would alter the identity matrix on the left.
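The chart construction can be sketched numerically: normalizing a rank-k matrix by the inverse of its leading minor gives a normal form that is invariant under left multiplication by GL(k). A sketch for G_{2,4} over the rationals (the sample matrices are arbitrary):

```python
from fractions import Fraction as F

# Chart sketch for G_{2,4}: normalize a rank-2 2x4 matrix A = (A_1, A_1~)
# to (I_2, A_1^{-1} A_1~), and check the normal form is unchanged when A
# is first multiplied by some g in GL(2, Q).

def inv2(m):
    (a, b), (c, d) = m
    det = a * d - b * c
    assert det != 0
    return [[d / det, -b / det],
            [-c / det, a / det]]

def matmul(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(len(n)))
             for j in range(len(n[0]))] for i in range(len(m))]

A = [[F(1), F(2), F(3), F(4)],
     [F(0), F(1), F(1), F(2)]]
g = [[F(2), F(1)],
     [F(1), F(1)]]          # an element of GL(2, Q), det = 1

def normal_form(M):
    A1 = [row[:2] for row in M]   # the leading 2x2 block
    return matmul(inv2(A1), M)

# Same row space (gA vs A) gives the same chart coordinates.
same = normal_form(A) == normal_form(matmul(g, A))
```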

I’ll leave it to the reader to run this construction in the real projective case, where \mathbb{R}P^n = G_{1,n+1}(\mathbb{R}).