## Cayley-Hamilton theorem and Nakayama’s lemma

The Cayley-Hamilton theorem states that every square matrix over a commutative ring satisfies its own characteristic equation. That is, for an $n \times n$ matrix $A$ over a commutative ring $R$, with $I_n$ the $n \times n$ identity matrix and $p(t) = \det(tI_n - A)$ the characteristic polynomial of $A$, substituting $A$ for $t$ gives $p(A) = 0$.
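As a quick sanity check (an illustration I'm adding, not from the original text), the theorem can be verified numerically with NumPy for a small matrix; `np.poly` returns the coefficients of the characteristic polynomial, highest degree first:

```python
import numpy as np

def charpoly_at(A):
    """Evaluate the characteristic polynomial of A at A itself via Horner's
    scheme; Cayley-Hamilton says the result is the zero matrix."""
    n = A.shape[0]
    coeffs = np.poly(A)  # coefficients of det(tI - A), highest degree first
    result = np.zeros_like(A, dtype=float)
    for c in coeffs:
        result = result @ A + c * np.eye(n)
    return result

A = np.array([[2.0, 1.0], [3.0, 4.0]])  # charpoly: t^2 - 6t + 5
print(np.allclose(charpoly_at(A), 0))   # p(A) = 0 up to rounding
```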

## Jordan normal form

Jordan normal form states that every square matrix over an algebraically closed field (so, in particular, every complex square matrix) is similar to a Jordan normal form matrix, one of the form

$J = \begin{bmatrix}J_1 & \; & \; \\ \; & \ddots & \; \\\; & \; & J_p \\ \end{bmatrix}$

where each of the $J_i$ is square of the form

$J_i = \begin{bmatrix}\lambda_i & 1 & \; & \; \\ \; & \lambda_i \; & \ddots & \; \\ \; & \; & \ddots & 1 \\ \; & \; & \; & \lambda_i \\ \end{bmatrix}$.
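For concreteness, a single Jordan block is easy to build in NumPy (a small sketch of mine, not from the original text; the function name is my own):

```python
import numpy as np

def jordan_block(lam, k):
    """k-by-k Jordan block: lam on the diagonal, 1 on the superdiagonal."""
    return lam * np.eye(k) + np.diag(np.ones(k - 1), k=1)

# Every eigenvalue of the block equals lam, but the block has only one
# independent eigenvector, so (for k > 1) it is not diagonalizable.
print(jordan_block(5.0, 3))
```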

## Math sunday

I had a chill day thinking about math today without any pressure whatsoever. First I figured out, calculating inductively, that the order of $GL_n(\mathbb{F}_p)$ is $(p^n - 1)(p^n - p)(p^n - p^2)\cdots (p^n - p^{n-1})$. One counts the $k$-tuples of linearly independent column vectors: having chosen $k$ independent columns, they span a subspace of $p^k$ vectors, which are exactly the vectors that cannot be appended if linear independence is to be preserved. A Sylow $p$-subgroup of that group is the group of upper triangular matrices with ones on the diagonal, which has the order $p^{n(n-1)/2}$ that we want.
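The counting above is easy to confirm by brute force for small $n$ and $p$ (a quick sketch I'm adding; the helper names are mine):

```python
from math import prod

def order_gl(n, p):
    """|GL_n(F_p)|: each successive column must avoid the span of the
    previous ones, giving the product (p^n - 1)(p^n - p)...(p^n - p^(n-1))."""
    return prod(p**n - p**i for i in range(n))

def vp(m, p):
    """Exponent of p in m (the p-adic valuation)."""
    r = 0
    while m % p == 0:
        m //= p
        r += 1
    return r

# The p-part of |GL_n(F_p)| matches the order p^(n(n-1)/2) of the group of
# upper triangular matrices with ones on the diagonal.
for n in range(1, 6):
    for p in (2, 3, 5):
        assert vp(order_gl(n, p), p) == n * (n - 1) // 2
print("ok")
```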

I also find the proof of the first Sylow theorem, and the inspiration behind it, much easier to understand now. I had always remembered that the Sylow $p$-subgroup we are looking for can be realized as the stabilizer subgroup of some set of $p^k$ elements of the group, where $p^k$ divides the order of the group. By the pigeonhole principle, there can be no more than $p^k$ elements in the stabilizer. The part of the proof that kept boggling my mind was the reverse inequality via orbits. It turns out that it can be viewed in a way that makes its logic feel much more natural than it did before, when, like many a proof not understood, it seemed to spring out of the blue.

Letting $p^r$ be the largest power of $p$ dividing $n = |G|$, we wish to show that the order of some orbit is divisible by $p$ no more than $r-k$ times. For that it suffices to show that the sum of the orders of the orbits, $\binom{n}{p^k}$, is divisible by $p$ exactly $r-k$ times, which is very mechanical to verify. Write $n = p^k m$ and expand $\binom{n}{p^k} = m\displaystyle\prod_{j = 1}^{p^k-1} \frac{p^k m - j}{p^k - j}$, then in each factor divide numerator and denominator by $p$ raised to the number of times $p$ divides $j$ (this is the $p$-part of both, since $j < p^k$). After this, neither the numerator nor the denominator of the product is a multiple of $p$, so the number of times $p$ divides $\binom{n}{p^k}$ is the number of times it divides $m$, namely $r-k$.
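This divisibility claim can be checked mechanically over a range of cases (a sketch of mine, not from the original; `vp` is a hypothetical helper name):

```python
from math import comb

def vp(m, p):
    """Exponent of p in m (the p-adic valuation)."""
    r = 0
    while m % p == 0:
        m //= p
        r += 1
    return r

# v_p(C(n, p^k)) == r - k whenever p^r is the largest power of p dividing n.
for p in (2, 3, 5):
    for r in range(1, 5):
        for m_free in (1, 2, 3, 7):  # the p-free part of n
            if m_free % p == 0:
                continue
            n = p**r * m_free
            for k in range(r + 1):
                assert vp(comb(n, p**k), p) == r - k
print("ok")
```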

Following this, Brian Bi told me about a problem he was stuck on, starred in Artin, which means the author considered it difficult. To my great surprise, I managed to solve it in under half an hour. The problem is:

Let $H$ be a proper subgroup of a finite group $G$. Prove that the conjugate subgroups of $H$ don’t cover $G$.

For this, I remembered the relation $|G| = |N(H)|\,|Cl(H)|$, where $|Cl(H)|$ denotes the number of conjugate subgroups of $H$; this is a special case of the orbit-stabilizer theorem, as conjugation is a group action after all. With this, given that $|N(H)| \geq |H|$ (so there are at most $[G:H]$ conjugates) and that the conjugate subgroups all share the identity, their union has fewer than $|G|$ elements.
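The result can be seen concretely in a small example (my own brute-force sketch, not from the original): in $S_3$, the conjugates of the subgroup generated by a transposition cover only $4$ of the $6$ elements.

```python
from itertools import permutations

def compose(a, b):
    """(a ∘ b)(i) = a[b[i]] for permutations written as tuples."""
    return tuple(a[i] for i in b)

def inverse(a):
    inv = [0] * len(a)
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

G = set(permutations(range(3)))   # S_3, order 6
H = {(0, 1, 2), (1, 0, 2)}        # proper subgroup generated by the swap (0 1)

covered = set()
for g in G:
    covered |= {compose(compose(g, h), inverse(g)) for h in H}

print(len(covered), len(G))  # → 4 6: the conjugates do not cover G
```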

I remember Jonah Sinick’s once saying that finite group theory is one of the most g-loaded parts of math. I’m not sure exactly what his rationale for that is. I’ll say that I have a taste for finite group theory, though I can’t say I’m a freak at it like Aschbacher; I guess I’m not bad at it either. Sure, it requires a form of pattern recognition and abstract visualization that is not so loaded on the prior-knowledge front. Brian Bi keeps telling me how hard finite group theory is relative to the continuous version of group theory, the Lie groups, which I know next to nothing about at present.

Oleg Olegovich, who told me today that he had proved “some generalization of something to semi-simple groups,” but needs a bit more to earn the label of Permanent Head Damage, suggested, upon my asking him what he considers good mathematics, that I look into Arnold’s classic on classical mechanics, which was the first thing to come to mind upon his response of “stuff that is geometric and springs out of classical mechanics.” I found a PDF of it online and browsed through it but did not find it that tasteful, perhaps because I’ve been a bit immersed lately in the number-theoretic and abstract-algebraic side of math that does not intersect with physics, though I had before an inclination towards more physicsy math. I thought of possibly learning PDEs and some physics as a byproduct, but I’m also worried about lack of focus. Maybe eventually I can do that casually without having to try as hard as I have lately for number theory. At the least, I don’t have the right combination of brainpower and interest for that in my current state of mind.

## A recurrence relation

I noticed that

$(x_1 - x_k)\displaystyle\sum_{i_1+\cdots+i_k=n} x_1^{i_1}\cdots x_k^{i_k} = \displaystyle\sum_{i_1+\cdots+i_{k-1}=n+1} x_1^{i_1}\cdots x_{k-1}^{i_{k-1}} - \displaystyle\sum_{i_2+\cdots+i_k=n+1} x_2^{i_2}\cdots x_k^{i_k}.$

In the difference on the RHS, it is apparent that terms containing neither $x_1$ nor $x_k$ cancel. Thus, all the negative terms which are not cancelled out contain an $x_k$, and all such positive terms contain an $x_1$. Combinatorially, all terms of degree $n+1$ containing $x_k$ can be generated by multiplying $x_k$ onto all terms of degree $n$, and the analogous statement holds for the positive terms. The terms in $x_1$ and $x_k$ alone cancel out, with the exception of the $x_1^{n+1} - x_k^{n+1}$ that remains.

This recurrence appears in the calculation of the determinant of the Vandermonde matrix.
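The identity can be checked numerically over exact rationals (a quick sketch I'm adding; `h` is my name for the complete homogeneous sum):

```python
from itertools import product
from fractions import Fraction

def h(vals, n):
    """Sum of all monomials of total degree n in the given values:
    h_n(x_1, ..., x_k) evaluated at vals."""
    k = len(vals)
    total = 0
    for exps in product(range(n + 1), repeat=k):
        if sum(exps) == n:
            term = 1
            for v, e in zip(vals, exps):
                term *= v**e
            total += term
    return total

# Check (x1 - xk) h_n(x1..xk) == h_{n+1}(x1..x_{k-1}) - h_{n+1}(x2..xk).
xs = [Fraction(2), Fraction(3), Fraction(5), Fraction(7)]
for n in range(5):
    lhs = (xs[0] - xs[-1]) * h(xs, n)
    rhs = h(xs[:-1], n + 1) - h(xs[1:], n + 1)
    assert lhs == rhs
print("ok")
```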

I learned that the adjugate is the transpose of the cofactor matrix — the matrix of minors with the appropriate sign, which, as we all know, alternates along rows and columns. A matrix multiplied by its adjugate in fact yields the determinant of that matrix, times the identity of course, to make it a matrix. Note that the diagonal entries of the product are exactly what one gets from the cofactor expansion of the determinant along each row. The other entries — there are $n(n-1)$ of them, where $n$ is the number of rows (and columns) of the (square) matrix — vanish. Each such entry is the sum, down one column, of each entry times the signed minor belonging to a *different* column (fixed throughout the sum), with the row matching the current entry; in other words, it is the cofactor expansion of a matrix in which one column has been replaced by a copy of another, and the determinant of a matrix with two equal columns is zero. One can also see the cancellation directly in the permutation expansion of this sum: each term has a unique sister term (with the sisterhood relation symmetric), obtained by exchanging the roles of the two columns, whose product contains the exact same entries of the matrix. The shift in position of the exchanged element inside the minor is one less than its shift in the full matrix, so the signs of the two permutations are opposite and the terms cancel in pairs. From this we arrive at the conclusion that the entire sum of entry times corresponding cofactor across the other column is zero.

A corollary of this is that $\mathrm{adj}(\mathbf{AB}) = \mathrm{adj}(\mathbf{B})\mathrm{adj}(\mathbf{A})$.
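Both identities are easy to verify with exact integer arithmetic (a self-contained sketch I'm adding; the function names are mine):

```python
from itertools import permutations

def det(M):
    """Determinant via the permutation (Leibniz) expansion."""
    n = len(M)
    total = 0
    for sigma in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n)
                  if sigma[i] > sigma[j])  # parity via inversion count
        term = (-1) ** inv
        for i in range(n):
            term *= M[i][sigma[i]]
        total += term
    return total

def adjugate(M):
    """Transpose of the cofactor matrix: adj[j][i] = (-1)^(i+j) det(minor(i, j))."""
    n = len(M)
    def minor(i, j):
        return [[M[r][c] for c in range(n) if c != j]
                for r in range(n) if r != i]
    return [[(-1) ** (i + j) * det(minor(i, j)) for i in range(n)]
            for j in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2, 0], [3, 1, 4], [1, 0, 2]]
B = [[2, 1, 1], [0, 3, 1], [1, 0, 1]]
d = det(A)
# A adj(A) = det(A) I, and adj(AB) = adj(B) adj(A)
assert matmul(A, adjugate(A)) == [[d * (i == j) for j in range(3)] for i in range(3)]
assert adjugate(matmul(A, B)) == matmul(adjugate(B), adjugate(A))
print("ok")
```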

## More math

Last night, I learned, once more, the definition of absolute continuity. Formally, a function $f : [a, b] \to \mathbb{R}$ is absolutely continuous if for any $\epsilon > 0$ there is a $\delta > 0$ such that for any finite collection of pairwise disjoint intervals $(x_k, y_k)$, $\sum_k |y_k - x_k| < \delta$ implies $\sum_k |f(y_k) - f(x_k)| < \epsilon$. It is stronger than uniform continuity, which is the special case of a single interval. I saw that it implies almost everywhere differentiability and is intimately related to the Radon-Nikodym derivative. A canonical example of a function that is uniformly continuous but not absolutely continuous, to my learning last night afterwards, is the Cantor function, this wacky function still to be understood by myself.
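A small numeric illustration of why the Cantor function fails absolute continuity (my own sketch, not from the original text): at stage $n$ of the middle-thirds construction, the function climbs by $2^{-n}$ on each of $2^n$ intervals whose total length $(2/3)^n$ shrinks to zero, while the total climb stays exactly $1$ — so no $\delta$ can work for $\epsilon < 1$.

```python
def cantor_stage(n):
    """Intervals remaining at stage n of the middle-thirds construction,
    as (left, right) pairs with endpoints scaled by 3^n (so they are ints)."""
    intervals = [(0, 3**n)]
    for _ in range(n):
        next_intervals = []
        for a, b in intervals:
            third = (b - a) // 3
            next_intervals.append((a, a + third))       # keep first third
            next_intervals.append((b - third, b))       # keep last third
        intervals = next_intervals
    return intervals

n = 10
ivs = cantor_stage(n)
total_length = sum(b - a for a, b in ivs) / 3**n  # (2/3)^n, already tiny
total_climb = len(ivs) * 2**-n                    # 1.0 exactly, at every stage
print(total_length, total_climb)
```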

I have no textbook on this or on anything measure theoretic, and though I could learn it from reading online, I thought I might as well buy a hard copy of Rudin that I can scribble over to assist my learning of this core material, as I do with the math textbooks I own. Then, it occurred to me to consult my math PhD student friend Oleg Olegovich on this, which I did through Skype this morning.

He explained absolute continuity very articulately as a statement on bounded variation: you take any set of measure less than $\delta$, and the total variation of the function on that set is no more than $\epsilon$. It is a guarantee of a stronger degree of tightness of the function than uniform continuity, which is itself violated by functions such as $x^2$ on the reals, whose continuity requirement increases indefinitely as one goes to infinity, so that it is not uniformly continuous.

Our conversation then drifted to some lighter topics, lasting in aggregate almost 2 hours. We talked jokingly about IQ and cultures and politics and national and ethnic stereotypes. In the end, he told me that введите сообщение meant “enter a message,” in the imperative, and gave me a helping hand with the plural genitive declension, specifically for “советские коммунистические песни” (“Soviet communist songs”). Earlier this week, he asked me how to go about learning Chinese, for which I gave no good answer. I did, on this occasion, tell him that with all the assistance he’s provided me with my Russian learning, I could do reciprocally for Chinese, and then the two of us would become like Москва-Пекин (“Moscow-Beijing”), the lullaby of which I sang to him for laughs.

Back to math, he gave me the problem of proving that for any finite group $G$, a subgroup $H$ of index $p$, the smallest prime divisor of $|G|$, is normal. The proof is quite tricky. Note that the action of $G$ on $G / H$ by left multiplication induces a homomorphism $\rho : G \to S_p$, the kernel of which we call $N$. The image’s order, being the order of a subgroup of $S_p$, must divide $p!$, and, being the order of a quotient group of $G$, must divide $|G|$. Here is where the smallest prime divisor hypothesis is used: every prime factor of $|G|$ is at least $p$, while every prime factor of $p!/p = (p-1)!$ is less than $p$, so the image’s order must be $1$ or $p$. It can’t be $1$, because the action is nontrivial — not everything in $G$ maps the coset $H$ to itself. Moreover, $N \leq H$, as everything in $N$ must take the coset $H$ to itself, which holds only for elements of $H$. Hence $[G:N] = p = [G:H]$, which together with $N \leq H$ forces $N = H$; and $H$, being a kernel, is normal.
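A brute-force check of the statement on a small group (my own sketch, not from the original): in $S_3$, whose order has smallest prime divisor $2$, every subgroup of index $2$ comes out normal.

```python
from itertools import permutations

def compose(a, b):
    """(a ∘ b)(i) = a[b[i]] for permutations written as tuples."""
    return tuple(a[i] for i in b)

def inverse(a):
    inv = [0] * len(a)
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

def closure(gens):
    """Subgroup generated by gens (includes identity via powers)."""
    elems = set(gens)
    while True:
        new = {compose(a, b) for a in elems for b in elems} - elems
        if not new:
            return frozenset(elems)
        elems |= new

G = set(permutations(range(3)))  # S_3; smallest prime dividing |G| = 6 is 2
subgroups = {closure({g, h}) for g in G for h in G}

# Every subgroup of index 2 (the smallest prime divisor) should be normal,
# i.e. its only conjugate subgroup is itself.
for H in subgroups:
    if len(G) == 2 * len(H):
        conjugates = {frozenset(compose(compose(g, x), inverse(g)) for x in H)
                      for g in G}
        assert conjugates == {H}
print("ok")
```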

Later on, I looked at some random linear algebra problems, such as proving that an invertible matrix $A$ is normal iff $A^*A^{-1}$ is unitary, and that the spectrum of $A^*$ is the complex conjugate of the spectrum of $A$, which can be shown via examination of $A^* - \bar{\lambda} I$. Following that, I stumbled across some text involving minors of matrices, which reminded me of the definition of determinant, the most formal version of which is $\sum_{\sigma \in S_n}\mathrm{sgn}(\sigma)\prod_{i=1}^{n}a_{i,\sigma_{i}}$. In school, though, we learn to compute it via minors, with signs alternating as one goes along. Well, why not relate the two formulas?
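The first claim is easy to spot-check numerically in one direction (a sketch I'm adding, not from the original): build a normal invertible matrix as $A = UDU^*$ with $U$ unitary and $D$ diagonal, and confirm $A^*A^{-1}$ is unitary.

```python
import numpy as np

rng = np.random.default_rng(0)

# A normal, invertible matrix: A = U D U* with U unitary and D diagonal
# whose entries have modulus at least 1 (so A is safely invertible).
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(M)   # unitary factor of a random complex matrix
D = np.diag(np.exp(1j * rng.standard_normal(4)) * (1 + rng.random(4)))
A = U @ D @ U.conj().T   # normal: A A* = A* A

W = A.conj().T @ np.linalg.inv(A)
assert np.allclose(W @ W.conj().T, np.eye(4))  # A* A^{-1} is unitary
print("ok")
```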

In this computation, we are partitioning the permutations according to the element that $1$ (or any specific element of $[n] = \{1, 2, \ldots, n\}$, with a corresponding row in the matrix) maps to. How is the sign determined for each part, and why does it alternate? With $\sigma(1) = i$ fixed, it remains to determine the mapping for the remainder, $2$ through $n$. There are $(n-1)!$ such maps, from $\{2, 3, \ldots, n\}$ to $[n] \setminus \{i\}$. If we treat the codomain elements $1$ through $i-1$ as shifted up by one, so as to make each such map a permutation of $\{2, 3, \ldots, n\}$, then each entry in the expansion of the determinant of the minor carries as its sign the parity of a decomposition into transpositions of consecutive elements (which generate the symmetric group). To undo the relabeling, we’d need to shift $\{1, 2, \ldots, i-1\}$ back, whose presentation in generator decomposition is $(i{-}1\ i)\cdots(2\ 3)(1\ 2)$ — a product of $i-1$ consecutive transpositions, with sign $(-1)^{i-1}$, where $i$ is the column we’re at — thereby explaining why the signs alternate, starting with positive.
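The equality of the two formulas discussed above can be verified directly (a self-contained sketch I'm adding; both function names are mine):

```python
from itertools import permutations

def det_leibniz(M):
    """Determinant as the signed sum over all permutations."""
    n = len(M)
    total = 0
    for sigma in permutations(range(n)):
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if sigma[i] > sigma[j])  # parity of sigma
        term = (-1) ** inversions
        for i in range(n):
            term *= M[i][sigma[i]]
        total += term
    return total

def det_cofactor(M):
    """Cofactor (minor) expansion along the first row, signs alternating."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det_cofactor(minor)
    return total

M = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
assert det_leibniz(M) == det_cofactor(M) == -3
print("ok")
```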