Principal values (of integrals)

I’ve been looking through my copy of Gamelin’s Complex Analysis quite a bit lately. I’ve solved some exercises, which I’ve written up in private. I was just going over the section on principal values, which had a very neat calculation. I’ll give a sketch of that one here.

Take an integral \int_a^b f(x)dx whose integrand has a singularity at some x_0 \in (a,b), such as \int_{-1}^1 \frac{1}{x}dx. Its principal value is defined as

PV \int_a^b f(x)dx = \lim_{\epsilon \to 0}\left(\int_a^{x_0 - \epsilon} + \int_{x_0 + \epsilon}^b\right)f(x)dx.
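To see the definition in action, here is a minimal pure-Python sketch of my own (the helper pv_sym is hypothetical): for the odd integrand 1/x on [-1,1], the two truncated pieces cancel for every \epsilon, so the principal value is 0 even though neither one-sided integral converges.

```python
# Midpoint-rule approximation of the two epsilon-truncated pieces around x0.
def pv_sym(f, a, b, x0, eps, n=100_000):
    def riemann(lo, hi):
        h = (hi - lo) / n
        return sum(f(lo + (k + 0.5) * h) for k in range(n)) * h
    return riemann(a, x0 - eps) + riemann(x0 + eps, b)

for eps in (1e-1, 1e-3, 1e-5):
    # ~0 every time: the divergences on the two sides cancel symmetrically
    print(eps, pv_sym(lambda x: 1.0 / x, -1.0, 1.0, 0.0, eps))
```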

The example the book presented was

PV\int_{-\infty}^{\infty} \frac{dx}{x^3 - 1} = -\frac{\pi}{\sqrt{3}}.

Its calculation invokes both the residue theorem and the fractional residue theorem. Viewed as a function of a complex variable, the integrand 1/(z^3 - 1) has a singularity in the upper half plane at e^{2\pi i/3}, with residue \frac{1}{3z^2}|_{z = e^{2\pi i/3}} = \frac{e^{2\pi i/3}}{3}, which one can arrive at with the so-called Rule 4 in the book, or, more from first principles, l’Hopital’s rule. That would be the only residue to account for if we integrated over an arbitrarily large half-disk in the upper half plane; however, the pole at 1 sits on the real axis, so we must indent the contour there. The integral along the large arc vanishes as the radius grows, since the integrand decays like 1/R^3 while the arc has length \pi R. The integral along the infinitesimal arc spawned by the indentation can be calculated with the fractional residue theorem, with angle -\pi, the minus accounting for the clockwise direction. This time the residue is at 1, with \frac{1}{3z^2}|_{z = 1} = \frac{1}{3}, so in the limit \epsilon \to 0 that arc contributes -\frac{\pi i}{3}. Taking 2\pi i times the first residue and subtracting the indentation’s contribution, which is extra with respect to the principal value we wish to calculate, yields

PV\int_{-\infty}^{\infty} \frac{dx}{x^3 - 1} = 2\pi i \cdot \frac{e^{2\pi i/3}}{3} + \frac{\pi i}{3} = -\frac{\pi}{\sqrt{3}}.

Let’s generalize. Complex analysis provides the machinery to compute real integrals that resist elementary real-variable techniques. The canonical setup is to close the contour with an arc, show that the integral along the arc goes to naught as the arc becomes arbitrarily large, and equate the integral with 2\pi i times the sum of the residues inside. We’ve done that here, with one extra wrinkle: if the integrand explodes somewhere on the domain of integration, we make a dent there, and subtract out the integral along the dent’s small arc, which the fractional residue theorem evaluates in the limit.
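For a numerical sanity check of the example, here is a minimal sketch of my own, assuming scipy is available (the split points \pm 10 are an arbitrary choice). It factors out the pole as 1/(x^3-1) = g(x)/(x-1) with g(x) = 1/(x^2+x+1), and feeds g to quad’s 'cauchy' weight, which computes Cauchy principal values of \int g(x)/(x - c)\,dx directly:

```python
import numpy as np
from scipy.integrate import quad

g = lambda x: 1.0 / (x**2 + x + 1.0)          # 1/(x^3 - 1) = g(x)/(x - 1)

core, _ = quad(g, -10, 10, weight='cauchy', wvar=1.0)   # PV across the pole at 1
tail_l, _ = quad(lambda x: 1.0 / (x**3 - 1.0), -np.inf, -10)
tail_r, _ = quad(lambda x: 1.0 / (x**3 - 1.0), 10, np.inf)

print(core + tail_l + tail_r)     # ≈ -1.8138
print(-np.pi / np.sqrt(3))        # ≈ -1.8138
```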

A result of Cantorian pathologies

We are asked to find a function f on [0,1] with f(0) = f(1) = 0 that has positive derivative almost everywhere on that domain. It occurred to me to use the Cantor set, which is obtained by repeatedly partitioning each remaining interval into thirds and removing the interior of the middle one: first (1/3, 2/3), then (1/3^2, 2/3^2) and (7/3^2, 8/3^2), and so on. Each time we remove one-third of what remains, and summing the geometric series yields a measure of 1 for all that is removed, so the Cantor set itself has measure zero.

Another Cantorian construct, built from the Cantor set, is the Cantor function, or Cantor staircase, called such as it resembles a staircase. It turns out to be exactly what we need. It is a continuous, non-decreasing function c with c(0) = 0 and c(1) = 1 whose derivative is zero almost everywhere; all of its increase takes place on the Cantor set. It is that concentration of increase on a set of measure zero that facilitates the going from 0 to 1 along an interval with zero derivative almost everywhere.

A transformation of c by something with positive derivative is a step toward what we want: take h(x) = x - c(x), which has derivative 1 almost everywhere and satisfies h(0) = h(1) = 0. However, we need to keep its range inside [0,1], which fails on part of [0,1/2]; for instance h(1/3) = 1/3 - 1/2 < 0. We are done, though, if we prove h non-negative on [1/2, 1], because then we can stretch that half horizontally by a factor of two, obtaining a function on [0,1] with derivative 1/2 almost everywhere.

Between 1/2 and 2/3 non-negativity is immediate, since c is constant at 1/2 there. Past 2/3, compare the ternary digits of x with the binary digits of c(x). The leading ternary 2 downshifts to a binary 1 in the first place, a decrease of 2/3 - 1/2 = 1/6. Digits gain value in two ways: a 2 in a place n \geq 2 becomes a binary 1 there, an increase of 1/2^n - 2/3^n; or the first ternary 1, which can occur only once, with all following digits replaced by zeros, becomes a binary 1, an increase of at most 1/2^n - 1/3^n, where n is its place. In the former case, the largest possible total increase is (1/2^2 + 1/2^3 + \cdots) - (2/3^2 + 2/3^3 + \cdots) = 1/2 - 1/3 = 1/6. In the latter case, the extra 1/3^n gained at place n relative to a 2 there is outweighed by the loss of all possible increases at later places, which sum to 1/2^n - 1/3^n \geq 1/3^n. Thus the maximum total increase never exceeds the guaranteed decrease of 1/6 at the first place, which completes our proof of non-negativity for x \geq 1/2.
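To make the construction concrete, here is a small Python sketch of my own that evaluates the Cantor function digit by digit and spot-checks the sign of h(x) = x - c(x) on both halves (floating point, so values are approximate):

```python
def cantor(x, depth=50):
    """Approximate the Cantor function on [0,1] via ternary digits."""
    if x >= 1.0:
        return 1.0
    value, factor = 0.0, 0.5
    for _ in range(depth):
        x *= 3.0
        digit = int(x)                   # next ternary digit of x
        x -= digit
        if digit == 1:                   # inside a removed middle third: c is flat
            return value + factor
        value += factor * (digit // 2)   # ternary 2 -> binary 1
        factor *= 0.5
    return value

h = [(k / 1000, k / 1000 - cantor(k / 1000)) for k in range(1001)]
print(min(v for t, v in h if t >= 0.5))   # ~0: never negative on [1/2, 1]
print(min(v for t, v in h if t < 0.5))    # clearly negative, e.g. near x = 1/3
```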

Galois group of x^10+x^5+1

This was a problem from an old qualifying exam, which I solved today, with a few pointers. First of all, is it reducible? It actually is. Note that x^{15} - 1 = (x^5-1)(x^{10}+x^5+1) = (x^3-1)(x^{12} + x^9 + x^6 + x^3 + 1). Since 1 + x + x^2 is a prime element of \mathbb{Q}[x] that divides x^3 - 1 but not x^5-1, it must divide the polynomial whose Galois group we are looking for. The complementary factor is the 15th cyclotomic polynomial \Phi_{15}, whose Galois group is the multiplicative group (\mathbb{Z}/15\mathbb{Z})^\times, which has \varphi(15) = 8 elements. Seeing that it has 3 elements of order 2 and 4 elements of order 4 and is abelian, it must be C_2 \times C_4. As for the factor 1 + x + x^2, its roots are the primitive cube roots of unity, which already lie in \mathbb{Q}(\zeta_{15}) since \zeta_3 = \zeta_{15}^5. The splitting field is therefore just \mathbb{Q}(\zeta_{15}), and the answer is C_2 \times C_4.
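Both ingredients are easy to spot-check, assuming sympy is available: the factorization into 1 + x + x^2 times \Phi_{15}, and the order profile of (\mathbb{Z}/15\mathbb{Z})^\times:

```python
from math import gcd
from sympy import symbols, factor

x = symbols('x')
print(factor(x**10 + x**5 + 1))
# (x**2 + x + 1)*(x**8 - x**7 + x**5 - x**4 + x**3 - x + 1)

def order(a, m=15):
    """Multiplicative order of a modulo m."""
    k, p = 1, a % m
    while p != 1:
        p, k = (p * a) % m, k + 1
    return k

units = [a for a in range(1, 15) if gcd(a, 15) == 1]
print({a: order(a) for a in units})
# {1: 1, 2: 4, 4: 2, 7: 4, 8: 4, 11: 2, 13: 4, 14: 2}:
# three elements of order 2, four of order 4, so the group is C_2 x C_4
```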

On the adjugate

I learned that the adjugate is the transpose of the cofactor matrix: the matrix of minors with the appropriate signs attached, which, as we all know, alternate along rows and columns. A matrix multiplied by its adjugate, in fact, yields the determinant of that matrix, times the identity of course, to make it a matrix equation. Note that the diagonal entries of \mathbf{A}\,\mathrm{adj}(\mathbf{A}) are exactly what one gets from applying the minors algorithm for calculating the determinant along each row. The off-diagonal entries vanish; there are n(n-1) of them, where n is the number of rows (and columns) of the (square) matrix. The (i,j) entry for i \neq j is \sum_k a_{ik}C_{jk}: the entries of row i paired with the cofactors belonging to row j. But that sum is precisely the cofactor expansion, along row j, of the matrix obtained from \mathbf{A} by overwriting row j with a copy of row i, and that matrix has two equal rows, so its determinant is zero. One can also see the cancellation directly in the permutation expansion: each term has a unique sister term, with the sisterhood relation symmetric, built from the exact same entries of the matrix but coming from a permutation that differs by a transposition, so the two terms carry opposite signs. From this, we arrive at the conclusion that the entire sum of entry times non-matching cofactor across a row is zero.

A corollary of this is that \mathrm{adj}(\mathbf{AB}) = \mathrm{adj}(\mathbf{B})\mathrm{adj}(\mathbf{A}): for invertible matrices it follows from \mathrm{adj}(\mathbf{M}) = \det(\mathbf{M})\mathbf{M}^{-1}, and the general case follows by continuity.
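Both identities are easy to verify numerically. A minimal numpy sketch of my own, with the adjugate built straight from the cofactor definition above:

```python
import numpy as np

def adjugate(A):
    """Transpose of the cofactor matrix, straight from the definition."""
    n = A.shape[0]
    C = np.empty_like(A)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

rng = np.random.default_rng(0)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
print(np.allclose(A @ adjugate(A), np.linalg.det(A) * np.eye(4)))  # True
print(np.allclose(adjugate(A @ B), adjugate(B) @ adjugate(A)))     # True
```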

More math

Last night, I learned, once more, the definition of absolute continuity. Formally, a function f : [a,b] \to \mathbb{R} is absolutely continuous if for any \epsilon > 0 there is a \delta > 0 such that for any finite collection of pairwise disjoint subintervals (x_k, y_k) with \sum |y_k - x_k| < \delta, we have \sum |f(y_k) - f(x_k)| < \epsilon. It is stronger than uniform continuity, which one recovers as the special case of a single interval. I saw that it implies almost everywhere differentiability and is intimately related to the Radon-Nikodym derivative. A canonical example of a function that is uniformly continuous but not absolutely continuous, to my learning later that night, is the Cantor function, this wacky function I have yet to fully understand.

I have no textbook on this or on anything measure theoretic, and though I could learn it from reading online, I thought I might as well buy a hard copy of Rudin that I can scribble over to assist my learning of this core material, as I do with the math textbooks I own. Then, it occurred to me to consult my math PhD student friend Oleg Olegovich on this, which I did through Skype this morning.

He explained absolute continuity very articulately as a statement about variation: take any finite union of intervals of total length less than \delta, and the total variation of the function over it is no more than \epsilon. It is a guarantee of a stronger degree of tightness of the function than uniform continuity, which is itself violated by functions such as x^2 on the reals, where the \delta needed for a given \epsilon shrinks indefinitely as one goes out to infinity, so x^2 is not uniformly continuous.
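The Cantor function shows how this fails quantitatively: at stage k the Cantor set is covered by 2^k intervals of total length (2/3)^k \to 0, yet the function rises by the full height 1 across them, so no \delta can ever work for \epsilon < 1. A small self-contained Python sketch of my own, in exact rational arithmetic:

```python
from fractions import Fraction

def cantor(x, depth=60):
    """Cantor function via ternary digits; exact on the endpoints used below."""
    x = Fraction(x)
    if x >= 1:
        return Fraction(1)
    value, factor = Fraction(0), Fraction(1, 2)
    for _ in range(depth):
        x *= 3
        digit = int(x)
        x -= digit
        if digit == 1:                   # removed middle third: c is flat there
            return value + factor
        value += factor * (digit // 2)
        factor /= 2
    return value

def stage_intervals(k):
    """The 2^k closed intervals covering the Cantor set at stage k."""
    ivs = [(Fraction(0), Fraction(1))]
    for _ in range(k):
        ivs = [piece for a, b in ivs
               for piece in ((a, a + (b - a) / 3), (b - (b - a) / 3, b))]
    return ivs

for k in (2, 5, 10):
    ivs = stage_intervals(k)
    length = sum(b - a for a, b in ivs)            # (2/3)^k, heading to 0
    rise = sum(cantor(b) - cantor(a) for a, b in ivs)
    print(k, float(length), float(rise))           # the rise is exactly 1 every time
```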

Our conversation then drifted to some lighter topics, lasting almost two hours in aggregate. We talked jokingly about IQ and cultures and politics and national and ethnic stereotypes. In the end, he told me that введите сообщение meant “input message”, in the imperative, and gave me a helping hand with the genitive plural declension, specifically for “советские коммунистические песни” (Soviet communist songs). Earlier this week, he had asked me how to go about learning Chinese, for which I gave no good answer. I did, on this occasion, tell him that with all the assistance he’s provided me with my Russian learning, I could do reciprocally for Chinese, and then the two of us would become like Москва-Пекин (Moscow-Beijing), the lullaby of which I sang to him for laughs.

Back to math, he gave me the problem of proving that for any finite group G, a subgroup H of index p, the smallest prime divisor of |G|, is normal. The proof is quite tricky. Note that the action of G on G/H by left multiplication induces a homomorphism \rho : G \to S_p, the kernel of which we call N. The order of the image, as a subgroup of S_p, must divide p!, and, being isomorphic to the quotient group G/N, must also divide |G|. Here is where the smallest prime divisor hypothesis is used: every prime divisor of |G| is at least p, every prime divisor of p! is at most p, and p divides p! only once, so any common divisor of |G| and p! is either 1 or p, and the image’s order is 1 or p. It can’t be 1, because any g \notin H moves the coset H, so the action is nontrivial. Moreover N \leq H, as everything in N must take the coset H to itself, i.e. gH = H, which holds only for elements of H. Hence p = [G:H] \leq [G:N] = |\mathrm{im}\,\rho| \leq p, forcing N = H. The desired result follows because kernels of homomorphisms are normal.
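A finite sanity check, assuming sympy’s combinatorics module: in S_3 (order 6, smallest prime 2), the index-2 subgroup A_3 is normal, while an index-3 subgroup generated by a transposition is not, so the smallest-prime hypothesis really matters:

```python
from sympy.combinatorics import Permutation, PermutationGroup

S3 = PermutationGroup(Permutation([1, 2, 0]), Permutation([1, 0, 2]))
A3 = PermutationGroup(Permutation([1, 2, 0]))    # index 2: the smallest prime
H  = PermutationGroup(Permutation([1, 0, 2]))    # <(0 1)>, index 3

print(S3.order() // A3.order(), A3.is_normal(S3))  # 2 True
print(S3.order() // H.order(),  H.is_normal(S3))   # 3 False
```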

Later on, I looked at some random linear algebra problems, such as proving that an invertible matrix A is normal iff A^*A^{-1} is unitary, and that the spectrum of A^* is the complex conjugate of the spectrum of A, which can be shown by noting that A^* - \bar{\lambda} I = (A - \lambda I)^*. Following that, I stumbled across some text involving minors of matrices, which reminded me of the definition of the determinant, the most formal version of which is \sum_{\sigma \in S_n}\mathrm{sgn}(\sigma)\prod_{i=1}^{n}a_{i,\sigma(i)}. In school, though, we learn to compute it via minors with signs alternating as one goes along. Well, why not relate the two formulas.
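The first claim is pleasant to check numerically. A minimal numpy sketch of my own, building a normal matrix by unitary conjugation of a diagonal one and contrasting it with a Jordan block:

```python
import numpy as np

def is_unitary(U, tol=1e-10):
    return np.allclose(U.conj().T @ U, np.eye(U.shape[0]), atol=tol)

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
A_normal = Q @ np.diag([1 + 2j, 3 - 1j, 2 + 0j]) @ Q.conj().T  # normal by construction
A_bad = np.array([[1, 1], [0, 1]], dtype=complex)              # Jordan block: not normal

for A in (A_normal, A_bad):
    U = A.conj().T @ np.linalg.inv(A)     # the matrix A* A^{-1}
    print(is_unitary(U))                  # True, then False
```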

In this expansion, we partition the permutations according to the image of 1 (or of any fixed element of [n] = \{1, 2, \ldots, n\}, with a corresponding row in the matrix). How is the sign determined for each term, and why does it alternate? With \sigma(1) = j fixed, it remains to determine the mapping of the remainder, 2 through n; there are (n-1)! of them, from \{2, 3, \ldots, n\} onto [n] \setminus \{j\}. After relabeling the columns, these are exactly the permutations appearing in the minor’s own determinant, each carrying its own sign. The relabeling is the cycle (1\ 2\ \cdots\ j), which decomposes into the j-1 adjacent transpositions (j-1\ j)\cdots(2\ 3)(1\ 2) (transpositions of consecutive elements generate the symmetric group), contributing a sign of (-1)^{j-1}. That factor depends only on the column, alternating as j increases and starting positive, thereby explaining the alternating-sign rule from school.
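Here is a pure-Python cross-check of my own that the permutation sum and the first-row cofactor expansion agree, with the sign bookkeeping made explicit:

```python
from itertools import permutations

def det_perm(A):
    """Determinant via the full permutation expansion."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if sigma[i] > sigma[j]:
                    sign = -sign            # each inversion flips the sign
        prod = 1
        for i in range(n):
            prod *= A[i][sigma[i]]
        total += sign * prod
    return total

def det_cofactor(A):
    """First-row cofactor expansion; (-1)**j because rows are 0-indexed here."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] *
               det_cofactor([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(n))

A = [[2, -1, 3], [0, 4, 1], [5, 2, -2]]
print(det_perm(A), det_cofactor(A))   # -85 -85
```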

intelligence, math, chess

I met up with Kolya, Austin, and Ethan today. We ate out at this ramen place, where we chatted mostly about light math, mathematicians, and intelligence-related topics. I remember telling Austin about how the brain doesn’t peak in many people until their mid-to-late 20s, with rapid growth spurts often occurring in one’s early and mid twenties. This is consistent with the precipitous dip in car accident rates at age 25-26, the age when the prefrontal cortex matures, according to various online sources I’ve seen. So if you are struggling with things and not past that age, you still have plenty of hope! As for math, I asked Austin, who is entering a math PhD program next year, about the proof of Rolle’s theorem, which an old friend had asked me about a week earlier. It goes as follows. The hypotheses are continuity on [a,b] \subset \mathbb{R}, differentiability on (a,b), and f(a) = f(b). By the extreme value theorem, f attains a maximum and a minimum. If both occur at the endpoints, the function is constant and its derivative vanishes everywhere in between. Otherwise an extremum occurs at an interior point, where (for a maximum) the left and right difference quotients are greater than or equal to zero and less than or equal to zero respectively; differentiability forces the two one-sided limits to agree, which necessitates a derivative of zero at that point.

Afterwards, we played some piano with some singing alongside, and following that, I played a game of chess with Ethan after he asked if I had a chessboard, which I did. Chess has basically not crossed my mind for almost 10 years, and I have probably not played a single game in the last 3 years or so. I would sometimes look at the Live 2700 list to see changes in the rankings, but there was no actual chess content in my head whatsoever. I still have nostalgia for when I played in chess tournaments in 6th grade. I remember at the end, I had a state rating of only a little above 1200, having placed in the top 30 in the state tournament that year. Needless to say, the level of chess going on between those little kids, of which I was one, was quite low. I stopped in junior high as there was no chess club there. Nonetheless, I always had a mix of fascination and awe with the game. At that time, I was, to put it bluntly, quite clueless about it; I simply lacked the requisite intellectual maturity.

I obviously lost to Ethan, who is rated 2200-something, but to my great pleasure, I didn’t lose in a pathetic way. I was calculating and rational in every move, to the extent that my level of knowledge and experience permitted. At the end, I lost on a reckless sacrifice in which I forwent a minor piece for two of the pawns covering his kingside castle, hoping to launch an attack. I had not calculated far enough, the attack was not successful, and seeing that all hope was lost, I resigned. The biggest contrast between this time and when I played long ago was that I had much better positional sense, which I suspect derives from a substantially higher level of qualitative reasoning, the aspect of intelligence captured by the verbal side of IQ tests, relative to before. I believe this because position is all about how the different pieces relate to one another, about seeing the pieces with a high level of coherence. In every move, I took positional considerations into account. Unlike before, when I would make moves recklessly, without thorough calculation, I thought carefully about what exactly I gained from making each move, as well as about how the opponent could respond. It is just like how in writing, every word you use must be there for a good reason, and how in social interaction, one needs the cognitive empathy to predict how the other party is likely to respond. I will not go much into the details of the game, which I am not confident I could easily reconstruct. I do remember it began with the Caro-Kann, the name of which I still remembered well, and that in the early middlegame, it felt like there was little that could be done that made sense. At one point I said, “I feel like I’m in zugzwang right now.” Ethan responded with a why, along with a remark that the game appeared more positional than tactical.

Back in the car, Ethan and Austin played some blindfold chess (or at least it seemed like that), which I don’t think I could do. Austin thought that blindfold chess required visuospatial ability, but I told him that chess, viewed properly, is not visuospatial at all. The state of a chess game is in essence entirely qualitative, representable as an array of squares, each of which is empty or holds some piece, along with flags for castling and en passant rights. The board is nothing but visual distraction. This is akin to how blindness did not interfere much with genius mathematicians like Euler and Pontryagin; the math exists independently of its visual representation in text.

While they were playing, I brought up Mikhail Botvinnik, whom I had been reading about on Wikipedia in both English and Russian (okay, I still know very little of the latter, but enough to get *something* out of glancing through texts). He was a Soviet Jew who was one of the top chess players during the Stalinist era and a world champion. He characterized himself as “Jewish by blood, Russian by culture, and Soviet by upbringing.” After a victory at a great tournament in Nottingham, he sent an effusive telegram to Stalin. He also became a committed communist, whatever that means. Speaking of which, could it be a coincidence that both Kasparov and Fischer became political radicals notorious for opposing their home countries in an obnoxious manner, especially Fischer, who behaved as if he had developed some form of schizophrenia? Will Magnus Carlsen become the same? (I think not.) Anyhow, chess is a crazy world, with the people at the top most definitely not normal, and the politics, which I have observed only superficially and am not qualified to discuss intelligently, can be as intense as the politics surrounding the Olympics, which can, as we all know, also get quite out of hand.

I’ll conclude by saying that if I were to take up chess again, I could probably do much better than before with my much bigger brain, though of course, I have matters of higher priority. In any case, I’ll probably keep a casual interest in chess, and perhaps read more about the lives of and the culture amongst the top players of the world, as well as study the game itself.

Second isomorphism theorem

This is copied from a Facebook chat message I had with someone a few weeks ago, with wordpress latex applied over the math:
A couple weeks ago, I learned the statement of the second isomorphism theorem, which states that given a subgroup S and normal subgroup N of G, SN is a subgroup of G and S \cap N is a normal subgroup of S, with SN / N isomorphic to S / (S \cap N).
Any element of SN / N can be represented as anN = aN for some a \in S and n \in N. Similarly, every element of S / (S \cap N) can be represented as a(S \cap N) for some a \in S. Define \phi: SN/N \to S / (S \cap N) by \phi(aN) = a(S \cap N). This is well defined and injective: for a, b \in S, aN = bN exactly when b^{-1}a \in S \cap N, which is exactly when a(S \cap N) = b(S \cap N); surjectivity is clear from the representations above. Finally, \phi(abN) = ab(S \cap N) = a(S \cap N)b(S \cap N) = \phi(aN)\phi(bN), the middle equality by normality of S \cap N in S. Thus, \phi is an isomorphism. QED.
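As a finite sanity check, here is a small self-contained Python sketch of my own in G = S_3, with N = A_3 normal and S generated by a transposition; it confirms |SN/N| = |S/(S \cap N)|:

```python
from itertools import permutations

def compose(p, q):                    # (p∘q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def sgn(p):                           # sign via inversion count
    return (-1) ** sum(p[i] > p[j]
                       for i in range(len(p)) for j in range(i + 1, len(p)))

G = set(permutations(range(3)))                  # S_3 as tuples acting on (0,1,2)
N = {p for p in G if sgn(p) == 1}                # A_3, normal of index 2
S = {(0, 1, 2), (1, 0, 2)}                       # subgroup <(0 1)>

SN = {compose(s, n) for s in S for n in N}       # a subgroup, since N is normal
print(len(SN) // len(N), len(S) // len(S & N))   # 2 2: the quotient sizes match
```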

Proof of fundamental theorem of arithmetic

I read it a couple of days ago and actually remembered it this time, in a way I will never forget. It invokes Euclid’s lemma, which states that if p | ab for p prime, then p | a or p | b, and which can be proved using Bezout’s lemma. For existence, induct on the number being factored, with 1, the empty product, as the trivial base case; a prime is its own factorization, and a composite number splits as a product of two smaller numbers, to each of which the inductive hypothesis applies. For uniqueness, assume two factorizations of the same number: p_1p_2\ldots p_m = q_1q_2\ldots q_n. By Euclid’s lemma, p_1 divides, and being prime must equal, one of the q_is. Keep invoking Euclid’s lemma, canceling a matching prime factor on each iteration; the two sides must reach 1 = 1 together, so the factorizations agree up to order.
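The existence half is exactly what naive trial division performs: peel off the smallest factor, which is necessarily prime, and recurse on the quotient. A minimal Python sketch of my own:

```python
def factorize(n):
    """Prime factorization by trial division, mirroring the existence induction."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:       # d must be prime: all smaller factors are gone
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                   # whatever remains is itself a prime factor
        factors.append(n)
    return factors

print(factorize(360))           # [2, 2, 2, 3, 3, 5]
```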