## Bubble sort using IORef

I started learning Haskell around the summer of 2015, and to be honest, I found it very difficult. As a representative of functional programming, it requires a very different mode of reasoning, one that I had not really exposed myself to before. I did use some very light features of Haskell at a real job for testing purposes, and by “very light,” I mean that `IORef` is obviously disqualified. I haven’t touched the language for over a year, and I am quite rusty. Still, since I had diligently worked through example code for various Haskell constructs and patterns, to the point where I could follow what was going on without much difficulty, I expect I can retrain up to the level I had previously been at without too much trouble.

I believe I first learned of functional programming a few years before I started with Haskell, memorably with the introduction of the two-line quicksort. See, at that time I basically didn’t have a clue how programming really worked, and so I actually might have believed that it was the real quicksort. Maybe I wasn’t even fully aware of quicksort’s important in-place property, which is what renders it superior to mergesort, which is also $O(n \log n)$. It’s pretty non-trivial to implement the actual quicksort in Haskell, because in Haskell, as a functional (no mutable state) programming language, mutable arrays are somewhat difficult to work with. Of course, the absence of mutable state associated with functional programming is an ideal that is too good to be true. What happens, roughly, is that mutation of memory, which is necessary for any program to run, is done under the hood; it is the higher-level language that protects against dangerous mutation of memory. Though I read the code for it long ago, I still cannot say that I ever actually understood how the real quicksort is implemented in Haskell. Even now, given how rusty I am, I feel like it might be a bit too much to put on my plate. So instead, I’ll go over bubble sort, with the hope that quicksort will be elaborated on this blog not long after. The resource I am basing this off of is another blog post.

We start by introducing `IORef`, which gives us a mutable reference to a value. Working with it is not pure; every operation on it involves the `IO` monad.

The functions involved for manipulating `IORef` are:

```
data IORef a

newIORef    :: a -> IO (IORef a)
readIORef   :: IORef a -> IO a
writeIORef  :: IORef a -> a -> IO ()
modifyIORef :: IORef a -> (a -> a) -> IO ()
```
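As a quick illustration of the API, here is a minimal sketch (the specific values are mine, purely for demonstration) exercising all four functions:

```
import Data.IORef

main :: IO ()
main = do
  ref <- newIORef (41 :: Int)  -- allocate a mutable cell holding 41
  modifyIORef ref (+ 1)        -- apply a function to the contents
  x <- readIORef ref
  print x                      -- prints 42
  writeIORef ref 0             -- overwrite the contents entirely
  y <- readIORef ref
  print y                      -- prints 0
```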

I will assume the reader is aware of how bubble sort works. Now let’s brainstorm how we would implement it using `IORef`. First of all, what is its type? It should be

```
bubbleSort :: [Int] -> IO [Int]
```

From this, we can easily guess that we ought to transform the input `[Int]` into a `[IORef Int]`, a list of mutable references. A difficulty here is that the constructor for `IORef`, `newIORef`, wraps the `IORef` inside `IO`. Applying `map` would thus give us a list of `IO` actions, the results of which we wish to collect. For this, we use `mapM`.

So our code will have something like

```
xs <- mapM newIORef input
```

Now, we want to perform, $n-1$ times, where $n$ is the length of the input (the number of passes needed to guarantee that bubble sort completes), a pass that swaps adjacent elements when they are out of order. How do we characterize this pass, in code? It consists of a possible swap at each index. Each index maps to an operation, and these operations can be collected in a sequential fashion. In code, this would be along the lines of

```
\j -> do
  let ix = xs !! j
  let iy = xs !! (j + 1)

  x <- readIORef ix
  y <- readIORef iy
  when (x > y) $ do
    writeIORef ix y
    writeIORef iy x
```

For this we can use `mapM` again, but stylistically we can do better. The output of each operation is `IO ()`, and the combined result, `IO [()]`, contains no information, so it can be discarded. There is a variant of `mapM` for exactly that, `mapM_`. And since an anonymous function (which in this case maps an index to a do block) is somewhat bulky, we’d like to have it as the latter argument; `forM_` is `mapM_` with the arguments swapped.
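To make the distinction concrete, here is a small sketch using `forM_` with an `IORef` accumulator (the summing example is mine, not part of the sorting code):

```
import Control.Monad (forM_)
import Data.IORef

main :: IO ()
main = do
  total <- newIORef (0 :: Int)
  -- forM_ runs the IO action for each element, discarding the [()]
  -- that mapM would have collected
  forM_ [1, 2, 3, 4] $ \i ->
    modifyIORef total (+ i)
  readIORef total >>= print  -- prints 10
```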

Finally, we pull the values back out of the `IORef`s with

`mapM readIORef xs`

Putting all this together gives us

```
import Control.Monad (forM_, when)
import Data.IORef

bubbleSort :: [Int] -> IO [Int]
bubbleSort input = do
  let ln = length input

  xs <- mapM newIORef input
  forM_ [0 .. ln - 1] $ \_ ->
    forM_ [0 .. ln - 2] $ \j -> do
      let ix = xs !! j
      let iy = xs !! (j + 1)

      x <- readIORef ix
      y <- readIORef iy
      when (x > y) $ do
        writeIORef ix y
        writeIORef iy x

  mapM readIORef xs
```

Finally, we note that there is an issue ignored for the sake of simplicity since it is not the focus of this article. One might have noticed the `!!`. What is that?

```
Prelude> :t (!!)
(!!) :: [a] -> Int -> a
```

It’s the list accessor function, which takes linear time, unlike constant-time array access. I will, hopefully, dive in some detail into how arrays with constant-time reads and writes are implemented in Haskell. For a high-level explanation, see this answer on StackOverflow. Basically, there is a Haskell runtime system with primitive operations for memory reads and writes, which is linked with a Haskell program as part of the compilation process that produces the executable. See here for a manual of the runtime system, with all the runtime-system options one can set for the executable.

## Tech industry, an interview question, and tail recursion

I have written on here before that I sort of disliked the tech industry. Why? Because I felt many of the people there are kind of boring and not that smart, and much of the work is quite mundane, though of course there are some extremely good ones who do the bulk of the technical heavy lifting (I’m not, though maybe I could become one), who are grossly under compensated relative to their actual contribution. Of course, my standards must be way too high, or I must be way too weird or non-conformist, or too spoiled. At the very least, the tech industry pays quite well, especially the big companies which offer bonus and equity. Of course, plenty of 150+ IQ people will go into grad school in math or physics or computer science, doing some much more academically involved work, often with contempt for the intellectual lightweights in the tech industry. I plead guilty to having had that sort of attitude as well, and maybe I still do. Related to that is how I found the whole artificial marketing and inflation of achievement in tech kind of disingenuous. However, I’ve figured out by now that one only has much to lose from not playing along in that game. I’ve been paying more attention to LinkedIn recently. It’s literally a credentialist cesspool of professional posturing, full of mediocrities who put on there literally every detail of their professional and extracurricular life. My having become more accepting of that indicates somewhat that I’ve improved attitude-wise. I feel like I talk to some non-techs too now, in a normal way, without expressing any sign of contempt, because what’s the point? My next step would probably be to shut down this socially unacceptably nerdy and elitist and non-PC blog, but unfortunately, I don’t feel comfortable dulling myself out like that. Of course, it might just be that the whole career game more or less compels me to do so sooner or later. When I say this, I have in mind the following from Michael O Church’s essay Does Genius Exist:

> Most gifted children seem like they might achieve creative excellence in adulthood; very few actually do. I’ve observed the careers of extremely intelligent (i.e., IQ 160+) people and the results are, at best, disappointing. About half go to graduate school; the other half go to Wall Street or Silicon Valley straight out of college. Either way, they expect to defeat the morons in business handily, retire within ten years, and dedicate the remainders of their lives to intellectual pursuits. It almost never works out that way. It’s not uncommon for highly intelligent people to be mobbed and bullied in their corporate jobs by resentful mediocrities, although even more common is for them to disappear into the bland, beige fog, and to lose every element of originality they once had. Most often, they disappear somewhere in the folds of middle management, and do what they can to hide what they once were.

I already feel more comfortable doing what, according to this, is most often done by the gifted later in life, not that I am +4 sigma above the mean, which is evident from my credentials, though +3 sigma sounds about right. Surely, success in the corporate world relies much on being liked by those in power which requires being conformist, loyal (or at least appearing so), dependable, and not threatening to the interests above. You get promoted by becoming the manager’s favorite, which is done by being the one who supports the career of the manager the most in an indispensable way.

I don’t like much the whole interview process in tech. It’s like the problems are so trivial (they are artificial and not all that related to real engineering) and some of the interviewers are nowhere near as smart as me, IQ-wise. Well, it doesn’t matter, because one has to adapt to one’s world instead of the other way round. And like it or not, for those on the tail end, the distribution of IQ/ability in the world is what it is today.

Speaking of tech interviews, I was asked this question in a recent interview.

Flatten a list. The list, of course, can have list elements. So something like
`flatten([1,2,3,[10,30]]) => [1,2,3,10,30]`

I had done this problem before. There is the brute force recursion solution. In Python, a dynamically typed language (dynamic typing is more or less necessary for this problem; otherwise, one would have to impose some type constraints and, on top of that, find a way to make nested lists work within a static type system, which as far as I can tell would require defining some generic sum type over the primitive types and the list type of that sum type itself), this would be, in functional style,

```
def flatten(l):
    return [l] if type(l) != list else sum(map(flatten, l), [])
```

Of course, in the actual interview, I wrote it in imperative style, to increase my chance of passing. 😉

This is of course actually inefficient, in that `sum` concatenates lists, so a copy is made at each level of the recursion. To avoid that, we employ a helper function.

```
def traverse(acc, l):
    if type(l) != list:
        acc.append(l)
    else:
        for e in l:
            traverse(acc, e)
```

The `flatten` function itself would be

```
def flatten(l):
    acc = []
    traverse(acc, l)
    return acc
```

By threading a single accumulator list through the traversal and appending to it, we avoid the intermediate copies; aside from the linear space for the returned flattened list itself (and stack space proportional to the nesting depth), we use constant memory.

What did this problem remind me of? Tail recursion. Functional languages support tail-call optimization to avoid adding a new stack frame on each recursive call, which would be very handy for this problem. In fact, the pattern of implementation in the `traverse` function can be translated into a tail-recursive one in a functional language. We’ll leave that for later.

To start, we’ll present a canonical example of tail recursion, the factorial function. In Haskell, the immediate implementation would be

```
factorial :: Int -> Int
factorial n = if n <= 0 then 1 else n * factorial (n - 1)
```

In assembly (generated here from the equivalent C++ function, hence the mangled name), we would have something like

```
_Z9factoriali:
# create new stack frame
pushq   %rbp
movq    %rsp, %rbp
subq    $16, %rsp
# copy parameter to stack
movl    %edi, -4(%rbp)
# compare parameter with 0
cmpl    $0, -4(%rbp)
# if n > 0, jump to the recursive case
jg      .L2
# the base case, set 1 as return value and return
movl    $1, %eax
jmp     .L3
.L2:
# set argument in recursive call to current argument minus one
movl    -4(%rbp), %eax
subl    $1, %eax
movl    %eax, %edi
call    _Z9factoriali
# set return value to n * factorial(n-1)
imull   -4(%rbp), %eax
.L3:
leave
ret
```

One can see explicitly in the assembly the adding of a new stack frame for the recursive call. To avoid that, we employ tail recursion as follows.

```
factorialTail :: Int -> Int -> Int
factorialTail acc n = if n <= 0 then acc
                      else factorialTail (acc * n) (n - 1)

factorial :: Int -> Int
factorial = factorialTail 1
```

In assembly, this would be

```
_Z13factorialTailii:
# move acc (in %edi) into %eax, the return register
movl    %edi, %eax
# bitwise AND of %esi (n) with itself; sets SF, ZF, PF flags accordingly
testl   %esi, %esi
# return acc immediately if n <= 0
jle     .L5
.L2:
# acc *= n
imull   %esi, %eax
# n -= 1, subl also sets the zero flag
subl    $1, %esi
# loop back if the zero flag is not set, i.e. return once n == 0
jne     .L2
.L5:
ret
```

Notice how this is essentially a for loop, with no recursive calls. With the `-O2` flag set on x86-64 gcc 8.1, which I used to generate the above assembly code (with my own comments added afterwards), the tail recursion optimization was applied, as evidenced by the assembly produced. I ran this not on my own machine, but in the cloud via the handy godbolt.org compiler explorer I found, the code of which happens to be on GitHub. And I can only say that the guy who created this tool looks like another one of those uber-prolific programmers blessed with tremendous instinct and power for building software systems. You might think that my writing a blog post with Haskell and x86 leans me towards this category as well. On the contrary, I actually think that I’m quite pathetically weak at computer stuff (and started off very unnatural at it), though I also believe that with some dedicated practice I can become good. I would say that I was natural with mathematics and algorithms but not with engineering or systems, though surely, with quite a lot of exposure, I slowly developed a sense for the latter as well, gradually steering what had been a horrendously off intuition towards the right direction, and concurrently reducing, stage by stage, the sense of awe and intimidation I had felt with respect to the actual natural hackers. I can at least console myself by thinking that much of my awe’s having transformed into some sense of normalized (mental) understanding is an indicator of rapid progress. Yes, there are still plenty of people way better than I am, but I no longer feel that what they are doing, their thought process, cannot even be understood by my weakling brain, which once perceived it as some form of otherworldly wizardry beyond my comprehension, and its actor as some form of higher being.
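Going back to the Haskell for a moment, a quick sanity check (repeating both definitions so the snippet stands alone) confirms that the plain and tail-recursive versions agree:

```
factorial :: Int -> Int
factorial n = if n <= 0 then 1 else n * factorial (n - 1)

factorialTail :: Int -> Int -> Int
factorialTail acc n = if n <= 0 then acc else factorialTail (acc * n) (n - 1)

main :: IO ()
main = do
  -- both compute 10! = 3628800
  print (factorial 10)
  print (factorialTail 1 10)
```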

On quite another domain, I felt somewhat similarly with regard to those at the top of the socioeconomic and political hierarchy. The default for corporate executives and those officially at the top is one of reverence. People assume that because they are on the top, they must be inherently superior in some way or another in their ability. Programmers, as status-insensitive, socially clueless aspies, are supposed to be largely oblivious to the political machinations orchestrated by those on top, to what is the reality of their (subordinate) position within the whole hierarchy. In any case, those people felt to me to be in a whole other world, similar to the impression I had of these elite programmers; I was much oblivious to that world and also could care less about it. Until I more or less developed, as far as I tell, a more accurate intuition for how that world works as well, much aided, of course, by experience, mostly indirect, but enough for me to, with my intelligence and independent judgment, construct what I believe is a reasonable picture or model for what actually goes on, one which I expect to be enhanced over time with more data collected. I’ve increasingly grown to realize, over time, that people are very much naturally psychologically chained to the reality of their official position, their formal credentials, which are correlated very imperfectly with actual ability, or in some cases loaded on (largely born) social position and artificial perception, even nil or negatively correlated; it takes an independent mind, a revolutionary mind to break free from that by disentangling the real and the artificial. And becoming mentally free is the first step towards becoming actually free.