Understanding maths

Note: this post is a translation of an older post written in French: Compréhension mathématique. I wrote the original one when I was still in university, but it still rings true today – in many ways 🙂

After two heavyweight posts, Introduction to algorithmic complexity 1/2 and Introduction to algorithmic complexity 2/2, here’s a lighter and probably more “meta” post. Also probably more nonsense – it’s possible that, at the end of the article, you’ll either be convinced that I’m demanding a lot of myself, or that I’m completely ridiculous 🙂

I’m quite fascinated by the working of the human brain. Not by how it works – that, I don’t know much about – but by the fact that it works at all. The whole concept of being able to read and write, for instance, still amazes me. And I do spend a fair amount of time thinking about how I think, how to improve how I think, or how to optimize what I want to learn so that it matches my way of thinking. And in all of that thinking, I redefined for myself what I mean by “comprehension”.

My previous definition of comprehension

It often happens that I moan about the fact that I don’t understand things as fast as I used to; I wonder how much of that is simply that I’m demanding more of myself. There was a time when my definition of “understanding” was “what you’re saying looks logical, I can see the logical steps of what you’re doing at the blackboard, and I see roughly what you’re doing”. I also have some painful memories of episodes such as the following:

“Here, you should read this article.
− OK.
<a few hours later>
− There, done!
− Already?
− Well, yeah… I’m a fast reader…
− And you understood everything?
− Well… yeah…
− Including why <obscure but probably profound point of the article>?
<blank look, sigh and explanations> (not by me, the explanations).

I was utterly convinced that I had understood, before it was proven to me that I had missed a fair amount of things. Since then, I have learnt a few things.

What I learnt about comprehension

The first thing I learnt is that “vaguely understanding” is not “comprehending”, or at least not at my (current) level of personal requirements. “Vaguely understanding” is the first step. It can also be the last step, if it’s a topic for which I can/want to make do with superficial knowledge. I probably gained a bit of modesty, and I probably say way more often that I only have a vague idea about some topics.

The second thing is that comprehension does take time. Today, I believe I need three to four reads of a research paper (on a topic I know) to have a “decent” comprehension of it. Below that, I have “a vague idea of what the paper means”.

The third thing, although it’s something I heard a lot at home, is that “repeating is at the core of good understanding”. It helps a lot to have at least been exposed to a notion before trying to really grasp it. The first exposure is a large hit in the face, the second one is slightly mellower, and at the third one you start to know where you are.

The fourth thing is that my brain seems to like it when I write stuff down. Let me sing an ode to blackboard and chalk. New technology gave us a lot of very fancy stuff, video-projectors, interactive whiteboards, and I’m even going to throw in whiteboards and Velleda markers with them. I may seem reactionary, but I like nothing better than blackboard and chalk. (All right, the blackboard can be green.) It takes more time to write a proof on the blackboard than to run a PowerPoint with the proof on it (or Beamer slides, I’m not discriminating in my rejection of technology 😉 ). So yeah, the class is longer, probably. But it gives time to follow. And it also gives time to take notes. Many of my classmates tell me they prefer to “listen rather than take notes” (especially since, for blackboard classes, there is usually an excellent set of typeset lecture notes). But writing helps me stay focused, and in the end listen better. I also scribble a few more detailed points for things that may not be obvious when re-reading. Sometimes I leave jokes for future me – the other day, I found an “It’s a circle, ok?” next to a potato-shaped figure, and it made me laugh a lot. Oh, and as for the fact that I also hate whiteboards: first, Velleda markers never work. Sometimes, there’s also a permanent marker hiding in the marker cup (and overwriting with an erasable marker to eventually erase it is a HUGE PAIN). And erasable marker is faster to erase than chalk. I learnt to enjoy the break that comes with “erasing the blackboard” – the usual method in the last classes I attended was to work in two passes, one with a wet thingy, and one with a scraper. I was very impressed the first time I saw that 😉 (yeah, I’m very impressionable) and, since then, I enjoy the minute or two that it takes to re-read what just happened. I like it. So, all in all: blackboard and chalk for the win.

How I apply those observations

With all of that, I learnt how to avoid the aforementioned cringy situations, and I got better at reading scientific papers. And it takes more time than a couple of hours 😉

Generally, I first read it very fast to get an idea of the structure of the paper, what it says, how the main proof seems to work, and I try to see if there’s stuff that is going to annoy me. I have some ideas about what makes my life easier or not in a paper, and when it gets into the “not” territory, I grumble a bit, even though I suppose that these structures may not be the same for everyone. (There are also papers that are just a pain to read, to be honest.) That “very fast” read is around half an hour to an hour for a ~10-page article.

The second read is “annotating”. I read in a little more detail, and I put questions everywhere. The questions are generally “why?” or “how?” on language structures such as “it follows that”, “obviously”, or “by applying What’s-his-name theorem”. It’s also pretty fast, because there are a lot of linguistic signals, and I’m still not trying to comprehend all the details, but to identify the ones that will probably require me to spend some time to comprehend them. I also take note of the points that “bother” me, that is to say the ones where I don’t feel comfortable. It’s a bit hard to explain, because it’s really a “gut feeling” that goes “mmmh, there, something’s not quite right. I don’t know what, but something’s not quite right”. And it’s literally a gut feeling! It may seem weird to link “comprehension” to “feelings”, but, as far as I’m concerned, I learnt, maybe paradoxically, to trust my feelings to evaluate my comprehension – or lack thereof.

The third read is the longest – that’s where I try to answer all the questions from the second read and to re-do the computations. And to convince myself that yeah, that IS a typo there, and not a mistake in my computation or reasoning. The fourth read and the following ones are refinements of the third read for the questions that I couldn’t answer during the third one (but for which, maybe, things got clearer in the meantime).

I estimate that I have a decent understanding of a paper once I have answered the vast majority of the questions from the second read. (And I usually try to find someone more clever than me for the questions that are still open.) Even then… I do know it’s not perfect.

The ultimate test is to prepare a presentation about the paper. Do as I say and not as I do – I typically do that by preparing projector slides. Because as a student/learner, I do prefer a blackboard class, but I also know that it’s a lot of work, and that doing a (good) blackboard presentation is very hard (and I’m not there yet). Once I have slides (which usually allow me to find a few more points that I haven’t quite grasped yet), I try to present. And now we’re back to the “gut feeling”. If I stammer, if there are slides that make no sense, if the presentation is not smooth: there’s probably still stuff that requires some time.

When, finally, everything seems to be good, the feeling is a mix of relief and victory. I don’t know exactly what the right comparison would be. Maybe the people who make domino shows. You spend an enormous amount of time placing your dominoes next to one another, and I think that at the moment when the last one falls without the chain having been interrupted… it must be that kind of feeling.

Of course, I can’t do that with everything I read, it would take too much time. I don’t know if there’s a way to speed up the process, but I don’t think it’s possible, at least for me, in any significant way. I also need to let things “simmer”. And there are a fair number of strong hypotheses about the impact of sleep on learning and memory; I don’t know how much of that can be applied to my “math comprehension”, but I wouldn’t be surprised if the brain took the opportunity of sleep to tidy things up and make the right connections.

Consequently, it’s sometimes quite frustrating to leave things at a “partial comprehension” stage – whether it’s temporary or because of giving up – especially when you don’t know exactly what’s wrong. The “gut feeling” is there (and not only on the day before the exam 😉 ). Sometimes, I feel like giving up altogether – what’s the point of trying, and only half understanding things? But maybe “half” is better than “not at all”, when you know you still have half of the way to walk. And sometimes, when I get a bit more stubborn, everything just clicks. And that’s one of the best feelings in the world.

Introduction to algorithmic complexity – 2/2

Note: this is the translation of a post that was originally published in French: Introduction à la complexité algorithmique – 2/2.

In the previous episode, I explained two ways to sort books, and I counted the elementary operations I needed to do that, and I estimated the number of operations depending on the number of books that I wanted to sort. In particular, I looked into the best case, worst case and average case for the algorithms, depending on the initial ordering of the input. In this post, I’ll say a bit more about best/worst/average case, and then I’ll refine the notion of algorithmic complexity itself.

The “best case” is typically analyzed the least, because it rarely happens (thank you, Murphy’s Law.) It still gives bounds of what is achievable, and it gives a hint on whether it’s useful to modify the input to get the best case more often.

The “worst case” is the only one that gives guarantees. Saying that my algorithm, in the worst case, executes in a given amount of time guarantees that it will never take longer – although it can be faster. That type of guarantee is sometimes necessary. In particular, it makes it possible to answer the question of what happens if an “adversary” provides the input, in a way that will make the algorithm’s life as difficult as possible – a question that would interest cryptographers and security people, for example. Having guarantees on the worst case means that the algorithm works as desired, even if an adversary tries to make its life as miserable as possible. The drawback of the worst case analysis is that the “usual” execution time often gets overestimated, and sometimes by a lot.

Looking at the “average case” gives an idea of what happens “normally”. It also gives an idea of what happens if the algorithm is repeated several times on independent data, where both the worst case and the best case can happen. Moreover, there are sometimes ways to avoid the worst cases, in which case the average case analysis is the more useful one. For example, if an adversary gives me the books in an order that makes my algorithm slow, I can compensate for that by shuffling the books at the beginning so that the probability of being in a bad case is low (and does not depend on my adversary’s input). The drawback of the average case analysis is that we lose the guarantee that the worst case analysis gives us.

For my book sorting algorithms, my conclusions were as follows:

  • For the first sorting algorithm, where I was searching at each step for a place to put the book by scanning all the books that I had inserted so far, I had, in the best case, 2n-1 operations, in the worst case, \displaystyle \frac{n^2 + n}{2} operations, and in the average case, \displaystyle \frac{n^2+9n}{4} operations.
  • For the second sorting algorithm, where I was grouping book groups two by two, I had, in all cases, 2\times n \times\log_2(n) operations.

I’m going to draw a figure, because figures are pretty. If the colors are not that clear, the plotted functions and their captions are in the same order.
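
Since the original figure doesn’t survive the translation, here is a minimal sketch of how one could re-draw it with matplotlib (the plotting choices – the range of n and the labels – are mine, not the original figure’s):

```python
import numpy as np
import matplotlib.pyplot as plt

n = np.arange(2, 101)  # number of books to sort

plt.plot(n, 2 * n - 1, label="2n - 1 (first algorithm, best case)")
plt.plot(n, (n ** 2 + n) / 2, label="(n^2 + n)/2 (first algorithm, worst case)")
plt.plot(n, (n ** 2 + 9 * n) / 4, label="(n^2 + 9n)/4 (first algorithm, average case)")
plt.plot(n, 2 * n * np.log2(n), label="2 n log2(n) (second algorithm, all cases)")

plt.xlabel("number of books n")
plt.ylabel("number of operations")
plt.legend()
plt.show()
```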

It turns out that, when talking about complexity, these formulas (2n-1, \displaystyle \frac{n^2 + n}{2}, \displaystyle \frac{n^2+9n}{4}, 2\times n \times\log_2(n)) would not be the ones that I would use in general. If someone asked me about these complexities, I would answer, respectively, that the complexities are n, n^2 (or “quadratic”), n^2 again, and n \log n.

This may seem very imprecise, and I’ll admit that the first time I saw this kind of approximations, I was quite irritated. (It was in physics class, so I may not have been in the best of moods either.) Since then, I find it quite handy, and even that it makes sense. The fact that “it makes sense” has a strong mathematical justification. For the people who want some precision, and who are not afraid of things like “limit when x tends to infinity of blah blah”, it’s explained there: http://en.wikipedia.org/wiki/Big_O_notation. It’s vastly above the level of the post I’m currently writing, but I still want to justify a couple of things; be warned that everything that follows is going to be highly non-rigorous.

The first question is what happens to the smaller terms of the formulas. The idea is to keep only what “matters” when looking at how the number of operations increases with the number of elements to sort. For example, if I want to sort 750 books, with the “average” case of the first algorithm, I have \displaystyle \frac{n^2 + 9n}{4} = \displaystyle \frac{n^2}{4} + \frac{9n}{4}. For 750 books, the two parts of the sum yield, respectively, 140625 and… 1687. If I want to sort 1000 books, I get 250000 and 2250. The first part of the sum is much larger, and it grows much quicker. If I need to know how much time I need, and I don’t need that much precision, I can keep \displaystyle \frac{n^2}{4} and discard \displaystyle \frac{9n}{4} – already for 1000 books, it contributes less than 1% of the total number of operations.

The second question is more complicated: why do I consider n^2 and \displaystyle \frac{n^2}{4}, or 2n\log_2 n and n \log n, as identical? The short answer is that it makes running times comparable between several algorithms. To determine which algorithm is the most efficient, it’s nice to be able to compare how they perform. In particular, we look at the “asymptotic” comparison, that is to say what happens when the input of the algorithm contains a very large number of elements (for instance, if I have a lot of books to sort) – that’s where using the fastest algorithm is going to be the most worth it.

To reduce the time that it takes an algorithm to execute, I have two possibilities. Either I reduce the time that each operation takes, or I reduce the number of operations. Suppose that I have a computer that can execute one operation per second, and that I want to sort 100 elements. The first algorithm, which needs \displaystyle \frac{n^2}{4} operations, finishes after 2500 seconds. The second algorithm, which needs 2n\log_2 n operations, finishes after 1328 seconds. Now suppose that I have a much faster computer to execute the first algorithm. Instead of needing 1 second per operation, it’s 5 times faster, and can execute an operation in 0.2 seconds. That means that I can sort 100 elements in 500 seconds, which is faster than the second algorithm on the slower computer. Yay! Except that, first, if I run the second algorithm on the faster computer, I can sort my elements five times faster too, in 265 seconds. Moreover, suppose now that I have 1000 elements to sort. With the first algorithm on the fast computer, I need 0.2 \times \frac{1000^2}{4} = 50000 seconds, and with the second algorithm on the much slower computer, 2 \times 1000 \times \log_2(1000) = 19931 seconds.
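
To make that argument concrete, here is a small sketch that redoes the arithmetic and looks for the point where the better algorithm on the slow computer overtakes the worse algorithm on the fast computer (the 0.2 s / 1 s speeds are the ones from the example above; the search loop is my addition):

```python
import math

def time_first(n, seconds_per_op):
    """First algorithm: roughly n^2 / 4 operations."""
    return seconds_per_op * n ** 2 / 4

def time_second(n, seconds_per_op):
    """Second algorithm: roughly 2 * n * log2(n) operations."""
    return seconds_per_op * 2 * n * math.log2(n)

# First algorithm on the fast computer (0.2 s per operation),
# second algorithm on the slow one (1 s per operation).
for n in (100, 1000):
    print(n, time_first(n, 0.2), time_second(n, 1.0))

# Despite the slower computer, the second algorithm eventually wins:
n = 100
while time_first(n, 0.2) <= time_second(n, 1.0):
    n += 1
print("the n^2 algorithm loses from about n =", n)
```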

That’s the idea behind removing the “multiplying numbers” when estimating complexity. Given an algorithm with a complexity “n^2” and an algorithm with a complexity “n \log n”, I can put the first algorithm on the fastest computer I can find: there will always be a number of elements for which the second algorithm, even on a very slow computer, will be faster than the first one. The number of elements in question can be very large if the difference in speed between the computers is large, but since large numbers are what I am interested in anyway, that’s not a problem.

So when I compare two algorithms, it’s much more interesting to see that one needs “something like n \log n” operations and one needs “something like n^2” operations than to try to pinpoint the exact constant that multiplies the n \log n or the n^2.

Of course, if two algorithms need “something along the lines of n^2 operations”, asking for the constant that multiplies that n^2 is a valid question. In practice, it’s not done that often, because unless things are very simple and well-defined (and even then), it’s very hard to determine that constant exactly, depending on how you implement it in a programming language. It would also require asking exactly what an operation is. There are “classical” models that allow defining all these things, but linking them to current programming languages and computers is probably not realistic.

Everything that I talked about so far is a function of n, which is in general the “size of the input”, or the “amount of work that the algorithm has to do”. For books to sort, it would be the number of books. For graph operations, it would be the number of vertices of the graph, and/or the number of edges. Now, as “people who write algorithms”, given an input of size n, what do we like, what makes us frown, what makes us run very fast in the other direction?

The “constant time algorithms” and “logarithmic time algorithms” (whose numbers of operations are, respectively, a constant that does not depend on n, or “something like \log n”) are fairly rare, because with \log n operations (or a constant number of operations), we don’t even have the time to look at the whole input. So when we find an algorithm of that type, we’re very, very happy. A typical example of a logarithmic time algorithm is searching for an element in a sorted list. When the list is sorted, it is not necessary to read it completely to find the element that we’re looking for. We can start by checking whether it’s before or after the middle element, and search in the corresponding half of the list. Then we check whether it’s before or after the middle of that new part of the list, and so on.
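
As an illustration, here is a minimal sketch of that logarithmic search in a sorted list – a plain binary search; the code and the example list are mine, not something from the original post:

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or None if it is absent.

    Each iteration halves the remaining interval, so the number of
    comparisons is about log2(len(sorted_items))."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        middle = (low + high) // 2
        if sorted_items[middle] == target:
            return middle
        elif sorted_items[middle] < target:
            low = middle + 1    # the target can only be after the middle element
        else:
            high = middle - 1   # the target can only be before the middle element
    return None

authors = ["Asimov", "Clarke", "Gaiman", "Pratchett", "Zelazny"]
print(binary_search(authors, "Pratchett"))   # 3
print(binary_search(authors, "Tolkien"))     # None
```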

We’re also very happy when we find a “linear time algorithm” (the number of operations is “something like n”). That means that we read the whole input, do a few operations per element of the input, and bam, done. n \log n is also usually considered “acceptable”. It’s an important bound, because it is possible to prove that, in standard algorithmic models (which are quite close to counting “elementary” operations), it is not possible to sort n elements faster than with n \log n operations in the general case (that is to say, without knowing anything about the elements or the order in which they are). There are a number of algorithms that require, at some point, some sorting: if it is not possible to get rid of the sorting, such an algorithm will also not get below n \log n operations.

We start grumbling a bit at n^2 or n^3, and grumbling a lot at greater powers of n. Algorithms that run in n^k operations, for some value of k (even 1000000), are called “polynomial”. The idea is that, in the same way that an n \log n algorithm will eventually be more efficient than an n^2 algorithm with a large enough input, a polynomial algorithm, whatever the value of k, will eventually be more efficient than a 2^n-operation algorithm. Or than a 1.308^n-operation algorithm. Or even than a 1.0001^n-operation algorithm.
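
To see the “polynomial eventually beats exponential” claim with friendlier numbers than n^1000000 and 1.0001^n, here is a tiny sketch (the choice of n^3 versus 1.1^n is mine, picked so the crossover shows up quickly):

```python
# Find the first n (past the trivial n = 1 case) where 1.1^n overtakes n^3.
n = 2
while 1.1 ** n <= n ** 3:
    n += 1
print("1.1^n exceeds n^3 from n =", n, "and it never looks back afterwards")
```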

In real life, however, this type of reasoning does have its limits. When writing code, if there is a solution that takes 20 times (or even 2 times) fewer operations than another, it will generally be the one that we choose to implement. And the asymptotic behavior is only that: asymptotic. It may not apply for the size of the inputs that are processed by our code.

There is an example I like a lot, and I hope you’ll like it too. Consider the problem of multiplying matrices. (For people who never saw matrices: they’re essentially tables of numbers, and you can define how to multiply these tables of numbers. It’s a bit more complicated/complex than multiplying numbers one by one, but not that much more complicated.) (Says the girl who didn’t know how to multiply two matrices before her third year of engineering school, but that’s another story.)

The algorithm that we learn in school multiplies two matrices of size n \times n with n^3 operations. There is an algorithm that is not too complicated (Strassen’s algorithm) that works in n^{2.807} operations (which is better than n^3). And then there is a much more complicated algorithm (Coppersmith-Winograd and its successors) that works in n^{2.373} operations. This is, I think, the only algorithm for which I heard SEVERAL theoreticians say “yeah, but really, the constant is ugly” – speaking of the number by which we multiply that n^{2.373} to get the “real” number of operations. That constant is not very well-defined (for the reasons mentioned earlier) – we just know that it’s ugly. In practice, as far as I know, the matrix multiplication algorithm that is implemented in “fast” matrix multiplication libraries is Strassen’s or a variation of it, because the constant in the Coppersmith-Winograd algorithm is so huge that the matrices for which it would yield a benefit are too large to be used in practice.
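
For reference, here is a minimal sketch of the schoolbook n^3 algorithm mentioned at the start of this paragraph, with plain lists of lists (Strassen’s trick of trading multiplications for additions is not shown):

```python
def matrix_multiply(a, b):
    """Schoolbook multiplication of two n x n matrices: about n^3 multiplications."""
    n = len(a)
    result = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # Each entry of the result is the dot product of a row of a and a column of b.
            for k in range(n):
                result[i][j] += a[i][k] * b[k][j]
    return result

print(matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19, 22], [43, 50]]
```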

And this funny anecdote concludes this post. I hope it was understandable – don’t hesitate to ask questions or make comments 🙂

Introduction to algorithmic complexity – 1/2

Note: this is the translation of a post that was originally published in French: Introduction à la complexité algorithmique – 1/2.

There – I warmed up by writing a couple of posts where I knew where I wanted to go (a general post about theoretical computer science, and a post to explain what a logarithm is, because it’s always useful). Then I took a small break and talked about intuition, because I needed to gather my thoughts. So now we’re going to get into things that are a little bit more complicated, and that are somewhat more difficult for me to explain too. So I’m going to write, and we’ll see what happens in the end. Add to that that I want to explain things while mostly avoiding the formality of maths that’s by now “natural” to me (but believe me, it required a strong hammer to place it in my head in the first place): I’m not entirely sure about the result of this. I also decided to cut this post in two, because it’s already fairly long. The second one should be shorter.

I already defined an algorithm as a well-defined sequence of operations that can eventually give a result. I’m not going to go much further into the formal definition, because right now it’s not useful. And I’m also going to define algorithmic complexity, in a very informal way, as the quantity of resources that I need to execute my algorithm. By resources, I will mostly mean “time”, that is to say the amount of time I need to execute the algorithm; sometimes “space”, that is to say the amount of memory (think of it as the amount of RAM or disk space) that I need to execute my algorithm.

I’m going to take a very common example to illustrate my point: sorting. And, to give a concrete example of my sorting, suppose I have a bookshelf full of books (an utterly absurd proposition). And that it suddenly takes my fancy to want to sort them, say by alphabetical order of their author (and by title for two books by the same author). I say that a book A is “before” or “smaller than” a book B if it must be put before it in the bookshelf, and that it is “after” or “larger than” the book B if it must be sorted after it. With that definition, Asimov’s books are “before” or “smaller than” Clarke’s, which are “before” or “smaller than” Pratchett’s. I’m going to keep this example throughout the whole post, and I’ll draw parallels to the corresponding algorithmic notions.

Let me first define what I’m talking about. The algorithm I’m studying is the sorting algorithm: that’s the algorithm that allows me to go from a messy bookshelf to a bookshelf whose content is in alphabetical order. The “input” of my algorithm (that is to say, the data that I give to my algorithm for processing) is the messy bookshelf. The “output” of my algorithm (that is to say, the data after they have been processed by my algorithm) is the tidy bookshelf.

I can first observe that the more books I have, the longer it takes to sort them. There are two reasons for that. The first is that, if you consider an “elementary” operation of the sort (for instance, putting a book in the bookshelf), it takes longer to do that 100 times than 10 times. The second reason is that if you consider what you do for each book, the more books there are, the longer it takes. It takes longer to search for the right place to put a book in the midst of 100 books than in the midst of 10.

And that’s precisely what we’re interested in here: how the time that is needed to get a tidy bookshelf grows as a function of the number of books or, generally speaking, how the time necessary to get a sorted sequence of elements depends on the number of elements to sort.

This time depends on the sorting method that is used. For instance, you can choose a very, very long sorting method: while the bookshelf is not sorted, you put everything on the floor, and you put the books back in the bookshelf in a random order. Not sorted? Start again. At the other end of the spectrum, you have Mary Poppins: “Supercalifragilistic”, and bam, your bookshelf is tidy. The Mary Poppins method has a nice particularity: it doesn’t depend on the number of books you have. We say that Mary Poppins executes “in constant time”: whatever the number of books that need to be sorted, they will all be sorted within a second. In practice, there’s a reason why Mary Poppins makes people dream: it’s magical, and quite hard to do in reality.

Let’s go back to reality, and to sorting algorithms that are not Mary Poppins. To analyze how the sorting works, I’m considering three elementary operations that I may need while I’m tidying:

  • comparing two books to see if one should be before or after the other,
  • adding a book to the bookshelf,
  • and, assuming that my books are set in some order on a table, moving a book from one place to another on the table.

I’m also going to suppose that these three operations take the same time, let’s say 1 second. It wouldn’t be very efficient for a computer, but it would be quite efficient for a human, and it gives some idea. I’m also going to suppose that my bookshelf is somewhat magical (do I have some Mary Poppins streak after all?), that is to say that its individual shelves are self-adapting, and that I have no problem placing a book there without going “urgh, I don’t have space on this shelf anymore, I need to move books to the one below, and that one is full as well, and now it’s a mess”. Similarly: my table is magical, and I have no problem placing a book where I want. Normally, I should ask myself those sorts of questions, including from an algorithmic point of view (what is the cost of doing that sort of thing, can I avoid it by being clever). But since I’m not writing a post about sorting algorithms, but about algorithmic complexity, let’s keep things simple there. (And for those who know what I’m talking about: yeah, I’m aware my model is debatable. It’s a model, it’s my model, I do what I want with it, and my explanations within that framework are valid even if the model itself is debatable.)

First sorting algorithm

Now here’s a first way to sort my books. Suppose I put the contents of my bookshelf on the table, and that I want to add the books back one by one. The scenario is not that realistic for a human, who would probably remember where to put a book, but let’s imagine the situation anyway.

  1. I pick a book, I put it in the bookshelf.
  2. I pick another book, I compare it with the first: if it must be put before, I put it before, otherwise after.
  3. I pick a third book. I compare it with the book in the first position on the shelf. If it must be put before, I put it before. If it must be put after, I compare it with the book in the second position on the shelf. If it must be before, I put it between the two books that are already on the shelf. If it must be put after, I put it in the last position.
  4. And so on, until my bookshelf is sorted. For each book that I insert, I compare it, in order, with the books that are already there, and I add it between the last book that is “before” it and the first book that is “after” it.
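
For readers who prefer code to bookshelves, here is a minimal sketch of this first algorithm – essentially an insertion sort – with a counter that uses the same cost model as below (one operation per comparison and one per placement). The function name and the examples are mine:

```python
def insertion_sort_with_count(books):
    """Sort the books as described above and count the elementary operations
    (comparisons + placements), at 1 operation each."""
    shelf = []
    operations = 0
    for book in books:
        position = 0
        for placed in shelf:
            operations += 1          # one comparison with a book already on the shelf
            if book < placed:
                break                # found the first book that must come after it
            position += 1
        shelf.insert(position, book)
        operations += 1              # placing the book on the shelf
    return shelf, operations

print(insertion_sort_with_count(["Asimov", "Clarke", "Pratchett"]))  # already sorted: 6 operations
print(insertion_sort_with_count(["Pratchett", "Clarke", "Asimov"]))  # reverse order: 5 operations
```

(With 100 already-sorted titles, the counter gives 5050, the number computed below.)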

And now I’m asking how much time it takes if I have, say, 100 books, or an arbitrary number of books. I’m going to give the answer for both cases: for 100 books and for n books. The time for n books will be a function of the number of books, and that’s really what interests me here – or, to be more precise, what will interest me in the second post of this introduction.

The answer is that it depends on the order the books were in at the start, when they were on my table. It can happen (why not) that they were already sorted. Maybe I should have checked before I put everything on the table, it would have been smart, but I didn’t think of it. It so happens that it’s the worst thing that can happen to this algorithm, because every time I want to place a book on the shelf, since it’s after/greater than all the books I placed before it, I need to compare it with all of the books that I placed before. Let’s count:

  1. I put the first book in the shelf. Number of operations: 1.
  2. I compare the second book with the first book. I put it in the shelf. Number of operations: 2.
  3. I compare the third book with the first book. I compare the third book with the second book. I put it in the shelf. Number of operations: 3.
  4. I compare the fourth book with the first book. I compare the fourth book with the second book. I compare the fourth book with the third book. I put it in the shelf. Number of operations: 4.
  5. And so on. Every time I insert a book, I compare it to all the books that were placed before it; when I insert the 50th book, I do 49 comparison operations, plus adding the book in the shelf, 50 operations.

So to insert 100 books, if they’re in order at the beginning, I need 1+2+3+4+5+…+99+100 operations. I’m going to need you to trust me on this if you don’t know it already (it’s easy to prove, but it’s not what I’m talking about right now) that 1+2+3+4+5+…+99+100 is exactly equal to (100 × 101)/2 = 5050 operations (so, a bit less than one hour and a half with 1 operation per second). And if I don’t have 100 books anymore, but n books in order, I’ll need \displaystyle \frac{n(n+1)}{2} = \frac{n^2 + n}{2} operations.
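
For the curious, here is the classic pairing argument behind that formula (my addition, not part of the original post). Write the sum twice, once forwards and once backwards:

\displaystyle S = 1 + 2 + \dots + (n-1) + n

\displaystyle S = n + (n-1) + \dots + 2 + 1

Adding the two lines term by term, each of the n columns sums to n+1, so

\displaystyle 2S = n(n+1), \text{ hence } S = \frac{n(n+1)}{2}

and for n = 100 that gives (100 × 101)/2 = 5050.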

Now suppose that my books were exactly in the opposite order of the one they were supposed to be sorted into. Well, this time, it’s the best thing that can happen with this algorithm, because each book that I add is smaller than all the ones I put in before it, so I just need a single comparison.

  1. I put the first book in the shelf. Number of operations: 1.
  2. I compare the second book with the first book. I put it in the shelf. Number of operations: 2.
  3. I compare the third book with the first book. I put it in the shelf. Number of operations: 2.
  4. And so on: I always compare with the first book, it’s always before, and I always have 2 operations.

So if my 100 books are exactly in reverse order, I do 1+2+2+…+2 = 1 + 99 × 2 = 199 operations (so 3 minutes and 19 seconds). And if I have n books in reverse order, I need 1 + 2(n-1) = 2n-1 operations.

Alright, we have the “best case” and the “worst case”. Now this is where it gets a bit complicated, because the situation that I’m going to describe is less well-defined, and I’m going to start making approximations everywhere. I’m trying to justify the approximations I’m making, and why they are valid; if I’m missing some steps, don’t hesitate to ask in the comments, I may be missing something myself.

Suppose now that my un-ordered books are in a state such that every time I add a book, it’s added roughly in the middle of what has been sorted so far (I’m saying “roughly” because if I have sorted 5 books, I’m going to place the 6th after the book in position 2 or position 3 – positions are integers, I’m not going to place it after the book in position 2.5). Suppose I insert book number i: I’m going to estimate the number of comparisons that I make as \displaystyle \frac i 2 + 1, which is greater than or equal to the number of comparisons that I actually make. To see that, I distinguish between i even and i odd. You can show that it works for all numbers; I’m just going to give two examples to explain why there’s indeed a fair chance it works.

If I insert the 6th book, I have already 5 books inserted. If I want to insert it after the book in position 3 (“almost in the middle”), I’m making 4 comparisons (because it’s after the books in positions 1, 2 and 3, but before the book in position 4): we have \displaystyle \frac i 2 + 1 = \frac 6 2 + 1 = 4.

If I insert the 7th book, I already have 6 books inserted, I want to insert it after the 3rd book as well (exactly in the middle); so I also make 4 comparisons (for the same reason), and I have \displaystyle \frac i 2 + 1 = \frac 7 2 + 1 = 4.5.

Now I’m going to estimate the number of operations I need to sort 100 books, overestimating a little bit, and allowing myself “half-operations”. The goal is not to count exactly, but to get an order of magnitude, which will happen to be greater than the exact number of operations.

  • Book 1: \frac 1 2 + 1 = 1.5, plus putting on the shelf, 2.5 operations (I actually don’t need to compare here; I’m just simplifying my end computation.)
  • Book 2: \frac 2 2 + 1 = 2, plus putting on the shelf, 3 operations (again, here, I have an extra comparison, because I have only one book in the shelf already, but I’m not trying to get an exact count).
  • Book 3: \frac 3 2 + 1 = 2.5, plus putting on the shelf, 3.5 operations.
  • Book 4: \frac 4 2 + 1 = 3, plus putting on the shelf, 4 operations.

If I continue like that and I re-order my computations a bit, I have, for 100 books:

\displaystyle \frac{1+2+3+...+99+100}{2} + 100 + 100 = \frac{100 \times 101}{4} + 200 = 2725 \text{ operations}

which yields roughly 45 minutes.

The first element of my sum comes from the \displaystyle \frac i 2 that I have in all my comparison computations. The first 100 comes from the “1” that I add every time I count the comparisons (and I do that 100 times); the second “100” comes from the “1” that I count every time I put a book in the shelf, which I also do 100 times.

That 2725 is a bit overestimated, but “not that much”: for the first two books, I’m counting exactly 2.5 operations too many; for the others, I have at most 0.5 comparisons too many each. Over 100 books, I have at most 98 \times 0.5 + 2.5 = 51.5 extra operations; the exact number of operations is between 2673 and 2725 (between 44 and 45 minutes). I could do things a little more precisely, but we’ll see in what follows (in the next post) why it’s not very interesting.

If I’m counting for n books, my estimation is

\displaystyle \frac{\frac{n(n+1)}{2}}{2} + 2n = \frac{n^2 + 9n}{4} \text{ operations}

It is possible to prove (but that would really be over the top here) that this behaviour is roughly the one that you get when the books are in a random order. The idea is that if my books are in a random order, I will insert some near the beginning, some near the end, and so, “on average”, roughly in the middle.
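
Here is a minimal sketch of a quick experiment backing that up (my own check, not from the original post): shuffle n book labels at random, count the operations of the first algorithm with the same cost model as above, and compare the average with the \displaystyle \frac{n^2+9n}{4} estimate.

```python
import random

def count_operations(books):
    """Operation count (comparisons + placements) of the first sorting algorithm."""
    shelf, operations = [], 0
    for book in books:
        position = 0
        for placed in shelf:
            operations += 1              # comparison
            if book < placed:
                break
            position += 1
        shelf.insert(position, book)
        operations += 1                  # placement
    return operations

def average_operations(n, trials=200):
    """Average operation count over random shuffles of n books."""
    total = 0
    for _ in range(trials):
        books = list(range(n))
        random.shuffle(books)
        total += count_operations(books)
    return total / trials

n = 100
print(average_operations(n))   # typically around 2670, slightly below the estimate
print((n ** 2 + 9 * n) / 4)    # the estimate: 2725.0
```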

Another sorting algorithm

Now I’m going to explain another sorting method, which is probably less easy to understand, but which is the easiest way for me to continue my argument.

Let us suppose, this time, that I want to sort 128 books instead of 100, because it’s a power of 2, and it makes my life easier for my concrete example. And I didn’t think about it before, and I’m too lazy to go back to the previous example to run it for 128 books instead of 100.

Suppose that all my books are on the table, and that I’m going to make “groups” before putting them back in the bookshelf. And I’m going to make these groups in a somewhat weird, but efficient, fashion.

First, I combine my books two by two. I take two books, I compare them, I put the smaller one (the one that is before in alphabetical order) on the left, and the larger one on the right. At the end of this operation, I have 64 groups of two books, and for each group, a small book on the left, and a large book on the right. To do this operation, I had to make 64 comparisons, and 128 moves of books (I suppose that I always move books, if only to have them in hand and read the authors/titles).

Then, I take my groups of two books, and I combine them again so that I have groups of 4 books, still in order. To do that, I compare the first book of each group of 2; the smaller of the two becomes the first book of my group of 4. Then, I compare the remaining book of the group it came from with the first book of the other group, and I put the smaller one in position 2 of my group of 4. There, I have two possibilities. Either I have one book left in each of my initial groups of 2: in that case, I compare them, and I put them in order in my group of 4. Or I still have a full group of two: then I just have to add them at the end of my new group, and I have an ordered group of 4. Here are two little drawings to distinguish both cases: each square represents a book whose author starts with the indicated letter; each rectangle represents my groups of books (the initial groups of two and the final group of 4), and the red elements are the ones that are compared at each step.

So, for each group of 4 that I create, I need to make 4 moves and 2 or 3 comparisons. I end up with 32 groups of 4 books; in the end, to combine everything into 32 groups of 4 books, I make 32 × 4 = 128 moves and between 32 × 2 = 64 and 32 × 3 = 96 comparisons.

Then, I create 16 groups of 8 books, still by comparing the first element of each group of books and by creating a common, sorted group. To combine two groups of 4 books, I need 8 moves and between 4 and 7 comparisons. I’m not going to get into how exactly to get these numbers: the easiest way to see that is to enumerate all the cases, and while it’s still feasible for groups of 4 books, it’s quite tedious. So to create 16 groups of 8 books, I need to do 16×8 = 128 moves and between 16×4 = 64 and 16×7 = 112 comparisons.

I continue like that until I have 2 groups of 64 books, which I combine (directly in the bookshelf to gain some time) to get a sorted group of books.

Now, how much time does that take me? First, let me give an estimation for 128 books, and then we’ll see what happens for n books. First, we evaluate the number of comparisons when combining two groups of books. I claim that to combine two groups of k elements into a larger group of 2k elements, I need at most 2k comparisons. To see that: every time I place a book in the larger group, it’s either because I compared it to another one (and made a single comparison at that step), or because one of my groups is empty (and there I would make no comparison at all). Since I have a total of 2k books, I make at most 2k comparisons. I also move 2k books to combine my groups. Moreover, for each “overall” step (taking all the groups and combining them two by two), I do overall 128 moves – because I have 128 books, and each of them is in exactly one “small” group at the beginning and ends up in one “large” group at the end. So, for each “overall” step of merging, I’m doing at most 128 comparisons and 128 moves.

Now I need to count the number of overall steps. For 128 books, I do the following:

  1. Combine 128 groups of 1 book into 64 groups of 2 books
  2. Combine 64 groups of 2 books into 32 groups of 4 books
  3. Combine 32 groups of 4 books into 16 groups of 8 books
  4. Combine 16 groups of 8 books into 8 groups of 16 books
  5. Combine 8 groups of 16 books into 4 groups of 32 books
  6. Combine 4 groups of 32 books into 2 groups of 64 books
  7. Combine 2 groups of 64 books into 1 group of 128 books

So I have 7 “overall” steps. For each of these steps, I have 128 moves, and at most 128 comparisons, so at most 7×(128 + 128) = 1792 operations – that’s a bit less than half an hour. Note that I didn’t make any hypothesis here on the initial order of the books. Compare that to the 5050 operations for the “worst case” of the previous computation, or with the ~2700 operations of the “average” case (those numbers were also counted for 100 books; for 128 books we’d have 8256 operations for the worst case and ~4300 with the average case).
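
Here is a minimal sketch of this second algorithm – a bottom-up merge sort – with the same cost model of one operation per comparison and per move; the operation counter and the 128-book check at the end are mine:

```python
import random

def merge_sort_with_count(books):
    """Repeatedly combine groups of 1, 2, 4, ... books, as described above.
    Returns the sorted books and the number of operations (moves + comparisons)."""
    groups = [[book] for book in books]       # start with one group per book
    operations = 0
    while len(groups) > 1:
        next_groups = []
        for i in range(0, len(groups), 2):
            if i + 1 == len(groups):          # odd group out: keep it as is
                next_groups.append(groups[i])
                continue
            left, right = groups[i], groups[i + 1]
            merged = []
            li = ri = 0
            while li < len(left) and ri < len(right):
                operations += 1               # one comparison
                if left[li] <= right[ri]:
                    merged.append(left[li])
                    li += 1
                else:
                    merged.append(right[ri])
                    ri += 1
                operations += 1               # one move
            leftovers = left[li:] + right[ri:]
            merged.extend(leftovers)          # one group is empty: moves only
            operations += len(leftovers)
            next_groups.append(merged)
        groups = next_groups
    return groups[0], operations

books = list(range(128))
random.shuffle(books)
_, operations = merge_sort_with_count(books)
print(operations)   # at most 7 * (128 + 128) = 1792, in practice a bit less
```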

Now what about the formula for n books? I think we can agree that for each overall step of group combining, we move n books, and that we do at most n comparisons (because each comparison is associated with putting a book in a group). So, for each overall step, I’m doing at most 2n operations. Now the question is: how many steps do we need? And that’s where my great post about logarithms (cough) gets useful. Can you see the link with the following figure?

What if I tell you that the leaves are the books in a random order before the first step? Is that any clearer? The leaves represent “groups of 1 book”. Then the second level represents “groups of two books”, the third represents “groups of 4 books”, and so on, until we get a single group that contains all the books. And the number of steps is exactly equal to the logarithm (in base 2) of the number of books, which corresponds to the “depth” (the number of levels) of the tree in question.

So to conclude, for n books, I have, in the model I defined, at most 2 \times n \times \log_2(n) operations.

There, I’m going to stop here for this first post. In the next post, I’ll explain why I didn’t bother too much with exactly exact computations, and why one of the sentences I used to say quite often was “bah, it’s a constant, I don’t care” (and also why sometimes we actually do care).

I hope this post was understandable so far; otherwise don’t hesitate to grumble, ask questions, and all that sort of things. As for me, I found it very fun to write all this 🙂 (And, six years later, I also had fun translating it 🙂 )

Mathematical intuition

Note: this post is a translation/adaptation of a post I wrote in French a few years ago: Intuition mathématique.

I initially wrote that post as a “warning” to my French-reading readers, saying “I might not manage to avoid annoying language tics such as ‘intuitively’, ‘obviously’, ‘it’s easy to see that'”. I think I got slightly better at that since then (or at least slightly better at noticing it and correcting it), but it’s still probably something I do.

I do try to avoid the “intuitive argument” when I explain something, because it used to make me quite anxious when I was on the receiving end of it and had a hard time understanding why it was intuitive. But still, it does happen – it happened to me in an exam once, to fail at explaining “why it’s intuitive”. Something that felt so brightly obvious that I had forgotten why it was so obvious. It’s quite annoying when someone points it out to you… especially when the “someone” is the examiner.

One of the most interesting articles I read on the topic was from Terry Tao, There’s more to mathematics than rigour and proofs. He distinguishes between three “stages” in math education:

  • the “pre-rigorous” stage – you make a lot of approximations and analogies, and probably you spend more time computing than theorizing
  • the “rigorous” stage – you try to do things properly, in a precise and formal way, and you work with abstract objects without necessarily having a deep understanding of what they mean (but you do work with them properly)
  • the “post-rigorous stage” – you know enough of your domain to know which approximations and which analogies are indeed valid, you have a good idea fairly quickly about what something is going to yield when the proof/computation is done, and you actually understand the concepts you work with.

Six years ago, when I wrote the original version of this post, I considered myself a mix of the three. I did start to get some “intuition” (and I’ll explain later what I meant by that), but I was still pretty bad at finishing a complex computation properly. And, obviously, it did (and still does) depend on the domain: in “my” domain, I was approximately there; if you ask me right now to solve a differential equation or to work on complicated analysis, I’m probably going to feel very pre-rigorous. I’ve been out of university for a few years now, and there are definitely some things that have regressed, that used to be post-rigorous and are now barely rigorous anymore (or that require a lot more effort to do properly, let’s say). One of the nice things that stayed, though, is that I believe I’m far less prone to confuse “pre-rigorous” and “post-rigorous” than I used to be (which got me qualified as “sloppy” on more than one occasion).

Anyway, I believe that the categories are more fluid than what Tao says, but I also believe he’s right. And that there’s no need to panic when the person in front of you says “it’s obvious/intuitive that”: she probably just has more experience than you do. And it’s sometimes hard to explain what became, with experience, obvious. If I say “it’s obvious that 2 + 3 = 5”, we’ll probably agree on it; if I ask “yeah, but why does 2 + 3 = 5?”, I’ll probably get an answer, but I may also get some blank stares for a few seconds. It’s a bit of a caricature, but I think it’s roughly the idea.

In everyday language, intuition is somewhat magical, a bit of “I don’t know why, but I believe that things are this way or that way, and that this or that is going to happen”. I tend to be very wary of intuition in everyday life, because I tend to be wary of what I don’t understand. In maths, the definition is roughly the same: “I think the result is going to look like that, and I feel that if I finish the proof it’s going to work”. The main advantage of mathematical intuition is that you can check it, and understand why it’s correct or why it’s wrong. In my experience, (correct) intuition (or whatever you put behind that word) comes with practice, with having seen a lot of things, with linking things to one another. I believe that what people put behind “intuition” may be linking something new (a new problem) to something you’ve already seen, and doing so correctly. It’s also a matter of pattern matching. When it comes to algorithm complexity, which is the topic of the next two posts that I’ll translate, it’s “I believe this algorithm is going to take that amount of time, because it’s basically the same thing as this other super-well-known thing that takes that amount of time”. The thing is, the associations are not always fully conscious – you may end up seeing the result first and explaining it later.

It doesn’t mean that you can never be awfully wrong. Thinking that a problem is going to take a lot of time to solve (algorithmically speaking), when there is a condition that makes it much easier. Thinking that a problem is not very hard, and realizing it’s going to take way more time than you thought. It still happens to me, in both directions. I believe that making mistakes is actually better than the previous step, which was “but how can you even have an idea on the topic?”

I don’t know how this intuition eventually develops. I know mine was much better after a few years at ETH than it was at the beginning of it. I also know it’s somewhat worse right now than when I just graduated. It’s still enough of a flux that I still get amazed by the fact that there are things now that are completely natural to me whereas they were a complete unknown a few years ago. Conversely, it’s sometimes pretty hard to realize that you once knew how to do things, and you forgot, by lack of practice (that’s a bit of my current state with my fractal generator).

But it’s also (still) new enough that I do remember the anxiety of not understanding how things can be intuitive for some people. So, I promise: I’m going to try to avoid saying things are intuitive or obvious. But I’d like you to tell me if I err in my ways, especially if it’s a problem for you. Also, there’s a fair chance that if I say “it’s intuitive”, it’s because I’m too lazy to get into explanations that seem obvious to me (but that may not be for everyone else). So, there: I’ll try to not be lazy 🙂

(Ironically: I did hesitate translating this blog post from French because it seemed like I was only saying completely obvious things in it and that it wasn’t very interesting. My guess is – it was less obvious to me 6 years ago when I wrote it, and so it’s probably not that obvious to people who spent less time than I did thinking about that sort of things 😉 )

So… What’s a logarithm?

This is the translation and update of a blog post originally written in French: “Et donc, c’est quoi, un logarithme ?”.

It’s quite hard to write a series of articles about theoretical computer science without talking about logarithms. Why? Because it’s one of the “useful” functions when one talks about algorithm complexity. So, to make sure that everyone is on the same page, this is a remedial class about logarithms. People who have bad memories about high school maths probably also have bad memories of logarithms; however, logarithms are cool.

Speaking of logarithms, I don’t know how they taught you that in your maths classes; for me, I’m pretty sure it’s been defined from the integral of the reciprocal function. STAY HERE, THAT’S NOT WHAT I’M GOING TO DO. Well, not yet. First, let me try to give some intuition about that thing.

Let us first consider the figure I just successfully inserted in my blog post. I started from a point; from this point I made two branches, at the end of which I added a point; on each of these points, I added two branches, at the end of which I added a point, and so on. I could have continued like that for a while, conceptually – I’d have issues with space and fatigue (it’s going to be annoying really fast to draw points), but I think you can imagine a tree like that with as many levels as you want. Let’s also suppose, because we do computer science, that the “root” of the tree (the very first point I added on my picture at the bottom of it) is at level 0.

Now suppose that I want to count the number of “leaves” of my tree, that is to say the number of points that I have on the highest level of my tree.

It’s pretty clear that the number of leaves depends on the level at which I stop drawing my tree, and that it increases with each level. If I stop drawing at level 0, I have 1 leaf. If I stop at level 1, I multiply that by 2 (because I made two branches), so that’s 2. If I stop drawing at level 2, I multiply the number of leaves at level 1 by 2 again, so that’s 4. And for every level, I take the number of leaves from the previous level and I multiply it by two again. At level 3, I’ll have 2×2×2 = 8 leaves, and at level 4, 2×2×2×2 = 16 leaves. To know the number of leaves at level n, where n is the number of my level, I multiply 2 by itself n times, which can be written 2^n (read “two to the power of n”).

Now suppose that I don’t want to know the number of leaves corresponding to a level, but the number of levels corresponding to a given number of leaves. For instance, I have 2 leaves: that’s level 1. 16 leaves: that’s level 4. 32 leaves, level 5. And if I have 128 leaves, that’s level 7. It gets a bit more complicated if I have, say, 20 leaves. 20 leaves, that’s level “4 and a bit”: I started drawing level 5, and then I stopped because I got too tired to finish.
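
In code, that inverse operation is just a base-2 logarithm (a small illustration of mine):

```python
import math

for leaves in (2, 16, 32, 128, 20):
    print(leaves, "leaves -> level", math.log2(leaves))
# 2 -> 1.0, 16 -> 4.0, 32 -> 5.0, 128 -> 7.0, and 20 -> about 4.32: "level 4 and a bit"
```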

This operation (finding the level for a given number of leaves) is the inverse function of the previous “power” operation (finding the number of leaves for a given level), and that’s a logarithm. I say it’s the inverse function because it allows me to “undo” the previous operation. If I take a number n, I compute its power of 2, it yields 2^n, and if I take the logarithm of that, I get

\log(2^n) = n

Similarly, if I have a number n, that I take its logarithm, and that I compute the power of two of the result, I get

2^{\log n} = n

Alright, everything’s nice and shiny, but what happens if, instead of making two branches at each step, I make 3? With the same reasoning as before, at level n, I have 3×3×…×3 leaves, that is 3^n. And, well, in the same way, I can define a logarithm that would be the inverse of this 3^n. But I do want to be able to tell one from the other, so I write the power to which they correspond as a subscript, like this:

\log_2, \log_3

with

3^{\log_3 n} = n

and

\log_3(3^n) = n

That subscript is the “base” of the logarithm. Allow me a small remark about the logarithm in base 10 (something similar is true for other integer bases, but let me avoid that). It’s very easy to have a rough estimate of the logarithm in base 10 of a number: it’s the number of digits of said number, minus 1. We have \log_{10}(10) = 1 and \log_{10}(100) = 2 (because 10^2 = 100); the logarithm base 10 of all the numbers between 10 and 100 is between 1 and 2. In the same way, you can say that the logarithm base 10 of 14578 is between 4 and 5, because 14578 is between 10000 = 10^4 and 100000 = 10^5, which allows us to bracket the value of the logarithm. (I’m hiding a number of things here, including the reasons that make that reasoning actually correct.)
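
A quick way to check that rule of thumb (my own snippet, nothing from the original post):

```python
import math

for number in (10, 100, 14578):
    print(number, len(str(number)) - 1, math.log10(number))
# 10 -> 1 vs 1.0, 100 -> 2 vs 2.0, 14578 -> 4 vs about 4.16 (indeed between 4 and 5)
```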

As an aside, one interesting property of the logarithm is that it can “compress” orders of magnitude, which is handy if you want to represent on a single sheet of paper quantities that span a very large range. For example, in chemistry, you may want to represent concentrations that go from 10^{-10} to 0.1, and on a “normal” scale, you wouldn’t be able to distinguish between 10^{-10} and 10^{-9}. You can however use a logarithmic scale, so that you represent with the same amount of space “things that happen between 10^{-10} and 10^{-9}” and “things that happen between 0.01 and 0.1”. For large scales, xkcd made a very nice drawing with the observable universe seen at log scale: Height.

Back to the previous point – now I have defined the concept of “base” for my logarithm – that’s the number corresponding to the power function that I invert to get my logarithm. The question is – what prevents me from using “exotic” bases for my logarithms? The answer is “nothing”. I can define a logarithm in base 3.5 (corresponding to the power to which I raise 3.5 to get the number for which I’m computing the logarithm base 3.5), in base \displaystyle \frac 5 3 (corresponding to the power to which I raise \displaystyle \frac 5 3 to get the number for which I’m computing the logarithm base \displaystyle \frac 5 3), or even in base \pi (corresponding to… okay, you get the idea) if I want to. It’s less “intuitive” than the explanation with the tree and the number of levels (because it’s pretty hard to draw \pi branches), but if you see it as the inverse of the power of the same number, I hope you get the idea.

Now the next question you can ask is whether all these logarithms are somehow linked, or whether you can express them in some common way. The answer is yes. There exists the following relation between logarithms of any three bases a, b and c:

\displaystyle \log_a(x) = \frac{\log_c b}{\log_c a}\log_b(x)

(Yes, that’s typically the kind of thing that I had on my exam formula sheet because I always get confused, especially when I’m stressed out… like I am in an exam 😉 )

Also observe that the base of the logarithm c absolutely does not matter: the ratio between the logarithms of two numbers stays the same independently of the base.
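
If you want to convince yourself numerically, here is a tiny check (the specific values of a, b, c and x are arbitrary choices of mine):

```python
import math

a, b, c, x = 2, 10, math.e, 42.0
left = math.log(x, a)                                     # log_a(x)
right = (math.log(b, c) / math.log(a, c)) * math.log(x, b)
print(left, right)   # both are log2(42), about 5.392, whatever base c we pick
```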

The important thing to remember here is that all logarithms are equal “up to a constant factor”; they have the same “asymptotic behavior” (I’m giving the terms here, but I’ll write a specific post on the topic, because it’s a bit long to explain). For theoretical computer science, it’s interesting because we’re mostly interested in behaviors “up to a constant” when we’re talking about execution time or memory consumption of an algorithm. Again – I’ll come back to this later – take it as a “spoiler” of the next episodes 🙂

People from different backgrounds tend to prefer different bases for their logarithms; the three most common bases are 2, 10 and e \approx 2.71828. Here, I feel that it’s possible someone just read that and went “wait, what?”. As far as powers go, there is a “special” one: the powers of e. e is approximately equal to 2.71828, and the function x \mapsto e^x has its own name and its own notation: it’s the exponential function, and e^x = \exp(x). It’s special because of an interesting property: it is equal to its derivative. And because the exponential function is often used, its inverse, the “natural logarithm” (logarithm in base e) is also used a lot, and we write \ln(x) = \log_e(x). It’s also been brought to my attention that some conventions (and, I presume, authors) use \text{lg}(x) = \log_2(x) and use \log(x) = \log_{10}(x). Wikipedia has more opinions on the question.
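
(Two small numerical illustrations of that paragraph – a finite-difference check that exp is very close to its own derivative, and the three usual bases with Python’s names for them; the values are arbitrary:)

```python
import math

# exp is its own derivative: a crude finite-difference check at an arbitrary point
x, h = 1.3, 1e-6
print((math.exp(x + h) - math.exp(x)) / h, math.exp(x))   # the two values are very close

# The three most common bases
x = 1000.0
print(math.log(x))     # natural logarithm (base e), written ln above
print(math.log2(x))    # base 2
print(math.log10(x))   # base 10
```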

It also turns out that the natural logarithm is related to the reciprocal function \displaystyle x \mapsto \frac 1 x. Formally, we write that

\displaystyle \ln(x) = \int_1^x \frac 1 t \:\text{d}t

And this is what it means, graphically:

The red curve represents the function \displaystyle x \mapsto \frac 1 x. The grey zone corresponds here to the integral from 1 to 6: the area of that zone is equal to \ln(6). And you can represent the natural logarithm of any value (greater than 1) by the area of the zone between the x-axis and the \displaystyle x \mapsto \frac 1 x curve from 1 to that value. Side remark: this area is equal to 1 when you take it from 1 to e (because \log_b(b) = 1 for all b, so \ln(e) = \log_e(e) = 1).
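
(You can also check the “area” definition numerically. Here’s a minimal sketch that approximates the area under x \mapsto \frac 1 x with a plain Riemann sum – ln_by_area is my own name, just for illustration, not how math.log actually works:)

```python
import math

def ln_by_area(x, steps=100_000):
    """Approximate ln(x), for x >= 1, as the area under t -> 1/t between 1 and x
    (midpoint Riemann sum; purely illustrative)."""
    width = (x - 1) / steps
    return sum(1.0 / (1 + (i + 0.5) * width) for i in range(steps)) * width

print(ln_by_area(6), math.log(6))            # both close to 1.7917...
print(ln_by_area(math.e), math.log(math.e))  # both close to 1
```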

And, to conclude, a nice property of the logarithm function: you can write (in any logarithm base):

\log(a \times b) = \log(a) + \log(b)
\displaystyle \log \left(\frac a b\right) = \log(a) - \log(b)

This is probably easiest to see via the power functions. Let us write things down. We have, on the one hand:

2^{\log_2(a\times b)} = a \times b

and, on the other hand,

2^{\log_2 a + \log_2 b} = 2^{\log_2 a} \times 2^{\log_2 b} = a \times b

Since, if x^m = x^n, then m = n, putting everything together yields the first result. The second equality (with the minuses) can be derived in exactly the same way (and left as an exercise to the reader 😉 ).

Note that it’s because of that kind of property that logarithms were used in the first place. If you have a large table with a lot of “number/logarithm” correspondences, and if you want to multiply two large numbers easily, you look up in the table the logarithms of both numbers, you add them (which is much easier than multiplying them), and you look up the corresponding value (by reading the table in the other direction) to get the value of the multiplication. Historically, that kind of table appeared in the 17th century (thank you, Mr Napier) to make astronomical computations easier; slide rules also work on that principle; and all of this only started to disappear when, quite recently, mechanical and later electronic calculators appeared…
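
(Just for fun, here’s a toy Python sketch of that multiplication trick – a fake “table” built with math.log10 and rounded to 4 decimals, the way a printed table would be; real historical tables were organized quite differently:)

```python
import math

# A toy log table: base-10 logarithms of 1..9999, rounded to 4 decimal places
table = {n: round(math.log10(n), 4) for n in range(1, 10000)}
reverse = {v: n for n, v in table.items()}   # for reading the table "in the other direction"

def multiply_with_table(a, b):
    """Multiply a and b the pre-calculator way: add the logarithms,
    then look for the entry whose logarithm is closest to the sum."""
    s = table[a] + table[b]
    closest = min(reverse, key=lambda v: abs(v - s))
    return reverse[closest]

print(multiply_with_table(123, 57), 123 * 57)   # roughly 7011; the table's precision limits the accuracy
```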

Anyway. Logarithms are cool. And now I hope you’re also convinced of that 🙂

What IS Theoretical Computer Science?

(Note: this is a “present-day version” of an article that I published in French around 5 years ago.)

As I mentioned when starting this blog, there are a few blog posts from my French blog that I kind of want to translate to English. This is part of that effort: I wrote a series of articles trying to explain a bit of math and theoretical computer science, and I believe they are pretty good candidates for translation.

A few years ago, I was studying at ETH Zürich, and I got asked quite a lot “So, what are you doing these days?” – a question that was fairly hard to answer – typically, I said “Theoretical computer science. Basically maths.” It wasn’t very satisfying, but getting into the details of things when answering a “social” question wasn’t necessarily a good idea either. But maybe there are some people who ARE interested in a bit more detail (if you’re one of the people to whom I answered that at the time, and you’re still interested, now is your chance 😉 ), and maybe I can explain things to people who haven’t been swimming in that area for a few years. To give an idea, I’m aiming at the “I haven’t done maths since high school” level. I’m not sure I managed it – it’s been a while since I left high school, and I did do a bit of maths since then 🙂

So this is first a very general post to give a few elements, and then I want to give a bit more detail. The idea is that I still love what I studied a few years ago, and I want to explain why I love it. It’s going to take a bit of work – the first thing I did in the French series was to write a post about “but, but, but… what IS a logarithm?”. At that point I was a bit afraid it would be an “insult” to the scientific knowledge of my readers, but I still did it because I thought it was fun and because I thought that logarithms were a very useful notion. It so happened that said logarithm blog post was, by far, my most popular post ever (it’s still getting hits from search engines these days), so there. And then, there will be a few blog posts depending on the mood: a bit of complexity theory, a bit of graph theory (I like graph theory), a bit of probability (lol), and the “prerequisites” for all of that (my goal was to be able to explain P vs NP and how to define what NP-complete means, and I eventually got there, and it was a nice exercise).

But still, let’s get to the point. What is theoretical computer science? The first answer I have is that it’s the theory of everything you can do with a computer. At the heart of it, you have the notion of an algorithm. An algorithm is just a series of instructions. You can think of a cooking recipe as an algorithm:

  1. To make béchamel sauce, you need butter, flour and milk.
  2. Measure out identical quantities of butter and flour.
  3. In a pan over medium heat, melt the butter and add the flour; mix until you get a thick paste.
  4. While the béchamel sauce does not have the desired texture, add milk and mix.
  5. If you wish, add salt, pepper, nutmeg or cheese.

You have instructions (which are numbered) that can be repeated (while the sauce doesn’t have the desired texture) or that you only execute under certain conditions (you only add nutmeg if you want to).
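
(To make the parallel with code explicit, here’s a toy, runnable Python version of that recipe – the “texture” is just a number between 0 and 1, and everything else is made up for illustration:)

```python
import random

def make_bechamel(add_seasoning=False):
    """A toy translation of the recipe above: a sequence of steps,
    a 'while' loop (step 4) and an 'if' (step 5)."""
    steps = ["measure equal amounts of butter and flour",
             "melt the butter, add the flour, mix into a thick paste"]
    texture = 0.0                      # 0 = runny paste, 1 = perfect béchamel
    while texture < 1.0:               # repeat until the texture is right
        steps.append("add milk and mix")
        texture += random.uniform(0.2, 0.4)
    if add_seasoning:                  # executed only under a certain condition
        steps.append("add salt, pepper, nutmeg or cheese")
    return steps

for step in make_bechamel(add_seasoning=True):
    print(step)
```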

For another process that is more commonly described as an algorithm, you can think of the long division algorithm you learnt at school.
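
(That one is also easy to write down as code. A sketch of the digit-by-digit procedure, for a non-negative dividend and a positive divisor – the function name is mine:)

```python
def long_division(dividend, divisor):
    """School-style long division: bring the digits down one at a time,
    keeping track of the running remainder."""
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)
        quotient_digits.append(str(remainder // divisor))
        remainder %= divisor
    return int("".join(quotient_digits)), remainder

print(long_division(14578, 7))   # (2082, 4), and indeed 7 * 2082 + 4 = 14578
```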

I’m a bit biased on the categories that you can fit into theoretical computer science, because there are domains that I know better than others, but here’s a strictly non-exhaustive list (if I forgot your favorite domain… that’s because I don’t know what to say about it; don’t hesitate to explain it in the comments). For a more complete list, I’ll point you to the Wikipedia article about theoretical computer science, which itself is based on the list from SIGACT (the ACM special interest group on algorithms and computation theory). As for the fact that “I do maths” – theoretical computer science is often considered part of discrete mathematics and, in general, proving that something works often implies writing pretty proofs full of pretty maths (or full of ugly maths, but that’s far less satisfying).

I’m going to allow myself a fair amount of approximations here (in particular, I’m conflating problems and problem instances) – I’m trying to write things simply without writing overlong paragraphs. Those who know will be able to correct things on their own; for the others: it’s a first approximation, it’s on some level wrong by definition, but I do intend to make it a bit less wrong in the following posts.

So, to me, theoretical computer science contains among others:

  • Algorithmics. Given a problem, how do you solve it with an algorithm, how do you prove that your solving method (the algorithm) is correct, and how much time do you need to get a result?
  • Complexity theory. It includes, but does not restrict itself to, the “how much time do you need to get a result?” from the previous point. We’re also interested in the question “given a problem, what are the minimum, average and worst case time and memory consumption that will allow me to solve it?”.
  • Graph theory. A graph is a mathematical structure made of points (the “vertices” of the graph) and lines (the “edges” of the graph). One example could be a subway map: the stops are the vertices, the edges are the connections from one stop to another on a subway line. In graph theory, we look at graph properties (for example: can I go from one point to another; what is the shortest path from one point to another; can I still go from point A to point B if I remove this vertex – or, in my previous example, if I shut down this subway station), and at algorithms to decide these properties (how do I find the shortest path from one point to another in the graph).
  • Data structures. How do I store the data of the problem I’m interested in in the most efficient way possible (from a time and space perspective) for the problem I want to solve? Or, to go back to the previous example, how do I store my subway map so that I can compute stuff about it? Data structures and algorithms go hand in hand – you need data on which to run your algorithm, and your data without algorithms is quite often just storage.
  • Computational geometry. How to represent and solve with a computer problems that are related to geometry? A common example is the post office problem: given the location of all post offices in a city, how do I efficiently find the one closest to my home? If I’m only interested in the one that’s closest to my home, I can compute the distance between my home and all post offices, but how can I improve on that if I’m interested in that information for all houses in the city? And what do I do if I shut down or open a new post office?
  • Randomness. What is the impact of randomness on all this? For some algorithms, it’s interesting to add some randomness to the computation (I flip a coin; if I get heads the algorithm does operation X, if I get tails the algorithm does operation Y). What impact does that have on algorithms – in particular on their correctness – and how do we assess the time that a randomized algorithm will need to (correctly) solve a problem? Random data structures can also be interesting. For instance, you can define random graphs: you take a set of points, and you add edges between them with some probability (see the small sketch right after this list). Depending on how you define your random graph, it can be a fairly good model of social networks (who’s friends with whom)… or of the human brain.
  • Cryptography. If Alice wants to send a message to Bob without Eve being able to read it, and if Bob wants to be sure that the message he just read does come from Alice, how can they do that? And how do we prove that Eve cannot read the message, and that Bob can be sure of where the message comes from?
  • Computability. It turns out you can’t solve everything with an algorithm, and you can even prove that some things cannot be computed by an algorithm. The most famous example is the “halting problem”, which is a bit meta, but I hope you’ll forgive me. You can prove that there is no program that can decide whether a given program eventually stops and gives a result or whether it just loops forever. The proof is very pretty, but it kind of hurts the brain the first time you see it; maybe I’ll eventually write a blog post about it. (Note: I said that “maybe” five years ago; maybe I should do it for real 😉 )
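
(As promised in the Randomness item: a minimal sketch of an Erdős–Rényi-style random graph, where every possible edge is added independently with probability p – just a few lines, not a proper graph library:)

```python
import random

def random_graph(n, p):
    """n vertices (numbered 0..n-1); each of the n*(n-1)/2 possible edges
    is kept independently with probability p."""
    return [(u, v) for u in range(n) for v in range(u + 1, n)
            if random.random() < p]

edges = random_graph(n=10, p=0.3)
print(len(edges), "edges out of", 10 * 9 // 2, "possible ones")
print(edges)
```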

There, those are all things for which I have at least some notions. Theoretical computer science includes some more stuff for which my knowledge is mostly zero: information theory (I’m somewhat ashamed of that), machine learning (or how do you make a computer recognize pictures of cats), formal methods (or how do I prove that my code does not have bugs), quantum computing (would a quantum computer be better at recognizing pictures of cats than a normal computer?), computational biology (how do I check whether two DNA sequences are close to one another or not), and I probably still forget a few.

Also note that different people may have different opinions about what is or is not theoretical computer science. I’m not necessarily convinced that most machine learning as it is applied today is theoretical computer science, because it still looks very experimental, as in “we tweaked that parameter and consequently the computer is 2.2% better at recognizing cats”. (Note: I’d probably be less “critical” now than I was five years ago on that topic. Maybe. Probably.) Wikipedia also lists distributed computing as theoretical computer science. It most definitely intersects with it, but I wouldn’t say that distributed computing is a subset of theoretical computer science. Same for computational biology: some elements are part of it (the example of the DNA strings), but I’m pretty sure that not all computational biologists would define themselves as theoretical computer scientists 🙂

I thought this would be a pretty short blog post: I think that’s a failure. I hope it was still somewhat interesting; if there are things you think I could talk about in a future blog post in this category… well, don’t hesitate. As I mentioned, I actually have some amount of content that I aim to translate from my archives, but I am still open to suggestions – my goal is to Blog More 🙂