Planarity, minors and donuts

Ce billet est la traduction d’un billet écrit initialement en français : TPA – Planarité, mineurs et donuts.

Let’s have a little bit of graph theory today – I want to present (without a proof) a fun result about graph planarity. Which means – first, let’s define a few things and draw a few pictures.

A graph is a set of vertices (“points”) connected by edges. To describe a graph, I can say “the set of vertices is labeled 1, 2, 3, 4 and 5, and I have edges between vertices 1 and 2, 2 and 3, 3 and 4, 4 and 1, 1 and 3, 2 and 4, 1 and 5, and 4 and 5”. I can also draw it on a sheet of paper, and there are several ways to draw that statement: all the drawings here are drawings of the graph I just described.
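To make this concrete, here is a minimal sketch (in Python, my choice for illustration) of that same graph stored as an adjacency map, with each vertex mapped to its set of neighbours:

```python
# The example graph: vertices 1..5, edges as listed in the text.
graph = {1: set(), 2: set(), 3: set(), 4: set(), 5: set()}

def add_edge(g, u, v):
    # Edges are undirected, so record the connection in both directions.
    g[u].add(v)
    g[v].add(u)

for u, v in [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3), (2, 4), (1, 5), (4, 5)]:
    add_edge(graph, u, v)

print(sorted(graph[1]))  # neighbours of vertex 1: [2, 3, 4, 5]
```

Note that nothing in this structure says anything about how the graph is *drawn* – the drawing is a separate choice, which is exactly why the same graph has several drawings.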

In these three drawings, two are plane: the graph is drawn such that no two edges of the graph cross. The first drawing is not plane: the edges 1-3 and 2-4 cross. A graph that can be drawn in the plane (that is to say, on a flat piece of paper) such that no two edges cross is called a planar graph. The fact that we have an adjective for such graphs should tell you that there exist non-planar graphs: graphs that cannot be drawn in the plane without two edges crossing. Two typical (and useful) examples are the complete graph on 5 vertices (5 vertices, and all possible edges), denoted $K_5$, and the complete bipartite graph on 3×2 vertices (2 groups $A$ and $B$ of three vertices each, and all possible edges between vertices of group $A$ and vertices of group $B$), denoted $K_{3,3}$. Here are their “usual” drawings; as we saw in the previous figure, these drawings are not unique. But you can search for a long, long time for a drawing where no two edges cross (and not find one).

And there is a theorem (Wagner, 1937) that says that if a graph is not planar, it’s because somewhere in its structure it contains something that looks like $K_5$ or $K_{3,3}$. More precisely:

A finite graph is planar if and only if it does not have $K_5$ or $K_{3,3}$ as a minor.

Okay, I cheated a bit, because I haven’t defined minor yet. So let’s do that:

A minor of a graph $G$ is a graph that is obtained from $G$ by doing any number of the following operations: removing an edge; removing a vertex; contracting an edge.

Removing an edge is easy: if two vertices are connected by an edge, we can decide that in fact they aren’t, and remove the edge. Removing a vertex is also easy: pick a vertex, remove it, and remove all the edges connected to it, because otherwise we don’t know where they’re going anyway. Contracting an edge is a tiny bit more complicated: the idea is that we pick two vertices that are connected by an edge, and we “merge” them into a single vertex. The resulting vertex is connected to every vertex that either of the two original vertices was connected to in the original graph. One can imagine “pinching” the two vertices so that they get closer and closer until BOOM they fuse together. Here is a small example, in which I contract the red edge, merging its two endpoints into a single vertex:
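The three operations are easy to sketch in code. This is a hypothetical implementation of my own (the function names and the adjacency-map representation – a dict from each vertex to its set of neighbours – are my choices, not code from any library):

```python
def delete_edge(g, u, v):
    # Forget the connection in both directions.
    g[u].discard(v)
    g[v].discard(u)

def delete_vertex(g, u):
    # Remove u along with every edge attached to it.
    for w in g.pop(u):
        g[w].discard(u)

def contract_edge(g, u, v):
    """Merge v into u: u inherits v's neighbours (no self-loop), v disappears."""
    g[u] |= g[v] - {u, v}
    for w in g[v] - {u}:
        g[w].discard(v)
        g[w].add(u)
    g[u].discard(v)
    del g[v]

# Contracting one edge of a triangle leaves a single edge.
triangle = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
contract_edge(triangle, 1, 2)
print(triangle)  # {1: {3}, 3: {1}}
```

Note that the merged vertex keeps a single edge to each former common neighbour: minors are about simple graphs here, so no parallel edges or self-loops appear.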

So what Wagner says is: “if I fiddle with a graph a bit and I manage to obtain $K_5$ or $K_{3,3}$, then I cannot draw the graph without crossing edges. But if I cannot get $K_5$ or $K_{3,3}$, then I can draw the graph without crossing.”

There’s a theorem that expands this idea of “forbidden minors”. It’s a theorem by Robertson and Seymour, and its proof spans 20 papers, published from 1983 to 2004. The theorem is stated as follows:

Every family of graphs that is closed under minors can be defined by a finite set of forbidden minors.

Let’s explain. A family of graphs that is closed under minors is a set of graphs such that, if I take any graph in this family, and I take a minor of it (with the aforementioned definition), then the minor is also in that family. What Robertson and Seymour say is that, for such a family, there exists a finite set of forbidden minors: if a graph has one of these forbidden graphs as a minor, then that graph is not part of the family.

Wagner’s theorem is a restriction of Robertson-Seymour for planar graphs. Planar graphs are a family of graphs that is closed under minors: if I take a minor of a planar graph, I can still draw that minor without crossing any edges, so it is planar as well. And there is a finite set of forbidden minors, specifically $K_5$ and $K_{3,3}$: if I find one of these minors in a graph, then the graph is not planar. Wagner, for the case of planar graphs, actually states the forbidden minors explicitly, which Robertson-Seymour does not.

And that’s the fun part: in general, we do not know the forbidden minors in question. We know they exist; we know there’s a finite number of them; but we do not know what they are. And, as a tiny example to conclude this post: suppose now that I want to draw a graph not on a sheet of paper, but on a torus – or, if you want, on a donut.

The idea would be to draw vertices on a donut, and to connect them with edges, exactly as I do in the plane. If I can draw the graph this way without crossing edges, the graph is called toroidal (or donutoidal if you want) – it may or may not be planar, since anything I can draw in the plane I can also draw on the donut. The family of toroidal graphs is closed under minors, so there exists a finite set of forbidden minors for this family. So far, 17,523 of them have been found; it is yet unknown how many there are in total. Fun, isn’t it?

(And if you want to find them all, you may want to start by making donuts.)

NP-complete problems

In a previous post, I tried to explain the P vs NP problem. I gave a rough definition of the NP complexity class: NP is the class of problems for which, for each “yes” instance, I can get convinced in polynomial time that the answer is indeed yes, provided that someone hands me a proof of that fact. And, at the end of that post, I vaguely mentioned (without naming it) the notion of NP-complete problems.

The usual definition of an NP-complete problem is that it’s a problem which belongs to NP, and which is NP-hard. So now I need to explain what NP-hard means. Informally, it’s a problem that is at least as hard as all the other problems of the NP class. Another informal way to say it is that it’s at least as hard as another NP-hard problem. The problem with that definition is that it’s a bootstrap problem: you’d need a first NP-hard problem to compare the others to.

The archetypal NP-hard problem is SAT, which was the topic of my previous post. Since SAT is also in NP, SAT is also the archetypal NP-complete problem. It’s not a priori obvious that SAT is NP-hard. But one can prove that even 3-SAT is NP-hard, where 3-SAT represents the CNF instances of SAT where all clauses have exactly 3 literals. I’m not going to go through the proof, because that would imply that I’d explain a lot of things that I kind of don’t want to explain. For the people who are interested in the topic, this was proven in 1971, and it is known as the Cook-Levin theorem.

Now that we do have an NP-hard problem, we can find other ones by reduction. We reduce an NP-hard problem (for instance, 3-SAT) to another problem that we want to prove is NP-hard (call that problem A). To do that, we show that if we can solve A efficiently (that is to say, in polynomial time), then we can solve 3-SAT efficiently.

The idea of that kind of proof is to take an arbitrary instance of the NP-hard problem, and to transform it, in polynomial time, into an instance of A that allows us to conclude on the original instance. Now suppose that we can solve problem A (that is to say, any instance of problem A) in polynomial time. Then we can take any instance of 3-SAT, transform it into an instance of problem A, solve that instance of problem A, and get a conclusion. The transformation is done in polynomial time; the solving of A is done in polynomial time; summing both yields again a polynomial, so we can solve any instance of 3-SAT in polynomial time, so we can solve 3-SAT in polynomial time. Easy.

NP-complete problems are, in a way, “complexity equivalent”: if we know how to solve one in polynomial time, then we can solve all of them in polynomial time as well, and we can solve all problems from NP in polynomial time (since NP-hard problems are at least as hard as any problem from NP). So, if we find a way to solve in polynomial time all the instances of any NP-complete problem… we proved that P = NP. And won a million dollars. And surprised a whole lot of people.

There is a large set of problems that are known to be NP-hard, or NP-complete if they are in NP as well. And there are people who look at… exotic problems, shall we say: let me give a few links for the people who are interested and may get a laugh out of it:

After this fun interlude, I’m going to give an example of reduction, so that you can see how that kind of proof works. I’m going to prove that the CLIQUE problem is NP-hard. It’s pretty much the basic example that everybody gives when teaching that kind of thing, but there must be a reason for that (I guess: it’s fairly easy to explain, and the reduction is also fairly easy compared to some others). What I’m explaining here largely comes from the CLRS, that is to say the Cormen, Leiserson, Rivest and Stein, that is to say Introduction to Algorithms – the reference book for A LOT of algorithms classes. (And I have a copy at home and a copy at work – you never know.)

The CLIQUE problem is described as follows: given a graph (a set of vertices and edges) $G$, does it contain a clique of size $k$? A clique of size $k$ is a set of $k$ vertices that are all connected to one another. For instance, this thing below is a clique of size 8. Also note that it does contain cliques of size 1 (single vertex), 2 (an edge), 3 (a triangle), 4, 5, 6, 7, since we can find sets of vertices of that size that are all connected to one another.
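As a sketch of the problem itself, here is a naive CLIQUE decision procedure; the adjacency-map representation (a dict from each vertex to its set of neighbours) is my own choice for illustration, and the search is exponential in $k$ – we’ll come back to that point later:

```python
from itertools import combinations

def has_clique(g, k):
    """Does g contain k vertices that are all pairwise connected?
    Tries every group of k vertices: hopeless in general, fine for small cases."""
    return any(all(v in g[u] for u, v in combinations(group, 2))
               for group in combinations(g, k))

# The complete graph on 4 vertices contains cliques of every size up to 4.
k4 = {u: {v for v in range(4) if v != u} for u in range(4)}
print(has_clique(k4, 4), has_clique(k4, 5))  # True False
```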

Let us reduce 3-SAT to that problem; since 3-SAT is NP-hard, CLIQUE will then be proven to be NP-hard as well. Moreover, CLIQUE is in NP: if I provide you with $k$ vertices, it’s possible to check in polynomial time that all these vertices are indeed connected to one another. So, if CLIQUE is NP-hard, CLIQUE is NP-complete.

To start with my reduction, I pick an arbitrary 3-SAT formula – more precisely, a 3-CNF-SAT formula (I explained the meaning of this in my post on SAT), which is a formula that has that look:

$(x_1 \vee x_2 \vee x_3) \wedge (\bar x_2 \vee x_4 \vee \bar x_5) \wedge (x_1 \vee \bar x_2 \vee \bar x_3) \wedge ...$

that is to say, a set of $m$ clauses connected by AND, each composed of 3 literals connected by OR.

From there, we create a graph that contains $3m$ vertices. Every vertex corresponds to a literal of the formula; vertices can be duplicated. For the above formula (truncated before the …), this yields the following vertices:

$x_1, x_2, x_3 ; \bar x_2, \bar x_4, \bar x_5 ; x_1, \bar x_2, \bar x_3$.

Then, we add edges almost everywhere. We don’t add edges within the “groups” that correspond to the clauses, and we also do not add edges between literals that are incompatible, that is to say a literal and its negation. If I have two literals $x_1$ and $\bar x_1$, I’m not creating an edge between them. For the formula above, this is the graph in question:

And these are the edges that are NOT in the previous graph:
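As an aside, the construction can be sketched in a few lines of Python; the encoding (signed integers for literals, so 2 means $x_2$ and -2 means $\bar x_2$, with clause indices to keep duplicated literals distinct) is my own choice for illustration:

```python
from itertools import combinations

def reduction_graph(clauses):
    """Build the CLIQUE instance from a 3-CNF formula given as clause tuples."""
    vertices = [(i, lit) for i, clause in enumerate(clauses) for lit in clause]
    edges = {frozenset((u, v))
             for u, v in combinations(vertices, 2)
             if u[0] != v[0]        # not inside the same clause
             and u[1] != -v[1]}     # not a literal and its negation
    return vertices, edges

# The (truncated) formula from the text:
# (x1 v x2 v x3) ^ (NOT x2 v x4 v NOT x5) ^ (x1 v NOT x2 v NOT x3)
clauses = [(1, 2, 3), (-2, 4, -5), (1, -2, -3)]
vertices, edges = reduction_graph(clauses)
print(len(vertices), len(edges))  # 9 24
```

Of the $27$ cross-clause pairs, only the three incompatible ones ($x_2$/$\bar x_2$ twice, $x_3$/$\bar x_3$ once) are dropped, hence the 24 edges.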

And now that we have that graph, we want to show that the SAT formula can be satisfied if and only if the graph (the first one) has a clique of size $m$, where $m$ is the number of clauses in the SAT formula. So I need to prove two things:

• if the formula can be satisfied, then the graph has a clique of size $m$,
• if the graph has a clique of size $m$, then the formula can be satisfied.

Let’s start with the first point. Suppose the formula can be satisfied. This means that we can assign a value to all the variables so that the formula is satisfied, which means that all clauses are satisfied by this assignment. This also means that, for each clause, there’s a literal with value 1 (either a “positive” literal, for instance $x_1$ if the variable is assigned 1, or a “negative” literal, for instance $\bar x_1$, if the variable is assigned 0). Now remember that we created vertices grouped by “clause”, so for each clause, we can pick the vertex corresponding to that 1-valued literal (and if several literals have value 1 in a clause, we pick an arbitrary one among them). Since we pick one vertex per clause, and we have $m$ clauses, we have $m$ vertices. We now want to prove that these $m$ vertices form a clique, which means that there is an edge between every pair of vertices of that set. We constructed the graph such that there is no edge between two vertices of a given clause – we’re fine there, because we chose exactly one vertex per clause. Moreover, there is no edge between a literal and its negation – we’re also fine there, because we only picked literals that have value 1, and $x_i$ and $\bar x_i$ can’t both have value 1. These are the only cases in which there is no edge between two vertices; so our $m$ vertices are all connected with each other, which yields a clique of size $m$, which is what we wanted.

Now, let’s look at the second point: if the graph has a clique of size $m$, then the formula can be satisfied. Suppose that the graph has a clique of size $m$. Since the vertices corresponding to the literals of a clause are not connected, this means that we have a vertex for each clause. We can give the value 1 to all the literals corresponding to these vertices. We cannot have a clique containing $x_i$ and $\bar x_i$, because there would be no edge between these two vertices, which goes against the definition of a clique, where all edges are present. So if a clique of size $m$ exists, this means that we found, for each clause, a literal whose value can be 1, and that all of these literals are compatible with each other. And so, we can satisfy the formula corresponding to the graph, which we also wanted to prove.

So, if we can solve CLIQUE in polynomial time, then we can solve 3-SAT in polynomial time; since 3-SAT is NP-hard, CLIQUE is NP-hard, so CLIQUE is NP-complete, which concludes the proof.
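As a sanity check (not part of the proof), here is a self-contained sketch that builds the reduction graph implicitly and searches for a clique of size $m$ by brute force, on the tiny formula from above and on a trivially unsatisfiable one; the literal encoding (signed integers, clause indices) is my own choice:

```python
from itertools import combinations

def satisfiable_via_clique(clauses):
    """Decide a 3-CNF formula by looking for an m-clique in the reduction graph."""
    m = len(clauses)
    vertices = [(i, lit) for i, clause in enumerate(clauses) for lit in clause]
    def connected(u, v):
        # Edge rule from the reduction: different clauses, compatible literals.
        return u[0] != v[0] and u[1] != -v[1]
    return any(all(connected(u, v) for u, v in combinations(group, 2))
               for group in combinations(vertices, m))

print(satisfiable_via_clique([(1, 2, 3), (-2, 4, -5), (1, -2, -3)]))  # True
print(satisfiable_via_clique([(1, 1, 1), (-1, -1, -1)]))              # False
```

Of course this decides satisfiability in exponential time – the reduction transfers hardness, it doesn’t make anything fast.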

Showing that a problem is NP-complete is, in a way, an indicator that the problem in question is difficult, but this claim needs to be heavily qualified. For one thing, it does not say anything about a specific instance of the problem. It does not say anything about a specific subset of instances either – let me explain that. If I say that CLIQUE is difficult, it doesn’t mean that, for example, deciding whether a triangle (a clique of size 3) is in a graph is difficult. I can take all sets of 3 vertices, look at whether they form a triangle, and conclude. There are approximately $n^3$ sets of 3 vertices in a graph with $n$ vertices (okay, there are exactly $\displaystyle \frac{n(n-1)(n-2)}{6}$ – anyway, roughly $n^3$), so I can actually decide that in polynomial time (because I’m doing $n^3$ operations which are basically checking whether three edges are in a graph). So I can decide 3-CLIQUE in polynomial time. Well, I’m not going to be a millionaire with that, because the CLIQUE problem is wider than just 3. I can also decide 1000-CLIQUE (clique of size 1000) in polynomial time with the same principle. Well, it’s a polynomial of degree 1000, but who cares 😛
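The triangle case can be sketched directly – about $n^3$ checks, so polynomial (the dict-of-neighbour-sets representation is again my own choice):

```python
from itertools import combinations

def has_triangle(g):
    """Scan all C(n, 3) vertex triples and test whether one is a triangle."""
    return any(b in g[a] and c in g[a] and c in g[b]
               for a, b, c in combinations(g, 3))

square = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
print(has_triangle(square))  # False: a 4-cycle contains no triangle
```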

But, in the general case, I cannot decide whether a graph over $n$ vertices contains a clique of $\displaystyle \frac{n}{2}$, $\displaystyle \frac{n}{10000}$, or even $\log n$ vertices in polynomial time with this algorithm that looks at all groups of size $k$ (so, here, $k$ would be equal to $\displaystyle \frac{n}{2}$, $\displaystyle \frac{n}{10000}$, or $\log n$) and checks the edges within the group, because that would mean doing roughly $n^{n/2}$, $n^{n/10000}$ and $n^{\log n}$ operations, respectively, and in any case that’s not polynomial. And, generally speaking, nobody knows if it’s possible to do that in polynomial time, since CLIQUE is NP-complete and we don’t know whether P is equal to NP. Many people think it is not, but nobody knows.

Even that does not say anything about the difficulty of a specific instance. If I’m given a graph that contains $\displaystyle \frac{n(n-1)}{2}$ edges for $n$ vertices, then I can decide pretty easily that it does contain a clique of size $\displaystyle \frac{n}{2}$, $\displaystyle \frac{n}{10000}$ and $\log n$. That’s because a graph that has $\displaystyle \frac{n(n-1)}{2}$ edges for $n$ vertices contains all the possible edges for the graph: it’s a clique on $n$ vertices… which means that every subgraph over $k$ vertices, for all $k \leq n$, is a clique of size $k$. Same thing if I’m given a graph over $n$ vertices with no edges: it’s not going to be hard to say that it doesn’t contain a clique of size $\displaystyle \frac n 2$.

SAT is also NP-hard, but one can solve 2-CNF-SAT instances (where all clauses contain 2 literals) in polynomial, and even linear, time. There are also SAT instances that are deemed “easy”, for instance the ones that have only few dependencies between the clauses: we know that we can solve them, and we even know how (for the people who are interested in what exactly I mean by “a few dependencies”, I’m thinking specifically about “the ones that are in the scope of the Lovász Local Lemma”, whose application to SAT is not obvious from the Wikipedia definition, but which might be clearer here).
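For the curious, here is a sketch of the classic polynomial-time 2-SAT algorithm: build the implication graph (a clause $a \vee b$ gives $\bar a \Rightarrow b$ and $\bar b \Rightarrow a$), then the formula is satisfiable if and only if no variable sits in the same strongly connected component as its negation. This is the textbook quadratic-ish version, not the linear-time one; the encoding (signed integers for literals) is my own choice:

```python
from collections import defaultdict

def two_sat(clauses):
    """Decide a 2-CNF formula; literals are ints (2 means x2, -2 means NOT x2)."""
    graph, rgraph = defaultdict(list), defaultdict(list)
    for a, b in clauses:
        graph[-a].append(b); rgraph[b].append(-a)  # NOT a implies b
        graph[-b].append(a); rgraph[a].append(-b)  # NOT b implies a
    nodes = {s * x for a, b in clauses for x in (abs(a), abs(b)) for s in (1, -1)}

    # Kosaraju: order vertices by finish time, then sweep the reverse graph.
    order, seen = [], set()
    def dfs1(u):
        seen.add(u)
        for v in graph[u]:
            if v not in seen:
                dfs1(v)
        order.append(u)
    for u in nodes:
        if u not in seen:
            dfs1(u)

    comp = {}
    def dfs2(u, c):
        comp[u] = c
        for v in rgraph[u]:
            if v not in comp:
                dfs2(v, c)
    for c, u in enumerate(reversed(order)):
        if u not in comp:
            dfs2(u, c)

    return all(comp[x] != comp[-x] for x in nodes if x > 0)

print(two_sat([(1, 2), (-1, 2), (-2, 1)]))                # True: x1 = x2 = True
print(two_sat([(1, 2), (1, -2), (-1, 2), (-1, -2)]))      # False: no assignment works
```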

Generally speaking, it may even be pretty difficult to create instances of NP-hard problems that we cannot solve in polynomial time. And it’s not easy either to show that we don’t know how to solve a given instance in polynomial time. We can find problem instances that are hard (i.e. require exponential time) for a given algorithm, and easy for another algorithm…

And that’s a bit of what happens in real life: we have instances of problems for which we know that the general case is NP-hard (or worse), and that we still solve without great difficulty. It can be that the instance is “small” (it’s not a huge problem to run an exponential algorithm on an instance of size 4), but it can also be that the instance belongs to an “easy” subset of instances. Considering that an instance of a problem is hard because the general problem is hard, and giving up on it (because we assume we won’t know how to solve it), may not be a good idea – it may be that we forgot to consider an easy restriction of the problem!

As usual, “it’s not that simple”. I’ve spent a fair amount of time with theory, and I tend to look at the problems themselves more than at given instances. But it’s sometimes nice to remember that stuff that’s “theoretically hard” can end up being “practically easy” – it’s a nice change from the things that are theoretically easy but that we don’t know how to do practically… 🙂