On Kotzig’s conjecture in random graphs


Abstract

In 1963, Anton Kotzig famously conjectured that \(K_{n}\), the complete graph of order \(n\), where \(n\) is even, can be decomposed into \(n-1\) perfect matchings such that every pair of these matchings forms a Hamilton cycle. The problem is still wide open and here we consider a variant of it for the binomial random graph \(G(n,p)\). We prove that, for every fixed \(k\), there exists a constant \(C=C(k)\) such that, when \(p\ge \frac{C \log n}{n}\), with high probability, \(G(n,p)\) contains \(k\) edge-disjoint perfect matchings with the property that every pair of them forms a Hamilton cycle. In fact, our main result is a very precise counting result for \(K_n\). We show that, given any \(k\) edge-disjoint perfect matchings \(M_1,\dots,M_k\), the probability that a uniformly random perfect matching \(M^*\) in \(K_n\) has the property that \(M^*\cup M_i\) forms a Hamilton cycle for each \(i\in [k]\) is \(\Theta_k(n^{-k/2})\). This is proved by building on a variety of methods, including a random process analysis, the absorption method, the entropy method and the switching method. The result on the binomial random graph follows from a slight strengthening of our counting result via the recent breakthroughs on the expectation threshold conjecture.

1 Introduction

A \(1\)-factor or perfect matching of a graph \(G\) is a collection of pairwise vertex-disjoint edges of \(G\) covering all the vertices of \(G\). A partition of the edge set of \(G\) into \(1\)-factors is called a \(1\)-factorisation of \(G\). The following standard construction shows that if \(n\) is even, then the complete graph of order \(n\), denoted by \(K_n\), has a \(1\)-factorisation. Place all but one of the vertices at the corners of a regular \((n-1)\)-gon, with the remaining vertex at the center. Observe that, with this arrangement of vertices, choosing an edge \(e\) from the center to a polygon vertex together with all possible edges that lie on lines perpendicular to \(e\) gives a \(1\)-factor. The \(1\)-factors that can be constructed in this way form a \(1\)-factorisation of \(K_n\).
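This construction can be described purely arithmetically, working modulo \(n-1\). The following Python sketch (a minimal illustration, with helper names of our own choosing) builds these \(1\)-factors and checks, for \(n=6\), that they partition the edge set of \(K_6\).

```python
def one_factorisation(n):
    """Standard 1-factorisation of K_n for even n: vertices 0,...,n-2 sit on a
    regular (n-1)-gon and vertex n-1 at the centre.  Factor i consists of the
    'spoke' {n-1, i} together with the chords {i+j, i-j} (mod n-1), which are
    exactly the chords perpendicular to that spoke."""
    m = n - 1
    factors = []
    for i in range(m):
        factor = [frozenset({n - 1, i})]
        factor += [frozenset({(i + j) % m, (i - j) % m}) for j in range(1, n // 2)]
        factors.append(factor)
    return factors

# sanity check for n = 6: five 1-factors partitioning the 15 edges of K_6
factors = one_factorisation(6)
edges = [e for f in factors for e in f]
assert len(edges) == len(set(edges)) == 15
assert all(len(set().union(*f)) == 6 for f in factors)
```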

Kotzig observed that if \(n-1\) is a prime, then each pair of \(1\)-factors in the construction above induces a Hamilton cycle (cf. Figure 1). A \(1\)-factorisation in which any two distinct \(1\)-factors induce a Hamilton cycle is called a perfect \(1\)-factorisation. Motivated by this, in 1963, at “the first international conference devoted to graph theory” (cf. [1]), Kotzig made the following conjecture, which has become known as the perfect \(1\)-factorisation conjecture.

Conjecture 1 (Kotzig’s conjecture [2]). Let \(n \ge 2\) be an even integer. Then \(K_n\) has a perfect \(1\)-factorisation.

Figure 1: A perfect 1-factorisation of \(K_6\), where each 1-factor is drawn with a different colour.

Kotzig’s observation proves the conjecture for all \(n\) such that \(n-1\) is a prime number. Later, Anderson [3] showed that the conjecture holds for all \(n\) such that \(n/2\) is a prime. Much effort has been put into attacking this conjecture but it is still very far from being resolved. Historically, each increase in the smallest order for which Kotzig’s conjecture was unresolved has merited a paper of its own. In fact, besides the general results mentioned above, the conjecture is known to be true for some more sporadic values of \(n\), and the current smallest open case is \(n=64\). We refer to the survey of Rosa [4] for the full list of known values (with the case \(n=54\) only appearing in a recent paper of Pike [5]). In addition to the existence statement, there has also been some interest in counting the number of non-isomorphic perfect \(1\)-factorisations of \(K_n\), often using computer-assisted proofs. The largest value of \(n\) for which the exact answer is known is \(n=16\): Gill and Wanless [6] and, independently, Meszka [7] showed that there are \(3155\) non-isomorphic perfect \(1\)-factorisations of \(K_{16}\).

Conjecture 1 has connections to several other problems. A proper colouring of the edges of a graph \(G\) is called acyclic if there is no \(2\)-coloured cycle in \(G\), that is, if the union of any two colour classes induces a subgraph of \(G\) which is a forest. The acyclic edge chromatic number of \(G\), denoted by \(a'(G)\), is the least number of colours in an acyclic edge colouring of \(G\). Fiamčík [8] and, independently, Alon, Sudakov and Zaks [9] conjectured that \(a'(G) \le \Delta(G)+2\) for all graphs \(G\), where \(\Delta(G)\) denotes the maximum degree of \(G\). As shown in [9], if Conjecture 1 holds, then \(a'(K_{2n})=a'(K_{2n+1})=2n+1\) for every \(n\), which implies that the above conjecture is true for complete graphs.

Coming back to Kotzig’s conjecture, Wanless [10] made a similar one for complete balanced bipartite graphs, stating that \(K_{n,n}\) admits a perfect \(1\)-factorisation if and only if \(n=2\) or \(n\) is odd. That the condition on \(n\) is necessary may not look obvious at first sight, but it is well known (see the discussion at the end of Section 7) and goes back at least to Laufer [11], who also showed that if \(K_{n+1}\) has a perfect \(1\)-factorisation, then so does \(K_{n,n}\). Therefore, if true, Conjecture 1 implies its bipartite version. One reason for interest in the bipartite analogue of Conjecture 1 is its relationship with row-Hamiltonian Latin squares. Let \(m\) and \(n\) be positive integers with \(m \le n\). An \(m \times n\) Latin rectangle is an \(m \times n\) matrix of \(n\) symbols, each of which occurs exactly once in each row and at most once in each column. A Latin square of order \(n\) is an \(n \times n\) Latin rectangle. Suppose that \(i\) and \(j\) are two distinct rows of a Latin square \(\mathcal{L}\) with symbol set \(S\) and define a permutation \(\pi_{i,j} : S \to S\) by \(\pi_{i,j}(\mathcal{L}_{i,k}) = \mathcal{L}_{j,k}\) for each \(k \in S\). A Latin square of order \(n\) is row-Hamiltonian if \(\pi_{i,j}\) consists of a single cycle for all distinct \(i,j\). It was noted in [10] that a Latin square of order \(n\) is row-Hamiltonian if and only if it contains no \(a \times b\) Latin subrectangle with \(1 < a \le b < n\). This is an extremely rare property, as it is known (see, e.g., [12]) that the overwhelming majority of Latin squares have quadratically many Latin subsquares. There is a natural equivalence between row-Hamiltonian Latin squares of order \(n\) and ordered perfect \(1\)-factorisations of \(K_{n,n}\) (see [10], where the argument is spelt out in detail), and thus the latter may be used to find explicit constructions of the former. The conjecture of Wanless is a long way from being resolved. There are few known infinite families of perfect \(1\)-factorisations of \(K_{n,n}\) [13]–[15], and these only cover the cases \(n \in \{p, 2p - 1, p^2\}\) for some odd prime \(p\).

Moreover, given the difficulty of proving Conjecture 1, several notions that relax the perfectness requirement have appeared in the literature. For example, resolving a conjecture of Archdeacon, it has been proved by Královič and Královič [16] that for every even integer \(n \ge 2\), there is a \(1\)-factorisation \(\{M_1,\dots,M_{n-1}\}\) of \(K_n\) such that \(M_1 \cup M_j\) induces a Hamilton cycle for all \(j \ge 2\). (These are known as semi-perfect \(1\)-factorisations.) Moreover, Wagner [17] studied an approximate version of Conjecture 1 in the following sense. For a \(1\)-factorisation \(\mathcal{M}\) of \(K_n\), let \(c(\mathcal{M})\) be the number of pairs of \(1\)-factors of \(\mathcal{M}\) such that their union induces a Hamilton cycle. Then define \(c(n):=\max c(\mathcal{M})\), where the maximum is over all \(1\)-factorisations of \(K_n\). Note that Conjecture 1 is equivalent to the statement that \(c(n)=\binom{n-1}{2}\) for every even \(n\), and Wagner offered some bounds on \(c(n)\) in terms of Euler’s totient function.

Finally, one of the results we will prove says that, given a collection \(\mathcal{M}\) of perfect matchings on the same vertex set, say \(V\), satisfying certain conditions, there is a perfect matching which creates a Hamilton cycle with each of the matchings in \(\mathcal{M}\). A related problem, which is motivated by questions in statistical physics, is to compute the maximum over all perfect matchings \(M\) on \(V\) of the number of cycles that \(M\) creates with the given matchings in \(\mathcal{M}\). This has been recently studied when \(\mathcal{M}\) consists of perfect matchings chosen uniformly at random [18], and we refer to Chapter 2 of the book of Gurau [19] for more background.

1.1 Main results

Our first main result is a variant of Kotzig’s conjecture for random graphs. The random graph model we consider here is the binomial random graph \(G(n, p)\), which has \(n\) vertices, and where each pair of vertices forms an edge independently with probability \(p\). Moreover we always assume that \(n\) is even.

Since \(G(n,p)\) is not regular, it does not have a \(1\)-factorisation. However, one can still ask when \(G(n,p)\) contains a certain number of perfect matchings, say \(k\), such that the union of any two of them induces a Hamilton cycle. When \(k=1\), we are simply requiring the existence of a perfect matching and the threshold is \(\log n/n\). When \(k=2\), the problem is equivalent to the existence of a Hamilton cycle (as a Hamilton cycle on an even number of vertices decomposes into two edge-disjoint perfect matchings) and the threshold is again \(\log n/n\) by a celebrated result of Pósa [20]. Already for \(k=3\), this is not clear anymore. We show that, for any fixed constant \(k\), the result is true above the same threshold.

Theorem 2. For every fixed \(k \in \mathbb{N}\) there exists a constant \(C=C(k)\) such that, when \(p\ge \frac{C \log n}{n}\), w.h.p. \(G(n,p)\) contains \(k\) edge-disjoint perfect matchings \(M_1, \dots, M_k\) such that \(M_i \cup M_j\) induces a Hamilton cycle for all distinct \(i,j \in [k]\).

It is tempting to conjecture that the conclusion is already true when \(p\ge \frac{\log n+(k-1)\log\log n + \omega(n)}{n}\), or even that a hitting time result could hold. We will discuss this further in Section 8.

Roughly speaking, our approach to prove Theorem 2 is to find the perfect matchings sequentially by revealing the random edges in \(k\) rounds. Assuming that we have already found a collection of perfect matchings such that the union of any two of them induces a Hamilton cycle, we then want to be able to find yet another perfect matching which forms a Hamilton cycle with each of the given ones. Even when the host graph is complete, it does not seem obvious how to achieve this.

Hence the above approach motivates the following natural question: Given a collection of \(k\) (edge-disjoint) perfect matchings on a common vertex set, can one find another perfect matching that forms a Hamilton cycle with each of the given matchings? If \(k=2\), then this is easy and in fact for any two perfect matchings there exists a third one with the desired property. This can be seen by just adding the edges of the new matching greedily, and in each step, there will be some choice which does not create a forbidden cycle, until the last step. However, this simple procedure breaks down when \(k\ge 3\). In fact, it is not true anymore that for any given \(k\) perfect matchings, there exists another perfect matching forming a Hamilton cycle with each of them, not even when \(n\) is arbitrarily large. For instance, assume \(k=3\) and take three matchings \(M_1, M_2, M_3\) which are identical on \((n-4)/2\) edges and are pairwise disjoint on the remaining \(4\) vertices, which we denote by \(v_1,\dots,v_4\) in such a way that \(v_1v_2,v_3v_4 \in M_1\). Suppose by contradiction that there is a perfect matching \(M^*\) which creates a Hamilton cycle with each \(M_i\). Then \(M^* \cup M_1\) is a Hamilton cycle and, without loss of generality, we can assume that the cycle traverses the vertices \(v_1, \dots, v_4\) in this order. In particular, there are two non-spanning paths between \(v_2\) and \(v_3\), and between \(v_4\) and \(v_1\) which alternate between edges of \(M_1\) and edges of \(M^*\). Let \(j \in \{2,3\}\) be such that \(v_1v_4,v_2v_3 \in M_j\). Then \(M^* \cup M_j\) consists of two cycles, which is a contradiction.
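This obstruction can also be verified by brute force in the smallest interesting case \(n=8\). The following sketch (our own illustrative code, not part of the argument above) enumerates all \(105\) perfect matchings of \(K_8\) and confirms that none of them forms a Hamilton cycle with each of \(M_1\), \(M_2\) and \(M_3\) as constructed above, with \(v_1,\dots,v_4\) taken to be the vertices \(0,\dots,3\).

```python
def perfect_matchings(vertices):
    """Generate all perfect matchings on `vertices` as frozensets of frozenset edges."""
    vs = sorted(vertices)
    if not vs:
        yield frozenset()
        return
    v, rest = vs[0], vs[1:]
    for i, u in enumerate(rest):
        for m in perfect_matchings(rest[:i] + rest[i + 1:]):
            yield m | {frozenset({v, u})}

def union_is_hamilton(m1, m2, n):
    """M1 ∪ M2 is a Hamilton cycle iff the matchings are edge-disjoint and their union is connected."""
    if m1 & m2:
        return False
    adj = {v: [] for v in range(n)}
    for e in m1 | m2:
        a, b = tuple(e)
        adj[a].append(b)
        adj[b].append(a)
    prev, cur, seen = None, 0, 1          # walk the 2-regular union starting at vertex 0
    while True:
        nxt = adj[cur][0] if adj[cur][0] != prev else adj[cur][1]
        if nxt == 0:
            return seen == n              # the cycle through 0 is spanning
        prev, cur, seen = cur, nxt, seen + 1

n = 8
shared = {frozenset({4, 5}), frozenset({6, 7})}      # the (n-4)/2 common edges
M1 = shared | {frozenset({0, 1}), frozenset({2, 3})}
M2 = shared | {frozenset({0, 2}), frozenset({1, 3})}
M3 = shared | {frozenset({0, 3}), frozenset({1, 2})}
assert not any(all(union_is_hamilton(M, Mi, n) for Mi in (M1, M2, M3))
               for M in perfect_matchings(range(n)))
```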

However, our next result shows that, as long as each matching has some edges which belong “exclusively" to it, we can find the desired additional perfect matching.

Theorem 3. For every fixed \(k\in \mathbb{N}\), there exists a constant \(C=C(k)\) such that the following holds. Let \(V\) be a set of \(n\) vertices and suppose \(M_1,\dots,M_k\) are perfect matchings on \(V\), such that \[\begin{align} |M_i\setminus(\cup_{j\neq i}M_j)| \ge C \label{exlusiveness} \end{align}\tag{1}\] for each \(i\in[k]\). Then there exists a perfect matching \(M^*\) such that \(M^*\cup M_i\) forms a Hamilton cycle for each \(i\in [k]\).

We will prove Theorem 3 using the absorbing method, which allows us to “plan ahead” and prepare for a situation where the greedy algorithm gets stuck. As already discussed above, condition (1) is not needed when \(k=2\), but cannot be omitted when \(k \ge 3\).

Theorem 3 establishes the existence of one perfect matching \(M^*\) which forms a Hamilton cycle with each of the given matchings. The next natural question is to count the number of possibilities for \(M^*\). To motivate our result, recall that the number of perfect matchings in \(K_n\) is \(\frac{n!}{2^{n/2}(n/2)!}=\left(\sqrt{2} \pm o(1)\right) \cdot \left((n/e)^{n/2}\right)\). Suppose now that a perfect matching \(M_1\) is given and we want to count the number of perfect matchings \(M_2\) of \(K_n\) such that \(M_1\cup M_2\) forms a Hamilton cycle. The answer is easily seen to be \(\frac{2^{n/2} \cdot (n/2)!}{2 \cdot (n/2)}= \left(\sqrt{\pi} \pm o(1)\right) \cdot \left(n^{-1/2}(n/e)^{n/2}\right)\). In other words, if \(M_2\) is chosen uniformly at random from all perfect matchings, then the probability that it forms a Hamilton cycle together with \(M_1\) is \(\left(\sqrt{\frac{\pi}{2}} \pm o(1)\right) \cdot n^{-1/2}\). Heuristically, one might then expect that given \(k\) “independent” (say, edge-disjoint) matchings, the probability that a uniformly random perfect matching forms a Hamilton cycle with each of them is \(\left(\left(\frac{\pi}{2}\right)^{k/2} \pm o(1)\right) \cdot n^{-k/2}\). When \(k=2\), that is, in the case where the greedy algorithm never gets stuck, Kim and Wormald [21] proved that this is indeed correct. By applying this iteratively, they also obtained the asymptotic number of matchings \(M_1, \dots, M_k\) such that \(M_i \cup M_j\) forms a Hamilton cycle for each \(ij \in E(G)\) with \(G\) being a fixed graph on \([k]\) of degeneracy \(2\). However, when \(k>2\), their method does not work anymore, at least not without significant new ideas that allow for some “planning ahead”. We confirm the heuristic argument for any constant \(k\), up to a multiplicative constant factor.
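As a quick numerical sanity check of the two counts above (a sketch using nothing beyond the stated formulas), one can compare their exact ratio with the predicted \(\sqrt{\pi/2}\cdot n^{-1/2}\); the quotient tends to \(1\).

```python
import math

def log_pm(n):
    """log of n!/(2^(n/2) (n/2)!), the number of perfect matchings of K_n."""
    return math.lgamma(n + 1) - (n / 2) * math.log(2) - math.lgamma(n / 2 + 1)

def log_ham_partners(n):
    """log of 2^(n/2) (n/2)!/n, the number of M_2 with M_1 ∪ M_2 a Hamilton cycle."""
    return (n / 2) * math.log(2) + math.lgamma(n / 2 + 1) - math.log(n)

for n in (10 ** 2, 10 ** 4, 10 ** 6):
    ratio = math.exp(log_ham_partners(n) - log_pm(n))       # P[a uniform M_2 works with M_1]
    print(n, ratio / (math.sqrt(math.pi / 2) * n ** -0.5))  # tends to 1 as n grows
```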

Theorem 4. Let \(k\in \mathbb{N}\) be fixed and \(n\) sufficiently large. Let \(V\) be a set of \(n\) vertices and suppose \(M_1,\dots,M_k\) are edge-disjoint perfect matchings on \(V\). Then the number of perfect matchings \(M^*\) such that \(M^*\cup M_i\) forms a Hamilton cycle for each \(i\in [k]\) is \(\Theta_k\left(n^{-k/2}(n/e)^{n/2}\right)\).

The condition that the given matchings \(M_1,\dots,M_k\) are edge-disjoint cannot be dropped. For instance, if they were all identical, then the number of choices for \(M^*\) is \(\Theta\left(n^{-1/2}(n/e)^{n/2}\right)\) as discussed above. In fact, we will prove a more general version where we can relax the edge-disjointness condition to a mild overlap (cf. Theorem 14). For its proof, we use a variety of methods, including a random process analysis, the absorption method (via Theorem 3), a variant of the entropy method and the switching method. We then use this stronger result to deduce Theorem 2 via the recent breakthroughs on the expectation threshold conjecture (cf. Section 6), where our goal is to show that the auxiliary hypergraph whose hyperedges correspond to the possible matchings \(M^*\) is suitably spread.

Let us briefly discuss the methods we use. As already mentioned, the proof of the existence result employs the absorbing method, which is a powerful method that in general is applied to turn approximate solutions into exact solutions. While the greedy algorithm can potentially get stuck, this will only happen towards the end of the process. Very roughly speaking, we will overcome this by planting some special edges in the beginning that will belong to the final matching and will yield some flexibility to resolve any final situation where the greedy algorithm gets stuck. We hope that our ideas will shed some light on potential ways of attacking Kotzig’s conjecture with this kind of method.

For the counting result, a natural approach is to determine the number of choices the greedy algorithm has in each step, and multiply them together. Unfortunately, this sequence is not deterministic, but depends on the history of already chosen edges. A commonly used arsenal of methods for such problems of counting combinatorial configurations is the following. For lower bounds, one uses a randomised greedy algorithm whose behaviour can be analysed through martingale inequalities. Here, the goal is to show that, typically, the number of choices in each step follows some deterministic trajectory, up to small error terms, and that, by stopping the process early enough, a typical outcome of the produced approximate solution can be completed (using the absorbing method, say). For upper bounds, the entropy method has proven to be a very versatile tool. Here, the main idea is that the number of solutions is directly linked to the entropy of a uniformly random solution. One can then apply known facts about entropy, such as the chain rule, to analyse this in a step-by-step manner akin to the random greedy algorithm. While our approach is certainly inspired by these methods, the main difficulties in our case are quite different from previous applications. Perhaps most notably, the key random variable which controls the number of choices in each step, the so-called total overlap (cf. Definition 7), is not concentrated. In fact, one heuristically expects it to be bounded by a constant. While this means one should be able to show that it typically never exceeds \(\Theta(\log n)\), this would not be enough to obtain a counting result that is tight up to constant factors.

Hence, one of the key features of our counting proof is that we analyse the total contribution of this variable over the whole process. Fortunately, it turns out that it is enough to control the expectations. For this, we exploit a “self-correcting” behaviour. Very roughly speaking, this means that if the total overlap exceeds some constant threshold, then there will be a negative drift that pushes it down again. This phenomenon is also crucial for the strengthening of our counting result where the matchings can have a mild overlap at the start (which in turn is essential for the result on random graphs). For this, we need to show that the total overlap decreases quickly and then stays below the constant threshold.

One caveat with using the entropy method is that we have to work with a solution chosen uniformly at random, which is a difficult distribution to handle due to the lack of independence. In many applications, one can bypass this issue with the following conditioning trick: fix any solution (say a perfect matching with the desired property) and reveal its elements (the edges) in a random order. If one can efficiently control the expected number of choices for each step (with respect to the previously appearing elements of the fixed solution), then this may already yield the desired bound. However, in our case, this does not seem tractable, due to the rather complicated and “long-range” possibilities that can make an edge unavailable. (This is slightly reminiscent of the counting problem for so-called “high-girth” designs, where so far the conjectured upper bounds could not be established.) Thus, since we really have to exploit the properties of a uniformly random solution, a final key ingredient in our analysis is a statistical result which says that every available edge is roughly equally likely to be contained in a uniformly random solution. This uses a cascade of several switching operations.

2 Notation, organisation and basic concepts

2.1 Notation

We let \([n]\) denote the set \(\{1, \dots, n\}\) and write \(\log(x)\) to mean \(\log_e(x)\).

We use standard graph theory terminology. Given a matching \(M\) we denote by \(V(M)\) the set of vertices covered by \(M\). We denote an ordered matching with \(m\) edges by \((e_1,\dots,e_m)\).

We say that an event holds with high probability (w.h.p.) if the probability that it holds tends to \(1\) as the number of vertices \(n\) tends to infinity.

For \(a, b, c \in (0, 1]\), we write \(a \ll b \ll c\) in our statements to mean that there are increasing functions \(f, g : (0, 1] \to (0, 1]\) such that whenever \(a \le f(b)\) and \(b \le g(c)\), then the subsequent result holds. Moreover, when using the Landau symbols \(O(\cdot), \Omega(\cdot), \Theta(\cdot)\), subscripts denote variables that the implicit constants may depend on. Similarly, \(o_D(\cdot)\) stands for a function which tends to \(0\) as \(D \to \infty\).

2.2 Organisation of the paper

We introduce some basic concepts in the next subsection. We prove Theorem 3 in Section 3 and (the strengthening of) Theorem 4 in Section 4. The proof of Theorem 4 will require an additional result which we prove in Section 5. We prove Theorem 2 in Section 6, where we also discuss the relevant concepts around the expectation threshold conjecture. We then discuss analogous results in the bipartite setting in Section 7 and finish with some concluding remarks in Section 8.

2.3 Basic concepts

We introduce some basic concepts which will be used throughout the paper.

Definition 5 (\(k\)-configuration). A \(k\)-configuration is a set of \(k\) (not necessarily edge-disjoint) perfect matchings on a common vertex set. The order of a \(k\)-configuration is the size of its vertex set. We say that a pair of vertices forms an available edge if the edge is not contained in any of the given perfect matchings.

We will often associate colours with the \(k\) matchings. For two generic matchings, we will use blue and green. Given a \(k\)-configuration, we aim to find a matching \(M^*\) that forms a Hamilton cycle with each of the given matchings. We will always refer to this matching as the red matching.

We will construct the red matching in an iterative way. Suppose we are given a partial red matching. When speaking of a partial red matching, we always assume that it is properly partial, that is, not a perfect matching yet, and we also assume that the union of this red matching with any other colour is still cycle-free. (If such a cycle were closed, it would be impossible to extend this red matching to a full red matching.) We would like to add a red edge, while keeping the property that the union of the new red matching with any other colour is still cycle-free. We encode the edges which we are not allowed to pick as the new red edge through the following auxiliary graph (cf. (B) in Figure 2).

Definition 6 (Reduced configuration). Given a \(k\)-configuration \(\mathcal{M}\) and a partial red matching \(M\), we define the reduced configuration as follows. Its vertex set is the set of vertices not covered by an edge of \(M\), which we denote by \(U\). Then for each colour, say blue, and each vertex \(u\in U\), we consider the path starting at \(u\) with edges alternating blue and red and we let \(v\) be the other endpoint. (Clearly \(v \neq u\) and \(v \in U\).) We then insert \(uv\) as a blue edge in the reduced configuration (cf. Figure 2).

Observe that the reduced configuration is still a \(k\)-configuration, in the sense that each colour induces a perfect matching, and that an edge of the reduced configuration may be assigned multiple colours. Moreover, if the \(k\)-configuration has an edge \(e\) of some colour, say blue, and none of its endpoints is covered by a red edge, then \(e\) is still an edge of colour blue in the reduced configuration. The reduced configuration conveniently allows us to “forget” the red edges that we have already inserted: Indeed, the collection of red edges inserted so far can be completed to a red perfect matching if and only if its reduced configuration admits a red perfect matching (cf. Figure 2).

Figure 2: (A): A 3-configuration \(\mathcal{M}\) of order 8 and a partial red matching \(M\) of size 2. (B): The reduced configuration of \(\mathcal{M}\) with respect to \(M\) (note that each colour still induces a perfect matching and that we have created double coloured edges). (C): A red perfect matching \(M'\) of the reduced configuration. (D): \(M \cup M'\) is a red perfect matching of \(\mathcal{M}\).
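For concreteness, here is a minimal Python sketch of Definition 6 (all names are our own), assuming each matching is stored as a dictionary mapping every vertex to its partner: from each vertex not covered by a red edge we follow the colour-red alternating path and record its other endpoint.

```python
def reduced_configuration(matchings, red, uncovered):
    """Reduced configuration (Definition 6), one edge set per colour.

    `matchings` is a list of partner dictionaries (vertex -> partner), one per
    colour; `red` is the partner dictionary of the partial red matching and
    `uncovered` is the set of vertices it does not cover."""
    reduced = []
    for colour in matchings:
        edges = set()
        for u in uncovered:
            # follow the colour/red alternating path starting at u; it ends at
            # another uncovered vertex v, and uv becomes a reduced colour edge
            v = colour[u]
            while v in red:
                v = colour[red[v]]
            edges.add(frozenset({u, v}))
        reduced.append(edges)
    return reduced
```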

The example in Figure 2 shows that, even if we start from a configuration with pairwise edge-disjoint perfect matchings, we may create edges with multiple colours in the reduced configuration. However, for our arguments, it is essential to control how the number of edges which get more than one colour evolves in the corresponding reduced configurations after adding a new red edge, which we capture through the following definition.

Definition 7 (Overlap). For a \(k\)-configuration \(\mathcal{M}:=\{M_1,\dots,M_k\}\) and two colours \(i, j\), we define the overlap of colours \(i\) and \(j\) as \(R_{ij}(\mathcal{M}):=|M_i \cap M_j|\). Moreover, we define the total overlap as \(R(\mathcal{M}):=\sum_{i,j: i \neq j} R_{ij}(\mathcal{M})\).

In order to understand the behaviour of the total overlap, it is enough to consider a single pair of colours at a time. Therefore, fix any two colours, say \(i\)=blue and \(j\)=green, and note that the union of the blue and green matching decomposes into alternating blue-green cycles and double coloured edges. Then observe that there are five different ways we can add a red edge, depending on the components (of the blue-green graph) its endpoints belong to:

  1. both endpoints belong to the same cycle \(C\) and their distance is odd;

  2. both endpoints belong to the same cycle \(C\) and their distance is even;

  3. its endpoints belong to different cycles \(C_1\) and \(C_2\);

  4. one endpoint belongs to a cycle \(C\) and the other one to a double edge \(f\);

  5. both endpoints belong to double edges, say \(f_1\) and \(f_2\).

For each case, we can easily describe the changes to the blue-green cycle structure of the reduced configuration. In case 1, \(C\) splits into two shorter cycles; in particular, if the endpoints of the red edge are at distance exactly three, then at least one of these cycles would be a (new) double edge and \(R_{ij}(\cdot)\) would increase by \(1\) or \(2\). In case 2, \(C\) becomes a cycle whose length is two less than that of \(C\); in particular, if the length of \(C\) was \(4\), the shorter cycle would be a (new) double edge and \(R_{ij}(\cdot)\) would increase by \(1\). In case 3, the two cycles \(C_1\) and \(C_2\) merge into a new longer cycle, and \(R_{ij}(\cdot)\) does not change. In case 4, \(C\) and \(f\) merge into a cycle; in particular, \(R_{ij}(\cdot)\) decreases by \(1\). In case 5, \(f_1\) and \(f_2\) merge into a double edge; in particular, \(R_{ij}(\cdot)\) decreases by \(1\). In particular, we note that the addition of a new red edge changes \(R_{ij}(\cdot)\) by at most \(2\), and thus the total overlap changes by at most \(k^2\). In order to keep track of \(R_{ij}(\cdot)\) over time, we make the following definition.
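This case analysis can be condensed into a small function computing the change of \(R_{ij}(\cdot)\) caused by a single red edge. The sketch below is our own illustration (the two matchings are given as partner dictionaries; the red edge is assumed to be available and to be the only red edge present).

```python
def overlap_change(blue, green, red_edge):
    """Change in R_ij (the number of blue-green double edges) caused by adding
    the red edge, following the five cases above."""
    x, y = red_edge

    def component(v):
        # vertices of the blue-green component of v, listed in cyclic order
        cycle, cur, use_blue = [v], blue[v], False
        while cur != v:
            cycle.append(cur)
            cur = (blue if use_blue else green)[cur]
            use_blue = not use_blue
        return cycle

    cx, cy = component(x), component(y)
    if len(cx) == 2 and len(cy) == 2:       # case 5: two double edges merge
        return -1
    if len(cx) == 2 or len(cy) == 2:        # case 4: a double edge merges into a cycle
        return -1
    if set(cx) != set(cy):                  # case 3: two cycles merge
        return 0
    d = min(cx.index(y), len(cx) - cx.index(y))   # distance along the common cycle
    if d % 2 == 1:                          # case 1: the cycle splits into two
        return (d == 3) + (len(cx) - d == 3)
    return 1 if len(cx) == 4 else 0         # case 2: the cycle shortens by two
```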

Definition 8 (Good and bad edges). Let \(\mathcal{M}\) be a \(k\)-configuration and \(e \not \in \mathcal{M}\). Let \(\mathcal{M}'\) be the reduced configuration of \(\{e\}\). Given two colours, say \(i,j\), we say that \(e\) is good (resp. bad) for the colours \(i\) and \(j\) if \(R_{ij}(\mathcal{M}') < R_{ij}(\mathcal{M})\) (resp. \(R_{ij}(\mathcal{M}') > R_{ij}(\mathcal{M})\)). We remark that the available edges whose addition does not change \(R_{ij}(\cdot)\) are neither good nor bad.

The discussion above immediately implies the following proposition.

Proposition 9. Let \(\mathcal{M}\) be a \(k\)-configuration of order \(n\) and \(i, j\) be two colours. Set \(R_{ij}:=R_{ij}(\mathcal{M})\). Then the following properties hold.

  1. The number of good edges is at least \((R_{ij}/2-k/2)n\) and the number of bad edges is at most \(n\).

  2. Let \(e \not\in \mathcal{M}\) and \(\mathcal{M}'\) be its reduced configuration. If \(e\) is a good edge then \(R_{ij}(\cdot)\) decreases by \(1\) and if \(e\) is a bad edge then \(R_{ij}(\cdot)\) increases by \(1\) or \(2\).

Proof. Let \(S\) be the set of vertices covered by the double edges, i.e. those in \(M_i \cap M_j\).

The good edges are those corresponding to cases 4 and 5, i.e. the available edges which are incident to a vertex of \(S\). Their number is at least \(|S|(n-|S|)+\binom{|S|}{2} - k (n/2) \ge (|S|/4-k/2)n= (R_{ij}/2-k/2)n\), where we used \(|S| \le n\) and that \(|S|=2R_{ij}\).

Similarly, the bad edges are those corresponding to case 1 (with the endpoints of the red edge being at distance three on the cycle) and case 2 (with the cycle having length four). Therefore, for each vertex, there are at most two bad edges incident to it, and thus the number of bad edges is at most \(n\). This proves Item 1.

Item 2 follows directly from the discussion above of cases 1, 2, 4 and 5.

\(\square\)

Finally, we prove an easy observation.

Observation 10. Let \(\mathcal{M}\) be a \(k\)-configuration of order \(n\), fix \(m \le n/2\) and let \(M^*=(e_1^*,\dots,e_m^*)\) be an ordered red matching of size \(m\) chosen uniformly at random. Then, for each \(t \le m\), conditioned on \(e_1^*,\dots,e_{t-1}^*\), the ordered matching \((e_t^*,\dots,e_m^*)\) is distributed as a uniformly chosen ordered red matching of size \(m-t+1\) of the reduced configuration of \(\{e_1^*,\dots,e_{t-1}^*\}\).

Proof. Let \(\mathcal{X}\) be the set of ordered red matchings of size \(m\) of \(\mathcal{M}\). Let \((f_1,\dots,f_m) \in \mathcal{X}\) and condition on \(e_i^*=f_i\) for each \(i \le t-1\). Denote by \(\mathcal{M}'\) the reduced configuration of \(\{f_1,\dots,f_{t-1}\}\). Then \[\mathbb{P}\Bigg[\bigcap_{i=t}^m \{e_i^*=f_i\} \bigg| \bigcap_{i=1}^{t-1} \{e_i^*=f_i\}\Bigg] = \frac{\mathrm{\mathbb{P}}\left[\cap_{i=1}^m \{e_i^*=f_i\}\right]}{\mathrm{\mathbb{P}}\left[\cap_{i=1}^{t-1} \{e_i^*=f_i\}\right]} = \frac{1/|\mathcal{X}|}{|\mathcal{A}|/|\mathcal{X}|} = \frac{1}{|\mathcal{A}|} \, ,\] where \(\mathcal{A}\) denotes the set of matchings of \(\mathcal{X}\) with \(e_i^*=f_i\) for each \(i \le t-1\). We conclude by observing that \(\mathcal{A}\) has the same cardinality as the set of the red matchings of size \(m-t+1\) of \(\mathcal{M}'\).

\(\square\)

3 Existence – Proof of Theorem 3

We first discuss the proof of Theorem 3. Suppose that we are given a \(k\)-configuration and we would like to construct a red perfect matching. We start by observing that we can greedily build a partial red matching of size at least \(\frac{n-k-1}{2}\). Indeed, suppose that we have already chosen some red edges, consider the reduced configuration and let \(n'\) be its number of vertices. The union of the reduced edges has size at most \(\frac{n'}{2}\cdot k\) and, as long as \(n' > k+1\), we have \(\frac{n'}{2}\cdot k<\binom{n'}{2}\) and thus there is a choice for the next red edge. The main part of the proof is to show that the reduced configuration obtained when there are only few uncovered vertices (\(k\) or \(k+1\), depending on the parity of \(k\)) can be solved, i.e. that it has a red perfect matching. Note that not all configurations are solvable: For example, if we take three perfect matchings on four vertices whose union forms a \(K_4\), then every edge is already used and there is no red perfect matching. Therefore we employ the absorbing method, building a flexible structure in the beginning which will help us to deal with any potential leftover.
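The greedy phase just described is easy to simulate: for any uncovered vertex there are at most \(k\) forbidden partners (namely the endpoints of its colour-red alternating paths), so a valid choice exists as long as more than \(k+1\) vertices remain uncovered. The following sketch (our own code, with matchings given as partner dictionaries) implements exactly this.

```python
import random

def greedy_red_matching(matchings, vertices, seed=0):
    """Greedy partial red matching that never closes a cycle with any colour
    and never repeats a colour edge; it runs while more than k+1 vertices
    are uncovered, so it has size at least (n-k-1)/2."""
    rng = random.Random(seed)
    red = {}                                   # vertex -> red partner
    uncovered = set(vertices)

    def forbidden(u):
        # endpoints of the colour/red alternating paths starting at u,
        # i.e. the neighbours of u in the current reduced configuration
        ends = set()
        for colour in matchings:
            v = colour[u]
            while v in red:
                v = colour[red[v]]
            ends.add(v)
        return ends

    while len(uncovered) > len(matchings) + 1:
        u = rng.choice(sorted(uncovered))
        # at most k partners are forbidden and more than k candidates remain
        v = rng.choice(sorted(uncovered - forbidden(u) - {u}))
        red[u], red[v] = v, u
        uncovered -= {u, v}
    return {frozenset(e) for e in red.items()}
```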

To motivate our approach, observe that if the \(k\) matchings of a \(k\)-configuration are identical, then a red perfect matching clearly exists. Hence, if we could choose some red edges so that in the reduced configuration the \(k\) colours become identical, we would be done. When \(k=2\), this can be easily achieved.

Proposition 11. Given a \(2\)-configuration, there exists a partial red matching such that in the reduced configuration the two matchings are identical.

Proof. Let the two colours be blue and green. Recall that the union of any two perfect matchings forms a spanning subgraph whose components are alternating blue-green (even) cycles or double edges. For the latter case, we do not have to do anything, so suppose we have an even cycle with edges alternating blue and green. If the length is at least \(6\), then taking a red edge between two vertices at distance \(3\) on the cycle splits off a double-coloured edge and a shorter cycle in the reduced configuration. Finally, if the cycle has length \(4\), then choosing one of the diagonals as a red edge leaves a single double-coloured edge in the reduced configuration. By repeatedly choosing red edges in this way, we find a partial red matching with the desired property.

\(\square\)
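The proof of Proposition 11 is constructive and can be implemented directly: walk around each blue-green alternating cycle, repeatedly take a red chord between vertices at distance three, and finish a remaining \(4\)-cycle with a diagonal. The sketch below (our own code, with the two matchings given as partner dictionaries) returns such a blue-green equalizing red matching.

```python
def equalize_two_colours(blue, green):
    """Partial red matching after which the reduced blue and green matchings
    coincide, following the proof of Proposition 11."""
    red, seen = [], set()
    for start in blue:
        if start in seen:
            continue
        # walk around the blue-green alternating component of `start`
        cycle, cur, use_blue = [start], blue[start], False
        while cur != start:
            cycle.append(cur)
            cur = (blue if use_blue else green)[cur]
            use_blue = not use_blue
        seen.update(cycle)
        # chords at distance three peel off double edges; a diagonal handles
        # a final 4-cycle; a double edge (length 2) needs no red edge at all
        i, length = 0, len(cycle)
        while length - i >= 6:
            red.append((cycle[i], cycle[i + 3]))
            i += 4
        if length - i == 4:
            red.append((cycle[i], cycle[i + 2]))
    return red
```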

Once two colours are identical, they will remain identical, regardless of which additional red edges we add. Hence, a natural approach is to equalize the colours step by step. Note that this is not as easy as for \(k=2\): When \(k>2\), we might not be able to add the red edges as in the proof of Proposition 11 since these might be edges of a third colour.

Definition 12 (Equalizing matching). Given a \(k\)-configuration \(\mathcal{M}\) and two colours, say blue and green, a partial red matching is called blue-green equalizing if in the reduced configuration the blue and green matchings are identical. It is called totally equalizing if in the reduced configuration all the \(k\) matchings are identical.

Observe that Proposition 11 can be rephrased equivalently by saying that any \(2\)-configuration has a totally equalizing partial red matching. We remark once more that we only seek a partial red matching which is properly partial, meaning that its reduced configuration is still non-empty. This is because we will find such equalizing matchings in different subconfigurations of a given configuration, and in the end we want to find a solution which “connects” these different parts.

We now define our absorber.

Definition 13 (Absorber). Let \(\mathcal{L}\) be a \(k\)-configuration, and blue and green be two colours. Let \(\mathcal{A}\) be a \(k\)-configuration, which is vertex-disjoint from \(\mathcal{L}\) (but has the same colours). Then \(\mathcal{A}\) is called a blue-green equalizer for \(\mathcal{L}\) if both \(\mathcal{A}\) and \(\mathcal{A}\cup \mathcal{L}\) have a blue-green equalizing red matching. Moreover \(\mathcal{A}\) is called an absorber for \(\mathcal{L}\) if both \(\mathcal{A}\) and \(\mathcal{A}\cup \mathcal{L}\) have a totally equalizing red matching.

We remark that, while \(\mathcal{A}\) and \(\mathcal{L}\) are vertex-disjoint, the equalizing matching of \(\mathcal{A}\cup \mathcal{L}\) is still allowed to contain edges between \(\mathcal{A}\) and \(\mathcal{L}\).

3.1 Existence of absorbers

Given a \(k\)-configuration, we will construct an absorber by iteratively building equalizers for two colours at a time. Therefore we start by showing how we can equalize two colours.

Lemma 1. Given a \(k\)-configuration \(\mathcal{L}\) of order \(m\) and two colours, say blue and green, there exists a blue-green equalizer of order \(2m\).

Proof. Let \(v_1,\dots,v_m\) be the vertices of \(\mathcal{L}\) and set \(Z:=\{x_1, \dots, x_m, y_1, \dots, y_m\}\) to be a set of \(2m\) new vertices. We will construct a blue-green equalizer \(\mathcal{A}\) with \(V(\mathcal{A})=Z\) (Figure 3 illustrates the various steps).

First we take the edges \(x_iy_i\) as blue-green double coloured edges for all \(i \in [m]\), and denote by \(E_1\) their union. Note that these edges give a blue and a green perfect matching on \(V(\mathcal{A})\) and thus we will not add any more edges of these two colours. Moreover, observe that, no matter which edges we add for the other colours, \(\mathcal{A}\) has a blue-green equalizing red matching, namely the empty matching.

Now, for each colour different from blue and green, we have to add a perfect matching of \(Z\) in this colour in order to turn \(\mathcal{A}\) into a \(k\)-configuration. Moreover, we want to choose them in such a way that we can then find a blue-green equalizing red matching \(M\) in \(\mathcal{A}\cup \mathcal{L}\). In order to take advantage of Proposition 11, we will achieve this in the opposite order: We first define which edges we want for \(M\) and then show that it is still possible to add the other colours in a “compatible” way, meaning that the union of \(M\) and any of these matchings is cycle-free (where we also forbid double edges).

We first take \(v_ix_i\) as red edges for all \(i\in [m]\), and denote by \(M_1\) their union (cf. drawing (B1) in Figure 3). Consider the reduced configuration of \(M_1\) with respect to colours blue and green (cf. drawing (B2) in Figure 3). Then its vertex set is \(Y=\{y_1,\dots,y_m\}\) and, by Proposition 11, there exists a partial blue-green equalizing red matching \(M_2\) on \(Y\) (cf. drawings (B3) and (B4) in Figure 3). In particular, \(M_1 \cup M_2\) is a blue-green equalizing red matching for \(\mathcal{A}\cup \mathcal{L}\).

Now we move to the other colours. Since the edges of \(\mathcal{A}\) are allowed to receive more than one colour, we can simply focus on one such colour, say yellow. Since \(M_2\) is a properly partial matching of \(Y\), there are at least two vertices of \(Y\), say \(y\) and \(\bar{y}\), which are not covered by a red edge (cf. drawing (C1) in Figure 3). We start by covering all but \(4\) vertices of \(Z\) with a partial yellow matching, while not using the vertices \(y\) and \(\bar{y}\). This can be done greedily (cf. drawing (C2) in Figure 3). Indeed, let \(\tilde{Z}\) be the set of vertices of \(Z\) which are not (yet) covered by a yellow edge, and suppose \(y, \bar{y} \in \tilde{Z}\) and \(|\tilde{Z}| >4\). Then, take any \(z \in \tilde{Z} \setminus\{y,\bar{y}\}\), consider (if it exists) the red-yellow alternating path starting from \(z\) and let \(z'\) be its other endpoint. Then there exists \(w \in \tilde{Z} \setminus\{y,\bar{y},z\}\) with \(w \neq z'\) and we can add \(zw\) as a yellow edge. (If the alternating path does not exist, we can take as \(w\) any vertex in \(\tilde{Z} \setminus\{y,\bar{y},z\}\).) At the end of the greedy process, there are exactly \(4\) vertices which are not covered by a yellow edge: these are \(y, \bar{y}\) and two more, say \(s\) and \(\bar{s}\). We can finish by adding \(ys\) and \(\bar{y}\bar{s}\) as yellow edges as, since neither \(y\) nor \(\bar{y}\) is covered by a red edge, this addition cannot close a red-yellow cycle (cf. drawing (C3) in Figure 3). We let \(E_2\) be the union of all the edges added in this step (among all colours).

Observe that the configuration on \(Z\) with edges \(E_1 \cup E_2\) gives the desired equalizer (cf. drawing (A) in Figure 3): Indeed, \(E_1 \cup E_2\) is the union of \(k\) perfect matchings of \(Z\), the empty matching is blue-green equalizing for \(\mathcal{A}\), and \(M_1 \cup M_2\) is blue-green equalizing for \(\mathcal{A}\cup \mathcal{L}\).

\(\square\)

Figure 3: (A): A 3-configuration \(\mathcal{L}\) on 10 vertices \(\{v_i\}\) and its blue-green equalizer \(\mathcal{A}\) with vertices \(\{x_i\} \cup \{y_i\}\): The empty matching is blue-green equalizing for \(\mathcal{A}\) and \(M_1 \cup M_2\) is blue-green equalizing for \(\mathcal{A}\cup \mathcal{L}\). (B): We show how we add the blue and green matchings on \(Z\) and how we choose the red edges in \(M_1\) and \(M_2\) in the proof of Lemma 1. (B1): We first add double blue-green coloured edges \(x_iy_i\) and set \(M_1:=\{v_ix_i\}\). (B2): The blue and green edges of the reduced configuration of \(M_1\). (B3): We find a red matching \(M_2\) which equalizes blue and green. (B4): Indeed, the blue and green edges of the reduced configuration of \(M_1 \cup M_2\) are identical. (C): We show how we add the yellow matching on \(Z\) in the proof of Lemma 1. (C1): Once \(M_1\) and \(M_2\) have been fixed, we arbitrarily choose two distinct vertices \(y, \bar{y}\) not covered by a red edge. (C2): We can greedily add a yellow matching covering all but 4 vertices such that \(y, \bar{y}\) are not covered and we do not close a red-yellow cycle. (C3): We add the yellow edges \(sy, \bar{s}\bar{y}\), which completes a yellow perfect matching.

We can then show the existence of absorbers.

Lemma 2. Any \(k\)-configuration \(\mathcal{L}\) of order \(m\) has an absorber.

Proof. We prove the statement by induction on \(k\). If \(k=2\), the statement is true by Lemma 1. Suppose it is true for \(k-1\) and let \(\{c_1,\dots,c_k\}\) be the set of the colours of the matchings in \(\mathcal{L}\). By Lemma 1, there exists an equalizer \(\mathcal{A}\) for \(\mathcal{L}\) for the colours \(c_1\) and \(c_2\). Let \(M_1\) (resp. \(M_2\)) be the \(c_1\)-\(c_2\) equalizing red matching of \(\mathcal{L}\cup \mathcal{A}\) (resp. \(\mathcal{A}\)).

Let \(\mathcal{L}_1\) be the reduced configuration of \(\mathcal{L}\cup \mathcal{A}\) with respect to the matching \(M_1\). Then, in \(\mathcal{L}_1\), the matchings of colour \(c_1\) and \(c_2\) are equal and we can consider them as being the same colour. In particular, by induction, there exists an absorber \(\mathcal{A}_1\) of \(\mathcal{L}_1\) (for the colours \(c_2, \dots, c_k\) and thus \(c_1\) as well). We let \(M_1^1\) (resp. \(M_2^1\)) be the totally equalizing red matching of \(\mathcal{L}_1 \cup \mathcal{A}_1\) (resp. \(\mathcal{A}_1)\). Similarly, let \(\mathcal{L}_2\) be the reduced configuration of \(\mathcal{A}\) with respect to the matching \(M_2\). Then, by induction, there exists an absorber \(\mathcal{A}_2\) of \(\mathcal{L}_2\) (for the colours \(c_2, \dots, c_k\) and thus \(c_1\) as well). We let \(M_1^2\) (resp. \(M_2^2\)) be the totally equalizing red matching of \(\mathcal{L}_2 \cup \mathcal{A}_2\) (resp. \(\mathcal{A}_2)\).

Let \(\mathcal{A}':=\mathcal{A}\cup \mathcal{A}_1 \cup \mathcal{A}_2\) (notice the union is disjoint). Then \(\mathcal{A}'\) is a \(k\)-configuration and is an absorber for \(\mathcal{L}\). Indeed, \(M_1 \cup M_1^1 \cup M_2^2\) is a totally equalizing red matching for \(\mathcal{L}\cup \mathcal{A}'\) while \(M_2 \cup M_2^1 \cup M_1^2\) is a totally equalizing red matching for \(\mathcal{A}'\).

\(\square\)

3.2 Finding absorbers inside configurations

While we have shown that absorbers exist, we still have to justify why we can indeed “find” them inside a given initial \(k\)-configuration. By “finding”, we mean that we can choose a few red edges such that the desired absorber appears inside the reduced configuration. This is the step where we crucially need that every matching has some exclusive edges.

We first show that taking at most two red edges suffices to force a fixed edge to appear in the reduced configuration and in the desired colour, say blue, provided there are enough edges which are exclusively blue.

Lemma 3. Let \(\mathcal{M}\) be a \(k\)-configuration on \(V\) and let \(Z \subseteq V\). Suppose that there are at least \((|Z|+2)\cdot k+1\) edges with both endpoints in \(V \setminus Z\) which are exclusively blue. Let \(x,y\in Z\) such that the blue neighbours of both \(x\) and \(y\) belong to \(V \setminus Z\). Then there exists a partial red matching \(M\) of size \(2\) such that its edges lie outside \(Z\) and its reduced configuration has the following property: The only new edge with both endpoints in \(Z\) is \(xy\) and its colour is blue, and all the others stay the same.

Proof. Let \(x'\) and \(y'\) be the other endpoints of the blue edges adjacent to \(x\) and \(y\), respectively. By assumption \(x',y' \not \in Z\). Set \(T:=N\left(Z \cup \{x',y'\}\right)\), i.e. \(T\) is the set of vertices \(t\) such that \(tw\) is an edge of \(\mathcal{M}\) (of any colour) for some \(w \in Z \cup \{x',y'\}\). Since \(|T| \le (|Z|+2)k\), there exist \(x'',y'' \in V \setminus (Z \cup T)\) such that the edge \(x''y''\) is exclusively blue.

We set \(e:=x'x''\) and \(f:=y'y''\), and claim that \(M:=\{e,f\}\) is a feasible red matching. Indeed, clearly it does not close any alternating red-blue cycle. For any other colour, say green, since \(e,f\) are available edges in \(\mathcal{M}\), none of them is green and thus adding \(M\) does not create any double red-green edge. Suppose now there is an alternating red-green cycle using both these two edges. Then, since there are no other red edges, the cycle has length \(4\) and it is either \(x'x''y''y'\) or \(x'y''y'x''\). The first cannot happen as otherwise \(x''y''\) would be a green edge, while it is exclusively blue by assumption. The latter cannot happen either as otherwise \(x'y''\) would be a green edge, contradicting the definition of \(T\).

Observe that in the reduced configuration of \(M\), the edge \(xy\) is blue. We now show that no other new edge is created in \(Z\) and that all the edges with both endpoints in \(Z\) stay the same. Suppose \(vw\) is a new edge of the reduced configuration of \(M\) with \(v,w \in Z\), and let \(c\) be its colour. Then there is a \(c\)-red alternating path with endpoints \(v\) and \(w\), using some of the added red edges \(e\) and \(f\). By definition of \(T\) and the choice of \(x''\) and \(y''\), among the endpoints of \(e\) (resp. \(f\)) only \(x'\) (resp. only \(y'\)) can be adjacent in colour \(c\) to \(v\) or \(w\), and if this is the case then it cannot be adjacent to both \(v\) and \(w\). Therefore, without loss of generality, we can assume that the \(c\)-red alternating path is \(vx'x''y''y'w\). Since \(x''y''\) is exclusively blue, then the colour \(c\) is blue and the edge \(vw\) is \(xy\), as wanted. Finally, since \(M\) lies outside \(Z\), all the edges with both endpoints in \(Z\) stay the same.

\(\square\)

Next, we apply the previous lemma several times in order to find the desired absorber.

Lemma 4. For all integers \(k,m \ge 2\), there exists a constant \(C=C(k,m)\) such that the following holds for any \(k\)-configuration \(\mathcal{A}\) of order \(m\). Suppose that \(\mathcal{M}\) is a \(k\)-configuration with the property that, for each of the \(k\) colours, \(\mathcal{M}\) has at least \(C\) edges of that colour only. Then there exists a red matching \(M^*\) such that the reduced configuration contains \(\mathcal{A}\).

Proof. Set \(C:=k \cdot [(k+3)m+3]\). Let \(\mathcal{E}:=\{e_1,\dots,e_{\ell}\}\) be the multiset of edges of \(\mathcal{A}\) and observe that \(\ell = (km)/2\). Given the edge \(e_i\), denote its endpoints by \(a_i\) and \(b_i\), and its colour by \(c_i\). Let \(V\) be the vertex set of \(\mathcal{M}\), and fix an independent subset \(Z \subseteq V\) of size \(m\) (i.e. a set spanning no edge of \(\mathcal{M}\)) and any bijection \(\phi:V(\mathcal{A}) \to Z\). (Such a set \(Z\) exists as \(\mathcal{M}\) is the union of \(k\) perfect matchings and has at least \(2C\) vertices.) We show that for each \(i \in [\ell]\)

  1. there exists a red matching \(M^*_i\) of size \(2i\) such that its edges lie outside \(Z\) and the edges of its reduced configuration with both endpoints in \(Z\) are precisely \(\phi(a_j)\phi(b_j)\) in colour \(c_j\) for each \(j \in [i]\).

Then we can set \(M^*:=M^*_{\ell}\) and the lemma follows.

The case \(i=1\) of the above claim follows from Lemma 3 on input \(Z\), \(x=\phi(a_1)\), \(y=\phi(b_1)\) and colour \(c_1\). (Observe that since \(Z\) is an independent set, the neighbours of \(x\) and \(y\) in colour \(c_1\) are outside \(Z\).) Now suppose that the claim holds for some \(i<\ell\) and let \(M^*_i\) be the red matching. We want to apply Lemma 3 to the reduced configuration of \(M^*_i\) (which still contains the set \(Z\)). For that, for each colour, say blue, we need to count the number of its edges which are exclusively blue. So fix an edge of \(\mathcal{M}\) which lies outside \(Z\) and was exclusively blue at the beginning, say \(f:=vw\). The edge remains exclusively blue in the reduced configuration of \(M^*_i\) unless one of the following circumstances happens. First, it could be that one of the endpoints of \(f\) belongs to \(M^*_i\). Since each vertex of \(M^*_i\) is incident to at most one (exclusively) blue edge, this can happen for at most \(|V(M^*_i)|\) such edges. Second, it could be that \(f\) gets an additional colour, i.e. there is another colour, say green, and an alternating green-red path with endpoints \(v\) and \(w\). The number of such alternating green-red paths is at most \(|M^*_i|\) (indeed, the path is completely determined by the first red edge it uses). Therefore the number of edges of the reduced configuration which are exclusively blue is at least \(C-|V(M^*_i)| - k |M^*_i| = C-(k+2)|M^*_i| \ge C - (k+2)km = (m+3)k \ge (|Z|+2)k+1\), where we used our choice of \(C\). Therefore Lemma 3 on input \(Z\), \(x=\phi(a_{i+1})\), \(y=\phi(b_{i+1})\) and colour \(c_{i+1}\) gives a red matching \(M\) and we can set \(M^*_{i+1}:=M^*_i \cup M\). (Observe that, by the claim, the neighbours of \(x\) and \(y\) in colour \(c_{i+1}\) are outside \(Z\).) It is easy to see that \(M^*_{i+1}\) satisfies the claim.

\(\square\)

3.3 Proof of Theorem 3

We can now show the main result of this section: Configurations which, for each colour, contain enough edges of that colour only, admit a red perfect matching.

Proof of Theorem 3. Let \(C:=C(k)\) be a large enough constant for the arguments below to hold and let \(m \in \{k,k+1\}\) be even. Let \(\mathcal{L}_1,\dots,\mathcal{L}_s\) be a list of all possible \(k\)-configurations of order \(m\), up to isomorphism. Note that \(s=O_k(1)\). For every \(j \in [s]\), by Lemma 2, there exists a \(k\)-configuration \(\mathcal{A}_j\) such that \(\mathcal{A}_j\) is an absorber for \(\mathcal{L}_j\). We may assume that the \(\mathcal{A}_j\)’s have pairwise disjoint vertex sets, and we let \(\mathcal{A}^*:=\mathcal{A}_1\cup \dots \cup \mathcal{A}_s\). (Note \(\mathcal{A}^*\) is a \(k\)-configuration.) Now, by Lemma 4, we can find a partial red matching \(M^*_1\) in \(\mathcal{M}\) such that the reduced configuration \(\mathcal{M}_1\) contains \(\mathcal{A}^*\) as a subconfiguration, and we let \(U\) be its vertex set. Note that, since \(\mathcal{A}^*\) is a configuration, \(\mathcal{M}_1\) has no edges between \(U\) and \(W:=V(\mathcal{M}_1)\setminus U\). In \(\mathcal{M}_1[W]\), we run the greedy algorithm to find a partial red matching \(M^*_2\) that covers all but \(m\) vertices. Let \(\mathcal{L}\) be the reduced configuration on these \(m\) vertices. Then it must be isomorphic to \(\mathcal{L}_{j_0}\) for some \(j_0\in [s]\). Now, for each \(j\neq j_0\), there is a totally equalizing red matching for \(\mathcal{A}_j\), and there is a totally equalizing red matching for \(\mathcal{L}_{j_0}\cup \mathcal{A}_{j_0}\) (and thus for \(\mathcal{L}\cup \mathcal{A}_{j_0}\)). Let \(M^*_3\) be the union of all these red matchings. Hence, \(M^*_1\cup M^*_2 \cup M^*_3\) is a totally equalizing matching of \(\mathcal{M}\), and we can easily choose the remaining red edges to find a red perfect matching for \(\mathcal{M}\).

\(\square\)

4 Counting – Proof of Theorem 4

Theorem 4 was stated for edge-disjoint perfect matchings, but we can show that the conclusion still holds if the matchings overlap only mildly, which is crucial for the deduction of Theorem 2.

Theorem 14. Let \(k\in \mathbb{N}\) be fixed. Then the following holds for all \(n\) sufficiently large. Let \(V\) be a set of \(n\) vertices and suppose \(M_1,\dots,M_k\) are perfect matchings on \(V\) such that \(\sum_{i,j:i\neq j} |M_i\cap M_j| \le n^{1/8}\). Then the number of perfect matchings \(M^*\) such that \(M^*\cup M_i\) forms a Hamilton cycle for each \(i\in [k]\) is \(\Theta_k\left(n^{-k/2}(n/e)^{n/2}\right)\).

Theorem 4 is then an immediate corollary of Theorem 14.

As already explained in Section 2.3, given a configuration, we construct the red matching iteratively and, as long as at least \(k+2\) vertices are uncovered, we can choose the next red edge greedily. In order to count the number of red perfect matchings, roughly speaking, we want to estimate the number of choices the greedy algorithm has in each step. It is much more convenient to count ordered matchings in this way, which simply differs from the unordered case by a factor of \((n/2)!\). First, we give a rough sketch of how the counting proof works.

Let \(\mathcal{M}\) be the given configuration and \(M^*=(e_1^*,e_2^*,\dots,e_{n/2}^*)\) be an ordered red perfect matching. For \(0 \le t \le n/2-1\), let \(\mathcal{M}_t\) be the reduced configuration of the matching \(\{e_1^*,\dots,e_{t}^*\}\) with the convention that \(\mathcal{M}_0:=\mathcal{M}\). Moreover, let \(n_t:=n-2t\) denote the order of the reduced configuration at time \(t\), and let \(M_{t,i}\) denote the edges of \(\mathcal{M}_t\) of colour \(i\). The number of possibilities for \(e_{t+1}^*\) is then the number of edges in the reduced configuration that are not occupied by any reduced matching, so it is exactly \[\binom{n_t}{2}-|\cup_{i\in[k]} M_{t,i}|\, .\] Assuming for now that the matchings stay (nearly) edge-disjoint, this amounts roughly to \[\binom{n_t}{2}-k\frac{n_t}{2}= \frac{n_t^2}{2} \cdot \left[ 1-\frac{k+1}{n_t}\right] \approx \frac{n_t^2}{2} \cdot \exp\left(-\frac{k+1}{n_t}\right) \,,\] where we used that \(1-x \approx \exp\left(-x\right)\). By multiplying the choices we get \[\begin{align} \label{eq:counting} \prod_{t=0}^{n/2-1} \left[ \frac{n_t^2}{2} \cdot \exp\left(-\frac{k+1}{n_t}\right) \right] =\left( \prod_{t=0}^{n/2-1} \frac{n_t^2}{2}\right) \cdot \exp\left(-(k+1)\sum_{t=0}^{n/2-1}\frac{1}{n_t}\right) \, . \end{align}\tag{2}\] We can easily see that \[\prod_{t=0}^{n/2-1} \frac{n_t^2}{2} = \frac{(2^{n/2} \cdot (n/2)!)^2}{2^{n/2}} = 2^{n/2} \cdot (n/2)!^2 \, .\] Moreover, \(\sum_{t=0}^{n/2-1}1/n_t\) is the harmonic sum but only over the even integers, and thus it is approximately \(\log (n/2)/2 \approx \log(n)/2\) (cf. Fact 16) and hence the second term in (2) is approximately \(n^{-(k+1)/2}\). Putting everything together, the number of ordered red matchings is approximately \(2^{n/2} \cdot (n/2)!^2 \cdot n^{-(k+1)/2}\). After dividing this quantity by \((n/2)!\), we get that the number of unordered red matchings is approximately \[2^{n/2} \cdot (n/2)! \cdot n^{-(k+1)/2} \approx n^{-k/2}(n/e)^{n/2}\, ,\] where we used Stirling’s approximation \(n! \approx n^{1/2} \cdot (n/e)^{n}\).
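The heuristic can also be checked numerically (a sketch under the idealised assumption that the matchings stay edge-disjoint, with our own function names): working with logarithms, the product of the step-by-step choice counts, divided by \((n/2)!\), differs from \(n^{-k/2}(n/e)^{n/2}\) by a factor that visibly stabilises as \(n\) grows.

```python
import math

def heuristic_log_count(n, k):
    """log of the idealised unordered count: at step t the greedy has
    C(n_t, 2) - k*n_t/2 choices; dividing by (n/2)! converts ordered red
    matchings into unordered ones."""
    total = 0.0
    for t in range(n // 2):
        nt = n - 2 * t
        if nt <= k + 1:                  # the last O_k(1) steps only affect the constant
            break
        total += math.log(nt * (nt - 1 - k) / 2)
    return total - math.lgamma(n / 2 + 1)

def predicted_log(n, k):
    """log of n^(-k/2) * (n/e)^(n/2)."""
    return -(k / 2) * math.log(n) + (n / 2) * (math.log(n) - 1)

for k in (3, 5):
    for n in (10 ** 4, 10 ** 5, 10 ** 6):
        # the difference converges (in n) to a constant depending only on k
        print(k, n, round(heuristic_log_count(n, k) - predicted_log(n, k), 4))
```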

We will argue that this heuristic argument can be made rigorous. For that, there are two main obstacles arising from the assumptions we made in the simplified argument above. Firstly, we cannot assume that the matchings stay edge-disjoint: In fact, the number of choices at each step heavily depends on the already chosen red edges. Secondly, the greedy algorithm does not necessarily produce a red perfect matching, as it is only guaranteed to run as long as at least \(k+2\) vertices are left. The details are different for the lower and upper bound.

For the lower bound, overlap actually helps us in some sense, since then the number of choices for the next red edge is larger and it will be enough to use the bound \(|\cup_{i\in[k]} M_{t,i}| \le k \frac{n_t}{2}\). So the main concern is that towards the end of the process, we do not get stuck. That is, we want to stop the process at some time \(m:=n/2-D\), where \(D\) is a large constant, and be left with a reduced configuration that has small enough overlap so that we can complete the red matching into a red perfect matching by an application of Theorem 3. While this is not always true, we show that, for a uniformly random ordered red matching of size \(m\), the expected overlap of the reduced configuration is constant. Therefore, by Markov’s inequality, a constant fraction of the outcomes can indeed be completed.

For the upper bound, we can ignore the problem that some of the red matchings of size \(m\) might not be completable to a red perfect matching. However, we do have to control carefully the total contribution of the term \(|\cup_{i\in[k]} M_{t,i}|\) over time. Again, we do so for a uniformly random ordered red matching, where the overlap at time \(t\) corresponds to the overlap of the reduced configuration of the first \(t\) edges of the ordered red matching. Intuitively, this is similar to running the greedy process at random, meaning that at each step we choose the next red edge uniformly at random among all the available ones. However, they do not have the same distribution and we need the following statistical result.

Theorem 15. Let \(k \in \mathbb{N}\) be fixed. Then the following holds for \(n\) sufficiently large. Let \(\mathcal{M}\) be a \(k\)-configuration of order \(n\) with matchings \(M_1, \dots, M_k\) and suppose that \(\sum_{i,j:i\neq j}|M_i \cap M_j| \le n^{1/8}\). Let \(M^*\) be a red perfect matching chosen uniformly at random. Then, for every edge \(e\notin E(\mathcal{M})\), we have \[\mathrm{\mathbb{P}}\left[e \in M^*\right]=\Theta_k(1/n).\]

This is actually a corollary of Theorem 14. Indeed, from 14 we know that the number of all red perfect matchings of \(\mathcal{M}\) is \(\Theta_k\left(n^{-k/2}(n/e)^{n/2}\right)\). Then consider the reduced configuration \(\mathcal{M}'\) obtained by taking a fixed \(e \not\in E(\mathcal{M})\) as a red edge. By Item [prop95P952] of 9, the total overlap increases by at most \(2k^2\) and thus we can apply 14 again and get that the number of red perfect matchings of \(\mathcal{M}'\) is \(\Theta_k\left(n^{-k/2}(n/e)^{n/2-1}\right)\). Therefore we can conclude that \[\mathrm{\mathbb{P}}\left[e \in M^*\right]=\frac{\Theta_k\left(n^{-k/2}(n/e)^{n/2-1}\right)}{\Theta_k\left(n^{-k/2}(n/e)^{n/2}\right)} = \Theta_k(1/n)\, .\] In fact, we will not prove Theorem 15 directly since this seems roughly as hard as Theorem 14. Instead, we will prove the following variant for almost-perfect matchings, which suffices for our purposes.

Lemma 5. Let \(1/n \ll 1/D \ll 1/k\) with \(k,D,n \in \mathbb{N}\) and \(k \ge 2\). Let \(\mathcal{M}\) be a \(k\)-configuration of order \(n\) and \(M^*\) be a red matching of size \(m:=n/2-D\) chosen uniformly at random. Then, for every edge \(e\notin E(\mathcal{M})\), we have \[\mathrm{\mathbb{P}}\left[e\in M^*\right] = \frac{1 \pm o_D(1)}{n}\, .\]

The proof is postponed to 5 and utilises the switching method. We remark that for our application, it would be enough to have \(\mathrm{\mathbb{P}}\left[e\in M^*\right] = \Theta_k(1/n)\).

4.1 General observations↩︎

As already pointed out, in our calculation, we will encounter partial sums of the harmonic series and we will use the following simple fact.

Fact 16. Let \(n\) be a positive even integer and \(s < n/2\) be an integer. Set \(n_t:=n-2t\) for each \(t \le s\). Then \(\sum_{t=0}^{s-1} \frac{1}{n_t} = \frac{1}{2} \left(\log \frac{n}{n-2s}\pm O(1) \right)\).

Proof. Using the well-known fact that \(\sum_{i=1}^n \frac{1}{i} = \log n \pm O(1)\), together with the observation that \(n_t=2(n/2-t)\) and \(n/2 -t \in \mathbb{N}\), we get that \[\begin{align} \sum_{t=0}^{s-1} \frac{1}{n_t} &= \frac{1}{2}\left(\frac{1}{n/2}+\frac{1}{n/2-1}+\dots+\frac{1}{n/2-s+1}\right) \\ &= \frac{1}{2} \left( \sum_{t=1}^{n/2} \frac{1}{t} -\sum_{t=1}^{n/2-s} \frac{1}{t} \right) = \frac{1}{2} \left(\log \frac{n/2}{n/2-s} \pm O(1)\right) = \frac{1}{2} \left(\log \frac{n}{n-2s} \pm O(1)\right)\, . \end{align}\]

\(\square\)
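For concreteness, the estimate in 16 is easy to verify numerically; the following two-line check (illustrative only) shows that the partial sum and \(\frac{1}{2}\log\frac{n}{n-2s}\) indeed differ by a bounded amount.

```python
from math import log

n, s = 10**6, 4 * 10**5                       # any even n and any s < n/2
partial = sum(1 / (n - 2 * t) for t in range(s))
print(partial, 0.5 * log(n / (n - 2 * s)))    # the two values differ by O(1)
```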

The next lemma provides the relevant estimates concerning the total overlap of a uniformly random ordered red matching. We recall that the total overlap of a configuration \(\mathcal{M}\) is denoted by \(R(\mathcal{M})\) (cf. 7).

Lemma 6. Let \(1/n \ll 1/D \ll 1/k\) with \(k,D,n \in \mathbb{N}\), and fix a \(k\)-configuration \(\mathcal{M}\) on a vertex set of size \(n\) with \(R(\mathcal{M}) \le n^{1/8}\). For an ordered red matching \(M:=(e_1, \dots, e_{m})\) of size \(m:=n/2-D\) and \(t \in \{0,1,\dots,m\}\), set \(n_t:=n-2t\) and denote by \(R_t(M)\) the total overlap of the reduced configuration of the red matching \(\{e_1,\dots,e_t\}\). Then, for a uniformly random ordered red matching \(M\) of size \(m\), we have \(\mathrm{\mathbb{E}}\left[R_m(M)\right] < D/4\) and \(\mathrm{\mathbb{E}}\left[\sum_{t=0}^{m} \frac{R_t(M)}{n_t^2}\right] = O_{k}(1)\).

In order to establish 6, we first show that the total overlap of a uniformly random ordered red matching has a “self-correcting” behaviour in the following sense. By 9, if the overlap is large, then there are many edges whose choice as the next red edge would decrease the overlap. Moreover, by 5, every available edge is roughly equally likely to be contained in a uniformly random ordered red matching. Therefore, the overlap decreases in expectation. This is captured in the following two results.

Proposition 17. For each \(k \in \mathbb{N}\) and \(r,s>0\), the following holds with \(B:=\frac{4(2s+kr/2)k^2}{r}\) and \(\alpha:=r/4\). Let \(\mathcal{M}\) be a \(k\)-configuration of order \(n\) and \(X\) be an available edge of \(\mathcal{M}\) chosen according to a distribution which satisfies the following property: For each available edge \(e\) of \(\mathcal{M}\) it holds that \[\label{prop:Q95distribution95R95t} r\cdot n^{-2} \le \mathrm{\mathbb{P}}\left[X=e\right] \le s \cdot n^{-2}\, .\qquad{(1)}\] Let \(\mathcal{M}'\) be the reduced configuration of \(X\). Then, provided that \(R(\mathcal{M}) \ge B\), we have \(\mathrm{\mathbb{E}}\left[R(\mathcal{M}')\right] \le (1-\alpha/n)\cdot R(\mathcal{M})\).

Proof. Let \(R':=R(\mathcal{M}')\) and \(R:=R(\mathcal{M})\). For any distinct colours \(i,j \in [k]\), set \(R'_{ij}:=R_{ij}(\mathcal{M}')\) and \(R_{ij}:=R_{ij}(\mathcal{M})\). Recall the definition of good and bad edges in 8. From Property [prop95P951] of 9, there are at least \((R_{ij}/2-k/2)n\) good edges and at most \(n\) bad edges (for colours \(i\) and \(j\)). Together with Property [prop95P952] and assumption (1), we have \[\begin{align} \mathrm{\mathbb{E}}\left[R_{ij}'\right] & \le R_{ij} + 2 \cdot s n^{-2} \cdot (\# \text{ bad edges}) - 1 \cdot r n^{-2} \cdot (\# \text{ good edges}) \\ & \le R_{ij} + \frac{2s+kr/2-rR_{ij}/2}{n}\, . \end{align}\] By summing over all pairs of colours, and using that \(R=\sum_{i,j:i \neq j} R_{ij}\) and \(R'=\sum_{i,j:i \neq j} R_{ij}'\), we get \[\begin{align} \mathrm{\mathbb{E}}\left[R'\right] \le R + \frac{(2s+kr/2)k^2-rR/2}{n} \le R - \frac{\alpha}{n} \cdot R\, , \end{align}\] where the last inequality holds by the definition of \(B\) and \(\alpha\).

\(\square\)

Lemma 7. Let \(n \in \mathbb{N}\) and \(m < n/2\) be an integer. Set \(n_0:=n\) and, for each \(t \in [m]\), set \(n_t:=n-2t\). Let \(R_0, R_1, \dots, R_{m}\) be random variables such that

  1. \(\mathrm{\mathbb{E}}\left[R_{t}-R_{t-1}\right] \le 2\) for each \(t\); and

  2. there exist \(0 < \alpha <2\) and \(B>0\) such that \(\mathrm{\mathbb{E}}\left[R_{t}\right] \le (1-\alpha/n_{t-1}) \cdot \mathrm{\mathbb{E}}\left[R_{t-1}\right]\) for each \(t\) with \(\mathrm{\mathbb{E}}\left[R_{t-1}\right] \ge B\).

Then, for each \(t \in [m]\), we have \(\mathrm{\mathbb{E}}\left[R_t\right] = O_{\alpha}(1) \cdot \mathrm{\mathbb{E}}\left[R_0\right] \cdot \left(\frac{n_t}{n}\right)^{\alpha/2}+B+2\). Moreover \(\mathrm{\mathbb{E}}\left[\sum_{t=0}^{m} \frac{R_t}{n_t^2}\right] = O_{\alpha}\left(\frac{\mathrm{\mathbb{E}}\left[R_0\right]}{n^{\alpha/2}}+B+2\right)\).

Proof. We start by the following claim.

Claim. If there exists \(s \ge 0\) such that \(\mathrm{\mathbb{E}}\left[R_{s+1}\right] \le B\), then \(\mathrm{\mathbb{E}}\left[R_t\right] \le B+2\) for each \(s+1 \le t \le m\). This can be proved by induction on \(t\). If \(t=s+1\), the claim is true by assumption. Fix \(t > s+1\). If \(\mathrm{\mathbb{E}}\left[R_{t-1}\right] < B\), using [prop95S951], we have \(\mathrm{\mathbb{E}}\left[R_t\right] \le \mathrm{\mathbb{E}}\left[R_{t-1}\right] + 2 \le B+2\). If \(\mathrm{\mathbb{E}}\left[R_{t-1}\right] \ge B\), then by [prop95S952], we have \(\mathrm{\mathbb{E}}\left[R_t-R_{t-1}\right] \le 0\), and thus \(\mathrm{\mathbb{E}}\left[R_t\right] \le \mathrm{\mathbb{E}}\left[R_{t-1}\right] \le B+2\).

\(\square\)

Let \(s\) be the least \(t\) with \(0 \le t < m\) such that \(\mathrm{\mathbb{E}}\left[R_{t+1}\right] \le B\) (if it exists, otherwise take \(s=m\)). Then, by Claim [claim:exp95below95constant], \(\mathrm{\mathbb{E}}\left[R_t\right] \le B+2\) for each \(s+1 \le t \le m\). Therefore \[\label{eq:sum95R95t951} \sum_{t=s+1}^{m} \frac{\mathrm{\mathbb{E}}\left[R_t\right]}{n_t^2} \le (B+2) \cdot \sum_{t=s+1}^{m} \frac{1}{n_t^2}= (B+2) \cdot O(1)\, ,\tag{3}\] where we used that \(\sum_{i=1}^{\infty} i^{-2} < \infty\).

On the other hand, by definition of \(s\), for all \(t \in [s]\), we have that \(\mathrm{\mathbb{E}}\left[R_{t}\right] > B\) and thus \[\begin{align} \mathrm{\mathbb{E}}\left[R_t\right] &\le \mathrm{\mathbb{E}}\left[R_0\right] \cdot \prod_{i=0}^{t-1} \left(1-\frac{\alpha}{n_i}\right) \le \mathrm{\mathbb{E}}\left[R_0\right] \cdot \exp\left(-\alpha \cdot \sum_{i=0}^{t-1} \frac{1}{n_i}\right) \\ & \le \mathrm{\mathbb{E}}\left[R_0\right] \cdot \exp\left\{-\frac{\alpha}{2} \left(\log \frac{n}{n-2t} - O(1)\right)\right\} \\ & = O_{\alpha}(1) \cdot \mathrm{\mathbb{E}}\left[R_0\right] \cdot \left(\frac{n-2t}{n}\right)^{\alpha/2}\, , \end{align}\] where the first inequality uses that from [prop95S952] we have \(\mathrm{\mathbb{E}}\left[R_{t}\right] \le \left(1-\frac{\alpha}{n_{t-1}}\right) \cdot \mathrm{\mathbb{E}}\left[R_{t-1}\right]\), the second inequality uses \(1-x \le \exp(-x)\) and the third inequality uses 16. The first part of the lemma follows from the calculation above together with Claim [claim:exp95below95constant].

Moreover, for all \(t \in [s]\), we have \[\frac{\mathrm{\mathbb{E}}\left[R_t\right]}{n_t^2} = O_{\alpha}(1) \cdot \mathrm{\mathbb{E}}\left[R_0\right] \cdot \left(\frac{n-2t}{n}\right)^{\alpha/2} \cdot \frac{1}{(n-2t)^2} = O_{\alpha}(1) \cdot \frac{\mathrm{\mathbb{E}}\left[R_0\right]}{n^{\alpha/2}} \cdot \frac{1}{(n-2t)^{2-\alpha/2}}\, ,\] and thus \[\label{eq:sum95R95t952} \sum_{t=0}^{s} \frac{\mathrm{\mathbb{E}}\left[R_t\right]}{n_t^2}= \frac{\mathrm{\mathbb{E}}\left[R_0\right]}{n^{\alpha/2}} \cdot O_{\alpha}(1)\, ,\tag{4}\] where we used that \(\sum_{i=1}^{\infty} i^{-(2-\alpha/2)} < \infty\) as \(2 - \alpha/2 >1\).

The second part of the lemma then follows from 3 and 4 .

\(\square\)
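To get a feeling for the two assumptions of 7, one can iterate the worst-case behaviour they allow and compare it with the stated bound. The following sketch does exactly this; it is purely illustrative, and the parameter values are arbitrary rather than those used later.

```python
def worst_case_sequence(n, m, r0, alpha, B):
    """Iterate the extremal behaviour permitted by the two assumptions:
    contract by (1 - alpha/n_{t-1}) whenever the current value is at least B
    (second assumption), otherwise increase by 2 (the most the first allows)."""
    xs = [r0]
    for t in range(1, m + 1):
        n_prev = n - 2 * (t - 1)
        x = xs[-1]
        xs.append(x * (1 - alpha / n_prev) if x >= B else x + 2)
    return xs

n, D, alpha, B = 10**5, 100, 0.25, 30
m = n // 2 - D
xs = worst_case_sequence(n, m, r0=500.0, alpha=alpha, B=B)
for t in (0, m // 2, m):
    n_t = n - 2 * t
    bound = xs[0] * (n_t / n) ** (alpha / 2) + B + 2   # lemma bound, up to O_alpha(1)
    print(f"t={t:6d}: value = {xs[t]:7.2f}, bound ~ {bound:7.2f}")
```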

We finish this subsection with the proof of 6.

Proof of 6. Fix \(t \in [m]\). By 10, conditioned on \(e_1, \dots, e_{t-1}\), the ordered matching \((e_t,\dots,e_m)\) is distributed as a uniformly random ordered red matching of size \(m':=m-(t-1)=n_{t-1}/2-D\) of the reduced configuration of \(\{e_1,\dots, e_{t-1}\}\), which we denote by \(\mathcal{M}'\). Therefore, for every edge \(e \not\in E(\mathcal{M}')\), we have \[\begin{align} \mathrm{\mathbb{P}}\left[e_t=e|e_1,\dots,e_{t-1}\right] &= \frac{\# \text{ ordered red matchings of } \mathcal{M}' \text{ with first edge } e}{\# \text{ ordered red matchings of } \mathcal{M}'} \\ &= \frac{(m'-1)! \cdot \# \text{ unordered red matchings of } \mathcal{M}' \text{ containing } e}{(m')! \cdot \# \text{ unordered red matchings of } \mathcal{M}'} \\ &= \frac{1}{m'} \cdot \frac{1 \pm o_D(1)}{n_{t-1}}\\ &=\frac{2 \pm o_D(1)}{n_{t-1}^2}\, , \end{align}\] where the third line uses 5. Therefore, Condition (1) in 17 holds with, say, \(r=1\) and \(s=3\). Then the sequence of random variables \(R_0(\cdot),\dots,R_m(\cdot)\) satisfies the assumptions of 7 with \(\alpha:=1/4\) and \(B:=24k^2+2k^3\): Indeed, Property [prop95S951] holds by Item [prop95P952] of 9 and Property [prop95S952] holds by 17.

We conclude that \[\mathrm{\mathbb{E}}\left[R_m(M)\right] = O(1) \cdot R_0 \cdot \left(\frac{2D}{n}\right)^{1/8}+24k^2+2k^3+2 < D/4 \, ,\] and \[\mathrm{\mathbb{E}}\left[\sum_{t=0}^{m} \frac{R_t(M)}{n_t^2}\right] = O\left(\frac{R_0}{n^{1/8}}+24k^2+2k^3+2\right) = O_{k}(1)\, ,\] where we used that \(R_0=R(\mathcal{M}) \le n^{1/8}\) and \(1/D \ll 1/k\).

\(\square\)

4.2 Lower bound↩︎

Proof of the lower bound of 14. Choose a new constant \(D\) such that \[1/n \ll 1/D \ll 1/k\, ,\] and set \(C:=D/2\). Let \(\mathcal{M}_0:=\mathcal{M}\) be the given configuration, \(m:=n/2-D\), \(n_0:=n\) and, for \(t \in [m]\), \(n_t:=n-2t\).

Let \(\mathcal{X}\) be the set of ordered red matchings \((e_1^*,\dots,e_m^*)\) of size \(m\). We can construct the elements of \(\mathcal{X}\) greedily as follows: Having chosen \(e_1^*, \dots, e_{t}^*\), we choose \(e_{t+1}^*\) to be an available red edge of the reduced configuration of \(e_1^*, \dots, e_{t}^*\) (which has \(n_t\) vertices). Observe that different outcomes give different ordered matchings. Moreover, the number of choices for \(e_{t+1}^*\) is at least \(\binom{n_t}{2}-k \frac{n_t}{2} = \frac{n_t^2}{2} \left(1-\frac{k+1}{n_t}\right) \ge \frac{n_t^2}{2} \cdot \exp\left(-\frac{k+1}{n_t}-2\frac{(k+1)^2}{n_t^2}\right)\), where we used that \(1-x \ge \exp(-x-2x^2)\) for each \(x \in [0,1/4]\) (which is satisfied in our case as \((k+1)/n_t \le (k+1)/(2D) \ll 1\) for \(t \in [m]\)). Therefore we have \[\begin{align} |\mathcal{X}| &\ge \prod_{t=0}^{m-1} \left[\binom{n_t}{2}-k \frac{n_t}{2}\right] \\ &= \Omega_D(1) \cdot \prod_{t=0}^{n/2-1} \frac{n_t^2}{2} \cdot \exp\left(-\frac{k+1}{n_t}-2\frac{(k+1)^2}{n_t^2}\right) \\ &= \Omega_D(1) \cdot \left(\prod_{t=0}^{n/2-1} \frac{n_t^2}{2}\right) \cdot \exp\left(-(k+1) \sum_{t=0}^{n/2-1} \frac{1}{n_t}\right) \cdot \exp\left(-2(k+1)^2 \sum_{t=0}^{n/2-1} \frac{1}{n_t^2}\right) \, , \end{align}\] where the second line uses the inequality above and that if \(m < t < n/2\), then \(2 \le n_t \le 2D\), while the third line is just rearranging. Using 16, together with \(\prod_{t=0}^{n/2-1} \frac{n_t^2}{2}=2^{n/2} \cdot (n/2)!^2\) and \(\sum_{i=1}^{\infty} i^{-2} < \infty\), we have \[\begin{align} |\mathcal{X}| &= \Omega_D(1) \cdot 2^{n/2} \cdot (n/2)!^2 \cdot \exp\left(-(k+1) \frac{\log(n/2)-O(1)}{2}\right) \\ & = \Omega_D(1) \cdot 2^{n/2} \cdot (n/2)!^2 \cdot n^{-(k+1)/2}\, . \end{align}\]

We now estimate the proportion of matchings in \(\mathcal{X}\) that can be completed to a full red matching. Let \(M_1^* \in \mathcal{X}\) and \(\mathcal{M}':=\{M_1',\dots,M_k'\}\) be its reduced configuration (of order \(2D\)), say on vertex set \(V'\). Suppose that \[\label{eq:matching95can95be95completed} \left|M_i' \setminus \bigcup_{j \neq i} M_j'\right| \ge C \text{ for each } i \in [k]\, .\tag{5}\] Then, by 3, \(\mathcal{M}'\) has a red perfect matching \(M_2^*\) (covering \(V'\)), which we arbitrarily order. By concatenating (in this order) \(M_1^*\) and \(M_2^*\), we obtain an ordered red perfect matching \(M^*\) and note that different \(M_1^*\) give different \(M^*\). Also observe that if there exists \(i \in [k]\) such that \(\left|M_i' \setminus \bigcup_{j \neq i} M_j'\right| < C\), then \(R(\mathcal{M}') \ge \sum_{j:j \neq i} |M_j' \cap M_i'| \ge D-C >0\).

By 6, for the reduced configuration \(\mathcal{M}'\) of a uniformly random ordered red matching of size \(m\), we have \(\mathrm{\mathbb{E}}\left[R(\mathcal{M}')\right] < D/4\). By Markov’s inequality, \[\mathrm{\mathbb{P}}\left[\mathcal{M}' \text{ does not satisfy }~\eqref{eq:matching95can95be95completed}\right] \le \mathrm{\mathbb{P}}\left[R(\mathcal{M}') \ge D-C\right] \le \frac{\mathrm{\mathbb{E}}\left[R(\mathcal{M}')\right]}{D-C} < \frac{1}{2}\, ,\] where we also used that \(C=D/2\).

Therefore at least half of the partial ordered red matchings in \(\mathcal{X}\) can be completed to a full ordered red perfect matching. Recalling that these matchings are ordered, we conclude that the number of unordered red perfect matchings is at least \(\frac{|\mathcal{X}|/2}{(n/2)!}= \Omega_D(1) \cdot 2^{n/2-1} \cdot (n/2)! \cdot n^{-(k+1)/2}\). This finishes the proof of the lower bound of 14 as \(n!=\Theta(n^{1/2}\cdot (n/e)^{n})\) by Stirling’s approximation.

\(\square\)

4.3 Upper bound↩︎

As already mentioned, a common technique to prove upper bounds on the number of certain combinatorial objects is the entropy method. Usually it is carried out from scratch for each individual problem. We make an attempt to formulate a general lemma that could potentially be applied to other problems as well. Although the proof steps are just the usual entropy arguments, we find the general statement quite appealing and, to the best of our knowledge, it has not been stated explicitly before.

The setup is the following. Let \(\Omega_1, \dots, \Omega_m\) be finite sets and \(\mathcal{X}\subseteq \Omega_1 \times \dots \times \Omega_m\). Our goal is to give an upper bound on the size of \(\mathcal{X}\). A natural approach is to count the number of possibilities for \(x=(x_1,\dots,x_m)\in \mathcal{X}\) by choosing \(x_1, x_2, \dots, x_m\) sequentially one after the other and accounting for the number of choices available at each step. The issue is that having chosen \(x_1, \dots, x_i\), the number of choices for \(x_{i+1}\) can heavily depend on the previous choices. A bound on \(|\mathcal{X}|\) can still be obtained by considering the worst case, but this provides bad estimates. Our goal is to consider instead the average case.

One issue with formalizing a general statement is that the definition of a suitable choice in a specific step depends heavily on the concrete problem. To overcome this, we make the following abstract definition, which defines a suitable choice simply as a choice for which there is at least one completion to a full element of \(\mathcal{X}\).

For \(x:=(x_1,\dots,x_m)\in \mathcal{X}\) as above and \(t \in [m]\), define \[\mathcal{S}_t(x):=\left\{z \in \Omega_{t}: \begin{array}{c} \text{there exists } x' \in \mathcal{X}\text{ with } x'_{t}=z \text{ and}\\ x'_{i}=x_{i} \text{ for each } i \in [t-1] \end{array} \right\}\, ,\] and observe that \(\mathcal{S}_t(x) \neq \emptyset\) (since \(x_{t} \in \mathcal{S}_t(x)\)). Moreover \(\mathcal{S}_t(x)\) actually depends only on \(x_{1}, \dots, x_{t-1}\) and thus we will denote it by \(\mathcal{S}(x_{1}, \dots, x_{t-1})\) as well (and by \(\mathcal{S}_1\) when \(t=1\)). Finally, define \(s_t(x):=|\mathcal{S}_t(x)|\). Trivially, we have \(|\mathcal{X}| \le \prod_{t \in [m]} \max_{x \in \mathcal{X}} \left[ s_t(x) \right]\). The following lemma now says that instead of considering the worst case in each step, we can take the average over a random element from \(\mathcal{X}\). We remark that, although the definition of \(s_t(x)\) is quite abstract, in practice one can easily obtain upper bounds on \(s_t(x)\) by discarding all choices that are obviously not completable due to some condition that is specific to the problem at hand.

Lemma 8. Let \(\Omega_1, \dots, \Omega_m\) be finite sets and \(\mathcal{X}\subseteq \Omega_1 \times \dots \times \Omega_m\) with \(\mathcal{X}\neq \emptyset\). Then \[|\mathcal{X}| \le \prod_{t \in [m]} \mathrm{\mathbb{E}}_{x \in \mathcal{X}}\left[ s_t(x) \right]\, .\]

Note that in the proof we do not explicitly use the notion of entropy. We only rely on the AM-GM inequality and basic laws of probability, but the proof steps naturally correspond to the usual entropy proofs.

Proof of Lemma 8. Define a random variable \(Z = (z_1,\dots,z_m) \in \mathcal{X}\) as follows. We let \(z_{1}\) be a uniformly random element from \(\mathcal{S}_1\). (That is, \(z_{1}\) is an element of \(\Omega_{1}\) chosen uniformly at random among those which can be completed to an element of \(\mathcal{X}\).) Then, assuming we have chosen \(z_{1}, \dots, z_{t-1}\), we let \(z_{t}\) be a uniformly random element from \(\mathcal{S}(z_{1}, \dots, z_{t-1})\). Observe that \(Z \in \mathcal{X}\) by construction.

For \(x \in \mathcal{X}\), we have \[\begin{align} \mathrm{\mathbb{P}}\left[Z=x\right] = \mathrm{\mathbb{P}}\left[\bigcap_{t \in [m]} \{z_{t}=x_{t}\}\right] & = \prod_{t \in [m]} \mathrm{\mathbb{P}}\left[z_{t}=x_{t} \Big| \begin{array}{c} z_i=x_{i} \\ \text{ for each } i \in [t-1] \end{array}\right]\\ &= \prod_{t \in [m]} \frac{1}{s_t(x)}\, , \end{align}\] where we recall that since \(x \in \mathcal{X}\), we have \(x_{t} \in \mathcal{S}_t(x)\) and thus \(s_t(x) \neq 0\) for all \(t\). Therefore, \[\begin{align} \log|\mathcal{X}| &= \log \frac{|\mathcal{X}|}{\sum_{x \in \mathcal{X}} \mathrm{\mathbb{P}}\left[Z=x\right]} \le \frac{1}{|\mathcal{X}|} \cdot \log \frac{1}{\prod_{x \in \mathcal{X}} \mathrm{\mathbb{P}}\left[Z=x\right]} \\ &= \frac{1}{|\mathcal{X}|} \cdot \log\left( \prod_{x \in \mathcal{X}} \prod_{t \in [m]} s_t(x)\right) = \sum_{t \in [m]} \frac{1}{|\mathcal{X}|} \sum_{x \in \mathcal{X}} \log\left( s_t(x)\right) \le \sum_{t \in [m]} \log\left( \frac{1}{|\mathcal{X}|} \sum_{x \in \mathcal{X}} s_t(x)\right) \, , \end{align}\] where the first equality uses that, since \(Z \in \mathcal{X}\), we have \(\sum_{x \in \mathcal{X}} \mathrm{\mathbb{P}}\left[Z=x\right] = 1\), the first inequality uses the AM-GM inequality, and the second inequality follows from Jensen’s inequality and the concavity of the \(\log\)-function. By taking the exponential on both sides, we get \[|\mathcal{X}| \le \prod_{t \in [m]} \frac{1}{|\mathcal{X}|} \sum_{x \in \mathcal{X}} s_t(x) = \prod_{t \in [m]} \mathrm{\mathbb{E}}_{x \in \mathcal{X}}\left[ s_t(x) \right]\, ,\] as desired.

\(\square\)
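Since 8 is stated rather abstractly, it may be helpful to see it verified by brute force on tiny examples. The following sketch (illustrative only; the helper names are ours) computes both sides of the inequality for random subsets \(\mathcal{X}\subseteq \Omega_1 \times \Omega_2 \times \Omega_3\).

```python
from itertools import product
import random

def s_t(X, x, t):
    """|S_{t+1}(x)| in the notation of the lemma (here t is 0-indexed): the
    number of values z that agree with x on the first t coordinates and
    extend to an element of X."""
    return len({y[t] for y in X if y[:t] == x[:t]})

def entropy_bound(X, m):
    """prod_t E_{x in X}[ s_t(x) ], the right-hand side of the lemma."""
    rhs = 1.0
    for t in range(m):
        rhs *= sum(s_t(X, x, t) for x in X) / len(X)
    return rhs

random.seed(0)
universe = list(product(range(4), repeat=3))        # Omega_1 x Omega_2 x Omega_3
for _ in range(5):
    X = random.sample(universe, 20)
    print(len(X), round(entropy_bound(X, 3), 2), len(X) <= entropy_bound(X, 3) + 1e-9)
```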

Proof of the upper bound of 14. Choose a new constant \(D\) such that \[1/n \ll 1/D \ll 1/k\, .\] Set \(m:=n/2-D\), \(n_0:=n\) and, for \(t \in [m]\), \(n_t:=n-2t\).

Let \(\mathcal{X}\) be the set of ordered red matchings of size \(m\). For an ordered matching \(M:=(e_1, \dots, e_{m}) \in \mathcal{X}\) and \(t \in [m]\), define \(s_t(M)\) to be the number of edges \(e\) of \(K_n\) such that there exists an ordered matching \(M':=(e_1', \dots, e_{m}') \in \mathcal{X}\) with \(e_i'=e_i\) for each \(i \in [t-1]\) and \(e_{t}'=e\). Moreover, denote by \(\mathcal{M}_{t-1}(M)\) the reduced configuration of the red matching \(\{e_1, \dots, e_{t-1}\}\). Then, we have \[|\mathcal{X}| \le \prod_{t \in [m]} \mathrm{\mathbb{E}}_{M \in \mathcal{X}}\left[ s_t(M) \right] \le \prod_{t=0}^{m-1} \mathrm{\mathbb{E}}_{M \in \mathcal{X}}\left[ \binom{n_t}{2}- e(\mathcal{M}_t(M)) \right]\, ,\] where the first inequality uses 8 and the second one follows from the fact that if an edge is counted by \(s_{t}(M)\), then it must be an available edge of \(\mathcal{M}_{t-1}(M)\). For a more pleasant notation, we drop the subscript in \(\mathrm{\mathbb{E}}_{M \in \mathcal{X}}[\cdot]\) from now on, since we will always take expectations with respect to a uniformly random matching \(M\) from \(\mathcal{X}\).

Letting \(\left(\mathcal{M}_t(M)\right)_i\) be the set of edges of colour \(i\) of \(\mathcal{M}_t(M)\) and setting \(R_t(M):=R(\mathcal{M}_t(M))\), note that \[\begin{align} e(\mathcal{M}_t(M)) &= |\cup_{i \in [k]} (\mathcal{M}_t(M))_i| \\ &\ge \sum_{i \in [k]} |(\mathcal{M}_t(M))_i| - \sum_{i,j: i \neq j} |(\mathcal{M}_t(M))_i \cap (\mathcal{M}_t(M))_j|=k \frac{n_t}{2}- R_t(M)\, , \end{align}\] and thus \(\binom{n_t}{2}- e(\mathcal{M}_t(M)) \le \binom{n_t}{2}-k \frac{n_t}{2}+ R_t(M) = \frac{n_t^2}{2} \cdot \left[1-\frac{k+1}{n_t}+\frac{R_t(M)}{n_t^2}\right]\). Then \[\begin{align} |\mathcal{X}| & \le \prod_{t=0}^{m-1} \frac{n_t^2}{2} \cdot \left[1-\frac{k+1}{n_t}+\frac{\mathrm{\mathbb{E}}\left[R_t(M)\right]}{n_t^2}\right] \\ & \le \prod_{t=0}^{m-1} \frac{n_t^2}{2} \cdot \exp\left(-\frac{k+1}{n_t}+\frac{\mathrm{\mathbb{E}}\left[R_t(M)\right]}{n_t^2}\right) \\ &= \left(\prod_{t=0}^{m-1} \frac{n_t^2}{2}\right) \cdot \exp\left(-(k+1) \sum_{t=0}^{m-1} \frac{1}{n_t}\right) \cdot \exp\left(\mathrm{\mathbb{E}}\left[\sum_{t=0}^{m-1} \frac{R_t(M)}{n_t^2}\right]\right) \, , \end{align}\] where we used that \(1+x \le e^x\) for each \(x \in \mathbb{R}\) in the second line. Observe that \(\prod_{t=0}^{m-1} \frac{n_t^2}{2} \le \prod_{t=0}^{n/2-1} \frac{n_t^2}{2} = 2^{n/2} \cdot (n/2)!^2\) and, by 16, it holds that \(\exp\left(-(k+1) \sum_{t=0}^{m-1} \frac{1}{n_t}\right) =O_D(1) \cdot n^{-(k+1)/2}\). Moreover, by 6, \(\mathrm{\mathbb{E}}\left[\sum_{t=0}^{m} \frac{R_t(M)}{n_t^2}\right] = O_{k}(1)\).

We can then conclude that \(|\mathcal{X}| = O_D(1) \cdot 2^{n/2} \cdot (n/2)!^2 \cdot n^{-(k+1)/2}\). Finally observe that if \(M:=(e_1,\dots, e_{n/2})\) is an ordered red perfect matching, then \((e_1,\dots,e_{m})\) is an ordered red matching of size \(m\). Moreover, every ordered matching of size \(m\) can be completed to an ordered red perfect matching in \(O_D(1)\) ways. Therefore the number of ordered red perfect matchings is \(O_D(1) \cdot |\mathcal{X}|\). After dividing this bound by \((n/2)!\) to account for the ordering, we conclude that the number of unordered red perfect matchings is \(O_D(1) \cdot 2^{n/2} \cdot (n/2)! \cdot n^{-(k+1)/2}\). This finishes the proof of the upper bound of 14 as \(n!=\Theta(n^{1/2}\cdot (n/e)^{n})\) by Stirling’s approximation.

\(\square\)

5 Switching lemma↩︎

In this section, we prove 5 by using the switching method, which relies on the following strategy. Suppose we have two disjoint finite sets \(S, T\), and an operation, usually called a switch, that takes an object in \(S\) and modifies it to produce an object in \(T\). If \(d_S\) is the average number of switchings that can be applied to an object in \(S\) and \(d_T\) is the average number of switchings that result in a given object of \(T\), then double counting gives \(d_S|S| = d_T |T|\). An estimate of the relative values of \(d_S\) and \(d_T\) thus provides an estimate of the relative sizes of \(S\) and \(T\). This is particularly useful when it is hard to determine the sizes of \(S\) and \(T\) directly.
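As a toy illustration of the double counting behind the identity \(d_S|S| = d_T|T|\) (unrelated to matchings), one can take \(S\) to be the \(3\)-element subsets of \([n]\), \(T\) the \(2\)-element subsets, and the switch to be the deletion of one element; the following sketch counts the switches from both sides.

```python
from itertools import combinations

n = 7
S = [set(c) for c in combinations(range(n), 3)]   # "larger" objects
T = [set(c) for c in combinations(range(n), 2)]   # "smaller" objects

switches_from_S = sum(len(s) for s in S)          # d_S * |S|: ways to delete an element
switches_into_T = sum(n - len(t) for t in T)      # d_T * |T|: ways a switch can produce t
print(switches_from_S, switches_into_T)           # both equal 3*C(n,3) = (n-2)*C(n,2)
```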

We now move to our setting. For the rest of the section, we fix a \(k\)-configuration \(\mathcal{M}\) of order \(n\) and an edge \(e:=ab \not\in E(\mathcal{M})\). We assume that \[1/n \ll 1/D \ll 1/k\, ,\] denote by \(\mathcal{R}\) the set of red matchings of \(\mathcal{M}\) of size \(m:=n/2-D\), and define the following partition of \(\mathcal{R}\): \[\label{eq:partition95classes} \begin{gather} A:=\{M \in \mathcal{R}: e \in M\}\, , \\ X:=\{M \in \mathcal{R}: e \not\in M \text{ and } a, b \in V(M)\}\, , \\ Y:=\{M \in \mathcal{R}: \text{exactly one of } a \text{ and } b \text{ is in } V(M)\}\, ,\\ Z:=\{M \in \mathcal{R}: a, b \not\in V(M)\}\, . \end{gather}\tag{6}\]

We first prove the upper bound of 5, which is relatively straightforward and serves to illustrate the method.

Proof of the upper bound of 5. It is enough to compare \(|A|\) and \(|X|\). Define the bipartite graph \(H\) with parts \(A\) and \(X\) as follows: For \(M \in A\) and \(M' \in X\), we add the edge \(MM'\) if \(M'\) can be obtained from \(M\) by deleting \(e\) and another red edge, and adding two new red edges joining \(a\) and \(b\), respectively, to vertices of \(V \setminus V(M)\). Then, for every \(M \in A\), we have \(\deg_H(M) \ge (n/2-D-1)(2D-k)(2D-k-1) \ge (2-o_D(1))D^2 n\), as we must remove \(e\) (this is a fixed choice) and another red edge (for which we have \(n/2-D-1\) choices), then add an available red edge from \(a\) to a vertex in \(V \setminus V(M)\) (for which we have at least \(2D-k\) choices) and, after that, an available red edge from \(b\) to a vertex in \(V \setminus V(M)\) not yet used for \(a\) (for which we have at least \(2D-k-1\) choices). Similarly, let \(M' \in X\) and \(a', b' \in V(M')\) such that \(aa',bb' \in M'\). We have \(\deg_H(M') \le \binom{2D+2}{2}\), as the edges to be removed are fixed and we must add \(e\) (if possible) and a red edge in \((V \setminus V(M')) \cup \{a',b'\}\) (for which we have at most \(\binom{2D+2}{2}\) choices). Then \(|A| \cdot \min_{M \in A} \deg_H(M) \le e(H) \le |X| \cdot \max_{M' \in X} \deg_H(M')\) and thus \[\label{eq:upper95bound95A} \frac{|A|}{|X|} \le \frac{\binom{2D+2}{2}}{(2-o_D(1))D^2 n} = \frac{1 + o_D(1)}{n}\, .\tag{7}\]

The result follows as \(\mathrm{\mathbb{P}}\left[e \in M^*\right] = \frac{|A|}{|A|+|X|+|Y|+|Z|} \le \frac{|A|}{|X|} \le \frac{1 + o_D(1)}{n}\).

\(\square\)

Unfortunately, such an easy switching argument does not go through for the lower bound. Indeed, a lower bound on \(|A|/|X|\) would require a lower bound on \(\deg_H(M')\) (where we are using the same notation as in the proof above). The issue is that some \(M' \in X\) may be isolated in the graph \(H\) as, after removing the two red edges incident to \(a\) and \(b\), it is not guaranteed that we can add \(e\) as a red edge. The main work of this section is to analyse the proportion of matchings for which \(e\) is “blocked”. Since we anyway have to delete the red edges at \(a\) and \(b\) before we can possibly add \(e\), we will only consider matchings in \(Z\) for this.

Definition 18 (Blocked edge and blocking path). Given a colour \(i\) and a red matching \(M\) with \(a,b \not\in V(M)\), we say that \(e\) is \(i\)-blocked (in \(M\)) if \(e\) appears in colour \(i\) in the reduced configuration of \(M\), that is, if the maximal alternating \(i\)-red path starting at \(a\) ends at \(b\). We call such a path the \(i\)-blocking path. We say that \(e\) is blocked if there exists a colour \(i\) such that \(e\) is \(i\)-blocked.

Figure 4: Let e=ab. Then e is blocked in colour green, but not blocked in colour blue.

Note that for a matching \(M\in Z\) and a colour \(i\), the graph induced by \(M\) and the matching of \(\mathcal{M}\) of colour \(i\) decomposes into a collection of vertex-disjoint alternating \(i\)-red paths, with each path starting and ending with an edge of colour \(i\). (Observe that we may have paths consisting of a single edge of colour \(i\).) Moreover the endpoints of these paths are exactly the \(2D\) vertices not covered by \(M\), and the path containing \(a\) contains \(b\) as well if and only if \(e\) is \(i\)-blocked. Hence, heuristically, we would expect that for a random matching \(M\) from \(Z\), the probability that \(e\) is \(i\)-blocked is \(O(1/D)\). We formalise this argument after introducing the relevant definitions.

For each colour \(i \in [k]\), we define \[\begin{gather} B^i:=\{M \in \mathcal{R}: a, b \not\in V(M) \text{ and e is i-blocked}\}\, ,\\ N^i:=\{M \in \mathcal{R}: a, b \not\in V(M) \text{ and e is not i-blocked}\}\, . \end{gather}\] Then we let \(B:=\cup_{i \in [k]} B^i\) and \(N:= Z \setminus B\). Note that a red matching \(M \in \mathcal{R}\) belongs to \(N\) if and only if \(a,b \not \in V(M)\) and \(e\) is not blocked in any colour.

Lemma 9. For each colour \(i \in [k]\), we have \(|B^i| = O(1/D) \cdot |Z|\).

Fix a colour \(i \in [k]\). For an integer \(\ell \ge 1\), we set \(B_{\ell}^i\) to be the subset of \(B^i\) consisting of the matchings for which the \(i\)-blocking path has exactly \(\ell\) red edges. Similarly, we let \(N_{\ell}^i\) be the subset of \(N^i\) consisting of the matchings for which the maximal alternating \(i\)-red path starting at \(a\) and the one starting at \(b\) have in total exactly \(\ell\) red edges. (Recall these two paths are distinct by definition of \(N^i\).) The first step is to compare \(B^i_\ell\) with \(N^i_{\ell-1}\).

Lemma 10. For each colour \(i \in [k]\) and integer \(\ell \ge 1\), we have \(|B_{\ell}^i| \le \frac{n}{\ell \cdot \Omega(D^2)} \cdot |N_{\ell-1}^i|\).

Proof. Suppose \(i\) is the colour blue. For \(M \in B_{\ell}^i\), let \(P_M\) be the blue-blocking path. For \(M' \in N_{\ell-1}^i\), let \(Q_{M'}^a\) and \(Q_{M'}^b\) be the maximal alternating blue-red paths starting at \(a\) and \(b\), respectively. Define \(H\) to be the bipartite graph with parts \(B_{\ell}^i\) and \(N_{\ell-1}^i\) where \(MM' \in E(H)\) if and only if \(M'\) can be obtained from \(M\) by removing a red edge \(f\) of \(P_M\) and adding a new red edge \(f'\) with endpoints in \(V \setminus (V(M) \cup \{a,b\})\) (cf. 5).

Then, for every \(M \in B^i_\ell\), we have \(\deg_H(M)= \ell \cdot \Omega(D^2)\), as \(f\) can be chosen in \(\ell\) ways and \(f'\) in at least \(\binom{2D-2}{2}-k \cdot (D-1)\) ways. Moreover, for every \(M' \in N_{\ell-1}^i\), we have \(\deg_H(M') \le n\) as, if possible, we must add the red edge joining the endpoints of \(Q_{M'}^a\) and \(Q_{M'}^b\) (different from \(a\) and \(b\)), and remove a red edge of \(M'\) not in the path \(Q_{M'}^a\) or \(Q_{M'}^b\) (for which we have at most \(n\) choices).

The lemma then follows from \(\ell \cdot \Omega(D^2) \cdot |B_{\ell}^i|=e(H) \le n \cdot |N_{\ell-1}^i|\).

\(\square\)

Figure 5: Switching between B^i_\ell and N^i_{\ell-1} in 10.

When \(\ell\) is linear (in \(n\)), 10 provides a good bound, but this is not the case anymore if \(\ell\) is small. However, we are able to show that there are only a few matchings in \(B^i\) whose \(i\)-blocking path is short, and we do so by using the following lemma.

Lemma 11. Let \(\ell \le n/10\) and \(T:=\{e_1, e_2, \dots, e_{\ell}\}\) be a red matching such that none of the edges \(e_i\) is incident to \(a\) or \(b\). Then for a uniformly random red matching \(M^*\) of size \(n/2-D\) and any \(x,y \not \in \cup_{i \in [\ell]} e_i\), it holds that \[\mathrm{\mathbb{P}}\left[xy \in M^* \Big| \begin{array}{c} a,b \not \in V(M^*) \text{ and }\\ T \subseteq M^* \end{array} \right] = O\left( \frac{1}{n}\right)\, .\]

Proof. The proof is identical to that of the upper bound of 5, which we already proved at the beginning of the section. Let \(\tilde{\mathcal{R}}\) be the set of red matchings \(M\) of size \(n/2-D\) such that \(a, b \not \in V(M)\) and \(T \subseteq M\). Define \(\tilde{A}:=\{M \in \tilde{\mathcal{R}}: xy \in M\}\) and \(\tilde{X}:=\{M \in \tilde{\mathcal{R}}: xy \not\in M \text{ and } x, y \in V(M)\}\). Then the required probability is at most \(|\tilde{A}|/|\tilde{X}|\).

Define the bipartite graph \(H\) with parts \(\tilde{A}\) and \(\tilde{X}\) as follows: For \(M \in \tilde{A}\) and \(M' \in \tilde{X}\), we add the edge \(MM'\) if \(M'\) can be obtained from \(M\) by deleting \(xy\) and another red edge not in \(T\), and adding two new red edges joining \(x\) and \(y\), respectively, to vertices of \(V \setminus (V(M) \cup \{a,b\})\). Then, for every \(M \in \tilde{A}\), we have \(\deg_H(M) \ge (n/2-D-1-|T|)(2D-2-k)(2D-3-k)\). Similarly, for every \(M' \in \tilde{X}\), we have \(\deg_H(M') \le \binom{2D}{2}\), as the edges to be removed are fixed and we must add \(xy\) (if possible) and a red edge with both endpoints in \(V \setminus (V(M') \cup \{a,b\})\). The result follows from \(|\tilde{A}| \cdot \min_{M \in \tilde{A}} \deg_H(M) \le e(H) \le |\tilde{X}| \cdot \max_{M' \in \tilde{X}} \deg_H(M')\).

\(\square\)

Finally we are able to prove 9.

Proof of 9. Let \(M^*\) be a uniformly random red matching of size \(n/2-D\) and set \(\ell^*:=n/(10D)\). Partition \(B^i=B^i_{< \ell^*} \cup B^i_{\ge \ell^*}\), where \(B^i_{< \ell^*}:=\bigcup_{\ell < \ell^*} B^i_{\ell}\) and \(B^i_{\ge \ell^*}:=\bigcup_{\ell \ge \ell^*} B^i_{\ell}\).

Fix \(\ell < \ell^*\) and a colour \(i\). Define \(\mathcal{E}^i_\ell\) to be the event that there is an \(i\)-red alternating path starting with an edge of colour \(i\) at \(a\) and ending with an edge of colour \(i\) at \(b\) containing exactly \(\ell\) red edges. We bound the probability of \(\mathcal{E}^i_\ell\) by conditioning on the first \(\ell-1\) red edges on such a path. So let \(\mathcal{T}\) be the set of partial red matchings \(T\) of size \(\ell-1\) such that the maximal alternating \(i\)-red path starting with an edge of colour \(i\) at \(a\) does not contain \(b\) and has exactly \(\ell-1\) red edges (so these are precisely the edges of \(T\)). Let \(a'\) be the endpoint of this path different from \(a\), and \(b'\) such that \(bb'\) is an edge of colour \(i\). Then, for each \(T \in \mathcal{T}\), we have \[\mathrm{\mathbb{P}}\left[\mathcal{E}^i_\ell \Big| \begin{array}{c} a,b \not \in V(M^*) \text{ and } \\ T \subseteq M^* \end{array} \right] = \mathrm{\mathbb{P}}\left[a' b' \in M^* \Big| \begin{array}{c} a,b \not \in V(M^*) \text{ and } \\ T \subseteq M^* \end{array}\right] = O(1/n)\, ,\] where the last equality uses 11. Therefore, since if \(\mathcal{E}^i_\ell\) holds then \(T \subseteq M^*\) for some \(T \in \mathcal{T}\) and since the events \(\{T \subseteq M^*\}_{T \in \mathcal{T}}\) are pairwise disjoint, by the law of total probability it holds that \[\mathrm{\mathbb{P}}\left[\mathcal{E}^i_\ell | a,b \not \in V(M^*)\right] = \sum_{T \in \mathcal{T}} \mathrm{\mathbb{P}}\left[\mathcal{E}^i_\ell \Big| \begin{array}{c} a,b \not \in V(M^*) \text{ and } \\ T \subseteq M^* \end{array}\right] \cdot \mathrm{\mathbb{P}}\left[T \subseteq M^* \Big| a,b \not \in V(M^*)\right] = O(1/n)\, .\] The last line is equivalent to \(|B^i_{\ell}| = O(1/n) \cdot |Z|\), and this holds for each \(\ell < \ell^*\). By a union bound over all \(\ell < \ell^*\), we have \(|B^i_{<\ell^*}| \le O(1/D) \cdot |Z|\).

For \(\ell \ge \ell^*\), we get from 10 that \[|B_{\ell}^i| \le \frac{n}{\ell \cdot \Omega(D^2)} \cdot |N_{\ell-1}^i| \le \frac{n}{\ell^* \cdot \Omega(D^2)} \cdot |N_{\ell-1}^i| = O(1/D) \cdot |N_{\ell-1}^i|\, .\] Therefore \[|B^i_{\ge \ell^*}| = \sum_{\ell \ge \ell^*} |B^i_\ell| = O(1/D) \cdot \sum_{\ell \ge \ell^*} |N^i_{\ell-1}| \le O(1/D) \cdot |Z|\, .\] Overall we get \(|B^i|=|B^i_{< \ell^*}| + |B^i_{\ge \ell^*}| = O(1/D) \cdot |Z|\).

\(\square\)

We are now ready to complete the proof of the switching lemma.

Proof of the lower bound of 5. We compare the sizes of the sets \(A,X,Y,Z\), which partition \(\mathcal{R}\) and have been defined in 6.

We start with \(A\) and \(Z\), arguing via \(N\). Define the bipartite graph \(H\) with parts \(A\) and \(N\) as follows: For \(M \in A\) and \(M' \in N\), we add the edge \(MM'\) if \(M'\) can be obtained from \(M\) by deleting \(e\) and adding a red edge \(f\) not incident to \(a,b\). Then, for every \(M \in A\), we have \(\deg_H(M) \le \binom{2D}{2}\) as \(f\) needs to be chosen in \(V \setminus V(M)\). Similarly, for every \(M' \in N\), we have \(\deg_H(M') \ge n/2-D\), as we must remove an edge of \(M'\) and add \(e\), which is always possible as \(e\) is not blocked by definition of \(N\). Then \(|A| \cdot \binom{2D}{2} \ge e(H) \ge |N| \cdot (n/2-D)\) and thus \(|A| \ge \frac{|N|}{4D^2} \cdot \left(1 -o_D(1)\right)\cdot n\). By 9 and a union bound over the \(k\) colours, we have \(|B|=O(k/D) \cdot |Z|\). Using \(|Z|=|B|+|N|\), we conclude \(|N|=(1-O(k/D))\cdot|Z|\). In particular, \(|A| \ge \frac{|Z|}{4D^2} \cdot \left(1-o_D(1)\right) \cdot n\).

Next, we compare the sizes of \(X\) and \(Z\). Define the bipartite graph \(H\) with parts \(X\) and \(Z\) as follows: For \(M \in X\) and \(M' \in Z\), we add the edge \(MM'\) if \(M'\) can be obtained from \(M\) by removing the two red edges incident to \(a\) and \(b\) and adding two red edges not incident to any of \(a,b\). Then, for every \(M \in X\), we have \(\deg_H(M) \ge \frac{1}{2} \cdot \left(\binom{2D+2}{2}-k(D+1)\right) \cdot \left(\binom{2D}{2}-kD\right)\) and, for every \(M' \in Z\), we have \(\deg_H(M') \le \binom{n/2-D}{2} (2D+3)(2D+1)\). Arguing as above, we get \(|X| \le \frac{|Z|}{4D^2} \cdot \left(1+o_D(1)\right) \cdot n^2\). Together with the inequality 7 , this also implies that \(|A| \le \frac{|Z|}{4D^2}\cdot (1+o_D(1)) \cdot n\).

Finally, we compare the sizes of \(Y\) and \(Z\). Let \(M \in Y\) and suppose without loss of generality that \(a \in V(M)\) (and thus \(b \not\in V(M)\)). Let \(a' \in V(M)\) be such that \(aa' \in M\). Define the bipartite graph \(H\) with parts \(Y\) and \(Z\) as follows: For \(M \in Y\) and \(M' \in Z\), we add the edge \(MM'\) if \(M'\) can be obtained from \(M\) by removing \(aa'\) and adding an edge with endpoints in \(V \setminus (V(M) \cup \{b\})\). Then, for every \(M \in Y\), we have \(\deg_H(M) = \Omega_k(D^2)\). Moreover, for every \(M' \in Z\), we have \(\deg_H(M') =O_D(n)\) as we must remove an edge of \(M'\) (and we have \(n/2-D\) choices), choose \(a\) or \(b\) (2 choices) and, assuming we chose \(a\), add an available edge from \(a\) to a vertex in \(V \setminus (V(M') \cup \{b\})\) (for which the number of choices is at most \(2D-2\)). Arguing as above, we get \(|Y| \le |Z| \cdot O_k(n)\).

Putting everything together, we have \(|A|+|X|+|Y|+|Z| \le \frac{|Z|}{4D^2} \cdot (1+o_D(1)) \cdot n^2\), and thus \[\begin{align} \mathrm{\mathbb{P}}\left[e \in M^*\right] = \frac{|A|}{|A|+|X|+|Y|+|Z|} \ge \frac{\frac{|Z|}{4D^2} \cdot (1-o_D(1)) \cdot n}{\frac{|Z|}{4D^2} \cdot (1+o_D(1)) \cdot n^2} \ge \frac{1-o_D(1)}{n}\, . \end{align}\]

\(\square\)

6 Spreadness and proof of Theorem 2↩︎

The proof of 2 uses the fractional version of the Kahn–Kalai conjecture [22]. This fractional version was proposed by Talagrand [23] and recently resolved by Frankston, Kahn, Narayanan and Park [24]. The Kahn–Kalai conjecture itself was then proved in full by Park and Pham [25], but the fractional version is sufficient for our purposes.

Suppose one wants to show that w.h.p. \(G(n,p)\) contains a certain substructure. Define an auxiliary hypergraph \(\mathcal{H}\), where the vertices of \(\mathcal{H}\) are the edges of \(K_n\) and the hyperedges correspond to the desired substructures. So the question is now whether a random subset of the vertices of \(\mathcal{H}\) contains some hyperedge. The fractional Kahn-Kalai conjecture reduces this probabilistic problem to the ‘extremal’ problem of showing that \(\mathcal{H}\) is spread, where spreadness is defined as follows. We use \(|\mathcal{H}|\) to denote the number of hyperedges of \(\mathcal{H}\).

Definition 19. Let \(\mathcal{H}\) be a hypergraph on \(V\). We say that \(\mathcal{H}\) is \(q\)-spread if for every \(S\subseteq V\) we have that \(\Big|\{H \in \mathcal{H}: S \subseteq H\}\Big| \le q^{-|S|} \cdot |\mathcal{H}|\), i.e. if the number of hyperedges containing \(S\) is at most a \(q^{-|S|}\)-fraction of all edges.
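For very small hypergraphs, Definition 19 can be checked by brute force, which may help to digest the quantifiers. The following sketch (illustrative only; the function names are ours) does so, and then tests the hypergraph whose vertices are the edges of \(K_4\) and whose hyperedges are its three perfect matchings; this hypergraph turns out to be \(q\)-spread exactly for \(q \le \sqrt{3}\).

```python
from itertools import combinations

def is_q_spread(vertices, hyperedges, q):
    """Brute-force check of the definition: every vertex set S must be
    contained in at most q^(-|S|) * |H| hyperedges.  Sets larger than every
    hyperedge are contained in none of them, so they need not be checked."""
    H = [frozenset(e) for e in hyperedges]
    max_size = max(len(e) for e in H)
    for r in range(1, max_size + 1):
        for S in map(frozenset, combinations(vertices, r)):
            if sum(S <= e for e in H) > q ** (-len(S)) * len(H):
                return False
    return True

V = [(1, 2), (3, 4), (1, 3), (2, 4), (1, 4), (2, 3)]          # edges of K_4
H = [[(1, 2), (3, 4)], [(1, 3), (2, 4)], [(1, 4), (2, 3)]]    # its perfect matchings
print(is_q_spread(V, H, 1.7), is_q_spread(V, H, 1.8))         # True False
```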

The statement we use here follows from [24] and is taken from [26].

Theorem 20. There exists an absolute constant \(C\) such that the following holds. Let \(\mathcal{H}\) be an \(N\)-vertex \(q\)-spread hypergraph with \(|\mathcal{H}|>0\). Suppose that \(W\) is a random subset of \(V(\mathcal{H})\) where every vertex is included independently with probability \(p\ge Cq^{-1} \log N\). Then w.h.p. (as \(N \to \infty\)) there exists an edge of \(\mathcal{H}\) such that all its vertices have been selected.

We show how 20 and 14 can be combined to prove 2.

Proof of Theorem 2. Let \(C >0\) be large enough and \(p \ge C (\log n)/n\). We reveal \(G(n,p)\) in \(k\) rounds \(G_1,\dots,G_k\) and find the \(k\) perfect matchings sequentially, namely, for each \(\ell \in [k]\), we find in \(G_\ell\) a perfect matching \(M_\ell\) that forms a Hamilton cycle with the previously found matchings \(M_1,\dots,M_{\ell-1}\).

Let \(1 \le \ell \le k\). Suppose we have found \(\ell-1\) edge-disjoint perfect matchings \(M_1, \dots, M_{\ell-1}\) in \(G_1 \cup \dots \cup G_{\ell-1}\). (We know in addition that the union of any two of them induces a Hamilton cycle, but this is not needed for the following argument.) Let \(\mathcal{M}:=\{M_1, \dots, M_{\ell-1}\}\). Define the auxiliary hypergraph \(\mathcal{H}\) with \(V(\mathcal{H})=E(K_n)\), and where each hyperedge of \(\mathcal{H}\) corresponds to a red perfect matching of \(\mathcal{M}\). Our goal is to show that \(\mathcal{H}\) is \(q\)-spread for \(q:=\gamma n\) for some constant \(\gamma>0\) (which can possibly depend on \(\ell\)). Then, by Theorem 20, w.h.p. \(G_\ell\) contains a matching with the desired property.

By 14, we know that there exist constants \(\alpha,\beta\) (depending on \(\ell\)) with \(\alpha \le \beta\) such that for any \((\ell-1)\)-configuration on \(n\) vertices with total overlap at most \(n^{1/8}\), the number of perfect matchings which form a Hamilton cycle with each matching of the configuration is at least \(\alpha \cdot n^{-(\ell-1)/2}(n/e)^{n/2}\) and at most \(\beta \cdot n^{-(\ell-1)/2}(n/e)^{n/2}\). We show that choosing \(\gamma:=\alpha/(3\beta\cdot e)\) suffices.

Showing that \(\mathcal{H}\) is \(q\)-spread requires proving that, for every \(S \subseteq V(\mathcal{H})\),

  1. the number of hyperedges containing \(S\) is at most \(q^{-|S|}\cdot |\mathcal{H}|\).

Note that \(S\) is a set of edges of \(K_n\) and, if these edges do not form a partial red matching of \(\mathcal{M}\), then no hyperedge of \(\mathcal{H}\) contains \(S\) and there is nothing to prove. So we may assume that \(S\) is a partial red matching, and we let \(\mathcal{M}'\) be the reduced configuration of \(S\). Then the number of hyperedges containing \(S\) is equal to the number of red perfect matchings of \(\mathcal{M}'\).

By Theorem 14, we know that \(\mathcal{H}\) has at least \(\alpha \cdot n^{-(\ell-1)/2}(n/e)^{n/2}\) hyperedges. Let \(s:=|S|\) and set \(n':=n-2s\). If \(s\ge k \log n\), it is enough to upper bound the number of red perfect matchings of \(\mathcal{M}'\) by the number of all perfect matchings on a set of size \(n'\), which in turn is \(O((n'/e)^{n'/2}) = O((n/e)^{n/2-s})\). In order to prove [item:spread] for \(S\), it is enough to show that \[\begin{align} O((n/e)^{n/2-s}) \le q^{-s} \cdot \alpha n^{-(\ell-1)/2}(n/e)^{n/2}\, . \end{align}\] By plugging in the value of \(q\), this is equivalent to \(O(n^{(\ell-1)/2}) \le \alpha \cdot \left( \frac{3\beta}{\alpha}\right)^s\), which holds as \(\left( \frac{3\beta}{\alpha}\right)^s \ge 3^s \ge e^s \ge e^{k \log n} = n^k = \omega(n^{(\ell-1)/2})\).

Assume now that \(s\le k\log n\). By Item [prop95P952] of 9, each time we add a red edge, the total overlap can increase by at most \(2k^2\). Since the matchings of \(\mathcal{M}\) are pairwise edge-disjoint, we have \(R(\mathcal{M})=0\) and thus \(R(\mathcal{M}') \le R(\mathcal{M})+2k^2s=2k^2s=O_k(\log n)\). Moreover \(\mathcal{M}'\) has order \(n' \ge n/2\). Therefore, again by Theorem 14, the number of red perfect matchings of \(\mathcal{M}'\) is at most \[\begin{align} \beta \cdot n'^{-(\ell-1)/2}(n'/e)^{n'/2} \le \beta \cdot n'^{-(\ell-1)/2}(n/e)^{n/2-s}\, . \end{align}\] In order to prove [item:spread] for \(S\), it is enough to show that \[\begin{align} \beta \cdot n'^{-(\ell-1)/2}(n/e)^{n/2-s} \le q^{-s} \cdot \alpha n^{-(\ell-1)/2}(n/e)^{n/2}. \end{align}\] By plugging in the value of \(q\), this is equivalent to \(\left( \frac{n}{n'}\right)^{(\ell-1)/2} \le 3^s \cdot \left(\beta/\alpha\right)^{s-1}\), which can be shown to hold as follows. We have \(\frac{n}{n'}= \frac{1}{1-2s/n} \le 1+\frac{3s}{n} \le \exp\left(\frac{3s}{n}\right)\) and thus \(\left( \frac{n}{n'}\right)^{(\ell-1)/2} \le \exp\left(\frac{3(\ell-1)}{2n} \cdot s\right) \le 3^s\) for \(n\) large enough. Moreover, \(\beta \ge \alpha\) and thus \(3^s \le 3^s \cdot \left(\beta/\alpha\right)^{s-1}\).

This proves [item:spread] for all \(S\).

\(\square\)

7 Bipartite setting↩︎

In light of the conjecture of Wanless [10] on a bipartite version of Kotzig’s conjecture (see 1), it is natural to ask if our results can be adapted to the bipartite setting. Let \(k \in \mathbb{N}\) and \(M_1, \dots, M_k\) be perfect matchings of \(K_{n,n}\) with small overlap. Is there a perfect matching \(M^*\) (of \(K_{n,n}\)) which creates a Hamilton cycle with each of them? And, if so, how many?

A perfect matching \(M\) of \(K_{n,n}\) can be encoded as a permutation \(\pi_M \in S_n\), where \(\pi_M(i)=j\) if and only if \(ij \in M\). Observe that if there exists \(M^*\) such that \(M^* \cup M_i\) is a Hamilton cycle, then \(\pi_{M^*} \circ \pi_{M_i}\) consists of one cycle and its parity depends on \(n\) only (it is even if \(n\) is odd and odd otherwise). Since this is true for each \(i \in [k]\), it follows that the \(\pi_{M_i}\) must all have the same parity. In particular, for all distinct \(i,j \in [k]\), the permutation \(\pi_{M_i} \circ \pi_{M_j}\) is even and thus

  1. \(M_i \cup M_j\) has an even number of cycles whose length is divisible by \(4\).

This shows that condition [item:bipartite] is necessary for the existence of such \(M^*\). Ironically, among our arguments, the only one which requires a conceptual modification to show that condition [item:bipartite] is also sufficient is 11, which was the easiest part in the general setting. Therefore we provide the full details here.
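The parity observation above is easy to confirm by brute force for small \(n\); the following sketch (illustrative only; the encoding and helper functions are ours) enumerates all pairs of perfect matchings of \(K_{n,n}\), represented as permutations, and checks that whenever their union is a Hamilton cycle, the product of the two signs depends only on the parity of \(n\).

```python
from itertools import permutations

def left_cycle_lengths(pi, sigma):
    """Number of left vertices on each cycle of the union of the two perfect
    matchings of K_{n,n} encoded by pi and sigma; the union is a Hamilton
    cycle exactly when this list is [n]."""
    n, seen, lengths = len(pi), set(), []
    for start in range(n):
        if start in seen:
            continue
        length, i = 0, start
        while i not in seen:
            seen.add(i)
            length += 1
            i = sigma.index(pi[i])   # cross to the right side via pi, back via sigma
        lengths.append(length)
    return lengths

def sign(pi):
    """Sign of a permutation, computed from its cycle type."""
    identity = tuple(range(len(pi)))
    return (-1) ** (len(pi) - len(left_cycle_lengths(pi, identity)))

for n in (3, 4, 5):
    signs = {sign(pi) * sign(sigma)
             for pi in permutations(range(n)) for sigma in permutations(range(n))
             if left_cycle_lengths(pi, sigma) == [n]}
    print(n, signs)   # {1} for odd n and {-1} for even n
```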

Proposition 21 (Analogue of 11 in the bipartite setting). Fix two matchings of \(K_{n,n}\), say of colour blue and green, such that condition [item:bipartite] holds. Then there exists an equalizing partial red matching.

In the non-bipartite setting, this was achieved by inserting red edges inside the alternating blue-green cycles to break them into shorter alternating cycles and double edges. In particular, when the cycle had length exactly \(4\), we took one of the diagonals as a red edge. This is no longer possible in the bipartite setting as the diagonals are non-bipartite edges. However, we now have the extra assumption [item:bipartite] at our disposal and use the following fact.

Fact 22. Given two perfect matchings of \(K_{n,n}\), consider the cycles in their union. Then the parity of the number of those whose length is divisible by \(4\) is invariant under the addition of (bipartite) red edges (that is, under passing to the corresponding reduced configuration).

Proof of 21. The union of the two matchings consists of alternating blue-green even cycles and double edges. For the latter case, we do not have to do anything, so suppose we have a cycle with edges alternating blue and green. If the length is at least \(6\), then taking a red edge between two vertices at distance \(3\) on the cycle splits off a double-coloured edge and an alternating cycle whose length is shorter by \(4\). (In particular, a cycle of length \(6\) splits off into two double edges.) By repeatedly choosing red edges in this way, we get a reduced configuration consisting of alternating blue-green cycles of length \(4\) and double edges. By 22, condition [item:bipartite] still holds and thus there is an even number of cycles of length \(4\). We pair these cycles arbitrarily and add a (bipartite) red edge between each pair. This leaves a reduced configuration consisting of alternating cycles of length \(6\) and double edges, which can be easily equalized.

\(\square\)

The rest of the argument goes through and we can prove the following results for \(K_{n,n}\).

Theorem 23. For every fixed \(k\in \mathbb{N}\), there exists a constant \(C=C(k)\) such that the following holds. Let \(M_1,\dots,M_k\) be perfect matchings of \(K_{n,n}\) satisfying condition [item:bipartite] and such that \[\begin{align} |M_i\setminus(\cup_{j\neq i}M_j)| \ge C \end{align}\] for each \(i\in[k]\). Then there exists a perfect matching \(M^*\) of \(K_{n,n}\) such that \(M^*\cup M_i\) forms a Hamilton cycle for each \(i\in [k]\).

Theorem 24. Let \(k\in \mathbb{N}\) be fixed. Then the following holds for all \(n\) sufficiently large. Let \(M_1,\dots,M_k\) be perfect matchings of \(K_{n,n}\) satisfying condition [item:bipartite] and such that \(\sum_{i,j:i\neq j} |M_i\cap M_j| \le n^{1/8}\). Then the number of perfect matchings \(M^*\) of \(K_{n,n}\) such that \(M^*\cup M_i\) forms a Hamilton cycle for each \(i\in [k]\) is \(\Theta_k(n^{1/2-k} (n/e)^n)\).

Finally, we discuss the analogue of 2 for the bipartite random graph \(G(n,n,p)\). For fixed \(k \in \mathbb{N}\), under which conditions on \(p\) does \(G(n,n,p)\) contain \(k\) perfect matchings such that any pair forms a Hamilton cycle? If \(k=2\), the question is already well understood, as we are simply asking for a Hamilton cycle. So we may assume \(k \ge 3\). First observe that if \(M\) and \(M'\) are perfect matchings of \(K_{n,n}\) and \(M \cup M'\) is a Hamilton cycle, then, as above, \(\pi_M \circ \pi_{M'}\) is an odd permutation if \(n\) is even (and it is even otherwise). This implies that, if \(n\) is even, then \(\pi_M\) and \(\pi_{M'}\) have different parity. In particular, if \(K_{n,n}\) admits three (or more) perfect matchings such that any pair forms a Hamilton cycle, then \(n\) must be odd. Therefore, for our question to have a positive answer, it is necessary that \(n\) be odd. We can prove the following.

Theorem 25. For every fixed \(k \in \mathbb{N}\) with \(k \ge 3\), there exists a constant \(C=C(k)\) such that the following holds for odd \(n\). When \(p\ge \frac{C \log n}{n}\), w.h.p. the bipartite random graph \(G(n,n,p)\) contains \(k\) edge-disjoint perfect matchings \(M_1, \dots, M_k\) such that \(M_i \cup M_j\) induces a Hamilton cycle for all distinct \(i,j \in [k]\).

The proof follows the steps of the proof of 2 and, in particular, builds the matchings iteratively one after the other in \(k\) rounds, using 24 and 20. Note that if \(n\) is odd and a collection of perfect matchings of \(K_{n,n}\) is such that any pair forms a Hamilton cycle, then condition [item:bipartite] clearly holds and thus 24 can be applied.

8 Concluding remarks↩︎

3 offers a variant of Kotzig’s conjecture for random graphs. A natural problem is to sharpen the threshold for the edge probability. One might expect that \(p\ge \frac{\log n+(k-1)\log\log n + \omega(n)}{n}\), for any function \(\omega(n)\) tending to infinity, is already enough. In fact, even a “hitting time” result might be possible. The legendary hitting time result, proved independently by Bollobás [27] and Ajtai, Komlós and Szemerédi [28], says that if we start with the empty graph and then keep adding edges at random, then w.h.p., at the very moment that the resulting graph has minimum degree \(2\), it also has a Hamilton cycle. Later, Bollobás and Frieze [29] extended this result by proving that, for a given constant \(k\), at the very moment that the minimum degree reaches \(2k\), there exist \(k\) edge-disjoint Hamilton cycles. The following would constitute a powerful generalization.

Problem 26. Prove that, in the random graph process, w.h.p., as soon as the minimum degree reaches \(k\), there exists a collection of \(k\) edge-disjoint perfect matchings such that the union of any two of them induces a Hamilton cycle.

Note that this result, applied with \(2k\) in place of \(k\), would imply the result of Bollobás and Frieze [29]: at the moment the minimum degree reaches \(2k\), we would obtain \(2k\) such perfect matchings, which we can arbitrarily pair up; each pair forms a Hamilton cycle, and the \(k\) resulting Hamilton cycles are edge-disjoint. A closely related problem would be to consider random regular graphs. We reiterate a conjecture of Wormald [30] that, for fixed \(d\), a random \(d\)-regular graph should w.h.p. have a perfect \(1\)-factorisation.

Another direction for future work is to sharpen the counting result (4) to obtain an asymptotic estimate. Given the heuristic argument described in 1, we conjecture that given \(k\) edge-disjoint perfect matchings on \([n]\), the number of perfect matchings forming a Hamilton cycle with each of them is \(\left(\sqrt{2} \cdot \left(\pi/2\right)^{k/2} \pm o(1)\right) \cdot n^{-k/2} \cdot (n/e)^{n/2}\). We recall that the case \(k=2\) has already been proved by Kim and Wormald [21]. We remark that, even with a more precise counting result, our method would not give a sharper result for \(G(n,p)\) since it relies on the expectation threshold conjecture.
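As a small consistency check on the conjectured constant (not needed for any of our results), consider the case \(k=1\), where the count is exact: a fixed perfect matching \(M\) of \(K_n\) is contained in exactly \(2^{n/2-1}\cdot(n/2-1)!\) Hamilton cycles, each of which determines a unique complementary perfect matching \(M^*\), and by Stirling’s approximation \[2^{n/2-1} \cdot (n/2-1)! = \left(\sqrt{\pi} \pm o(1)\right) \cdot n^{-1/2} \cdot (n/e)^{n/2}\, ,\] which agrees with the displayed expression for \(k=1\), since \(\sqrt{2}\cdot(\pi/2)^{1/2}=\sqrt{\pi}\).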

Acknowledgments↩︎

We are grateful to Pranshu Gupta for stimulating discussions at the early stages of this project.

References↩︎

[1]
D. Kühn and D. Osthus, Hamilton cycles in graphs and hypergraphs: an extremal perspective, Proceedings of the International Congress of Mathematicians—Seoul 2014. Vol. IV, Kyung Moon Sa, Seoul, 2014, pp. 381–406.
[2]
A. Kotzig, Hamilton graphs and Hamilton circuits, Theory of Graphs and its Applications (Proc. Sympos. Smolenice, 1963), Publ. House Czech. Acad. Sci., Prague, 1964, pp. 63–82.
[3]
B. A. Anderson, Finite topologies and Hamiltonian paths, J. Combinatorial Theory Ser. B 14(1973), 87–93.
[4]
A. Rosa, Perfect 1-factorizations, Math. Slovaca 69(2019), 479–496.
[5]
D. A. Pike, A perfect one-factorisation of \(K_{56}\), J. Combin. Des. 27(2019), 386–390.
[6]
M. J. Gill and I. M. Wanless, Perfect 1-factorisations of \(K_{16}\), Bull. Aust. Math. Soc. 101(2020), 177–185.
[7]
M. Meszka, There are \(3155\) nonisomorphic perfect one-factorizations of \(K_{16}\), J. Combin. Des. 28(2020), 85–94.
[8]
I. Fiamčík, The acyclic chromatic class of a graph, Math. Slovaca 28(1978), 139–145, (in Russian).
[9]
N. Alon, B. Sudakov, and A. Zaks, Acyclic edge colorings of graphs, J. Graph Theory 37(2001), 157–167.
[10]
I. M. Wanless, Perfect factorisations of bipartite graphs and Latin squares without proper subrectangles, Electron. J. Combin. 6(1999), Research Paper 9, 16.
[11]
P. J. Laufer, On strongly Hamiltonian complete bipartite graphs, Ars Combin. 9(1980), 43–46.
[12]
M. Kwan, A. Sah, and M. Sawhney, Large deviations in random Latin squares, Bull. Lond. Math. Soc. 54(2022), 1420–1438.
[13]
J. Allsop and I. M. Wanless, Row-Hamiltonian Latin squares and Falconer varieties, Proc. Lond. Math. Soc. (3) 128(2024), Paper No. e12575, 28.
[14]
D. Bryant, B. Maenhaut, and I. M. Wanless, New families of atomic Latin squares and perfect 1-factorisations, J. Combin. Theory Ser. A 113(2006), 608–624.
[15]
D. Bryant, B. M. Maenhaut, and I. M. Wanless, A family of perfect factorisations of complete bipartite graphs, J. Combin. Theory Ser. A 98(2002), 328–342.
[16]
R. Královič and R. Královič, On semi-perfect 1-factorizations, Structural information and communication complexity, Lecture Notes in Comput. Sci. 3499, Springer, Berlin, 2005, pp. 216–230.
[17]
D. G. Wagner, On the perfect one-factorization conjecture, Discrete Math. 104(1992), 211–215.
[18]
R. Gurau, F. Joos, and B. Sudakov, The large \(N\) factorization does not hold for arbitrary multi-trace observables in random tensors, Lett. Math. Phys. 115(2025), Paper No. 93.
[19]
R. Gurau, Random tensors, Oxford University Press, Oxford, 2017.
[20]
L. Pósa, Hamiltonian circuits in random graphs, Discrete Math. 14(1976), 359–364.
[21]
J. H. Kim and N. C. Wormald, Random matchings which induce Hamilton cycles and Hamiltonian decompositions of random regular graphs, J. Combin. Theory Ser. B 81(2001), 20–44.
[22]
J. Kahn and G. Kalai, Thresholds and expectation thresholds, Combin. Probab. Comput. 16(2007), 495–502.
[23]
M. Talagrand, Are many small sets explicitly small?, STOC’10—Proceedings of the 2010 ACM International Symposium on Theory of Computing, ACM, New York, 2010, pp. 13–35.
[24]
K. Frankston, J. Kahn, B. Narayanan, and J. Park, Thresholds versus fractional expectation-thresholds, Ann. of Math. 194(2021), 475–495.
[25]
J. Park and H. T. Pham, A proof of the Kahn-Kalai conjecture, J. Amer. Math. Soc. 37(2024), 235–243.
[26]
H. T. Pham, A. Sah, M. Sawhney, and M. Simkin, A Toolkit for Robust Thresholds, arXiv:2210.03064, 2022.
[27]
B. Bollobás, The evolution of sparse graphs, Graph theory and combinatorics (Cambridge, 1983), Academic Press, London, 1984, pp. 35–57.
[28]
M. Ajtai, J. Komlós, and E. Szemerédi, First occurrence of Hamilton cycles in random graphs, North-Holland Mathematics Studies 115(1985), 173–178.
[29]
B. Bollobás and A. M. Frieze, On matchings and Hamiltonian cycles in random graphs, Ann. Discrete Math. 28(1985), 23–46.
[30]
N. C. Wormald, Models of random regular graphs, Surveys in combinatorics, London Math. Soc. Lecture Note Ser. 267, Cambridge Univ. Press, Cambridge, 1999, pp. 239–298.

  1. Fakultät für Informatik und Mathematik, Universität Passau, Germany. Email: stefan.glock@uni-passau.de, amedeo.sgueglia@uni-passau.de. SG is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – 542321564; AS is funded by the Alexander von Humboldt Foundation.↩︎

  2. In some sense, in this part of the proof yellow and red swap their roles. There are some red edges already fixed and we have to choose a yellow perfect matching while not creating alternating cycles.↩︎