On the Enumeration of All Unique Paths of Recombining Trinomial Trees


Abstract

Recombining trinomial trees are a workhorse for modeling discrete-event systems in option pricing, logistics, and feedback control. Because each node stores a state-dependent quantity, a depth-\(D\) tree naïvely yields \(\mathscr{O}(3^{D})\) trajectories, making exhaustive enumeration infeasible. Under time-homogeneous dynamics, however, the graph exhibits two exploitable symmetries: (i) translational invariance of nodes and (ii) a canonical bijection between admissible paths and ordered tuples encoding weak compositions. Leveraging these, we introduce a mass-shifting enumeration algorithm that slides integer “masses’’ through a cardinality tuple to generate exactly one representative per path-equivalence class while implicitly counting the associated weak compositions. This trims the search space by an exponential factor, enabling markedly deeper trees—and therefore tighter numerical approximations of the underlying evolution—to be processed in practice. We further derive an upper bound on the combinatorial counting expression that induces a theoretical lower bound on the algorithmic cost of \(\sim \mathscr{O}\!\bigl(D^{1/2}\,1.612^{D}\bigr)\). This correspondence permits direct benchmarking while empirical tests, whose pseudo-code we provide, corroborate the bound, showing only a small constant overhead and substantial speedups over classical breadth-first traversal. Finally, we highlight structural links between our algorithmic/combinatorial framework and Motzkin paths with Narayana-type refinements, suggesting refined enumerative formulas and new potential analytic tools for path-dependent functionals.

Discrete Mathematics, Combinatorial Trees, Graph Algorithms, Algorithmic Complexity, Enumeration, Option Pricing

05C05, 05C85, 05A15, 68Q25, 68R10, 68Q17

1 Introduction↩︎

Recombining trinomial trees are themselves an approximation of a branching process: a general process need not return to a previously visited value after evolving away from it. Nevertheless, these structures are well studied as approximations in discrete event control problems and option pricing, where such modeling assumptions are better justified [1][3]. All trees, whether recombining or not, suffer exponential blow-ups in time and space complexity as they evolve. In particular, a recombining trinomial tree quickly generates trillions of distinct paths, an explosion that can only be controlled by careful algorithms and clever use of data structures. This problem has been thoroughly studied, most famously in Knuth’s series on combinatorial algorithms [4], [5].

In this paper, we aim to exploit the graphical structure of the tree in order to avoid the combinatorial explosion typically associated with full path enumeration. This is made possible by structural symmetries. More specifically, we use the property of translational equivalence among nodes, whereby identical position values persist across depth shifts, preserving accumulated quantities along different paths. Consider the example shown in Figure 1. The recombining tree consists of 25 nodes, each identified by a pair of integers: depth (ranging from 0 at the root to 4 at the terminal level) and position (indicating horizontal placement). Values are assigned purely by position, so all nodes at the same position share the same value. For instance, the root at position 0 has value 20, while its children at positions -1, 0, and 1 take values 18, 20, and 22. Paths then accumulate these position-based values from root to terminal nodes. In effect, this is just a histogram representation—the node values themselves are illustrative and not essential.

Let us take a look at Figure 1 and consider the terminal node ending at position 0. The blue, green, and red paths each sum to \(102\). Each path visits the node with value 22 once and nodes with value 20 four times; the sum is \[\label{E:LebesgueSum} 4\times 20 + 1\times 22=102.\tag{1}\] In general, our setup allows multiple paths to have the same accumulated value.

Trees are often used to discretize path integrals. In our case, we want to average the path–sum of values (e.g., terms like \(102\)) over all paths terminating at position 0. A naïve method would enumerate every such path, compute its path–sum, and then average—quickly becoming infeasible as the tree depth grows. Instead, one can enumerate all possible path–sum values and count how many paths realize each. For example, in Figure 1 there are three colored paths, each visiting position 1 once and position 0 four times, so \(N_{102}=3\). The average is then obtained by summing \(p\,N_p\) (e.g., \(3\times 102\)) over all path–sum values \(p\) and normalizing by \(\sum_p N_p\): \[\text{Average} \;=\; \frac{\sum_{p} p\,N_p}{\sum_{p} N_p}\,.\] This amounts to a Lebesgue–style integral—summing “values \(\times\) measures”—rather than a Riemann–style sum over base points. This shift in viewpoint dramatically reduces computation by replacing an enumeration over exponentially many paths with an aggregation over the typically far smaller set of distinct path–sum values.
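This aggregation idea can be sketched in a few lines of Python. The following is an illustrative brute force, not the algorithm developed in this paper; the node-value function `value(k) = 20 + 2*k` is an assumption chosen to reproduce the values 18, 20, 22 at positions -1, 0, 1 in Figure 1.

```python
from collections import Counter
from itertools import product

def average_path_sum(D, k_star, value=lambda k: 20 + 2 * k):
    """Average the path-sum over all depth-D paths ending at position k_star,
    aggregating by distinct path-sum value (Lebesgue-style) rather than
    iterating path by path at the end."""
    counts = Counter()  # path-sum value p -> N_p, number of paths realizing it
    for steps in product((-1, 0, 1), repeat=D):
        positions = [0]
        for s in steps:
            positions.append(positions[-1] + s)
        if positions[-1] != k_star:
            continue
        counts[sum(value(k) for k in positions)] += 1
    total = sum(counts.values())
    # sum of "value x multiplicity", normalized by the number of paths
    return sum(p * n for p, n in counts.items()) / total, counts

avg, counts = average_path_sum(4, 0)
# The three colored paths of Figure 1 all realize the path-sum 102.
```

For \(D=4\) and terminal position 0 this recovers \(N_{102}=3\), matching the three colored paths in Figure 1.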

Figure 1: Recombining Tree with Values

Note that in this figure we explicitly display the value stored at each position. In every other figure, we omit these values as they are implicitly present, and—under the labeling scheme introduced shortly—each node retains the same value throughout. Suppressing them in later diagrams keeps the visual presentation clear and uncluttered. We now introduce our notation for a tree of maximum depth \(D\), \(\mathscr{T}_{D}\), and informally define the set of all paths it generates as \(\mathscr{P}\). We also define an informal reconstruction function \(\EuScript{C}(\mathscr{P})\), denoting all possible combinations of paths, which constructs the tree by taking the union of all paths—i.e., \(\mathscr{T}_{D}= \EuScript{C}(\mathscr{P})\). This is formally addressed later in Section 2.

This relation highlights that the full tree is simply the collection of all possible path combinations. Due to the recombining nature of the tree, many of these paths are structurally redundant. That is, multiple paths contain the same vertices in different orders without altering the final path-dependent quantity. We refer to such redundancies as permutations of paths, denoted \(\EuScript{P}(\mathscr{P})\). These permutations arise when vertex positions differ across depths but represent the same cumulative state. Hence, the set of unique paths in the tree can be informally defined as: \[\begin{align} \label{unique_formula} \EuScript{U}(\mathscr{P}) = \EuScript{C}(\mathscr{P}) - \EuScript{P}(\mathscr{P}). \end{align}\tag{2}\] Although one could trivially compute all paths and then subtract duplicates, such an approach fails to leverage the underlying invariance in 1 and still incurs the full computational cost of path enumeration. In contrast, our algorithm offers a novel method for directly computing \(\EuScript{U}(\mathscr{P})\), ensuring that we never generate permutations of previously generated paths. We achieve this by pre-pruning permutations during the enumeration process, dramatically reducing the computational overhead.

Although the argument here is informal, it illustrates the core contribution of our method: a principled way to generate only the distinct path structures in a recombining tree, without redundancy. This offers substantial advantages in discrete event control problems and other applications involving path-dependent dynamics where recombining trees are used to model system evolution. Our approach draws on decades of work aimed at accelerating path enumeration, including the algorithms of Eades and McKay [6], Ehrlich’s loop‑less generation technique [7], and the Steinhaus–Johnson–Trotter algorithm originally presented by Johnson [8]. We also refer to work by Ruskey and Williams [9] and Cheng [10] for more recent work in this particular field.

The question then arises: why specifically use a trinomial tree? A recombining trinomial lattice improves on the classic binomial grid by adding a "stay-put" or "no-change" branch, delivering several concrete benefits. With two free probabilities per step, it can match both the drift and the variance of the underlying process, giving second-order weak accuracy instead of the binomial’s first-order, so far fewer time steps, and hence far fewer total nodes, are needed to hit a given error tolerance [11]. The trinomial structure is also the discrete analog of a central difference scheme, which is unconditionally stable for many payoffs, a property of particular value in option pricing. This stability lets any algorithm built on this structure stretch time increments two-to-four-fold without inducing oscillations [12]. In control problems, the three branches also line up naturally with the canonical actions "increase, hold, decrease," eliminating the dummy states a binomial grid must invent. In summary, a third branch provides just enough freedom to hit both drift and volatility, yielding higher accuracy, greater numerical stability, cleaner boundary handling, and better scalability, all while keeping the lattice recombining.

2 Setup↩︎

We can now formally develop our definitions of a recombining rooted trinomial tree that we will use for the remainder of our analysis. To simplify later calculations, we represent this tree as a directed graph embedded in the lattice \(\mathbb{Z} \times \mathbb{Z}_{+}\). Given a nonnegative integer \(D\), which represents the maximum depth of the tree, we define the set of vertices as: \[\label{V:VertexDef} V_D= \{(k,d)\in \mathbb{Z}\times \mathbb{Z}_+ : 0 \le d \le D,\; |k| \le d\}\tag{3}\] and the set of edges as: \[\label{E:EdgeDef} E_D= \left\{\left((k,d-1),(k+s,d)\right)\in V_D\times V_D: s\in \{-1,0,1\}\right\}\tag{4}\] For a generic node \((k,d)\) in \(V_D\), we will think of \(d\) as the depth coordinate and \(k\) as the position coordinate. In addition, \(E_D\) encodes the rule that from any node at position \(k\) and depth \(d-1\), we may move to depth \(d\) by stepping to \(k-1, k, k+1\). The resulting graph is denoted \(\mathscr{T}_{D}= (V_D, E_D)\), a formalization of the tree described in 1, and is symmetric under reflection across the position axis \(k = 0\), a property that will be important in later sections. We show a visualization of this graph in Figure 2 to build the graphical intuition for the reader:

Figure 2: \mathscr{T}_{D} with Node-Labelings for V_{D}
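The definitions 3 and 4 translate directly into code. The following sketch (the helper name `build_tree` is ours) materializes the vertex and edge sets for small depths; for \(D=4\) it reproduces the 25-node tree of Figure 1.

```python
def build_tree(D):
    """Construct (V_D, E_D) as in Eqs. (3)-(4): vertices (k, d) with |k| <= d,
    and directed edges from depth d to depth d+1 via steps s in {-1, 0, +1}."""
    V = {(k, d) for d in range(D + 1) for k in range(-d, d + 1)}
    E = {((k, d), (k + s, d + 1))
         for (k, d) in V if d < D
         for s in (-1, 0, 1)
         if (k + s, d + 1) in V}  # always true here since |k| <= d
    return V, E

V, E = build_tree(4)  # 25 vertices, matching the tree of Figure 1
```

Note that \(|V_D| = (D+1)^2\) and every node below depth \(D\) has exactly three outgoing edges, so \(|E_D| = 3D^2\).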

We are interested in accumulating values along the paths in the tree. A path in \(\mathscr{T}_{D}\) is a sequence \((\mathsf{v}_0,\mathsf{v}_1,\dots,\mathsf{v}_D)\) of vertices such that \(\mathsf{v}_0=(0,0)\), each \(\mathsf{v}_d\in V_D\), and \((\mathsf{v}_{d-1},\mathsf{v}_d)\in E_D\) for \(d\in\{1,2,\dots,D\}\). We provide an example of such a path in 3:

Figure 3: Recombining Tree with Sample Path

Let \(\mathscr{P}\) be the collection of all paths in \(\mathscr{T}_{D}\). We can regard paths as walks by recording only the step increments. For each \((s_1,s_2,\dots,s_D)\in\{-1,0,1\}^D\), set \[\label{E:walktopathA} \mathsf{v}_d \;\overset{\text{def}}{=}\; \biggl(\sum_{1\le d'\le d} s_{d'},\, d\biggr), \qquad d\in\{0,1,\dots,D\},\tag{5}\] with the convention that \(\sum_{\emptyset}\overset{\text{def}}{=}0\). Then \[\label{E:walktopathB} (\mathsf{v}_0,\mathsf{v}_1,\dots,\mathsf{v}_D)\in \mathscr{P}.\tag{6}\]

Conversely, if \[\mathsf{p}\;=\; \bigl((0,0),(k_1,1),\dots,(k_D,D)\bigr)\in \mathscr{P},\] define the increments \[s_d \;\overset{\text{def}}{=}\; k_d - k_{d-1}, \qquad d\in\{1,2,\dots,D\}.\] This recovers the walk \((s_1,\dots,s_D)\in\{-1,0,1\}^D\).

Hence, the mapping between walks and paths is a bijection, and therefore \[\lvert \mathscr{P}\rvert \;=\; \lvert \{-1,0,1\}^D \rvert \;=\; 3^D.\]
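The bijection between walks and paths in 5 and 6 can be checked mechanically for small depths. A brief sketch, with helper names of our choosing:

```python
from itertools import product, accumulate

def walk_to_path(steps):
    """Eq. (5): map a step tuple in {-1,0,1}^D to the path (v_0, ..., v_D)."""
    positions = [0] + list(accumulate(steps))
    return [(k, d) for d, k in enumerate(positions)]

def path_to_walk(path):
    """Inverse map: recover increments s_d = k_d - k_{d-1}."""
    return [path[d][0] - path[d - 1][0] for d in range(1, len(path))]

D = 5
all_walks = list(product((-1, 0, 1), repeat=D))
paths = {tuple(walk_to_path(s)) for s in all_walks}
assert len(paths) == 3 ** D  # injectivity, hence |P| = 3^D
assert all(tuple(path_to_walk(walk_to_path(s))) == s for s in all_walks)
```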

We can also extract the depth-indexed position sequence via \(\pi:\mathscr{P}\to\mathbb{Z}^{D+1}\) defined by \[\label{E:piDef} \pi\bigl((0,0),(k_1,1),\dots,(k_D,D)\bigr) \;\overset{\text{def}}{=}\; (0,k_1,\dots,k_D).\tag{7}\]

3 Aggregation↩︎

Let us return to Figure 1. The blue, green, and red paths all sum to 102 and end at the node with value 20. We can rewrite the sums as \[20\times 4+22\times 1 = 80+22=102,\] reflecting the fact that 20 occurs 4 times along each path and 22 occurs once along each path (analogous to Lebesgue rather than Riemann integration). Essentially, we will index paths on the tree to first identify the blue path, compute the sum, and then skip over the green and red paths. Doing this in a way that avoids recursion, we will construct an efficient way to identify all aggregated values.

To ground the discussion, we open this section with the most straightforward technique for indexing (i.e., enumerating) every realization of a recombining trinomial tree: a depth-first recursive traversal that visits all branches until the target depth \(D\). This classical approach was important throughout our experiments because it served as a stable baseline against which to check our algorithms for correctness. While recursion does scale exponentially, this baseline serves three goals:

  1. its transparency made manual verification easy;

  2. it provided a clean performance yard-stick for the more sophisticated algorithms introduced later; and

  3. it exposed the combinatorial structure of the tree in the clearest possible way.

Our version augments the vanilla DFS 8 with a hash-map memoisation scheme 9 that caches the value associated with each previously visited \(\{-1, 0,+1\}\)-triple; revisiting an identical sub-problem is therefore reduced to \(\mathscr{O}(1)\) amortized time [13]. The exact savings are slightly offset by the overhead of recursive function calls and added stack frames; we defer that discussion to Section 4.
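As a hedged illustration of this memoised baseline (the listings referenced as 8 and 9 are not reproduced here), the following sketch counts the paths reaching a fixed terminal node, with `functools.lru_cache` standing in for the hash-map memoisation keyed on (position, depth):

```python
from functools import lru_cache

def count_paths_to(D, k_star):
    """Memoised DFS baseline (illustrative): number of depth-D paths from the
    root (0, 0) to the terminal node (k_star, D), branching over {-1, 0, +1}."""
    @lru_cache(maxsize=None)
    def n(k, d):
        if d == D:
            return 1 if k == k_star else 0
        # without the cache this recursion revisits (k, d) exponentially often
        return sum(n(k + s, d + 1) for s in (-1, 0, 1))
    return n(0, 0)

assert count_paths_to(4, 0) == 19                       # central trinomial coefficient
assert sum(count_paths_to(4, k) for k in range(-4, 5)) == 3 ** 4
```

The cache bounds the distinct sub-problems by \(|V_D| = (D+1)^2\), in contrast with the \(3^D\) calls of the uncached recursion.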

We can then transition to analyzing the computational complexity of iterating over a trinomial tree, which we can extrapolate from [14] and our analysis in 2.

Without memoisation the time cost is \[\label{E:naivetime} \text{Time}_{\text{naïve}} = 3^D \times D = \mathscr{O}\left(D\cdot3^D\right),\tag{8}\] where the linear factor counts the computational cost associated with iterating along each \(D\)-step path. Caching removes the redundant work, leaving \[\label{E:memoisedtime} \text{Time}_{\text{memoised}} \approx \mathscr{O}\left(3^{D}\right)\tag{9}\] where the bound is approximate because cache operations take constant amortized time. This classical baseline establishes the reference point for all of our later algorithms and their optimizations.

4 Removing Recursion from the Enumeration Process↩︎

Recursion — though a classical way to generate paths — is often computationally expensive. Even with careful data structures, each additional path incurs another stack frame, and total running time can grow rapidly [15]. To address this, we design a recursion-free algorithm that (i) exhaustively enumerates all valid paths, (ii) produces no spurious output, (iii) limits the number of loop passes (avoiding heavy overhead), and (iv) extends naturally to the unique-representative algorithm developed later in 5.

Once paths are projected via 7 , it is natural to impose an order that supports a nonrecursive, “ping-pong’’ traversal: rather than recursing, we reset a loop cursor to a designated index and deterministically march through the next valid configuration. To set the stage, we introduce a canonical indexing by terminal node. Fix \(D\in\mathbb{N}\) and, for any \(k^*\in\{-D,-D+1,\dots,D\}\), define the set of all admissible depth-\(D\) paths that terminate at \((k^*,D)\): \[\mathscr{P}_{k^*} \;\overset{\text{def}}{=}\; \bigl\{ (\mathsf{v}_{0},\mathsf{v}_{1},\dots,\mathsf{v}_{D})\in\mathscr{P} :\;\mathsf{v}_{D}=(k^*,D) \bigr\}.\] This viewpoint lets us either sweep over all attainable \(k^*\) for a fixed depth \(D\), or restrict to a single target \(k^*\) when only that terminal node is of interest—providing flexibility in what the generator emits.

We impose a total order via the position projection \(\pi\) of 7 . The definition is recorded as: \[\label{S:LexOrd} \begin{align} &\text{For } \mathbf{k}=(k_0,\dots,k_D),\;\mathbf{k}'=(k'_0,\dots,k'_D)\in\mathbb{Z}^{D+1},\\[4pt] &\mathbf{k}\prec \mathbf{k}' \iff \exists\, d^\star \in \{0,\dots,D\}\;\text{minimal such that } k_{d^\star}\neq k'_{d^\star} \;\text{and}\;k_{d^\star}<k'_{d^\star},\\[4pt] &\mathsf{p}\prec_{\text{lex}}\mathsf{p}' \iff \pi(\mathsf{p})\prec \pi(\mathsf{p}')\quad\text{for } \mathsf{p},\mathsf{p}'\in\mathscr{P}. \end{align}\tag{10}\] In practice, this ordering mirrors “walking down’’ the tree while updating only a small number of coordinates at each step.

For implementation, it is convenient to begin with the strictly nonnegative representatives and then add a thin post-processing layer, after each path is generated, to recover the paths with negative excursions. Define the strictly nonnegative (“positive’’) paths that end at \(k^*\) by \[\mathscr{P}^+_{k^*} \;=\; \bigl\{\mathsf{p}\in \mathscr{P}_{k^*}:\;\pi(\mathsf{p})\in\mathbb{Z}_+^{D+1}\bigr\}.\] We enumerate them in lexicographic order as \[\mathscr{P}^+_{k^*} \;=\; \{\mathsf{p}^{(n)}:\;n=1,2,\dots,|\mathscr{P}^+_{k^*}|\}, \qquad \mathsf{p}^{(n)}\prec_{\text{lex}}\mathsf{p}^{(n+1)}.\] The lexicographically lowest positive path is \[\pi\!\bigl(\mathsf{p}^{(1)}\bigr) \;=\; \bigl(\underbrace{0,\dots,0}_{D+1-k^*},\,1,2,\dots,k^*\bigr).\] The lexicographically highest positive path—i.e., the \(\prec_{\text{lex}}\)-maximizer in \(\mathscr{P}^+_{k^*}\)—has \(\pi\)-image \[\label{E:highestpath} \begin{cases} \bigl(0,1,2,\dots,\tfrac{D+k^*}{2},\;\tfrac{D+k^*}{2}-1,\dots,k^*\bigr), & \text{if } D-k^*\;\text{is even},\\[6pt] \bigl(0,1,2,\dots,\tfrac{D+k^*-1}{2},\;\tfrac{D+k^*-1}{2},\;\tfrac{D+k^*-1}{2}-1,\dots,k^*\bigr), & \text{if } D-k^*\;\text{is odd}, \end{cases}\tag{11}\] i.e., an initial monotone rise that peaks at \(\bigl\lfloor\tfrac{D+k^*}{2}\bigr\rfloor\) and then (possibly after a single stay at the peak in the odd case) descends to \(k^*\). This tuple serves as the maximal seed for our nonrecursive enumeration.
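The two extremal \(\pi\)-images can be generated directly. The helper names below are ours; the construction simply transcribes 11 and the formula for \(\pi(\mathsf{p}^{(1)})\):

```python
def lowest_positive(D, k_star):
    """pi-image of p^(1): D+1-k_star zeros, then a final rise 1, 2, ..., k_star."""
    return [0] * (D + 1 - k_star) + list(range(1, k_star + 1))

def highest_positive(D, k_star):
    """pi-image of the lex-maximal positive path, Eq. (11): rise to the peak
    floor((D+k_star)/2), stay once at the peak iff D-k_star is odd, descend."""
    peak = (D + k_star) // 2
    stay = [peak] if (D - k_star) % 2 == 1 else []
    return list(range(peak + 1)) + stay + list(range(peak - 1, k_star - 1, -1))

assert highest_positive(4, 0) == [0, 1, 2, 1, 0]      # even case
assert highest_positive(5, 0) == [0, 1, 2, 2, 1, 0]   # odd case, single stay
```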

4.0.0.1 Seed–and–march (recursion-free control)

Beginning from this maximal seed, 10 advances by a simple two-step local update that preserves the admissibility rules of 2. Tick–down: select the largest index \(j\) whose coordinate can be decreased by \(1\) without violating feasibility (i.e., ComputeTickDown returns \(j\) with CheckValidity\((\texttt{CURRENT},j,-1)=\texttt{True}\)); if none exists, enumeration halts. Sweep–across: treat the decremented unit as freed mass and greedily increment successive indices \(i>j\) whenever CheckValidity\((\texttt{CURRENT},i,+1)\) holds, recording each valid intermediate path. This “tick–down then sweep–across” march visits the tuples of \(\mathscr{P}^+_{k^*}\) in \(\prec_{\text{lex}}\) order, as defined in 10 , using only local coordinate edits and \(O(1)\) work per updated coordinate—no recursion, no whole-path comparisons. After each emitted positive path, a thin post-processing layer restores negative excursions to recover the full \(\mathscr{P}_{k^*}\) without changing the control flow. This highlights the necessity of the lexicographical ordering we defined in 10 to remove the recursion from the path enumeration process. In 5 we extend this ordering to the main construction we present in this paper, making explicit how it enables the removal of recursion in these enumeration processes.
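As a correctness yardstick (not the seed-and-march generator itself), one can enumerate \(\mathscr{P}^+_{k^*}\) by filtering all step tuples and sorting the \(\pi\)-images lexicographically; for small instances this confirms the extremal paths:

```python
from itertools import product

def positive_paths_lex(D, k_star):
    """Brute-force reference: all strictly nonnegative depth-D position tuples
    ending at k_star, in the lexicographic order of Eq. (10)."""
    out = []
    for steps in product((-1, 0, 1), repeat=D):
        pos = [0]
        for s in steps:
            pos.append(pos[-1] + s)
        if pos[-1] == k_star and min(pos) >= 0:
            out.append(tuple(pos))
    return sorted(out)

paths = positive_paths_lex(4, 0)
assert len(paths) == 9                 # Motzkin number M_4
assert paths[0] == (0, 0, 0, 0, 0)     # lexicographically lowest
assert paths[-1] == (0, 1, 2, 1, 0)    # the maximal seed of Eq. (11)
```

Hashing such reference outputs against the stack-free generator, per \((D,k^*)\), is exactly the cross-check described for the recursive DFS.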

For comparison and completeness, 9 presents a classical recursive DFS that explores children via \(\{-1,0,+1\}\). Both schemes use the same lexicographic scaffold, but the DFS incurs branch exploration and \(O(D)\) call-stack depth, whereas 10 realizes the same enumeration with deterministic tick–down (sweep–across) updates in place—effectively making “GenComb” recursion-free while preserving outputs (and, under the same ordering policy, the enumeration order). As a practical cross-check, one can hash canonical path encodings and verify that, for each \((D,k^{*})\), the multiset of outputs from the recursive DFS matches those from our stack-free generator; implementation details appear with the algorithms.

4.0.0.2 From positive representatives to all paths

Most admissible paths reaching \(k^*\) will visit negative nodes. 12 supplies a light-weight post-processing: the flip family \(\mathscr{F}(\cdot)\) (57 ) negates any subset of unlocked positive excursions, producing valid paths with the same endpoint (which can be inserted into 10 without adding in recursion). 4 guarantees validity and endpoint preservation, and [prop:flip-representation] gives the exact representation 59 for \(k^*\ge 0\) and its sign-flipped counterpart 60 for \(k^*<0\). Operationally, one may emit each nonnegative representative as soon as it is generated by 10, and then emit its flip family in any fixed subset order (e.g., lexicographic in the excursion indices), preserving a global total order.

4.0.0.3 Parity for seeding

If a path has \(j_+\) up-steps, \(j_-\) down-steps, and \(j_0\) stays, then the relations in 61 and the parity constraint 62 (13) ensure that the chosen seed obeys the even–odd compatibility between \(D\), \(k^*\), and \(j_0\). In particular, the maximal seed 11 is consistent with admissible counts and enables a deterministic, recursion-free enumeration pipeline: lexicographically generate \(\mathscr{P}^+_{k^*}\) (10) from the maximal seed, then expand each representative by excursion flips to obtain \(\mathscr{P}_{k^*}\), preserving determinism and avoiding stack-based recursion, while remaining compatible with the unique-representative machinery in 5.
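The parity bookkeeping can be made concrete. Assuming only the step-count relations stated above (\(j_+ - j_- = k^*\) and \(j_+ + j_- + j_0 = D\)), a short sketch:

```python
def admissible_step_counts(D, k_star):
    """All (j_plus, j_minus, j_zero) with j_plus - j_minus = k_star and
    j_plus + j_minus + j_zero = D; substitution gives j_zero = D + k_star - 2*j_plus,
    hence j_zero has the same parity as D - k_star."""
    counts = []
    for j_plus in range(D + 1):
        j_minus = j_plus - k_star
        j_zero = D - j_plus - j_minus
        if j_minus >= 0 and j_zero >= 0:
            counts.append((j_plus, j_minus, j_zero))
    return counts

for jp, jm, j0 in admissible_step_counts(7, 2):
    assert (j0 - (7 - 2)) % 2 == 0  # j_zero ≡ D - k_star (mod 2)
```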

4.0.0.4 Removing Recursion from Unique Path Generation

This section forms a cornerstone of the paper’s main results. While our excursion-based treatment and the permutation of negatives across excursion blocks (Appendix 12) are conceptually interesting, we relegate their detailed mechanics to the appendices to avoid obscuring the core contributions. What matters here are the ideas of maximal paths and lexicographical ordering (see 10 ), which not only eliminate recursion from the forthcoming algorithms but also guide the combinatorial structure underlying them. By linking “walking down’’ the graph to “marching through’’ path tuples, these tools provide intuitive evidence for the correctness of our approach. Their practical integration—via the maximal seed defined in 11 , the recursion-free generation 10, and the post-processing flip mechanism justified in [prop:flip-representation]—yields a complete, nonrecursive enumeration pipeline. With this groundwork laid and the parity constraints recorded in 13, we now proceed to the main results, confident that the reader has both a graphical and an algorithmic picture of how ordering improves and informs path enumeration.

5 Enumeration of Unique Path Combinations↩︎

As before, every \(\mathsf{p}\in \mathscr{P}\) in a recombining tree \(\mathscr{T}_{D}\) can be mapped into a tuple via 7 . Our goal here is to enumerate only the unique paths—those that differ in value, not merely by a translation or re-ordering of identical vertex positions. Although such permutations are rare near the root, they proliferate rapidly with depth, creating a large amount of redundancy. Ultimately, removing them does not alter the computational blow-up of the problem, but it does extend the depth and breadth of the tree we can explore, allowing finer discretisation for certain discrete event problems that require it.

Building on the classic recursive framework reviewed in 2 and the tuple-based, recursion-free iterator enabled by our ordering scheme presented in 4, we now construct a closed-form combinatorial count and an efficient, recursion-free algorithm that enumerates every unique path. The key observation is that many of the paths terminating at the same node \((k^{*},D)\) share the same multiset of position values and thus the same aggregated value, differing only by shifts in visit order (see 1). In tuple form, this multiset is represented by a count vector that records how many times each position \(k\) appears. We are thus left with the following high-level algorithmic approach, which the remainder of this section expands upon in detail, presenting the main results of the paper:

  1. Count–vector representation: We treat each “cardinality tuple’’ as a count vector, i.e., a compact record of the multiplicities of each position in \(\pi(\mathsf{p})\).

  2. Recursion-free generation: We then iterate directly over these count vectors (using the “ping-pong’’ iterator logic presented in 10 in Section 4), producing exactly one tuple for every distinct vector and thereby eliminating shift-equivalent duplicates.

  3. Natural Negative-Path Augmentation: Finally, we introduce a simple and natural augmentation that allows us to enumerate negative paths.

Because of the construction of the algorithm in this manner, as will become clear in this section, duplicates are suppressed a priori. As a result, the tree is effectively pre-pruned while the entire procedure remains non-recursive, yielding a substantial reduction in runtime and memory consumption without an accuracy trade-off. This combined framework avoids both recursion and shift-equivalent paths, significantly enlarging the tractable region of the recombining tree.

5.1 Closed-Form Combinatorics for Unique Path Enumeration↩︎

Fix \(\mathsf{p}\in \mathscr{P}_{k^*}\). The viable positions are \[(k_-,k_-+1,\dots, k_+-1,k_+)\] where \[\label{E:Kbounds} \begin{align} k_- &= \begin{cases} \tfrac{k^*-D}{2} &\text{if } D-k^*\text{ is even} \\ \tfrac{k^*-D+1}{2} &\text{if } D-k^*\text{ is odd} \end{cases} \\ k_+ &= \begin{cases} \tfrac{D+k^*}{2} &\text{if } D-k^*\text{ is even} \\ \tfrac{D+k^*-1}{2} &\text{if } D-k^*\text{ is odd} \end{cases} \end{align}\tag{12}\] For each integer \(k\) in \([k_-,k_+]\), define \[c_k(\mathsf{p})\overset{\text{def}}{=}\left|\left\{d\in \{0,1,\dots,D\}: (\pi(\mathsf{p}))_d=k\right\}\right|;\] \(c_k(\mathsf{p})\) is the number of times that the position equals \(k\). Combining these, we define the cardinality tuple \[\label{cardinality_tuple_formal_def} \hat{C}(\mathsf{p})\overset{\text{def}}{=}\begin{blockarray}{cccc} k_- & k_-+1 & \dots & k_+ \\ \begin{block}{(cccc)} c_{k_-}(\mathsf{p}), & c_{k_-+1}(\mathsf{p}), & \dots & c_{k_+}(\mathsf{p}) \\ \end{block} \end{blockarray} \eqqcolon \hat{c}\tag{13}\] \(\hat{C}(\mathsf{p})\) is the empirical count of the values taken by the path and, more specifically, \(\hat{C}_{D,k^{*}}(\mathsf{p})\) is the empirical count of the values taken by paths that end at a particular depth \(D\) and position \(k^{*}\). 
We formally define this mapping here: \[\label{card_map} \hat{C}_{D,k^*} : \mathscr{P}_{k^*} \to \mathbb{N}^{[k_{-},k_{+}]}, \qquad \hat{C}_{D,k^{*}}(\mathsf{p})=\bigl(c_{k}(\mathsf{p})\bigr)_{k=k_{-}}^{k_{+}}.\tag{14}\] We then define the full set of all unique cardinality tuples—those that represent paths in a tree of depth \(D\) terminating at node \(k^{*}\)—as \(\mathcal{C}_{D,k^{*}}\): \[\label{full_card_set} \mathcal{C}_{D,k^{*}} \overset{\text{def}}{=}\bigl\{\,\hat{C}_{D,k^{*}}(\mathsf{p}) : \mathsf{p}\in \mathscr{P}_{k^{*}}\,\bigr\}\subset \mathbb{N}^{[k_{-},k_{+}]}.\tag{15}\] We then have that, for any \(\hat{c}=(c_{k})_{k=k_{-}}^{k_{+}} \in \mathcal{C}_{D,k^{*}}\), \[\label{E:totalmass} \sum_{k\in [k_-,k_+]} c_k = D+1.\tag{16}\] This construction naturally accommodates additional weights at each node, as illustrated in 1. Let \(\mathfrak{v}: [k_{-},k_{+}] \to \mathbb{R}\) denote any function assigning a weight \(\mathfrak{v}_{k}\) to each admissible position \(k\) (or node \((k,d)\)). Then, for a path \(\pi(\mathsf{p})=(k_0,k_1,\dots,k_D)\), the cumulative value along the path is \[\label{E:efficientaggregation} \sum_{d=0}^{D} \mathfrak{v}_{k_d} \;=\; \sum_{k\in [k_{-},k_{+}]} c_{k}(\mathsf{p})\,\mathfrak{v}_{k}.\tag{17}\] Thus the cardinality tuple provides an efficient way to aggregate weighted quantities along paths.
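A minimal sketch of the cardinality-tuple map 14 and the aggregation identity 17, using the assumed Figure 1 weights \(\mathfrak{v}_k = 20 + 2k\):

```python
from collections import Counter

def cardinality_tuple(positions):
    """Eq. (14): c_k = number of depths d with (pi(p))_d = k."""
    return Counter(positions)

positions = (0, 1, 0, 0, 0)     # a depth-4 path ending at k_star = 0
c = cardinality_tuple(positions)
v = lambda k: 20 + 2 * k        # assumed weights: v(0) = 20, v(1) = 22
assert sum(c.values()) == len(positions)  # Eq. (16): total mass is D + 1
# Eq. (17): path-sum equals the count-weighted sum over positions
assert sum(v(k) for k in positions) == sum(n * v(k) for k, n in c.items()) == 102
```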

Before embarking on the formal construction of our algorithm and its associated combinatorial framework, we pause to introduce a sequence of key remarks. These remarks form the conceptual foundation of the argument: they articulate the principles that guarantee correctness and provide the scaffolding on which the later technical details rest. We present them in modular form so that the reader can easily revisit them as the discussion unfolds, since many will be refined, extended, or specialized in later sections. In this way, the remarks serve both as a reference point and as a running thread that connects the evolving stages of the argument.

Due to the nature of the argument we will begin to construct, it becomes useful to introduce notation that separates the set 15 into "positive" \(\hat{c}\) and "mixed" \(\hat{c}\) (where mixed here means \(c_{k}, \; k< 0\) are allowed): \[\mathcal{C}^{+}_{D,k^*} \overset{\text{def}}{=}\bigl\{\,\hat{c}\in \mathcal{C}_{D,k^{*}}:\; c_k=0\;\text{for all }k<0\,\bigr\}, \qquad \mathcal{C}^{-}_{D,k^*} \overset{\text{def}}{=}\mathcal{C}_{D,k^{*}}\setminus \mathcal{C}^{+}_{D,k^{*}}.\] Thus \(\mathcal{C}^{+}_{D,k^*}\) consists of tuples arising from paths in \(\mathscr{P}_{k^*}\) that never visit \(k<0\), and \(\mathcal{C}^{-}_{D,k^*}\) those that visit at least one negative position. Enumerating the positive part first and then augmenting the algorithm to allow negative visits yields a clean construction, without carrying along negative-index components that are identically zero in what will soon be understood as "the first phase".

A central guarantee of our algorithm’s correctness comes from interpreting cardinality tuples as minimal representatives of the equivalence classes of all unique paths in the tree. Each tuple indexes exactly one equivalence class, ensuring that paths are counted and ordered without duplication or omission. In this way, cardinality tuples capture precisely the minimal information required to reconstruct the tree. We therefore formalize the equivalence relation and its induced class structure here, while postponing the detailed proof to 14.

We define an equivalence relation on \(\mathscr{P}_{k^*}\), using the definitions in 14 and 15 by \[\label{equivalence95relation} \mathsf{p}\sim \mathsf{p}' \;\Longleftrightarrow\; \hat{C}_{D,k^*}(\mathsf{p})=\hat{C}_{D,k^*}(\mathsf{p}').\tag{18}\] The equivalence class of \(\mathsf{p}\) is thus: \[\label{equivalence95class} [\mathsf{p}]_{\sim} = \hat{C}_{D,k^*}^{-1}\!\bigl(\hat{C}_{D,k^*}(\mathsf{p})\bigr) = \{\mathsf{p}' \in \mathscr{P}_{k^*}:\;\hat{C}_{D,k^*}(\mathsf{p}')=\hat{C}_{D,k^*}(\mathsf{p})\}.\tag{19}\]
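Grouping paths by their cardinality tuples realizes the equivalence relation 18 directly. This brute-force sketch (ours, viable only for small \(D\)) confirms that the classes partition \(\mathscr{P}_{k^*}\) and that there are strictly fewer classes than paths:

```python
from collections import Counter, defaultdict
from itertools import product

def classes_by_cardinality(D, k_star):
    """Partition all depth-D paths ending at k_star into the equivalence
    classes of Eq. (18), keyed by their cardinality tuple."""
    classes = defaultdict(list)
    for steps in product((-1, 0, 1), repeat=D):
        pos = [0]
        for s in steps:
            pos.append(pos[-1] + s)
        if pos[-1] == k_star:
            key = tuple(sorted(Counter(pos).items()))  # hashable count vector
            classes[key].append(tuple(pos))
    return classes

classes = classes_by_cardinality(4, 0)
assert sum(len(v) for v in classes.values()) == 19  # the classes cover all paths
assert len(classes) < 19                            # strictly fewer representatives
```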

As discussed in 10 , lexicographic ordering is useful for implementation and best-case bounds. For cardinality tuples we impose lexicographic order directly on \(\mathcal{C}^{+}_{D,k^*}\subset \mathbb{N}^{[0,k_+]}\); via \(\hat{C}_{D,k^*}\) this induces a representative-independent total order on the equivalence classes, i.e., \(\hat{c} \mathrel{\prec_{\mathrm{lex}}}\hat{c}'\). This is why the lexicographical ordering was so important to introduce: without it, we could not properly construct this process on equivalence classes. (Note that lex order on raw paths \(\pi(\mathsf{p})\) need not descend to the quotient unless it is constant on the fibers of \(\hat{C}_{D,k^*}\).) We will later extend the lexicographical ordering described here to all \(\hat{c} \in \mathcal{C}_{D,k^{*}} \subset \mathbb{N}^{[k_{-},k_{+}]}\) in 46 .

With all these remarks fully established, we can now build an enumeration of unique elements of \(\mathscr{P}_{k^*}\) organized by these cardinality tuples. Using the path rules in \(\mathscr{P}_{k^*}\), we will:

  • efficiently sequence all admissible (i.e., path-realizable) cardinality tuples, and

  • reconstruct the elements of \(\mathscr{P}_{k^*}\) from those tuples.

For the algorithm, we fix a canonical starting tuple that captures the maximal excursion, which we will refer to as the seed tuple. This seed tuple is analogous to what was referenced in 4 and defined formally in the path regime as 11 . With \(D\) and \(k^* \geq 0\) fixed and \([0,k_+]\) as in 12 , we define this seed tuple \(\hat{s}^{(0)}\), which is a valid \(\hat{c}\) and is thus contained in \(\mathcal{C}_{D,k^{*}}^{+}\), element-wise as follows: \[\label{E:starting_tuple} c_k \;=\; \begin{cases} 0, & k_{-} \leq k<0,\\[2pt] 1, & 0 \le k \le k^*-1,\\[2pt] 2, & k^* \le k \le k_+-1,\\[6pt] \begin{cases} 1, & D-k^{*} \text{ even},\\ 2, & D-k^{*} \text{ odd}, \end{cases} & k = k_+, \end{cases} \qquad (k\in [k_-,k_+]).\tag{20}\] Equivalently, \((c_k)_{k=k_-}^{k_+}\) has \(0\) for \(k<0\), then \(1\) on \([0,k^*)\), \(2\) on \([k^*,k_+)\), and a top entry at \(k_+\) determined by the parity of \(D-k^*\); this is the “highest’’ tuple corresponding to 11 , and \(\sum_{k=k_-}^{k_+} c_k = D+1\) by 16 . It serves as the starting point for the mass–shift enumeration. By 17 and 18 , iterating over admissible tuples enumerates all equivalence classes and, via 14 through 19 , recovers the entire tree without redundancy; given the ordering scheme, this can be thought of as walking down the unique path representatives in the tree. Note that this maximal tuple lies in \(\mathcal{C}^{+}_{D,k^{*}}\). We will later extend the enumeration from \(\mathcal{C}^{+}_{D,k^{*}}\) to the full \(\mathcal{C}_{D,k^{*}}\) (thus including \(\mathcal{C}^{-}_{D,k^{*}}\)) via an augmentation introduced after the construction. For now, as in [set_splitting], it is useful to begin with the maximal strictly positive case (including \(k^{*} = 0\)).
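A minimal sketch of the seed-tuple construction in 20 , assuming the boundary forms \(k_-=-\lfloor (D-k^*)/2\rfloor\) and \(k_+=k^*+\lfloor (D-k^*)/2\rfloor\) (these forms are our assumption, consistent with the worked example below, where \(D=7,\,k^*=2\) give \(k_-=-2,\,k_+=4\)):

```python
def boundaries(D, k_star):
    # Assumed forms of k_- and k_+, consistent with the worked
    # example (D=7, k*=2 gives k_-=-2, k_+=4).
    reach = (D - k_star) // 2
    return -reach, k_star + reach

def seed_tuple(D, k_star):
    """The seed tuple s^(0) of (20), indexed k = k_-, ..., k_+."""
    k_minus, k_plus = boundaries(D, k_star)
    c = []
    for k in range(k_minus, k_plus + 1):
        if k < 0:
            c.append(0)                  # never visited below 0
        elif k <= k_star - 1:
            c.append(1)                  # single visit on the way up
        elif k <= k_plus - 1:
            c.append(2)                  # up once, down once
        else:                            # k == k_+: parity of D - k*
            c.append(1 if (D - k_star) % 2 == 0 else 2)
    return tuple(c)
```

The tuple sums to \(D+1\), as required by 16 .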

Before the enumeration process begins, it is important to note a structural constraint that follows directly from the graph. At all times during the redistribution procedure, the following conditions must hold:

  1. For every index \(0 \leq k \leq k^{*}-1\), where we consider arbitrary \(\hat{c} \in \mathcal{C}^{+}_{D,k^{*}}\), we must maintain \(c_{k} \geq 1\). This reflects the fact that each such slot must retain at least one visit to its node in order to allow a path to ascend from \(0\) up to \(k^{*}\).

  2. For every index \(k^{*} \leq k \leq k_{+}-1\), the entry \(c_{k}\) can only be decremented below \(2\) once the entry immediately to its right, \(c_{k+1}\), has already been reduced to \(0\). This expresses the requirement that any excursion reaching level \(k \geq k^{*}\) must eventually return downwards to \(k^{*}\), which necessitates at least two visits at those intermediate heights.

The only exception is the current highest occupied slot \(c_{k}\) at the peak of an excursion: this slot may equal \(1\), since its unit of mass can be redistributed while still preserving the condition that all admissible paths terminate at \(k^{*}\). This graphical constraint also imposes a natural stopping condition on the algorithm, which we will discuss in detail later.

These constraints in [rem:mass_constraints] are immediate from the structure of the underlying recombining tree. They can be seen in a clear example where the red dotted line, which starts at \(k^{*}\), requires that all elements below it (highlighted in yellow) meet condition (1) in [rem:mass_constraints] and all elements at or above it meet condition (2) in [rem:mass_constraints]:

Figure 4: Example of Graphically Informed Constraints on \(c_{k}\)

These constraints - informed by the structure of the graph - ensure that every admissible redistribution corresponds to a valid path ending at \(k^{*}\) while also remaining true to the minimality of the equivalence class representation of the tree 19 .
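Assuming that, for a completed tuple, conditions (1)–(2) admit the following static restatement (baseline visits below \(k^{*}\); at least two visits at every level from \(k^{*}\) up to just below the highest occupied level, which itself needs only one), a feasibility check can be sketched as below. This is our reformulation, not the paper's verbatim statement:

```python
def is_feasible(c_trunc, D, k_star):
    """Check whether a truncated tuple (c_0, ..., c_{k_+}) is an
    admissible cardinality tuple for a positive path 0 -> k*.
    Static reformulation of constraints (1)-(2); an assumption."""
    if sum(c_trunc) != D + 1:
        return False                 # total visits must equal D + 1
    # highest occupied level
    h = max((k for k, c in enumerate(c_trunc) if c > 0), default=-1)
    if h < k_star:
        return False                 # path must reach k*
    if any(c_trunc[k] < 1 for k in range(0, k_star)):
        return False                 # condition (1): ascent to k*
    if h == k_star:
        return c_trunc[k_star] >= 1  # no excursion above k*
    # condition (2): an excursion above k* must return down to k*
    return all(c_trunc[k] >= 2 for k in range(k_star, h)) and c_trunc[h] >= 1
```

For instance, \((1,1,1,5)\) with \(D=7,\,k^*=2\) fails: level \(3\) is occupied, so the excursion requires \(c_2 \geq 2\).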

5.1.0.1 Working Example

Let’s work through a simple example, taking \(D=7\) and \(k^*=2\). Using 12 , we have \[k_-=-2 \qquad \text{and}\qquad k_+=4.\]

From 11 , the highest path with this \((k^{*},D) = (2,7)\) has position sequence \[\label{E:samplemaxpath} (\textcolor{blue}{0},\textcolor{blue}{1},\textcolor{red}{2},\textcolor{red}{3},\textcolor{green}{4},\textcolor{green}{4},\textcolor{red}{3},\textcolor{red}{2})\tag{21}\] and this has cardinality tuple \(\hat{c}\): \[\label{E:toptuple} \begin{blockarray}{ccccccc} -2 & -1 & 0 & 1 & 2 & 3 & 4 \\ \begin{block}{(ccccccc)} 0, & 0, & \textcolor{blue}{1}, & \textcolor{blue}{1}, & \textcolor{red}{2}, & \textcolor{red}{2}, & \textcolor{green}{2} \\ \end{block} \end{blockarray}\tag{22}\] In line with 16 , we have \[0 + 0 + 1 + 1 + 2 + 2 + 2 = 8.\]

Note the structure of 22 compared to 21 . The path goes up to \(k^*=2\), then further up to \(4\), and then back down to \(k^*=2\). It visits \(0\) and \(1\) once (on the way up), and visits \(2\) and \(3\) twice (once up, once down). It also visits \(4\) twice (since \(D-k^*=5\) is odd), though it would visit \(4\) once if \(D-k^*\) were even. In 21 and 22 , the “way up’’ is in blue, the top of the excursion above \(k^*=2\) in green, and the remaining descent above \(k^*=2\) in red, similar in spirit to 4.

Next, let us consider the next highest path reaching \((k^{*},D) = (2,7)\), which has cardinality tuple \(\hat{c}\): \[\label{E:nexthighestcardinality} \begin{blockarray}{ccccccc} -2 & -1 & 0 & 1 & 2 & 3 & 4 \\ \begin{block}{(ccccccc)} 0, & 0, & 1, & 1, & 2, & 3, & 1 \\ \end{block} \end{blockarray}\tag{23}\] These are the paths \(\mathsf{p}\): \[(0,1,2,3,3,4,3,2) \qquad \text{and}\qquad (0,1,2,3,4,3,3,2).\]

In terms of the lexicographical ordering of 10 , \[(0,1,2,3,4,4,3,2) \succ_{\text{lex}}(0,1,2,3,4,3,3,2) \succ_{\text{lex}}(0,1,2,3,3,4,3,2).\] In other words, the highest path 21 is greater than the paths corresponding to 23 .

What is especially nice is that this same lexicographical ordering is maintained in the context of the cardinality tuples themselves. Here we compare tuples right to left (decreasing \(k\)), so \(\hat{c} = (0,0,1,1,2,2,2)\) is read as \((2,2,2,1,1,0,0)\) for the comparison. Thus, we can compare 22 with 23 as \[\begin{blockarray}{ccccccc} 4 & 3 & 2 & 1 & 0 & -1 & -2 \\ \begin{block}{(ccccccc)} \textcolor{green}{2}, & \textcolor{red}{2}, & \textcolor{red}{2}, & \textcolor{blue}{1}, & \textcolor{blue}{1}, & 0, & 0\\ \end{block} \end{blockarray} \;\succ_{\text{lex}}\; \begin{blockarray}{ccccccc} 4 & 3 & 2 & 1 & 0 & -1 & -2 \\ \begin{block}{(ccccccc)} 1, & 3, & 2, & 1, & 1, & 0, & 0 \\ \end{block} \end{blockarray}\] Consequently, the induced ordering between equivalence classes is determined by their cardinality tuples (under this right-to-left lex order we observed in 10 and [card_ordering]) and is independent of which path representative from each class is chosen. Here, we have pushed one unit from the top of the excursion above \(k^*=2\) (at height \(4\)) back down into the lower parts of the excursion.

Let’s now examine the third highest path reaching \((k^{*},D) = (2,7)\) with cardinality tuple \[\label{E:secondcardinalitytuple} \begin{blockarray}{ccccccc} -2 & -1 & 0 & 1 & 2 & 3 & 4 \\ \begin{block}{(ccccccc)} 0, & 0, & 1, & 1, & 3, & 2, & 1 \\ \end{block} \end{blockarray}\tag{24}\] These correspond to the paths: \[(0,1,2,2,3,4,3,2) \; \text{ and } \; (0,1,2,3,4,3,2,2).\] There are exactly \(2\) paths corresponding to 24 . In terms of the lexicographical ordering of 10 , we therefore have \[\begin{gather} (0,1,2,3,4,4,3,2) \;\succ_{\text{lex}}\; (0,1,2,3,4,3,3,2) \;\succ_{\text{lex}}\; (0,1,2,3,4,3,2,2) \;\succ_{\text{lex}}\;\\ (0,1,2,3,3,4,3,2) \;\succ_{\text{lex}}\; (0,1,2,2,3,4,3,2). \end{gather}\] In particular, the lower of the two paths from 23 , namely the path \((0,1,2,3,3,4,3,2),\) lies above the path \((0,1,2,2,3,4,3,2)\) from 24 , but below the other path from 24 , \((0,1,2,3,4,3,2,2),\) which lies strictly between the two paths from 23 .

For the corresponding cardinality tuples, we compare entries from right to left (decreasing \(k\)) as before. Writing tuples in the order \(4,3,2,1,0,-1,-2,\) we have \[\begin{blockarray}{ccccccc} 4 & 3 & 2 & 1 & 0 & -1 & -2 \\ \begin{block}{(ccccccc)} 1, & 3, & 2, & 1, & 1, & 0, & 0 \\ \end{block} \end{blockarray} \;\;\succ_{\text{lex}}\;\; \begin{blockarray}{ccccccc} 4 & 3 & 2 & 1 & 0 & -1 & -2 \\ \begin{block}{(ccccccc)} 1, & 2, & 3, & 1, & 1, & 0, & 0 \\ \end{block} \end{blockarray}\] so the class for 23 precedes the class for 24 under this right-to-left lex order. As before, the induced ordering between equivalence classes is determined by their tuples and is independent of which path representative from each class is chosen. Here, compared with 23 , we have pushed one visit from level \(3\) down to level \(2\) (the single visit at level \(4\) remains unchanged).
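In code, this right-to-left comparison is just ordinary lexicographic order applied to the reversed tuples; a small sketch (the function name is ours) reproduces the ordering 22 \(\succ\) 23 \(\succ\) 24 :

```python
def rl_lex_key(c):
    """Key for right-to-left lexicographic comparison: compare the
    entry at the largest k first, i.e., compare reversed tuples."""
    return tuple(reversed(c))

# The three tuples (22), (23), (24) from the worked example:
tuples = [(0, 0, 1, 1, 3, 2, 1), (0, 0, 1, 1, 2, 2, 2), (0, 0, 1, 1, 2, 3, 1)]
ordered = sorted(tuples, key=rl_lex_key, reverse=True)
```

Sorting by this key is representative-independent, since the key depends only on the tuple indexing the class.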

We also emphasize that we have restricted attention to the positive set \(\mathcal{C}^{+}_{D,k^*}\) as shown in [set_splitting] - which we will later augment to the full \(\mathcal{C}_{D,k^{*}}\). Since every \(\hat{c}\in\mathcal{C}^{+}_{D,k^*}\) has \(c_k=0\) for all \(k<0\), it is convenient to work with the truncated (positive–restriction) map \[\mathrm{Tr}:\;\mathcal{C}^{+}_{D,k^*}\longrightarrow \mathrm{Tr}(\mathcal{C}^{+}_{D,k^*})\subseteq \mathbb{N}^{[0,k_+]},\quad \mathrm{Tr}\bigl((c_k)_{k_-}^{k_+}\bigr)\overset{\text{def}}{=}(c_k)_{k=0}^{k_+},\] and we denote by \(\mathrm{Tr}^{-1}\) its inverse on this image: \[\mathrm{Tr}^{-1}:\;\mathrm{Tr}(\mathcal{C}^{+}_{D,k^*})\longrightarrow \mathcal{C}^{+}_{D,k^*},\qquad \mathrm{Tr}^{-1}\bigl((c_k)_{0}^{k_+}\bigr)\overset{\text{def}}{=}(\underbrace{0,\dots,0}_{k= k_-,\dots,-1}, c_0,\dots, c_{k_+}).\] On \(\mathcal{C}^{+}_{D,k^*}\) we have \(\mathrm{Tr}^{-1}\circ \mathrm{Tr}=\mathrm{id}_{\mathcal{C}^{+}_{D,k^*}}\) and \(\mathrm{Tr}\circ \mathrm{Tr}^{-1}=\mathrm{id}_{\mathrm{Tr}(\mathcal{C}^{+}_{D,k^*})}\), so no information is lost by dropping the identically zero negative coordinates. The right-to-left lexicographic order is also preserved, because the discarded entries are equal (all zeros) for every element of \(\mathcal{C}^{+}_{D,k^*}\).

Returning to the running example with \(D=7\), \(k^*=2\), \(k_-=-2\), \(k_+=4\), we see that the length-\(7\) tuples in \(\mathcal{C}^{+}_{7,2}\) (indexed by \(-2,-1,0,1,2,3,4\)) correspond bijectively (cf. [truncation]) to length-\(5\) truncated tuples (indexed by \(0,1,2,3,4\)): \[\begin{gather} (0,0,1,1,2,2,2)\;\longleftrightarrow\;(1,1,2,2,2),\\ (0,0,1,1,2,3,1)\;\longleftrightarrow\;(1,1,2,3,1),\\ (0,0,1,1,3,2,1)\;\longleftrightarrow\;(1,1,3,2,1),\\ (0,0,1,2,2,2,1)\;\longleftrightarrow\;(1,2,2,2,1),\\ (0,0,2,1,2,2,1)\;\longleftrightarrow\;(2,1,2,2,1). \end{gather}\] where the last two entries complete the list produced by this procedure. Thus we notice that this procedure is precisely the movement of one unit of mass \(i=1\) through the tuple. The “move one unit of mass across the nonzero support” visualization can be carried out on the truncated tuples as an addition: \[\label{mass_process} \begin{align} &(1,1,2,2,1) + (0,0,0,0,1) = (1,1,2,2,2)\\ \Longrightarrow\;&(1,1,2,2,1) + (0,0,0,1,0) = (1,1,2,3,1)\\ \Longrightarrow\;&(1,1,2,2,1) + (0,0,1,0,0) = (1,1,3,2,1)\\ \Longrightarrow\;&(1,1,2,2,1) + (0,1,0,0,0) = (1,2,2,2,1)\\ \Longrightarrow\;&(1,1,2,2,1) + (1,0,0,0,0) = (2,1,2,2,1) \end{align}\tag{25}\] Note that whenever needed, we recover the full length-\(7\) representatives in \(\mathcal{C}^{+}_{7,2}\) by applying \(\text{Tr}^{-1}\), i.e., by padding two leading zeros (the entries for \(k=-2,-1\)). In particular, the truncated tuple \((1,1,2,3,1)\) is exactly the positive-restriction of the length-\(7\) tuple \((0,0,1,1,2,3,1)\), and similarly for the others. Hence all ordering and enumeration arguments carry over verbatim on \(\mathcal{C}^{+}_{D,k^*}\) using truncated tuples, with no loss of correctness or generality.
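The one-mass sweep of 25 can be reproduced mechanically on truncated tuples; a minimal sketch (the function name is ours):

```python
def one_mass_sweep(seed_trunc):
    """Remove one unit from the rightmost entry and re-add it in each
    slot from right to left, reproducing the stage m=1 family of (25)."""
    base = list(seed_trunc)
    base[-1] -= 1                      # e.g. (1,1,2,2,2) -> (1,1,2,2,1)
    out = []
    for j in range(len(base) - 1, -1, -1):
        t = list(base)
        t[j] += 1                      # place the unit of mass in slot j
        out.append(tuple(t))
    return out
```

Applied to the truncated seed \((1,1,2,2,2)\), this yields exactly the five tuples of 25 , in the right-to-left lexicographic order.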

This same process in 25 can be captured by graphically traversing the tree. To do this, we consider a process where a unit of mass \(i=1\) is subtracted from the final entry of \(\hat{c}\), and this mass is “moved” leftward through the tuple, one position at a time. This results in the following sequence of configurations which we can interpret as a walk down the unique path representatives in the graph: \[\begin{align} \begin{array}{c} (0, 0, \ldots, 0, 1) \\ (0, 0, \ldots, 1, 0) \\ \vdots \\ (0, 1, \ldots, 0, 0) \\ (1, 0, \ldots, 0, 0) \end{array} \quad \Longrightarrow \quad \begin{array}{c} \text{Sample Path Configuration}\\ \includegraphics[width=0.5\textwidth]{drawing.pdf} \label{first_partition} \end{array} \end{align}\tag{26}\] As can be seen in 26 , the “one–mass sweep’’ already exhausts all unique configurations when \(k^{*}=D-1\): by the equivalence relation 18 and the graphical observations in 26 , shifting zero mass (the starting tuple 20 ) and then one unit from the rightmost entry across the support generates every admissible class at that depth. For deeper targets \(k^{*}<D-1\) (e.g., the toy case \(D=7,\;k^{*}=2\)), additional configurations appear and we must move larger amounts of mass while preserving feasibility of paths (hence of the cardinality tuples) to generate all the possible configurations.

To organize this, we define the number of available slots, for \(\hat{c} \in \mathcal{C}_{D,k^{*}}^{+}\) at a given stage to be the count of indices in the truncated support \([0,k_+]\) that a unit of mass may traverse from the current rightmost non-zero index: \[\ell \;\overset{\text{def}}{=}\; k_+ - 0 + 1 \;=\; k_+ + 1.\] In the toy example, \(\ell=5\). A unit of mass may be removed only from the final element \(c_{k_+}\) and shifted left through these \(\ell\) slots; this produces exactly the one–mass family. Once \(c_{k_+}\) is depleted, we proceed to \(c_{k_+-1}\), and \(\ell\) decreases by one (we never “move over zero slots,” which would duplicate earlier tuples or violate feasibility). To reach new configurations, we then have two units we need to move, and so on: at stage \(m\) we permit moving \(m\) units while the available slots shrink as the right boundary retreats. This schedule respects the graph constraints at every step and, under the lexicographic order, enumerates all admissible tuples without omission or repetition.

Returning to our toy example, we remove two units of mass from the rightmost entry of an arbitrary seed \(\hat{c} \in \mathcal{C}_{7,2}^{+}\) whose elements are described by 20 . The construction proceeds analogously to 25 : the mass is first subtracted, then permuted across the \(\ell\) available slots, and finally these permutations are added back to the originating tuple. This additive perspective serves only to illustrate the mechanism; the algorithm itself does not require explicitly performing these additions.

Now that we are at stage \(m = 2\), we need to move \(i = m = 2\) mass across \(\ell=4\) many slots. One would naturally ask the question: what about \((0,0,0,0,2) + (1,1,2,2,0)\)? Our algorithm naturally captures this through our seed tuple \(\hat{s}^{(0)}\) 20 , so we don’t want to count it again. But, by construction of the algorithm, we automatically exclude this case from being counted by how we start this second stage, namely from the depleted base tuple obtained by removing both units of mass from the rightmost entry of the seed: \[\tag{27} (1,1,2,2,2)\;\longrightarrow\;(1,1,2,2,0).\]

Note as well that this same process happened bridging our mass \(m = i = 0\) stage to our \(m = i = 1\) stage, which built in this natural exclusion of the seed tuple \(\hat{s}^{(0)}\) from the \(i = 1\) case as well: \[\tag{29} (1,1,2,2,2)\;\longrightarrow\;(1,1,2,2,1).\] In this case we removed \(0\) mass from the leftmost entry (since taking more would violate the tree structure) and \(1\) mass from the rightmost entry (recovering the seed configuration when it is added back). Thus we moved \(i=1\) across \(\ell=4\) slots and \(i=0\) across \(\ell=5\) slots, exactly as predicted. Summing these possibilities gives \(5\) distinct combinations, in agreement with 25 .

By construction, this procedure never under- or over-counts equivalence classes, preserving the minimality of the relation. Moreover, the same mechanism applies at every stage: the slot–mass combinatorics automatically encode the correct enumeration. With these tools equipped, we can finally enumerate the \(m = i = 2\) stage for our toy example: \[\label{second_partition} \begin{align} &(1,1,2,2,0) + (0,0,0,2,0) = (1,1,2,4,0)\\ \implies &(1,1,2,2,0) + (0,0,1,1,0) = (1,1,3,3,0)\\ \implies &(1,1,2,2,0) + (0,1,0,1,0) = (1,2,2,3,0)\\ \implies &(1,1,2,2,0) + (1,0,0,1,0) = (2,1,2,3,0)\\ \implies &(1,1,2,2,0) + (0,0,2,0,0) = (1,1,4,2,0)\\ \implies &(1,1,2,2,0) + (0,1,1,0,0) = (1,2,3,2,0)\\ \implies &(1,1,2,2,0) + (1,0,1,0,0) = (2,1,3,2,0)\\ \implies &(1,1,2,2,0) + (0,2,0,0,0) = (1,3,2,2,0)\\ \implies &(1,1,2,2,0) + (1,1,0,0,0) = (2,2,2,2,0)\\ \implies &(1,1,2,2,0) + (2,0,0,0,0) = (3,1,2,2,0) \end{align}\tag{31}\] What 25 and 31 show is that we are effectively counting combinations of integer partitions over a truncated tuple of size \(\ell\), and then reinserting that truncated structure into the stage-\(m\) starting tuple (cf. 27 , 29 ). In particular, comparing 25 and 31 with the integer partitions of \(i=1,2\) demonstrates that this identification is valid without loss of generality, accuracy, or minimality of the equivalence relation:

  • For \(i=1\), there is only one partition: \((1)\).

  • For \(i=2\), there are two partitions: \((2)\) and \((1+1)\).
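The slot–mass counting above can be checked by explicitly generating weak compositions; a short recursive generator (ours, for illustration) reproduces the \(4\) stage-one redistributions and the \(10\) stage-two redistributions over \(\ell=4\) slots:

```python
from math import comb

def weak_compositions(i, l):
    """All ways to write i as an ordered sum of l nonnegative parts."""
    if l == 1:
        yield (i,)
        return
    for first in range(i + 1):
        for rest in weak_compositions(i - first, l - 1):
            yield (first,) + rest

# Stage m=1: one unit over 4 slots; stage m=2: two units over 4 slots.
n1 = sum(1 for _ in weak_compositions(1, 4))
n2 = sum(1 for _ in weak_compositions(2, 4))
```

Each weak composition of \(i=2\) is a placement of the partitions \((2)\) or \((1+1)\) into slots, which is exactly what the ten rows of 31 enumerate.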

At any stage, the validity of this procedure can be verified empirically by examining the tree graph for fixed \(k^{*}\) and \(D\) (both finite for case studies):

Figure 5: Recombining Tree Paths

This graphical viewpoint, together with the toy example and algorithm, provides the foundation for developing closed-form combinatorics. We develop said combinatorics by introducing the well-known formula for counting "weak compositions" [16]: \[\begin{align} \mathscr{C}^w = \binom{i + \ell - 1}{\ell - 1} \end{align}\] This formula gives an exact count of the ways to distribute a mass \(i\) across \(\ell\) slots and, in doing so, establishes a precise correspondence between the enumeration of the equivalence classes of tree structures and the theory of integer partitions - captured formally by the weak composition formula. Thus, this formula can be exploited to obtain a closed-form combinatorial expression with a corresponding algorithm that enables one to either (a) count and enumerate exactly the number of unique paths reaching a node \(k^{*}\) of their choice in a recombining trinomial tree of depth \(D\), or (b) to generate all unique paths in a recombining trinomial tree of depth \(D\) by iterating over all \(k^{*} \in [-D,D]\).

In order to achieve this, we must take into account the dynamic nature of the size of the mass, the slots available to us during each stage, and, finally, the existence of negative paths, extending these observations and the combinatorial expression to capture the entire general structure (i.e. the case \(k^{*} \in \{D, D-1,..., 0, ..., -D + 1, -D\}\)). To that end, we perform a case study, which we link to our toy example and then continuously extend until we have generalized to this case.

5.1.1 Case Examination:

Case 1: \(k^{*} = D,\, D \geq 0\)

In 20 we introduced a seed tuple as the general template for initialization; in the toy example this specialized to 22 . One perspective on this seeding is provided by the weak composition formula: here we move \(m=i=0\) units of mass across a tuple with \(\ell=5\) available slots, \[\begin{align} \binom{0+5-1}{5-1} = \binom{4}{4} = 1, \end{align}\] confirming that the initialization consists of a single cardinality tuple. More generally, when \(k^{*}=D\) the same phenomenon occurs (cf. [fig:tree-four-panel]): the algorithm is seeded, no mass can be moved without violating the tuple (and hence the tree), and exactly one maximal path results.

Formally, \[\mathscr{C}^{w}_{D,\,k^{*}=D} = \binom{0+\ell-1}{\ell-1} = \binom{\ell-1}{\ell-1} = 1.\] In this case, the set \(\mathcal{C}_{D,\,k^{*}=D}\) contains a single cardinality tuple, so \(|\mathcal{C}_{D,\,k^{*}=D}|=1\). The weak-composition procedure \(\mathscr{C}^{w}_{D,\,k^{*}=D}\) therefore enumerates exactly one configuration, establishing \(|\mathscr{C}^{w}_{D,\,k^{*}=D}| = |\mathcal{C}_{D,\,k^{*}=D}| = 1.\) In our toy example \(k^{*}\neq D\), so further stages are required beyond this trivial case, and we therefore proceed to Case 2.

Case 2: \(k^{*} = D - 1,\, D - 1 \geq 0\)

In this case we recover exactly the situation of our worked example in 26 : the process reduces to moving a single unit of mass across the cardinality tuple, which corresponds to stepping down one level at a time in the graph in order to generate the next collection of possible paths, naturally following the lexicographical ordering. This procedure generates all unique legal paths, and—by manipulating the tuple entries—can equivalently be used to reconstruct all corresponding tuples. Moreover, as illustrated in [fig:tree-four-panel], no negative excursions are possible at this depth, so no augmentation of the counting process is required.

Formally, the enumeration reduces to the positive set, \(\mathcal{C}^{+}_{D, k^{*}=D-1} = \mathcal{C}_{D, k^{*}=D-1}\). We again apply the weak composition formulation, verified directly in 25 . The seed tuple \(\hat{s}^{(0)}\) (the first tuple enumerated in 25 ) is already counted by \(\mathscr{C}^{w}_{D,\,k^{*}=D}\), so it remains only to count the ways of moving \(m=i=1\) unit of mass through \(\ell-1\) slots and add this to the base case. Thus, \[\mathscr{C}^{w}_{D,\,k^{*}=D-1} = \mathscr{C}^{w}_{D,\,k^{*}=D} + \binom{1+(\ell-1)-1}{(\ell-1)-1} = 1 + \binom{\ell-1}{\ell-2} = \ell.\] This agrees with the explicit enumeration in 25 (where \(\ell=5\) gives exactly the \(5\) tuples enumerated there) and matches the walk down the path in the graphical structure of [fig:tree-four-panel].

Recognizing the Algorithm

For \(m=i=2\) an algorithmic pattern emerges. The seed tuple \(\hat{s}^{(0)}\) (cf. 20 ) has its entries determined by the depth \(D\) and the terminal index \(k^{*}\). By parity, the last entry \(c_{k_{+}}\) is either \(1\) or \(2\), so the mass–redistribution count must branch on this value. This is already visible in Case 2: moving one unit requires counting over \(\ell-1\) slots, in addition to the trivial \(i=0\) move over \(\ell\) slots. In our toy example \(\mathcal{C}_{D=7,k^{*}=2}\), the final entry begins at \(2\), so this issue does not arise immediately; however, if \(c_{k_{+}}=1\) (which is permitted and exhibited by 20 ), that step exhausts the available mass at the boundary, and to avoid repeats (and to continue the lexicographically ordered “march down” the tree) we must proceed with two units moved over \(\ell-2\) slots instead. In short, the enumeration at each stage respects the lexicographic schedule and the tree’s feasibility constraints, but its first increment depends on the parity of the terminal entry. Accordingly, we introduce the switching term \[\label{switching_term} \beta \;=\; \begin{cases} 1, & D - k^{*} \equiv 0 \pmod{2},\\ 2, & D - k^{*} \equiv 1 \pmod{2}, \end{cases}\tag{32}\] so that \(\beta\) coincides with the terminal entry \(c_{k_{+}}\) of the seed tuple 20 , and dictates whether the initial increment at the right boundary consumes one or two units before the process resumes its lexicographic descent. If this term’s use is not apparent now, it will become obvious once we write out the full combinatorics.
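Consistent with the parity rule in 20 (the toy example has \(D-k^{*}=5\) odd and \(c_{k_+}=2\)), the toggle can be computed as a one-liner; the function name is ours:

```python
def beta(D, k_star):
    """Switching term: beta equals the seed tuple's terminal entry
    c_{k_+}, which by (20) is 1 when D - k* is even and 2 when odd."""
    return 1 if (D - k_star) % 2 == 0 else 2
```

The value depends only on the parity of \(D-k^{*}\) (equivalently, of \(D+k^{*}\)), so it is fixed once the tree and target node are chosen.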

With the switching term \(\beta\) in place, we separate the counting process into two regimes: the even and odd cases. The need for this distinction comes from the observation that the number of available slots \(\ell\) depends on the parity of the terminal entry \(c_{k_{+}}\) in the seed tuple. This parity directly alters the redistribution schedule and hence modifies the weak composition count \(\mathscr{C}^{w}\). Consequently, in the general case the formula itself must adapt, and \(\beta\) serves as the toggle between the two regimes.

We therefore write the weak composition enumeration of \(\mathcal{C}^{+}_{D,k^{*}}\) for the \(m\)-th stage, valid for any \(m>0\) and any \(i \geq 0\), in two versions: one for the even case (\(\beta=2\)) and one for the odd case (\(\beta=1\)). This separation ensures that the lexicographic march down the tree is preserved and that no paths are over- or under-counted.

Odd Case: \(c_{k_{+}} = 1, \; \beta = 1\)
stage (\(m\)) mass (\(i\)) slots (\(\ell\)) #compositions (\(\mathscr{C}^{w}\))
0 0 \(\ell\) 1
1 1 \(\ell - 1\) \(\binom{i + \ell - 2}{\ell - 2}\)
2 2 \(\ell - 2\) \(\binom{i + \ell - 3}{\ell - 3}\)
3 3 \(\ell - 2\) \(\binom{i + \ell - 3}{\ell - 3}\)
4 4 \(\ell - 3\) \(\binom{i + \ell - 4}{\ell - 4}\)
5 5 \(\ell - 3\) \(\binom{i + \ell - 4}{\ell - 4}\)
\(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\)
\(m\) \(i = D - k^{*}\) \(\ell - \lceil \tfrac{i+1}{2} \rceil, \, \, i > 0\) \(\binom{i + \ell - \lceil \tfrac{i+1}{2} \rceil - 1}{\ell - \lceil \tfrac{i+1}{2} \rceil - 1}\)
Table 1: Weak Composition Patterns with Parity-Based Indexing
Even Case: \(c_{k_{+}} = 2, \; \beta = 2\)
stage (\(m\)) mass (\(i\)) slots (\(\ell\)) #compositions (\(\mathscr{C}^{w}\))
0 0 \(\ell\) 1
1 1 \(\ell - 1\) \(\binom{i + \ell - 2}{\ell - 2}\)
2 2 \(\ell - 1\) \(\binom{i + \ell - 2}{\ell - 2}\)
3 3 \(\ell - 2\) \(\binom{i + \ell - 3}{\ell - 3}\)
4 4 \(\ell - 2\) \(\binom{i + \ell - 3}{\ell - 3}\)
5 5 \(\ell - 3\) \(\binom{i + \ell - 4}{\ell - 4}\)
\(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\)
\(m\) \(i = D - k^{*}\) \(\ell - \lfloor \tfrac{i+1}{2} \rfloor\) \(\binom{i + \ell - \lfloor \tfrac{i+1}{2} \rfloor - 1}{\ell - \lfloor \tfrac{i+1}{2} \rfloor - 1}\)
Table 2: Weak Composition Patterns with Parity-Based Indexing (Even Case)

Returning to the toy example \(\mathcal{C}_{7,2}\), we observe that \(c_{k_{+}} = 2\), so \(\beta = 2\) and we are in the even regime. Referring to the even-case table, we compare the stage \(m=i=2\) with the explicit enumeration in 31 . Substituting into the weak-composition formula gives \[\binom{2 + 5 - 2}{5 - 2} = \binom{5}{3} = 10,\] which matches the number of tuples explicitly listed.
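The two slot schedules can be sketched as a single helper; `stage_count` is our name for it, and the toy values reproduce the text's counts (\(1\), \(4\), \(10\) through stage \(m=2\), totalling \(15\)):

```python
from math import ceil, comb

def stage_count(i, l, beta):
    """#compositions at stage m = i (cf. the odd/even tables): the slot
    count shrinks as l - ceil((i+1)/2) in the odd regime (beta=1) and
    as l - floor((i+1)/2) in the even regime (beta=2)."""
    if i == 0:
        return 1                       # seed tuple only
    drop = ceil((i + 1) / 2) if beta == 1 else (i + 1) // 2
    slots = l - drop
    return comb(i + slots - 1, slots - 1)

# Toy example: D=7, k*=2 is in the even regime with l=5.
counts = [stage_count(i, 5, beta=2) for i in range(3)]
```

For \(i=2\) this is \(\binom{2+4-1}{4-1}=\binom{5}{3}=10\), matching the explicit list in 31 .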

To account for all positive paths up to this point, we note that \(\mathscr{C}^{w}_{D, k^{*}=D-2}\) should include: mass \(0\) over \(\ell=5\), mass \(1\) over \(\ell=4\), and mass \(2\) over \(\ell=4\). This yields \[\label{wrong_count} \begin{align} \mathscr{C}^{w}_{D, k^{*}=D - 2} &= \mathscr{C}^{w}_{D,\,k^{*}=D-1} + \binom{2 + (\ell - 1) - 1}{(\ell - 1) - 1} \\ &= \ell + \binom{\ell}{\ell - 2}. \end{align}\tag{33}\] For \(\ell=5\), this gives \(15\) total combinations: the \(5\) tuples enumerated in 25 together with the \(10\) enumerated in 31 . However, careful inspection of the graph in [fig:tree-four-panel] reveals the first appearance of a negative path. Since 33 accounts only for positive paths, it under-counts by precisely one and is thus incomplete. This observation leads naturally to the next case, \(k^{*} = D-2, \; D-2 \geq 0\), where negative contributions must begin to be incorporated.

At this point it is useful to note that all positive paths can be systematically counted within the even–odd regime using 2. In particular, expression 33 , though incomplete, highlights the general form of the positive-path count over \(\mathcal{C}^{+}_{D,k^{*}}\), which can be written using 2. We defer the completion of this count with negative paths to the subsequent cases, beginning with \(k^{*} = D-2, \; D-2 \geq 0\), and continuing with the general case \(k^{*} \in \{D, D - 1, ..., 0,..., -D + 1, -D\}\), which will generalize and ultimately complete the counting procedure for any \(D, k^{*}\). In order to write 2 as a compact, closed-form, non-piecewise combinatorial expression, we utilize the well-known identity relating the ceiling and floor functions: \[\label{delta} \Big\lceil \frac{i + 1}{2}\Big\rceil = \Big\lfloor\frac{i + 1}{2}\Big\rfloor + \delta, \quad \delta = \begin{cases} 1, \quad i \, \, \text{is even}\\ 0, \quad i \, \, \text{is odd} \end{cases}\tag{34}\] We also recall Pascal’s Identity: \[\label{pascal} \binom{n}{k} = \binom{n - 1}{k} + \binom{n - 1}{k - 1}\tag{35}\] Examining the general case in 2, define \[a \;\overset{\text{def}}{=}\; \Big\lfloor\frac{i+1}{2}\Big\rfloor, \qquad c \;\overset{\text{def}}{=}\; \Big\lceil\frac{i+1}{2}\Big\rceil = a+\delta(i).\] The “even-case” general term then reads \[\label{eq:even-general-term} \binom{i+\ell-a-1}{\;\ell-a-1\;} \;=\; \binom{i+\ell-\lfloor\frac{i+1}{2}\rfloor-1}{\;\ell-\lfloor\frac{i+1}{2}\rfloor-1\;}.\tag{36}\] Applying 35 to 36 yields \[\label{eq:pascal-split-a} \binom{i+\ell-a-1}{\;\ell-a-1\;} = \binom{i+\ell-a-2}{\;\ell-a-1\;} + \binom{i+\ell-a-2}{\;\ell-a-2\;}.\tag{37}\] Rewriting in terms of \(c=a+\delta(i)\) gives the uniform identity \[\label{eq:uniform-delta} \binom{i+\ell-a-1}{\;\ell-a-1\;} = \binom{i+\ell-c-1}{\;\ell-c-1\;} \;+\; \delta(i)\,\binom{i+\ell-c-1}{\;\ell-c\;}.\tag{38}\] Indeed, if \(i\) is odd then \(\delta(i)=0\) and the second term vanishes; if \(i\) is even then \(\delta(i)=1\) and 38 is precisely the Pascal split with indices shifted by one. Next, we use the global parity switch determined by \((D,k^{*})\) that we introduced in 32 so that \(\beta - 1 \in \{0,1\}\). The toggle \(\beta-1\) selects the regime: in the odd regime (\(\beta=1\)) the summand is the ceiling-indexed term alone, while in the even regime (\(\beta=2\)) the extra Pascal term of 38 is present exactly when \(i\) is even. Multiplying the local \(\delta(i)\) by the global toggle \(\beta-1\) therefore yields the single closed-form summand \[\label{eq:single-summand} \binom{i+\ell-c-1}{\;\ell-c-1\;} \;+\; (\beta-1)\,\delta(i)\,\binom{i+\ell-c-1}{\;\ell-c\;}, \qquad c=\Big\lceil\frac{i+1}{2}\Big\rceil.\tag{39}\] Therefore, the positive-path count, which as we saw from 33 can be taken as the sum over the mass \(i = 0\) up to \(D - k^{*}\), can, for any admissible starting tuple \(\hat{c}\) determined by the choice of \(D, k^{*}\), be written without case splits as \[\label{eq:final-closed-form} \mathscr{C}_{D,k^{*}}^{w,+}(\hat{c}) \;=\; \sum_{i=0}^{D-k^{*}} \left\{ \binom{i+\ell-\big\lceil\frac{i+1}{2}\big\rceil-1}{\;\ell-\big\lceil\frac{i+1}{2}\big\rceil-1\;} + (\beta-1)\,\delta(i)\, \binom{i+\ell-\big\lceil\frac{i+1}{2}\big\rceil-1}{\;\ell-\big\lceil\frac{i+1}{2}\big\rceil\;} \right\}\tag{40}\] We write \(\mathscr{C}_{D,k^{*}}^{w,+}\) to reconcile with 33 : here the superscript \(^{w,+}\) indicates the count of weak-composition terms arising from positive paths only, i.e., paths that never visit the negative side, thereby excluding the negative-path contribution. This distinguishes \(\mathscr{C}_{D,k^{*}}^{w,+}\) from the full weak-composition count \(\mathscr{C}_{D,k^{*}}^{w}\), which includes both positive and negative paths (the latter will be incorporated in the subsequent cases).
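The stagewise counts of the tables (equivalently, the uniform summand with the extra Pascal term active only for even \(i\) in the even regime) can be cross-checked against a brute-force enumeration of distinct cardinality tuples of nonnegative paths. This is a sketch under the assumed boundary form \(\ell=k_++1\) with \(k_+=k^*+\lfloor(D-k^*)/2\rfloor\); the function names are ours:

```python
from itertools import product
from math import ceil, comb

def binom(n, k):
    """comb with the convention binom(n, k) = 0 for k < 0 or k > n."""
    return comb(n, k) if 0 <= k <= n else 0

def positive_class_count(D, k_star):
    """Stagewise evaluation of C^{w,+}_{D,k*}: the ceiling-indexed term,
    plus the extra Pascal term only in the even regime and only for even i."""
    ell = k_star + (D - k_star) // 2 + 1   # assumed l = k_+ + 1
    beta = 1 if (D - k_star) % 2 == 0 else 2
    total = 0
    for i in range(D - k_star + 1):
        c = ceil((i + 1) / 2)
        term = binom(i + ell - c - 1, ell - c - 1)
        if beta == 2 and i % 2 == 0:
            term += binom(i + ell - c - 1, ell - c)
        total += term
    return total

def brute_force_classes(D, k_star):
    """Distinct cardinality tuples of nonnegative walks 0 -> k_star."""
    seen = set()
    for steps in product((-1, 0, 1), repeat=D):
        path = [0]
        for s in steps:
            path.append(path[-1] + s)
        if path[-1] == k_star and min(path) >= 0:
            seen.add(tuple(path.count(k) for k in range(max(path) + 1)))
    return len(seen)
```

For the toy example \(D=7,\,k^{*}=2\) both routes agree, with per-stage counts \(1,4,10,10,15,6\).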

For the stopping condition of the counting process - and by extension the algorithm - we must make explicit the feasibility constraint on the cardinality tuple. Along the positive side, every level up to the terminal index must remain reachable; equivalently, for each level \(k \le k^{*}\) we must reserve the baseline occupancy that maintains reachability (otherwise some node becomes unreachable). In addition, the terminal level \(k^{*}\) contributes its fixed terminal mass.

It is important to note that the terminal entry \(c_{k^{*}}\) may tick down from its default value of \(2\) to \(1\), but never below \(1\), and the entries \(c_{0 \leq k < k^{*}}\) cannot give up any of their mass; they remain at \(1\) and do not contribute to the sum bounds. Allowing \(c_{k^{*}} < 1\) would immediately force the path into an unreachable state and invalidate the composition. Thus, when constructing the admissible range for the transferable surplus mass \(i\), we must account for this minimal cutoff. With the baseline and terminal reservations enforced, the maximum transferable mass is \(i_{\max} = D - k^{*}\), since algebraically \(i = D - k^{*} - 1 + 1 = D - k^{*}\). Therefore the admissible summation range is \[\label{eq:i-range} 0 \;\le\; i \;\le\; D - k^{*},\tag{41}\] and this upper bound furnishes the stopping criterion used by the counting algorithm for \(\mathscr{C}_{D,k^{*}}^{w,+}\).

We can now finally proceed to our last two cases and, in doing so, augment our combinatorics and algorithm with the proper introduction of negative paths. In order to do this, we need to return to an observation we made in our initial construction of the tree: symmetry.

5.1.1.1 Symmetry of the Recombining Tree

One of the core advantages of the recombining tree is its symmetry over the path \(\mathsf{p}= ((0,0),(0,0),...,(0,0))\) as referenced in 2. In our construction of the problem, we continuously referenced positive paths, and then mixed paths that contained both positive and negative segments. Moreover, in [fig:tree-four-panel], we structured these cases over the positive paths. This symmetry property allows us to state the following theorem:

Theorem 1 (Reflection symmetry of counts). Let \(\mathscr{T}_{D}=(V_D,E_D)\) be the recombining rooted trinomial tree with \[\begin{gather} V_D=\{(k,d)\in\mathbb{Z}\times\mathbb{Z}_+:\;0\le d\le D,\;|k|\le d\}\\ E_D=\bigl\{\,((k,d-1),(k+s,d)):\;s\in\{-1,0,1\}\,\bigr\} \end{gather}\] Let \(\mathscr{P}^+\) (resp. \(\mathscr{P}^-\)) be the set of root-to-depth-\(D\) paths that stay on the nonnegative (resp. non-positive) side, i.e. \(k_d\ge 0\) (resp. \(k_d\le 0\)) for all \(d\). For any terminal constraint that is symmetric under \(k\mapsto -k\) (e.g., “end at \(\pm k^{\dagger}\)” or “end in a set \(S\) with \(S=-S\)”), the map \[R:\;(k,d)\mapsto(-k,d)\] induces a bijection \(R:\mathscr{P}^{+}\!\to\mathscr{P}^{-}\), and hence a bijection on cardinality tuples \[\hat{C}:\;\pi\mapsto (c_k(\pi))_{k}\quad\text{satisfies}\quad \hat{C}\bigl(R\pi\bigr)=\rho\bigl(\hat{C}(\pi)\bigr), \text{ where } \rho\bigl((c_k)_k\bigr)=(c_{-k})_k.\] Consequently, the positive- and negative-side combinatorial counts coincide: \[\#\hat{C}(\mathscr{P}^{+}) \;=\; \#\hat{C}(\mathscr{P}^{-}),\] and any enumeration algorithm (or closed-form count) developed on \(\mathscr{P}^{+}\) applies *mutatis mutandis* to \(\mathscr{P}^{-}\) via \(\rho\).

Proof. Define \(R(k,d)=(-k,d)\). Then \(R\) is a graph automorphism of \(\mathscr{T}_{D}\): if \(((k,d-1),(k+s,d))\in E_D\) with \(s\in\{-1,0,1\}\), applying \(R\) to both endpoints yields \[\bigl((-k,d-1),\,(-k-s,d)\bigr),\] which is again an edge in \(E_D\) since \(-s\in\{-1,0,1\}\). The root \((0,0)\) is fixed by \(R\), and depth is preserved.

If \(\pi\in\mathscr{P}^{+}\) satisfies a symmetric terminal constraint (e.g., \(k_D\in S\) with \(S=-S\) or \(k_D=\pm k^{\dagger}\)), then \(R\pi\in\mathscr{P}^{-}\) satisfies the same constraint because \(R\) flips the sign of \(k\) and leaves \(d\) unchanged. In particular, \(R\) is a bijection \(\mathscr{P}^{+}\to\mathscr{P}^{-}\) with inverse \(R\) itself.

For the cardinality tuple \(\hat{C}(\pi)=(c_k(\pi))_{k}\), where \(c_k(\pi)=|\{d\in\{0,\dots,D\}:\;k_d=k\}|\), one has \[c_k(R\pi)=|\{d:\;-k_d=k\}|=|\{d:\;k_d=-k\}|=c_{-k}(\pi),\] so \(\hat{C}(R\pi)=\rho(\hat{C}(\pi))\) with \(\rho((c_k)_k)=(c_{-k})_k\). Thus \(\rho\) is a bijection between the sets of cardinality tuples realized by \(\mathscr{P}^{+}\) and \(\mathscr{P}^{-}\), which proves the equality of counts.

Finally, any enumeration algorithm on \(\mathscr{P}^{+}\) that operates by (i) local transitions \(k\mapsto k+s\) with \(s\in\{-1,0,1\}\) and (ii) book-keeping via the tuple \((c_k)_k\) is equivariant under \(\rho\); hence reflecting the output (or equivalently mirroring indices \(k\mapsto -k\) in the algorithm) yields a correct enumeration on \(\mathscr{P}^{-}\). ◻
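As a small sanity check of 1, one can enumerate all \(3^{D}\) step sequences for a modest depth and confirm that the nonnegative-side and non-positive-side path counts coincide; the depth \(D=5\) below is an arbitrary small choice:

```python
from itertools import product

def trajectories(D):
    """Yield every root-to-depth-D trajectory of the trinomial tree."""
    for steps in product((-1, 0, 1), repeat=D):
        traj, k = [0], 0
        for s in steps:
            k += s
            traj.append(k)
        yield traj

D = 5
nonneg = sum(1 for t in trajectories(D) if all(k >= 0 for k in t))
nonpos = sum(1 for t in trajectories(D) if all(k <= 0 for k in t))
# Reflection k -> -k maps one family bijectively onto the other.
assert nonneg == nonpos
```

The brute-force enumeration is of course exponential; it serves only to validate the reflection argument on small instances.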

Case 3: \(k^{*} = D - 2, \; D-2 \geq 0\)↩︎

As in the previous case, we can use the graphical structure to see what amounts to a trivial case where we add our first negative path. Examining [fig:tree-four-panel] we can see that there is only 1 mixed positive-and-negative path. Thus, we can finally correct 33 by counting the negative path formally: \[\begin{align} \mathscr{C}^{w}_{D, k^{*}=D - 2} & = \mathscr{C}_{D,k^{*} = D-2}^{w, -} + \mathscr{C}_{D,k^{*} = D-2}^{w, +}\\ & = 1 + \sum_{i=0}^{2} \left\{ \binom{i+\ell-\big\lceil\frac{i+1}{2}\big\rceil-1}{\;\ell-\big\lceil\frac{i+1}{2}\big\rceil-1\;} + (\beta-1)\, \binom{i+\ell-\big\lceil\frac{i+1}{2}\big\rceil-1}{\;\ell-\big\lceil\frac{i+1}{2}\big\rceil\;} \right\} \end{align}\] Plugging in the correct \(\beta, \ell\) for our toy example \(\mathcal{C}_{7,2}\) (where we can finally dispense with the superscript \(^{+}\) notation) verifies that this count is 16, exactly what we would expect.

Unfortunately, while it is possible, it now becomes difficult to see in the graph all of the negative paths and their combinatorial structure. However, using 1, we know that our counting process, so long as we follow the constraints, will remain valid. This observation is key and leads to several natural questions. 1) How can we maintain the fidelity of the algorithm while still optimizing for computational speed? 2) What constraints does the tree impose that we need to account for to maintain the equivalence relation and still be able to utilize 40 ? 3) How can we exploit our original and canonical construction \(\hat{C}(\mathsf{p})\)? Fortunately, in our next and final case, we will not only answer all these questions, but also construct our full algorithm and complete combinatorial expression.

Case N: \(k^{*} \in \{-D,-D+1,\dots,D\}\)↩︎

Take any mixed-integer path \(\mathsf{p}\in \mathscr{P}_{D,k^{*}}\) that satisfies the standing constraints of our construction: successive nodes differ by \(\pm 1\) or \(0\), the path starts at \(0\), and it must terminate at the prescribed endpoint \(k^{*}\). For paths that extend into the negative region, however, an additional restriction arises, one that is also exploited in 12 when we wrote our recursion-free enumeration in 4. Specifically, a valid traversal requires a return to the nonnegative positions. Concretely, whenever our mixed \(\mathsf{p}\) takes a \(-1\) step (or further negative steps obeying the same constraints) while \(k^{*}\geq0\), at least one additional \(0\)-step becomes necessary to ensure a return to the positive side:

Figure 6: Mixed-integer recombining tree highlighting a sample traversal for k^{*}=D-2 (green).

Here, we highlight an example of the negative path in magenta and the use of the additional required \(0\) when a negative value is hit. Further inspection shows that whenever a negative node is traversed in a valid path \(\mathsf{p}\), at least two visits to \(0\) are required. By symmetry, the same holds whenever \(k^{*}<0\). In the special case \(k^{*}=0\), three distinct visits to \(0\) are necessary: the initial node, the return, and the terminal position.

Accordingly, the admissible cardinality tuples must satisfy \[\label{condition} c_{0} \;\ge\; \begin{cases} 2, & k^{*} \neq 0,\\ 3, & k^{*} = 0. \end{cases}\tag{42}\]

In our definition 20 , we prescribed minimum counts for certain \(c_{k}\) values that must be preserved throughout the mass–redistribution process. Specifically, entries initialized at \(1\) remain fixed, while entries initialized at \(2\) may be reduced provided they are walked down appropriately through the redistribution procedure described in the previous cases. The new condition augments this rule: whenever a path includes a negative node, the process terminates precisely when mass must first be removed from \(k^{*}\), at which point \(c_{k^{*}}\) is reduced to \(1\). This stopping condition is exactly the one encoded in the upper bound of the summation in 41 and keeps us from ever decreasing \(c_{0}\) below 2.

In the same vein, the negative paths obey analogous constraints to the positive ones. As the tree is traversed into negative indices, the cardinality counts must satisfy nested lower bounds. Assume \(k_- < 0\) and let \(m_* \overset{\text{def}}{=}-\,k_- \in \mathbb{Z}_{> 0}\). To admit excursions down to \(k_-,\) the cardinalities must satisfy \[\label{E:neg95to95kminus95cases} c_{k_-} \;\ge\; 1, \qquad c_{-j} \;\ge\; 2 \quad \text{for } 1 \le j \le m_* - 1,\tag{43}\] together with \(c_0\) as in 42 .

Equivalently, for each intermediate depth \(m \in \{1,\dots,m_*\}\), inclusion of the node \(-m\) requires \[\label{E:neg95to95kminus95general} c_{-j} \;\ge\; \begin{cases} 2, & 1 \le j < m,\\ 1, & j = m, \end{cases} \qquad (1 \le j \le m).\tag{44}\] With these new constraints in place, we can use our lexicographical ordering to determine how, while obeying them, to enumerate all such paths. Previously, when generating all positive paths, our process naturally maintained the lexicographical ordering and thus verified that our enumeration algorithm was not missing any paths and our combinatorics were not missing any counts. We can impose this same logic on the negative paths by extending our lexicographical ordering for the cardinality tuples. We state it as follows:

Let \(\hat{c}=(c_k)_{k=k_-}^{k_+}\) and \(\hat{c}'=(c_k')_{k=k_-}^{k_+}\) be elements of \(\mathcal{C}_{D,k^*}\subset \mathbb{N}^{[k_-,k_+]}\). We define the index blocks of \(c_{k}\) \[\label{index95blocks} I_- \overset{\text{def}}{=}(-1,-2,\dots,k_-),\qquad I_+ \overset{\text{def}}{=}(k_+,k_+-1,\dots,0),\tag{45}\] and define the comparison key: \[\Phi(\hat{c})\;\overset{\text{def}}{=}\;\bigl(\,-c_{-1},-c_{-2},\dots,-c_{k_-}\;;\;c_{k_+},c_{k_+-1},\dots,c_0\,\bigr)\in\mathbb{Z}^{|I_-|+|I_+|}.\] Equip \(\mathbb{Z}^{|I_-|+|I_+|}\) with the standard (left-to-right) lexicographic order. We then define the negative-first, right-to-left lex order on \(\mathcal{C}_{D,k^*}\) by \[\label{complete95lex95ord} \hat{c} \;\prec_{\text{lex}\,\pm}\; \hat{c}' \quad\Longleftrightarrow\quad \Phi(\hat{c})\;\text{is lexicographically smaller than}\;\Phi(\hat{c}').\tag{46}\]

Unpacking the rule:

  1. (Compare the negative part first; “smaller is larger” deeper down.) Let \(j^*=\min\{\,j\ge 1:\;c_{-j}\neq c'_{-j}\,\}\), if it exists. Then \[\hat{c} \;\succ_{\text{lex}\,\pm}\; \hat{c}' \quad\Longleftrightarrow\quad c_{-j^*} \;<\; c'_{-j^*}.\] For instance, \((c_{-1},c_{-2})=(2,1)\) compares as \((2,1)\succ_{\text{lex}\,\pm}(2,2)\), mirroring \(-21>-22\).

  2. (Tie on negatives \(\Rightarrow\) compare positives right-to-left in the usual sense.) If \(c_{-j}=c'_{-j}\) for all \(j\ge 1\) with \(-j\ge k_-\), set \[i^*=\max\{\,i\in[0,k_+]:\;c_i\neq c'_i\,\}.\] Then we have \[\hat{c} \;\succ_{\text{lex}\,\pm}\; \hat{c}' \quad\Longleftrightarrow\quad c_{i^*} \;>\; c'_{i^*}.\]

This order \(\prec_{\text{lex}\,\pm}\) is a total order on \(\mathcal{C}_{D,k^*}\); it extends the right-to-left lex order on \(\mathcal{C}^{+}_{D,k^*}\) (where all negative coordinates are zero), and via the cardinality map it induces a representative-independent total order on the equivalence classes in \(\mathscr{P}_{k^*}\) (cf. the lexicographic order introduced for the positive case). Moreover, it coincides with a walk down the mixed paths given by the graph of \(k^{*} \in \{-D,-D+1,\dots,D\}\) where the number of negative paths that reach \(k^{*}\) is nontrivial. By reflection symmetry (1), the strictly negative side is obtained by the involution \(k\mapsto -k\), consistent with the sign flip built into \(\Phi\).
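A minimal sketch of the comparison key \(\Phi\) and the induced order, assuming each tuple is stored as a pair of sequences \((c_{-1},\dots,c_{k_-})\) and \((c_0,\dots,c_{k_+})\) (a representation choice made here purely for illustration):

```python
def phi(c_neg, c_pos):
    """Comparison key Phi: the negated negative part (shallow to deep),
    followed by the positive part read right-to-left."""
    return tuple(-x for x in c_neg) + tuple(reversed(c_pos))

def lex_pm_less(a, b):
    """True iff tuple a precedes tuple b in the negative-first,
    right-to-left lex order."""
    return phi(*a) < phi(*b)

# Worked example from the text: (c_{-1}, c_{-2}) = (2, 1) is *greater*
# than (2, 2), mirroring -21 > -22.
assert lex_pm_less(((2, 2), ()), ((2, 1), ()))
```

Because `phi` maps each tuple to a plain integer vector, Python's built-in tuple comparison realizes the lexicographic order directly, and sorting by `phi` yields the total order on \(\mathcal{C}_{D,k^*}\).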

We now employ the extended lexicographical ordering together with the additional cardinality constraints and the combinatorial expression previously developed. Taken together, these three ingredients enable us to formulate the refined algorithm in its complete form and to derive the associated closed-form combinatorial expression. The key step is the construction of an appropriate shift function, which expands the enumeration while respecting the constraints - and of course is motivated by the ordering.

5.1.1.2 The Shift Function

Recall the positive maximal seed \(\hat{s}^{(0)}\) from 20 . Our extended lexicographic order 46 tells us to prioritize comparisons by the negative coordinates and only then, in case of ties, by the positive coordinates (right-to-left). To extend the enumeration from \(\mathcal{C}^{+}_{D,k^*}\) to the full \(\mathcal{C}_{D,k^*}\) while respecting 42 – 44 , we proceed in stages that progressively admit negative levels.

5.1.1.3 Stage and Step Indices.

We distinguish two levels of indexing in the redistribution process:

  • the outer stage index \(M\), which counts applications of the shift function (each stage activates one additional negative level), and

  • the inner step index \(m\), which counts the mass-redistribution steps performed within a stage.

Within each stage \(M\), the admissible tuples form a block that we denote by \[\mathcal{B}_M \;\overset{\text{def}}{=}\; \bigl\{\, \hat{s}^{(M,m)} : 0 \leq m \leq D-k^{*}_{M} \,\bigr\},\] where \(k^{*}_{M}\) is the terminal position \(k^{*}\) after \(M\) stages of mass redistribution and \(M\) is the total number of shifts (i.e., applications of the proposed shift function). Note also the added subscript on \(k^{*}\). A detailed explanation will follow shortly; for the moment it suffices to observe that \(k^{*}\) shifts after each \(M\), and this labeling records that dependence. By construction, \(\hat{s}^{(M,0)} \equiv \hat{s}^{(M)}\) is the canonical seed for stage \(M\), and successive elements are generated via \[\hat{s}^{(M,m)} \;\rightsquigarrow\; \hat{s}^{(M,m+1)}\qquad (0 \leq m < D-k_{M}^{*}),\] as determined by our counting process. Thus each block \(\mathcal{B}_M\) is a contiguous chain of tuples in lex order, beginning from the stage seed and ending when no further admissible redistribution is possible. In this way the full enumeration proceeds block-wise across stages \(M\), while within each block it walks sequentially through the tuples that are generated over \(m\) many mass redistributions.

At stage \(M=1\) we “turn on” visits to \(-1\) by minimally migrating mass from the rightmost admissible positive entries (i.e., from \(k_+\) leftward down to \(0\)) into the new coordinates that must be met first under the order, namely \(c_{-1}\) (and \(c_0\) if needed to satisfy 42 ). This extension is informed by the graphical structure, which now incorporates the negative side. It produces a new seed tuple \(\hat{s}^{(1)}\) that is lexicographically maximal among all tuples that reach \(-1\) (no tuple greater than \(\hat{s}^{(1)}\) in \(\prec_{\text{lex}\,\pm}\) remains to be enumerated within this block, aside from those already handled at stage \(M=0\), the all-positive case). Below we provide a sample of this first mass shift for a general initial all-positive seed tuple \(\hat{s} \in \mathcal{C}^{+}_{D,k^{*}}\). Note that these updates obey 42 – 44 : at each step we remove two units of mass from the rightmost slot (or, if that slot contains only a single unit, from the next available slot as well) and redistribute this mass into the \(c_{0}\) and \(c_{-1}\) positions. This procedure initializes the algorithm at the extremal path where \(-1\) is the only negative value, though as the mass process continues, additional visits to \(-1\) naturally appear. At the same time, the index \(k^{*}\) shifts one position to the right within the active block, while the slots with \(c_{k}=0\) are shifted one step to the left, a behavior we will make precise shortly. 
\[\begin{align} \hat{s}^{(M = 0, \; m = 0)} & = (\underbrace{0\;,\; 0\;,\; \dots \;,\; 0\;,\; 0}_{\scriptscriptstyle c_{k}, k \in [k_{-}, -1]} \;, \underbrace{\underset{c_0}{1}\;,\;1\;,\;\dots \;,\;1}_{\scriptscriptstyle c_{k}, k \in [0, k^{*}-1]}\;,\;\underbrace{2\;,\;\dots \;,\quad\; 2\quad\;,\;(2 \text{ or } 1)}_{\scriptscriptstyle c_{k}, k \in [k^{*}, k_{+}]}) \\ \hat{s}^{(M = 1,\; m=0)} & = (\underbrace{0\;,\;0\;,\;\dots \;,\;0}_{\scriptscriptstyle c_{k}, k \in [k_{-}, -2]}\;,\;\underbrace{1\;,\;\underset{c_{0}}{2}\;,\;1\;,\;\dots \;,\;1}_{\scriptscriptstyle c_{k}, k \in [-1, k^{*}-1]}\;,\;\underbrace{2\;,\;\dots \;,(2 \text{ or } 1)}_{\scriptscriptstyle c_{k},k \in [k^{*}, k_{+} - 1]},\quad\;\underset{k_{+}}{0}\quad\;) \end{align}\]

We then enumerate all admissible tuples at stage \(M=1\) by the same mass redistribution walk as in the positive case, only now constrained by the newly included negative coordinate (and equivalently the new set of indices it is defined over). After exhausting this block, we repeat the same idea for \(M=2\): minimally migrate mass (again from the rightmost admissible positive entries) to satisfy the next required negative coordinate \(c_{-2}\) (together with the previously activated \(c_{-1}\) and the \(c_0\) constraint), thereby producing a new seed \(\hat{s}^{(2)}\), which is maximal for the \(-2\)-admitting block under \(\prec_{\text{lex}\,\pm}\). Continuing in this manner for \(M=3,4,\dots,-k_-\), we gradually include \(-3,-4,\dots,k_-\), each time starting from a canonical seed and sweeping the corresponding block in lex order.

In summary, the reader should picture a staircase of seed tuples: \[\begin{align} \hat{s}^{(M=0,\; m=0)} &\leadsto \cdots \leadsto \hat{s}^{(M=0,\; m=D-k^{*}_{0})} \leadsto \hat{s}^{(M=1,\; m=0)} \leadsto \cdots \leadsto \hat{s}^{(M=1,\; m=D-k^{*}_{1})} \notag \\ &\leadsto \hat{s}^{(M=2,\; m=0)} \leadsto \cdots \leadsto \hat{s}^{(M=2,\; m=D-k^{*}_{2})} \leadsto \cdots \leadsto \hat{s}^{(M=-k_-,\; m=D-k^{*}_{-k_{-}})}\, \end{align}\] where each transition is effected by a minimal, right-to-left shift of mass that (i) enforces the next negative-side constraints from 42 – 44 , and (ii) ensures the new seed is the largest element (in \(\prec_{\text{lex}\,\pm}\)) of its stage. Now we can formalize our shift function and write down our final closed-form combinatorial expression.

5.1.1.4 Formalizing the Shift Function

At the outer stage \(M \in \{0,1,\dots,-k_{-}\}\) we act only on the active index set 45 , realized as a left–translate of \([0,k_{+}]\) by \(M\): \[\label{eq:IM-def} I_M \;\overset{\text{def}}{=}\; \bigl([0,k_{+}] - M\bigr)\cap\mathbb{Z} \;=\; \{\,k-M : k\in[0,k_{+}]\,\}\cap\mathbb{Z} \;=\; [-M,\,k_{+}-M]\cap\mathbb{Z}.\tag{47}\] Equivalently, the blocks satisfy the recursion \[\label{eq:IM-recursion} I_{M+1} \;=\; I_M - 1 \;=\; \{\,k-1 : k\in I_M\,\}.\tag{48}\] Thus \(I_0 = [0,k_{+}]\) and \(I_1 = [-1,\,k_{+}-1]\), etc., i.e., the active set is pushed one unit to the left at each stage.

Position of \(k^{*}\). In global indices \(k^{*}\) is fixed, but its position within the active block \(I_M\) shifts right by one each stage. Measuring position from the left endpoint of \(I_M\), \[\label{eq:kstar-position} \mathrm{pos}_{I_M}(k^{*}) \;\overset{\text{def}}{=}\; k^{*} - \min I_M \;=\; k^{*} - (-M) \;=\; k^{*}+M.\tag{49}\] Equivalently, in the locally re–centered (translated) coordinates \[\tau_M:\mathbb{Z}\to\mathbb{Z},\qquad \tau_M(k)\overset{\text{def}}{=}k - \min I_M = k+M,\] we have \(\tau_M(I_M)=[0,k_{+}]\) and \(\tau_M(k^{*}) = k^{*}+M\), making explicit that \(k^{*}\) advances one slot to the right at each outer stage \(M\).

Given 47 and 48 and the conditions stated in 42 – 44 , we now define the full update that each application of the shift function performs on the previous \(\hat{s}\) in the chain starting at \(\hat{s}^{(0)}\). We start at stage \(M=0\) with our initial seed tuple \(\hat{s}^{(0)}\) and its associated parity tag \(\beta \in \{1,2\}\) as defined in 32 . We then define our shifting process formally by the update \(\mathcal{A}_{M,t}\) as follows:

  • Odd case \(\beta=1\):

    • Input \(\hat{s}^{(M=0,m=0)}_{\text{odd}} = (c_{0},...,c_{k_{+}} = 1)\).

    • Subtract \(1\) from \(c_{k_{+}}\) (so \(c_{k_{+}}\gets c_{k_{+}}-1\)).

    • Form \(v \overset{\text{def}}{=}(1,\,c_0,\,\dots,\,c_{k_{+}-1})\) (left–translate with a leading \(1\)).

    • Let \(j_t \overset{\text{def}}{=}\max\{j: v_j=2\}\) (rightmost local slot holding \(2\)).

    • Update \[v_{j_t} \leftarrow 1, \qquad v_1 \leftarrow v_1 + 1.\]

    • Set \(\hat{s}^{(M=1,m=0)}_{\text{odd}} \overset{\text{def}}{=}v\).

  • Even case \(\beta=2\):

    • Input \(\hat{s}^{(M=0,m=0)}_{\text{even}} = (c_{0},...,c_{k_{+}} = 2)\).

    • Subtract \(2\) from \(c_{k_{+}}\) (so \(c_{k_{+}}\gets c_{k_{+}}-2\)).

    • Form \(v \overset{\text{def}}{=}(1,\,c_0,\,\dots,\,c_{k_{+}-1})\).

    • Update \[v_1 \leftarrow v_1 + 1.\]

    • Set \(\hat{s}^{(M=1,m=0)}_{\text{even}} \overset{\text{def}}{=}v\).
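The two parity cases above can be sketched as follows, assuming the tuple is stored as a Python list \([c_0,\dots,c_{k_+}]\) in local (active-window) coordinates; the function names `shift_odd` and `shift_even` are ours:

```python
def shift_odd(c):
    """One stage update, odd parity (beta = 1): rightmost slot holds 1."""
    c = list(c)
    c[-1] -= 1                     # remove one unit from c_{k+}
    v = [1] + c[:-1]               # left-translate; the leading 1 is the new c_{-1}
    j = max(i for i, x in enumerate(v) if x == 2)  # rightmost local slot holding 2
    v[j] = 1                       # walk that unit down ...
    v[1] += 1                      # ... and deposit it into c_0
    return v

def shift_even(c):
    """One stage update, even parity (beta = 2): rightmost slot holds 2."""
    c = list(c)
    c[-1] -= 2                     # remove two units from c_{k+}
    v = [1] + c[:-1]               # left-translate with a leading 1
    v[1] += 1                      # deposit one unit into c_0
    return v

# Total mass (the path length) is conserved by either update.
assert sum(shift_even([1, 1, 2, 2])) == sum([1, 1, 2, 2])
```

Both updates move one unit into the newly activated \(c_{-1}\) slot and one into \(c_{0}\), matching the sample seed transition displayed below; mass conservation is the invariant worth asserting in any real implementation.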

We then translate the active window to \(I_{M+1}=I_M-1\) and set the next seed. This process (i) reseeds the counting routine each time with a new \(\hat{s}^{(i)}\), \(i\in[0,-k_-]\), and (ii) produces both (a) a geometric terminal index showing where \(k^{*}\) sits in global coordinates, and (b) an effective index reflecting the mass reservoir used in the next counting pass. We make this explicit as follows.

5.1.1.5 Dynamic indices

We define the geometric terminal index \[\label{eq:kgeo} k^{*}_{\mathrm{geo}}(M)\;\overset{\text{def}}{=}\;k^{*}+M,\tag{50}\] which governs the placement of \(k^{*}\) inside \(I_{M}\), and the effective terminal index \[\label{eq:keff} \tilde{k}(M)\;\overset{\text{def}}{=}\;k^{*}+2M,\tag{51}\] which governs the mass horizon available at stage \(M\). After applying the stage update we set \[\hat{s}^{(M+1)}=\mathcal{S}_M(\hat{s}^{(M)}),\] and the next counting pass is performed with the reseeded terminal index values \(k^{*}_{\mathrm{geo}}(M+1)\) and \(\tilde{k}(M+1)\).

5.1.1.6 Reseeded counting at each stage

After each shift we rerun the positive–side counting with the reseeded terminal index. Using the same closed form as in 40 but with \(k^{*}\mapsto k^{*}_{\mathrm{geo}}(M)\) for geometry and \(\tilde{k}(M)\) for the horizon, the stage–\(M\) contribution is \[\label{eq:stage-contrib-clean} \mathscr{C}^{w}_{D,k^{*}_{\mathrm{geo}}(M)}\!\bigl(\hat{s}^{(M)}\bigr) = \sum_{i=0}^{\,D-\tilde{k}(M)} \left\{ \binom{i+\ell-\Big\lceil\frac{i+1}{2}\Big\rceil-1}{\;\ell-\Big\lceil\frac{i+1}{2}\Big\rceil-1\;} + (\beta-1)\, \binom{i+\ell-\Big\lceil\frac{i+1}{2}\Big\rceil-1}{\;\ell-\Big\lceil\frac{i+1}{2}\Big\rceil\;} \right\}\tag{52}\] Here \((\beta-1)\) is unchanged across stages by construction of our shifting process, and the upper limit \(D-\tilde{k}(M)=D-k^{*}-2M\) shrinks by \(2\) at each stage, reflecting the natural depletion of the mass reservoir by the shift update.

Within each stage we have the deterministic trajectory \[\hat{s}^{(M,0)}\equiv \hat{s}^{(M)} \;\rightsquigarrow\; \hat{s}^{(M,1)} \;\rightsquigarrow\;\cdots\;\rightsquigarrow\; \hat{s}^{(M,m)}\in\mathbb{N}^{I_M}, \qquad m=0,1,\dots, m_{\max}(M),\] generated by the parity–dependent map \[\hat{s}^{(M,m+1)} \;=\; \mathcal{S}_{\beta}^{(M)}\!\bigl(\hat{s}^{(M,m)}\bigr), \qquad m=0,1,\dots, m_{\max}(M)-1,\] where the counting index \(m\) coincides with the summation index \(i\) in 52 . We define the stage horizon and reseeding map by \[m_{\max}(M) \;\overset{\text{def}}{=}\; D - \tilde{k}(M), \;\; \mathscr{S}^{(M)}(\hat{s}^{(M)}) \;\overset{\text{def}}{=}\; \hat{s}^{(M,m_{\max}(M))}, \;\; \hat{s}^{(M+1)} \;\overset{\text{def}}{=}\; \mathscr{S}^{(M)}(\hat{s}^{(M)}),\] so that \(m_{\max}(M)\) is exactly the upper limit of the summation in 52 and the terminal state of stage \(M\) becomes the seed for stage \(M+1\).

5.1.1.7 Total reseeded count (using the existing outer stopping time).

A shift is applicable at stage \(M\) iff the active window can move left without colliding with either boundary; equivalently, \[k_{+}-M \;>\; k^{*} \quad\text{and}\quad -M \;>\; k_{-}.\] Thus the outer process stops at the first \(M\) for which a further shift would not change \(k^{*}\), namely \[\label{eq:outer-stop} T \;\overset{\text{def}}{=}\; \min\{\,k_{+}-k^{*},\,-k_{-}\,\}.\tag{53}\] For \(M<T\) we have \(k^{*}_{\mathrm{geo}}(M)=k^{*}+M\) and the next reseed is well–defined; at \(M=T\) no further shift is applied and \(k^{*}\) ceases to change. By recursion, \(\hat{s}^{(0)}\) is the given seed and, for \(M\ge1\), \(\hat{s}^{(M)}=\mathscr{S}^{(M-1)}(\hat{s}^{(M-1)})\).

We then define the total reseeded count as \[\label{eq:dynamic-total-count-final} \mathscr{C}^{w}_{D,\mathrm{dyn}}\!\bigl(\hat{s}^{(0)}\bigr) \;=\; \sum_{M=0}^{T} \mathscr{C}^{w}_{D,k^{*}_{\mathrm{geo}}(M)}\!\bigl(\hat{s}^{(M)}\bigr), \qquad k^{*}_{\mathrm{geo}}(M)=k^{*}+M,\quad \tilde{k}(M)=k^{*}+2M.\tag{54}\]
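A self-contained sketch of 54, with \(\ell\) and \(\beta\) taken as parameters and a guarded binomial for the edge cases (both implementation assumptions of this sketch):

```python
from math import comb, ceil

def binom(n, k):
    # Guarded binomial: zero outside the usual range.
    return comb(n, k) if 0 <= k <= n else 0

def stage_count(D, k_tilde, ell, beta):
    """Stage contribution (eq. 52): sum up to the horizon D - k_tilde."""
    total = 0
    for i in range(D - k_tilde + 1):
        c = ceil((i + 1) / 2)
        total += binom(i + ell - c - 1, ell - c - 1)
        total += (beta - 1) * binom(i + ell - c - 1, ell - c)
    return total

def total_reseeded_count(D, k_star, k_minus, k_plus, ell, beta):
    """Eq. 54: sum stage contributions for M = 0 .. T."""
    T = min(k_plus - k_star, -k_minus)   # outer stopping time (eq. 53)
    return sum(stage_count(D, k_star + 2 * M, ell, beta)
               for M in range(T + 1))
```

When no negative levels are available (\(k_{-}=0\)), the stopping time is \(T=0\) and the total collapses to the single all-positive stage, matching the earlier cases.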

Our stopping index \(T\) is not imposed externally but arises from the natural mechanics of the shift process \(\mathcal{A}_{M}\). The active window can shift left exactly \(T\) times before either the boundary \(k_{-}\) or the right–capacity \(k_{+}\) prevents a further shift. With our effective index \(\tilde{k}(M)=k^{*}+2M\), the available mass horizon shrinks by \(2\) per completed shift, and the final “leftover” unit is automatically counted when \(m_{\max}(M)=1\) (the even case, or \(k^{*} = 0\)) or \(m_{\max}(M)=0\) (the odd case), so no additional ad hoc correction is required.

6 Computational Complexity Analysis↩︎

Before beginning the analytic portion of our computational complexity analysis, it helps to observe something intuitive. We start by referencing a visualization of a recombining tree from [17]:

Figure 7: Distribution of the Concentration of Paths

Although this visualization is of a binomial tree, it highlights a phenomenon that our counting algorithm establishes analytically and empirically: the path density at each terminal position \(k^{*}\) peaks at \(k^{*} = 0\) and smoothly decays to \(1\) at the boundary terminal nodes \(k^{*} \in \{-D, D\}\). This observation is rigorously justified through the symmetry result in 1 and is true for any recombining \(n\)-nomial tree (which of course includes the tree we worked over). Consider the distribution of terminal nodes indexed by \(k^{*}\) in the interval \[[k^{*} = -D,\,0) \;\cup\; \{0\} \;\cup\; (0,\,D] .\] From 7 (or equivalently by direct evaluation of 54 for each \(k^{*}\)), it follows that the cardinalities satisfy \[\max_{k^{*}\in[-D,D]} \, \bigl|\{\mathsf{p}\in \mathcal{C}_{D,k^{*}}\}\bigr| \;=\; \bigl|\{\mathsf{p}\in \mathcal{C}_{D,0}\}\bigr|,\] with the path counts decaying symmetrically as \(|k^{*}|\) increases. In particular, the boundary values obey \[\forall D \in \mathbb{N}, \qquad \bigl|\{\mathsf{p}\in \mathcal{C}_{D,\pm D}\}\bigr| = 1,\] so that the distribution is unimodal at \(k^{*}=0\) and strictly decreasing toward the endpoints. Thus, in analyzing the computational complexity of the algorithm that performs computations over these paths, it suffices to show that the runtime is bounded above by the enumeration count of the path family with the largest cardinality. Equivalently, this reduces to identifying the terminal index \(k^{*}\) at which the maximum mass accumulates. Accordingly, we bound the counting formula by considering the set \(\mathcal{C}_{D,0}\), since \[\max_{k^{*}\in[-D,D]} \, \bigl|\mathcal{C}_{D,k^{*}}\bigr| \;=\; \bigl|\mathcal{C}_{D,0}\bigr| .\] On a per-round basis, the largest enumeration occurs when we generate all positive paths, i.e., \(\mathcal{C}_{D,0}^{+}\). 
(Note that the symmetric contribution from negative paths is picked up in parallel, but is always strictly smaller because the mass is constrained at each application of the shift function.) To formalize this, we let

  • \(\ell \overset{\text{def}}{=}\lceil D/2 \rceil\) - "slot" parameter used in every binomial index

  • \(s \overset{\text{def}}{=}\lfloor D/2 \rfloor\) - the maximum number of "shifts" the parity-aware shift operator \(\mathcal{S}_{\beta}\) can perform

We can see that each general configuration of an arbitrary \(\hat{c} = (c_{0},...,c_{k_{+}}) \in \mathcal{C}_{D,k^{*}}^{+}\) is counted by 40 . We can thus make the substitutions

  • \(r_{i} \overset{\text{def}}{=}\ell - \lceil\frac{i + 1}{2}\rceil\)

  • \(N_{i} \overset{\text{def}}{=}i + r_{i} - 1\)

  • \(K_{i} \overset{\text{def}}{=}r_{i} - 1\)

into 40 to get: \[\mathscr{C}(\hat{s}^{(0)}) = \sum_{i=0}^{D-k^{*}}\!\left( \binom{N_i}{K_i} \;+\; (\beta - 1)\,\binom{N_i}{K_i + 1} \right).\] One application of the shift operator \(\mathcal{S}_{\beta}\) removes two units of mass from the rightmost end and slides the tuple one place right. Therefore, after the \(k^{\text{th}}\) shift, we conservatively have: \[\begin{align} D_{k}\overset{\text{def}}{=}D - k \end{align}\] We can then present the following lemma:

Lemma 1 (Refined upper bound via entropy). Let \(c\) be a configuration with effective depth \(\tilde{d}\in\mathbb{N}\) and put \(\ell=\lceil \tilde{d}/2\rceil\). For \(i\ge0\) define \[\begin{gather} r_i=\ell-\Big\lceil\frac{i+1}{2}\Big\rceil,\qquad N_i=i+r_i-1=\ell-1+i-\Big\lceil\frac{i+1}{2}\Big\rceil,\\ K_i=r_i-1=\ell-\Big\lceil\frac{i+1}{2}\Big\rceil-1. \end{gather}\] Then there exists a constant \(C>0\) such that \[\mathscr{C}(\hat{s}^{(0)})\;\le\; C\,\tilde{d}^{1/2}\,\gamma^{\,\tilde{d}}, \qquad \gamma \;=\; 2^{\frac{3}{4}H(1/3)} \approx 1.61185,\] where \(H(x)=-x\log_2 x-(1-x)\log_2(1-x)\).

Proof. Use \(\lceil x\rceil\ge x\) and \(\lceil x\rceil\le x+1\) to get, for all relevant \(i\), \[N_i\le \ell-1+\frac{i}{2}\le \tfrac{3}{4}\tilde{d} + \mathscr{O}(1), \qquad \frac{K_i}{N_i}\;\ge\;\frac{2\ell - i - 3}{2\ell + i - 2}\;=\;\frac{1}{3}-\mathscr{O}\Big(\frac{1}{\tilde{d}}\Big).\] By the entropy-form binomial bound (e.g., [18], [19]), \[\binom{N}{K}\;\le\;\frac{C_0}{\sqrt{N}}\;2^{\,N H(K/N)}.\] Apply with \((N,K)=(N_i,K_i)\) and the bounds above to obtain \[\binom{N_i}{K_i},\;\binom{N_i}{K_i+1}\;\le\;\frac{C_1}{\sqrt{\tilde{d}}}\;\gamma^{\,\tilde{d}}, \quad \gamma=2^{\frac{3}{4}H(1/3)}.\] Summing over at most \(\tilde{d}+1\) indices \(i\) yields \(\mathscr{C}(\hat{s}^{(0)})\le C\,\tilde{d}^{1/2}\gamma^{\tilde{d}}\). ◻

Using the hockey-stick identity
\(\sum_{i=0}^{r}\binom{i+\ell-1}{\ell-1}=\binom{r+\ell}{\ell}\) [20] to collapse the \(i\)–sum first gives a single binomial term and improves the pre-factor to \(\mathscr{O}(\tilde{d}^{-1/2})\) (same base \(\gamma\)).
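Both ingredients of the bound are easy to sanity-check numerically; the small ranges below are arbitrary:

```python
from math import comb, log2

def H(x):
    """Binary entropy, with the usual convention H(0) = H(1) = 0."""
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

# Entropy-form binomial bound: C(N, K) <= 2^{N H(K/N)}.
for N in range(1, 40):
    for K in range(N + 1):
        assert comb(N, K) <= 2 ** (N * H(K / N)) * (1 + 1e-9)

# Hockey-stick identity: sum_{i=0}^{r} C(i + l - 1, l - 1) = C(r + l, l).
for ell in range(1, 8):
    for r in range(8):
        assert sum(comb(i + ell - 1, ell - 1) for i in range(r + 1)) == comb(r + ell, ell)
```

The small multiplicative slack on the entropy bound only absorbs floating-point rounding; the inequality itself is exact.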

We then state another lemma, examining the per-round decay under shifts:

Lemma 2 (Per-round decay under shifts). Let \(\mathscr{C}_{k}\) denote the cost after exactly \(k\) applications of \(\mathcal{S}_{\beta}\), and set \(D_k \overset{\text{def}}{=}D-k\). There exists a constant \(C>0\) (the same as in 1) such that, for all \(0\le k\le m\), \[\label{eq:per-round-decay} \mathscr{C}_{k} \;\le\; C\,D_k^{1/2}\,\gamma^{\,D_k} \;\le\; C\,D^{1/2}\,\gamma^{\,D}\,\rho^{\,k}, \qquad \rho \overset{\text{def}}{=}\gamma^{-1} \in (0,1).\qquad{(1)}\]

Proof. After \(k\) shifts the effective depth is at most \(D_k=D-k\). Applying 1 with \(\tilde{d}=D_k\) yields \(\mathscr{C}_{k}\le C\,D_k^{1/2}\,\gamma^{D_k}\). Since \(D_k^{1/2}\le D^{1/2}\) and \(\gamma^{D_k}=\gamma^{D}\cdot\gamma^{-k}\), we obtain the claimed bound with \(\rho=\gamma^{-1}\). ◻
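The chain of inequalities in 2 is elementary to verify numerically, since \(\gamma^{D_k}=\gamma^{D}\rho^{k}\) holds exactly and \(\sqrt{D_k}\le\sqrt{D}\); the depth range below is an arbitrary small sweep:

```python
from math import log2

# gamma = 2^{(3/4) H(1/3)}; note H(1/3) = log2(3) - 2/3.
gamma = 2 ** (0.75 * (log2(3) - 2 / 3))
rho = 1 / gamma

for D in range(2, 40):
    for k in range(D // 2 + 1):
        Dk = D - k
        lhs = Dk ** 0.5 * gamma ** Dk
        rhs = D ** 0.5 * gamma ** D * rho ** k
        assert lhs <= rhs * (1 + 1e-9)   # per-round decay inequality
```

Again the tiny slack only guards against floating-point rounding in the exponentials.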

A final lemma establishes the outer stopping time of the shift process:

Lemma 3 (Outer stopping time). The shift process stops no later than round \(s=\lfloor D/2\rfloor\).

Proof. By definition, one application of \(\mathcal{S}_{\beta}\) via \(\mathcal{A}\) removes two units of mass from the rightmost end and shifts the tuple one slot to the right (cf. the shift process defined above). Let \(M_0\) denote the total removable mass available to the shift operator at the start of a global round. Because the tuple encodes a path of length \(D\) with at least one unit in every occupied slot, the removable mass satisfies \(M_0\le D\). Each shift reduces the removable mass by exactly \(2\) and never increases it (collisions merge “\(2\)”s but do not create new ones). Hence after at most \(\lfloor D/2\rfloor\) shifts the removable mass is exhausted and no further application of \(\mathcal{S}_{\beta}\) is possible. Equivalently, the pivot cannot advance beyond the rightmost slot once \(M_0\) has been depleted, so the procedure halts by round \(s=\lfloor D/2\rfloor\). ◻

We can now state the complete theorem for the computational upper bound on the combinatorics:

Theorem 2 (Total running time). Let \[T(D) \overset{\text{def}}{=}\sum_{k=0}^{s} \mathscr{C}_{k}, \qquad s \overset{\text{def}}{=}\bigl\lfloor D/2 \bigr\rfloor,\] where \(\mathscr{C}_{k}\) is the cost after exactly \(k\) shifts. With \(\gamma \overset{\text{def}}{=}2^{\frac{3}{4}H(1/3)}\approx 1.61185\) and \(\rho\overset{\text{def}}{=}\gamma^{-1}\approx 0.6204\), 1 and the per-round decay lemma give \[T(D) \;\le\; C\,D^{1/2}\,\gamma^{D} \sum_{k=0}^{s}\rho^{k} \;\le\; \frac{C}{1-\rho}\,D^{1/2}\,\gamma^{D}.\] Numerically, \(1-\rho \approx 0.3796\) and \(\frac{1}{1-\rho}\approx 2.634\), hence \[T(D)\;\le\; C_{\!*}\,D^{1/2}\,\gamma^{D} \quad\text{with}\quad C_{\!*}\overset{\text{def}}{=}\frac{C}{1-\rho}\;\lesssim\; 2.64\,C.\] In particular, \[T(D)\;=\;\mathscr{O}\bigl(D^{1/2}\,1.612^{\,D}\bigr).\]

Proof. By the per-round bound, \(\mathscr{C}_{k}\le C\,D^{1/2}\gamma^{D}\rho^{k}\) for \(0\le k\le s\). Summing the geometric series and using \(s=\lfloor D/2\rfloor\) (so the tail factor drops), \[T(D)\le C D^{1/2}\gamma^{D}\frac{1-\rho^{\,s+1}}{1-\rho}\le \frac{C}{1-\rho} D^{1/2}\gamma^{D}.\] Insert the numerics for \(\rho=\gamma^{-1}\). ◻

Corollary 1 (Exponential speedup over naive recursion). The naive exhaustive recursion costs \(3^{D}\) operations. Therefore \[\frac{3^{D}}{T(D)} \;\ge\; \frac{1}{C_{\!*}}\,\frac{(3/\gamma)^{D}}{D^{1/2}} \quad\text{with}\quad \frac{3}{\gamma}\approx 1.861.\] Hence the ratio grows unboundedly like \((1.861)^{D}/(C_{\!*}\,D^{1/2})\), proving an exponential improvement.
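To make the constants concrete, the bound and the speedup ratio can be evaluated numerically. The following Python sketch is our own illustration; the names `bound_T` and `speedup_lower_bound`, and the normalization \(C=1\), are assumptions for exposition, not the paper's benchmarking code:

```python
import math

def H2(p):
    """Binary entropy in bits."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Growth constant gamma = 2^{(3/4) H(1/3)} and decay factor rho = 1/gamma
gamma = 2 ** (0.75 * H2(1 / 3))     # ~ 1.61185
rho = 1 / gamma                     # ~ 0.6204
geom = 1 / (1 - rho)                # geometric-series constant, ~ 2.634

def bound_T(D, C=1.0):
    """Upper bound C_* sqrt(D) gamma^D on the total running time T(D)."""
    return C * geom * math.sqrt(D) * gamma ** D

def speedup_lower_bound(D, C=1.0):
    """Lower bound on 3^D / T(D), i.e. (3/gamma)^D / (C_* sqrt(D))."""
    return (3 / gamma) ** D / (C * geom * math.sqrt(D))

for D in (10, 20, 40):
    print(D, f"T(D) <= {bound_T(D):.3e}", f"speedup >= {speedup_lower_bound(D):.3e}")
```

As expected, the speedup lower bound grows like \((1.861)^{D}\), dwarfing the polynomial factor \(D^{1/2}\).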

These bounds establish a rigorous computational upper bound on the complexity of the combinatorial enumeration. In turn, they also imply a theoretical lower bound on the runtime that any implementation of our algorithm must incur. The key task then is to analyze the performance of our actual implementation, derive its achievable practical upper bound, and compare this with the theoretical lower bound. The gap between the two quantifies how closely the implemented algorithm approaches the fundamental efficiency limits.

7 Conclusions↩︎

We close by highlighting two complementary threads. In the first thread, future work, we outline several directions that naturally follow from our framework, including extensions to general \(n\)-nomial trees, links to Motzkin- or Dyck-style occupation profiles, sharper complexity bounds, and density-aware sampling schemes for very deep trees. In the second thread, applications, we summarize how our enumeration and counting results can be used in practice—e.g., discretizing stochastic dynamics on lattices, path-dependent valuation and risk aggregation, and planning/rollout in various discrete-event problems—emphasizing when the constructive algorithm is preferable to classical recursion. Together, these closing sections synthesize the practical impact of our contributions and chart a concrete agenda for subsequent research.

7.1 Further Work↩︎

7.1.1 Connection with Motzkin Paths↩︎

There is a natural correspondence between length-\(D\) walks on our recombining trinomial tree (step set \(\{-1,0,+1\}\)) and Motzkin walks of length \(D\) (directed lattice paths with up, flat, down steps). Imposing the usual half-plane constraint (never going below the baseline) and endpoint \(0\) specializes these to classical Motzkin paths, whose univariate generating function is algebraic and whose enumerants are the Motzkin numbers. Removing \(0\)-steps further specializes to Dyck paths (Catalan objects). See, e.g., Banderier–Flajolet for a unified treatment of directed lattice paths (including \(\{-1,0,+1\}\)) via the kernel method, generating functions, and half-plane constraints [21].

What is distinctive in our setting is the cardinality tuple (occupation profile) \(c(\pi)=(c_{k})_{k\in[k_-,k_+]}\), which records the number of visits to each level along a path. This refines beyond standard Motzkin/Dyck statistics (peaks, returns, level steps at height \(h\), etc.) typically used for Narayana- or Fine-type refinements. A promising direction is to relate our multilevel occupation profiles to Motzkin polynomials (a multivariate scheme that weights steps by height) and to continued-fraction/J-fraction representations: Oste–Van der Jeugt develop Motzkin polynomials and show how tridiagonal-matrix powers and weighted Motzkin paths are encoded by such generating functions [22].
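As a concrete anchor for this correspondence, one can check by dynamic programming that the number of length-\(D\) walks with step set \(\{-1,0,+1\}\) that stay nonnegative and end at \(0\) equals the \(D\)-th Motzkin number. The Python sketch below is illustrative only (the name `motzkin_numbers` is ours, not the paper's):

```python
def motzkin_numbers(n_max):
    """Count {-1, 0, +1} walks of length D that never go below 0 and end at
    height 0, for D = 0..n_max, by dynamic programming over heights."""
    out = []
    for D in range(n_max + 1):
        counts = [1] + [0] * D   # counts[h]: nonnegative prefixes ending at height h
        for _ in range(D):
            new = [0] * (D + 1)
            for h, c in enumerate(counts):
                for s in (-1, 0, 1):
                    if c and 0 <= h + s <= D:
                        new[h + s] += c
            counts = new
        out.append(counts[0])
    return out

print(motzkin_numbers(7))   # 1, 1, 2, 4, 9, 21, 51, 127
```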

7.1.1.1 Open Questions on the Connection with Motzkin Paths

  1. Does the multivariate series \(\displaystyle F(\mathbf{y};t)=\sum_{D\ge0}\;\sum_{\pi} t^{D}\prod_{k} y_{k}^{\,c_{k}(\pi)}\) admit a closed J-fraction/continued-fraction form that matches a suitable specialization of Motzkin polynomials? If so, our weak-composition formula for fixed \(c(\pi)\) could potentially follow by coefficient extraction.

  2. Under nonnegativity constraints (meanders/excursions), can one potentially obtain Narayana-type refinements that condition on \(c(\pi)\) (e.g., total visits at nonnegative levels) and compare them to known refinements for Motzkin and/or Dyck paths?

  3. How do endpoint constraints (\(k^{*}\neq 0\)) and parity rules interact with the standard first-return/first-step decompositions used for Motzkin paths? Can these be expressed as simple functional equations for \(F(\mathbf{y};t)\)?

We emphasize that, while the Motzkin correspondence is classical, we are not aware of a prior closed enumeration by the full level-visit profile \(c(\pi)\) on trinomial trees. Establishing whether our occupation-profile enumeration reduces to (or strictly extends) known Motzkin polynomial frameworks is an interesting avenue for future work.

7.1.2 Potential for Gray-Code Optimization↩︎

Our algorithm possesses strong potential for further optimization. Here we make some informal observations on bounds and sketch how one could implement Gray codes to optimize the algorithm given its construction:

Our recursionless design updates only a constant number of tuple entries per emitted object (an adjacent unit transfer), so the inner enumeration can be implemented with a minimal-change (Gray) successor that guarantees constant worst-case delay and \(O(1)\) extra workspace beyond the current state [6], [9], [23]. Let \(c_{\mathrm{succ}}>0\) denote this per-output constant.

7.1.2.1 Cost decomposition

Write the implementation cost as \[T_{\mathrm{impl}}(D)\;=\;T_{\mathrm{outer}}(D)\;+\;T_{\mathrm{inner}}(D).\] By the stopping-time bound, the outer routine applies \(\mathcal{S}_{\beta}\) at most \(s=\lfloor D/2\rfloor\) times. If we conservatively initialize the negative-path seed in \(\Theta(D)\) time per stage (Algorithm 12), then \[T_{\mathrm{outer}}(D)\;\le\; c_{\mathrm{init}}\,D\,s\;+\;c_{\mathrm{shift}}\,s \;=\;\mathscr{O}(D^2)\quad\text{(prototype bound)}.\] With the standard pointerized update (no full re-scan), the same work is \(O(1)\) per stage, hence \(T_{\mathrm{outer}}(D)=O(D)\) (achievable).

For the inner enumeration, a Gray successor on the constrained weak compositions yields \[T_{\mathrm{inner}}(D)\;\le\; c_{\mathrm{succ}}\, \Bigl|\mathscr{C}^{w}_{D,\mathrm{dyn}}\!\bigl(\hat{s}^{(0)}\bigr)\Bigr| \;=\; c_{\mathrm{succ}}\cdot \Theta\!\bigl(D^{1/2}\gamma^{D}\bigr),\] by Theorem 2. Therefore \[T_{\mathrm{impl}}(D) \;\le\; \underbrace{c_{\mathrm{succ}}\,\Theta\bigl(D^{1/2}\gamma^{D}\bigr)}_{\text{output-sensitive, dominant}} \;+\; \underbrace{\mathscr{O}(D^2)}_{\text{prototype init + shifts}} \;=\; \Theta\!\bigl(D^{1/2}\gamma^{D}\bigr),\] and with \(O(1)\)-time reseeding the additive term improves to \(O(D)\), which is negligible compared to \(D^{1/2}\gamma^{D}\).

7.1.2.2 Takeaway

Even without changing the mathematics, a loopless Gray-code successor makes the implementation output-optimal: constant worst-case delay per emitted tuple and total time within a constant factor of the theoretical lower bound implied by the combinatorial count. Practically, the inner loops can be replaced by a streaming next() that touches only two adjacent entries per step, while the outer loop performs at most \(s=\lfloor D/2\rfloor\) constant-time reseeds.
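For illustration, a streaming next() over unconstrained weak compositions can be sketched as below. This is the plain lexicographic successor, not the loopless Gray variant described above (which would touch only two adjacent entries per step); the names `next_weak_composition` and `count_weak_compositions` are ours:

```python
from math import comb

def next_weak_composition(a):
    """In-place lexicographic successor of a weak composition of n into
    len(a) parts, starting from [n, 0, ..., 0]; returns False at the end.
    Note: this successor may rewrite a whole suffix, unlike a Gray code."""
    k = len(a)
    for i in range(k - 2, -1, -1):      # rightmost movable slot (excluding last)
        if a[i] > 0:
            a[i] -= 1
            moved = sum(a[i + 1:]) + 1  # gather suffix mass plus one unit
            for j in range(i + 1, k):
                a[j] = 0
            a[i + 1] = moved
            return True
    return False

def count_weak_compositions(n, k):
    """Enumerate and count all weak compositions of n into k parts."""
    a = [n] + [0] * (k - 1)
    total = 1
    while next_weak_composition(a):
        total += 1
    return total

# Stars and bars: C(n + k - 1, k - 1)
print(count_weak_compositions(5, 3), comb(5 + 3 - 1, 3 - 1))
```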

7.1.3 Extension to \(n\)-nomial Recombining Trees↩︎

We also believe there is a closed-form enumeration for general \(n\)-nomial recombining trees—depending on depth \(D\), terminal index \(k^{*}\), and branch multiplicity \(n\)—with a corresponding complexity bound of the form \(\mathcal{O}\big(D\,b(n)^D\big)\). Deriving it appears to require a fully multivariate occupation-profile framework and new symmetry reductions beyond the trinomial case as well as a more nuanced mass-redistribution process; pursuing these technicalities is beyond the scope of this paper and is left as future work.

7.2 Applications↩︎

Our results are directly useful in several settings:

7.2.0.1 (A) Option pricing on recombining trees

The constructive enumeration supports exact valuation of path-dependent claims (e.g., Asian, barrier, lookback) by aggregating payoffs over occupation profiles (cardinality tuples). Closed-form counts enable stratified/importance sampling and reduce variance; early exercise (American-style) fits via dynamic programming on the enumerated successor sets. Listing these also allows for analysis of systemic risk when run in conjunction with known algorithms like FSG (forward-shooting grid).

7.2.0.2 (B) Discrete-event control and scheduling

Event sequences in queues or inventory systems can be modeled as trinomial walks; the cardinality tuple records level visits (e.g., buffer or stock levels). This yields exact reachability distributions, cost aggregation along trajectories, and worst-case or risk-sensitive evaluations under resource constraints.

7.2.0.3 (C) Planning and model-based reinforcement learning

Finite-horizon rollouts on the recombining tree avoid recursion and duplicates, while occupation profiles serve as compact trajectory features for value expansion or policy improvement. For long horizons, the same structure supports density-aware pruning or stratified sampling with explicit error control.

In moderate-depth regimes requiring auditability and tight error budgets, the proposed enumeration is preferable to naive recursion or unconstrained Monte Carlo; for very deep horizons, it remains a principled backbone for hybrid sampling schemes.

8 Generation of Non-Unique Combinations↩︎

Figure 8: Generate Combinations of Paths using a Recursive DFS Procedure
Figure 9: Generate Combinations of Paths using Hashing
Figure 10: Recursion-free Generation of Paths

9 Path Init Algorithm↩︎

Figure 11: Initialize Path Array

10 Negative Path Init↩︎

Figure 12: Shift-and-Reseed (\mathcal{A}_{M,t}) — one stage

11 Generate All Unique Paths↩︎

Figure 13: Unique Path Generation (Reseeded, recursion-free, with starting-tuple invariants)

Acknowledgments↩︎

We would like to acknowledge Manav Vora, Ryan Roach, and Pingbang Hu of the University of Illinois Urbana-Champaign, who provided useful notes on the manuscript and on various proofs. We also thank William Cosley of the University of Northern Iowa, who helped write Appendix A and thereby helped us first visualize the path enumeration of the traditional approach. Finally, we acknowledge Dāniels Ponamarjovs of the College of Alberta, Latvia, who advised us throughout the initial implementation on proper C++ coding practices.

12 Excursions↩︎

Another useful tool for constructing paths in \(\mathscr{P}_{k^*}\) is to exploit excursions from \(0\) in the position sequence \(\pi(\mathsf{p})\) (see 7 ).

12.0.0.1 Motivating example

Assume \(D=9\) and \(k^*=2\). Consider the position sequence \[\label{E:basepath} (0,1,2,1,0,1,0,0,1,2).\tag{55}\] There are three maximal positive runs (excursions) bounded by zeros: \[(0\,|\,\underbrace{1,2,1}_{\text{exc.\;1}}\,|\,0\,|\,\underbrace{1}_{\text{exc.\;2}}\,|\,0,0\,|\,\underbrace{1,2}_{\text{exc.\;3}}),\] where the vertical bars mark zeros. Since the last run ends at \(d=D\) with value \(k^*=2\neq 0\), the rightmost block is locked (see below) and is not flippable if we wish to preserve the terminal position. Flipping any subset of the unlocked excursions (here, excursions 1 and 2) negates the entries inside those runs and yields valid paths that still end at \(k^*\). For instance, flipping excursion 1 only, excursion 2 only, neither, or both produces the four sequences \[\label{E:flippedexcursions} \begin{align} &(0,-1,-2,-1,0,1,0,0,1,2),\\ &(0,1,2,1,0,-1,0,0,1,2),\\ &(0,1,2,1,0,1,0,0,1,2),\\ &(0,-1,-2,-1,0,-1,0,0,1,2). \end{align}\tag{56}\]
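The excursion-flip construction on this example can be checked mechanically. The following Python sketch is illustrative only; `excursions` and `flip_family` are our own names, and the code assumes a nonnegative input sequence as in the example:

```python
def excursions(k):
    """Maximal positive runs of a nonnegative position sequence, as (l, r)
    pairs of bounding indices; r = len(k) is the sentinel for a final run
    that never returns to 0."""
    runs, l = [], None
    for d, v in enumerate(k):
        if v > 0 and l is None:
            l = d - 1               # the zero immediately before the run
        elif v == 0 and l is not None:
            runs.append((l, d))
            l = None
    if l is not None:
        runs.append((l, len(k)))    # locked final run
    return runs

def flip_family(k):
    """All sign flips over subsets of the unlocked excursions of k."""
    runs = excursions(k)
    locked = 1 if runs and runs[-1][1] == len(k) else 0
    free = runs[:len(runs) - locked]
    family = []
    for mask in range(2 ** len(free)):
        kk = list(k)
        for m, (l, r) in enumerate(free):
            if mask >> m & 1:
                for d in range(l + 1, r):
                    kk[d] = -kk[d]
        family.append(tuple(kk))
    return family

base = (0, 1, 2, 1, 0, 1, 0, 0, 1, 2)
for p in flip_family(base):
    print(p)   # the four sequences of the example
```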

12.0.0.2 Formal setup.

Let \(\mathbf{k}\overset{\text{def}}{=}(k_0,k_1,\dots,k_D)\in \mathbb{Z}^{D+1}\) be a position sequence with \(k_0=0\) and \(k_D=k^*\). We call a pair of indices \((l,r)\) with \(0\le l<r\le D+1\) an excursion interval if \[k_l=0,\qquad k_r=0\;\text{when }r\le D,\qquad k_d>0\;\;\text{for all }\,l<d<r,\] and \((l,r)\) is maximal with these properties. (When the final run reaches \(D\) with \(k_D>0\), we close it by the sentinel \(r=D+1\).) The open index set \((l,r)\cap\{0,1,\dots,D\}\) is the support of the excursion.

List all excursion intervals from left to right as \[0=l_1<r_1\le l_2<r_2\le \cdots \le l_M<r_M\le D+1.\] Define the excursion indicators \(\mathbf{1}^{(m)}\in\{0,1\}^{D+1}\) by \[\mathbf{1}^{(m)}_d \;=\; \begin{cases} 1, & l_m<d<r_m,\\ 0, & \text{otherwise}. \end{cases}\] Set the lock flag \[\epsilon(\mathbf{k})\overset{\text{def}}{=} \begin{cases} 1,& r_M=D+1\quad(\text{rightmost block hits D with }k_D\neq 0),\\ 0,& r_M\le D\quad(\text{rightmost block ends at a zero}). \end{cases}\] Thus the indices of flippable excursions are \[\mathcal{I}^*(\mathbf{k})\overset{\text{def}}{=}\{1,2,\dots,M-\epsilon(\mathbf{k})\}.\]

12.0.0.3 Flip operator

For any subset \(A\subseteq \mathcal{I}^*(\mathbf{k})\) define \[\mathscr{F}_A(\mathbf{k}) \;\overset{\text{def}}{=}\; \mathbf{k}\;-\; 2\sum_{m\in A} \bigl(\mathbf{k}\odot \mathbf{1}^{(m)}\bigr),\] where \(\odot\) denotes the Hadamard (entrywise) product. In words: on each excursion \(m\in A\) we negate the entries of \(\mathbf{k}\) (equivalently, we reflect the excursion about \(0\)), and we leave all other indices unchanged. The full flip family is \[\label{eq:FlipFamily} \mathscr{F}(\mathbf{k})\;\overset{\text{def}}{=}\;\bigl\{\mathscr{F}_A(\mathbf{k}): A\subseteq \mathcal{I}^*(\mathbf{k})\bigr\}.\tag{57}\] Hence \[\label{eq:FlipCount} \bigl|\mathscr{F}(\mathbf{k})\bigr| \;=\; 2^{\,M-\epsilon(\mathbf{k})}.\tag{58}\] If one wishes to exclude the trivial “no-flip” element corresponding to \(A=\varnothing\), the count becomes \(2^{\,M-\epsilon(\mathbf{k})}-1\).

Lemma 4 (Validity and endpoint preservation). Let \(\mathbf{k}\in\mathbb{Z}^{D+1}\) be a position sequence with \(k_0=0\) and \(k_D=k^*\), and let \(A\subseteq \mathcal{I}^*(\mathbf{k})\). Then \(\widetilde{\mathbf{k}}\overset{\text{def}}{=}\mathscr{F}_A(\mathbf{k})\) is again a valid position sequence of a path in \(\mathscr{P}\); i.e., \(\widetilde{k}_{d}-\widetilde{k}_{d-1}\in\{-1,0,+1\}\) for all \(d\), with \(\widetilde{k}_0=0\) and \(\widetilde{k}_D=k^*\).

Proof. Inside an excursion \((l_m,r_m)\), the increments of \(\mathbf{k}\) are in \(\{\pm1\}\) (because the values are strictly positive and bounded by zeros at the endpoints). Negating the entries on that block negates those increments, which remain in \(\{\pm1\}\). At the boundaries \(d=l_m\) and \(d=r_m\) we have zeros in both \(\mathbf{k}\) and \(\widetilde{\mathbf{k}}\) (for \(r_m\le D\)) or, when \(r_m=D+1\), the block is locked and not flipped by construction. Hence all increments remain in \(\{-1,0,1\}\). Since each flipped excursion begins and ends at \(0\), the net displacement contributed by that excursion remains \(0\), so the terminal value \(k_D=k^*\) is preserved (locking prevents altering the final nonzero run). ◻

Definition 1 (Nonnegative representatives). \[\mathscr{P}^+_{k^*}\;\overset{\text{def}}{=}\;\bigl\{\mathsf{p}\in \mathscr{P}_{k^*}:\;\pi(\mathsf{p})\in\mathbb{Z}_+^{D+1}\bigr\}.\]

If \(k^*\ge 0\), then \[\label{E:fliprep} \mathscr{P}_{k^*} \;=\; \bigcup_{\mathsf{p}\in \mathscr{P}^+_{k^*}} \;\pi^{-1}\!\bigl(\mathscr{F}\bigl(\pi(\mathsf{p})\bigr)\bigr).\tag{59}\] If \(k^*<0\), then \[\label{eq:flip-negative} \mathscr{P}_{k^*} \;=\; \bigcup_{\mathsf{p}\in \mathscr{P}^+_{-k^*}} \;\pi^{-1}\!\Bigl(-\,\mathscr{F}\bigl(\pi(\mathsf{p})\bigr)\Bigr).\tag{60}\]

Proof. For \(k^*\ge 0\), any \(\mathsf{p}\in \mathscr{P}_{k^*}\) has position sequence \(\mathbf{k}\) whose negative entries occur in blocks separated by zeros. Successively reflecting each negative block across \(0\) produces a nonnegative sequence \(\mathbf{k}^{(+)}\in\mathbb{Z}_+^{D+1}\) with the same endpoints and increments in \(\{-1,0,1\}\). Thus \(\mathbf{k}\in \mathscr{F}\bigl(\mathbf{k}^{(+)}\bigr)\) with \(\mathbf{k}^{(+)}=\pi(\mathsf{p}_+)\) for some \(\mathsf{p}_+\in \mathscr{P}^+_{k^*}\), proving inclusion “\(\subseteq\)”. The reverse inclusion follows from 4. The case \(k^*<0\) reduces to \(k^*>0\) by global sign-flip. ◻
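The forward inclusion of this proof can be verified by brute force at small depths: taking entrywise absolute values of any path ending at \(k^*\ge 0\) must yield a valid nonnegative representative with the same endpoints. A minimal Python sketch, assuming exhaustive enumeration over all \(3^{D}\) step sequences (the name `check_flip_representation` is ours):

```python
from itertools import product

def check_flip_representation(D, kstar):
    """Brute-force check of the forward inclusion: for every path ending at
    kstar >= 0, its entrywise absolute value is a valid nonnegative path
    with the same endpoints (the representative it is flipped from)."""
    for steps in product((-1, 0, 1), repeat=D):
        k = [0]
        for s in steps:
            k.append(k[-1] + s)
        if k[-1] != kstar:
            continue
        rep = [abs(v) for v in k]
        # increments survive reflection because sign changes pass through 0
        assert all(abs(rep[d] - rep[d - 1]) <= 1 for d in range(1, D + 1))
        assert rep[0] == 0 and rep[-1] == kstar
    return True

print(check_flip_representation(6, 2))
```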

12.0.0.4 Counting flips for a fixed representative

Let \(\mathbf{k}=\pi(\mathsf{p})\in \mathbb{Z}_+^{D+1}\) and let \(M\) be the number of excursions of \(\mathbf{k}\) (maximal positive runs). Then \(|\mathscr{F}(\mathbf{k})|=2^{\,M-\epsilon(\mathbf{k})}\) by 58 ; when excluding the no-flip element, \(|\mathscr{F}(\mathbf{k})|=2^{\,M-\epsilon(\mathbf{k})}-1\). In the example 55 , \(M=3\) and \(\epsilon(\mathbf{k})=1\), hence \(|\mathscr{F}(\mathbf{k})|=2^{2}=4\), exactly the four sequences in 56 .

13 Step counts and parity↩︎

Any \(\mathsf{p}\in \mathscr{P}_{k^*}\) can be decomposed into \(j_+\) up-steps, \(j_-\) down-steps, and \(j_0\) stays. Then \[\label{E:algebra} j_+ + j_- + j_0 \;=\; D, \qquad j_+ - j_- \;=\; k^*.\tag{61}\] Solving gives \[j_+ \;=\; \frac{D+k^*-j_0}{2}, \qquad j_- \;=\; \frac{D-k^*-j_0}{2}.\] Thus \(j_+\) and \(j_-\) are integers iff \[\label{eq:parity-j0} j_0 \equiv D+k^* \pmod{2},\tag{62}\] and necessarily \[\label{eq:max-jplus-minus} 0\le j_0\le D,\qquad j_+ \le \Bigl\lfloor \frac{D+k^*}{2}\Bigr\rfloor, \qquad j_- \le \Bigl\lfloor \frac{D-k^*}{2}\Bigr\rfloor.\tag{63}\] The parity condition 62 is the even–odd compatibility between depth \(D\), terminal position \(k^*\), and the number of stays. It is the sole parity restriction induced by the underlying ternary step set, and it is implicitly enforced by the path-generation rules observed in 2 - 4. In particular, when seeding the first iteration of the (recursion-free) generator, one must choose \(j_0\) with the parity prescribed by 62 ; then \(j_\pm\) follow from 61 .
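Seeding under these constraints can be sketched as follows; `step_counts` is our own illustrative helper, not the paper's generator:

```python
def step_counts(D, kstar, j0):
    """Solve j+ + j- + j0 = D and j+ - j- = kstar for (j+, j-); returns
    None when the parity condition j0 = D + kstar (mod 2) fails or a
    count would be negative."""
    if (D + kstar - j0) % 2 != 0:
        return None                  # parity (62) violated
    jp = (D + kstar - j0) // 2
    jm = (D - kstar - j0) // 2
    return (jp, jm) if jp >= 0 and jm >= 0 else None

# D = 9, kstar = 2: D + kstar is odd, so j0 must be odd
print(step_counts(9, 2, 0))   # None (parity fails)
print(step_counts(9, 2, 1))   # (5, 3)
print(step_counts(9, 2, 3))   # (4, 2)
```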

13.0.0.1 Integration with the positive-path generator.

10 details a recursion-free enumeration of the nonnegative representatives \(\mathscr{P}^+_{k^*}\) that maintains the monotone lexicographic ordering and obeys the rules outlined in 2. The full path set \(\mathscr{P}_{k^*}\) is obtained by applying the flip operator \(\mathscr{F}\) (see 57 ) to each generated representative, as justified by the flip representation 59 . This two-stage procedure exhausts all paths ending at \(k^*\) without duplication.

The flip step preserves the terminal index and only toggles signs within excursions bounded by zeros, so it commutes with any ordering that respects the underlying rules and structure introduced in 2 - 4. In implementations that “ping-pong’’ across the path space, one may emit each nonnegative representative as soon as it is generated and then emit its flip family in any deterministic subset order (e.g., lexicographic on \(\mathcal{I}^*(\mathbf{k})\)), preserving a global total order.

14 Proof of the Equivalence Relation↩︎

For every path \(\mathsf{p}\in \mathscr{P}\) there exists a unique cardinality tuple \(\hat{C}(\mathsf{p})\).

Proof. For a path \(\mathsf{p}\) with positions \((k_0,\dots,k_D)\), define the counting measure \[\nu_{\mathsf{p}} := \sum_{d=0}^{D}\delta_{k_d}, \qquad c_k(\mathsf{p}):=\nu_{\mathsf{p}}(\{k\}).\] Let \[k_- := \min_{0\le d \le D} k_d, \qquad k_+ := \max_{0\le d \le D} k_d,\] so the path visits only positions in \([k_-,k_+]\). Then for any test function \(v:\mathbb{Z}\to\mathbb{R}\) we have \[\sum_{d=0}^{D} v_{k_d} \;=\; \int v\,\mathrm{d}\nu_{\mathsf{p}} \;=\; \sum_{k=k_-}^{k_+} c_k(\mathsf{p})\,v_k.\] Thus \(\hat{C}(\mathsf{p})=(c_k(\mathsf{p}))_{k=k_-}^{k_+}\) is exactly the (unique) histogram/cardinality tuple of \(\mathsf{p}\). By definition of \(\mathcal{C}_{D} := \hat{C}(\mathscr{P})\), every \(\hat{c} \in \mathcal{C}_{D}\) arises from at least one path. ◻
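The histogram map is straightforward to compute; a minimal Python sketch (the name `cardinality_tuple` is ours) applied to a sample position sequence:

```python
from collections import Counter

def cardinality_tuple(path):
    """Histogram (c_k) of level visits along a position sequence
    (k_0, ..., k_D), indexed from k_- to k_+."""
    c = Counter(path)
    return tuple(c[k] for k in range(min(path), max(path) + 1))

p = (0, 1, 2, 1, 0, 1, 0, 0, 1, 2)   # D = 9, levels visited: 0, 1, 2
print(cardinality_tuple(p))           # (4, 4, 2); entries sum to D + 1
```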

Fix \(\hat{c} \in \mathcal{C}_D\) and any \(\mathsf{p}\) with \(\hat{C}(\mathsf{p})=\hat{c}\). Define a legal reordering operator \(R\) on \(\mathsf{p}\) to be a bijection of indices \(R:\{0,\dots,D\}\to\{0,\dots,D\}\) such that \[\mathsf{p}^{R} := (k_{R^{-1}(0)},\,k_{R^{-1}(1)},\,\dots,\,k_{R^{-1}(D)})\] is again a valid path in \(\mathscr{T}_{D}\) (that is, \(k_{R^{-1}(0)}=0\) and \(k_{R^{-1}(t)}-k_{R^{-1}(t-1)}\in\{-1,0,1\}\) for all \(t\) as written in 2).

Lemma 5 (Histogram invariance). If \(R\) is a legal reordering for \(\mathsf{p}\), then we have \(\hat{C}(\mathsf{p}^{R})=\hat{C}(\mathsf{p})\).

Proof. Reindexing the multiset \(\{k_d : 0\le d\le D\}\) by a bijection does not change multiplicities at any level \(k\), hence the histogram counts are preserved. ◻

For any \(\mathsf{p}\in \mathscr{P}\), \[\{\mathsf{p}^{R} : R \text{ legal reordering for }\mathsf{p}\} \;=\;\{\mathsf{p}' \in \mathscr{P}: \hat{C}(\mathsf{p}')=\hat{C}(\mathsf{p})\}.\]

Proof. (\(\subseteq\)) If \(\mathsf{p}'=\mathsf{p}^R\) with \(R\) a legal reordering, then by the lemma \(\hat{C}(\mathsf{p}')=\hat{C}(\mathsf{p})\). (\(\supseteq\)) Conversely, if \(\mathsf{p}'\) has \(\hat{C}(\mathsf{p}')=\hat{C}(\mathsf{p})\), then the position sequences \((k_d)\) and \((k'_d)\) have the same multiplicities. Thus there exists a bijection \(R\) between time indices matching equal occurrences of each level. By assumption \(\mathsf{p}'\) is a valid path, so this \(R\) is a legal reordering. Hence \(\mathsf{p}'\in\{\mathsf{p}^R\}\). ◻

Corollary 2 (Partition of the tree). The sets \[[\hat{c}] := \{\mathsf{p}\in \mathscr{P}: \hat{C}(\mathsf{p})=\hat{c}\}, \qquad \hat{c}\in\mathcal{C}_D,\] are pairwise disjoint and their union equals \(\mathscr{P}\). Thus each \(\hat{c}\) generates exactly the class of paths with that histogram, and the union of all such classes reconstructs the entire tree without over- or undercounting.
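For small depths, this partition can be verified exhaustively. The Python sketch below (the names `histogram_key` and `check_partition` are ours) groups all \(3^{D}\) depth-\(D\) paths by their cardinality tuple, anchored at the minimum visited level, and confirms that the class sizes sum to the whole path set:

```python
from collections import Counter, defaultdict
from itertools import product

def histogram_key(path):
    """Cardinality tuple of a path together with its base level k_-."""
    c = Counter(path)
    return tuple(c[k] for k in range(min(path), max(path) + 1)), min(path)

def check_partition(D):
    """Group all 3^D depth-D paths by histogram class; classes are disjoint
    by construction, and their sizes must sum to 3^D."""
    classes = defaultdict(set)
    for steps in product((-1, 0, 1), repeat=D):
        k, pos = [0], 0
        for s in steps:
            pos += s
            k.append(pos)
        classes[histogram_key(tuple(k))].add(tuple(k))
    assert sum(len(v) for v in classes.values()) == 3 ** D
    return len(classes)

print(check_partition(1), check_partition(2))   # 3 and 7 classes
```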

Theorem 3 (Minimality of histogram classes). Let \(\sim\) be equality of histograms: \(\mathsf{p}\sim\mathsf{p}'\) iff \(\hat{C}(\mathsf{p})=\hat{C}(\mathsf{p}')\). A partition \(\Pi\) of \(\mathscr{P}\) is called reordering-invariant if whenever \(\mathsf{p}\in B\in\Pi\) and \(\mathsf{p}'\) is obtained from \(\mathsf{p}\) by a legal reordering operator (as in 2), then \(\mathsf{p}'\in B\). Then the histogram partition \(\{[\hat{c}]\}_{\hat{c}\in\mathcal{C}_D}\) is the coarsest reordering-invariant partition: every such \(\Pi\) refines \(\{[\hat{c}]\}\).

Proof. Legal reorderings preserve histograms, so each orbit under them is contained in some \([\hat{c}]\). Hence any reordering-invariant partition can only join together whole histogram classes. Therefore no strictly coarser reordering-invariant partition exists. ◻

Theorem 4 (One representative per class covers all vertices and edges). For each vertex \((k,d)\) with \(|k|\le d\le D\), let \(H(k,d)\in\mathscr{P}\) be the first-hit path that reaches level \(k\) for the first time at depth \(d\) via the lexicographically minimal feasible prefix, and completes to depth \(D\) lexicographically minimally (all moves obeying 2). Write \(\hat{c}^{\,k,d}:=\hat{C}(H(k,d))\). Define a selection \(R:\mathcal{C}_D\to\mathscr{P}\) by: for each class \([\hat{c}]\), choose the lexicographically smallest \((k,d)\) with \(\hat{c}=\hat{c}^{\,k,d}\) and set \(R(\hat{c}):=H(k,d)\). Then \[\bigcup_{\hat{c}\in\mathcal{C}_D} V\bigl(R(\hat{c})\bigr)=V(\mathscr{T}_{D}) \qquad\text{and}\qquad \bigcup_{\hat{c}\in\mathcal{C}_D} E\bigl(R(\hat{c})\bigr)=E(\mathscr{T}_{D}).\]

Proof. Vertices: For any \((k,d)\), \(H(k,d)\) visits \((k,d)\) by construction; the class \([\hat{c}^{\,k,d}]\) selects \(R(\hat{c}^{\,k,d})=H(k,d)\), so \((k,d)\in V(R(\hat{c}^{\,k,d}))\). As \((k,d)\) was arbitrary, all vertices are covered.

Edges: Fix \(e=((k,d-1),(k+s,d))\) with \(s\in\{-1,0,1\}\). The vertex \((k,d-1)\) is covered above, so \(R(\hat{c}^{\,k,d-1})=H(k,d-1)\) visits it. In \(H(k,d-1)\), the step from depth \(d-1\) to \(d\) is the lexicographically minimal feasible move out of \((k,d-1)\), realizing one of its incident edges. As \((k,d-1)\) varies over all vertices, each edge of \(\mathscr{T}_{D}\) occurs in some \(H(k,d-1)\) and hence in some \(R(\hat{c})\). This proves the second equality. ◻

References↩︎

[1]
P. Clifford and O. Zaboronski, Pricing options using trinomial trees, tech. report, University of Warwick, 2008, https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=6b973e5ecae75a61bf4add2efe33c4c6356210de. Accessed April 15, 2025.
[2]
T. F. Crack, Trinomial option pricing: A primer, tech. report, University of Otago, Department of Accountancy and Finance, Nov. 2024, https://doi.org/10.2139/ssrn.4955216, https://ssrn.com/abstract=4955216. SSRN Working Paper No. 4955216. Accessed April 15, 2025.
[3]
T.-S. Dai, Pricing Asian options on lattices, master’s thesis, National Taiwan University, Taipei, Taiwan, 2000, https://www.csie.ntu.edu.tw/~lyuu/theses/thesis_d88006.pdf. Accessed April 15, 2025.
[4]
D. E. Knuth, The Art of Computer Programming, Volume 4A: Combinatorial Algorithms, Part 1, Addison-Wesley Professional, 2011.
[5]
D. E. Knuth, The Art of Computer Programming, Volume 4B: Combinatorial Algorithms, Addison-Wesley Professional, 2022.
[6]
P. Eades and B. D. McKay, An algorithm for generating subsets of fixed size with a strong minimal change property, Information Processing Letters, 19 (1984), pp. 131–133.
[7]
G. Ehrlich, Loopless algorithms for generating permutations, combinations, and other combinatorial configurations, Journal of the ACM, 20 (1973), pp. 500–513, https://doi.org/10.1145/321765.321781.
[8]
S. M. Johnson, Generation of permutations by adjacent transposition, Mathematics of Computation, 17 (1963), pp. 282–285, https://doi.org/10.1090/S0025-5718-1963-0159764-2.
[9]
F. Ruskey and A. Williams, Generating combinations by prefix shifts, in Proceedings of the 11th Annual International Computing and Combinatorics Conference (COCOON 2005), vol. 3595 of Lecture Notes in Computer Science, Springer, 2005, pp. 570–576, https://doi.org/10.1007/11533719_58.
[10]
Y. Cheng, Generating combinations by three basic operations, Journal of Computer Science and Technology, 22 (2007), pp. 909–913, https://doi.org/10.1007/s11390-007-9094-7.
[11]
J. Ma and T. Zhu, Convergence rates of trinomial tree methods for option pricing under regime-switching models, Applied Mathematics Letters, 39 (2015), pp. 13–18, https://doi.org/10.1016/j.aml.2014.07.020.
[12]
J. Ahn and M. Song, Convergence of the trinomial tree method for pricing European/American options, Applied Mathematics and Computation, 189 (2007), pp. 575–582, https://doi.org/10.1016/j.amc.2006.11.132.
[13]
D. Michie, “Memo” Functions and Machine Learning, Nature, 218 (1968), pp. 19–22, https://doi.org/10.1038/218019a0.
[14]
I. Stojmenovic, Recursive algorithms in computer science courses: Fibonacci numbers and binomial coefficients, IEEE Transactions on Education, 43 (2000), pp. 273–276, https://doi.org/10.1109/13.865200.
[15]
E. T. Irons and W. Feurzeig, Comments on the implementation of recursive procedures and blocks in ALGOL 60, Communications of the ACM, 4 (1961), pp. 65–69, https://doi.org/10.1145/366062.366090.
[16]
D. R. Page, Generalized algorithm for restricted weak composition generation, Journal of Mathematical Modelling and Algorithms in Operations Research, 12 (2013), pp. 345–372, https://doi.org/10.1007/s10852-012-9194-4.
[17]
Y. Lebedev and A. Banerjee, Gaussian recombining split tree. arXiv preprint arXiv:2405.16333, 2024, https://doi.org/10.48550/arXiv.2405.16333, https://arxiv.org/abs/2405.16333. q-fin.CP.
[18]
P. Flajolet and R. Sedgewick, Analytic Combinatorics, Cambridge University Press, 2009.
[19]
T. M. Cover and J. A. Thomas, Elements of Information Theory, Wiley, 2 ed., 2006.
[20]
R. L. Graham, D. E. Knuth, and O. Patashnik, Concrete Mathematics, Addison–Wesley, 2 ed., 1994.
[21]
C. Banderier and P. Flajolet, Basic analytic combinatorics of directed lattice paths, Theoretical Computer Science, 281 (2002), pp. 37–80, https://doi.org/10.1016/S0304-3975(02)00007-5, https://lipn.univ-paris13.fr/~banderier/Papers/tcs_banderier_flajolet_2002.pdf.
[22]
R. Oste and J. Van der Jeugt, Motzkin paths, Motzkin polynomials and recurrence relations, Electronic Journal of Combinatorics, 22 (2015), p. P2.8, https://doi.org/10.37236/4781, https://www.combinatorics.org/ojs/index.php/eljc/article/view/v22i2p8/pdf.
[23]
T. Takaoka, Loopless generation of combinatorial objects, Journal of Computer Science and Technology, 22 (2007), pp. 714–727, https://doi.org/10.1007/s11390-007-9094-7.

  1. Department of Industrial Engineering, University of Illinois, Urbana-Champaign, IL ().↩︎

  2. Department of Industrial Engineering, University of Illinois, Urbana-Champaign, IL ().↩︎

  3. Department of Industrial Engineering; Department of Mathematics, University of Illinois, Urbana-Champaign, IL ().↩︎

  4. Submitted to the editors Oct. 3.↩︎

  5. Throughout, we write \(\hat{c}\) for the image of an arbitrary path under the mapping \(\hat{C}(\mathsf{p})\). This convention streamlines the exposition and avoids repeatedly carrying the full function notation in formulas and text.↩︎

  6. Additional Remarks:

    • (i) If \(D+k^{*}\) is odd, then \(\beta=1\) and the second binomial in each summand is suppressed.

    • (ii) If \(D+k^{*}\) is even, then \(\beta=2\) and one recovers the full Pascal contribution, matching the “even-regime’’ rows of 2.

    ↩︎