April 01, 2024

Functional programs typically interact with stateful libraries that hide state behind typed abstractions. One particularly important class of applications is data structure implementations that rely on such libraries to provide a level of efficiency and scalability that may otherwise be difficult to achieve. However, because the specifications of the methods provided by these libraries are necessarily general and rarely specialized to the needs of any specific client, any required application-level invariants must often be expressed in terms of additional constraints on the (often) opaque state maintained by the library.

In this paper, we consider the specification and verification of such *representation invariants* using *symbolic finite automata* (SFA). We show that SFAs can be used to succinctly and precisely capture fine-grained temporal and
data-dependent histories of interactions between functional clients and stateful libraries. To facilitate modular and compositional reasoning, we integrate SFAs into a refinement type system to qualify stateful computations resulting from such
interactions. The particular instantiation we consider, *Hoare Automata Types* (HATs), allows us to both specify and automatically type-check the representation invariants of a datatype, even when its implementation depends on stateful library
methods that operate over hidden state.

We also develop a new bidirectional type checking algorithm that implements an efficient subtyping inclusion check over HATs, enabling their translation into a form amenable to SMT-based automated verification. We present extensive experimental results on an implementation of this algorithm, demonstrating the feasibility of type-checking complex and sophisticated HAT-specified OCaml data structure implementations layered on top of stateful library APIs.

<ccs2012> <concept> <concept_id>10003752.10003790.10003794</concept_id> <concept_desc>Theory of computation Automated reasoning</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10011007.10011006.10011008.10011009</concept_id> <concept_desc>Software and its engineering Language types</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10003752.10003790.10003793</concept_id> <concept_desc>Theory of computation Modal and temporal logics</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10011007.10011074.10011099</concept_id> <concept_desc>Software and its engineering Software verification and validation</concept_desc> <concept_significance>500</concept_significance> </concept> </ccs2012>

Functional programs often interact with stateful libraries that hide internal state behind their APIs. Such libraries are a vital component of a programming language’s ecosystem, providing well-tested implementations of databases, key-value stores, logging and system services, and other stateful abstractions. Programmers commonly build higher-level data structures on top of the abstractions these libraries provide [1]. A set abstract data type (ADT) equipped with intersection and cardinality operations, for example, might use a key-value library to maintain a persistent collection of its elements by associating those elements (represented as values) with unique ids (their associated keys). Since the internal state of the underlying stateful library is opaque to its clients, the implementations of these higher-level modules will necessarily be expressed in terms of the operations provided by the library: the methods of the set ADT might use methods like \(\Code{put}\) and \(\Code{get}\) to interact with the backing key-value store, for example. To maximize reusability, these low-level libraries are typically quite permissive in how clients may use them, resulting in weak specifications that only describe simple algebraic or equational constraints. For example, the specification of the key-value store might state that two operations over distinct keys commute, or that a \(\Code{get}\) operation on a key always returns the last value stored at that key.

In contrast, the higher-level abstractions provided by clients built on top of these libraries are equipped with stronger guarantees: the operations of our set ADT, for example, need to respect the semantics of mathematical sets, so that
\(\forall A\; B : \Code{Set.t}.\; |A \cup B| + |A \cap B| = |A| + |B|\). Verifying such semantically rich properties typically requires a strong guarantee that the internal state of the set library is consistent with the
abstract states of the datatype, i.e., the ADT implementation should be equipped with a *representation invariant* [2], [3]. Ensuring that distinct keys are always associated with distinct values allows us to prove the above relationship between the intersection and cardinality operations of the set ADT, for example. Establishing such an
invariant is complicated by the fact that our set ADT is built on top of another library with an opaque internal state, and whose weak specifications do not directly address the notion of uniqueness central to the set ADT. This paper presents an automated
type-based verification procedure that addresses these complications.

Our proposed solution augments a refinement type system for a call-by-value functional core language that supports calls to stateful libraries, with *Hoare Automata Types* (HATs). HATs are a new type abstraction that qualifies basic types with
pre- and post-conditions expressed as *symbolic finite-state automata* [4], [5], a generalization of
finite-state automata equipped with an unbounded alphabet, and whose transitions are augmented with predicates over the elements of this alphabet. HATs serve as a compact representation of the trace of stateful API calls and returns made prior to a
function invocation and those performed during its execution. As a result, HATs serve as an expressive basis for a fine-grained effect system within a general refinement type framework, enabling compositional reasoning about invocations of stateful APIs in
a form amenable to automated verification of a client module’s representation invariant. For example, the representation invariant from our set ADT is captured by the following symbolic automaton, written using a variant of linear temporal logic on finite
sequences [6]. The base predicates of this formula describe *events*, or invocations of the methods of a key-value store whose values are drawn from a \(\Code{setElem}\) type: \[\begin{align} &\riA{Set}(\Code{el}) \doteq \globalA (\evop{put}{key\,val}{\I{val} = \Code{el}} \impl \nextA \neg \finalA \evop{put}{key\,val}{\I{val} = \Code{el}})
\end{align}\] This representation invariant (\(\riA{Set}\)) is expressed using standard temporal logic modalities (e.g., \(\globalA\) (always), \(\nextA\) (next), and \(\finalA\) (eventually)) to express the history of valid interactions between the set implementation and the underlying key-value store that ensure the expected element
uniqueness property holds. Informally, the invariant establishes that, at every step of the execution (\(\globalA(...)\)), once a value \(\Code{el}\) has been inserted into the set (\(\evop{put}{key\,val}{\I{val} = \Code{el}}\)), it will never be reinserted (\(\nextA \neg \finalA \evop{put}{key\,val}{\I{val} = \Code{el}}\)). An implementation that preserves this invariant
guarantees that every value is always associated with at most one key.

To verify it, we can check the \(\Code{insert}\) method of the set ADT against the following type \[\begin{align} &\tau_{\Code{insert}} \doteq \gvn{el}{setElem} \Code{x}{:}\nuot{setElem}{\top} \sarr \htriple{\riA{Set}(\Code{el})}{\nuot{unit}{\top}}{\riA{Set}(\Code{el})} \end{align}\] The output type of \(\tau_{\Code{insert}}\) (\(\htriple{\riA{Set}(\Code{el})}{\nuot{unit}{\top}}{\riA{Set}(\Code{el})}\)) is a Hoare Automata Type (HAT) over the representation invariant \(\riA{Set}\). This type captures the behavior required of any correct implementation of \(\Code{insert}\): assuming the invariant holds for any value \(\Code{el}\) (encoded here as a ghost variable scoped using \(\garr\)) prior to the insertion of a new element \(\Code{x}\), it must also hold after the insertion as well. The precondition of the HAT captures the set of interactions that lead to a valid underlying representation state (namely, one in which every value in the store is unique), and the postcondition of the HAT specifies that this condition remains valid even after executing \(\Code{insert}\).
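The behavior that \(\riA{Set}\) demands can also be mirrored by a small executable check over concrete traces. The following Python sketch is our own model, not the paper's formalism: events are encoded as `(operator, arguments, result)` tuples (an assumed encoding), and the invariant \(\globalA (\evop{put}{key\,val}{\I{val} = \Code{el}} \impl \nextA \neg \finalA \evop{put}{key\,val}{\I{val} = \Code{el}})\) amounts to "no value equal to `el` is ever `put` twice":

```python
def ri_set(trace, el):
    """True iff the value `el` is put at most once in `trace`.

    This is exactly the finite-trace reading of RI_Set: once a put
    event carries `el` as its value, no later put event may do so.
    """
    puts = [ev for ev in trace if ev[0] == "put" and ev[1][1] == el]
    return len(puts) <= 1

# A trace that inserts the value 42 under two distinct keys violates RI_Set,
# since the set would then associate one element with two keys:
ok  = [("put", ("k1", 42), None)]
bad = [("put", ("k1", 42), None), ("put", ("k2", 42), None)]
assert ri_set(ok, 42)
assert not ri_set(bad, 42)
```

A correct \(\Code{insert}\) must take every trace satisfying this check to another trace satisfying it, which is precisely what the pre- and postcondition automata of \(\tau_{\Code{insert}}\) express.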

Our use of HATs differs in important ways from other type-based verification approaches for functional programs that allow computations to perform fine-grained effects, such as F* [7] or Ynot [8]. These systems use the power of rich dependent types to specify fine-grained changes to the heap induced by a
stateful operation. The Dijkstra monad [9], [10] used by F*, for example, extends ideas first proposed by
Nanevski *et al.* [11] in their development of Hoare Type Theory to embed a weakest precondition predicate transformer semantics into a type system that
can be used to generate verification conditions consistent with a provided post-condition on the program heap.

Our approach also bears some similarity to recent (refinement) type and effect systems for verifying temporal properties of event traces *produced* during a computation [12], [13]. While the idea of using symbolic traces of previous observations as a handle to reason about the hidden state of a library is
similar to these other efforts, incorporating return values of operations as part of such traces is a significant point of difference. This capability, critical for the validation of useful representation invariants, allows the precondition encoded in a
HAT to describe a fine-grained *context* of events in which a program can run, expressed in terms of both library method arguments *and* return values. These important differences with prior work position HATs as a novel middle-ground in the
design space between explicit state-based verification techniques and trace-based effect systems. Our formulation also leads to a number of significant technical differences with other refinement type systems [14] and their associated typing algorithms, since we must now contend with type-level reasoning over symbolic automata qualifiers.

In summary, this paper makes the following contributions:

We formalize a new trace-based specification framework for expressing representation invariants of functional clients of stateful libraries that manage hidden state.

We show how this framework can be embedded within a compositional and fine-grained effect-tracking refinement type system, by encoding specifications as symbolic automata that are instantiated as HATs in the type system.

We develop a bidirectional type-checking algorithm that translates the declarative type system into efficient subtype inclusion checks amenable to SMT-based automated verification.

Using a tool that implements these ideas, we present a detailed evaluation study over a diverse set of non-trivial OCaml datatype implementations that interact with stateful libraries. To the best of our knowledge, it is the first system capable of automated verification of sophisticated representation invariants for realistic OCaml programs.

The remainder of this paper is organized as follows. In the next section, we provide further motivation for the problem by developing a running example, which is used in the rest of the paper. Section 3 introduces a core functional
language equipped with operators that model stateful library methods. Section 4 presents a refinement type system that augments basic type qualifiers to also include HATs; a brief background on symbolic automata is also given. An
efficient bidirectional typing algorithm is then presented in Section 5. Implementation details, as well as experimental results on its effectiveness in automatically verifying the representation invariants of a number of realistic
data structure client programs that interact with stateful libraries, are provided in Section 6. The paper ends with a discussion of related work and conclusions.

To further motivate our approach, consider the (highly simplified) Unix-like file-system directory abstraction shown in Figure 1. This module is built on top of other libraries that maintain byte arrays (\(\Code{Bytes.t}\)) which hold a file or a
directory’s contents and metadata, and which manipulate *paths* (\(\Code{Path.t}\)), strings that represent fully-elaborated file and directory names. The implementation also uses an underlying key-value store library to provide a persistent and scalable storage
abstraction, where keys represent file paths and values are byte arrays. The interface to this store is defined by three *stateful* library methods: \(\S{put}\) persistently stores a key-value pair \(\Pair{k}{v}\), adding the
pair to the store if \(k\) does not already exist, and overwriting its old value with \(v\) if it does; \(\S{exists}\) returns true if the given key has previously been stored and false otherwise; and \(\S{get}\) returns
the value associated with its key argument, and fails (i.e., raises an exception) if the key does not exist in the store.

Figure 1 shows three of the methods provided by this module: \(\Code{init}\) initializes the file system by storing a root path (\(``/"\)); \(\Code{add}\) allows users to add regular files or directories to the filesystem; and \(\Code{delete}\) logically removes files or directories from the
filesystem. The latter two operations return a Boolean value to indicate whether they were successful. The \(\Code{add}\) operation succeeds (line \(15\)) only if (a) the given path does not
already exist in the file system (line \(6\)) and (b) a path representing the parent directory for the file does exist (line \(9\)). Besides adding the file (line \(13\)), \(\Code{add}\) also updates the contents of the parent directory to make it aware that a new child has been added (line \(14\)). Similar to \(\Code{add}\), \(\Code{delete}\) requires
that the supplied path exists and is not the root (line \(19\)). If the path corresponds to a directory, its children are marked deleted (line \(22\)) using the procedure \(\Code{deleteChildren}\) (not shown). Additionally, the directory’s metadata, and that of its parent, are updated to record the deletion (lines \(25\) and \(26\)). Implicit in \(\Code{delete}\)’s implementation is the assumption that the parent path of the file to be deleted exists in the store: observe that there is no check to validate the existence of this path before the \(\S{get}\) of the parent’s
contents is performed on line \(24\). This assumption can be framed as a representation invariant that must hold prior to and after every call and return of \(\Code{add}\) and \(\Code{delete}\). Intuitively, this invariant captures the following
property:

*Any path that is stored as a key in the key-value store (other than the root path) must also have its parent stored as a non-deleted directory in the store.*

In a language that guarantees proper data abstraction, library developers can safely assume that any guarantees about an API established using a representation invariant are independent of a particular client’s usage. However, as we noted earlier, numerous challenges arise when validating a representation invariant in our setting, because we can only characterize the state of the underlying library in terms of its admissible sequences of interactions with the client. In our example, this characterization must be able to capture the structure of a valid filesystem, in terms of sequences of \(\S{put}\), \(\S{exists}\), and \(\S{get}\) calls and returns, that are sufficient to ensure that the invariant is preserved. Verifying that the invariant holds requires us to prove that, except for the root, the path associated with a file or directory can only be recorded in the store when its parent path exists, i.e., its parent has previously been stored and has not yet been deleted. Establishing that this property holds involves reasoning over both data dependencies among filesystem calls and temporal dependencies among store operations.

We address these challenges by encoding desired sets of stateful interactions between the client and library as *symbolic finite automata* (SFA). Besides constants and program variables, the alphabet of an automaton that captures these
interactions includes a set of stateful operators \(\effop\) (e.g., \(\S{put}\) and \(\S{get}\)). The language recognized by an automaton specifies the sequences of permitted library method invocations. Each such invocation (called an
*event*) \(\pevapp{\effop}{\overline{v_i}}{v}\) records the operator invoked (\(\effop\)), a list of its arguments ( \(\overline{v_i}\)), and a single
result value \(v\). It is critical that we record *both* inputs and outputs of an operation, in order to enable our type system to distinguish among different hidden library states. Consider, for example, a stateful
library initialization operator \(\S{initialize}\) \(\Code{: unit\sarr bool}\). The hidden state after observing “\(\evapp{initialize}{()}{true}\)”,
representing a successful initialization action, is likely to be different from the hidden state that exists after observing the event “\(\evapp{initialize}{()}{false}\)”, which indicates an initialization error. The
language accepted by an SFA is a potentially infinite set of finite sequences of events, or *traces*.

**Example 1** (Traces). *In contrast to the correct implementation of \(\Code{add}\) shown in Figure 1, consider the following incorrect version:*

`let add_bad (path: Path.t) (bytes: Bytes.t) = put path bytes`

*This implementation simply records a path without considering whether its parent exists. We assume the file system initially contains only the root path, \(``/"\), which is \(\S{put}\) when the file system is created by the
method \(\Code{init}\). The traces \(\alpha_1\) and \(\alpha_2\), shown below, represent executions of these two implementations that reflect their
differences: \[\begin{align} {\small\text{under context trace}}\; \ \alpha_0 \doteq \ &[\evapp{put}{``/"\, bytesDir}{()}]\\ \Code{add_{bad}\,``/a/b.txt"\, bytesFile}\ \ &{\small\text{yields}}\ \ \alpha_1 \doteq
[\evapp{put}{\Code{``/"}\,bytesDir}{()};\, \evapp{put}{``/a/b.txt"\, bytesFile}{()}] \\ \Code{add\,``/a/b.txt"\,bytesFile}\ \ &{\small \text{yields}}\ \ \alpha_2 \doteq [\evapp{put}{\Code{``/"}\,bytesDir}{()};\,
\evapp{exists}{``/a/b.txt"}{false};\, \evapp{exists}{``/a"}{false}] % \\\alpha_3 \doteq&\ % [\evapp{put}{``/"\,bytesDir}{()};\, \evapp{exists}{``/b.txt"}{false};\, \evapp{exists}{``/"}{true};\,
\evapp{get}{``/"}{bytesDir};\, \evapp{put}{``/b.txt"\,bytesFile}{()}]
\end{align}\] where \(\Code{bytesDir}\) and \(\Code{bytesFile}\) represent the contents of a directory and a regular file, resp. The trace \(\alpha_1\) violates the invariant, since it registers the path \(``/a/b.txt"\) in the file system even though the parent path (\(``/a"\)) does not exist. Thus, performing a \(\Code{delete}\) of this path after executing the actions in this trace will cause a runtime error when
it attempts to perform a get operation on the parent path \(``/a"\). On the other hand, executing this operation after executing the actions in trace \(\alpha_2\) terminates normally and preserves the invariant.*
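The difference between these traces can be made concrete with a small executable check. The sketch below is our own model (not the paper's SFA semantics; the event encoding and `bytesDir`/`bytesFile` markers are assumptions): it scans a trace and rejects any \(\S{put}\) of a non-root path whose parent has not already been stored as a directory:

```python
def parent(p):
    """Parent of a fully-elaborated path; the root is its own parent."""
    return p.rsplit("/", 1)[0] or "/"

def preserves_fs_property(trace):
    """True iff every non-root put has a previously-stored parent directory."""
    dirs = set()  # paths currently stored as directories
    for op, args, _res in trace:
        if op != "put":
            continue  # exists/get queries never change the store
        path, contents = args
        if path != "/" and parent(path) not in dirs:
            return False
        if contents == "bytesDir":
            dirs.add(path)
    return True

# alpha_1 registers "/a/b.txt" although its parent "/a" was never created:
alpha_1 = [("put", ("/", "bytesDir"), ()),
           ("put", ("/a/b.txt", "bytesFile"), ())]
# alpha_2 only queries the store, so the property still holds:
alpha_2 = [("put", ("/", "bytesDir"), ()),
           ("exists", ("/a/b.txt",), False),
           ("exists", ("/a",), False)]
assert not preserves_fs_property(alpha_1)
assert preserves_fs_property(alpha_2)
```

This trace-level view is exactly what the SFA encoding in the next example captures declaratively, without an explicit scan.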

**Example 2** (Symbolic Finite Automata). *We express symbolic automata as formulae in a symbolic version of linear temporal logic on finite sequences [6]. The representation invariant can be precisely expressed in this language as the following formula (denoted \(\riA{FS}(\Code{p})\)) parameterized over a path \(\Code{p}\): \[\begin{align} &\riA{FS}(\Code{p}) \doteq \globalA \evparenth{ \Code{isRoot(p)} } \lorA (\predA{isFile}{}(\Code{p}) \lor \predA{isDir}{}(\Code{p}) \impl \predA{isDir}{}(\Code{parent(p)}))
\end{align}\] where \[\begin{align} &\predA{isDir}{}(\Code{p}) \doteq \finalA(\evop{put}{key\,val}{\I{key} = \Code{p} \land \Code{isDir}(\I{val})} \land \nextA\globalA \neg \evop{put}{key\,val}{ \I{key} = \Code{p}
\wedge (\Code{isDel}(\I{val}) \lor \Code{isFile}(\I{val})) }) \\&\predA{isFile}{}(\Code{p}) \doteq \finalA(\evop{put}{key\,val}{\I{key} = \Code{p} \land \Code{isFile}(\I{val})} \land \nextA\globalA \neg \evop{put}{key\,val}{ \I{key} = \Code{p} \wedge
(\Code{isDel}(\I{val}) \lor \Code{isDir}(\I{val})) })
\end{align}\] Here, \(\Code{isRoot}\), \(\Code{isDir}\), \(\Code{isFile}\), and \(\Code{parent}\) are pure functions
on paths and bytes. The invariant comprises two disjuncts involving \(\small{\Code{p}}\):*

*\(\globalA \evparenth{ \Code{isRoot(p)} }\) asserts that \(\Code{p}\) is the filesystem root and is always a valid path, regardless of any other actions that have been performed on the underlying key-value store.*

*\(\predA{isFile}{}(\Code{p}) \lor \predA{isDir}{}(\Code{p}) \impl \predA{isDir}{}(\Code{parent(p)})\) asserts that if \(\Code{p}\) corresponds to either a file or directory in the file system, then its parent path must also correspond to a directory in the file system. The predicate \(\predA{isDir}{}(\Code{p})\) captures contexts in which \(\Code{p}\) has been registered as a directory and then not subsequently deleted or replaced by a file (i.e., \(\nextA \globalA \neg \ltortoise \S{put}\,\I{key\,val} \eveq \vnu \,|\, \I{key} = \Code{p} \land (\Code{isDel}(\I{val})\, \lor\, \Code{isFile}(\I{val})) \rtortoise\)). The predicate \(\predA{isFile}{}(\Code{p})\) is defined similarly, except that it requires that an existing file not be deleted or modified to become a directory. Observe that atomic predicates (e.g., \(\ltortoise \S{put}\,\I{key\,val} \eveq \vnu \,|\, \I{key} = \Code{p} \land \Code{isFile}(\I{val}) \rtortoise\)) not only capture when an event has been induced by a library operator (e.g., \(\zput{}\)), but also *qualify* the arguments to that operator and the result it produced (e.g., the argument \(\I{key}\) must be equal to the path \(\Code{p}\)).^{1}*

*This specification of the representation invariant \(\riA{FS}(\Code{p})\) thus formalizes our informal characterization of a correct file system, . The two subformulas capture the property that any path recorded as a
key in the key-value store, other than the root, must have a parent directory. Notably, this specification is defined purely by constraining the sequence of operations allowed on the store and does not require any knowledge of the store’s underlying
representation.*
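The temporal structure of \(\predA{isDir}{}\) and \(\predA{isFile}{}\) also has a direct finite-trace reading, sketched below in Python under our own assumed event encoding (this models the predicates, not the paper's SFA machinery; directory, file, and deleted contents are abbreviated as `"dir"`, `"file"`, `"deleted"`). A path is a directory precisely when some \(\S{put}\) registered it as one and no later \(\S{put}\) at that path changed it to a file or marked it deleted, which is equivalent to asking that the *most recent* \(\S{put}\) at the path stored a directory:

```python
def _last_put(trace, p):
    """Contents of the most recent put at path p, or None if never put."""
    for op, args, _res in reversed(trace):
        if op == "put" and args[0] == p:
            return args[1]
    return None

def is_dir(trace, p):
    # F(put p dir /\ XG ~put p (del \/ file)) over a finite trace
    return _last_put(trace, p) == "dir"

def is_file(trace, p):
    return _last_put(trace, p) == "file"

t = [("put", ("/a", "dir"), ()), ("put", ("/a", "deleted"), ())]
assert not is_dir(t, "/a")   # the deletion overwrote the directory
assert is_dir(t[:1], "/a")   # before the deletion, "/a" was a directory
```

The equivalence holds because a satisfying \(\S{put}\) event in the formula must be followed only by further directory \(\S{put}\)s at `p`, the last of which witnesses the same fact.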

While SFAs provide a convenient language in which to encode representation invariants, it is not immediately apparent how they can help with verification since, by themselves, they do not impose any safety constraints on expressions. To connect SFAs to
computations, we embed them as refinements in a refinement type (and effect) system. In a standard refinement type system, the refinement type \(\rawnuot{b}{\phi}\) of an expression \(e\)
uses a qualifier \(\phi\) to refine the value of \(e\). Similarly, a Hoare Automata Type (HAT), \(\htriple{A}{\rawnuot{b}{\phi}}{B}\), additionally refines a
pure refinement type \(\rawnuot{b}{\phi}\) with two symbolic automata: a precondition automaton \(A\) defining an effect *context* under which \(e\)
is allowed to evaluate, and a postcondition automaton \(B\) that augments this context with the effects induced by \(e\) as it executes. A HAT thus allows us to track the sequence of stateful
interactions between higher-level datatypes and the underlying stateful library, as well as to record the consequences of this interaction in terms of the return values of stateful calls.

Our type system leverages this information to verify the preservation of stated representation invariants in terms of these interactions. For example, \(\Code{add}\) and \(\Code{delete}\) can be ascribed the following HAT-enriched types: \[\begin{align}
\Code{add} :&\ \gvn{p}{Path.t} \normalv{path}{\nuot{Path.t}{\top}} \normalv{bytes}{\nuot{Bytes.t}{\top}}\htriple{\riA{FS}(\Code{p})}{\nuot{bool}{\top}}{\riA{FS}(\Code{p})} \\ \Code{delete} :&\ \gvn{p}{Path.t} \normalv{path}{\nuot{Path.t}{\top}}
\htriple{\riA{FS}(\Code{p})}{\nuot{bool}{\top}}{\riA{FS}(\Code{p})} \end{align}\] Here, the notation “\(\gvn{p}{Path.t}\)” indicates that \(\Code{p}\) is a ghost variable representing
an arbitrary path; thus, the representation invariant must hold over *any* instantiation of \(\Code{p}\). While their input arguments (\(\nuot{Path.t}{\top}\) and \(\nuot{Bytes.t}{\top}\)) are unconstrained, the return types of these functions are expressed as HATs in which both the precondition and postcondition automata refer to the representation invariant for the file system, \(\riA{FS}(\Code{p})\). Informally, this type reads “if we invoke the function in a context that is consistent with \(\riA{FS}\), then the state after the function returns should also be
consistent with \(\riA{FS}\)”. The traces admitted by an SFA are used to represent these contexts and states.

More concretely, observe that the trace \(\alpha_2\) from Example 1 is derived by concatenating \(\alpha_0\)
with the library calls generated during the (symbolic) execution of \(\Code{add}\): \[\begin{align} {\alpha_{\Code{new}} \doteq [\evapp{exists}{``/a/b.txt"}{false};\, \evapp{exists}{``/a"}{false}]}
\end{align}\] (i.e., \(\alpha_2 = \alpha_0 \listconcat \alpha_{\Code{new}}\)). Note that both \(\alpha_0\) and \(\alpha_2\) satisfy the representation
invariant \(\riA{FS}\). To admit this trace, our type system automatically infers a new automaton \(A\) that accepts the sequence of events in \(\alpha_{\Code{new}}\) by type-checking \(\Code{add}\). Verifying that the representation invariant holds now reduces to an inclusion check between two symbolic automata, requiring that every trace accepted by the *concatenated* automaton
\(\riA{FS};A\) is also accepted by the representation invariant \(\riA{FS}\).

To formalize our approach, we first introduce a core calculus for effectful programs.^{2}

Our core calculus is a call-by-value lambda calculus with built-in inductive datatypes, pattern matching, a set of pure and effectful operators, and recursive functions. Its syntax is given in Figure 2. For simplicity, programs are expressed in monadic normal-form (MNF) [15], a variant of A-Normal Form (ANF) [16] that permits nested let-bindings. Terms are syntactically divided into two classes: values \(v\) and expressions \(e\), where the latter include both pure terms and those that have computational effects. As in the simply-typed lambda calculus, function parameters have explicit type annotations. The language is parameterized over two sets of primitive operators: pure operators (\(\primop\)) include basic arithmetic and logical operators, while examples of effectful operators (\(\effop{}\)) include state-manipulating operators (e.g., \(\S{put}\)/\(\S{get}\)/\(\S{exists}\) for interacting with a key-value store, or the operators of a stateful set library). We express sequencing of effectful computations \(e_1; e_2\) in terms of \(\Code{let}\)-bindings: \(e_1;e_2 \doteq \zlet{x}{e_1}{e_2}\), where \(x\) does not occur free in \(e_2\). The application \(e_1\,e_2\) is syntactic sugar for \(\zlet{x}{e_1}{\zlet{y}{e_2}{\zlet{z}{x\,y}{z}}}\).

Our programming model does not have access to the underlying implementation of effectful operators, so the semantics of the language constrains the behavior of impure operations in terms of the outputs they may produce. To do so, we structure the semantics in terms
of *traces* that record the history of previous effectful operations and their results. The syntax of traces adopts standard list operations (i.e., cons \({::}\) and concatenation \(\listconcat{}\)), as shown in Figure 3. *Traces* are lists of *effect events* \(\pevapp{\effop}{\overline{v}}{v}\) that record the application of
an effectful operator \(\effop\) to its arguments \(\overline{v}\) as well as the resulting value \(v\). The traces \(\alpha_2\) and \(\alpha_1\) from Example 1 record the events generated by correct and incorrect implementations of
the \(\Code{add}\) function of the filesystem ADT, for example.

The operational semantics of the language is defined via a small-step reduction relation, \(\steptr{\alpha}{e}{~\alpha'}{e'}\). This relation is read as “under an effect context (i.e., trace) \(\alpha\), the term \(e\) reduces to \(e'\) in one step and performs the effect \(\alpha'\)”, where \(\alpha'\) is either empty (\(\emptr{}\)) or holds a single event. The semantics is parameterized over two auxiliary relations, \(e \Downarrow v\) and \(\alpha \vDash e \Downarrow v\), used to evaluate pure and impure operations, respectively. Figure 3 shows how the corresponding reduction rules use these relations.^{3} The reduction rule for pure operators (STPureOp) holds in an arbitrary effect context (including the empty one), and produces no additional events. The rule
for effectful operators (STEffOp), in contrast, can depend on the current context \(\alpha\), and records the result value of the effectful operator in the event it emits. The multi-step
reduction relation \(\msteptr{\alpha}{e}{\alpha'}{e'}\) defines the reflexive, transitive closure of the single-step relation; its trace \(\alpha\) records all events generated
during evaluation.

**Example 3**. *We can define the semantics for the \(\S{put}\), \(\S{exists}\), and \(\S{get}\) operators found in a key-value store library, and used in our running example, via the following rules:*

\[\begin{align}
&[\textsc{Put}]\quad \alpha \vDash \S{put}\ k\ v \Downarrow ()\\
&[\textsc{ExistsF}]\quad \forall v.\ \evapp{put}{k\,v}{()} \notin \alpha \implies \alpha \vDash \S{exists}\ k \Downarrow \Code{false}\\
&[\textsc{ExistsT}]\quad \exists v.\ \evapp{put}{k\,v}{()} \in \alpha \implies \alpha \vDash \S{exists}\ k \Downarrow \Code{true}\\
&[\textsc{Get}]\quad \alpha = \alpha'' \listconcat [\evapp{put}{k\,v}{()}] \listconcat \alpha' \ \wedge\ \forall v'.\ \evapp{put}{k\,v'}{()} \notin \alpha' \implies \alpha \vDash \S{get}\ k \Downarrow v
\end{align}\]

*The rule Put asserts that \(\S{put}\) operations always succeed. For a particular key, the ExistsF and ExistsT rules stipulate that \(\S{exists}\) returns \(\Code{false}\) or \(\Code{true}\), based on the presence of a corresponding \(\S{put}\) event in the effect context. Finally, the rule Get specifies that the result of a \(\S{get}\) event
is the *last* value written by a \(\S{put}\) event to that key. Note that a program will get stuck if it attempts to \(\S{get}\) a key in a trace without an appropriate \(\S{put}\) event, since there is no rule covering this situation.*

*All the traces appearing in Section 2 can be derived from these rules. Under the effect context \(\alpha_{0}\), for example, the expression \(\Code{add\
``/a/b.txt"\ bytesFile}\) induces the trace \(\alpha_2\) as follows: the function uses the \(\S{exists}\) operator on the key \(\Code{``/a/b.txt"}\) (line \(6\)) to check if this path is in the file system. Since the context trace \(\alpha_0\) lacks a \(\S{put}\) event with the key \(\Code{``/a/b.txt"}\), we have \[\begin{align}
\alpha_0 \vDash \S{exists}\Code{\ ``/a/b.txt"} \Downarrow \Code{false} \quad (\mathrm{\small ExistsF})
\end{align}\] The trace used to evaluate subsequent expressions records this event: \(\alpha_{0} \, \listconcat{} \, [\evapp{exists}{``/a/b.txt"}{false}]\). The remaining evaluation of \(\Code{add\ ``/a/b.txt"\ bytesFile}\) is similar, yielding the following execution: \[\begin{align} \msteptr{[\evapp{put}{``/"\,bytesDir}{()}]}{\Code{add\ ``/a/b.txt"\
bytesFile}}{[\evapp{exists}{``/a/b.txt"}{false};\, \evapp{exists}{``/a"}{false}]}{\Code{false}}
\end{align}\]*
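These rules can be run directly as a small interpreter for effectful operators. The Python sketch below is our own model of Put/ExistsF/ExistsT/Get (the event encoding is an assumption; this is not the paper's formal semantics): each operator is evaluated against the context trace of earlier events, and `get` on an unwritten key raises, modeling a stuck program:

```python
def eval_op(trace, op, *args):
    """Evaluate an effectful operator against a context trace of events."""
    if op == "put":                       # Put: always succeeds
        return ()
    if op == "exists":                    # ExistsT / ExistsF: look for a put
        (k,) = args
        return any(e[0] == "put" and e[1][0] == k for e in trace)
    if op == "get":                       # Get: last value put at this key
        (k,) = args
        for e in reversed(trace):
            if e[0] == "put" and e[1][0] == k:
                return e[1][1]
        raise RuntimeError(f"stuck: no put event for key {k!r}")
    raise ValueError(f"unknown operator {op!r}")

alpha_0 = [("put", ("/", "bytesDir"), ())]
assert eval_op(alpha_0, "exists", "/a/b.txt") is False   # ExistsF
assert eval_op(alpha_0, "exists", "/") is True           # ExistsT
assert eval_op(alpha_0, "get", "/") == "bytesDir"        # Get
```

Scanning the trace in reverse for the first matching `put` implements Get's "last value written" condition without tracking any explicit store state, mirroring how the semantics characterizes hidden state purely through traces.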

The syntax of types is shown in Figure 4. Our type system uses *pure* refinement types to describe pure computations and *Hoare Automata Types* (HATs) to describe effectful computations. Our pure refinement
types are similar to those of other refinement type systems [14], and allow base types to be further constrained by a logical formula or
*qualifier*. Qualifiers are universally quantified formulae over linear arithmetic operations (\(\primop\)) as well as uninterpreted functions, or *method predicates* (\(\mpred\)), e.g., \(\Code{isRoot}\). Verification conditions generated by our type-checking algorithm can nonetheless be encoded as effectively propositional (EPR) sentences [17], which can be efficiently handled by an off-the-shelf theorem prover such as Z3 [18]. We also allow function types to be prefixed with a set of *ghost variables* with base types (\(\overline{x{:}b}\garr \tau\)). Ghost variables are purely logical – they can only
appear in other qualifiers and are implicitly instantiated when the function is applied.

Unique to our system are the HATs ascribed to stateful computations, which constrain the traces they may produce. HATs use *Symbolic Finite Automata* (SFAs) [5], [19], [20] to qualify traces, similar to how standard refinement types use formulae to qualify the types
of pure terms. We adopt the symbolic version of *linear temporal logic on finite traces* [6] as the syntax of SFAs.^{4}

As shown in 4, HATs support two kinds of atomic propositions, or *symbolic effect events*: \(\evop{\effop}{\overline{x}}{\phi}\) and \(\evparenth{\phi}\). A symbolic effect event \(\evop{\effop}{\overline{x}}{\phi}\) describes an application of an effectful operator \(\effop\) to a set of
argument variables \(\overline{x}\) that produces a result variable \(\vnu\). The formula \(\phi\) constrains the possible instantiation of \(\overline{x}\) and \(\vnu\) in a concrete event. The other symbolic effect event, \(\evparenth{\phi}\), is used to constrain the relationship between pure values
(e.g., ghost variables and function arguments). In addition to the temporal next (\(\nextA A\)) and until operators (\(A \untilA A\)), the syntax of HATs includes negation (\(\neg A\)), intersection (\(A \landA A\)), union (\(A \lorA A\)), and concatenation (\(A;A\)). 4 defines notations for other useful logical and temporal operators: implication (\(A \impl B \doteq \neg A \lor B\)), the eventually operator \(\finalA\),
the globally operator \(\globalA\), and importantly, the last modality \(\lastA\), which describes a singleton trace, thus prohibiting a trace from including any other effects [6]. 4 also provides notation for specifying the value of an argument (\(\evop{\effop}{\df{x}{v_x}~\overline{x}}{\phi}\)). As is standard, we abbreviate the qualified type \(\rawnuot{b}{\top}\) as \(b\).
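
To make the grammar concrete, the core and derived SFA connectives can be sketched as an OCaml datatype. The constructor names and the string-based qualifiers below are illustrative stand-ins, not the paper's implementation; the derived operators follow the definitions given above.

```ocaml
(* A hypothetical AST for the symbolic LTL_f formulas used as SFAs; the
   constructor names and string-based qualifiers are illustrative only. *)
type qualifier = string  (* stand-in for a logical formula φ *)

type sfa =
  | Ev of string * string list * qualifier  (* symbolic effect event ⟨op x̄ | φ⟩ *)
  | Pure of qualifier                       (* pure constraint ⟨φ⟩ *)
  | Neg of sfa                              (* ¬A *)
  | And of sfa * sfa                        (* A ∧ A *)
  | Or of sfa * sfa                         (* A ∨ A *)
  | Next of sfa                             (* ◯A *)
  | Until of sfa * sfa                      (* A U A *)
  | Concat of sfa * sfa                     (* A ; A *)

(* Derived operators, following the definitions in the text. *)
let top = Pure "true"
let impl a b = Or (Neg a, b)          (* A ⇒ B ≐ ¬A ∨ B *)
let final a = Until (top, a)          (* eventually: ◇A ≐ ⊤ U A *)
let globally a = Neg (final (Neg a))  (* globally: □A ≐ ¬◇¬A *)
let last = Neg (Next top)             (* a common LTL_f encoding of last *)
```

Representing the derived operators as plain functions (rather than constructors) keeps the core grammar minimal, mirroring how the paper treats them as notation.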

::: {#ex:SFA+put+get .example} **Example 4**. *This syntax can capture rich properties over effect contexts. For example, the following formulae concisely specify traces under which a key holds a certain value (\(\predA{stored}{}\)) and when a particular key exists in the current store (\(\predA{exists}{}\)):*

*&() ( ) ()*

:::

Formally, a Hoare Automata Type \(\htriple{A}{t}{B}\) qualifies a refinement type \(t\) with two SFAs: a *precondition* SFA \(A\) that captures the
context traces in which a term can be executed and a *postcondition* SFA \(B\) that describes the effect trace after the execution of the term. Our type system also includes intersection types over HATs (\(\tau \interty \tau\)) in order to precisely specify the behaviors of a program under different effect contexts.

::: {#ex:Typing+Delta .example} **Example 5** (Built-in Effectful Operators). *Our type system is parameterized over a (global) typing context \(\Delta\) containing the types of built-in operators.
Intuitively, the signatures of effectful operators in \(\Delta\) capture the semantics the library developer ascribes to their API. Using the SFAs from Example 4, for example, the signatures of \(\Code{put}\), \(\Code{get}\), and \(\Code{exists}\) are:*

*(§put) =&
(§get) =&
(§exists) =&
& *

*The postcondition SFAs of all three operators use \(\lastA\) to indicate that they append a single effect event to the end of the effect context (\(\evop{\effop}{\overline{v}}{\phi} \land
\lastA\)). The type of \(\Code{put}\) permits it to be used under any context (\(\globalA\topA\)). More interestingly, the built-in type of \(\Code{get}\) uses a ghost variable (\(\Code{a}\)) to represent the
current value of the key \(\Code{k}\); its precondition stipulates that it should only be applied to keys that have been previously stored. Finally, the intersection type of \(\Code{exists}\) describes two possible behaviors, depending on
whether the key \(\Code{k}\) has been previously added to the store.* :::

**Example 6** (MinSet). *Consider a developer who wants to implement an API for a set augmented with an additional operation that returns the minimum element in the set. This implementation is defined in terms of two other stateful
libraries: a Set ADT that provides effectful operators \(\Code{\boldsymbol{insert}{:}int\sarr unit}\) and \(\Code{\boldsymbol{mem}{:}int\sarr bool}\), and a
MemCell library that provides a persistent cell with effectful \(\Code{\boldsymbol{read}{:}unit\sarr int}\) and \(\Code{\boldsymbol{write}{:}int\sarr unit}\)
operators. One implementation strategy is to track the minimum element in a persistent cell, and maintain the representation invariant that the contents of this cell are always less than or equal to every element in the underlying set. Using SFAs, we can
express this invariant as: \[\begin{align} \riA{MinSet}(\Code{el}) \doteq&\ \finalA (\evop{write}{\df{x}{el}}{\top} \land \nextA \globalA \negA \evop{write}{x}{\top}) \impl \finalA \evop{insert}{\df{x}{el}}{\top} \landA
\globalA \negA \evop{insert}{x}{\I{x} < \Code{el}} \\&\land (\globalA\neg \evop{write}{x}{\top} \impl \globalA\neg \evop{insert}{x}{\top})
\end{align}\] which specifies that an element \(\Code{el}\) is written into the persistent cell only when \(\Code{el}\) has been inserted into the backing Set *and* no element less than \(\Code{el}\) has also been inserted. The HAT ascribed to the function \(\Code{minset\_insert}\) enforces this invariant \[\begin{align} & \gvn{el}{int} \normalvn{elem}{int} \htriple{\riA{MinSet}(\Code{el})}{\Code{unit}}{\riA{MinSet}(\Code{el})}
\end{align}\]*
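
A minimal OCaml sketch of this implementation strategy is shown below. The Set and MemCell operators are hypothetical stand-ins, modeled here with references so the example is self-contained; the paper treats them as opaque stateful APIs.

```ocaml
(* Hypothetical Set and MemCell libraries, modeled with references so the
   sketch is self-contained; the paper treats them as opaque stateful APIs. *)
let set : int list ref = ref []
let insert (x : int) : unit = set := x :: !set   (* Set.insert *)
let mem (x : int) : bool = List.mem x !set       (* Set.mem *)

let cell : int option ref = ref None
let read () : int = Option.get !cell             (* MemCell.read *)
let write (x : int) : unit = cell := Some x      (* MemCell.write *)

(* minset_insert preserves the invariant that the cell always holds the
   minimum element of the backing set. *)
let minset_insert (elem : int) : unit =
  if not (mem elem) then begin
    (* an empty set means elem is trivially the new minimum *)
    if !set = [] || elem < read () then write elem;
    insert elem
  end
```

Note the order of effects: the cell is updated before the `insert` event is emitted, matching the invariant's requirement that a `write` of `el` is justified by the surrounding trace.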

**Example 7** (LazySet). *As our next example, consider a LazySet ADT that provides a lazy version of its insert operation, delaying when elements are added to an underlying Set
ADT. The ADT does so by returning a thunk closed over a collection of elements that will be added to the set when it is forced. This ADT maintains the representation invariant that an element is never inserted twice: \[\begin{align} \riA{LSet}(\Code{el}) \doteq \globalA (\evop{insert}{\df{x}{el}}{\top} \impl \nextA \neg
\finalA \evop{insert}{\df{x}{el}}{\top})
\end{align}\] We can specify a “lazy” insert operator using the following HAT: \[\begin{align} \gvn{el}{int} \normalvn{elem}{int} \Code{thunk}{:}(&\Code{unit}\sarr
\htriple{\riA{LSet}(\Code{el})}{\Code{unit}}{\riA{LSet}(\Code{el})}) \sarr \Code{unit}\sarr \htriple{\riA{LSet}(\Code{el})}{\Code{unit}}{\riA{LSet}(\Code{el})}
\end{align}\] This operation takes as input a (potentially new) element to be added to the set and a thunk holding any elements that have not yet been inserted into the backing set, and returns another thunk. This HAT stipulates that both input
thunk and output thunk preserve the representation invariant of the ADT: \(\Code{unit}\sarr \htriple{\riA{LSet}(\Code{el})}{\Code{unit}}{\riA{LSet}(\Code{el})}\).*
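
One way such a lazy insert could look in OCaml is sketched below. The backing Set library is a hypothetical stand-in modeled with a reference; the membership check before inserting is what preserves the "never inserted twice" invariant when the returned thunk is forced.

```ocaml
(* A hypothetical backing Set library, modeled with a reference so the
   sketch is self-contained. *)
let set : int list ref = ref []
let insert (x : int) : unit = set := x :: !set   (* Set.insert *)
let mem (x : int) : bool = List.mem x !set       (* Set.mem *)

(* lazy_insert delays insertion: the returned thunk first forces the input
   thunk, then inserts elem only if it is not already present, preserving
   the "never inserted twice" invariant. *)
let lazy_insert (elem : int) (thunk : unit -> unit) : unit -> unit =
  fun () ->
    thunk ();
    if not (mem elem) then insert elem
```

Until the outermost thunk is forced, no `insert` events appear in the trace at all, which is why both the input and output thunks must be typed against the same pre- and postcondition SFA.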

**Example 8** (DFA). *Our final example is a library built on top of a stateful graph library that is used to maintain the states and transitions of a Deterministic Finite Automaton (DFA). The underlying graph library exports two methods
to add (\(\Code{connect}\)) and remove (\(\Code{disconnect}\)) edges. We can specify the standard invariant that DFAs only have deterministic transitions: \[\begin{align} \riA{DFA}(\Code{n,c}) \doteq \ &\globalA
\neg(\evop{connect}{\df{w}{n}\,\df{y}{c}\,n_{end}}{\top} \land \nextA(\neg \evop{disconnect}{\df{w}{n}\,\df{y}{c}\,n_{end}}{\top} \untilA \evop{connect}{\df{w}{n}\,\df{y}{c}\,n_{end}}{\top}))
\end{align}\] \(\riA{DFA}\) stipulates that a node \(\Code{n}\) can have at most one transition on a character \(\Code{c}\) (\(\evop{connect}{\df{w}{n}\,\df{y}{c}\,n_{end}}{\top}\)). Moreover, adding a new transition from \(\Code{n}\) on \(\Code{c}\) requires that any previous
transitions on \(\Code{c}\) have first been removed (\(\nextA(\neg \evop{disconnect}{\df{w}{n}\,\df{y}{c}\,n_{end}}{\top} \untilA \evop{connect}{\df{w}{n}\,\df{y}{c}\,n_{end}}{\top})\)). An
\(\Code{add\_transition}\) method with the following signature is thus guaranteed to preserve determinism: \[\begin{align} \Code{n{:}Node.t} \garr \Code{c}{:}\Code{Char.t} \garr
\Code{n\_start}{:}\Code{Node.t} \sarr \Code{char}{:}\Code{Char.t} \sarr \Code{n\_end}{:}\Code{Node.t} \sarr \htriple{\riA{DFA}(\Code{n, c})}{\Code{unit}}{\riA{DFA}(\Code{n, c})}
\end{align}\]*
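
An implementation consistent with this signature can be sketched as follows. The graph library is a hypothetical stand-in modeled with a reference over `(start, char, end)` edges; the key point is that `add_transition` disconnects any existing transition from `n_start` on `ch` before connecting the new edge, so a `connect` is never followed by another `connect` without an intervening `disconnect`.

```ocaml
(* A hypothetical graph library with connect/disconnect, modeled with a
   reference over (start, char, end) edges so the sketch is self-contained. *)
let edges : (int * char * int) list ref = ref []
let connect s c e = edges := (s, c, e) :: !edges
let disconnect s c e = edges := List.filter (fun t -> t <> (s, c, e)) !edges

(* To preserve determinism, add_transition first disconnects any existing
   transition from n_start on ch, then connects the new edge. *)
let add_transition (n_start : int) (ch : char) (n_end : int) : unit =
  List.iter
    (fun (s, c, e) -> if s = n_start && c = ch then disconnect s c e)
    !edges;
  connect n_start ch n_end
```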

A type context (shown in 4) is a sequence of bindings from variables to pure refinement types (i.e., \(t\)). Type contexts are not allowed to contain HATs, as doing so breaks several
structural properties (e.g., weakening) that are used to prove type safety.^{5}

5 shows selected rules from three sets of auxiliary relations used by our type system. The first set describes the type erasure functions \(\eraserf{t}\), \(\eraserf{\tau}\), and \(\eraserf{\Gamma}\). The first two functions return basic types by erasing all qualifiers and automata from types, while \(\eraserf{\Gamma}\) is the type context derived by applying \(\eraserf{...}\) to all of \({\Gamma}\)’s bindings. The second set describes well-formedness conditions on SFAs (\(\Gamma \wellfoundedvdash A\)) and types (\(\Gamma \wellfoundedvdash t\) and \(\Gamma \wellfoundedvdash \tau\)). These rules largely ensure that all the qualifiers appearing in a type are closed under the current typing context \(\Gamma\). The exceptions are \(\mathrm{\small WFInter}\), which stipulates that only HATs with the same basic type can be intersected, and \(\mathrm{\small WFHoare}\), which requires every trace accepted by the postcondition SFA to extend a trace accepted by the precondition SFA (i.e., \(\langA{B} \subseteq \langA{A;\globalA\topA}\), where \(\globalA\topA\) denotes the SFA that accepts an arbitrary trace) after performing a substitution consistent with the current typing context (\(\sigma \in \denotation{\Gamma}\)); the functions \(\langA{...}\) and \(\denotation{...}\) are defined in the next subsection.

Our type system also uses a mostly-standard subtyping relation that aligns with the denotation of the types being related. 5 highlights key subtyping rules that do not have analogues in a standard refinement type system. The subtyping rule for symbolic automata (\(\mathrm{\small SubAutomata}\)) checks inclusion between the languages of the automata, after performing substitutions consistent with the current typing context. The subtyping rule for (non-intersected) HATs (\(\mathrm{\small SubHoare}\)) is contravariant over the precondition automata and covariant over the postcondition automata, checking the latter inclusion under the same context (i.e., the conjunction of \(B_1\) and \(B_2\) with \(A_2; \globalA\topA\)). The subtyping rules for intersections of HATs (\(\mathrm{\small SubIntLL}\), \(\mathrm{\small SubIntLR}\), and \(\mathrm{\small SubIntR}\)) are standard. The \(\mathrm{\small SubIntMerge}\) rule additionally allows the precondition automata of intersected types to be merged. Finally, the subtyping rules for ghost variables either bind a ghost variable in the type context (\(\mathrm{\small SubGhostR}\)) or instantiate it to some concrete value (\(\mathrm{\small SubGhostL}\)).

A subset of our typing rules^{6} is shown in 6. Note that stateful computations can only be ascribed HATs. As mentioned in
Example 5, our type system is parameterized over \(\Delta\), a typing context for built-in operators that provides HATs for both pure
(TPOp) and effectful operators (TEOp). This system features the standard subsumption and intersection rules (TSub and TInter), which
enable our type system to use fine-grained information about the effect context when typing different control flow paths. The rule TEPur allows a pure term to be treated as a computation that does not perform any
effects.

**Example 9**. *The \(\Code{add}\) function in 1 has four possible control flow paths, depending on the effect context under which it is called: (1) the input path exists in the file system (line \(6\)); (2) neither the input path nor its parent path exists (line \(9\)); (3) its parent path exists and is a directory (lines \(13\)–\(15\)); or (4) its parent path exists but is not a directory (line
\(16\)). The following four automata (recall that the SFA \(\predA{isDir}{}\) was defined in 2) indicate the effect context corresponding to each of these
scenarios:*

*& A_1 () ()
& A_2 ()() ()
& A_3 ()() () ()
& A_4 ()() () ()*

*The union of these automata is exactly the representation invariant (\(\riA{FS}(\Code{p})\)), as established by the following subtyping relation between their intersection and the return type of \(\tauadd{}\): \[\begin{align} \Code{p{:}}\nuot{Path.t}{\top}, \Code{path{:}}\nuot{Path.t}{\top}, \Code{bytes{:}}\nuot{Bytes.t}{\top} \vdash \biginterty_{i = 1..4} \
\htriple{A_i}{\Code{bool}}{\riA{FS}(\Code{p})} <: \htriple{\riA{FS}(\Code{p})}{\Code{bool}}{\riA{FS}(\Code{p})}
\end{align}\] Using the TInter and TSub rules, our type system is able to reduce checking \(\Code{add}\) against \(\tauadd{}\) to checking it against each \(A_i\) (i.e., \(\htriple{A_i}{\Code{bool}}{\riA{FS}(\Code{p})}\)). Note that \(\Code{add}\) only adds the given path to the file system in the third case (line \(13\)), whose
precondition automaton (\(A_3\)) indicates that the input path does not exist in the file system (\(\neg \predA{exists}{}(\Code{path})\)), although its parent path does (\(\predA{exists}{}(\Code{parent(path)})\)) and is a directory (\(\predA{isDir}{}(\Code{parent(path)})\)). When coupled with the representation invariant that the parent of a path is always a
directory in the file system, our type system is able to ensure that it is safe to perform the \(\Code{put}\) operation.*
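
The four branches above can be sketched in OCaml as follows. The KVStore operators and the `parent`/`is_dir` helpers are hypothetical stand-ins, modeled with an association list so the sketch is self-contained; in the paper these are opaque effectful operators and a method predicate.

```ocaml
(* The KVStore operators and path helpers here are hypothetical stand-ins,
   modeled with an association list so the sketch is self-contained. *)
let store : (string * string) list ref = ref []
let exists k = List.mem_assoc k !store        (* KVStore.exists *)
let put k v = store := (k, v) :: !store       (* KVStore.put *)
let get k = List.assoc k !store               (* KVStore.get *)

let parent p = Filename.dirname p
let is_dir bytes = bytes = "dir"              (* stand-in for isDir *)

(* add succeeds only when the path is fresh and its parent is an existing
   directory, mirroring branches (1)-(4) from the text. *)
let add (path : string) (bytes : string) : bool =
  if exists path then false                     (* (1) path already present *)
  else if not (exists (parent path)) then false (* (2) parent missing *)
  else if is_dir (get (parent path)) then       (* (3) parent is a directory *)
    (put path bytes; true)
  else false                                    (* (4) parent not a directory *)
```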

TPOpApp is the standard rule for operator application in a typical refinement type system [14]. Rule TEOpApp, the rule for effectful operator application, works with the subsumption rule (TSub) to ensure that the ghost variables in an operator's function type are properly instantiated; moreover, the return type of an effectful operator is a non-intersected HAT. After ensuring each argument \(v_i\) types against the corresponding argument type \(t_i\) and substituting all parameters (\(\overline{z_i}\)) with the supplied arguments (\(\overline{v_i}\)), the return type of an effectful operator must have the form \(\htriple{A}{t_x}{A'}\) and have exactly the same precondition SFA (\(A\)) as the type of the surrounding term. Typing the \(\Code{let}\)-body \(e\) is similar to TPOpApp, where a new binding \(x{:}t_x\) is added to the type context. Moreover, the rule replaces the precondition SFA used to type the body of the \(\Code{let}\) with the postcondition SFA from the return type of the effectful operator (\(A'\)), so that the body is typed in a context reflecting the use of the effectful operator.

The denotation of an SFA, \(\langA{A}\), shown in 7, is the set of traces accepted or *recognized* by the automaton; the denotations of refinement types \(\denotation{t}\) and HATs \(\denotation{\tau}\) are sets of terms. These functions are defined mutually recursively.

The language of traces ranges over the set of all *well-formed traces* \(\mathit{Tr}^\boldsymbol{WF}\), i.e., those only containing events (\(\pevapp{\effop}{\overline{v_i}}{v}\))
that are well-typed according to the basic type system. As is standard [6], our denotation uses an auxiliary judgement \(\alpha,i \models
A\) that defines when the suffix of a trace \(\alpha\) starting at the \(i^{th}\) position is accepted by an automaton. The denotation of an SFA \(A\)
is the set of (well-formed) traces recognized by \(A\), starting at index \(0\).

Our type denotations use a basic typing judgement \(\emptyset \basicvdash e : s\) that types an expression \(e\) according to the standard typing rules of the simply-typed lambda-calculus (STLC), erasing all qualifiers in function argument types. The denotation of pure refinement types is standard [14]. The denotation of ghost variables is similar to that of function types whose parameters are restricted to base types. The denotation of an intersection type \(\tau_1 \interty \tau_2\) is the intersection of the denotations of \(\tau_1\) and \(\tau_2\). An expression \(e\) belongs to the denotation of a HAT \(\htriple{A}{t}{B}\) iff every trace and value produced by \(e\) is consistent with the SFA \(B\) and refinement type \(t\), under any effect context accepted by the SFA \(A\). Intuitively, the denotation of a HAT is naturally derived from the language’s operational semantics, as depicted by the following correspondence: \[\begin{align} {3} \text{Pure Language:}&& \ \ \mstep{e}{v} &\impl e \in \denotation{\rawnuot{b}{\phi}} \ &&\text{ where } \phi[\vnu \mapsto v] \text{ is valid} \\ \text{\langname:}&& \ \ \msteptr{\alpha}{e}{\alpha'}{v} &\impl e \in \denotation{\htriple{A}{\rawnuot{b}{\phi}}{B}} \ &&\text{ where } \alpha \in \langA{A}, \alpha \listconcat \alpha' \in \langA{B}, \text{and }\phi[\vnu \mapsto v] \text{ is valid} \end{align}\] In a pure language with the simple multi-step reduction relation \(\mstep{e}{v}\), refinement types qualify the value \(v\) that \(e\) produces. In contrast, the multi-step reduction relation of our language, \(\msteptr{\alpha}{e}{\alpha'}{v}\), depends on the effect context \(\alpha\) and emits a trace \(\alpha'\) that records the sequence of effects performed by \(e\) during evaluation. Thus, HATs use precondition and postcondition automata (i.e., \(A\) and \(B\)) to qualify the trace before and after a term is evaluated (i.e., \(\alpha\) and \(\alpha \listconcat \alpha'\)).

The denotation of a type context is a set of *closing substitutions* \(\sigma\), i.e., a sequence of bindings \(\overline{[x_i \mapsto v_i]}\) consistent with the type denotations
of the corresponding variables in the type context. The denotation of the empty context is a singleton set containing only the identity substitution \(\emptyset\).

**Definition 1** (Well-formed built-in operator typing context). *The built-in operator typing context \(\Delta\) is well-formed iff the semantics of every built-in operator \(\theta\) is consistent with its type: \(\Delta(\theta) ~ = ~ \overline{x{:}b_{\I{x}}}\garr \overline{y{:}t_{\I{y}}}\sarr \tau \implies \overline{\forall x{:}b_{\I{x}}}.\; \overline{\forall v_y \in
\denotation{t_{\I{y}}}}.\; (\theta\, \overline{v_y}) \in \denotation{\tau\overline{[y \mapsto v_y]}}\). ^{7}*

**Theorem 1**. *[Fundamental Theorem] Given a well-formed typing context for built-in operators \(\Delta\), \(\Gamma \vdash e : \tau \implies \forall \sigma, \sigma \in
\denotation{\Gamma} \impl \sigma(e) \in \denotation{\sigma(\tau)}\). ^{8}*

**Corollary 1**. *[Type Soundness] Under a well-formed built-in operator typing context, if a function \(f\) preserves the representation invariant \(A\), i.e., it
has the type: \(\emptyset \vdash {\I{f}} : \overline{x{:}b_{\I{x}}}\garr \overline{y{:}b_{\I{y}}}\sarr \htriple{A}{t}{A}\), then for every set of well-typed terms \(\emptyset \vdash
\overline{v_{\I{x}} : b_{\I{x}}}\) and \(\emptyset \vdash \overline{v_{\I{y}} : b_{\I{y}}}\), applying \(f\) to \(\overline{v_{\I{y}}}\) under an
effect context \(\alpha\) that is consistent with \(A\) results in a context \(\alpha\listconcat \alpha'\) that is also consistent with \(A\): \[\begin{align} \msteptr{\alpha}{({\I{f}}\ \overline{v_{\I{y}}})}{\alpha'}{v} \impl \alpha \listconcat \alpha'\in \langA{A[\overline{x \mapsto v_{\I{x}}}][\overline{y \mapsto v_{\I{y}}}]}
\land v \in \denotation{t[\overline{x \mapsto v_{\I{x}}}][\overline{y\mapsto v_{\I{y}}}]}
\end{align}\] *

Converting our declarative type system into an efficient algorithm requires resolving two key issues. First, we cannot directly use existing SFA inclusion algorithms [19], [20] to implement the check in SubAutomata, because our SFAs may involve non-local variables corresponding to function parameters and ghost variables; these variables can have refinement types. We deal with this wrinkle by integrating the subtyping algorithm for pure refinement types into the existing SFA algorithm. Second, typing the use of an effectful operator depends on the set of effects in the precondition of the current HAT. The declarative type system can ensure that this automaton has the right shape by applying the TInter and TSub rules at any point in a derivation, but an efficient implementation must perform this conversion more intelligently. Our solution to this problem is to employ a bidirectional type system [21] that closely tracks a context of effects that have preceded the term being typed, and selectively applies the subsumption rule to simplify the context when switching between checking and synthesis modes.

Symbolic alphabets allow the character domains of SFAs to be infinite, making them strictly more expressive than standard FAs. This enhanced expressivity comes at a cost, however, as the standard FA inclusion algorithm cannot be directly applied to
SFAs. Thankfully, prior work has shown how to reduce an inclusion check over SFAs to an inclusion check over FAs [19], [20]. The high-level approach is to first construct a finite set of equivalence classes over an SFA’s infinite character domain, defined in terms of a set of maximal satisfiable Boolean
combinations of logic literals (called *minterms*) in the automaton’s symbolic alphabet. An alphabet transformation procedure then replaces characters in the original SFA with their corresponding equivalence classes, and replaces the SFA’s symbolic
alphabet with minterms, thus allowing SFAs to be translated into FAs. As long as the satisfiability of minterms is decidable, so is checking inclusion between two SFAs.

The pre- and post-conditions used in HATs have two features that distinguish them from standard SFAs, however: 1) characters are drawn from a set of distinct events and their qualifiers may range over multiple variables, instead of a single local variable; and, 2) event qualifiers may refer to variables in the typing context, i.e., function parameters and ghost variables.

The first step in checking SFA inclusion is to construct the equivalence classes used to finitize its character set. With HATs, these characters range over events triggered by the use of library operations \(\pevapp{\effop}{\overline{v}}{v}\), and return events \(\pevappret{v}\). As each class of events is already disjoint (events built from distinct effect operators never overlap), we only need to build minterms that partition individual classes. The minterms of each class of events have the form \(\evop{\effop}{\overline{x_i}}{(\bigwedge \overline{l_j}) \land (\bigwedge \overline{\neg l_k})}\). To build the literals used in these minterms for an event \(\effop\), we collect all the literals \(\overline{l}\) used to qualify \(\effop\), as well as any literals appearing in atomic predicates \(\evparenth{\phi}\), and then construct a maximal set of satisfiable Boolean combinations of those literals. Notably, these literals may contain variables bound in the current typing context.
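
The minterm construction itself can be sketched as follows. The string-based literal representation and the `sat` oracle are illustrative stand-ins; in the actual system, satisfiability is discharged by the SMT-backed subtyping check described in the next paragraph.

```ocaml
(* Minterm construction: enumerate all sign assignments over the collected
   literals and keep the satisfiable ones. The literal representation and
   the sat oracle are stand-ins for the SMT-backed check. *)
type literal = string
type minterm = (literal * bool) list  (* each literal, positive or negated *)

(* All 2^n Boolean combinations of the literals. *)
let rec combinations (lits : literal list) : minterm list =
  match lits with
  | [] -> [ [] ]
  | l :: rest ->
      let tails = combinations rest in
      List.map (fun m -> (l, true) :: m) tails
      @ List.map (fun m -> (l, false) :: m) tails

(* Keep only the satisfiable combinations. *)
let minterms (sat : minterm -> bool) (lits : literal list) : minterm list =
  List.filter sat (combinations lits)
```

For instance, over the literals `x > 0` and `x > 1`, the combination asserting `x > 1` but refuting `x > 0` is unsatisfiable and would be discarded, leaving three minterms.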

The soundness of the alphabet transfer procedure requires that *only* satisfiable minterms are used, so that every transition in the output FA corresponds to an actual transition in the original SFA. To understand how this is done, observe that
the typing context plays a similar role in checking both the satisfiability of minterms and the subtyping relation. Thus, the algorithm reduces checking the satisfiability of \(\evop{\effop}{\overline{x_i}}{(\bigwedge
\overline{l_j}) \land (\bigwedge \overline{\neg l_k})}\) to checking whether the refinement type \(\rawnuot{b}{(\bigwedge \overline{l_j}) \land (\bigwedge \overline{\neg l_k})}\) is *not* a subtype of \(\rawnuot{b}{\bot}\) (line \(8\)), that is \(\Gamma, \overline{x_i{:}b_i} \not\vdash \rawnuot{b}{(\bigwedge \overline{l_j}) \land (\bigwedge \overline{\neg l_k})} <:
\rawnuot{b}{\bot}\). Since \(\rawnuot{b}{\bot}\) is uninhabited, this subtype check fails precisely when \((\bigwedge \overline{l_j}) \land (\bigwedge \overline{\neg l_k})\) is
satisfiable under \(\Gamma\).

Equipped with the set of satisfiable minterms, the final inclusion check (line \(10\)) is a straightforward application of the textbook inclusion algorithm to the FAs produced by the standard alphabet transformation.^{9}
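
That final step can be sketched as a product construction: \(L(A) \subseteq L(B)\) iff no reachable state of \(A \times \overline{B}\) pairs an accepting state of \(A\) with a rejecting state of \(B\). The sketch below assumes both automata are deterministic and complete over the shared (finite, minterm) alphabet; the record representation is illustrative.

```ocaml
(* A textbook FA inclusion check via the product construction; assumes both
   automata are deterministic and complete over the given alphabet. *)
type dfa = {
  start : int;
  accept : int list;
  delta : int -> char -> int;  (* total transition function *)
}

let included (alphabet : char list) (a : dfa) (b : dfa) : bool =
  (* BFS over reachable product states, looking for a state where a
     accepts but b rejects, which witnesses a counterexample word. *)
  let seen = Hashtbl.create 16 in
  let rec ok queue =
    match queue with
    | [] -> true
    | (p, q) :: rest ->
        if Hashtbl.mem seen (p, q) then ok rest
        else begin
          Hashtbl.add seen (p, q) ();
          if List.mem p a.accept && not (List.mem q b.accept) then false
          else
            ok (rest @ List.map (fun c -> (a.delta p c, b.delta q c)) alphabet)
        end
  in
  ok [ (a.start, b.start) ]
```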

As is standard, our bidirectional type system consists of both type synthesis (\(\typeinfer\)) and type checking (\(\typecheck\)) judgments. The bidirectional system features a minor divergence from the declarative rules. While the declarative system was able to use the subsumption rule (TSub) to freely instantiate ghost variables, a typing algorithm requires a more deterministic strategy. Our solution is to treat ghost variables \(x{:}b \garr\) as instead being qualified by an unknown formula which is specialized as needed, based on the information in the current typing context.

With this minor tweak in hand, the top-level typing algorithm is mostly a matter of bidirectionalizing the typing rules of our language by choosing appropriate modes for the assumptions of each rule. This is largely straightforward: the type checking rule for ghost variables (ChkGhost) adapts the corresponding declarative rule (TGhost) in the standard way, for example. The rule for checking effectful operations (ChkEOpApp) is similar, although it adopts a fixed strategy for applying the subsumption rule: 1) instead of using TSub to instantiate ghost variables directly, it now relies on an auxiliary function, \(\effinfer\), to strengthen their qualifiers based on the current context; 2) when the result of the library operation \(\effop\) is an intersection type (\(\biginterty \overline{\htriple{A_i}{t_i}{A_i'}}\)), ChkEOpApp considers each case of the intersection, instead of using TSub to focus on a single case; and 3) instead of relying on TSub to align the postcondition automata (\(A_i'\)) of \(\effop\) with the precondition of the HAT being checked against (\(A\)), the rule uses the conjunction of the automata (\((A;\globalA\topA) \land A_i'\)) as the precondition automata used to check the term that uses the result of \(\effop\).

**Theorem 2**. *[Soundness of Algorithmic Typing] Given a well-formed built-in typing context \(\Delta\), type context \(\Gamma\), term \(e\), and HAT \(\tau\), \(\Gamma \vdash e \typecheck \tau \implies \Gamma \vdash e : \tau\). ^{10}*

**Theorem 3**. *[Decidability of Algorithmic Typing]*

We have implemented a tool based on the above approach that targets ADTs implemented in terms of other stateful libraries. The tool consists of approximately 12K lines of OCaml and uses Z3 [18] as its backend solver for both SMT and FA inclusion queries.^{11}

Our tool takes as input the implementation of an ADT in OCaml, enhanced signatures of ADT operations that include representation invariants expressed as HATs, and specifications of the supporting libraries as HATs (e.g., the signatures in Example 5). The typing context used by the tool includes signatures for a number of (pure) OCaml primitives, including the pure operators listed in 2. It also includes a set of predefined method predicates (i.e., \(mp\) in 4) that allow qualifiers to capture non-trivial datatype properties. For example, the method predicates \(\Code{isDir}(\Code{x})\) and \(\Code{isDel}(\Code{y})\) from the motivating example in 2 encode that \(\Code{x}\) holds the contents of a directory, and that \(\Code{y}\) is marked as deleted, respectively. The semantics of method predicates are defined via a set of lemmas in FOL, in order to enable automated verification; e.g., the axiom \(\forall \Code{x}.\Code{isDir}(\Code{x}) \impl \neg\Code{isDel}(\Code{x})\) encodes that a set of bytes cannot simultaneously be an active directory and marked as deleted.

| ADT | Library | #Method | #Ghost | s\(_{I}\) | t\(_{\text{total}}\) (s) | #Branch | #App | #SAT | #FA\(_\subseteq\) | avg. s\(_{\text{FA}}\) | t\(_{\text{SAT}}\) (s) | t\(_{\text{FA}_\subseteq}\) (s) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Stack\(^{\dagger}\) | LinkedList | 7 | 0 | 4 | 5.94 | 4 | 7 | 297 | 7 | 98 | 1.63 | 0.06 |
| | KVStore | 7 | 1 | 9 | 17.88 | 4 | 10 | 874 | 17 | 226 | 5.78 | 0.23 |
| Queue\(^{\dagger}\) | LinkedList | 6 | 0 | 4 | 4.88 | 4 | 9 | 190 | 4 | 96 | 1.08 | 0.06 |
| | Graph | 6 | 1 | 24 | 28.24 | 5 | 12 | 1212 | 14 | 525 | 8.19 | 0.86 |
| Set\(^{\star}\) | Tree | 5 | 0 | 12 | 27.12 | 5 | 12 | 1589 | 11 | 531 | 9.34 | 0.53 |
| | KVStore | 3 | 1 | 9 | 2.39 | 3 | 5 | 245 | 6 | 160 | 1.51 | 0.06 |
| Heap\(^{\star}\) | Tree | 7 | 0 | 12 | 25.71 | 5 | 12 | 1589 | 11 | 531 | 9.33 | 0.52 |
| | LinkedList | 6 | 0 | 4 | 8.84 | 4 | 8 | 497 | 8 | 118 | 3.03 | 0.08 |
| MinSet | Set | 4 | 1 | 28 | 10.55 | 3 | 6 | 612 | 17 | 294 | 4.19 | 1.58 |
| | KVStore | 4 | 1 | 25 | 32.84 | 5 | 8 | 2227 | 23 | 519 | 10.93 | 9.34 |
| LazySet | Tree | 6 | 0 | 12 | 27.10 | 5 | 12 | 1589 | 11 | 531 | 9.33 | 0.57 |
| | Set | 4 | 1 | 9 | 1.42 | 2 | 3 | 101 | 4 | 106 | 0.57 | 0.04 |
| | KVStore | 5 | 1 | 9 | 3.10 | 3 | 5 | 245 | 6 | 160 | 1.49 | 0.06 |
| FileSystem | Tree | 6 | 1 | 20 | 58.80 | 3 | 8 | 2085 | 17 | 652 | 14.15 | 2.21 |
| | KVStore | 4 | 1 | 17 | 157.27 | 4 | 10 | 8144 | 43 | 481 | 56.64 | 16.54 |
| DFA | KVStore | 5 | 2 | 18 | 42.62 | 3 | 3 | 3604 | 25 | 228 | 18.84 | 1.60 |
| | Graph | 5 | 2 | 11 | 78.44 | 4 | 4 | 3625 | 27 | 225 | 25.53 | 3.42 |
| ConnectedGraph | Set | 5 | 2 | 9 | 80.46 | 4 | 4 | 3889 | 50 | 357 | 27.39 | 12.80 |
| | Graph | 5 | 1 | 20 | 176.89 | 4 | 3 | 1349 | 17 | 360 | 19.19 | 53.90 |

| Client ADT | Representation Invariant | Underlying Library | Policy governing interactions with underlying API |
|---|---|---|---|
| Stack | LIFO property | LinkedList | The addresses that store elements are unique |
| | | KVStore | Not a circular linked list |
| Queue | FIFO property | LinkedList | Not a circular linked list |
| | | Graph | No self-loops in the graph; the degree of each node is at most \(1\) |
| Set | Unique elements | Tree | The underlying tree is a binary search tree |
| | | KVStore | Every key is associated with a distinct value |
| Heap | Min-heap property | Tree | The value of a parent node in the tree is less than the values of its child nodes |
| | | LinkedList | Not a circular linked list; the elements are sorted |
| MinSet | Uniqueness and minimality | Set | The cached element is less than all inserted elements in the set and has been inserted into the set |
| | | KVStore | The cached element is less than all put elements in the store and has been put before |
| LazySet | Uniqueness of elements | Tree | The underlying tree is a binary search tree |
| | | Set | An element has never been inserted twice |
| | | KVStore | Every key is associated with a distinct value |
| FileSystem | Policy of Unix-like filesystem paths | Tree | A parent node stores a path that is a prefix of its child nodes; a non-root parent node stores directories |
| | | KVStore | Any non-root path stored as a key in the key-value store must have its parent stored as a non-deleted directory in the store |
| DFA | Determinism of transitions | KVStore | All stored transitions are represented as tuples (start, char, end); a starting state has at most one transition for a character |
| | | Graph | Two nodes (which represent states in the FA) can have at most one edge, which is labeled with a character |
| ConnectedGraph | Connectivity | Set | The set stores unique pairs (fst, snd); only elements that have both in and out edges are treated as value nodes in the graph |
| | | Graph | All nodes in the graph are connected |

We have evaluated our tool on a corpus of complex and realistic ADTs drawn from a variety of sources (shown in [tab:evaluation]), including general datatypes
such as stacks, queues, sets, heaps, and graphs, as well as custom data structures including the Unix-style file system from 1 and the deterministic finite automata described in 4
(**Q1**). [tab:evaluation] presents the results of this evaluation. Each datatype includes multiple implementations using different underlying
libraries (Library column), each of which provides a distinct set of operations. Our backing APIs include linked lists, a persistent key-value store, sets, trees, and graphs, each of which induces a different encoding of the target ADT’s representation
invariant.

[tab:evaluation] divides the results of our experiments into two categories, separated by the double bars; the first gives comprehensive results for analyzing
the entire datatype, while the second reports information about the most complex method of each ADT implementation.^{12} The first group of columns describes broad
characteristics of each ADT, including its number of methods (#Method), the number of ghost variables in its representation invariant (#Ghost), and the size of the formula encoding the representation invariant (s\(_I\))
after it is desugared into a symbolic regular expression. The next column reports the total time needed to verify all the methods of each ADT (t\(_{\text{total}}\)). Our tool performs quite reasonably on all of our benchmarks, taking
between 1.42 and 176.89 seconds per ADT. As expected, more complicated ADT implementations (i.e., those with larger values in the #Method, #Ghost, and s\(_I\) columns) take longer to verify (**Q3**).

The first group of columns in the second half of [tab:evaluation] lists features relevant to the complexity of the most demanding method in each ADT. These methods
feature between \(3\) and \(5\) branches (#Branch) and \(3\) to \(12\) uses of built-in operators (#App). The last two
groups of columns present type checking results for these methods. The #SAT and #FA\(_\subseteq\) columns list the number of SMT queries and finite automata (FA) inclusion checks needed to type check the method. The next
column (avg. s\(_{\text{FA}}\)) gives the average number of transitions in the FAs after the alphabet transformation described in 5. These numbers are roughly proportional
to code complexity (columns #Branch and #App) and invariant complexity (columns #Ghost and s\(_I\)): intuitively, programs that have more uses of operators and larger specifications require more queries and produce more
complicated FAs. The last two columns report verification time for the method (**Q3**). These times, ranging from .63 to 73.09 seconds, are typically dominated by the minterm satisfiability checks performed during
the alphabet transformation phase (t\(_{\text{SAT}}\)) and the FA inclusion checks (t\(_{\text{FA}_\subseteq}\)). Unsurprisingly, generating more queries (both #SMT and #FA\(_\subseteq\)) results in more time spent checking minterm satisfiability and FA inclusion. Our results also indicate that FA inclusion checks are not particularly sensitive to the size of the FAs involved; rather, the cost of
these checks corresponds more closely to the deep semantics of the representation invariant, in which the choice of the underlying library is an important factor. Taking the FileSystem benchmark as an example, both the Tree and KVStore implementations lead to similar invariant sizes, in terms of #Ghost, s\(_I\), and avg. s\(_{\text{FA}}\). However, the Tree library results in a significantly shorter verification time, since the relevant property captured by the invariant, “*every child has its parent in the file system*”, is
naturally supported by the Tree library; the only remaining verification task left to the client is ensuring these parents are indeed directories in the file system. In contrast, the KVStore provides no such guarantees on how its elements are related, requiring substantially more verification effort to ensure the invariant is preserved.
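To illustrate the extra client-side obligation in the KVStore case, here is a minimal OCaml sketch (hypothetical API and names; the benchmark's store is an opaque stateful library, modeled below as a mutable hash table):

```ocaml
(* Toy model of a KVStore-backed file system: paths map to entries.
   The real benchmark hides this state behind an opaque library API. *)
type entry = Dir | File

let store : (string, entry) Hashtbl.t = Hashtbl.create 16
let () = Hashtbl.replace store "/" Dir

(* Parent of a path: "/a/b" -> "/a", "/a" -> "/". *)
let parent (path : string) : string =
  match String.rindex_opt path '/' with
  | Some 0 | None -> "/"
  | Some i -> String.sub path 0 i

(* Adding a path succeeds only when its parent is already stored as a
   directory, preserving the invariant: every non-root key has its
   parent stored as a (non-deleted) directory. *)
let add (path : string) (e : entry) : bool =
  match Hashtbl.find_opt store (parent path) with
  | Some Dir -> Hashtbl.replace store path e; true
  | _ -> false
```

For instance, `add "/home" Dir` succeeds, after which `add "/home/a.txt" File` also succeeds, while `add "/tmp/x" File` fails because `"/tmp"` is not present. A Tree-backed implementation gets the parent/child relationship for free from the library and only needs the directory check.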

*Representation Invariant Inference.* [3] develop a data-driven CEGIS-based inference approach, while [2] develop a solution derived from examining test cases. Solvers used to infer predicates that satisfy a set of Constrained Horn Clauses (CHCs) [23]–[25] can also be thought of as broadly addressing similar goals. Our focus in this paper is orthogonal to these efforts, however, as it is centered on the
automated verification of user-specified representation invariants expressed in terms of SFAs.

*Effect Verification.* There has been significant prior work on the type-based verification of functional programs extended with support for (structured) effectful computation. Ynot [8] and F* [7] are two well-known examples whose type specifications allow fine-grained effect tracking of
stateful functional programs. These systems allow writing specifications that leverage type abstractions like the Hoare monad [11] or the Dijkstra monad [9], [10], [26]–[28]. For example, specifications using the Hoare monad involve writing type-level assertions in the form of pre- and post-conditions over the heaps that exist before
and after a stateful computation executes. The Dijkstra monad generalizes this idea by allowing specifications to be written in the style of a (weakest precondition) predicate transformer semantics, making it more amenable to improved automation and
VC generation. While our goals are broadly similar to these efforts, our setup (and consequently, our approach) is fundamentally different. Because we assume that the implementations of effectful operations are hidden behind an opaque interface, and
that the specifications of these operations are not tailored to any specific client application, our verification methodology must reason about stateful library actions indirectly, in terms of the *history* of calls (and returns) to
library methods made by the client, rather than in terms of predicates over the concrete representation of the state. One important consequence of this different representation choice is that, unlike, e.g., the
Dijkstra monad, which uses a predicate transformer semantics to relate the pre- and post-states of an effectful action, our typing of interaction histories in terms of HATs allows us to simply reuse a standard subtyping relation (suitably
adapted). Consequently, the implementation of our typing algorithm is amenable to a significant degree of automation, on par with other refinement type systems like Liquid Haskell [29].

*Type and Effect Systems.* Type and effect systems comprise a broad range of techniques that provide static guarantees about both *what* values a program computes and *how* it computes them. These systems have been developed for a
number of different applications: static exception checking in Java [30], ensuring effects are safely handled in languages with algebraic effects [31], and reasoning about memory accesses [32], in order to
ensure, e.g., programs can be safely parallelized [33]. Notably, a majority of these systems are agnostic to the order in which these effects are produced:
the effect system in Java, for example, only tracks whether or not a computation may raise an exception at run-time.

In contrast, *sequential* effect systems [34] provide finer-grained information about the order in which a computation’s effects may occur. More
closely related to this work are type and effect systems that target *temporal* properties on the sequences of effects a program may *produce*. For example, [35] present a type and effect system for reasoning about the shape of histories (i.e., finite traces) of events embedded in a program, with a focus on security-related properties expressed as regular expressions augmented
with fixed points. Their specification language is not rich enough to capture data-dependent properties: the histories of a conditional expression must include the histories of both branches, even when its condition is a variable that is provably false.
[36] present a type and effect system that additionally supports the verification of properties of infinite traces, specified as Büchi automata. Their system features
refinement types and targets a language similar to ours. Their effect specifications also include a second component describing the sets of *infinite* traces a (non-terminating) program may produce, enabling it to reason about liveness properties.
Unlike our system, however, events in their system are not allowed to refer to program variables. This restriction was later lifted by [12], whose results support
value-dependent predicates in effect specifications written in a first-order fixpoint logic. To solve the resulting verification conditions, they introduce a novel proof system for this logic. More recently, [13] consider how to support richer control flow structures, e.g., delimited continuations, in such an effect system. To the best of our knowledge, all
of these effect systems lack an analogue to the precondition automaton of HATs, and instead have to rely on the arguments of a dependently typed function to constrain its set of output traces, as opposed to constructing and managing the context of
previously seen events and their results.

While our focus has been on the development of a practical type system for the purposes of verification, there have also been several works that provide foundational characterizations of sequential effect systems. Most of these works adopt a
semantics-oriented approach and seek to develop categorical semantics for effectful languages [34], [37], [38]. More closely related is the recent work of Gordon [39], which
defines a generic model of sequential effect systems in terms of a parameterized system. The key idea is to represent effects as a *quantale*, a join semi-lattice equipped with a top element, whose underlying set is also a monoid. The proposed
type system is parameterized over an arbitrary effect quantale; this system is proven to be safe for any appropriate instantiation of this parameter.

*Type-Based Protocol Enforcement.* Because HATs express a history of the interaction between a client and a stateful library, they naturally serve as a form of protocol specification on client behavior, although the kinds of protocols we
consider in this work are limited to those that are relevant to defining a representation invariant, i.e., those that capture constraints on library methods sufficient to ensure a desired client invariant. In this sense, HATs play a similar role as
typestate in object-oriented languages [40]–[42], which
augments the interface of objects with coarse-grained information about the state of an object, in order to constrain its set of available methods. Typestate specifications differ in obvious ways from HATs, most notably with respect to the level of
granularity that is expressible; in particular, HATs are designed to capture fine-grained interactions between libraries and clients that are necessary to specify useful representation invariants.

This paper explores the integration of SFAs within a refinement type system as a means to specify and verify client-specific representation invariants of functional programs that interact with stateful libraries. Our key innovation is the manifestation of SFAs as HATs in the type system, which lets specifications constrain both context traces (preconditions) and computation traces (postconditions), characterizing the space of allowed client/library interactions and thus enabling us to prove that these interactions do not violate the stated representation invariants. HATs integrate cleanly within a refinement type system, enjoy pleasant decidability properties, and are amenable to a high degree of automation.

Another potential direction is extending our current formalization to support richer automata classes (e.g., symbolic visibly pushdown [43] or Büchi automata), in order to support a larger class of safety properties.

The auxiliary big-step reduction rules for effect operators and the small-step operational semantics of our core language are shown in 10.

The basic typing rules of our core language and qualifiers are shown in 11 and 12. We use an auxiliary function \(\S{Ty}\) to provide a basic type for the primitives of our language, e.g., constants, built-in operators, and data constructors.

The full set of rules for our auxiliary typing relations are shown in 13, and the full set of declarative typing rules are shown in 14. We elide the basic typing relation (\(\emptyset \basicvdash e : s\)) in the premises of the rules in 14, which assume all terms have a basic type.

In this section, we provide the details of our typing algorithm. The full set of bi-directional typing rules for this algorithm are shown in 15; again, we omit the basic typing relation (\(\emptyset \basicvdash e : s\)) in the precondition of these rules. The details of the alphabet transformation (\(\mintermReplace\)) and SFA instantiation (\(\instantiate\)) procedures are provided in [algo:alphabet] and [algo:instantiate], respectively.

The alphabet transformation function \(\mintermReplace\) translates an SFA into an FA that shares the same regular operators. Notably, the alphabet transformation for effect events gathers all minterms that imply the qualifier of the effect event (line \(17\)); moreover, the effect event \(\evop{\effop}{\overline{x_i}}{\phi}\) requires that the gathered minterms have the same operator (\(\effop\)).
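To sketch the idea (schematically; the implementation dispatches satisfiability to an SMT solver rather than the stub used here), a minterm fixes a polarity for every gathered literal, and only the satisfiable combinations are kept:

```ocaml
(* Schematic minterm enumeration: a minterm assigns a polarity to every
   gathered literal; `sat` stands in for the SMT satisfiability check. *)
type 'lit minterm = ('lit * bool) list

let minterms (sat : 'lit minterm -> bool) (lits : 'lit list) : 'lit minterm list =
  (* Build all 2^n polarity assignments over the n literals... *)
  let all =
    List.fold_left
      (fun acc l ->
        List.concat_map (fun m -> [ (l, true) :: m; (l, false) :: m ]) acc)
      [ [] ] lits
  in
  (* ...and keep only the satisfiable ones. *)
  List.filter sat all

(* With a trivial `sat` stub, two literals yield 2^2 = 4 candidates. *)
let () = assert (List.length (minterms (fun _ -> true) [ "x>0"; "x<5" ]) = 4)
```

In the actual procedure each surviving minterm becomes a letter of the new (finite) alphabet, and a minterm is attached to an effect event when it is both satisfiable and implies the event's qualifier.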

As we show in [algo:instantiate], the abductive synthesizer adapts an existing algorithm [44] that synthesizes a *maximal* assignment of unknown qualifiers \(\phi_i\) for the given ghost variables \(x_i{:}b_i\). These inferred qualifiers are
strong enough to prove the given inclusion relation between the two input automata under the given type context (i.e., \(\Gamma \vdash A \subseteq A'\)). Note that the ghost variables only make sense when the inclusion
relation holds with all unknown formulae \(\phi_i\) maximally weak (i.e., true) (line \(2\)). Similar to the minterm algorithm in [algo:alphabet], the abductive algorithm gathers all literals that appear in the pre- and post-condition automata and the type context (line \(3\)); these are used to generate candidate
minterms (feature vectors, in the literature on abductive synthesis [44]) that build the hypothesis space for synthesis. The function uses a
counterexample-guided refinement loop (lines \(6\)–\(12\)), adapted from the existing algorithm [44], that manipulates three sets of minterms: unknown minterms \(M^?\), positive minterms \(M^+\), and negative minterms \(M^-\).
Thanks to the multi-abduction approach of the existing algorithm [44], inferring a formula \(\phi\) from the union of all
minterms yields the assignments of all unknown qualifiers \(\overline{\phi_i}\) in one shot. The auxiliary synthesizer (line \(8\)) returns a formula that is implied by the given positive
and unknown minterms but is not implied by any given negative minterm. Following the existing algorithm [44], this function is implemented as a Decision
Tree (DT) that classifies these two sets of minterms. The inclusion query on line \(9\) decides whether unknown minterms should be positive or negative; it also guarantees the soundness of the algorithm, that
is, a synthesized type is always consistent with the given automata \(A\) and \(A'\). Finally, the function returns the refinement type \(\rawnuot{b_i}{\phi_i}\) for each ghost variable \(x_i\), where \(\phi_i\) is the union of all local minterms that imply \(\phi\) (lines \(14\)–\(15\)).

The Coq formalization of our core language, typing rules, and the proofs of [theorem:fundamental] and Corollary 1 are available on Zenodo:

https://doi.org/10.5281/zenodo.10806686

We present the proof for [theorem:algo-sound] from 5. The proof requires the following lemmas about the \(\subqueryA\) subroutine and the instantiation function \(\instantiate\).

**Lemma 1**. *[Soundness of Inclusion Algorithm] Given a type context \(\Gamma\) and automata \(A\) and \(A'\), \[\begin{align} \subqueryA(\Gamma, A, A') \impl \Gamma \vdash A \subseteq A'
\end{align}\]*

**Lemma 2**. *[Soundness of Instantiation Function] Given a type context \(\Gamma\), ghost variables \(\overline{x_i{:}b_i}\), and automata \(A\) and \(A'\), \[\begin{align} \instantiate(\Gamma, A, A', \overline{x_i{:}b_i}) \implies \Gamma, \overline{x_i{:}\rawnuot{b_i}{\phi_i}} \vdash A \subseteq
A'
\end{align}\]*

The soundness of \(\subqueryA\) establishes that it is safe to treat the subtyping relation in our typing algorithm as in the declarative type system. The soundness of the instantiation function induces the following lemma.

**Lemma 3**. *[Instantiation Implies Precondition Automata Inclusion] Given a type context \(\Gamma\) and a term \(e\), consider a refinement type with
ghost variables, \(\overline{z_k{:}b_k}\garr \overline{y_j{:}t_j}\sarr \biginterty \overline{\htriple{A_i}{t_i}{A_i'}}\), \[\begin{align} \Gamma \infervdash{A} \overline{z_k{:}b_k}\garr
\overline{y_j{:}t_j}\sarr \overline{\htriple{A_i}{t_i}{A_i'}} \effinfer \overline{z_k{:}t_k} \implies & \Gamma, \overline{z_k{:}t_k} \vdash A \subseteq \bigvee A_i \land \\ &\Gamma, \overline{z_k{:}t_k} \vdash \overline{z_k{:}b_k}\garr
\overline{y_j{:}t_j}\sarr \overline{\htriple{A_i}{t_i}{A_i'}} <: \overline{y_j{:}t_j}\sarr \overline{\htriple{A_i}{t_i}{A_i'}}
\end{align}\]*

We need three auxiliary lemmas about declarative typing, which establish that we can drop unused bindings from the type context, that well-typed substitution preserves typing judgments, and that well-formedness implies subtyping.

**Lemma 4**. *[Strengthening] Given a type context \(\Gamma\), term \(e\) and HAT \(\tau\), for any free variable \(x\) and pure refinement type \(t\): \(\Gamma, x{:}t \vdash e : \tau \land \Gamma \wellfoundedvdash \tau \impl \Gamma \vdash e : \tau\)*

**Lemma 5**. *[Substitution Preserves Typing] Given a type context \(\Gamma\), term \(e\) and HAT \(\tau\), for any free variable \(x\) and pure refinement type \(t\): \(\Gamma, x{:}t \vdash e : \tau \land \Gamma \vdash v : t \impl \Gamma \vdash e[x\mapsto v] : \tau[x\mapsto v]\)*

**Lemma 6**. *[Well-formedness implies Subtyping] Given a type context \(\Gamma\), term \(e\) and HAT \(\htriple{A}{t}{B}\), we have \(\Gamma \wellfoundedvdash \htriple{A}{t}{B} \impl \Gamma \vdash \htriple{A}{t}{B} <: \htriple{A \land A' }{t}{B \land (A';\globalA\topA) }\).*

*Proof.* Consider the denotation of HAT \(\htriple{A}{t}{B}\) and \(\htriple{A \land A' }{t}{B \land (A';\globalA\topA) }\): \[\begin{align}
\denotation{ \htriple{A}{t}{B}} &= \{e ~|~ \emptyset \basicvdash e : \eraserf{t} \land \forall \alpha\, \alpha'\, v.\; \alpha \in \langA{A} \land \msteptr{\alpha}{e}{\alpha'}{v} \impl v \in \denotation{t} \land \alpha \listconcat \alpha'
\in \langA{B} \} \\ \denotation{ \htriple{A \land A' }{t}{B \land (A';\globalA\topA) }} &= \{e ~|~ \emptyset \basicvdash e : \eraserf{t} \land \forall \alpha\, \alpha'\, v.\; \alpha \in \langA{A \land A'} \land
\msteptr{\alpha}{e}{\alpha'}{v} \impl \\&\qquad\qquad v \in \denotation{t} \land \alpha \listconcat \alpha' \in \langA{B \land (A';\globalA\topA)} \}
\end{align}\] Consider a trace \(\alpha \in \langA{A \land A'}\); it can also be recognized by the automaton \(A\), thus for any \(\msteptr{\alpha}{e}{\alpha'}{v}\), we have \(\alpha \listconcat \alpha' \in \langA{B}\). Moreover, the trace \(\alpha \listconcat \alpha'\) has the
prefix \(\alpha\), and thus is naturally recognized by the automaton \((A \land A');\globalA\topA\). Thus, we know \(\alpha \listconcat \alpha' \in \langA{(A \land
A');\globalA\topA \land B}\). Note that the rule WFHoare requires that the precondition is always a “prefix” of the postcondition automaton, so \(\Gamma \wellfoundedvdash
\htriple{A}{t}{B}\) implies \((A;\globalA\topA) \subseteq B\), which means that \(\langA{(A \land A');\globalA\topA \land B} = \langA{A';\globalA\topA \land B}\). Thus the
term \(e\) is in the denotation of the type \(\htriple{A \land A' }{t}{B \land (A';\globalA\topA) }\) whenever it is in the denotation of the type \(\htriple{A}{t}{B}\). ◻

Now we can prove the soundness theorem of our typing algorithm with respect to our declarative type system. As the type synthesis and type checking rules are defined mutually recursively, we simultaneously prove both correct. The following theorem states a stronger inductive invariant that is sufficient to prove the soundness of the typing algorithm.

**Theorem 4**. *[Soundness of the type synthesis and type check algorithms] For all type contexts \(\Gamma\), terms \(e\), pure refinement types \(t\) and HATs \(\tau\), \[\begin{align} &\Gamma \vdash e \typecheck \tau \implies \Gamma \vdash e : \tau \\ &\Gamma \vdash
e \typecheck t \implies \Gamma \vdash e : t \\ &\Gamma \vdash e \typeinfer \tau \implies \Gamma \vdash e : \tau \\ &\Gamma \vdash e \typeinfer t \implies \Gamma \vdash e : t
\end{align}\]*

*Proof.* We proceed by induction over the mutually recursive structure of the type synthesis (\(\typeinfer\)) and type check (\(\typecheck\)) rules. In the cases of rules SynConst, SynBaseVar, SynFuncVar, ChkGhost, ChkInter, ChkFunc, ChkFix, and ChkMatch, the typing rules in 14 align exactly with these rules. There are seven remaining cases, concerning
let-bindings (ChkLetV and ChkLetE), application (ChkApp, ChkPOpApp, and ChkEOpApp), and subtyping
(ChkSubV and ChkSub). The rules ChkLetV, ChkPOpApp, and ChkSubV are less interesting: they are
similar to a standard refinement typing algorithm [14] and simply drop the pre- and post-condition automata from their effectful counterparts. Consequently, we show the
proofs of the three interesting cases (ChkSub, ChkLetE, ChkEOpApp) below; the case ChkApp is almost the same as ChkEOpApp.

ChkSub:

[ChkSub] e

This rule connects the type synthesis and type check judgments. From the induction hypothesis and the precondition of ChkSub, we know \[\begin{align} {2} \setcounter{equation}{0} &\Gamma \vdash A_2 <: A_1 \ \ &\ \ &\text{from precondition}\\ &\Gamma \vdash t_1 <: t_2 \ \ &\ \ &\text{from precondition}\\ &\Gamma \vdash (A_2;\globalA\topA) \land B_1 \subseteq (A_2;\globalA\topA) \land B_2 \ \ &\ \ &\text{from precondition}\\ &\Gamma \vdash \htriple{A_1}{t_1}{B_1} <: \htriple{A_2}{t_2}{B_2} \ \ &\ \ &\text{via rule \mathrm{\small SubHoare} and (1) (2) (3)}\\ &\Gamma \vdash e : \htriple{A_1}{t_1}{B_1} \ \ &\ \ &\text{inductive hypothesis}\\ &\Gamma \vdash e : \htriple{A_2}{t_2}{B_2} \ \ &\ \ &\text{via rule \mathrm{\small TSub} and (4) (5)} \end{align}\] Then the proof immediately holds in this case.

ChkLetE:

[ChkLetE]

This rule can be treated as a combination of subtyping of intersection types (e.g., SubIntLL) and TLet. From the induction hypothesis and the precondition of ChkLetE, we know \[\begin{align} {2} \setcounter{equation}{0} &\Gamma \vdash e_x : \biginterty \htriple{A_i}{t}{A_i'} \ \ &\ \ &\text{inductive hypothesis}\\ &\forall i. \Gamma \vdash e_x : \overline{\htriple{A_i}{t_i}{A_i'}} \ \ &\ \ &\text{via rule \mathrm{\small TSub}, rule \mathrm{\small SubIntLL} and (1)}\\ &\forall i. \Gamma \vdash e_x : \overline{\htriple{A \land A_i}{t_i}{(A;\globalA\topA) \land A_i'}} \ \ &\ \ &\text{via Lemma~\ref{lemma:wf-sub}, precondition, and (2)}\\ &\forall i.\Gamma, x{:}t_i \vdash e : \htriple{(A;\globalA\topA) \land A_i'}{t}{B} \ \ &\ \ &\text{inductive hypothesis}\\ &\forall i.\Gamma \vdash \zlet{x}{e_x}{e} : \htriple{A \land A_i}{t}{B} \ \ &\ \ &\text{rule \mathrm{\small TLet} and (3) (4)}\\ &\Gamma \vdash \zlet{x}{e_x}{e} : \biginterty \htriple{A \land A_i}{t}{B} \ \ &\ \ &\text{rule \mathrm{\small TInter} and (5)}\\ &\Gamma \vdash \zlet{x}{e_x}{e} : \htriple{A \land \bigvee A_i}{t}{B} \ \ &\ \ &\text{via rule \mathrm{\small TSub}, rule \mathrm{\small SubIntMerge} and (6)}\\ &\Gamma \vdash A \subseteq \bigvee A_i \ \ &\ \ &\text{precondition}\\ &\Gamma \vdash \zlet{x}{e_x}{e} : \htriple{A}{t}{B} \ \ &\ \ &\text{via rule \mathrm{\small TSub}, rule \mathrm{\small SubHoare} and (7) (8)} \end{align}\] which is exactly what we needed to prove for this case.

ChkEOpApp:

[ChkEOpApp]

This rule can be treated as a combination of TSub and TEffOp. From the induction hypothesis and the precondition of ChkEOpApp, we know \[\begin{align} {2} \setcounter{equation}{0} &\Delta(\effop) = \overline{z_k{:}b_k}\garr \overline{y_j{:}t_j}\sarr \tau_x \ \ &\ \ &\text{precondition}\\ &\Gamma \vdash \effop : \overline{z_k{:}b_k}\garr \overline{y_j{:}t_j}\sarr \tau_x \ \ &\ \ &\text{via rule \mathrm{\small TEOp} and (1)}\\ &\forall j. \Gamma \vdash v_j \typecheck t_j \ \ &\ \ &\text{precondition}\\ &\Gamma \vdash \effop : \overline{z_k{:}b_k}\garr \overline{y_j{:}t_j}\sarr \tau_x\overline{[y_j \mapsto v_j]} \ \ &\ \ &\text{Lemma~\ref{lemma:subst-typing} and (2) (3)}\\ &\Gamma \vdash \effop : \overline{z_k{:}b_k}\garr \overline{y_j{:}t_j}\sarr \biginterty \overline{\htriple{A_i}{t_i}{A_i'}} \ \ &\ \ &\text{precondition and (4)}\\ &\Gamma \infervdash{A} \overline{z_k{:}b_k}\garr \overline{y_j{:}t_j}\sarr \tau_x \effinfer \overline{z_k{:}t_k} \ \ &\ \ &\text{precondition}\\ &\Gamma, \overline{z_k{:}t_k} \vdash A \subseteq \bigvee A_i \ \ &\ \ &\text{via Lemma~\ref{lemma:ghost}}\\ &\Gamma, \overline{z_k{:}t_k}\vdash \effop : \overline{y_j{:}t_j}\sarr \biginterty \overline{\htriple{A_i}{t_i}{A_i'}} \ \ &\ \ &\text{via Lemma~\ref{lemma:ghost} and rule \mathrm{\small TSub}} \end{align}\] Now the operator \(\effop\) has the return type \(\biginterty \overline{\htriple{A_i}{t_i}{A_i'}}\). The rest of the proof is almost the same as in the case ChkLetE: \[\begin{align} {2} &\forall i. \Gamma, \overline{z_k{:}t_k} \vdash \effop : \overline{\htriple{A_i}{t_i}{A_i'}} \ \ &\ \ &\text{via rule \mathrm{\small TSub}, rule \mathrm{\small SubIntLL} and (8)}\\ &\forall i. 
\Gamma, \overline{z_k{:}t_k} \vdash \effop : \overline{\htriple{A \land A_i}{t_i}{(A;\globalA\topA) \land A_i'}} \ \ &\ \ &\text{via Lemma~\ref{lemma:wf-sub}, precondition, and (9)}\\ &\forall i.\Gamma, \overline{z_k{:}t_k}, x{:}t_i \vdash e : \htriple{(A;\globalA\topA) \land A_i'}{t}{B} \ \ &\ \ &\text{inductive hypothesis}\\ &\forall i.\Gamma, \overline{z_k{:}t_k} \vdash \perform{x}{\effop}{\overline{v_i}}{e} : \htriple{A \land A_i}{t}{B} \ \ &\ \ &\text{rule \mathrm{\small TEOpApp} and (10) (11)}\\ &\Gamma, \overline{z_k{:}t_k} \vdash \perform{x}{\effop}{\overline{v_i}}{e} : \biginterty \htriple{A \land A_i}{t}{B} \ \ &\ \ &\text{rule \mathrm{\small TInter} and (12)}\\ &\Gamma, \overline{z_k{:}t_k} \vdash \perform{x}{\effop}{\overline{v_i}}{e} : \htriple{A \land \bigvee A_i}{t}{B} \ \ &\ \ &\text{via rule \mathrm{\small TSub}, rule \mathrm{\small SubIntMerge} and (13)}\\ &\Gamma,\overline{z_k{:}t_k} \vdash \perform{x}{\effop}{\overline{v_i}}{e} : \htriple{A}{t}{B} \ \ &\ \ &\text{via rule \mathrm{\small TSub}, rule \mathrm{\small SubHoare} and (7) (14)}\\ &\Gamma \vdash \perform{x}{\effop}{\overline{v_i}}{e} : \htriple{A}{t}{B} \ \ &\ \ &\text{via Lemma~\ref{lemma:strengthening}, precondition and (15)} \end{align}\] which completes this case.

◻

We present the proof for [theorem:algo-decidable] from 5. The proof requires the following lemmas about the \(\subqueryA\) subroutine and the instantiation function \(\instantiate\).

**Lemma 7**. *[Decidability of Base Subtyping] The judgment \(\Gamma \vdash \rawnuot{b}{\phi_1} <: \rawnuot{b}{\phi_2}\) is decidable.*

*Proof.* As mentioned in Section 5.1, similar to a standard refinement type system, the subtyping check is encoded as a query in EPR, which is a decidable fragment of FOL. Thus, the base subtyping check is decidable. ◻

**Lemma 8**. *[Decidability of Inclusion Algorithm] The \(\subqueryA\) subroutine is decidable.*

*Proof.* The gathered literals are finite, thus the induced set of boolean combinations is also finite. The loop in \(\subqueryA\) (line \(3\) in [algo:inclusion]) therefore has a finite bound. By Lemma 7, \(\subqueryA\) is decidable. ◻

**Lemma 9**. *[Decidability of Instantiation Algorithm] The \(\instantiate\) function is decidable.*

*Proof.* The gathered literals are finite, thus the induced set of boolean combinations is also finite. The loop in \(\instantiate\) (line \(6\) in [algo:instantiate]) therefore has a finite bound. By Lemma 7, \(\instantiate\) is decidable. ◻

**Lemma 10**. *[Decidability of Well-Formedness] For all type contexts \(\Gamma\) and types \(t\) and \(\tau\), the judgments \(\Gamma \wellfoundedvdash t\) and \(\Gamma \wellfoundedvdash \tau\) are decidable.*

*Proof.* The well-formedness rules shown in 13 are inductively defined and structurally decreasing over the types (\(t\) or \(\tau\)). Thus, they always terminate. ◻

**Lemma 11**. *[Decidability of Subtyping] For all type contexts \(\Gamma\) and types \(t_1\), \(t_2\), \(\tau_1\), and \(\tau_2\), the judgments \(\Gamma \vdash t_1 <: t_2\) and \(\Gamma \vdash \tau_1 <: \tau_2\) are decidable.*

*Proof.* By the subtyping rules shown in 13, Lemma [lemma:sub-base-deciable], and
Lemma [lemma:subqueryA-deciable], these rules are inductively defined and structurally decreasing over the types (\(t_1\) or \(\tau_1\)). Thus, they always terminate. ◻

**Lemma 12**. *[Decidability of Instantiation] For all type contexts \(\Gamma\) and automata \(A\),*

*\(\Gamma \infervdash{A} \overline{z_k{:}b_k}\garr \overline{y_j{:}t_j}\sarr \tau_x \effinfer \overline{z_k{:}t_k}\) is decidable.*

Now we can prove the decidability theorem of our typing algorithm. As the type synthesis and type checking rules are defined mutually recursively, we simultaneously prove both decidable. The following theorem states a stronger inductive invariant that is sufficient to prove the decidability of the typing algorithm.

**Theorem 5** (Decidability of Algorithmic Typing). *For all typing contexts \(\Gamma\), terms \(e\), values \(v\), and types \(t\) and \(\tau\),
the judgments \(\Gamma \vdash e \typecheck \tau\), \(\Gamma \vdash v \typecheck t\), \(\Gamma \vdash e \typeinfer \tau\), and \(\Gamma \vdash
v \typeinfer t\) are decidable.*

*Proof.* According to Lemma 10, Lemma 11, and Lemma 12, the auxiliary functions (well-formedness, subtyping, instantiation) always terminate. Thus,
the key to the decidability proof is finding a ranking function over the inputs of the typing algorithm, based on the size of terms (\(\sizeE\)) and types (\(\sizeT\)): \[\begin{align} \sizeE(c) \ &\doteq 1 \\\sizeE(x) \ &\doteq 1 \\\sizeE(d(\overline{v_i})) \ &\doteq 1 + \Sigma_i \sizeE(v_i) \\\sizeE(\zlam{x}{t}{e}) \ &\doteq 1 + \sizeE(e) \\\sizeE(\zfix{f}{t}{x}{t}{e}) \ &\doteq
2 + \sizeE(e) \\\sizeE(\zlet{x}{\effop\ \overline{v_i}}{e}) \ &\doteq 1 + \sizeE(e) + \Sigma_i \sizeE(v_i) \\\sizeE(\zlet{x}{\primop\ \overline{v_i}}{e}) \ &\doteq 1 + \sizeE(e) + \Sigma_i \sizeE(v_i) \\\sizeE(\zlet{x}{v_1\ v_2}{e}) \ &\doteq 1
+ \sizeE(e) + \sizeE(v_1) + \sizeE(v_2) \\\sizeE(\zlet{x}{e_1}{e_2}) \ &\doteq 1 + \sizeE(e_1) + \sizeE(e_2) \\\sizeE(\match{v}{\overline{d\ \overline{y} \sarr {e_i}}}) \ &\doteq 1 + \sizeE v + \Sigma_i \sizeE(e_i) \\\sizeT(\rawnuot{b}{\phi}) \
&\doteq 1 \\\sizeT(x{:}t\sarr\tau) \ &\doteq 1 + \sizeT(t) + \sizeT(\tau) \\\sizeT(x{:}b\garr\tau) \ &\doteq 1 + \sizeT(\tau) \\\sizeT(\htriple{A}{t}{A}) \ &\doteq 1 + \sizeT(t) \\\sizeT(\tau_1 \interty \tau_2) \ &\doteq 1 +
\sizeT(\tau_1) + \sizeT(\tau_2)
\end{align}\] With the help of \(\sizeE\) and \(\sizeT\), we define a lexicographic ranking function that returns a pair of natural numbers. \[\begin{align} \rankE(\Gamma
\vdash e \typecheck \tau) \ &\doteq (\sizeE(e), \sizeT(\tau)) \\\rankE(\Gamma \vdash e \typeinfer \tau) \ &\doteq (\sizeE(e), 0)
\end{align}\] where one rank is less than another iff \[\begin{align} (a_1, b_1) < (a_2, b_2) \iff a_1 < a_2 \lor (a_1 = a_2 \land b_1 < b_2)
\end{align}\] We proceed by induction over the mutually recursive structure of the type synthesis (\(\typeinfer\)) and type check (\(\typecheck\)) rules. The synthesis rules
SynConst, SynBaseVar, and SynFuncVar have no recursive calls and thus always terminate. For the remaining cases, we show below that the rank of the current call is greater than
that of its recursive calls.
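The lexicographic order on ranks used by this termination argument can be stated directly in OCaml:

```ocaml
(* Lexicographic order on ranks (term size first, then type size): every
   recursive call in the algorithm has a strictly smaller rank. *)
let rank_lt (a1, b1) (a2, b2) = a1 < a2 || (a1 = a2 && b1 < b2)

let () =
  assert (rank_lt (1, 5) (2, 0));       (* smaller term size wins *)
  assert (rank_lt (2, 1) (2, 3));       (* tie broken by type size *)
  assert (not (rank_lt (2, 3) (2, 3)))  (* irreflexive *)
```

Since this order on pairs of naturals is well-founded, a strict decrease at every recursive call suffices for termination.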

ChkGhost:

[ChkGhost] e x:b

The rank of the current call is \[\begin{align} \rankE(\Gamma \vdash e \typecheck x{:}b \garr \tau) &= (\sizeE(e), 1 + \sizeT(\tau)) \end{align}\] which is greater than that of the recursive call: \[\begin{align} \rankE(\Gamma, x{:}\rawnuot{b}{\top} \vdash e \typecheck \tau) &= (\sizeE(e), \sizeT(\tau)) \end{align}\]ChkInter:

[ChkInter] e _1 _2

The rank of the current call is \[\begin{align} \rankE(\Gamma \vdash e \typecheck \tau_1 \interty \tau_2) &= (\sizeE(e), 1 + \sizeT(\tau_1) + \sizeT(\tau_2)) \end{align}\] which is greater than that of the recursive calls: \[\begin{align} \rankE(\Gamma \vdash e \typecheck \tau_1) &= (\sizeE(e), \sizeT(\tau_1)) \\\rankE(\Gamma \vdash e \typecheck \tau_2) &= (\sizeE(e), \sizeT(\tau_2)) \end{align}\]ChkLetV:

[ChkLetV]

The rank of the current call is \[\begin{align} \rankE(\Gamma \vdash \zlet{x}{v_x}{e} \typecheck \htriple{A}{t}{B}) &= (1 + \sizeE(v_x) +\sizeE(e), 1 + \sizeT(t)) \end{align}\] which is greater than that of the recursive calls: \[\begin{align} \rankE(\Gamma \vdash v_x \typeinfer t_x) &= (\sizeE(v_x), 0) \\\rankE(\Gamma, x{:}t_x \vdash e \typecheck \htriple{A}{t}{B}) &= (\sizeE(e), 1 + \sizeT(t)) \end{align}\]ChkLetE:

[ChkLetE]

The rank of the current call is \[\begin{align} \rankE(\Gamma \vdash \zlet{x}{e_x}{e} \typecheck \htriple{A}{t}{B}) &= (1 + \sizeE(e_x) +\sizeE(e), 1 + \sizeT(t)) \end{align}\] which is greater than those of the recursive calls: \[\begin{align} \rankE(\Gamma \vdash e_x \typeinfer \biginterty \overline{\htriple{A_i}{t_i}{A_i'}}) &= (\sizeE(e_x), 0) \\\rankE(\Gamma, x{:}t_i \vdash e \typecheck \htriple{(A;\globalA\topA) \land A_i'}{t}{B}) &= (\sizeE(e), 1 + \sizeT(t)) \end{align}\]ChkPOpApp:

[ChkPOpApp]

The rank of the current call is \[\begin{align} \rankE(\Gamma \vdash \perform{x}{\effop}{\overline{v_i}}{e} \typecheck \htriple{A}{t}{B}) &= (1 + \sizeE(e) + \Sigma \sizeE(v_i), 1 + \sizeT(t)) \end{align}\] which is greater than those of the recursive calls: \[\begin{align} \rankE(\Gamma \vdash v_j \typecheck t_j) &= (\sizeE(v_j), \sizeT(t_j)) \\\rankE(\Gamma, x{:}t_i \vdash e \typecheck \htriple{(A;\globalA\topA) \land A_i'}{t}{B}) &= (\sizeE(e), 1 + \sizeT(t)) \end{align}\]ChkEOpApp:

[ChkEOpApp]

The rank of the current call is \[\begin{align} \rankE(\Gamma \vdash \perform{x}{\effop}{\overline{v_i}}{e} \typecheck \htriple{A}{t}{B}) &= (1 + \sizeE(e) + \Sigma \sizeE(v_i), 1 + \sizeT(t)) \end{align}\] which is greater than those of the recursive calls: \[\begin{align} \rankE(\Gamma \vdash v_j \typecheck t_j) &= (\sizeE(v_j), \sizeT(t_j)) \\\rankE(\Gamma, \overline{z_k{:}t_k}, x{:}t_i \vdash e \typecheck \htriple{(A;\globalA\topA) \land A_i'}{t}{B}) &= (\sizeE(e), 1 + \sizeT(t)) \end{align}\]ChkApp:

[ChkApp]

The rank of the current call is \[\begin{align} \rankE(\Gamma \vdash \zlet{x}{v_1\ v_2}{e} \typecheck \htriple{A}{t}{B}) &= (1 + \sizeE(e) + \sizeE(v_1) + \sizeE(v_2), 1 + \sizeT(t)) \end{align}\] which is greater than those of the recursive calls: \[\begin{align} \rankE(\Gamma \vdash v_1 \typeinfer \overline{z_k{:}b_k}\garr y{:}t_y \sarr \tau_x) &= (\sizeE(v_1), 0) \\\rankE(\Gamma \vdash v_2 \typecheck t_y) &= (\sizeE(v_2), \sizeT(t_y)) \\\rankE(\Gamma, \overline{z_k{:}t_k}, x{:}t_i \vdash e \typecheck \htriple{(A;\globalA\topA) \land A_i'}{t}{B}) &= (\sizeE(e), 1 + \sizeT(t)) \end{align}\]ChkFunc:

[ChkFunc] x:t_x’

The rank of the current call is \[\begin{align} \rankE(\Gamma \vdash \zlam{x}{t_x}{e} \typecheck x{:}t_x' \sarr \tau) &= (1 + \sizeE(e) , 1 + \sizeT(t_x') + \sizeT(\tau)) \end{align}\] which is greater than that of the recursive call: \[\begin{align} \rankE(\Gamma, x{:}t_x' \vdash e \typecheck \tau) &= (\sizeE(e), \sizeT(\tau)) \end{align}\]ChkFix:

[ChkFix] t_f’

The rank of the current call is \[\begin{align} \rankE(\Gamma \vdash \zfix{f}{t_f}{x}{t_x}{e} \typecheck t_f') &= (2 + \sizeE(e) , \sizeT(t_f')) \end{align}\] which is greater than that of the recursive call: \[\begin{align} \rankE(\Gamma, f{:}t_f' \vdash \zlam{x}{t_x}{e} \typecheck t_f') &= (1 + \sizeE(e), \sizeT(t_f')) \end{align}\]ChkMatch:

[ChkMatch]

The rank of the current call is \[\begin{align} \rankE(\Gamma \vdash \match{v} \overline{d_i\, \overline{y_{j}} \to e_i} \typecheck \tau) &= (1 + \sizeE(v) + \Sigma \sizeE(e_i) , \sizeT(\tau)) \end{align}\] which is greater than that of each recursive call: \[\begin{align} \rankE(\Gamma, \overline{{y_{j}}{:}t_{j}}, z{:}\rawnuot{b}{\vnu = v \land \phi_i} \vdash e_i \typecheck \tau) &= (\sizeE(e_i), \sizeT(\tau)) \end{align}\]ChkSub:

[ChkSub] e

The rank of the current call is \[\begin{align} \rankE(\Gamma \vdash e \typecheck \htriple{A_2}{t_2}{B_2}) &= (\sizeE(e) , 1 + \sizeT(t_2)) \end{align}\] which is greater than that of the recursive call: \[\begin{align} \rankE(\Gamma \vdash e \typeinfer \htriple{A_1}{t_1}{B_1}) &= (\sizeE(e), 0) \end{align}\]ChkSubV:

[ChkSubV] v t

The rank of the current call is \[\begin{align} \rankE(\Gamma \vdash v \typecheck t) &= (\sizeE(v) , \sizeT(t)) \end{align}\] which is greater than that of the recursive call: \[\begin{align} \rankE(\Gamma \vdash v \typeinfer t') &= (\sizeE(v), 0) \end{align}\]

Since the rank strictly decreases at every recursive call, the typing algorithm terminates, and is thus decidable. ◻
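The mutually recursive structure this argument covers can be seen in a toy bidirectional checker (a hypothetical Python sketch, not the paper's OCaml implementation): every recursive call either moves to a strict subterm, so the first rank component decreases, or keeps the same term while switching from checking to synthesis, so the second component decreases.

```python
# Toy bidirectional type checker illustrating the check/infer mutual
# recursion whose termination the proof above establishes. All names
# here are illustrative, not the paper's implementation.
from dataclasses import dataclass

@dataclass
class Var: name: str
@dataclass
class Lam: param: str; body: object
@dataclass
class App: fn: object; arg: object
@dataclass
class Ann: expr: object; ty: object      # type-annotated expression

@dataclass(frozen=True)
class Base: name: str
@dataclass(frozen=True)
class Arrow: dom: object; cod: object

def infer(ctx, e):
    if isinstance(e, Var):
        return ctx[e.name]               # no recursive call
    if isinstance(e, Ann):
        check(ctx, e.expr, e.ty)         # strict subterm: rank decreases
        return e.ty
    if isinstance(e, App):
        fty = infer(ctx, e.fn)           # strict subterm
        assert isinstance(fty, Arrow)
        check(ctx, e.arg, fty.dom)       # strict subterm
        return fty.cod
    raise TypeError("cannot infer")

def check(ctx, e, ty):
    if isinstance(e, Lam):
        assert isinstance(ty, Arrow)
        # subterm e.body against smaller type ty.cod: rank decreases
        check({**ctx, e.param: ty.dom}, e.body, ty.cod)
        return
    # subsumption: same term e, but the synthesis rank (sizeE(e), 0)
    # is below the checking rank (sizeE(e), sizeT(ty))
    assert infer(ctx, e) == ty

b = Base("int")
ident = Ann(Lam("x", Var("x")), Arrow(b, b))
assert infer({}, ident) == Arrow(b, b)
```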

[tab:evaluation-long-1] and [tab:evaluation-long-2] list the information for individual ADT methods from [tab:evaluation]. Our full benchmark suite and the corresponding Coq proofs are available on Zenodo at https://doi.org/10.5281/zenodo.10806686.

| Datatype | Library | #Ghost | s\(_{I}\) | Method | #Branch | #App | #SAT | #Inc | avg. s\(_{A}\) | t\(_{\text{SAT}}\) (s) | t\(_{\text{Inc}}\) (s) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Stack | LinkedList | 0 | 4 | is_empty | 3 | 2 | 60 | 3 | 59 | 0.29 | 0.03 |
| | LinkedList | 0 | 4 | empty | 2 | 3 | 68 | 2 | 47 | 0.37 | 0.01 |
| | LinkedList | 0 | 4 | concat_aux | 4 | 7 | 297 | 7 | 98 | 1.63 | 0.06 |
| | LinkedList | 0 | 4 | concat | 2 | 3 | 34 | 2 | 57 | 0.16 | 0.01 |
| | LinkedList | 0 | 4 | cons | 3 | 7 | 203 | 5 | 112 | 1.18 | 0.05 |
| | LinkedList | 0 | 4 | head | 2 | 2 | 65 | 3 | 55 | 0.36 | 0.03 |
| | LinkedList | 0 | 4 | tail | 2 | 2 | 159 | 4 | 69 | 0.79 | 0.04 |
| | KVStore | 1 | 9 | is_empty | 2 | 1 | 93 | 4 | 79 | 0.55 | 0.04 |
| | KVStore | 1 | 9 | empty | 3 | 5 | 213 | 6 | 176 | 1.29 | 0.06 |
| | KVStore | 1 | 9 | concat_aux | 3 | 6 | 709 | 16 | 167 | 3.84 | 0.19 |
| | KVStore | 1 | 9 | concat | 2 | 3 | 100 | 4 | 89 | 0.58 | 0.03 |
| | KVStore | 1 | 9 | cons | 4 | 10 | 874 | 17 | 226 | 5.78 | 0.23 |
| | KVStore | 1 | 9 | head | 2 | 2 | 183 | 5 | 98 | 0.97 | 0.06 |
| | KVStore | 1 | 9 | tail | 2 | 2 | 396 | 8 | 125 | 1.79 | 0.12 |
| Set | Tree | 0 | 12 | mem | 3 | 5 | 115 | 4 | 212 | 0.69 | 0.05 |
| | Tree | 0 | 12 | insert | 2 | 5 | 85 | 3 | 215 | 0.51 | 0.04 |
| | Tree | 0 | 12 | empty | 1 | 0 | 17 | 1 | 111 | 0.11 | 0.01 |
| | Tree | 0 | 12 | insert_aux | 5 | 12 | 1589 | 11 | 531 | 9.34 | 0.53 |
| | Tree | 0 | 12 | mem_aux | 5 | 10 | 1469 | 11 | 490 | 8.48 | 0.41 |
| | KVStore | 1 | 9 | insert | 3 | 5 | 245 | 6 | 160 | 1.51 | 0.06 |
| | KVStore | 1 | 9 | empty | 1 | 0 | 7 | 1 | 42 | 0.04 | 0.01 |
| | KVStore | 1 | 9 | mem | 1 | 1 | 97 | 4 | 73 | 0.60 | 0.03 |
| Queue | LinkedList | 0 | 4 | snoc | 2 | 6 | 171 | 4 | 110 | 0.84 | 0.03 |
| | LinkedList | 0 | 4 | head | 2 | 2 | 65 | 3 | 55 | 0.36 | 0.03 |
| | LinkedList | 0 | 4 | append | 4 | 9 | 190 | 4 | 96 | 1.08 | 0.06 |
| | LinkedList | 0 | 4 | empty | 2 | 3 | 68 | 2 | 47 | 0.37 | 0.01 |
| | LinkedList | 0 | 4 | is_empty | 3 | 2 | 60 | 3 | 59 | 0.29 | 0.03 |
| | LinkedList | 0 | 4 | tail | 2 | 2 | 167 | 4 | 93 | 0.84 | 0.04 |
| | Graph | 1 | 24 | snoc | 3 | 9 | 984 | 12 | 491 | 6.60 | 0.20 |
| | Graph | 1 | 24 | head | 2 | 2 | 291 | 5 | 251 | 1.57 | 0.24 |
| | Graph | 1 | 24 | append | 5 | 12 | 1212 | 14 | 525 | 8.19 | 0.86 |
| | Graph | 1 | 24 | empty | 2 | 3 | 202 | 4 | 318 | 1.27 | 0.08 |
| | Graph | 1 | 24 | is_empty | 2 | 1 | 165 | 4 | 234 | 1.03 | 0.10 |
| | Graph | 1 | 24 | tail | 3 | 4 | 980 | 12 | 409 | 6.58 | 0.59 |
| MinSet | Set | 1 | 28 | minset_mem | 1 | 1 | 199 | 6 | 241 | 1.43 | 0.24 |
| | Set | 1 | 28 | minimum | 2 | 2 | 181 | 5 | 251 | 0.89 | 0.37 |
| | Set | 1 | 28 | minset_insert | 3 | 6 | 612 | 17 | 294 | 4.19 | 1.58 |
| | Set | 1 | 28 | minset_singleton | 2 | 3 | 120 | 4 | 259 | 0.80 | 0.13 |
| | KVStore | 1 | 25 | minset_mem | 1 | 1 | 207 | 6 | 223 | 1.49 | 0.31 |
| | KVStore | 1 | 25 | minimum | 2 | 2 | 191 | 5 | 234 | 0.95 | 0.36 |
| | KVStore | 1 | 25 | minset_insert | 5 | 8 | 2227 | 23 | 519 | 10.93 | 9.34 |
| | KVStore | 1 | 25 | minset_singleton | 3 | 6 | 235 | 5 | 408 | 1.59 | 0.32 |
| LazySet | Tree | 0 | 12 | mem_aux | 5 | 10 | 1469 | 11 | 490 | 8.37 | 0.41 |
| | Tree | 0 | 12 | lazy_insert | 2 | 6 | 85 | 3 | 301 | 0.50 | 0.04 |
| | Tree | 0 | 12 | force | 1 | 1 | 17 | 1 | 171 | 0.11 | 0.01 |
| | Tree | 0 | 12 | insert_aux | 5 | 12 | 1589 | 11 | 531 | 9.33 | 0.57 |
| | Tree | 0 | 12 | lazy_mem | 3 | 6 | 115 | 4 | 298 | 0.68 | 0.07 |
| | Tree | 0 | 12 | new_thunk | 1 | 0 | 17 | 1 | 111 | 0.11 | 0.01 |
| | Set | 1 | 9 | lazy_mem | 1 | 2 | 98 | 4 | 105 | 0.59 | 0.04 |
| | Set | 1 | 9 | force | 1 | 1 | 7 | 1 | 66 | 0.03 | 0.01 |
| | Set | 1 | 9 | lazy_insert | 2 | 3 | 101 | 4 | 106 | 0.57 | 0.04 |
| | Set | 1 | 9 | new_thunk | 1 | 0 | 7 | 1 | 42 | 0.03 | 0.01 |
| | KVStore | 1 | 9 | lazy_mem | 1 | 2 | 98 | 4 | 105 | 0.59 | 0.04 |
| | KVStore | 1 | 9 | insert_aux | 3 | 5 | 245 | 6 | 160 | 1.49 | 0.06 |
| | KVStore | 1 | 9 | force | 1 | 1 | 7 | 1 | 66 | 0.03 | 0.01 |
| | KVStore | 1 | 9 | lazy_insert | 2 | 3 | 101 | 4 | 121 | 0.58 | 0.04 |
| | KVStore | 1 | 9 | new_thunk | 1 | 0 | 7 | 1 | 42 | 0.03 | 0.01 |

| Datatype | Library | #Ghost | s\(_{I}\) | Method | #Branch | #App | #SAT | #Inc | avg. s\(_{A}\) | t\(_{\text{SAT}}\) (s) | t\(_{\text{Inc}}\) (s) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Heap | Tree | 0 | 12 | minimum | 2 | 3 | 82 | 3 | 207 | 0.49 | 0.03 |
| | Tree | 0 | 12 | insert | 2 | 5 | 85 | 3 | 215 | 0.50 | 0.04 |
| | Tree | 0 | 12 | contains_aux | 5 | 10 | 871 | 11 | 311 | 4.87 | 0.17 |
| | Tree | 0 | 12 | contains | 3 | 5 | 115 | 4 | 212 | 0.68 | 0.06 |
| | Tree | 0 | 12 | empty | 1 | 0 | 17 | 1 | 111 | 0.11 | 0.01 |
| | Tree | 0 | 12 | insert_aux | 5 | 12 | 1589 | 11 | 531 | 9.33 | 0.52 |
| | Tree | 0 | 12 | minimum_aux | 2 | 3 | 408 | 5 | 334 | 2.11 | 0.09 |
| | LinkedList | 0 | 4 | insert_aux | 4 | 8 | 497 | 8 | 118 | 3.03 | 0.08 |
| | LinkedList | 0 | 4 | empty | 1 | 0 | 11 | 1 | 19 | 0.07 | 0.01 |
| | LinkedList | 0 | 4 | contains | 2 | 4 | 65 | 3 | 72 | 0.36 | 0.02 |
| | LinkedList | 0 | 4 | contains_aux | 3 | 5 | 359 | 6 | 129 | 1.88 | 0.05 |
| | LinkedList | 0 | 4 | insert | 2 | 6 | 137 | 5 | 93 | 0.88 | 0.05 |
| | LinkedList | 0 | 4 | minimum | 2 | 2 | 63 | 3 | 65 | 0.36 | 0.03 |
| FileSystem | Tree | 1 | 20 | deleteChildren | 3 | 7 | 1055 | 13 | 470 | 6.39 | 1.79 |
| | Tree | 1 | 20 | del_path_in_dir | 2 | 5 | 452 | 5 | 284 | 1.76 | 0.17 |
| | Tree | 1 | 20 | add | 3 | 8 | 2085 | 17 | 652 | 14.15 | 2.21 |
| | Tree | 1 | 20 | add_path_in_dir | 2 | 5 | 837 | 9 | 424 | 4.90 | 7.25 |
| | Tree | 1 | 20 | delete | 4 | 16 | 1064 | 12 | 520 | 6.95 | 0.76 |
| | Tree | 1 | 20 | init | 1 | 5 | 431 | 7 | 323 | 3.22 | 0.49 |
| | KVStore | 1 | 17 | deleteChildren | 3 | 7 | 2244 | 19 | 392 | 14.00 | 1.38 |
| | KVStore | 1 | 17 | add | 4 | 10 | 8144 | 43 | 481 | 56.64 | 16.54 |
| | KVStore | 1 | 17 | delete | 4 | 20 | 5658 | 10 | 234 | 7.91 | 1.49 |
| | KVStore | 1 | 17 | init | 1 | 3 | 449 | 5 | 166 | 3.45 | 0.08 |
| DFA | KVStore | 2 | 18 | is_node | 1 | 0 | 19 | 1 | 79 | 0.11 | 0.01 |
| | KVStore | 2 | 18 | add_transition | 2 | 2 | 1417 | 14 | 176 | 9.68 | 0.45 |
| | KVStore | 2 | 18 | add_node | 1 | 0 | 19 | 1 | 79 | 0.11 | 0.01 |
| | KVStore | 2 | 18 | del_transition | 2 | 2 | 704 | 8 | 152 | 4.72 | 0.32 |
| | KVStore | 2 | 18 | is_transition | 3 | 3 | 3604 | 25 | 228 | 18.84 | 1.60 |
| | Graph | 2 | 11 | is_node | 1 | 1 | 115 | 4 | 95 | 0.74 | 0.04 |
| | Graph | 2 | 11 | add_transition | 4 | 4 | 2420 | 27 | 192 | 16.73 | 2.12 |
| | Graph | 2 | 11 | add_node | 2 | 2 | 118 | 4 | 96 | 0.71 | 0.04 |
| | Graph | 2 | 11 | del_transition | 4 | 4 | 3625 | 27 | 225 | 25.53 | 3.42 |
| | Graph | 2 | 11 | is_transition | 4 | 3 | 3622 | 27 | 208 | 25.86 | 2.72 |
| ConnectedGraph | Set | 2 | 9 | is_transition | 2 | 1 | 1503 | 30 | 77 | 10.03 | 0.25 |
| | Set | 2 | 9 | add_transition | 4 | 4 | 3889 | 50 | 357 | 27.39 | 12.80 |
| | Set | 2 | 9 | add_node | 3 | 3 | 2332 | 35 | 314 | 15.80 | 1.24 |
| | Set | 2 | 9 | is_node | 3 | 2 | 1441 | 30 | 197 | 9.59 | 0.45 |
| | Set | 2 | 9 | singleton | 2 | 2 | 362 | 10 | 160 | 2.30 | 0.09 |
| | Graph | 1 | 20 | is_transition | 4 | 3 | 1349 | 17 | 360 | 19.19 | 53.90 |
| | Graph | 1 | 20 | add_transition | 4 | 4 | 1351 | 17 | 361 | 19.23 | 52.38 |
| | Graph | 1 | 20 | add_node | 3 | 4 | 666 | 12 | 300 | 14.40 | 14.34 |
| | Graph | 1 | 20 | is_node | 1 | 1 | 139 | 4 | 237 | 0.90 | 0.34 |
| | Graph | 1 | 20 | singleton | 2 | 3 | 202 | 4 | 255 | 1.28 | 0.47 |

[38] Vol. 9560. Springer-Verlag, Berlin, Heidelberg, 1–32. https://doi.org/10.1007/978-3-319-27810-0_1.

*Throughout this paper, we italicize event arguments.*↩︎

The remaining rules are completely standard and provided in the .↩︎

Our type system is agnostic to the syntax used to express SFAs, and our implementation uses the more developer-friendly syntax of Symbolic Regular Languages [6]; this syntax is provided in .↩︎

Intuitively, since a type of the form \(\htriple{A}{t}{B}\) describes a stateful computation, it cannot be duplicated or eliminated, and thus it would be unsafe to allow unrestricted use of “computational” variables, as would be possible if type contexts could contain such bindings.↩︎

The complete set of typing rules for is included in the .↩︎

*Recall that the semantics of an operator is defined by the auxiliary \(e \Downarrow v\) and \(\alpha \vDash e \Downarrow v\).*↩︎

*A Coq mechanization of this theorem and the type soundness corollary are provided in the .*↩︎

Our artifact provides a docker image that contains the source code of and all our benchmarks.↩︎

Our supplementary material includes a full table listing the corresponding information for each ADT method.↩︎