STATISTICAL INFERENCE FOR A SERVICE SYSTEM WITH NON-STATIONARY ARRIVALS AND UNOBSERVED BALKING


Abstract

We study a multi-server queueing system with a periodic arrival rate and customers whose joining decision is based on their patience and a delay proxy. Specifically, each customer has a patience level sampled from a common distribution. Upon arrival, they receive an estimate of their delay before joining service and then join the system only if this delay does not exceed their patience; otherwise they balk. The main objective is to estimate the parameters pertaining to the arrival rate and patience distribution. The complicating factor is that this inference should be performed based on the observed process only, i.e., balking customers remain unobserved. We set up a likelihood function of the state-dependent effective arrival process (i.e., corresponding to the customers who join), establish strong consistency of the MLE, and derive the asymptotic distribution of the estimation error. Due to the intrinsic non-stationarity of the Poisson arrival process, the proof techniques used in previous work become inapplicable. The novelty of our proof technique lies in constructing i.i.d. objects from dependent samples by decomposing the sample path into i.i.d. regeneration cycles. The feasibility of the MLE approach is discussed via a sequence of numerical experiments, for multiple choices of the functions which provide delay estimates. In particular, it is observed that the arrival rate is best estimated at high service capacities, and the patience distribution is best estimated at lower service capacities.

Keywords. Service systems \(\circ\) balking \(\circ\) periodic arrivals \(\circ\) estimation \(\circ\) customer patience

Affiliations. SAB is with the Korteweg-de Vries Institute for Mathematics, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands. s.a.bodas@uva.nl

MM is with the Mathematical Institute, Leiden University, P.O. Box 9512, 2300 RA Leiden, The Netherlands. He is also affiliated with Korteweg-de Vries Institute for Mathematics, University of Amsterdam, Amsterdam, The Netherlands; Eurandom, Eindhoven University of Technology, Eindhoven, The Netherlands; Amsterdam Business School, Faculty of Economics and Business, University of Amsterdam, Amsterdam, The Netherlands. m.r.h.mandjes@math.leidenuniv.nl

LR is with the Department of Statistics, University of Haifa, 199 Aba Khoushy Avenue, Mount Carmel, Haifa, Israel. lravner@stat.haifa.ac.il

Acknowledgments. This research was supported by the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement no., by the NWO Gravitation project NETWORKS under grant agreement no..003 and by the Israel Science Foundation (ISF), grant no./23. Date: 2025-09-22.

1 INTRODUCTION↩︎

In our modern world, customers interact with service systems on a daily basis. Examples abound across a broad range of economic sectors: besides conventional shops, one can think of online delivery services, communication and computing networks, call centers, and various types of healthcare systems. Seen from a conceptual perspective, in these systems the service capacity is a scarce resource, for which customers compete. When these customers are provided information on the current occupancy level, they may find the anticipated service level too poor, so that they decide not to join, a phenomenon known as ‘balking’. Such balking decisions are typically made based on an estimate of the delay the customer under consideration would be facing, or, more generally, the performance level they would receive.

When operating a service system, it is of crucial importance to have an accurate quantitative description of its underlying dynamics. Often one resorts to a queueing-theoretic framework, encompassing a customer arrival process, a specification of the service requirements, and a service mechanism. In a practical context, this means that the ‘model primitives’ are estimated, by performing statistical inference based on observations of the system over time. A frequently overlooked complication, however, is that balking customers often remain unobserved. Despite the fact that balking customers do not join the system, a system operator still wishes to reliably estimate the total arrival rate, i.e., the rate of joining customers plus the rate of balking customers. This total arrival rate is relevant because it reflects the demand for the service, which is of key importance in decisions concerning e.g. staffing, dimensioning or pricing. Healthcare service providers, for instance, could use knowledge of the total demand to decide whether extra doctors or beds are required. Along the same lines, delivery services could consider a price adjustment so as to enlarge their market share.

A second complication, ignored in many studies, is that the demand for service often fluctuates with time, typically following daily or weekly patterns; see for instance [1], [2] for analyses in the call center context. This means that while many modelling frameworks assume a time-independent Poissonian effective arrival rate, it would be more natural to work with a rate that follows some periodic pattern.

When setting up an estimation approach, one would like to be sure it provides reliable output. This concretely means that, ideally, the proposed estimator is consistent (i.e., converges to the true values as the sample size grows large), while in addition we would like to have insight into its asymptotic properties (for instance by proving that the estimator is asymptotically normally distributed).

The main conclusion from the above considerations, is that there is a clear need for a robust methodology to statistically estimate the service system’s model parameters. To make sure it can be used in an operational context, it should fulfil the following three requirements: (a) it should be based on observable data only, i.e., data corresponding to the joining customers, (b) it should be able to deal with a non-constant effective arrival rate, and (c) the proposed procedure should be equipped with provable performance guarantees.

In this paper we choose to model the service system as a first-come-first-served queue of the M\(_t\)/G/\(s\)+H type. We have selected this class of models because of its generality, capturing a broad range of relevant service systems. It can be described as follows:

  • In the first place, customers arrive according to a non-homogeneous periodic Poissonian arrival rate \(\lambda_{{\boldsymbol{\alpha} }}(\cdot)\ge 0\). Acknowledging that this rate typically follows a daily (or weekly) pattern, we throughout impose the natural assumption that we know the length of the corresponding period. In our setup we develop a parametric inference procedure: we let the arrival rate be parametrized by an unknown (finite-dimensional) parameter vector \({\boldsymbol{\alpha} }\).

  • Each customer brings along a random amount of work. These service requirements form an i.i.d. sequence of non-negative random variables, with cumulative distribution function \(G(\cdot)\), sampled independently of the arrival process.

  • There are \(s\in{\mathbb{N}}\) servers, who serve customers in a first-come-first-served manner. As \(s\) is a design parameter that is under the full control of the service provider, throughout this paper we assume it is known.

  • Our system explicitly takes into account the option of balking by incorporating the following impatience mechanism. Each arriving customer is equipped with a patience level, sampled independently of everything else. The customers’ patience levels are i.i.d., distributed as the non-negative random variable \(Y\), characterized through the cumulative distribution function \(H_{{\boldsymbol{\theta} }}(\cdot)\). Again, we work in a parametric context, with \({\boldsymbol{\theta} }\) representing the underlying (finite-dimensional) parameter vector. At every time instant \(t \geq 0\), the service operator announces \(\Delta(t)\), an estimate of the delay before service that a customer arriving at time \(t\) can expect, given the state of the system at time \(t\). By comparing this delay estimate with their patience, customers decide to join or to balk.

  • In this paper, we consider a general delay announcement mechanism, unlike [3], which analyzes only the scenario \(\Delta(t) = V(t)\) (where \(V(t)\) is the virtual waiting time at time \(t\)). In particular, we consider the form \(\Delta(t) = \psi\big(\mathcal{R}(t)\big)\), where \(\psi\) is a function which maps the state of the system to a corresponding delay estimate. For example, in some systems delay announcements may be estimated based on the number of customers in the system; think of a call center or a physical shop. Once a customer joins the system, they do not leave until service completion.

In the context of this M\(_t\)/G/\(s\)+H setup, our concrete objective is to devise a procedure, endowed with provable performance guarantees, to estimate the parameter vector \(({\boldsymbol{\alpha} }, {\boldsymbol{\theta} })\) governing the arrival rate and the patience distribution, relying on information regarding the non-balking customers only. Our procedure is based on maximum likelihood estimation (MLE), exploiting that the system’s key dynamics are dictated by the state-dependent effective arrival rate in the following way. Suppose that at a given point in time, which we associate for ease with time \(0\), the virtual waiting time is \(v\); this virtual waiting time is defined as the waiting time of a hypothetical customer joining at time \(0\). If there are no effective arrivals in \((0,t]\), then the virtual waiting time at time \(t\) is \((v-t)^{+}\), so that the effective arrival rate at time \(t\) becomes \[\lambda_{{\boldsymbol{\alpha} }}(t)\big(1-H_{{\boldsymbol{\theta} }}((v-t)^{+})\big).\] Observe how this arrival rate depends both on time as well as the state of the system (or, more concretely, the current congestion level). Various other variants can be thought of, such as the above-mentioned mechanism in which the delay announcement is a function of the number of customers in the system. Note that in the latter case, between successive events (which can be arrivals or departures), the arrival rate depends only on time. The idea is then that once we know the state-dependent effective arrival rate, we can calculate the distribution of the effective interarrival times conditioned on the system state, which we use to produce a closed-form expression for the conditional log-likelihood of the arrival process.
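To make this construction concrete, the following minimal Python sketch evaluates the state-dependent effective arrival rate and the resulting log-density of the next effective interarrival time. The sinusoidal rate and exponential patience distribution are hypothetical stand-ins for \(\lambda_{{\boldsymbol{\alpha} }}(\cdot)\) and \(H_{{\boldsymbol{\theta} }}(\cdot)\), not the paper's prescribed choices:

```python
import math

def lam(t, a=(50.0, 20.0, 1.0, 0.1)):
    # hypothetical periodic rate lambda_alpha(t) = a0 + a1*sin(a2 - a3*t)
    return a[0] + a[1] * math.sin(a[2] - a[3] * t)

def H(x, theta=1.0):
    # hypothetical exponential patience distribution with rate theta
    return 1.0 - math.exp(-theta * x) if x > 0 else 0.0

def effective_rate(t, v):
    # effective arrival rate at time t when the virtual waiting time was v
    # at time 0 and no customer has joined in (0, t]:
    #   lambda_alpha(t) * (1 - H_theta((v - t)^+))
    return lam(t) * (1.0 - H(max(v - t, 0.0)))

def log_density_next_arrival(t, v, n=2000):
    # log-density of the next effective interarrival time equalling t:
    #   log lambda_eff(t) - int_0^t lambda_eff(u) du   (midpoint rule)
    h = t / n
    integral = sum(effective_rate((i + 0.5) * h, v) * h for i in range(n))
    return math.log(effective_rate(t, v)) - integral
```

Summing such log-densities over the observed, state-dependent interarrival times is what produces the closed-form conditional log-likelihood referred to above.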

When attempting to estimate \(({\boldsymbol{\alpha} }, {\boldsymbol{\theta} })\), there are various additional complications to be dealt with. An important challenge is that when the non-homogeneous arrival rate is significantly greater than the service capacity, the system operator, who only observes the joining customers, may incorrectly conclude that the arrival rate is a small constant, as explained in Example 1. Another challenge concerns the (possible lack of) identifiability of the arrival and patience distribution functions with respect to the parameters, as explained in Example 2.

Example 1.

Figure 1: Effect of impatience and service capacity on the state of the system. Here ‘balk’ is the percentage of customers balking. Blue lines represent the arrival rate, and red lines the number of customers in the system. a — \(s=1\), balk=%; b — \(s=2\), balk=%; c — \(s=4\), balk=%; d — \(s=8\), balk=%; e — \(s=16\), balk=%; f — \(s=32\), balk=%

Figure 1 shows the time evolution of six M\(_t\)/G/\(s\)+H systems. We throughout assume that the arrival rate is a sinusoid with period \(20\,\pi\), given by \(\lambda_{{\boldsymbol{\alpha} }_0}(t) = 50+20\sin(1-0.1t)\). Both the patience distribution function \(H_{{\boldsymbol{\theta} }_0}(\cdot)\) and the service requirements distribution function \(G(\cdot)\) are taken exponential with mean 1. We vary the number of servers, considering \(s \in \{1,2,4,8,16,32\}\). The specific realizations of the arrival times, service requirements, and patience times are kept unchanged across all experiments.

The blue line is a realization of the empirical arrival process with rate \(\lambda_{{\boldsymbol{\alpha} }_0}(t)\). For each integer-valued time point \(n\), the y-coordinate of the blue line represents the total number of arrivals (balking plus non-balking, that is) that have occurred in \([n,n+1)\). The red line plots the number of customers in the system as a function of time.

We proceed by examining how the red line behaves across the six instances. For the cases \(s=1\) and \(s=2\), the waiting times are so high that a dominant fraction of customers balk. By observing only the red line in Figures 1 (a) and 1 (b), one cannot infer that the arrival rate is actually large and non-homogeneous. In fact, if the administrator is unaware of balking, they may even conclude that the arrival process is stationary and not time-dependent. Furthermore, from a statistical perspective, balking customers help estimate the patience parameters \({\boldsymbol{\theta} }_0\), but not the arrival rate parameters \({\boldsymbol{\alpha} }_0\). On the other hand, for the cases \(s=16\) and \(s=32\), the service capacity is much higher than the arrival rate, so that only a negligible percentage of the arriving customers balk. As a consequence, the estimates of \({\boldsymbol{\alpha} }_0\) are precise whereas those of \({\boldsymbol{\theta} }_0\) are poor. In a way, the ‘best’ instances are therefore the middle ones (i.e., \(s=4\) and \(s=8\)), where both \({\boldsymbol{\alpha} }_0\) and \({\boldsymbol{\theta} }_0\) allow a reasonably accurate estimation. \(\Diamond\)
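The non-homogeneous arrival stream underlying this example can be generated by thinning. A minimal Python sketch (standard library only; the seed and horizon are arbitrary illustrative choices) of Lewis–Shedler thinning for the rate \(\lambda_{{\boldsymbol{\alpha} }_0}(t) = 50+20\sin(1-0.1t)\):

```python
import math
import random

def sample_arrivals(T, seed=42):
    """Sample arrival times of a non-homogeneous Poisson process on (0, T]
    with rate lam(t) = 50 + 20*sin(1 - 0.1*t), via Lewis-Shedler thinning."""
    random.seed(seed)
    lam_max = 70.0  # upper bound on the rate: 50 + 20 = 70
    t, arrivals = 0.0, []
    while True:
        t += random.expovariate(lam_max)       # candidate from rate-lam_max process
        if t > T:
            return arrivals
        lam_t = 50.0 + 20.0 * math.sin(1.0 - 0.1 * t)
        if random.random() <= lam_t / lam_max:  # accept with prob lam(t)/lam_max
            arrivals.append(t)
```

Each candidate point of a homogeneous rate-\(\lambda_{\max}\) process is kept with probability \(\lambda(t)/\lambda_{\max}\), which yields exactly the desired non-homogeneous process.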

Figure 2: Identifiability challenge while estimating patience distribution

Example 2. We simulate an M\(_t\)/G/\(s\)+H queue characterized by \(\lambda_{{\boldsymbol{\alpha} }_0}(t) = 50 + 15\sin(4-0.1t) + 10\sin(1-0.5t)\), \(H_{{\boldsymbol{\theta} }_0}(x) = 1 - 0.8e^{-x} - 0.2e^{-0.1x}\), \(G \sim \text{Exp}(0.4)\), and \(100\,000\) arrivals (balking plus non-balking, that is). The true parameter vectors are therefore \({\boldsymbol{\alpha} }_0 = (50, 15, 10, 4, 1)\) and \({\boldsymbol{\theta} }_0 = (0.8, 1, 0.1)\). By applying our estimation procedure, described later in this paper, the estimates that we obtain are \(\hat{{\boldsymbol{\alpha} }} = (49.44, 14.95, 10.41, 4.00, 0.98)\) and \(\hat{{\boldsymbol{\theta} }} = (0.85, 0.91, 0.05)\). Observe that \(\hat{{\boldsymbol{\alpha} }}\) is reasonably accurate, whereas \(\hat{{\boldsymbol{\theta} }}\) is relatively far from \({\boldsymbol{\theta} }_0\). However, Figure 2, in which the blue and red lines plot the distribution functions \(H_{{{\boldsymbol{\theta} }_0}}(x)\) and \(H_{\hat{{\boldsymbol{\theta} }}}(x)\) respectively, shows that an inaccurate estimate \(\hat{{\boldsymbol{\theta} }}\) does not imply that the corresponding estimate \(H_{\hat{{\boldsymbol{\theta} }}}(x)\) is inaccurate. Put differently, the inaccuracy in the estimation of \({\boldsymbol{\theta} }_0\) is in a way harmless, because \(H_{{\boldsymbol{\theta} }_0}(x)\) is estimated with great precision. One could say that we are facing a ‘quasi-unidentifiability’ issue: multiple parameter vectors lead to ‘almost identical’ models.

The system’s congestion level plays a crucial role in the ability to accurately estimate the patience parameters. In this simulation the maximum virtual waiting time observed by a customer is 5.304. This is less than 6, which is, as revealed by Figure 2, the value beyond which one can really differentiate between \(H_{{\boldsymbol{\theta} }_0}(x)\) and \(H_{\hat{{\boldsymbol{\theta} }}}(x)\). When increasing the number of arrivals, the maximum observed virtual waiting time will increase, thus reducing the gap between \(\hat{{\boldsymbol{\theta} }}\) and \({\boldsymbol{\theta} }_0\). \(\Diamond\)
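This quasi-unidentifiability can be checked numerically. Assuming the hyperexponential parametrization \(H_{{\boldsymbol{\theta} }}(x) = 1 - \theta_1 e^{-\theta_2 x} - (1-\theta_1) e^{-\theta_3 x}\) suggested by Example 2, the following sketch compares \(H_{{\boldsymbol{\theta} }_0}\) and \(H_{\hat{{\boldsymbol{\theta} }}}\) on and beyond the range of observed waiting times:

```python
import math

def H(x, p, mu1, mu2):
    # hyperexponential patience cdf: 1 - p*exp(-mu1*x) - (1-p)*exp(-mu2*x)
    return 1.0 - p * math.exp(-mu1 * x) - (1.0 - p) * math.exp(-mu2 * x)

theta0 = (0.8, 1.0, 0.1)        # true parameters from Example 2
theta_hat = (0.85, 0.91, 0.05)  # estimates reported in Example 2

# maximum gap on (roughly) the observed range of virtual waiting times [0, 5]
gap_observed = max(abs(H(x, *theta0) - H(x, *theta_hat))
                   for x in (i / 100 for i in range(501)))
# gap well beyond the observed range, e.g. at x = 20
gap_far = abs(H(20.0, *theta0) - H(20.0, *theta_hat))
```

The two distribution functions are nearly indistinguishable where waiting times were actually observed, even though the parameter vectors differ markedly; the discrepancy only becomes visible in the (unobserved) right tail.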

1.1 Contributions↩︎

This paper succeeds in devising an estimation methodology that fulfils the requirements (a), (b), and (c) defined above. The proposed approach parametrically estimates the arrival rate and patience distribution parameter vectors \(({\boldsymbol{\alpha} }, {\boldsymbol{\theta} })\) in an M\(_t\)/G/\(s\)+H system by only observing information associated with the non-balking customers. It thus extends previous work [3] by incorporating a time-dependent customer arrival rate. Due to this time-dependence, one cannot rely on proof techniques that assume the system has reached stationarity. In greater detail, the contributions are the following:

  • Our estimation procedure is based on maximum likelihood. One would initially think that such an approach should make use of closed-form (transient or stationary) results for the M\(_t\)/G/\(s\)+H queue, but those are hardly available; even the (simpler) M/G/\(s\) system is essentially intractable. We circumvent this problem, however, by constructing a closed-form expression for the conditional log-likelihood pertaining to the state-dependent arrival process.

  • As mentioned before, due to the periodic arrival rate, one cannot assume the system to be in stationarity. We remedy this complication by decomposing the sample path of the queueing system variables into distinct regeneration cycles, which are periods of time after which the system probabilistically resets. This allows us to introduce, out of sequences of dependent observations, an i.i.d. structure, thus facilitating the use of existing results (laws of large numbers, central limit theorems) so as to establish performance guarantees of the resulting estimator. For arrival rate functions and patience distributions satisfying mild regularity conditions and under other natural assumptions, we prove the strong consistency of the maximum likelihood estimator. We also prove asymptotic normality of the corresponding estimation error (scaled by \(\sqrt{n}\)), which is the fastest convergence rate possible in a parametric setting.

    In addition, we have developed a number of verifiable assumptions, based solely on the model primitives, under which the above asymptotic analysis is valid.

  • Our model allows arriving customers to base their joining decision on a delay announcement which is any reasonable function of the complete state of the system. In particular, let \(\mathcal{R}(t) \in \Omega\) encode the full information of the system at time \(t\) (for example, the residual service times and order of service for all customers present in the system at time \(t\)), and let \(\psi: \Omega \mapsto \mathbb{R}\) be a Borel function. Then \(\Delta(t)\) takes the form \(\psi\big(\mathcal{R}(t)\big)\). Concrete examples of \(\psi(\cdot)\) include functions which calculate exact or noisy estimates of the expected waiting time given the state of the system at time \(t\). An important example is that in which \(\Delta(t)\) is estimated based on the number of customers (say \(q\)) in the system at time \(t\). If \(q < s\), then there is at least one idle server and hence \(\Delta(t) = 0\). If \(q \geq s\), then a (biased, but probably reasonable) delay announcement could be \(\Delta(t) = (q-s+1)g/s\), where \(g\) is the mean service time. We can compile this into a single delay announcement function of the form \(\Delta(t) = \psi(\mathcal{R}(t)) = (L(t)-s+1)^{+}g/s\), where \(L(t)\) is the number of customers in the system at time \(t\).

  • We discuss several interesting insights obtained from numerical experiments, such as the identification of instances where estimation of \(({\boldsymbol{\alpha} }_0, {\boldsymbol{\theta} }_0)\) is intrinsically hard (cf. the discussion in Example 1), and how ‘near-unidentifiability’ issues play a role in estimation (cf. the discussion in Example 2). We also assess the estimation accuracy in situations where exact virtual waiting times are not known.
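To illustrate why the regeneration-cycle decomposition is so useful, here is a generic regenerative-process estimator in Python. This is a standard textbook construction under assumed i.i.d. cycles, not the paper's actual likelihood-based estimator: once per-cycle quantities are i.i.d., long-run averages become ratios of cycle means, so laws of large numbers and the CLT apply directly.

```python
import math

def cycle_based_estimate(cycle_sums, cycle_lengths, z=1.96):
    """Estimate a long-run rate as (mean cycle sum) / (mean cycle length),
    with a CLT-based confidence interval via the delta method; the cycles
    are assumed i.i.d., which is what a regeneration structure provides."""
    n = len(cycle_sums)
    r = sum(cycle_sums) / sum(cycle_lengths)
    lbar = sum(cycle_lengths) / n
    # centred terms X_i - r * L_i drive the asymptotic variance of the ratio
    d = [x - r * l for x, l in zip(cycle_sums, cycle_lengths)]
    var = sum(v * v for v in d) / (n - 1)
    half = z * math.sqrt(var / n) / lbar
    return r, (r - half, r + half)
```

The confidence interval shrinks at rate \(1/\sqrt{n}\) in the number of cycles, mirroring the \(\sqrt{n}\)-scaling of the estimation error stated above.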

1.2 Relevant Literature↩︎

This paper remedies two key shortcomings of the predecessor paper [3], where estimation of a constant arrival rate and the patience distribution was carried out for a service system modelled by an M/G/\(s\)+H queue. In the first place, we lift the assumption of the customer arrival rate being constant over time by allowing a periodic arrival rate. Second, [3] assumes a complete information setting, in that each arriving customer knows their exact prospective waiting time, which is often infeasible in practical situations. Importantly, our paper demonstrates that our estimation technique works even when customers have incomplete information, and estimate their prospective waiting time based on a partially observed system state.

An intensively studied topic at the intersection of operations research, applied probability and statistics, concerns the estimation of model primitives based on the (potentially partially observed) evolution of a stochastic process. For instance, [4] estimates distribution functions with truncated data. A specific subfield focuses on inference problems for queues, where the objective is to estimate the input process based on queue length or workload observations; see for instance the detailed survey [5]. Each study in this survey is characterized by its own specific underlying model, observation process, and estimation objectives. For instance, [6] studies an M/G/\(\infty\) system, observing specific aspects of the evolution of the number of customers present, with the objective of estimating the arrival rate and service-time distribution (making the problem semi-parametric). In [7] the M/M/\(s\) queue length is observed in the regime that \(s\) is large, and a diffusion approximation is employed for estimating the arrival and service rates. In [8] one observes the so-called transactional data (i.e., times of service initiations and completions), and based on these one estimates various queue-related statistics. In [9] the workload process of a Lévy-driven queue is sampled at Poisson instants, providing observations that can be used in a method-of-moments estimator for the characteristic exponent of the driving Lévy process. Finally, [10] studies a discrete time M/G/\(\infty\) queue with the objective of estimating the arrival rate and service distribution. It uses an entropy-motivated geometric approximation to employ algorithms used for estimation of hidden Markov models. In general one aims to develop estimation techniques with certain concrete performance guarantees, in particular consistency or asymptotic normality.

A complication we come across in our setting with impatient customers, is that we wish to estimate the total arrival rate, i.e., including the arrival rate corresponding to the customers that balk due to impatience. In the queueing literature, impatience has been studied from several perspectives; in line with the focus of the present paper, we discuss work in which balking is due to arriving customers facing a high prospective waiting time. In [11] a stability condition is established for M\(_t\)/G/1+H systems with a periodic arrival rate. Then, [12] derives the steady-state workload distribution, and hence implicitly also a stability condition, for the M/Ph/1 queue (in which service requirements are of phase type) with a constant patience level across all customers. In [13] the Laplace-Stieltjes transform of the busy period of M/Ph/1 queues is derived, as a limiting case of an associated fluid model. In [14] the busy period distribution in an M/G/1+H system is identified, for various choices of patience distribution \(H(\cdot)\). The problem of estimating customer (im)patience in service systems has also been given significant attention; see e.g., [15], while further references can be found in [3].

Time-varying multi-server queues have also attracted attention in the queueing literature. While [16] provides an extensive bibliography on queues with time-varying arrival rates, we highlight some key works. In [17] the focus is on the distribution of the number of busy servers in an M\(_t\)/G/\(s\)/\(0\) loss system, which has \(s\) servers and no extra waiting space (i.e., there can be at most \(s\) customers in the system at any time) and a non-homogeneous Poisson arrival process, using a modified offered-load approximation. Reference [18] studies the asymptotic behaviour of arrival, departure and waiting times in unstable queues with non-stationary arrival rates. A branch of research focuses on estimation; in this context, [19] studies the approximation of a non-homogeneous arrival rate by a piecewise linear counterpart. In [20] a time-varying Little’s law is used to develop estimators for waiting times in systems with time-varying arrival rates. The main objective of [21] lies in staffing: it presents an algorithm to select the number of servers as a function of time, so as to meet given performance targets. In [22] it is shown that periodic sinusoidal arrival rates have a good empirical fit with arrival data from various service systems.

Poisson processes play a pivotal role in queueing theory, primarily with a constant rate, but the case of a time-dependent rate has received substantial attention as well. In the statistics literature, the case of non-homogeneous, potentially periodic, Poisson processes has been covered by for instance [23][27]; these papers differ in terms of the underlying model (which can be either parametric or non-parametric), the nature of the observation process, and the estimation technique relied upon. A textbook treatment of inference techniques for broad classes of Poisson processes can be found in [28].

We conclude this account of the existing literature by mentioning a number of works that have been used in our consistency and asymptotic normality proofs. In the first place, we rely on results that provide insight into our M\(_t\)/G/\(s\)+H queue; in particular, [29] shows how the regeneration cycles can be dealt with in time-varying queues. In addition, we use results from [30] in our strong consistency proof, and results from [31] on establishing the recurrence of specific sets (by which we can derive properties which play a key role in our proofs).

1.3 Paper Organization↩︎

In Section 2, we provide a model description and present preliminaries. Section 3 details the MLE-based estimator, along with theorems regarding its strong consistency and asymptotic normality. From the methodological perspective, Section 4 can be seen as the heart of this paper: we describe our procedure to decompose the sample path into distinct regeneration cycles, which is a central element in the proofs in Section 5. Section 5 also contains an exhaustive list of model assumptions under which the theorems and proofs hold. A reader who is not interested in the asymptotic analysis of the estimator can skip Sections 4 and 5, and go directly to Section 6, where we establish that the theorems on strong consistency and asymptotic normality of the estimators that are stated in Section 3 hold also for a general delay announcement, under an additional assumption. Section 7 contains a series of simulation experiments, confirming our estimation procedure’s efficacy. In Section 8, we discuss our findings, and provide directions for future research. In Appendix 9, we prove results related to a regeneration structure underlying the M\(_t\)/G/\(s\)+H system, which are required in our proofs. Appendix 10 contains proofs of some lemmas that are required in the proofs of consistency and asymptotic normality.

2 MODEL AND PRELIMINARIES↩︎

We model our service system via a queue with \(s\in{\mathbb{N}}\) servers. Potential customers arrive according to a Poisson process with a time-dependent arrival rate \(\lambda_{{\boldsymbol{\alpha} }}(t)\), parametrized by the vector \({\boldsymbol{\alpha} } \equiv (\alpha_1, \alpha_2, \ldots, \alpha_k ) \in \mathbb{R}^k\) for some known \(k\in{\mathbb{N}}\). We consider the setting in which the arrival rate is periodic, where the period is known; by normalizing time we can assume without loss of generality that the period is one, so that \(\lambda_{{\boldsymbol{\alpha} }}(t+1) = \lambda_{{\boldsymbol{\alpha} }}(t)\) for all \(t \geq 0\). Each customer is equipped with (1) a service requirement that has cumulative distribution function \(G(\cdot)\), and (2) a patience level that has cumulative distribution function \(H_{{\boldsymbol{\theta} }}(\cdot)\), where \({\boldsymbol{\theta} } \equiv (\theta_1, \theta_2, \ldots, \theta_p) \in \mathbb{R}^p\) (for some known \(p\in{\mathbb{N}}\)) represents the parameters of the patience distribution. The service requirements \(\{B_i\}_{i\in{\mathbb{N}}}\) and patience levels \(\{Y_i\}_{i\in{\mathbb{N}}}\) are independent sequences of i.i.d. random variables, and the customer interarrival times \(\{T_i\}_{i\in{\mathbb{N}}}\) are independent of \(\{B_i\}_{i\in{\mathbb{N}}}\) and \(\{Y_i\}_{i\in{\mathbb{N}}}\). Note that the interarrival times are neither independent of each other nor identically distributed, because the arrival rate varies with time. At each time \(t\), the service operator announces \(\Delta(t)\), an estimate of the delay before service that an incoming customer at time \(t\) can expect. Based on this, a customer with patience level \(Y\) joins the system if \(Y \geq \Delta(t)\), and balks otherwise. We now discuss this delay announcement in detail.

2.1 Delay Announcement↩︎

The delay announcement at time \(t\) is calculated based on the complete state of the system \(\mathcal{R}(t)\) at time \(t\). Let \(\Delta(t) = \psi\big(\mathcal{R}(t)\big)\), where \(\psi: \tilde{\Omega} \mapsto \mathbb{R}_{+}\) is a Borel-measurable function. By \(\mathcal{R}(t)\), we mean a vector that represents the complete state of the system at time \(t\). Define \[\mathcal{R}(t) = (r_1, r_2, \ldots, r_k) \in \tilde{\Omega} = \bigcup_{k=0}^{\infty} \mathbb{R}_{+}^{k},\] where \(k \in{\mathbb{N}}_0\) is the number of customers in the system at time \(t\), and \(r_1, \ldots, r_k\) represent their residual service times. If \(k=0\), then there are obviously no residual service times. If \(k \in\{1,\ldots, s\}\), then all customers in the system are in service. If \(k > s\), then customers corresponding to service times \(r_1, \ldots, r_s\) are in service, and customers corresponding to \(r_{s+1}, r_{s+2}, \ldots,r_k\) are ordered according to their service priority (in the first-come-first-serve setting this means in order of arrival) and their residual times equal their original service times.

Under this construction, \(\mathcal{R}(t)\) represents the complete state of the system at time \(t\), available to the service operator at \(t\), based on which the delay announcement \(\Delta(t)\) is constructed. One could formalize this by working with a function \(\psi\) that maps the observed state of the system to a metric, which serves as a ‘delay proxy’. The following list suggests some of the many possible information metrics based on which a customer arriving at time \(t\) can decide whether or not to join the system: with \(L(t) = \big\vert \mathcal{R}(t)\big\vert\) the number of customers present at time \(t\),

  1. \(\psi\big(\mathcal{R}(t)\big)\) equals the virtual waiting time \(V(t)\).

  2. \(\psi\big(\mathcal{R}(t)\big)\) equals some noisy estimate \(\tilde{V}(t)\) of \(V(t)\). For example, one reasonable estimate of the waiting time could be the delay proxy \[\begin{align} \psi\big(\mathcal{R}(t)\big) &= \frac{1}{s}\big(L(t) -s + 1\big)^{+}\,{\mathbb{E}} B. \end{align}\]

  3. \(\psi\big(\mathcal{R}(t)\big)\) equals the number of service completions that the customer has to wait for: \[\begin{align} \psi\big(\mathcal{R}(t)\big) = \big(L(t)-s + 1\big)^{+}. \end{align}\]

From a theoretical point of view, the second and third information metrics are equivalent, as they differ only by a constant factor. From a practical point of view, however, they reflect different messages, namely an expected delay message or simply the number of customers in the queue. It is noted that in applications one can encounter both information metrics.
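As an illustrative sketch (in Python; not code from the paper), the second and third information metrics can be computed directly from the state \(\mathcal{R}(t)\), represented here as a tuple of residual service times:

```python
def psi_expected_delay(r, s, mean_service):
    # metric 2: noisy waiting-time estimate (L(t) - s + 1)^+ * E[B] / s,
    # where L(t) = len(r) is the number of customers present
    return max(len(r) - s + 1, 0) * mean_service / s

def psi_queue_position(r, s):
    # metric 3: number of service completions the arriving customer waits for
    return max(len(r) - s + 1, 0)
```

For instance, with \(s=4\) servers, ten customers present, and mean service time \(2\), the second metric announces a delay of \((10-4+1)\cdot 2/4 = 3.5\), while the third announces \(7\) pending service completions; the two indeed differ only by the constant factor \({\mathbb{E}}B/s\).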

Remark 1. The assumption that \(\Delta(t)\) is a function of \(\mathcal{R}(t)\) can be weakened further. Define \(\sigma(t):= \big\{\mathcal{R}(u): u \in [T_s,t)\big\}\), where \(T_s\) is the start time of the ongoing regeneration cycle, i.e., the cycle to which time \(t\) belongs (we discuss the concept of a regeneration cycle for an M\(_t\)/G/s+H system later in this paper). In other words, \(\sigma(t)\) stores all information about the system since the start of the ongoing regeneration cycle. We can then also allow \(\Delta(t)\) to be a function of \(\sigma(t)\). A commonly used delay proxy for an incoming customer at time \(t\), see [32], [33], is the largest of the times spent waiting so far by the customers present in the system at time \(t\). To formally define this delay proxy, let \(\{\mathcal{W}(t): t \geq 0\}\) store the amount of time that the customers with residual service times corresponding to \(\mathcal{R}(t)\) have waited in the system until time \(t\), i.e., \[\mathcal{W}(t) = (w_1, \cdots, w_k),\] where \(w_i\) is the time spent waiting so far by the \(i\)-th of the \(k\) customers present in the system at time \(t\). Observe that \(\mathcal{W}(t)\) can be recovered from \(\sigma(t)\). We therefore define another delay proxy as follows:

  1. \(\psi(\sigma(t))\) is the maximum amount of time spent waiting amongst all customers present in the system at time \(t\), i.e. \[\begin{align} \psi(\sigma(t)) = \max_{i=1,\cdots,L(t)} w_i. \end{align}\]

Observe that this delay proxy increases linearly between departure times, and can only decrease at departure times. In the rest of the paper however, for ease of understanding, we continue with our original assumption \(\Delta(t) = \psi\big(\mathcal{R}(t)\big)\).
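
The elapsed-wait proxy of Remark 1 is a one-liner; the sketch below (hypothetical function name) returns the largest entry of \(\mathcal{W}(t)\), with the convention that an empty system announces zero delay.

```python
def delay_proxy_max_wait(w):
    """Remark 1 proxy: largest time spent waiting among the customers
    present, where w is the vector W(t); 0.0 for an empty system."""
    return max(w, default=0.0)
```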

We now proceed to explain a sample delay mechanism and its implications. As an example, we assume that \(\Delta(t) = V(t)\), the virtual waiting time at time \(t\), which is the exact waiting time experienced by a hypothetical customer who joins the system immediately after time \(t\). The quantity \(V(t)\) can alternatively be understood as the time it would take before at least one server becomes idle if no customers join the system after time \(t\). That is, for any \(t \geq 0\), the virtual waiting time can be defined by \[\begin{align} \label{defn:VWT} V(t) = \inf \big\{u \geq 0: L(t) - \big(N_D(t+u) - N_D(t) \big) \leq s-1 \big\} \end{align}\tag{1}\] where \(L(t)\) is the number of customers in the system (including those in service) at time \(t\), and \(N_{D}(t)\) the total number of customers that have finished their service by time \(t\). The virtual waiting time process \(\{V(t)\}_{t\in{\mathbb{R}}}\) is a càdlàg (‘right continuous with left limits’) process. Customer \(i\in {\mathbb{N}}\) joins the system if their patience is at least as large as their virtual delay: \[\begin{align} Y_i \geq V\Biggl(\sum_{j=1}^{i} T_j -\Biggr). \end{align}\] Figure 3 provides an illustration of the virtual waiting time process of an M/G/2+H queueing system with (for ease) a constant arrival rate \(\lambda = 10\), service requirements with unit mean, and patience levels that are deterministically equal to 1. The green (red) dots represent the non-balking (balking, respectively) customers. In the graph the virtual waiting time just prior to the arrival of non-balking (balking) customers is smaller (larger, respectively) than 1.
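
Definition 1 can be evaluated directly from the state vector \(\mathcal{R}(t)\): with no further arrivals, the departure process is determined by the \(s\) residual service times in service and the queued (full) service times. A minimal sketch, assuming the first \(s\) entries of the state vector are the customers in service:

```python
import heapq

def virtual_waiting_time(r, s):
    """V(t) per (1): time until a server frees up for a hypothetical
    arrival just after t, assuming no further arrivals.
    r = residual service times, the first s entries being in service."""
    if len(r) < s:
        return 0.0          # an idle server exists already
    servers = list(r[:s])   # completion epochs of customers in service
    heapq.heapify(servers)
    for b in r[s:]:         # queued customers start at departure epochs
        t = heapq.heappop(servers)
        heapq.heappush(servers, t + b)
    return heapq.heappop(servers)  # next departure frees a server
```

For instance, with \(s=2\) and \(\mathcal{R}(t)=(1,2,4)\), the hypothetical arrival waits until the second departure, at time 2.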

Figure 3: An example of a virtual waiting time process (\(s=2\)) and the joining decisions of customers

We now discuss the inference problem addressed in this paper. The crucial feature is that only non-balking customers can be observed, i.e., the customers corresponding to the green dots in Figure 3. Our estimation procedure relies on the state dependent distribution of the effective interarrival times, i.e., the times between two subsequent non-balking arrivals. In the sequel, we denote the sequence of effective interarrival times by \(\{A_i\}_{i\in{\mathbb{N}}}\), and the corresponding effective arrival times by \(\{\tilde{A}_i\}_{i\in{\mathbb{N}}}\), i.e., \(\tilde{A}_i = \sum_{j=1}^{i} A_j\). Each non-balking arrival causes a non-negative increase in the virtual waiting time, which can be seen as an ‘upward jump’ in the virtual waiting time. We denote the sequence of such upward jumps by \(\{X_i\}_{i\in {\mathbb{N}}}\). We write \[\begin{align} X_i := V(\tilde{A}_i) - V(\tilde{A}_i-) = V(\tilde{A}_i) - W_{i}, \end{align}\] i.e., \(W_i\) denotes the virtual waiting time immediately before the \(i\)-th effective arrival; realize that the waiting time corresponding to the \(i\)-th effectively arriving customer coincides with the virtual waiting time immediately before this arrival.

In this paper we aim to develop a procedure for estimating the parameter vector \({\boldsymbol{\mu} }_0 \equiv \big({{\boldsymbol{\alpha} }_0}, {{\boldsymbol{\theta} }_0}\big) \in \Theta\subseteq \mathbb{R}^{k+p}\) corresponding to the arrival rate function and the patience distribution. Here it is assumed that some system data is observed over time. The exact observables depend on the specific choice of \(\Delta(t)\). For instance, when \(\Delta(t) = V(t)\), we observe the full queueing process over time, i.e., all effective arrivals and service times. However, when \(\Delta(t) = \psi(L(t))\), i.e. the delay is some function of the number of customers in the system, then we observe only the process \(\big\{L(t)\big\}_{t \geq 0}\). The objective is to use this information to learn the parameters of the true arrival rate and the patience-level distribution. More concretely, we wish to devise statistical procedures for estimating \({{\boldsymbol{\alpha} }_0}\) and \({{\boldsymbol{\theta} }_0}\) (both corresponding to non-observable quantities) with provable performance guarantees.

3 PARAMETRIC ESTIMATION PROCEDURE

In the previous section, we specified all variables needed for the estimation procedure developed in this section. Our estimation procedure relies on constructing a conditional likelihood of the state-dependent effective arrival process. We conclude the section by stating our claims regarding the asymptotic performance of the estimator.

Let the random variable\[\left(N\big(\tilde{A}_{i-1},\tilde{A}_{i-1}+t \big] \;\big\vert \:\Delta\big(\tilde{A}_{i-1}\big) = \delta_{i-1}\right)\] denote the number of effective (i.e., non-balking) arrivals in the time interval \(\big(\tilde{A}_{i-1},\tilde{A}_{i-1}+t \big]\), conditioned on the delay announcement immediately after the \((i-1)\)-st effective arrival being \(\delta_{i-1}\). Define the tail probability \(\tilde{H}_{{\boldsymbol{\theta} }}(x) = \mathop{\mathrm{\mathbb{P}}}(Y \geq x)\). Then, relying on standard properties of time-dependent Poisson processes, \[\begin{align} \left(N\big(\tilde{A}_{i-1},\tilde{A}_{i-1}+t \big] \;\big\vert \:\Delta\big(\tilde{A}_{i-1}\big) = \delta_{i-1}\right) \sim \text{Poisson}\bigg(\int_{0}^{t} \lambda_{{\boldsymbol{\alpha} }}\big(u+\tilde{A}_{i-1}\big) \tilde{H}_{{\boldsymbol{\theta} }}\big(\Delta\big(\tilde{A}_{i-1}+u\big)\big) \;\mathrm{d}u \bigg), \end{align}\] entailing that \[\begin{align} \mathop{\mathrm{\mathbb{P}}}\Bigl(A_i >t \;\Big\vert \;\Delta\big(\tilde{A}_{i-1}\big) = \delta_{i-1}\Bigr) &= \mathop{\mathrm{\mathbb{P}}}\bigg(N\big(\tilde{A}_{i-1},\tilde{A}_{i-1}+t\big] = 0 \;\Big\vert \;\Delta\big(\tilde{A}_{i-1}\big) = \delta_{i-1} \bigg)\notag \\ &= \exp\biggl(-\int_{0}^{t} \lambda_{{\boldsymbol{\alpha} }}\big(u+\tilde{A}_{i-1}\big)\tilde{H}_{{\boldsymbol{\theta} }}\big(\Delta\big(\tilde{A}_{i-1}+u\big)\big) \;\mathrm{d}u \biggr). 
\end{align}\] The density of the \(i\)-th effective interarrival time, conditioned on the delay announcement just after the \((i-1)\)-st effective arrival, thus reads \[\begin{align} \label{eqn:density32effective32interarrival32times} f_{A_i \,\vert\, \delta_{i-1}}(t) &= - \frac{\rm d}{{\rm d}t} \mathop{\mathrm{\mathbb{P}}}\big(A_i >t \;\big\vert \;\Delta\big(\tilde{A}_{i-1}\big) = \delta_{i-1} \big) \notag \\ &= \lambda_{{\boldsymbol{\alpha} }}\big(t+\tilde{A}_{i-1}\big) \tilde{H}_{{\boldsymbol{\theta} }}\big(\Delta\big(\tilde{A}_{i-1}+t\big)\big)\exp\biggl(-\int_{0}^{t} \lambda_{{\boldsymbol{\alpha} }}\big(u+\tilde{A}_{i-1}\big)\tilde{H}_{{\boldsymbol{\theta} }}\big(\Delta\big(\tilde{A}_{i-1}+u\big)\big) \, \mathrm{d}u \biggr). \end{align}\tag{2}\] We now arrive at the following conditional likelihood function: \[\begin{align} L_n&\big({\boldsymbol{\alpha} }, {\boldsymbol{\theta} };{{\boldsymbol{A} }} \,\big\vert \,{{\boldsymbol{\Delta} }}\big) = \prod_{i=1}^{n} f_{A_i \,\vert\, \delta_{i-1}}(A_i) \notag \\ = &\prod_{i=1}^{n} \lambda_{{\boldsymbol{\alpha} }}\big(\tilde{A}_i\big) \tilde{H}_{{\boldsymbol{\theta} }}\big(\Delta\bigl(\tilde{A}_i^{-}\bigr)\big)\exp\biggl(-\int_{0}^{A_i} \lambda_{{\boldsymbol{\alpha} }}\big(u+\tilde{A}_{i-1}\big)\tilde{H}_{{\boldsymbol{\theta} }}\big(\Delta\big(\tilde{A}_{i-1}+u\big)\big) \, \mathrm{d}u \biggr), \end{align}\] so that the log-likelihood of the effective arrival process is given by \[\begin{align} \ell_n \equiv \ell_n\big({\boldsymbol{\alpha} }, {\boldsymbol{\theta} };{{\boldsymbol{A} }}\, \big\vert\, {{\boldsymbol{\Delta} }}\big) &= \log L_n\big({\boldsymbol{\alpha} }, {\boldsymbol{\theta} };{{\boldsymbol{A} }}\, \big\vert \,{{\boldsymbol{\Delta} }}\big) \notag \\ &= \sum_{i=1}^{n} \biggl(\log\lambda_{{\boldsymbol{\alpha} }}\big(\tilde{A}_i\big) + \log\tilde{H}_{{\boldsymbol{\theta} }}\big(\Delta\bigl(\tilde{A}_i^{-}\bigr)\big)\: -\notag \\ &\int_{0}^{A_i} \lambda_{{\boldsymbol{\alpha} 
}}\big(u+\tilde{A}_{i-1}\big)\tilde{H}_{{\boldsymbol{\theta} }}\big(\Delta\big(\tilde{A}_{i-1}+u\big)\big) \, \mathrm{d}u \biggr). \label{eqn:log95likelihood:general} \end{align}\tag{3}\] For any \(n\in{\mathbb{N}}\), the maximum likelihood estimator is defined by \[\begin{align} \hat{{\boldsymbol{\mu} }}_n := \big(\hat{{\boldsymbol{\alpha} }}_n, \hat{{\boldsymbol{\theta} }}_n\big) = \mathop{\mathrm{arg\,max}}_{({\boldsymbol{\alpha} }, {\boldsymbol{\theta} }) \in \Theta} \;\ell_n\big({\boldsymbol{\alpha} }, {\boldsymbol{\theta} };{{\boldsymbol{A} }} \,\big\vert \,{{\boldsymbol{\Delta} }}\big).\label{eq:MLE} \end{align}\tag{4}\]
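
As an illustration, the conditional density (2) can be evaluated numerically. The sketch below makes purely illustrative modelling choices not prescribed by the paper: a sinusoidal arrival rate \(\lambda_{\boldsymbol{\alpha}}(t)=\alpha(1+\tfrac12\sin 2\pi t)\), exponential patience \(\tilde H_{\boldsymbol{\theta}}(x)=e^{-\theta x}\), and a delay announcement decaying linearly from \(\delta_{i-1}\) (as happens for \(\Delta(t)=V(t)\) between effective arrivals); the integral is approximated by the trapezoid rule.

```python
import math

def effective_interarrival_density(t, a_prev, delta, alpha, theta, n_grid=2000):
    """Density (2) of the i-th effective interarrival time, sketched under
    lam(t) = alpha*(1 + 0.5*sin(2*pi*t)) (period 1), patience tail
    Hbar(x) = exp(-theta*max(x, 0)), and Delta(a_prev + u) = max(delta - u, 0).
    a_prev is the (i-1)-st effective arrival epoch."""
    lam = lambda u: alpha * (1.0 + 0.5 * math.sin(2.0 * math.pi * u))
    Hbar = lambda x: math.exp(-theta * max(x, 0.0))
    h = t / n_grid
    vals = [lam(j * h + a_prev) * Hbar(delta - j * h) for j in range(n_grid + 1)]
    integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))  # trapezoid rule
    return lam(t + a_prev) * Hbar(delta - t) * math.exp(-integral)
```

With \(\delta_{i-1}=0\) every customer joins, and the density collapses to that of an inhomogeneous Poisson interarrival time, a convenient sanity check.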

We now explicitly show what 3 looks like for a couple of delay announcement functions.

  • \(\Delta(t) = V(t)\). In this case, as discussed in Section 2, \(\Delta\big(\tilde{A}_{i-1}\big) = \delta_{i-1} = W_{i-1}\), where \(W_{i-1}\) is the waiting time of the \((i-1)\)-st effective arrival. Furthermore, assuming there are no effective arrivals in \(\big(\tilde{A}_{i-1}, \tilde{A}_{i-1}+u\big]\), we have \(\Delta\big(\tilde{A}_{i-1} + u\big) = \max\big\{0, \delta_{i-1}+X_{i-1}-u\big\}\), and consequently, \(\Delta\bigl(\tilde{A}_i^{-}\bigr) = \max\big\{0, \delta_{i-1}+X_{i-1}-A_i\big\}\). As a result, 3 becomes \[\begin{align} \ell_n^{\text{exact}} = \sum_{i=1}^{n} \biggl(\log\lambda_{{\boldsymbol{\alpha} }}\big(\tilde{A}_i\big) + &\log\tilde{H}_{{\boldsymbol{\theta} }}\big(W_{i-1}+X_{i-1}-A_i \big) \notag \\ &- \int_{0}^{A_i} \lambda_{{\boldsymbol{\alpha} }}\big(u+\tilde{A}_{i-1}\big)\tilde{H}_{{\boldsymbol{\theta} }}(W_{i-1}+X_{i-1}-u) \, \mathrm{d}u \biggr). \label{eqn:log95likelihood:exact} \end{align}\tag{5}\]

  • \(\Delta(t) = \psi\big(L(t)\big)\), i.e., the delay at time \(t\) is a function of the number of customers in the system at time \(t\). The delay announcement therefore changes only at arrival and departure events, remaining constant between two successive such events. Then, \(\Delta\big(\tilde{A}_{i-1}\big) = \delta_{i-1} = \psi(q_{i-1})\), where \(q_{i-1}\) is the number of customers in the system just after the \((i-1)\)-st effective arrival. Let \(\big\{M(t): t \geq 0\big\}\) denote the departure process, i.e., \(M\big(t_1,t_2\big]\) represents the number of departures in \((t_1,t_2]\). For \(i=1,\cdots,n\), let \[\big\{D(i,k)\big\}_{k=1,2,\cdots, M(\tilde{A}_{i}, \tilde{A}_{i+1}]}\] denote the departure times in \(\big(\tilde{A}_{i}, \tilde{A}_{i+1}\big]\), where \(M(\tilde{A}_{i}, \tilde{A}_{i+1}] = \max\{0, q_i - q_{i+1}+1\}\). We can then write 3 as \[\begin{align} \ell^{\text{est}}_n\big(&{\boldsymbol{\alpha} }, {\boldsymbol{\theta} }\big) = \sum_{i=1}^{n} \Bigg\{\log\lambda\big({\boldsymbol{\alpha} },\tilde{A}_i\big) + \log\tilde{H}_{{\boldsymbol{\theta} }}\big(\psi(q_i-1)\big) - \int_{\tilde{A}_{i-1}}^{D(i-1,1)} \lambda({\boldsymbol{\alpha} },u)\tilde{H}_{{\boldsymbol{\theta} }}\big(\psi(q_{i-1})\big) \mathrm{d}u \\ & - \int_{D(i-1,1)}^{D(i-1,2)} \lambda({\boldsymbol{\alpha} }, u)\tilde{H}_{{\boldsymbol{\theta} }}\big(\psi(q_{i-1}-1)\big) \mathrm{d}u - \cdots - \int_{D(i-1,M(\tilde{A}_{i-1}, \tilde{A}_i])}^{\tilde{A}_i} \lambda({\boldsymbol{\alpha} }, u)\tilde{H}_{{\boldsymbol{\theta} }}\big(\psi(q_i-1)\big)\mathrm{d}u \Bigg\}. \end{align}\]
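
For the first case, the log-likelihood 5 can be evaluated numerically for observed data. The sketch below again assumes, for illustration only, the sinusoidal rate \(\lambda_{\boldsymbol{\alpha}}(t)=\alpha(1+\tfrac12\sin 2\pi t)\) and exponential patience with rate \(\theta\); the integral in each summand is approximated by the trapezoid rule.

```python
import math

def log_lik_exact(A, W, X, alpha, theta, n_grid=200):
    """Log-likelihood (5), sketched for lam(t) = alpha*(1+0.5*sin(2*pi*t))
    and patience tail Hbar(x) = exp(-theta*x) for x > 0, else 1.
    A[i-1] is the i-th effective interarrival time; W[j], X[j] are the
    waiting time and upward jump of the j-th effective arrival, with the
    convention W[0] = X[0] = 0 for a fictitious 0-th arrival at time 0."""
    lam = lambda t: alpha * (1.0 + 0.5 * math.sin(2.0 * math.pi * t))
    Hbar = lambda x: math.exp(-theta * x) if x > 0 else 1.0
    ll, a_tilde = 0.0, 0.0
    for i in range(1, len(A) + 1):
        a_prev, a_tilde = a_tilde, a_tilde + A[i - 1]
        ll += math.log(lam(a_tilde))                       # rate at arrival i
        ll += math.log(Hbar(W[i - 1] + X[i - 1] - A[i - 1]))
        h = A[i - 1] / n_grid                              # trapezoid rule
        vals = [lam(a_prev + j * h) * Hbar(W[i - 1] + X[i - 1] - j * h)
                for j in range(n_grid + 1)]
        ll -= h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return ll
```

Maximizing this function over \((\alpha,\theta)\), e.g. by a standard numerical optimizer, yields the MLE 4 for this parametrization.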

We now state the two main theorems of this paper, which establish strong consistency and asymptotic normality of the estimator 4 . For the sake of brevity, the theorems and the construction leading up to their proofs are stated for \(\Delta(t) = V(t)\). The theorems rely on mild and natural regularity assumptions on the arrival rate and patience distribution; the full list of assumptions is provided in Section 5. The asymptotic variance involves an elaborate construction featuring regeneration cycles, to be described in Section 4. In particular, Section 4 explains in detail the meaning of the objects \({\boldsymbol{Z} }_1\), \(q({\boldsymbol{Z} }_1, {\boldsymbol{\mu} }_0)\), and \({\mathbb{E}}C_1\).

Theorem 1. If Assumptions (A1)–(A6) are satisfied, then, as \(n \rightarrow \infty\), \[\begin{align} \hat{{\boldsymbol{\mu} }}_n \overset{\text{\rm a.s.}}{\longrightarrow} {{\boldsymbol{\mu} }_0}. \label{consistency} \end{align}\tag{6}\]

Proof. Provided in Section 5. ◻

Theorem 2. If Assumptions (A1)–(A10) are satisfied, then, as \(n \rightarrow \infty\), \[\begin{align} \sqrt{n}\big(\hat{{\boldsymbol{\mu} }}_n - {{\boldsymbol{\mu} }_0}\big) \overset{\text{\rm d}}{\longrightarrow} \mathcal{N}\big(0, \,{\mathbb{E}} C_1 \,I({{\boldsymbol{\mu} }_0})^{-1} \big), \label{asymptotic95normality} \end{align}\tag{7}\] where \[\label{eq:defI}I\big({{\boldsymbol{\mu} }_0}\big) := -\mathop{\mathrm{\mathbb{E}}}\Big[\nabla^2 q\big({{\boldsymbol{Z} }_1}, {{\boldsymbol{\mu} }_0}\big)\Big],\tag{8}\] and \(C_1\) is the number of customers served during a regeneration cycle.

Proof. Provided in Section 5. ◻

For a general delay announcement \(\Delta(t) = \psi\big(\mathcal{R}(t)\big)\), Theorems 1 and 2 are generalized in Proposition 2 under an additional assumption (A11). We direct the reader to Section 6 for more details. Sections 4 and 5 work towards proving the theorems stated above, and therefore work with \(\Delta(t) = V(t)\).

4 CONSTRUCTING i.i.d. REGENERATION CYCLES

In Section 3, we defined the estimator \(\hat{{\boldsymbol{\mu} }}_n\equiv (\hat{{\boldsymbol{\alpha} }}_n,\hat{{\boldsymbol{\theta} }}_n)\), and stated its consistency and asymptotic normality properties. In the proofs of these statements, a crucial role is played by ‘regeneration cycles’ into which our queueing process can be decomposed. For regeneration cycle \(j\), where \(j\in {\mathbb{N}}\), we store all relevant observable information (effective arrival times, waiting times, jump sizes) pertaining to only that regeneration cycle into a random vector \({{\boldsymbol{Z} }}_j\). The log-likelihood is then written as a sum of log-likelihoods of i.i.d. random variables \(q({\boldsymbol{Z} }_j, {{\boldsymbol{\mu} }})\), which are functions of the observation \({{\boldsymbol{Z} }}_j\) and the parameter \({{\boldsymbol{\mu} }}\). In doing so, we create a set of i.i.d. objects, which we use in Section 5 in order to prove Theorems 1 and 2.

We proceed by detailing how we decompose the sample path of our M\(_t\)/G/\(s\)+H system into regeneration cycles. Recalling that \(\lambda(\cdot)\) has period 1, we define \(\zeta_j\) as the \(j\)-th integer-valued point in time at which the system is empty. Formally, with \(L(t)\) denoting the number of customers in the system at time \(t\), and supposing \(L(0)=0\), we define the start of the \(j\)-th regeneration cycle recursively via \[\begin{align} \zeta_j &:= \inf \big\{t > \zeta_{j-1}: t \in \mathbb{N}, \;L(t) = 0 \big\}. \end{align}\] It is clear that the trajectories of the queueing process within distinct regeneration cycles are independent and identically distributed. (Clearly, subsequent time epochs \(t\in{\mathbb{R}}\) such that \(L(t)=0\) do not induce regeneration cycles, due to the variable arrival rate: one should have that both the system is empty and the arrival rate resets.) We define the duration of the \(j\)-th cycle by \(R_j = \zeta_j - \zeta_{j-1}\). Let \(C_j\) represent the number of customers that are served during cycle \(j\) and let \(\eta_{j-1}+1, \eta_{j-1}+2, \cdots, \eta_{j-1}+C_j\) denote their indices, with \(\eta_0 = 0\) and \(\eta_j - \eta_{j-1} = C_j\) (so that \(\eta_j\) is the cumulative number of customers that have joined the system by the end of the \(j\)-th regeneration cycle). Finally, denote by \(N_r\) the number of regeneration cycles in the observed sample path.
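
In a simulated or observed sample path, the epochs \(\zeta_j\) can be located by scanning the integer time points; a minimal sketch (hypothetical function name, naive counting) under the convention \(L(0)=0\):

```python
def cycle_starts(arrivals, departures, horizon):
    """Integer epochs at which the system is empty AND the (period-1)
    arrival rate resets, i.e. candidate cycle starts zeta_j.
    L(t) at integer t equals #effective arrivals minus #departures in [0, t]."""
    starts = []
    for t in range(horizon + 1):
        n_arr = sum(1 for a in arrivals if a <= t)
        n_dep = sum(1 for d in departures if d <= t)
        if n_arr == n_dep:  # system empty at the integer epoch t
            starts.append(t)
    return starts
```

Consecutive entries of the returned list delimit the cycles, so the durations \(R_j\) are their successive differences.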

We now express the data we will be working with in terms of the regeneration structure we just defined. Let \(\{\tilde{A}_{j,i}\}_{i=1,2,\cdots,C_j}\) represent the arrival times of customers arriving in the \(j\)-th regeneration cycle, relative to \(\zeta_{j-1}\) (i.e., the starting point of the regeneration cycle), and let \(\{A_{j,i} \}_{i=1,2,\cdots,C_j}\) denote the corresponding interarrival times. Define \(\{W_{j,i}\}_{i=1,2,\cdots,C_j}\) and \(\{X_{j,i}\}_{i=1,2,\cdots,C_j}\) as the waiting times and upward jumps in the virtual waiting time, respectively. More formally, for any \(j \in \{1,2,\ldots,N_r\}\), and \(i = \eta_{j-1}+1, \eta_{j-1}+2, \cdots, \eta_{j-1}+C_j\), we have \[\begin{align} \tilde{A}_i &= \zeta_{j-1} + \tilde{A}_{j, i-\eta_{j-1}},\:\:\:\:\:\: A_i = A_{j,i-\eta_{j-1}},\:\:\:\:\:\: W_i= W_{j,i-\eta_{j-1}},\:\:\:\:\:\: X_i = X_{j,i-\eta_{j-1}}.\label{eq:conversion} \end{align}\tag{9}\] In essence, we make this transformation because we wish to specify the position of any customer relative to the start time of the regeneration cycle that they belong to.
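
The re-indexing 9 amounts to grouping the global effective arrival epochs by cycle and shifting each group by its cycle start; a sketch (hypothetical function name):

```python
def to_cycle_relative(A_tilde, zeta):
    """Re-indexing (9): split the global effective arrival epochs A_tilde
    by regeneration cycle and subtract the cycle start.
    zeta = sorted cycle-start epochs zeta_0 < zeta_1 < ...; arrivals after
    the last listed cycle boundary are dropped."""
    cycles = []
    for j in range(1, len(zeta)):
        lo, hi = zeta[j - 1], zeta[j]
        cycles.append([a - lo for a in A_tilde if lo <= a < hi])
    return cycles
```

The same grouping applies verbatim to the sequences \(\{W_i\}\) and \(\{X_i\}\), since 9 only relabels indices.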

Figure 4 shows a hypothetical sample path of the number of customers as a function of time in our M\(_t\)/G/\(s\)+H system. We observe one full regeneration cycle, starting at time \(0\) and ending at time \(3\) (when the next regeneration cycle begins). Notice that the system becomes empty somewhere between time 2 and 3, but time \(3\) is the first instant when the system is empty and the arrival rate resets. Note that any regeneration cycle can consist of multiple busy periods of the underlying queueing process.

Theorem 3, stated below, is crucial to the proofs of consistency and asymptotic normality of the estimator \(\hat{{\boldsymbol{\mu} }}_n\).

Figure 4: Demonstration of a regeneration cycle, in which R_1=3.

Theorem 3. For the M\(_t\)/G/\(s\)+H system, under Assumptions (A2) and (A4), the following properties hold:

  1. The mean length of a regeneration cycle is finite, i.e., \(\mathop{\mathrm{\mathbb{E}}}R_1 < \infty\).

  2. The expected number of non-balking customers in a regeneration cycle is finite, i.e., \(\mathop{\mathrm{\mathbb{E}}}C_1 < \infty\).

  3. The expected sum of waiting times of non-balking customers in a regeneration cycle is finite, i.e. \[\begin{align} \mathop{\mathrm{\mathbb{E}}}\Bigg[\sum_{i=1}^{C_1} W_{1,i} \Bigg] < \infty. \end{align}\]

Proof. Deferred to Appendix 9. ◻

In the rest of this section we present a decomposition of the log-likelihood \(\ell_n^{\text{exact}}\) in terms of \(N_r\) contributions \(\big\{\ell^\circ_{j}\big\}_{j=1}^{N_r}\) by the individual regeneration cycles \(1,2,\cdots,N_r\), in such a way that they are i.i.d., and that for each \(j \in \{1,2,\cdots, N_r\}\), \(\ell^{\circ}_{j}\) is a function only of the information pertaining to regeneration cycle \(j\). In this exposition we introduce a number of objects that feature in the assumptions we impose in the next section, under which Theorems 1 and 2 hold. The most intuitive first step towards obtaining \(\ell^{\circ}_{j}\) would be to select the following portion of the summation from 5 : \[\begin{align} \label{eqn:reg32cycles:intuitive32expression} \sum_{i= \eta_{j-1}+1}^{\eta_j} \bigg(\underbrace{\log\lambda_{{\boldsymbol{\alpha} }}\big(\tilde{A}_i\big)}_{\text{(I)}} + \underbrace{\log\tilde{H}_{{\boldsymbol{\theta} }}\big(W_{i-1}+X_{i-1}-A_i \big)}_{\text{(II)}} -\underbrace{\int_{0}^{A_i} \lambda_{{\boldsymbol{\alpha} }}\big(u+\tilde{A}_{i-1}\big)\tilde{H}_{{\boldsymbol{\theta} }}(W_{i-1}+X_{i-1}-u)}_{\text{(III)}} \;\mathrm{d}u \bigg), \end{align}\tag{10}\] i.e., the portion of the log-likelihood from 5 corresponding to the contribution by customers \(\eta_{j-1}+1, \ldots, \eta_j=\eta_{j-1}+C_j\), all of which belong to cycle \(j\). However, we will soon discover that in the above summation, we have included some portion which depends on cycle \(j-1\), call it \(T^{\text{exc}}_j\) (which we must exclude), and excluded some portion which depends on cycle \(j\), call it \(T^{\text{inc}}_j\) (which we must include). In order to understand what \(T^{\text{exc}}_{j}\) and \(T^{\text{inc}}_{j}\) are, we must express (I), (II), and (III) in terms of cycle \(j\) related quantities, through the aid of 9 .

(I): For \(i = \eta_{j-1}+1, \eta_{j-1}+2, \cdots, \eta_j\), by 9 , the periodicity of \(\lambda(\cdot)\), and the fact that \(\zeta_{j-1}\) is an integer, we have \[\begin{align} \log\lambda_{{\boldsymbol{\alpha} }}\big(\tilde{A}_i\big) = \log\lambda_{{\boldsymbol{\alpha} }}\big(\zeta_{j-1} + \tilde{A}_{j, i-\eta_{j-1}}\big) = \log\lambda_{{\boldsymbol{\alpha} }}\big(\tilde{A}_{j, i-\eta_{j-1}}\big). \end{align}\] We conclude that all terms in (I) depend only on cycle \(j\) variables, and must therefore be included in \(\ell^{\circ}_j\).

(II): For \(i = \eta_{j-1}+2, \eta_{j-1}+3, \cdots, \eta_j\), due to 9 , we obtain \[\begin{align} \label{eq:32reg32cycle:32term322} \log\tilde{H}_{{\boldsymbol{\theta} }}\big(W_{i-1}+X_{i-1}-A_i \big) = \log\tilde{H}_{{\boldsymbol{\theta} }}\big(W_{j,i-1-\eta_{j-1}} + X_{j, i-1-\eta_{j-1}} - A_{j,i-\eta_{j-1}} \big). \end{align}\tag{11}\] For \(i = \eta_{j-1}+1\), at first sight the LHS of 11 appears to depend on \(W_{i-1}\) and \(X_{i-1}\), the waiting time and the upward jump in virtual waiting time caused by the last customer of the previous cycle, i.e., cycle \(j-1\). Upon closer inspection, however, we realize that \(W_{\eta_{j-1}+1}\) is the waiting time of the first customer in the \(j\)-th regeneration cycle (who by definition finds the system empty). Therefore, by the Lindley recursion, \(W_{\eta_{j-1}}+X_{\eta_{j-1}}-A_{\eta_{j-1}+1} \leq 0\), which implies \(\tilde{H}_{{\boldsymbol{\theta} }}\big(W_{\eta_{j-1}}+X_{\eta_{j-1}}-A_{\eta_{j-1}+1}\big) = 1\) and hence \[\begin{align} \log\tilde{H}_{{\boldsymbol{\theta} }}\big(W_{\eta_{j-1}}+X_{\eta_{j-1}}-A_{\eta_{j-1}+1}\big) = 0. \end{align}\] Again we conclude that all terms in (II) depend only on cycle \(j\) variables, and must therefore be included in \(\ell^{\circ}_j\).

(III): For \(i = \eta_{j-1}+2, \eta_{j-1}+3, \cdots, \eta_j\), again by 9 in combination with the periodicity of \(\lambda_{{\boldsymbol{\alpha} }}(\cdot)\), \[\begin{align} \label{eq:reg32cycle:term323} \int_{0}^{A_i} \lambda_{{\boldsymbol{\alpha} }}\big(u+\tilde{A}_{i-1}\big)&\tilde{H}_{{\boldsymbol{\theta} }}\big(W_{i-1}+X_{i-1}-u\big) \, \mathrm{d}u \notag \\ &= \int_{0}^{A_{j,i-\eta_{j-1}}} \lambda_{{\boldsymbol{\alpha} }}\big(u+\tilde{A}_{j,i-1-\eta_{j-1}}\big)\tilde{H}_{{\boldsymbol{\theta} }}\big(W_{j,i-1-\eta_{j-1}}+X_{j,i-1-\eta_{j-1}}-u\big) \, \mathrm{d}u. \end{align}\tag{12}\] Once again we conclude that all terms in (III) for customer indices \(i = \eta_{j-1}+2, \cdots, \eta_j\) depend only on cycle \(j\) variables, and must therefore be included in \(\ell^{\circ}_j\). For \(i = \eta_{j-1}+1\), observe that \(A_{\eta_{j-1}+1}\) represents the time elapsed between the arrival of the last customer of cycle \(j-1\) and the first customer of cycle \(j\), and therefore it is not straightforward to replicate 12 for \(i = \eta_{j-1}+1\). Since the integral on the LHS of 12 also depends on some portion of cycle \(j-1\), we split it into two sub-integrals: \[\begin{align} \notag \int_{0}^{\zeta_{j-1}-\tilde{A}_{\eta_{j-1}}} \lambda_{\alpha}\big(u+\tilde{A}_{i-1}\big)&\tilde{H}_{{\boldsymbol{\theta} }}\big(W_{i-1}+X_{i-1}-u\big) \, \mathrm{d}u \:+\\& \underbrace{\int_{\zeta_{j-1}-\tilde{A}_{\eta_{j-1}}}^{A_{\eta_{j-1}+1}} \lambda_{\alpha}\big(u+\tilde{A}_{i-1}\big)\tilde{H}_{{\boldsymbol{\theta} }}\big(W_{i-1}+X_{i-1}-u\big) \, \mathrm{d}u}_{\text{(IV)}}.\label{eq:reg32cycle:term32332split} \end{align}\tag{13}\] Here the first sub-integral depends on regeneration cycle \(j-1\), and must therefore be excluded; that is, \[\begin{align} T^{\text{exc}}_j = \int_{0}^{\zeta_{j-1}-\tilde{A}_{\eta_{j-1}}} \lambda_{\alpha}\big(u+\tilde{A}_{i-1}\big)\tilde{H}_{{\boldsymbol{\theta} }}\big(W_{i-1}+X_{i-1}-u\big) \, \mathrm{d}u. 
\end{align}\] Thus 10 contains a term \(T^{\text{exc}}_j\) that depends on cycle \(j-1\). By the same token, the expression analogous to 10 for cycle \(j+1\) contains a similar term \(T^{\text{exc}}_{j+1}\), which depends on cycle \(j\); this is precisely the term \(T^{\text{inc}}_j\) that we must include. Based on the first sub-integral from 13 , it reads \[\begin{align} T^{\text{inc}}_j = \int_{0}^{\zeta_{j}-\tilde{A}_{\eta_{j}}} \lambda_{\alpha}\big(u+\tilde{A}_{\eta_j}\big)\tilde{H}_{{\boldsymbol{\theta} }}\big(W_{\eta_j}+X_{\eta_j}-u\big) \, \mathrm{d}u. \end{align}\] Noting that \(\zeta_{j}-\tilde{A}_{\eta_{j}} = (\zeta_j - \zeta_{j-1}) - (\tilde{A}_{\eta_{j}} - \zeta_{j-1} ) = R_j - \tilde{A}_{j, C_j}\), and using 9 , this expression can be written as \[\begin{align} T^{\text{inc}}_j = \int_{0}^{R_j - \tilde{A}_{j,C_j}} \lambda_{\alpha}\big(u+\tilde{A}_{j,C_j}\big)\tilde{H}_{{\boldsymbol{\theta} }}\big(W_{j,C_j}+X_{j,C_j}-u\big) \, \mathrm{d}u. \end{align}\] (IV) depends on cycle \(j\) only, as shown below. By 9 , we have \(A_{\eta_{j-1}+1} = \zeta_{j-1}-\tilde{A}_{\eta_{j-1}} + A_{j,1}\), so that (IV) reads as \[\begin{align} \label{integral952} \int_{\zeta_{j-1}-\tilde{A}_{\eta_{j-1}}}^{\zeta_{j-1}-\tilde{A}_{\eta_{j-1}} + A_{j,1}} \lambda_{\alpha}\big(u+\tilde{A}_{i-1}\big)\tilde{H}_{{\boldsymbol{\theta} }}\big(W_{i-1}+X_{i-1}-u\big) \, \mathrm{d}u. \end{align}\tag{14}\] A customer joining the system at least \(\zeta_{j-1}-\tilde{A}_{\eta_{j-1}}\) time units after \(\tilde{A}_{\eta_{j-1}}\) finds the system empty, and thus has zero waiting time. Therefore, as a consequence of the Lindley recursion, \(W_{\eta_{j-1}} + X_{\eta_{j-1}}-u \leq 0\) for any \(u \geq \zeta_{j-1}-\tilde{A}_{\eta_{j-1}}\), so that \(\tilde{H}_{{\boldsymbol{\theta} }}(W_{\eta_{j-1}}+X_{\eta_{j-1}}-u) = 1\). 
By the change of variable \(u \mapsto u - \big(\zeta_{j-1} -\tilde{A}_{\eta_{j-1}} \big)\), combined with the periodicity of \(\lambda_{{\boldsymbol{\alpha} }}(\cdot)\) (recall that \(\zeta_{j-1}\) is an integer), we conclude that 14 reduces to \[\begin{align} \int_{0}^{A_{j,1}} \lambda_{{\boldsymbol{\alpha} }}(u) \;\mathrm{d}u. \end{align}\]

We conclude this section by defining an object that plays a crucial role in the assumptions that we impose for our main results to hold. For every regeneration cycle \(j \in \{1,2,\ldots,N_r\}\), define the vector \[\begin{align} {\boldsymbol{Z}_j } = \Big(R_j, C_j, \big\{\tilde{A}_{j,i}\big\}_{i=1,2,\cdots,C_j}, \big\{W_{j,i}\big\}_{i=1,2,\cdots,C_j}, \big\{X_{j,i}\big\}_{i=1,2,\cdots,C_j} \Big)\label{data95vector}. \end{align}\tag{15}\] This means that \({\boldsymbol{Z}_j }\) is the collection of all data corresponding to the \(j\)-th regeneration cycle: cycle length, number of customers that joined the system during this cycle, their arrival times relative to the start time of the cycle, the waiting times they experience, and the upward jumps in virtual waiting times that they effect. For a given parameter vector \({\boldsymbol{\mu} } \in \Theta\) and \[{\boldsymbol{z} } = \big\{r, c, \{\tilde{a}_i\}_{i=1,2,\cdots,c}, \{w_i\}_{i=1,2,\cdots,c},\{x_i\}_{i=1,2,\cdots,c} \big\}\in{\mathscr Z}:=\bigcup_{m=1}^{\infty}{\mathbb{N}}\times {\mathbb{N}}\times {\mathbb{R}}^m_+ \times {\mathbb{R}}^m_+ \times {\mathbb{R}}^m_+,\] we define \[\begin{align} q\big({\boldsymbol{z} },{\boldsymbol{\mu} }\big) = &\sum_{i=1}^{c} \log\lambda_{{\boldsymbol{\alpha} }}\big(\tilde{a}_i\big) + \sum_{i=2}^{c} \log\tilde{H}_{{\boldsymbol{\theta} }}\big(w_{i-1}+x_{i-1}-(\tilde{a}_i - \tilde{a}_{i-1}) \big) - \int_{0}^{\tilde{a}_1} \lambda_{{\boldsymbol{\alpha} }}(u) \;\mathrm{d}u \notag \\ & -\sum_{i=2}^{c} \int_{0}^{\tilde{a}_i - \tilde{a}_{i-1}} \lambda_{{\boldsymbol{\alpha} }}\big(u+\tilde{a}_{i-1}\big)\tilde{H}_{{\boldsymbol{\theta} }}\big(w_{i-1}+x_{i-1}-u\big) \;\mathrm{d}u \notag \\ & -\int_{0}^{r - \tilde{a}_c} \lambda_{{\boldsymbol{\alpha} }}\big(u+\tilde{a}_c\big)\tilde{H}_{{\boldsymbol{\theta} }}\big(w_c+x_c-u\big) \;\mathrm{d}u. 
\label{q40z44mu41} \end{align}\tag{16}\] Upon combining the above computations for the components (I), (II), (III) and (IV), together with \(T^{\text{inc}}_j\), we conclude that \(\ell_j^\circ =q({\boldsymbol{Z} }_j, {\boldsymbol{\mu} })\) and, with the objects \(q({\boldsymbol{Z} }_1, {\boldsymbol{\mu} }), \cdots, q({\boldsymbol{Z} }_{N_r}, {\boldsymbol{\mu} })\) being i.i.d., \[\begin{align} \ell_n^{\text{exact}} = \sum_{j=1}^{N_r} \ell_j^\circ=\sum_{j=1}^{N_r} q({\boldsymbol{Z} }_j, {\boldsymbol{\mu} }). \end{align}\]
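
For concreteness, the per-cycle contribution \(q({\boldsymbol{z}},{\boldsymbol{\mu}})\) of 16 can be computed numerically. As before, the sinusoidal rate and exponential patience are illustrative assumptions, and the integrals are approximated by the trapezoid rule.

```python
import math

def q_cycle(r, c, a, w, x, alpha, theta, n_grid=200):
    """Per-cycle log-likelihood contribution q(z, mu) of (16), sketched for
    lam(t) = alpha*(1 + 0.5*sin(2*pi*t)) and patience tail
    Hbar(y) = exp(-theta*y) for y > 0, else 1.  Inputs: cycle length r,
    number of joining customers c, cycle-relative arrival times a,
    waiting times w, and upward jumps x."""
    lam = lambda t: alpha * (1.0 + 0.5 * math.sin(2.0 * math.pi * t))
    Hbar = lambda y: math.exp(-theta * y) if y > 0 else 1.0

    def integral(shift, length, barrier):
        # trapezoid rule for int_0^length lam(u+shift)*Hbar(barrier-u) du;
        # barrier = -inf makes Hbar identically 1 (the pure-rate integral)
        h = length / n_grid
        vals = [lam(j * h + shift) * Hbar(barrier - j * h)
                for j in range(n_grid + 1)]
        return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

    val = sum(math.log(lam(a[i])) for i in range(c))
    val -= integral(0.0, a[0], float("-inf"))              # int_0^{a_1} lam
    for i in range(1, c):
        val += math.log(Hbar(w[i - 1] + x[i - 1] - (a[i] - a[i - 1])))
        val -= integral(a[i - 1], a[i] - a[i - 1], w[i - 1] + x[i - 1])
    val -= integral(a[c - 1], r - a[c - 1], w[c - 1] + x[c - 1])  # T_j^inc
    return val
```

Summing `q_cycle` over the \(N_r\) observed cycles reproduces \(\ell_n^{\text{exact}}\) under these parametric choices.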

5 ASYMPTOTIC PERFORMANCE OF ESTIMATOR

In Section 4, we decomposed the sample path into distinct regeneration cycles. This allowed us to create, in our M\(_t\)/G/\(s\)+H setting with its own specific dynamics, i.i.d. objects. In this section, we extensively use this regenerative structure to prove Theorems 1 and 2. In the first subsection, we present the assumptions imposed, whereas in the second subsection it is argued that these assumptions are mild and natural. The last two subsections provide the proofs of consistency and asymptotic normality, respectively.

5.1 Assumptions

In Section 3, we explained the estimation procedure, and stated our results regarding its asymptotic performance. The theorems hold under a series of assumptions, which we formally state in this subsection. We define \(H_{{\boldsymbol{\theta} }}(\infty):=\lim_{x \rightarrow \infty} H_{{\boldsymbol{\theta} }}(x).\)

Assumptions 1. Let \(d: \mathbb{R}^{k+p} \times \mathbb{R}^{k+p} \mapsto \mathbb{R}_{+}\) be the \(L^1\)-metric. The following assumptions are imposed:

  • (A1) The parameter space \(\Theta \equiv \Theta_{{\boldsymbol{\alpha} }}\times\Theta_{{\boldsymbol{\theta} }}\subset \mathbb{R}^{k+p}\) is a convex and compact set, such that the true parameter \({\boldsymbol{\mu} }_0\) lies in the interior of the set, i.e., \({\boldsymbol{\mu} }_0 \in \Theta^{\circ}\).

  • (A2) There exist \(\lambda_{\min}, \lambda_{\max} \in( 0,\infty)\) such that \(\lambda_{\min} \leq \lambda_{{\boldsymbol{\alpha} }}(t) \leq \lambda_{\max}\) for all \({\boldsymbol{\alpha} } \in \Theta_{\alpha}\) and \(t \in [0,1)\).

  • (A3) There exists \(\kappa > 0\) such that for all \(t \in [0,1)\), \(\lambda_{{{\boldsymbol{\cdot} }}}(t): \Theta_{{\boldsymbol{\alpha} }} \mapsto \mathbb{R}\) is \(\kappa\)-Lipschitz, i.e., \[\begin{align} \sup_{t \in [0,1)} \, \Big\vert \lambda_{{\boldsymbol{\alpha} }}(t) - \lambda_{{\boldsymbol{\alpha} }'}(t) \Big\vert \leq \kappa \;d\big({\boldsymbol{\alpha} }, {\boldsymbol{\alpha} }' \big). \end{align}\]

  • (A4) \(\lambda_{\max}\, \mathop{\mathrm{\mathbb{E}}}(B)\,(1 - H_{{\boldsymbol{\theta} }}(\infty)) < 1\).

  • (A5) There exists \(G_1 > 0\) such that for all \(x \ge 0\), \(\tilde{H}_{{\boldsymbol{\theta} }}(x): \Theta_{{\boldsymbol{\theta} }} \mapsto \mathbb{R}\) is \(G_1\)-Lipschitz, i.e., \[\begin{align} \sup_{x\ge 0}\,\Big\vert \tilde{H}_{{\boldsymbol{\theta} }} (x) - \tilde{H}_{{\boldsymbol{\theta} }'}(x)\Big\vert \leq G_1 \;d\big({\boldsymbol{\theta} }, {\boldsymbol{\theta}' }\big). \end{align}\]

  • There exists a linear function \(G_2(\cdot):\mathbb{R}^{+} \mapsto \mathbb{R}^{+}\) such that, for all \(x \ge 0\), \[\begin{align} \Big\vert\log\tilde{H}_{{\boldsymbol{\theta} }} (x) -\log\tilde{H}_{{\boldsymbol{\theta} }'}(x)\Big\vert \leq G_2(x) \;d\big({\boldsymbol{\theta} }, {\boldsymbol{\theta}' }\big). \end{align}\]

  • There exists a linear function \(G_3(\cdot): \mathbb{R}^{+} \mapsto \mathbb{R}^{+}\) such that, for all \(x \ge 0\), \[\begin{align} \Big\vert \log\tilde{H}_{{\boldsymbol{\theta} }}(x) \Big\vert \leq G_3(x). \end{align}\]

  • For all \(i \in \{1,2,\ldots, k\}\), \({\boldsymbol{\alpha} } \in \Theta_{{\boldsymbol{\alpha} }}\), and \(t \in [0,1)\), the partial derivative \(\frac{\partial}{\partial \alpha_i}\lambda_{{\boldsymbol{\alpha} }}(t)\) exists and is continuous.

  • For all \(x\ge 0\), the gradient vector \(\nabla_{{\boldsymbol{\theta} }} H_{{\boldsymbol{\theta} }}(x)\) and Hessian matrix \(\nabla^2_{{\boldsymbol{\theta} }} H_{{\boldsymbol{\theta} }}(x)\) exist and are continuous with respect to \({\boldsymbol{\theta} }\).

  • The matrix \(\mathop{\mathrm{\mathbb{E}}}\big[\nabla^2 q\big({{\boldsymbol{Z} }_1}, {{\boldsymbol{\mu} }_0}\big)\big]\) is invertible.

5.2 Discussion of Assumptions

We provide some insight into the reasons for imposing these assumptions, as well as possible relaxations.

  • Convexity of the parameter space \(\Theta\) is required because of the repeated application of the mean-value theorem in our proofs. The other elements of (A1) are natural, and commonly found in the statistical literature.

  • Through (A2) we impose the natural assumption that during times when the service system is operational, the arrival rate has a positive lower bound and a finite upper bound.

  • Assumption (A3) essentially entails that upon slightly tweaking \({\boldsymbol{\alpha} }\), the variation in the arrival rate is small, which is crucial for the estimation of \({\boldsymbol{\alpha} }\). Furthermore, this assumption also implies that \(\log \lambda_{{\boldsymbol{\cdot} }}(t): \Theta_{{\boldsymbol{\alpha} }} \mapsto \mathbb{R}\) is \(\kappa/\lambda_{\min}\)-Lipschitz: by the mean-value theorem applied to the logarithm, in combination with the lower bound of (A2), \[\begin{align} \big\vert \log\lambda_{{\boldsymbol{\alpha} }}(t) - \log\lambda_{{\boldsymbol{\alpha} }'}(t) \big\vert \leq \frac{1}{\lambda_{\min}}\,\big\vert \lambda_{{\boldsymbol{\alpha} }}(t) - \lambda_{{\boldsymbol{\alpha} }'}(t)\big\vert \leq \;\frac{\kappa}{\lambda_{\min}} d\big({\boldsymbol{\alpha} }, {\boldsymbol{\alpha} }' \big). \end{align}\]

  • We require (A4) to show the stability of the M\(_t\)/G/\(s\)+H queueing system. Informally, the quantity \(\lambda_{\max}\, \mathop{\mathrm{\mathbb{E}}}(B)\,(1 - H_{{\boldsymbol{\theta} }}(\infty))\) can be interpreted as the amount of work brought to the system if customers were to join regardless of the system’s congestion level. This leads us to believe that the assumption can be relaxed to merely requiring \(\lambda_{\max} \mathop{\mathrm{\mathbb{E}}}(B) (1 - H_{{\boldsymbol{\theta} }}(\infty)) < s,\) but this has turned out to be remarkably hard to prove. From a practical standpoint, however, this is of limited relevance, because for virtually any relevant patience distribution one has \(H_{{\boldsymbol{\theta} }}(\infty) = 1\).

  • Assumption (A5) is of the same spirit as (A3), and is justified along the same lines.

  • We require (A6) and (A7) while proving strong consistency of the maximum likelihood estimates. We have verified that these specific assumptions hold for standard distributions.

  • (A8) and (A9) are regularity assumptions that are common in the statistical literature.

  • (A10) is also a standard assumption in the statistical literature; it essentially requires that the log-likelihood is non-degenerate around the true parameter, in the sense that the expected Hessian is invertible.
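As a concrete illustration of (A2) and (A3), the sketch below numerically verifies the implied \((\kappa/\lambda_{\min})\)-Lipschitz property of the log-rate for the sinusoidal arrival rate used in Section 7.1.1. The parameter box, the bound \(\lambda_{\min}=10\), and \(\kappa=30\) are illustrative choices, not quantities appearing in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sinusoidal rate of Section 7.1.1: lambda_alpha(t) = a1 + a2*sin(a3 - 0.1*t).
def lam(alpha, t):
    a1, a2, a3 = alpha
    return a1 + a2 * np.sin(a3 - 0.1 * t)

# Illustrative compact parameter box Theta_alpha = [40,60] x [10,30] x [0,2].
lo = np.array([40.0, 10.0, 0.0])
hi = np.array([60.0, 30.0, 2.0])
lam_min = 10.0   # lower bound on the rate over the box: a1 - a2 >= 40 - 30
kappa = 30.0     # sup-norm bound on the gradient: |d lam / d a3| <= a2 <= 30

t = np.linspace(0.0, 1.0, 200, endpoint=False)
for _ in range(1000):
    a = lo + (hi - lo) * rng.random(3)
    b = lo + (hi - lo) * rng.random(3)
    d = np.abs(a - b).sum()                     # L1 metric d(alpha, alpha')
    gap = np.abs(np.log(lam(a, t)) - np.log(lam(b, t))).max()
    assert gap <= (kappa / lam_min) * d + 1e-9  # (kappa/lam_min)-Lipschitz
print("log-Lipschitz bound implied by (A2)-(A3) verified on 1000 random pairs")
```

The assertion mirrors the display above: the gradient bound gives \(\vert\lambda_{{\boldsymbol{\alpha}}}(t)-\lambda_{{\boldsymbol{\alpha}}'}(t)\vert\le\kappa\, d({\boldsymbol{\alpha}},{\boldsymbol{\alpha}}')\), and dividing by \(\lambda_{\min}\) yields the log-rate bound.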

5.3 Strong Consistency

In Section 4, we decomposed the observed sample path of the system into distinct regeneration cycles. We know that these cycles are i.i.d., and in the remainder of this section, we make intensive use of that property to establish strong consistency of the estimators. At the end of Section 4, we constructed random vectors which encompass all information pertaining to a regeneration cycle, and wrote the log-likelihood as a sum of i.i.d. functions of these random vectors.

Lemma 1. Let \(d\) be a metric on \(\Theta\). Let \({\boldsymbol{\mu} }, {\boldsymbol{\mu}' } \in \Theta\) where \({\boldsymbol{\mu} } = \big({\boldsymbol{\alpha} }, {\boldsymbol{\theta} } \big)\) and \({\boldsymbol{\mu}' } = \big({\boldsymbol{\alpha} }', {\boldsymbol{\theta}' } \big)\). Then under Assumptions (A2), (A3), (A5), (A6), there exists a measurable function \(B(\cdot): \mathscr{Z} \mapsto \mathbb{R}\) and a non-random function \(h(\cdot): \mathbb{R} \mapsto \mathbb{R}\) such that for all \(j,\) \[\big\vert \,q\big({\boldsymbol{Z}_j }, {\boldsymbol{\mu}' }\big) - q\big({\boldsymbol{Z}_j }, {\boldsymbol{\mu} }\big) \,\big\vert \leq B\big({\boldsymbol{Z}_j }\big)\,h\big(d\big({\boldsymbol{\mu} }, {\boldsymbol{\mu}' }\big)\big),\] where \(\mathop{\mathrm{\mathbb{E}}}B\big({\boldsymbol{Z}_j }\big) < \infty\) and \(\lim_{x \rightarrow 0} h(x) = 0\).

Proof. Deferred to Appendix 10. ◻

Corollary 1 (Corollary of Lemma 1). The following two statements hold: \[\begin{align} \sup_{N_r \in{\mathbb{N}}} \;\frac{1}{N_r} \sum_{j=1}^{N_r} \mathop{\mathrm{\mathbb{E}}}B\big({\boldsymbol{Z}_j }\big) &< \infty, \label{D95Andrews95condition951} \\ \frac{1}{N_r} \sum_{j=1}^{N_r} \Big(B\big({\boldsymbol{Z}_j }\big) - \mathop{\mathrm{\mathbb{E}}}B\big({\boldsymbol{Z}_j }\big)\Big) &\overset{\text{a.s.}}{\longrightarrow} 0 \quad \text{as } N_r\to\infty. \label{D95Andrews95condition952} \end{align}\]

Proof. The random variables \(B\big({\boldsymbol{Z}_j }\big)\) for \(j = 1,2,\ldots, N_r\) are i.i.d., and by Lemma 1, \(\mathop{\mathrm{\mathbb{E}}}B\big({{\boldsymbol{Z} }_1}\big) < \infty\), so that the first statement follows trivially. The second statement then follows from the strong law of large numbers. ◻

We now possess all the requisite machinery to prove Theorem 1.

Proof of Theorem 1. First observe that, as \(n \rightarrow \infty,\) we have \(n/N_r \overset{\text{a.s.}}{\longrightarrow} \mathop{\mathrm{\mathbb{E}}}\,C_1\), which is finite due to part (b) of Theorem 3, so that \(N_r \rightarrow \infty\) is equivalent to \(n \rightarrow \infty\). Using [30], in combination with Lemma 1 and the two statements of Corollary 1, we conclude that \[\begin{align} \sup_{{\boldsymbol{\mu} } \in \Theta} \;\Bigg|\frac{1}{N_r} \sum_{j=1}^{N_r} q\big({\boldsymbol{Z}_j }, {\boldsymbol{\mu} }\big) - \mathop{\mathrm{\mathbb{E}}}q\big({{\boldsymbol{Z} }_1}, {\boldsymbol{\mu} }\big) \Bigg| \overset{\text{a.s.}}{\longrightarrow} 0, \label{U-SLLN} \end{align}\tag{17}\] which is equivalent to, as \(n \rightarrow \infty\), \[\begin{align} \sup_{{\boldsymbol{\mu} } \in \Theta} \;\Bigg\vert \frac{1}{n} \ell_n\big({\boldsymbol{\alpha} }, {\boldsymbol{\theta} };{{\boldsymbol{A} }},{{\boldsymbol{W} }} \,\big\vert \,{{\boldsymbol{X} }}\big)\cdot \frac{n}{N_r} - \mathop{\mathrm{\mathbb{E}}}q\big({{\boldsymbol{Z} }_1}, {\boldsymbol{\mu} }\big) \Bigg\vert \overset{\text{a.s.}}{\longrightarrow} 0. \end{align}\] The remainder of the proof is similar to that of [3]. Recalling that \(n/N_r \overset{\text{a.s.}}{\longrightarrow} \mathop{\mathrm{\mathbb{E}}}\, C_1\) as \(n \rightarrow \infty,\) we obtain \[\begin{align} \sup_{{\boldsymbol{\mu} }\in\Theta} \left\vert \frac{1}{n}\ell_n\big({\boldsymbol{\alpha} }, {\boldsymbol{\theta} };{{\boldsymbol{A} }},{{\boldsymbol{W} }} \,\big\vert \,{{\boldsymbol{X} }}\big)-\ell({\boldsymbol{\mu} }) \right\vert \overset{\text{a.s.}}{\longrightarrow} 0 , \end{align}\] where \(\ell({\boldsymbol{\mu} })=\mathop{\mathrm{\mathbb{E}}}q\big({{\boldsymbol{Z} }_1}, {\boldsymbol{\mu} }\big)/\mathop{\mathrm{\mathbb{E}}}\,C_1\) is a non-random function which is maximized at the true parameter \({{\boldsymbol{\mu} }_0}\). 
The density of the effective interarrival times given in (2), and thereby the log-likelihood (5), are uniquely determined by \(\lambda_{{\boldsymbol{\alpha} }}(\cdot)\) and \(H_{{\boldsymbol{\theta} }}(\cdot)\), which in turn are uniquely determined by \({\boldsymbol{\alpha} }\) and \({\boldsymbol{\theta} }\), respectively. In particular, this means that there exist no distinct \({\boldsymbol{\alpha} }_1, {\boldsymbol{\alpha} }_2 \in \Theta_{{\boldsymbol{\alpha} }}\) such that \(\lambda_{{\boldsymbol{\alpha} }_1}(\cdot) = \lambda_{{\boldsymbol{\alpha} }_2}(\cdot)\) almost everywhere, nor distinct \({\boldsymbol{\theta} }_1, {\boldsymbol{\theta} }_2 \in \Theta_{{\boldsymbol{\theta} }}\) such that \(H_{{\boldsymbol{\theta} }_1}(\cdot) = H_{{\boldsymbol{\theta} }_2}(\cdot)\) almost everywhere. We therefore conclude that the model is identifiable in the Kullback-Leibler sense, i.e., \[\begin{align} \ell({\boldsymbol{\mu} }) - \ell({\boldsymbol{\mu} }_0) < 0, \;\forall {\boldsymbol{\mu} } \neq {\boldsymbol{\mu} }_0. \end{align}\] Hence we conclude that \(\hat{{\boldsymbol{\mu} }}_n \overset{\text{a.s.}}{\longrightarrow} {{\boldsymbol{\mu} }_0}\). ◻

5.4 Asymptotic Normality

In the previous subsection, we established strong consistency of the estimator \(\hat{{\boldsymbol{\mu} }}_n\), showing that it converges almost surely to the true parameter \({{\boldsymbol{\mu} }_0}\). The next logical step is to discuss the fluctuation of the estimator around \({{\boldsymbol{\mu} }_0}\). In this section we show, under the assumptions imposed, that the fluctuations \(\big(\hat{{\boldsymbol{\mu} }}_n - {{\boldsymbol{\mu} }_0}\big)\), scaled by a factor \(\sqrt{n}\), converge in distribution to a zero-mean normally distributed random vector. Further advantages of working with regeneration cycles will become evident in this section. Since we can write the log-likelihood as a sum of i.i.d. functions of the data in each regeneration cycle, we can rely on classical results to prove Theorem 2. For convenience, let us define \[\begin{align} \mathcal{L}_{N_r}\big({\boldsymbol{\mu} }\big) := \frac{1}{N_r} \ell_n\big({\boldsymbol{\alpha} }, {\boldsymbol{\theta} };{{\boldsymbol{A} }},{{\boldsymbol{W} }} \,\big\vert \,{{\boldsymbol{X} }}\big)=\frac{1}{N_r} \sum_{j=1}^{N_r} q\big({\boldsymbol{Z}_j },{{\boldsymbol{\mu} }}\big). \end{align}\] Lemma 2 is important with regard to proving Theorem 2, as we shall see below.

Lemma 2. Under Assumptions (A2), (A3), (A6), (A7), (A8), and (A9), \(\mathop{\mathrm{\mathbb{E}}}\big[\nabla q\big({{\boldsymbol{Z} }_1}, {{\boldsymbol{\mu} }_0}\big)\big]= {\boldsymbol{0} }_{(k+p)\times 1}.\)

Proof. Deferred to Appendix 10. ◻

Proof of Theorem 2. As \(n \rightarrow \infty\), by Theorem 3(a), \(N_r \rightarrow \infty\). Therefore, by the central limit theorem in combination with Lemma 2, \[\begin{align} \sqrt{N_r} \;\nabla\mathcal{L}_{N_r}\big({{\boldsymbol{\mu} }_0}\big) = \sqrt{N_r}\times\frac{1}{N_r}\sum_{i=1}^{N_r} \nabla q\big({\boldsymbol{Z}_i }, {{\boldsymbol{\mu} }_0}\big) &= \sqrt{N_r}\Bigg(\frac{1}{N_r}\sum_{i=1}^{N_r} \nabla q\big({\boldsymbol{Z}_i }, {{\boldsymbol{\mu} }_0}\big) - \mathop{\mathrm{\mathbb{E}}}\big[\nabla q\big({\boldsymbol{Z}_i }, {{\boldsymbol{\mu} }_0}\big)\big]\Bigg) \notag \\ &\overset{\text{d}}{\longrightarrow} \;\mathcal{N}\Big(0, \text{Var}\big(\nabla q\big({{\boldsymbol{Z} }_1}, {{\boldsymbol{\mu} }_0}\big)\big) \Big), \label{asymptotic95normality:eqn:1} \end{align}\tag{18}\] where, using Lemma 2 again, the variance term is given by \[\begin{align} \label{asymptotic95normality:eqn:2} \text{Var}\big(\nabla q\big({{\boldsymbol{Z} }_1}, {{\boldsymbol{\mu} }_0}\big)\big) &= \mathop{\mathrm{\mathbb{E}}}\bigg[\Big(\nabla q\big({{\boldsymbol{Z} }_1}, {{\boldsymbol{\mu} }_0}\big) - \mathop{\mathrm{\mathbb{E}}}\nabla q\big({{\boldsymbol{Z} }_1}, {{\boldsymbol{\mu} }_0}\big)\Big)\Big(\nabla q\big({{\boldsymbol{Z} }_1}, {{\boldsymbol{\mu} }_0}\big) - \mathop{\mathrm{\mathbb{E}}}\nabla q\big({{\boldsymbol{Z} }_1}, {{\boldsymbol{\mu} }_0}\big)\Big)^{\text{T}} \Big] \notag \\ &= \mathop{\mathrm{\mathbb{E}}}\Big[\nabla q\big({{\boldsymbol{Z} }_1}, {{\boldsymbol{\mu} }_0}\big) \nabla q\big({{\boldsymbol{Z} }_1}, {{\boldsymbol{\mu} }_0}\big)^{\text{T}} \Big] \notag \\ &= -\mathop{\mathrm{\mathbb{E}}}\Big[\nabla^2 q\big({{\boldsymbol{Z} }_1}, {{\boldsymbol{\mu} }_0}\big)\Big], \end{align}\tag{19}\] where the final equality has been explained in [34]. 
As \(n \rightarrow \infty\), for all \({\boldsymbol{\mu} } \in \Theta\), we also have, by the strong law of large numbers, \[\begin{align} \nabla^2 \mathcal{L}_{N_r} \big({\boldsymbol{\mu} }\big) = \frac{1}{N_r} \sum_{i=1}^{N_r} \nabla^2 q\big({\boldsymbol{Z}_i }, {\boldsymbol{\mu} }\big) \overset{\text{a.s.}}{\longrightarrow} \mathop{\mathrm{\mathbb{E}}}\Big[\nabla^2 q\big({{\boldsymbol{Z} }_1}, {\boldsymbol{\mu} }\big)\Big]. \label{asymptotic95normality:eqn:3} \end{align}\tag{20}\] For some \(c \in (0,1)\) and \(\tilde{{\boldsymbol{\mu} }}_{n} = c{{\boldsymbol{\mu} }_0} + (1-c)\hat{{\boldsymbol{\mu} }}_{n}\), by the mean-value theorem, \[\begin{align} \nabla \mathcal{L}_{N_r}\big(\hat{{\boldsymbol{\mu} }}_{n}\big) = \nabla \mathcal{L}_{N_r}\big({\boldsymbol{\mu} }_0\big) + \nabla^2 \mathcal{L}_{N_r}\big(\tilde{{\boldsymbol{\mu} }}_{n}\big) \cdot \big(\hat{{\boldsymbol{\mu} }}_{n} - {\boldsymbol{\mu} }_0 \big). \end{align}\] By Theorem 1, we know that \(\hat{{\boldsymbol{\mu} }}_{n} \overset{\text{a.s.}}{\longrightarrow} {\boldsymbol{\mu} }_0\), and by Assumption (A1), \({\boldsymbol{\mu} }_0 \in \Theta^{\circ}\). This means that there exists an almost surely finite \(\tilde{n}\) such that \(\hat{{\boldsymbol{\mu} }}_n \in \Theta^\circ\) for all \(n > \tilde{n}\). Since, for \(n > \tilde{n}\), \(\hat{{\boldsymbol{\mu} }}_{n} \in \Theta^{\circ}\) is a maximum likelihood estimate, we have \(\nabla \ell_n\big(\hat{{\boldsymbol{\mu} }}_{n};{{\boldsymbol{A} }},{{\boldsymbol{W} }} \,\big\vert \,{{\boldsymbol{X} }}\big) = 0\), and therefore \(\nabla \mathcal{L}_{N_r}\big(\hat{{\boldsymbol{\mu} }}_{n}\big) = {\boldsymbol{0} }\) for all \(n> \tilde{n}\). Hence, the above equation can be rewritten as \[\begin{align} \nabla^2 \mathcal{L}_{N_r}\big(\tilde{{\boldsymbol{\mu} }}_n\big) \cdot \big(\hat{{\boldsymbol{\mu} }}_{n} - {\boldsymbol{\mu} }_0 \big) = - \nabla \mathcal{L}_{N_r}\big({\boldsymbol{\mu} }_0\big). 
\end{align}\] Multiplying both sides by \(\sqrt{N_r}\), we get \[\begin{align} \nabla^2 \mathcal{L}_{N_r}\big(\tilde{{\boldsymbol{\mu} }}_n\big) \cdot \sqrt{N_r}\big(\hat{{\boldsymbol{\mu} }}_{n} - {\boldsymbol{\mu} }_0\big) = -\sqrt{N_r} \nabla \mathcal{L}_{N_r}\big({\boldsymbol{\mu} }_0\big). \end{align}\] Since \(\hat{{\boldsymbol{\mu} }}_{n} \overset{\text{a.s.}}{\longrightarrow} {{\boldsymbol{\mu} }_0}\) as \(n \rightarrow \infty\), and \(\big\vert \tilde{{\boldsymbol{\mu} }}_{n} - {{\boldsymbol{\mu} }_0}\big\vert \leq \big\vert \hat{{\boldsymbol{\mu} }}_{n} - {{\boldsymbol{\mu} }_0}\big\vert\), we can conclude that \(\tilde{{\boldsymbol{\mu} }}_{n} \overset{\text{a.s.}}{\longrightarrow} {{\boldsymbol{\mu} }_0}\). Using this fact and Equations (18), (19), (20), as \(N_r \rightarrow \infty\) we conclude \[\begin{align} \label{equation:asymptotic95normality:sqrt40N95r4195convergence} \sqrt{N_r}\big(\hat{{\boldsymbol{\mu} }}_{n} - {\boldsymbol{\mu} }_0\big) \overset{\text{d}}{\longrightarrow} & -\mathop{\mathrm{\mathbb{E}}}\Big[\nabla^2 q\big({{\boldsymbol{Z} }_1}, {{\boldsymbol{\mu} }_0}\big)\Big]^{-1} \mathcal{N}\bigg(0, -\mathop{\mathrm{\mathbb{E}}}\Big[\nabla^2 q\big({{\boldsymbol{Z} }_1}, {{\boldsymbol{\mu} }_0}\big)\Big] \bigg) \notag \\ \overset{\text{d}}{=} & \;\mathcal{N}\bigg(0, -\mathop{\mathrm{\mathbb{E}}}\Big[\nabla^2 q\big({{\boldsymbol{Z} }_1}, {{\boldsymbol{\mu} }_0}\big)\Big]^{-1}\bigg) \overset{\text{d}}{=} \;\mathcal{N}\Big(0, I\big({{\boldsymbol{\mu} }_0}\big)^{-1} \Big). \end{align}\tag{21}\] Recalling that \({n}/{N_r} \overset{\text{a.s.}}{\longrightarrow} \mathop{\mathrm{\mathbb{E}}}C_1\), and multiplying (21) on both sides by \(\sqrt{n/N_r}\), we conclude \[\begin{align} \sqrt{n} \big({\hat{{\boldsymbol{\mu} }}_n} - {{\boldsymbol{\mu} }_0}\big) \overset{\text{d}}{\longrightarrow} \mathcal{N}\Big(0, \;\mathop{\mathrm{\mathbb{E}}}C_1\, I\big({{\boldsymbol{\mu} }_0}\big)^{-1} \Big)\,, \end{align}\] as \(n\to\infty\). ◻

6 GENERAL FRAMEWORK FOR JOINING BASED ON INCOMPLETE INFORMATION

In Section 3, we assured the reader that we performed calculations for the case \(\Delta(t) = V(t)\) only for the sake of demonstration. In this section, we generalize Theorems 1 and 2 to a generalized delay announcement \(\Delta(t) = \psi\big(\mathcal{R}(t)\big)\). The consistency and asymptotic normality proofs of this estimator once again rely on the underlying idea of dividing the sample path into distinct regeneration cycles. For regeneration cycle \(j \in \{1,2,\ldots,N\}\), let the objects \(\{\tilde{A}_{j,i}\}_{i=1,2,\ldots,C_j}\), \(\zeta_j\), \(R_j\), and \(C_j\) carry the same meaning as before. Let \({\boldsymbol{Z}_j }\) denote the new data vector corresponding to the \(j\)-th regeneration cycle. As in Sections 4 and 5, we define, for compactness abbreviating \(h(x):=\tilde{H}_{{\boldsymbol{\theta} }}(\psi(\mathcal{R}(x)))\), \[\begin{align} \label{eqn:q:incomplete} q\big({\boldsymbol{Z}_j }, {\boldsymbol{\mu} }\big) = &\sum_{i=1}^{C_j} \log\lambda_{{\boldsymbol{\alpha} }}\big(\tilde{A}_{j,i}\big) + \sum_{i=1}^{C_j} \log h(\tilde{A}_{j,i}) - \sum_{i=2}^{C_j} \int_{0}^{\tilde{A}_{j,i}} \lambda_{{\boldsymbol{\alpha} }}\big(u + \tilde{A}_{j,i-1}\big) h(u+\tilde{A}_{j,i-1}) \, \mathrm{d}u \notag \\ & - \int_{0}^{\tilde{A}_{j,1}} \lambda_{{\boldsymbol{\alpha} }}(u)h(u)\,\mathrm{d}u -\int_{0}^{R_j - \tilde{A}_{j,C_j}} \lambda_{{\boldsymbol{\alpha} }}\big(u + \tilde{A}_{j,C_j}\big) h(u+\tilde{A}_{j,C_j})\, \mathrm{d}u. \end{align}\tag{22}\]
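For concreteness, a minimal numerical sketch of evaluating the per-cycle contribution \(q({\boldsymbol{Z}_j }, {\boldsymbol{\mu} })\) of (22) is given below. The function names `lam` and `h`, the convention that arrival epochs are measured from the start of the cycle (so that the integral terms jointly amount to integrating \(\lambda_{{\boldsymbol{\alpha}}}\cdot h\) over the whole cycle \([0,R_j]\)), and the trapezoidal integration are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def cycle_loglik(epochs, R, lam, h, grid=200):
    """Sketch: per-cycle log-likelihood contribution q(Z_j, mu), cf. (22).

    epochs -- increasing arrival epochs within the cycle (C_j of them),
              measured from the start of the cycle
    R      -- length of the regeneration cycle, R >= epochs[-1]
    lam    -- vectorized arrival-rate function lambda_alpha(.)
    h      -- vectorized joining probability H_tilde(psi(R(.))) along the path
    """
    epochs = np.asarray(epochs, dtype=float)
    val = np.sum(np.log(lam(epochs))) + np.sum(np.log(h(epochs)))
    # Compensator: the three integral terms of (22), i.e. the integral of
    # lam * h over [0, R] split at the arrival epochs.
    knots = np.concatenate(([0.0], epochs, [R]))
    for a, b in zip(knots[:-1], knots[1:]):
        u = np.linspace(a, b, grid)
        fu = lam(u) * h(u)
        val -= np.sum((fu[:-1] + fu[1:]) * np.diff(u)) / 2.0  # trapezoid rule
    return val
```

As a sanity check: for constant \(\lambda \equiv 2\) and \(h \equiv 0.5\), two arrivals at epochs 1 and 2 in a cycle of length 3 give \(2\log 2 + 2\log 0.5 - 3 = -3\), which the function reproduces (the trapezoid rule is exact for constants).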

Assumptions, continued 1 (A11). One of the following two additional assumptions is imposed:

  • Let \(\tilde{\Omega}\) be the collection of all possible states of our system (as explained at the beginning of Section 6), and let \(\tilde{V}: \tilde{\Omega} \mapsto \mathbb{R}_{+}\) be the virtual waiting time function, in the sense that, given a vector of residual service times of customers \({\boldsymbol{r} } \in \tilde{\Omega}\), \(\tilde{V}({\boldsymbol{r} })\) is the corresponding virtual waiting time. Then there exists a constant \(\tilde{M} > 0\) such that for every \({\boldsymbol{r} } \in \tilde{\Omega}\), we must have \(\psi({\boldsymbol{r} }) \leq \tilde{M}\tilde{V}({\boldsymbol{r} })\).

  • \(\psi\big(\mathcal{R}(t)\big) = \psi_1\big(L(t)\big)\), where \(\psi_1\) is a linear function, and \(\mathop{\mathrm{\mathbb{E}}}\big[C_1^2\big] < \infty\).

Discussion of Assumptions, continued.

  • (A11)(a) essentially says that, given that the virtual waiting time is \(V(t)\), the delay proxy \(\psi(\mathcal{R}(t))\) should not be larger than \(\tilde{M}V(t)\). In particular, Model 1 introduced earlier clearly satisfies (A11)(a) with the choice \(\tilde{M}=1\).

  • (A11)(b) is required for delay proxies based on the number of customers in the system, such as Models 2 and 3. The first part of the assumption reflects this. The second part assumes that the second moment of the number of customers in a cycle is finite. Verifying this condition can be a challenging problem in the context of M\(_t\)/G/\(s\)+H systems. In theory, we can allow \(\psi\big(\mathcal{R}(t)\big) = \psi_1\big(L(t)^{1+\eta}\big)\), but then we require \(\mathop{\mathrm{\mathbb{E}}}\big[C_1^{2+\eta}\big] < \infty\).

Observe that Model 4 satisfies neither (A11)(a) nor (A11)(b). However, Assumption (A11) merely provides sufficient conditions under which Proposition 2 holds; we do not believe that these conditions are strictly necessary.

Proposition 2. Under Assumptions (A1)–(A6) and (A11)(a) or (A11)(b), the statement of Theorem 1 holds in the generalized setting of this section. Furthermore, under Assumptions (A1)–(A10) and (A11)(a) or (A11)(b), the statement of Theorem 2 holds in the generalized setting of this section, where \(q({\boldsymbol{Z} }_1, {\boldsymbol{\mu} })\) is now given by (22).

The proof of Proposition 2 is very similar to those of Theorems 1 and 2, as detailed in Appendix 10.

7 NUMERICAL EXPERIMENTS

In this section we numerically assess the performance of our estimation procedure through a series of experiments. We consider three mechanisms:

  • customers know their exact waiting time,

  • customers receive an estimate for their expected waiting time based on the queue length, and

  • customers simply observe the number of customers in the system, and join or balk accordingly.

For each of these, we consider an arrival rate function that is a sinusoid or a weighted sum of sinusoids (where the ratios of the periods are rational numbers, making the aggregate arrival rate periodic), and exponentially, hyperexponentially, Pareto, or geometrically distributed patience. Maximization of the log-likelihood is potentially challenging, as there is no guarantee whatsoever that local maxima are also global maxima. To deal with this complication, we have used (the Python implementations of) the L-BFGS-B and Nelder-Mead algorithms. For a given observation of the arrival process, service times and patience levels, we simulate five or six different service systems, with 1, 2, 4, 8, and 16 or 32 servers. We repeat this procedure 100 times to generate empirical confidence intervals of the obtained estimates, which we visualize via box plots. In each of the experiments we report the number of arrivals that has been used, which has typically been chosen such that the width of the corresponding confidence intervals is ‘sufficiently small’.
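The periodic arrival streams used throughout these experiments can be generated by the classical thinning (acceptance-rejection) method for non-homogeneous Poisson processes. The sketch below, using the sinusoidal rate of Section 7.1.1 and an illustrative bound \(\lambda_{\max} = 70\), is one possible implementation, not necessarily the one used for the paper's experiments:

```python
import numpy as np

def thinning(lam, lam_max, T, rng):
    """Arrival epochs of a non-homogeneous Poisson process on [0, T],
    obtained by thinning a homogeneous Poisson process of rate lam_max."""
    arrivals, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)   # candidate arrival epoch
        if t > T:
            return np.array(arrivals)
        if rng.random() <= lam(t) / lam_max:  # keep with prob lam(t)/lam_max
            arrivals.append(t)

rng = np.random.default_rng(1)
rate = lambda t: 50 + 20 * np.sin(1 - 0.1 * t)  # rate from Section 7.1.1
arrivals = thinning(rate, 70.0, 100.0, rng)
print(f"{len(arrivals)} arrivals on [0, 100]")
```

The bound must satisfy \(\lambda_{\max} \ge \sup_t \lambda_{{\boldsymbol{\alpha}}}(t)\) (here \(50+20=70\)); a looser bound remains correct but wastes candidate points.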

7.1 Model I: Exact waiting time is known

In this model, customers are provided with the exact value of the virtual waiting time upon arrival. This means that the information function is assumed to be \(\Delta(t) = V(t)\), so that we are in the framework of Sections 2–5.

7.1.1 Sinusoidal arrival rate and exponentially distributed patience

In this example, our model of choice is an M\(_t\)/M/\(s\)+H system, with arrival rate, service time distribution and patience distribution given by \[\begin{align} \lambda_{{\boldsymbol{\alpha} }}(t) &= 50 + 20\sin\big(1 - 0.1t\big), \\ M &\sim \text{Exp}(0.2), \\ H_{{\boldsymbol{\theta} }}(x) &= 1 - e^{-0.5x}. \end{align}\] The parameters to be estimated are therefore \({\boldsymbol{\alpha} }_0 = (50, 20, 1)\) and \({\boldsymbol{\theta} }_0 = 0.5\). We work with simulated data corresponding to \(20,000\) total arrivals (i.e., balking and non-balking customers). The resulting box plots are shown in Figure 5. Recalling Example 1, where we discussed how the service capacity affects the accuracy of the estimates, in this example we observe the same phenomenon: the accuracy of the \({\boldsymbol{\alpha} }\) estimates improves as the number of servers increases. On the other hand, observe from Figure 5(d) that \({\boldsymbol{\theta} }\) is estimated poorly when the number of servers is relatively large: the virtual waiting time then becomes small, so that virtually any arriving customer joins, thus not providing any information on the patience distribution. In a more formal sense, this can also be observed from Equation (5), by noting that the terms which contain \({\boldsymbol{\theta} }\) start disappearing: if \(W_{i-1}\) and \(X_{i-1}\) are virtually zero, then \(W_{i-1} + X_{i-1} - A_i\) typically takes negative values, and therefore \(\log \tilde{H}_{{\boldsymbol{\theta} }}(W_{i-1}+X_{i-1}-A_i)\) is zero. As a similar effect takes place inside the integral, all terms containing \({\boldsymbol{\theta} }\) vanish for such high numbers of servers, thus prohibiting the optimization of the log-likelihood over \({\boldsymbol{\theta} }\). 
The patience parameter \({{\boldsymbol{\theta} }}\) is generally best estimated if the number of servers allows for a system in which the virtual waiting time takes a wide spectrum of values, which is typically the case when the number of servers is relatively low. However, if the number of servers is very low, then the fraction of customers joining is low, which has a detrimental effect on the accuracy of the estimator of \({\boldsymbol{\theta} }\). Due to this conflict between the accuracy of the estimators for \({\boldsymbol{\alpha} }\) and \({\boldsymbol{\theta} }\), scenarios in which the number of servers is ‘moderate’ lead to the best overall performance.
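The interplay described above can be reproduced with a small simulation. The sketch below implements the Model I dynamics for an FCFS M\(_t\)/M/\(s\)+H system (arrivals generated by thinning; a customer joins iff the exact virtual waiting time does not exceed an \(\text{Exp}(0.5)\) patience), reading \(\text{Exp}(0.2)\) as a service time with mean \(0.2\) so that 16 servers can keep up with the peak rate. This is an illustrative reconstruction, not the simulator used for Figure 5:

```python
import numpy as np

def join_fraction(s, T, rng):
    """Fraction of arrivals that join in Model I with s FCFS servers.
    Arrivals: thinned non-homogeneous Poisson with the rate of Section 7.1.1;
    service times Exp with mean 0.2; patience Exp with mean 1/0.5 = 2."""
    lam_max = 70.0
    free = np.zeros(s)           # epochs at which each server becomes free
    t, arrived, joined = 0.0, 0, 0
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > T:
            return joined / max(arrived, 1)
        if rng.random() > (50 + 20 * np.sin(1 - 0.1 * t)) / lam_max:
            continue             # thinned out: no arrival at time t
        arrived += 1
        wait = max(free.min() - t, 0.0)   # exact virtual waiting time V(t)
        if rng.exponential(2.0) >= wait:  # patience >= announced delay: join
            joined += 1
            i = free.argmin()
            free[i] = max(free[i], t) + rng.exponential(0.2)

rng = np.random.default_rng(2)
for s in (1, 4, 16):
    print(f"s = {s:2d}: fraction joining = {join_fraction(s, 200.0, rng):.3f}")
```

With \(s=16\) nearly every customer joins (waits are negligible, so the patience distribution is barely identifiable), whereas with \(s=1\) only a small fraction joins, mirroring the trade-off discussed above.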


Figure 5: Box plots of Model I - Submodel 1 estimates. a — Confidence Intervals for \(\alpha_1\) (True value = 50), b — Confidence Intervals for \(\alpha_2\) (True value = 20), c — Confidence Intervals for \(\alpha_3\) (True value = 1), d — Confidence Intervals for \(\theta\) (True value = 0.5)

7.1.2 Weighted sinusoidal arrival rate and hyperexponentially distributed patience

In this example, our model of choice is an M\(_t\)/M/\(s\)+H system, with arrival rate, service time distribution and patience distribution given by \[\begin{align} \lambda_{{\boldsymbol{\alpha} }}(t) &= 50 + 15\sin\big(4 - 0.1t\big) + 10\sin\big(1 - 0.5t\big), \\ M &\sim \text{Exp}(0.4), \\ H_{{\boldsymbol{\theta} }}(x) &= 1 - 0.8e^{-x} - 0.2e^{-0.1x}. \end{align}\] The parameters to be estimated are therefore \({\boldsymbol{\alpha} }_0 = (50, 15, 10, 4, 1)\) and \({\boldsymbol{\theta} }_0 = (0.8, 1, 0.1)\). In this example, the number of total arrivals is \(10^6\), leading to the box plots of Figure 6. As in the previous model, we observe that the accuracy of the estimator of \({\boldsymbol{\alpha} }\) improves as the number of servers increases. However, the estimates corresponding to \({\boldsymbol{\theta} }\) now do not exhibit a clear pattern. When the number of servers is low (say, \(s=1\)), the estimate of \(\theta_3\) is accurate, whereas that of \(\theta_2\) is poor. This is because virtual waiting times are typically high, entailing that customers with patience sampled from the \(\text{Exp}(1)\) distribution tend to balk due to their low patience values. Therefore, a large proportion of non-balking customers have patience sampled from the \(\text{Exp}(0.1)\) distribution, so that \(\theta_3\) can be estimated accurately but \(\theta_2\) cannot. This model requires a substantial amount of data for the estimation procedure for multiple reasons, one of which has been discussed in Example 2. The other reason is that the log-likelihood has a rather irregular surface, which is hard to optimize over (in comparison to the previous model).
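The class-imbalance explanation above can be quantified directly: conditional on a patience level exceeding a delay \(w\), the probability that it was drawn from the \(\text{Exp}(0.1)\) component is \(0.2e^{-0.1w}/(0.8e^{-w} + 0.2e^{-0.1w})\). A small, purely illustrative sketch:

```python
import numpy as np

# Survival function of the hyperexponential patience of this subsection:
# P(patience > x) = 0.8*exp(-x) + 0.2*exp(-0.1*x).
def p_slow_given_survive(w):
    """P(patience came from the Exp(0.1) component | patience > w)."""
    fast = 0.8 * np.exp(-1.0 * w)   # Exp(1) component, theta_2 = 1
    slow = 0.2 * np.exp(-0.1 * w)   # Exp(0.1) component, theta_3 = 0.1
    return slow / (fast + slow)

for w in (0.0, 2.0, 5.0, 10.0):
    print(f"w = {w:4.1f}: share of Exp(0.1) customers among joiners = "
          f"{p_slow_given_survive(w):.3f}")
```

The share rises from the unconditional 0.2 at \(w=0\) to above 0.95 at \(w=5\), which is why at \(s=1\) (high waits) the data are nearly uninformative about \(\theta_2\).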


Figure 6: Box plots of Model I - Submodel 2 estimates. a — Confidence Intervals for \(\alpha_1\) (True value = 50), b — Confidence Intervals for \(\alpha_2\) (True value = 15), c — Confidence Intervals for \(\alpha_3\) (True value = 10), d — Confidence Intervals for \(\alpha_4\) (True value = 4), e — Confidence Intervals for \(\alpha_5\) (True value = 1), f — Confidence Intervals for \(\theta_1\) (True value = 0.8), g — Confidence Intervals for \(\theta_2\) (True value = 1), h — Confidence Intervals for \(\theta_3\) (True value = 0.1)

7.2 Model II: Estimated virtual waiting time based joining

We now demonstrate the procedure working with incomplete information, as presented in Section 6, in which the delay proxy at time \(t\) can be any estimate of the waiting time, depending only on the information available in the regeneration cycle that time \(t\) belongs to. In this particular experiment we choose \(\Delta(t) = (L(t)-s+1)^{+}\,{\mathbb{E}}B/s\), where \(L(t)\) denotes the number of customers in the system at time \(t\). Our model of choice is an M\(_t\)/G/\(s\)+H system with arrival rate, service distribution and patience distribution given by \[\begin{align} \lambda_{{\boldsymbol{\alpha} }}(t) &= 50 + 20\sin(4-0.1t) + 10\sin(1-0.5t), \\ G &\sim \text{Gamma}(0.1,4), \\ H_{{\boldsymbol{\theta} }}(x) &= 1-\frac{1}{(1 + x)^2}. \end{align}\] The parameters to be estimated are therefore \({\boldsymbol{\alpha} }_0 = (50, 20, 10, 4, 1)\) and \({\boldsymbol{\theta} }_0 = (1,2)\). We use simulated data from \(0.5\times 10^6\) total arrivals (balking and non-balking). Figure 7 shows the box plots for the empirical confidence intervals of the estimates for \({\boldsymbol{\alpha} }\) and \({\boldsymbol{\theta} }\). They follow a trend similar to the one observed in the previous examples.
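For reference, the delay proxy and the resulting conditional joining probability in this experiment are simple closed forms. The sketch below illustrates them, reading \(\text{Gamma}(0.1,4)\) as shape \(0.1\) and scale \(4\), so \(\mathbb{E}B = 0.4\) (an assumption about the parameterization):

```python
def delay_proxy(L, s, mean_B):
    """Queue-length-based delay estimate Delta(t) = (L - s + 1)^+ * E[B] / s."""
    return max(L - s + 1, 0) * mean_B / s

def p_join(L, s, mean_B):
    """P(customer joins | L in system) = H_tilde(Delta) = 1/(1 + Delta)^2,
    for the Pareto patience H(x) = 1 - 1/(1 + x)^2 of this subsection."""
    return 1.0 / (1.0 + delay_proxy(L, s, mean_B)) ** 2

print(delay_proxy(3, 4, 0.4), p_join(3, 4, 0.4))    # a server is free: no delay
print(delay_proxy(10, 4, 0.4), p_join(10, 4, 0.4))  # 7 pending departures
```

With a free server the proxy is 0 and every customer joins; with \(L=10\) and \(s=4\) the proxy equals \(7 \cdot 0.4/4 = 0.7\), giving joining probability \(1/1.7^2 \approx 0.35\).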


Figure 7: Box plots of Model II estimates. a — Confidence Intervals for \(\alpha_1\) (True value = 50), b — Confidence Intervals for \(\alpha_2\) (True value = 20), c — Confidence Intervals for \(\alpha_3\) (True value = 10), d — Confidence Intervals for \(\alpha_4\) (True value = 4), e — Confidence Intervals for \(\alpha_5\) (True value = 1), f — Confidence Intervals for \(\theta_1\) (True value = 1), g — Confidence Intervals for \(\theta_2\) (True value = 2)

7.3 Model III: System size based joining

In this model, an arriving customer only receives information about the total number of customers in the system at time \(t\), i.e., \(\Delta(t) = L(t)\). Alternatively, an arriving customer receives information about the number of service completions that they must wait for before their service begins, that is, \(\Delta(t) = \big(L(t)-s + 1\big)^{+}\). In particular, our model of interest is an M\(_t\)/G/\(s\)+H system with arrival rate, service distribution and patience distribution given by \[\begin{align} \lambda_{{\boldsymbol{\alpha} }}(t) &= 50 + 20\sin(4 - 0.1t) + 10\sin(1 - 0.5t), \\ G &\sim \text{Gamma}(0.05,4), \\ H_{{\boldsymbol{\theta} }} &\sim \text{Geo}(0.1). \end{align}\] The parameters to be estimated are therefore \({\boldsymbol{\alpha} }_0 = (50, 20, 10, 4, 1)\) and \({\boldsymbol{\theta} }_0 = 0.1\). In this example, we work with 40,000 total arrivals. Box plots of empirical confidence intervals for the estimates of \({\boldsymbol{\alpha} }\) and \({\boldsymbol{\theta} }\) are shown in Figure 8. Once again, we observe that they follow a trend similar to that in the previous examples.


Figure 8: Box plots of Model III estimates. a — Estimates for \(\alpha_1\) (True value = 50), b — Estimates for \(\alpha_2\) (True value = 20), c — Estimates for \(\alpha_3\) (True value = 10), d — Estimates for \(\alpha_4\) (True value = 4), e — Estimates for \(\alpha_5\) (True value = 1), f — Estimates for \(\theta\) (True value = 0.1)

8 CONCLUDING REMARKS

In this paper, we considered a service system with a delay announcement mechanism. Based on their patience level, customers decide to join the system or to balk. As the underlying model we chose the versatile M\(_t\)/G/\(s\)+H queue, with a periodic time-dependent arrival rate \(\lambda_{{\boldsymbol{\alpha} }_0}(t)\) and a patience distribution \(H_{{\boldsymbol{\theta} }_0}(\cdot)\). The goal of this paper was to estimate the parameter vectors \({\boldsymbol{\alpha} }_0\) and \({\boldsymbol{\theta} }_0\), even though the balking customers remain unobserved. Through an MLE procedure, we devised estimators \(\hat{{\boldsymbol{\alpha} }}_n\) and \(\hat{{\boldsymbol{\theta} }}_n\) given an observed realization of the system corresponding to \(n\) non-balking customers, which we proved to be consistent and asymptotically normal as \(n\to\infty\). Through numerical experiments, some of them working with delay proxies, we demonstrated the performance of the estimation procedure. We also illustrated instances in which estimation is easier or harder, with supporting explanations.

Although the framework considered is rather general, several extensions could be thought of. An interesting generalization could be one in which the service times and patience levels are correlated, which could, e.g., cover cases in which a customer with a large job size is naturally willing to wait longer than customers with smaller job sizes.

In the context of our paper the impatience mechanism can be seen as a decentralized control policy, in that it makes sure that the number of customers in the system does not become large. Thus, the material presented in this work is an example of an estimation problem based on partial information, in a system that operates under what could be called an ‘equalizing control’. In the introduction of our paper we came across instances in which (some of) the parameters were hard to identify, corresponding to situations in which the control succeeded in essentially equalizing the system occupancy, entailing that relatively little information is contained in our observations. The complications arising when aiming to estimate parameters under ‘equalizing control measures’ are discussed in detail in [35]: the control tries to make the congestion level as constant as possible, whereas estimation would benefit from more fluctuations.

ONLINE APPENDIX

9 ANALYSIS OF M\(_t\)/G/\(s\)+H SYSTEM

In Section 4, we constructed a number of random variables pertaining to a regeneration cycle of an M\(_t\)/G/\(s\)+H system and then made some intuitive claims about them in Theorem 3. In this appendix we prove this theorem.

Proof of Theorem 3(a). Let \(\{{\boldsymbol{X}_n }\}_{n \geq 0}\) denote the vector of residual service times at times \(0,1,2,\cdots\), i.e., \({\boldsymbol{X}_n } = \big(X_{n,1}, X_{n,2}, \cdots, X_{n,s} \big) \in \mathbb{R}_{+}^s\) where \(X_{n,i}\) denotes the residual service time corresponding to server \(i\) at time \(n\). Then, \(\{{\boldsymbol{X}_n }\}_{n \geq 0}\) is a discrete-time Markov chain. Also define the \(\mathbb{R}^s\)-valued sequences \(\{{\boldsymbol{L}_{n} }\}_{n \geq 1}\) and \(\{{\boldsymbol{T}_{n} }\}_{n \geq 1}\) as follows: \({\boldsymbol{L}_n } = \big(L_{n,1}, L_{n,2}, \cdots, L_{n,s} \big)\) where \(L_{n,i}\) denotes the amount of workload server \(i\) receives during time \((n-1, n]\), and \({\boldsymbol{T}_n } = \big(T_{n,1}, T_{n,2}, \cdots, T_{n,s} \big)\) where \(T_{n,i}\) denotes the amount of service completed by server \(i\) during time \((n-1, n]\). If server \(i\) is never idle during \((n-1,n]\), then \(T_{n,i} = 1\); otherwise \(T_{n,i} < 1\).

By Assumption (A4), there exist \(\delta \in (0,1)\) and \(p \geq 2\) such that \[\begin{align} \lambda_{\max}\mathop{\mathrm{\mathbb{E}}}(B)\tilde{H}_{{\boldsymbol{\theta} }}(p-1) \leq 1-\delta. \end{align}\] We define the Lyapunov function \(\phi: \mathbb{R}^s_{+} \mapsto [1,\infty)\) by \[\begin{align} \phi({\boldsymbol{x} }) = 1 + \frac{1}{\delta} \max_{i = 1,2,\cdots, s} {x_i}. \label{equation:Lyapunov95function} \end{align}\tag{23}\] Let \(\mathcal{C}\) denote the set \([0, k]^s\) for \(k \gg p\), the exact lower bound on \(k\) to be determined later. Suppose \({\boldsymbol{X}_n } \in \mathcal{C}\). Then we show that \(\mathop{\mathrm{\mathbb{E}}}\big[\phi\big({\boldsymbol{X}_{n+1} }\big) | {\boldsymbol{X}_n } \big]\) is bounded above by some fixed quantity, as follows: \[\begin{align} \mathop{\mathrm{\mathbb{E}}}\big[\phi\big({\boldsymbol{X}_{n+1} }\big) \;\big\vert {\boldsymbol{X}_n } \big] &= \mathop{\mathrm{\mathbb{E}}}\bigg[1 + \frac{1}{\delta} \max_{i=1,2,\cdots,s} \big(X_{n,i} + L_{n+1,i} - T_{n+1,i}\big) \;\Big\vert {\boldsymbol{X}_n } \bigg] \notag \\ &\leq 1 + \frac{1}{\delta} \max_{i=1,2,\cdots,s} X_{n,i} + \frac{1}{\delta}\mathop{\mathrm{\mathbb{E}}}\Bigg[\sum_{i=1}^{s} L_{n+1,i} \;\Big\vert {\boldsymbol{X}_n } \Bigg] \notag \\ &\leq 1 + \frac{k}{\delta} + \frac{\lambda_{\max}\mathop{\mathrm{\mathbb{E}}}(B)}{\delta}. \end{align}\] Suppose now that \({\boldsymbol{X}_n } \in \mathbb{R}^{s} \setminus \mathcal{C}\). Then, we consider two distinct cases as follows.

Case I: \(\min_{i=1,2,\cdots,s} X_{n,i} \geq p\)

In this case, \[\begin{align} \mathop{\mathrm{\mathbb{E}}}\big[\phi\big({\boldsymbol{X}_{n+1} }\big) - \phi\big({\boldsymbol{X}_n }) \;\big\vert {\boldsymbol{X}_n } \big] &= \;\mathop{\mathrm{\mathbb{E}}}\bigg[\cancel{1} + \frac{1}{\delta} \max_{i=1,2,\cdots,s} \big(X_{n,i}+L_{n+1,i}-1\big) - \cancel{1} - \frac{1}{\delta} \max_{i=1,2,\cdots,s} X_{n,i} \;\Big\vert {\boldsymbol{X}_n } \bigg] \notag \\ &\leq \;\frac{1}{\delta}\mathop{\mathrm{\mathbb{E}}}\bigg[\cancel{\max_{i=1,2,\cdots,s} X_{n,i}} + \max_{i=1,2,\cdots,s} L_{n+1,i} -1 -\cancel{\max_{i=1,2,\cdots,s} X_{n,i}} \;\Big\vert {\boldsymbol{X}_n } \bigg] \notag \\ &\leq \;\frac{1}{\delta} \bigg(-1 + \mathop{\mathrm{\mathbb{E}}}\bigg[\sum_{i=1}^{s} L_{n+1,i} \;\Big\vert {\boldsymbol{X}_n } \bigg]\bigg) \notag \\ &\leq \;\frac{1}{\delta} \Big(-1 + \lambda_{\max} \mathop{\mathrm{\mathbb{E}}}(B) \tilde{H}_{\theta}(p-1)\Big) \notag \\ &\leq \;\frac{1}{\delta} \big(-1 + 1-\delta \big) = -1. \end{align}\]

Case II: \(\min_{i=1,2,\cdots, s} X_{n,i} < p\)

Denote the total incoming workload during the interval \((n, n+1]\) by \(W_{(n,n+1]}\). Then, \[\begin{align} \mathop{\mathrm{\mathbb{E}}}\big[\phi&({\boldsymbol{X}_{n+1} }) - \phi({\boldsymbol{X}_n }) | {\boldsymbol{X}_n } \big] \notag \\ = \;& -\frac{1}{\delta} \mathop{\mathrm{\mathbb{P}}}\Big(W_{(n,n+1]} \leq k-p-1 \Big) + \mathop{\mathrm{\mathbb{E}}}\Big[\big(\phi({\boldsymbol{X}_{n+1} }) - \phi({\boldsymbol{X}_n })\big)\mathop{\mathrm{\mathbb{I}}}\Big(W_{(n,n+1]} > k-p-1 \Big) \;\big\vert {\boldsymbol{X}_n }\Big]. \label{equation:finite95mean95length:V40X95123n43112541-V40X95n41:total} \end{align}\tag{24}\] Explanation for the first term of 24: since \({\boldsymbol{X}_n } \in \mathbb{R}^s \setminus \mathcal{C}\) and \(\min_{i=1,2,\cdots,s} X_{n,i} < p\), at time \(n\) the difference between the workloads of the maximum- and minimum-workload servers exceeds \(k-p\). Therefore, if \(W_{(n,n+1]} \leq k-p-1\), then all incoming work is assigned to servers other than the maximum-workload server, so that the maximum workload decreases by \(1\) and \(\phi({\boldsymbol{X}_{n+1} }) - \phi({\boldsymbol{X}_n }) = -1/\delta\). The second term of 24 is majorized by \[\begin{align} \mathop{\mathrm{\mathbb{E}}}\Big[\big(\phi&({\boldsymbol{X}_{n+1} }) - \phi({\boldsymbol{X}_n })\big)\mathop{\mathrm{\mathbb{I}}}\Big(W_{(n,n+1]} > k-p-1 \Big) \;\big\vert {\boldsymbol{X}_n }\Big] \notag \\ \leq \;&\sum_{j=1}^{\infty} \frac{j}{\delta} \;\mathop{\mathrm{\mathbb{P}}}\Big(W_{(n,n+1]} \in \big(k-p+j-2, \;k-p+j-1\big) \;\big\vert {\boldsymbol{X}_n }\Big) \notag \\ \leq \;&\frac{1}{\delta} \int_{k-p-1}^{\infty} (x-k+p+2) \;f_{W_{(n,n+1]} \vert {\boldsymbol{X}_n }}(x) \;\mathrm{d}x \notag \\ \leq \;&\frac{1}{\delta} \int_{k-p-1}^{\infty} x \;f_{W_{(n,n+1]} \vert {\boldsymbol{X}_n }}(x) \;\mathrm{d}x \longrightarrow 0 \notag \end{align}\] as \(k \rightarrow \infty\), because \(\mathop{\mathrm{\mathbb{E}}}\big[W_{(n,n+1]} \;\vert {\boldsymbol{X}_n }\big] \leq \lambda_{\max} \mathop{\mathrm{\mathbb{E}}}[B] < \infty\). 
Therefore, we obtain \[\begin{align} \lim_{k \rightarrow \infty} \mathop{\mathrm{\mathbb{E}}}\Big[\big(\phi({\boldsymbol{X}_{n+1} }) - \phi({\boldsymbol{X}_n })\big)\mathop{\mathrm{\mathbb{I}}}\Big(W_{(n,n+1]} > k-p-1 \Big) \Big] \leq 0. \end{align}\] Since \(\mathop{\mathrm{\mathbb{P}}}\big(W_{(n,n+1]} \leq k-p-1 \big) \rightarrow 1\) as \(k \rightarrow \infty\), we can conclude that \[\begin{align} -\frac{1}{\delta} \mathop{\mathrm{\mathbb{P}}}\Big(W_{(n,n+1]} \leq k-p-1 \Big) + \mathop{\mathrm{\mathbb{E}}}\Big[\big(\phi({\boldsymbol{X}_{n+1} }) - \phi({\boldsymbol{X}_n })\big)\mathop{\mathrm{\mathbb{I}}}\Big(W_{(n,n+1]} > k-p-1 \Big) \Big] \rightarrow -\frac{1}{\delta} \end{align}\] as \(k \rightarrow \infty\). Since \(-1/\delta < -1\), there must exist \(\widetilde{k} \in \mathbb{R}\) such that the above expression is less than \(-1\). Choosing \(\mathcal{C} = \big[0, \widetilde{k}\big]^s\), we have verified that for all \({\boldsymbol{X}_n } \in \mathbb{R}^s \setminus \mathcal{C}\), \[\begin{align} \mathop{\mathrm{\mathbb{E}}}\big[\phi\big({\boldsymbol{X}_{n+1} }\big) - \phi\big({\boldsymbol{X}_n }\big) \;\big\vert {\boldsymbol{X}_n }] \leq -1. \end{align}\] Therefore, by [31], \(\mathcal{C}\) is a recurrent set. Now, let \(\mathcal{U} = \{(0,0,\cdots,0) \}\). Define the induced chain \(\{{\boldsymbol{Y}_n }\}_{n \geq 1}\) as follows: \[\begin{align} {\boldsymbol{Y}_n } = {\boldsymbol{X}_{\tau_{\mathcal{C}}^{(n)}} }. \end{align}\] Choose \(N = \widetilde{k}+1\), and \({\boldsymbol{Y}_0 } = {\boldsymbol{X}_0 } = (0,0,\cdots,0)\). \({\boldsymbol{Y}_1 } \in \mathcal{C}\) implies that \(\max_{i=1,2,\cdots,s} Y_{1,i} \leq \widetilde{k}\). Under the scenario that no customers arrive for the next \(\widetilde{k}\) units of time, every server’s workload keeps decreasing at unit rate, and therefore there exists \(i \in \{2, 3, \cdots, \widetilde{k}+1\}\) such that \({\boldsymbol{Y}_i } = (0,0,\cdots,0)\). 
The probability of this event is \[\begin{align} \mathop{\mathrm{\mathbb{P}}}\Big(\text{No arrivals during} \;\big(\tau_{\mathcal{C}}^{(1)}, \tau_{\mathcal{C}}^{(1)}+ \widetilde{k} \big) \Big) \geq e^{-\lambda_{\max}\widetilde{k}}. \end{align}\] By [31], we get that \(\mathcal{U}\) is recurrent. Furthermore, by [31], \(\sup_{x \in \mathcal{C}} \mathop{\mathrm{\mathbb{E}}}_{x}\big[\tau_{\mathcal{U}}\big] < \infty\). Therefore, the expected return time to \((0,0,\cdots,0)\) starting at \((0,0,\cdots, 0)\) itself, is finite, i.e., \(\mathop{\mathrm{\mathbb{E}}}\big[R_1\big] < \infty\). ◻
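The finiteness of the expected cycle length can also be observed empirically. The sketch below simulates a toy instance of the dynamics behind Theorem 3(a) — not the paper's general model: two servers under join-the-shortest-workload routing, a periodic rate \(\lambda(t) = 1 + 0.5\sin(2\pi t)\) simulated by thinning, and Exp(1) job sizes and patience levels are all illustrative assumptions — and averages the return times of the workload vector to the all-zero state (checked at candidate-arrival epochs, which slightly overestimates each cycle).

```python
import math
import random

random.seed(1)

# Toy regeneration-cycle simulation (illustrative assumptions, see lead-in):
# a customer joins only if the minimal residual workload does not exceed
# their patience; a cycle ends when the workload vector returns to zero.
def cycle_length(s=2, lam_max=1.5):
    t, work, joined = 0.0, [0.0] * s, False
    while True:
        dt = random.expovariate(lam_max)         # candidate inter-arrival time
        work = [max(0.0, w - dt) for w in work]  # each server drains at unit rate
        t += dt
        if joined and all(w == 0.0 for w in work):
            return t                             # system empty again: cycle over
        lam_t = 1.0 + 0.5 * math.sin(2 * math.pi * t)
        if random.random() < lam_t / lam_max:    # thinning: accept as real arrival
            i = min(range(s), key=lambda k: work[k])   # shortest-workload server
            if work[i] <= random.expovariate(1.0):     # patience check: join?
                work[i] += random.expovariate(1.0)     # add Exp(1) job size
                joined = True

n = 2000
mean_R = sum(cycle_length() for _ in range(n)) / n
print(mean_R)  # a finite average, in line with E[R_1] < infinity
```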

Proof of Theorem 3(b). The maximum amount of service that can be completed in regeneration cycle \(1\) is bounded from above by \(sR_1\), which corresponds to the scenario that each server works for the entire duration of the regeneration cycle. On the other hand, the total amount of service required in the cycle is given by \(B_{1,1} + B_{1,2} + \cdots + B_{1,C_1}\). This implies that \(sR_1 \geq B_{1,1} + B_{1,2} + \cdots + B_{1,C_1}\). Taking expectation on both sides and making use of the previous result, we conclude that \[\begin{align} \mathop{\mathrm{\mathbb{E}}}\Bigg[\sum_{i=1}^{C_1} B_{1,i}\Bigg] \leq s\mathop{\mathrm{\mathbb{E}}}\big[R_1\big] < \infty. \end{align}\] For \(k \geq 1\), define \(S_k := B_{1,1} + B_{1,2} + \cdots + B_{1,k}\). Then, \[\begin{align} \underbrace{\sum_{i=1}^{C_1} B_{1,i}}_{Z} = \sum_{k=1}^{\infty} S_k \mathop{\mathrm{\mathbb{I}}}\big\{C_1 = k\big\} = \lim_{K \rightarrow \infty} \;\underbrace{\sum_{k=1}^{K} S_k\mathop{\mathrm{\mathbb{I}}}\big\{C_1 = k\big\}}_{Z_K}. \end{align}\] Clearly, \(0 \leq Z_1 \leq Z_2 \leq \cdots \leq Z\) and \(\lim_{K \rightarrow \infty} Z_K = Z\). 
Therefore, by the monotone convergence theorem, \[\begin{align} \mathop{\mathrm{\mathbb{E}}}&\Bigg[\sum_{i=1}^{C_1} B_{1,i}\Bigg] = \lim_{K \rightarrow \infty} \mathop{\mathrm{\mathbb{E}}}\Bigg[\sum_{k=1}^{K} S_k \mathop{\mathrm{\mathbb{I}}}\big\{C_1 = k\big\}\Bigg] = \lim_{K \rightarrow \infty} \sum_{k=1}^{K} \mathop{\mathrm{\mathbb{E}}}\Big[S_k \mathop{\mathrm{\mathbb{I}}}\big\{C_1 = k\big\} \Big] = \sum_{k=1}^{\infty} \mathop{\mathrm{\mathbb{E}}}\Big[S_k \mathop{\mathrm{\mathbb{I}}}\big\{C_1 = k\big\}\Big] \\ &= \sum_{k=1}^{\infty} \mathop{\mathrm{\mathbb{E}}}\Bigg[\sum_{i=1}^{k} B_{1,i} \mathop{\mathrm{\mathbb{I}}}\big\{C_1 = k\big\} \Bigg] = \sum_{k=1}^{\infty} \sum_{i=1}^{k} \mathop{\mathrm{\mathbb{E}}}\Big[B_{1,i}\mathop{\mathrm{\mathbb{I}}}\big\{C_1 = k\big\}\Big] \overset{\text{(i)}}{=} \sum_{i=1}^{\infty} \sum_{k=i}^{\infty} \mathop{\mathrm{\mathbb{E}}}\Big[B_{1,i}\mathop{\mathrm{\mathbb{I}}}\big\{C_1 = k\big\} \Big] \\ &= \sum_{i=1}^{\infty} \lim_{K \rightarrow \infty} \sum_{k=i}^{i+K} \mathop{\mathrm{\mathbb{E}}}\Big[B_{1,i}\mathop{\mathrm{\mathbb{I}}}\big\{C_1 = k\big\} \Big] = \sum_{i=1}^{\infty} \lim_{K \rightarrow \infty} \mathop{\mathrm{\mathbb{E}}}\Bigg[\sum_{k=i}^{i+K} B_{1,i}\mathop{\mathrm{\mathbb{I}}}\big\{C_1 = k\big\} \Bigg] \\&= \sum_{i=1}^{\infty} \lim_{K \rightarrow \infty} \mathop{\mathrm{\mathbb{E}}}\Big[B_{1,i}\mathop{\mathrm{\mathbb{I}}}\big\{i \leq C_1 \leq i+K\big\}\Big] \\ &\overset{\text{(ii)}}{=} \sum_{i=1}^{\infty} \mathop{\mathrm{\mathbb{E}}}\bigg[\lim_{K \rightarrow \infty} B_{1,i}\mathop{\mathrm{\mathbb{I}}}\big\{i \leq C_1 \leq i+K\big\} \bigg] = \sum_{i=1}^{\infty} \mathop{\mathrm{\mathbb{E}}}\Big[B_{1,i}\mathop{\mathrm{\mathbb{I}}}\big\{C_1 \geq i\big\} \Big]\\& \overset{\text{(iii)}}{=} \sum_{i=1}^{\infty} \mathop{\mathrm{\mathbb{E}}}\big[B_{1,i}\big] \mathop{\mathrm{\mathbb{P}}}\big(C_1 \geq i\big) = \mathop{\mathrm{\mathbb{E}}}\big[B_{1,1}]\mathop{\mathrm{\mathbb{E}}}\big[C_1\big]. 
\end{align}\] The equalities in the above display are justified as follows: (i) is allowed by Tonelli’s theorem, since all summands are nonnegative; (ii) again follows from the monotone convergence theorem; (iii) holds because the event \(\big\{C_1 < i\big\}\) is independent of the random variable \(B_{1,i}\). Note that \(\big\{C_1 < i\big\}\) depends on the arrival times, job sizes and patience levels pertaining to all balking and non-balking customers arriving until the \((i-1)\)-st non-balking customer enters the system. It also depends on the arrival times and patience levels of subsequent customers, but not on their job sizes. In conclusion, we get \[\begin{align} \mathop{\mathrm{\mathbb{E}}}\Bigg[\sum_{i=1}^{C_1} B_{1,i}\Bigg] = \mathop{\mathrm{\mathbb{E}}}\big[B_{1,1}\big] \mathop{\mathrm{\mathbb{E}}}\big[C_1\big] < \infty, \end{align}\] so that \(\mathop{\mathrm{\mathbb{E}}}\big[C_1\big] < \infty\). ◻
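The Wald-type identity established above can be checked by Monte Carlo in a stripped-down setting. The toy instance below is an illustrative assumption, not the paper's queue: \(C\) is geometric on \(\{1,2,\dots\}\) with success probability \(p=0.3\), sampled independently of the Exp-distributed job sizes, so that \(\{C < i\}\) is trivially independent of \(B_i\).

```python
import random

random.seed(42)

# Monte Carlo check of E[sum_{i=1}^{C} B_i] = E[B] E[C] (Theorem 3(b) proof),
# valid whenever {C < i} is independent of B_i.  Toy instance: C ~ Geometric(p)
# on {1,2,...}, B_i ~ Exponential with mean mean_b, drawn independently of C.
def sample_cycle(p=0.3, mean_b=2.0):
    c = 1
    while random.random() > p:          # geometric number of customers
        c += 1
    total = sum(random.expovariate(1.0 / mean_b) for _ in range(c))
    return c, total

n = 200_000
samples = [sample_cycle() for _ in range(n)]
lhs = sum(total for _, total in samples) / n      # empirical E[sum_i B_i]
rhs = (sum(c for c, _ in samples) / n) * 2.0      # empirical E[C] times E[B] = 2
print(round(lhs, 2), round(rhs, 2))  # both close to 2 / 0.3 = 6.67
```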

Proof of Theorem 3(c). Let \(\{t_n: n \geq 0\}\), \(\{a_n: n \geq 0\}\) be defined as follows: \(t_0 = 0\), and for \(k \geq 0\), \[\begin{align} a_k &= \text{Number of effective arrivals during } (0, t_k], \\ t_{k+1} &= \min\Big\{\big\lfloor t_k \big\rfloor + 1, \tilde{A}_{a_k+1} \Big\}. \end{align}\] Consider the process \(\big\{Y_n: n \geq 0\big\}\) given by \(Y_n = \big(t_n, {\boldsymbol{X}(t_n) }\big)\) where \({\boldsymbol{X}(t_n) }\) is the vector of residual service times at time \(t_n\), i.e., \({\boldsymbol{X} }(t_n) \in \mathbb{R}^s\) stores the amount of residual workload corresponding to each of the \(s\) servers. The evolution of this process can be described as a join-the-shortest-workload system (which for the purposes here is identical to our system). Intuitively, the process \(\big\{Y_n: n \geq 0\big\}\) captures the pending workload for each server after every effective arrival and at integral time points. Define \[\begin{align} \tau = \inf\Big\{k \in \mathbb{N} \:\big\vert \;t_k \in \mathbb{N} \; \text{and} \;{\boldsymbol{X}(t_k) } = {\boldsymbol{0} }_{s \times 1} \Big\}. \end{align}\] Then, the process \(\{Y_n: n \geq 0\}\) regenerates at time \(\tau\). Define a mapping \[\begin{align} f({\boldsymbol{Y}_j }) = \min_{i=1,2,\cdots,s} X_i(t_j). \end{align}\] By the renewal-reward theorem, we get \[\begin{align} \label{equation:renewal95reward} \lim_{n \rightarrow \infty} \;\frac{1}{n} \sum_{j=1}^{n} f({\boldsymbol{Y}_j }) = \frac{1}{\mathop{\mathrm{\mathbb{E}}}\tau} \mathop{\mathrm{\mathbb{E}}}\Bigg[\sum_{j=0}^{\tau-1} f({\boldsymbol{Y}_j }) \Bigg]. \end{align}\tag{25}\] By the ergodic theorem, the LHS of 25 is finite almost surely. Furthermore, \(\mathop{\mathrm{\mathbb{E}}}[\tau] < \infty\). Therefore, \[\begin{align} \mathop{\mathrm{\mathbb{E}}}\Bigg[\sum_{j=0}^{\tau-1} f({\boldsymbol{Y}_j })\Bigg] < \infty. 
\end{align}\] The above quantity represents the expected sum of the virtual waiting times evaluated just after effective arrival instants and at integral time points. Since the minimal workload just after a customer joins is at least the minimal workload just before their arrival, i.e., at least that customer’s waiting time, this implies that \[\begin{align} \mathop{\mathrm{\mathbb{E}}}\Bigg[\sum_{i=1}^{C_1} W_{1,i} \Bigg] \le \mathop{\mathrm{\mathbb{E}}}\Bigg[\sum_{j=0}^{\tau-1} f({\boldsymbol{Y}_j })\Bigg] < \infty. \end{align}\] Hence we conclude that the expected sum of waiting times of customers in a regeneration cycle is finite. ◻

10 PROOFS OF AUXILIARY RESULTS

Proof of Lemma 1 in Section 5. By the triangle inequality, we can write \[\begin{align} &\Big\vert q\big({\boldsymbol{Z}_j }, {\boldsymbol{\mu}' }\big) - q\big({\boldsymbol{Z}_j }, {\boldsymbol{\mu} }\big) \Big\vert \leq \sum_{i=1}^{C_j} \underbrace{\Big\vert \log\lambda_{{\boldsymbol{\alpha} }'}\big(\tilde{A}_{j,i}\big) - \log\lambda_{{\boldsymbol{\alpha} }}\big(\tilde{A}_{j,i}\big) \Big\vert}_{\text{(I)}} + \int_{0}^{A_{j,1}} \underbrace{\Big\vert \lambda_{{\boldsymbol{\alpha} }'}(u) - \lambda_{{\boldsymbol{\alpha} }}(u) \Big\vert}_{\text{(II)}} \, \mathrm{d}u \;+ \notag \\ &\sum_{i=2}^{C_j} \underbrace{\Big\vert \log\tilde{H}_{{\boldsymbol{\theta}' }}\big(W_{j,i-1}+X_{j,i-1}-A_{j,i} \big) - \log\tilde{H}_{{\boldsymbol{\theta} }}\big(W_{j,i-1}+X_{j,i-1}-A_{j,i} \big) \Big\vert}_{\text{(III)}} \;+ \notag \\ &\sum_{i=2}^{C_j} \int_{0}^{A_{j,i}} \underbrace{\Big\vert \lambda_{{\boldsymbol{\alpha} }'}\big(u+\tilde{A}_{j,i-1}\big)\tilde{H}_{{\boldsymbol{\theta}' }}\big(W_{j,i-1}+X_{j,i-1}-u\big) - \lambda_{{\boldsymbol{\alpha} }}\big(u+\tilde{A}_{j,i-1}\big)\tilde{H}_{{\boldsymbol{\theta} }}\big(W_{j,i-1}+X_{j,i-1}-u\big)\Big\vert}_{\text{(IV)}} \, \mathrm{d}u \, + \notag \\ &\int_{0}^{R_j - \tilde{A}_{j, C_j}} \underbrace{\Big\vert \lambda_{{\boldsymbol{\alpha} }'}\big(u+\tilde{A}_{j,C_j}\big)\tilde{H}_{{\boldsymbol{\theta}' }}\big(W_{j,C_j}+X_{j,C_j}-u\big) - \lambda_{{\boldsymbol{\alpha} }}\big(u+\tilde{A}_{j,C_j}\big)\tilde{H}_{{\boldsymbol{\theta} }}\big(W_{j,C_j}+X_{j,C_j}-u\big) \Big\vert}_{\text{(V)}} \, \mathrm{d}u. \end{align}\]

Let us analyze (I), (II), (III), (IV), (V) separately. Upper bounds for (I) and (II) follow from Assumption (A3) and its corresponding discussion, the upper bound for (III) follows due to (A6), and upper bounds for (IV) and (V) follow due to (A2), (A3) and (A5). Concretely, \[\begin{align} \boldsymbol{(I):} \;&\Big\vert\log\lambda_{{\boldsymbol{\alpha} }'}\big(\tilde{A}_{j,i}\big) - \log\lambda_{{\boldsymbol{\alpha} }}\big(\tilde{A}_{j,i}\big) \Big\vert \leq \frac{\kappa}{\lambda_{\min}} \;d \big({\boldsymbol{\alpha} }', {\boldsymbol{\alpha} }\big). \\ \boldsymbol{(II):} \;&\Big\vert \lambda_{{\boldsymbol{\alpha} }'}(u) - \lambda_{{\boldsymbol{\alpha} }}(u) \Big\vert \leq \kappa \;d\big({\boldsymbol{\alpha} }', {\boldsymbol{\alpha} } \big). \\ \boldsymbol{(III):} \;&\Big\vert \log\tilde{H}_{{\boldsymbol{\theta}' }}\big(W_{j,i-1}+X_{j,i-1}-A_{j,i} \big) - \log\tilde{H}_{{\boldsymbol{\theta} }}\big(W_{j,i-1}+X_{j,i-1}-A_{j,i} \big) \Big\vert \\ \leq \;& G_2\big(W_{j,i-1}+X_{j,i-1}-A_{j,i}\big) \;d\big({\boldsymbol{\theta} }, {\boldsymbol{\theta}' }\big) \leq G_2\big(W_{j,i}\big) \;d\big({\boldsymbol{\theta} }, {\boldsymbol{\theta}' }\big). 
\\ \boldsymbol{(IV):} \;& \Big|\lambda_{{\boldsymbol{\alpha} }'}\big(u+\tilde{A}_{j,i-1}) \tilde{H}_{{\boldsymbol{\theta}' }}\big(W_{j,i-1}+X_{j,i-1}-u \big) - \lambda_{{\boldsymbol{\alpha} }}\big(u+\tilde{A}_{j,i-1}) \tilde{H}_{{\boldsymbol{\theta} }}\big(W_{j,i-1}+X_{j,i-1}-u \big) \Big| \\ \leq \;&\Big|\lambda_{{\boldsymbol{\alpha} }'}\big(u+\tilde{A}_{j,i-1}) \tilde{H}_{{\boldsymbol{\theta}' }}\big(W_{j,i-1}+X_{j,i-1}-u \big) - \lambda_{{\boldsymbol{\alpha} }'}\big(u+\tilde{A}_{j,i-1}) \tilde{H}_{{\boldsymbol{\theta} }}\big(W_{j,i-1}+X_{j,i-1}-u \big) \Big| \\ & + \Big|\lambda_{{\boldsymbol{\alpha} }'}\big(u+\tilde{A}_{j,i-1}) \tilde{H}_{{\boldsymbol{\theta} }}\big(W_{j,i-1}+X_{j,i-1}-u \big) - \lambda_{{\boldsymbol{\alpha} }}\big(u+\tilde{A}_{j,i-1}) \tilde{H}_{{\boldsymbol{\theta} }}\big(W_{j,i-1}+X_{j,i-1}-u \big) \Big| \\ \leq \;& \lambda_{\max} \Big|\tilde{H}_{{\boldsymbol{\theta}' }}\big(W_{j,i-1}+X_{j,i-1}-u \big) - \tilde{H}_{{\boldsymbol{\theta} }}\big(W_{j,i-1}+X_{j,i-1}-u \big) \Big| + \Big\vert\lambda_{{\boldsymbol{\alpha} }'}\big(u+\tilde{A}_{j,i-1}) - \lambda_{{\boldsymbol{\alpha} }}\big(u+\tilde{A}_{j,i-1}) \Big\vert \\ \leq \;& \lambda_{\max} \;G_1 \;d\big({\boldsymbol{\theta} }, {\boldsymbol{\theta}' }\big) + \kappa \;d\big({\boldsymbol{\alpha} }', {\boldsymbol{\alpha} }\big). \\ \boldsymbol{(V):} \;& \text{The same computations as for } \boldsymbol{(IV)} \text{ lead to} \\ \leq \;& \lambda_{\max} \;G_1 \;d\big({\boldsymbol{\theta} }, {\boldsymbol{\theta}' }\big) + \kappa \;d\big({\boldsymbol{\alpha} }, {\boldsymbol{\alpha} }'\big). 
\end{align}\] Therefore, \[\begin{align} \Big\vert q\big({\boldsymbol{Z}_j }, {\boldsymbol{\mu}' }\big) - q\big({\boldsymbol{Z}_j }, {\boldsymbol{\mu} }\big) \Big\vert \leq \;&C_j \frac{\kappa}{\lambda_{\min}} \;d\big({\boldsymbol{\alpha} }, {\boldsymbol{\alpha} }'\big) + \kappa A_{j,1} \;d\big({\boldsymbol{\alpha} }, {\boldsymbol{\alpha} }'\big) + \sum_{i=2}^{C_j} G_2\big(W_{j,i}\big) \;d\big({\boldsymbol{\theta} }, {\boldsymbol{\theta}' }\big) \\ & + \Big(\lambda_{\max} \;G_1 \;d\big({\boldsymbol{\theta} }, {\boldsymbol{\theta}' }\big) + \kappa \;d\big({\boldsymbol{\alpha} }, {\boldsymbol{\alpha} }'\big) \Big) \Big(A_{j,2}+A_{j,3}+ \cdots + A_{j,C_j} + R_j - \tilde{A}_{j,C_j} \Big) \\ = \;& \Big( C_j\frac{\kappa}{\lambda_{\min}} + R_j \kappa \Big) \;d\big({\boldsymbol{\alpha} }, {\boldsymbol{\alpha} }'\big) + \Bigg(\sum_{i=2}^{C_j} G_2\big(W_{j,i}\big) + \lambda_{\max}\big(R_j - A_{j,1}\big) G_1 \Bigg) \;d\big({\boldsymbol{\theta} }, {\boldsymbol{\theta}' }\big). \end{align}\] Separating, we get \[\begin{align} \Big\vert q\big({\boldsymbol{Z}_j }, {\boldsymbol{\mu}' }\big) - q\big({\boldsymbol{Z}_j }, {\boldsymbol{\mu} }\big) \Big\vert \leq \underbrace{d\big({\boldsymbol{\mu} }, {\boldsymbol{\mu}' }\big)}_{h(x)=x} \times \underbrace{\Bigg(C_j\frac{\kappa}{\lambda_{\min}} + R_j \kappa + \sum_{i=2}^{C_j} G_2\big(W_{j,i}\big) + \lambda_{\max}\big(R_j - A_{j,1}\big) G_1 \Bigg)}_{B\big({\boldsymbol{Z}_j }\big)}. \end{align}\] By Assumption (A6) and Theorem 3, it is clear that \(\mathop{\mathrm{\mathbb{E}}}\big[B({\boldsymbol{Z}_j })\big] < \infty\). ◻
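The Lipschitz bounds (I) and (II) can be sanity-checked numerically for a concrete parametrization. The sketch below assumes a hypothetical two-parameter rate \(\lambda_{\boldsymbol{\alpha}}(t) = \alpha_0 + \alpha_1\sin(2\pi t)\) (the paper keeps \(\lambda_{\boldsymbol{\alpha}}\) generic, so this choice is an assumption for illustration), with \(\alpha_0 \in [1,2]\) and \(|\alpha_1| \leq 0.5\), so that one may take \(\lambda_{\min} = 0.5\) and \(\kappa = 1\) under the \(\ell_1\) distance \(d\).

```python
import math

# Numeric check of bounds (I) and (II) from Lemma 1 for a hypothetical
# sinusoidal rate lambda_alpha(t) = a0 + a1*sin(2*pi*t) (an assumption;
# see lead-in).  With a0 in [1,2] and |a1| <= 0.5: lambda_min = 0.5 and
# kappa = 1 under the l1 distance between parameter vectors.
def lam(a, t):
    return a[0] + a[1] * math.sin(2 * math.pi * t)

lam_min, kappa = 0.5, 1.0
a, ap = (1.3, 0.2), (1.7, -0.4)                  # two admissible parameter vectors
d = abs(a[0] - ap[0]) + abs(a[1] - ap[1])        # l1 distance d(alpha, alpha')

# worst-case gaps over one period, on a grid of 200 points
worst_II = max(abs(lam(ap, t / 200) - lam(a, t / 200)) for t in range(200))
worst_I = max(abs(math.log(lam(ap, t / 200)) - math.log(lam(a, t / 200)))
              for t in range(200))

print(worst_II <= kappa * d, worst_I <= (kappa / lam_min) * d)  # True True
```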

Proof of Lemma 2 in Section 5. For a fixed data vector \({{\boldsymbol{Z} }_1}\), we have \[\begin{align} q\big({{\boldsymbol{Z} }_1}, .\big) : \mathbb{R}^{k+p} \longrightarrow \mathbb{R}, \; \nabla q\big({{\boldsymbol{Z} }_1}, .\big) : \mathbb{R}^{k+p} \longrightarrow \mathbb{R}^{k+p}. \end{align}\] Because of Assumptions (A8), (A9), we can write \[\begin{align} \nabla q\big({{\boldsymbol{Z} }_1}, {{\boldsymbol{\mu} }_0}\big) = \begin{bmatrix} \frac{\partial q\big({{\boldsymbol{Z} }_1}, {{\boldsymbol{\mu} }_0}\big)}{\partial \alpha_1}& \cdots& \frac{\partial q\big({{\boldsymbol{Z} }_1}, {{\boldsymbol{\mu} }_0}\big)}{\partial \alpha_k}& \frac{\partial q\big({{\boldsymbol{Z} }_1}, {{\boldsymbol{\mu} }_0}\big)}{\partial \theta_1}&\cdots& \frac{\partial q\big({{\boldsymbol{Z} }_1}, {{\boldsymbol{\mu} }_0}\big)}{\partial \theta_p} \end{bmatrix}^\top. \end{align}\] We need to prove that every component of \(\mathop{\mathrm{\mathbb{E}}}\Big[\nabla q\big({{\boldsymbol{Z} }_1}, {{\boldsymbol{\mu} }_0}\big) \Big]\) is 0. We prove \[\begin{align} \mathop{\mathrm{\mathbb{E}}}\bigg[\frac{\partial q\big({{\boldsymbol{Z} }_1}, {{\boldsymbol{\mu} }_0}\big)}{\partial \alpha_1}\bigg] = \mathop{\mathrm{\mathbb{E}}}\bigg[\frac{\partial q\big({{\boldsymbol{Z} }_1}, {{\boldsymbol{\mu} }_0}\big)}{\partial \theta_1}\bigg] = 0; \end{align}\] the other components then follow along the same lines. 
By Assumptions (A2), (A6), (A7) and Theorem 3, we can conclude that \(q\big({\boldsymbol{Z}_j }, {{\boldsymbol{\mu} }_0}\big)\) is integrable: \[\begin{align} \Big\vert q\big({\boldsymbol{Z}_j }, {{\boldsymbol{\mu} }_0}\big) \Big\vert &\leq C_j \max\Big\{\big\vert\log\lambda_{\min}\big\vert, \big\vert \log\lambda_{\max}\big\vert \Big\} + \sum_{i=2}^{C_j} G_3\big(W_{j,i}\big) + \lambda_{\max} R_j \end{align}\] implies \[\begin{align} \mathop{\mathrm{\mathbb{E}}}\Big\vert q\big({\boldsymbol{Z}_j }, {{\boldsymbol{\mu} }_0}\big) \Big\vert &\leq \mathop{\mathrm{\mathbb{E}}}\big[C_j\big] \max\Big\{\big\vert\log\lambda_{\min}\big\vert, \big\vert \log\lambda_{\max}\big\vert \Big\} + \mathop{\mathrm{\mathbb{E}}}\bigg[\sum_{i=2}^{C_j} G_3\big(W_{j,i}\big) \bigg] + \lambda_{\max} \mathop{\mathrm{\mathbb{E}}}[R_j] < \infty. \end{align}\] By Assumption (A8), \(q\big({\boldsymbol{Z}_j }, {\boldsymbol{\mu} }\big)\) is continuously differentiable with respect to \(\alpha_1, \alpha_2, \cdots, \alpha_k\) and \(\theta_1, \theta_2, \cdots, \theta_p\). 
Now, we take the partial derivative of \(q\big({\boldsymbol{Z}_j }, {\boldsymbol{\mu} }\big)\) with respect to \(\alpha_1\): \[\begin{align} \frac{\partial}{\partial \alpha_1} q\big({\boldsymbol{Z}_j }, {{\boldsymbol{\mu} }_0}\big) = &\sum_{i=1}^{C_j} \frac{1}{\lambda_{{{\boldsymbol{\alpha} }_0}}\big(\tilde{A}_{j,i}\big)} \frac{\partial}{\partial \alpha_1}\lambda_{{{\boldsymbol{\alpha} }_0}}\big(\tilde{A}_{j,i}\big) - \int_{0}^{A_{j,1}} \frac{\partial}{\partial \alpha_1} \lambda_{{{\boldsymbol{\alpha} }_0}}\big(u\big) \;\mathrm{d}u \\ & -\sum_{i=2}^{C_j} \int_{0}^{A_{j,i}} \frac{\partial}{\partial \alpha_1} \lambda_{{{\boldsymbol{\alpha} }_0}}\big(u+\tilde{A}_{j,i-1}\big)\tilde{H}_{{\boldsymbol{\theta} }}\big(W_{j,i-1}+X_{j,i-1}-u\big) \;\mathrm{d}u \\ & + \int_{0}^{R_j - \tilde{A}_{j,C_j}} \frac{\partial}{\partial \alpha_1} \lambda_{{{\boldsymbol{\alpha} }_0}}\big(u+\tilde{A}_{j,C_j} \big) \tilde{H}_{{\boldsymbol{\theta} }}\big(W_{j,C_j} + X_{j,C_j} - u\big) \;\mathrm{d}u. \end{align}\] By Assumption (A2), (A3), \[\begin{align} \bigg\vert \frac{\partial}{\partial \alpha_1} q\big({\boldsymbol{Z}_j }, {{\boldsymbol{\mu} }_0}\big) \bigg\vert \leq \frac{\kappa}{\lambda_{\min}}C_j + \kappa R_j. \end{align}\] Furthermore, by Theorem 3(a), (b), \[\begin{align} \mathop{\mathrm{\mathbb{E}}}\Bigg[\frac{\kappa}{\lambda_{\min}} C_j + \kappa R_j \Bigg] = \frac{\kappa}{\lambda_{\min}} \mathop{\mathrm{\mathbb{E}}}\big[C_j\big] + \kappa \mathop{\mathrm{\mathbb{E}}}\big[R_j\big]< \infty. \end{align}\] Therefore, by the dominated convergence theorem, \[\begin{align} \mathop{\mathrm{\mathbb{E}}}\Bigg[\frac{\partial q\big({{\boldsymbol{Z} }_1}, {{\boldsymbol{\mu} }_0}\big)}{\partial \alpha_1} \Bigg] = \frac{\partial}{\partial \alpha_1} \mathop{\mathrm{\mathbb{E}}}\big[q\big({{\boldsymbol{Z} }_1},{{\boldsymbol{\mu} }_0}\big)\big] = 0. \end{align}\] Now we take the partial derivative of \(q\big({\boldsymbol{Z}_j }, {\boldsymbol{\mu} }\big)\) with respect to \(\theta_1\). 
\[\begin{align} \frac{\partial}{\partial \theta_1} q\big({\boldsymbol{Z}_j }, {{\boldsymbol{\mu} }_0}\big) = &\sum_{i=2}^{C_j} \frac{\partial}{\partial \theta_1} \log\tilde{H}_{{{\boldsymbol{\theta} }_0}}\big(W_{j,i-1}+X_{j,i-1}-A_{j,i}\big) \\ &- \sum_{i=2}^{C_j} \int_{0}^{A_{j,i}} \lambda_{{{\boldsymbol{\alpha} }_0}}\big(u+\tilde{A}_{j,i-1}\big) \frac{\partial}{\partial \theta_1}\tilde{H}_{{{\boldsymbol{\theta} }_0}}\big(W_{j,i-1}+X_{j,i-1}-u\big) \;\mathrm{d}u \\ &+ \int_{0}^{R_j - \tilde{A}_{j,C_j}} \lambda_{{{\boldsymbol{\alpha} }_0}}\big(u+\tilde{A}_{j,C_j}\big) \frac{\partial}{\partial \theta_1} \tilde{H}_{{{\boldsymbol{\theta} }_0}}\big(W_{j,C_j}+X_{j,C_j}-u\big) \;\mathrm{d}u. \end{align}\] By Assumptions (A2), (A5) and (A7), \[\begin{align} \bigg\vert \frac{\partial}{\partial \theta_1} q\big({\boldsymbol{Z}_j }, {{\boldsymbol{\mu} }_0}\big) \bigg\vert \leq \sum_{i=2}^{C_j} G_2\big(W_{j,i}\big) + \lambda_{\max}\;G_1 \big(R_j - A_{j,1}\big). \end{align}\] Furthermore, by Assumption (A6), and Theorem 3(a), (c), \[\begin{align} \mathop{\mathrm{\mathbb{E}}}\Bigg[\sum_{i=2}^{C_j} G_2\big(W_{j,i}\big) + \lambda_{\max} G_1\big(R_j - A_{j,1}\big) \Bigg] = \mathop{\mathrm{\mathbb{E}}}\Bigg[\sum_{i=2}^{C_j} G_2\big(W_{j,i}\big) \Bigg] + \lambda_{\max} \;G_1 \mathop{\mathrm{\mathbb{E}}}\big[R_j - A_{j,1} \big] < \infty. \end{align}\] Therefore, again by the dominated convergence theorem, \[\begin{align} \mathop{\mathrm{\mathbb{E}}}\Bigg[\frac{\partial q\big({{\boldsymbol{Z} }_1}, {{\boldsymbol{\mu} }_0}\big)}{\partial \theta_1} \Bigg] = \frac{\partial}{\partial \theta_1} \mathop{\mathrm{\mathbb{E}}}q\big({{\boldsymbol{Z} }_1},{{\boldsymbol{\mu} }_0}\big) = 0. \end{align}\] The same argument applies to \(\alpha_2, \cdots, \alpha_k\) and \(\theta_2, \cdots, \theta_p\). Therefore, \(\mathop{\mathrm{\mathbb{E}}}\big[\nabla q\big({{\boldsymbol{Z} }_1}, {{\boldsymbol{\mu} }_0}\big) \big] = 0\). ◻
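The zero-expected-score property can be illustrated by Monte Carlo in a vastly simplified special case. The sketch below assumes a homogeneous Poisson process without balking observed on \([0,T]\) (an illustrative assumption, not the paper's model), where the log-likelihood is \(N\log\alpha - \alpha T\) and the score at the true rate \(\alpha_0\) is \(N/\alpha_0 - T\), which has mean zero since \(\mathbb{E}[N] = \alpha_0 T\).

```python
import random

random.seed(3)

# Zero-mean score check in a stripped-down case (see lead-in): homogeneous
# Poisson process with true rate alpha0 on [0, T]; the score at alpha0 is
# N / alpha0 - T, with E[N] = alpha0 * T, so its expectation is zero.
alpha0, T, n = 2.0, 10.0, 100_000

def poisson_count(rate, horizon):
    # count points of a rate-'rate' Poisson process on [0, horizon]
    t, count = 0.0, 0
    while True:
        t += random.expovariate(rate)
        if t > horizon:
            return count
        count += 1

scores = [poisson_count(alpha0, T) / alpha0 - T for _ in range(n)]
mean_score = sum(scores) / n
print(round(mean_score, 3))  # close to 0
```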

Proof of Proposition 2 in Section 6. For our new objects \(q({\boldsymbol{Z} }_j, {\boldsymbol{\mu} })\), we re-prove Lemma 1. We start with \[\begin{align} &\Big\vert q\big({\boldsymbol{Z}_j }, {\boldsymbol{\mu}' }\big) - q\big({\boldsymbol{Z}_j }, {\boldsymbol{\mu} }\big) \Big\vert \leq \sum_{i=1}^{C_j} \underbrace{\Big\vert \log\lambda_{{\boldsymbol{\alpha} }'}\big(\tilde{A}_{j,i}\big) - \log\lambda_{{\boldsymbol{\alpha} }}\big(\tilde{A}_{j,i}\big) \Big\vert}_{\text{(I)}} + \int_{0}^{A_{j,1}} \underbrace{\Big\vert \lambda_{{\boldsymbol{\alpha} }'}(u) - \lambda_{{\boldsymbol{\alpha} }}(u) \Big\vert}_{\text{(II)}} \, \mathrm{d}u \;+ \notag \\ &\sum_{i=2}^{C_j} \underbrace{\Big\vert \log\tilde{H}_{{\boldsymbol{\theta}' }}\big(\psi\big(\mathcal{R}(\tilde{A}_{j,i})\big)\big) - \log\tilde{H}_{{\boldsymbol{\theta} }}\big(\psi\big(\mathcal{R}(\tilde{A}_{j,i})\big)\big) \Big\vert}_{\text{(III)}} \;+ \notag \\ &\sum_{i=2}^{C_j} \int_{0}^{A_{j,i}} \underbrace{\Big\vert \lambda_{{\boldsymbol{\alpha} }'}\big(u+\tilde{A}_{j,i-1}\big)\tilde{H}_{{\boldsymbol{\theta}' }}\big(\psi\big(\mathcal{R}(u+\tilde{A}_{j,i-1})\big)\big) - \lambda_{{\boldsymbol{\alpha} }}\big(u+\tilde{A}_{j,i-1}\big)\tilde{H}_{{\boldsymbol{\theta} }}\big(\psi\big(\mathcal{R}(u+\tilde{A}_{j,i-1})\big)\big)\Big\vert}_{\text{(IV)}} \, \mathrm{d}u \, + \notag \\ &\int_{0}^{R_j - \tilde{A}_{j, C_j}} \underbrace{\Big\vert \lambda_{{\boldsymbol{\alpha} }'}\big(u+\tilde{A}_{j,C_j}\big)\tilde{H}_{{\boldsymbol{\theta}' }}\big(\psi\big(\mathcal{R}(u+\tilde{A}_{j,C_j})\big)\big) - \lambda_{{\boldsymbol{\alpha} }}\big(u+\tilde{A}_{j,C_j}\big)\tilde{H}_{{\boldsymbol{\theta} }}\big(\psi\big(\mathcal{R}(u+\tilde{A}_{j,C_j})\big)\big) \Big\vert}_{\text{(V)}} \, \mathrm{d}u. \end{align}\] Observe that the upper bounds for terms (I), (II), (IV) and (V) follow in exactly the same way as in the proof of Lemma 1. 
However, for term (III), under (A6) we have \[\begin{align} \Big\vert \log\tilde{H}_{{\boldsymbol{\theta}' }}\big(\psi\big(\mathcal{R}(\tilde{A}_{j,i})\big)\big) - \log\tilde{H}_{{\boldsymbol{\theta} }}\big(\psi\big(\mathcal{R}(\tilde{A}_{j,i})\big)\big) \Big\vert &\leq G_2\big(\psi\big(\mathcal{R}(\tilde{A}_{j,i})\big)\big) \;d\big(\theta, \theta'\big). \end{align}\] Further, we get \[\begin{align} G_2\big(\psi\big(\mathcal{R}(\tilde{A}_{j,i})\big)\big) &\leq G_2\big(\tilde{M}W_{j,i}\big), \;\text{under Assumption (A11)(a)}. \\ G_2\big(\psi\big(\mathcal{R}(\tilde{A}_{j,i})\big)\big) &\leq G_2\big(\psi_1\big(L(\tilde{A}_{j,i})\big)\big), \;\text{under Assumption (A11)(b)}. \end{align}\] Under (A11)(a), we therefore get \[\begin{align} \Big\vert q\big({\boldsymbol{Z}_j }, {\boldsymbol{\mu}' }\big) - q\big({\boldsymbol{Z}_j }, {\boldsymbol{\mu} }\big) \Big\vert \leq \underbrace{d\big({\boldsymbol{\mu} }, {\boldsymbol{\mu}' }\big)}_{h(x)=x} \times \underbrace{\Bigg(C_j\frac{\kappa}{\lambda_{\min}} + R_j \kappa + \sum_{i=2}^{C_j} G_2\big(\tilde{M} W_{j,i}\big) + \lambda_{\max}\big(R_j - A_{j,1}\big) G_1 \Bigg)}_{B_1(Z_j)}, \end{align}\] while under (A11)(b), we get \[\begin{align} \Big\vert q\big({\boldsymbol{Z}_j }, {\boldsymbol{\mu}' }\big) - q\big({\boldsymbol{Z}_j }, {\boldsymbol{\mu} }\big) \Big\vert \leq \underbrace{d\big({\boldsymbol{\mu} }, {\boldsymbol{\mu}' }\big)}_{h(x)=x} \times \underbrace{\Bigg(C_j\frac{\kappa}{\lambda_{\min}} + R_j \kappa + \sum_{i=2}^{C_j} G_2\big(\psi_1\big(L(\tilde{A}_{j,i})\big)\big) + \lambda_{\max}\big(R_j - A_{j,1}\big) G_1 \Bigg)}_{B_2(Z_j)}. \end{align}\] Recalling that \(G_2\) is linear, in combination with Theorem 3, it is clear that \(\mathop{\mathrm{\mathbb{E}}}\big[B_1({\boldsymbol{Z}_j })\big] < \infty\). Recalling that \(G_2\) and \(\psi_1\) are linear, and that \(\mathop{\mathrm{\mathbb{E}}}\big[C_1^2\big] < \infty\), together with Theorem 3, it is clear that \(\mathop{\mathrm{\mathbb{E}}}\big[B_2({\boldsymbol{Z}_j })\big] < \infty\). 
This ensures that Lemma 1 continues to hold. Subsequently, the proof of Theorem 2 relies only on the objects \(q({\boldsymbol{Z} }_j, {\boldsymbol{\mu} })\), which are i.i.d. and whose sum is the log-likelihood. We have managed to define such objects in Equation 22, and hence the proof carries over directly. ◻

References

[1]
R Ibrahim and P L’Ecuyer. Forecasting call center arrivals: Fixed-effects, mixed-effects, and bivariate models. Manufacturing & Service Operations Management, 15(1):72–85, 2013.
[2]
R Ibrahim, H Ye, P L’Ecuyer, and H Shen. Modeling and forecasting call center arrivals: A literature survey and a case study. International Journal of Forecasting, 32(3):865–874, 2016.
[3]
Y Inoue, L Ravner, and M Mandjes. Estimating customer impatience in a service system with unobserved balking. Stochastic Systems, 13(2):181–210, 2023.
[4]
M Woodroofe. Estimating a distribution function with truncated data. The Annals of Statistics, 13(1):163–177, 1985.
[5]
A Asanjarani, Y Nazarathy, and P Taylor. A survey of parameter and state estimation in queues. Queueing Systems, 97:39–80, 2021.
[6]
NH Bingham and SM Pitts. Non-parametric estimation for the M/G/\(\infty\) queue. Annals of the Institute of Statistical Mathematics, 51:71–97, 1999.
[7]
JV Ross, T Taimre, and PK Pollett. Estimation for queues from queue length data. Queueing Systems, 55:131–138, 2007.
[8]
RC Larson. The queue inference engine: Deducing queue statistics from transactional data. Management Science, 36(5):586–601, 1990.
[9]
L Ravner, O Boxma, and M Mandjes. Estimating the input of a Lévy-driven queue by Poisson sampling of the workload process. Bernoulli, 25:3734–3761, 2019.
[10]
J. Pickands III and RA Stine. Estimation for an M/G/\(\infty\) queue with incomplete information. Biometrika, 84(2):295–308, 1997.
[11]
F Baccelli, P Boyer, and G Hebuterne. Single-server queues with impatient customers. Advances in Applied Probability, 16(4):887–905, 1984.
[12]
L Liu and VG Kulkarni. Explicit solutions for the steady state distributions in M/PH/1 queues with workload dependent balking. Queueing Systems, 52:251–260, 2006.
[13]
L Liu and VG Kulkarni. Busy period analysis for M/PH/1 queues with workload dependent balking. Queueing Systems, 59:37–51, 2008.
[14]
O Boxma, D Perry, W Stadje, and S Zacks. The busy period of an M/G/1 queue with customer impatience. Journal of Applied Probability, 47(1):130–145, 2010.
[15]
A Mandelbaum and S Zeltyn. Data-stories about (im)patient customers in tele-queues. Queueing Systems, 75(2-4):115–146, 2013.
[16]
W Whitt. Queues with time-varying arrival rates: A bibliography. Available on http://www.columbia.edu/~ww2040/TV_bibliography_091016.pdf, 2016.
[17]
WA Massey and W Whitt. An analysis of the modified offered-load approximation for the nonstationary Erlang loss model. The Annals of Applied Probability, pages 1145–1160, 1994.
[18]
WA Massey and W Whitt. Unstable asymptotics for nonstationary queues. Mathematics of Operations Research, 19(2):267–291, 1994.
[19]
WA Massey, GA Parker, and W Whitt. Estimating the parameters of a nonhomogeneous Poisson process with linear rate. Telecommunication Systems, 5:361–388, 1996.
[20]
SH Kim and W Whitt. Estimating waiting times with the time-varying Little's law. Probability in the Engineering and Informational Sciences, 27(4):471–506, 2013.
[21]
OB Jennings, A Mandelbaum, WA Massey, and W Whitt. Server staffing to meet time-varying demand. Management Science, 42(10):1383–1394, 1996.
[22]
N Chen, R Gürlek, DKK Lee, and H Shen. Can customer arrival rates be modelled by sine waves? Service Science, 2023.
[23]
R Helmers and IW Mangku. On estimating the period of a cyclic Poisson process. Lecture Notes-Monograph Series, pages 345–356, 2003.
[24]
R Helmers and IW Mangku. Estimating the intensity of a cyclic Poisson process in the presence of linear trend. Annals of the Institute of Statistical Mathematics, 61:599–628, 2009.
[25]
R Helmers, IW Mangku, and R Zitikis. Consistent estimation of the intensity function of a cyclic Poisson process. Journal of Multivariate Analysis, 84(1):19–39, 2003.
[26]
ME Kuhl and JR Wilson. Least squares estimation of nonhomogeneous Poisson processes. Journal of Statistical Computation and Simulation, 67(1):699–712, 2000.
[27]
N Yoshida and T Hayashi. On the robust estimation in Poisson processes with periodic intensities. Annals of the Institute of Statistical Mathematics, 42:489–507, 1990.
[28]
YA Kutoyants. Statistical inference for spatial Poisson processes, volume 134. Springer Science & Business Media, 2012.
[29]
DP Heyman and W Whitt. The asymptotic behavior of queues with time-varying arrival rates. Journal of Applied Probability, 21(1):143–156, 1984.
[30]
DWK Andrews. Generic uniform convergence. Econometric Theory, 8(2):241–257, 1992.
[31]
M Benaïm and T Hurth. Markov Chains on Metric Spaces: A Short Course. Springer, 2022.
[32]
R Ibrahim and W Whitt. Real-time delay estimation based on delay history. Manufacturing & Service Operations Management, 11(3):397–415, 2009.
[33]
R Ibrahim and W Whitt. Wait-time predictors for customer service systems with time-varying demand and capacity. Operations Research, 59(5):1106–1118, 2011.
[34]
TS Ferguson. A Course in Large Sample Theory. Routledge, 1996.
[35]
G Mendelson and K Xu. Principles for statistical analysis in dynamic service systems. 2023. Unpublished manuscript, Stanford University.