Let $G$ be a connected reductive group over a totally real field $F$ which is compact modulo center at the archimedean places. We find congruences modulo an arbitrary power of $p$ between the space of arbitrary automorphic forms on $G(\mathbb A_F)$ and that of automorphic forms with supercuspidal components at $p$, provided that $p$ is larger than the Coxeter number of the absolute Weyl group of $G$. We illustrate how such congruences can be applied in the construction of Galois representations. Our proof is based on type theory for representations of $p$-adic groups, generalizing the prototypical case of GL(2) in [arXiv:1506.04022, Section 7] to general reductive groups. We exhibit a plethora of new supercuspidal types consisting of arbitrarily small compact open subgroups and characters thereof. We expect these results, which are of independent interest, to have further applications. For example, we extend the result of Emerton--Pa\v{s}k\=unas on the density of supercuspidal points from definite unitary groups to general $G$ as above.

We study K-theoretic Gromov--Witten invariants of projective hypersurfaces using a virtual localization formula under finite group actions. In particular, this formula yields all K-theoretic Gromov--Witten invariants of the quintic threefold modulo 41, up to genus 19 and degree 40. Applying the same idea to a K-theoretic version of FJRW theory, we determine that theory modulo 41 for the quintic polynomial with minimal group and narrow insertions, in every genus.

We are concerned with a semilinear elliptic equation on a smooth bounded domain $\Omega$ of $\mathbb{R}^n,\,n\geq 5,$ which involves a critical nonlinearity of the form $K(x)u^{(n+2)/(n-2)}$ and a linear term $\mu u.$ By using a test function procedure, we give an existence criterion involving the parameter $\mu$ and the function $K(x).$ For a particular case of $\Omega,\,K(x)$ and $n,$ we prove its optimality through a Pohozaev-type identity.

We construct a functor that maps $C^*$-correspondences to their Cuntz-Pimsner algebras. The objects in our domain category are $C^*$-correspondences, and the morphisms are the isomorphism classes of $C^*$-correspondences satisfying certain conditions. As an application, we recover a well-known result of Muhly and Solel. In fact, we show that functoriality leads us to a more general result: strongly Morita equivalent $C^*$-correspondences have Morita equivalent Cuntz-Pimsner algebras.

Numerical resolution of exterior Helmholtz problems requires some approach to domain truncation. As an alternative to approximate nonreflecting boundary conditions and invocation of the Dirichlet-to-Neumann map, we introduce a new, nonlocal boundary condition. This condition is exact and requires the evaluation of layer potentials involving the free space Green's function. However, it seems to work in general unstructured geometry, and Galerkin finite element discretization leads to convergence under the usual mesh constraints imposed by G{\aa}rding-type inequalities. The nonlocal boundary conditions are readily approximated by fast multipole methods, and the resulting linear system can be preconditioned by the purely local operator involving transmission boundary conditions.

This letter proposes a pragmatic link adaptation algorithm considering power density offsets (PDOs) for next-generation uplink wireless channels. The proposed algorithm consists of PDO calculation between a physical uplink shared channel and its associated sounding reference signal, key channel state metric generation, and modulation and coding scheme (MCS) adaptation with respect to the PDO. Scaling is applied to estimated channel matrices based on multiple reference PDO points to generate corresponding reference mutual information (MI) values, followed by interpolation or extrapolation to obtain the adapted MI and, ultimately, the MCS. The proposed algorithm has low complexity in terms of hardware implementation, while yielding satisfactory block error rates and throughput for a wide range of PDOs, as shown by simulation results.
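The interpolation/extrapolation step can be made concrete with a minimal sketch. Note that the function names, the reference-point table, and the MCS threshold table below are hypothetical illustrations, not part of the proposed algorithm:

```python
import numpy as np

def adapt_mi(pdo, ref_pdos, ref_mis):
    """Linearly interpolate (or extrapolate beyond the edges) the mutual
    information at the measured PDO from reference (PDO, MI) pairs.
    Hypothetical helper illustrating the interpolation step."""
    ref_pdos = np.asarray(ref_pdos, dtype=float)
    ref_mis = np.asarray(ref_mis, dtype=float)
    # Index of the segment used for linear interpolation/extrapolation.
    i = int(np.clip(np.searchsorted(ref_pdos, pdo), 1, len(ref_pdos) - 1))
    x0, x1 = ref_pdos[i - 1], ref_pdos[i]
    y0, y1 = ref_mis[i - 1], ref_mis[i]
    return y0 + (pdo - x0) * (y1 - y0) / (x1 - x0)

def select_mcs(mi, mi_thresholds):
    """Highest MCS index whose (hypothetical) MI threshold is met;
    returns -1 if the MI is below every threshold."""
    return int(np.searchsorted(mi_thresholds, mi, side="right")) - 1

# Adapted MI at a PDO between two reference points, then MCS selection.
mi = adapt_mi(1.0, ref_pdos=[0.0, 2.0], ref_mis=[0.0, 2.0])
mcs = select_mcs(mi, mi_thresholds=[0.0, 0.4, 0.8])
```

The same two-point linear rule covers both interpolation (inside the reference range) and extrapolation (outside it), which is why the edge segment is reused beyond the endpoints.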

We propose a formulation of the relative Bogomolov conjecture and show that it gives an affirmative answer to a question of Mazur's concerning the uniformity of the Mordell-Lang conjecture for curves. In particular we show that the relative Bogomolov conjecture implies the uniform Manin-Mumford conjecture for curves. The proof builds on our previous work "Uniformity in Mordell-Lang for curves".

Pilot contamination attack (PCA) in a time division duplex wireless communication system is considered, where an eavesdropper (Eve) attacks the reverse pilot transmission phase in order to wiretap the data transmitted from a transmitter, Alice, to a receiver, Bob. We propose a new PCA scheme for Eve, wherein Eve does not emit any signal herself but uses an intelligent reflecting surface (IRS) to reflect the pilot sent by Bob to Alice. The proposed scheme, referred to as IRS-PCA, increases the signal leakage from Alice to the IRS during the data transmission phase, which is then reflected by the IRS to Eve in order to improve Eve's wiretapping capability. The IRS-PCA scheme disables many existing countermeasures against PCA, since with IRS-PCA Eve no longer needs to know the pilot sequence of Bob, and it therefore poses a severe threat to the security of the legitimate wireless communication system. In view of this, the problems of 1) IRS-PCA detection and 2) secure transmission under IRS-PCA are considered in this paper. For IRS-PCA detection, a generalized cumulative sum (GCUSUM) detection procedure is proposed based on the framework of quickest detection, aiming at detecting the occurrence of IRS-PCA as soon as possible once it occurs. For secure transmission under IRS-PCA, a cooperative channel estimation scheme is proposed to estimate the channel of the IRS, based on which zero-forcing beamforming is designed to reduce signal leakage.

This paper is focused on SIS epidemic dynamics (also known as the contact process) on stationary Poisson point processes of the Euclidean plane, where the infection rate of a susceptible point is proportional to the number of infected points in a ball around it. Two models are discussed, the first with a static point process, and the second where points are subject to some random motion. For both models, we use conservation equations for moment measures to analyze the stationary point processes of infected and susceptible points. A heuristic factorization of the third moment measure is then proposed to derive simple polynomial equations allowing one to obtain closed-form approximations for the fraction of infected nodes in the steady state. These polynomial equations also lead to a phase diagram which tentatively delineates the regions of the parameter space (population density, infection radius, infection and recovery rates, and motion rate) where the epidemic survives and those where there is extinction. According to this phase diagram, the survival of the epidemic is not always an increasing function of the motion rate. These results are substantiated by simulations on large two-dimensional tori, which show that the polynomial equations accurately predict the fraction of infected nodes when the epidemic survives. The phase diagram is also partly substantiated by simulations of the mean survival time of the epidemic on large tori: it accurately predicts the parameter regions where the mean survival time increases or decreases with the motion rate.

Morrison, Walker, and Wedrich used the blob complex to construct a generalization of Khovanov-Rozansky homology to links in the boundary of a 4-manifold. The degree zero part of their theory, called the skein lasagna module, admits an elementary definition in terms of certain diagrams in the 4-manifold. We give a description of the skein lasagna module for 4-manifolds without 1- and 3-handles, and present some explicit calculations for disk bundles over $S^2$.

The location of the spectrum and the Riesz basis property of well-posed homogeneous infinite-dimensional linear port-Hamiltonian systems on a 1D spatial domain are studied. It is shown that the Riesz basis property is equivalent to the fact that the system operator generates a strongly continuous group. Moreover, in this situation the spectrum consists of eigenvalues only, located in a strip parallel to the imaginary axis, and it can be decomposed into finitely many sets, each having a uniform gap.

We study certain sequences of moduli spaces of sheaves on K3 surfaces, building on work of Markman. We show that these sequences can be given the structure of a geometric categorical sl_2 action in the sense of Cautis, Kamnitzer, and Licata. As a corollary, we get an equivalence between derived categories of some moduli spaces that are birational via stratified Mukai flops.

The rational Borel equivariant cohomology for actions of a compact connected Lie group is determined by restriction of the action to a maximal torus. We show that a similar reduction holds for any compact Lie group $G$ when there is a closed subgroup $K$ such that the cohomology of the classifying space $BK$ is free over the cohomology of $BG$ for field coefficients. We study the particular case when $G$ is a semi-direct product and $K$ is its maximal elementary abelian 2-subgroup for cohomology with coefficients in a field of characteristic two. This provides a different approach to investigate the syzygy order of the equivariant cohomology of a space with a torus action and a compatible involution, and we relate this description with results for 2-torus actions.

In a previous paper the authors generalized classical results on minimal realizations of non-commutative (nc) rational functions, using nc Fornasini--Marchesini realizations centred at an arbitrary matrix point. In particular, it was proved that the domain of regularity of a nc rational function is contained in the invertibility set of the corresponding pencil of any minimal realization of the function. In this paper we prove an equality between the domain of a nc rational function and the domain of any of its minimal realizations. As for evaluations over stably finite algebras, we show that the domain of the realization with respect to any such algebra coincides with the so-called matrix domain of the function with respect to the algebra. As a corollary we show that the domain of regularity and the stable extended domain coincide. In contrast to both the classical case and the scalar case -- where any matrix coefficients satisfying the controllability and observability conditions can appear in a minimal realization of a nc rational function -- the matrix coefficients in our case have to satisfy certain equations, called linearized lost-abbey conditions, which are related to Taylor--Taylor expansions in nc function theory.

Let $G$ be an elementary abelian 2-group and $X$ a manifold with a locally standard action of $G$. We provide a criterion to determine the syzygy order of the $G$-equivariant cohomology of $X$ with coefficients in a field of characteristic two, using a complex associated to the cohomology of the face filtration of the manifold with corners $X/G$. This result is the real version of the quotient criterion for locally standard torus actions developed by M. Franz in arXiv:1205.4462.

We consider the following Schr\"odinger-Bopp-Podolsky system in $\mathbb R^{3}$ $$\left\{ \begin{array}{c} -\varepsilon^{2} \Delta u + V(x)u + \phi u = f(u)\\ -\varepsilon^{2} \Delta \phi + \varepsilon^{4} \Delta^{2}\phi = 4\pi u^{2}\\ \end{array} \right.$$ where $\varepsilon > 0$ and $ V:\mathbb{R}^{3} \rightarrow \mathbb{R}, f:\mathbb{R} \rightarrow \mathbb{R}$ satisfy suitable assumptions. By using variational methods, we prove that the number of positive solutions is bounded below by the Ljusternik--Schnirelmann category of $M$, the set of minima of the potential $V$.

For $G$ a simple, connected graph, a vertex labeling $f:V(G)\rightarrow \mathbb{Z}_+$ is called a $\textit{radio labeling of}$ $G$ if it satisfies $|f(u)-f(v)|\geq \operatorname{diam}(G) + 1 - d(u,v)$ for all distinct vertices $u,v\in V(G)$. The $\textit{radio number}$ of $G$ is the minimal span over all radio labelings of $G$. If a bijective radio labeling onto $\{1,2,...,|V(G)|\}$ exists, $G$ is called a $\textit{radio graceful graph}$. We determine the radio number of all diameter $3$ Hamming graphs and show that an infinite subset of them is radio graceful.
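The radio condition above is easy to check mechanically. The following sketch verifies it for a hand-made labeling of the path $P_4$, which is only a toy example (it is not a Hamming graph from the paper):

```python
from itertools import combinations

def is_radio_labeling(dist, f, diam):
    """Check |f(u) - f(v)| >= diam + 1 - d(u, v) for all distinct u, v.
    dist is a precomputed all-pairs distance table, f a labeling dict."""
    return all(abs(f[u] - f[v]) >= diam + 1 - dist[u][v]
               for u, v in combinations(f, 2))

# Path P4 on vertices 0-1-2-3 (diameter 3); here d(u, v) = |u - v|.
dist = {u: {v: abs(u - v) for v in range(4)} for u in range(4)}
print(is_radio_labeling(dist, {0: 1, 1: 6, 2: 3, 3: 8}, diam=3))  # True
print(is_radio_labeling(dist, {0: 1, 1: 2, 2: 3, 3: 4}, diam=3))  # False
```

The second labeling fails because adjacent vertices (distance 1) must receive labels differing by at least $\operatorname{diam}(G) + 1 - 1 = 3$.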

An important component of a number of computational modeling algorithms is an interpolation method that preserves the positivity of the function being interpolated. This report describes the numerical testing of a new positivity-preserving algorithm that is designed to be used when interpolating from a solution defined on one grid to a different spatial grid. The motivating application is a numerical weather prediction (NWP) code that uses spectral elements as the discretization choice for its dynamics core and Cartesian product meshes for the evaluation of its physics routines. This combination of spectral elements, which use nonuniformly spaced quadrature/collocation points, with uniformly spaced Cartesian meshes, together with the desire to maintain positivity when moving between them, necessitates our work. This new approach is evaluated against several typical algorithms in use on a range of test problems in one or more space dimensions. The results obtained show that the new method is competitive in terms of observed accuracy while at the same time preserving the underlying positivity of the functions being interpolated.

A weak Galerkin (WG) finite element method without stabilizers was introduced in [J. Comput. Appl. Math., 371 (2020); arXiv:1906.06634] on polytopal meshes, and was subsequently improved in [arXiv:2008.13631] to achieve order-one superconvergence. The goal of this paper is to develop a new stabilizer-free WG method on polytopal meshes. This method has convergence rates two orders higher than the optimal convergence rates for the corresponding WG solution in both an energy norm and the $L^2$ norm. Numerical examples are tested for low- and high-order elements in two and three space dimensions.

In this work we study the deformation of a Hopf algebra $H$ by a partial action of $H$ on its ground field $\Bbbk$ through the partial smash product algebra $\underline{\Bbbk \# H}$. We introduce the concept of a $\lambda$-Hopf algebra, namely a Hopf algebra obtained as a partial smash product algebra, and we show that, in fact, every Hopf algebra is a $\lambda$-Hopf algebra. Moreover, we develop a method to compute partial actions of a given Hopf algebra on its ground field and, as an application, we exhibit all partial actions of this type for some families of Hopf algebras.

The linear homotopy theory for the codifferential operator on Riemannian manifolds is developed in analogy with the theory for the exterior derivative. A new class of anticoexact forms, which exist locally in a star-shaped region, is defined. Their application to physics is presented, including the vacuum Dirac-K\"{a}hler equation, the coupled Maxwell-Kalb-Ramond system of equations occurring in bosonic string theory, and its reduction to the Dirac equation.

SC-Flip (SCF) is a low-complexity polar code decoding algorithm with improved performance, and is an alternative to high-complexity cyclic redundancy check (CRC)-aided SC-List (CA-SCL) decoding. However, the performance improvement of SCF is limited since it can correct only a single channel error ($\omega=1$). The Dynamic SCF (DSCF) algorithm addresses this problem by targeting multiple errors ($\omega \geq 1$), but it requires logarithmic and exponential computations, which make it infeasible for practical applications. In this work, we propose simplifications and approximations to make DSCF practically feasible. First, we reduce the transcendental computations of DSCF decoding to a constant approximation. Then, we show how to incorporate special node decoding techniques into the DSCF algorithm, creating Fast-DSCF decoding. Next, we reduce the search span within the special nodes to further reduce the computational complexity. We then describe a hardware architecture for the Fast-DSCF decoder, in which we introduce additional simplifications such as metric normalization and sorter length reduction. All the simplifications and approximations are shown to have minimal impact on the error-correction performance, and the reported Fast-DSCF decoder is the only SCF-based architecture that can correct multiple errors. The Fast-DSCF decoders synthesized using TSMC $65$nm CMOS technology achieve throughputs of $1.25$, $1.06$ and $0.93$ Gbps for $\omega \in \{1,2,3\}$, respectively. Compared to state-of-the-art fast CA-SCL decoders with equivalent FER performance, the proposed decoders are up to $5.8\times$ more area-efficient. Finally, observations on energy dissipation indicate that Fast-DSCF is more energy-efficient than its CA-SCL-based counterparts.

A graph $X$ is defined inductively to be $(a_0,\dots,a_{n-1})$-regular if $X$ is $a_0$-regular and for every vertex $v$ of $X$, the sphere of radius $1$ around $v$ is an $(a_1,\dots,a_{n-1})$-regular graph. Such a graph $X$ is said to be highly regular (HR) of level $n$ if $a_{n-1}\neq 0$. Chapman, Linial and Peled studied HR-graphs of level 2 and provided several methods to construct families of graphs which are expanders "globally and locally". They ask whether such HR-graphs of level 3 exist. In this paper we show how the theory of Coxeter groups, and abstract regular polytopes and their generalisations, can lead to such graphs. Given a Coxeter system $(W,S)$ and a subset $M$ of $S$, we construct highly regular quotients of the 1-skeleton of the associated Wythoffian polytope $\mathcal{P}_{W,M}$, which form an infinite family of expander graphs when $(W,S)$ is indefinite and $\mathcal{P}_{W,M}$ has finite vertex links. The regularity of the graphs in this family can be deduced from the Coxeter diagram of $(W,S)$. The expansion stems from applying superapproximation to the congruence subgroups of the linear group $W$. This machinery gives a rich collection of families of HR-graphs, with various interesting properties, and in particular answers affirmatively the question asked by Chapman, Linial and Peled.

We define a new structure for collections of nodes in trees, called "sweep covers" because they cover all the nodes in the tree via some ancestor-descendant relationship. We then analyze an algorithm for finding all sweep covers of a given size in any tree. The complexity of the algorithm is analyzed on a class of infinite $\Delta$-ary trees with constant path lengths between the $\Delta$-star internal nodes. The lower bound of the complexity on these infinite trees is proven to be given by the Raney numbers, owing to how Raney trees embed into the infinite trees of bounded out-degree.
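For reference, the Raney numbers mentioned above admit the standard closed form $R_{p,r}(n)=\frac{r}{pn+r}\binom{pn+r}{n}$; the snippet below (an aside, not code from the paper) computes them, with $p=2$, $r=1$ recovering the Catalan numbers:

```python
from math import comb

def raney(p, r, n):
    """Raney number R_{p,r}(n) = r/(p*n + r) * C(p*n + r, n).
    The division is exact: Raney numbers are integers."""
    return r * comb(p * n + r, n) // (p * n + r)

print([raney(2, 1, n) for n in range(6)])  # Catalan: [1, 1, 2, 5, 14, 42]
```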

In this paper, we consider the use of Total Variation (TV) minimization for compressive imaging; that is, image reconstruction from subsampled measurements. Focusing on two important imaging modalities -- namely, Fourier imaging and structured binary imaging via the Walsh--Hadamard transform -- we derive uniform recovery guarantees asserting stable and robust recovery for arbitrary random sampling strategies. Using this, we then derive a class of theoretically-optimal sampling strategies. For Fourier sampling, we show recovery of an image with approximately $s$-sparse gradient from $m \gtrsim_d s \cdot \log^2(s) \cdot \log^4(N)$ measurements, in $d \geq 1$ dimensions. When $d = 2$, this improves the current state-of-the-art result by a factor of $\log(s) \cdot \log(N)$. It also extends it to arbitrary dimensions $d \geq 2$. For Walsh sampling, we prove that $m \gtrsim_d s \cdot \log^2(s) \cdot \log^2(N/s) \cdot \log^3(N) $ measurements suffice in $d \geq 2$ dimensions. To the best of our knowledge, this is the first recovery guarantee for structured binary sampling with TV minimization.

We show that for a generic conformal metric perturbation of a hyperbolic 3-manifold $\Sigma$, the order of vanishing of the Ruelle zeta function at zero equals $4-b_1(\Sigma)$, contrary to the hyperbolic case where it is equal to $4-2b_1(\Sigma)$. The result is proved by developing a suitable perturbation theory that exploits the natural pairing between resonant and co-resonant differential forms. To obtain a metric conformal perturbation we need to establish the non-vanishing of the pushforward of a certain product of resonant and co-resonant states and we achieve this by a suitable regularisation argument. Along the way we describe geometrically all resonant differential forms (at zero) for a closed hyperbolic 3-manifold and study the semisimplicity of the Lie derivative.

We study the loci of the Brocard points over two selected families of triangles: (i) two vertices fixed on a circumference and a third which sweeps it, (ii) Poncelet 3-periodics in the homothetic ellipse pair. Loci obtained include circles, ellipses, and teardrop-like curves. We derive expressions for both curves and their areas. We also study the locus of the vertices of Brocard triangles over the homothetic and Brocard-poristic Poncelet families.

This paper asks a basic question: how much training is required to beat a universal source coder? Traditionally, there have been two types of source coders: fixed, optimum coders such as Huffman coders; and universal source coders, such as Lempel-Ziv. The paper considers a third type of source coder: learned coders. These are coders that are trained on data of a particular type and then used to encode new data of that type. This type of coder has recently become very popular for (lossy) image and video coding. The paper considers two criteria for the performance of learned coders: the average performance over the training data, and a guaranteed performance over all training data except for some error probability $P_e$. In both cases the coders are evaluated with respect to redundancy. The paper considers the IID binary case and binary Markov chains. In both cases it is shown that the amount of training data required is very moderate: to code sequences of length $l$, the amount of training data required to beat a universal source coder is $m=K\frac{l}{\log l}$, where the constant $K$ depends on the case considered.

In this paper, we describe a novel iterative procedure called SISTA to learn the underlying cost in optimal transport problems. SISTA is a hybrid between two classical methods, coordinate descent ("S"-inkhorn) and proximal gradient descent ("ISTA"). It alternates between a phase of exact minimization over the transport potentials and a phase of proximal gradient descent over the parameters of the transport cost. We prove that this method converges linearly, and we illustrate on simulated examples that it is significantly faster than both coordinate descent and ISTA. We apply it to estimating a model of migration, which predicts the flow of migrants using country-specific characteristics and pairwise measures of dissimilarity between countries. This application demonstrates the effectiveness of machine learning in quantitative social sciences.
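The alternating structure of such a hybrid can be illustrated on a toy problem. The objective below is illustrative only (it is not the paper's optimal-transport objective); it is chosen so that one block admits exact minimization while the other takes a proximal gradient (ISTA) step:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sista_like(M, b, lam=0.1, step=None, iters=200):
    """Toy hybrid scheme in the spirit of SISTA (illustrative objective):
        min_{x, theta} 0.5||x - b||^2 + 0.5||x - M theta||^2 + lam||theta||_1
    alternating (i) exact minimization over x and (ii) one ISTA step
    over theta."""
    n, p = M.shape
    theta = np.zeros(p)
    if step is None:
        step = 1.0 / np.linalg.norm(M, 2) ** 2  # 1/L for the smooth part
    for _ in range(iters):
        x = 0.5 * (b + M @ theta)          # exact block minimization in x
        grad = -M.T @ (x - M @ theta)      # gradient of smooth part in theta
        theta = soft_threshold(theta - step * grad, step * lam)
    return x, theta
```

Here the exact block plays the role of the Sinkhorn phase (closed-form minimization over one set of variables), and the soft-thresholding update is the proximal gradient phase over the penalized parameters.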

The design of navigation observers able to simultaneously estimate the position, linear velocity and orientation of a vehicle in a three-dimensional space is crucial in many robotics and aerospace applications. This problem was mainly dealt with using the extended Kalman filter and its variants which proved to be instrumental in many practical applications. Although practically efficient, the lack of strong stability guarantees of these algorithms motivated the emergence of a new class of geometric navigation observers relying on Riemannian geometry tools, leading to provable strong stability properties. The objective of this brief tutorial is to provide an overview of the existing estimation schemes, as well as some recently developed geometric nonlinear observers, for autonomous navigation systems relying on inertial measurement unit (IMU) and landmark measurements.

Any smooth tropical plane curve contains a distinguished trivalent graph called its skeleton. In 2020 Morrison and Tewari proved that the so-called big face graphs cannot be the skeleta of tropical curves for genus $12$ and greater. In this paper we answer an open question they posed to extend their result to the prism graphs, proving that they are the skeleton of a smooth tropical plane curve precisely when the genus is at most $11$. Our main tool is a classification of lattice polygons with two points that can simultaneously view all others, without having any one point that can observe all others.

Let $K := \mathrm{GL}_n(\mathcal{O})$ denote the maximal compact subgroup of $\mathrm{GL}_n(F)$ with $F$ a nonarchimedean local field. We study the decomposition of the space of square-integrable functions on the unit sphere in $F^n$ into irreducible $K$-modules; for $F = \mathbb{Q}_p$, these are the $p$-adic analogues of spherical harmonics. As an application, we characterise the newform and conductor exponent of a generic irreducible admissible smooth representation of $\mathrm{GL}_n(F)$ in terms of distinguished $K$-types. Finally, we compare our results to analogous results in the archimedean setting.

We present a private information retrieval (PIR) scheme that allows a user to retrieve a single message from an arbitrary number of databases by colluding with other users while hiding the desired message index. This scheme is of particular significance when there is only one accessible database -- a special case that turns out to be more challenging than PIR with multiple databases. The capacity upper bound for these scenarios is $C=(1+\frac{1}{S}+\cdots+\frac{1}{S^{K-1}})^{-1}$, where $K$ is the number of messages and $S$ represents the number of information sources, e.g., $S=N+U-1$ for $U$ users and $N$ databases. We show that the proposed information retrieval scheme attains this capacity bound even when only one database is present, in contrast to most existing works, which hinge on access to multiple databases in order to protect user privacy. Unlike the multi-database case, this scheme capitalizes on the inability of a database to cross-reference queries made by multiple users due to computational complexity.
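As a quick numerical illustration of the capacity expression (the parameter values below are illustrative, not from the paper):

```python
def pir_capacity(K, N, U):
    """C = (1 + 1/S + ... + 1/S^(K-1))^(-1), with S = N + U - 1 as in the
    abstract (K messages, N databases, U users)."""
    S = N + U - 1
    return 1.0 / sum(S ** -k for k in range(K))

# Single accessible database (N = 1) helped by U = 3 users: S = 3.
print(pir_capacity(K=2, N=1, U=3))  # ~0.75
```

With a single source ($S=1$, i.e. one database and one user) the expression degenerates to $C = 1/K$, the trivial rate of downloading everything.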

In this paper we aim to compare Kurepa trees and Aronszajn trees, and we analyze the effect of large cardinal assumptions on this comparison. Using the method of walks on ordinals, we show that, if there is an inaccessible cardinal, then it is consistent with ZFC that there is a Kurepa tree and every Kurepa tree contains a Souslin subtree. This is stronger than Komjath's theorem, which asserts the same consistency from two inaccessible cardinals. We show that our large cardinal assumption is optimal: if every Kurepa tree has an Aronszajn subtree, then $\omega_2$ is inaccessible in the constructible universe $\textsc{L}$. Moreover, we prove it is consistent with ZFC that there is a Kurepa tree $T$ such that if $U \subset T$ is a Kurepa tree with the inherited order from $T$, then $U$ has an Aronszajn subtree. This theorem uses no large cardinal assumption. Our last theorem immediately implies the following: assume $\textrm{MA}_{\omega_2}$ holds and $\omega_2$ is not a Mahlo cardinal in $\textsc{L}$. Then there is a Kurepa tree with the property that every Kurepa subset has an Aronszajn subtree. Our work entails proving a new lemma about Todorcevic's $\rho$ function, which might be useful in other contexts.

Suppose that $M$ is a closed isotropic Riemannian manifold and that $R_1,...,R_m$ generate the isometry group of $M$. Let $f_1,...,f_m$ be smooth perturbations of these isometries. We show that the $f_i$ are simultaneously conjugate to isometries if and only if their associated uniform Bernoulli random walk has all Lyapunov exponents zero. This extends a linearization result of Dolgopyat and Krikorian from $S^n$ to real, complex, and quaternionic projective spaces. In addition, we identify and remedy an oversight in that earlier work.

Assume $\mathcal{C}$ is the class of all linear orders $L$ such that $L$ is not a countable union of well ordered sets, and every uncountable subset of $L$ contains a copy of $\omega_1$. We show it is consistent that $\mathcal{C}$ has minimal elements. This answers an old question due to Galvin.

Finding permutation polynomials with low differential and boomerang uniformity is an important topic in the S-box design of many block ciphers. For example, AES chooses the inverse function as its S-box, which is differentially 4-uniform and boomerang 6-uniform. Also, there has been considerable research on many non-quadratic permutations obtained by modifying certain sets of points of the inverse function. In this paper, we give a novel approach showing that plenty of existing modifications of the inverse function are in fact affine equivalent to permutations of low Carlitz rank, and that those modifications cannot be APN (almost perfect nonlinear) unless the Carlitz rank is very large. Using nice properties of the permutations of Carlitz form, we present the complete list of permutations of Carlitz rank 3 having boomerang uniformity six, and also give the complete classification of the differential uniformity of permutations of Carlitz rank 3. We also provide, up to affine equivalence, all involutory permutations of Carlitz rank 3 having boomerang uniformity six.

Based on an equation for the rank of an elliptic surface over $\mathbb{Q}$ which appears in the work of Nagao, Rosen, and Silverman, we conjecture that 100% of elliptic surfaces have rank $0$ when ordered by the size of the coefficients of their Weierstrass equations, and present a probabilistic heuristic to justify this conjecture. We then discuss how it would follow either from an understanding of certain $L$-functions or from an understanding of the local behaviour of the surfaces. Finally, we make a conjecture about ranks of elliptic surfaces over finite fields, and highlight some experimental evidence supporting it.

We prove that the length of any gap in the differential grading of the Khovanov homology of any quasi-alternating link is one. As a consequence, we obtain that the length of any gap in the Jones polynomial of any such link is one. This establishes a weaker version of Conjecture 2.3 in [5]. Moreover, we obtain a lower bound for the determinant of any such link in terms of the breadth of its Jones polynomial. This establishes a weaker version of Conjecture 3.8 in [16]. The main tool in obtaining this result is proving the Knight Move Conjecture [2] for the class of quasi-alternating links.

We study the stability of low-density parity-check (LDPC) codes under blockwise or bitwise maximum $\textit{a posteriori}$ (MAP) decoding, where transmission takes place over a binary-input memoryless output-symmetric channel. Our study stems from the consideration of constructing universal capacity-achieving codes under low-complexity decoding algorithms, where universality refers to the fact that we are considering a family of channels with equal capacity. Consider, e.g., the right-regular sequence by Shokrollahi and the heavy-tail Poisson sequence by Luby $\textit{et al.}$ Both sequences are provably capacity-achieving under belief propagation (BP) decoding when transmission takes place over the binary erasure channel (BEC). In this paper we show that many existing capacity-achieving sequences of LDPC codes are not universal under BP decoding. We reveal that the key to this non-universality result is the stability of the underlying codes. More concretely, for an ordered and complete channel family and a sequence of LDPC code ensembles, we determine a stability threshold associated with them, which further gives rise to a sufficient condition such that the sequence is not universal under BP decoding. Furthermore, we show that the same stability threshold applies to blockwise or bitwise MAP decoding as well. We present how stability can determine an upper bound on the corresponding blockwise or bitwise MAP threshold, revealing the operational significance of the stability threshold.

We study the hard Lefschetz property on compact symplectic solvmanifolds, i.e., compact quotients $M=\Gamma\backslash G$ of a simply-connected solvable Lie group $G$ by a lattice $\Gamma$, admitting a symplectic structure.

In this paper, we present an integral Suzuki-type fixed point theorem for multivalued mappings defined on a complete metric space in terms of the \'{C}iri\'{c} integral contractions. As an application, we will prove an existence and uniqueness theorem for a functional equation arising in dynamic programming of continuous multistage decision processes.

This paper presents the core fundamentals and a brief overview of the research of R. G. Gallager [1] on Low-Density Parity-Check (LDPC) codes and of various related notions, such as the encoding and decoding of LDPC codes, the code rate, the parity-check matrix, and the Tanner graph. We also discuss advantages and applications, as well as the usage of LDPC codes in 5G technology. We have simulated the encoding and decoding of LDPC codes and obtained results in the form of BER vs. SNR plots in MATLAB. This report was submitted as an assignment at Nirma University.

Evolution strategies (ESs) are zero-order stochastic black-box optimization heuristics invariant to monotonic transformations of the objective function. They evolve a multivariate normal distribution, from which candidate solutions are generated. Among different variants, CMA-ES is nowadays recognized as one of the state-of-the-art zero-order optimizers for difficult problems. Despite ample empirical evidence that ESs with a step-size control mechanism converge linearly, theoretical guarantees of linear convergence of ESs have been established only on limited classes of functions. In particular, theoretical results on convex functions are missing, where zero-order and also first-order optimization methods are often analyzed. In this paper, we establish almost sure linear convergence and a bound on the expected hitting time of an ES, namely the (1+1)-ES with (generalized) one-fifth success rule and an abstract covariance matrix adaptation with bounded condition number, on a broad class of functions. The analysis holds for monotonic transformations of positively homogeneous functions and of quadratically bounded functions, the latter of which in particular includes monotonic transformations of strongly convex functions with Lipschitz continuous gradient. As far as the authors know, this is the first work that proves linear convergence of an ES on such a broad class of functions.

In this paper, we investigate the initial value problem for the Euler-Riesz system, where the interaction forcing is given by $\nabla(-\Delta)^{s}\rho$ for some $s$ with $-1<s<0$.

We discuss $4$-dimensional achiral Lefschetz fibrations bounding $3$-dimensional open books and study their Lefschetz fibration embeddings in a bounded $6$-dimensional manifold, in the sense of Ghanwat--Pancholi. As an application we give another proof of the fact that every orientable $4$-manifold embeds in $\mathbb{R}^7$.

We study the dimension of the visible part of self-affine sets that satisfy domination and a projection condition. The main result is that the Assouad dimension of the visible part equals 1 for all directions outside the set of limit directions of the cylinders of the self-affine set. The result holds regardless of the overlap of the cylinders. We also discuss the sharpness of the result.

In this paper we study the homogenization of a class of energies concentrated on lines. In dimension $2$ (i.e., in codimension $1$) the problem reduces to the homogenization of partition energies studied in \cite{AB}. There, the key tool is the representation of partitions in terms of $BV$ functions with values in a discrete set. In our general case the key ingredient is the representation of closed loops with discrete multiplicity either as divergence-free matrix-valued measures supported on curves or as $1$-currents with multiplicity in a lattice. In the $3$-dimensional case the main motivation for the analysis of this class of energies is the study of line defects in crystals, the so-called dislocations.

In this paper we construct, by means of a variational formulation, the solutions of a problem of elastodynamics which includes the effect of damage for the elastic material. The result is a wave equation with time-dependent operators which represent the elastic coefficients of the material undergoing damage. The dynamics that we construct also satisfies a threshold condition with the same threshold value that characterizes the quasi-static evolution of damage (see \cite{GL}).

Building on results on arithmetic over trivially valued fields, we give applications to Arakelov geometry over adelic curves. We prove a partial result on the continuity of the arithmetic $\chi$-volume along semiample divisors. Moreover, we give an upper bound estimate for the arithmetic Hilbert-Samuel function.

Non-convex methods for linear inverse problems with low-dimensional models have emerged as an alternative to convex techniques. We propose a theoretical framework where both finite-dimensional and infinite-dimensional linear inverse problems can be studied. We show how the size of the basins of attraction of the minimizers of such problems is linked with the number of available measurements. This framework recovers known results about low-rank matrix estimation and off-the-grid sparse spike estimation, and it provides new results for Gaussian mixture estimation from linear measurements. Keywords: low-dimensional models, non-convex methods, low-rank matrix recovery, off-the-grid sparse recovery, Gaussian mixture model estimation from linear measurements.

We define the equivariant Cox ring of a normal variety with algebraic group action. We study algebraic and geometric aspects of this object and show how it is related to the usual Cox ring. Then, we specialize to the case of normal rational varieties of complexity one under the action of a connected reductive group G. We show that the equivariant Cox ring is finitely generated in this case. Under a mild additional condition, we give a presentation by generators and relations of its subalgebra of U-invariants, where U is the unipotent part of a Borel subgroup of G. The ordinary Cox ring also has a canonical structure of U-algebra, and we prove that the subalgebra of U-invariants is a finitely generated Cox ring of a variety of complexity one under the action of a torus. Using earlier work of Hausen and Herppich, we obtain that this latter algebra is a complete intersection. Iteration of Cox rings has been introduced by Arzhantsev, Braun, Hausen and Wrobel in [1]. For log terminal quasicones with a torus action of complexity one, they proved that the iteration sequence is finite with a finitely generated factorial master Cox ring. We prove that the iteration sequence is finite for equivariant and ordinary Cox rings of normal rational G-varieties of complexity one satisfying a mild additional condition (e.g. complete varieties or almost homogeneous varieties with only constant invertible functions). In the almost homogeneous case, we prove that the equivariant and ordinary master Cox rings are finitely generated and factorial.

We study Cox rings of normal threefolds on which $\mathrm{SL}_2$ acts with a dense orbit. Exploiting the method of U-invariants, we obtain combinatorial criteria for the total coordinate space and the base variety to have log terminal singularities. Also, we develop a general approach to the description of the Cox ring by generators and relations which is effective for normal $\mathrm{SL}_2/\mu_n$-embeddings.

For any subset $\Theta$ of the natural numbers and any dominant weight $\lambda$ we construct an indecomposable module $S_\Theta(\lambda)$ with maximal weight $\lambda$ for the quantum group $U_{{\mathscr Z}_{\mathfrak p}}$, where ${\mathscr Z}_{\mathfrak p}$ is a quantum version of the $p$-adic integers. For $\Theta=\emptyset$ we obtain the Weyl module, for $\Theta=\mathbb N$ we obtain the indecomposable tilting module. Each $S_\Theta(\lambda)$ admits a Weyl filtration, and for $l\in\Theta$ the character of $S_\Theta(\lambda)$ equals the character of an (in general decomposable) tilting module for the quantum group at a primitive $p^l$-th root of unity.

In this work we prove a strong law of large numbers for longest consecutive switches in an IID Bernoulli sequence. Moreover, we give sharp lower and upper bounds for the longest switches. This work extends results of Erd\H{o}s and R\'{e}v\'{e}sz (1975) for the longest head-run and of Hao et al. (2020) for longest consecutive switches in unbiased coin-tossing, and it may find applications in reliability theory, biology, quality control, pattern recognition, finance, etc.

The Learning with Errors (LWE) problem is an $\mathcal{NP}$-hard problem that lies at the foundation of several cryptographic algorithms. Generalizing this principle, several cryptosystems based on the closely related Ring Learning with Errors (RLWE) problem have been proposed within the NIST PQC standardization process, e.g., the systems LAC and NewHope. The combination of encryption and decryption for these kinds of algorithms can be interpreted as data transmission over noisy channels. To the best of our knowledge this paper is the first work that analyzes the capacity of this channel. In particular, we present lower bounds on the capacity, which show that the transmission rate can be significantly increased compared to standard proposals in the literature. Furthermore, under the assumption of stochastically independent coefficient failures we give achievability bounds as well as concrete code constructions achieving a decryption failure rate (DFR) less than $2^{-112}$ for LAC and less than $2^{-216}$ for NewHope for their highest proposed security levels. Our results show that the data rate of the encryption scheme is significantly increased, namely by a factor of $7$ for NewHope and by a factor of $2$ for LAC. This is partly based on choosing larger alphabet sizes. Furthermore, we show how to significantly decrease the decryption failure rate compared to the original proposals while achieving the same bit rate. We also show that the measures we take to achieve these results have no negative impact on the security level. Thus, by means of our constructions, we can either increase the total bit rate while guaranteeing the same DFR or, for the same bit rate, significantly reduce the DFR (e.g., for NewHope from $2^{-216}$ to $2^{-12764}$).

A function $f \in \mathcal{C}(\phi)$ if $1+ zf''(z)/f'(z) \prec \phi (z)$, and $f\in \mathcal{C}_{c}(\phi)$ if $2(zf'(z))'/(f(z)+\overline{f(\bar{z})})' \prec \phi (z)$ for $ z\in \mathbb{D}:=\{z\in \mathbb{C}: |z|<1\}$. In this article, we consider the classes $\mathcal{HC}(\phi)$ and $\mathcal{HC}_{c}(\phi)$ consisting of harmonic mappings $f=h+\overline{g}$ of the form $$ h(z)=z+ \sum \limits_{n=2}^{\infty} a_{n}z^{n} \quad \mbox{and} \quad g(z)=\sum \limits_{n=2}^{\infty} b_{n}z^{n} $$ in the unit disk $\mathbb{D}$, where $h$ belongs to $\mathcal{C}(\phi)$ and $\mathcal{C}_{c}(\phi)$ respectively, with the dilation $g'(z)=\alpha z h'(z)$ and $|\alpha|<1$. Using the Bohr phenomenon for subordination classes \cite[Lemma 1]{bhowmik-2018}, we find a radius $R_{f}<1$ such that the Bohr inequality $$ |z|+\sum_{n=2}^{\infty} (|a_{n}|+|b_{n}|)|z|^{n} \leq d(f(0),\partial f(\mathbb{D})) $$ holds for $|z|=r\leq R_{f}$ for the classes $\mathcal{HC}(\phi)$ and $\mathcal{HC}_{c}(\phi)$.

In this paper, we consider the problem of jointly performing online parameter estimation and optimal sensor placement for a partially observed infinite dimensional linear diffusion process. We present a novel solution to this problem in the form of a continuous-time, two-timescale stochastic gradient descent algorithm, which recursively seeks to maximise the log-likelihood with respect to the unknown model parameters, and to minimise the expected mean squared error of the hidden state estimate with respect to the sensor locations. We also provide extensive numerical results illustrating the performance of the proposed approach in the case that the hidden signal is governed by the two-dimensional stochastic advection-diffusion equation.

We present a second-order extension of the first-order Lohe hermitian sphere (LHS) model and study its emergent asymptotic dynamics. Our proposed model incorporates an inertial effect as a second-order extension. The inertia term can generate oscillatory behavior of particle trajectories in a small time interval (initial layer), which causes a technical difficulty for the application of monotonicity-based arguments. For emergent estimates, we employ a two-point correlation function, defined as an inner product between positions of particles. For a homogeneous ensemble with the same frequency matrix, we provide two sufficient frameworks in terms of system parameters and initial data under which the two-point correlation functions tend to unity, which is exactly complete aggregation. In contrast, for a heterogeneous ensemble with distinct frequency matrices, we provide a sufficient framework in terms of system parameters and initial data which makes the two-point correlation functions close to unity by increasing the principal coupling strength.

A signed graph is a graph in which each edge has a positive or negative sign. In this article, we first characterize distance compatibility in the case of a connected signed graph and discuss the distance compatibility criterion for the Cartesian product, lexicographic product and tensor product of signed graphs. We also deal with the distance matrix of the Cartesian product, lexicographic product and tensor product of signed graphs in terms of the distance matrices of the factor graphs.

Gain graphs are graphs where the edges are given some orientation and labeled with elements (called gains) of a group, so that gains are inverted when we reverse the direction of the edges. Generalizing the notion of gain graphs, skew gain graphs have the property that the gain of a reversed edge is the image of the edge gain under an anti-involution. In this paper, we deal with the adjacency matrix of skew gain graphs with involutive automorphism over a field of characteristic zero and their characteristic polynomials. Spectra of some particular skew gain graphs are also discussed. It is interesting to note that weighted graphs are particular cases of skew gain graphs.

Direct MultiSearch (DMS) is a robust and efficient derivative-free optimization algorithm, able to generate approximations to the complete Pareto front of a given multiobjective optimization (MOO) problem. When first (or higher) order derivatives of the different components of the objective function are available, typical approaches for MOO problems are based on generating a single sequence of iterates that converges to a point with corresponding image lying on the Pareto front (one at a time). The purpose of this work is to assess the potential enrichment of adding first-order information, when derivatives are available, to the DMS framework. For that, we describe and analyze several different combined techniques that maintain the search/poll paradigm of DMS, while adding gradient information to the poll step in a suitable way. To properly evaluate the new proposed schemes, we provide numerical results for a set of benchmark MOO problems, in the form of performance profiles, where common performance metrics considered in the MOO community are reported. The proposed schemes are compared with the original DMS and also with the recently developed MOSQP approach that approximates the entire Pareto front at once, using first- and second-order information.

If $\omega$ is a closed $G$-invariant 2-form and $\mu$ a moment map, then we obtain necessary and sufficient conditions for equivariant pre-quantizability that can be computed in terms of the moment map $\mu$. We also compute the obstructions to lifting the action of $G$ to a pre-quantization bundle of $\omega$. Our results are valid for any compact and connected Lie group $G$.

The exclusive lasso (also known as elitist lasso) regularization has become popular recently due to its superior performance on structured sparsity. Its complex nature poses difficulties for the computation of high-dimensional machine learning models involving such a regularizer. In this paper, we propose an adaptive sieving (AS) strategy for generating solution paths of machine learning models with the exclusive lasso regularizer, wherein a sequence of reduced problems with much smaller sizes need to be solved. In order to solve these reduced problems, we propose a highly efficient dual Newton method based proximal point algorithm (PPDNA). As important ingredients, we systematically study the proximal mapping of the weighted exclusive lasso regularizer and the corresponding generalized Jacobian. These results also make popular first-order algorithms for solving exclusive lasso models practical. Various numerical experiments for the exclusive lasso models have demonstrated the effectiveness of the AS strategy for generating solution paths and the superior performance of the PPDNA.

In this corrigendum, we offer a correction to [J. Korean Math. Soc., 54 (2017), pp. 461--477]. We construct a counterexample for the strengthened Cauchy--Schwarz inequality used in the original paper. In addition, we provide a new proof for Lemma 5 of the original paper, an estimate for the extremal eigenvalues of the standard unpreconditioned FETI-DP dual operator.

We consider nonlinear multibody systems and present a suitable set of coordinates for the internal dynamics which allows us to decouple the internal dynamics without the need to compute the Byrnes-Isidori form. Furthermore, we derive sufficient conditions on the system parameters such that the internal dynamics are bounded-input, bounded-output stable.

The Directed Oberwolfach Problem can be considered as the directed version of the well-known Oberwolfach Problem, first mentioned by Ringel at a conference in Oberwolfach, Germany in 1967. In this paper, we describe some new partial results on the Directed Oberwolfach Problem with variable cycle lengths. In particular, we show that the complete symmetric digraph $K_n^{*}$ admits a $(\vec{C}_2, \ldots, \vec{C}_2, \vec{C}_3)$-factorization for all $n\equiv 1, 3,$ or $7\pmod{8}$. We also show that $K_n^{*}$ admits a $(\vec{C}_2, \vec{C}_{n-2})$-factorization for any integer $n \geq 5$.

In this paper we discuss some examples and general properties of holonomy groups of $\nabla^\varphi$ introduced by Wylie and the author, the connection corresponding to the $N=1$ Bakry-\'Emery Ricci curvature, and also Wylie's $\bar{\mathrm{sec}}_f$. In particular we classify all possible holonomy groups in dimension 2 and also provide two infinite families: $SL_n(\mathbb{R})$ and $SO^+(p,q)$.

We present dimension-free convergence and discretization error bounds for the unadjusted Hamiltonian Monte Carlo algorithm applied to high-dimensional probability distributions of mean-field type. These bounds require the discretization step to be sufficiently small, but do not require strong convexity of either the unary or pairwise potential terms present in the mean-field model. To handle high dimensionality, our proof uses a particlewise coupling that is contractive in a complementary particlewise metric.

We prove that any large even Artin group is poly-free and that any even Artin group based on a triangle graph is also poly-free.

The Probabilistic $p$-Center problem under Pressure ({\tt Min P$p$CP}) is a variant of the usual {\tt Min $p$-Center} problem that we recently introduced in the context of wildfire management. The problem is to locate $p$ shelters minimizing the maximum distance people will have to cover to reach the closest accessible shelter in case of fire. The landscape is divided into zones and is modeled as an edge-weighted graph with vertices corresponding to zones and edges corresponding to direct connections between two adjacent zones. The uncertainty associated with fire outbreaks is modeled using a finite set of fire scenarios. Each scenario corresponds to a fire outbreak on a single zone (i.e., on a vertex), with the main consequence of modifying evacuation paths in two ways. First, an evacuation path cannot pass through the vertex on fire. Second, the fact that someone close to the fire may not take rational decisions when selecting a direction to escape is modeled using new kinds of evacuation paths. In this paper, for a given instance of {\tt Min P$p$CP} defined by an edge-weighted graph $G=(V,E,L)$ and an integer $p$, we characterize the set of feasible solutions of {\tt Min P$p$CP}. We prove that {\tt Min P$p$CP} cannot be approximated with a ratio less than $\frac{56}{55}$ on subgrids (subgraphs of grids) of degree at most 3. Then, we propose some approximation results for {\tt Min P$p$CP}. These results require approximation results for two variants of the (deterministic) {\tt Min $p$-Center} problem called {\tt Min MAC $p$-Center} and {\tt Min Partial $p$-Center}.

We give an easy proof of the fact that $C^\infty$ solutions to non-linear elliptic equations of second order $$ \phi(x, u, D u, D^2 u)=0 $$ are analytic. Following ideas of Kato, the proof uses an inductive estimate for suitable weighted derivatives. We then conclude the proof using Cauchy's method of majorants.

Over a commutative noetherian ring $R$, the prime spectrum controls, via the assignment of support, the structure of both $\mathsf{Mod}(R)$ and $\mathsf{D}(R)$. We show that, just like in $\mathsf{Mod}(R)$, the assignment of support classifies hereditary torsion pairs in the heart of any nondegenerate compactly generated $t$-structure of $\mathsf{D}(R)$. Moreover, we investigate whether these $t$-structures induce derived equivalences, obtaining a new source of Grothendieck categories which are derived equivalent to $\mathsf{Mod}(R)$.

Model predictive control (MPC) samples a generally unknown and complicated feedback law point by point. The solution for the current state $x$ contains, however, more information than only the optimal signal $u$ for this particular state. In fact, it provides an optimal affine feedback law $x\rightarrow u(x)$ on a polytope $\Pi \subset \mathbb{R}^n$, i.e., on a full-dimensional state space set. It is an obvious idea to reuse this affine feedback law as long as possible. Reusing it on its polytope $\Pi$ is too conservative, however, because any $\Pi$ is a state space set with a common affine law $x\rightarrow (u_0^\prime (x), \dots, u_{N-1}^\prime (x))^\prime\in\mathbb{R}^{Nm}$ for the entire horizon $N$. We show that a simple criterion exists for identifying the polytopes that have a common $x\rightarrow u_0(x)$, but may differ with respect to $u_1(x), \dots, u_{N-1}(x)$. Because this criterion is too computationally expensive for online use, we introduce a simple heuristic for the fast construction of a subset of the polytopes of interest. Computational examples show that (i) a considerable fraction of QPs can be avoided (10% to 40%) and (ii) the heuristic results in a reduction very close to the maximum that could be achieved if the explicit solution were available. We stress that the proposed approach is intended for use in online MPC and does not require the explicit solution.

It is well known that the capacity region of an average transmit power constrained Gaussian Broadcast Channel (GBC) with independent noise realizations at the receivers is enlarged by the presence of causal noiseless feedback. Capacity region enlargement is also known to be possible by using only passive noisy feedback, when the GBC has identical noise variances at the receivers. The last fact remains true even when the feedback noise variance is very high, and available only from one of the receivers. While such capacity enlargements are feasible for several other feedback models in the Gaussian BC setting, it is also known that feedback does not change the capacity region for physically degraded broadcast channels. In this paper, we consider a two user GBC with independent noise realizations at the receivers, where the feedback links from the receivers are corrupted by independent additive Gaussian noise processes. We investigate the set of four noise variances, two forward and two feedback, for which no capacity enlargement is possible. A sharp characterization of this region is derived, i.e., any quadruple outside the presented region will lead to a capacity enlargement, whereas quadruples inside will leave the capacity region unchanged. Our results lead to the conclusion that when the forward noise variances are different, too noisy a feedback from one of the receivers alone is not always beneficial for enlarging the capacity region, be it from the stronger user or the weaker one, in sharp contrast to the case of equal forward noise variances.

We classify the Seifert fibrations of lens spaces where the base orbifold is non-orientable. This is an addendum to our earlier paper `Seifert fibrations of lens spaces'. We correct Lemma 4.1 of that paper and fill the gap in the classification that resulted from the erroneous lemma.

Our main discovery is a surprising interplay between quadrature domains on the one hand, and the zero process of the Gaussian Entire Function (GEF) on the other. Specifically, consider the GEF conditioned on the rare hole event that there are no zeros in a given large Jordan domain. We show that in the natural scaling limit, a quadrature domain enclosing the hole emerges as a forbidden region, where the zero density vanishes. Moreover, we give a description of those holes for which the forbidden region is a disk. The connecting link between random zeros and potential theory is supplied by a constrained extremal problem for the Zeitouni-Zelditch functional. To solve this problem, we recast it in terms of a seemingly novel obstacle problem, where the solution is forced to be harmonic inside the hole.

We consider a mathematical model for wound contraction, which is based on solving a momentum balance under the assumptions of isotropy, homogeneity, Hooke's law, infinitesimal strain theory and point forces exerted by cells. However, point forces, described by Dirac delta distributions, lead to a singular solution, which in many cases may cause trouble for finite element methods due to a low degree of regularity. Hence, we consider several alternatives to address point forces, namely whether to treat the region covered by the force-exerting cells as part of the computational domain or as 'holes' in the computational domain. These formalisms develop into the immersed boundary approach and the 'hole approach', respectively. Consistency between these approaches is verified in a theoretical setting, and also confirmed computationally. However, the 'hole approach' is much more expensive and complicated, because it requires mesh adaptation in the case of migrating cells even though it increases the numerical accuracy, which makes it hard to adapt to the multi-cell model. Therefore, for multiple cells, we consider the polygon that is used to approximate the boundary of cells that exert contractile forces. It is found that polygons with a small number of vertices, in particular triangular or square cell boundaries, already give acceptable results at engineering precision, so the approach is suitable for situations with a large number of cells in the computational domain.

It is a remarkable theorem of Maffei--Nakajima that the Slodowy variety, which consists of certain complete flags, can be realized as a certain Nakajima quiver variety of type A. However, the isomorphism is rather implicit, as it requires solving a system of equations in which the variables are linear maps. In this paper, we construct solutions to this system and thus establish an explicit and efficient way to realize these quiver varieties in terms of complete flags in the corresponding Slodowy varieties. As Slodowy varieties naturally contain Springer fibers, we further provide an explicit description of the irreducible components of two-row Springer fibers in terms of a family of kernel relations via quiver representations, which allows us to formulate a characterization of the irreducible components of Springer fibers of classical type.

In this paper, we investigate the convergence speed of the Arimoto-Blahut algorithm. For many channel matrices the convergence is exponential, but for some channel matrices it is slower than exponential. By analyzing the Taylor expansion of the defining function of the Arimoto-Blahut algorithm, we clarify the conditions for exponential or slower convergence. The analysis of the slow convergence is new in this paper. Based on this analysis, we numerically compare the convergence speed of the Arimoto-Blahut algorithm with the values obtained in our theorems for several channel matrices. The purpose of this paper is a complete understanding of the convergence speed of the Arimoto-Blahut algorithm.
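For concreteness, the iteration whose convergence speed is studied above can be sketched as follows. This is a generic textbook implementation of the Arimoto-Blahut capacity iteration, not the authors' code; the function name, stopping tolerance and iteration cap are our own choices.

```python
import numpy as np

def arimoto_blahut(P, tol=1e-12, max_iter=100000):
    """Capacity (in nats) of a DMC with transition matrix P[x, y] = p(y|x)."""
    m = P.shape[0]
    p = np.full(m, 1.0 / m)   # start from the uniform input distribution
    D = np.zeros(m)
    for _ in range(max_iter):
        q = p @ P             # output distribution induced by p
        with np.errstate(divide="ignore", invalid="ignore"):
            # D[x] = KL divergence D(P[x, :] || q), with the convention 0 log 0 = 0
            D = np.where(P > 0, P * np.log(P / q), 0.0).sum(axis=1)
        p_new = p * np.exp(D)
        p_new /= p_new.sum()  # multiplicative update and renormalization
        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new
    # capacity-achieving input distribution and mutual information I(p) in nats
    return p, float(p @ D)
```

For a binary symmetric channel with crossover probability $0.1$, for instance, the iteration converges to the uniform input distribution with capacity $1-h(0.1)\approx 0.531$ bits, where $h$ is the binary entropy function.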

We build blowing-up solutions for a supercritical perturbation of the Yamabe problem on manifolds with boundary, provided the dimension of the manifold is n>6 and the trace-free part of the second fundamental form is non-zero everywhere on the boundary.

The determining number of a graph $G = (V,E)$ is the minimum cardinality of a set $S\subseteq V$ such that pointwise stabilizer of $S$ under the action of $Aut(G)$ is trivial. In this paper, we prove some improved upper and lower bounds on the determining number of Kneser graphs. Moreover, we compute the exact value of the determining number for some subfamilies of Kneser graphs. Finally, we show that the number of Kneser graphs with a given determining number $r$ is an increasing function of $r$.

We consider several mathematical issues regarding models that simulate forces exerted by cells. Since the size of cells is much smaller than the size of the domain of computation, one often considers point forces, modelled by Dirac delta distributions on boundary segments of cells. In the current paper, we treat forces that are directed normal to the cell boundary and toward the cell centre. Since it can be shown that the governing momentum balance equation admits no smooth solution, at least not in $H^1$, we analyse the convergence and quality of the approximation. Furthermore, the finite element problems that arise necessitate scrutinizing alternative model formulations, such as the use of smoothed Dirac distributions, the so-called smoothed particle approach, and the so-called 'hole' approach, where cellular forces are modelled through the use of (natural) boundary conditions. In this paper, we investigate and attempt to quantify the conditions for consistency between the various approaches. This has resulted in error analyses in the $H^1$-norm of the numerical solution based on Galerkin principles that entail Lagrangian basis functions. The paper also addresses well-posedness in terms of existence and uniqueness. The current analysis has been performed for the linear steady-state (hence neglecting inertia and damping) momentum equations under the assumption of Hooke's law.

We study an efficient strategy based on finite elements to value spread options on commodities whose underlying assets follow a dynamic described by a certain class of two-dimensional L\'evy models, by solving their associated partial integro-differential equation (PIDE). To this end we consider a Galerkin approximation in space along with an implicit scheme for time evolution. Diffusion and drift in the associated operator are discretized using an exact Gaussian quadrature, while the integral part corresponding to jumps is approximated using the symbol method recently introduced in the literature. A system with block Toeplitz with Toeplitz blocks (BTTB) matrix is efficiently solved via the biconjugate gradient stabilized method (BiCGSTAB) with a circulant pre-conditioner at each time step. The technique is applied to the pricing of \textit{crack} spread options between the prices of futures on RBOB gasoline (reformulated blendstock for oxygenate blending) and West Texas Intermediate (WTI) oil in NYMEX.

In this survey we discuss derivatives of the Wright functions (of the first and the second kind) with respect to their parameters. Differentiation of these functions leads to infinite power series whose coefficients are quotients of the digamma (psi) and gamma functions. Only in a few cases is it possible to obtain the sums of these series in closed form. The functional form of the power series resembles that derived for the Mittag-Leffler functions. If the Wright functions are treated as generalized Bessel functions, the differentiation operations can be expressed in terms of the Bessel functions and their derivatives with respect to the order. It is demonstrated that in many cases it is possible to derive the explicit form of the Mittag-Leffler functions by performing simple operations with the Laplace transforms of the Wright functions. The Laplace transform pairs of both kinds of Wright functions are discussed for particular values of the parameters. Some transform pairs serve to obtain functional limits by applying the shifted Dirac delta function.

We prove a variant of the well-known result that intertwiners for the bilateral shift on $\ell^2(\mathbb{Z})$ are unitarily equivalent to multiplication operators on $L^2(\mathbb{T})$. This enables us to unify and extend fundamental aspects of rigidity theory for bar-joint frameworks with an abelian symmetry group. In particular, we formulate the symbol function for a wide class of frameworks and show how to construct generalised rigid unit modes in a variety of new contexts.

Error-correcting codes over sets, with applications to DNA storage, are studied. The DNA-storage channel receives a set of sequences, and produces a corrupted version of the set, including sequence loss, symbol substitution, symbol insertion/deletion, and limited-magnitude errors in symbols. Various parameter regimes are studied. New bounds on code parameters are provided, which improve upon known bounds. New codes are constructed, at times matching the bounds up to lower-order terms or small constant factors.

The main purpose of this paper is to find conditions for H\"older calmness of the solution mapping, viewed as a function of the boundary data, of a hemivariational inequality governed by the Navier-Stokes operator. To this end, a more abstract model is studied first: a class of parametric equilibrium problems defined by trifunctions. The presence of trifunctions allows the extension of the monotonicity notions and of the duality principle in the theory of equilibrium problems.

This paper investigates a new class of modifier-adaptation schemes to overcome plant-model mismatch in real-time optimization of uncertain processes. The main contribution lies in the integration of concepts from the areas of Bayesian optimization and derivative-free optimization. The proposed schemes embed a physical model and rely on trust-region ideas to minimize risk during the exploration, while employing Gaussian process regression to capture the plant-model mismatch in a non-parametric way and drive the exploration by means of acquisition functions. The benefits of using an acquisition function, knowing the process noise level, or specifying a nominal process model are illustrated on numerical case studies, including a semi-batch photobioreactor optimization problem.

We show that maximal causal curves for a Lipschitz continuous Lorentzian metric admit a $\mathcal{C}^{1,1}$-parametrization and that they solve the geodesic equation in the sense of Filippov in this parametrization. Our proof shows that maximal causal curves are either everywhere lightlike or everywhere timelike. Furthermore, the proof demonstrates that maximal causal curves for an $\alpha$-H\"older continuous Lorentzian metric admit a $\mathcal{C}^{1,\frac{\alpha}{4}}$-parametrization.

Traditionally, ensembles of Slepian-Wolf (SW) codes are defined such that every bin of each $n$-vector of each source is randomly drawn under the uniform distribution across the sets $\{0,1,\ldots,2^{nR_X}-1\}$ and $\{0,1,\ldots,2^{nR_Y}-1\}$, where $R_X$ and $R_Y$ are the coding rates of the two sources, $X$ and $Y$, respectively. In a few more recent works, where only one source, say $X$, is compressed and the other one, $Y$, serves as side information available at the decoder, the scope is extended to variable-rate SW (VRSW) codes, where the rate is allowed to depend on the type class of the source string, but still, the random-binning distribution is assumed uniform within the corresponding, type-dependent, bin index set. In this expository work, we investigate the role of the uniformity of the random-binning distribution from the perspective of the trade-off between the reliability (defined in terms of the error exponent) and the compression performance (measured from the viewpoint of the source coding exponent). To this end, we study a much wider class of random-binning distributions, which includes the ensemble of VRSW codes as a special case, but also goes considerably beyond it. We first show that, with the exception of some pathological cases, the smaller ensemble, of VRSW codes, is as good as the larger ensemble in terms of the trade-off between the error exponent and the source coding exponent. Notwithstanding this finding, the wider class of ensembles proposed is motivated in two ways. The first is that it outperforms VRSW codes in the above-mentioned pathological cases, and the second is that it allows robustness: in the event of a system failure that causes unavailability of the compressed bit-stream from one of the sources, it still allows reconstruction of the other source within some controllable distortion.

We exhibit non-equivariant perturbations of the blowup solutions constructed in \cite{KST} for energy critical wave maps into $\mathbb{S}^2$. Our admissible class of perturbations is an open set in some sufficiently smooth topology and vanishes near the light cone. We show that the blowup solutions from \cite{KST} are rigid under such perturbations, including the space-time location of blowup. As blowup is approached, the dynamics agree with the classification obtained in \cite{DJKM}, and all six symmetry parameters converge to limiting values. Compared to the previous work \cite{KMiao} in which the rigidity of the blowup solutions from \cite{KST} under equivariant perturbations was proved, the class of perturbations considered in the present work does not impose any symmetry restrictions. Separation of variables and decomposing into angular Fourier modes leads to an infinite system of coupled nonlinear equations, which we solve for small admissible data. The nonlinear analysis is based on the distorted Fourier transform, associated with an infinite family of Bessel type Schr\"odinger operators on the half-line indexed by the angular momentum~$n$. A semi-classical WKB-type spectral analysis relative to the parameter $\hbar=\frac{1}{n+1}$ for large $|n|$ allows us to effectively determine the distorted Fourier basis for the entire infinite family. Our linear analysis is based on the global Liouville-Green transform as in the earlier works \cite{CSST, CDST}.

The Olson constant $\mathcal{O}L(\mathbb{F}_{p}^{d})$ represents the minimum positive integer $t$ with the property that every subset $A\subset \mathbb{F}_{p}^{d}$ of cardinality $t$ contains a nonempty subset with vanishing sum. The problem of estimating $\mathcal{O}L(\mathbb{F}_{p}^{d})$ is one of the oldest questions in additive combinatorics, with a long and interesting history even for the case $d=1$. In this paper, we prove that for any fixed $d \geq 2$ and $\epsilon > 0$, the Olson constant of $\mathbb{F}_{p}^{d}$ satisfies the inequality $$\mathcal{O}L(\mathbb{F}_{p}^{d}) \leq (d-1+\epsilon)p$$ for all sufficiently large primes $p$. This settles a conjecture of Hoi Nguyen and Van Vu.
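For small parameters the definition can be checked directly. The following brute-force sketch (our illustration, restricted to $d=1$, where the asymptotics differ from the theorem above) computes $\mathcal{O}L(\mathbb{F}_{p})$ by testing every subset:

```python
from itertools import combinations

def has_zero_sum_subset(A, p):
    """True iff some nonempty subset of A sums to 0 modulo p."""
    reachable = set()                 # sums of nonempty subsets seen so far
    for a in A:
        reachable |= {(s + a) % p for s in reachable} | {a % p}
    return 0 in reachable

def olson_constant(p):
    """Smallest t such that every t-subset of F_p has a nonempty zero-sum subset."""
    for t in range(1, p + 1):
        if all(has_zero_sum_subset(A, p) for A in combinations(range(p), t)):
            return t

print(olson_constant(5))   # 3: e.g. {1, 2} is zero-sum free, but no 3-subset is
```

The same subset-sum routine extends to $\mathbb{F}_p^d$ by replacing residues with coordinate-wise sums of vectors, though the exhaustive search quickly becomes infeasible.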

In this paper, we consider high-dimensional stationary processes where a new observation is generated from a compressed version of past observations. The specific evolution is modeled by an encoder-decoder structure. We estimate the evolution with an encoder-decoder neural network and give upper bounds for the expected forecast error under specific structural and sparsity assumptions. The results are shown separately for conditions either on the absolutely regular mixing coefficients or the functional dependence measure of the observed process. In a quantitative simulation we discuss the behavior of the network estimator under different model assumptions. We corroborate our theory by a real data example where we consider forecasting temperature data.

Sumterms are introduced as syntactic entities, and sumtuples are introduced as semantic entities. Equipped with these concepts, a new description is obtained of the notion of a sum as (the name for) a role which can be played by a number. Sumterm splitting operators are introduced, and it is argued that without further precautions the presence of these operators gives rise to instances of the so-called sum splitting paradox. A survey of solutions to the sum splitting paradox is given.

In this note we consider the optimal design problem for estimating the slope of a polynomial regression with no intercept at a given point, say $z$. In contrast to previous work, which considers symmetric design spaces, we investigate the model on the interval $[0, a]$ and characterize those values of $z$ for which an explicit solution of the optimal design problem is possible.

We construct smooth, non-symmetric plasma equilibria which possess closed, nested flux surfaces and solve the Magnetohydrostatic (steady three-dimensional incompressible Euler) equations sustained by a small force. The solutions are also `nearly' quasisymmetric. The primary idea is that, given a desired quasisymmetry direction $\xi$, change the smooth structure on space so that the vector field $\xi$ is Killing for the new metric and construct $\xi$--symmetric solutions of the Magnetohydrostatic equations on that background by solving a generalized Grad-Shafranov equation. If $\xi$ is close to a symmetry of Euclidean space, then these are solutions on flat space up to a small forcing.

The odd diagram of a permutation is a subset of the classical diagram with additional parity conditions. In this paper, we study classes of permutations with the same odd diagram, which we call odd diagram classes. First, we prove a conjecture relating odd diagram classes and 213- and 312-avoiding permutations. Secondly, we show that each odd diagram class is a Bruhat interval. Instrumental to our proofs is an explicit description of the Bruhat edges that link permutations in a class.

We begin with a context more general than set theory. The basic ingredients are essentially the object and functor primitives of category theory, and the logic is weak, requiring neither the Law of Excluded Middle nor quantification. Inside this we find "relaxed" set theory, which is much easier to use with full precision than traditional axiomatic theories. There is also an implementation of the Zermelo-Fraenkel-Choice axioms that is maximal in the sense that any other implementation uniquely embeds in it.

We present an algorithm for the solution of a simultaneous space-time discretization of linear parabolic evolution equations with a symmetric differential operator in space. Building on earlier work, we recast this discretization into a Schur-complement equation whose solution is a quasi-optimal approximation to the weak solution of the equation at hand. Choosing a tensor-product discretization, we arrive at a remarkably simple linear system. Using wavelets in time and standard finite elements in space, we solve the resulting system in optimal linear complexity on a single processor, and in optimal logarithmic complexity when parallelized in both space and time. We complement these theoretical findings with large-scale parallel computations showing the effectiveness of the method.

We give another proof of the recent result of Ringel, which asserts equality between the finitistic dimension and the delooping level of Nakayama algebras. The main tool is the syzygy filtration method introduced in \cite{sen2019}. In particular, we give a characterization of Nakayama algebras of finitistic dimension one.

In this paper, we consider the heat equation on a plate of thickness h > 0 heated by a heat source on the upper and lower faces of the plate. We obtain an asymptotic profile of the solution as the thickness h approaches zero.

In this paper, we develop new affine-invariant algorithms for solving composite convex minimization problems with bounded domain. We present a general framework of Contracting-Point methods, which solve at each iteration an auxiliary subproblem restricting the smooth part of the objective function onto a contraction of the initial domain. This framework provides us with a systematic way of developing optimization methods of different order, endowed with global complexity bounds. We show that, using an appropriate affine-invariant smoothness condition, it is possible to implement one iteration of the Contracting-Point method by one step of the pure tensor method of degree $p \geq 1$. The resulting global rate of convergence in functional residual is then ${\cal O}(1 / k^p)$, where $k$ is the iteration counter. It is important that all constants in our bounds are affine-invariant. For $p = 1$, our scheme recovers the well-known Frank-Wolfe algorithm, providing it with a new interpretation from the general perspective of tensor methods. Finally, within our framework, we present an efficient implementation and total complexity analysis of the inexact second-order scheme ($p = 2$), called the Contracting Newton method. It can be seen as a proper implementation of the trust-region idea. Preliminary numerical results confirm its good practical performance, both in the number of iterations and in computational time.
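The $p=1$ case is easy to make concrete. Below is a minimal Frank-Wolfe sketch (ours, not the paper's code) minimizing a quadratic over the probability simplex with the classical step size $\gamma_k = 2/(k+2)$; the linear minimization oracle over the simplex simply picks the best vertex.

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, iters=500):
    """Frank-Wolfe: move toward the linear minimizer of the gradient over the domain."""
    x = x0
    for k in range(iters):
        s = lmo(grad(x))                # argmin_{s in domain} <grad(x), s>
        gamma = 2.0 / (k + 2)           # classical step-size schedule
        x = (1 - gamma) * x + gamma * s
    return x

# model problem: min ||x - c||^2 over the probability simplex
rng = np.random.default_rng(0)
c = rng.standard_normal(10)
grad = lambda x: 2.0 * (x - c)

def simplex_lmo(g):
    s = np.zeros_like(g)
    s[np.argmin(g)] = 1.0               # best vertex of the simplex
    return s

x = frank_wolfe(grad, simplex_lmo, np.ones(10) / 10)
g = grad(x)
fw_gap = g @ (x - simplex_lmo(g))       # duality gap, decaying like O(1/k)
```

Every iterate is a convex combination of simplex vertices, so feasibility is maintained for free; this is the affine-invariance that the Contracting-Point framework generalizes to higher-order steps.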

We introduce a new digraph width measure called directed branch-width. To do this, we generalize a characterization of graph classes of bounded tree-width in terms of their line graphs to digraphs. Under parameterizations by directed branch-width we obtain linear time algorithms for many problems, such as directed Hamilton path and Max-Cut, which are hard when parameterized by other known directed width measures. More generally, we obtain an algorithmic meta-theorem for the model-checking problem for a restricted variant of MSO_2-logic on classes of bounded directed branch-width.

The First Fundamental Theorem of Invariant Theory describes a minimal generating set of the invariant polynomial ring under the action of some group $G$. In this note we give an elementary and direct proof for $\operatorname{GL}_2(K)$ and $\operatorname{SL}_2(K)$ over any infinite field $K$. Our proof can be generalized to $\operatorname{GL}_m(K)$ and $\operatorname{SL}_m(K)$ for $m>2$. Moreover, we present a family of counter-examples to the statements of the First Fundamental Theorems for all finite fields and $m=2$.

Phase-space analysis, or time-frequency analysis, can be thought of as Fourier analysis simultaneously in time and in frequency; it originates from signal processing and quantum mechanics. On groups having a unitary Fourier transform, we introduce and study a natural family of time-frequency transforms, and investigate the related pseudo-differential operators.

The integral representation of Choquet operators defined on a space C(X) is established by using the Choquet-Bochner integral of a real-valued function with respect to a vector capacity.

We study the concept of hypervaluations on hyperfields. In particular, we show that any hypervaluation from a hyperfield onto an ordered canonical hypergroup is the composition of a hypervaluation onto an ordered abelian group (which induces the same valuation hyperring) and an order preserving homomorphism of hypergroups.

If $G$ is a graph and $\mathcal{H}$ is a set of subgraphs of $G$, we say that an edge-coloring of $G$ is $\mathcal{H}$-polychromatic if every graph from $\mathcal{H}$ gets all colors present in $G$ on its edges. The $\mathcal{H}$-polychromatic number of $G$, denoted $\operatorname{poly}_\mathcal{H} (G)$, is the largest number of colors in an $\mathcal{H}$-polychromatic coloring. In this paper we determine $\operatorname{poly}_\mathcal{H} (G)$ exactly when $G$ is a complete graph on $n$ vertices, $q$ is a fixed nonnegative integer, and $\mathcal{H}$ is one of three families: the family of all matchings spanning $n-q$ vertices, the family of all $2$-regular graphs spanning at least $n-q$ vertices, and the family of all cycles of length precisely $n-q$. There are connections with an extension of results on Ramsey numbers for cycles in a graph.

This paper considers Lorden's minimax quickest change detection (QCD) problem with a privacy constraint. The goal is to sanitize a signal to satisfy inference privacy requirements while being able to detect a change quickly. We show that the Generalized Likelihood Ratio (GLR) CuSum achieves asymptotic optimality with a properly designed sanitization channel. We formulate the design of this sanitization channel as an optimization problem, which is however challenging to solve. We propose relaxations to the optimization problem and develop algorithms to obtain a solution. We also consider the privacy-aware QCD problem under a decentralized framework and propose algorithms to solve the relaxed channel design problem under this framework.
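As a toy illustration of the detection side of this problem (without the sanitization channel, and using plain CuSum with a known post-change distribution rather than the GLR variant treated in the paper), the following sketch detects a Gaussian mean shift:

```python
import random

def cusum(xs, mu0, mu1, sigma, threshold):
    """Page's CuSum for a Gaussian mean shift mu0 -> mu1: stop at the first
    time the running statistic crosses the threshold."""
    W = 0.0
    for k, x in enumerate(xs, start=1):
        # log-likelihood ratio of N(mu1, sigma^2) against N(mu0, sigma^2)
        llr = ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        W = max(0.0, W + llr)
        if W >= threshold:
            return k
    return None

rng = random.Random(0)
nu = 200                                         # true change point
xs = [rng.gauss(0.0, 1.0) for _ in range(nu)] + \
     [rng.gauss(1.0, 1.0) for _ in range(300)]
tau = cusum(xs, mu0=0.0, mu1=1.0, sigma=1.0, threshold=8.0)
```

The expected detection delay is roughly the threshold divided by the Kullback-Leibler divergence between post- and pre-change distributions (here $8 / 0.5 = 16$ samples); sanitizing the signal for privacy degrades this divergence, which is the trade-off the paper optimizes.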

We prove results exploring the relationship between the fundamental group and the second Betti number of minimal symplectic fillings of lens spaces. These results unify and generalize several disparate facts appearing in the literature. The Fibonacci numbers make a cameo appearance.

We propose a novel method---based on local moment matching---to approximate the value function of a Markov Decision Process. The method is grounded in recent work by Braverman et al (2020) that relates the solution of the Bellman equation to that of a PDE where, in the spirit of the central limit theorem, the transition matrix is reduced to its local first and second moments. Solving the PDE is not required by our method. Instead we construct a "sister" Markov chain whose two local transition moments are (approximately) identical with those of the focal chain. Because they share these moments, the original chain and its "sister" are coupled through the PDE, a coupling that facilitates optimality guarantees. We show how this view can be embedded into the existing aggregation framework of ADP, providing a disciplined mechanism to tune the aggregation and disaggregation probabilities. This embedding into aggregation also reveals how the approximation's accuracy depends on a certain local linearity of the value function. The computational gains arise from the reduction of the effective state space from $N$ to $N^{\frac{1}{2}+\epsilon}$, as one might intuitively expect from approximations grounded in the central limit theorem.

We extend previous work on Hrushovski's stabilizer theorem and prove a measure-theoretic version of a well-known result of Pillay-Scanlon-Wagner on products of three types. This generalizes results of Gowers and of Nikolov-Pyber on products of three sets, and yields model-theoretic proofs of existing asymptotic results for quasirandom groups. Furthermore, we bound the number of solutions to certain equations, such as $x^n\cdot y^m=z^k$ for $n+m=k$, in subsets of small tripling in groups. In particular, we show the existence of lower bounds on the number of arithmetic progressions of length $3$ in subsets of small doubling without involutions in arbitrary abelian groups.

Consider the characteristic initial value problem for the Einstein vacuum equations without any symmetry assumptions. Impose a sequence of data on two intersecting null hypersurfaces, each of which is foliated by spacelike $2$-spheres. Assume that the sequence of data is such that the derivatives of the metrics along null directions are only uniformly bounded in $L^2$ but the derivatives of the metrics along the directions tangential to the $2$-spheres obey higher regularity bounds uniformly. By the results in [J. Luk and I. Rodnianski, Nonlinear interaction of impulsive gravitational waves for the vacuum Einstein equations, Camb. J. Math. 5(4), 2017], it follows that the sequence of characteristic initial value problems gives rise to a sequence of vacuum spacetimes $(\mathcal M, g_n)$ in a fixed double-null domain $\mathcal M$. Since the existence theorem requires only very low regularity, the sequence of solutions may exhibit both oscillations and concentrations, and the limit need not be vacuum. We prove nonetheless that, after passing to a subsequence, the metrics converge in $C^0$ and weakly in $W^{1,2}$ to a solution of the Einstein-null dust system with two families of (potentially measure-valued) null dust. We show moreover that all sufficiently regular solutions to the Einstein-null dust system (with potentially measure-valued null dust) adapted to a double null coordinate system arise locally as weak limits of solutions to the Einstein vacuum system in the manner described above. As a consequence, we also give the first general local existence and uniqueness result for solutions to the Einstein-null dust system for which the null dusts are only measures. This in particular includes as a special case solutions featuring propagating and interacting shells of null dust.

It is a folklore conjecture that the M\"obius function exhibits cancellation on shifted primes; that is, $\sum_{p\le X}\mu(p+h) \ = \ o(\pi(X))$ as $X\to\infty$ for any fixed shift $h>0$. We prove the conjecture on average for shifts $h\le H$, provided $\log H/\log\log X\to\infty$. We also obtain results for shifts of prime $k$-tuples, and for higher correlations of M\"obius with von Mangoldt and divisor functions. Our argument combines sieve methods with a refinement of Matom\"aki, Radziwi\l\l, and Tao's work on an averaged form of Chowla's conjecture.
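The conjectured cancellation is easy to probe numerically on a small scale. The sketch below (our illustration, not part of the paper's argument) tabulates $\mu$ with a divisor-sum sieve and evaluates $\sum_{p\le X}\mu(p+h)$ for a single shift:

```python
def mobius_table(N):
    """mu[n] computed from the identity sum_{d | n} mu(d) = [n = 1]."""
    mu = [0] * (N + 1)
    mu[1] = 1
    for i in range(1, N + 1):
        for j in range(2 * i, N + 1, i):
            mu[j] -= mu[i]
    return mu

def primes_upto(N):
    """Sieve of Eratosthenes."""
    sieve = [True] * (N + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(N ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, N + 1, i):
                sieve[j] = False
    return [p for p in range(2, N + 1) if sieve[p]]

X, h = 10000, 2
mu = mobius_table(X + h)
primes = primes_upto(X)
S = sum(mu[p + h] for p in primes)
print(S, len(primes))       # compare |S| against pi(X) = 1229
```

Such a single-shift computation only illustrates the expected cancellation; the theorem concerns averages over many shifts $h \le H$.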

One of the most common hypotheses in the theory of nonsmooth dynamics is that the switching manifold is a regular surface, in which case there is at least the well-defined and established Filippov dynamics. However, although present in many relevant models, systems with singular switching manifolds still lack such a well-established dynamics. In this work we explore a framework that, through blow-ups and singular perturbation, allows the extension of Filippov dynamics to the singular case. More specifically, we focus on a codimension-2 configuration in $\mathbb{R}^3$ known as the double discontinuity, whose switching manifold consists of the union of two perpendicular planes intersecting in a straight line, where ordinary Filippov dynamics is undefined. When the configuration is generated by affine vector fields, we provide theorems describing the induced dynamics on the singular part. Furthermore, Peixoto-like theorems on structural stability are also derived.

Topological statistics, in the form of persistence diagrams, are a class of shape descriptors that capture global structural information in data. The mapping from data structures to persistence diagrams is almost everywhere differentiable, allowing for topological gradients to be backpropagated to ordinary gradients. However, as a method for optimizing a topological functional, this backpropagation method is expensive, unstable, and produces very fragile optima. Our contribution is to introduce a novel backpropagation scheme that is significantly faster, more stable, and produces more robust optima. Moreover, this scheme can also be used to produce a stable visualization of dots in a persistence diagram as a distribution over critical, and near-critical, simplices in the data structure.

We consider the problem of maximizing the asymptotic growth rate of an investor under drift uncertainty in the setting of stochastic portfolio theory (SPT). As in the work of Kardaras and Robertson, we take as inputs (i) a Markovian volatility matrix $c(x)$ and (ii) an invariant density $p(x)$ for the market weights, but we additionally impose long-only constraints on the investor. Our principal contribution is proving an existence and uniqueness result for the class of concave functionally generated portfolios and developing a finite-dimensional approximation, which can be used to numerically find the optimum. In addition to the general results outlined above, we propose the use of a broad class of models for the volatility matrix $c(x)$, which can be calibrated to data and under which we obtain explicit formulas for the optimal unconstrained portfolio for any invariant density.

In this paper, we analyze the asymptotic behavior of the alternating direction method of multipliers (ADMM) for compressed sensing, where we reconstruct an unknown structured signal from its underdetermined linear measurements. The analytical tool used in this paper is the recently developed convex Gaussian min-max theorem (CGMT), which can be applied to various convex optimization problems to obtain their asymptotic error performance. In our analysis of ADMM, we analyze the convex subproblem in each ADMM update and characterize the asymptotic distribution of the tentative estimate obtained at each iteration. The result shows that the update equations in ADMM can be decoupled into a scalar-valued stochastic process in the large system limit. From this asymptotic result, we can predict the evolution of the error (e.g., mean-square error (MSE) and symbol error rate (SER)) in ADMM for large-scale compressed sensing problems. Simulation results show that the empirical performance of ADMM and its theoretical prediction are close to each other in sparse vector reconstruction and binary vector reconstruction.
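To make the setting concrete, here is a minimal ADMM for the LASSO formulation of noiseless sparse recovery, a standard instance of the problem class analyzed above (our sketch; the dimensions and regularization parameter are illustrative, not from the paper):

```python
import numpy as np

def admm_lasso(A, y, lam, rho=1.0, iters=300):
    """Scaled ADMM for min_x 0.5 ||A x - y||^2 + lam ||x||_1."""
    n = A.shape[1]
    z = np.zeros(n); u = np.zeros(n)
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))   # factor once, reuse
    Aty = A.T @ y
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Aty + rho * (z - u)))
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft threshold
        u += x - z                                       # dual (scaled) update
    return z

rng = np.random.default_rng(0)
n, m, k = 200, 80, 5                     # ambient dim, measurements, sparsity
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.choice([-1.0, 1.0], size=k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x0
xhat = admm_lasso(A, y, lam=0.01)
rel_err = np.linalg.norm(xhat - x0) / np.linalg.norm(x0)
```

The CGMT analysis in the paper characterizes exactly how quantities like `rel_err` evolve over the iterations in the large system limit $m, n \to \infty$ with $m/n$ fixed.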

We examine a new variant of the classic prisoners and lightswitches puzzle: A warden leads his $n$ prisoners in and out of $r$ rooms, one at a time, in some order, with each prisoner eventually visiting every room an arbitrarily large number of times. The rooms are indistinguishable, except that each one has $s$ lightswitches; the prisoners win their freedom if at some point a prisoner can correctly declare that each prisoner has been in every room at least once. What is the minimum number of switches per room, $s$, such that the prisoners can manage this? We show that if the prisoners do not know the switches' starting configuration, then they have no chance of escape -- but if the prisoners do know the starting configuration, then the minimum sufficient $s$ is surprisingly small. The analysis gives rise to a number of puzzling open questions, as well.

In a random sequential adsorption process, objects are deposited randomly, irreversibly, and sequentially; if an attempt to add an object results in an overlap with previously deposited objects, the attempt is discarded. The process continues until the system reaches a jammed state in which no further additions are possible. Exact analyses have been performed only in one-dimensional models, and the average number of adsorbed particles has been computed in a few solvable situations. We analyze a process in which landing on an empty site is allowed only when at least $b$ neighboring sites to the left and to the right are empty. For the minimal model ($b=1$), we compute the full counting statistics of the occupation number.
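The minimal model $b=1$ is easy to simulate. In the sketch below (our illustration, not the paper's exact computation), a site on a ring can be occupied only if it and both neighbours are empty; picking uniformly among the currently fillable sites is a rejection-free scheme that yields the same jammed-state distribution as discarding blocked attempts. This model maps to dimer deposition, so the mean jamming density is the classical $(1-e^{-2})/2 \approx 0.4323$.

```python
import math
import random

def jammed_count(L, rng):
    """Rejection-free RSA on a ring of L sites with b = 1: occupying a site
    permanently blocks it and its two neighbours.  Returns the adsorbed number."""
    fillable = set(range(L))
    count = 0
    while fillable:
        i = rng.choice(tuple(fillable))   # uniform over currently fillable sites
        count += 1
        for j in (i - 1, i, i + 1):
            fillable.discard(j % L)
    return count

rng = random.Random(1)
L, trials = 200, 200
density = sum(jammed_count(L, rng) for _ in range(trials)) / (trials * L)
target = (1.0 - math.exp(-2.0)) / 2.0     # classical 1D jamming density
print(density, target)
```

Collecting the histogram of `jammed_count` over many trials gives a Monte Carlo estimate of the full counting statistics that the paper computes exactly.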

Neurodegenerative diseases, such as Alzheimer's or Parkinson's disease, show characteristic degradation of structural brain networks. This degradation eventually leads to changes in the network dynamics and degradation of cognitive functions. Here, we model the progression in terms of coupled physical processes: The accumulation of toxic proteins, given by a nonlinear reaction-diffusion transport process, yields an evolving brain connectome characterized by weighted edges on which a neuronal-mass model evolves. The progression of the brain functions can be tested by simulating the resting-state activity on the evolving brain network. We show that while the evolution of edge weights plays a minor role in the overall progression of the disease, dynamic biomarkers predict a transition over a period of 10 years associated with strong cognitive decline.

We propose an analytical approach for computing the eigenspectrum and corresponding eigenstates of a hyperbolic double well potential of arbitrary height or width, which goes beyond the usual techniques applied to quasi-exactly solvable models. We map the time-independent Schr\"odinger equation onto the Heun confluent differential equation, which is solved by using an infinite power series. The coefficients of this series are polynomials in the quantisation parameter, whose roots correspond to the system's eigenenergies. This leads to a quantisation condition that allows us to determine a whole spectrum, instead of individual eigenenergies. This method is then employed to perform an in-depth analysis of electronic wave-packet dynamics, with emphasis on intra-well tunneling and the interference-induced quantum bridges reported in a previous publication [H. Chomet et al., New J. Phys. 21, 123004 (2019)]. Considering initial wave packets of different widths and peak locations, we compute autocorrelation functions and Wigner quasiprobability distributions. Our results exhibit an excellent agreement with numerical computations, and allow us to disentangle the different eigenfrequencies that govern the phase-space dynamics.

Due to the physics behind quantum computing, quantum circuit designers must adhere to the constraints posed by the limited interaction distance of qubits. Existing circuits need therefore to be modified via the insertion of SWAP gates, which alter the qubit order by interchanging the location of two qubits' quantum states. We consider the Nearest Neighbor Compliance problem on a linear array, where the number of required SWAP gates is to be minimized. We introduce an Integer Linear Programming model of the problem whose size scales polynomially in the number of qubits and gates. Furthermore, we solve $131$ benchmark instances to optimality using the commercial solver CPLEX. The benchmark instances are substantially larger than those evaluated with exact methods before; the largest circuits contain up to $18$ qubits or over $100$ quantum gates. This formulation also appears suitable for developing heuristic methods, since (near-)optimal solutions are discovered quickly in the search process.

Motivated by future automotive applications, we study some joint radar target detection and parameter estimation problems where the transmitter, equipped with a mono-static MIMO radar, wishes to detect multiple targets and then estimate their respective parameters, while simultaneously communicating information data using orthogonal time frequency space (OTFS) modulation. Assuming that the number of radio frequency chains is smaller than the number of antennas over the mmWave frequency band, we design hybrid digital-analog beamforming at the radar transmitter adapted to different operating phases. The first scenario considers a wide angular beam in order to perform the target detection and parameter estimation, while multicasting a common message to all possible active users. The second scenario considers narrow angular beams to send information streams individually to the already detected users and simultaneously keep track of their respective parameters. Under this setup, we propose an efficient maximum likelihood scheme combined with hybrid beamforming to jointly perform target detection and parameter estimation. Our numerical results demonstrate that the proposed algorithm is able to reliably detect multiple targets with a sufficient number of antennas and achieves the Cram\'er-Rao lower bound for radar parameter estimation such as delay, Doppler and angle-of-arrival (AoA).

In this paper an additive regression model for a symmetric positive-definite matrix valued response and multiple scalar predictors is proposed. The model exploits the abelian group structure inherited from either the Log-Cholesky metric or the Log-Euclidean framework, which turns the space of symmetric positive-definite matrices into a Riemannian manifold and further into a bi-invariant Lie group. The additive model for responses in the space of symmetric positive-definite matrices with either of these metrics is shown to connect to an additive model on a tangent space. This connection not only yields an efficient algorithm to estimate the component functions but also makes it possible to generalize the proposed additive model to general Riemannian manifolds that may not have a Lie group structure. Optimal asymptotic convergence rates and asymptotic normality of the estimated component functions are also established. Numerical studies show that the proposed model enjoys superior numerical performance, especially when there are multiple predictors. The practical merits of the proposed model are demonstrated by analyzing diffusion tensor brain imaging data.
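The tangent-space connection can be sketched in a few lines (a minimal illustration of the Log-Euclidean chart, not the paper's estimator): map each SPD matrix to the tangent space by the matrix logarithm, do ordinary additive/averaging operations there, and map back by the matrix exponential.

```python
import numpy as np

def spd_log(S):
    """Matrix logarithm of a symmetric positive-definite matrix via its
    eigendecomposition (the Log-Euclidean chart)."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.log(w)) @ V.T

def spd_exp(X):
    """Matrix exponential of a symmetric matrix (the inverse chart)."""
    w, V = np.linalg.eigh(X)
    return V @ np.diag(np.exp(w)) @ V.T

def log_euclidean_mean(mats):
    """Frechet mean under the Log-Euclidean metric: average in the
    tangent (log) space, then map back.  Additive models for SPD
    responses operate in this chart in the same spirit."""
    return spd_exp(sum(spd_log(S) for S in mats) / len(mats))

A = np.array([[2.0, 0.0], [0.0, 2.0]])
B = np.array([[8.0, 0.0], [0.0, 8.0]])
M = log_euclidean_mean([A, B])
print(M)   # geometric (not arithmetic) interpolation: diag(4, 4)
```

Working in the log chart keeps every output symmetric positive-definite by construction, which is the key advantage over naive entrywise regression on the matrix entries.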

This paper presents HDGlab, an open source MATLAB implementation of the hybridisable discontinuous Galerkin (HDG) method. The main goal is to provide a detailed description of both the HDG method for elliptic problems and its implementation available in HDGlab. Ultimately, this is expected to make this relatively new advanced discretisation method more accessible to the computational engineering community. HDGlab offers features not available in other freely available implementations of the HDG method. First, it implements high-order polynomial shape functions up to degree nine, with both equally-spaced and Fekete nodal distributions. Second, it supports curved isoparametric simplicial elements in two and three dimensions. Third, it supports non-uniform polynomial degree approximations and provides a flexible structure for devising degree-adaptivity strategies. Finally, an interface with the open-source high-order mesh generator Gmsh is provided to facilitate its application to practical engineering problems.
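To give a flavour of the nodal bases mentioned above, here is a minimal sketch (in Python rather than HDGlab's MATLAB, and for equally spaced 1D nodes only; the Fekete distributions and simplicial elements are omitted) of high-order Lagrange shape functions of the kind such codes build on.

```python
def lagrange_basis(nodes, x):
    """Evaluate all 1D Lagrange shape functions defined on `nodes` at
    the point `x`.  Nodal polynomial bases of this kind (here degree 9
    on equally spaced points) underlie high-order HDG shape functions."""
    vals = []
    for i, xi in enumerate(nodes):
        li = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                li *= (x - xj) / (xi - xj)
        vals.append(li)
    return vals

degree = 9
nodes = [-1.0 + 2.0 * k / degree for k in range(degree + 1)]  # on [-1, 1]
phi = lagrange_basis(nodes, 0.3)
print(round(sum(phi), 10))   # partition of unity: the basis sums to 1
```

Each shape function equals 1 at its own node and 0 at the others, which is what makes nodal distributions (equally spaced vs. Fekete) a meaningful design choice at high degree.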

The sparse identification of nonlinear dynamics (SINDy) is a regression framework for the discovery of parsimonious dynamic models and governing equations from time-series data. As with all system identification methods, noisy measurements compromise the accuracy and robustness of the model discovery procedure. In this work, we develop a variant of the SINDy algorithm that integrates automatic differentiation and recent time-stepping constraints motivated by Rudy et al. for simultaneously (i) denoising the data, (ii) learning and parametrizing the noise probability distribution, and (iii) identifying the underlying parsimonious dynamical system responsible for generating the time-series data. Within this integrated optimization framework, noise can thus be separated from signal, resulting in an architecture that is approximately twice as robust to noise as state-of-the-art methods, handling as much as 40% noise on a given time-series signal and explicitly parametrizing the noise probability distribution. We demonstrate this approach on several numerical examples, from Lotka-Volterra models to the spatio-temporal Lorenz 96 model. Further, we show the method can identify a diversity of probability distributions including Gaussian, uniform, Gamma, and Rayleigh.
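The denoising variant is beyond a few lines, but the core SINDy regression it builds on (sequentially thresholded least squares on a library of candidate terms) can be sketched; the toy system $\dot x = -2x$ and the cubic library below are illustrative choices, not the paper's examples.

```python
import numpy as np

def sindy(X, dX, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares, the core SINDy step:
    regress derivatives on a candidate-term library, zero out small
    coefficients, and refit on the surviving terms."""
    coef, *_ = np.linalg.lstsq(X, dX, rcond=None)
    for _ in range(n_iter):
        small = np.abs(coef) < threshold
        coef[small] = 0.0
        big = ~small
        if big.any():
            coef[big], *_ = np.linalg.lstsq(X[:, big], dX, rcond=None)
    return coef

t = np.linspace(0.0, 2.0, 200)
x = np.exp(-2.0 * t)                          # trajectory of dx/dt = -2 x
dx = -2.0 * x                                 # derivatives (noise-free here)
library = np.column_stack([x, x**2, x**3])    # candidate terms x, x^2, x^3
print(sindy(library, dx))                     # recovers [-2, 0, 0]
```

On noise-free data this recovers the single active term exactly; the abstract's contribution is precisely about keeping this sparse recovery stable when the measurements are heavily corrupted.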

Disordered complex networks are of fundamental interest as stochastic models for information transmission over wireless networks. Well-known networks based on the Poisson point process model have limitations with regard to network efficiency, whereas strongly correlated alternatives, such as those based on random matrix spectra (RMT), have tractability and robustness issues. In this work, we demonstrate that network models based on random perturbations of Euclidean lattices interpolate between Poisson and rigidly structured networks, and allow us to achieve the best of both worlds: a significant improvement over the Poisson model in terms of network efficacy, as measured by the signal-to-interference-plus-noise ratio (SINR) and the related concept of coverage probability, while retaining a considerable measure of mathematical and computational simplicity as well as robustness to erasure and noise. We investigate the optimal choice of the base lattice in this model, connecting it to the celebrated problem of the optimality of Euclidean lattices with respect to the Epstein zeta function, which is in turn related to notions of lattice energy. This leads us to the choice of the triangular lattice in 2D and the face-centered cubic lattice in 3D. We demonstrate that the coverage probability decreases with increasing strength of perturbation, eventually converging to that of the Poisson network. In the regime of low disorder, we approximately characterize the statistical law of the coverage function. In 2D, we determine the disorder strength at which the perturbed triangular lattice (PTL) and the RMT networks are closest, as measured by comparing their network topologies via their persistence diagrams. We demonstrate that the PTL network at this disorder strength can serve as an effective substitute for the RMT network model, while offering the advantages of greater tractability.
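The lattice-energy criterion behind the choice of base lattice can be checked numerically (a hedged sketch with a truncated sum, not the paper's analysis): among unit-density planar lattices, the triangular lattice yields a smaller Epstein zeta value than the square lattice.

```python
import math

def epstein_zeta(basis, s, N=40):
    """Truncated Epstein zeta function sum_{v in L, v != 0} |v|^{-s},
    summed over lattice vectors i*b1 + j*b2 with |i|, |j| <= N."""
    (a1, a2), (b1, b2) = basis
    total = 0.0
    for i in range(-N, N + 1):
        for j in range(-N, N + 1):
            if i == 0 and j == 0:
                continue
            x, y = i * a1 + j * b1, i * a2 + j * b2
            total += (x * x + y * y) ** (-s / 2)
    return total

# unit-density bases: the square lattice, and the triangular lattice
# rescaled so its fundamental cell also has area 1
square = ((1.0, 0.0), (0.0, 1.0))
c = 1.0 / math.sqrt(math.sqrt(3.0) / 2.0)
triangular = ((c, 0.0), (c / 2.0, c * math.sqrt(3.0) / 2.0))

s = 4.0
print(epstein_zeta(triangular, s) < epstein_zeta(square, s))  # True
```

Smaller zeta values correspond to lower lattice energy, which is the sense in which the triangular (and, in 3D, face-centered cubic) lattice is the preferred base for the perturbed-lattice network.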

Privacy amplification (PA) is an indispensable component of classical and quantum cryptography, and error correction (EC) and data compression (DC) algorithms are likewise indispensable in classical and quantum information theory. We study quantum algorithms of these three types (PA, EC, and DC) in the one-shot scenario, and show that, suitably modified, they all become equivalent. As an application of this equivalence, we take known security bounds for PA and translate them into coding theorems for EC and DC that had not been obtained previously. Further, we apply these results to simplify and improve our previous result that the two prevalent approaches to the security proof of quantum key distribution (QKD) are equivalent. We also propose a new method to simplify the security proof of QKD.
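For concreteness, classical privacy amplification is commonly implemented with a two-universal hash such as a random Toeplitz matrix over GF(2) (a standard-construction sketch for intuition; the abstract's one-shot quantum analysis is about the security bounds of such maps, not this code).

```python
import random

def toeplitz_hash(key_bits, seed_bits, out_len):
    """Privacy amplification by two-universal hashing: multiply the raw
    key by a random Toeplitz matrix over GF(2).  The matrix is defined
    by its first column and row, packed into `seed_bits` of length
    out_len + len(key_bits) - 1; entry (i, j) is seed_bits[i-j+n-1]."""
    n = len(key_bits)
    assert len(seed_bits) == out_len + n - 1
    out = []
    for i in range(out_len):
        bit = 0
        for j in range(n):
            bit ^= seed_bits[i - j + n - 1] & key_bits[j]
        out.append(bit)
    return out

random.seed(1)
raw_key = [random.randint(0, 1) for _ in range(16)]      # partially secret key
seed = [random.randint(0, 1) for _ in range(8 + 16 - 1)]  # public random seed
hashed = toeplitz_hash(raw_key, seed, 8)
print(hashed)   # 8 output bits, nearly uniform given bounded leakage
```

Compressing the raw key this way trades length for secrecy; how much can safely be kept is exactly what one-shot PA security bounds (and, via the equivalence shown in the abstract, EC and DC coding theorems) quantify.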

Concentration inequalities are widely used for analysing machine learning algorithms. However, current concentration inequalities cannot be applied to the most popular deep neural networks, notably those used in natural language processing (NLP). This is mostly due to the non-causal nature of such data. In this paper, a framework for modelling non-causal random fields is provided, and a McDiarmid-type concentration inequality is obtained for this framework. To do so, we introduce a local i.i.d. approximation of the non-causal random field.
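For reference, the classical McDiarmid inequality that the non-causal version generalizes states that if changing the $i$-th independent input moves $f$ by at most $c_i$, then $P(|f - \mathbb{E}f| \ge t) \le 2\exp(-2t^2/\sum_i c_i^2)$. A quick Monte Carlo sanity check (an illustration of the classical bound only, not the paper's random-field setting):

```python
import math
import random

def mcdiarmid_bound(t, diffs):
    """Classical McDiarmid bound P(|f(X) - E f(X)| >= t)
    <= 2 exp(-2 t^2 / sum_i c_i^2) for independent inputs, where c_i
    bounds the change of f when the i-th input is resampled."""
    return 2.0 * math.exp(-2.0 * t * t / sum(c * c for c in diffs))

random.seed(0)
n, t, trials = 100, 0.1, 20000
# f = empirical mean of n fair coin flips: each c_i = 1/n, E f = 0.5
deviations = 0
for _ in range(trials):
    f = sum(random.randint(0, 1) for _ in range(n)) / n
    if abs(f - 0.5) >= t:
        deviations += 1

empirical = deviations / trials
bound = mcdiarmid_bound(t, [1.0 / n] * n)
print(empirical <= bound)   # the bound holds (here bound = 2 e^{-2})
```

The bounded-differences hypothesis is exactly what fails for non-causal dependence, which is why the paper needs the local i.i.d. approximation to recover an inequality of this type.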

We investigate how the initial number of infected individuals affects the behavior of the critical susceptible-infected-recovered (SIR) process. We analyze the outbreak size distribution, the duration of the outbreaks, and the role of fluctuations.
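Near criticality the SIR outbreak is well approximated by a critical branching process in which each infected individual infects on average exactly one other. The simulation below is a hedged toy model of this regime (Poisson(1) offspring, with an arbitrary size cap to tame the heavy tail), not the paper's analysis.

```python
import random

def poisson1(rng):
    """Poisson(1) sample via inversion (Knuth's method)."""
    L, k, p = 2.718281828459045 ** -1, 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def outbreak_size(n0, cap=10**6, rng=random):
    """Total size of a critical branching-process outbreak started from
    n0 infected individuals: each infected individual infects a
    Poisson(1) number of new ones (mean offspring exactly 1).  The size
    is capped because critical outbreak sizes are heavy-tailed."""
    infected, size = n0, n0
    while infected > 0 and size < cap:
        new = sum(poisson1(rng) for _ in range(infected))
        infected = new
        size += new
    return size

random.seed(3)
sizes = [outbreak_size(10) for _ in range(200)]
print(min(sizes) >= 10, max(sizes) > 10)
```

Varying `n0` in such a simulation is the simplest way to probe the questions the abstract raises: how the initial number of infected individuals shifts the outbreak size distribution and duration, with fluctuations dominating at criticality.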