Transportation plays a critical role in supply chain networks, directly impacting cost efficiency, delivery reliability, and environmental sustainability. This study provides an enhanced optimization model for transportation planning, emphasizing environmental sustainability and cost efficiency. An Integer Linear Programming (ILP) model was developed to minimize total transportation costs by considering the operational and rental costs of organizational and third-party vehicles while incorporating constraints on carbon emissions. The model incorporates multi-modal transportation routing and emission caps to select the optimal number of organizational and rental vehicles of each mode on each route, ensuring adherence to sustainability goals. Key innovations include adding carbon emission constraints and optimizing route selection to reduce overall emissions. The model was implemented using the Gurobi solver, and numerical analysis reveals a trade-off between cost minimization and carbon footprint reduction. The results indicate that adopting tight environmental policies increases costs by around 8% on average, while more than 95% of the vehicles utilized are rented. These insights provide actionable guidance for industries aiming to enhance both economic performance and environmental responsibility.
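As a rough illustration of the kind of model described above, the following sketch builds a tiny ILP with integer counts of owned and rented vehicles per route and mode, a demand-coverage constraint, a fleet limit, and a global emission cap. All data (routes, modes, costs, capacities, emission factors) are hypothetical placeholders and the formulation is only a simplified stand-in for the paper's model, not a reproduction of it.

```python
# Minimal sketch (hypothetical data): choose how many owned and rented vehicles
# of each mode to run on each route, minimizing cost subject to demand and a cap.
import gurobipy as gp
from gurobipy import GRB

routes = ["R1", "R2"]
modes = ["truck", "rail"]

cost_own = {"truck": 100.0, "rail": 250.0}   # operating cost per owned vehicle-trip
cost_rent = {"truck": 140.0, "rail": 300.0}  # rental cost per rented vehicle-trip
capacity = {"truck": 10.0, "rail": 60.0}     # load capacity per vehicle
emission = {"truck": 8.0, "rail": 3.0}       # CO2 per vehicle-trip
fleet = {"truck": 3, "rail": 1}              # owned vehicles available
demand = {"R1": 80.0, "R2": 40.0}            # load to move on each route
emission_cap = 40.0

m = gp.Model("transport")
own = m.addVars(routes, modes, vtype=GRB.INTEGER, name="own")
rent = m.addVars(routes, modes, vtype=GRB.INTEGER, name="rent")

m.setObjective(
    gp.quicksum(cost_own[k] * own[r, k] + cost_rent[k] * rent[r, k]
                for r in routes for k in modes),
    GRB.MINIMIZE)

# Each route's demand must be covered by the assigned capacity.
m.addConstrs(
    (gp.quicksum(capacity[k] * (own[r, k] + rent[r, k]) for k in modes) >= demand[r]
     for r in routes), name="demand")

# Owned vehicles are limited by the fleet size.
m.addConstrs(
    (gp.quicksum(own[r, k] for r in routes) <= fleet[k] for k in modes), name="fleet")

# Global carbon-emission cap over all vehicle-trips.
m.addConstr(
    gp.quicksum(emission[k] * (own[r, k] + rent[r, k])
                for r in routes for k in modes) <= emission_cap, name="cap")

m.optimize()
if m.status == GRB.OPTIMAL:
    print("total cost:", m.objVal)
```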
In this article, we review the metastable hierarchy in low-temperature lattice models. In the first part, we state that for any abstract lattice system governed by a Hamiltonian potential and evolving according to a Metropolis-type dynamics, there exists a hierarchical decomposition of the collection of stable plateaux in the system into $\mathfrak{m}$ levels, such that at each level there exist tunneling metastable transitions between the stable plateaux, which can be characterized by convergence to a simple Markov chain as the inverse temperature $\beta$ tends to infinity. In the second part, we collect several examples that realize this hierarchical structure of metastability. In order to fix the ideas, we select the Ising model as our lattice system and discuss its metastable behavior under four different types of dynamics, namely the Glauber dynamics with positive/zero external fields and the Kawasaki dynamics with few/many particles. This review article is submitted to the proceedings of the event PSPDE XII, held at the University of Trieste on September 9-13, 2024.
We bound the rate of uniform convergence on compact sets for both entropic potentials and their gradients towards the Brenier potential and its gradient, respectively. Both results hold in the quadratic Euclidean setting for absolutely continuous measures satisfying some convexity assumptions.
We show that, on complete metric spaces, stationary measures of contracting-on-average generating measures exhibit polynomial tail decay. Our result applies, for example, to self-similar and self-affine measures.
Minkowski tensors, also known as tensor valuations, provide robust $n$-point information for a wide range of random spatial structures. Local estimators for voxelized data, however, are unavoidably biased even in the limit of infinitely high resolution. Here, we substantially improve a recently proposed, asymptotically unbiased algorithm to estimate Minkowski tensors for voxelized data. Our improved algorithm is more robust and efficient. Moreover, we generalize the theoretical foundations for an asymptotically bias-free estimation of the interfacial tensors to the case of finite unions of compact sets with positive reach, which is relevant for many applications like rough surfaces or composite materials. As a realistic test case, we consider, among others, random (beta) polytopes. We first derive explicit expressions of the expected Minkowski tensors, which we then compare to our simulation results. We obtain precise estimates with relative errors of a few percent for practically relevant resolutions. Finally, we apply our methods to real data of metallic grains and nanorough surfaces, and we provide an open-source Python package, which works in any dimension.
We consider newform vectors in cuspidal representations of $p$-adic general linear groups. We extend the theory from the complex setting to include~$\ell$-modular representations with~$\ell\neq p$, and prove that the conductor is compatible with congruences modulo~$\ell$ for (ramified) supercuspidal~$\ell$-modular representations and for depth zero cuspidals. In the complex and modular setting, we prove explicit formulae for depth zero and minimax cuspidal representations of integral depth, in Bushnell-Kutzko and Whittaker models.
We present a Lyapunov analysis of Korpelevich's extragradient method and establish an $\mathcal{O}(1/k)$ last-iterate convergence rate. Building on this, we propose flexible extensions that combine extragradient steps with user-specified directions, guided by a line-search procedure derived from the same Lyapunov analysis. These methods retain global convergence under practical assumptions and can achieve superlinear rates when directions are chosen appropriately. Numerical experiments highlight the simplicity and efficiency of this approach.
Let $S^n$ be the $n$-sphere with the geodesic metric and of diameter $\pi$. The intrinsic \v{C}ech complex of $S^n$ at scale $r$ is the nerve of all open balls of radius $r$ in $S^n$. In this paper, we show how to control the homotopy connectivity of \v{C}ech complexes of spheres at each scale between $0$ and $\pi$ in terms of coverings of spheres. Our upper bound on the connectivity, which is sharp in the case $n=1$, comes from the chromatic numbers of Borsuk graphs of spheres. Our lower bound is obtained using the conicity (in the sense of Barmak) of \v{C}ech complexes of the sufficiently dense, finite subsets of $S^n$. Our bounds imply the new result that for $n\ge 1$, the homotopy type of the \v{C}ech complex of $S^n$ at scale $r$ changes infinitely many times as $r$ varies over $(0,\pi)$; we conjecture only countably many times. Additionally, we lower bound the homological dimension of \v{C}ech complexes of finite subsets of $S^n$ in terms of their packings.
In contrast to the well-known and unambiguous notion of ADM mass for asymptotically Euclidean manifolds, the notion of mass for asymptotically hyperbolic manifolds admits several interpretations. Historically, there are two approaches to defining the mass in the asymptotically hyperbolic setting: the mass aspect function of Wang defined on the conformal boundary at infinity, and the mass functional of Chru\'sciel and Herzlich which may be thought of as the closest asymptotically hyperbolic analogue of the ADM mass. In this paper we unify these two approaches by introducing an ADM-style definition of the mass aspect function that applies to a broad range of asymptotics and in very low regularity. Additionally, we show that the mass aspect function can be computed using the Ricci tensor. Finally, we demonstrate that this function exhibits favorable covariance properties under changes of charts at infinity, which includes a proof of the asymptotic rigidity of hyperbolic space in the context of weakly regular metrics.
For positive integers $m$ and $n$, the grid graph $G_{m,n}$ is the Cartesian product of the path graph $P_m$ on $m$ vertices and the path graph $P_n$ on $n$ vertices. An integer $\{2\}$-dominating function of a graph is a mapping from the vertex set to $\{0,1,2\}$ such that the sum of the mapped values of each vertex and its neighbors is at least $2$; the integer $\{2\}$-domination number of a graph is defined to be the minimum sum of mapped values of all vertices among all integer $\{2\}$-dominating functions. In this paper, we compute the integer $\{2\}$-domination numbers of $G_{1,n}$ and $G_{2,n}$, obtain an upper bound on the integer $\{2\}$-domination number of $G_{3,n}$, and propose an algorithm to compute the integer $\{2\}$-domination number of $G_{m,n}$ for arbitrary $m$ and $n$. As future work, we list the integer $\{2\}$-domination numbers of $G_{4,n}$ for small $n$ and conjecture a general formula.
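For very small grids, the integer $\{2\}$-domination number can be checked by brute force: enumerate all assignments $f\colon V\to\{0,1,2\}$ and verify the closed-neighborhood condition. The sketch below does exactly that; function names are ours, the enumeration is exponential, and it is only a sanity check, not the algorithm proposed in the paper.

```python
# Brute-force check of the integer {2}-domination number on small grid graphs
# G_{m,n}; feasible only for very small grids (3^(m*n) assignments).
from itertools import product

def grid_neighbors(m, n):
    nbrs = {}
    for i in range(m):
        for j in range(n):
            v = i * n + j
            nbrs[v] = [(i + di) * n + (j + dj)
                       for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                       if 0 <= i + di < m and 0 <= j + dj < n]
    return nbrs

def integer_2_domination_number(m, n):
    nbrs = grid_neighbors(m, n)
    best = 2 * m * n
    for f in product(range(3), repeat=m * n):
        if sum(f) >= best:
            continue
        # the closed-neighborhood sum must be at least 2 at every vertex
        if all(f[v] + sum(f[u] for u in nbrs[v]) >= 2 for v in nbrs):
            best = sum(f)
    return best

for n in range(1, 5):
    print("G_{2,%d}:" % n, integer_2_domination_number(2, n))
```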
An edge-colored graph is said to contain a rainbow-$F$ if it contains a copy of $F$ as a subgraph in which every edge receives a different color. In 2007, Keevash, Mubayi, Sudakov, and Verstra\"ete introduced the \emph{rainbow extremal number} $\mathrm{ex}^*(n,F)$, a variant of the classical Tur\'an problem, asking for the maximum number of edges in an $n$-vertex properly edge-colored graph which does not contain a rainbow-$F$. In the following years many authors have studied the asymptotic behavior of $\mathrm{ex}^*(n,F)$ when $F$ is bipartite. In the particular case that $F$ is a tree $T$, the famous Erd\H{o}s-S\'os conjecture says that the extremal number of $T$ depends only on the size of $T$ and not on its structure. After observing that such a pattern cannot hold for $\mathrm{ex}^*$ in the usual setting, we propose that the relative rainbow extremal number $\mathrm{ex}^*(Q_n,T)$ in the $n$-dimensional hypercube $Q_n$ satisfies an Erd\H{o}s-S\'os-type conjecture, and we verify this for some infinite families of trees $T$.
Joint phase-time arrays (JPTA) is a new mmWave radio frequency front-end architecture constructed by appending time-delay elements to phase shifters for analog beamforming. JPTA allows the mmWave base station (BS) to form multiple frequency-dependent beams with a single RF chain, exploiting the extra degrees of freedom the time-delay elements offer. Without requiring extra power-hungry RF chains, a BS with JPTA can schedule multiple users in different directions in a frequency-division multiplexing (FDM) manner. A BS with JPTA achieves various advantages over the traditional analog beamforming system. Simulation results show that JPTA can bring significant system-level benefits, e.g., extending uplink throughput coverage by 100%. To realize these system benefits of JPTA, high-resolution delay elements with a wide delay dynamic range are essential. With newly developed delay elements, we demonstrate that a single TRX RF chain can serve four users in four different directions in the mmWave band.
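The frequency-dependent beams enabled by joint phase and time-delay weights can be illustrated with a short numerical sketch: per-antenna weights $w_n(f)=e^{j(\phi_n+2\pi f\tau_n)}$ are chosen so that two subcarriers of a wideband signal are steered toward two different directions from a single analog front end. The array geometry, carrier frequency, and target angles below are arbitrary illustrative values, not the simulation setup of the paper.

```python
# Toy illustration of frequency-dependent analog beamforming with per-antenna
# phase (phi_n) and time delay (tau_n); all numbers are hypothetical.
import numpy as np

c = 3e8
fc = 28e9                                  # mmWave carrier
N_ant = 32
d = c / fc / 2                             # half-wavelength spacing
n = np.arange(N_ant)

def steering(theta_deg, f):
    theta = np.deg2rad(theta_deg)
    return np.exp(1j * 2 * np.pi * f * d * n * np.sin(theta) / c)

# Choose phases and delays so subcarrier f1 points to +20 deg and f2 to -30 deg:
# solve phi_n + 2*pi*f*tau_n + 2*pi*f*d*n*sin(theta_f)/c = 0 at f1 and f2.
f1, f2 = fc - 200e6, fc + 200e6
a1, a2 = np.sin(np.deg2rad(20)), np.sin(np.deg2rad(-30))
tau = -(d * n / c) * (f1 * a1 - f2 * a2) / (f1 - f2)
phi = -2 * np.pi * f1 * (d * n * a1 / c + tau)

def gain(theta_deg, f):
    w = np.exp(1j * (phi + 2 * np.pi * f * tau))
    return np.abs(w @ steering(theta_deg, f)) / N_ant

angles = np.linspace(-90, 90, 361)
print("f1 beam peak at", angles[np.argmax([gain(a, f1) for a in angles])], "deg")
print("f2 beam peak at", angles[np.argmax([gain(a, f2) for a in angles])], "deg")
```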
Let $F$ be an imaginary quadratic field, and let $\mathcal{O}_F$ be its ring of integers. For any ideal $\mathfrak{n} \subset \mathcal{O}_F$, let $\Gamma_0(\mathfrak{n})$ be the congruence subgroup of level $\mathfrak{n}$ consisting of matrices that are upper triangular mod $\mathfrak{n}$. In this paper, we develop techniques to compute spaces of Bianchi modular forms of level $\Gamma_0(\mathfrak{n})$ as a Hecke module in the case where $F$ has cyclic class group of order $4$. This represents the first attempt at such computations and complements work for smaller class numbers done by Cremona and his students Bygott and Lingham \cite{bygott,lingham}. We implement the algorithms for $F = \mathbb{Q}(\sqrt{-17})$. In our results we observe a variety of phenomena.
Given a compact surface of revolution with Laplace-Beltrami operator $\Delta$, we consider the spectral projector $P_{\lambda,\delta}$ on a polynomially narrow frequency interval $[\lambda-\delta,\lambda + \delta]$, which is associated to the self-adjoint operator $\sqrt{-\Delta}$. For a large class of surfaces of revolution, and after excluding small disks around the poles, we prove that the $L^2 \to L^{\infty}$ norm of $P_{\lambda,\delta}$ is of order $\lambda^{\frac{1}{2}} \delta^{\frac{1}{2}}$ as long as $\delta \geq \lambda^{-\frac{1}{32}}$. We adapt the microlocal approach introduced by Sogge for the case $\delta = 1$, by using the Quantum Completely Integrable structure of surfaces of revolution introduced by Colin de Verdi\`ere. This reduces the analysis to a number of estimates of explicit oscillatory integrals, for which we introduce new quantitative tools.
In data assimilation, the model may be subject to uncertainties and errors. The weak-constraint data assimilation framework enables incorporating model uncertainty in the dynamics of the governing equations. We propose a new framework for near-optimal sensor placement in the weak-constraint setting. This is achieved by first deriving a design criterion based on the expected information gain, which involves the Kullback-Leibler divergence from the forecast prior to the posterior distribution. An explicit formula for this criterion is provided, assuming that the model error and background are independent and Gaussian and the dynamics are linear. We discuss algorithmic approaches to efficiently evaluate this criterion through randomized approximations. To provide further insight and flexibility in computations, we also provide alternative expressions for the criterion. We provide an algorithm to find near-optimal experimental designs using column subset selection, including a randomized algorithm that avoids computing the adjoint of the forward operator. Through numerical experiments in one and two spatial dimensions, we show the effectiveness of our proposed methods.
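As a generic illustration of sensor selection by column subset selection, the following sketch picks columns of a (hypothetical) forward/sensitivity matrix using column-pivoted QR. It is only a simple stand-in: the paper's criterion is based on the expected information gain, and its randomized and adjoint-free algorithms are not reproduced here.

```python
# Generic column-subset-selection step via column-pivoted QR: rank candidate
# sensor columns by how much new information they add, then keep the first k.
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))   # hypothetical matrix: rows = data, cols = candidate sensors
k = 10                               # number of sensors to select

_, _, piv = qr(A, pivoting=True, mode="economic")
selected = np.sort(piv[:k])
print("selected sensor columns:", selected)
```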
Inspired by the Roller Coaster Theorem from graph theory, we prove the existence of artinian Gorenstein algebras with unconstrained Hilbert series, which we call Roller Coaster algebras. Our construction relies on Nagata idealization of quadratic monomial algebras defined by whiskered graphs. The monomial algebras are interesting in their own right, as our results suggest that artinian level algebras defined by quadratic monomial ideals rarely have the weak Lefschetz property. In addition, we discover a large family of G-quadratic Gorenstein algebras failing the weak Lefschetz property.
We develop multipoint stress mixed finite element methods for linear elasticity with weak stress symmetry on cuboid grids, which can be reduced to a symmetric and positive definite cell-centered system. The methods employ the lowest-order enhanced Raviart-Thomas finite element space for the stress and piecewise constant displacement. The vertex quadrature rule is employed to localize the interaction of stress degrees of freedom, enabling local stress elimination around each vertex. We introduce two methods. The first method uses a piecewise constant rotation, resulting in a cell-centered system for the displacement and rotation. The second method employs a continuous piecewise trilinear rotation and the vertex quadrature rule for the asymmetry bilinear forms, allowing for further elimination of the rotation and resulting in a cell-centered system for the displacement only. Stability and error analysis is performed for both methods. For the stability analysis of the second method, a new auxiliary H-curl conforming matrix-valued space is constructed, which forms an exact sequence with the stress space. A matrix-matrix inf-sup condition is shown for the curl of this auxiliary space and the trilinear rotation space. First-order convergence is established for all variables in their natural norms, as well as second-order superconvergence of the displacement at the cell centers. Numerical results are presented to verify the theory.
A Hamiltonian path in the complete graph $K_v$ whose vertices are labeled with the integers $0,1,\ldots,v-1$ is a linear realization for the multiset $L$ of the linear edge-lengths (given by $|x-y|$ for the edge between vertices $x$ and $y$) of the edges in the path. A linear realization is standard if an end-vertex is 0 and perfect if the end-vertices are 0 and $v-1$. Linear realizations are useful in the study of the Buratti-Horak-Rosa (BHR) Conjecture on the existence of cyclic realizations (where cyclic edge-lengths are given by distance modulo $v$) for given multisets. In this paper, we focus on multisets of the form $\{1^a, (y-k)^b, y^c\}$. Using core perfect linear realizations for supports of size 2 (which have the forms $\{x^{y-1},y^{x+1}\}$ whenever $\gcd(x,y)=1$), we construct standard linear realizations (with $a=k-1$, $b=j(y-k)$, $c=jy$) when $k\mid y$ or $k \leq 4$. When $k=2$, these allow us to show that there is a linear realization whenever $a \geq y$. This is in line with the known results for the case of $k=1$. We also supplement these results for $k=1$ by constructing linear realizations whenever $b+c < y$ and $a \geq y - \min(b,c)$, from which the coprime version of the BHR Conjecture (requiring that $v$ is coprime with each element of the multiset) follows for $k=1$ when $y \leq 16$. Our methods show promise for constructing linear realizations for arbitrary $k$, in the direction of a resolution of the BHR Conjecture for supports of size 3.
Hypergeometric class equations are given by second order differential operators in one variable whose coefficient at the second derivative is a polynomial of degree $\leq2$, whose coefficient at the first derivative is a polynomial of degree $\leq1$, and whose free term is a constant. Their solutions, called hypergeometric class functions, include the Gauss hypergeometric function and its various limiting cases. The paper presents a unified approach to these functions. The main structure behind this approach is a family of complex 4-dimensional Lie algebras, originally due to Willard Miller. Hypergeometric class functions can be interpreted as eigenfunctions of the quadratic Casimir operator in a representation of Miller's Lie algebra given by differential operators in three complex variables. One obtains a unified treatment of various properties of hypergeometric class functions such as recurrence relations, discrete symmetries, power series expansions, integral representations, generating functions and orthogonality of polynomial solutions.
The goal of this paper is to develop the theory of Courant algebroids with integrable para-Hermitian vector bundle structures by invoking the theory of Lie bialgebroids. We consider the case where the underlying manifold has an almost para-complex structure, and use this to define a notion of para-holomorphic algebroid. We investigate connections on para-holomorphic algebroids and determine an appropriate sense in which they can be para-complex. Finally, we show through a series of examples how the theory of exact para-holomorphic algebroids with a para-complex connection is a generalization of both para-K\"{a}hler geometry and the theory of Poisson-Lie groups.
What is the maximum number of points that can be selected from an $n \times n$ square lattice such that no $k+1$ of them are in a line? This question was asked more than $100$ years ago for $k=2$, and it has remained wide open ever since. In this paper, we prove that the precise answer is $kn$, provided that $k>C\sqrt{n\log{n}}$ for an absolute constant $C$. The proof relies on carefully constructed bi-uniform random bipartite graphs and concentration inequalities.
This article considers the problem of automatically selecting the relevant explanatory variables in a right-censored model on a massive database. We propose and study four aggregated censored adaptive LASSO estimators, constructed by dividing the observations in such a way as to preserve the consistency of the estimator of the survival curve. We show that these estimators have the same theoretical oracle properties as the one built on the full database. Moreover, Monte Carlo simulations show that their computation time is smaller than that of the full-database estimator; the simulations also confirm the theoretical properties. For optimal tuning-parameter selection, we propose a BIC-type criterion.
We prove a Brunn-Minkowski type inequality for the first (nontrivial) Dirichlet eigenvalue of the weighted $p$-operator \[ -\Delta_{p,\gamma}u=-\text{div}(|\nabla u|^{p-2} \nabla u)+(x,\nabla u)|\nabla u|^{p-2}, \] where $p>1$, in the class of bounded Lipschitz domains in $\mathbb{R}^n$. We also prove that any corresponding positive eigenfunction is log-concave if the domain is convex.
This work introduces a framework for quantifying the information content of logical propositions through the use of implication hypergraphs. We posit that a proposition's informativeness is primarily determined by its relationships with other propositions -- specifically, the extent to which it implies or derives other propositions. To formalize this notion, we develop a framework based on implication hypergraphs that seeks to capture these relationships. Within this framework, we define propositional information, derive some key properties, and illustrate the concept through examples. While the approach is broadly applicable, mathematical propositions emerge as an ideal domain for its application due to their inherently rich and interconnected structure. We provide several examples to illustrate this and subsequently discuss the limitations of the framework, along with suggestions for potential refinements.
We give a unified proof of the Yamada-Watanabe-Engelbert theorem for various notions of solutions for SPDEs in Banach spaces with cylindrical Wiener noise. We use Kurtz' generalization of the theorems of Yamada, Watanabe and Engelbert. Moreover, we deduce the classical Yamada-Watanabe theorem for SPDEs, with a slightly different notion of 'unique strong solution' than that of Kurtz. Our setting includes analytically strong solutions, analytically weak solutions and mild solutions. For each of these notions, our approach allows a vast flexibility with regard to which function spaces and integrability conditions are chosen in the definition of a solution (and therefore changing the meaning of existence and uniqueness). All results hold in Banach spaces which are either martingale type 2 or UMD. For analytically weak solutions, the results hold in arbitrary Banach spaces. In particular, our results extend the Yamada-Watanabe theorems of Ondr\'ejat for mild solutions in 2-smooth Banach spaces, of R\"ockner et al. for the variational framework and of Kunze for analytically weak solutions, and cover many new settings. As a tool, and of interest itself, we construct a measurable representation I of the stochastic integral in a martingale type 2 or UMD Banach space, in the sense that for any stochastically integrable process f and cylindrical Brownian motion W, we have $I(f(\omega),W(\omega),\mathrm{Law}(f,W)) = (\int_0^{\cdot} f\, \mathrm{d}W)(\omega)$ for almost every $\omega$.
Text datasets can be represented using models that do not preserve text structure or using models that preserve text structure. Our hypothesis is that, depending on the nature of the dataset, there can be advantages to using a model that preserves text structure over one that does not, and vice versa. The key is to determine the best way of representing a particular dataset based on the dataset itself. In this work, we propose to investigate this problem by combining text distortion and algorithmic clustering based on string compression. Specifically, a distortion technique previously developed by the authors is applied to destroy text structure progressively. Following this, a clustering algorithm based on string compression is used to analyze the effects of the distortion on the information contained in the texts. Several experiments are carried out on text datasets and artificially generated datasets. The results show that in strongly structural datasets the clustering results worsen as text structure is progressively destroyed. They also show that using a compressor which allows choosing the size of the left-context symbols helps to determine the nature of the datasets. Finally, the results are contrasted with a method based on multidimensional projections, and analogous conclusions are obtained.
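A minimal version of clustering by string compression can be sketched with the normalized compression distance (NCD) and hierarchical clustering, as below. The compressor (zlib) and the toy texts are placeholders; the authors' distortion technique and their compressor with a selectable left-context size are not reproduced here.

```python
# Compression-based clustering sketch: pairwise NCD + average-linkage clustering.
import zlib
from scipy.cluster.hierarchy import linkage, fcluster

def clen(s: bytes) -> int:
    return len(zlib.compress(s, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = clen(x), clen(y), clen(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

texts = [b"the cat sat on the mat " * 20,
         b"the dog sat on the log " * 20,
         b"stock prices fell sharply today " * 20,
         b"markets dropped as stock prices fell " * 20]

n = len(texts)
# Condensed pairwise distance vector expected by scipy's linkage.
dists = [ncd(texts[i], texts[j]) for i in range(n) for j in range(i + 1, n)]
labels = fcluster(linkage(dists, method="average"), t=2, criterion="maxclust")
print("cluster labels:", labels)
```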
We revisit global existence and decay for small-data solutions of semilinear wave equations on extremal Reissner-Nordstr\"om black hole backgrounds satisfying the classical null condition, a problem which was previously addressed by the first author in joint work with Aretakis and Gajic (Ann. of PDE, 2020). In this paper, we develop a new approach based on propagating a significantly weaker set of estimates, which allows for a simpler and more streamlined proof. Our proof does not require tracking sharp estimates for the solution in the near-horizon region, which means that it is compatible with, but does not imply, the non-decay and growth hierarchy of derivatives of the solution along the event horizon expected from the Aretakis instability. In particular, this approach is in principle compatible with other settings where stronger horizon instabilities are expected, such as nonlinear charged scalar fields on extremal Reissner-Nordstr\"om, or nonlinear waves on extremal Kerr. We also sketch how our proof applies to semilinear problems on spacetimes settling down to extremal Reissner-Nordstr\"om, such as those constructed in our joint work with Kehle (arXiv:2410.16234, 2024).
In this article we introduce the notion of a Floer function, which has the property that its Hessian is a Fredholm operator of index zero in a scale of Hilbert spaces. Since the Hessian transforms in a complicated way under chart transitions, in general this is not an intrinsic condition. Therefore, we introduce the concept of Floerfolds, for which we show that the notion of a Floer function is intrinsic.
In recent years, Neural Networks (NNs) have been employed to control nonlinear systems due to their potential capability in dealing with situations that might be difficult for conventional nonlinear control schemes. However, to the best of our knowledge, the current literature on NN-based control lacks theoretical guarantees for stability and tracking performance. This precludes the application of NN-based control schemes to systems where stringent stability and performance guarantees are required. To address this gap, this paper proposes a systematic and comprehensive methodology to design provably-stable NN-based control schemes for affine nonlinear systems. Rigorous analysis is provided to show that the proposed approach guarantees stability of the closed-loop system with the NN in the loop. Also, it is shown that the resulting NN-based control scheme ensures that system states asymptotically converge to a neighborhood around the desired equilibrium point, with a tunable proximity threshold. The proposed methodology is validated and evaluated via simulation studies on an inverted pendulum and experimental studies on a Parrot Bebop 2 drone.
We study in detail the class of even polynomials and their behavior with respect to finite free convolutions. To this end, we use some specific hypergeometric polynomials and a variation of the rectangular finite free convolution to understand even real-rooted polynomials in terms of positive-rooted polynomials. Then, we study some classes of even polynomials that are of interest in finite free probability, such as even hypergeometric polynomials, symmetrizations, and finite free commutators. Specifically, we provide many new examples of these objects, involving classical families of special polynomials (such as Laguerre, Hermite, and Jacobi). Finally, we relate the limiting root distributions of sequences of even polynomials with the corresponding symmetric measures that arise in free probability.
We prove relative versions of many earlier results about almost invariant sets and splittings of groups. In particular, we prove a relative version of the algebraic torus theorem, and we prove the existence and uniqueness of relative versions of algebraic regular neighbourhoods and of JSJ decompositions.
We compute the stringy Chow ring of a general Deligne-Mumford stack of the form [X/G] for a smooth variety X and diagonalizable group scheme G, working over a base field that is not necessarily algebraically closed. We then specialize to the stringy Chow ring of the weighted blow up of a smooth variety along a smooth center. We explore finite generation properties of this ring.
In this article, we establish new results on the probabilistic parking model (introduced by Durm\'ic, Han, Harris, Ribeiro, and Yin) with $m$ cars and $n$ parking spots and probability parameter $p\in[0,1]$. For any $ m \leq n$ and $p \in [0,1]$, we study the parking preference of the last car, denoted $a_m$, determine its conditional distribution, and compute its expected value. We show that both formulas depend explicitly on the probability parameter $p$. We then study the case where $m = cn $ for some $ 0 < c < 1 $, investigate the asymptotic behavior, and show that the presence of ``extra spots'' on the street significantly affects the rate at which the conditional distribution of $ a_m $ converges to the uniform distribution on $[n]$. Even for small $ \varepsilon = 1 - c $, an $ \varepsilon $-proportion of extra spots reduces the convergence rate from $ 1/\sqrt{n} $ to $ 1/n $ when $ p \neq 1/2 $. Additionally, we examine how the convergence rate depends on $c$, while keeping $n$ and $p$ fixed. We establish that as $c$ approaches zero, the total variation distance between the conditional distribution of $a_m$ and the uniform distribution on $[n]$ decreases at least linearly in $c$.
We consider a status update system consisting of one source, one server, and one sink. The source generates packets according to a Poisson process and the packets are served according to a generally distributed service time. We consider a system with a capacity of one packet, i.e., there is no waiting buffer in the system, and model it as an M/G/1/1 queueing system. We introduce a probabilistically preemptive packet management policy and calculate the moment generating functions (MGFs) of the age of information (AoI) and peak AoI (PAoI) under the policy. According to the probabilistically preemptive policy, when a packet arrives, the possible packet in the system is replaced by the arriving packet with a fixed probability. Numerical results show the effectiveness of the packet management policy.
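The probabilistically preemptive policy is easy to simulate: an arriving packet replaces the packet in service with a fixed probability and is otherwise discarded (there is no buffer). The sketch below estimates the time-average AoI for exponential service as a sanity check; parameter names and values are ours, and the paper's contribution is the closed-form MGFs rather than simulation.

```python
# Event-driven simulation of an M/G/1/1 status-update system with probabilistic
# preemption (here with exponential service as an example), estimating mean AoI.
import numpy as np

rng = np.random.default_rng(1)

def age_integral(t0, t1, last_update):
    # integral of the age (s - last_update) over s in [t0, t1]
    return 0.5 * ((t1 - last_update) ** 2 - (t0 - last_update) ** 2)

def simulate_avg_aoi(lam=1.0, theta=0.5, mean_service=1.0, horizon=2e5):
    t = 0.0                   # current time (arrival epochs)
    busy_until = -1.0         # departure time of the packet in service (-1: idle)
    gen_in_service = 0.0      # generation time of the packet in service
    last_update = 0.0         # generation time of the freshest delivered packet
    area, last_t = 0.0, 0.0   # running integral of the age process

    while True:
        t += rng.exponential(1.0 / lam)            # next Poisson arrival
        if t >= horizon:
            break
        # deliver the packet in service if it departed before this arrival
        if 0.0 <= busy_until <= t:
            area += age_integral(last_t, busy_until, last_update)
            last_update, last_t = gen_in_service, busy_until
            busy_until = -1.0
        # accept the arrival if the server is idle; otherwise preempt w.p. theta
        if busy_until < 0.0 or rng.random() < theta:
            gen_in_service = t
            busy_until = t + rng.exponential(mean_service)

    # deliver a possible final packet, then close the age integral at the horizon
    if 0.0 <= busy_until <= horizon:
        area += age_integral(last_t, busy_until, last_update)
        last_update, last_t = gen_in_service, busy_until
    area += age_integral(last_t, horizon, last_update)
    return area / horizon

print("estimated average AoI:", simulate_avg_aoi())
```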
We define finite-time hyperbolic coordinates, describe their geometry, and prove various results on both their convergence as the time scale increases and their variation in the state space. Hyperbolic coordinates reframe the classical paradigm of hyperbolicity: rather than define a hyperbolic dynamical system in terms of a splitting of the tangent space into stable and unstable subspaces, we define hyperbolicity in terms of the co-eccentricity of the map. The co-eccentricity describes the distortion of unit circles in the tangent space under the differential of the map. Finite-time hyperbolic coordinates have been used to demonstrate the existence of SRB measures for the H\'enon map; our eventual goal is both to elucidate these techniques and to extend them to a broad class of nonuniformly and singular hyperbolic systems.
We consider the source model key agreement problem involving two legitimate parties and an eavesdropper who observe n i.i.d. samples of X, Y, and Z, respectively. The best-known upper bound on the key capacity is characterized by an inf-max optimization problem that generally lacks a closed-form solution. In this paper, we solve the optimization for a class of sources, thereby providing simple expressions for the upper bound. We provide general conditions under which the upper bound reduces to I(X;Y). As an example, we consider the XOR setting in which X and Y are binary, and Z is the XOR of X and Y. The upper bound reduces to I(X;Y) for this source. Next, we conjecture that the rate I(X;Y) is not achievable for the XOR source, and provide some ideas that might be useful for developing a new upper bound on the source model problem.
Let $\alpha$ be a fixed quadratic irrational. Consider the Diophantine equation \[ y^a\ =\ q_{N_1} + \cdots + q_{N_K},\quad N_1 \geq \cdots \geq N_{K} \geq 0,\quad a, y \geq 2 \] where $(q_N)_{N\,\geq\,0}$ is the sequence of convergent denominators to $\alpha$. We find two effective upper bounds for $y^a$ which depend on the Hamming weights of $y$ with respect to its radix and Zeckendorf representations, respectively. The latter bound extends a recent result of Vukusic and Ziegler. En route, we obtain an analogue of a theorem by Kebli, Kihel, Larone and Luca.
A representation of solutions of the one-dimensional Dirac equation is obtained. The solutions are represented as Neumann series of Bessel functions. The representations are shown to be uniformly convergent with respect to the spectral parameter. Explicit formulas for the coefficients are obtained via a system of recursive integrals. The result is based on the Fourier-Legendre series expansion of the transmutation kernel. An efficient numerical method for solving initial-value and spectral problems based on this approach is presented with a numerical example. The method can compute large sets of eigendata with non-deteriorating accuracy.
We study periodic points and finitely supported invariant measures for continuous semigroup actions. Introducing suitable notions of periodicity in both topological and measure-theoretical contexts, we analyze the space of invariant Borel probability measures associated with these actions. For embeddable semigroups, we establish a direct relationship between the extensibility of invariant measures to the free group on the semigroup and the denseness of finitely supported invariant measures. Applying this framework to shift actions on the full shift, we prove that finitely supported invariant measures are dense for every left amenable semigroup that is residually a finite group and for every finite-rank free semigroup.
This paper is devoted to the global solvability of the Navier-Stokes system with fractional Laplacian $(-\Delta)^{\alpha}$ in $\mathbb{R}^{n}$ for $n\geq2$, where the convective term has the form $(|u|^{m-1}u)\cdot\nabla u$ for $m\geq1$. By establishing estimates for the difference $|u_{1}|^{m-1}u_{1}-|u_{2}|^{m-1}u_{2}$ in homogeneous Besov spaces, and employing the maximal regularity property of $(-\Delta)^{\alpha}$ in Lorentz spaces, we prove global existence and uniqueness of the strong solution of the Navier-Stokes system in critical Besov spaces for both $m=1$ and $m>1$.
We describe an algorithm to rigorously compute the power series expansion at a CM point of a weight $2$ cusp form of level coprime to $6$. Our algorithm works by bounding the denominators that appear due to ramification, and without recourse to computing an explicit model of the corresponding modular curve. Our result is the first in a series of papers toward an eventual implementation of equationless Chabauty.
We consider the Cahn-Hilliard equation with Neumann boundary conditions in a three-dimensional curved thin domain around a given closed surface. When the thickness of the curved thin domain tends to zero, we show that the weighted average in the thin direction of a weak solution to the thin-domain problem converges on the limit surface in an appropriate sense. Moreover, we rigorously derive a limit problem, which is the surface Cahn-Hilliard equation with weighted Laplacian, by characterizing the limit function as a unique weak solution to the limit problem. The proof is based on a detailed analysis of the weighted average and the use of Sobolev inequalities and elliptic regularity estimates on the curved thin domain with constants explicitly depending on the thickness. This is the first result on a rigorous thin-film limit of nonlinear fourth order equations in general curved thin domains.
We construct explicit examples of algebraic varieties in positive characteristic showing that locally trivial moduli functors do not always satisfy Schlessinger's condition $(H_1)$ in [3], in contrast to the complex/characteristic $0$ case. The first example is an algebraic curve, and the second is a normal rational projective surface with only one rational double point.
We deduce the asymptotic behaviour of a broad class of multiple $q$-orthogonal polynomials as their degree tends to infinity. We achieve this by rephrasing multiple $q$-orthogonal polynomials as part of a solution to a Riemann-Hilbert Problem (RHP). In particular, we study multiple $q$-orthogonal polynomials of the first kind (see [12]), which are Type II orthogonal polynomials with weights given by \begin{equation} w_1(x) = x^\alpha \omega(x)d_qx,\qquad w_2(x) = x^\beta \omega(x)d_qx, \nonumber \end{equation} which satisfy the constraint \begin{equation}\nonumber |\omega(q^{2n})-1| = \mathcal{O}(q^{2n}), \end{equation} as $n\to \infty$. Using $q$-calculus we obtain detailed asymptotics for these polynomials from the RHP. This class of polynomials was chosen in part due to its connection to the work of [11,12], concerning the irrationality of $\zeta_q(1)$ and $\zeta_q(2)$. To conduct our asymptotic analysis we require the following additional restrictions on $w_1(x)$ and $w_2(x)$: $\alpha \notin \mathbb{Z}$, $\beta \notin \mathbb{Z}$ and $\alpha \neq \beta \mod \mathbb{Z}$. These restrictions are necessary for the asymptotic analysis but not for the formulation of multiple $q$-orthogonal polynomials as solutions to an RHP. The author wishes to extend special thanks to Prof. Walter Van Assche, who motivated this study and provided valuable discussion.
Fractional cumulative residual entropy (FCRE) is a powerful tool for the analysis of complex systems. Most of the theoretical results and applications related to the FCRE of the lifetime random variable are based on the distribution function approach. However, there are situations in which the distribution function is not available in explicit form, but the distribution has a closed-form quantile function (QF), an alternative method of representing a probability distribution. Motivated by this, in the present study we introduce a quantile-based FCRE and its dynamic version, establish their various properties, and examine their usefulness in different applied fields.
This work derives two basic engagement zone models, describing regions of potential risk or capture for a mobile vehicle by a pursuer. The pursuer is modeled as having turn constraints rather than simple motion. Turn-only paths (C-paths) and turn-straight paths (CS-paths) are considered for a pursuer of limited range. Following the derivation, a simulation of a vehicle avoiding the pursuer's engagement zone is provided.
In this paper, we prove that a compact K\"ahler manifold $X$ with semi-positive holomorphic sectional curvature admits a locally trivial fibration $\phi \colon X \to Y$, where the fiber $F$ is a rationally connected projective manifold and the base $Y$ is a finite \'etale quotient of a torus. This result extends the structure theorem, previously established for projective manifolds, to compact K\"ahler manifolds. A key part of the proof involves analyzing the foliation generated by truly flat tangent vectors and showing the abelianness of the topological fundamental group $\pi_{1}(X)$, with a focus on varieties of special type.
In this contribution, we introduce a general class of car-following models with an input-state-output port-Hamiltonian structure. We derive stability conditions and long-term behavior of the finite system with periodic boundaries and quadratic interaction potential by spectral analysis and using asymptotic properties of multivariate Ornstein-Uhlenbeck processes. The uncontrolled dynamics exhibit instability and random collective behavior under stochastic perturbations. By implementing an open-loop speed control, the system stabilizes and weakly converges to Gaussian limit distributions. The convergence is unconditional for constant speed control. However, a stability condition arises for the closed-loop system where the speed control acts as a dynamic feedback depending on the distance ahead. The results are illustrated by numerical simulations. Interestingly, only the closed-loop system is able to reproduce, at least transiently, realistic stop-and-go behavior that can be resolved using the Hamiltonian component of the model.
We study multidimensional discontinuous backward stochastic differential equations in a filtration that supports both a Brownian motion and an independent integer-valued random measure. Under suitable $\mathbb{L}^p$-integrability conditions on the data, we establish the existence and uniqueness of $\mathbb{L}^p$-solutions for both cases: $p \geq 2$ and $p \in (1,2)$. The generator is assumed to be stochastically monotone in the state variable $y$, stochastically Lipschitz in the control variables $(z, u)$, and to satisfy a stochastic linear growth condition, along with an appropriate $\mathbb{L}^p$-integrability requirement.
The two subjects in the title are related via the specialization of symmetric polynomials at roots of unity. Let $f(z_1,\ldots,z_n)\in\mathbb{Z}[z_1,\ldots,z_n]$ be a symmetric polynomial with integer coefficients and let $\omega$ be a primitive $d$th root of unity. If $d|n$ or $d|(n-1)$ then we have $f(1,\ldots,\omega^{n-1})\in\mathbb{Z}$. If $d|n$ then of course we have $f(\omega,\ldots,\omega^n)=f(1,\ldots,\omega^{n-1})\in\mathbb{Z}$, but when $d|(n+1)$ we also have $f(\omega,\ldots,\omega^n)\in\mathbb{Z}$. We investigate these three families of integers in the case $f=h_k^{(b)}$, where $h_k^{(b)}$ is the coefficient of $t^k$ in the generating function $\prod_{i=1}^n (1+z_it+\cdots+(z_it)^{b-1})$. These polynomials were previously considered by several authors. They interpolate between the elementary symmetric polynomials ($b=2$) and the complete homogeneous symmetric polynomials ($b\to\infty$). When $\gcd(b,d)=1$ with $d|n$ or $d|(n-1)$ we find that the integers $h_k^{(b)}(1,\omega,\ldots,\omega^{n-1})$ are related to cyclic sieving of multisets with multiplicities bounded above by $b$, generalizing the well-known cyclic sieving results for sets ($b=2$) and multisets ($b\to \infty$). When $\gcd(b,d)=1$ and $d|(n+1)$ we find that the integers $h_k^{(b)}(\omega,\omega^2,\ldots,\omega^n)$ are related to the Frobenius coin problem with two coins. The case $\gcd(b,d)\neq 1$ is more complicated. At the end of the paper we combine these results with the expansion of $h_k^{(b)}$ in various bases of the ring of symmetric polynomials.
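The integrality statements can be checked numerically for small parameters: build the generating polynomial $\prod_{i=1}^n(1+z_it+\cdots+(z_it)^{b-1})$, substitute the roots of unity, and inspect the coefficients. The sketch below does this for one hypothetical choice of $(n,b,d)$ with $d\mid n$ and $\gcd(b,d)=1$; it is a toy verification, not part of the paper's arguments.

```python
# Evaluate h_k^{(b)} at (1, omega, ..., omega^{n-1}) and check near-integrality.
import numpy as np

def h_b_values(zs, b):
    """Coefficients (in t) of prod_i (1 + (z_i t) + ... + (z_i t)^(b-1))."""
    poly = np.array([1.0 + 0j])
    for z in zs:
        factor = np.array([z ** j for j in range(b)], dtype=complex)  # coeff of t^j is z^j
        poly = np.convolve(poly, factor)
    return poly  # poly[k] = h_k^{(b)}(zs)

n, b, d = 8, 3, 4                        # here d | n and gcd(b, d) = 1
omega = np.exp(2j * np.pi / d)
zs = [omega ** i for i in range(n)]      # (1, omega, ..., omega^{n-1})

vals = h_b_values(zs, b)
print("largest imaginary part:", np.max(np.abs(vals.imag)))                     # ~ 0
print("distance to nearest integer:", np.max(np.abs(vals.real - np.round(vals.real))))
```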
Let $R$ be a (not necessarily commutative) ring with unit, $d\geq 1$ an integer, and $\lambda$ a unitary character of the additive group $(R,+).$ A pair $(U,V)$ of unitary representations $U$ and $V$ of $R^d$ on a Hilbert space $\mathcal{H}$ is said to satisfy the canonical commutation relations (relative to $\lambda$) if $U(a) V(b)= \lambda(a\cdot b)V(b) U(a)$ for all $a=(a_1, \dots, a_d), b= (b_1, \dots, b_d)\in R^d$, where $a\cdot b= \sum_{k=1}^d a_k b_k.$ We give a new and quick proof of the classical Stone-von Neumann theorem about the essential uniqueness of such a pair in the case where $R$ is a local field (e.g. $R= \mathbf{R}$). Our methods allow us to give the following extension of this result to a general locally compact ring $R$. For a unitary representation $U$ of $R^d$ on a Hilbert space $\mathcal{H}, $ define the inflation $U^{(\infty)}$ of $U$ as the (countably) infinite multiple of $U$ on $\mathcal{H}^{(\infty)}=\oplus_{i\in \mathbf{N}} \mathcal{H}$. Let $(U_1, V_1), (U_2, V_2)$ be two pairs of unitary representations of $R^d$ on corresponding Hilbert spaces $\mathcal{H}_1, \mathcal{H}_2$ satisfying the canonical commutation relations (relative to $\lambda$). Provided that $\lambda$ satisfies a mild faithfulness condition, we show that the inflations $(U_1^{(\infty)}, V_1^{(\infty)}), (U_2^{(\infty)}, V_2^{(\infty)})$ are approximately equivalent, that is, there exists a sequence $(\Phi_n)_n$ of unitary isomorphisms $\Phi_n: \mathcal{H}_1^{(\infty)}\to \mathcal{H}_2^{(\infty)}$ such that $\lim_{n} \Vert U_2^{(\infty)}(a) - \Phi_n U_1^{(\infty)}(a) \Phi_n^{*}\Vert=0$ and $\lim_{n} \Vert V_2^{(\infty)}(b) - \Phi_n V_1^{(\infty)}(b) \Phi_n^{*}\Vert=0,$ uniformly on compact subsets of $R^d.$
Stochastic partial differential equations (SPDEs) are often difficult to solve numerically due to their low regularity and high dimensionality. These challenges limit the practical use of computer-aided studies and pose significant barriers to statistical analysis of SPDEs. In this work, we introduce a highly efficient multi-index Monte Carlo method (MIMC) designed to approximate statistics of mild solutions to semilinear parabolic SPDEs. Key to our approach is the proof of a multiplicative convergence property for coupled solutions generated by an exponential integrator numerical solver, which we incorporate with MIMC. We further describe theoretically how the asymptotic computational cost of MIMC can be bounded in terms of the input accuracy tolerance, as the tolerance goes to zero. Notably, our methodology illustrates that for an SPDE with low regularity, MIMC offers substantial performance improvements over other viable methods. Numerical experiments comparing the performance of MIMC with the multilevel Monte Carlo method on relevant test problems validate our theoretical findings. These results also demonstrate that MIMC significantly outperforms state-of-the-art multilevel Monte Carlo, thereby underscoring its potential as a robust and tractable tool for solving semilinear parabolic SPDEs.
Differential Dynamic Programming (DDP) is a trajectory optimization method that is particularly resilient to poor initial guesses. However, its long run times compared to other methods make it less suitable for embedded systems. In this work, we introduce polynomial-based DDP methods capable of enforcing constraints while optimizing for fuel efficiency. Additionally, a polynomial-based Newton solver is implemented to enforce constraints with high precision. The proposed solver, Differential Algebra-based Differential Dynamic Programming (DADDy), is validated and tested on various astrodynamics scenarios. Results demonstrate that DADDy achieves the same solutions as state-of-the-art DDP methods but with significantly reduced run times. Specifically, for the scenarios investigated in this work, the most stable method achieved 100% convergence and runtime reductions of 70% in the Sun-centered two-body problem, 23% to 94% in the Earth-Moon CR3BP, and 46% to 59% in the Earth-centered two-body problem.
A longstanding open question in sub-Riemannian geometry is the smoothness of (the arc-length parameterization of) length-minimizing curves. In [6], this question was answered in the negative, with an example of a $C^2$ but not $C^3$ length-minimizer of a real-analytic (even polynomial) sub-Riemannian structure. In this paper, we study a class of examples of sub-Riemannian structures that generalizes the one presented in [6], and we prove that length-minimizing curves must be at least of class $C^2$ within these examples. In particular, we prove that Theorem 1.1 in [6] is sharp.
A $\{K_{1,1}, K_{1,2},C_m: m\geq3\}$-factor of a graph is a spanning subgraph each of whose components is an element of $\{K_{1,1}, K_{1,2},C_m: m\geq3\}$. In this paper, using spectral graph methods, we establish a lower bound on the signless Laplacian spectral radius and an upper bound on the distance spectral radius that determine whether a graph admits a $\{K_2\}$-factor. We obtain a lower bound on the size (resp. the spectral radius) of $G$ to guarantee that $G$ contains a $\{K_{1,1}, K_{1,2},C_m: m\geq3\}$-factor. Then we determine an upper bound on the distance spectral radius of $G$ to ensure that $G$ has a $\{K_{1,1}, K_{1,2},C_m: m\geq3\}$-factor. Furthermore, by constructing extremal graphs, we show that all of the above bounds are best possible.
In this paper, we analyze the output stabilization problem for a nonlinear ODE cascaded with a one-dimensional heat diffusion equation affected by both in-domain and boundary perturbations. We assume that the only available part of the state consists of the first components of the ODE subsystem and one boundary of the heat subsystem. The particularity of this system is twofold: i) it contains a nonlinear additive term in the ODE subsystem, and ii) it is affected by both boundary and in-domain perturbation signals. For such a system, and unlike existing works, we succeed in designing an output observer-based feedback that guarantees not only asymptotic stabilization but also global {\it disturbance-to-state stabilization} for our cascaded system. The output feedback is designed using an adequate backstepping transformation recently introduced for coupled ODE-heat equations, combined with a high-gain observer and a high-gain controller.
We discuss some recent results by a number of authors regarding word maps on algebraic groups and finite simple groups, their mixing properties and the geometry of their fibers, emphasizing the role played by equidistribution results in finite fields via recent advances on character bounds and non-abelian arithmetic combinatorics. In particular, we discuss character varieties of random groups. In the last section, we give a new proof of a recent theorem of Hrushovski about the geometric irreducibility of the generic fibers of convolutions of dominant morphisms to simply connected algebraic groups. These notes stem from lectures given by the authors in Oxford, and by the first author at ICTS Bangalore, in spring 2024.
Following Nazarov's suggestion~\cite{Naz1}, we refer to the cyclotomic Nazarov-Wenzl algebra as the cyclotomic Brauer algebra. When the cyclotomic Brauer algebra is isomorphic to the endomorphism algebra of $M_{I_i, r}$ -- the tensor product of a simple scalar-type parabolic Verma module with the natural module in the parabolic BGG category $\mathcal O$ of types $B_n$, $C_n$ and $D_n$ -- its decomposition numbers can theoretically be computed, based on general results from \cite{AST} and \cite[Corollary~5.10]{RS}. This paper aims to establish explicit connections between the parabolic Verma modules that appear as subquotients of $M_{I_i, r}$ and the right cell modules of the cyclotomic Brauer algebra under condition~\eqref{simple111}. This allows us to explicitly decompose $M_{I_i, r}$ into a direct sum of indecomposable tilting modules by identifying their highest weights and multiplicities. Our result demonstrates that the decomposition numbers of such a cyclotomic Brauer algebra can be explicitly computed using the parabolic Kazhdan-Lusztig polynomials of types $B_n$, $C_n$, and $D_n$ with suitable parabolic subgroups~\cite{So}. Finally, condition~\eqref{simple111} is well-supported by a result of Wei Xiao presented in Section~6.
The paper considers the uniqueness question of factorization of a knotted handlebody in the $3$-sphere along decomposing $2$-spheres. We obtain a uniqueness result for factorization along decomposing $2$-spheres meeting the handlebody at three parallel disks. The result is used to examine handlebody-knot symmetry; particularly, the chirality of $6_{10}$ in the handlebody-knot table, previously unknown, is determined. In addition, an infinite family of hyperbolic handlebody-knots with homeomorphic exteriors is constructed.
This paper focuses on the dense uniform Li-Yorke chaos for linear operators on a Banach space. Some sufficient conditions and equivalent conditions are established under which the dynamical system is densely uniformly Li-Yorke chaotic. It is shown that there are plenty of densely uniformly Li-Yorke chaotic operators. For unilateral backward weighted shifts and bilateral backward weighted shifts on $\ell^p$, it is shown that Li-Yorke chaos is equivalent to dense uniform Li-Yorke chaos.
In this paper, we study the existence and uniqueness of solutions to the Euler equations with initial conditions that exhibit analytic regularity near the boundary and Sobolev regularity away from it. A key contribution of this work is the introduction of the diamond-analyticity framework, which captures the spatial decay of the analyticity radius in a structured manner, improving upon uniform analyticity approaches. We employ the Leray projection and a nonstandard mollification technique to demonstrate that the quotient between the imaginary and real parts of the analyticity radius remains unrestricted, thus extending the analyticity persistence results beyond traditional constraints. Our methodology combines analytic-Sobolev estimates with an iterative scheme which is nonstandard in the Cauchy-Kowalevskaya framework, ensuring rigorous control over the evolution of the solution. These results contribute to a deeper understanding of the interplay between analyticity and boundary effects in fluid equations. They might have implications for the study of the inviscid limit of the Navier-Stokes equations and the role of complex singularities in fluid dynamics.
We demonstrate that the flux group nullifies the influence of the flux homomorphism in the study of the Hofer-like geometry of the group of Hamiltonian diffeomorphisms. Consequently, the Hofer-like norm and the usual Hofer norm coincide on the group of all Hamiltonian diffeomorphisms. This resolves Banyaga's conjecture and enhances the existing proofs of the conjecture. As a result, several findings within the theory of Hofer-like geometry can be derived directly from their Hamiltonian analogs, eliminating the need for any sophisticated tools.
One of the most often used methods of summing divergent series in physics is Borel-type summation with control parameters improving convergence, which are defined by some optimization conditions. A well-known and annoying problem in this procedure is the occurrence of multiple solutions for the control parameters. We suggest a method for resolving this problem, based on the minimization of a cost functional. Control parameters can be introduced by employing the Borel-Leroy or Mittag-Leffler transforms. In addition, two novel transformations are proposed using fractional integrals and fractional derivatives. New cost functionals are put forward, based on lasso and ridge selection criteria, and their performance is studied for a number of models. The developed method is shown to provide good accuracy for the calculated quantities.
Trimmed (multi-patch) geometries are the state-of-the-art technology in computer-aided design for industrial applications such as automobile crashworthiness. In this context, fast solution techniques extensively rely on explicit time integration schemes in conjunction with mass lumping techniques that substitute the consistent mass with a (usually diagonal) approximation. For smooth isogeometric discretizations, Leidinger [1] first showed that mass lumping removed the dependency of the critical time-step on the size of trimmed elements. This finding has attracted considerable attention but has unfortunately overshadowed another more subtle effect: mass lumping may disastrously impact the accuracy of low frequencies and modes, potentially inducing spurious oscillations in the solution. In this article, we provide compelling evidence for this phenomenon and later propose a stabilization technique based on polynomial extensions that restores a level of accuracy comparable to boundary-fitted discretizations.
Potential theory has important applications in various fields such as physics, finance, and biology. In this paper, we investigate the potentials of two classic types of discrete-time skip-free Markov chains: upward skip-free and downward skip-free Markov chains. The key to deriving these potentials lies in the use of truncation approximation techniques. The results are then applied to GI/M/1 queues and M/G/1 queues, and further extended to continuous-time skip-free Markov chains.
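The truncation-approximation idea can be illustrated on a very simple skip-free example: for a discrete-time birth-death chain killed at state $0$, the potential matrix $G=\sum_{k\ge0}P^k=(I-P)^{-1}$ computed on the truncated state space $\{1,\dots,N\}$ stabilizes as the truncation level $N$ grows. The chain below is a toy stand-in chosen by us, not one of the paper's queueing applications.

```python
# Truncation approximation of the potential of a birth-death chain killed at 0.
import numpy as np

def truncated_potential(p, N):
    """Potential (I - P)^{-1} of the killed chain, truncated to states 1..N."""
    P = np.zeros((N, N))
    for i in range(N):
        if i + 1 < N:
            P[i, i + 1] = p            # birth
        if i - 1 >= 0:
            P[i, i - 1] = 1.0 - p      # death (from state 1 the mass is killed at 0)
    return np.linalg.solve(np.eye(N) - P, np.eye(N))

p = 0.3                                 # downward drift, so the potential is finite
for N in (10, 20, 40):
    G = truncated_potential(p, N)
    # G[0,0] = expected number of visits to state 1 starting from state 1
    print(f"N={N:3d}  G[0,0] = {G[0, 0]:.6f}")
```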
Kempf proved that when a point is unstable in the sense of Geometric Invariant Theory, there is a ``worst'' destabilizing 1-parameter subgroup $\lambda^{*}$. It is natural to ask: what are the worst 1-PS for the unstable points in the GIT problems used to construct the moduli space of curves $\overline{M}_g$? Here we consider Chow points of toric rational curves with one unibranch singular point. We translate the problem into an explicit problem in convex geometry (finding the closest point on a polyhedral cone to a point outside it). We prove that the worst 1-PS has a combinatorial description that persists once the embedding dimension is sufficiently large, and present some examples.
We formulate a solution to the algebraic version of the inverse Jacobi problem. Using this solution, we produce explicit addition laws on any algebraic curve, generalizing the law suggested by Leykin [2] in the case of $(n, s)$ curves. This gives a positive answer to a question asked by T. Shaska of whether the addition laws appearing in [2] can be produced in a coordinate-free manner.
We study primal-dual algorithms for general empirical risk minimization problems in distributed settings, focusing on two prominent classes of algorithms. The first class is the communication-efficient distributed dual coordinate ascent (CoCoA), derived from the coordinate ascent method for solving the dual problem. The second class is the alternating direction method of multipliers (ADMM), including consensus ADMM, linearized ADMM, and proximal ADMM. We demonstrate that both classes of algorithms can be transformed into a unified update form that involves only primal and dual variables. This discovery reveals key connections between the two classes of algorithms: CoCoA can be interpreted as a special case of proximal ADMM for solving the dual problem, while consensus ADMM is closely related to a proximal ADMM algorithm. This connection provides the insight that, by adjusting the augmented Lagrangian parameter, we can easily enable the ADMM variants to outperform the CoCoA variants. We further explore linearized versions of ADMM and analyze the effects of tuning parameters on these ADMM variants in the distributed setting. Our theoretical findings are supported by extensive simulation studies and real-world data analysis.
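A minimal consensus-ADMM sketch for a distributed ridge-regression problem illustrates the primal-dual update form discussed above, with the augmented Lagrangian parameter $\rho$ exposed as the tuning knob whose effect the paper analyzes. The data and problem are synthetic placeholders, and the update is standard consensus ADMM rather than any specific variant from the paper.

```python
# Consensus ADMM for distributed ridge regression: K workers hold (A_k, b_k).
import numpy as np

rng = np.random.default_rng(0)
K, n_k, d = 5, 40, 8                       # workers, samples per worker, features
w_true = rng.standard_normal(d)
data = []
for _ in range(K):
    A = rng.standard_normal((n_k, d))
    b = A @ w_true + 0.1 * rng.standard_normal(n_k)
    data.append((A, b))

lam, rho, iters = 0.1, 1.0, 200
x = np.zeros((K, d))                       # local primal variables
u = np.zeros((K, d))                       # scaled dual variables
z = np.zeros(d)                            # global consensus variable

for _ in range(iters):
    # local primal update: argmin_x 0.5||A_k x - b_k||^2 + (rho/2)||x - z + u_k||^2
    for k, (A, b) in enumerate(data):
        H = A.T @ A + rho * np.eye(d)
        x[k] = np.linalg.solve(H, A.T @ b + rho * (z - u[k]))
    # global update: argmin_z lam||z||^2 + (rho/2) sum_k ||x_k - z + u_k||^2
    z = rho * (x + u).sum(axis=0) / (2.0 * lam + rho * K)
    # dual ascent on the consensus constraint x_k = z
    u += x - z

print("relative error:", np.linalg.norm(z - w_true) / np.linalg.norm(w_true))
```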
The inhomogeneous spin $q$-Whittaker polynomials are a family of symmetric polynomials which generalize the Macdonald polynomials at $t=0$. In this paper we prove that they are orthogonal with respect to a variant of the Sklyanin measure on the $n$-dimensional torus and, as a result, they form a basis of the space of symmetric polynomials in $n$ variables. Instrumental to the proof are inhomogeneous eigenrelations, which partially generalize those of Macdonald polynomials. We also consider several special cases of the inhomogeneous spin $q$-Whittaker polynomials, which include variants of symmetric Grothendieck polynomials or spin Whittaker functions.
In this paper we introduce and study generally non-self-adjoint realizations of the Dirac operator on an arbitrary finite metric graph. Employing the robust boundary triple framework, we derive, in particular, a variant of the Birman-Schwinger principle for its eigenvalues, and with an example of a star-shaped graph we show that the point spectrum may exhibit diverse behaviour. Subsequently, we find necessary and sufficient conditions on the transmission conditions at the graph's vertices under which the Dirac operator on the graph is symmetric with respect to the parity, the time reversal, or the charge conjugation transformation.
What minimum degree of a graph $G$ on $n$ vertices guarantees that the union of $G$ and a random $2$-factor (or permutation) is with high probability Hamiltonian? Gir\~ao and Espuny D{\'\i}az showed that the answer lies in the interval $[\tfrac15 \log n, n^{3/4+o(1)}]$. We improve both the upper and lower bounds to resolve this problem asymptotically, showing that the answer is $(1+o(1))\sqrt{n\log n/2}$. Furthermore, if $G$ is assumed to be (nearly) regular then we obtain the much stronger bound that any degree growing at least polylogarithmically in $n$ is sufficient for Hamiltonicity. Our proofs use some insights from the rich theory of random permutations and a randomised version of the classical technique of P\'osa rotation adapted to multiple exposure arguments.
The paper studies properties of acoustic operators in bounded Lipschitz domains $\Omega$ with m-dissipative generalized impedance boundary conditions. We prove that such acoustic operators have a compact resolvent if and only if the impedance operator from the trace space $H^{1/2} (\partial \Omega)$ to the other trace space $H^{-1/2} (\partial \Omega)$ is compact. This result is applied to the question of the discreteness of the spectrum and to the particular cases of damping and impedance boundary conditions. The method of the paper is based on abstract results written in terms of boundary tuples and is applicable to other types of wave equations.
Inspired by the quantization of classical quantities and the Rankin-Selberg convolution, we study the anticommutator operation $\{\cdot, \cdot\}$, where $\{A,B\} = AB + BA$, applied to real symmetric random matrix ensembles including the Gaussian orthogonal ensemble (GOE), the palindromic Toeplitz ensemble (PTE), the $k$-checkerboard ensemble, and the block $k$-circulant ensemble ($k$-BCE). Using combinatorial and topological techniques related to non-crossing and free matching properties of GOE and PTE, we obtain closed-form formulae for the moments of the limiting spectral distributions of $\{$GOE, GOE$\}$, $\{$PTE, PTE$\}$, $\{$GOE, PTE$\}$ and establish the corresponding limiting spectral distributions with generating functions and convolution. On the other hand, $\{$GOE, $k$-checkerboard$\}$ and $\{$$k$-checkerboard, $j$-checkerboard$\}$ exhibit entirely different spectral behavior than the other anticommutator ensembles: while the spectrum of $\{$GOE, $k$-checkerboard$\}$ consists of 1 bulk regime of size $\Theta(N)$ and 1 blip regime of size $\Theta(N^{3/2})$, the spectrum of $\{$$k$-checkerboard, $j$-checkerboard$\}$ consists of 1 bulk regime of size $\Theta(N)$, 2 intermediary blip regimes of size $\Theta(N^{3/2})$, and 1 largest blip regime of size $\Theta(N^2)$. In both cases, with the appropriate weight function, we are able to isolate the largest regime from the other regime(s) and analyze its moments and convergence results via combinatorics. We end with numerical computation of the lower even moments of $\{$GOE, $k$-BCE$\}$ and $\{$$k$-BCE, $k$-BCE$\}$ based on genus expansion and a discussion of the challenges in analyzing the intermediary blip regimes of $\{$$k$-checkerboard, $j$-checkerboard$\}$.
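A small Monte Carlo check of the anticommutator construction described above is easy to set up; the sketch below samples independent GOE matrices, forms $\{A,B\}$, and estimates low moments of the rescaled spectrum. The matrix size, number of trials, and rescaling are illustrative choices, not the paper's normalization or its combinatorial derivation.

```python
import numpy as np

def goe(N, rng):
    """Sample a GOE matrix with O(1) entries (eigenvalues of order sqrt(N))."""
    G = rng.normal(size=(N, N))
    return (G + G.T) / np.sqrt(2)

rng = np.random.default_rng(1)
N, trials = 200, 50
moments = np.zeros(4)
for _ in range(trials):
    A, B = goe(N, rng), goe(N, rng)
    C = A @ B + B @ A                    # the anticommutator {A, B}
    eigs = np.linalg.eigvalsh(C) / N     # rescale to get an O(1) bulk
    for k in range(1, 5):
        moments[k - 1] += np.mean(eigs ** k)
moments /= trials
print("empirical moments of the rescaled spectrum:", moments)
```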
We construct algorithms and topological invariants that allow us to distinguish the topological type of a surface, as well as functions and vector fields up to topological equivalence. In the first part (arXiv:2501.15657), we discussed basic concepts of differential topology. In the second part we discuss the main discrete topological structures used in the topological theory of dynamical systems: simplicial complexes, regular CW-complexes, the Euler characteristic and homology groups, Morse-Smale complexes and handle decompositions of manifolds, the Poincaré rotation index of a vector field, and discrete Morse functions and vector fields.
The main aim of this study is to analyze a fractional parabolic SIR epidemic model of reaction-diffusion type, using the nonlocal Caputo time-fractional derivative and the $p$-Laplacian operator. Immunity is imposed through a vaccination program, which is regarded as a control variable. Our main objective is to find the optimal control pair that reduces the number of infected individuals and the associated vaccination and treatment expenses over a bounded time and space domain. The existence and uniqueness of the nonnegative solution for the spatiotemporal SIR model are established. It is also demonstrated that an optimal control exists. In addition, we obtain a description of the optimal control in terms of state and adjoint functions. Then, the optimality system is resolved by a discrete iterative scheme that converges after an appropriate test, similar to the forward-backward sweep method. Finally, numerical approximations are given to show the effectiveness of the proposed control program, which provides meaningful results using different values of the fractional order and $p$, respectively the order of the Caputo derivative and of the $p$-Laplacian operator.
We investigate the problem of detecting and estimating a changepoint in the attachment function of a network evolving according to a preferential attachment model on $n$ vertices, using only a single final snapshot of the network. Bet et al.~\cite{bet2023detecting} show that a simple test based on thresholding the number of vertices with minimum degrees can detect the changepoint when the change occurs at time $n-\Omega(\sqrt{n})$. They further make the striking conjecture that detection becomes impossible for any test if the change occurs at time $n-o(\sqrt{n}).$ Kaddouri et al.~\cite{kaddouri2024impossibility} make a step forward by proving the detection is impossible if the change occurs at time $n-o(n^{1/3}).$ In this paper, we resolve the conjecture affirmatively, proving that detection is indeed impossible if the change occurs at time $n-o(\sqrt{n}).$ Furthermore, we establish that estimating the changepoint with an error smaller than $o(\sqrt{n})$ is also impossible, thereby confirming that the estimator proposed in Bhamidi et al.~\cite{bhamidi2018change} is order-optimal.
In this article, we investigate the stabilizability of the two- and three-dimensional Navier-Stokes equations with memory effects around a non-constant steady state using a localized interior control. The system is first linearized around a non-constant steady state and then reformulated into a coupled system by introducing a new variable to handle the integral term. Due to the presence of variable coefficients in the linear operator, the rigorous computation of eigenvalues and eigenfunctions becomes infeasible. Therefore, we concentrate on the principal operator, and investigate its analyticity and spectral properties. We establish a feedback stabilization result for the principal system, ensuring a specific decay rate. Using the feedback operator derived from this analysis, we extend the approach to the full system, constructing a closed-loop system. By proving a suitable regularity result and applying a fixed-point argument, we ultimately demonstrate the stabilizability of the full system. We also discuss the stabilizability of the corresponding vorticity equation around a non-constant steady state.
In this paper, we consider the minimization of a $C^2$-smooth and strongly convex objective depending on a given parameter, a setting found in many practical applications. We solve the problem with a family of inertial methods that covers a broad class of existing well-known inertial methods. Our main goal is to analyze the derivative of this algorithm, viewed as an infinite iterative process, in the sense of ``automatic'' differentiation. This procedure is very common and has gained much attention recently. From a pure optimization perspective and under some mild premises, we show that any sequence generated by these inertial methods converges to the unique minimizer of the problem, which depends on the parameter. Moreover, we establish a local linear convergence rate for the generated sequence. Concerning the differentiation of the scheme, we prove that the derivative of the sequence with respect to the parameter converges to the derivative of its limit, showing that any such sequence is ``derivative stable''. Finally, we investigate the rate at which this convergence occurs and show that it is locally linear with an error term tending to zero.
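The ``derivative stability'' phenomenon described above can be illustrated on a toy problem: the sketch below runs a heavy-ball iteration on the one-dimensional quadratic $f(x;\theta)=\tfrac{a}{2}x^2-\theta x$ and propagates the derivative of the iterates with respect to $\theta$ alongside the iteration. The objective, step size, and momentum are placeholder choices, not the paper's general setting.

```python
# Heavy-ball iteration on f(x; theta) = 0.5*a*x**2 - theta*x, together with
# the "piggyback" recursion for dx_k/dtheta obtained by differentiating the
# update.  The minimizer is x*(theta) = theta/a, so dx*/dtheta = 1/a.
a, theta = 4.0, 2.0
step, beta = 0.2, 0.5                 # hand-picked step size and momentum

x_prev, x = 0.0, 0.0                  # iterates
dx_prev, dx = 0.0, 0.0                # derivatives d x_k / d theta

for k in range(100):
    grad = a * x - theta              # gradient of f in x
    x_next = x - step * grad + beta * (x - x_prev)
    # differentiate the update w.r.t. theta (d grad / d theta = a*dx - 1)
    dx_next = dx - step * (a * dx - 1.0) + beta * (dx - dx_prev)
    x_prev, x = x, x_next
    dx_prev, dx = dx, dx_next

print("x_k  ->", x,  "(minimizer:", theta / a, ")")
print("dx_k ->", dx, "(true derivative:", 1.0 / a, ")")
```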
In this paper, we address two main topics. First, we study the problem of minimizing the sum of a smooth function and the composition of a weakly convex function with a linear operator on a closed vector subspace. For this problem, we propose a projected variable smoothing algorithm and establish a complexity bound of $\mathcal{O}(\epsilon^{-3})$ to achieve an $\epsilon$-approximate solution. Second, we investigate the Moreau envelope and the proximity operator of functions defined as the supremum of weakly convex functions, and we compute the proximity operator in two important cases. In addition, we apply the proposed algorithm for solving a distributionally robust optimization problem, the LASSO with linear constraints, and the max dispersion problem. We illustrate numerical results for the max dispersion problem.
Dirichlet's Lemma states that every primitive quadratic Dirichlet character $\chi$ can be written in the form $\chi(n) = (\frac{\Delta}n)$ for a suitable quadratic discriminant $\Delta$. In this article we define a group, the separant class group, that measures the extent to which Dirichlet's Lemma fails in general number fields $F$. As an application we will show that over fields with trivial separant class groups, genus theory of quadratic extensions can be made as explicit as over the rationals.
We study an iterative nonlinear solver for the Oldroyd-B system describing incompressible viscoelastic fluid flow. We establish a range of properties of the fixed-point-based solver, including the conditions under which it becomes contractive, and examine the smoothness of its corresponding fixed-point function. Under these properties, we demonstrate that the solver meets the assumptions of the recent Anderson acceleration (AA) framework, thereby showing that AA enhances the solver's linear convergence rate. Results from two benchmark tests illustrate how AA improves the solver's ability to converge as the Weissenberg number is increased.
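For readers unfamiliar with the AA framework mentioned above, the following is a compact Anderson acceleration sketch applied to a generic contractive fixed-point map (a toy componentwise cosine map with window $m=3$); it illustrates only the acceleration mechanism and is unrelated to the Oldroyd-B discretization itself.

```python
import numpy as np

def anderson(g, x0, m=3, iters=50, tol=1e-10):
    """Anderson acceleration of the fixed-point iteration x <- g(x)."""
    x = x0.copy()
    fx = g(x) - x
    G_hist, F_hist = [], []              # histories of g(x) and residuals
    for _ in range(iters):
        G_hist.append(g(x))
        F_hist.append(fx)
        if len(F_hist) > m + 1:          # keep at most m residual differences
            G_hist.pop(0); F_hist.pop(0)
        if len(F_hist) == 1:
            x_new = G_hist[-1]           # plain Picard step on the first pass
        else:
            dF = np.column_stack([F_hist[i + 1] - F_hist[i]
                                  for i in range(len(F_hist) - 1)])
            dG = np.column_stack([G_hist[i + 1] - G_hist[i]
                                  for i in range(len(G_hist) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, fx, rcond=None)
            x_new = G_hist[-1] - dG @ gamma
        fx_new = g(x_new) - x_new
        if np.linalg.norm(fx_new) < tol:
            return x_new
        x, fx = x_new, fx_new
    return x

# Toy contractive map whose fixed point solves x = cos(x) componentwise.
g = lambda x: np.cos(x)
x_star = anderson(g, np.full(5, 0.5))
print(x_star, "residual:", np.linalg.norm(g(x_star) - x_star))
```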
Let $R$ be a commutative ring with identity, and let $\R(R)$ denote the semiring of radical ideals of $R$. The radical functor $\R$, from the category of $R$-modules $R{-}\boldsymbol{\sf{Mod}}$ to the category of $\R(R)$-semimodules $\R(R){-}\boldsymbol{\sf{Semod}}$, maps any complex $\M=(M_n, f_n)_{n\geq 0}$ of $R$-modules to a complex $\R(\M)=(\R(M_n), \R(f_n))_{n\geq 0}$ of $\R(R)$-semimodules, where $\R(M_n)$ consists of radical submodules of $M_n$, and the $\R(R)$-semimodule homomorphisms $\R(f_n):\R(M_n)\rightarrow \R(M_{n-1})$ are defined by $\R(f_n)(N)=\rad(f_n(N))$. The $n$-th radical homology of the complex $(\R(M_n), \R(f_n))_{n\geq 0}$, denoted $H_n(\R(\M))$, consists of radical submodules $N$ of $M_n$ such that $f_n(N)$ is contained in the radical of the zero submodule of $M_{n-1}$, and two such radical submodules are equivalent under the Bourne relation modulo the image of $\R(f_{n+1})$. $H_n(\R(-))$ is regarded as a covariant functor from the category $\boldsymbol{\sf{Ch}}(R{-}\boldsymbol{\sf{Mod}})$ of chain complexes of $R$-modules to $\R(R){-}\boldsymbol{\sf{Semod}}$, which acts identically on any pair of homotopic maps of complexes of $R$-modules. In particular, if $\M$ and $\M'$ are homotopically equivalent, then $H_n(\R(\M))$ and $H_n(\R(\M'))$ are isomorphic $\R(R)$-semimodules. We provide conditions under which $H_n(\R(-))$ induces a long exact sequence of radical homology modules for any short exact sequence of complexes of $R$-modules, and satisfies the naturality condition for exact homology sequences. Finally, we introduce a projective resolution for an $R$-module $M$ based on $\R(R)$-semimodules and give conditions under which such a projective resolution exists and is unique up to a homotopy.
This paper presents a boundary element formulation for the solution of the Mild-Slope equation in wave propagation problems with variable water depth in one direction. Based on the Green's function approximation proposed by Belibassakis \cite{Belibassakis2000}, a complete fundamental-solution kernel is developed and combined with a boundary element scheme for the solution of water wave propagation problems in closed and open domains where the bathymetry changes arbitrarily and smoothly in a preferential direction. The ability of the proposed formulation to accurately represent wave phenomena like refraction, reflection, diffraction and shoaling, is demonstrated with the solution of some example problems, in which arbitrary geometries and variable seabed profiles with slopes up to 1:3 are considered. The obtained results are also compared with theoretical solutions, showing an excellent agreement that demonstrates its potential.
Using the result of Petersen $\&$ Wink '21, we find obstructions to the curvature and topology of compact Lorentzian manifolds admitting a unit-length timelike Killing vector field.
This paper investigates an initial boundary value problem for the relaxed one-dimensional compressible Navier-Stokes-Fourier equations. By transforming the system into Lagrangian coordinates, the resulting formulation exhibits a uniform characteristic boundary structure. We first construct an approximate system with non-characteristic boundaries and establish its local well-posedness by verifying the maximal nonnegative boundary conditions. Subsequently, through the construction of a suitable weighted energy functional and careful treatment of boundary terms, we derive uniform a priori estimates, thereby proving the global well-posedness of smooth solutions for the approximate system. Utilizing these uniform estimates and standard compactness arguments, we further obtain the existence and uniqueness of global solutions for the original system. In addition, the global relaxation limit is established. The analysis is fundamentally based on energy estimates.
Call a curve $C \subset \mathbb{P}^2$ defined over $\mathbb{F}_q$ transverse-free if every line over $\mathbb{F}_q$ intersects $C$ at some closed point with multiplicity at least 2. In 2004, Poonen used a notion of density to treat Bertini Theorems over finite fields. In this paper we develop methods for density computation and apply them to estimate the density of the set of polynomials defining transverse-free curves. In order to do so, we use a combinatorial approach based on blocking sets of $\operatorname{PG}(2, q)$ and prove an upper bound on the number of such sets of fixed size $< 2q$. We thus obtain that nearly all transverse-free curves contain singularities at every $\mathbb{F}_q$-point of some line.
We derive existence results and first order necessary optimality conditions for optimal control problems governed by quasilinear parabolic PDEs with a class of first order nonlinearities that include, for instance, quadratic gradient terms. Constraints on the gradient of the state, either pointwise in space and time or averaged in space and pointwise in time, control the growth of the nonlinear terms. We rely on and extend the improved regularity analysis for quasilinear parabolic PDEs on a whole scale of function spaces from [Hoppe et al, 2023]. In case of integral in space gradient-constraints we derive first-order optimality conditions under rather general regularity assumptions for domain, coefficients, and boundary conditions, similar to e.g. [Bonifacius and Neitzel, 2018]. In the case of pointwise in time and space gradient-constraints we use slightly stronger regularity assumptions leading to a classical smoother $W^{2,p}$-setting similar to [Casas and Chrysafinos, 2018].
In this paper we study holomorphic properties of infinite dimensional spin factors. Among the infinite dimensional Banach spaces with homogeneous open unit balls, we show that the spin factors are natural outlier spaces in which to ask the question (as was proved in the early 1970s for Hilbert spaces): Do biholomorphic automorphisms $g$ of the open unit ball $B$ have fixed points in $\overline B$? Here, for infinite dimensional spin factors, we provide reasonable conditions on $g$ that allow us to explicitly construct fixed points of $g$ lying on $\partial B$. En route, we also prove that every spin factor has the density property. In another direction, we focus on (compact) holomorphic maps $f:B\rightarrow B$, having no fixed point in $B$ and examine the sequence of iterates $(f^n)$. As $(f^n)$ does not generally converge, we instead trace the target set $T(f)$ of $f$, that is, the images of all accumulation points of $(f^n)_n$, for any topology finer than the topology of pointwise convergence on $B$. We prove for a spin factor that $T(f)$ lies on the boundary of a single bidisc unique to $f$.
An example of an infinite regular feebly compact quasitopological group is presented such that all continuous real-valued functions on the group are constant. The example is based on the use of Korovin orbits in $X^G$, where $X$ is a special regular countably compact space constructed by S. Bardyla and L. Zdomskyy and $G$ is an abstract Abelian group of an appropriate cardinality. Also, we study the interplay between the separation properties of the space $X$ and Korovin orbits in $X^G$. We show in particular that if $X$ contains two nonempty disjoint open subsets, then every Korovin orbit in $X^G$ is Hausdorff.
We study the uniform-in-time weak propagation of chaos for the consensus-based optimization (CBO) method on a bounded searching domain. We apply the methodology for studying long-time behaviors of interacting particle systems developed in the work of Delarue and Tse (ArXiv:2104.14973). Our work shows that the weak error has order $O(N^{-1})$ uniformly in time, where $N$ denotes the number of particles. The main strategy behind the proofs is the decomposition of the weak errors using the linearized Fokker-Planck equations and the exponential decay of their Sobolev norms. Consequently, our result leads to the joint convergence of the empirical distribution of the CBO particle system to the Dirac-delta distribution at the global minimizer in population size and running time in Wasserstein-type metrics.
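For context, the interacting particle system underlying CBO can be sketched in a few lines; in the sketch below the objective, the box domain, and the drift/noise/temperature parameters are toy choices, and the projection step is a crude stand-in for whatever boundary mechanism the analysis assumes.

```python
import numpy as np

def f(x):
    """Toy multimodal objective (Rastrigin-like); global minimum at the origin."""
    return np.sum(x ** 2 - 10.0 * np.cos(2 * np.pi * x) + 10.0, axis=-1)

rng = np.random.default_rng(2)
N, d = 200, 2                          # particles, dimension
lam, sigma, alpha, dt = 1.0, 0.7, 30.0, 0.05
lo, hi = -3.0, 3.0                     # bounded searching domain

X = rng.uniform(lo, hi, size=(N, d))
for step in range(400):
    w = np.exp(-alpha * (f(X) - f(X).min()))          # Gibbs weights (stabilized)
    x_cons = (w[:, None] * X).sum(axis=0) / w.sum()   # weighted consensus point
    noise = rng.normal(size=(N, d))
    dist = np.linalg.norm(X - x_cons, axis=1, keepdims=True)
    X = X - lam * (X - x_cons) * dt + sigma * dist * np.sqrt(dt) * noise
    X = np.clip(X, lo, hi)                            # stay in the bounded domain

print("consensus point:", x_cons, "objective value:", f(x_cons))
```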
The broad goal of the research surveyed in this article is to develop methods for understanding the aggregate behavior of interconnected dynamical systems, as found in mathematical physics, neuroscience, economics, power systems and neural networks. Questions concern the prediction of emergent (often unanticipated) phenomena and methods to formulate distributed control schemes to influence this behavior; these topics in turn prompt many other questions in the domain of learning. The area of mean field games, pioneered by Peter Caines, is well suited to addressing these topics. The approach is surveyed in the present paper within the context of controlled coupled oscillators.
Given a squarefree monomial ideal $I$ of a polynomial ring $Q$, we show that if the minimal free resolution $\mathbb{F}$ of $Q/I$ admits the structure of a differential graded (dg) algebra, then so does any "pruning" of $\mathbb{F}$. As an application, we show that if $Q/\mathcal{F}(\Delta)$, the quotient of the ambient polynomial ring by the facet ideal $\mathcal{F}(\Delta)$ of a simplicial complex $\Delta$, is minimally resolved by a dg algebra, then so is the quotient by the facet ideal of each facet-induced subcomplex of $\Delta$ (over the smaller polynomial ring). Along with techniques from discrete Morse theory and homological algebra, this allows us to give complete classifications of the trees and cycles $G$ with $Q/I_G$ minimally resolved by a dg algebra in terms of the diameter of $G$, where $I_G$ is the edge ideal of $G$.
Motivated by a popular code golf challenge, we review some key ideas from information theory and discuss how to efficiently compress a streaming file with an acceptable error rate.
We prove the existence of clopen marker sets with some strong regularity property. For each $n\geq 1$ and any integer $d\geq 1$, we show that there exist a positive integer $D$ and a clopen marker set $M$ in $F(2^{\mathbb{Z}^n})$ such that (1) for any distinct $x,y\in M$ in the same orbit, $\rho(x,y)\geq d$; (2) for any $1\leq i\leq n$ and any $x\in F(2^{\mathbb{Z}^n})$, there are non-negative integers $a, b\leq D$ such that $a\cdot x\in M$ and $-b\cdot x\in M$. As an application, we obtain a clopen tree section for $F(2^{\mathbb{Z}^n})$. Based on the strong marker sets, we get a quick proof that there exist clopen continuous edge $(2n+1)$-colorings of $F(2^{\mathbb{Z}^n})$. We also consider a similar strong markers theorem for more general generating sets. In dimension 2, this gives another proof of the fact that for any generating set $S\subseteq \mathbb{Z}^2$, there is a continuous proper edge $(2|S|+1)$-coloring of the Schreier graph of $F(2^{\mathbb{Z}^2})$ with generating set $S$.
We introduce the notion of bounded quasi-inversion closed semiprime $f$-algebras and we prove that, if $A$ is such an algebra, then any intermediate algebra in $A$ is an order ideal of $A$. This extends a recent result by Dominguez, who dealt with the unital case (the problem for $C(X)$-type spaces was solved earlier by Dominguez, Gomez-Perez, and Mulero). Our results are illustrated by examples of algebras of continuous functions and algebras of measurable functions.
This work introduces the Query/Hit (Q/H) learning model. The setup consists of two agents. One agent, Alice, has access to a streaming source, while the other, Bob, does not have direct access to the source. Communication occurs through sequential Q/H pairs: Bob sends a sequence of source symbols (queries), and Alice responds with the waiting time until each query appears in the source stream (hits). This model is motivated by scenarios with communication, computation, and privacy constraints that limit real-time access to the source. The error exponent for sequential hypothesis testing under the Q/H model is characterized, and a querying strategy, the Dynamic Scout-Sentinel Algorithm (DSSA), is proposed. The strategy employs a mutual information neural estimator to compute the error exponent associated with each query and to select the query with the highest efficiency. Extensive empirical evaluations on both synthetic and real-world datasets -- including mouse movement trajectories, typesetting patterns, and touch-based user interactions -- are provided to evaluate the performance of the proposed strategy in comparison with baselines, in terms of probability of error, query choice, and time-to-detection.
We present a conforming setting for a mixed formulation of linear elasticity with symmetric stress that has normal-normal continuous components across faces of tetrahedral meshes. We provide a stress element for this formulation with 30 degrees of freedom that correspond to standard boundary conditions. The resulting scheme converges quasi-optimally and is locking free. Numerical experiments illustrate the performance.
A coupled boundary spectral element method (BSEM) and spectral element method (SEM) formulation for the propagation of small-amplitude water waves over variable bathymetries is presented in this work. The wave model is based on the mild-slope equation (MSE), which provides a good approximation of the propagation of water waves over irregular bottom surfaces with slopes up to 1:3. In unbounded domains or infinite regions, space can be divided into two different areas: a central region of interest, where an irregular bathymetry is included, and an exterior infinite region with straight and parallel bathymetric lines. The SEM allows us to model the central region, where any variation of the bathymetry can be considered, while the exterior infinite region is modelled by the BSEM which, combined with the fundamental solution presented by Cerrato et al. [A. Cerrato, J. A. Gonz\'alez, L. Rodr\'iguez-Tembleque, Boundary element formulation of the mild-slope equation for harmonic water waves propagating over unidirectional variable bathymetries, Eng. Anal. Boundary Elem. 62 (2016) 22-34.] can include bathymetries with straight and parallel contour lines. This coupled model combines important advantages of both methods; it benefits from the flexibility of the SEM for the interior region and, at the same time, includes the fulfilment of the Sommerfeld's radiation condition for the exterior problem, that is provided by the BSEM. The solution approximation inside the elements is constructed by high order Legendre polynomials associated with Legendre-Gauss-Lobatto quadrature points, providing a spectral convergence for both methods. The proposed formulation has been validated in three different benchmark cases with different shapes of the bottom surface. The solutions exhibit the typical p-convergence of spectral methods.
Let $p$ be an odd prime number. In this article, we study the variation of Iwasawa invariants among $p$-congruent elliptic curves over certain $p$-adic Lie extensions. We investigate both the classical Selmer group as well as the fine Selmer group.
In this paper, we prove that a compact K\"ahler manifold $X$ with pseudo-effective (resp. singular positively curved) tangent bundle admits a smooth (resp. locally constant) rationally connected fibration $\phi \colon X \to Y$ onto a finite \'etale quotient $Y$ of a compact complex torus. This result extends the structure theorem previously established for smooth projective varieties to compact K\"ahler manifolds.
In the paper, the author expresses the difference $2^m\bigl[\zeta\bigl(-m,\frac{1+x}{2}\bigr)-\zeta\bigl(-m,\frac{2+x}{2}\bigr)\bigr]$ in terms of a linear combination of the function $\Gamma(m+1){\,}_2F_1(-m,-x;1;2)$ for $m\in\mathbb{N}_0$ and $x\in(-1,\infty)$ in the form of matrix equations, where $\Gamma(z)$, $\zeta(z,\alpha)$, and ${}_2F_1(a,b;c;z)$ stand for the classical Euler gamma function, the Hurwitz zeta function, and the Gauss hypergeometric function, respectively. This problem originates from the Landau level quantization in solid state materials.
Supersingular elliptic curve isogeny graphs underlie isogeny-based cryptography. For isogenies of a single prime degree $\ell$, their structure has been investigated graph-theoretically. We generalise the notion of $\ell$-isogeny graphs to $L$-isogeny graphs (studied in the prime field case by Delfs and Galbraith), where $L$ is a set of small primes dictating the allowed isogeny degrees in the graph. We analyse the graph-theoretic structure of $L$-isogeny graphs. Our approaches may be put into two categories: cycles and graph cuts. On the topic of cycles, we provide: a count for the number of non-backtracking cycles in the $L$-isogeny graph using traces of Brandt matrices; an efficiently computable estimate based on this approach; and a third ideal-theoretic count for a certain subclass of $L$-isogeny cycles. We provide code to compute each of these three counts. On the topic of graph cuts, we compare several algorithms to compute graph cuts which minimise a measure called the \textit{edge expansion}, outlining a cryptographic motivation for doing so. Our results show that a \emph{greedy neighbour} algorithm out-performs standard spectral algorithms for computing optimal graph cuts. We provide code and study explicit examples. Furthermore, we describe several directions of active and future research.
The main objective of this paper is to show that balls under invariant metrics on hyperbolic planar domains are finitely-connected. As applications, we give new and transparent proofs of classical results on conformal mappings of planar domains. In particular, we show that any conformal self-map of a hyperbolic planar domain with three fixed points is the identity. We also give a new and very simple proof of the theorem by Aumann and Carath\'eodory that states that the isotropy groups of a hyperbolic planar domain are either finite or the domain is simply-connected.
This paper introduces a multi-parameter regularization approach using the $\ell_1$ norm, designed to better adapt to complex data structures and problem characteristics while offering enhanced flexibility in promoting sparsity in regularized solutions. As data volumes grow, sparse representations of learned functions become critical for reducing computational costs during function operations. We investigate how the selection of multiple regularization parameters influences the sparsity of regularized solutions. Specifically, we characterize the relationship between these parameters and the sparsity of solutions under transform matrices, enabling the development of an iterative scheme for selecting parameters that achieve prescribed sparsity levels. Special attention is given to scenarios where the fidelity term is non-differentiable, and the transform matrix lacks full row rank. In such cases, the regularized solution, along with two auxiliary vectors arising in the sparsity characterization, are essential components of the multi-parameter selection strategy. To address this, we propose a fixed-point proximity algorithm that simultaneously determines these three vectors. This algorithm, combined with our sparsity characterization, forms the basis of a practical multi-parameter selection strategy. Numerical experiments demonstrate the effectiveness of the proposed approach, yielding regularized solutions with both predetermined sparsity levels and satisfactory approximation accuracy.
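The influence of multiple regularization parameters on sparsity can be illustrated with a simple proximal (ISTA-style) iteration in which two coordinate blocks carry different $\ell_1$ parameters. The smooth quadratic fidelity, the synthetic data, and the block structure below are assumptions for illustration; the paper's non-differentiable-fidelity case with a general transform matrix requires the full three-vector fixed-point proximity scheme, which is not reproduced here.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding operator, the proximity operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(3)
m, n = 60, 100
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.normal(size=8)
b = A @ x_true + 0.01 * rng.normal(size=m)

# Two parameter blocks, each with its own regularization parameter.
blocks = [np.arange(0, 50), np.arange(50, 100)]
lams = np.array([0.02, 0.1])

step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L for the quadratic fidelity
x = np.zeros(n)
for _ in range(500):
    z = x - step * (A.T @ (A @ x - b))        # forward (gradient) step
    for blk, lam in zip(blocks, lams):        # blockwise proximity step
        z[blk] = soft(z[blk], step * lam)
    x = z

for blk, lam in zip(blocks, lams):
    print(f"lambda={lam}: {np.count_nonzero(x[blk])} nonzeros in block")
```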
Let $(\mathcal X, d,\mu)$ be an RD-space, and let $\rho$ be an admissible function on $\mathcal X$. We establish necessary and sufficient conditions for the boundedness of a new class of generalized Calder\'on-Zygmund operators of log-Dini type on the Hardy space $H^1_\rho(\mathcal X)$, introduced by Yang and Zhou. Our results extend and unify some recent results, providing further insights into the study of singular integral operators in this setting.
Let $(\overline{M},g_0)$ be a $2$-D compact surface with boundary $\partial M$ and its interior $M$. We show that for a large class of initial and boundary data, the initial-boundary value problem of the normalized Ricci flow $(1.10)-(1.12)$, with prescribed geodesic curvature $\psi$ on $\partial M$, has a unique solution for all $t>0$, and it converges to the complete hyperbolic metric locally uniformly in $M$. Here the natural condition that $\psi>0$ causes the main difficulty in the a priori estimates in the corresponding initial-boundary problem $(1.15)-(1.17)$ of the parabolic equations, for which an auxiliary Cauchy-Dirichlet problem is introduced. We also provide examples of the boundary data $\psi$ which fits well with the natural asymptotic behavior of the geodesic curvature, but the solution to $(1.10)-(1.12)$ fails to converge to the complete hyperbolic metric.
A subgraph $H$ of an edge-colored graph $G$ is rainbow if all the edges of $H$ receive different colors. If $G$ does not contain a rainbow subgraph isomorphic to $H$, we say that $G$ is rainbow $H$-free. For connected graphs $H_1$ and $H_2$, if every rainbow $H_1$-free edge-colored complete graph colored in sufficiently many colors is rainbow $H_2$-free, we write $H_1\le H_2$. The binary relation $\le$ is reflexive and transitive, and hence it is a preorder. If $H_1$ is a subgraph of $H_2$, then trivially $H_1\le H_2$ holds. On the other hand, there exists a pair $(H_1, H_2)$ such that $H_1$ is a proper supergraph of $H_2$ and $H_1\le H_2$ holds. Cui et al.~[Discrete Math.~\textbf{344} (2021) Article Number 112267] characterized these pairs. In this paper, we investigate the pairs $(H_1, H_2)$ with $H_1\le H_2$ when neither $H_1$ nor $H_2$ is a subgraph of the other. We prove that there are many such pairs and investigate their structure with respect to $\le$.
For a given graph, by its \emph{connected partial symmetry index} we mean the number of all isomorphisms between connected induced subgraphs of the graph. In this brief note we answer the question in the title.
In this paper, we solve the fractional anisotropic Calder\'on problem with external data in the Euclidean space, in dimensions two and higher, for smooth Riemannian metrics that agree with the Euclidean metric outside a compact set. Specifically, we prove that the knowledge of the partial exterior Dirichlet--to--Neumann map for the fractional Laplace-Beltrami operator, given on arbitrary open nonempty sets in the exterior of the domain in the Euclidean space, determines the Riemannian metric up to diffeomorphism, fixing the exterior. We provide two proofs of this result: one relies on the heat semigroup representation of the fractional Laplacian and a pseudodifferential approach, while the other is based on a variable-coefficient elliptic extension interpretation of the fractional Laplacian.
In this paper, we consider a free boundary problem of two-phase inviscid incompressible fluid in a gravity field. The presence of the gravity field induces the novel phenomenon that there might be stagnation points on the free surface of the two-phase flow, where the velocity field of the fluid vanishes. From the mathematical point of view, the gradient of the stream function degenerates near the stagnation point, leading to singular behaviors on the free surface. The primary objective of this study is to investigate the singularity and regularity of the two-phase free surface, considering the mutual interaction between the two incompressible fluids in two dimensions. More precisely, if the two fluids meet locally at a single point, referred to as the possible two-phase stagnation point, we demonstrate that the singular side of the two-phase free surface exhibits a symmetric Stokes singular profile, while the regular side near this point maintains the $C^{1,\alpha}$ regularity. On the other hand, if the free surfaces of the two fluids stick together and have a non-trivial overlapping common boundary at the stagnation point, then the interaction between the two fluids will break the symmetry of the Stokes corner profile, which is attached to the $C^{1,\alpha}$ regular free surface on the other side. As a byproduct of our analysis, it is shown that the velocity field of the two fluids cannot vanish simultaneously on the two-phase free boundary. Our results generalize the significant works on the Stokes conjecture in [V\u{a}rv\u{a}ruc\u{a}-Weiss, Acta Math., 206, (2011)] for the one-phase gravity water wave, and the regularity results on the free boundaries in [De Philippis-Spolaor-Velichkov, Invent. Math., 225, (2021)] for two-phase fluids without gravity.
Characteristic functions of linear operators are analytic functions that serve as complete unitary invariants. Such functions, as long as they are built in a natural and canonical manner, provide representations of inner functions on a suitable domain and make significant contributions to the development of various theories in Hilbert function spaces. In this paper, we solve the problem of constructing such characteristic functions in the setting of polydiscs. In particular, we present a concrete description of the characteristic functions of tuples of commuting pure contractions and, consequently, provide a description of inner functions on polydiscs.
Diophantine approximation explores how well irrational numbers can be approximated by rationals, with foundational results by Dirichlet, Hurwitz, and Liouville culminating in Roth's theorem. Schmidt's subspace theorem extends Roth's results to higher dimensions, with profound implications to Diophantine equations and transcendence theory. This article provides a self-contained and accessible exposition of Roth's theorem and Schlickewei's refinement of the subspace theorem, with an emphasis on proofs. The arguments presented are classical and approachable for readers with a background in algebraic number theory, serving as a streamlined, yet condensed reference for these fundamental results.
We use tools of combinatorial group theory in order to compute the fundamental group of ramified covers of the projective line with the most general ramification type.
In this paper, we prove the hydrodynamic limit for the ergodic dynamics of the Facilitated Exclusion Process with closed boundaries in the symmetric, asymmetric and weakly asymmetric regimes. For this, we couple it with a Simple Exclusion Process by constructing a mapping that transforms the facilitated dynamics into the simple one. As the hydrodynamic behaviour of the simple exclusion process with closed boundaries has been extensively studied, we can deduce the corresponding hydrodynamics for the facilitated exclusion process.
We present an alternative $\mathbb{Q}$-form for Racinet's cyclotomic double shuffle Lie algebra, inspired by the double shuffle relations among congruent multiple zeta values studied by Yuan and Zhao. Our main result establishes an invariance characterization theorem, demonstrating how these two $\mathbb{Q}$-forms can be reconstructed from each other under Galois action.
This paper aims at giving solutions to six interesting interconnected open questions suggested by Professor Biagio Ricceri. The questions focus on the behavior of nonvanishing continuous vector-valued functions in finite-dimensional normed spaces as well as in infinite-dimensional normed spaces. Using the celebrated Hartman-Stampacchia Theorem (1966) on the solution existence of variational inequalities, we establish sharp lower estimates for the maximum displacements of nonvanishing continuous vector-valued functions. Then, combining the obtained results with suitable tools from functional analysis and several novel geometrical constructions, we get the above-mentioned solutions.
Smoothness is crucial for attaining fast rates in first-order optimization. However, many optimization problems in modern machine learning involve non-smooth objectives. Recent studies relax the smoothness assumption by allowing the Lipschitz constant of the gradient to grow with respect to the gradient norm, which accommodates a broad range of objectives in practice. Despite this progress, existing generalizations of smoothness are restricted to Euclidean geometry with $\ell_2$-norm and only have theoretical guarantees for optimization in the Euclidean space. In this paper, we address this limitation by introducing a new $\ell*$-smoothness concept that measures the norm of Hessian in terms of a general norm and its dual, and establish convergence for mirror-descent-type algorithms, matching the rates under the classic smoothness. Notably, we propose a generalized self-bounding property that facilitates bounding the gradients via controlling suboptimality gaps, serving as a principal component for convergence analysis. Beyond deterministic optimization, we establish an anytime convergence for stochastic mirror descent based on a new bounded noise condition that encompasses the widely adopted bounded or affine noise assumptions.
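As a concrete instance of a mirror-descent-type algorithm in a non-Euclidean geometry, the sketch below runs exponentiated gradient (entropic mirror map, so the relevant norm pair is $\ell_1/\ell_\infty$) on a convex quadratic over the probability simplex; the objective and the constant step size are toy choices, and the generalized smoothness analysis itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10
M = rng.normal(size=(n, n))
Q = M.T @ M                     # convex quadratic objective f(x) = 0.5 x^T Q x

def grad(x):
    return Q @ x

x = np.full(n, 1.0 / n)         # start at the uniform distribution on the simplex
eta = 0.05                      # constant step size (toy choice)
for _ in range(2000):
    # entropic mirror step (exponentiated gradient), then renormalize
    x = x * np.exp(-eta * grad(x))
    x /= x.sum()

print("objective:", 0.5 * x @ Q @ x, " support size:", np.count_nonzero(x > 1e-6))
```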
The aim of this article is to study the largest domain space $[T,X]$, whenever it exists, of a given continuous linear operator $T\colon X\to X$, where $X\subseteq H(\mathbb{D})$ is a Banach space of analytic functions on the open unit disc $\mathbb{D}\subseteq \mathbb{C}$. That is, $[T,X]\subseteq H(\mathbb{D})$ is the \textit{largest} Banach space of analytic functions containing $X$ to which $T$ has a continuous, linear, $X$-valued extension $T\colon [T,X]\to X$. The class of operators considered consists of generalized Volterra operators $T$ acting in the Korenblum growth Banach spaces $X:=A^{-\gamma}$, for $\gamma>0$. Previous studies dealt with the classical Ces\`aro operator $T:=C$ acting in the Hardy spaces $H^p$, $1\leq p<\infty$, \cite{CR}, \cite{CR1}, in $A^{-\gamma}$, \cite{ABR-R}, and more recently, generalized Volterra operators $T$ acting in $X:=H^p$, \cite{BDNS}.
We study the two-plectic geometry of the six-sphere induced by pulling back a canonical $G_2$-invariant three-form from $\mathbb{R}^7$. Notably we explicitly prove non-flatness of this structure and show that its infinitesimal automorphisms are given by the exceptional Lie algebra $\mathfrak{g}_2$. Several interesting classes of solutions of the dynamical Hamilton-de Donder-Weyl equations with one- and two-dimensional sources are exhibited.
In this work, we investigate the asymptotic behavior of integral functionals of stationary Gaussian random fields as the integration domain tends to be the whole space. More precisely, using the Wiener chaos expansion and Malliavin-Stein method, we establish an almost sure central limit theorem (ASCLT) only under mild conditions on the covariance function of the underlying stationary Gaussian field. In this setting, we additionally derive a quantitative central limit theorem with rate of convergence in Wasserstein distance, and show a certain regularity property for the said integral functionals (the latter under weaker conditions). In particular, we solve an open question on the Malliavin differentiability of the excursion volume of Berry's random wave model. As a key consequence of our analysis, we obtain the exact asymptotic rate (as a function of the exponent) for moments of Bessel functions, thus confirming a conjecture based on existing numerical simulations. In the end, we provide two applications of our result: (i) ASCLT in the context of Breuer-Major central limit theorems, (ii) ASCLT for Berry's random wave model. It is worth stressing that our approach does not require any knowledge on the regularity properties of random variables (e.g., Malliavin differentiability) and hence not only complements the existing literature, but also leads to novel results that are of independent interest.
Let $f_1(z),\ldots, f_m(z)$ be power series in $\mathbb{Q}_p[[z]]$ such that, for every $1\leq i\leq m$, $f_i(z)$ is a solution of a differential operator $\mathcal{L}_i\in E_p[d/dz]$, where $E_p$ is the field of analytic elements. We prove that if, for every $1\leq i\leq m$, $\mathcal{L}_i$ has a strong Frobenius structure and has maximal order multiplicity at zero (MOM), then $f_1(z),\ldots, f_m(z)$ are algebraically dependent over $E_p$ if and only if there are integers $a_1,\ldots, a_m$, not all zero, such that $f^{a_1}_1(z)\cdots f^{a_m}_m(z)\in E_p$. The main consequence of this result is that it allows us to study the algebraic independence of a large class of $G$\nobreakdash-functions and certain $E$\nobreakdash-functions.
We show that different choices of generators of the Galois group of $\mathbb{F}_{q^n}/\mathbb{F}_{q}$ ($n\geq 3$) produce non-isotopic cyclic semifields $\mathbb{F}_{q^n}[t;\sigma]/\mathbb{F}_{q^n}[t;\sigma](t^m-a)$ when $m=n$: for $n\geq m$, there are $\varphi(n)$ non-isomorphic classes of Sandler semifields $\mathbb{F}_{q^n}[t;\sigma]/\mathbb{F}_{q^n}[t;\sigma](t^m-a)$, one class for each generator $\sigma$ of ${\rm Gal}(\mathbb{F}_{q^n}/\mathbb{F}_{q})$ involved in their construction, where $\varphi$ is the Euler function. We prove that when $n=m$, two Sandler semifields constructed from different generators $\sigma_1$ and $\sigma_2$ of ${\rm Gal}(\mathbb{F}_{q^n}/\mathbb{F}_{q})$ are not isotopic. Hence when $n=m$ there are $\varphi(m)$ non-isotopic classes of these semifields, each class belonging to one choice of generator. We then present a full parametrization of the non-isomorphic Sandler semifields of order $q^{m^2}$ with center $\mathbb{F}_q$, and nuclei $\mathbb{F}_{q^m}$, when $m$ is prime and $\mathbb{F}_{q}$ contains a primitive $m$th root of unity. Since for $m=n$, two Sandler semifields constructed from the same generator are isotopic if and only if they are isomorphic, this parametrizes these Sandler semifields up to isotopy, and thus the corresponding non-Desarguesian projective planes as well. Most of our results are proved in all generality for any cyclic Galois field extension.
In this paper, we define a family of dimensions for Borel measures that lie between the Hausdorff and Minkowski dimensions for measures, analogous to the intermediate dimensions of sets. Previously, Hare et al. [10] defined intermediate dimensions that interpolate between the Minkowski and Assouad dimensions for measures. Additionally, Fraser [7] introduced intermediate dimensions that interpolate between the Fourier and Hausdorff dimensions of measures. Our results address a "gap" in the study of dimension interpolation for measures, almost completing the spectrum of intermediate dimensions for measures: from Fourier to Assouad dimensions. Furthermore, Theorem 3.11 can be interpreted as a "reverse Frostman" lemma for intermediate dimensions. We also obtain a capacity-theoretic definition that enables us to estimate the intermediate dimensions of pushforward measures by projections.
We define a new relation between character triples and prove some Clifford theory properties for weights in terms of character triples.
Let $(W, R)$ be a Coxeter system and let $w \in W$. We say that $u$ is a prefix of $w$ if there is a reduced expression for $u$ that can be extended to one for $w$. That is, $w = uv$ for some $v$ in $W$ such that $\ell(w) = \ell(u) + \ell(v)$. We say that $w$ has the ancestor property if the set of prefixes of $w$ contains a unique involution of maximal length. In this paper we show that all Coxeter elements of finitely generated Coxeter groups have the ancestor property, and hence a canonical expression as a product of involutions. We conjecture that the property in fact holds for all non-identity elements of finite Coxeter groups.
This paper shows that lumped directed-area vectors at edges and dual control volumes required to implement the edge-based discretization can be computed without explicitly defining the dual control volume around each node for triangular and tetrahedral grids. It is a simpler implementation because there is no need to form a dual control volume by connecting edge-midpoints, face centroids, and element centroids, and also reduces the time for computing lumped directed-area vectors for a given grid, especially for tetrahedral grids. The speed-up achieved by the proposed algorithm may not be large enough to greatly impact the overall simulation time, but the proposed algorithm is expected to serve as a major stepping stone towards extending the edge-based discretization to four dimensions and beyond (e.g., space-time simulations). Efficient algorithms for computing lumped directed-area vectors and dual volumes without forming dual volumes are presented, and their implementations are described and compared with traditional algorithms in terms of complexity as well as actual computing time for a given grid.
The Theory of Proportions and Symbolic Allusions applied Interdisciplinary (TPASAI) is a framework that integrates mathematics, linguistics, psychology, and game theory to uncover hidden patterns and proportions in reality. Its central idea is that numerical encoding of symbols, dates, and language can reveal recurring structures and connections that reflect universal principles. By applying fractal analysis, the theory identifies patterns across different scales, offering a unifying perspective on the structure of the world. One key aspect of TPASAI is symbolic analysis, which allows for the reinterpretation of traumatic experiences in psychotherapy. For example, assigning numerical values to elements like fingers, dates, or words can help individuals uncover meaningful associations between personal experiences and collective symbols. This approach encourages cognitive flexibility and provides a therapeutic avenue for recontextualizing emotions. The theory also incorporates principles of game theory, which frame reality as a system of symbolic "codes" governed by rules that can be understood and strategically used. This perspective is especially useful for psychological conditions like obsessive-compulsive disorder (OCD), enabling patients to approach their obsessions as decipherable patterns rather than rigid constraints. TPASAI has practical applications in psychology, education, and technology. In education, it aids in teaching mathematical and linguistic concepts by exploring connections between symbolic representations and real-world events. In technology, the methodology can be employed in ciphering and natural language processing. The innovation of TPASAI lies in its ability to merge the structured rigor of mathematics with the interpretative flexibility of symbolic analysis, offering a deeper understanding of events and relationships.
Let $\mathrm{Mp}(2n)$ be the metaplectic group over a local field $F \supset \mathbb{Q}_p$ defined by an additive character of $F$ of conductor $4\mathfrak{o}_F$. Gan-Savin ($p \neq 2$) and Takeda-Wood ($p=2$) obtained an equivalence between the Bernstein block of $\mathrm{Mp}(2n)$ containing the even (resp. odd) Weil representation and the Iwahori-spherical block of the split $\mathrm{SO}(2n+1)$ (resp. its non-split inner form), by giving an isomorphism between Hecke algebras. We revisit this equivalence from an endoscopic perspective. It turns out that the L-parameters of irreducible representations are preserved, whilst the difference between characters of component groups is governed by symplectic local root numbers.
The Fermi-Pasta-Ulam (FPU) system, initially introduced by Fermi for numerical simulations, models vibrating chains with fixed endpoints, where particles interact weakly, nonlinearly with their nearest neighbors. Contrary to the anticipated ergodic behavior, the simulation revealed nearly periodic (quasi-periodic) motion of the solutions, a phenomenon later referred to as the FPU paradox. A partial but remarkable explanation was provided by Zabusky and Kruskal [36], who formally derived the continuum limit of the FPU system, connecting it to the Korteweg-de Vries (KdV) equation. This formal derivation was later rigorously justified by Bambusi and Ponno [4]. In this paper, we revisit the problem studied in [4], specifically focusing on the continuum limit of the periodic FPU system for a broader class of initial data, as the number of particles $N$ tends to infinity within a fixed domain. Unlike the non-periodic case discussed in [15], periodic FPU solutions lack a (local) smoothing effect, posing a significant challenge in controlling one derivative in the nonlinearity. This control is crucial not only for proving the (uniform in $N$) well-posedness for rough data but also for deriving the continuum limit. The main strategies to resolve this issue involve deriving $L^4$-Strichartz estimates for FPU solutions, analogous to those previously derived for KdV solutions in [7], and regularizing the system via the normal form method introduced in [1].
In this paper, we introduce a linear stochastic volatility model driven by $\alpha$-stable processes, which admits a unique positive solution. To preserve positivity, we modify the classical forward Euler-Maruyama scheme and analyze its numerical properties. The scheme achieves a strong convergence order of $1/\alpha$. Numerical simulations are presented at the end to verify theoretical results.
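A toy sketch of a forward Euler-Maruyama step driven by symmetric $\alpha$-stable increments is given below; the drift/diffusion coefficients and the simple positive-part modification are placeholders rather than the paper's model or its positivity-preserving scheme, and the stated strong order $1/\alpha$ is not verified here.

```python
import numpy as np
from scipy.stats import levy_stable

alpha = 1.7                            # stability index of the driving process
kappa, theta, sigma = 2.0, 1.0, 0.3    # placeholder linear-model coefficients
T, N = 1.0, 1000
dt = T / N

x = np.empty(N + 1)
x[0] = 1.0
# symmetric alpha-stable increments, scaled by dt**(1/alpha)
dL = dt ** (1.0 / alpha) * levy_stable.rvs(alpha, 0.0, size=N, random_state=0)

for k in range(N):
    drift = kappa * (theta - x[k])
    step = x[k] + drift * dt + sigma * x[k] * dL[k]
    x[k + 1] = max(step, 0.0)          # crude positivity fix (placeholder for a
                                       # genuinely positivity-preserving scheme)

print("terminal value:", x[-1], " minimum over the path:", x.min())
```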
In this paper, we investigate properties of potential triples $(X,\Delta,D)$ consisting of a pair $(X,\Delta)$ and a pseudoeffective $\mathbb{R}$-Cartier divisor $D$. In particular, we show that if $D$ admits a birational Zariski decomposition, then one can associate a generalized pair structure to the potential triple $(X,\Delta,D)$. Moreover, we can run the generalized MMP on $(K_X+\Delta+D)$ as special cases. As an application, we also show that for a pklt pair $(X,\Delta)$, if $-(K_X+\Delta)$ admits a birational Zariski decomposition with $\mathrm{NQC}$ positive part, then there exists a $-(K_X+\Delta)$-minimal model.
In this paper, we describe an explicit extension formula in sensitivity analysis regarding the Malliavin weight for jump-diffusion mean-field stochastic differential equations whose local Lipschitz drift coefficients are influenced by the product of the solution and its law. We show that these extended equations have unique Malliavin differentiable solutions in Wiener-Poisson space and establish the sensitivity analysis of path-dependent discontinuous payoff functions. This is achieved by finding a relation between the stochastic flow of the solutions and their derivatives. The Malliavin derivatives are defined via a chaos expansion approach in which the chain rule does not hold. The convergence of the Euler method for approximating the Delta Greek is proved. A simulation experiment illustrates our results for computing the Delta in the context of financial mathematics and demonstrates that the Malliavin Monte-Carlo computations applied in our formula are more efficient than using the finite difference method directly.
We prove some results concerning the finitely additive vector integrals of Bochner and Pettis and their representation over a countably additive probability space. Applications to the convergence of vector-valued martingales and to the non-compact Choquet theorem are provided.
We study point-line configurations, their minimal matroids, and their associated circuit varieties. We present an algorithm for identifying the minimal matroids of these configurations with respect to dependency order, or equivalently, the maximal matroids with respect to weak order, and use it to determine the irreducible decomposition of their corresponding circuit varieties. Our algorithm is applied to several classical configurations, including the Fano matroid, affine plane of order three, MacLane, and Pappus configurations. Additionally, we explore the connection to a conjecture by Jackson and Tanigawa, which provides a criterion for the uniqueness of the minimal matroids.
Constraint-based metabolic models can be used to investigate the intracellular physiology of microorganisms. These models couple genes to reactions, and typically seek to predict metabolite fluxes that optimize some biologically important metric. Classical techniques, like Flux Balance Analysis (FBA), formulate the metabolism of a microbe as an optimization problem where growth rate is maximized. While FBA has found widespread use, it often leads to thermodynamically infeasible solutions that contain internal cycles (loops). To address this shortcoming, Loopless-Flux Balance Analysis (ll-FBA) seeks to predict flux distributions that do not contain these loops. ll-FBA is a disjunctive program, usually reformulated as a mixed-integer program, and is challenging to solve for biological models that often contain thousands of reactions and metabolites. In this paper, we compare various reformulations of ll-FBA and different solution approaches. Overall, the combinatorial Benders' decomposition is the most promising of the tested approaches with which we could solve most instances. However, the model size and numerical instability pose a challenge to the combinatorial Benders' method.
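For orientation, classical FBA reduces to a linear program: maximize a biomass flux subject to the steady-state constraint $Sv=0$ and flux bounds. The sketch below solves a made-up three-reaction toy network with scipy; the loop-removal constraints of ll-FBA and the Benders' decomposition discussed above are not included.

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux balance analysis: maximize biomass flux v3 subject to steady state
# S v = 0 and flux bounds (made-up network: uptake -> A -> B -> biomass).
S = np.array([[1.0, -1.0,  0.0],     # metabolite A balance
              [0.0,  1.0, -1.0]])    # metabolite B balance
bounds = [(0, 10), (0, 1000), (0, 1000)]
c = np.array([0.0, 0.0, -1.0])       # linprog minimizes, so negate the objective

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes:", res.x, " biomass flux:", -res.fun)
```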
We utilize Galois cohomology to classify the real subalgebras of the special linear algebra $\mathfrak{sl}_3(\mathbb{R})$, leveraging the established classification of complex subalgebras of $\mathfrak{sl}_3(\mathbb{C})$. Although a classification of the real subalgebras of $\mathfrak{sl}_3(\mathbb{R})$ already exists, our results and the methodology employed are significant for three key reasons. Firstly, it introduces a new approach to classifying real subalgebras of real semisimple Lie algebras, providing a framework for future work. Secondly, given the computational complexity and intricacy of such classifications, our verification of the previous classification through a different method provides a valuable contribution to the literature. Thirdly, subalgebras of semisimple Lie algebras have applications in both physics and applied mathematics; contributions to understanding the subalgebra structure of semisimple Lie algebras may enhance the implementation and comprehension of these applications.
We can directly sample from the conditional distribution of any log-affine model. The algorithm is a Markov chain on a bounded integer lattice, and its transition probability is the ratio of the UMVUE (uniformly minimum variance unbiased estimator) of the expected counts to the total number of counts. The computation of the UMVUE accounts for most of the computational cost, which makes the implementation challenging. Here, we investigate an approximate algorithm that replaces the UMVUE with the MLE (maximum likelihood estimator). Although it is generally not exact, it is efficient and easy to implement; no prior study is required, such as of the connection matrices of the holonomic ideal needed in the original algorithm.
We introduce a construction of the Koch snowflake that is not inherently six-way symmetrical, based on iteratively placing similar rhombi. This construction naturally splits the snowflake into four identical self-similar curves, in contrast to the typical decomposition into three Koch curves. Varying the shape of the rhombi creates a continuous family of new fractal curves with rectangular symmetry. We compute the Hausdorff dimension of the generalized curve and show that it attains a maximum at the original Koch snowflake.
We review some regularity results for the Laplacian and $p$-Laplacian in metric measure spaces. The focus is mainly on interior H\"older, Lipschitz and second-order regularity estimates, and on spaces supporting a Poincar\'e inequality or having Ricci curvature bounded below.
The aim of this paper is to give a categorical equivalence for Stone algebras. We introduce the variety of Stone-Kleene algebras with intuitionistic negation, or Stone KAN-algebras for short, and explore Kalman's construction for Stone algebras. We examine the centered algebras within this new variety and prove that the category of Stone algebras is equivalent to the category of centered Stone KAN-algebras. Moreover, inspired by Monteiro's construction for Nelson algebras, we propose a method to construct a centered Stone KAN-algebra from a given Stone KAN-algebra and show the connection between Kalman's construction and Monteiro's construction.
Choosing the right system architecture for the problem at hand is challenging due to the large design space and high uncertainty in the early stage of the design process. Formulating the architecting process as an optimization problem may mitigate some of these challenges. This work investigates strategies for solving System Architecture Optimization (SAO) problems: expensive, black-box, hierarchical, mixed-discrete, constrained, multi-objective problems that may be subject to hidden constraints. Imputation ratio, correction ratio, correction fraction, and max rate diversity metrics are defined for characterizing hierarchical design spaces. This work considers two classes of optimization algorithms for SAO: Multi-Objective Evolutionary Algorithms (MOEA) such as NSGA-II, and Bayesian Optimization (BO) algorithms. A new Gaussian process kernel is presented that enables modeling hierarchical categorical variables, extending previous work on modeling continuous and integer hierarchical variables. Next, a hierarchical sampling algorithm that uses design space hierarchy to group design vectors by active design variables is developed. Then, it is demonstrated that integrating more hierarchy information in the optimization algorithms yields better optimization results for BO algorithms. Several realistic single-objective and multi-objective test problems are used for investigations. Finally, the BO algorithm is applied to a jet engine architecture optimization problem. This work shows that the developed BO algorithm can effectively solve the problem with one order of magnitude fewer function evaluations than NSGA-II. The algorithms and problems used in this work are implemented in the open-source Python library SBArchOpt.
We prove that there are $\gg\frac{X^{\frac{1}{3}}}{(\log X)^2}$ imaginary quadratic fields $k$ with discriminant $|d_k|\leq X$ and an ideal class group of $5$-rank at least $2$. This improves a result of Byeon, who proved the lower bound $\gg X^{\frac{1}{4}}$ in the same setting. We use a method of Howe, Lepr\'{e}vost, and Poonen to construct a genus $2$ curve $C$ over $\mathbb{Q}$ such that $C$ has a rational Weierstrass point and the Jacobian of $C$ has a rational torsion subgroup of $5$-rank $2$. We deduce the main result from the existence of the curve $C$ and a quantitative result of Kulkarni and the second author.
Bayesian optimization (BO) is one of the most powerful strategies to solve computationally expensive-to-evaluate blackbox optimization problems. However, BO methods are conventionally used for optimization problems of small dimension because of the curse of dimensionality. In this paper, a high-dimensional optimization method incorporating linear embedding subspaces of small dimension is proposed to efficiently perform the optimization. An adaptive learning strategy for these linear embeddings is carried out in conjunction with the optimization. The resulting BO method, named efficient global optimization coupled with random and supervised embedding (EGORSE), combines in an adaptive way both random and supervised linear embeddings. EGORSE has been compared to state-of-the-art algorithms and tested on academic examples with a number of design variables ranging from 10 to 600. The obtained results show the high potential of EGORSE to solve high-dimensional blackbox optimization problems, in terms of both CPU time and the limited number of calls to the expensive blackbox simulation.
We introduce $(r+1)$-completed cycles $k$-leaky Hurwitz numbers, prove their piecewise polynomiality, and establish their chamber polynomiality structure and wall-crossing formulae. For $k=0$ the results recover previous results of Shadrin-Spitz-Zvonkine. The specialization to $r=1$ recovers Hurwitz numbers that are close to the ones studied by Cavalieri-Markwig-Ranganathan and Cavalieri-Markwig-Schmitt. The ramifications differ by a lower-order torus correction, natural from the Fock space perspective, which affects neither the genus zero enumeration nor the enumeration for leaky parameter values $k = \pm 1$ in all genera.
This paper presents a Newton-based stochastic extremum-seeking control method for real-time optimization in multi-input systems with distinct input delays. It combines predictor-based feedback and Hessian inverse estimation via stochastic perturbations to enable delay compensation with user-defined convergence rates. The method ensures exponential stability and convergence near the unknown extremum, even under long delays. It extends to multi-input, single-output systems with cross-coupled channels. Stability is analyzed using backstepping and infinite-dimensional averaging. Numerical simulations demonstrate its effectiveness in handling time-delayed channels, showcasing both the challenges and benefits of real-time optimization in distributed parameter settings.
We investigate a cancellation property satisfied by an Eulerian digraph $D$. Namely, unless $D$ is a single directed cycle, we have $\sum_{t\geq 1} (-1)^{t} |\mathfrak{C}_{t}(D)| =0 $, where $\mathfrak{C}_{t}(D)$ is the set of partitions of Eulerian circuits of $D$ into $t$ circuits. To show this, we utilize Viennot's theory of Heaps of Pieces, and in particular, the bijection between closed walks of a digraph and heaps with a unique maximal piece. We consider the partition lattice of the edge-set of a digraph $D$, restricted to the join-semilattice $T(D)$ induced by elements whose blocks are Eulerian. The up-set of a minimal element $a\in T(D)$ is shown to be isomorphic to the bond lattice $L(G)$ of the intersection graph $G$ of cycles of $D$. Tools developed by Whitney and Rota for the calculation of the M\"{o}bius function of the bond lattice allow us to proceed by induction on the number of edges of $D$. In the process, we use the equivalence between heaps with a fixed maximal piece and unique sink orientations of the bond lattice $L(G)$. Finally, we apply the aforementioned cancellation property in order to deduce the classical Harary-Sachs Theorem for graphs of rank $2$ from a hypergraph generalization thereof, remedying a gap in a previous proof of this.
We formulate and answer Gorenstein projective, flat, and injective analogues of a classical projectivity question for group rings under some mild additional assumptions. Although the original question, which was proposed by Jang-Hyun Jo in 2007, was for integral group rings, in this article we deal with more general commutative base rings. We make use of the vast developments that have happened in the field of Gorenstein homological algebra over group rings in recent years, and we also improve and generalize several existing results from this area along the way.
A split graph is a graph whose vertex set can be partitioned into a clique and an independent set. The word-representability of split graphs was studied in a series of papers in the literature, and the class of word-representable split graphs was characterized through semi-transitive orientation. Nonetheless, the representation number of this class of graphs is still not known. In general, determining the representation number of a word-representable graph is an NP-complete problem. In this work, through an algorithmic procedure, we show that the representation number of the class of word-representable split graphs is at most three. Further, we characterize the class of word-representable split graphs as well as the class of split comparability graphs which have representation number exactly three.
A complete description of the local geometry of the $p$-adic eigencurve at $p$-irregular classical weight one cusp forms is given in the cases where the usual $R=T$ methods fall short. As an application, we show that the ordinary $p$-adic \'etale cohomology group attached to the tower of elliptic modular curves $X_1(Np^r)$ is not free over the Hecke algebra, when localized at a $p$-irregular weight one point.
We prove a quantitative isoperimetric inequality for nearly spherical subsets of the Bergman ball in $\mathbb{C}^n$, and we prove the Fuglede theorem for such sets. This result is a counterpart of a similar result obtained for the hyperbolic unit ball, and it constitutes the first result on the isoperimetric phenomenon in the Bergman ball.
Following recent work of T.~Alazard and C.~Shao on applications of para-differential calculus to smooth conjugacy and stability problems for Hamiltonian systems, we prove finite codimension stability of invariant surfaces (in finite differentiability classes) of flat geodesic flows on translation surfaces. The result is also based on work of the author on the cohomological equation for translation flows.
We construct a family of bases for the Kauffman bracket skein module (KBSM) of the product of an annulus and a circle. Using these bases, we find a new basis for the KBSM of $(\beta,2)$-fibered torus as a first step toward developing techniques for computing KBSM of a family of small Seifert fibered $3$-manifolds.
Competitive games involving thousands or even millions of players are prevalent in real-world contexts, such as transportation, communications, and computer networks. However, learning in these large-scale multi-agent environments presents a grand challenge, often referred to as the "curse of many agents". In this paper, we formalize and analyze the Static Mean-Field Game (SMFG) under both full and bandit feedback, offering a generic framework for modeling large population interactions while enabling independent learning. We first establish close connections between SMFG and variational inequality (VI), showing that SMFG can be framed as a VI problem in the infinite agent limit. Building on the VI perspective, we propose independent learning and exploration algorithms that efficiently converge to approximate Nash equilibria, when dealing with a finite number of agents. Theoretically, we provide explicit finite sample complexity guarantees for independent learning across various feedback models in repeated play scenarios, assuming (strongly-)monotone payoffs. Numerically, we validate our results through both simulations and real-world applications in city traffic and network access management.
The aim of this paper is to study measure-theoretical rigidity and partial rigidity for classes of Cantor dynamical systems including Toeplitz systems and enumeration systems. We use Bratteli diagrams to control invariant measures that are produced in our constructions. This leads to systems with desired properties. Among other things, we show that there exist Toeplitz systems with zero entropy which are not partially measure-theoretically rigid with respect to any of its invariant measures. We investigate enumeration systems defined by a linear recursion, prove that all such systems are partially rigid and present an example of an enumeration system which is not measure-theoretically rigid. We construct a minimal $\mathcal{S}$-adic Toeplitz subshift which has countably infinitely many ergodic invariant probability measures which are rigid for the same rigidity sequence.
Stochastic differential equations have proved to be a valuable governing framework for many real-world systems which exhibit ``noise'' or randomness in their evolution. One feature of interest in such systems is the shape of their equilibrium probability distribution, if such a thing exists. In some cases a straightforward integral equation may yield this steady-state distribution, but in other cases the equilibrium distribution exists and yet that integral equation diverges. Here we establish a new equilibrium-analysis technique based on the logic of finite-timestep simulation which allows us to glean information about the equilibrium regardless; in particular, it yields a relationship between the raw moments of the equilibrium distribution. We utilize this technique to extract information about one such equilibrium that resists direct characterization.
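For illustration only, the following Python sketch shows the finite-timestep simulation logic alluded to above, applied to a hypothetical SDE $dX=-X^3\,dt+dW$ (our own illustrative choice, not the system studied in the paper): long runs of the Euler-Maruyama discretization are used to estimate raw moments of the equilibrium distribution.

```python
import numpy as np

# Hypothetical SDE dX = -X^3 dt + dW (illustrative choice, not the paper's system).
rng = np.random.default_rng(0)
dt, n_steps, n_paths = 1e-3, 100_000, 100
x = rng.normal(size=n_paths)                      # arbitrary initial condition

for _ in range(n_steps):
    drift = -x**3                                 # drift of the illustrative SDE
    x = x + drift * dt + np.sqrt(dt) * rng.normal(size=n_paths)

# Raw moments E[X^k] of the (approximately) equilibrated distribution
moments = [float(np.mean(x**k)) for k in range(1, 5)]
print("estimated raw moments E[X^k], k=1..4:", moments)
```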
We construct homotopy formulae $f=\overline\partial\mathcal H_qf+\mathcal H_{q+1}\overline\partial f$ for $(0,q)$ forms on the product domain $\Omega_1\times\dots\times\Omega_m$, where each $\Omega_j$ is either a bounded Lipschitz domain in $\mathbb C^1$, a bounded strongly pseudoconvex domain with $C^2$ boundary, or a smooth convex domain of finite type. Such homotopy operators $\mathcal H_q$ yield solutions to the $\overline\partial$ equation with optimal Sobolev regularity $W^{k,p}\to W^{k,p}$ simultaneously for all $k\in\mathbb Z$ and $1<p<\infty$.
We present a general mathematical framework for optimizing cell deployment and antenna configuration in wireless networks, inspired by quantization theory. Unlike traditional methods, our framework supports networks with deterministically located nodes, enabling modeling and optimization under controlled deployment scenarios. We demonstrate our framework through two applications: joint fine-tuning of antenna parameters across base stations (BSs) to optimize network coverage, capacity, and load balancing, and the strategic deployment of new BSs, including the optimization of their locations and antenna settings. These optimizations are conducted for a heterogeneous 3D user population, comprising ground users (GUEs) and uncrewed aerial vehicles (UAVs) along aerial corridors. Our case studies highlight the framework's versatility in optimizing performance metrics such as the coverage-capacity trade-off and capacity per region. Our results confirm that optimizing the placement and orientation of additional BSs consistently outperforms approaches focused solely on antenna adjustments, regardless of GUE distribution. Furthermore, joint optimization for both GUEs and UAVs significantly enhances UAV service without severely affecting GUE performance.
This paper proposes the incorporation of static event-triggered control in the actuation path of Newton-based extremum seeking and its comparison with the earlier gradient version. As in the continuous methods, the convergence rate of the gradient approach depends on the unknown Hessian of the nonlinear map to be optimized, whereas the proposed event-triggered Newton-based extremum seeking eliminates this dependence, making the convergence rate user-assignable. This is achieved by means of a dynamic estimator for the Hessian's inverse, implemented as a Riccati equation filter. Lyapunov stability and averaging theory for discontinuous systems are applied to analyze the closed-loop system. Local exponential practical stability is guaranteed to a small neighborhood of the extremum point of scalar and static maps. Numerical simulations illustrate the advantages of the proposed approach over the previous gradient method, including improved convergence speed, followed by a reduction in the amplitude and updating frequency of the control signals.
The study of transversal fluctuations of the optimal path is a crucial aspect of the Kardar-Parisi-Zhang (KPZ) universality class. In this work, we establish the large deviation limit for the midpoint transversal fluctuations in a general last-passage percolation (LPP) model under mild assumptions on the i.i.d. weights. The rate function is expressed in terms of the right tail large deviation rate function of the last-passage value and the shape function. When the weights are chosen to be i.i.d. exponential random variables, our result verifies a conjecture communicated to us by Liu [Liu'22], showing that the asymptotic probability of the geodesic from $(0,0)$ to $(n,n)$ following the corner path $(0,0) \to (n,0) \to (n,n)$ is $({4}/{e^2})^{n+o(n)}$.
We establish the consistency of classical scaling under a broad class of noise models, encompassing many commonly studied cases in literature. Our approach requires only finite fourth moments of the noise, significantly weakening standard assumptions. We derive convergence rates for classical scaling and establish matching minimax lower bounds, demonstrating that classical scaling achieves minimax optimality in recovering the true configuration even when the input dissimilarities are corrupted by noise.
On the set of positive integers, we consider an iterated process that sends $n$ to $\frac{3n+1}{2}$ or to $\frac{n}{2}$ depending on the parity of $n$. According to a conjecture due to Collatz, all such sequences end up in the cycle $(1,2)$. In a seminal paper, Terras further conjectured that the proportion of odd terms encountered when starting from $n\geq2$ is sufficient to determine its stopping time, namely, the number of iterations needed to descend below $n$. However, when iterating beyond the stopping time, there exist "paradoxical" sequences for which the first term is unexpectedly exceeded. In the present study, we show that this topic is strongly linked to the Collatz conjecture. Moreover, this non-typical behavior seems to occur finitely many times apart from the trivial cycle, thus lending support to Terras' conjecture.
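As a concrete reference for the terminology above, here is a minimal Python sketch of the accelerated Collatz map, the stopping time of $n$ (the number of iterations needed to descend below $n$), and the proportion of odd terms encountered along the way; the starting value 27 is an arbitrary example.

```python
def collatz_step(n: int) -> int:
    """One step of the accelerated Collatz map: n -> (3n+1)/2 if n is odd, n/2 if n is even."""
    return (3 * n + 1) // 2 if n % 2 else n // 2

def stopping_time_and_odd_fraction(n: int, max_iter: int = 10**6):
    """For n >= 2, return the stopping time (iterations needed to descend below n)
    and the proportion of odd terms encountered before that happens."""
    m, steps, odd = n, 0, 0
    while m >= n and steps < max_iter:   # max_iter guards against a hypothetical divergent orbit
        odd += m % 2
        m = collatz_step(m)
        steps += 1
    return steps, odd / steps

print(stopping_time_and_odd_fraction(27))
```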
Two graph parameters are said to be coarsely equivalent if they are within constant factors from each other for every graph $G$. Recently, several graph parameters were shown to be coarsely equivalent to tree-length. Recall that the length of a tree-decomposition ${\cal T}(G)$ of a graph $G$ is the largest diameter of a bag in ${\cal T}(G)$, and the tree-length of $G$ is the minimum of the length over all tree-decompositions of $G$. We present simpler proofs, sometimes with better bounds, of those results known in the literature, and we further extend this list of graph parameters coarsely equivalent to tree-length. Among other new results, we show that the tree-length of a graph $G$ is small if and only if for every bramble ${\cal F}$ (or every Helly family of connected subgraphs ${\cal F}$, or every Helly family of paths ${\cal F}$) of $G$, there is a disk in $G$ with small radius that intercepts all members of ${\cal F}$. Furthermore, the tree-length of a graph $G$ is small if and only if $G$ can be embedded with a small additive distortion into an unweighted tree with the same vertex set as $G$ (not involving any Steiner points). Additionally, we introduce a new natural ``bridging'' property for cycles, which generalizes a known property of cycles in chordal graphs, and show that it also coarsely defines the tree-length.
This paper investigates two FEM-BEM coupling formulations for acoustic fluid-structure interaction (FSI) problems, using the Finite Element Method (FEM) to model the structure and the Boundary Element Method (BEM) to represent a linear acoustic fluid. The coupling methods described interconnect fluid and structure using classical or localized Lagrange multipliers, allowing the connection of non-matching interfaces. The first coupling technique is the well-known mortar method, which uses classical multipliers and is compared with a new formulation of the method of localized Lagrange multipliers (LLM) for FSI applications with non-matching interfaces. The proposed non-overlapping domain decomposition technique uses a classical non-symmetrical acoustic BEM formulation for the fluid, although a symmetric Galerkin BEM formulation could be used as well. A comparison between the localized methodology and the mortar method on highly non-conforming interface meshes is presented. Furthermore, the methodology proposes an iterative preconditioned and projected bi-conjugate gradient solver which exhibits very good scalability properties in the solution of this kind of problem.
We describe the main properties of the $RO(C_2\times \Sigma_2)$-graded cohomology ring of a point and apply the results to compute the subring of motivic classes given by the Bredon motivic cohomology of the real numbers and to compute the $RO(C_2\times \Sigma_2)$-graded cohomology ring of $E_{\Sigma_2}C_2$. This generalizes Voevodsky's identification of the motivic cohomology of the real numbers with the positive cone of the $RO(C_2)$-graded cohomology of a point.
We construct an explicit and calculable model for rational U(2)-spectra. This is obtained by assembling seven blocks from previous work: the toral part and earlier work on small toral groups. The assembly process requires detailed input on fusion and Weyl groups.
We generalize the concept of a field by allowing addition to be a partial operation. We show that elements of such a "partially additive field" share many similarities with physical quantities. In particular, they form subsets of mutually summable elements (similar to physical dimensions), dimensionless elements (those summable with 1) form a field, and every element can be uniquely represented as a product of a dimensionless element and any non-zero element of the same dimension (a unit). We also discuss the conditions for the existence of a coherent unit system. In contrast to previous works, our axiomatization encompasses quantities, values, units, and dimensions in a single algebraic structure, illustrating that partial operations may provide a more elegant description of the physical world.
For a countable, complete, first-order theory $T$, we study $At$, the class of atomic models of $T$. We develop an analogue of $U$-rank and prove two results. On one hand, if some type $\mathrm{tp}(d/a)$ is not ranked, then there are $2^{\aleph_1}$ non-isomorphic models in $At$ of size $\aleph_1$. On the other hand, if all types have finite rank, then the rank is fully additive and every finite tuple is dominated by an independent set of realizations of pseudo-minimal types.
We prove conditional weak-strong uniqueness of the potential Euler solution for external flow around a smooth body in three space dimensions, within the class of viscosity weak solutions with the same initial data. Our sufficient condition is the vanishing of the streamwise component of the skin friction in the inviscid limit, somewhat weaker than the condition of Bardos-Titi in bounded domains. Because global-in-time existence of the smooth potential solution leads back to the d'Alembert paradox, we argue that weak-strong uniqueness is not a valid criterion for "relevant" notions of generalized Euler solution and that our condition is likely to be violated in the inviscid limit. We prove also that the Drivas-Nguyen condition on uniform continuity at the wall of the normal velocity component implies weak-strong uniqueness within the general class of admissible weak Euler solutions in bounded domains.
We extend the spectral theory of commutative C*-categories to the non-full case, introducing a suitable notion of spectral spaceoid providing a duality between a category of "non-trivial" *-functors of non-full commutative C*-categories and a category of Takahashi morphisms of "non-full spaceoids" (here defined). As a byproduct we obtain a spectral theorem for a non-full generalization of imprimitivity Hilbert C*-bimodules over commutative unital C*-algebras via continuous sections vanishing at infinity of a Hilbert C*-line-bundle over the graph of a homeomorphism between open subsets of the corresponding Gel'fand spectra of the C*-algebras.
Let $E$ be a nonisotrivial elliptic curve over $\mathbb{Q}(T)$ and denote the rank of the abelian group $E(\mathbb{Q}(T))$ by $r$. For all but finitely many $t\in \mathbb{Q}$, specialization will give an elliptic curve $E_t$ over $\mathbb{Q}$ for which the abelian group $E_t(\mathbb{Q})$ has rank at least $r$. Conjecturally, the set of $t\in\mathbb{Q}$ for which $E_t(\mathbb{Q})$ has rank exactly $r$ has positive density. We produce the first known example for which $E_t(\mathbb{Q})$ has rank $r$ for infinitely many $t\in\mathbb{Q}$. For our particular $E/\mathbb{Q}(T)$ which has rank $0$, we will make use of a theorem of Green on $3$-term arithmetic progressions in the primes to produce $t\in\mathbb{Q}$ for which $E_t$ has only a few bad primes that we understand well enough to perform a $2$-descent.
With a fixed prime power $q>1$, define the ring of polynomials $A=\mathbb{F}_q[t]$ and its fraction field $F=\mathbb{F}_q(t)$. For each pair $a=(a_1,a_2) \in A^2$ with $a_2$ nonzero, let $\phi(a)\colon A\to F\{\tau\}$ be the Drinfeld $A$-module of rank $2$ satisfying $t\mapsto t+a_1\tau+a_2\tau^2$. The Galois action on the torsion of $\phi(a)$ gives rise to a Galois representation $\rho_{\phi(a)}\colon \operatorname{Gal}(F^{\operatorname{sep}}/F)\to \operatorname{GL}_2(\widehat{A})$, where $\widehat{A}$ is the profinite completion of $A$. We show that the image of $\rho_{\phi(a)}$ is large for random $a$. More precisely, for all $a\in A^2$ away from a set of density $0$, we prove that the index $[\operatorname{GL}_2(\widehat{A}):\rho_{\phi(a)}(\operatorname{Gal}(F^{\operatorname{sep}}/F))]$ divides $q-1$ when $q>2$ and divides $4$ when $q=2$. We also show that the representation $\rho_{\phi(a)}$ is surjective for a positive density set of $a\in A^2$.
This paper proposes a direct inversion scheme for fluorescence diffuse optical tomography (FDOT) to reconstruct the location of a point target using the measured peak time of the temporal response functions. A sphere is defined for the target, with its radius determined by the peak time, indicating that the target lies on the sphere. By constructing a tetrahedron with edges determined by the radii, we identify the location of the target as the vertex of the tetrahedron. Asymptotically, we derive the relationship between the radius of the sphere and the peak time. Several numerical tests are implemented to demonstrate the accuracy and performance of the asymptotic relationship and the inversion scheme.
Let $1\leq p\leq 2$ and let $\Lambda = \{\lambda_n\}_{n\in \mathbb{N}} \subseteq \mathbb{R}$ be an arbitrary subset. We prove that for any $g\in M^p(\mathbb{R})$ with $1\leq p\leq 2$ the system of translates $\{g(x-\lambda_n)\}_{n\in \mathbb{N}}$ is never an unconditional basis for $M^q(\mathbb{R})$ for $p\leq q\leq p'$, where $p'$ is the conjugate exponent of $p.$ Moreover, we will also prove that for any $g\in M^p(\mathbb{R})$ with $1< p\leq 2$ the system of translates $\{g(x-\lambda_n)\}_{n\in \mathbb{N}}$ is never an unconditional frame for $M^p(\mathbb{R}).$ Several results regarding the existence of unconditional frames formed by a system of translates in $M^1(\mathbb{R})$ as well as in $M^p(\mathbb{R})$ with $2<p<\infty$ will be presented as well.
Contact-implicit motion planning, which embeds contact sequencing as implicit complementarity constraints, holds the promise of leveraging continuous optimization to discover new contact patterns online. Nevertheless, the resulting optimization, being an instance of Mathematical Programming with Complementarity Constraints, fails the classical constraint qualifications that are crucial for the convergence of popular numerical solvers. We present robust contact-implicit motion planning with sequential convex programming (CRISP), a solver that departs from the usual primal-dual algorithmic framework and instead focuses only on the primal problem. CRISP solves a convex quadratic program with an adaptive trust region radius at each iteration, and its convergence is evaluated by a merit function using weighted penalty. We (i) provide sufficient conditions for CRISP's convergence to first-order stationary points of the merit function; (ii) release a high-performance C++ implementation of CRISP with a generic nonlinear programming interface; and (iii) demonstrate CRISP's surprising robustness in solving contact-implicit planning with naive initialization. In fact, CRISP solves several contact-implicit problems with all-zero initialization.
We study the asymptotic stability of a composition of rarefaction and shock waves for the one-dimensional barotropic compressible fluid of Korteweg type, called the Navier-Stokes-Korteweg (NSK) system. Precisely, we show that the solution to the NSK system asymptotically converges to the composition of the rarefaction wave and a shifted viscous-dispersive shock wave, under certain smallness assumptions on the initial perturbation and the strength of the waves. Our method is based on the method of $a$-contraction with shift developed by Kang and Vasseur \cite{KV16}, successfully applied to obtain contraction or stability of nonlinear waves for hyperbolic systems.
The ACF monotonicity formula is a powerful tool in the study of two-phase free boundary problems, which was introduced by Alt, Caffarelli, and Friedman [1]. In this paper, we extend it to RCD(0,N) metric measure cones. As an application, we give a rigidity result for RCD(0,N) metric measure cones.
We provide a new upper bound for the energy of graphs in terms of the degrees and the number of leaves. We apply this bound to study the energy of Erd\H{o}s-R\'enyi graphs and Barab\'asi-Albert trees.
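To make the quantity concrete, the sketch below computes the graph energy (the sum of the absolute values of the adjacency eigenvalues) for a randomly generated Erd\H{o}s-R\'enyi graph; the parameters $n=50$, $p=0.1$ are arbitrary choices for illustration, and the bound from the paper is not implemented here.

```python
import numpy as np

def graph_energy(adj: np.ndarray) -> float:
    """Graph energy: the sum of absolute values of the adjacency eigenvalues."""
    return float(np.sum(np.abs(np.linalg.eigvalsh(adj))))

# Hypothetical illustration on an Erdos-Renyi graph G(n, p) built by hand.
rng = np.random.default_rng(1)
n, p = 50, 0.1
upper = np.triu(rng.random((n, n)) < p, k=1).astype(float)
adj = upper + upper.T                    # symmetric 0/1 adjacency matrix, no loops
print("energy of a sample G(50, 0.1):", graph_energy(adj))
```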
We consider the family of elliptic curves $E_{a,b}:y^2=x^3+a(x-b)^2$ with $a,b \in \mathbb{Z}$. These elliptic curves have a rational $3$-isogeny, say $\varphi$. We give an upper and a lower bound on the rank of the $\varphi$-Selmer group of $E_{a,b}$ over $K:=\mathbb{Q}(\zeta_3)$ in terms of the $3$-part of the ideal class group of certain quadratic extension of $K$. Using our bounds on the Selmer groups, we construct infinitely many curves in this family with arbitrary large $3$-Selmer rank over $K$ and no non-trivial $K$-rational point of order $3$. We also show that for a positive proportion of natural numbers $n$, the curve $E_{n,n}/\mathbb{Q}$ has root number $-1$ and $3$-Selmer rank $=1$.
Takao Fujita proposed in 1980 three closely related conjectures called $A_n$, $B_n$ and $C_n$, which relate the smooth K\"{a}hler compactification of contractible complex manifolds to the uniqueness of the K\"{a}hler structure on cohomology complex projective spaces. Recently Peternell solved $A_n$ and $B_n$ when $n$ is even. In this note we push forward his arguments to show that the conjectures $A_n$ and $B_n$ are true when $n\not\equiv 3 \pmod{4}$. Moreover, the contractibility condition can be weakened to homology triviality. A related application is given and some remarks are discussed.
This paper studies generalized semi-infinite programs (GSIPs) defined with polyhedral parameter sets. Assume these GSIPs are given by polynomials. We propose a new approach to solve them as a disjunctive program. This approach is based on the Karush-Kuhn-Tucker (KKT) conditions of the robust constraint and a technique called partial Lagrange multiplier expressions. We summarize a semidefinite algorithm and study its convergence properties. Numerical experiments are given to show the efficiency of our method. In addition, we check its performance in gemstone cutting and robust control applications.
This paper proposes a novel parallel coding transmission strategy and an iterative detection and decoding receiver signal processing technique for orthogonal delay-Doppler division multiplexing (ODDM) modulation. Specifically, the proposed approach employs a parallel channel encoding (PCE) scheme that consists of multiple short-length codewords for each delay-Doppler multicarrier (DDMC) symbol. Building upon such a PCE transmission framework, we then introduce an iterative detection and decoding algorithm incorporating a successive decoding feedback (SDF) technique, which enables instant information exchange between the detector and decoder for each DDMC symbol. To characterize the error performance of the proposed scheme, we perform density evolution analysis considering the finite blocklength effects. Our analysis results, coupled with extensive simulations, demonstrate that the proposed PCE scheme with the SDF algorithm not only showcases a better overall performance but also requires much less decoding complexity to implement, compared to the conventional benchmark scheme that relies on a single long channel code for coding the entire ODDM frame.
In this paper, we establish a Courant-type nodal domain theorem for both the Dirichlet eigenvalue problem and the closed eigenvalue problem of the Witten-Laplacian. Moreover, we characterize the properties of the nodal lines of the eigenfunctions of the Witten-Laplacian on smooth Riemannian $2$-manifolds. In addition, for a Riemann surface of genus $g$, we provide an upper bound for the multiplicity of closed eigenvalues of the Witten-Laplacian.
Fix a d-minimal expansion of an ordered field. We consider the space $\mathcal D^p(M)$ of definable $\mathcal C^p$ functions defined on a definable $\mathcal C^p$ submanifold $M$ equipped with definable $\mathcal C^p$ topology. The set of definable $\mathcal C^p$ Morse functions is dense in $\mathcal D^p(M)$.
Let $[n]=\{1,2,\dots,n\}$ be colored in $k$ colors. A rainbow AP($k$) in $[n]$ is a $k$-term arithmetic progression whose elements have different colors. Conlon, Jungi\'c and Radoi\v{c}i\'c [10] showed that there exists an equinumerous $4$-coloring of $[4n]$ which is rainbow AP(4) free when $n$ is even, and subsequently Haghighi and Nowbandegani [7] showed that such a coloring of $[4n]$ also exists when $n>1$ is odd. Based on their construction, we show that a rainbow AP(4) free balanced $4$-coloring of $[n]$ (i.e., the size of each color class is at least $\left\lfloor n/4\right\rfloor$) exists for all natural numbers $n$. Further, we establish that for integers $k\geq3$ and $n>1$, every balanced $k$-coloring of $[kn+r]$ with $0\leq r<k-1$ contains a rainbow AP($k$) if and only if $k=3$. In this paper we also discuss rainbow free equinumerous $4$-colorings of $\mathbb{Z}_{n}$.
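A brute-force checker for rainbow AP($k$)s makes the objects above concrete; the sketch below is a generic illustration (the blockwise balanced coloring used as input is our own example, not the construction from [10] or [7]), and it simply reports whether the given coloring contains a rainbow AP(4).

```python
def has_rainbow_ap(coloring, k):
    """Return True if the coloring of [n] (coloring[i] is the color of i+1) contains a
    k-term arithmetic progression whose terms receive pairwise distinct colors."""
    n = len(coloring)
    for a in range(1, n + 1):                        # first term of the progression
        for d in range(1, (n - a) // (k - 1) + 1):   # common difference
            colors = {coloring[a + j * d - 1] for j in range(k)}
            if len(colors) == k:
                return True
    return False

# Hypothetical input: a balanced blockwise 4-coloring of [4n].
n = 5
coloring = [i // n for i in range(4 * n)]
print(has_rainbow_ap(coloring, 4))
```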
Let $L$ be a positive self-adjoint operator on $L^2(X)$, where $X$ is a $\sigma$-finite metric measure space. When $\alpha \in (0,1)$, the subordinated semigroup $\{\exp(-tL^{\alpha}):t \in \mathbb{R}^+\}$ can be defined on $L^2(X)$ and extended to $L^p(X)$. We prove various results about the semigroup $\{\exp(-tL^{\alpha}):t \in \mathbb{R}^+\}$, under different assumptions on $L$. These include the weak type $(1,1)$ boundedness of the maximal operator $f \mapsto \sup _{t\in \mathbb{R}^+}\exp(-tL^{\alpha})f$ and characterisations of Hardy spaces associated to the operator $L$ by the area integral and vertical square function.
In this article, we study regularity properties for degenerate parabolic double-phase equations. We establish continuity estimates for bounded weak solutions in terms of elliptic Riesz potentials on the right-hand side of the equation.
In this paper, we investigate the bound states of $2+1$ fermionic trimers on a three-dimensional lattice at strong coupling. Specifically, we analyze the discrete spectrum of the associated three-body discrete Schr\"odinger operator $H_{\gamma,\lambda}(K),$ focusing on energies below the continuum and within its gap. Depending on the quasi-momentum $K,$ we show that if the mass ratio $\gamma>0$ between the identical fermions and the third particle is below a certain threshold, the operator lacks a discrete spectrum below the essential spectrum for sufficiently large coupling $\lambda>0.$ Conversely, if $\gamma$ exceeds this threshold, $H_{\gamma,\lambda}(K)$ admits at least one eigenvalue below the essential spectrum. Similar phenomena are observed in the neighborhood of the two-particle branch of the essential spectrum, which resides within the gap and grows sublinearly as $\lambda\to+\infty.$ For $K=0,$ the mass ratio thresholds are explicitly calculated and it turns out that, for certain intermediate mass ratios and large couplings, bound states emerge within the gap, although ground states are absent.
In this paper, we prove the Gross-Koblitz-Thakur formulas relating special $v$-adic gamma values to the newly introduced geometric Gauss sums in the function field setting. These are analogous to those for the $p$-adic gamma function in the classical setting due to Gross-Koblitz and the $v$-adic arithmetic gamma function over function fields due to Thakur. For these new Gauss sums, we establish their key arithmetic properties, including the uniformity of absolute values and prime factorizations. We also determine their signs at infinite places, and derive two analogs of the Hasse-Davenport relations.
We discuss an "almost" version of Auslander regularity and use it to prove the Auslander regularity of various Banach algebras over non-discretely valued fields appearing naturally in $p$-adic locally analytic representation theory: completed Weyl algebras, the completed enveloping algebra of a Lie algebra, and the Banach completion of the distribution algebra $D(G, K)$ for a compact $p$-adic Lie group $G$.
The famous example of the double-Watt mechanism given by Connelly and Servatius raises some problems concerning the classical definitions of higher-order flexibility and rigidity, respectively. Recently, the author was able to give a proper redefinition of the flexion/rigidity order for bar-joint frameworks, but the question of the flexes associated with higher-order flexible structures remained open. In this paper we properly define these flexes based on the theory of algebraic curves and demonstrate their computation by means of Puiseux series. The presented algebraic approach also allows reality issues to be taken into account.
Numerical simulation of incompressible fluid flows has been an active topic of research in Scientific Computing for many years, with many contributions to both discretizations and linear and nonlinear solvers. In this work, we propose an improved relaxation scheme for higher-order Taylor-Hood discretizations of the incompressible Stokes and Navier-Stokes equations, demonstrating its efficiency within monolithic multigrid preconditioners for the linear(ized) equations. The key to this improvement is an improved patch construction for Vanka-style relaxation introducing, for the first time, overlap in the pressure degrees of freedom within the patches. Numerical results demonstrate significant improvement in both multigrid iterations and time-to-solution for the linear Stokes case, on both triangular and quadrilateral meshes. For the nonlinear Navier-Stokes case, we show similar improvements, including in the number of nonlinear iterations needed in an inexact Newton method.
For a Cohen--Macaulay positively graded ring $R$, we say that $R$ is pseudo-Gorenstein if its leading coefficient is 1. In this paper, we study the relationship between canonical trace and pseudo-Gorensteinness for a graded ring. In particular, we show that if a nearly Gorenstein graded domain satisfies certain mild assumptions and is pseudo-Gorenstein, then it is necessarily Gorenstein. As an application, we clarify the relationships among nearly Gorensteinness, almost Gorensteinness, and levelness, which generalize the notion of Gorensteinness, in the context of standard graded domains. Moreover, we give a method for constructing quasi-Gorenstein rings by taking a Veronese subalgebra of certain Noetherian graded rings.
A surface $S$ in a manifold $M$ is filling if $M-S$ consists of contractible components. We prove for any closed hyperbolic $3$-manifold $M$, there exists an $\epsilon_0>0$ such that every homotopy class of $(1+\epsilon)$-quasi-Fuchsian surfaces with $0<\epsilon \leq \epsilon_0$ is filling. As a corollary, the set of embedded surfaces in $M$ satisfies a dichotomy: it consists of at most finitely many totally geodesic surfaces and surfaces with a quasi-Fuchsian constant lower bound $1+\epsilon_0$. Each of these nearly geodesic surfaces separates any pair of distinct points at the boundary of infinity of the universal cover. Crucial tools include the rigidity results of Mozes-Shah, Ratner, and Shah. This work is inspired by a question of Wu and Xue whether random geodesics on random hyperbolic surfaces are filling.
We develop a representation theory of categories as a means to explore characteristic structures in algebra. Characteristic structures play a critical role in isomorphism testing of groups and algebras, and their construction and description often rely on specific knowledge of the parent object and its automorphisms. In many cases, questions of reproducibility and comparison arise. Here we present a categorical framework that addresses these questions. We prove that every characteristic structure is the image of a functor equipped with a natural transformation. This shifts the local description in the parent object to a global one in the ambient category. Through constructions in representation theory, such as tensor products, we can combine characteristic structure across multiple categories. Our results are constructive, stated in the language of a constructive type theory, which facilitates implementations in theorem checkers.
This paper concerns the rigidity from infinity for Alfv\'en waves governed by ideal incompressible magnetohydrodynamic equations subjected to strong background magnetic fields along the $x_1$-axis in 3D thin domains $\Omega_\delta=\mathbb{R}^2\times(-\delta,\delta)$ with $\delta\in(0,1]$ and slip boundary conditions. We show that in any thin domain $\Omega_\delta$, Alfv\'en waves must vanish identically if their scattering fields vanish at infinities. As an application, the rigidity of Alfv\'en waves in $\Omega_{\delta}$, propagating along the horizontal direction, can be approximated by the rigidity of Alfv\'en waves in $\mathbb{R}^2$ when $\delta$ is sufficiently small. Our proof relies on the uniform (with respect to $\delta$) weighted energy estimates with a position parameter in weights to track the center of Alfv\'en waves. The key issues in the analysis include dealing with the nonlinear nature of Alfv\'en waves and the geometry of thin domains.
In this paper, we prove that the graph of the Takagi function $$T_r(x)=\sum_{n=0}^\infty \frac{1}{r^n}\phi(r^n x), \quad x\in [0,1]$$ has Assouad dimension $1$ for every integer $r\geq 2$, where $\phi(x)=\mathrm{dist}(x,\mathbb{Z})$ is the distance from $x$ to the nearest integer.
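For readers who want to experiment with the function, the following Python sketch evaluates a truncated version of the series defining $T_r$; the truncation level and sample points are arbitrary choices.

```python
import numpy as np

def takagi(x, r=2, n_terms=40):
    """Truncated series for T_r(x) = sum_{n>=0} phi(r^n x) / r^n, phi(x) = dist(x, Z)."""
    x = np.asarray(x, dtype=float)
    total = np.zeros_like(x)
    for n in range(n_terms):
        y = (r ** n) * x
        total += np.abs(y - np.round(y)) / (r ** n)   # phi(y) = distance to the nearest integer
    return total

xs = np.linspace(0.0, 1.0, 5)
print(takagi(xs))   # values of T_2 at a few sample points in [0, 1]
```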
In this paper, we develop a Discontinuous Galerkin (DG) method for solving H(curl)-elliptic hemivariational inequalities. By selecting an appropriate numerical flux, we construct an Interior Penalty Discontinuous Galerkin (IPDG) scheme. A comprehensive numerical analysis of the IPDG method is conducted, addressing key aspects such as consistency, boundedness, stability, and the existence, uniqueness, uniform boundedness of the numerical solutions. Building on these properties, we establish a priori error estimates, demonstrating the optimal convergence order of the numerical solutions under suitable solution regularity assumptions. Finally, a numerical example is presented to illustrate the theoretically predicted convergence order and to show the effectiveness of the proposed method.
We study parabolic automorphisms of irreducible holomorphically symplectic manifolds with a lagrangian fibration. Such automorphisms are (possibly up to taking a power) fiberwise translations on smooth fibers, and their orbits in a general fiber are dense ([1]). We provide a simple proof that the associated Betti map is of maximal rank; in particular, the set of fibers where the induced translation is of finite order is dense as well.
Since the Ginzburg-Landau theory is concerned with macroscopic phenomena, and gravity affects how objects interact at the macroscopic level, it becomes relevant to study the Ginzburg-Landau theory in curved space, that is, in the presence of gravity. In this paper, some existence theorems are established for the vortex solutions of the magnetic Ginzburg-Landau theory coupled to the Einstein equations. First, when the coupling constant $\lambda=1$, we get a self-dual structure from the Ginzburg-Landau theory; then a partial differential equation with a gravitational term that has power-type singularities is deduced from the coupled system. To overcome the difficulty arising from the orders of singularities at the vortices, a constraint minimization method and a monotone iteration method are employed. We also show that the quantized flux and total curvature are determined by the number of vortices. Second, when the coupling constant $\lambda>0$, we use a suitable ansatz to obtain the radially symmetric case for the magnetic Ginzburg-Landau theory in curved space. The existence of the symmetric vortex solutions is obtained by combining a two-step iterative shooting argument and a fixed-point theorem approach. Some fundamental properties of the solutions are established by applying a series of analysis techniques.
This paper introduces novel theoretical approximation bounds for the output of quantized neural networks, with a focus on convolutional neural networks (CNNs). By considering layerwise parametrization and focusing on the quantization of weights, we provide bounds that gain several orders of magnitude compared to state-of-the-art results on classical deep convolutional neural networks such as MobileNetV2 or ResNets. These gains are achieved by improving the behaviour of the approximation bounds with respect to the depth parameter, which has the most impact on the approximation error induced by quantization. To complement our theoretical result, we provide a numerical exploration of our bounds on MobileNetV2 and ResNets.
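As a toy illustration of the setting (weight-only quantization and its effect on a layer's output), the sketch below applies a generic uniform symmetric quantizer to random dense-layer weights and measures the relative output perturbation; this is not the paper's bound or its CNN experiments.

```python
import numpy as np

def quantize_uniform(w, n_bits=8):
    """Uniform symmetric quantization of a weight tensor to n_bits
    (a generic scheme, not necessarily the one analyzed in the paper)."""
    scale = np.max(np.abs(w)) / (2 ** (n_bits - 1) - 1)
    return np.round(w / scale) * scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(64, 128))   # hypothetical dense-layer weights
x = rng.normal(size=(16, 128))              # a batch of inputs

y_full = x @ w.T
y_quant = x @ quantize_uniform(w, 8).T
print("relative output error:", np.linalg.norm(y_full - y_quant) / np.linalg.norm(y_full))
```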
The web permutations were introduced by Hwang, Jang and Oh to interpret the entries of the transition matrix between the Specht and $\mathrm{SL}_2$-web bases of the irreducible $\mathfrak{S}_{2n}$-representation indexed by $(n,n)$. They conjectured that certain classes of web permutations are enumerated by the Seidel triangle. Using generating functions, Xu and Zeng showed that enumerating web permutations by the number of drops, fixed points and cycles gives rise to the normalized $\gamma$-coefficients of the $(\alpha,t)$-Eulerian polynomials. They posed the problems of proving their result combinatorially and of finding an interpretation of the normalized $\gamma$-coefficients in terms of cycle-up-down permutations. In this work, we prove the enumerative conjecture of Hwang-Jang-Oh and answer the two open problems proposed by Xu and Zeng.
Estimating optimal transport maps between two distributions from respective samples is an important element for many machine learning methods. To do so, rather than extending discrete transport maps, it has been shown that estimating the Brenier potential of the transport problem and obtaining a transport map through its gradient is near minimax optimal for smooth problems. In this paper, we investigate the private estimation of such potentials and transport maps with respect to the distribution samples. We propose a differentially private transport map estimator achieving an $L^2$ error of at most $n^{-1} \vee n^{-\frac{2 \alpha}{2 \alpha - 2 + d}} \vee (n\epsilon)^{-\frac{2 \alpha}{2 \alpha + d}}$ up to poly-logarithmic terms, where $n$ is the sample size, $\epsilon$ is the desired level of privacy, $\alpha$ is the smoothness of the true transport map, and $d$ is the dimension of the feature space. We also provide a lower bound for the problem.
We study a population of $N$ individuals evolving according to a biparental Moran model with two types, one being advantaged compared to the other. The advantage is conferred by a Mendelian mutation that reduces the death probability of individuals carrying it. We assume that a proportion $a$ of individuals initially carry this mutation, which therefore eventually gets fixed with high probability. After a long time, we sample a gene uniformly from the population, at a new locus, independent of the locus under selection, and calculate the probability that this gene originated from one of the initially advantaged individuals, when the population size is large. Our theorem provides quantitative insights, such as the observation that under strong selection, if only 1% of the individuals are initially advantaged, approximately 19% of the population's genome will originate from them after a long time.
We present an explicit solution to the discrete-time Bellman equation for minimax optimal control of positive systems under unconstrained disturbances. The primary contribution of our result lies in deducing a bound for the disturbance penalty, which characterizes the existence of a finite solution to the problem class. Moreover, this constraint on the disturbance penalty reveals that, in scenarios where a solution is feasible, the problem converges to its equivalent minimization problem in the absence of disturbances.
Cutting planes are crucial for the performance of branch-and-cut algorithms for solving mixed-integer programming (MIP) problems, and linear row aggregation has been successfully applied to better leverage the potential of several major families of MIP cutting planes. This paper formulates the problem of finding good quality aggregations as an $\ell_0$-norm minimization problem and employs a combination of the lasso method and iterative reweighting to efficiently find sparse solutions corresponding to good aggregations. A comparative analysis of the proposed algorithm and the state-of-the-art greedy heuristic approach is presented, showing that the greedy heuristic implements a stepwise selection algorithm for the $\ell_0$-norm minimization problem. Further, we present an example where our approach succeeds, whereas the standard heuristic fails to find an aggregation with desired properties. The algorithm is implemented within the constraint integer programming solver SCIP, and computational experiments on the MIPLIB 2017 benchmark show that although the algorithm leads to slowdowns on relatively ``easier'' instances, our aggregation approach decreases the mean running time on a subset of challenging instances and leads to smaller branch-and-bound trees.
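The following Python sketch illustrates the generic idea of approximating an $\ell_0$-norm minimization by iteratively reweighted $\ell_1$ (lasso) steps, here solved with plain ISTA on a random synthetic system; it is a conceptual illustration of that technique only, not the aggregation routine implemented in SCIP.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def reweighted_lasso(A, b, lam=0.1, n_reweights=5, n_ista=200, eps=1e-3):
    """Approximately l0-minimal solution of A x ~ b via iteratively reweighted l1:
    repeatedly solve a weighted lasso with ISTA, then reweight by w_i = 1/(|x_i| + eps)."""
    m, n = A.shape
    x, w = np.zeros(n), np.ones(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # ISTA step size from the spectral norm
    for _ in range(n_reweights):
        for _ in range(n_ista):
            grad = A.T @ (A @ x - b)
            x = soft_threshold(x - step * grad, step * lam * w)
        w = 1.0 / (np.abs(x) + eps)                 # penalize coordinates already near zero
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 60))
x_true = np.zeros(60); x_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
b = A @ x_true
print("nonzeros found:", np.flatnonzero(np.abs(reweighted_lasso(A, b)) > 1e-3))
```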
This article studies the problem of estimating the state variable of non-smooth subdifferential dynamics constrained in a bounded convex domain given some real-time observation. On the one hand, we show that the value function of the estimation problem is a viscosity solution of a Hamilton-Jacobi-Bellman equation whose sub- and supersolutions have different Neumann-type boundary conditions. This intricacy arises from the non-reversibility in time of the non-smooth dynamics, and hinders the derivation of a comparison principle and the uniqueness of the solution in general. Nonetheless, we identify conditions on the drift coefficient (including zero drift) in the non-smooth dynamics that make such a derivation possible. On the other hand, we show in a general situation that the value function appears in the small noise limit of the corresponding stochastic filtering problem by establishing a large deviation result. We also give quantitative approximation results when replacing the non-smooth dynamics with a smooth penalised one.
We investigate stochastic parabolic evolution equations with time-dependent random generators and locally Lipschitz continuous drift terms. Using pathwise mild solutions, we construct an infinite-dimensional stationary Ornstein-Uhlenbeck type process, which is shown to be tempered in suitable function spaces. This property, together with a bootstrapping argument based on the regularizing effect of parabolic evolution families, is then applied to prove the global well-posedness and the existence of a random attractor for reaction-diffusion equations with random non-autonomous generators and nonlinearities satisfying certain growth and dissipativity assumptions.
Assume that $B(X)$ is the algebra of all bounded linear operators on a complex Banach space $X$, and let $W \in B(X)$ be such that $\overline{W(X)} \neq X$ or $W=zI$, where $z$ is a complex number and $I$ is the identity operator. We show that if $f\colon B(X) \to B(X)$ is an additive mapping that is Lie centralizable at $W$, then $f(A)=kA+h(A)$ for all $A \in B(X)$, where $k$ is a complex number and $h\colon B(X) \to \mathbb{C}I$ is an additive mapping such that $h([A,B])=0$ for all $A,B \in B(X)$ with $AB=W$.
This paper formulates a conjectural description of the space of weightless functions (see \cite{BK}) and raises a question about the possibility of extending such a description to a more general context.
We provide sufficient conditions for the convergence of Revuz measures to follow from the convergence of positive continuous additive functionals, in the context of smooth measures with finite energy integrals, under both the vague topology and the topology induced by a Dirichlet form.
The goal of this note is to introduce Teissier singularities and to explain why they are candidates to play, in positive characteristic, a role for resolution of singularities which is similar to the role played by quasi-ordinary singularities in characteristic zero.
We consider the problem of deriving uniform confidence bands for the mean of a monotonic stochastic process, such as the cumulative distribution function (CDF) of a random variable, based on a sequence of i.i.d.~observations. Our approach leverages the coin-betting framework, and inherits several favourable characteristics of coin-betting methods. In particular, for each point in the domain of the mean function, we obtain anytime-valid confidence intervals that are numerically tight and adapt to the variance of the observations. To derive uniform confidence bands, we employ a continuous union bound that crucially leverages monotonicity. In the case of CDF estimation, we also exploit the fact that the empirical CDF is piece-wise constant to obtain simple confidence bands that can be easily computed. In simulations, we find that our confidence bands for the CDF achieve state-of-the-art performance.
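For context, a uniform confidence band for the CDF can be obtained very simply from the classical Dvoretzky-Kiefer-Wolfowitz inequality; the sketch below implements that baseline (not the coin-betting band constructed in the paper) on synthetic data.

```python
import numpy as np

def dkw_band(samples, alpha=0.05):
    """Classical DKW uniform confidence band for the CDF, evaluated at the order statistics
    (a standard baseline, not the coin-betting band from the paper)."""
    x = np.sort(samples)
    n = len(x)
    ecdf = np.arange(1, n + 1) / n
    eps = np.sqrt(np.log(2.0 / alpha) / (2 * n))     # DKW half-width
    lower = np.clip(ecdf - eps, 0.0, 1.0)
    upper = np.clip(ecdf + eps, 0.0, 1.0)
    return x, lower, upper

x, lo, hi = dkw_band(np.random.default_rng(0).normal(size=500))
print("maximal band half-width:", (hi - lo).max() / 2)
```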
Criteria for a diffeomorphism of a smooth manifold $M$ to be lifted to a linear automorphism of a given real vector bundle $p\colon V\rightarrow M$ are stated. Examples are included, and the metric and complex vector-bundle cases are also considered.
We classify four-dimensional connected simply-connected indecomposable Lorentzian symmetric spaces $M$ with connected nontrivial isotropy group furnishing solutions of the Einstein-Yang-Mills equations. These solutions are considered with respect to some invariant metric connection $\Lambda$ in the bundle of orthonormal frames of $M$ and some diagonal metric on the holonomy algebra corresponding to $\Lambda$.
We establish a necessary and sufficient condition for the quantile process based on iid sampling to converge in distribution in $L^1(0,1)$. The condition is that the quantile function is locally absolutely continuous on the open unit interval and satisfies a slight strengthening of square integrability. We further establish a necessary and sufficient condition for the P-P process based on iid sampling from two populations to converge in distribution in $L^1(0,1)$. The condition is that the P-P curve is locally absolutely continuous on the open unit interval. If either process converges in distribution then it may be approximated using the bootstrap.
In this paper we consider a dynamic Erd\H{o}s-R\'{e}nyi random graph with independent identically distributed edge processes. Our aim is to describe the joint evolution of the entries of a subgraph count vector. The main result of this paper is a functional central limit theorem: we establish, under an appropriate centering and scaling, the joint functional convergence of the vector of subgraph counts to a specific multidimensional Gaussian process. The result holds under mild assumptions on the edge processes, most notably a Lipschitz-type condition.
For an arbitrary star graph $S$ with a non-degenerate vertex labeling $l\colon V(S) \to \mathbb{R}^+$, we denote by $d_l$ the corresponding ultrametric on the vertex set $V(S)$ of $S$. We characterize the class $\bf US$ of all ultrametric spaces $(V(S), d_l)$ up to isometry. We also find necessary and sufficient conditions under which the group of all self-isometries of the ultrametric space $(V(S), d_l)$ coincides with the group of all self-isomorphisms of the labeled star graph $S(l)$.
Formulating the Laplace transform from the viewpoint of Katz theory, we introduce a transformation called the middle Laplace transform for linear Pfaffian systems with irregular singularities. While the definition of the middle Laplace transform is purely algebraic, its categorical interpretation is also provided. We then show the fundamental properties (invertibility, irreducibility) of the middle Laplace transform. As an application of the middle Laplace transform, we define the middle convolution for linear Pfaffian systems with irregular singularities. This gives a generalization of Haraoka's middle convolution, which was defined for linear Pfaffian systems with logarithmic singularities. The fundamental properties (additivity, irreducibility) of the middle convolution follow from the properties of the middle Laplace transform.
It is well known that in the logic of quantum mechanics disjunctions and conjunctions can be represented by joins and meets, respectively, in an orthomodular lattice, provided their entries commute. This was the reason why J. Pykacz introduced new derived operations called ``sharp'' and ``flat'' that coincide with joins and meets, respectively, for commuting elements but share some appropriate properties with disjunction and conjunction, respectively, on the whole orthomodular lattice in question. The problem is that orthomodular lattices need not formalize the logic of quantum mechanics, since joins may fail to be defined when their entries are neither comparable nor orthogonal. A corresponding fact holds for meets. Therefore, orthomodular posets are more widely accepted as an algebraic formalization of such a logic. The aim of the present paper is to extend the concepts of ``sharp'' and ``flat'' operations to operators in skew orthomodular and strong skew orthomodular posets. We generalize the relation of commuting elements as well as the commutator to such posets, and we present some important properties of these operators and their mutual relationships. Moreover, we show that if the poset in question is in fact Boolean, then a ternary operator satisfying the identities of a Pixley term can be defined. Finally, under some weak conditions which are automatically satisfied in Boolean algebras, we show a kind of adjointness for the operators formalizing conjunction and implication, respectively.
Tail dependence plays an essential role in the characterization of joint extreme events in multivariate data. However, most standard tail dependence parameters assume continuous margins. This note presents a form of tail dependence suitable for non-continuous and discrete margins. We derive a representation of tail dependence based on the volume of a copula and prove its properties. We use bivariate regular variation to show that our new metric is consistent with the standard tail dependence parameters on continuous margins. We further define tail dependence on autocorrelated margins, where the tail dependence parameter examines lagged correlation in the sample.
We systematically introduce an approach to the analysis and (numerical) solution of a broad class of nonlinear unconstrained optimal control problems, involving ordinary and distributed systems. Our approach relies on exact representations of the increments of the objective functional, drawing inspiration from the classical Weierstrass formula in the Calculus of Variations. While such representations are straightforward to devise for state-linear problems (in vector spaces), they can also be extended to nonlinear models (in metric spaces) by immersing them into suitable linear "super-structures". We demonstrate that these increment formulas lead to necessary optimality conditions of arbitrary order. Moreover, they make it possible to formulate optimality conditions of "infinite order", incorporating a kind of feedback mechanism. As a central result, we rigorously apply this general technique to the optimal control of nonlocal continuity equations in the space of probability measures.
To efficiently manage serverless computing platforms, a key aspect is the auto-scaling of services, i.e., the set of computational resources allocated to a service adapts over time as a function of the traffic demand. The objective is to find a compromise between user-perceived performance and energy consumption. In this paper, we consider the \emph{scale-per-request} auto-scaling pattern and investigate how many function instances (or servers) should be spawned each time an \emph{unfortunate} job arrives, i.e., a job that finds all servers busy upon its arrival. We address this problem by following a stochastic optimization approach: we develop a stochastic gradient descent scheme of the Kiefer--Wolfowitz type that applies \emph{over a single run of the state evolution}. At each iteration, the proposed scheme computes an estimate of the number of servers to spawn each time an unfortunate job arrives to minimize some cost function. Under natural assumptions, we show that the sequence of estimates produced by our scheme is asymptotically optimal almost surely. In addition, we prove that its convergence rate is $O(n^{-2/3})$ where $n$ is the number of iterations. From a mathematical point of view, the stochastic optimization framework induced by auto-scaling exhibits non-standard aspects that we approach from a general point of view. We consider the setting where a controller can only get samples of the \emph{transient} -- rather than stationary -- behavior of the underlying stochastic system. To handle this difficulty, we develop arguments that exploit properties of the mixing time of the underlying Markov chain. By means of numerical simulations, we validate the proposed approach and quantify its gain with respect to common existing scale-up rules.
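To convey the flavor of a Kiefer--Wolfowitz-type scheme, the following minimal sketch runs a finite-difference stochastic gradient iteration against a noisy cost oracle. The interface `cost_sample` and all constants are hypothetical, and the sketch does not reproduce the single-run, transient-sample setting analyzed in the paper.

```python
import random

def kiefer_wolfowitz(cost_sample, x0, n_iter=1000, a=1.0, c=1.0):
    """Minimal Kiefer--Wolfowitz sketch (illustrative only).

    cost_sample(x): returns a noisy sample of the cost when spawning roughly x
                    servers per unfortunate arrival (hypothetical interface).
    """
    x = x0
    for n in range(1, n_iter + 1):
        a_n = a / n            # step size
        c_n = c / n ** (1 / 6) # finite-difference width (illustrative choice)
        # two-sided finite-difference gradient estimate from noisy cost samples
        grad = (cost_sample(x + c_n) - cost_sample(x - c_n)) / (2 * c_n)
        x = max(0.0, x - a_n * grad)  # project onto nonnegative spawn counts
    return x

# toy usage: noisy quadratic cost with minimum near 2 servers per unfortunate job
noisy_cost = lambda x: (x - 2.0) ** 2 + random.gauss(0, 0.1)
print(kiefer_wolfowitz(noisy_cost, x0=5.0))
```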
Given a permutation group $G$, the derangement graph of $G$ is defined with vertex set $G$, where two elements $x$ and $y$ are adjacent if and only if $xy^{-1}$ is a derangement. We establish that, if $G$ is transitive with degree exceeding 30, then the derangement graph of $G$ contains a complete subgraph with four vertices. As a consequence, if $G$ is a normal subgroup of $A$ such that $|A : G| = 3$, and if $U$ is a subgroup of $G$ satisfying $G = \bigcup_{a \in A} U^a$, then $|G : U| \leq 10$. This result provides support for a conjecture by Neumann and Praeger concerning Kronecker classes.
Let $S_F(t)=\pi^{-1}\arg L(1/2+it, F)$, where $F$ is a Hecke--Maass cusp form for $\rm SL_3(\mathbb{Z})$ in generic position with the spectral parameter $\nu_{F}=\big(\nu_{F,1},\nu_{F,2},\nu_{F,3}\big)$ and the Langlands parameter $\mu_{F}=\big(\mu_{F,1},\mu_{F,2},\mu_{F,3}\big)$. In this paper, we establish an unconditional asymptotic formula for the moments of $S_F(t)$. Previously, such a formula was only known under the Generalized Riemann Hypothesis. The key ingredient is a weighted zero-density estimate in the spectral aspect for $L(s, F)$ which was recently proved by the authors in [18].
Berry's random wave conjecture posits that high energy eigenfunctions of chaotic systems resemble random monochromatic waves at the Planck scale. One important consequence is that, at the Planck scale around "many" points in the manifold, any solution to the Helmholtz equation $\Delta\varphi+\varphi =0$ can be approximated by high energy eigenfunctions. This property, sometimes called inverse localization, has useful applications to the study of the nodal sets of eigenfunctions. Alas, the only manifold for which the local limits of a sequence of high energy eigenfunctions are rigorously known to be given by random waves is the flat torus $(\mathbf{R}/\mathbf{Z})^2$, which is certainly not chaotic. Our objective in this paper is to study the validity of this "inverse localization" property in the class of integrable billiards, exploiting the fact that integrable polygonal billiards are classified and that Birkhoff conjectured that ellipses are the only smooth integrable billiards. Our main results show that, while there are infinitely many integrable polygons exhibiting good inverse localization properties, for "most" integrable polygons and ellipses, this property fails dramatically. We thus conclude that, in a generic integrable billiard, the local limits of Dirichlet and Neumann eigenfunctions do not match random waves, as one might expect in view of Berry's conjecture. Extensions to higher dimensions and nearly integrable polygons are discussed too.
We are interested in the numerical solution of the tensor least squares problem \[ \min_{\mathcal{X}} \| \mathcal{F} - \sum_{i =1}^{\ell} \mathcal{X} \times_1 A_1^{(i)} \times_2 A_2^{(i)} \cdots \times_d A_d^{(i)} \|_F, \] where $\mathcal{X}\in\mathbb{R}^{m_1 \times m_2 \times \cdots \times m_d}$, $\mathcal{F}\in\mathbb{R}^{n_1\times n_2 \times \cdots \times n_d}$ are tensors with $d$ dimensions, and the coefficients $A_j^{(i)}$ are tall matrices of conforming dimensions. We first describe a tensor implementation of the classical LSQR method by Paige and Saunders, using the tensor-train representation as key ingredient. We also show how to incorporate sketching to lower the computational cost of dealing with the tall matrices $A_j^{(i)}$. We then use this methodology to address a problem in information retrieval, the classification of a new query document among already categorized documents, according to given keywords.
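The following sketch illustrates the underlying linear-algebraic structure in the matrix case $d=2$, where $\mathcal{X}\times_1 A_1\times_2 A_2 = A_1 X A_2^{\top}$: the least squares problem is solved with SciPy's LSQR through a matrix-free operator built from the coefficient matrices. This is only a small illustration under these assumptions, without the tensor-train representation or sketching used in the paper.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

# small instance with d = 2 modes and ell = 2 terms:
# min_X || F - sum_i A1[i] X A2[i].T ||_F, i.e. vec(F) ~ sum_i (A2[i] kron A1[i]) vec(X)
rng = np.random.default_rng(0)
m1, m2, n1, n2, ell = 4, 5, 20, 30, 2
A1 = [rng.normal(size=(n1, m1)) for _ in range(ell)]   # tall coefficient matrices
A2 = [rng.normal(size=(n2, m2)) for _ in range(ell)]
X_true = rng.normal(size=(m1, m2))
F = sum(A1[i] @ X_true @ A2[i].T for i in range(ell))

def matvec(x):
    X = x.reshape(m1, m2)
    return sum(A1[i] @ X @ A2[i].T for i in range(ell)).ravel()

def rmatvec(y):
    Y = y.reshape(n1, n2)
    return sum(A1[i].T @ Y @ A2[i] for i in range(ell)).ravel()

Aop = LinearOperator((n1 * n2, m1 * m2), matvec=matvec, rmatvec=rmatvec)
x_ls = lsqr(Aop, F.ravel(), atol=1e-12, btol=1e-12)[0]
print(np.linalg.norm(x_ls.reshape(m1, m2) - X_true) / np.linalg.norm(X_true))
```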
Every convex optimization problem has a dual problem. The $p$-Dirichlet problem in metric measure spaces is an optimization problem whose solutions are $p$-harmonic functions. What is its dual problem? In this paper, we give an answer to this question in the following form. We give a generalized modulus problem whose solution is the gradient of the $p$-harmonic function in metric measure spaces. Its dual problem is an optimization problem for measures on curves, and we show exact duality and the existence of minimizers for this dual problem under appropriate assumptions. When applied to $p$-harmonic functions, the minimizers of this dual problem are supported on gradient curves, yielding a natural concept associated to such functions that has yet to be studied. This process defines a natural dual metric current and proves the existence of gradient curves. These insights are then used to construct a counterexample answering the old ``sheaf problem'' on metric spaces: in contrast to Euclidean spaces, in general metric spaces being $p$-harmonic is not, strictly speaking, a local property.
Monotone stochastic matrices are stochastic matrices in which each row stochastically dominates the previous one. While the eigenvalue regions for stochastic matrices were fully described by F.I. Karpelevich in 1951, this study focuses on the analysis of monotone matrices. This paper examines their spectral properties and establishes a reduction theorem stating that, for all $n\geq 3$, the eigenvalue region for the $n\times n$ monotone matrices is included in that for the $(n-1)\times(n-1)$ stochastic matrices. Moreover, the eigenvalue region, along with the corresponding realising matrices, is determined for monotone matrices up to order 3.
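A small numerical illustration of the objects involved: the sketch below checks row-wise stochastic dominance (monotonicity) of a row-stochastic matrix and computes its eigenvalues. The example matrix is arbitrary.

```python
import numpy as np

def is_monotone_stochastic(P, tol=1e-12):
    """Check that P is row-stochastic and each row stochastically dominates the previous one."""
    P = np.asarray(P, dtype=float)
    if np.any(P < -tol) or not np.allclose(P.sum(axis=1), 1.0):
        return False
    # row i dominates row i-1 iff all its tail sums sum_{j>=k} P[i, j] are >= those of row i-1
    tails = np.cumsum(P[:, ::-1], axis=1)[:, ::-1]
    return bool(np.all(np.diff(tails, axis=0) >= -tol))

P = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])
print(is_monotone_stochastic(P), np.linalg.eigvals(P))
```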
This paper tackles the challenge of coordinating traffic lights and automated vehicles at signalized intersections, formulated as a constrained finite-horizon optimal control problem. The problem falls into the category of mixed-integer nonlinear programming, posing challenges for solving large instances. To address this, we introduce a decomposition approach consisting of an upper-level problem for traffic light timing allocation and a set of lower-level problems that generate appropriate commands for automated vehicles in each intersection movement. By leveraging solutions from the lower-level problems and employing parametric optimization techniques, we solve the upper-level problem using a standard sequential quadratic programming approach. The paper concludes by presenting an illustrative numerical example that highlights the effectiveness of our algorithm compared to scenarios where no coordination between traffic lights and vehicles exists.
The aim of this article is to count the $n$-tuples of positive integers $(a_{1},\ldots,a_{n})$ that are solutions of the equation $\begin{pmatrix} a_{n} & -1 \\[4pt] 1 & 0 \end{pmatrix} \begin{pmatrix} a_{n-1} & -1 \\[4pt] 1 & 0 \end{pmatrix} \cdots \begin{pmatrix} a_{1} & -1 \\[4pt] 1 & 0 \end{pmatrix}=\pm M$ when $M$ is one of the generators of the modular group, $S=\begin{pmatrix} 0 & -1 \\[4pt] 1 & 0 \end{pmatrix}$ and $T=\begin{pmatrix} 1 & 1 \\[4pt] 0 & 1 \end{pmatrix}$. To count these elements, we will study the $\lambda$-quiddities, which are the solutions of the equation in the case $M=\mathrm{Id}$ (related to Coxeter's friezes), whose last component is fixed.
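For small parameters, such solutions can be enumerated directly. The sketch below brute-forces the matrix equation with an ad hoc bound on the entries $a_i$, purely for illustration (solutions with larger entries are not found by this search).

```python
import numpy as np
from itertools import product

S = np.array([[0, -1], [1, 0]])
T = np.array([[1, 1], [0, 1]])

def count_solutions(M, n, max_a=10):
    """Brute-force count of n-tuples (a_1,...,a_n) of positive integers with
    M(a_n) M(a_{n-1}) ... M(a_1) = +/- M, where M(a) = [[a, -1], [1, 0]].
    max_a is an ad hoc search bound, for illustration only."""
    count = 0
    for tup in product(range(1, max_a + 1), repeat=n):
        prod_matrix = np.eye(2, dtype=int)
        for a in tup:  # apply M(a_1) first, then M(a_2), ...
            prod_matrix = np.array([[a, -1], [1, 0]]) @ prod_matrix
        if np.array_equal(prod_matrix, M) or np.array_equal(prod_matrix, -M):
            count += 1
    return count

print([count_solutions(S, n) for n in range(1, 5)])
```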
In this paper we give a short overview of the Ball-Evans approximation problem, i.e. the approximation of Sobolev homeomorphisms by a sequence of diffeomorphisms (or piecewise affine homeomorphisms), and we recall the motivation for this problem. We present some recent planar results and counterexamples in higher dimensions, and we give a number of open problems connected to this problem and related fields.
The Pascal matrix, which is related to Pascal's triangle, appears in many places in the theory of uniform distribution and in many other areas of mathematics. Examples are the construction of low-discrepancy sequences as well as of normal numbers, or the binomial transforms of Hankel matrices. Hankel matrices which are defined by Catalan numbers and related to the paperfolding sequence are interesting objects in number theory. Therefore, matrices that share many properties with the Pascal matrix or such Hankel matrices are of interest. In this note we collect common features of the Pascal matrix and its reduction modulo $2$, as well as of the Hankel matrix defined by Catalan numbers, both over the ring of integers and modulo $2$. Hankel matrices with only $0$ and $1$ entries, e.g. over finite fields, recently gave access to counterexamples to the so-called $X$-adic Liouville conjecture. This justifies as well as motivates our consideration of further matrices with $0$ and $1$ entries.
Let $K$ be a number field with ring of integers $\mathcal{O}_K$. Let $\mathcal{N}_K$ be the set of positive integers $n$ such that there exist units $\varepsilon, \delta \in \mathcal{O}_K^\times$ satisfying $\varepsilon + \delta = n$. We show that $\mathcal{N}_K$ is a finite set if $K$ does not contain any real quadratic subfield. In the case where $K$ is a cubic field, we also explicitly classify all solutions to the unit equation $\varepsilon + \delta = n$ when $K$ is either cyclic or has negative discriminant.
Mazur and Rubin introduced the notion of $n$-Selmer companion elliptic curves and gave several examples of pairs of non-isogenous Selmer companions. We construct two infinite families of pairs of non-isogenous $3$-Selmer companions, parameterised by $t\in\mathbb{Z}$.
In this note, we propose a probabilistic approach to bound the (dimension-free) Lipschitz constant of the Langevin flow map on $\mathbb{R}^d$ introduced by Kim and Milman (2012). As an example of application, we construct Lipschitz maps from a uniformly $\log$-concave probability measure to $\log$-Lipschitz perturbations as in Fathi, Mikulincer, Shenfeld (2024). Our proof is based on coupling techniques applied to the stochastic representation of the family of vector fields inducing the transport map. This method is robust enough to relax the uniform convexity to a weak asymptotic convexity condition and to remove the bound on the third derivative of the potential of the source measure.
In this article we propose a novel method for sampling from Gibbs distributions of the form $\pi(x)\propto\exp(-U(x))$ with a potential $U(x)$. In particular, inspired by diffusion models we propose to consider a sequence $(\pi^{t_k})_k$ of approximations of the target density, for which $\pi^{t_k}\approx \pi$ for $k$ small and, on the other hand, $\pi^{t_k}$ exhibits favorable properties for sampling for $k$ large. This sequence is obtained by replacing parts of the potential $U$ by its Moreau envelopes. Sampling is performed in an Annealed Langevin type procedure, that is, sequentially sampling from $\pi^{t_k}$ for decreasing $k$, effectively guiding the samples from a simple starting density to the more complex target. In addition to a theoretical analysis we show experimental results supporting the efficacy of the method in terms of increased convergence speed and applicability to multi-modal densities $\pi$.
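A toy illustration of the annealing idea, assuming a one-dimensional potential $U(x)=|x|$ whose proximal map, and hence the gradient of its Moreau envelope, is available in closed form. The step sizes and annealing schedule are arbitrary choices for this sketch and are not the schedule analyzed in the paper.

```python
import numpy as np

def prox_abs(x, t):
    """Proximal map of U(x)=|x| (soft-thresholding), chosen so the Moreau envelope
    gradient has a closed form for this toy example."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def grad_moreau(x, t):
    # gradient of the Moreau envelope U_t:  (x - prox_{tU}(x)) / t
    return (x - prox_abs(x, t)) / t

def annealed_langevin(n_samples=1000, ts=(4.0, 2.0, 1.0, 0.5), steps=200, h=0.05, seed=0):
    """Illustrative annealed unadjusted Langevin sampler over smoothed targets pi^{t_k}."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n_samples) * 3.0       # simple starting density
    for t in ts:                               # decreasing smoothing parameters t_k
        for _ in range(steps):
            x = x - h * grad_moreau(x, t) + np.sqrt(2 * h) * rng.normal(size=n_samples)
    return x

samples = annealed_langevin()
print(samples.mean(), np.mean(np.abs(samples)))  # target pi(x) ~ exp(-|x|): mean 0, E|x| = 1
```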
In this paper, we introduce drifted versions of the generalized counting process (GCP) with a deterministic drift and a random drift. The composition of a stable subordinator with an independent inverse stable subordinator is taken as the random drift. We derive the probability laws and the governing fractional differential equations for these drifted versions. Also, we study the GCP time-changed with different Brownian clocks, for example, the Brownian first passage-time with or without drift, elastic Brownian motion, the Brownian sojourn time on the positive half-line, and Bessel times. For these time-changed processes, we obtain the governing systems of differential equations for their state probabilities, probability generating functions, etc. Further, we consider a time-changed GCP where the time-change is done by subordinators linked to the incomplete gamma function. Later, we study the fractional integral of the GCP and its time-changed variant.
For the $\beta$-Hermite, Laguerre, and Jacobi ensembles of dimension $N$ there exist central limit theorems for the freezing case $\beta\to\infty$ such that the associated means and covariances can be expressed in terms of the associated Hermite, Laguerre, and Jacobi polynomials of order $N$ respectively as well as via the associated dual polynomials in the sense of de Boor and Saff. In this paper we derive limits for $N\to\infty$ for the covariances of the $r\in\mathbb N$ largest (and smallest) eigenvalues for these frozen Jacobi ensembles in terms of Bessel functions. These results correspond to the hard edge analysis in the frozen Laguerre cases by Andraus and Lerner-Brecher and to known results for finite $\beta$.
In this paper, we prove that for a threefold of Fano type $X$ and a movable $\mathbb{Q}$-Cartier Weil divisor $D$ on $X$, the number of singular varieties that arise during the running of a $D$-MMP is bounded by $1 + h^1(X, 2D)$. Additionally, we prove a partial converse to the Kodaira vanishing theorem for a movable divisor on a threefold of Fano type.
In this paper, we discuss the stable discretisation of the double layer boundary integral operator for the wave equation in $1d$. For this, we show that the boundary integral formulation is $L^2$-elliptic and also inf-sup stable in standard energy spaces. This turns out to be a particular case of a recent result on the inf-sup stability of boundary integral operators for the wave equation and contributes to its further understanding. Moreover, we present the first BEM discretisations of second-kind operators for the wave equation for which stability is guaranteed and a complete numerical analysis is offered. We validate our theoretical findings with numerical experiments.
We find new estimates and a new asymptotic decoupling phenomenon for solutions to Hitchin's self-duality equations at high energy. These generalize previous results for generically regular semisimple Higgs bundles to arbitrary Higgs bundles. We apply our estimates to the Hitchin WKB problem and to high energy harmonic maps to symmetric spaces and buildings.
Algebraic symplectic cobordism is the universal symplectically oriented cohomology theory for schemes, represented by the motivic commutative ring spectrum $\text{MSp}$ constructed by Panin and Walter. The graded algebraic diagonal $\text{MSp}^*$ of the coefficient ring of $\text{MSp}$ is unknown. Through a symplectic version of the Pontryagin-Thom construction, one can associate any symplectic variety $X$ with a symplectic class $[X]_\text{MSp}$ in $\text{MSp}^{-\text{dim} X}$. Still, the problem in using these classes to study the ring $\text{MSp}^*$ is the paucity of non-trivial examples of symplectic varieties. We modify this construction to obtain elements in $\text{MSp}^*$ from a large family of varieties that are not symplectic but carry a certain "symplectic twist". Then, using a strategy relying on the Adams spectral sequence for $\text{MSp}$, we find a criterion to select generators among these classes, after taking a completion along the motivic Hopf map $\eta$.
For a positive rational $\alpha$, call a set of distinct positive integers $\{a_1, a_2, \ldots, a_r\}$ an $\alpha$-partition of $n$, if the sum of the $a_i$ is equal to $n$ and the sum of the reciprocals of the $a_i$ is equal to $\alpha$. Define $n_{\alpha}$ to be the smallest positive integer such that for all $n \ge n_{\alpha}$ an $\alpha$-partition of $n$ exists and, for a positive integer $M \ge 2$, define $N_M$ to be the smallest positive integer such that for all $n \ge N_M$ a $1$-partition of $n$ exists where $M$ does not divide any of the $a_i$. In this paper we determine $N_M$ for all $M \ge 2$, and find the set of all $\alpha$ such that $n_{\alpha} \le 100$.
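A brute-force check of the definition for tiny $n$ (exponential search, illustration only):

```python
from fractions import Fraction
from itertools import combinations

def find_alpha_partition(n, alpha):
    """Return a set of distinct positive integers summing to n whose reciprocals sum to
    alpha, or None. Exponential search, suitable only for very small n."""
    for r in range(1, n + 1):
        for combo in combinations(range(1, n + 1), r):
            if sum(combo) == n and sum(Fraction(1, a) for a in combo) == alpha:
                return combo
    return None

# example: 11 = 2 + 3 + 6 is a 1-partition since 1/2 + 1/3 + 1/6 = 1
print(find_alpha_partition(11, Fraction(1)))
```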
We provide a new sufficient condition to detect the finite convergence of moment relaxations of polynomial optimization problems with correlative sparsity. The condition requires that certain moment matrices in the relaxation admit a flat extension, and that the variable cliques used to construct the relaxation satisfy a `running intersection' property. The proof also reveals an algorithm to extract at least as many minimizers for the original polynomial optimization problem as the smallest rank of the moment matrices in its relaxation. The necessity of the running intersection property is demonstrated with an illustrative example.
We study the geometry of the moduli stack of torsion-free sheaves on ribbons. We introduce a stratification of the stack by the complete type of the sheaves, and we investigate the geometric properties of the strata and their closure relation. Then we describe the irreducible components of the stack, by revealing an interesting trichotomy between Fano, Calabi-Yau and canonically polarized cases. Finally, we describe which strata intersect the (semi)stable locus.
In this paper, we develop a numerical algorithm for an inverse problem on determining fractional orders of time derivatives simultaneously in a coupled subdiffusion system. Following the theoretical uniqueness, we reformulate the order inverse problem as a discrete minimization problem, so that we derive a concise Gauss-Newton iterative method. Abundant numerical tests demonstrate the efficiency and accuracy of the proposed algorithm.
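A generic Gauss--Newton iteration with a finite-difference Jacobian conveys the structure of such an order-reconstruction scheme. The forward map below is a synthetic stand-in; it is not the coupled subdiffusion solver of the paper.

```python
import numpy as np

def gauss_newton(residual, x0, n_iter=20, fd_eps=1e-6):
    """Generic Gauss--Newton iteration with a finite-difference Jacobian (minimal sketch)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = residual(x)
        J = np.empty((r.size, x.size))
        for j in range(x.size):                      # finite-difference Jacobian, column by column
            xp = x.copy()
            xp[j] += fd_eps
            J[:, j] = (residual(xp) - r) / fd_eps
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)  # Gauss--Newton step
        x = x + dx
    return x

# toy usage: recover two "orders" (0.4, 0.7) from a synthetic forward map
forward = lambda a: np.array([a[0] + a[1] ** 2, np.exp(a[0]) * a[1], a[0] * a[1]])
data = forward(np.array([0.4, 0.7]))
print(gauss_newton(lambda a: forward(a) - data, np.array([0.5, 0.5])))
```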
We show that a family of Dirichlet series generalizing the Fibonacci zeta function $\sum F(n)^{-s}$ has meromorphic continuation in terms of dihedral $\mathrm{GL}(2)$ Maass forms.
The frozen Erd\H{o}s-R\'enyi random graph is a variant of the standard dynamical Erd\H{o}s-R\'enyi random graph that prevents the creation of the giant component by freezing the evolution of connected components with a unique cycle. The formation of multicyclic components is forbidden, and the growth of components with a unique cycle is slowed down, depending on a parameter $p\in [0,1]$ that quantifies the slowdown. At the time when all connected components of the graph have a (necessarily unique) cycle, the graph is entirely frozen and the process stops. In this paper we study the fluid limit of the main statistics of this process, that is, their functional convergence, as the number of vertices of the graph becomes large and after a proper rescaling, to the solution of a system of differential equations. Our proofs are based on an adaptation of Wormald's differential equation method. We also obtain, as a main application, a precise description of the asymptotic behavior of the first time when the graph is entirely frozen.
We prove that the classical planar $n$-body problem, when restricted to a common level set of the energy and the angular momentum, is not integrable except in the case when both values of these integrals are zero.
Introduced in 2017 \cite{B1-pinnau2017consensus}, Consensus-Based Optimization (CBO) has rapidly emerged as a significant breakthrough in global optimization. This straightforward yet powerful multi-particle, zero-order optimization method draws inspiration from Simulated Annealing and Particle Swarm Optimization. Using a quantitative mean-field approximation, CBO dynamics can be described by a nonlinear Fokker-Planck equation with degenerate diffusion, which does not follow a gradient flow structure. In this paper, we demonstrate that solutions to the CBO equation remain positive and maintain full support. Building on this foundation, we establish the unconditional global convergence of CBO methods to global minimizers. Our results are derived through an analysis of solution regularity and the proof of existence of smooth, classical solutions to a broader class of drift-diffusion equations, despite the challenges posed by degenerate diffusion.
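For orientation, here is a minimal Euler--Maruyama discretization of isotropic CBO particle dynamics. The parameter values are arbitrary, and the example objective is a shifted quadratic; this sketch is not the Fokker--Planck analysis of the paper.

```python
import numpy as np

def cbo_minimize(f, dim=2, n_particles=100, n_steps=500, dt=0.01,
                 lam=1.0, sigma=0.7, alpha=30.0, seed=0):
    """Minimal isotropic Consensus-Based Optimization sketch (Euler--Maruyama discretization)."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n_particles, dim)) * 3.0
    for _ in range(n_steps):
        vals = f(X)
        w = np.exp(-alpha * (vals - vals.min()))       # Gibbs-type weights
        m = (w[:, None] * X).sum(0) / w.sum()          # weighted consensus point
        drift = -lam * (X - m) * dt
        noise = sigma * np.linalg.norm(X - m, axis=1, keepdims=True) \
                * np.sqrt(dt) * rng.normal(size=X.shape)
        X = X + drift + noise
    return m

# toy usage: minimize a shifted quadratic; the global minimizer is (1, -2)
f = lambda X: np.sum((X - np.array([1.0, -2.0])) ** 2, axis=1)
print(cbo_minimize(f))
```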
Multivariate polynomial optimization is a prevalent model for a number of engineering problems. From a mathematical viewpoint, polynomial optimization is challenging because it is non-convex. Lasserre's theory, based on semidefinite relaxations, provides an effective tool to overcome this issue and to achieve the global optimum. However, this approach can be computationally complex for medium- and large-scale problems. For this reason, in this work, we investigate a local minimization approach, based on the alternating direction method of multipliers, which is of low complexity, straightforward to implement, and amenable to decentralization. The core of the work is the development of the algorithm tailored to polynomial optimization, along with the proof of its convergence. Through a numerical example we show a practical implementation and test the effectiveness of the proposed algorithm with respect to state-of-the-art methodologies.
We announce a database of rigorously computed Maass forms on congruence subgroups $\Gamma_0(N)$ and briefly describe the methods of computation.
We introduce the classes of holomorphic $p$-contact manifolds and holomorphic $s$-symplectic manifolds that generalise the classical holomorphic contact and holomorphic symplectic structures. After observing their basic properties and exhibiting a wide range of examples, we give three types of general conceptual results involving the former class of manifolds: structure theorems; hyperbolicity results; unobstructedness theorems, generalising to our context the classical Bogomolov-Tian-Todorov theorem, for two types of small deformations of complex structures that generalise the small essential deformations previously introduced for the Iwasawa manifold and for Calabi-Yau page-$1$-$\partial\bar\partial$-manifolds.
Lov\'{a}sz et al. proved that every $6$-edge-connected graph has a nowhere-zero $3$-flow. In fact, they proved a more technical statement which says that there exists a nowhere-zero $3$-flow that extends the flow prescribed on the edges incident to a single vertex $z$ of bounded degree. We extend this theorem of Lov\'{a}sz et al. to allow $z$ to have arbitrary degree, under the additional assumption that there is another vertex $x$ with large degree and no small cut separating $x$ and $z$. Using this theorem, we prove two results regarding the generation of minimal graphs with the property that prescribing a specific flow on the edges incident to a vertex does not extend to a nowhere-zero $3$-flow. We use this to further strengthen the theorem of Lov\'{a}sz et al., as well as to make progress on a conjecture of Li et al.
We give an extension of Cheeger's deformation techniques for smooth Lie group actions on manifolds to the setting of singular Riemannian foliations induced by Lie groupoid actions. We give an explicit description of the sectional curvature of our generalized Cheeger deformation.
In this paper, we consider a new transmission eigenvalue problem derived from the scattering by a clamped cavity in a thin elastic material. Scattering in a thin elastic material can be modeled by the Kirchhoff--Love infinite plate problem. This results in a biharmonic scattering problem that can be handled by operator splitting. The main novelty of this transmission eigenvalue problem is that it is posed in all of $\mathbb{R}^2$. This adds analytical and computational difficulties in studying this eigenvalue problem. Here, we prove that the eigenvalues can be recovered from the far field data as well as discreteness of the transmission eigenvalues. We provide some numerical experiments via boundary integral equations to demonstrate the theoretical results. We also conjecture monotonicity with respect to the measure of the scatterer from our numerical experiments.
Klein, Majda, and Damodaran have previously developed a formal asymptotic motion law describing the evolution of nearly parallel vortex filaments within the framework of the three-dimensional Euler equations for incompressible fluids. In this study, we rigorously justify this model for two configurations: the central configuration consisting of regular polygons of $N$ helical filaments rotating with constant speed, and the central configurations of $N+1$ vortex filaments, where an $N$-polygonal central configuration surrounds a central straight filament.
Efficient remote monitoring of distributed sources is essential for many Internet of Things (IoT) applications. This work studies the uncertainty at the receiver when tracking two-state Markov sources over a slotted random access channel without feedback, using the conditional entropy as a performance indicator, and considering the last received value as current state estimate. We provide an analytical characterization of the metric, and evaluate three access strategies: (i) maximizing throughput, (ii) transmitting only on state changes, and (iii) minimizing uncertainty through optimized access probabilities. Our results reveal that throughput optimization does not always reduce uncertainty. Moreover, while reactive policies are optimal for symmetric sources, asymmetric processes benefit from mixed strategies allowing transmissions during state persistence.
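As a toy illustration of the metric, the sketch below estimates by Monte Carlo the conditional entropy of a two-state Markov source given the last received value, abstracting the random access channel as an erasure channel with a fixed success probability. The erasure abstraction and all numbers are assumptions of this sketch, not the collision model of the paper.

```python
import numpy as np

def simulated_conditional_entropy(p01=0.1, p10=0.3, tx_prob=0.5, success_prob=0.8,
                                  T=200_000, seed=0):
    """Monte Carlo estimate of H(state | last received value), in bits (toy model)."""
    rng = np.random.default_rng(seed)
    state, estimate = 0, 0
    joint = np.zeros((2, 2))                      # counts of (estimate, true state)
    for _ in range(T):
        # two-state Markov source transition
        flip = rng.random() < (p01 if state == 0 else p10)
        state = 1 - state if flip else state
        # random access: transmit with tx_prob, delivered with success_prob (erasure abstraction)
        if rng.random() < tx_prob and rng.random() < success_prob:
            estimate = state
        joint[estimate, state] += 1
    p = joint / joint.sum()
    H = 0.0
    for e in range(2):
        for s in range(2):
            if p[e, s] > 0:
                H -= p[e, s] * np.log2(p[e, s] / p[e].sum())   # -sum p(e,s) log p(s|e)
    return H

print(simulated_conditional_entropy())
```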
Bioprocesses are often characterised by nonlinear and uncertain dynamics, posing particular challenges for model predictive control (MPC) algorithms due to their computational demands when applied to nonlinear systems. Recent advances in optimal control theory have demonstrated that concepts from convex optimisation, tube MPC, and differences of convex functions (DC) enable efficient, robust online process control. Our approach is based on DC decompositions of nonlinear dynamics and successive linearisations around predicted trajectories. By convexity, the linearisation errors have tight bounds and can be treated as bounded disturbances within a robust tube MPC framework. We describe a systematic, data-driven method for computing DC model representations using deep learning neural networks with a special convex structure, and explain how the resulting MPC optimisation can be solved using convex programming. For the problem of maximising product formation in a cultivation with uncertain model parameters, we design a controller that ensures robust constraint satisfaction and allows online estimation of unknown model parameters. Our results indicate that this method is a promising solution for computationally tractable, robust MPC of bioprocesses.
We introduce the concepts of branched coarse coverings and transfers between coarse homology theories along them. We show that various versions of coarse $K$-homology theories admit the additional structure of transfers. We show versions of Atiyah's $L^{2}$-index theorem in coarse homotopy theory and apply them to give a new argument for the corresponding step in Higson's counterexample to the coarse Baum-Connes conjecture.
Under mild hypotheses, given a scheme $U$ and an open subset $V$ whose complement has codimension at least two, the pushforward of a torsion-free coherent sheaf on $V$ is coherent on $U$. We prove an analog of this result in the context of formal schemes over a complete discrete valuation ring. We then apply this to obtain a result about gluing formal functions, where the patches do not cover the entire scheme.
We show new properties of the Langlands correspondence for arbitrary tori over local fields. Furthermore, we give a detailed analysis of depth-zero characters of reductive $p$-adic groups, for groups that may be wildly ramified. We present several different definitions of ``depth zero'' for characters, and show that these notions are in fact equivalent. These results are useful for proving new cases of local Langlands correspondences, in particular for depth-zero representations.
In this work, we study the Hodge wave equation on a compact orientable manifold. We present the necessary differential geometry language to treat Sobolev spaces of differential forms and use these tools to identify a boundary triplet for the problem. We use this boundary triplet to determine a class of boundary conditions for which the problem is well-posed.
We construct an example of a full factor $M$ such that its canonical outer modular flow $\sigma^M : \mathbb{R} \rightarrow \mathrm{Out}(M)$ is almost periodic but $M$ has no almost periodic state. This can only happen if the discrete spectrum of $\sigma^M$ contains a nontrivial integral quadratic relation. We show how such a nontrivial relation can produce a 3-cohomological obstruction to the existence of an almost periodic state. To obtain our main theorem, we first strengthen a recent result of Bischoff and Karmakar by showing that for any compact connected abelian group $K$, every cohomology class in $H^3(K,\mathbb{T})$ can be realized as an obstruction of a $K$-kernel on the hyperfinite $\mathrm{II}_1$ factor. We also prove a positive result: if for a full factor $M$ the outer modular flow $\sigma^M : \mathbb{R} \rightarrow \mathrm{Out}(M)$ is almost periodic, then $M \otimes R$ has an almost periodic state, where $R$ is the hyperfinite $\mathrm{II}_1$ factor. Finally, we prove a positive result for crossed product factors associated to strongly ergodic actions of hyperbolic groups.
We prove that the variety of flexes of algebraic curves of degree $3$ in the projective plane is an ideal-theoretic complete intersection in the product of a two-dimensional and a nine-dimensional projective space.
An expansion set is a set $\mathcal{B}$ such that each $b \in \mathcal{B}$ is equipped with a set of expansions $\mathcal{E}(b)$. The theory of expansion sets offers a systematic approach to the construction of classifying spaces for generalized Thompson groups. We say that $\mathcal{B}$ is simple if proper expansions are unique when they exist. We will prove that any given simple expansion set determines a cubical complex with a metric of non-positive curvature. In many cases, the cubical complex will be CAT(0). We are thus able to recover proofs that Thompson's groups $F$, $T$, and $V$, Houghton's groups $H_{n}$, and groups defined by finite similarity structures all act on CAT(0) cubical complexes. We further state a sufficient condition for the cubical complex to be locally finite, and show that the latter condition is satisfied in the cases of $F$, $T$, $V$, and $H_{n}$.
We study small eigenvalues of Toeplitz operators on polarized complex projective manifolds. For Toeplitz operators whose symbols are supported on proper subsets, we prove the existence of eigenvalues that decay exponentially with respect to the semiclassical parameter. Moreover, we establish a connection between the logarithmic distribution of these eigenvalues and the Mabuchi geodesic between the fixed polarization and the Lebesgue envelope associated with the polarization and the non-zero set of the symbol. As an application of our approach, we also obtain analogous results for Toeplitz matrices.
We discuss the existence of positive superharmonic functions $u$ in $\mathbb{R}^N_+=\mathbb{R}^{N-1}\times (0, \infty)$, $N\geq 3$, in the sense $-\Delta u=\mu$ for some Radon measure $\mu$, so that $u$ satisfies the nonlocal boundary condition $$ \frac{\partial u}{\partial n}(x',0)=\lambda \int\limits_{\mathbb{R}^{N-1}}\frac{u(y',0)^p}{|x'-y'|^k}dy' \quad\mbox{ on }\partial \mathbb{R}^N_+, $$ where $p,\lambda>0$ and $k\in (0, N-1)$. First, we show that no solutions exist if $0<k\leq 1$. Next, if $1<k<N-1$, we obtain a new critical exponent given by $p^*=\frac{N-1}{k-1}$ for the existence of such solutions. If $\mu\equiv 0$, we construct an exact solution for $p>p^*$ and discuss the existence of regular solutions, in which case we identify a second critical exponent given by $p^{**}=2\cdot \frac{N-1}{k-1}-1$. Our approach combines various integral estimates with the properties of the newly introduced $\alpha$-lifting operator and fixed point theorems.
Several results regarding the rigidity of maps and cocycles in the setting of Polish groups are established. Firstly, if $G\overset{\phi}\longrightarrow H$ is a map from a locally compact second countable group $G$ into a group $H$ and such that there is a conull subset $Z\subseteq G\times G$ satisfying $$ \phi(xy)=\phi(x)\cdot \phi(y) $$ for all $(x,y)\in Z$, then there is a homomorphism $G\overset{\pi}\longrightarrow H$ agreeing with $\phi$ almost everywhere. A similar statement holds for Baire category. Secondly, if $G\times X\overset{\psi}\longrightarrow H$ is a Baire measurable cocycle associated with a Polish group action $G\curvearrowright X$ and $\psi$ is continuous in the second variable, then $\psi$ is jointly continuous. Again, a related statement holds for measure.
Pinching antennas have recently been proposed as a promising flexible-antenna technology, which can be implemented by attaching low-cost pinching elements to dielectric waveguides. This work explores the potential of employing pinching antenna systems (PASs) for downlink transmission in a multiuser MIMO setting. We consider the problem of hybrid beamforming, where the digital precoder at the access point and the activated locations of the pinching elements are jointly optimized to maximize the achievable weighted sum-rate. Invoking fractional programming, a novel low-complexity algorithm is developed to iteratively update the precoding matrix and the locations of the pinching antennas. We validate the proposed scheme through extensive numerical experiments. Our investigations demonstrate that using PAS, the system throughput can be significantly boosted compared with conventional fixed-location antenna systems, highlighting the potential of PAS as an enabling candidate for next-generation wireless networks.
In this paper, we consider semi-extraspecial $p$-groups $G$ that have an automorphism of order $|G:G'| - 1$. We prove that these groups are isomorphic to Sylow $p$-subgroups of ${\rm SU}_3 (p^{2a})$ for some integer $a$. If $p$ is odd, this is equivalent to saying that $G$ is isomorphic to a Sylow $p$-subgroup of ${\rm SL}_3 (p^a)$.
We prove that globally hyperbolic compact anti-de Sitter (2+1)-spacetimes with strictly convex spacelike boundary that is either smooth or polyhedral and whose holonomy is close to Fuchsian are determined by the induced metric on the boundary.
Kim, Kim, and Neggers (2019) defined probability functions on a poset by listing some very natural conditions that a function \(\pi: P \times P \to [0,1]\) should satisfy in order to capture the intuition of "the likelihood that \(a\) precedes \(b\) in \(P\)". In particular, this generalizes the common notion of poset probability for finite posets, where \(\pi(a,b)\) is the proportion of linear extensions of \(P\) in which \(a\) precedes \(b\). They constructed a family of such functions for posets embedded in the ordered plane; that is to say, for posets of order dimension at most two. We study the probability functions of a finite poset \(P\) by constructing an ancillary poset \(\tilde{P}\), which we call the *probability functions poset*. The relations of this new poset encode the restrictions imposed on probability functions of the original poset by the conditions of the definition. We then define the probability functions polytope, which parameterizes the probability functions on \(P\), and show that it can be realized as the order polytope of \(\tilde{P}\) intersected with a certain affine subspace. We give a partial description of the vertices of the probability functions polytope and show that, in contrast to the order polytope, it is not always a lattice polytope.
In this paper we consider the problem of finding ``as many edge-disjoint Hamilton cycles as possible'' in the binomial random digraph $D_{n,p}$. We show that a typical $D_{n,p}$ contains precisely the minimum of its minimum out-degree and minimum in-degree many edge-disjoint Hamilton cycles, provided that $p\geq \log^{15} n/n$, which is optimal up to a factor of poly$\log n$. Our proof provides a randomized algorithm to generate the cycles and relies on a novel idea of generating $D_{n,p}$ in a sophisticated way that enables us to control some key properties, as well as on an ``online sprinkling'' idea as introduced by Ferber and Vu.
We investigate how a constant time delay influences a parametric autoresonant system. This is a nonlinear system driven by a parametrically chirped force with a negative delay-feedback that maintains adiabatic phase locking with the driving frequency. This phase locking results in a continuous amplitude growth, regardless of parameter changes. Our study reveals a critical threshold for delay strength; above this threshold, autoresonance is sustained, while below it, autoresonance diminishes. We examine the interplay between time delay and autoresonance stability, using multi-scale perturbation methods to derive analytical results, which are corroborated by numerical simulations. Ultimately, the goal is to understand and control autoresonance stability through the time-delay parameters.
Causal inference across multiple data sources has the potential to improve the generalizability, transportability, and replicability of scientific findings. However, data integration methods for time-to-event outcomes -- common in medical contexts such as clinical trials -- remain underdeveloped. Existing data fusion methods focus on binary or continuous outcomes, neglecting the distinct challenges of survival analysis, including right-censoring and the unification of discrete and continuous time frameworks. To address these gaps, we propose two novel approaches for multi-source causal survival analysis. First, considering a target site-specific causal effect, we introduce a semiparametric efficient estimator for scenarios where data-sharing is feasible. Second, we develop a federated learning framework tailored to privacy-constrained environments. This framework dynamically adjusts source site-specific contributions, downweighting biased sources and upweighting less biased ones relative to the target population. Both approaches incorporate nonparametric machine learning models to enhance robustness and efficiency, with theoretical guarantees applicable to both continuous and discrete time-to-event outcomes. We demonstrate the practical utility of our methods through extensive simulations and an application to two randomized trials of a monoclonal neutralizing antibody for HIV-1 prevention: HVTN 704/HPTN 085 (cisgender men and transgender persons in the Americas and Switzerland) and HVTN 703/HPTN 081 (women in sub-Saharan Africa). The results highlight the potential of our approaches to efficiently estimate causal effects while addressing heterogeneity across data sources and adhering to privacy and robustness constraints.
Hypergraph states are a special kind of multipartite states encoded by hypergraphs. They play a significant role in quantum error correction, measurement-based quantum computation, quantum nonlocality and entanglement. In a series of two papers, we introduce and study calibrated hypergraph states, a broad generalization of weighted hypergraph states codified by hypergraphs equipped with calibrations, an ample extension of weightings. We propose as a guiding principle that a constructive theory of hypergraph states must be based on a categorical framework for hypergraphs on the one hand and multi-qudit states on the other, constraining hypergraph states enough to render the determination of their general structure possible. In this first paper, we introduce graded $\varOmega$ monads, concrete Pro categories isomorphic to the Pro category $\varOmega$ of finite von Neumann ordinals and equipped with an associative and unital graded multiplication, and their morphisms, maps of $\varOmega$ monads compatible with their monadic structure. We then show that both calibrated hypergraphs and multi-qudit states naturally organize into graded $\varOmega$ monads. In this way, we lay the foundation for the construction of the calibrated hypergraph state map as a special morphism of these $\varOmega$ monads in the companion paper.
Hypergraph states are a special kind of multipartite states encoded by hypergraphs relevant in quantum error correction, measurement-based quantum computation, quantum nonlocality and entanglement. In a series of two papers, we introduce and investigate calibrated hypergraph states, an extension of weighted hypergraph states codified by hypergraphs equipped with calibrations, a broad generalization of weightings. The guiding principle informing our approach is that a constructive theory of hypergraph states must be based on a categorical framework for both hypergraphs and multi-qudit states, constraining hypergraph states enough to render the determination of their general structure possible. In this second paper, we build upon the graded $\varOmega$ monadic framework worked out in the companion paper, focusing on qudits over a generic Galois ring. We explicitly construct a calibrated hypergraph state map as a special morphism of the calibrated hypergraph and multi-qudit state $\varOmega$ monads. We further prove that the calibrated hypergraph states so yielded are locally maximally entangleable stabilizer states, elucidate their relationship to weighted hypergraph states, show that they reduce to the weighted ones in the familiar qubit case, and prove through examples that this is no longer the case for higher qudits.
We introduce Super Quantum Mechanics (SQM) as a theory that considers states in Hilbert space subject to multiple quadratic constraints. Traditional quantum mechanics corresponds to a single quadratic constraint of wavefunction normalization. In its simplest form, SQM considers states in the form of unitary operators, where the quadratic constraints are conditions of unitarity. In this case, the stationary SQM problem is a quantum inverse problem with multiple applications in machine learning and artificial intelligence. The SQM stationary problem is equivalent to a new algebraic problem that we address in this paper. The SQM non-stationary problem considers the evolution of a quantum system, distinct from the explicit time dependence of the Hamiltonian, $H(t)$. Several options for the SQM dynamic equation are considered, and quantum circuits of 2D type are introduced, which transform one quantum system into another. Although no known physical process currently describes such dynamics, this approach naturally bridges direct and inverse quantum mechanics problems, allowing for the development of a new type of computer algorithm. Beyond computer modeling, the developed theory could be directly applied if or when a physical process capable of solving an inverse quantum problem in a single measurement act (analogous to wavefunction measurement in traditional quantum mechanics) is discovered in the future.
One of the most significant challenges in combating the spread of infectious diseases is the difficulty in estimating the true magnitude of infections. Unreported infections can drive up disease spread, making it very hard to accurately estimate the infectivity of the pathogen and thereby hampering our ability to react effectively. Despite the use of surveillance-based methods such as serological studies, identifying the true magnitude remains challenging. This paper proposes an information-theoretic approach for accurately estimating the number of total infections. Our approach is built on top of ordinary differential equation (ODE) based models, which are commonly used in epidemiology and for estimating such infections. We show how we can help such models better compute the number of total infections and identify the parametrization with which we need the fewest bits to describe the observed dynamics of reported infections. Our experiments on COVID-19 spread show that our approach leads not only to substantially better estimates of the number of total infections but also to better forecasts of infections than standard model-calibration-based methods. We additionally show how our learned parametrization helps in more accurately modeling what-if scenarios with non-pharmaceutical interventions. Our approach provides a general, broadly applicable method for improving epidemic modeling.
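To make the ingredients concrete, here is a toy SIR forward model with a reporting rate, together with a crude two-part description-length score for the observed reported infections. Both the model and the score are illustrative assumptions of this sketch, not the paper's specific ODE models or information-theoretic criterion.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir_reported(theta, t, N=1e6, I0=10):
    """SIR forward model with a reporting rate rho; returns expected reported incidence.
    theta = (beta, gamma, rho). A toy stand-in for an ODE-based epidemic model."""
    beta, gamma, rho = theta
    def rhs(_, y):
        S, I, R = y
        return [-beta * S * I / N, beta * S * I / N - gamma * I, gamma * I]
    sol = solve_ivp(rhs, (t[0], t[-1]), [N - I0, I0, 0.0], t_eval=t)
    new_inf = beta * sol.y[0] * sol.y[1] / N          # true daily incidence
    return rho * new_inf                              # reported incidence

def description_length_bits(theta, t, reported):
    """Crude two-part code: bits for Gaussian residuals plus a constant per parameter
    (an illustrative MDL-style score)."""
    resid = reported - sir_reported(theta, t)
    sigma2 = max(resid.var(), 1e-12)
    data_bits = 0.5 * len(resid) * np.log2(2 * np.pi * np.e * sigma2)
    return data_bits + 32 * len(theta)

t = np.arange(0, 120.0)
obs = sir_reported((0.3, 0.1, 0.25), t) + np.random.default_rng(1).normal(0, 5, t.size)
# the parametrization with the true reporting rate needs fewer bits than full reporting
print(description_length_bits((0.3, 0.1, 0.25), t, obs) <
      description_length_bits((0.3, 0.1, 1.0), t, obs))
```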
We extend the work of Hahn and Carvalho (2015) and develop a doubly-regularized sparse regression estimator by synthesizing Bayesian regularization with penalized least squares within a decision-theoretic framework. In contrast to existing Bayesian decision-theoretic formulations, which chiefly rely upon the symmetric 0-1 loss, the new method -- which we call Bayesian Decoupling -- employs a family of penalized loss functions indexed by a sparsity-tuning parameter. We propose a class of reweighted $\ell_1$ penalties, with two specific instances that achieve simultaneous bias reduction and convexity. The design of the penalties incorporates considerations of signal sizes, as enabled by the Bayesian paradigm. The tuning parameter is selected using a posterior benchmarking criterion, which quantifies the drop in predictive power relative to the posterior mean, the optimal Bayes estimator under the squared error loss. Additionally, in contrast to the widely used median probability model technique, which selects variables by thresholding posterior inclusion probabilities at the fixed threshold of 1/2, Bayesian Decoupling enables the use of a data-driven threshold which automatically adapts to estimated signal sizes and offers far better performance in high-dimensional settings with highly correlated predictors. Our numerical results in such settings show that certain combinations of priors and loss functions significantly improve the solution path compared to existing methods, prioritizing true signals early along the path before false signals are selected. Consequently, Bayesian Decoupling produces estimates with better prediction and selection performance. Finally, a real data application illustrates the practical advantages of our approaches, which select sparser models with larger coefficient estimates.
Supervised dimensionality reduction aims to map labeled data to a low-dimensional feature space while maximizing class discriminability. Despite the availability of methods for learning complex non-linear features (e.g. Deep Learning), there is an enduring demand for dimensionality reduction methods that learn linear features due to their interpretability, low computational cost, and broad applicability. However, there is a gap between methods that optimize linear separability (e.g. LDA), and more flexible but computationally expensive methods that optimize over arbitrary class boundaries (e.g. metric-learning methods). Here, we present Supervised Quadratic Feature Analysis (SQFA), a dimensionality reduction method for learning linear features that maximize the differences between class-conditional first- and second-order statistics, which allow for quadratic discrimination. SQFA exploits the information geometry of second-order statistics in the symmetric positive definite manifold. We show that SQFA features support quadratic discriminability in real-world problems. We also provide a theoretical link, based on information geometry, between SQFA and the Quadratic Discriminant Analysis (QDA) classifier.
Targeted maximum likelihood estimators (TMLEs) are asymptotically optimal among regular, asymptotically linear estimators. In small samples, however, we may be far from "asymptopia" and not reap the benefits of optimality. Here we propose a variant (score-preserving TMLE; SP-TMLE) that leverages an initial estimator defined as the solution of a large number of possibly data-dependent score equations. Instead of targeting only the efficient influence function in the TMLE update to knock out the plug-in bias, we also target the already-solved scores. Solving additional scores reduces the remainder term in the von-Mises expansion of our estimator because these scores may come close to spanning higher-order influence functions. The result is an estimator with better finite-sample performance. We demonstrate our approach in simulation studies leveraging the (relaxed) highly adaptive lasso (HAL) as our initial estimator. These simulations show that in small samples SP-TMLE has reduced bias relative to plug-in HAL and reduced variance relative to vanilla TMLE, blending the advantages of the two approaches. We also observe improved estimation of standard errors in small samples.
We address the prominent communication bottleneck in federated learning (FL). We specifically consider stochastic FL, in which models or compressed model updates are specified by distributions rather than deterministic parameters. Stochastic FL offers a principled approach to compression, and has been shown to reduce the communication load under perfect downlink transmission from the federator to the clients. However, in practice, both the uplink and downlink communications are constrained. We show that bi-directional compression for stochastic FL has inherent challenges, which we address by introducing BICompFL. Our BICompFL is experimentally shown to reduce the communication cost by an order of magnitude compared to multiple benchmarks, while maintaining state-of-the-art accuracies. Theoretically, we study the communication cost of BICompFL through a new analysis of an importance-sampling based technique, which exposes the interplay between uplink and downlink communication costs.
P300 is an Event-Related Potential widely used in Brain-Computer Interfaces, but its detection is challenging due to inter-subject and temporal variability. This work introduces a clustering methodology based on Normalized Compression Distance (NCD) to extract the P300 structure, ensuring robustness against variability. We propose a novel signal-to-ASCII transformation to generate compression-friendly objects, which are then clustered using a hierarchical tree-based method and a multidimensional projection approach. Experimental results on two datasets demonstrate the method's ability to reveal relevant P300 structures, showing clustering performance comparable to state-of-the-art approaches. Furthermore, analysis at the electrode level suggests that the method could assist in electrode selection for P300 detection. This compression-driven clustering methodology offers a complementary tool for EEG analysis and P300 identification.
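The NCD itself is straightforward to compute with an off-the-shelf compressor. The sketch below uses zlib together with a hypothetical amplitude-quantization mapping standing in for the paper's signal-to-ASCII transformation.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance based on zlib; smaller means more similar."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def signal_to_ascii(signal, n_levels=16):
    """Toy signal-to-ASCII mapping: quantize amplitudes to n_levels printable characters.
    (Only an illustration; the paper's transformation may differ.)"""
    lo, hi = min(signal), max(signal)
    width = (hi - lo) or 1.0
    return bytes(33 + min(n_levels - 1, int((v - lo) / width * n_levels)) for v in signal)

a = signal_to_ascii([0.1 * i for i in range(100)])
b = signal_to_ascii([0.1 * i + 0.05 for i in range(100)])
c = signal_to_ascii([((-1) ** i) * i for i in range(100)])
print(ncd(a, b), ncd(a, c))   # similar signals yield a smaller distance
```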
Discrete diffusion models have emerged as a powerful generative modeling framework for discrete data with successful applications spanning from text generation to image synthesis. However, their deployment faces challenges due to the high dimensionality of the state space, necessitating the development of efficient inference algorithms. Current inference approaches mainly fall into two categories: exact simulation and approximate methods such as $\tau$-leaping. While exact methods suffer from unpredictable inference time and redundant function evaluations, $\tau$-leaping is limited by its first-order accuracy. In this work, we advance the latter category by tailoring the first extension of high-order numerical inference schemes to discrete diffusion models, enabling larger step sizes while reducing error. We rigorously analyze the proposed schemes and establish the second-order accuracy of the $\theta$-trapezoidal method in KL divergence. Empirical evaluations on GPT-2 level text and ImageNet-level image generation tasks demonstrate that our method achieves superior sample quality compared to existing approaches under equivalent computational constraints.
Learning effective regularization is crucial for solving ill-posed inverse problems, which arise in a wide range of scientific and engineering applications. While data-driven methods that parameterize regularizers using deep neural networks have demonstrated strong empirical performance, they often result in highly nonconvex formulations that lack theoretical guarantees. Recent work has shown that incorporating structured nonconvexity into neural network-based regularizers, such as weak convexity, can strike a balance between empirical performance and theoretical tractability. In this paper, we demonstrate that a broader class of nonconvex functions, difference-of-convex (DC) functions, can yield improved empirical performance while retaining strong convergence guarantees. The DC structure enables the use of well-established optimization algorithms, such as the Difference-of-Convex Algorithm (DCA) and a Proximal Subgradient Method (PSM), which extend beyond standard gradient descent. Furthermore, we provide theoretical insights into the conditions under which optimal regularizers can be expressed as DC functions. Extensive experiments on computed tomography (CT) reconstruction tasks show that our approach achieves strong performance across sparse and limited-view settings, consistently outperforming other weakly supervised learned regularizers. Our code is available at \url{https://github.com/YasminZhang/ADCR}.
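To make the DCA scheme mentioned above concrete, the following sketch applies it to a classical difference-of-convex penalty (the $\ell_1-\ell_2$ regularizer), not to the paper's learned regularizer: at each outer step the concave part is linearized and the resulting convex subproblem is solved by proximal gradient (ISTA).

```python
# Generic DCA sketch for a difference-of-convex objective
#   F(x) = 0.5*||Ax - b||^2 + lam*(||x||_1 - ||x||_2) = g(x) - h(x),
# with g(x) = 0.5*||Ax - b||^2 + lam*||x||_1 (convex) and h(x) = lam*||x||_2.
# This illustrates the DCA iteration on a classical DC penalty only; it is not
# the paper's learned regularizer.
import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def dca_l1_minus_l2(A, b, lam=0.1, outer=20, inner=200):
    m, n = A.shape
    x = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # ISTA step size for the subproblem
    for _ in range(outer):
        nx = np.linalg.norm(x)
        s = x / nx if nx > 0 else np.zeros(n)    # subgradient of ||x||_2 at x
        # DCA subproblem: minimize g(x) - lam*<s, x> via proximal gradient (ISTA)
        z = x.copy()
        for _ in range(inner):
            grad = A.T @ (A @ z - b) - lam * s
            z = soft(z - step * grad, step * lam)
        x = z
    return x

# toy usage: sparse recovery from few measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = dca_l1_minus_l2(A, b, lam=0.05)
print(np.linalg.norm(x_hat - x_true))
```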
Spectral bias, the tendency of neural networks to prioritize learning low-frequency components of functions during the initial training stages, poses a significant challenge when approximating solutions with high-frequency details. This issue is particularly pronounced in physics-informed neural networks (PINNs), widely used to solve differential equations that describe physical phenomena. In the literature, contributions such as Wavelet Kolmogorov Arnold Networks (Wav-KANs) have demonstrated promising results in capturing both low- and high-frequency components. Similarly, Fourier features (FF) are often employed to address this challenge. However, the theoretical foundations of Wav-KANs, particularly the relationship between the frequency of the mother wavelet and spectral bias, remain underexplored. A more in-depth understanding of how Wav-KANs manage high-frequency terms could offer valuable insights for addressing oscillatory phenomena encountered in parabolic, elliptic, and hyperbolic differential equations. In this work, we analyze the eigenvalues of the neural tangent kernel (NTK) of Wav-KANs to enhance their ability to converge on high-frequency components, effectively mitigating spectral bias. Our theoretical findings are validated through numerical experiments, where we also discuss the limitations of traditional approaches, such as standard PINNs and Fourier features, in addressing multi-frequency problems.
People's opinions on a wide range of topics often evolve over time through their interactions with others. Models of opinion dynamics primarily focus on one-dimensional opinions which represent opinions on one topic. However, opinions on various topics are rarely isolated; instead, they can be interdependent and exhibit correlations. In a bounded-confidence model (BCM) of opinion dynamics, agents influence each other's opinions only if their opinions are sufficiently similar. We extend classical agent-based BCMs -- namely, the Hegselmann--Krause BCM, which has synchronous interactions, and the Deffuant--Weisbuch BCM, which has asynchronous interactions -- to a multidimensional setting, in which opinions are multidimensional vectors representing opinions on different topics, and opinions on different topics are interdependent. To measure opinion differences between agents, we introduce topic-weighted discordance functions that account for opinion differences in all topics. We use the regions of receptiveness to characterize the steady-state opinion clusters and provide an analytical approach to compute these regions. In addition, we numerically simulate our models on various networks with initial opinions drawn from a variety of distributions. When initial opinions are correlated across different topics, our topic-weighted BCMs yield significantly different results in both transient and steady states compared to baseline models, where the dynamics of each opinion topic are independent.
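The sketch below illustrates a synchronous, Hegselmann--Krause-type update on a complete graph with a topic-weighted discordance. The quadratic-form discordance used here is one natural choice made for illustration; the paper's exact discordance functions, network structure, and Deffuant--Weisbuch variant are not reproduced.

```python
# Minimal sketch of a topic-weighted Hegselmann--Krause update on a complete
# graph. The quadratic-form discordance d_ij = sqrt((x_i - x_j)^T W (x_i - x_j))
# is an illustrative assumption, not necessarily the paper's definition.
import numpy as np

def hk_topic_weighted(X, W, conf=0.3, steps=100):
    """X: (n_agents, n_topics) opinions; W: symmetric positive-definite topic weights."""
    X = X.copy()
    for _ in range(steps):
        diff = X[:, None, :] - X[None, :, :]                     # pairwise opinion differences
        d = np.sqrt(np.einsum("ijk,kl,ijl->ij", diff, W, diff))  # topic-weighted discordance
        receptive = d <= conf                                    # who influences whom
        X = np.stack([X[receptive[i]].mean(axis=0) for i in range(len(X))])
    return X

rng = np.random.default_rng(0)
X0 = rng.uniform(0, 1, size=(50, 2))     # 50 agents, 2 interdependent topics
W = np.array([[1.0, 0.4], [0.4, 1.0]])   # off-diagonal entries couple the topics
print(np.round(hk_topic_weighted(X0, W), 3)[:5])
```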
How can we identify groups of primate individuals which could be conjectured to drive social structure? To address this question, one of us has collected a time series of data for social interactions between chimpanzees. Here we use a network representation, leading to the task of combining these data into a time series of a single weighted network per time stamp, where different proximities should be given different weights reflecting their relative importance. We optimize these proximity-type weights in a principled way, using an innovative loss function which rewards structural consistency across time. The approach is empirically validated by carefully designed synthetic data. Using statistical tests, we provide a way of identifying groups of individuals that stay related for a significant length of time. Applying the approach to the chimpanzee data set, we detect cliques in the animal social network time series, which can be validated by real-world intuition from prior research and qualitative observations by chimpanzee experts.
Constrained optimization demands highly efficient solvers, a need that has promoted the development of learn-to-optimize (L2O) approaches. As a data-driven method, L2O leverages neural networks to efficiently produce approximate solutions. However, a significant challenge remains in ensuring both the optimality and the feasibility of neural networks' outputs. To tackle this issue, we introduce Homeomorphic Polar Learning (HoP) to solve star-convex hard-constrained optimization problems by embedding a homeomorphic mapping in the neural network. The bijective structure enables end-to-end training without extra penalty or correction. We evaluate HoP across a variety of synthetic optimization tasks and real-world applications in wireless communications. In all cases, HoP achieves solutions closer to the optimum than existing L2O methods while strictly maintaining feasibility.
Physics-Informed Neural Networks (PINNs) are a class of deep-learning-based numerical solvers for partial differential equations (PDEs). Existing PINNs often suffer from failure modes in which they are unable to propagate the patterns of initial conditions. We discover that these failure modes are caused by the simplicity bias of neural networks and the mismatch between PDE's continuity and PINN's discrete sampling. We reveal that the State Space Model (SSM) can be a continuous-discrete articulation allowing initial condition propagation, and that simplicity bias can be eliminated by aligning a sequence of moderate granularity. Accordingly, we propose PINNMamba, a novel framework that introduces sub-sequence modeling with SSM. Experimental results show that PINNMamba can reduce errors by up to 86.3\% compared with state-of-the-art architectures. Our code is available at https://github.com/miniHuiHui/PINNMamba.
We show that gradient descent with a simple, universal rule for step-size selection provably finds $k$-SVD, i.e., the $k\geq 1$ largest singular values and corresponding vectors, of any matrix, despite nonconvexity. There has been substantial progress towards this in the past few years where existing results are able to establish such guarantees for the \emph{exact-parameterized} and \emph{over-parameterized} settings, with an oracle-provided choice of step size. But guarantees for the generic setting, with a step-size selection that does not require oracle-provided information, have remained a challenge. We overcome this challenge and establish that gradient descent with an appealingly simple adaptive step size (akin to preconditioning) and random initialization enjoys global linear convergence for the generic setting. Our convergence analysis reveals that the gradient method has an attracting region, and within this attracting region, the method behaves like Heron's method (a.k.a. the Babylonian method). Empirically, we validate the theoretical results. The emergence of modern compute infrastructure for iterative optimization coupled with this work is likely to provide means to solve $k$-SVD for very large matrices.
Matrix Product Unitaries (MPUs) have emerged as essential tools for representing locality-preserving 1D unitary operators, with direct applications to quantum cellular automata and quantum phases of matter. A key challenge in the study of MPUs is determining when a given local tensor generates an MPU, a task previously addressed through fixed-point conditions and canonical forms, which can be cumbersome to evaluate for an arbitrary tensor. In this work, we establish a simple and efficient necessary and sufficient condition for a tensor $M$ to generate an MPU of size $N$, given by $\operatorname{Tr}(\mathbb{E}_M^N) = \operatorname{Tr}(\mathbb{E}_T^N) = 1$, where $\mathbb{E}_M$ and $\mathbb{E}_T$ are the transfer matrices of $M$ and $T = MM^\dagger$. This condition provides a unified framework for characterizing all uniform MPUs and significantly simplifies their evaluation. Furthermore, we show that locality preservation naturally arises when the MPU is generated for all system sizes. Our results offer new insights into the structure of MPUs, highlighting connections between unitary evolution, transfer matrices, and locality-preserving behavior, with potential extensions to higher dimensions.
This paper is devoted to investigating the effects and biological consequences of predator-mediated apparent competition in a model with two prey species (one native and the other invasive) and one predator, with Holling type I and II functional response functions. Through the analytical results and case studies alongside numerical simulations, we find that the initial mass of the invasive prey species, capture rates of prey species, and the predator's mortality rate are all important factors determining the success/failure of invasions and the species coexistence/extinction. The global dynamics can be completely classified for the Holling type I functional response function, but can only be partially determined for the Holling type II functional response function. For the Holling type I response function, we find that whether the invasive prey species can successfully invade to promote the predator-mediated apparent competition is entirely determined by the capture rates of prey species. If the Holling type II response function is applied, then the dynamics are more complicated. First, if two prey species have the same ecological characteristics, then the initial mass of the invasive prey species is the key factor determining the success/failure of the invasion and hence the effect of the predator-mediated apparent competition. In contrast, if two prey species have different ecological characteristics, say different capture rates, then the success of the invasion no longer depends on the initial mass of the invasive prey species, but on the capture rates. In all cases, if the invasion succeeds, then the predator-mediated apparent competition's effectiveness essentially depends on the predator's mortality rate.
We propose and study three confidence intervals (CIs) centered at an estimator that is intentionally biased to reduce mean squared error. The first CI simply uses an unbiased estimator's standard error; compared to centering at the unbiased estimator, this CI has higher coverage probability for confidence levels above 91.7%, even if the biased and unbiased estimators have equal mean squared error. The second CI trades some of this "excess" coverage for shorter length. The third CI is centered at a convex combination of the two estimators to further reduce length. Practically, these CIs apply broadly and are simple to compute.
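A small Monte Carlo sketch of the first construction is given below: center the interval at a biased estimator but keep the unbiased estimator's standard error. The shrinkage estimator used here is an illustrative stand-in, not the paper's construction; in this setup the biased-center interval over-covers simply because the center has smaller variance.

```python
# Monte Carlo sketch of a CI centered at a biased (here, shrinkage) estimator
# that keeps the unbiased estimator's standard error. The shrinkage estimator
# is illustrative only, not the construction studied in the paper.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
theta, sigma, n_rep = 1.0, 1.0, 200_000
z = norm.ppf(0.975)                              # nominal 95% level

X = theta + sigma * rng.standard_normal(n_rep)   # unbiased estimator, SE = sigma
shrunk = 0.7 * X                                 # biased toward 0, smaller variance

cover_standard = np.mean(np.abs(X - theta) <= z * sigma)
cover_biased_center = np.mean(np.abs(shrunk - theta) <= z * sigma)
print(f"standard CI coverage:      {cover_standard:.3f}")
print(f"biased-center CI coverage: {cover_biased_center:.3f}")
```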
This work explores the interplay between quantum information theory, algebraic geometry, and number theory, with a particular focus on multiqubit systems, their entanglement structure, and their classification via geometric embeddings. The Segre embedding, a fundamental construction in algebraic geometry, provides an algebraic framework to distinguish separable and entangled states, encoding quantum correlations in projective geometry. We develop a systematic study of qubit moduli spaces, illustrating the geometric structure of entanglement through hypercube constructions and Coxeter chamber decompositions. We establish a bijection between the Segre embeddings of tensor products of projective spaces and binary words of length $n-1$, structured as an $(n-1)$-dimensional hypercube, where adjacency corresponds to a single Segre operation. This reveals a combinatorial structure underlying the hierarchy of embeddings, with direct implications for quantum error correction schemes. The symmetry of the Segre variety under the Coxeter group of type $A$ allows us to analyze quantum states and errors through the lens of reflection groups, viewing separable states as lying in distinct Coxeter chambers on a Segre variety. The transitive action of the permutation group on these chambers provides a natural method for tracking errors in quantum states and potentially reversing them. Beyond foundational aspects, we highlight relations between Segre varieties and Dixon elliptic curves, drawing connections between entanglement and number theory.
We consider the noisy matrix sensing problem in the over-parameterization setting, where the estimated rank $r$ is larger than the true rank $r_\star$. Specifically, our main objective is to recover a matrix $ X_\star \in \mathbb{R}^{n_1 \times n_2} $ with rank $ r_\star $ from noisy measurements using an over-parameterized factorized form $ LR^\top $, where $ L \in \mathbb{R}^{n_1 \times r}, \, R \in \mathbb{R}^{n_2 \times r} $ and $ \min\{n_1, n_2\} \ge r > r_\star $, with the true rank $ r_\star $ being unknown. Recently, preconditioning methods have been proposed to accelerate the convergence of the matrix sensing problem compared to vanilla gradient descent, incorporating preconditioning terms $ (L^\top L + \lambda I)^{-1} $ and $ (R^\top R + \lambda I)^{-1} $ into the original gradient. However, these methods require careful tuning of the damping parameter $\lambda$ and are sensitive to initial points and step size. To address these limitations, we propose the alternating preconditioned gradient descent (APGD) algorithm, which alternately updates the two factor matrices, eliminating the need for the damping parameter and enabling faster convergence with larger step sizes. We theoretically prove that APGD achieves near-optimal error convergence at a linear rate, starting from arbitrary random initializations. Through extensive experiments, we validate our theoretical results and demonstrate that APGD outperforms other methods, achieving the fastest convergence rate. Notably, both our theoretical analysis and experimental results illustrate that APGD does not rely on the initialization procedure, making it more practical and versatile.
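The sketch below illustrates alternating preconditioned updates of the described form in the simplest fully observed case (identity measurement operator): each factor is updated with the undamped preconditioner built from the other factor. The paper's setting uses general sensing operators and its exact APGD update rule and step-size choices may differ from this illustration.

```python
# Sketch of alternating preconditioned gradient descent (APGD-style updates)
# for over-parameterized low-rank recovery. For simplicity the measurement
# operator is the identity (fully observed noisy matrix); the paper's exact
# update for general sensing operators may differ.
import numpy as np

def apgd(Y, r, eta=0.5, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    n1, n2 = Y.shape
    L = 1e-1 * rng.standard_normal((n1, r))
    R = 1e-1 * rng.standard_normal((n2, r))
    for _ in range(iters):
        E = L @ R.T - Y
        # update L with right preconditioner (R^T R)^{-1}, no damping term
        L = L - eta * np.linalg.solve(R.T @ R, (E @ R).T).T
        E = L @ R.T - Y
        # then update R with preconditioner (L^T L)^{-1}
        R = R - eta * np.linalg.solve(L.T @ L, (E.T @ L).T).T
    return L, R

rng = np.random.default_rng(1)
n1, n2, r_star, r = 60, 50, 3, 6          # over-parameterized: r > r_star
X_star = rng.standard_normal((n1, r_star)) @ rng.standard_normal((r_star, n2))
Y = X_star + 0.01 * rng.standard_normal((n1, n2))
L, R = apgd(Y, r)
print(np.linalg.norm(L @ R.T - X_star) / np.linalg.norm(X_star))
```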
Under the assumptions of the ViSE model, we investigate the welfare and performance of a society consisting of one group (a ``party'') and individualists. In the case of Gaussian proposal generators, the expected capital gains can be expressed in terms of standard functions. The relative effectiveness of individualistic and group strategies of agents, as well as the benefits of the entire society, depends on the level of cooperation, the voting threshold, and the favorability of the environment. We focus on the evolution of the society in neutral environments caused by changes to its structure and to the voting rule made in the interests of the agents.
Forecasting multiscale chaotic dynamical systems with deep learning remains a formidable challenge due to the spectral bias of neural networks, which hinders the accurate representation of fine-scale structures in long-term predictions. This issue is exacerbated when models are deployed autoregressively, leading to compounding errors and instability. In this work, we introduce a novel approach to mitigate the spectral bias which we call the Binned Spectral Power (BSP) Loss. The BSP loss is a frequency-domain loss function that adaptively weighs errors in predicting both larger and smaller scales of the dataset. Unlike traditional losses that focus on pointwise misfits, our BSP loss explicitly penalizes deviations in the energy distribution across different scales, promoting stable and physically consistent predictions. We demonstrate that the BSP loss mitigates the well-known problem of spectral bias in deep learning. We further validate our approach for the data-driven high-dimensional time-series forecasting of a range of benchmark chaotic systems which are typically intractable due to spectral bias. Our results demonstrate that the BSP loss significantly improves the stability and spectral accuracy of neural forecasting models without requiring architectural modifications. By directly targeting spectral consistency, our approach paves the way for more robust deep learning models for long-term forecasting of chaotic dynamical systems.
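One plausible instantiation of a binned spectral power loss is sketched below in PyTorch: the power spectra of prediction and target are summed within frequency bins and compared with a relative error, so that low-energy (small-scale) bins are not ignored. The binning and weighting choices here are illustrative assumptions, not necessarily the paper's exact definition of the BSP loss.

```python
# One plausible instantiation of a Binned Spectral Power (BSP) style loss:
# compare the energy of prediction and target within frequency bins, using a
# relative error per bin. Binning and weighting are illustrative assumptions.
import torch

def bsp_loss(pred, target, n_bins=8, eps=1e-8):
    """pred, target: (batch, time) real-valued series."""
    P_pred = torch.fft.rfft(pred, dim=-1).abs() ** 2
    P_true = torch.fft.rfft(target, dim=-1).abs() ** 2
    n_freq = P_pred.shape[-1]
    # assign each frequency to one of n_bins contiguous bins
    bin_ids = torch.linspace(0, n_bins - 1e-6, n_freq).long()
    loss = 0.0
    for b in range(n_bins):
        mask = bin_ids == b
        e_pred = P_pred[..., mask].sum(dim=-1)
        e_true = P_true[..., mask].sum(dim=-1)
        # relative error so that small-scale (low-energy) bins still contribute
        loss = loss + ((e_pred - e_true) / (e_true + eps)).pow(2).mean()
    return loss / n_bins

# toy usage: a prediction that misses the high-frequency component
t = torch.linspace(0, 1, 256)
target = torch.sin(2 * torch.pi * 3 * t) + 0.3 * torch.sin(2 * torch.pi * 40 * t)
pred = torch.sin(2 * torch.pi * 3 * t)
print(bsp_loss(pred[None], target[None]).item())
```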
This paper explores challenges in training Physics-Informed Neural Networks (PINNs), emphasizing the role of the loss landscape in the training process. We examine difficulties in minimizing the PINN loss function, particularly due to ill-conditioning caused by differential operators in the residual term. We compare the gradient-based optimizers Adam, L-BFGS, and their combination, showing the superiority of the combined approach, and introduce a novel second-order optimizer, NysNewton-CG (NNCG), which significantly improves PINN performance. Theoretically, our work elucidates the connection between ill-conditioned differential operators and ill-conditioning in the PINN loss and shows the benefits of combining first- and second-order optimization methods. Our work presents valuable insights and more powerful optimization strategies for training PINNs, which could improve the utility of PINNs for solving difficult partial differential equations.
The Initial Value Problem (IVP) is concerned with finding solutions to a system of autonomous ordinary differential equations (ODE) \begin{equation} \textbf{x}' = \textbf{f}(\textbf{x}) \end{equation} with given initial condition $\textbf{x}(0)\in B_0$ for some box $B_0\subseteq \mathbb{R}^n$. Here $\textbf{f}:\mathbb{R}^n\to\mathbb{R}^n$ and $\textbf{x}:[0,1]\to\mathbb{R}^n$ where $\textbf{f}$ and $\textbf{x}$ are $C^1$-continuous. Let $\texttt{IVP}_\textbf{f}(B_0)$ denote the set of all such solutions $\textbf{x}$. Despite over 40 years of development to design a validated algorithm for the IVP problem, no complete algorithm currently exists. In this paper, we introduce a novel way to exploit the theory of $\textbf{logarithmic norms}$: we introduce the concept of a $\textbf{radical transform}$ $\pi:\mathbb{R}^n\to\mathbb{R}^n$ to convert the above $(\textbf{x},\textbf{f})$-system into another system $\textbf{y}' = \textbf{g}(\textbf{y})$ so that the $(\textbf{y},\textbf{g})$-space has negative logarithmic norm in any desired small enough neighborhood. Based on such radical transform steps, we construct a complete validated algorithm for the following $\textbf{End-Enclosure Problem}$: \begin{equation} INPUT: (\textbf{f}, B_0,\varepsilon), \qquad\qquad OUTPUT: (\underline{B}_0,B_1) \end{equation} where $B_0\subseteq \mathbb{R}^n$ is a box, $\varepsilon>0$, such that $\underline{B}_0\subseteq B_0$, the diameter of $B_1$ is at most $\varepsilon$, and $B_1$ is an end-enclosure for $\texttt{IVP}_\textbf{f}(\underline{B}_0)$, i.e., for all $\textbf{x}\in \texttt{IVP}_\textbf{f}(\underline{B}_0)$, $\textbf{x}(1)\in B_1$. A preliminary implementation of our algorithm shows promise.
This paper establishes an equivalence between the pairwise compatibility of all observables in a scenario, and our ability to create a deterministic underlying-state model for that scenario (a type of hidden-variable model, typically used in the contextuality and nonlocality literature, where quantum states are treated as probability measures over ``better-defined states''). We first argue that the quantum state update rule implies that underlying-state models must update their states in agreement with the rules of conditional probability. We then demonstrate that deterministic underlying-state models meeting this criterion exist if and only if the system's observables are pairwise compatible, which is equivalent to the theoretical predictions of sequential measurements being independent of measurement order.
Laplace Neural Operators (LNOs) have recently emerged as a promising approach in scientific machine learning due to their ability to learn nonlinear maps between function spaces. However, this framework often requires substantial amounts of high-fidelity (HF) training data, which can be prohibitively expensive to acquire. To address this, we propose multi-fidelity Laplace Neural Operators (MF-LNOs), which combine a low-fidelity (LF) base model with parallel linear/nonlinear HF correctors and dynamic inter-fidelity weighting. This allows us to exploit correlations between LF and HF datasets and achieve accurate inference of quantities of interest even with sparse HF data. We further incorporate a modified replica exchange stochastic gradient Langevin algorithm, which enables a more effective posterior distribution estimation and uncertainty quantification in model predictions. Extensive validation across four canonical dynamical systems (the Lorenz system, Duffing oscillator, Burgers equation, and Brusselator reaction-diffusion system) demonstrates the framework's effectiveness. The results show significant improvements, with testing losses reduced by 40% to 80% compared to traditional approaches. This validates MF-LNO as a versatile tool for surrogate modeling in parametric PDEs, offering significant improvements in data efficiency and uncertainty-aware prediction.
We introduce a time-series analysis method for transient two-dimensional flow patterns based on Topological Flow Data Analysis (TFDA), a new approach to topological data analysis. TFDA identifies local topological flow structures from an instantaneous streamline pattern and describes their global connections as a unique planar tree and its string representation. With TFDA, the evolution of two-dimensional flow patterns is reduced to a discrete dynamical system represented as a transition graph between topologically equivalent streamline patterns. We apply this method to study the lid-driven cavity flow at Reynolds numbers ranging from $Re=14000$ to $Re=16000$, a benchmark problem in fluid dynamics data analysis. Our approach reveals the transition from periodic to chaotic flow at a critical Reynolds number when the reduced dynamical system is modelled as a Markov process on the transition graph. Additionally, we perform an observational causal inference to analyse changes in local flow patterns at the cavity corners and discuss differences with a standard interventional sensitivity analysis. This work demonstrates the potential of TFDA-based time-series analysis for uncovering complex dynamical behaviours in fluid flow data.
Neural Operators that directly learn mappings between function spaces, such as Deep Operator Networks (DONs) and Fourier Neural Operators (FNOs), have received considerable attention. Despite the universal approximation guarantees for DONs and FNOs, there is currently no optimization convergence guarantee for learning such networks using gradient descent (GD). In this paper, we address this open problem by presenting a unified framework for optimization based on GD and applying it to establish convergence guarantees for both DONs and FNOs. In particular, we show that the losses associated with both of these neural operators satisfy two conditions -- restricted strong convexity (RSC) and smoothness -- that guarantee a decrease on their loss values due to GD. Remarkably, these two conditions are satisfied for each neural operator due to different reasons associated with the architectural differences of the respective models. One takeaway that emerges from the theory is that wider networks should lead to better optimization convergence for both DONs and FNOs. We present empirical results on canonical operator learning problems to support our theoretical results.
We show that local gauge invariance and superselection rules enforce a ``packaging'' principle for quantum field excitations: \textbf{internal quantum numbers (IQNs)} such as charge, color, or flavor are locked into \textbf{irreducible representation (irrep)} blocks. At the single-particle level, this packaging principle forbids the partial factorization of IQNs (e.g.\ ``half an electron charge'' or ``just the color quantum number of a quark''). Extending to multi-particle states, superselection restricts the net gauge charge to a single sector, eliminating cross-sector Bell-type superpositions while permitting packaged entangled states within one superselection sector. We provide rigorous theorems clarifying (i)~why no partial factorization of IQNs is possible, (ii)~how multi-particle superpositions remain gauge-invariant within a single net-charge sector, and (iii)~how external \textbf{degrees of freedom (DOFs)} (spin, momentum) can be hybridized with these internal charges to form gauge-invariant entanglement. We also discuss how measurements of spin or momentum in such hybrid states lead to collapse of the internal entanglement. The interplay of gauge constraints and superselection thus gives quantum field states a nontrivial information-theoretic structure: \textit{internal charges must be packaged with each particle's creation operator, yet multi-particle states may form entangled superpositions so long as the net charge remains consistent}.
Asynchronous methods are fundamental for parallelizing computations in distributed machine learning. They aim to accelerate training by fully utilizing all available resources. However, their greedy approach can lead to inefficiencies using more computation than required, especially when computation times vary across devices. If the computation times were known in advance, training could be fast and resource-efficient by assigning more tasks to faster workers. The challenge lies in achieving this optimal allocation without prior knowledge of the computation time distributions. In this paper, we propose ATA (Adaptive Task Allocation), a method that adapts to heterogeneous and random distributions of worker computation times. Through rigorous theoretical analysis, we show that ATA identifies the optimal task allocation and performs comparably to methods with prior knowledge of computation times. Experimental results further demonstrate that ATA is resource-efficient, significantly reducing costs compared to the greedy approach, which can be arbitrarily expensive depending on the number of workers.
The electronic and emission properties of correlated multi-particle states are studied theoretically using the ${\bf k}\cdot{\bf p}$ and configuration interaction methods on well-known and measured GaAs/AlGaAs quantum dots as a test system. The convergence of the calculated energies and radiative lifetimes of Coulomb correlated exciton, biexciton, positive and negative trions to experimentally observed values is reached when the electron-electron and hole-hole exchange interactions are neglected. That unexpected and striking result uncovers a rich structure of multi-particle states in the studied system, which is further quantitatively compared to published measurements in the literature, obtaining astonishingly good agreement. It is proposed that in real experiments the neglected electron-electron and hole-hole exchange interactions are emitted as acoustic phonons during the radiative recombination of the ground state of complexes, leading to the observation of polaronic multi-particle states. Analysis of their energy spectra provides a direct and measurable insight into the Coulomb correlation, being interesting both on the fundamental level and as a possibly experimentally tunable property in a wide variety of solid-state systems, in particular associated with quantum computing.
Due to the ever growing amounts of data leveraged for machine learning and scientific computing, it is increasingly important to develop algorithms that sample only a small portion of the data at a time. In the case of linear least-squares, the randomized block Kaczmarz method (RBK) is an appealing example of such an algorithm, but its convergence is only understood under sampling distributions that require potentially prohibitively expensive preprocessing steps. To address this limitation, we analyze RBK when the data is sampled uniformly, showing that its iterates converge in a Monte Carlo sense to a $\textit{weighted}$ least-squares solution. Unfortunately, for general problems the condition number of the weight matrix and the variance of the iterates can become arbitrarily large. We resolve these issues by incorporating regularization into the RBK iterations. Numerical experiments, including examples arising from natural gradient optimization, suggest that the regularized algorithm, ReBlocK, outperforms minibatch stochastic gradient descent for realistic problems that exhibit fast singular value decay.
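The following sketch shows randomized block Kaczmarz with uniform row sampling and a regularized step in the spirit described above; the exact form of the ReBlocK update in the paper is an assumption here (λ = 0 recovers the plain RBK projection).

```python
# Sketch of randomized block Kaczmarz with uniform row sampling, plus a
# regularized variant in the spirit of the abstract's ReBlocK; the exact
# regularized update used in the paper is an assumption.
import numpy as np

def block_kaczmarz(A, b, block=10, lam=0.0, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        idx = rng.choice(m, size=block, replace=False)   # uniform block sampling
        Ab, bb = A[idx], b[idx]
        r = bb - Ab @ x
        # lam = 0: plain RBK projection step; lam > 0: regularized step
        x = x + Ab.T @ np.linalg.solve(Ab @ Ab.T + lam * np.eye(block), r)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((500, 50))
x_true = rng.standard_normal(50)
b = A @ x_true + 0.01 * rng.standard_normal(500)
for lam in (0.0, 1.0):
    x_hat = block_kaczmarz(A, b, lam=lam)
    print(lam, np.linalg.norm(x_hat - x_true))
```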
Understanding the generalization properties of optimization algorithms under heavy-tailed noise has gained growing attention. However, the existing theoretical results mainly focus on stochastic gradient descent (SGD) and the analysis of heavy-tailed optimizers beyond SGD is still missing. In this work, we establish generalization bounds for SGD with momentum (SGDm) under heavy-tailed gradient noise. We first consider the continuous-time limit of SGDm, i.e., a Lévy-driven stochastic differential equation (SDE), and establish quantitative Wasserstein algorithmic stability bounds for a class of potentially non-convex loss functions. Our bounds reveal a remarkable observation: For quadratic loss functions, we show that SGDm admits a worse generalization bound in the presence of heavy-tailed noise, indicating that the interaction of momentum and heavy tails can be harmful for generalization. We then extend our analysis to discrete-time and develop a uniform-in-time discretization error bound, which, to our knowledge, is the first result of its kind for SDEs with degenerate noise. This result shows that, with appropriately chosen step-sizes, the discrete dynamics retain the generalization properties of the limiting SDE. We illustrate our theory on both synthetic quadratic problems and neural networks.
We present Coalition Logic, a three-valued modal fixed-point logic designed for declaratively specifying and reasoning about distributed algorithms, such as the Paxos consensus algorithm. Our methodology represents a distributed algorithm as a logical theory, enabling correctness properties to be derived directly within the framework -- or revealing logical errors in the algorithm's design when they exist. Coalition Logic adopts a declarative approach, specifying the overall logic of computation without prescribing control flow. Notably, message-passing is not explicitly modeled, distinguishing our framework from approaches like TLA+. This abstraction emphasises the logical essence of distributed algorithms, offering a novel perspective on their specification and reasoning. We define the syntax and semantics of Coalition Logic, explore its theoretical properties, and demonstrate its applicability through a detailed treatment of the Paxos consensus algorithm. By presenting Paxos as a logical theory and deriving its standard correctness properties, we showcase the framework's capacity to handle non-trivial distributed systems. We envision Coalition Logic as a versatile tool for specifying and reasoning about distributed algorithms. The Paxos example highlights the framework's ability to capture intricate details, offering a new lens through which distributed algorithms can be specified, studied, and checked.
We study Vaidya-type solutions in Weyl conformal gravity (WCG) using Eddington--Finkelstein-like coordinates. Our considerations focus on spherical as well as hyperbolic and planar symmetries. In particular, we find all vacuum dynamical solutions for the aforementioned symmetries. These are, in contrast to general relativity, structurally quite non-trivial. We provide a thorough analysis of their basic properties, such as the relation to other known WCG solutions, algebraic types, singularities, horizons, and symmetries. In the same vein, we also derive, classify, and discuss non-vacuum solutions with the Coulombic electric field and null dust. Other salient issues, such as the gauge equivalence of WCG solutions to Einstein spaces and the role of the Birkhoff--Riegert theorem, are also addressed.
This work outlines a Lattice Boltzmann Method (LBM) for geometrically and constitutively nonlinear solid mechanics to simulate large deformations under dynamic loading conditions. The method utilizes the moment chain approach, where the nonlinear constitutive law is incorporated via a forcing term. Stress and deformation measures are expressed in the reference configuration. Finite difference schemes are employed for gradient and divergence computations, and Neumann- and Dirichlet-type boundary conditions are introduced. Numerical studies are performed to assess the proposed method and illustrate its capabilities. Benchmark tests for weakly dynamic uniaxial tension and simple shear across a range of Poisson's ratios demonstrate the feasibility of the scheme and serve as validation of the implementation. Furthermore, a dynamic test case involving the propagation of bending waves in a cantilever beam highlights the potential of the method to model complex dynamic phenomena.
The Basel Committee on Banking Supervision proposed replacing all approaches for operational risk capital, including the Advanced Measurement Approach (AMA), with a simplified formula called the Standardized Measurement Approach (SMA). This paper examines and criticizes the weaknesses and failures of SMA, such as instability, insensitivity to risk, superadditivity, and the implicit relationship between the SMA capital model and systemic risk in the banking sector. Furthermore, it discusses the issues of the proposed Operational Risk Capital (OpCar) model by the Basel Committee, a precursor to SMA. The paper concludes by advocating for the maintenance of the AMA internal model framework and suggests a series of standardization recommendations to unify internal operational risk modeling. The findings and viewpoints presented in this paper have been discussed and supported by numerous operational risk professionals and academics from various regions of the world.
Bilevel optimization, addressing challenges in hierarchical learning tasks, has gained significant interest in machine learning. The practical implementation of the gradient descent method to bilevel optimization encounters computational hurdles, notably the computation of the exact lower-level solution and the inverse Hessian of the lower-level objective. Although these two aspects are inherently connected, existing methods typically handle them separately by solving the lower-level problem and a linear system for the inverse Hessian-vector product. In this paper, we introduce a general framework to address these computational challenges in a coordinated manner. Specifically, we leverage quasi-Newton algorithms to accelerate the resolution of the lower-level problem while efficiently approximating the inverse Hessian-vector product. Furthermore, by exploiting the superlinear convergence properties of BFGS, we establish the non-asymptotic convergence analysis of the BFGS adaptation within our framework. Numerical experiments demonstrate the comparable or superior performance of the proposed algorithms in real-world learning tasks, including hyperparameter optimization, data hyper-cleaning, and few-shot meta-learning.
Lipschitz decomposition is a useful tool in the design of efficient algorithms involving metric spaces. While many bounds are known for different families of finite metrics, the optimal parameters for $n$-point subsets of $\ell_p$, for $p > 2$, remained open, see e.g. [Naor, SODA 2017]. We make significant progress on this question and establish the bound $\beta=O(\log^{1-1/p} n)$. Building on prior work, we demonstrate applications of this result to two problems, high-dimensional geometric spanners and distance labeling schemes. In addition, we sharpen a related decomposition bound for $1<p<2$, due to Filtser and Neiman [Algorithmica 2022].
Quantitative information flow analyses (QIF) are a class of techniques for measuring the amount of confidential information leaked by a program to its public outputs. Shannon entropy is an important method to quantify the amount of leakage in QIF. This paper focuses on programs modeled in Boolean constraints and optimizes the two stages of the Shannon entropy computation to implement a scalable precise tool PSE. In the first stage, we design a knowledge compilation language that combines Algebraic Decision Diagrams and conjunctive decomposition; this language avoids enumerating the possible outputs of a program and supports tractable entropy computation. In the second stage, we optimize the model counting queries that are used to compute the probabilities of outputs. We compare PSE with the state-of-the-art probably approximately correct tool EntropyEstimation, which was shown to significantly outperform the existing precise tools. The experimental results demonstrate that PSE solved 55 more benchmarks compared to EntropyEstimation out of a total of 441. For 98% of the benchmarks that both PSE and EntropyEstimation solved, PSE is at least $10\times$ as efficient as EntropyEstimation.
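For context, the quantity being computed is the Shannon entropy of the program's output distribution under uniformly random secret inputs. The brute-force baseline below enumerates all inputs; PSE's contribution is precisely to avoid this enumeration via knowledge compilation and model counting, which is not reproduced here.

```python
# Brute-force baseline for the quantity PSE computes: the Shannon entropy of a
# Boolean program's output distribution under uniformly random secret inputs.
# (PSE avoids this enumeration via knowledge compilation and model counting.)
from itertools import product
from collections import Counter
from math import log2

def shannon_leakage(program, n_secret_bits):
    """program: callable mapping a tuple of bits to a hashable public output."""
    counts = Counter(program(bits) for bits in product((0, 1), repeat=n_secret_bits))
    total = 2 ** n_secret_bits
    return -sum((c / total) * log2(c / total) for c in counts.values())

# toy program: leaks the parity of an 8-bit secret -> exactly 1 bit of leakage
print(shannon_leakage(lambda s: sum(s) % 2, 8))
# toy program: leaks only whether the secret is all-zero -> far less than 1 bit
print(shannon_leakage(lambda s: int(any(s)), 8))
```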
We present a novel generative approach based on Denoising Diffusion Models (DDMs), which produces high-quality image samples along with their losslessly compressed bit-stream representations. This is obtained by replacing the standard Gaussian noise sampling in the reverse diffusion with a selection of noise samples from pre-defined codebooks of fixed iid Gaussian vectors. Surprisingly, we find that our method, termed Denoising Diffusion Codebook Model (DDCM), retains sample quality and diversity of standard DDMs, even for extremely small codebooks. We leverage DDCM and pick the noises from the codebooks that best match a given image, converting our generative model into a highly effective lossy image codec achieving state-of-the-art perceptual image compression results. More generally, by setting other noise selection rules, we extend our compression method to any conditional image generation task (e.g., image restoration), where the generated images are produced jointly with their condensed bit-stream representations. Our work is accompanied by a mathematical interpretation of the proposed compressed conditional generation schemes, establishing a connection with score-based approximations of posterior samplers for the tasks considered.
We apply the Noether symmetry analysis in $f\left( Q\right)$-Cosmology to determine invariant functions and conservation laws for the cosmological field equations. For the FLRW background and the four families of connections, it is found that only power-law $f\left( Q\right)$ functions admit point Noether symmetries. Finally, exact and analytic solutions are derived using the invariant functions.
This paper investigates scalable neural networks with learnable activation functions based on orthogonal function bases and tropical polynomials, targeting ImageNet-1K classification and next token prediction on OpenWebText. Traditional activations, such as ReLU, are static. In contrast, learnable activations enable the network to adapt dynamically during training. However, stability issues, such as vanishing or exploding gradients, arise with improper variance management in deeper networks. To remedy this, we propose an initialization scheme that single-handedly preserves unit variance in transformers and convolutional networks, ensuring stable gradient flow even in deep architectures. Extensive experiments demonstrate that networks with Hermite, Fourier, and Tropical-based learnable activations significantly improve over GPT-2 and ConvNeXt networks in terms of accuracy and perplexity on both training and test data, highlighting the viability of learnable activations in large-scale tasks. The activation functions developed here are the subject of a library coded entirely in pure PyTorch: torchortho, available at https://github.com/K-H-Ismail/torchortho.
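A generic PyTorch sketch of a learnable activation built from a Hermite polynomial basis with trainable coefficients is shown below. This is not the torchortho API, and the paper's variance-preserving initialization scheme is not reproduced; the coefficients are simply initialized near the identity map.

```python
# Generic sketch of a learnable activation built from a Hermite polynomial
# basis with trainable coefficients. This is not the torchortho API, and the
# paper's variance-preserving initialization is not reproduced here.
import torch
import torch.nn as nn

class HermiteActivation(nn.Module):
    def __init__(self, degree=4):
        super().__init__()
        # start close to the identity map: coefficient 1 on He_1(x) = x
        init = torch.zeros(degree + 1)
        init[1] = 1.0
        self.coeffs = nn.Parameter(init)

    def forward(self, x):
        # probabilists' Hermite polynomials via the recurrence
        # He_{k+1}(x) = x * He_k(x) - k * He_{k-1}(x)
        h_prev, h_curr = torch.ones_like(x), x
        out = self.coeffs[0] * h_prev + self.coeffs[1] * h_curr
        for k in range(1, len(self.coeffs) - 1):
            h_prev, h_curr = h_curr, x * h_curr - k * h_prev
            out = out + self.coeffs[k + 1] * h_curr
        return out

# toy usage: drop-in replacement for a static activation in an MLP
mlp = nn.Sequential(nn.Linear(16, 32), HermiteActivation(), nn.Linear(32, 1))
print(mlp(torch.randn(4, 16)).shape)
```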
Generalized order statistics, introduced in \cite{kamps1995concept}, provide a simple unifying framework for various models of ordered quantities. In the present study, expressions for the marginal and joint moment generating functions of the half logistic geometric distribution are derived within the generalized order statistics framework. We also consider the estimation of $\theta$ and provide a Bayesian treatment. Two widely used methods, Markov chain Monte Carlo and the Lindley approximation, are employed to obtain the Bayes estimators. The results are derived under both symmetric and asymmetric loss functions. The special case of generalized order statistics, \textit{i.e.,} ordinary order statistics, is also analyzed. To illustrate the practical applicability of the proposed results, two real data sets, one from demography and the other from reliability, are analyzed.
Low-order climate models can play an important role in understanding low-frequency variability in the atmospheric circulation and how forcing consistent with anthropogenic climate change may affect this variability. Here, we study a conceptual model of the mid-latitudes' atmospheric circulation from the perspective of nonautonomous dynamical systems. First, a bifurcation analysis is carried out under time-independent forcing in order to identify different types of behavior in the autonomous model's parameter space. Next, we focus on the study of the nonautonomous system in which the cross-latitudinal heat flux varies seasonally, according to insolation changes. The forward attractor of the seasonally forced model is compared with the attractor of the autonomous one. The seasonal forcing results in a clear change of the attractor's shape. The summer attractor loses its periodicity, and hence predictability, when the forcing is seasonal, while the winter attractor favors energy transport through one of the model's two wave components. Climate change forcing produces several remarkable effects. Thus, the analysis of the model's snapshot attractor under climate trends suggests that the jet speed does not always follow the sign of the change in equator-to-pole thermal contrast, while the change in the energy transported by the eddies does. Chaotic behavior can be completely suppressed in favor of a regular periodic one and vice-versa. Circulation patterns can change, suddenly disappear, and rebuild. The model's snapshot attractor proves to be a robust tool to study its changes in internal variability due to climate trends, both positive and negative.
For a 3D N=4 gauge theory, turning on the $\Omega$-background in $\mathbb{R}\times\mathbb{R}^2_{\epsilon}$ deforms the Coulomb branch chiral ring into the quantum Coulomb branch algebra, generated by the 1/2-BPS monopoles together with the complex scalar in the vector-multiplet. We conjecture that for a 3D N=4 quiver gauge theory with unitary gauge group, the quantum Coulomb branch algebra can be formulated as the truncated shifted quiver Yangian Y$(\widehat{Q},\widehat{W})$ based on the triple quiver $\widehat{Q}$ of the original quiver Q with canonical potential $\widehat{W}$. We check this conjecture explicitly for general tree-type quivers Q by considering the action of monopoles on the 1/2-BPS vortex configurations. The Hilbert spaces of vortices approaching different vacua at spatial infinity furnish different representations of the shifted quiver Yangian, and all the charge functions have only simple poles. For quivers beyond tree-type, our conjecture is consistent with known results on special examples.
Approximating the derivative-of-the-Gaussian profile proposed by Gibbons and Hawking by the scarf potential, the scattering of particles by a gravitational wave generated by flyby is described analytically by following the Nikiforov-Uvarov method. Pure displacement arises when the wave zone contains an integer number of half-waves. The results confirm the prediction of Zel'dovich and Polnarev.
The coherent equalization problem consists in designing a quantum system acting as a mean-square near optimal filter for a given quantum communication channel. The paper develops an improved method for the synthesis of transfer functions for such equalizing filters, based on a linear quantum system model of the channel. The method draws on a connection with the two-disk problem of ${H}_{\infty}$ control for classical (i.e., nonquantum) linear uncertain systems. Compared with the previous methods, the proposed method applies to a broader class of linear quantum communication channels.
In phase retrieval and similar inverse problems, the stability of solutions across different noise levels is crucial for applications. One approach to promoting it is to use signal priors, in the form of a generative model, as a regularization, at the expense of introducing a bias in the reconstruction. In this paper, we explore and compare the reconstruction properties of classical and generative inverse problem formulations. We propose a new unified reconstruction approach that mitigates overfitting to the generative model for varying noise levels.
The concatenation of encryption and decryption can be interpreted as data transmission over a noisy communication channel. In this work, we use finite blocklength methods (normal approximation and random coding union bound) as well as asymptotics to show that ciphertext and key sizes of the state-of-the-art post-quantum secure key encapsulation mechanism (KEM) Kyber can be reduced without compromising the security of the scheme. We show that in the asymptotic regime, it is possible to reduce the sizes of ciphertexts and secret keys by 25% for the parameter set Kyber1024 while keeping the bitrate at 1 as proposed in the original scheme. For a single Kyber encryption block used to share a 256-bit AES key, we furthermore show that reductions in ciphertext size of 39% and 33% are possible for Kyber1024 and Kyber512, respectively.
Previous research has proven that the set of maps implemented by neural networks with a ReLU activation function is identical to the set of piecewise linear continuous maps. Furthermore, such networks induce a hyperplane arrangement splitting the input domain into convex polyhedra $G_J$ over which the network $\Phi$ operates in an affine manner. In this work, we leverage these properties to define the equivalence class of inputs $\sim_\Phi$, which can be split into two sets related to the local rank of $\Phi_J$ and the intersections $\cap \text{Im}\Phi_{J_i}$. We refer to the latter as the overlap decomposition $O_\Phi$ and prove that if the intersections between each polyhedron and the input manifold are convex, the homology groups of neural representations are isomorphic to relative homology groups $H_k(\Phi(M)) \simeq H_k(M,O_\Phi)$. This lets us compute Betti numbers without the choice of an external metric. We develop methods to numerically compute the overlap decomposition through linear programming and a union-find algorithm. Using this framework, we perform several experiments on toy datasets showing that, compared to standard persistent homology, our relative homology-based computation of Betti numbers tracks purely topological rather than geometric features. Finally, we study the evolution of the overlap decomposition during training on various classification problems while varying network width and depth and discuss some shortcomings of our method.
Mathematical modelling of coupled flow systems containing a free-flow region in contact with a porous medium is challenging, especially for arbitrary flow directions to the fluid--porous interface. Transport processes in the free flow and porous medium are typically described by distinct equations: the Stokes equations and Darcy's law, respectively, with an appropriate set of coupling conditions at the common interface. Classical interface conditions based on the Beavers--Joseph condition are not accurate for general flows. Several generalisations have recently been developed for arbitrary flows at the interface; however, some of them are only theoretically formulated and still need to be validated. In this manuscript, we propose an alternative to couple free flow and porous-medium flow, namely, the hybrid-dimensional Stokes--Brinkman--Darcy model. Such a formulation incorporates the averaged Brinkman equations within a complex interface between the free-flow and porous-medium regions. The complex interface acts as a buffer zone facilitating storage and transport of mass and momentum, and the model is applicable for arbitrary flow directions. We validate the proposed hybrid-dimensional model against the pore-scale resolved model in multiple examples and compare numerical simulation results also with the classical and generalised coupling conditions from the literature. The proposed hybrid-dimensional model demonstrates its applicability to describe arbitrary coupled flows and shows its advantages in comparison to other generalised coupling conditions.
We study fundamental limitations of Graph Neural Networks (GNNs) for learning sparse matrix preconditioners. While recent works have shown promising results using GNNs to predict incomplete factorizations, we demonstrate that the local nature of message passing creates inherent barriers for capturing non-local dependencies required for optimal preconditioning. We introduce a new benchmark dataset of matrices where good sparse preconditioners exist but require non-local computations, constructed using both synthetic examples and real-world matrices. Our experimental results show that current GNN architectures struggle to approximate these preconditioners, suggesting the need for new architectural approaches beyond traditional message passing networks. We provide theoretical analysis and empirical evidence to explain these limitations, with implications for the broader use of GNNs in numerical linear algebra.
Magnetic Resonance Imaging generally requires long exposure times and is sensitive to patient motion, resulting in artifacts in the acquired images that may hinder their diagnostic relevance. Despite research efforts to decrease acquisition times and to design efficient acquisition sequences, motion artifacts remain a persistent problem, motivating the development of automatic motion artifact correction techniques. Recently, diffusion models have been proposed as a solution for the task at hand. While diffusion models can produce high-quality reconstructions, they are also susceptible to hallucination, which poses risks in diagnostic applications. In this study, we critically evaluate the use of diffusion models for correcting motion artifacts in 2D brain MRI scans. Using a popular benchmark dataset, we compare a diffusion model-based approach with state-of-the-art methods consisting of Unets trained in a supervised fashion on motion-affected images to reconstruct ground truth motion-free images. Our findings reveal mixed results: diffusion models can produce accurate predictions or generate harmful hallucinations in this context, depending on data heterogeneity and the acquisition planes considered as input.
Identifying symmetries in quantum dynamics, such as identity or time-reversal invariance, is a crucial challenge with profound implications for quantum technologies. We introduce a unified framework combining group representation theory and subgroup hypothesis testing to predict these symmetries with optimal efficiency. By exploiting the inherent symmetry of compact groups and their irreducible representations, we derive an exact characterization of the optimal type-II error (failure probability to detect a symmetry), offering an operational interpretation for the quantum max-relative entropy. In particular, we prove that parallel strategies achieve the same performance as adaptive or indefinite-causal-order protocols, resolving debates about the necessity of complex control sequences. Applications to the singleton group, maximal commutative group, and orthogonal group yield explicit results: for predicting the identity property, Z-symmetry, and T-symmetry of unknown qubit unitaries, with zero type-I error and type-II error bounded by $\delta$, we establish the explicit optimal sample complexity which scales as $\mathcal{O}(\delta^{-1/3})$ for identity testing and $\mathcal{O}(\delta^{-1/2})$ for T/Z-symmetry testing. These findings offer theoretical insights and practical guidelines for efficient unitary property testing and symmetry-driven protocols in quantum information processing.
In this paper we study strong coupling asymptotic expansions of ${\mathcal N}=2$ $D=4$ $SU(2)$ gauge theory partition functions in general $\Omega$-background. This is done by refining Painlev\'e/gauge theory correspondence in terms of quantum Painlev\'e equations, obtained from $\mathbb{C}^2/\mathbb{Z}_2$ blowup relations. We present a general ansatz and a systematic analysis of the expansions of the gauge theory partition functions by solving the above equations around the strong coupling singularities, including Argyres-Douglas points. We compare our results with refined holomorphic anomaly equations and irregular Virasoro conformal blocks.
Circular and non-flat data distributions are prevalent across diverse domains of data science, yet their specific geometric structures often remain underutilized in machine learning frameworks. A principled approach to accounting for the underlying geometry of such data is pivotal, particularly when extending statistical models, like the pervasive Gaussian distribution. In this work, we tackle these issues by focusing on the manifold of symmetric positive definite (SPD) matrices, a key object in information geometry. We introduce a non-isotropic wrapped Gaussian by leveraging the exponential map, derive theoretical properties of this distribution, and propose a maximum likelihood framework for parameter estimation. Furthermore, we reinterpret established classifiers on SPD matrices through a probabilistic lens and introduce new classifiers based on the wrapped Gaussian model. Experiments on synthetic and real-world datasets demonstrate the robustness and flexibility of this geometry-aware distribution, underscoring its potential to advance manifold-based data analysis. This work lays the groundwork for extending classical machine learning and statistical methods to more complex and structured data.
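A minimal sketch of the wrapping construction is given below: draw a Gaussian tangent vector at the mean and push it forward to the manifold with the exponential map. The affine-invariant metric is assumed here for illustration, and the covariance parametrization over vectorized symmetric matrices is a generic choice; the paper's exact construction may differ in details.

```python
# Sketch of sampling a wrapped Gaussian on the SPD manifold: draw a Gaussian
# tangent vector at the mean and push it forward with the exponential map.
# The affine-invariant metric and the tangent-space parametrization are
# assumptions made here for illustration.
import numpy as np
from scipy.linalg import expm

def spd_sqrt(P):
    w, V = np.linalg.eigh(P)
    return V @ np.diag(np.sqrt(w)) @ V.T

def sample_wrapped_gaussian(mean, tangent_cov, n_samples=5, seed=0):
    """mean: SPD (d, d); tangent_cov: covariance over the d*(d+1)/2 free entries
    of a symmetric tangent matrix (upper-triangular vectorization)."""
    rng = np.random.default_rng(seed)
    d = mean.shape[0]
    iu = np.triu_indices(d)
    M_half = spd_sqrt(mean)
    M_half_inv = np.linalg.inv(M_half)
    samples = []
    for _ in range(n_samples):
        v = rng.multivariate_normal(np.zeros(len(iu[0])), tangent_cov)
        V = np.zeros((d, d)); V[iu] = v; V = V + V.T - np.diag(np.diag(V))
        # affine-invariant exponential map: exp_M(V) = M^1/2 expm(M^-1/2 V M^-1/2) M^1/2
        samples.append(M_half @ expm(M_half_inv @ V @ M_half_inv) @ M_half)
    return samples

mean = np.array([[2.0, 0.5], [0.5, 1.0]])
cov = 0.05 * np.eye(3)                     # 3 free entries for 2x2 symmetric matrices
for S in sample_wrapped_gaussian(mean, cov, n_samples=3):
    print(np.round(S, 3), np.all(np.linalg.eigvalsh(S) > 0))
```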
It was recently found that connection coefficients of the Heun equation can be derived in closed form using crossing symmetry in two-dimensional Liouville theory via the Nekrasov-Shatashvili functions. In this work, we systematize this approach to second-order linear ODEs of Fuchsian type, which arise in the description of N = 2, four-dimensional quiver gauge theories. After presenting the general procedure, we focus on the specific case of Fuchsian equations with five regular singularities and present some applications to black hole perturbation theory. First, we consider a massive scalar perturbation of the Schwarzschild black hole in AdS7. Next, we analyze vector type perturbations of the Reissner-Nordstr\"om-AdS5 black hole. We also discuss the implications of our results in the context of the AdS/CFT correspondence and present explicit results in the large spin limit, where we make connection with the light-cone bootstrap. Furthermore, using the spectral network technology, we identify the region of the moduli space in Seiberg-Witten theory that is relevant for the study of black hole quasinormal modes. Our results suggest that, in some cases, this region corresponds to the strong-coupling regime, highlighting the potential applicability of the conformal GMN TBA framework to address scenarios where the gravitational dictionary implies that the instanton counting parameters are not parametrically small.
Local certification is a topic originating from distributed computing, where a prover tries to convince the vertices of a graph $G$ that $G$ satisfies some property $\mathcal{P}$. To convince the vertices, the prover gives a small piece of information, called a certificate, to each vertex, and the vertices then decide whether the property $\mathcal{P}$ is satisfied by looking only at their own certificate and the certificates of their neighbors. When studying a property $\mathcal{P}$ from the perspective of local certification, the aim is to find the optimal size of the certificates needed to certify $\mathcal{P}$, which can be viewed as a measure of the local complexity of $\mathcal{P}$. A certification scheme is considered to be efficient if the size of the certificates is polylogarithmic in the number of vertices. While there have been a number of meta-theorems providing efficient certification schemes for general graph classes, the proofs of the lower bounds on the size of the certificates are usually very problem-dependent. In this work, we introduce a notion of hardness reduction in local certification, and show that we can transfer a lower bound on the certificate size for a property $\mathcal{P}$ to a lower bound for another property $\mathcal{P}'$, via a (local) hardness reduction from $\mathcal{P}$ to $\mathcal{P}'$. We then give a number of applications in which we obtain polynomial lower bounds for many classical properties using such reductions.
Despite the exceptional achievements of deep learning, training neural networks remains computationally expensive and is often plagued by instabilities that can degrade convergence. While learning rate schedules can help mitigate these issues, finding optimal schedules is time-consuming and resource-intensive. This work explores theoretical issues concerning training stability in the constant-learning-rate (i.e., without a schedule) and small-batch-size regime. Surprisingly, we show that the order of gradient updates affects stability and convergence in gradient-based optimizers. We illustrate this new line of thinking using backward-SGD, which processes batch gradient updates like SGD but in reverse order. Our theoretical analysis shows that in contractive regions (e.g., around minima) backward-SGD converges to a point, while standard forward-SGD generally only converges to a distribution. This leads to improved stability and convergence, which we demonstrate experimentally. While full backward-SGD is computationally intensive in practice, it highlights opportunities to exploit reverse training dynamics (or, more generally, alternative iteration orders) to improve training. To our knowledge, this represents a new and unexplored avenue in deep learning optimization.
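A rough sketch of the contrast being drawn, under the assumption that the reverse order refers to applying a fixed sequence of minibatch gradient updates in the opposite order; the paper's precise construction of backward-SGD may differ.

import numpy as np

def sgd_epoch(w, batches, grad, lr, backward=False):
    """One epoch of constant-learning-rate SGD over a fixed list of minibatches.
    With backward=True the same minibatches are visited in reverse order, an
    illustrative stand-in for the backward update ordering."""
    order = list(reversed(batches)) if backward else list(batches)
    for X, y in order:
        w = w - lr * grad(w, X, y)              # plain SGD step
    return w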
Multi-index models provide a popular framework to investigate the learnability of functions with low-dimensional structure and, also due to their connections with neural networks, they have been the object of intensive recent study. In this paper, we focus on recovering the subspace spanned by the signals via spectral estimators -- a family of methods that are routinely used in practice, often as a warm start for iterative algorithms. Our main technical contribution is a precise asymptotic characterization of the performance of spectral methods, when the sample size and the input dimension grow proportionally and the dimension $p$ of the space to recover is fixed. Specifically, we locate the top-$p$ eigenvalues of the spectral matrix and establish the overlaps between the corresponding eigenvectors (which give the spectral estimators) and a basis of the signal subspace. Our analysis unveils a phase transition phenomenon in which, as the sample complexity grows, eigenvalues escape from the bulk of the spectrum and, when that happens, the eigenvectors recover directions of the desired subspace. The precise characterization we put forward enables the optimization of the data preprocessing, thus allowing us to identify the spectral estimator that requires the minimal sample size for weak recovery.
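For concreteness, a generic spectral estimator of the kind described; the preprocessing function T below is a placeholder, and identifying its optimal choice is part of what such a characterization enables.

import numpy as np

def spectral_subspace_estimate(X, y, p, preproc=lambda t: t):
    """Build the spectral matrix D = (1/n) * sum_i T(y_i) x_i x_i^T for a
    preprocessing function T and return its top-p eigenvectors as an
    estimate of the signal subspace (T is an illustrative placeholder)."""
    n, _ = X.shape
    w = preproc(y)                              # T(y_i), applied entrywise
    D = (X * w[:, None]).T @ X / n              # d x d spectral matrix
    _, eigvecs = np.linalg.eigh(D)              # eigenvalues in ascending order
    return eigvecs[:, -p:]                      # eigenvectors of the top-p eigenvalues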
We investigate double descent and scaling laws in terms of weight norms rather than the number of parameters. Specifically, we analyze linear and random features models using the deterministic equivalence approach from random matrix theory. We precisely characterize how the norm of the weights concentrates around deterministic quantities and elucidate the relationship between the expected test error and the norm-based capacity (complexity). Our results rigorously answer whether double descent exists under norm-based capacity and reshape the corresponding scaling laws. Moreover, they prompt a rethinking of the data-parameter paradigm, from under-parameterized to over-parameterized regimes, by shifting the focus from parameter count to the norm of the weights.
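As a small illustration of the norm-based capacity in question, the sketch below fits a ridgeless (minimum-norm) random-features regressor and reports the Euclidean norm of the fitted weights; the ReLU feature map and Gaussian first-layer weights are illustrative assumptions, not the paper's setup.

import numpy as np

def min_norm_random_features(X, y, n_features, seed=0):
    """Fit a minimum-norm interpolating random-features regressor and return
    the fitted weights together with their norm, the quantity used here as a
    capacity measure in place of the raw parameter count."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.standard_normal((d, n_features)) / np.sqrt(d)   # random first layer (illustrative)
    Phi = np.maximum(X @ W, 0.0)                             # ReLU features (illustrative)
    a = np.linalg.pinv(Phi) @ y                              # min-norm least-squares fit
    return a, float(np.linalg.norm(a))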
Adaptive optimization algorithms -- such as Adagrad, Adam, and their variants -- have found widespread use in machine learning, signal processing, and many other settings. Several methods in this family are not rotationally equivariant, meaning that simple reparameterizations (i.e., changes of basis) can drastically affect their convergence. However, their sensitivity to the choice of parameterization has not been systematically studied; it is not clear how to identify a "favorable" change of basis in which these methods perform best. In this paper we propose a reparameterization method and demonstrate, both theoretically and empirically, its potential to improve their convergence behavior. Our method is an orthonormal transformation based on the expected gradient outer product (EGOP) matrix, which can be approximated using either full-batch or stochastic gradient oracles. We show that, for a broad class of functions, the sensitivity of adaptive algorithms to the choice of basis is influenced by the decay of the EGOP matrix spectrum. We illustrate the potential impact of EGOP reparameterization by presenting empirical evidence and theoretical arguments that common machine learning tasks with "natural" data exhibit EGOP spectral decay.
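A minimal sketch of the EGOP-based change of basis, with a plain Monte-Carlo average standing in for whichever full-batch or stochastic approximation is preferred; the sampling scheme is an assumption.

import numpy as np

def egop_basis(grad_oracle, points):
    """Estimate the expected gradient outer product (EGOP) matrix from
    gradients at the given points and return its orthonormal eigenbasis U."""
    G = np.stack([grad_oracle(w) for w in points])   # m x d matrix of gradients
    egop = G.T @ G / len(points)                     # (1/m) * sum_i g_i g_i^T (illustrative estimate)
    _, U = np.linalg.eigh(egop)
    return U

One would then run the adaptive optimizer on the reparameterized objective g(z) = f(U @ z), whose gradient is U.T @ grad_f(U @ z) by the chain rule.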
The primary entropic measures for quantum states are additive under the tensor product. In the analysis of quantum information processing tasks, the minimum entropy over a set of states, e.g., the minimum output entropy of a channel, often plays a crucial role. A fundamental question in quantum information and cryptography is whether the minimum output entropy remains additive under the tensor product of channels. Here, we establish a general additivity statement for the optimized sandwiched R\'enyi entropy of quantum channels. To this end, we generalize the results of [Devetak, Junge, King, Ruskai, CMP 2006] to multi-index Schatten norms. As an application, we strengthen the additivity statement of [Van Himbeeck and Brown, 2025], thus allowing the analysis of time-adaptive quantum cryptographic protocols. In addition, we establish chain rules for R\'enyi conditional entropies that are similar to the ones used for the generalized entropy accumulation theorem of [Metger, Fawzi, Sutter, Renner, CMP 2024].
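For reference, the standard definitions underlying these quantities (not specific to this work) are the sandwiched R\'enyi divergence and the induced optimized conditional entropy,
\[
\widetilde{D}_\alpha(\rho \,\|\, \sigma) = \frac{1}{\alpha-1} \log \operatorname{Tr}\!\left[\left(\sigma^{\frac{1-\alpha}{2\alpha}} \, \rho \, \sigma^{\frac{1-\alpha}{2\alpha}}\right)^{\alpha}\right],
\qquad
\widetilde{H}_\alpha^{\uparrow}(A|B)_\rho = -\inf_{\sigma_B} \widetilde{D}_\alpha\!\left(\rho_{AB} \,\|\, \mathbb{1}_A \otimes \sigma_B\right),
\]
and an additivity statement of the kind discussed takes the schematic form $\widetilde{H}_\alpha(\mathcal{N}_1 \otimes \mathcal{N}_2) = \widetilde{H}_\alpha(\mathcal{N}_1) + \widetilde{H}_\alpha(\mathcal{N}_2)$ for the channel-optimized quantity; the precise optimization considered in the paper may differ.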