We present a digital-twin simulator for a pastillation process. The simulation framework produces realistic thermal image data of the process that is used to train computer vision-based soft sensors built on convolutional neural networks (CNNs); the soft sensors produce output signals for temperature and product flow rate that enable real-time monitoring and feedback control. Pastillation technologies are high-throughput devices that are used in a broad range of industries; these processes face operational challenges such as real-time identification of clog locations (faults) in the rotating shell and the automatic, real-time adjustment of conveyor belt speed and operating conditions to stabilize output. The proposed simulator captures this behavior and generates realistic data that can be used to benchmark different algorithms for image processing and different control architectures. We present a case study to illustrate these capabilities; the study explores behavior over a range of equipment sizes, clog locations, and clog durations. A feedback controller (tuned using Bayesian optimization) adjusts the conveyor belt speed based on the CNN output signal to achieve the desired process outputs.
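As a toy illustration of the tuning step, the sketch below closes a PI loop around an invented first-order plant and uses plain random search as a lightweight stand-in for the Bayesian optimization used in the paper; the plant model, gains, setpoint, and cost function are all assumptions, not the paper's simulator.

```python
import random

def simulate(kp, ki, steps=200, dt=0.1):
    """Toy first-order process: control signal u drives the flow toward a
    setpoint of 1.0. All dynamics and numbers here are illustrative."""
    flow, integral, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        error = 1.0 - flow
        integral += error * dt
        u = kp * error + ki * integral     # PI control law
        flow += dt * (-flow + u)           # first-order plant response
        cost += error * error * dt         # integrated squared error
    return cost

# Random search as a lightweight stand-in for Bayesian optimization:
# sample gain pairs and keep the best-performing one.
random.seed(0)
best = min((simulate(kp, ki), kp, ki)
           for kp, ki in ((random.uniform(0.1, 5), random.uniform(0.1, 5))
                          for _ in range(100)))
```

A real Bayesian optimizer would fit a surrogate model over the gain space and pick sample points by an acquisition rule, but the interface is the same: propose gains, run the closed-loop simulation, score the response.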
In this paper we present a mathematically rigorous and constructive framework that unifies two canonical model constructions in classical first order logic. In particular, we define two functors F and G from the category of consistent first order theories to the category of models. The functor F is constructed via the Henkin method, which extends any given theory to a maximal consistent theory by means of a fixed enumeration and the systematic introduction of Henkin constants, and then constructs a term model by taking the quotient of the term algebra with respect to provable equality. The functor G is obtained through a canonical compactness based construction, using either a fixed ultraproduct or a saturation procedure, ensuring that the resulting model is unique up to isomorphism. We prove the existence of a natural transformation eta from F to G such that each component is an isomorphism. Moreover, by leveraging the uniqueness of saturated (or prime) models in countable languages, we show that eta is rigid, meaning any other natural transformation between F and G must equal eta. Furthermore, we establish a strong natural equivalence between F and G in the 2-categorical sense, with eta and its inverse satisfying the required coherence conditions. This unification not only deepens our understanding of the interplay between proof theory and model theory, but also opens new avenues for applications in automated theorem proving, formal verification, and the study of alternative logical systems.
This paper develops a systematic framework for integrating local categories that model logical connectives using higher category theory. By extending these local categories into a unified two-category enriched with natural isomorphisms, the universal properties of logical operations such as negation, conjunction, disjunction, and implication are rigorously captured. Advanced techniques including pseudo-limits, pseudo-colimits, and strictification are employed to transform the resulting weak structure into a strict two-category, thereby simplifying composition rules and coherence verification without loss of semantic content. The framework is validated through detailed diagrammatic proofs and concrete examples, demonstrating its robustness and potential impact in areas such as type theory, programming language semantics, and formal verification.
We consider estimating the proportion of random variables for two types of composite null hypotheses: (i) the means or medians of the random variables belonging to a non-empty, bounded interval; (ii) the means or medians of the random variables belonging to an unbounded interval that is not the whole real line. For each type of composite null hypothesis, uniformly consistent estimators of the proportion of false null hypotheses are constructed for random variables whose distributions are members of a Type I location-shift family. Further, uniformly consistent estimators of certain functions of a bounded null on the means or medians are provided for the random variables mentioned earlier; these functions are continuous and of bounded variation. The estimators are constructed via solutions to Lebesgue-Stieltjes integral equations and harmonic analysis; they do not rely on a concept of p-value and have various applications.
Pre-trained Transformers, through in-context learning (ICL), have demonstrated exceptional capabilities to adapt to new tasks using example prompts without model updates. Transformer-based wireless receivers, where prompts consist of the pilot data in the form of transmitted and received signal pairs, have shown high detection accuracy when pilot data are abundant. However, pilot information is often costly and limited in practice. In this work, we propose the DEcision Feedback INcontExt Detection (DEFINED) solution as a new wireless receiver design, which bypasses channel estimation and directly performs symbol detection using the (sometimes extremely) limited pilot data. The key innovation in DEFINED is the proposed decision feedback mechanism in ICL, where we sequentially incorporate the detected symbols into the prompts as pseudo-labels to improve the detection for subsequent symbols. Furthermore, we propose a second detection method, IC-SSL, which combines ICL with Semi-Supervised Learning (SSL) to extract information from both labeled and unlabeled data during inference, thus avoiding the error propagation of DEFINED's decision feedback process. Extensive experiments across a broad range of wireless communication settings demonstrate that a small Transformer trained with DEFINED or IC-SSL achieves significant performance improvements over conventional methods, in some cases needing only a single pilot pair to achieve performance similar to that of conventional methods with more than 4 pilot pairs.
Recently, renewed interest in cislunar space spurred by private and public organizations has driven research for future infrastructure in the region. As Earth-Moon traffic increases amidst a growing space economy, monitoring architectures supporting this traffic must also develop. These are likely to be realized as constellations of patrol satellites surveying traffic between the Earth and the Moon. This work investigates the concurrent optimization of patrol satellite phasing and tasking to provide information-maximal coverage of traffic in periodic orbits.
We revisit the results of Kim, and of Katsoulis and Ramsey, concerning hyperrigidity for non-degenerate C*-correspondences. We show that the tensor algebra is hyperrigid if and only if Katsura's ideal acts non-degenerately, if and only if Katsura's ideal acts non-degenerately under any representation. This gives a positive answer to the question of Katsoulis and Ramsey, showing that their necessary condition and their sufficient condition for hyperrigidity of the tensor algebra are equivalent. Non-degeneracy of the left action of Katsura's ideal was also shown by Kim to be equivalent to hyperrigidity for the selfadjoint operator space associated with the C*-correspondence, and our approach provides a simplified proof of this result as well. In the process we revisit Arveson's criterion connecting maximality with the unique extension property and hyperrigidity, in conjunction with the work of Salomon on generating sets.
In this note, we prove a filtered version of a Beilinson-type formula for the V-filtration of Kashiwara and Malgrange for any D-module underlying a complex mixed Hodge module along a hypersurface, using Hodge filtrations on the localization. We give some applications to the theory of higher multiplier and Hodge ideals. The main result is that the higher multiplier ideals can be deduced directly from the Hodge ideals by taking a suitable limit. As a corollary, we conclude that the Hodge ideals are left semi-continuous if and only if they coincide with the higher multiplier ideals. In an appendix, we make some general observations about Hodge and higher multiplier ideals. We observe that results of Saito and Chen-Musta\c{t}\u{a} give a birational formula for higher multiplier ideals, answering a question of Schnell and the second author, and that the Kodaira vanishing theorem for twisted Hodge modules gives a short proof of the vanishing theorem for Hodge ideals, strengthening a result of B. Chen.
We introduce the concept of F-decomposable systems, well-ordered inverse systems of Hausdorff compacta with fully closed bonding mappings. A continuous mapping between Hausdorff compacta is called fully closed if the intersection of the images of any two closed disjoint subsets is finite. We give a characterization of such systems in terms of a property of the continuous functions on their limit. When, moreover, the fibers of neighboring bonding mappings are metrizable, we call the limit of such a system an F_d-compact, a particular case of a Fedorchuk compact. The stated property allows us to obtain a locally uniformly rotund renorming on the space C(K), where K is an F_d-compact of countable spectral height.
We consider a queueing network operating under a strictly upper-triangular routing matrix with at most one positive entry per column. The root node is fed by a Gaussian process with stationary increments. Our aim is to characterize the distribution of the multivariate stationary workload process under a specific scaling of the queues' service rates. In the main results of this paper we identify, under mild conditions on the standard deviation function of the driving Gaussian process, in both light and heavy traffic parameterizations, the limiting law of an appropriately scaled version (in both time and space) of the joint stationary workload process. In particular, we develop conditions under which specific queueing processes of the network effectively decouple, i.e., become independent in the limiting regime.
This paper considers non-smooth optimization problems where we seek to minimize the pointwise maximum of a continuously parameterized family of functions. Since the objective function is given as the solution to a maximization problem, neither its values nor its gradients are available in closed form, which calls for approximation. Our approach hinges upon extending the so-called gradient sampling algorithm, which approximates the Clarke generalized gradient of the objective function at a point by sampling its derivative at nearby locations. This allows us to select descent directions around points where the function may fail to be differentiable and establish algorithm convergence to a stationary point from any initial condition. Our key contribution is to prove this convergence by alleviating the requirement on continuous differentiability of the objective function on an open set of full measure. We further provide assumptions under which a desired convex subset of the decision space is rendered attractive for the iterates of the algorithm.
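A minimal one-dimensional sketch of the gradient-sampling idea follows, on an invented objective with invented parameters. In one dimension the convex hull of sampled derivatives is just an interval, so its minimum-norm element (the approximate Clarke gradient used for the descent direction) is computable in closed form; in higher dimensions this step is a small quadratic program.

```python
import random

def f(x):
    # pointwise maximum of two linear functions: nondifferentiable
    # where they cross (x = 0), with minimizer at x = 0
    return max(x, -2.0 * x)

def df(x):
    # derivative wherever f is differentiable (the kink at 0 has
    # measure zero, so random samples avoid it with probability one)
    return 1.0 if x > 0 else -2.0

def gradient_sampling_1d(x, radius=0.1, step=0.5, iters=50, m=10):
    """Toy 1-D gradient sampling: approximate the Clarke generalized
    gradient at x by the convex hull (here, an interval) of derivatives
    sampled in a ball around x, then step along its min-norm element."""
    random.seed(1)
    for _ in range(iters):
        samples = [df(x + random.uniform(-radius, radius)) for _ in range(m)]
        lo, hi = min(samples), max(samples)
        # minimum-norm element of the interval [lo, hi]
        g = 0.0 if lo <= 0.0 <= hi else (lo if lo > 0.0 else hi)
        if g == 0.0:            # approximate stationarity: shrink the ball
            radius *= 0.5
            continue
        x -= step * g           # descent step along -g
        step *= 0.9
    return x

x_star = gradient_sampling_1d(2.0)
```

The shrinking sampling radius mirrors the usual gradient-sampling convergence argument: near a nondifferentiable stationary point the sampled derivatives bracket zero, which the method reads as approximate stationarity.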
We study Type C $K$-Stanley symmetric functions, which are $K$-theoretic extensions of the Type C Stanley symmetric functions. They are indexed by signed permutations and can be used to enumerate reduced words via their expansion into Schur $Q$-functions, which are indexed by strict partitions. A combinatorial description of the Schur $Q$-coefficients is given by Kra\'skiewicz insertion. Similarly, their $K$-Stanley analogues are conjectured to expand positively into $GQ$'s, the $K$-theory representatives for the Lagrangian Grassmannian introduced by Ikeda and Naruse, which are also indexed by strict partitions. We introduce a $K$-theoretic analogue of Kra\'skiewicz insertion, which can be used to enumerate 0-Hecke expressions for signed permutations and gives a conjectural combinatorial rule for computing this $GQ$ expansion. We show the Type C $K$-Stanleys for certain fully commutative signed permutations are skew $GQ$'s. Combined with a Pfaffian formula of Anderson's, this allows us to prove Lewis and Marberg's conjecture that $GQ$'s of (skew) rectangle shape are $GQ$'s of trapezoid shape. Combined with our earlier conjecture, this also gives an explicit combinatorial description of the expansion of skew $GQ$'s into $GQ$'s. As a consequence, we obtain a conjecture for the product of two $GQ$ functions where one has trapezoid shape.
The effect of multiplicative noise on the Turing instability of the Brusselator system is investigated. We show that when the noise acts on both concentrations with the same intensities, the Turing instability is suppressed provided that the intensities are sufficiently large. This aligns with the stabilizing effect of multiplicative noise in partial differential equations. Utilizing the linearized system, we quantify the magnitude of noise that stabilizes the system. On the other hand, when the noise involves only one concentration, the Turing instability can be triggered with suitable intensities. These findings are confirmed by numerical simulations.
``Fundamental logic'' is a non-classical logic recently introduced by Wesley Holliday. It has an elegant Fitch-style natural deduction system and, in a sense, it unifies orthologic and the $\{\land,\lor,\neg\}$-fragment of intuitionistic logic. In this paper, we incorporate strict implication into fundamental propositional logic (and into a slightly weaker logic, respectively). We provide the axiomatizations and prove the soundness and completeness theorems.
The Set Partitioning Problem is a combinatorial optimization problem with wide-ranging applicability, used to model various real-world tasks such as facility location and crew scheduling. However, real-world applications often require solving large-scale instances that involve hundreds of thousands of variables. Although the conventional Column Generation method is popular for its computational efficiency, it lacks a guarantee for exact solutions. This paper proposes a novel solution method integrating relaxation of Column Generation conditions and automatic elimination of redundant columns, aimed at overcoming the limitations of conventional Column Generation methods in guaranteeing exact optimal solutions. Numerical experiments using actual bus route data reveal that while the traditional method achieves an exact solution rate of only about 3%, the proposed method attains a rate of approximately 99% and remarkably improves solution accuracy.
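To make the column-elimination idea concrete, here is a toy sketch on an invented instance: columns with the same cover set but higher cost are dominated and can be dropped before an exact solve. The paper's actual method couples column generation with LP relaxations; the brute-force solver below is only for illustration at tiny sizes.

```python
from itertools import combinations

# Toy set-partitioning instance: columns are (cover_set, cost) pairs.
# The instance data are invented for illustration.
universe = frozenset(range(5))
columns = [
    (frozenset({0, 1}), 3.0),
    (frozenset({0, 1}), 4.0),   # redundant: same cover, higher cost
    (frozenset({2, 3}), 2.0),
    (frozenset({4}), 1.0),
    (frozenset({2, 3, 4}), 4.0),
]

def eliminate_redundant(cols):
    """Drop any column dominated by another with the same cover and lower cost
    (the simplest form of redundancy elimination)."""
    best = {}
    for cover, cost in cols:
        if cover not in best or cost < best[cover]:
            best[cover] = cost
    return [(c, best[c]) for c in best]

def solve_exact(cols):
    """Brute-force exact solution of the reduced instance (tiny sizes only)."""
    best_cost, best_sel = float("inf"), None
    for r in range(1, len(cols) + 1):
        for sel in combinations(cols, r):
            covers = [c for c, _ in sel]
            # disjointness + full coverage == exact partition of the universe
            if sum(len(c) for c in covers) == len(universe) \
               and frozenset().union(*covers) == universe:
                cost = sum(w for _, w in sel)
                if cost < best_cost:
                    best_cost, best_sel = cost, sel
    return best_cost, best_sel

reduced = eliminate_redundant(columns)
cost, _ = solve_exact(reduced)
```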
This is a contribution to the Special Volume in Celebration of the 70th Birthday of Edoardo Ballico. First, I describe how some results of Ballico on moduli of vector bundles and categories of coherent sheaves were useful for solving problems in a variety of areas: Homological Mirror Symmetry, symplectic geometry, Hodge theory, mathematical physics, noncommutative geometry. Second, I summarise some strong results of Ballico about the number of components of moduli schemes of sheaves and about the existence of singularities on moduli of vector bundles. Third, the text includes a section written by Wojciech Kucharz about the work of Ballico on moduli flexibility of real manifolds.
This paper investigates a subgradient-based algorithm to solve the system identification problem for linear time-invariant systems with non-smooth objectives. This is essential for robust system identification in safety-critical applications. While existing work provides theoretical exact recovery guarantees using optimization solvers, the design of fast learning algorithms with convergence guarantees for practical use remains unexplored. We analyze the subgradient method in this setting where the optimization problems to be solved change over time as new measurements are taken, and we establish linear convergence results for both the best and Polyak step sizes after a burn-in period. Additionally, we characterize the asymptotic convergence of the best average sub-optimality gap under diminishing and constant step sizes. Finally, we compare the time complexity of standard solvers with the subgradient algorithm and support our findings with experimental results. This is the first work to analyze subgradient algorithms for system identification with non-smooth objectives.
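A minimal sketch of the Polyak-step subgradient method, on an invented scalar least-absolute-deviations identification problem (the data, model, and starting point are all assumptions). The data are noiseless, so the optimal value f* = 0 is known, as the Polyak step size requires.

```python
# Least-absolute-deviations fit of a scalar parameter theta in b = a*theta:
# a toy stand-in for a non-smooth system-identification objective.
data = [(1.0, 2.0), (2.0, 4.0), (-1.5, -3.0), (0.5, 1.0)]  # b = 2*a exactly

def f(theta):
    return sum(abs(a * theta - b) for a, b in data)

def subgrad(theta):
    # a subgradient of f: sum of a_i * sign(residual_i)
    s = 0.0
    for a, b in data:
        r = a * theta - b
        s += a * (1.0 if r > 0 else -1.0 if r < 0 else 0.0)
    return s

theta, f_star = 10.0, 0.0          # noiseless data, so the optimum is 0
for _ in range(100):
    g = subgrad(theta)
    if g == 0.0:                   # 0 is a subgradient: stop
        break
    theta -= (f(theta) - f_star) / (g * g) * g   # Polyak step size
```

On this piecewise-linear objective the Polyak step lands on the minimizer in one iteration from any point where all residuals share a sign; in general it yields the linear convergence rates the paper analyzes after the burn-in period.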
In addition to a proposed codeword, error correction decoders that provide blockwise soft output (SO) return an estimate of the likelihood that the decoding is correct. Following Forney, such estimates are traditionally only possible for list decoders where the soft output is the likelihood that a decoding is correct given it is assumed to be in the list. Recently, it has been established that Guessing Random Additive Noise Decoding (GRAND), Guessing Codeword Decoding (GCD), Ordered Statistics Decoding (OSD), and Successive Cancellation List (SCL) decoding can provide more accurate soft output, even without list decoding. Central to the improvement is a per-decoding estimate of the likelihood that a decoding has not been found that can be readily calculated during the decoding process. Here we explore how linear codebook constraints can be employed to further enhance the precision of such SO. We evaluate performance by adapting a forecasting statistic called the Brier Score. Results indicate that the SO generated by the approach is essentially as accurate as the maximum a posteriori estimate.
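The Brier score used for evaluation is simply the mean squared error between predicted correctness probabilities and binary outcomes; the two decoders and all numbers below are invented for illustration.

```python
def brier_score(probs, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes;
    lower is better (0 = perfectly calibrated and sharp)."""
    assert len(probs) == len(outcomes)
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

# Hypothetical soft outputs from two decoders on the same five blocks:
outcomes = [1, 1, 0, 1, 0]            # 1 = the decoding was correct
sharp = [0.9, 0.8, 0.2, 0.95, 0.1]    # confident, well-calibrated SO
blunt = [0.6, 0.6, 0.6, 0.6, 0.6]     # uninformative constant SO
```

The sharper, better-calibrated soft output earns a lower Brier score, which is how the forecasting statistic separates informative per-decoding likelihood estimates from uninformative ones.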
In this paper, we reconstruct Euclid's theory of similar triangles, as developed in Book VI of the \textit{Elements}, along with its 20th-century counterparts, formulated within the systems of Hilbert, Birkhoff, Borsuk and Szmielew, Millman and Parker, as well as Hartshorne. In the final sections, we present recent developments concerning non-Archimedean fields and mechanized proofs. Thales' theorem (VI.2) serves as the reference point in our comparisons. It forms the basis of Euclid's system and follows from VI.1 the only proposition within the theory of similar triangles that explicitly applies the definition of proportion. Instead of the ancient proportion, modern systems adopt the arithmetic of line segments or real numbers. Accordingly, they adopt other propositions from Euclid's Book VI, such as VI.4, VI.6, or VI.9, as a basis. In {\S}\,10, we present a system that, while meeting modern criteria of rigor, reconstructs Euclid's theory and mimics its deductive structure, beginning with VI.1. This system extends to automated proofs of Euclid's propositions from Book VI. Systems relying on real numbers provide the foundation for trigonometry as applied in modern mathematics. In {\S}\,9, we prove Thales' theorem in geometry over the hyperreal numbers. Just as Hilbert managed to prove Thales' theorem without referencing the Archimedean axiom, so do we by applying the arithmetic of the non-Archimedean field of hyperreal numbers.
We propose a random bipartite graph with weights assigned to both parts of the vertex sets. Edges are formed independently with probabilities that depend on these weights. This bipartite graph naturally gives rise to a random intersection graph which has nontrivial clustering properties and inhomogeneous vertex degrees. We focus on the situation where the weights are themselves i.i.d. random variables. In the so-called moderate clustering regime, we identify three types of scaling limit for the large connected components in the graphs at criticality, depending on the tail behaviours of the weight distributions of both parts.
We identify various classes of neural networks that are able to approximate continuous functions locally uniformly subject to fixed global linear growth constraints. For such neural networks, the associated neural stochastic differential equations can approximate general stochastic differential equations of It\^o diffusion type arbitrarily well. Moreover, quantitative error estimates are derived for stochastic differential equations with sufficiently regular coefficients.
We give some criteria for the Lie algebra $\mathrm{HH}^1(B)$ to be solvable, where $B$ is a $p$-block of a finite group algebra, in terms of the action of an inertial quotient of $B$ on a defect group of $B$.
We show that the Calabi--Yau metrics with isolated conical singularities of Hein--Sun admit polyhomogeneous expansions near their singularities. Moreover, we show that, under certain generic assumptions, natural families of smooth Calabi--Yau metrics on crepant resolutions and on polarized smoothings of conical Calabi--Yau manifolds degenerating to the initial conical Calabi--Yau metric admit polyhomogeneous expansions where the singularities are forming. The construction proceeds by performing weighted Melrose-type blow-ups and then gluing conical and scaled asymptotically conical Calabi--Yau metrics on the fibers, close to the blow-up's front face, without compromising polyhomogeneity. This yields a polyhomogeneous family of K\"ahler metrics that are approximately Calabi--Yau. Solving formally a complex Monge--Amp\`ere equation, we obtain a polyhomogeneous family of K\"ahler metrics with Ricci potential converging rapidly to zero as the family is degenerating. We can then conclude that the corresponding family of degenerating Calabi--Yau metrics is polyhomogeneous by using a fixed point argument.
We establish upper and lower bounds for the expected Wasserstein distance between the random empirical measure and the uniform measure on the Boolean cube. Our analysis leverages techniques from Fourier analysis, following the framework introduced in \cite{bobkov2021simple}, as well as methods from large deviations theory.
In recent decades, the defect of finite extensions of valued fields has emerged as the main obstacle in several fundamental problems in algebraic geometry such as the local uniformization problem. Hence, it is important to identify defectless fields and study properties related to defect. In this paper we study the relations between the following properties of valued fields: simply defectless, immediate-defectless and algebraically maximal. The main result of the paper is an example of an algebraically maximal field that admits a simple defect extension. For this, we introduce the notion of quasi-finite elements in the generalized power series field $k\left(\left(t^\Gamma\right)\right)$.
We integrate random sketching techniques into block orthogonalization schemes needed for s-step GMRES. The resulting block orthogonalization schemes generate the basis vectors whose overall orthogonality error is bounded by machine precision as long as each of the corresponding block vectors are numerically full rank. We implement these randomized block orthogonalization schemes using standard distributed-memory linear algebra kernels for s-step GMRES available in the Trilinos software packages. Our performance results on the Perlmutter supercomputer (with four NVIDIA A100 GPUs per node) demonstrate that these randomized techniques can enhance the numerical stability of the orthogonalization and overall solver, without a significant increase in the execution time.
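The core trick can be sketched as a randomized Gram-Schmidt: inner products are taken in a low-dimensional sketch space, so the sketched basis is exactly orthonormal while the full basis is only approximately so. All sizes below are toy values, and the plain-Python linear algebra illustrates the idea only, not the distributed-memory Trilinos kernels.

```python
import math
import random

random.seed(7)
n, k, s = 20, 3, 8   # vector length, number of vectors, sketch size (toy)

def sketch_matrix():
    # random-sign sketch, scaled so sketched norms estimate true norms
    return [[random.choice((-1.0, 1.0)) / math.sqrt(s) for _ in range(n)]
            for _ in range(s)]

def apply_sketch(S, v):
    return [sum(S[i][j] * v[j] for j in range(n)) for i in range(s)]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

S = sketch_matrix()
V = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(k)]

Q, SQ = [], []        # full basis vectors and their sketches
for v in V:
    w, sw = v[:], apply_sketch(S, v)
    for q, sq in zip(Q, SQ):
        c = dot(sq, sw)                       # inner product in sketch space
        w = [wi - c * qi for wi, qi in zip(w, q)]
        sw = [si - c * qi for si, qi in zip(sw, sq)]
    nrm = math.sqrt(dot(sw, sw))              # sketched norm
    Q.append([wi / nrm for wi in w])
    SQ.append([si / nrm for si in sw])
```

Because the Gram-Schmidt recurrence is driven entirely by sketched quantities, the sketched vectors `SQ` come out orthonormal to machine precision, which is the mechanism behind the bounded overall orthogonality error.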
We show that the fundamental group of a geometrically clean graph of finite rank free groups does not need to be virtually compact special, answering a question of Wise. This implies that the class of the virtually VH-clean graphs of finite rank free groups is a proper subclass of the class of virtually geometrically clean graphs of finite rank free groups.
We show that bounded divergence-free vector fields $u : [0,\infty) \times \mathbb{R}^d \to\mathbb{R}^d$ decrease the ``concentration'', quantified by the modulus of absolute continuity with respect to the Lebesgue measure, of solutions to the associated advection-diffusion equation when compared to solutions to the heat equation. In particular, for symmetric decreasing initial data, the solution to the advection-diffusion equation has (without a prefactor constant) larger variance, larger entropy, and smaller $L^p$ norms for all $p \in [1,\infty]$ than the solution to the heat equation. We also note that the same is not true on $\mathbb{T}^d$.
In this paper, we prove the existence and uniqueness of the conditional expectation of an event $A$ given a $\sigma$-algebra $\mathcal{G}$ as a linear problem in the Lebesgue spaces $L^{p}$ associated with a probability space, through the Riesz Representation Theorems. For the $L^{2}$ case, we state Dirichlet's principle. Then, we extend this principle to specific values of $p$, framing the existence of the conditional expectation as a variational problem. We conclude with a proof of the law of total probability using these tools.
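The $L^{2}$ instance of this variational viewpoint is the classical orthogonal-projection characterization (a standard formulation, stated here in our own notation): the conditional probability $\mathbb{P}(A\mid\mathcal{G})=\mathbb{E}[\mathbf{1}_{A}\mid\mathcal{G}]$ is the $\mathcal{G}$-measurable element of $L^{2}$ closest to the indicator $\mathbf{1}_{A}$, and the minimizer is characterized by the orthogonality condition:

```latex
\mathbb{P}(A \mid \mathcal{G})
  = \operatorname*{arg\,min}_{Z \in L^{2}(\Omega,\mathcal{G},\mathbb{P})}
    \mathbb{E}\left[\left(\mathbf{1}_{A} - Z\right)^{2}\right],
\qquad
\mathbb{E}\left[\left(\mathbf{1}_{A} - \mathbb{P}(A \mid \mathcal{G})\right) W\right] = 0
  \quad \text{for every } W \in L^{2}(\Omega,\mathcal{G},\mathbb{P}).
```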
Given two coprime numbers $p<q$, KW semigroups contain $p,q$ and are contained in $\langle p,q,r \rangle$, where $2r$ equals $p$, $q$, or $p+q$, whichever is even. These semigroups were first introduced by Kunz and Waldi, who proved that all KW semigroups of embedding dimension $n\geq 4$ have Cohen-Macaulay type $n-1$ and first Betti number ${n \choose 2}$. In this paper, we characterize KW semigroups whose defining ideal is generated by the $2\times 2$ minors of a $2\times n$ matrix. In addition, we identify all KW semigroups that lie on the interior of the same face of the Kunz cone $\mathcal C_p$ as a KW semigroup with determinantal defining ideal. Thus, we provide an explicit formula for the Betti numbers of all such KW semigroups.
This letter studies the impact of fluid antenna system (FAS) technology on the performance of unmanned aerial vehicle (UAV)-assisted multiuser communication networks. Specifically, we consider a scenario where a fixed-position antenna (FPA) base station (BS) serves K FAS-equipped users with the assistance of a UAV acting as an aerial relay. The BS employs rate-splitting multiple access (RSMA), while the UAV operates in half-duplex (HD) mode using the decode-and-forward (DF) strategy. For this system, we derive a compact analytical expression for the outage probability (OP) and its asymptotic behavior in the high signal-to-noise ratio (SNR) regime, leveraging the multivariate t-distribution. Our results show how deploying FAS at ground users (GUs) in UAV-aided communications improves overall system performance compared to using FPA GUs.
We call a dynamical system on a measurable metric space {\em measure-expansive} if the probability that two orbits remain close to each other for all time is negligible (i.e. zero). We extend results on expansive systems on compact metric spaces to the measure-expansive context. For instance, the measure-expansive homeomorphisms are characterized as those homeomorphisms $f$ for which the diagonal is almost invariant for $f\times f$ with respect to the product measure. In addition, the set of points with converging semi-orbits for such homeomorphisms has measure zero. In particular, the set of periodic orbits for these homeomorphisms is also of measure zero. We also prove that there are no measure-expansive homeomorphisms of the interval and that, on the circle, they are the Denjoy ones. As an application we obtain probabilistic proofs of some results on expansive systems. We also present some analogous results for continuous maps.
We investigate the nonlinear stability of compressible vortex sheet solutions for three-dimensional (3D) isentropic elastic flows. Building upon previous results on the weakly linear stability of elastic vortex sheets [19], we perform a detailed study of the roots of the Lopatinskii determinant and identify a geometric stability condition associated with the deformation gradient. We employ an upper triangularization technique that isolates the outgoing modes into a closed system, where they appear only at the leading order. This enables us to derive energy estimates despite derivative loss. The major novelty of our approach includes the following two key aspects: (1) For the 3D compressible Euler vortex sheets, the front symbol exhibits degenerate ellipticity in certain frequency directions, which makes it challenging to ensure the front's regularity using standard energy estimates. Our analysis reveals that the non-parallel structure of the deformation gradient tensor plays a crucial role in recovering ellipticity in the front symbol, thereby enhancing the regularity of the free interface. (2) Another significant challenge in 3D arises from the strong degeneracy caused by the collision of repeated roots and poles. Unlike in 2D, where such interactions are absent, we encounter a co-dimension one set in frequency space where a double root coincides with a double pole. To resolve this, we refine Coulombel's diagonalization framework [21] and construct a suitable transformation that reduces the degeneracy order of the Lopatinskii matrix, enabling the use of localized G\aa rding-type estimates to control the characteristic components. Finally, we employ a Nash-Moser iteration scheme to establish the local existence and nonlinear stability of vortex sheets under small initial perturbations, showing stability within a subsonic regime.
We consider an embedded free boundary minimal annulus $\Sigma$ in a geodesic ball in the round hemisphere $\mathbb{S}^3_+$ or in the hyperbolic space $\mathbb{H}^3$. Under the hypothesis of invariance under an antipodal map on the geodesic ball, and using the fact that this surface satisfies the Steklov problem with frequency, we prove that $\Sigma$ is congruent to a critical rotational annulus.
In this paper, a thermodynamically consistent phase-field model is proposed to describe the mass transport and reaction processes of multiple species in a fluid. A key feature of this model is that reactions between different species occur only at the interface, and may induce deformation of the interface. For the governing equations derived based on the energy variational method, we propose a structure-preserving numerical scheme that satisfies the mass conservation and energy dissipation laws at the discrete level. Furthermore, we carry out a rigorous error analysis of the time-discrete scheme for a simplified case. A series of numerical experiments are conducted to validate the effectiveness of the model as well as the accuracy and stability of the scheme. In particular, we simulate microvessels with straight and bifurcated structures to illustrate the risk of microaneurysm formation.
A result of Kento Fujita says that the volume of a K-semistable Fano manifold is bounded from above by the volume of the projective space. In this short note we establish quantized versions of Fujita's result.
We study the $b$-biased Oriented-cycle game where two players, OMaker and OBreaker, take turns directing the edges of $K_n$ (the complete graph on $n$ vertices). In each round, OMaker directs one previously undirected edge followed by OBreaker directing between one and $b$ previously undirected edges. The game ends once all edges have been directed, and OMaker wins if and only if the resulting tournament contains a directed cycle. Bollob\'as and Szab\'o asked the following question: what is the largest value of the bias $b$ for which OMaker has a winning strategy? Ben-Eliezer, Krivelevich and Sudakov proved that OMaker has a winning strategy for $b \leq n/2 - 2$. In the other direction, Clemens and Liebenau proved that OBreaker has a winning strategy for $b \geq 5n/6+2$. Inspired by their approach, we propose a significantly stronger strategy for OBreaker which we prove to be winning for $b \geq 0.7845n + O(1)$.
For $1\le p,q\le \infty$, the Nikolskii factor for a trigonometric polynomial $T_{\bf a}$ is defined by $$\mathcal N_{p,q}(T_{\bf a})=\frac{\|T_{\bf a}\|_{q}}{\|T_{\bf a}\|_{p}},\ \ T_{\bf a}(x)=a_{1}+\sum\limits^{n}_{k=1}(a_{2k}\sqrt{2}\cos kx+a_{2k+1}\sqrt{2}\sin kx).$$ We study the average Nikolskii factor for random trigonometric polynomials with independent $N(0,\sigma^{2})$ coefficients and obtain its exact order. For $1\leq p<q<\infty$, the average Nikolskii factor is of order $n^{0}$, i.e., constant in the degree $n$, as compared to the worst-case bound of order $n^{1/p-1/q}$. We also give a generalization to random multivariate trigonometric polynomials.
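Under invented parameters, the average Nikolskii factor can be estimated by Monte Carlo, using a discretized $L_p$ norm on an equispaced grid; grid size, degree, variance, and trial count below are arbitrary choices for illustration.

```python
import math
import random

def trig_poly(a, x):
    """T_a(x) = a_1 + sum_k sqrt(2)*(a_{2k} cos kx + a_{2k+1} sin kx),
    with the 1-based coefficients a_1..a_{2n+1} stored in a[0..2n]."""
    n = (len(a) - 1) // 2
    return a[0] + math.sqrt(2.0) * sum(
        a[2 * k - 1] * math.cos(k * x) + a[2 * k] * math.sin(k * x)
        for k in range(1, n + 1))

def lp_norm(a, p, m=512):
    # discretized L_p norm w.r.t. normalized measure dx/(2*pi) on [0, 2*pi)
    xs = [2.0 * math.pi * i / m for i in range(m)]
    return (sum(abs(trig_poly(a, x)) ** p for x in xs) / m) ** (1.0 / p)

def avg_nikolskii(n, p, q, trials=50, sigma=1.0):
    random.seed(0)
    total = 0.0
    for _ in range(trials):
        a = [random.gauss(0.0, sigma) for _ in range(2 * n + 1)]
        total += lp_norm(a, q) / lp_norm(a, p)
    return total / trials

ratio = avg_nikolskii(n=8, p=1, q=2)
```

Each sampled ratio is at least 1 by Hölder's inequality on the normalized measure, and for Gaussian coefficients it stays bounded as the degree grows, consistent with the constant-order behavior described above.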
In the present article we introduce geometrical objects induced by the tent maps associated with special Pisot numbers that we call tent-tiles. They are compact subsets of the one-, two-, or three-dimensional Euclidean space, depending on the particular special Pisot number. Most of the tent-tiles have a fractal shape and we study the Hausdorff dimension of their boundary. Furthermore, we are concerned with tilings induced by tent-tiles. It turns out that tent-tiles give rise to two types of lattice tilings. In order to obtain these results we establish and exploit connections between tent-tiles and Rauzy fractals induced by substitutions and automorphisms of the free group.
The acid treatment of carbonate reservoirs is a widely employed technique for enhancing the productivity of oil and gas reservoirs. In this paper, we present a novel combined hybridized mixed discontinuous Galerkin (HMDG) finite element method to simulate the dissolution process near the wellbore, commonly referred to as the wormhole phenomenon. The primary contribution of this work lies in the application of hybridization techniques to both the pressure and concentration equations. Additionally, an upwind scheme is utilized to address convection-dominant scenarios, and a ``cut-off" operator is introduced to maintain the boundedness of porosity. Compared to traditional discontinuous Galerkin methods, the proposed approach results in a global system with fewer unknowns and sparser stencils, thereby significantly reducing computational costs. We analyze the existence and uniqueness of the new combined method and derive optimal error estimates using the developed technique. Numerical examples are provided to validate the theoretical analysis.
This paper deals with the parabolic $(1,\,p)$-Laplace system, a parabolic system that involves the one-Laplace and $p$-Laplace operators with $p\in(1,\,\infty)$. We aim to prove that a spatial gradient is continuous in space and time. An external force term is treated under the optimal regularity assumption in the parabolic Lebesgue spaces. We also discuss a generalized parabolic system with the Uhlenbeck structure. A main difficulty is that the uniform ellipticity of the $(1,\,p)$-Laplace operator is violated on a facet, or the degenerate region of a spatial gradient. The gradient continuity is proved by showing local H\"{o}lder continuity of a truncated gradient, whose support is far from the facet. This is rigorously demonstrated by considering approximate parabolic systems and deducing various regularity estimates for approximate solutions by classical methods such as De Giorgi's truncation, Moser's iteration, and freezing coefficient arguments. A weak maximum principle is also utilized when $p$ is not in the supercritical range.
After performing the Madelung transformation, the nonlinear Schr\"odinger equation is transformed into a hydrodynamic equation akin to the compressible Euler equations with a certain dissipation. In this short note, we construct self-similar solutions of such system in the focusing case for any mass supercritical exponent. To the best of our knowledge these solutions are new, and may formally arise as potential blow-up profiles of the focusing NLS equation.
In this paper, we present formulas for the edge zeta function and the second weighted zeta function with respect to the group matrix of a finite abelian group $\Gamma $. Furthermore, we give another proof of Dedekind's theorem for the group determinant of $\Gamma $ via the decomposition formula for the matrix of a group covering of a digraph. Finally, we treat the weighted complexity of the complete graph with entries of the group matrix of $\Gamma $ as arc weights.
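For orientation, Dedekind's theorem states that the group determinant of a finite abelian group factors into linear forms indexed by its characters. A minimal numerical check for $\Gamma = \mathbb{Z}/3\mathbb{Z}$ (the convention $M[g,h]=x_{g-h}$ for the group matrix is our assumption; other sources index by $gh^{-1}$ or $g+h$):

```python
import numpy as np

# group matrix of Z/3 with the convention M[g, h] = x[(g - h) mod 3]
rng = np.random.default_rng(0)
x = rng.normal(size=3) + 1j * rng.normal(size=3)
M = np.array([[x[(g - h) % 3] for h in range(3)] for g in range(3)])

# characters of Z/3: chi_k(j) = omega^(k*j), omega a primitive cube root of 1
omega = np.exp(2j * np.pi / 3)
factors = [sum(omega ** (k * j) * x[j] for j in range(3)) for k in range(3)]

# Dedekind: det(M) equals the product over characters of the linear forms
assert np.isclose(np.linalg.det(M), np.prod(factors))
```

The same check works for any finite abelian group once its character table is written down.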
We study properties of solutions to the fractional Allen-Cahn equation when $s\in (0, 1/2)$ and the dimension $n\geq 2$. By applying the quantitative stratification principle developed by Naber and Valtorta, we obtain an optimal quantitative estimate on the transition set. As an application of this estimate, we improve the potential energy estimates of Cabr\'e, Cinti, and Serra (2021), providing sharp versions for the fractional Allen-Cahn equation. Similarly, we obtain optimal perimeter estimates for stationary nonlocal minimal surfaces, extending previous results of Cinti, Serra, and Valdinoci (2019) from the stable case.
Let $K=k((t))$ be a local field of characteristic $p>0$, with perfect residue field $k$. Let $\vec{a}=(a_0,a_1,\dots,a_{n-1})\in W_n(K)$ be a Witt vector of length $n$. Artin-Schreier-Witt theory associates to $\vec{a}$ a cyclic extension $L/K$ of degree $p^i$ for some $i\le n$. Assume that the vector $\vec{a}$ is ``reduced'', and that $v_K(a_0)<0$; then $L/K$ is a totally ramified extension of degree $p^n$. In the case where $k$ is finite, Kanesaka-Sekiguchi and Thomas used class field theory to explicitly compute the upper ramification breaks of $L/K$ in terms of the valuations of the components of $\vec{a}$. In this note we use a direct method to show that these formulas remain valid when $k$ is an arbitrary perfect field.
Given a family of graphs $\mathcal{F}$, a graph $G$ is said to be $\mathcal{F}$-saturated if $G$ does not contain a copy of $F$ as a subgraph for any $F\in\mathcal{F}$, but the addition of any edge $e\notin E(G)$ creates at least one copy of some $F\in\mathcal{F}$ within $G$. The minimum size of an $\mathcal{F}$-saturated graph on $n$ vertices is called the saturation number, denoted by $\mbox{sat}(n, \mathcal{F})$. Let $C_r$ be the cycle of length $r$. In this paper, we study $\mbox{sat}(n, \mathcal{F})$ when $\mathcal{F}$ is a family of cycles. In particular, we determine that $\mbox{sat}(n, \{C_4,C_5\})=\lceil\frac{5n}{4}-\frac{3}{2}\rceil$ for any positive integer $n$.
This paper addresses the distributed online optimization problem in which the local objective functions are assumed to be convex or non-convex. First, distributed algorithms are proposed for the convex and non-convex situations, where a one-point residual feedback technique is introduced to estimate the gradients of the local objective functions. Then the regret bounds of the proposed algorithms are derived under the assumption that the local objective functions are Lipschitz or smooth, respectively, which implies that the regrets are sublinear. Finally, we give two numerical examples, a distributed convex optimization problem and a distributed resource allocation problem, to illustrate the effectiveness of the proposed algorithms.
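The abstract does not spell the estimator out. A common form of one-point residual feedback (our assumption, following the usual zeroth-order literature) reuses the previous single function query: $g_t = \frac{d}{\delta}\,\big(f(x_t+\delta u_t) - f(x_{t-1}+\delta u_{t-1})\big)\,u_t$. A sketch checking near-unbiasedness on a fixed quadratic:

```python
import numpy as np

rng = np.random.default_rng(0)
d, delta = 2, 0.5
f = lambda z: z @ z                      # test objective, grad f(x) = 2x
x = np.array([1.0, -0.5])

u_prev = rng.normal(size=d)
u_prev /= np.linalg.norm(u_prev)
f_prev = f(x + delta * u_prev)           # the previous (single) query

grads = []
for _ in range(200_000):
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)               # uniform direction on the sphere
    val = f(x + delta * u)               # exactly one new query per step
    grads.append((d / delta) * (val - f_prev) * u)
    f_prev = val                         # residual feedback: reuse last query

est = np.mean(grads, axis=0)             # should approximate grad f(x) = 2x
```

Here $x$ is held fixed to isolate the estimator; in the online algorithm the iterate moves each round.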
Let F be a field of characteristic 2. In this paper we determine the Kato-Milne cohomology of the rational function field F(x) in one variable x. This will be done by proving an analogue of the Milnor exact sequence [4] in the setting of Kato-Milne cohomology. As an application, we answer the open case of the norm theorem for Kato-Milne cohomology that concerns separable irreducible polynomials in many variables. This completes a result of Mukhija [17, Theorem A.3] that gives the norm theorem for inseparable polynomials.
In this article, we study the global-in-time well-posedness of second order mean field games (MFGs) with nonlinear drift functions depending simultaneously on the state, distribution and control variables, and with a diffusion term depending on both the state and the distribution. The diffusion term is allowed to be degenerate, unbounded and even nonlinear in the distribution, but it does not depend on the control. First, we establish the global well-posedness of the corresponding forward-backward stochastic differential equations (FBSDEs), which arise from the maximum principle, under a so-called $\beta$-monotonicity commonly used in optimal control theory. The $\beta$-monotonicity covers a wide range of interesting cases; representative examples include, but are not limited to, the displacement monotonicity, the small mean field effect condition and the Lasry-Lions monotonicity, and it ensures well-posedness in diverse non-convex examples. In our setting, we pose assumptions directly on the drift and diffusion coefficients and the cost functionals, rather than indirectly on the Hamiltonian, to make the conditions more transparent. Our probabilistic method tackles the nonlinear dynamics through a linear but infinite dimensional version and, together with our recently proposed cone property for the adjoint processes, follows in an almost straightforward way the conventional approach to the classical stochastic control problem; we derive sufficiently good regularity of the value functional and finally show that it is the unique classical solution to the MFG master equation. Our results require fairly few conditions on the functional coefficients for the solution of the MFG, and slightly more conditions -- among the least stringent in the contemporary literature -- for the classical solution of the MFG master equation.
We propose a fourth-order cut-cell method for solving the two-dimensional advection-diffusion equation with moving boundaries on a Cartesian grid. We employ the ARMS technique to give an explicit and accurate representation of moving boundaries, and introduce a cell-merging technique to overcome discontinuities caused by topological changes in cut cells and the small cell problem. We use a polynomial interpolation technique based on poised lattice generation to achieve fourth-order spatial discretization, and a fourth-order implicit-explicit Runge-Kutta scheme for time integration. Numerical tests are performed on various moving regions, with advection velocity both matching and differing from the boundary velocity, demonstrating the fourth-order accuracy of the proposed method.
We study the limiting distribution of a volatility target index as the discretisation time step converges to zero. Two limit theorems (a strong law of large numbers and a central limit theorem) are established, and as an application, the exact limiting distribution is derived. We demonstrate that the volatility of the limiting distribution is consistently larger than the target volatility, and converges to the target volatility as the observation-window parameter $\lambda$ in the definition of the realised variance converges to $1$. Besides the exact formula for the drift and the volatility of the limiting distribution, their upper and lower bounds are derived. As a corollary of the exact limiting distribution, we obtain a vega conversion formula which converts the rho sensitivity of a financial derivative on the limiting diffusion to the vega sensitivity of the same financial derivative on the underlying of the volatility target index.
In this article, we study the filtered $\Phi$-modules canonically attached to the exponentially twisted cohomology associated with some nondegenerate functions. Inspired by $p$-adic Hodge theory, we conjecture that those filtered $\Phi$-modules are weakly admissible. We show that this expectation is correct under some assumptions using the theory of Adolphson and Sperber.
For a Hermitian matrix $A$ of order $n$ with eigenvalues $\lambda_1(A)\ge \cdots\ge \lambda_n(A)$, define \[ \mathcal{E}_p^+(A)=\sum_{\lambda_i > 0} \lambda_i^p(A), \quad \mathcal{E}_p^-(A)=\sum_{\lambda_i<0} |\lambda_i(A)|^p,\] to be the positive and the negative $p$-energy of $A$, respectively. In this note, first we show that if $A=[A_{ij}]_{i,j=1}^k$, where $A_{ii}$ are square matrices, then \[ \mathcal{E}_p^+(A)\geq \sum_{i=1}^{k} \mathcal{E}_p^+(A_{ii}), \quad \mathcal{E}_p^-(A)\geq \sum_{i=1}^{k} \mathcal{E}_p^-(A_{ii}),\] for any real number $p\geq 1$. We then apply the previous inequality to establish lower bounds for $p$-energy of the adjacency matrix of graphs.
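The block inequality above can be sanity-checked numerically. A minimal sketch with a random real symmetric matrix split into $k=2$ diagonal blocks (an illustration of the statement, not part of its proof):

```python
import numpy as np

def p_energies(A, p):
    """Positive and negative p-energies of a Hermitian matrix A."""
    w = np.linalg.eigvalsh(A)
    return np.sum(w[w > 0] ** p), np.sum(np.abs(w[w < 0]) ** p)

rng = np.random.default_rng(1)
M = rng.normal(size=(6, 6))
A = (M + M.T) / 2                       # random real symmetric matrix
blocks = [A[:3, :3], A[3:, 3:]]         # k = 2 diagonal blocks

p = 1.5                                  # any real p >= 1 is covered
Ep, Em = p_energies(A, p)
Bp = sum(p_energies(B, p)[0] for B in blocks)
Bm = sum(p_energies(B, p)[1] for B in blocks)
assert Ep >= Bp - 1e-9 and Em >= Bm - 1e-9   # the inequalities of the note
```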
In this paper, we study the parallelism between perfect numbers and Leinster groups and continue it by introducing the new concepts of almost and quasi Leinster groups, which parallel almost perfect and quasi-perfect numbers. These numbers are small deviations from perfect numbers, and very few results and/or examples are known about them. We investigate nilpotent almost/quasi Leinster groups and find examples of, and existence conditions for, such groups within several classes of non-nilpotent groups: ZM (Zassenhaus metacyclic) groups, generalised dihedral groups, generalised dicyclic groups and affine groups.
Let $R$ be a commutative ring with unity. Consider the twisted Chevalley group $G_{\pi, \sigma} (\Phi, R)$ of type $\Phi$ over $R$ and its elementary subgroup $E'_{\pi, \sigma} (\Phi, R)$. This paper investigates the normalizers of $E'_{\pi, \sigma}(\Phi, R)$ and $G_{\pi, \sigma}(\Phi, R)$ in the larger group $G_{\pi, \sigma}(\Phi, S)$, where $S$ is an extension ring of $R$. We establish that, under certain conditions on $R$, these normalizers coincide. Moreover, in the case of adjoint type groups, we show that they are precisely equal to $G_{\pi, \sigma}(\Phi, R)$.
This paper explores the Bernstein problem of smooth maps $f:\mathbb{R}^4 \to \mathbb{R}^3$ whose graphs form coassociative submanifolds in $\mathbb{R}^7$. We establish a condition, expressed in terms of the second elementary symmetric polynomial of the map's slope, that ensures $f$ is affine. A corresponding result is also established for Cayley submanifolds in $\mathbb{R}^8$.
One important example of a transposed Poisson algebra can be constructed by means of a commutative algebra and its derivation. This approach extends to superalgebras: one can construct a transposed Poisson superalgebra from a commutative superalgebra and its even derivation. In this paper we show that including odd derivations in this framework requires a new notion: a super vector space with two operations that satisfy the compatibility condition of a transposed Poisson superalgebra, where the first operation is determined by a left supermodule over a commutative superalgebra and the second is a Jordan bracket. We then prove that the super vector space generated by an odd derivation of a commutative superalgebra satisfies all the requirements of the introduced notion. We also show how to construct a 3-Lie superalgebra from a given transposed Poisson superalgebra and its even derivation.
Implicit variables of an optimization problem are used to model variationally challenging feasibility conditions in a tractable way while not entering the objective function. Hence, it is a standard approach to treat implicit variables as explicit ones. Recently, it has been shown in terms of a comparatively complex model problem that this approach, in general, is theoretically disadvantageous, as the surrogate problem typically suffers from the presence of artificial stationary points and the need for stronger constraint qualifications. The purpose of the present paper is twofold. First, it introduces a much simpler and more accessible model problem which can be used to recapitulate and even broaden the aforementioned findings. Indeed, we extend the analysis to two more classes of stationary points and the associated constraint qualifications. These theoretical results are accompanied by illustrative examples from cardinality-constrained, vanishing-constrained, and bilevel optimization. Second, the present paper illustrates, in terms of cardinality-constrained portfolio optimization problems, that treating implicit variables as explicit ones may also be disadvantageous from a numerical point of view.
The purpose of this paper is to introduce the construction of a stochastic process called ``diffusion house-moving'' and to explore its properties. We study the weak convergence of diffusion bridges conditioned to stay between two curves, and we refer to this limit as diffusion house-moving. Applying this weak convergence result, we give the sample path properties of diffusion house-moving.
This paper is concerned with the computation of the capacity region of a continuous, Gaussian vector broadcast channel (BC) with covariance matrix constraints. Since the decision variables of the corresponding optimization problem are Gaussian distributed, they can be characterized by a finite number of parameters. Consequently, we develop new Blahut-Arimoto (BA)-type algorithms that can compute the capacity without discretizing the channel. First, by exploiting projection and an approximation of the Lagrange multiplier, which are introduced to handle certain positive semidefinite constraints in the optimization formulation, we develop the Gaussian BA algorithm with projection (GBA-P). Then, we demonstrate that one of the subproblems arising from the alternating updates admits a closed-form solution. Based on this result, we propose the Gaussian BA algorithm with alternating updates (GBA-A) and establish its convergence guarantee. Furthermore, we extend the GBA-P algorithm to compute the capacity region of the Gaussian vector BC with both private and common messages. All the proposed algorithms are parameter-free. Lastly, we present numerical results to demonstrate the effectiveness of the proposed algorithms.
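For readers unfamiliar with the baseline: the classical (discrete, single-user) Blahut-Arimoto iteration that such Gaussian variants build on alternates between the posterior and the input distribution. A sketch for a binary symmetric channel follows; this is the textbook algorithm, not the paper's GBA-P/GBA-A schemes.

```python
import numpy as np

def blahut_arimoto(W, iters=500):
    """Capacity (in bits) of a discrete memoryless channel W[x, y] = P(y|x)."""
    nx = W.shape[0]
    p = np.full(nx, 1.0 / nx)            # input distribution, start uniform
    for _ in range(iters):
        joint = p[:, None] * W
        q = joint / joint.sum(axis=0, keepdims=True)   # posterior q(x|y)
        r = np.exp(np.sum(W * np.log(q), axis=1))      # unnormalized update
        p = r / r.sum()
    joint = p[:, None] * W
    py = joint.sum(axis=0)
    return np.sum(joint * np.log2(W / py))             # mutual information

# binary symmetric channel, crossover 0.1: C = 1 - H2(0.1) ~ 0.531 bits
bsc = np.array([[0.9, 0.1], [0.1, 0.9]])
C = blahut_arimoto(bsc)
```

The Gaussian setting replaces the distribution updates with updates of a finite set of covariance parameters, which is what makes discretization unnecessary.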
Let $G_1,\dots, G_m$ be independent identically distributed Bernoulli random subgraphs of the complete graph ${\cal K}_n$ having vertex sets of random sizes $X_1,\dots, X_m\in \{0,1,2,\dots\}$ and random edge densities $Q_1,\dots, Q_m\in [0,1]$. Assuming that each $G_i$ has a vertex of degree $1$ with positive probability, we establish the $k$-connectivity threshold as $n,m\to+\infty$ for the union $\cup_{i=1}^mG_i$ defined on the vertex set of ${\cal K}_n$.
In this article, we prove that any pair of doubly commuting $2$-isometries on a Hilbert space has a Wold-type decomposition. Moreover, the analytic part of the pair is unitarily equivalent to the pair of multiplication operators by the coordinate functions on a Dirichlet-type space on the bidisc.
In this survey, we consider various analytic problems related to the geometry of the Chern connection on Hermitian manifolds, such as the existence of metrics with constant Chern-scalar curvature, generalizations of the K\"ahler-Einstein condition to the non-K\"ahler setting, and the convergence of the Chern-Ricci flow on compact complex surfaces.
On a closed, orientable surface $\Sigma_g$ of arbitrary genus $g\geq 1$ equipped with a Riemannian metric $h$, we study the magnetic Laplacian with magnetic potential given by a harmonic $1$-form $A$. Its lowest eigenvalue (the magnetic ground state energy) is positive, unless $A$ represents an integral cohomology class. We isolate a countable set of ground state energies which we call the $\textit{ground state spectrum}$ of the metric $h$. The main result of the paper shows that the ground state spectrum determines the volume and the conformal class of the metric $h$. In particular, hyperbolic metrics are distinguished by their ground state spectrum. We also compute the magnetic spectrum of flat tori and introduce some magnetic spectral invariants of $(\Sigma_g,h)$ which are conformal by definition and involve the geometry of what we call the Jacobian torus of $(\Sigma_g,h)$ (in algebraic geometry, the Jacobian variety of a Riemann surface).
We prove that the discrete Hardy-Littlewood maximal function associated with Euclidean spheres with small radii has dimension-free estimates on $\ell^p(\mathbb{Z}^d)$ for $p\in[2,\infty).$ This implies an analogous result for the Euclidean balls, thus making progress on a question of E.M. Stein from the mid-1990s. Our work provides the first dimension-free estimates for full discrete maximal functions related to spheres and balls without relying on comparisons with their continuous counterparts. An important part of our argument is a uniform (dimension-free) count of lattice points in high-dimensional spheres and balls with small radii. We also establish a dimension-free estimate for a multi-parameter maximal function of a combinatorial nature, which is a new phenomenon and may be useful for studying similar problems in the future.
In this paper, we study the Severi varieties parametrizing integral curves of geometric genus one on polarized toric surfaces in characteristic zero and describe their irreducible components. We show that the irreducible components are in natural bijection with certain affine sublattices of the lattice of characters of the toric surface. The sublattices are described explicitly in terms of the polygon defining the polarization of the toric surface.
We investigate the weak limit of the hyper-rough square-root process as the Hurst index $H$ goes to $-1/2\,$. This limit corresponds to the fractional kernel $t^{H - 1 / 2}$ losing integrability. We establish the joint convergence of the couple $(X, M)\,$, where $X$ is the hyper-rough process and $M$ the associated martingale, to a fully correlated Inverse Gaussian L\'evy jump process. This unveils the existence of a continuum between hyper-rough continuous models and jump processes, as a function of the Hurst index. Since we prove a convergence of continuous to discontinuous processes, the usual Skorokhod $J_1$ topology is not suitable for our problem. Instead, we obtain the weak convergence in the Skorokhod $M_1$ topology for $X$ and in the non-Skorokhod $S$ topology for $M$.
Let $\mathbb F$ be a local field and $G$ be a linear algebraic group defined over $\mathbb F$. For $k\in\mathbb N$, let $g\mapsto g^k$ be the $k$-th power map $P_k$ on $G(\mathbb F)$. The purpose of this article is two-fold. First, we study the power map on real algebraic groups. We characterise the density of the images of the power map $P_k$ on $G(\mathbb R)$ in terms of Cartan subgroups. Next, we consider a linear algebraic group $G$ over a non-Archimedean local field $\mathbb F$ of arbitrary characteristic. If the residual characteristic of $\mathbb F$ is $p$ and an element admits a $p^k$-th root in $G(\mathbb F)$ for each $k$, then we prove that some power of the element is unipotent. In particular, we prove that an element $g\in G(\mathbb F)$ admits roots of all orders if and only if $g$ is contained in a one-parameter subgroup of $G(\mathbb F)$. We also extend these results to all linear algebraic groups over global fields.
Consider a standard graded artinian $k$-algebra $B$ and an extension of $B$ by a new variable, $A=B\otimes_k k[x]/(x^d)$ for some $d\geq 1$. We will show how maximal rank properties for powers of a general linear form on $A$ can be determined by maximal rank properties for different powers of general linear forms on $B$. This is then used to study Lefschetz properties of algebras that can be obtained via such extensions. In particular, it allows for a new proof that monomial complete intersections have the strong Lefschetz property over a field of characteristic zero. Moreover, it gives a recursive formula for the determinants that show up in that case. Finally, for algebras over a field of characteristic zero, we give a classification for what properties $B$ must have for all extensions $B\otimes_k k[x]/(x^d)$ to have the weak or the strong Lefschetz property.
We construct smooth localised orthonormal bases compatible with anisotropic Triebel-Lizorkin and Besov type spaces on $\mathbb{R}^d$. The construction is based on tensor products of so-called univariate brushlet functions, built from local trigonometric bases in the frequency domain, and it is painless in the sense that all parameters of the construction are explicitly specified. It is shown that the associated decomposition system forms unconditional bases for the full family of Triebel-Lizorkin and Besov type spaces, including the so-called $\alpha$-modulation and $\alpha$-Triebel-Lizorkin spaces. In the second part of the paper we study nonlinear $m$-term approximation with the constructed bases, deriving direct Jackson and Bernstein inequalities for $m$-term approximation with the tensor brushlet system in $\alpha$-modulation and $\alpha$-Triebel-Lizorkin spaces. The inverse Bernstein estimates rely heavily on the fact that the constructed system is non-redundant.
The mechanical process of progressively debonding an adhesive membrane from a substrate is described as a quasistatic variational evolution of sets and is herein investigated. Existence of energetic solutions, based on global minimisers of a suitable functional together with an energy balance, is obtained within the natural class of open sets, improving and simplifying previous results known in the literature. The proposed approach relies on an equivalent reformulation of the model in terms of the celebrated one-phase Bernoulli free boundary problem. This point of view allows performing the Minimizing Movements scheme in spaces of functions instead of the more complicated framework of sets. Nevertheless, in order to encompass the irreversibility of the phenomenon, it remains crucial to keep track of the debonded region at each discrete time-step, thus actually resulting in a coupled algorithm.
We develop a higher-dimensional extension of multifractal analysis for typical fiber-bunched linear cocycles. Our main result is a relative variational principle, which shows that the topological entropy of the level sets of Lyapunov exponents can be approximated by the metric entropy of ergodic measures fully concentrated on those level sets, addressing a question posed by Breuillard and Sert. We also establish a variational principle for the generalized singular value function. As an application to dynamically defined linear cocycles, we obtain a multifractal formalism for open sets of $C^{1+\alpha}$ repellers and Anosov diffeomorphisms.
In this paper, we study an optimal control problem for a brain tumor growth model that incorporates lactate metabolism, viscoelastic effects, and tissue damage. The PDE system, introduced in [G. Cavalleri, P. Colli, A. Miranville, E. Rocca, On a Brain Tumor Growth Model with Lactate Metabolism, Viscoelastic Effects, and Tissue Damage (2025)], couples a Fisher-Kolmogorov type equation for tumor cell density with a reaction-diffusion equation for the lactate, a quasi-static force balance governing the displacement, and a nonlinear differential inclusion for tissue damage. The control variables, representing chemotherapy and a lactate-targeting drug, influence tumor progression and treatment response. Starting from well-posedness, regularity, and continuous dependence results already established, we define a suitable cost functional and prove the existence of optimal controls. Then, we analyze the differentiability of the control-to-state operator and establish a necessary first-order condition for treatment optimality.
We study topological versions of an independent set in an abelian group and a linearly independent set in a vector space: a {\em topologically independent set} in a topological group and a {\em topologically linearly independent set} in a topological vector space. These counterparts of their algebraic versions are defined analogously and possess similar properties. Let $\C^\times$ be the multiplicative group of the field of complex numbers with its usual topology. We prove that a subset $A$ of an arbitrary Tychonoff power of $\C^\times$ is topologically independent if and only if the topological subgroup $\hull{A}$ that it generates is the Tychonoff direct sum $\bigoplus_{a\in A}\hull{a}$. This theorem substantially generalizes an earlier result of the author, who proved this for precompact abelian groups. Further, we show that topologically independent and topologically linearly independent sets coincide in vector spaces with weak topologies, although they differ in general. We characterize topologically linearly independent sets in vector spaces with weak topologies and in normed spaces. In a weak topology, a set $A$ is topologically linearly independent if and only if its linear span is the Tychonoff direct sum $\R^{(A)}$. In normed spaces, $A$ is topologically linearly independent if and only if it is uniformly minimal. Thus, from the point of view of topological linear independence, the Tychonoff direct sums $\R^{(A)}$ and (linear spans of) uniformly minimal sets, which are closely related to bounded biorthogonal systems, are of the same essence.
We consider in this work a $2$-dimensional $3$-wave kinetic equation describing the dynamics of the thermal cloud outside a Bose-Einstein condensate. We construct global non-radial mild solutions for the equation. These mild solutions are sums of Dirac masses on circles. We prove that in each spatial direction, either Dirac masses at the origin, the so-called Bose-Einstein condensates, are formed in finite time, or the solutions converge to Bose-Einstein condensates as time tends to infinity. We also describe the dynamics of the formation of the Bose-Einstein condensates in the latter case: in each direction, the solutions accumulate around circles close to the origin at rates that grow at least linearly in time.
This paper examines the relationship between GIT heights and weighted heights, exploring their definitions and applications to weighted projective spaces and binary forms. Drawing on prior weighted height frameworks, we relate them to Zhang's GIT height via the Veronese map, showing that for a semistable cycle $Z$ in a weighted projective space over $\overline{\mathbb{Q}}$, the GIT height $h(Z)$ equals $L(Z)$ plus an Archimedean Chow metric term. For binary forms $f \in V_d$, we define an invariant height $H(f)$ with respect to the Chow metric and establish that the moduli weighted height $L(\xi(f))$ of the invariants of $f$ equals $H(f)$ plus the field degree times the Chow height $h_{Ch}(f)$, linking arithmetic and moduli properties.
This paper considers the unsourced random access (URA) problem with a random and unknown number of active users in multiple-input multiple-output (MIMO) quasi-static Rayleigh fading channels. We derive non-asymptotic achievability bounds on the probability of incorrectly estimating the number of active users, and provide scaling laws on the gap between the estimated and true numbers of active users. We prove that the error probability reaches a plateau as the power $P$ and blocklength $n$ increase, whereas it decays exponentially with the number $L$ of receive antennas and eventually vanishes. Then, we explore the fundamental limits of URA by deriving non-asymptotic achievability bounds and converse bounds (including two single-user converse bounds and one multi-user ensemble converse bound) on the minimum energy-per-bit required by each active user to transmit $J$ bits with blocklength $n$ under misdetection and false-alarm constraints. Numerical results show that the extra required energy-per-bit due to the uncertainty in the number ${\rm{K}}_a$ of active users decreases as $L$ and $\mathbb{E}[{\rm{K}}_a]$ increase and the error requirement becomes milder. In the non-asymptotic regime, using codewords distributed on a sphere outperforms Gaussian random coding. Existing schemes are shown to exhibit a large gap to our bounds when the number of active users is large, calling for more advanced schemes that perform energy-efficiently in this case. In the asymptotic regime with $n\to\infty$, we establish scaling laws on the minimum required $P$ and $L$ to reliably support ${\rm{K}}_a$ active users as functions of $n$, which highlight the potential of MIMO in enabling low-cost communication and indicate that it is possible for the minimum required $P$ and $L$ to remain on the same order when the number of active users increases but stays below a threshold.
Renewable power-to-hydrogen (ReP2H) systems require rectifiers to supply power to electrolyzers (ELZs). Two main types of rectifiers, insulated-gate bipolar transistor rectifiers (IGBT-Rs) and thyristor rectifiers (TRs), offer distinct tradeoffs. IGBT-Rs provide flexible reactive power control but are costly, whereas TRs are more affordable with lower power loss but consume a large amount of uncontrollable reactive power. A mixed configuration of rectifiers in utility-scale ReP2H systems could achieve a decent tradeoff and increase overall profitability. To explore this potential, this paper proposes an optimal investment portfolio model. First, we model and compare the active and reactive power characteristics of ELZs powered by TRs and IGBT-Rs. Second, we consider the investment in ELZs, rectifiers, and var resources, and coordinate the operation of renewables, energy storage, var resources, and the on-off switching and load allocation of multiple ELZs. Subsequently, a two-stage stochastic programming (SP) model based on weighted information gap decision theory (W-IGDT) is developed to address the uncertainties of renewable power and the hydrogen price, and we apply the progressive hedging (PH) algorithm to accelerate its solution. Case studies demonstrate that optimal rectifier configurations increase revenue by up to 2.56% compared with using only TRs or IGBT-Rs, as well as with the configurations in existing projects. Under the optimal portfolio, reactive power compensation investment is nearly eliminated, with a preferred TR-to-IGBT-R ratio of 3:1.
This paper proposes a novel optimization problem built on noncooperative games under central regulation, which can be formulated as a bilevel structure. At the lower level, each player competes to minimize its own cost function, which depends not only on the strategies of all players but also on an intervention decision of the central regulator; the central regulator, located at the upper level, seeks the social optimum, that is, to minimize the sum of the cost functions of all players through an adjustable intervention decision. In this setting, under the intervention of the central regulator, the lower-level players engage in a noncooperative game and aim to reach a Nash equilibrium, which in turn depends on the regulator's decision. The formulated bilevel social optimization problem is proven to be constrained, nonconvex, and nonsmooth. To address this intricate problem, an inexact zeroth-order algorithm is developed by virtue of smoothing techniques, allowing the Nash equilibrium of the lower-level game to be computed inexactly. Leveraging the properties of the smoothing techniques, it is rigorously shown that the devised algorithm achieves a sublinear convergence rate for computing a stationary point of a related optimization problem with a smoothed objective. Moreover, the sublinear convergence rate in the scenario where the exact equilibrium of the lower-level game is available is also discussed. Finally, numerical simulations are conducted to demonstrate the effectiveness of the theoretical findings.
We give a constructive proof of the fact that the treewidth of a graph $G$ is bounded by a linear function of the separation number of $G$.
We analyse domination between invariant types in o-minimal expansions of ordered groups, showing that the domination poset decomposes as the direct product of two posets: the domination poset of an o-minimal expansion of a real closed field, and one derived from a linear o-minimal structure. We prove that if the Morley product is well-defined on the former poset, then the same holds for the poset computed in the whole structure. We establish our results by employing the `short closure' pregeometry ($\mathrm{scl}$) in semi-bounded o-minimal structures, showing that types of $\mathrm{scl}$-independent tuples are weakly orthogonal to types of short tuples. As an application we prove that, in an o-minimal expansion of an ordered group, every definable type is domination-equivalent to a product of 1-types. Furthermore, there are precisely two or four classes of definable types up to domination-equivalence, depending on whether a global field is definable or not.
Budur, Fernandes de Bobadilla, Le and Nguyen (2022) conjectured that if two germs of holomorphic functions are topologically equivalent, then the Milnor fibres of their initial forms are homotopy equivalent. In this note, we give an affirmative answer to this conjecture in the case of plane curves. We also show that a positive answer to this conjecture implies a positive answer to the famous Zariski multiplicity conjecture, both in the case of right equivalence and in the case of hypersurfaces with isolated singularities.
We establish the foundations of the theory of persistent cohomology operations, derive decomposition formulas for wedge sums and products, and prove their Gromov-Hausdorff stability. We use these results to construct pairs of Riemannian pseudomanifolds for which the Gromov-Hausdorff estimates derived from persistent cohomology operations are strictly sharper than those obtained using persistent homology.
We show that a smaller version of the Kontsevich graph complex spanned by triconnected graphs is quasi-isomorphic to the full Kontsevich graph complex.
In this work, we develop a cut-based unfitted finite element formulation for solving nonlinear, nonstationary fluid-structure interaction with contact in Eulerian coordinates. In the Eulerian description, fluid flow modeled by the incompressible Navier-Stokes equations remains in Eulerian coordinates, while elastic solids are transformed from Lagrangian coordinates into the Eulerian system. A monolithic description is adopted. For the spatial discretization, we employ an unfitted finite element method with ghost penalties based on inf-sup stable finite elements. To handle contact, we use a relaxation of the contact condition in combination with a unified Nitsche approach that implicitly handles the switch between fluid-structure interaction and contact conditions. The temporal discretization is based on a backward Euler scheme with implicit extensions of solutions at the previous time step. The nonlinear system is solved with a semi-smooth Newton's method with line search. Our formulation, discretization and implementation are substantiated with an elastic falling ball that comes into contact with the bottom boundary, constituting a challenging state-of-the-art benchmark.
In the 1980s, Mahowald and Kane used integral Brown--Gitler spectra to decompose $ku \wedge ku$ as a sum of finitely generated $ku$-module spectra. This splitting, along with an analogous decomposition of $ko \wedge ko$, led to a great deal of progress in stable homotopy computations and in the understanding of $v_1$-periodicity in the stable homotopy groups of spheres. In this paper, we construct a $C_2$-equivariant lift of Mahowald and Kane's splitting of $ku \wedge ku$. We also give a description of the resulting $C_2$-equivariant splitting in terms of $C_2$-equivariant Adams covers and record an analogous splitting for $H\underline{\mathbb{Z}} \wedge H \underline{\mathbb{Z}}$. Similarly to the nonequivariant story, we expect the techniques of this paper to facilitate further $C_2$-equivariant stable homotopy computations and understanding of $v_1$-periodicity in $C_2$-equivariant stable stems.
A mixed regular graph is a graph where every vertex has $z$ incoming arcs, $z$ outgoing arcs, and $r$ edges; furthermore, if it has girth $g$, we say that the graph is a \emph{$[z,r;g]$-mixed graph}. A \emph{$[z,r;g]$-mixed cage} is a $[z,r;g]$-mixed graph with the smallest possible order. In this note, we give a family of $[z,q;5]$-mixed graphs for $q\geq 7$ a prime power and $q-1\leq 4z+R$ with $z\geq 1$ and $R \in \{1,\ldots,5\}$. This improves the best known upper bounds on the order of mixed cages to date.
Since the discovery of critical mistakes in Rauszer's work on bi-intuitionistic logics, solid foundations for these have progressively been rebuilt. However, the algebraic treatment of these logics has not yet been addressed. We fill this gap by algebraically analysing the bi-intuitionistic logics wBIL and sBIL. Given that these logics are only distinguished as consequence relations, and not as sets of theorems (hence the conflation in Rauszer's work), the algebraic tools we use are tailored to the treatment of such relations. We mainly inspect these logics through the lens of abstract algebraic logic, but we also provide an alternative algebraic analysis of wBIL and sBIL as logics preserving degrees of truth and truth, respectively. Our results pertaining to wBIL and sBIL are formalised in the interactive theorem prover Rocq.
The simplex graph $S(G)$ of a graph $G$ is defined as the graph whose vertices are the cliques of $G$ (including the empty set), with two vertices being adjacent if, as cliques of $G$, they differ in exactly one vertex. Simplex graphs form a subclass of median graphs and include many well-known families of graphs, such as gear graphs, Fibonacci cubes and Lucas cubes. In this paper, we characterize simplex graphs from four different perspectives: the first focuses on a graph class associated with downwards-closed sets -- namely, the daisy cubes; the second identifies all forbidden partial cube-minors of simplex graphs; the third is from the perspective of the $\Theta$-equivalence classes; and the fourth explores the relationship between the maximum degree and the isometric dimension. Furthermore, very recently, Betre et al.\ [K. H. Betre, Y. X. Zhang, C. Edmond, Pure simplicial and clique complexes with a fixed number of facets, 2024, arXiv: 2411.12945v1] proved that an abstract simplicial complex (i.e., an independence system) on a finite set can be represented as the clique complex of a graph if and only if it satisfies the Weak Median Property. As a corollary, we rederive this result using a graph-theoretic method.
The geometric properties of quantum states are crucial for understanding many physical phenomena in quantum mechanics, condensed matter physics, and optics. The central object describing these properties is the quantum geometric tensor, which unifies the Berry curvature and the quantum metric. In this work, we use the differential-geometric framework of vector bundles to analyze the properties of parameter-dependent quantum states and generalize the quantum geometric tensor to this setting. This construction is based on an arbitrary connection on a Hermitian vector bundle, which defines a notion of quantum state transport in parameter space, and a sub-bundle projector, which constrains the set of accessible quantum states. We show that the sub-bundle geometry is similar to that of submanifolds in Riemannian geometry and is described by a generalization of the Gauss-Codazzi-Mainardi equations. This leads to a novel definition of the quantum geometric tensor, which contains an additional curvature contribution. To illustrate our results, we describe the sub-bundle geometry arising in the semiclassical treatment of Dirac fields propagating in curved spacetime and show how the quantum geometric tensor, with its additional curvature contributions, is obtained in this case. As a concrete example, we consider Dirac fermions confined to a hyperbolic plane and demonstrate how spatial curvature influences the quantum geometry. This work sets the stage for further exploration of quantum systems in curved geometries, with applications in both high-energy physics and condensed matter systems.
We prove that for every sufficiently large $n$, the complete graph $K_{2n}$ with an arbitrary edge signing $\sigma: E(K_{2n}) \to \{-1, +1\}$ admits a high-discrepancy $1$-factor decomposition. That is, there exists a universal constant $c > 0$ such that every edge-signed $K_{2n}$ has a perfect matching decomposition $\{\psi_1, \ldots, \psi_{2n-1}\}$, where for each perfect matching $\psi_i$, the discrepancy $\lvert \frac{1}{n} \sum_{e\in E(\psi_i)} \sigma(e) \rvert$ is at least $c$.
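To make the objects in this abstract concrete (this is not the paper's construction, which chooses the decomposition carefully), the following sketch builds the standard round-robin $1$-factorization of $K_{2n}$ and computes the discrepancy of each perfect matching under a given signing:

```python
import random

def one_factorization(n2):
    """Round-robin 1-factorization of K_{2n} on vertices 0..2n-1.

    Returns 2n-1 perfect matchings (each a list of n edges) that
    together partition the edge set of K_{2n}.
    """
    m = n2 - 1  # 2n-1 rounds; vertex m is the fixed "hub"
    rounds = []
    for r in range(m):
        matching = [(r, m)]
        for k in range(1, n2 // 2):
            i, j = (r + k) % m, (r - k) % m
            matching.append((min(i, j), max(i, j)))
        rounds.append(matching)
    return rounds

def discrepancies(n, sigma):
    """Discrepancy |(1/n) sum_{e in M} sigma(e)| of each matching M."""
    return [abs(sum(sigma[e] for e in M)) / n
            for M in one_factorization(2 * n)]

# Example: a random signing of K_10 (n = 5).
random.seed(0)
n = 5
edges = [(i, j) for i in range(2 * n) for j in range(i + 1, 2 * n)]
sigma = {e: random.choice([-1, 1]) for e in edges}
print(discrepancies(n, sigma))
```

For the all-$(+1)$ signing every matching has discrepancy $1$; the theorem asserts that some decomposition keeps all discrepancies bounded below by a constant $c$ for every signing.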
Least perimeter solutions for a region with fixed mass are sought in ${\mathbb{R}^d}$ on which a density function $\rho(r) = r^p+a$, with $p>0, a>0$, weights both perimeter and mass. On the real line ($d=1$) this is a single interval that includes the origin. For $p \le 1$ the isoperimetric interval has one end at the origin; for larger $p$ there is a critical value of $a$ above which the interval is symmetric about the origin. In the case $p=2$, for $d=2$ and $3$, the isoperimetric region is a circle or sphere, respectively, that includes the origin; the centre moves towards the origin as $a$ increases, with constant radius, and then remains centred on the origin for $a$ above the critical value as the radius decreases.
We study the eigenvalue collisions for certain families of matrices $$R(s,t) = \cos(s \pi / 2)C + \sin(s \pi / 2)U(t), \quad s,t \in [0,1]$$ where $C$ is a realization of a Ginibre random matrix and $U(t)$ is a $t$-periodic matrix with eigenvalues flowing along a parametrized curve.
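The family $R(s,t)$ above is explicit enough to experiment with numerically. The sketch below samples a Ginibre realization $C$ and uses one illustrative choice of $U(t)$ (an assumption, not the paper's construction: eigenvalues $e^{2\pi i(\theta_j + t)}$ flowing around the unit circle, conjugated by a fixed random unitary), then scans the $(s,t)$ square for near-collisions via the minimal pairwise eigenvalue gap:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8

# Ginibre realization: i.i.d. complex Gaussian entries, variance 1/n.
C = (rng.standard_normal((n, n))
     + 1j * rng.standard_normal((n, n))) / np.sqrt(2 * n)

# Illustrative t-periodic U(t): eigenvalues move on the unit circle,
# conjugated by a fixed random unitary V.
theta = rng.uniform(0, 1, n)
V = np.linalg.qr(rng.standard_normal((n, n))
                 + 1j * rng.standard_normal((n, n)))[0]

def U(t):
    return V @ np.diag(np.exp(2j * np.pi * (theta + t))) @ V.conj().T

def R(s, t):
    return np.cos(s * np.pi / 2) * C + np.sin(s * np.pi / 2) * U(t)

def min_gap(A):
    """Smallest pairwise distance between eigenvalues of A."""
    lam = np.linalg.eigvals(A)
    d = np.abs(lam[:, None] - lam[None, :])
    return d[np.triu_indices(len(lam), k=1)].min()

# Scan the (s, t) square; small gaps flag near-collisions.
grid = np.linspace(0, 1, 21)
gaps = np.array([[min_gap(R(s, t)) for t in grid] for s in grid])
print("smallest eigenvalue gap on the grid:", gaps.min())
```

Note that $R(0,t) = C$ and $R(1,t) = U(t)$, so the parameter $s$ interpolates between the Ginibre matrix and the flowing unitary spectrum.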
Cigler considered certain shifted Hankel determinants of convolution powers of Catalan numbers and conjectured identities for these determinants. Recently, Fulmek gave a bijective proof of Cigler's conjecture. Cigler then provided a computational proof. We extend Cigler's determinant identities to the convolution of general power series $F(x)$, where $F(x)$ satisfies a certain type of quadratic equation. As an application, we present the Hankel determinant identities of convolution powers of Motzkin numbers.
The Babu\v{s}ka or plate paradox concerns the failure of convergence when a domain with curved boundary is approximated by polygonal domains in linear bending problems with simple support boundary conditions. It can be explained via a boundary integral representation of the total Gaussian curvature that is part of the Kirchhoff--Love bending energy. It is shown that the paradox also occurs for a nonlinear bending-folding model which enforces vanishing Gaussian curvature. A simple remedy that is compatible with simplicial finite element methods to avoid wrong convergence is devised.
We investigate the mean value of the first moment of primitive cubic $L$-functions over $\mathbb{F}_q(T)$ in the non-Kummer setting. Specifically, we study the sum \begin{equation*} \sum_{\substack{\chi \text{ primitive cubic}\\ \mathrm{genus}(\chi)=g}}L_q\Big(\frac{1}{2}, \chi\Big), \end{equation*} where $L_q(s,\chi)$ denotes the $L$-function associated with the primitive cubic character $\chi$. Using double Dirichlet series, we obtain an asymptotic formula with an error term of size $q^{(\frac{7}{8}+\varepsilon)g}$.
We consider two-dimensional directed polymers in random environment in the sub-critical regime and in the quasi-critical regime introduced recently by Caravenna, Cottini and Rossi, arXiv:2307.02453v1. For $q\leq q_N$ with $q_N\to\infty$ diverging at a suitable rate with the size of the system, we obtain upper bound estimates on the $q$-moment of the partition function for general environments. In the sub-critical regime, our results improve the $q_N$-threshold obtained for the Gaussian environment in Cosco, Zeitouni, Comm. Math. Phys. (2023). As a corollary, we derive large deviation estimates with a Gaussian rate function.
This paper aims to develop an efficient adaptive finite element method for the second-order elliptic problem. Although the theory for adaptive finite element methods based on residual-type a posteriori error estimators and bisection refinement has been well established, in practical computations, the lack of asymptotic exactness of the error estimator and the excessive number of adaptive iteration steps often make the adaptive algorithm inefficient. We propose an efficient adaptive finite element method based on high-accuracy techniques, including the superconvergence recovery technique and high-quality mesh optimization. The centroidal Voronoi Delaunay triangulation mesh optimization is embedded in the mesh adaptation to provide high-quality meshes, thereby ensuring the superconvergence property of the recovered gradient and the asymptotic exactness of the error estimator. A tailored adaptive strategy, which can generate high-quality meshes with a target number of vertices, is developed to ensure that the adaptive computation process terminates within $7$ steps. The effectiveness and robustness of the adaptive algorithm are numerically demonstrated.
We study the maximum $\phi_N^*$ of the partition function of the two-dimensional (subcritical) Gaussian directed polymer over a $\sqrt N \times \sqrt N$ box. We show that $\phi_N^*/\log N$ converges towards a constant $\sigma^*$, which we identify to be the same as for the maximum of a branching random walk with a slowly varying variance profile as studied in Fang-Zeitouni, J. Stat. Phys. 2012 and (in the context of the generalized random energy model) in Bovier-Kurkova, Ann. Inst. H. Poincare 2004.
We present new and improved non-asymptotic deviation bounds for Dirichlet processes (DPs), formulated using the Kullback-Leibler (KL) divergence, which is known for its optimal characterization of the asymptotic behavior of DPs. Our method involves incorporating a controlled perturbation within the KL bound, effectively shifting the base distribution of the DP in the upper bound. Our proofs rely on two independent approaches. In the first, we use superadditivity techniques to convert asymptotic bounds into non-asymptotic ones via Fekete's lemma. In the second, we carefully reduce the problem to the Beta distribution case. Some of our results extend similar inequalities derived for the Beta distribution, as presented in [27].
We introduce a generalization of parking functions in which cars are limited in their movement backwards and forwards by two nonnegative integer parameters $k$ and $\ell$, respectively. In this setting, there are $n$ spots on a one-way street and $m$ cars attempting to park in those spots, and $1\leq m\leq n$. We let $\alpha=(a_1,a_2,\ldots,a_m)\in[n]^m$ denote the parking preferences for the cars, which enter the street sequentially. Car $i$ drives to their preference $a_i$ and parks there if the spot is available. Otherwise, car $i$ checks up to $k$ spots behind their preference, parking in the first available spot it encounters if any. If no spots are available, or the car reaches the start of the street, then the car returns to its preference and attempts to park in the first spot it encounters among spots $a_i+1,a_i+2,\ldots,a_i+\ell$. If car $i$ fails to park, then parking ceases. If all cars are able to park given the preferences in $\alpha$, then $\alpha$ is called a $(k,\ell)$-pullback $(m,n)$-parking function. Our main result establishes counts for these parking functions in two ways: counting them based on their final parking outcome (the order in which the cars park on the street), and via a recursive formula. Specializing $\ell=n-1$, our result gives a new formula for the number of $k$-Naples $(m,n)$-parking functions and further specializing $m=n$ recovers a formula for the number of $k$-Naples parking functions given by Christensen et al. The specialization of $k=\ell=1$, gives a formula for the number of vacillating $(m,n)$-parking functions, a generalization of vacillating parking functions studied by Fang et al., and the $m=n$ result answers a problem posed by the authors. We conclude with a few directions for further study.
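The parking rule described above can be simulated directly. The following sketch (an illustrative implementation, with spot indices 1-based as in the abstract) checks whether a preference list is a $(k,\ell)$-pullback $(m,n)$-parking function, and brute-force counting recovers the classical parking function numbers $(n+1)^{n-1}$ in the specialization $k=0$, $\ell=n-1$, $m=n$:

```python
from itertools import product

def parks(alpha, n, k, ell):
    """True iff every car parks under the (k, ell)-pullback rule.

    Car i drives to its preference a. If occupied, it checks up to k
    spots behind (not before spot 1), taking the first free one;
    failing that, it tries spots a+1, ..., a+ell (not past spot n).
    """
    occupied = [False] * (n + 1)  # spots 1..n
    for a in alpha:
        spot = None
        if not occupied[a]:
            spot = a
        else:
            # backward phase: a-1 down to a-k
            for s in range(a - 1, max(a - k, 1) - 1, -1):
                if not occupied[s]:
                    spot = s
                    break
            if spot is None:
                # forward phase: a+1 up to a+ell
                for s in range(a + 1, min(a + ell, n) + 1):
                    if not occupied[s]:
                        spot = s
                        break
        if spot is None:
            return False
        occupied[spot] = True
    return True

def count(m, n, k, ell):
    return sum(parks(alpha, n, k, ell)
               for alpha in product(range(1, n + 1), repeat=m))

# k = 0, ell = n-1 gives classical parking functions: (n+1)^(n-1).
print([count(n, n, 0, n - 1) for n in (1, 2, 3)])  # → [1, 3, 16]
```

Setting $k=\ell=1$ instead enumerates the vacillating $(m,n)$-parking functions mentioned above.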
This paper is concerned with a natural variant of the contact process modeling the spread of knowledge on the integer lattice. Each site is characterized by its knowledge, measured by a real number ranging from 0 = ignorant to 1 = omniscient. Neighbors interact at rate $\lambda$, which results in both neighbors attempting to teach each other a fraction $\mu$ of their knowledge, and individuals die at rate one, which results in a new individual with no knowledge. Starting with a single omniscient site, our objective is to study whether the total amount of knowledge on the lattice converges to zero (extinction) or remains bounded away from zero (survival). The process dies out when $\lambda \leq \lambda_c$ and/or $\mu = 0$, where $\lambda_c$ denotes the critical value of the contact process. In contrast, we prove that, for all $\lambda > \lambda_c$, there is a unique phase transition in the direction of $\mu$, and for all $\mu > 0$, there is a unique phase transition in the direction of $\lambda$. Our proof of survival relies on block constructions showing more generally convergence of the knowledge to infinity, while our proof of extinction relies on martingale techniques showing more generally an exponential decay of the knowledge.
Dynamical systems on networks are inherently high-dimensional unless the number of nodes is extremely small. Dimension reduction methods for dynamical systems on networks aim to find a substantially lower-dimensional system that preserves key properties of the original dynamics such as bifurcation structure. A class of such methods proposed in network science research entails finding a one- (or low-) dimensional system that a particular weighted average of the state variables of all nodes in the network approximately obeys. We formulate and mathematically analyze this dimension reduction technique for dynamical systems on dense graphons, or the limiting, infinite-dimensional object of a sequence of graphs with an increasing number of nodes. We first theoretically justify the continuum limit for a nonlinear dynamical system of our interest, and the existence and uniqueness of the solution of graphon dynamical systems. We then derive the reduced one-dimensional system on graphons and prove its convergence properties. Finally, we perform numerical simulations for various graphons and dynamical system models to assess the accuracy of the one-dimensional approximation.
We study the subrank of real order-three tensors and give an upper bound to the subrank of a real tensor given its complex subrank. Using similar arguments to those used by Bernardi-Blekherman-Ottaviani, we show that all subranks between the minimal typical subrank and the maximal typical subrank, which equals the generic subrank, are also typical. We then study small tensor formats with more than one typical subrank. In particular, we construct a $3 \times 3 \times 5$-tensor with subrank $2$ and show that the subrank of the $4 \times 4 \times 4$-quaternion multiplication tensor is $2$. Finally, we consider the tensor associated to componentwise complex multiplication in $\mathbb{C}^n$ and show that this tensor has real subrank $n$ - informally, no more than $n$ real scalar multiplications can be carried out using a device that does $n$ complex scalar multiplications. We also prove a version of this result for other real division algebras.
Applied category theory often studies symmetric monoidal categories (SMCs) whose morphisms represent open systems. These structures naturally accommodate complex wiring patterns, leveraging (co)monoidal structures for splitting and merging wires, or compact closed structures for feedback. A key example is the compact closed SMC of design problems (DP), which enables a compositional approach to co-design in engineering. However, in practice, the systems of interest may not be fully known. Recently, Markov categories have emerged as a powerful framework for modeling uncertain processes. In this work, we demonstrate how to integrate this perspective into the study of open systems while preserving consistency with the underlying SMC structure. To this end, we employ the change-of-base construction for enriched categories, replacing the morphisms of a symmetric monoidal $\mathcal{V}$-category $\mathcal{C}$ with parametric maps $A \to \mathcal{C}(X,Y)$ in a Markov category induced by a symmetric monoidal monad. This results in a symmetric monoidal 2-category $N_*\mathcal{C}$ with the same objects as $\mathcal{C}$ and reparametrization 2-cells. By choosing different monads, we capture various types of uncertainty. The category underlying $\mathcal{C}$ embeds into $N_*\mathcal{C}$ via a strict symmetric monoidal functor, allowing (co)monoidal and compact closed structures to be transferred. Applied to DP, this construction leads to categories of practical relevance, such as parametrized design problems for optimization, and parametrized distributions of design problems for decision theory and Bayesian learning.
We consider measures supported on sets of irrational numbers possessing many consecutive partial quotients satisfying a condition based on the previous partial quotients. We show that under mild assumptions, such sets will always support measures whose Fourier transform decays to zero.
We propose a variational tail bound for norms of random vectors under moment assumptions on their one-dimensional marginals. We also propose a simplified version of the bound that parametrizes the ``aggregating'' distribution in the proposed variational bound by considering a certain pushforward of the Gaussian distribution. Furthermore, we show that the proposed method recovers some of the well-known bounds on norms of Gaussian random vectors, as well as a recent concentration inequality for the spectral norm of a sum of independent and identically distributed positive semidefinite matrices.
Classifying groups up to quasi-isometry is a fundamental problem in geometric group theory. In the context of hyperbolic and relatively hyperbolic groups, one of the key invariants in this classification is the boundary at infinity. F. Paulin proved that two hyperbolic groups are quasi-isometric if and only if their Gromov boundaries are quasiconformally equivalent. In this article, we extend Paulin's result to relatively hyperbolic groups and their Bowditch boundaries. We define a notion of quasiconformal map preserving the shadows of horoballs relative to a point at the Bowditch boundary, and we show that every coarsely cusp-preserving quasi-isometry between two relatively hyperbolic groups induces a shadow-preserving quasiconformal map between their Bowditch boundaries. Conversely, we show that if the Bowditch boundaries of two relatively hyperbolic groups are quasiconformally equivalent and the quasiconformal map coarsely preserves the shadows of horoballs relative to each boundary point, then the quasiconformal map induces a coarsely cusp-preserving quasi-isometry between those groups.
We extend the model structure on the category $\mathbf{Cat}(\mathcal{E})$ of internal categories studied by Everaert, Kieboom and Van der Linden to an algebraic model structure. Moreover, we show that it restricts to the category of internal groupoids. We show that in this case, the algebraic weak factorisation system that consists of the algebraic trivial cofibrations and algebraic fibrations forms a model of Martin-L\"{o}f type theory. Taking $\mathcal{E} = \mathbf{Set}$ and forgetting the algebraic structure, this recovers Hofmann and Streicher's groupoid model of Martin-L\"{o}f type theory. Finally, we are able to provide axioms on a $(2,1)$-category which ensure that it gives an algebraic model of Martin-L\"{o}f type theory.
I begin by explaining to non-specialists why resolution of singularities in characteristic 0 works. Then I go into some ideas telling how it actually works. I finish with a brief discussion of related results on foliations. I report on work with Andr\'e Belotto da Silva, Michael Temkin, and Jaros{\l}aw W{\l}odarczyk; any claim to originality is joint with them and appears in the paper [AdSTW25].
The main goal of this article is to investigate the relationship between action accessibility and weak action representability in the context of varieties of non-associative algebras over a field. Specifically, using an argument of J. R. A. Gray in the setting of groups, we prove that the varieties of $k$-nilpotent Lie algebras ($k \geq 3$) and the varieties of $n$-solvable Lie algebras ($n \geq 2$) do not form weakly action representable categories. These are the first known examples of action accessible varieties of non-associative algebras that fail to be weakly action representable, establishing that a subvariety of a (weakly) action representable variety of non-associative algebras need not be weakly action representable. Finally, we refine J. R. A. Gray's result by proving that the varieties of $k$-nilpotent groups ($k \geq 3$) and that of $2$-solvable groups are not weakly action representable.
We generalize the work of Erdos-Pomerance and Fiori-Shallue on counting Frobenius pseudoprimes from the cases of degree one and two, respectively, to arbitrary degree. More specifically, we provide formulas for counting the number of false witnesses for a number $n$ with respect to Grantham's Frobenius primality test. We also provide conditional asymptotic lower bounds on the average number of Frobenius pseudoprimes and asymptotic upper bounds on the same.
In this paper, we introduce a shape descriptor that we call "interior function". This is a Topological Data Analysis (TDA) based descriptor that refines previous descriptors for image analysis. Using this concept, we define subcomplex lacunarity, a new index that quantifies geometric characteristics of necrosis in tumors such as conglomeration. Building on this framework, we propose a set of indices to analyze necrotic morphology and construct a diagram that captures the distinct structural and geometric properties of necrotic regions in tumors. We present an application of this framework in the study of MRIs of Glioblastomas (GB). Using cluster analysis, we identify four distinct subtypes of Glioblastomas that reflect geometric properties of necrotic regions.
For $t \in \mathbb{N}$, we say that a colouring of $E(K_n)$ is $\textit{almost}$ $t$-$\textit{Gallai}$ if no two rainbow $t$-cliques share an edge. Motivated by a lemma of Berkowitz on bounding the modulus of the characteristic function of clique counts in random graphs, we study the maximum number $\tau_t(n)$ of rainbow $t$-cliques in an almost $t$-Gallai colouring of $E(K_n)$. For every $t \ge 4$, we show that $n^{2-o(1)} \leq \tau_t(n) = o(n^2)$. For $t=3$, surprisingly, the behaviour is substantially different. Our main result establishes that $$\left ( \frac{1}{2}-o(1) \right ) n\log n \le \tau_3(n) = O\big (n^{\sqrt{2}\log n} \big ),$$ which gives the first non-trivial improvements over the simple lower and upper bounds. Our proof combines various applications of the probabilistic method and a generalisation of the edge-isoperimetric inequality for the hypercube.
Sectional curvature bounds are of central importance in the study of Riemannian manifolds, both in smooth differential geometry and in the generalized synthetic setting of Alexandrov spaces. Riemannian metrics along with metric spaces of bounded sectional curvature enjoy a variety of, oftentimes rigid, geometric properties. The purpose of this article is to introduce and discuss a new notion of sectional curvature bounds for manifolds equipped with continuous Riemannian metrics of Geroch--Traschen regularity, i.e., $H^1_{\mathrm{loc}} \cap C^0$, based on a distributional version of the classical formula. Our main result states that for $g \in C^1$, this new notion recovers the corresponding bound based on triangle comparison in the sense of Alexandrov. A weaker version of this statement is also proven for locally Lipschitz continuous metrics.
We show that critical parking trees conditioned to be fully parked converge in the scaling limits towards the Brownian growth-fragmentation tree, a self-similar Markov tree different from Aldous' Brownian tree recently introduced and studied by Bertoin, Curien and Riera. As a by-product of our study, we prove that positive non-linear polynomial equations involving a catalytic variable display a universal polynomial exponent $5/2$ at their singularity, confirming a conjecture by Chapuy, Schaeffer and Drmota & Hainzl. Compared to previous analytical works on the subject, our approach is probabilistic and exploits an underlying random walk hidden in the random tree model.
We extend the celebrated Glivenko-Cantelli theorem, sometimes called the fundamental theorem of statistics, from its standard setting of total variation distance to all $f$-divergences. A key obstacle in this endeavor is to define $f$-divergence on a subcollection of a $\sigma$-algebra that forms a $\pi$-system but not a $\sigma$-subalgebra. This is a side contribution of our work. We will show that this notion of $f$-divergence on the $\pi$-system of rays preserves nearly all known properties of standard $f$-divergence, yields a novel integral representation of the Kolmogorov-Smirnov distance, and has a Glivenko-Cantelli theorem.
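For orientation, the classical Glivenko-Cantelli statement that this abstract generalizes concerns the Kolmogorov-Smirnov distance, i.e. the sup-distance between the empirical CDF and the true CDF over rays $(-\infty, x]$. The sketch below (classical setting only, not the $f$-divergence extension of the paper) computes it exactly at the jump points of the empirical CDF for Uniform(0,1) samples:

```python
import random

def ks_distance(sample):
    """Kolmogorov-Smirnov distance between the empirical CDF of a
    Uniform(0,1) sample and the true CDF F(x) = x.

    The sup over x is attained just before or at a jump of the
    empirical CDF, so it suffices to check both one-sided values
    at each order statistic.
    """
    xs = sorted(sample)
    n = len(xs)
    return max(max(abs((i + 1) / n - x), abs(i / n - x))
               for i, x in enumerate(xs))

# Glivenko-Cantelli in action: the distance shrinks as n grows.
random.seed(42)
for n in (10, 100, 1000, 10000):
    sample = [random.random() for _ in range(n)]
    print(n, round(ks_distance(sample), 4))
```

The paper's contribution is to replace this sup over rays by an $f$-divergence evaluated on the $\pi$-system of rays, which is a strictly stronger mode of convergence.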
Andersson and Chru\'sciel showed that generic asymptotically hyperboloidal initial data sets admit polyhomogeneous expansions, and that only a non-generic subclass of solutions of the conformal constraint equations is free of logarithmic singularities. The purpose of this work is twofold. First, within the evolutionary framework of the constraint equations, we show that the existence of a well-defined Bondi mass brings the asymptotically hyperboloidal initial data sets into a subclass whose Cauchy development is guaranteed to admit a smooth boundary, by virtue of the results of Andersson and Chru\'sciel. Second, by generalizing a recent result of Beyer and Ritchie, we show that the existence of a well-defined Bondi mass and angular momentum, together with some mild restrictions on the free data, implies that the generic solutions of the parabolic-hyperbolic form of the constraint equations are completely free of logarithmic singularities. We also provide numerical evidence to show that in the vicinity of Kerr, asymptotically hyperboloidal initial data without logarithmic singularities can indeed be constructed.
The recent adoption of artificial intelligence (AI) in robotics has driven the development of algorithms that enable autonomous systems to adapt to complex social environments. In particular, safe and efficient social navigation is a key challenge, requiring AI not only to avoid collisions and deadlocks but also to interact intuitively and predictably with its surroundings. To date, methods based on probabilistic models and the generation of conformal safety regions have shown promising results in defining safety regions with a controlled margin of error, primarily relying on classification approaches and explicit rules to describe collision-free navigation conditions. This work explores how topological features contribute to explainable safety regions in social navigation. Instead of using behavioral parameters, we leverage topological data analysis to classify and characterize different simulation behaviors. First, we apply global rule-based classification to distinguish between safe (collision-free) and unsafe scenarios based on topological properties. Then, we define safety regions, $S_\varepsilon$, in the topological feature space, ensuring a maximum classification error of $\varepsilon$. These regions are built with adjustable SVM classifiers and order statistics, providing robust decision boundaries. Local rules extracted from these regions enhance interpretability, keeping the decision-making process transparent. Our approach initially separates simulations with and without collisions, outperforming methods that do not incorporate topological features. It offers a deeper understanding of robot interactions within a navigable space. We further refine safety regions to ensure deadlock-free simulations and integrate both aspects to define a compliant simulation space that guarantees safe and efficient navigation.
We introduce a modified Benamou-Brenier type approach leading to a Wasserstein type distance that allows for global invariances, specifically isometries, and we show that the problem can be reduced to orthogonal transformations. This distance is defined by penalizing the action with a costless movement of the particle that does not change the direction and speed of its trajectory. We show that for Gaussian distributions the problem reduces to measuring the Euclidean distance between their ordered vectors of eigenvalues, and we show a direct application in recovering latent Gaussian distributions.
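The Gaussian closed form reported above lends itself to a direct computation. The sketch below is purely illustrative: the function name and the choice of covariance-matrix eigenvalues as the ordered vectors are our assumptions based on the abstract's statement, not the paper's code.

```python
import numpy as np

def gaussian_invariant_distance(cov_a, cov_b):
    """Euclidean distance between the sorted eigenvalue vectors of two
    covariance matrices -- the closed form the abstract reports for
    centered Gaussians under the isometry-invariant distance.
    Hypothetical helper; normalization conventions may differ from the paper."""
    ev_a = np.sort(np.linalg.eigvalsh(cov_a))
    ev_b = np.sort(np.linalg.eigvalsh(cov_b))
    return float(np.linalg.norm(ev_a - ev_b))
```

By construction this quantity is unchanged when either covariance is conjugated by an orthogonal matrix, which is exactly the invariance under isometries that motivates the distance.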
A key trait of stochastic optimizers is that multiple runs of the same optimizer in attempting to solve the same problem can produce different results. As a result, their performance is evaluated over several repeats, or runs, on the problem. However, the accuracy of the estimated performance metrics depends on the number of runs and should be studied using statistical tools. We present a statistical analysis of the common metrics, and develop guidelines for experiment design to measure the optimizer's performance using these metrics to a high level of confidence and accuracy. To this end, we first discuss the confidence intervals of the metrics and how they are related to the number of runs of an experiment. We then derive a lower bound on the number of repeats needed to guarantee a given accuracy in the metrics. Using this bound, we propose an algorithm to adaptively adjust the number of repeats needed to ensure the accuracy of the evaluated metric. Our simulation results demonstrate the utility of our analysis and how it allows us to conduct reliable benchmarking as well as hyperparameter tuning, and prevents us from drawing premature conclusions regarding the performance of stochastic optimizers.
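A repeat-count bound of the kind described above can be sketched under a normal approximation. Everything in the snippet below (function names, the CLT-based bound, the stopping rule) is a hypothetical illustration of the general idea, not the paper's algorithm:

```python
import math
from statistics import mean, stdev

def required_runs(sample_std, half_width, confidence_z=1.96):
    """Normal-approximation lower bound on the number of runs needed so the
    confidence interval on the mean metric has the requested half-width."""
    return math.ceil((confidence_z * sample_std / half_width) ** 2)

def adaptive_repeats(run_optimizer, half_width, min_runs=10, max_runs=10_000):
    """Adaptively add runs until the current sample supports the requested
    accuracy (or the run budget is exhausted). Returns (estimate, n_runs)."""
    results = [run_optimizer() for _ in range(min_runs)]
    while len(results) < max_runs:
        if len(results) >= required_runs(stdev(results), half_width):
            break
        results.append(run_optimizer())
    return mean(results), len(results)
```

The bound is re-evaluated after each run because the sample standard deviation itself is only an estimate, which is the essential reason an adaptive scheme is preferable to fixing the number of repeats in advance.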
We reveal strong and weak inequalities relating two fundamental macroscopic quantum geometric quantities, the quantum distance and Berry phase, for closed paths in the Hilbert space of wavefunctions. We recount the role of quantum geometry in various quantum problems and show that our findings place new bounds on important physical quantities.
We consider price competition among multiple sellers over a selling horizon of $T$ periods. In each period, sellers simultaneously offer their prices and subsequently observe their respective demands, which are unobservable to competitors. The demand function for each seller depends on all sellers' prices through a private, unknown, and nonlinear relationship. To address this challenge, we propose a semi-parametric least-squares estimation of the nonlinear mean function, which does not require sellers to communicate demand information. We show that when all sellers employ our policy, their prices converge at a rate of $O(T^{-1/7})$ to the Nash equilibrium prices that sellers would reach if they were fully informed. Each seller incurs a regret of $O(T^{5/7})$ relative to a dynamic benchmark policy. A theoretical contribution of our work is proving the existence of equilibrium under shape-constrained demand functions via the concept of $s$-concavity and establishing regret bounds for our proposed policy. Technically, we also establish new concentration results for the least squares estimator under shape constraints. Our findings offer significant insights into dynamic competition-aware pricing and contribute to the broader study of non-parametric learning in strategic decision-making.
We introduce an open-ended test grounded in algorithmic probability that can avoid benchmark contamination in the quantitative evaluation of frontier models in the context of their Artificial General Intelligence (AGI) and Superintelligence (ASI) claims. Unlike other tests, this test does not rely on statistical compression methods (such as GZIP or LZW), which are more closely related to Shannon entropy than to Kolmogorov complexity. The test challenges fundamental aspects of intelligence, such as synthesis and model creation in the context of inverse problems (generating new knowledge from observation). We argue that metrics based on model abstraction and optimal Bayesian inference for planning can provide a robust framework for testing intelligence, including natural intelligence (human and animal), narrow AI, AGI, and ASI. Our results show no clear evidence of LLM convergence towards a defined level of intelligence, particularly AGI or ASI. We found that LLM model versions tend to be fragile and incremental, as new versions may perform worse than older ones, with progress largely driven by the size of training data. The results were compared with a hybrid neurosymbolic approach that theoretically guarantees model convergence from optimal inference based on the principles of algorithmic probability and Kolmogorov complexity. The method outperforms LLMs in a proof-of-concept on short binary sequences. Our findings confirm suspicions regarding the fundamental limitations of LLMs, exposing them as systems optimised for the perception of mastery over human language. Progress among different LLM versions from the same developers was found to be inconsistent and limited, particularly in the absence of a solid symbolic counterpart.
In this work, we show a connection between superstatistics and position-dependent mass (PDM) systems in the context of the canonical ensemble. The key point is to set the fluctuation distribution of the inverse temperature in terms of the system's PDM. For PDMs associated with Tsallis and Kaniadakis nonextensive statistics, the pressure and entropy of the ideal gas are lower than in the standard case while maintaining monotonic behavior. Gases of non-interacting harmonic oscillators provided with quadratic and exponential PDMs exhibit, respectively, the behavior of a standard ED harmonic oscillator gas and a linear specific heat, the latter being consistent with Nernst's third law of thermodynamics. Thus, a combined PDM-superstatistics scenario offers an alternative way to study the effects of the inhomogeneities of PDM systems on their thermodynamics.
Nearly all identifiability results in unsupervised representation learning, inspired by, e.g., independent component analysis, factor analysis, and causal representation learning, rely on assumptions of additive independent noise or noiseless regimes. In contrast, we study the more general case where noise can take arbitrary forms, depend on latent variables, and be non-invertibly entangled within a nonlinear function. We propose a general framework for identifying latent variables in nonparametric noisy settings. We first show that, under suitable conditions, the generative model is identifiable up to certain submanifold indeterminacies even in the presence of non-negligible noise. Furthermore, under structural or distributional variability conditions, we prove that latent variables of general nonlinear models are identifiable up to trivial indeterminacies. Based on the proposed theoretical framework, we also develop corresponding estimation methods and validate them in various synthetic and real-world settings. Interestingly, our estimate of true GDP growth from alternative measurements suggests more insightful information about the economies than official reports. We expect our framework to provide new insight into how both researchers and practitioners deal with latent variables in real-world scenarios.
We introduce a new framework that employs Malliavin calculus to derive explicit expressions for the score function -- i.e., the gradient of the log-density -- associated with solutions to stochastic differential equations (SDEs). Our approach integrates classical integration-by-parts techniques with modern tools, such as Bismut's formula and Malliavin calculus, to address linear and nonlinear SDEs. In doing so, we establish a rigorous connection between the Malliavin derivative, its adjoint (the Malliavin divergence or the Skorokhod integral), Bismut's formula, and diffusion generative models, thus providing a systematic method for computing $\nabla \log p_t(x)$. For the linear case, we present a detailed study proving that our formula is equivalent to the actual score function derived from the solution of the Fokker--Planck equation for linear SDEs. Additionally, we derive a closed-form expression for $\nabla \log p_t(x)$ for nonlinear SDEs with state-independent diffusion coefficients. These advancements provide fresh theoretical insights into the smoothness and structure of probability densities and practical implications for score-based generative modelling, including the design and analysis of new diffusion models. Moreover, our findings promote the adoption of the robust Malliavin calculus framework in machine learning research. These results directly apply to various pure and applied mathematics fields, such as generative modelling, the study of SDEs driven by fractional Brownian motion, and the Fokker--Planck equations associated with nonlinear SDEs.
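For intuition on the linear case discussed above, it is worth recalling the standard textbook closed form (not taken from the paper) that any valid score derivation must reproduce. For the scalar Ornstein-Uhlenbeck SDE $dX_t = a X_t\,dt + \sigma\,dW_t$ with $X_0 = x_0$, the transition density is Gaussian with mean $m_t = x_0 e^{at}$ and variance $v_t = \frac{\sigma^2}{2a}\left(e^{2at} - 1\right)$, so the score is
$$\nabla \log p_t(x) \;=\; -\frac{x - m_t}{v_t} \;=\; -\frac{2a\left(x - x_0 e^{at}\right)}{\sigma^2\left(e^{2at} - 1\right)},$$
which is exactly the kind of expression that, in the linear setting, can be checked against the solution of the corresponding Fokker--Planck equation.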
Current algorithms for large-scale industrial optimization problems typically face a trade-off: they either require exponential time to reach optimal solutions, or employ problem-specific heuristics. To overcome these limitations, we introduce SPLIT, a general-purpose quantum-inspired framework for decomposing large-scale quadratic programs into smaller subproblems, which are then solved in parallel. SPLIT accounts for cross-interactions between subproblems, which are usually neglected in other decomposition techniques. The SPLIT framework can integrate generic subproblem solvers, ranging from standard branch-and-bound methods to quantum optimization algorithms. We demonstrate its effectiveness through comparisons with commercial solvers on the MaxCut and Antenna Placement Problems, with up to 20,000 decision variables. Our results show that SPLIT is capable of providing drastic reductions in computational time, while delivering high-quality solutions. In these regards, the proposed method is particularly suited for near real-time applications that require a solution within a strict time frame, or when the problem size exceeds the hardware limitations of dedicated devices, such as current quantum computers.
We employ an adapted version of H\"ormander's asymptotic systems method to show heuristically that the standard good-bad-ugly model admits formal polyhomogeneous asymptotic solutions near null infinity. In a related earlier approach, our heuristics were unable to capture potential leading order logarithmic terms appearing in the asymptotic solution of the good equation (the standard wave equation). Presently, we work with an improved method which overcomes this shortcoming, allowing the faithful treatment of a larger class of initial data in which such logarithmic terms are manifest. We then generalize this method to encompass models that include stratified null forms as sources and whose wave operators are built from an asymptotically flat metric. We then apply this result to the Einstein field equations in generalized harmonic gauge and compute the leading decay in~$R^{-1}$ of the Weyl scalars, where~$R$ is a suitably defined radial coordinate. We detect an obstruction to peeling, a decay statement on the Weyl scalars~$\Psi_n$ that is ensured by smoothness of null infinity. The leading order obstruction appears in~$\Psi_2$ and, in agreement with the literature, can only be suppressed by a careful choice of initial data.
We study the martingale property and moment explosions of a signature volatility model, where the volatility process of the log-price is given by a linear form of the signature of a time-extended Brownian motion. Excluding trivial cases, we demonstrate that the price process is a true martingale if and only if the order of the linear form is odd and a correlation parameter is negative. The proof involves a fine analysis of the explosion time of a signature stochastic differential equation. This result is of key practical relevance, as it highlights that, when used for approximation purposes, the linear combination of signature elements must be taken of odd order to preserve the martingale property. Once martingality is established, we also characterize the existence of higher moments of the price process in terms of a condition on a correlation parameter.
A close relation has recently emerged between two of the most fundamental concepts in physics and mathematics: chaos and supersymmetry. In striking contrast to the semantics of the word 'chaos,' the true physical essence of this phenomenon now appears to be a spontaneous order associated with the breakdown of the topological supersymmetry (TS) hidden in all stochastic (partial) differential equations, i.e., in all systems from a broad domain ranging from cosmology to nanoscience. Among the low-hanging fruits of this new perspective, which can be called the supersymmetric theory of stochastic dynamics (STS), are theoretical explanations of 1/f noise and self-organized criticality. Central to STS is the physical meaning of the TS breaking order parameter (OP). In this paper, we argue that the OP is a field-theoretic embodiment of the 'butterfly effect' (BE) -- the infinitely long dynamical memory that is definitive of chaos. We stress that the formulation of the corresponding effective theory for the OP would mark the inception of the first consistent physical theory of the BE. Such a theory, potentially a valuable tool in solving chaos-related problems, would parallel the well-established and successful field theoretic descriptions of superconductivity, ferromagnetism and other known orders arising from the spontaneous breakdown of various symmetries of nature.
The Friedman test has been extensively applied as a nonparametric alternative to the conventional F procedure for comparing treatment effects in randomized complete block designs. A chi-square distribution provides a convenient approximation for determining the critical values of the Friedman procedure in hypothesis testing. However, the chi-square approximation is generally conservative, and its accuracy declines as the number of treatments increases. This paper describes an alternative transformation of the Friedman statistic along with an approximate F distribution that has the same numerator degrees of freedom as the ANOVA F test. Moreover, two approximate noncentral F distributions are presented for the proposed F-transformation under the alternative hypothesis of heterogeneous location shifts. Explicit power functions are derived when the underlying populations have the uniform, normal, Laplace, and exponential distributions. Theoretical examination and empirical assessment are presented to validate the advantages of the proposed approaches over the existing methods of the Friedman test. The developed test and power procedures are recommended due to their consistently acceptable Type I error rates and accurate power calculations for the location shift structures and population distributions considered here.
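To make the kind of F-transformation discussed above concrete, the sketch below computes the Friedman statistic and the classical Iman-Davenport F-transformation, a well-known predecessor of such transformations; it is illustrative background, not the paper's new proposal, and the function names and data layout (a list of blocks, each listing the $k$ treatment measurements) are our own choices.

```python
def friedman_statistic(data):
    """Friedman chi-square statistic for n blocks x k treatments,
    with average ranks assigned to ties within each block."""
    n, k = len(data), len(data[0])
    rank_sums = [0.0] * k
    for block in data:
        order = sorted(range(k), key=lambda j: block[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            while j + 1 < k and block[order[j + 1]] == block[order[i]]:
                j += 1  # extend the run of tied values
            avg_rank = (i + j) / 2 + 1  # average of 1-based positions i..j
            for t in range(i, j + 1):
                ranks[order[t]] = avg_rank
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3.0 * n * (k + 1)

def iman_davenport_f(chi2, n, k):
    """Iman-Davenport transformation of the Friedman statistic, referred to
    an F distribution with (k - 1, (k - 1)(n - 1)) degrees of freedom."""
    return (n - 1) * chi2 / (n * (k - 1) - chi2)
```

The Iman-Davenport statistic already shares the numerator degrees of freedom $k-1$ with the ANOVA F test, which is the property the abstract highlights for its proposed transformation.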
Containers are used to carve out a class of strictly positive data types in terms of shapes and positions. They can be interpreted via a fully faithful functor into endofunctors on Set. Monadic containers are those containers whose interpretation as a Set functor carries a monad structure. The category of containers is closed under container composition and is a monoidal category, whereas monadic containers do not in general compose. In this paper, we develop a characterisation of distributive laws of monadic containers. Distributive laws were introduced as a sufficient condition for the composition of the underlying functors of two monads to also carry a monad structure. Our development parallels Ahman and Uustalu's characterisation of distributive laws of directed containers, i.e. containers whose Set functor interpretation carries a comonad structure. Furthermore, by combining our work with theirs, we construct characterisations of mixed distributive laws (i.e. of directed containers over monadic containers and vice versa), thereby completing the 'zoo' of container characterisations of (co)monads and their distributive laws. We have found these characterisations amenable to the development of existence and uniqueness proofs of distributive laws, particularly in the mechanised setting of Cubical Agda, in which most of the theory of this paper has been formalised.
The primary focus of this thesis is the numerical investigation of chaos in Hamiltonian models describing charged particle orbits in plasma, star motions in barred galaxies, and the diffusion of orbits in multidimensional maps. We systematically explore the interplay between magnetic and kinetic chaos in toroidal fusion plasmas, where non-axisymmetric perturbations disrupt smooth magnetic flux surfaces, generating complex particle trajectories. Using the Generalized Alignment Index (GALI) method, we efficiently quantify chaos, compare the behavior of magnetic field lines and particle orbits, visualize the radial distribution of chaotic regions, and offer GALI as a valuable tool for studying plasma physics dynamics. We also study the evolution of phase space structures in a 3D barred galactic potential, following successive 2D and 3D pitchfork and period-doubling bifurcations of periodic orbits. By employing the `color and rotation' technique to visualize the system's 4D Poincar\'e surfaces of section, we reveal distinct structural patterns. We further investigate the long-term diffusion transport and chaos properties of single and coupled standard maps, focusing on parameters inducing anomalous diffusion through accelerator modes exhibiting ballistic transport. Using different ensembles of initial conditions in chaotic regions influenced by these modes, we examine asymptotic diffusion rates and time scales, identifying conditions suppressing anomalous transport and leading to long-term convergence to normal diffusion across coupled maps. Lastly, we perform the first comprehensive investigation into the GALI indices for various attractors in continuous and discrete-time dissipative systems, extending the method's application to non-Hamiltonian systems. A key aspect of our work involves analyzing and comparing the GALIs with Lyapunov exponents for systems exhibiting hyperchaotic motion.
We study the hypothesis testing problem of detecting the presence of a thermal source emitting coherent quantum states towards an arbitrary but fixed number $K$ of detectors versus the situation where the detectors are presented uncorrelated thermal noise of the same average energy in the setting of asymmetric hypothesis testing. We compare two variations of this theme: In the first one the detectors perform heterodyne or homodyne detection and then transmit their measured results to a central processing unit with unlimited computational resources. In the second one the detectors are able to teleport the quantum states to the central unit, which acts on the received quantum states with unlimited quantum computational resources. We find that when the average received energy per detector goes to zero, the ratio of the error exponents goes to infinity, indicating an infinite-fold quantum advantage.
We investigate differentially private estimators for individual parameters within larger parametric models. While generic private estimators exist, the estimators we provide rest on new local notions of estimand stability, and these notions allow procedures that provide private certificates of their own stability. By leveraging these private certificates, we provide computationally and statistically efficient mechanisms that release private statistics that are, at least asymptotically in the sample size, essentially unimprovable: they achieve instance optimal bounds. Additionally, we investigate the practicality of the algorithms both in simulated data and in real-world data from the American Community Survey and US Census, highlighting scenarios in which the new procedures are successful and identifying areas for future work.
Solving multiple parametrised related systems is an essential component of many numerical tasks. Borrowing strength from previously solved systems through learning can make this process faster. In this work, we propose a novel probabilistic linear solver over the parameter space. It leverages information from the solved linear systems in a regression setting to provide an efficient posterior mean and covariance. We advocate using this as a companion regression model for the preconditioned conjugate gradient method, and discuss the favourable properties of the posterior mean and covariance as the initial guess and preconditioner. We also provide several design choices for this companion solver. Numerical experiments showcase the benefits of using our novel solver in a hyperparameter optimisation problem.
Urban Air Mobility (UAM) offers a solution to current traffic congestion by using electric Vertical Takeoff and Landing (eVTOL) vehicles to provide on-demand air mobility in urban areas. Effective traffic management is crucial for efficient operation of UAM systems, especially for high-demand scenarios. In this paper, we present a centralized framework for conflict-free takeoff scheduling of eVTOLs in on-demand UAM systems. Specifically, we provide a scheduling policy, called VertiSync, which jointly schedules UAM vehicles for servicing trip requests and rebalancing, subject to safety margins and energy requirements. We characterize the system-level throughput of VertiSync, which determines the demand threshold at which the average waiting time transitions from being stable to being increasing over time. We show that the proposed policy maximizes throughput for sufficiently large fleet size and if the UAM network has a certain symmetry property. We demonstrate the performance of VertiSync through a case study for the city of Los Angeles, and show that it significantly reduces average passenger waiting time compared to a first-come first-serve scheduling policy.
Recent work has shown that the (block) Lanczos algorithm can be used to extract approximate energy spectra and matrix elements from (matrices of) correlation functions in quantum field theory, and identified exact coincidences between Lanczos analysis methods and others. In this work, we note another coincidence: the Lanczos algorithm is equivalent to the well-known Rayleigh-Ritz method applied to Krylov subspaces. Rayleigh-Ritz provides optimal eigenvalue approximations within subspaces; we find that spurious-state filtering allows these optimality guarantees to be retained in the presence of statistical noise. We explore the relation between Lanczos and Prony's method, their block generalizations, generalized pencil of functions (GPOF), and methods based on the generalized eigenvalue problem (GEVP), and find they all fall into a larger "Prony-Ritz equivalence class", identified as all methods which solve a finite-dimensional spectrum exactly given sufficient correlation function (matrix) data. This equivalence allows simpler and more numerically stable implementations of (block) Lanczos analyses.
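The equivalence noted above, Lanczos as the Rayleigh-Ritz method applied to Krylov subspaces, can be made concrete with a minimal sketch (illustrative only, not the paper's implementation): run the Lanczos recurrence, then take the eigenvalues of the resulting tridiagonal matrix as the Ritz values.

```python
import numpy as np

def lanczos_ritz(A, v0, m):
    """Run m steps of the Lanczos recurrence on symmetric A and return the
    Ritz values, i.e. the eigenvalues of the tridiagonal projection of A
    onto the Krylov subspace span{v0, A v0, ..., A^{m-1} v0}."""
    n = len(v0)
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w = w - alpha[j] * V[:, j]
        if j > 0:
            w = w - beta[j - 1] * V[:, j - 1]
        # full reorthogonalisation against all previous Lanczos vectors,
        # which tames the loss of orthogonality in floating point
        w = w - V[:, : j + 1] @ (V[:, : j + 1].T @ w)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.sort(np.linalg.eigvalsh(T))
```

When m equals the matrix dimension and the starting vector has a component along every eigenvector, the Krylov subspace is the full space and the Ritz values reproduce the exact spectrum, which is the noiseless limit of the exactness property defining the "Prony-Ritz equivalence class" mentioned above.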