Electromagnetism is at the heart of the Standard Model, but despite all the successes of modern theory, our basic description of light traveling in free space remains unsatisfactory. The four bosons that compose light are introduced in a rather trivial way, simply by quantizing the (scalar and vector) potential amplitudes. This leads to quite a few conceptual problems linked to the two virtual photons, the longitudinal and scalar ones. Moreover, the spin of the photon is rather poorly handled by conventional Quantum Electro-Dynamics. Therefore: what if the field's Lagrangian density were, to some extent, not properly chosen? Here we look at these questions from a completely different point of view, bypassing the problems encountered in conventional theories. We choose a pragmatic approach that relies only on basic Condensed-Matter-style Quantum Mechanics and a specific gauge-fixing procedure for the potential field: we propose the concept of gauge duality, which leads to an original quantization scheme. Building on the Poincaré symmetries, all constants of motion are identified. Four bosons are introduced, responsible for a proper spin-1 pseudo-vector and for parity- and charge-related operators. They emerge from scalar fields that can be viewed as generalized fluxes (in the sense of M. Devoret), with quantum-conjugate virtual charges responsible for the "confinement" of light in space within "virtual electrodes", somehow reproducing the holographic principle originally proposed for gravity. All observable properties of light in free space then arise from a specific choice of eigenstates (a procedure replacing here the Ward identity of Quantum Field Theory). Real photons are thus the "helicity" bosons, while virtual ones correspond to a "parity charge". Photon and anti-photon are (as expected) the same particle, linked through an internal gauge transformation.
This paper investigates how the gauge group $\text{SU}_{I}(2) \times \text{U}_{Y}(1)$ of the electroweak interactions can be derived using recent geometric techniques within the real Clifford Algebra $\mathbb{R}_4 = \text{Cl}_4(\mathbb{R})$. Central to this approach is a novel procedure for constructing the spinor space of $\mathbb{R}_4$ directly, \emph{without} complexification. We show that $\mathbb{R}_4$ naturally accommodates representations for the $\text{SU}_{I}(2) \times \text{U}_{Y}(1)$ gauge bosons and a single generation of chiral Standard Model leptons, with weak isospin acting exclusively on left-chiral states. Specifically, under hypercharge and isospin $(Y, I_3)$, $\mathbb{R}_4$ contains $(-1, \mp \tfrac{1}{2})$ irreps for left-chiral electrons and neutrinos, a $(-2, 0)$ irrep for a right-chiral electron, and a $(0, 0)$ irrep for a sterile right-chiral neutrino. The distinction between left- and right-chiral particles arises from the grade parity of the irreps, providing a natural geometric explanation for why only left-chiral particles couple to $\text{SU}_{I}(2)$. The emergence of the correct eigenvalues directly from first principles highlights the promise of this framework for the geometric foundations of Electroweak Theory and the Standard Model, as well as for Grand Unified Theories more broadly. This paper is the first panel of the Lepton Triptych, which will ultimately present the full Yang-Mills theory of the electroweak model based on these principles.
We derive analytic formulas to reconstruct particle-averaged quantities from experimental results that are affected by finite particle-detection efficiency. These formulas are derived under the assumption that the probabilities of observing individual particles are independent. The resulting formulas do not agree with the conventionally used intuitive ones.
Photoreduction of the cryptochrome protein in the retina is a well-known mechanism by which birds navigate using the geomagnetic field, yet the nature of the resulting biosignal remains unclear. The absorption of blue light by the flavin adenine dinucleotide (FAD) chromophore can alter the distribution of electrons in cryptochrome and create radical pairs with separated charges. In this study, the spin dynamics of electrons in the radical pair and their coupling with spatial position were investigated by computational modeling from a quantum mechanical perspective. Several interactions were considered in the presence of an external magnetic field, and the resulting electric dipole moment in cryptochrome was computed as the quantity emerging from this coupling. The computations show that the induced electric dipole moment clearly depends on the characteristics of the applied magnetic field, even after dissipative effects are taken into account. In fact, our findings indicate that the radical pair in the cryptochrome protein is a magnetic biosensor, in the sense that in the presence of the geomagnetic field, variations in spin states can influence its electric dipole moment, which may be interpreted by the bird as an orientation signal. The results can be used in the advancement of bio-inspired technologies that replicate animal magnetic sensitivity. On the other hand, with increasing concern about the detrimental effects of electromagnetic fields on wildlife and human health, studying the phenomenon of magnetoreception can contribute to a deeper understanding of how biological structures interact with these fields.
The formation and evolution of galaxies and other astrophysical objects have become of great interest, especially since the launch of the James Webb Space Telescope in 2021. The mass, size, and density of objects in the early universe appear to be drastically different from those predicted by the standard cosmology - the $\Lambda$CDM model. This work shows that the mass-size-density evolution is not surprising when we use the CCC+TL cosmology, which is based on the concepts of covarying coupling constants in an expanding universe and the tired light effect contributing to the observed redshift. This model is consistent with supernovae Pantheon+ data, the angular size of the cosmic dawn galaxies, BAO, the CMB sound horizon, galaxy formation time scales, time dilation, galaxy rotation curves, etc., and does not have the coincidence problem. The effective radii $r_e$ of the objects are larger in the new model by a factor of $(1+z)^{0.93}$. Thus, the object size evolution in different studies, estimated as $r_e \propto (1+z)^s$ with $s=-1.0 \pm 0.3$, is modified to $r_e \propto (1+z)^{s+0.93}$, the dynamical mass by $(1+z)^{0.93}$, and the number density by $(1+z)^{-2.80}$. The luminosity modification increases slowly with $z$, reaching 1.8 at $z=20$. Thus, the stellar mass increase is modest, and the luminosity and stellar density decreases are mainly due to the larger object size in the new model. Since the aging of the universe is stretched in the new model, its temporal evolution is much slower (e.g., at $z=10$, the age is about a dex longer); stars, black holes, and galaxies do not have to form at unrealistic rates.
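As a quick numerical aid (a minimal sketch in our own notation, not code from the paper), the quoted power-law corrections can be applied to $\Lambda$CDM-derived sizes, dynamical masses, and number densities as follows; all input values below are hypothetical placeholders.

```python
# Minimal sketch (not the authors' code): convert LambdaCDM-based size/mass/density
# estimates to the CCC+TL scalings quoted in the abstract.

def ccc_tl_rescale(z, r_e_lcdm, m_dyn_lcdm, n_lcdm, s_lcdm=-1.0):
    """Apply the quoted (1+z)-power corrections to LambdaCDM-derived quantities.

    r_e_lcdm : effective radius inferred under LambdaCDM
    m_dyn_lcdm : dynamical mass inferred under LambdaCDM
    n_lcdm : number density inferred under LambdaCDM
    s_lcdm : measured size-evolution exponent, r_e ~ (1+z)**s
    """
    f = (1.0 + z) ** 0.93
    return {
        "r_e": r_e_lcdm * f,               # radii larger by (1+z)^0.93
        "m_dyn": m_dyn_lcdm * f,           # dynamical mass scales the same way
        "n": n_lcdm * (1.0 + z) ** -2.80,  # number density reduced
        "s_eff": s_lcdm + 0.93,            # size-evolution exponent shifts by +0.93
    }

# Hypothetical example at z = 10
print(ccc_tl_rescale(z=10.0, r_e_lcdm=0.5, m_dyn_lcdm=1e10, n_lcdm=1e-4))
```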
Urban green infrastructure is essential for climate resilience, public health, and environmental justice. Yet, the absence of standardised methods to quantify urban nature hinders the development of equitable greening policies. In this study, we present the first national, building-level assessment of the 3-30-300 urban greening rule, a policy framework proposing that every citizen should see three trees from their home, live in a neighbourhood with 30\% canopy cover, and reside within 300 m of a public green space. Using high-resolution LiDAR (Vegetation Object Model), Sentinel-2 imagery, and open geospatial datasets for over 28 million buildings across England, we integrate raster, vector, and socioeconomic data within a scalable computational framework. Tree segmentation was performed using adaptive local-maximum filtering, canopy cover estimated at 1 m resolution, and park accessibility derived from network-based walking distances. Inequality in access to nature was quantified via Gini coefficients and modelled with spatial error regressions against socioeconomic deprivation. Our results reveal that while most urban areas meet the 3-tree proximity rule, fewer than 3\% achieve 30\% canopy cover, and only a minority satisfy all three components simultaneously. Crucially, ambient greenness (trees and canopy) is concentrated in affluent areas, whereas proximity to parks is greatest in dense, often deprived urban centres, exposing a multidimensional nature gap. This framework establishes a reproducible, open, and computationally efficient blueprint for evaluating urban nature equity at scale, supporting the integration of environmental justice metrics into national urban planning agendas.
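For illustration only (not the study's pipeline), a per-building 3-30-300 compliance check and a Gini coefficient over a greenness metric can be sketched as below; the building records are hypothetical, with the 3-tree/30\%/300 m thresholds taken from the rule as stated above.

```python
# Minimal sketch (illustrative only): per-building compliance with the 3-30-300 rule
# and a Gini coefficient over a greenness metric.
import numpy as np

def meets_3_30_300(visible_trees, canopy_fraction, park_distance_m):
    """True if a building satisfies all three components of the 3-30-300 rule."""
    return (visible_trees >= 3) and (canopy_fraction >= 0.30) and (park_distance_m <= 300.0)

def gini(x):
    """Gini coefficient of a non-negative array (0 = perfect equality)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

# Hypothetical per-building records: (visible trees, canopy fraction, walking distance to park)
buildings = [(4, 0.12, 180.0), (2, 0.35, 420.0), (5, 0.31, 250.0)]
share = np.mean([meets_3_30_300(*b) for b in buildings])
print(f"share meeting all three components: {share:.2f}")
print(f"Gini of canopy cover: {gini([b[1] for b in buildings]):.2f}")
```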
We present a unified framework that fully represents electromagnetic potentials, fields, and sources in vacuum, based on a reinterpretation of the classical Hertz-potential formalism. In this construction, $\phi$, $A$, $E$, $B$, $\rho$, and $J$ are systematically derived from a single vector wavefield $\Gamma(x, t)$ (called the "$\Gamma$-potential"), which is structurally aligned with the classical electric Hertz potential but of broader scope. A surjective mapping is established from such wavefields to all electromagnetic configurations in vacuum (that are sufficiently regular). This mapping induces a well-defined algebraic correspondence between the solution space of Maxwell's equations and the linear space of $C_t^{3} C_x^{3}$ vector wavefields (modulo the relevant symmetries), thereby enabling a framework for structural analysis of electromagnetic fields via their associated wavefields. Gauge freedom and the Lorenz gauge are naturally preserved; charge conservation and Maxwell's equations are inherently encoded in this representation. Building on this framework, we also introduce a transformation that provides a systematic method for generating new electromagnetic solutions from known ones. This transformation, called the "$\Gamma$-transformation", generalizes classical gauge transformations and may facilitate the exploration of hidden structures and symmetries in the solution space of Maxwell's equations.
We develop a low-energy model that can be used at all times to describe the dynamics of DNA bubbles at temperatures below the melting point. The Schrödinger equation associated with this problem is solved in imaginary time with a quantum Coulomb potential, and we obtain an approximate expression for its most general physical solution as a linear combination of the states whose energies are close to the lowest energy. We can then determine the probability density, the first-passage time density, and the correlation functions in terms of Bessel functions. Our findings are consistent with results obtained directly from the Fokker-Planck equation. Comparisons with the Gamma and Diffusion models are discussed.
This essay provides a critical overview of the mathematical kinetic theory of active particles, which is used to model and study collective systems consisting of interacting living entities, such as those involved in behavior and evolution. The main objective is to study the interactions of large systems of living entities mathematically. More specifically, the study relates to the complex features of living systems and the mathematical tools inspired by statistical physics. The focus is on the mathematical description of these interactions and their role in deriving differential systems that describe the aforementioned dynamics. The paper demonstrates that studying these interactions naturally yields new mathematical insights into systems in the natural sciences and behavioral economics.
We present a novel derivation of the spacetime metric generated by matter, without invoking Einstein's field equations. For static sources, the metric arises from a relativistic formulation of D'Alembert's principle, where the inertial force is treated as a real dynamical entity that exactly compensates gravity. This leads to a conformastatic metric whose geodesic equation, parametrized by proper time, reproduces the relativistic version of Newton's second law for free fall. To extend the description to moving matter, uniformly or otherwise, we apply a Lorentz transformation to the static metric. The resulting non-static metric accounts for the motion of the sources and, remarkably, matches the weak-field limit of general relativity as obtained from the linearized Einstein equations in the de Donder or Lorenz gauge. This approach, valid at least at Solar System scales, where gravitational fields are weak, is grounded in a new dynamical interpretation of the Equivalence Principle. It demonstrates how gravity can emerge from the relativistic structure of inertia, without postulating or solving Einstein's equations.
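For orientation (our notation, not necessarily the authors' exact expression), a conformastatic line element of the kind described above and its expansion to first order in $\phi/c^2$, which coincides with the standard weak-field limit of general relativity, can be written as
\begin{equation}
  ds^2 = -e^{2\phi/c^2}\,c^2 dt^2 + e^{-2\phi/c^2}\left(dx^2 + dy^2 + dz^2\right)
  \;\simeq\;
  -\left(1+\tfrac{2\phi}{c^2}\right)c^2 dt^2
  + \left(1-\tfrac{2\phi}{c^2}\right)\left(dx^2+dy^2+dz^2\right),
\end{equation}
where $\phi$ denotes the Newtonian potential of the static sources.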
Linear response theory is a well-established method in physics and chemistry for exploring excitations of many-body systems. In particular, the quasiparticle random-phase approximation (QRPA) provides a powerful microscopic framework by building excitations on top of the mean-field vacuum; however, its high computational cost limits model calibration and uncertainty quantification studies. Here, we present two complementary QRPA surrogate models and apply them to study response functions of finite nuclei. One is a reduced-order model that exploits the underlying QRPA structure, while the other utilizes the recently developed parametric matrix model algorithm to construct a map between the system's Hamiltonian and observables. Our benchmark applications, the calculation of the electric dipole polarizability of ${}^{180}$Yb and the $\beta$-decay half-life of ${}^{80}$Ni, show that both emulators can achieve 0.1\%--1\% accuracy while offering a six to seven orders of magnitude speedup compared to state-of-the-art QRPA solvers. These results demonstrate that the developed QRPA emulators are well-positioned to enable Bayesian calibration and large-scale studies of computationally expensive physics models describing the properties of many-body systems.
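As a rough illustration of the second emulator class (a minimal sketch under our own assumptions, not the authors' implementation), a parametric-matrix-model-style surrogate builds a small matrix that depends affinely on the model parameters and reads an observable off its spectrum; here the matrices are random placeholders rather than being fitted to QRPA output.

```python
# Minimal sketch (assumptions ours): a tiny parametric-matrix-model-style emulator.
# A small symmetric matrix depends affinely on the model parameters, and an observable
# is read off from its lowest eigenvalue (placeholder choice). In practice the matrices
# would be trained on high-fidelity QRPA results.
import numpy as np

rng = np.random.default_rng(0)
dim, n_params = 6, 3

def sym(a):
    return 0.5 * (a + a.T)

M0 = sym(rng.normal(size=(dim, dim)))
Mi = [sym(rng.normal(size=(dim, dim))) for _ in range(n_params)]

def emulate(theta):
    """Affine-in-parameters matrix; observable = lowest eigenvalue."""
    M = M0 + sum(t * Mk for t, Mk in zip(theta, Mi))
    return np.linalg.eigvalsh(M)[0]

print(emulate(np.array([0.1, -0.2, 0.3])))
```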
We study $V_{\mathrm{B}}^-$ centres generated by helium focused ion beam (FIB) irradiation in thin ($\sim$70 nm) hBN nanoflakes, in order to investigate the effect of implantation conditions on the key parameters that influence the magnetic field sensitivity of $V_{\mathrm{B}}^-$ quantum sensors. Using a combination of photoluminescence, optically detected magnetic resonance, and Raman spectroscopy, we examine the competing factors of maximising signal intensity through larger $V_{\mathrm{B}}^-$ concentration against the degradation in spin coherence and lattice quality observed at high ion fluences. Our results indicate that both the $V_{\mathrm{B}}^-$ spin properties and hBN lattice parameters are largely preserved up to an ion fluence of $10^{14}$ ions/cm$^2$, beyond which significant degradation occurs in both. At the optimal implantation dose, an AC magnetic sensitivity of $\sim 1\,\mu\mathrm{T}/\sqrt{\mathrm{Hz}}$ is achieved. Using the patterned implantation enabled by the FIB, we find that $V_{\mathrm{B}}^-$ centres and the associated lattice damage are well localised to the implanted regions. This work demonstrates how careful selection of fabrication parameters can be used to optimise the properties of $V_{\mathrm{B}}^-$ centres in hBN, supporting their application as quantum sensors based on 2D materials.
This paper presents a data-driven approach, referred to as Quantized Skeletal Learning (QSL), for generating skeletal mechanisms. The approach has two key components: (1) a weight vector that can be used to eliminate relatively unimportant species and reactions, and (2) an end-to-end differentiable program whose loss-function gradients, with respect to the weight vector, can be used to adjust those weights. To promote sparsity in the weight vector -- and to reduce the influence of certain reactions or species to zero -- an $l_1$-regularized objective is employed alongside the standard mean squared error loss, thus removing the least important components. The proposed QSL approach is validated by generating skeletal mechanisms for methane and ethylene based on the GRI 3.0 and USC II mechanisms, respectively, demonstrating effectiveness in deriving skeletal mechanisms with various levels of fidelity. Two variants of QSL, designated as QSL-R and QSL-S, are tested; these focus on eliminating reactions and species, respectively. Analysis of ignition delay times and species mass fractions demonstrates QSL's capability to reliably and efficiently extract data-driven skeletal mechanisms of varying fidelities from detailed mechanisms.
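The composite objective described above can be sketched as follows (our notation; the weight vector, predictions, and regularization strength are hypothetical placeholders):

```python
# Minimal sketch (our notation, not the paper's code): mean squared error on target
# observables plus an l1 penalty that drives entries of the species/reaction weight
# vector toward zero.
import numpy as np

def qsl_style_loss(predictions, targets, w, lam=1e-3):
    mse = np.mean((np.asarray(predictions) - np.asarray(targets)) ** 2)
    l1 = lam * np.sum(np.abs(w))
    return mse + l1

w = np.array([1.0, 0.02, 0.7, 0.0])  # hypothetical per-reaction weights
print(qsl_style_loss([1.1, 0.9], [1.0, 1.0], w))
```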
We review recent advances in the study of nonlinear dynamics in mode-locked fibre lasers operating in the breathing (pulsating) soliton regime. Leveraging advanced diagnostics and control strategies -- including genetic algorithms -- we uncover a rich spectrum of dynamical behaviours, including frequency-locked breathers, fractal Farey hierarchies, Arnold tongues with anomalous features, and breather molecular complexes. We also identify a novel route to chaos via modulated subharmonic states. These findings underscore the utility of fibre lasers as model systems for exploring complex dissipative dynamics, offering new opportunities for ultrafast laser control and fundamental studies in nonlinear science.
Layered van der Waals materials offer novel opportunities for on-chip waveguiding and development of integrated photonic circuits. In the strong light-matter coupling regime, their nonlinear response can be significantly enhanced, which is crucial for developing active photonic devices. However, probing the nonlinearity of waveguide modes in subwavelength-thick structures is challenging as they are not directly accessible from far-field. Here we apply a novel nonlinear near-field spectroscopic technique based on a GaP solid immersion lens and femtosecond laser excitation to study nonlinearity of guided modes in monolayer WS$_2$ encapsulated in hBN under the strong light-matter coupling regime. We reveal formation of exciton-polaritons with $\sim 50$ meV Rabi splitting and demonstrate a pump-induced transition from strong to weak coupling. Our results show that exciton resonance saturation and broadening lead to an efficient nonlinear response of guided polaritons, which can be employed for developing compact van der Waals photonic switches and modulators.
Collaboration with peers both inside and outside the classroom can be an invaluable tool for helping students learn physics. We investigated the impact of peer collaboration on learning physics by examining the characteristics of women and men who typically worked alone versus those who typically collaborated with peers in their algebra-based introductory physics course, both before the COVID-19 pandemic and during the pandemic, when classes were held on Zoom. Our findings indicate that, on average, students who worked with peers had higher grades and reported greater peer influence on their physics self-efficacy during the pandemic compared to those who worked alone. We also observed that, for both women and men, a larger percentage of students typically worked in groups before the pandemic, while a greater percentage typically worked alone during the pandemic. We discuss these results in relation to students' prior academic preparation, physics grades, self-efficacy, and their perception of the effectiveness of peer collaboration on their physics self-efficacy.
Many early career educators, such as teaching assistants (TAs) in college courses, as well as pre-college educators, need help with both content and pedagogical knowledge to effectively help their students learn. One pedagogical approach that has been found effective in prior studies is collaboration with peers. Collaborative learning not only has the potential to help educators develop content knowledge but can also improve their pedagogical knowledge. This study examines the performance of physics graduate students, enrolled in a professional development course for TAs, on the Magnetism Conceptual Survey, highlighting the impact of peer collaboration on learning both content and pedagogy. Peer interaction significantly improved performance, driven by both construction of knowledge (where the group answered a question correctly but only one member had the correct individual response) and co-construction of knowledge (where the group succeeded despite both members initially answering incorrectly). Beyond improving content understanding, peer collaboration can also foster pedagogical skills by encouraging early educators such as TAs to use peers as learning resources and communicate ideas effectively to support mutual understanding. These dual benefits, enhancing both content mastery and teaching abilities, demonstrate that this approach holds value not only for the professional development of TAs but also for pre-college professional development programs, where it can be adapted to improve teaching and learning outcomes.
The interaction of the two beams in a collider leads to a variety of effects that may limit the performance of the machine. This lecture introduces the basic aspects necessary to understand the design of modern colliders.
Artificial intelligence (AI) holds significant promise for enhancing intraoperative perception and decision-making in telesurgery, where physical separation impairs sensory feedback and control. Despite advances in medical AI and surgical robotics, conventional electronic AI architectures remain fundamentally constrained by the compounded latency from serial processing of inference and communication. This limitation is especially critical in latency-sensitive procedures such as endovascular interventions, where delays over 200 ms can compromise real-time AI reliability and patient safety. Here, we introduce an Optical Computation-in-Communication (OCiC) framework that reduces end-to-end latency significantly by performing AI inference concurrently with optical communication. OCiC integrates Optical Remote Computing Units (ORCUs) directly into the optical communication pathway, with each ORCU experimentally achieving up to 69 tera-operations per second per channel through spectrally efficient two-dimensional photonic convolution. The system maintains ultrahigh inference fidelity within 0.1% of CPU/GPU baselines on classification and coronary angiography segmentation, while intrinsically mitigating cumulative error propagation, a longstanding barrier to deep optical network scalability. We validated the robustness of OCiC through outdoor dark fibre deployments, confirming consistent and stable performance across varying environmental conditions. When scaled globally, OCiC transforms long-haul fibre infrastructure into a distributed photonic AI fabric with exascale potential, enabling reliable, low-latency telesurgery across distances up to 10,000 km and opening a new optical frontier for distributed medical intelligence.
In this paper, we investigate superradiant emission in a free-electron laser (FEL) oscillator using a comprehensive three-dimensional time-dependent simulation tool. Using beam parameters from the University of Hawai`i (UH) at Mānoa FEL facility, our study shows that at nominal bunch length, the FEL radiation exhibits superradiant scaling in saturation. We then explore how cavity desynchronization enhances this regime by mitigating the laser lethargy effect in oscillators and improving overlap between the electron bunch and the radiation pulse, with peak power increased by more than a factor of five. Finally, we simulate a short-bunch operational mode with bunch length comparable to the slippage length, which accelerates saturation and further amplifies the FEL power. These findings highlight that the UH Mānoa FEL oscillator has the potential to achieve superradiant emission at its nominal operating mode, and that short-bunch operation offers further enhancement while requiring additional optimization.
The vacuum magnetic birefringence effect is a prediction of quantum electrodynamics: in the presence of a magnetic field, the vacuum behaves as a non-linear medium and exhibits birefringence. In this work, an experiment is proposed to measure this effect for the first time by sensing the changes in the frequencies of laser fields stabilized to the resonances of a 245 m long optical cavity whose eigenmode propagates through a string of 24 superconducting magnets arranged for the ALPS II experiment. Results from a prototype setup using a 19 m test cavity without a magnetic field are presented and projected in terms of the sensitivity of the proposed full-scale experiment.
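For context, the standard leading-order QED prediction for the field-induced birefringence (a textbook result quoted here for orientation, not a result of this work) is
\begin{equation}
  \Delta n = n_\parallel - n_\perp = 3 A_e B^2,
  \qquad
  A_e = \frac{2\alpha^2 \hbar^3}{45\,\mu_0\, m_e^4 c^5} \simeq 1.32\times 10^{-24}\ \mathrm{T}^{-2},
\end{equation}
for light propagating perpendicular to a magnetic field of strength $B$.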
Reciprocity breaking at optical frequencies typically relies on bulky magnets, dynamic modulation, or nonlinearities, all of which hinder chip-scale integration and the handling of unpolarised light. We introduce a fully passive, subwavelength metasurface that achieves polarisation-insensitive one-way transparency by combining self-magnetised ferrite nanodisks in a vortex state with symmetry-protected quasi-bound states in the continuum. The metasurface exhibits a pure synthetic moving-medium response at optical frequencies, yielding giant nonreciprocal directional dichroism. We report near-unity values for both the transmittance contrast and the emissivity-to-absorptivity ratio with experimentally widely available ferrite materials, all under unpolarised illumination and without external bias. Using temporal coupled-mode theory, we identify the design conditions necessary to maximise directional dichroism: critical coupling, Huygens-type resonance overlap, and strong inter-mode coupling. Furthermore, we propose a deterministic, stamp-assisted protocol for imprinting arbitrary, uniform, or patterned vortex configurations across large arrays of nanodisk meta-atoms, enabling scalable fabrication. This work establishes a practical route toward compact nonreciprocal photonics with applications in photonic gyrators, nonreciprocal wavefront engineering, and nonreciprocal solar cell technologies.
Strong magnetic fields are naturally self-generated in high-power, laser-solid interactions through the Biermann-battery mechanism. This work experimentally characterizes the 3D location and strength of these fields, rather than path-integrated quantities, through multi-view proton radiography and tomographic inversion on the OMEGA laser. We infer magnetic fields that extend several millimeters off the target surface into the hot, rarefied corona and are sufficient to strongly magnetize the plasma ($\Omega_{e}\tau_e \gg 1$). The data is used to validate MHD simulations incorporating recent improvements in magnetic transport modeling; we achieve reasonable agreement only with models with re-localization of transport by magnetic fields. This work provides a key demonstration of tomographic inversion in proton radiography, offering a valuable tool for investigating magnetic fields in laser-produced plasmas.
We present a parsimonious and robust machine learning approach for identifying plasma confinement states in fusion power plants (FPPs), where reliable identification of the low-confinement (L-mode) and high-confinement (H-mode) regimes is critical for safe and efficient operation. Unlike research-oriented devices, FPPs must operate with a severely constrained set of diagnostics. To address this challenge, we demonstrate that a minimalist model, using only electron cyclotron emission (ECE) signals, can deliver accurate and reliable state classification. ECE provides electron temperature profiles without the engineering or survivability issues of in-vessel probes, making it a primary candidate for FPP-relevant diagnostics. Our framework employs ECE as input, extracts features with radial basis functions, and applies a gradient boosting classifier, achieving test accuracy averaging 96\%. A robustness analysis and a feature-importance study confirm the reliability of the approach. These results demonstrate that state-of-the-art performance is attainable from a restricted diagnostic set, paving the way for minimalist yet resilient plasma control architectures for FPPs.
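A minimal sketch of such a pipeline (illustrative only, with synthetic profiles and placeholder labels; not the paper's code or data) is:

```python
# Minimal sketch: project an ECE-like temperature profile onto Gaussian radial basis
# functions and classify L-/H-mode with gradient boosting. Profiles are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def rbf_features(profile, radii, centers, width=0.1):
    """Coefficients of a Gaussian-RBF fit to a radial profile (least squares)."""
    Phi = np.exp(-((radii[:, None] - centers[None, :]) / width) ** 2)
    coeffs, *_ = np.linalg.lstsq(Phi, profile, rcond=None)
    return coeffs

rng = np.random.default_rng(1)
radii = np.linspace(0, 1, 64)
centers = np.linspace(0, 1, 8)

# Synthetic "profiles": label-1 samples get an added pedestal-like step near the edge.
X, y = [], []
for label in (0, 1) * 100:
    prof = np.exp(-3 * radii) + 0.05 * rng.normal(size=radii.size)
    if label:
        prof += 0.4 / (1 + np.exp((radii - 0.9) / 0.02))
    X.append(rbf_features(prof, radii, centers))
    y.append(label)

clf = GradientBoostingClassifier().fit(np.array(X[:160]), y[:160])
print("held-out accuracy:", clf.score(np.array(X[160:]), y[160:]))
```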
FeynCraft is a browser-based game that is designed to teach players the particle interactions of the Standard Model of particle physics, and how to link these interactions together to produce valid Feynman diagrams. It is primarily targeted at undergraduates and lecturers in introductory courses in particle physics, but we anticipate that it should also be useful for school pupils and teachers studying the basics of particle physics, and perhaps also current researchers. Users may draw particle lines and link them together to form vertices and complete diagrams, and FeynCraft determines invalid vertices using a sequence of simple rules, showing users which vertices are invalid and why. Diagrams may be drawn that involve both fundamental Standard Model particles and hadrons (where hadrons are represented by their constituent quark content). Users can also be presented with a process for which they must draw valid Feynman diagrams -- FeynCraft is able to generate such 'problems' at random, but there is also the facility to create, share, import and solve curated sets of problems. Alternatively, one is able to specify the process, and ask FeynCraft itself to generate the Feynman diagrams. Finally, we include several overlay options that give more information on a Feynman diagram (e.g. QCD colour flow, interaction strengths), and the option to export a drawn diagram as LaTeX code.
We discuss the most general form of the Lorentz transformation in 1+1 dimensional spacetime, focusing mainly on its superluminal branch. For this purpose, we introduce the 2-velocity of a reference frame and the clockwork postulate. Basic special relativity effects are discussed in the proposed framework. Different forms of the superluminal Lorentz transformation, which were studied in the literature, are critically examined from the perspective of our formalism. Counterintuitive features of the superluminal Lorentz transformation are identified both in our approach and in earlier studies.
Public discourse emerges from the interplay between individuals' willingness to voice their opinions and the structural features of the social networks in which they are embedded. In this work we investigate how choice homophily and triadic closure shape the emergence of the spiral of silence, the phenomenon whereby minority views are progressively silenced due to fear of isolation. We advance the state of the art in three ways. First, we integrate a realistic network formation model, where homophily and triadic closure co-evolve, with a mean-field model of opinion expression. Second, we perform a bifurcation analysis of the associated Q-learning dynamics, revealing conditions for hysteresis and path dependence in collective expression. Third, we validate our theoretical predictions through Monte Carlo simulations, which highlight the role of finite-size effects and structural noise. Our results show that moderate triadic closure can foster minority expression by reinforcing local cohesion, whereas excessive closure amplifies asymmetries and entrenches majority dominance. These findings provide new insights into how algorithmic reinforcement of clustering in online platforms can either sustain diversity of opinion or accelerate its suppression.
Microtubules stochastically switch between growth and shrinkage during catastrophe events across a very large range of filament lengths, with the length distribution at catastrophe peaking at a finite filament length, which can aid the search for chromosomes during mitosis. To model these distinct features, we introduce a topological model of a two-component microtubule cap, where protected edge states give rise to different phases of microtubule dynamics - growth, shrinkage, and a recently observed "stutter" phase. With only two free parameters, our model quantitatively reproduces the peaked catastrophe length distribution and its dependence on tubulin concentration from experimental data. The model further provides an analytical condition for when the catastrophe length distribution is peaked. Our work shows how microtubules may utilize topological edge states to promote length exploration, elucidating a novel mechanism for search and target reaching in cellular biology.
Shadow molecular dynamics provide an efficient and stable atomistic simulation framework for flexible charge models with long-range electrostatic interactions. While previous implementations have been limited to atomic monopole charge distributions, we extend this approach to flexible multipole models. We derive detailed expressions for the shadow energy functions, potentials, and force terms, explicitly incorporating monopole-monopole, dipole-monopole, and dipole-dipole interactions. In our formulation, both atomic monopoles and atomic dipoles are treated as extended dynamical variables alongside the propagation of the nuclear degrees of freedom. We demonstrate that introducing the additional dipole degrees of freedom preserves the stability and accuracy previously seen in monopole-only shadow molecular dynamics simulations. Additionally, we present a shadow molecular dynamics scheme where the monopole charges are held fixed while the dipoles remain flexible. Our extended shadow dynamics provide a framework for stable, computationally efficient, and versatile molecular dynamics simulations involving long-range interactions between flexible multipoles. This is of particular interest in combination with modern artificial intelligence and machine learning techniques, which are increasingly used to develop physics-informed and data-driven foundation models for atomistic simulations. These models aim to provide transferable, high-accuracy representations of atomic interactions that are applicable across diverse sets of molecular systems, which requires accurate treatment of long-range charge interactions.
Metasurfaces composed of subwavelength nanostructures enable simultaneous control of polarization and wavefront, greatly enhancing holographic information capacity. Building on this capability, we extend holography into the quantum domain by experimentally realizing Bell-state holograms: distinct holographic images encoded in polarization-entangled Bell states of photon pairs. A polarization-multiplexed dielectric metasurface generates spatial modes conditioned on both input and output polarizations, entangling the holographic pattern with the two-photon state. To characterize these quantum holograms, we further develop quantum hologram tomography, reconstructing the full density matrix of the holographic state pixel by pixel. The reconstructed density-matrix hologram reveals tailor-made holographic symbols attached to individual Bell states through the metasurface, with contrast built up among the different Bell components, as predicted by theory. This framework unifies metasurface photonics with quantum-state reconstruction and provides a scalable route toward high-dimensional quantum communication, encryption, and information processing based on holographically encoded quantum light.
The ability to control the spatial distribution of light, particularly in deep sub-wavelength areas, is important for a range of materials science, microscopy, and communications applications. Separately, materials science and communications rely on the ability to temporally shape the evolution of electromagnetic pulses. In this work we investigate theoretically the propagation of ultrafast pulses inside hyperbolic-metamaterial-based photonic funnels, which have recently been used to achieve deep subwavelength (wavelength/30) concentration of monochromatic mid-infrared light. By analyzing the complex spatio-temporal dynamics of the pulse-funnel interaction, we show that photonic funnels, in general, broaden bandwidth-limited ultrafast Gaussian pulses. We demonstrate that this broadening can be mitigated by pre-chirping the incoming light, realizing simultaneous intensity enhancement and spatio-temporal compression of mid-wave IR light in the all-semiconductor "designer metal" funnel platform. Our analysis suggests that, in combination with linear chirp, designer-metal-based photonic funnels can be used with 100 fs bandwidth- and diffraction-limited pulses to produce wavelength/30-scale signals of ~200 fs duration, with intensity enhancement on the order of 5. Lowering material absorption can further enhance the peak intensity. The results presented can be used to assess the prospects of ultrafast sub-diffraction light manipulation in other portions of the electromagnetic spectrum by adjusting the (meta)material composition of the funnels.
The origins of consonance in human music have long been contested, and today there are three primary hypotheses: aversion to roughness, preference for harmonicity, and preferences learned from cultural exposure. While the evidence is currently insufficient to disentangle the contributions of these hypotheses, I propose several reasons why roughness is an especially promising area for future study. The aim of this review is to summarize and critically evaluate roughness theory, models, and experimental data, and to highlight areas that deserve further research. I identify two key areas. First, there are fundamental issues with the definition and interpretation of results, due to tautology in the definition of roughness and a lack of independence in empirical measurements. Second, despite extensive model development, there is much duplication, and models have issues with data quality and overfitting. Future theory development should aim for model simplicity, and extra assumptions, features, and parameters should be evaluated systematically. Model evaluation should aim to maximise the breadth of stimuli that are predicted.
Flying focus techniques produce laser pulses whose focal points travel at arbitrary, controllable velocities. While this flexibility can enhance a broad range of laser-based applications, existing techniques constrain the motion of the focal point to the propagation direction of the pulse. Here, we introduce a flying focus configuration that decouples the motion of the focus from the propagation direction. A chirped laser pulse focused and diffracted by a diffractive lens and grating creates a focal point that can move both along and transverse to the propagation direction. The focal length of the lens, grating period, and chirp can be tuned to control the direction and velocity of the focus. Simulations demonstrate this control for a holographic configuration suited to high-power pulses, in which two off-axis pump beams with different focal lengths encode the equivalent phase of a chromatic lens and grating in a gas or plasma. For low-power pulses, conventional solid-state or adaptive optics can be used instead. Multi-dimensional control over the focal trajectory enables new configurations for applications, including laser wakefield acceleration of ions, steering of broadband THz radiation, and surface harmonic generation.
We introduce a modified corner cube reflector that encodes information from passive optical sensors in its retroreflected diffraction pattern, enabling remote sensor-state measurement over a single-ended optical link. The design interferes a reference path and a sensor-modulated path within the retroreflected beam to produce an interferometric signal suitable for reading out phase and amplitude variations with a square-law camera. This enables sensor-state determination in arbitrarily oriented passive nodes, extending coherent interferometric readout of chemical, biological, and physical sensors to scalable, robust, and inert field deployments.
All-perovskite tandem solar cells with narrow- and wide-bandgap perovskite absorbers are promising candidates for low-cost and high-efficiency photovoltaic applications. However, the open-circuit voltage of typical tandem structures is generally smaller than the sum of the individual voltages in the single-junction form; a quantity we call the voltage gap. Subcell optimization can only begin once nonradiative losses associated with each absorber layer can be properly identified. To address this, we used absolute electroluminescence hyperspectral imaging to construct external radiative efficiency maps of each subcell within the tandem stack and compare these measurements with single-junction devices. These measurements were then combined with additional electro-optical characterization and modeling to construct subcell current vs voltage curves. We find that the narrow-bandgap subcell contributes the most towards the voltage gap, and therefore fabrication and processing efforts should focus on reducing nonradiative recombination losses within the narrow-bandgap absorber.
A novel matrix method of analyzing ion saturation current data from a general three-dimensional (3D) array of unmagnetized Mach probe tips is developed and used with data sets from two 3D Mach probes to make initial measurements of local plasma flow velocity in reversed-field pinch (RFP) experiments in the Madison Symmetric Torus (MST). The two 3D Mach probes are composed of regular polyhedral arrays of six and four tips, respectively, with the six-tip array composed of three orthogonal pairs of mutually opposite tips at the vertices of a regular octahedron and the four-tip array composed of non-opposite tips at the vertices of a regular tetrahedron, the analysis of which is specifically facilitated by the matrix method. Velocity measurement uncertainties for the Mach probes are derived based on uncertainties in probe machining and ion saturation current measurements, and typical relative uncertainties for the probes are estimated to be of order several percent, likely smaller than systematic uncertainties related to the Mach probe calibration constant and experimental uncertainties related to plasma and probe conditioning. Initial results for the octahedral probe show flow speeds of roughly the expected magnitudes based on previous MST measurements but with some differences in flow direction, while those for the tetrahedral probe show similar flow directions to some previous measurements but also some larger than expected speeds. We consider possible causes for the unexpected results of these initial tests, with a focus on probe conditioning and fast electron issues.
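One generic way to cast such an analysis as a matrix problem (shown for orientation; not necessarily the authors' exact formulation or calibration constant) uses the common unmagnetized-probe model $J_i = J_0 \exp(-K\, \mathbf{M}\cdot \hat{n}_i)$ for tip normal $\hat{n}_i$, whose logarithm is linear in $(\ln J_0, K\mathbf{M})$ and can be solved by least squares for any 3D tip arrangement:

```python
# Illustrative sketch (a generic formulation, not necessarily the authors' method):
# least-squares Mach vector from ion-saturation currents on tips with unit normals.
import numpy as np

def mach_vector_from_currents(J, normals, K=1.34):
    """Solve ln J_i = ln J0 - K * M . n_i for the Mach vector M (K is a placeholder)."""
    A = np.hstack([np.ones((len(J), 1)), -np.asarray(normals)])  # columns: ln J0, K*Mx, K*My, K*Mz
    x, *_ = np.linalg.lstsq(A, np.log(J), rcond=None)
    return x[1:] / K

# Hypothetical octahedral tip array (three orthogonal opposite pairs) and synthetic data
normals = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1]], float)
M_true = np.array([0.10, -0.05, 0.02])
J = np.exp(np.log(2.0) - 1.34 * normals @ M_true)
print(mach_vector_from_currents(J, normals))  # recovers M_true
```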
We present an epi-illumination multi-camera array microscope (epi-MCAM) designed for wide-field reflective imaging of non-transparent samples. The epi-MCAM contains 24 tightly packed and synchronized epi-illumination microscope units, arranged in a $4 \times 6$ planar array at 18 mm spacing. Each unit contains a unique CMOS image sensor (13 megapixels each), an objective and tube lens pair, and a beamsplitter and epi-illumination light path. An epi-MCAM capture cycle produces a stitched image covering $72 \times 108~\mathrm{mm}^2$ at a micrometer scale resolution down to 2.46 $\mu$m. To image samples exceeding this native field of view, we translate the entire array across the sample surface to enable high-resolution coverage of large objects. We demonstrate the system's ability to image both flat and three-dimensionally structured reflective samples, such as semiconductor wafers and printed circuit boards, which highlight the epi-MCAM's strong potential within industrial inspection applications.
The dissipation mechanisms in weakly collisional plasmas have been a longstanding topic of investigation, where significant progress has been made in recent years. A recent promising development is the use of the "scale-filtered" Vlasov-Maxwell equations to fully quantify the scale-by-scale energy balance, a feature that was absent when using fluid models in kinetic plasmas. In particular, this method reveals that the energy transfer in kinetic scales is fully accounted for by the scale-filtered pressure-strain interaction. Despite this progress, the influence of ion-electron thermal disequilibrium on the kinetic-scale energy budget remains poorly understood. Using two-dimensional fully kinetic particle-in-cell simulations of decaying plasma turbulence, we systematically investigate the pressure-strain interaction and its components at sub-ion scales by varying electron-to-ion temperature ratios. Our analysis focuses on three key ingredients of the pressure-strain interaction: the normal and shear components of Pi-D and pressure dilatation. Our results demonstrate that the scale-filtered pressure-strain interaction is dominated by scale-filtered Pi-D across the kinetic range, with the shear component consistently providing the dominant contribution. We find that the scale-filtered normal and shear contributions of Pi-D exhibit persistent anticorrelation and opposite signs across all kinetic scales. We also discover that the amplitude of both anisotropic components for each species scales directly with their temperature and inversely with the temperature of the other species, while the scale-filtered pressure dilatation remains negligible compared to the Pi-D terms but shows enhanced compressibility effects as plasma temperatures decrease. We discuss the implications of these findings in thermally non-equilibrated plasmas, such as in the turbulent magnetosheath and solar wind.
Surface meltwater from glaciers and ice sheets contributes significantly to sea level rise, yet the dynamics governing its transport and retention in cold firn remain poorly constrained. We present a vertically integrated model that includes phase change during the migration of aquifers in cold firn and a constant residual trapping of liquid water. The model provides a unified framework connecting gravity-driven flows in temperate soils with those in cold firn, highlighting the analogous physics governing both systems. Analytical solutions are derived for constant-volume aquifers and validated against numerical solutions; together they elucidate key features of meltwater dynamics and serve as benchmarks for firn hydrologic models. Finally, we demonstrate a three-dimensional expansion of an aquifer in cold, heterogeneous firn. The analytical and numerical solutions demonstrate that the lateral propagation of the aquifer slows at lower initial firn temperatures because of the enhanced reduction in porosity and the associated loss of liquid water. In summary, our framework offers insights into the formation and evolution of firn aquifers in percolation zones, helping to elucidate how modulated meltwater fluxes affect surface mass loss and contribute to changes in global sea levels.
This paper discusses transport barrier formation and layering as consequences of jam formation. Extensive use is made of analogies with the theory of traffic flow in one dimension. The relation of flux jamming to motility induced phase separation (MIPS) is explained. Two routes to heat flux jamming are identified. The first is due to a rollover in the heat flux-pulse size relation, i.e. $dQ_T(\delta T)/d\delta T<0$, and is similar to the condition of flux-gradient bistability. The second occurs when the delay time between pulse and heat flux exceeds a critical value. This does not require bistability and tends to occur near marginality. This analysis yields an estimate of the answer to the eternal question of 'how near is "near"?'. Staircase development is shown to follow jamiton train formation. The relation of jamming of avalanches to phase transitions in drift wave-zonal flow turbulence is elucidated. The formation of outward propagating blob trains and inward propagating void trains is demonstrated. The important role of turbulence spreading is identified.
Accurate simulations of the flow in the human airway are essential for advancing diagnostic methods. Many existing computational studies rely on simplified geometries or turbulence models, limiting their ability to resolve flow features such as shear-layer instabilities or secondary vortices. In this study, direct numerical simulations were performed for inspiratory flow through a detailed airway model which covers the nasal mask region to the 6th bronchial bifurcation. Simulations were conducted at two physiologically relevant \textsc{Reynolds} numbers with respect to the pharyngeal diameter, i.e., at $Re_p=400$ (resting) and $Re_p=1200$ (moderately elevated breathing). A lattice-Boltzmann method was employed to directly simulate the flow, i.e., no turbulence model was used. The flow field was examined across four anatomical regions: 1) the nasal cavity, 2) the naso- and oropharynx, 3) the laryngopharynx and larynx, and 4) the trachea and carinal bifurcation. The total pressure loss increased from 9.76 Pa at $Re_p=400$ to 41.93 Pa at $Re_p=1200$. The nasal cavity accounted for the majority of this loss at both Reynolds numbers, though its relative contribution decreased from 81.3\% at $Re_p=400$ to 73.4\% at $Re_p=1200$. At $Re_p=1200$, secondary vortices in the nasopharyngeal bend and turbulent shear layers in the glottis jet enhanced the local pressure losses. In contrast, the carinal bifurcation mitigated upstream unsteadiness and stabilized the flow. A key outcome is the spatial correlation between the pressure loss and the onset of flow instabilities across the four regions. This yields a novel perspective on how the flow resistance and vortex dynamics vary with geometric changes and flow rate.
We study the characteristics of small-amplitude nonlinear dust-ion-acoustic (DIA) solitary waves in active magnetized positive-ion-beam-driven dusty plasmas with the effects of nonadiabatic and adiabatic dust charge variations. In the model, we consider the ion-neutral collision and thereby consider the collision enhanced ion current to the dust-charging process and dust charge fluctuations. We show that the streaming of the positive-ion beam significantly affects the dust-charging process in which the dust charge number decreases (increases) with an increased beam velocity (number density). Using the standard reductive perturbation technique, we derive the evolution equations in the form of Korteweg-de Vries (KdV) equations for DIA solitary waves for two different cases: nonadiabatic and adiabatic dust charge variations. We study the effect of positive ion beam, dust charge variation, magnetic field, ion creation, and ion-neutral collision enhanced current on the wave characteristics. We find that the soliton energy decays with time and is affected by the beam velocity. Also, the solitary waves get damped by the effects of ion creation, ion loss, ion-neutral collision enhanced current, and dust charge variation. Although the ion beam does not change the polarity of solitary waves in the case of adiabatic dust charge variation, a transition from rarefactive to compressive solitary waves occurs in the presence of an ion beam with nonadiabatic dust charge variation.
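For reference (generic form with unspecified coefficients; the specific coefficients and damping terms are derived in the paper), the KdV-type evolution equation produced by the reductive perturbation technique is of the form
\begin{equation}
  \frac{\partial \phi_1}{\partial \tau}
  + A\, \phi_1 \frac{\partial \phi_1}{\partial \xi}
  + B\, \frac{\partial^3 \phi_1}{\partial \xi^3} = 0,
\end{equation}
where $\xi$ and $\tau$ are the stretched coordinates and $\phi_1$ is the first-order perturbed potential; the damping and soliton energy decay described above enter through additional terms (e.g. from nonadiabatic charge fluctuations, ion creation and loss, and collision-enhanced currents) on the right-hand side.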
The construction of the first phase of the High energy FRagment Separator (HFRS Phase-I) has been completed, and beam commissioning is anticipated to start in autumn 2025. This paper presents first-order and higher-order beam optics calculations for HFRS Phase-I, using measured magnet data, and evaluates its experimental performance in preparation for beam commissioning. The first-order optics of HFRS is calculated based on the sliced magnetic fields, and the higher-order aberrations are corrected using a self-developed program. Monte Carlo particle tracking is employed to analyze the beam phase spaces on the focal planes and to evaluate the experimental performance of the machine. The beam phase spaces on the focal planes are thoroughly examined, demonstrating that the higher-order aberrations have been well corrected. Moreover, the experimental performance of HFRS is evaluated based on the corrected higher-order optics, yielding satisfactory results: the secondary beams of interest can be well separated and exhibit high transmission efficiency. This work provides valuable insights for the upcoming beam commissioning of HFRS Phase-I. The effective correction of higher-order aberrations and optimized magnet settings lay a solid foundation for future experiments.
Starting from the Modified Newtonian Dynamics (MOND) theory and using an inverse approach, we construct a general form of the entropy expression associated with the horizon, based on the entropic nature of gravity. Using the thermodynamics-gravity correspondence in the cosmological setup, we apply the corrected entropy expression and find the modified Friedmann equation by three methods, namely, (i) the first law of thermodynamics, (ii) the entropic force scenario, and (iii) the emergent nature of gravity. We confirm that our model guarantees the generalized second law of thermodynamics for the universe enveloped by the apparent horizon. Our studies reveal that the MOND theory of gravity may be naturally deduced from a modification of the horizon entropy. These results may fill a gap in the literature concerning the theoretical origin of the MOND theory within the thermodynamics-gravity conjecture.
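As a schematic of the first-law route (the standard thermodynamics-gravity setup in units $\hbar=c=k_B=1$; notation ours, not the paper's exact expressions), one applies, at the apparent horizon of radius $r_A$,
\begin{equation}
  dE = T\, dS + W\, dV,
  \qquad
  T = \frac{1}{2\pi r_A}, \qquad
  W = \tfrac{1}{2}(\rho - p),
\end{equation}
so that any MOND-motivated correction to the horizon entropy $S(A)$ propagates into a corrected Friedmann equation through $dS = S'(A)\, dA$.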
To address the issue of beam collapse resulting from instantaneous instability during switch transitions in beam tracking, this paper proposes a novel beam switching method based on a row-by-row switching code table. The paper first establishes an abstract model of the beam tracking application scenario and introduces the reconfigurable intelligent surface (RIS) employed in this paper. Subsequently, simulations are conducted to compare the conventional direct beam switching method with the proposed row-by-row switching code table approach, thereby elucidating the advantages and limitations of the new method. In parallel, a RIS hardware platform is constructed in a microwave anechoic chamber for experimental validation. Both simulation and experimental results show that, by incorporating intermediate state transitions, the approach achieves beam tracking without beam collapse while incurring no significant gain loss. Finally, the paper discusses the applicability scope and potential scenarios for the proposed method. This research provides valuable insights for applications in mobile communications and radar detection.
A strategy for reconstructing the water wave field using a data assimilation method is proposed in the present study. Special treatments are introduced to address the ensemble diversity and the discontinuous free surface with hydrodynamic constraints when implementing the ensemble Kalman filter (EnKF) approach. Additionally, the proper orthogonal decomposition (POD) method is employed for dimensionality reduction, but from an ensemble point of view. The main purpose of this study is to achieve satisfactory consistency between the water waves computed by the numerical solver, particularly by the volume-of-fluid (VOF) method, and those observed in the laboratory wave flume within the test section of interest. To validate the proposed framework, three representative conditions are tested: regular waves, irregular waves, and plunging waves. The effects of observation noise, modal truncation, and other factors are also examined. From a practical perspective, this work provides a promising way to realize the coupling between experiments and numerical simulations, and establishes a prototype of a ``digital twin wave tank''.
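A minimal sketch of the stochastic EnKF analysis step used in such frameworks (the textbook update, not the study's implementation; the state dimension, observation operator, and noise levels below are placeholders) is:

```python
# Minimal sketch (standard stochastic EnKF analysis step): update an ensemble of
# states X (columns = members) with observations y and observation operator H.
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observations;
    H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs-error covariance."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)                 # anomalies
    Pf_Ht = A @ (H @ A).T / (n_ens - 1)                   # P_f H^T
    K = Pf_Ht @ np.linalg.inv(H @ Pf_Ht + R)              # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T  # perturbed obs
    return X + K @ (Y - H @ X)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))                             # e.g. 50 POD coefficients, 20 members
H = np.zeros((5, 50)); H[np.arange(5), np.arange(5)] = 1  # observe first 5 modes (placeholder)
y = rng.normal(size=5)
R = 0.01 * np.eye(5)
print(enkf_analysis(X, y, H, R, rng).shape)
```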
A robust, infrared-laser-excited photocathode with high quantum efficiency, high brightness, and low cost, operating under a moderate vacuum, has long been sought by the accelerator and microscopy communities. This study investigates various types of graphite photocathodes, including bulk, sheet, and flake graphite, in the regime of thermionically assisted photoemission under infrared laser irradiation. Our experiment reveals that, under space-charge-limited photoemission, the flake-graphite photocathode with a dense population of nano-graphene fins on its surface exhibits the highest quantum efficiency, which is 770 times greater than that of a copper photocathode irradiated by the same infrared laser at 1064 nm. Using our theory, which accounts for both thermionic and multiphoton emission, we determine that the flake-graphite photocathode is 200 times brighter than a copper photocathode irradiated by an infrared laser and is as bright as a LaB$_6$ field emitter.
This letter proposes a novel anti-interference communication method leveraging computational antennas, utilizing time averaging and 1-bit reconfigurable intelligent surfaces (RIS) to achieve robust signal modulation with minimal hardware complexity. We develop a communication model for computational antennas and propose an efficient signal processing algorithm optimized for temporal modulation. A USRP-based experimental platform is established to validate the approach under strong interference conditions (e.g., 5 dB jamming-to-signal ratio). Experimental results reveal up to an 80.9\% reduction in bit error rate (BER) and effective restoration of distorted images in transmission tests. Compared to conventional techniques like spread spectrum or frequency hopping, which require significant spectral resources, our method offers superior anti-interference performance without additional spectral overhead. This research provides valuable insights for radar detection, military communications, and next-generation wireless networks.
In this study, the thermocapillary actuation behavior of an odd viscous droplet on a uniformly heated surface is numerically investigated using a phase-field-based lattice Boltzmann method. The numerical results reveal that unlike a conventional viscous droplet that remains stationary on a uniformly heated surface, the presence of odd viscosity converts tangential Marangoni stresses into asymmetric normal stresses along the interface, thereby inducing spontaneous droplet motion. Specifically, when the odd viscosity coefficient is positive (negative), the droplet migrates toward the right (left). Additionally, due to the enhanced interfacial temperature gradient, the droplet migration velocity consistently increases with the contact angle. Further, it is observed that the droplet's migration velocity decreases with an increasing viscosity ratio between the surrounding fluid and the droplet. Finally, as the droplet is placed on an inclined surface, its migration direction and velocity are governed by the interaction between gravity and the odd viscosity-induced force, and in certain cases, the droplet can even climb upward against gravity.
In this letter, we present the design and implementation of a 2-bit digital metasurface operating in the Ku-band, engineered to exhibit advanced polarization conversion characteristics and support dual-polarization control for both X- and Y-polarizations. To address the challenge of array size scalability hindered by extensive DC control routing in 2-bit metasurfaces, we propose a novel RF-DC separation architecture. This approach integrates the metasurface and DC control circuitry onto separate printed circuit boards (PCBs), interconnected via pin cascading, enabling theoretically unlimited two-dimensional array expansion. To validate this design, a $4\times16\times16$ metasurface prototype was fabricated and experimentally evaluated, achieving a gain of 28.3 dB and an aperture efficiency of 21.02\%, confirming the scalability and performance of the proposed architecture. The developed 2-bit high-gain metasurface offers significant reference value for applications in long-distance communication and radar detection. Furthermore, the RF-DC separation architecture introduces a pioneering framework for large-scale metasurface deployment in practical engineering scenarios, enhancing design flexibility and scalability.
While flow optimization has been extensively studied in the continuum regime, its extension to rarefied gas flows remains less explored. Here, based on the Boltzmann model equation, an adjoint topology optimization method is employed to design two-dimensional single-inlet, multi-outlet manifolds, aiming to maximize the total mass flow rate while maintaining outflow uniformity. Two key findings are revealed. (1) Analogous to the Knudsen minimum in mass flow rate in the transition regime, a wetted-area minimum is identified, but in the slip flow regime. This phenomenon arises from the competition between flow bend loss and surface friction loss, with the latter being affected by velocity slip at the solid surface. (2) Inlet-outlet reciprocity emerges in the free molecular flow regime, where the optimal design becomes invariant to the inlet-outlet orientation and pressure ratio. Additional insights are gained regarding the channel curvature, compressibility effects, and the constraint of outflow uniformity. These findings elucidate the mechanisms governing rarefied gas transport and offer design guidance for manifolds operating in vacuum environments.
Microorganisms are ubiquitous in nature, and microbial activities are closely intertwined with the entire life cycle system and human life. Developing novel technologies for the detection, characterization and manipulation of microorganisms promotes their applications in clinical, environmental and industrial areas. Over the last two decades, terahertz (THz) technology has emerged as a new optical tool for microbiology. The great potential originates from the unique advantages of THz waves, including the high sensitivity to water and inter-/intra-molecular motions, the non-invasive and label-free detecting scheme, and their low photon energy. THz waves have been utilized as a stimulus to alter microbial functions, or as a sensing approach for quantitative measurement and qualitative differentiation. This review specifically focuses on recent research progress of THz technology applied in the field of microbiology, covering two major areas: THz biological effects and microbial detection applications. At the end of this paper, we summarize the research progress and discuss the challenges currently faced by THz technology in microbiology, along with potential solutions. We also provide a perspective on future development directions. This review aims to build a bridge between THz photonics and microbiology, promoting both fundamental research and application development in this interdisciplinary field.
Forecast models in statistical seismology are commonly evaluated with log-likelihood scores of the full distribution P(n) of earthquake numbers, yet heavy tails and out-of-range observations can bias model ranking. We develop a tail-aware evaluation framework that estimates cell-wise P(n) using adaptive Gaussian kernel density estimation and tests three strategies for handling out-of-range counts. Using the AoyuX platform, we perform a ~25-year month-by-month pseudo-prospective forecast experiment in the China Seismic Experimental Site (CSES), comparing an Epidemic-Type Aftershock Sequence (ETAS) model with a homogeneous background (ETAS$_\mu$) to a spatially heterogeneous variant (ETAS$_{\mu(x,y)}$) across six spatial resolutions and five magnitude thresholds. Empirical probability density functions (PDFs) of counts per cell are well described by power laws with exponents $a = 1.40 \pm 0.21$ across all settings. Using previous theoretical results, this provides a robust estimate of the productivity exponent, $\alpha = 0.57 \pm 0.08$ for a b-value equal to 0.8, providing a valuable quantification of this key parameter in aftershock modeling. Model ranking is sensitive to how the tail of the full distribution P(n) of earthquake counts is treated: power-law extrapolation is both theoretically justified and empirically the most robust. Cumulative information gain (CIG) shows that ETAS$_{\mu(x,y)}$ outperforms ETAS$_\mu$ in data-rich configurations, whereas in data-poor settings stochastic fluctuations dominate. A coefficient-of-variation analysis of per-window log-likelihood differences distinguishes genuine upward trends in CIG from noise-dominated fluctuations. By aligning a fat-tail-aware scoring methodology with an open testing platform, our work advances fair and statistically grounded assessment of earthquake forecasting models for the CSES and beyond.
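As an illustration of the power-law treatment of out-of-range counts, the sketch below extends a cell-wise count PDF (e.g. obtained from kernel density estimation) with a power-law tail before scoring an observation. The exponent value, the matching at the last in-range count, and the tail length are illustrative assumptions, not the exact procedure used in the paper.

```python
import numpy as np

def extend_pdf_with_powerlaw_tail(pdf, exponent=1.4, n_ext=10000):
    """Extend a count PDF defined on n = 0..n_max with a power-law tail
    p(n) ~ n**(-exponent) for n > n_max (matched at n_max, assuming n_max >= 1),
    then renormalise so the full distribution sums to one."""
    pdf = np.asarray(pdf, dtype=float)
    n_max = len(pdf) - 1
    ns = np.arange(n_max + 1, n_max + 1 + n_ext)
    tail = pdf[n_max] * (ns / n_max) ** (-exponent)
    full = np.concatenate([pdf, tail])
    return full / full.sum()

def log_score(full_pdf, n_obs):
    """Log-likelihood score of an observed count under the extended PDF;
    counts beyond the extension are floored to avoid -inf."""
    p = full_pdf[n_obs] if n_obs < len(full_pdf) else full_pdf[-1]
    return float(np.log(max(p, 1e-300)))
```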
Fiber Fabry--Perot (FFP) resonators a few centimeters in length are optimized as a function of the reflectivity of the mirrors and the dimensions of the intra-cavity waveguide. Loaded quality factors in excess of $10^9$, with an optimum of $4\times10^9$, together with an intrinsic quality factor larger than $10^{10}$ and an intrinsic finesse in the range of $10^5$, have been measured. An application to the stabilization of laser frequency fluctuations is presented.
An accurate design of a ground-coupled heat pump system requires the knowledge of the outlet fluid temperature from the borehole heat exchangers (BHEs), both in the short and long term. This paper focuses on the short and medium term. In this time range, either 3D finite-element simulations or Thermal Resistance Capacity Models (TRCMs) can be applied. The former can yield very accurate results but require long computation times. The latter are much faster but cannot be fully precise, because they require simplifying assumptions. In this paper, we present a new method for the short-term and medium-term simulation of single U-tube BHEs, which combines the speed of TRCMs and the accuracy of finite-element simulations. The method uses a TRCM to estimate the thermal response of the BHE, then corrects the results by interpolation with a dataset produced by running 54 finite-element simulations in various configurations. The model is implemented in a C++ program, available at the open-source online data repository of the University of Bologna. The program provides, within two seconds, the time evolution of the inlet, outlet and mean fluid temperature, of the mean BHE surface temperature, and of the 3D and effective borehole thermal resistance. It can be easily connected to long-term simulation tools to obtain the full-time-scale thermal response of a bore field.
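The correction-by-interpolation idea can be pictured with the toy sketch below: a fast TRCM estimate is multiplied by a correction factor interpolated from a pre-computed table of finite-element results. The parameter grid, the table values, and the multiplicative form of the correction are purely hypothetical placeholders; the actual C++ program corrects against a dataset of 54 finite-element simulations.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical correction table: one multiplicative factor per (shank spacing,
# grout conductivity) configuration, pre-computed from reference FEM runs.
spacings = np.array([0.03, 0.05, 0.07])            # m (illustrative grid)
grout_k = np.array([1.0, 1.5, 2.0])                # W/(m K) (illustrative grid)
correction = np.array([[0.97, 0.98, 0.99],
                       [0.98, 0.99, 1.00],
                       [0.99, 1.00, 1.01]])        # made-up values
interp = RegularGridInterpolator((spacings, grout_k), correction)

def corrected_outlet_temperature(trcm_outlet_T, spacing, k_grout):
    """Apply the interpolated FEM-based correction to a fast TRCM estimate."""
    return trcm_outlet_T * float(interp([[spacing, k_grout]])[0])
```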
This study investigates the suitability of Hydrogenated NanoDiamond (HND) materials as an alternative to CsI in MPGD-based photon detectors. The research focuses on characterizing HND photocathodes coupled with THGEM + Micromegas-based detectors. The HND grains were prepared via hydrogenation and stored in water for more than two years. They were then coated on PCB discs or THGEMs using a pulsed spray technique. The resulting quantum efficiency (QE) values (~4% at 122 nm) were found to be within a factor of 10 of the best freshly hydrogenated samples reported in the literature (~40% at 120 nm). The robustness of reflective HND photocathodes against ion bombardment was measured to be about 10 times higher than that of the corresponding CsI photocathodes after the same charge accumulation. Furthermore, THGEM characterization indicates minimal alteration in response after HND coatings. These results suggest that HND holds potential as a more robust photocathode for gaseous detectors, offering improved performance in single-photon detection applications.
This study investigates the influence of forest canopy heterogeneity on buoyant plume dynamics resulting from surface thermal anomalies representing wildland fires, utilizing Large Eddy Simulation (LES). The Parallelized Large-Eddy Simulation Model (PALM) was employed to simulate six canopy configurations: no canopy, homogeneous canopy, external plume-edge canopy, internal plume-edge canopy, 100 m gap canopy, and 200 m gap canopy. Each configuration was analyzed with and without a static surface heat flux patch of 5000 $\mathrm{W \cdot m^{-2}}$, resulting in a resting buoyant plume. Simulations were conducted under three crosswind speeds: 0, 5, and 10 $\mathrm{m \cdot s^{-1}}$. Results show that canopy structure significantly modifies plume behavior, mean flow, and turbulent kinetic energy (TKE) budgets. Plume updraft speed and tilt varied with canopy configuration and crosswind speed. Pressure gradients associated with plume updrafts were modified based on the canopy configuration, resulting in varying crosswind speed reductions at the plume region. Strong momentum absorption was observed above the canopy for the crosswind cases, with the greatest enhancement in the gap canopies. Momentum injection from below the canopy due to the heat source was also observed, resulting in plume structure modulation based on canopy configuration. TKE was found to be the largest in the gap canopy configurations. TKE budget analysis revealed that buoyant production dominated over shear production. At the center of the heat patch, the gap canopy configurations showed enhanced buoyancy within the gap. Spatial distributions demonstrated increased shear production at the interface between the plume and crosswind.
Artificial magnetic conductors (AMCs) mimic the idealized boundary condition of a perfect magnetic conductor (PMC), which reflects electromagnetic waves with a preserved electric field and inverted magnetic field. Despite their usefulness, existing AMC implementations often rely on complex or impractical designs, and lack a clear electromagnetic theory explaining their behavior, especially under oblique or polarization-diverse incidence. This work addresses these limitations by presenting a rigorous electromagnetic framework for PMC metasurfaces based on dipolar and quadrupolar surface susceptibilities within the generalized sheet transition conditions (GSTCs) formalism. We show that achieving polarization- and angle-independent PMC behavior requires a specific set of heteroanisotropic (nonlocal) susceptibilities, and we derive closed-form expressions for angular scattering that include higher-order multipole contributions. A physically realizable, asymmetric metasurface structure is then designed to satisfy these theoretical conditions. Despite its geometric asymmetry, the proposed structure exhibits an isotropic PMC response at resonance, confirmed by full-wave simulations and multipolar susceptibility extraction. These results demonstrate how properly engineered surface multipoles can yield angularly independent magnetic boundary conditions using only thin, passive metallic layers. This work bridges the gap between AMC design and electromagnetic theory, and enables a new class of angle-independent metasurface reflectors for more accurate simulations, optimizations and innovative AMC designs.
A systematic study is conducted to understand the coincidence resolving time (CRT) for a pair of lutetium-yttrium oxyorthosilicate (LYSO) and plastic scintillation detector bars within the Geant4 framework. The crystals are coupled to a single-pad silicon photomultiplier wafer with an appropriate optical coupling for signal readout. The pad collects the light photons, which undergo optoelectronic conversion at the wafer site, and generates electrical pulses with a bi-exponential shape. These signals are used to determine the trigger time stamp of back-to-back gamma-rays emitted from a point source, enabling the evaluation of CRT performance for different plastic scintillator lengths. For the LYSO detector, the simulation yields a CRT response of 174 ps, which agrees with the value reported in the literature for the dimensions 2 mm x 2 mm x 10 mm. To identify the plastic scintillator dimensions with an integrated gamma-ray detection efficiency comparable to LYSO's photopeak efficiency for 511 keV gamma photons, various bar lengths of commercial plastic BC404 and TIFR Ooty's in-house developed plastic material are simulated in Geant4. Consequently, for both plastic scintillators, the equivalent length (for the same cross-sectional area) was found to be 4 times the LYSO crystal length at a threshold of 25 keV. The CRT value determined for this dimension is found to be $\approx$ 300 ps for both plastic media. This suggests that, if an animal preclinical PET scanner is developed with plastic bars, a minimum achievable image resolution (FWHM) of $\approx$ 4.5 cm can be expected for a pair of detection elements.
We present a practical method to measure the energy of proton beams at a medical cyclotron using the stacked foil technique in combination with a Bayesian inference method. By measuring the $^{48}$V activity induced in a stack of irradiated titanium foils, the proton energy can be inferred without relying on direct current or charge measurements, making the method suitable even for low-vacuum environments or air-exposed setups. This technique is further extended to configurations where the beam energy is degraded to levels around 8 MeV. A Bayesian fit of the measured activity profile allows not only for a robust energy estimation but also for a consistent treatment of uncertainties and nuisance parameters. Monte Carlo simulations are employed to validate the underlying assumptions, including the impact of energy dispersion or cross-section uncertainties. Our results demonstrate that this method provides accurate beam energy measurements across several typical experimental setups used at the Bern Medical Cyclotron. Additionally, we evaluate the sensitivity of the method to the choice of nuclear cross-section data and assess how the number of foils in the stack affects the uncertainty in the inferred beam energy.
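A minimal sketch of the inference idea, assuming a toy stopping model (a fixed energy loss per foil), a placeholder cross-section shape, and a flat prior on the beam energy with the activity normalisation profiled out; none of these choices reflect the actual analysis, which uses evaluated cross sections, a full uncertainty treatment and Monte Carlo validation.

```python
import numpy as np

def predicted_activity(E0, n_foils, dE_per_foil, cross_section):
    """Toy forward model: the proton loses dE_per_foil in each foil, and the
    48V activity induced in foil i is proportional to sigma(E_i)."""
    energies = np.clip(E0 - dE_per_foil * np.arange(n_foils), 0.0, None)
    return cross_section(energies)

def grid_posterior(measured, sigma_meas, E_grid, dE_per_foil, cross_section):
    """Posterior over the initial energy E0 on a grid (flat prior), with the
    overall normalisation treated as a nuisance parameter profiled by least squares."""
    log_post = np.empty_like(E_grid)
    for k, E0 in enumerate(E_grid):
        model = predicted_activity(E0, len(measured), dE_per_foil, cross_section)
        scale = (model @ measured) / max(model @ model, 1e-12)
        resid = measured - scale * model
        log_post[k] = -0.5 * np.sum((resid / sigma_meas) ** 2)
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

# usage with a made-up cross-section shape and synthetic data generated at 14 MeV
sigma = lambda E: E ** 2 * np.exp(-E / 6.0)
E_grid = np.linspace(5.0, 20.0, 301)
activities = 2.3 * predicted_activity(14.0, 6, 1.5, sigma)
post = grid_posterior(activities, 0.05 * activities.max(), E_grid, 1.5, sigma)
print("MAP energy:", E_grid[np.argmax(post)], "MeV")
```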
All equipment for the High Intensity heavy ion Accelerator Facility (HIAF) has been installed and beam commissioning is currently underway. This paper presents a further study on the high-precision optics, namely the sliced optics, of the Booster Ring (BRing) at HIAF based on measured magnetic fields, focusing on two aspects: the closed-orbit distortion and optical parameter variations caused by errors, and the dynamic aperture. A detailed study is conducted on the closed-orbit distortion and changes in optical parameters caused by magnet alignment errors and dipole magnet field errors. A detailed study is also conducted on the dynamic aperture of BRing. The results show that the sliced optics and the original optics are comparable in terms of the impact of these errors on the closed orbit and optical parameters. Without chromaticity correction, the dynamic aperture of the sliced optics is superior to that of the original optics; after chromaticity correction, the sliced optics remains comparable to the original optics. This study provides valuable insights for accelerator tuning and optimization.
The theory of coherent transition radiation produced by a relativistic electron beam during its extraction from a microtron is established. Expressions for the beam form factor, spectral-angular and angular distribution of coherent transition radiation are obtained in explicit form. Estimates of microwave noise caused by coherent transition radiation are given.
Phase retrieval is a nonlinear inverse problem that arises in a wide range of imaging modalities, from electron microscopy to optical Fourier ptychography. Among various modalities, random phase retrieval stands out thanks to its strong theoretical guarantees and efficient reconstruction algorithms, although its applicability is hindered by prohibitive computational costs. In this paper, we propose structured random models for phase retrieval, where we emulate a dense random matrix by a cascade of structured transforms and random diagonal matrices. We demonstrate that structured random models can achieve the same reconstruction performance as dense random models, with complexity reduced from quadratic to log-linear. Using a spectral method initialization followed by gradient descent, robust reconstruction is obtained at an oversampling ratio as low as 2.8. Moreover, we observe that the reconstruction performance is solely determined by the singular value distribution of the forward matrix. This class of models can directly be implemented with basic optical elements such as lenses and diffusers, paving the way for large-scale phase imaging with robust reconstruction guarantees.
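The following sketch illustrates the kind of structured random forward operator meant here: a cascade of random unit-modulus diagonal matrices and FFTs, which emulates a dense random matrix at log-linear cost. The number of layers and the use of the plain FFT as the structured transform are illustrative assumptions, and the spectral initialization and gradient-descent reconstruction are omitted.

```python
import numpy as np

def make_structured_forward(n, n_layers=3, seed=0):
    """Forward map A = F D_L ... F D_1 built from unitary FFTs (F) and random
    unit-modulus diagonal matrices (D_i); applying A costs O(L n log n) instead
    of the O(n^2) cost of a dense i.i.d. random matrix."""
    rng = np.random.default_rng(seed)
    phases = [np.exp(2j * np.pi * rng.random(n)) for _ in range(n_layers)]

    def A(x):
        y = np.asarray(x, dtype=complex)
        for d in phases:
            y = np.fft.fft(d * y, norm="ortho")
        return y

    def AH(y):                                   # adjoint, needed for gradient steps
        x = np.asarray(y, dtype=complex)
        for d in reversed(phases):
            x = np.conj(d) * np.fft.ifft(x, norm="ortho")
        return x

    return A, AH

# phaseless (intensity-only) measurements of a random test signal
n = 256
A, AH = make_structured_forward(n)
x_true = np.random.default_rng(1).standard_normal(n)
y = np.abs(A(x_true)) ** 2
```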
This study investigates the potential for drag-reduction of low-mechanical-order, self-adaptive control systems, consisting of hinged flaps attached along the edges of the rectangular base of a canonical blunt body. Comparative experiments are conducted in a wind tunnel under crosswind conditions at a Reynolds number of $Re = 2.13 \times 10^5$. The flaps, made of rigid rectangular panels, are mounted in three configurations: rigidly fixed, flexibly hinged with a single degree of freedom in bending, and flexibly hinged with two degrees of freedom, allowing both bending and torsion, the latter representing a novel drag-reduction device, easily tunable to ensure a quasi-steady, stable adaptive reconfiguration. Two geometric arrangements are tested: horizontal flaps attached along the top and bottom (TB) edges, and vertical flaps along the lateral (left and right, LR) edges. The experimental study includes force, pressure, flap deformation and wake velocity measurements at varying yaw angles to simulate crosswind conditions. When the body is aligned with the flow, both arrangements reduce drag due to a rear cavity effect that elongates the recirculating flow. The TB arrangement is found to be much more effective at reducing drag in yawed conditions and its performance is improved using the flexible hinges. In these cases, static deformations correspond to boat-tailing that reduces the induced drag together with the turbulent kinetic energy in the wake. The use of the wind-average drag coefficient (taking into account events of crosswind) to evaluate an effective drag reduction clearly shows the TB arrangement with bending and torsion as the best appendage, with a 7.62\% drag reduction compared to the body with no appendages, proving the good performance of simple, two-degrees-of-freedom control systems to adapt to changing three-dimensional wakes.
We develop a theory of two-dimensional Bloch-Landau-Zener (BLZ) oscillations of wavepackets in incommensurate moiré lattices under the influence of a weak linear gradient. Unlike periodic systems, aperiodic lattices lack translational symmetry and therefore do not exhibit a conventional band-gap structure. Instead, they feature a mobility edge, above which (in the optical context) all modes become localized. When a linear gradient is applied to a moiré lattice, it enables energy transfer between two or several localized modes, leading to the oscillatory behavior referred to as BLZ oscillations. This phenomenon represents simultaneous tunneling in real space and propagation constant (energy) space, and it arises when a quasi-resonance condition for the propagation constants and spatial proximity of the interacting modes (together constituting a selection rule) are met. The selection rule is controlled by the linear gradient, whose amplitude and direction play a crucial role in determining the coupling pathways and the resulting dynamics. We derive a multimode model describing BLZ oscillations in the linear regime and analyze how both attractive and repulsive nonlinearities affect their dynamics. The proposed framework can be readily extended to other physical systems, including cold atoms and Bose-Einstein condensates in aperiodic potentials.
Neuromorphic computing seeks to replicate the spiking dynamics of biological neurons for brain-inspired computation. While electronic implementations of artificial spiking neurons have dominated to date, photonic approaches are attracting increasing research interest as they promise ultrafast, energy-efficient operation with low-crosstalk and high bandwidth. Nevertheless, existing photonic neurons largely mimic integrate-and-fire models, but neuroscience shows that neurons also encode information through richer mechanisms, such as the frequency and temporal patterns of spikes. Here, we present a photonic-electronic resonate-and-fire (R-and-F) spiking neuron that responds to the temporal structure of high-speed optical inputs. This is based on a light-sensitive resonant tunnelling diode that produces excitable spikes in response to nanosecond, low-power (100 microwatt) optical signals at infrared telecom wavelengths. We experimentally demonstrate control of R-and-F dynamics through inter-pulse timing of the optical stimuli and applied bias voltage, achieving bandpass filtering of both analogue and digital inputs. The R-and-F neuron also supports optical fan-in via wavelength-division multiplexed inputs from four vertical-cavity surface-emitting lasers (VCSELs). This electronic-photonic neuron exhibits key functionalities - including spike-frequency filtering, temporal pattern recognition, and digital-to-spiking conversion - critical for neuromorphic optical processing. Our approach establishes a pathway toward low-power, high-speed temporal information processing for light-enabled neuromorphic computing.
Biological tissues exhibit complex behaviors with their dynamics often resembling inert soft matter such as liquids, polymers, colloids, and liquid crystals. These analogies enable physics-based approaches for investigations of emergent behaviors in biological processes. A well-studied case is the spreading of cellular aggregates on solid surfaces, where they display dynamics similar to viscous droplets. \textit{In vivo}, however, cells and tissues are in a confined environment with varying geometries and mechanical properties to which they need to adapt. In this work, we compressed cellular aggregates between two solid surfaces and studied their dynamics using microscopy and computer simulations. The confined cellular aggregates transitioned from compressed spheres into dynamic living capillary bridges exhibiting bridge thinning and a convex-to-concave meniscus curvature transition. We found that the stability of the bridge is determined by the interplay between cell growth and cell spreading on the confining surfaces. This interaction leads to bridge rupture at a critical length scale determined by the distance between the plates. The force distributions, formation and stability regimes of the living capillary bridges were characterized with full 3D computer simulations that included cell division, migration and growth dynamics, directly showing how mechanical principles govern the behavior of the living bridges: cellular aggregates display jamming and stiffening analogously to granular matter, and cell division along the long axis enhances thinning. Based on our results, we propose a new class of active soft matter behavior, where cellular aggregates exhibit liquid-like adaptation to confinement, but with self-organized rupturing driven by biological activity.
We present the design and simulation of a 30 $\mathrm{\mu m}$ thick 4H-SiC Low Gain Avalanche Diode (LGAD) optimized for high-voltage operation. A 2.4 $\mathrm{\mu m}$ thick epitaxially grown gain layer enables controlled internal amplification up to 1 kV reverse bias, while maintaining full depletion below 500 V. Electrical characteristics, including I-V, C-V, and gain behavior, were simulated in Synopsys Sentaurus Technology Computer-Aided Design (TCAD) using a quasi-1D geometry and verified across process-related variations in gain layer parameters. To ensure high-voltage stability and proper edge termination, a guard structure combining deep etched trenches and deep $p^+$ junction termination extension (JTE) implants was designed. TCAD simulations varying the guard structure dimensions yielded an optimized design with a breakdown voltage above 2.4 kV. A corresponding wafer run is currently processed at IMB-CNM, Barcelona.
An experiment is conducted to investigate the effects of chevrons on installed subsonic jet noise at a Mach number of 0.5 using the NASA SMC000 (round) and SMC006 (chevron) nozzles. The jets have a diameter D=16.93 mm and are placed near a flat plate, with a horizontal separation distance L=6.5D between the plate's trailing edge and the nozzle exit. The vertical separation distance H varies between 1.5D, 2D, 2.5D and 3D. Far-field sound is measured at observer angles ranging from $\theta=60^\circ$ to $150^\circ$ to the downstream jet axis on the reflected side. The measured sound spectra are compared to the near-field scattering model developed by Lyu et al., together with the isolated near-field pressure spectra inputs from corresponding large eddy simulations for both nozzles. Results show that jet installation results in a strong noise amplification at low frequencies for both nozzles due to the scattering of the near-field pressure fluctuations and a mild noise increase at high frequencies due to surface reflection (reflected side). The low-frequency amplification is strongest at H=1.5D and has a dipolar directivity. A secondary spectral hump appears within this low-frequency amplification hump, which is hypothesised to result from the interference between the sound generated directly by the large coherent structures and that generated by their scattering at the plate's trailing edge. The use of chevrons reduces the low-frequency noise for isolated jets, but leads to even stronger noise amplification for installed jets; this is likely due to enhanced jet mixing resulting in stronger near-field pressure fluctuations at a fixed radial distance. Results show that the scattering model can predict the low-frequency noise amplification well at various observer angles for both nozzles, suggesting the validity of the instability-wave scattering mechanism and modelling for both round and chevron jets.
In this study, we conducted an experiment to estimate $\pi$ using body-to-body and body-to-wall collisions. By geometrically analyzing the system's motion, we first review how the collision count corresponds to the digits of $\pi$. This method utilizes the property that the number of collisions corresponds to $\pi$ to the $n$-th decimal place by setting the mass ratio of bodies to $1:100^n$ under ideal conditions. In particular, when the mass ratio is $1:100$ -- which is the case we tested experimentally -- the number of collisions is 31, and $\pi$ to the tenths decimal place (3.1) can be derived. In the experiments, a suspended apparatus was developed to minimize energy losses such as friction and air resistance. We also devised the shape and material of the colliding bodies' surface and the characteristics of the suspension string, aiming for measurements under stable conditions. Based on the experimental results, we reproduced the number of collisions consistent with the theoretical values and confirmed that estimating $\pi$ to the tenths decimal place is possible under realistic conditions.
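For reference, the idealized counting argument can be reproduced with a few lines of simulation: perfectly elastic block-block and block-wall collisions with a mass ratio of $1:100^n$ yield the first $n+1$ digits of $\pi$. This is a sketch of the textbook idealization only; it ignores the friction, air resistance and suspension effects addressed in the experiment.

```python
def count_collisions(mass_ratio):
    """Count perfectly elastic collisions of a heavy block (mass = mass_ratio)
    launched toward a unit-mass block that is backed by a rigid wall."""
    m1, m2 = 1.0, float(mass_ratio)
    v1, v2 = 0.0, -1.0               # heavy block initially moves toward the wall
    n = 0
    while True:
        if v2 < v1:                  # blocks are approaching: block-block collision
            v1, v2 = (((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2),
                      ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2))
            n += 1
        elif v1 < 0:                 # light block heads into the wall: wall bounce
            v1 = -v1
            n += 1
        else:                        # both recede and never meet again: stop
            break
    return n

for k in range(1, 4):
    print(f"mass ratio 1:{100**k} -> {count_collisions(100**k)} collisions")  # 31, 314, 3141
```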
Magneto-acoustic waves in partially ionized plasmas are damped due to elastic collisions between charged and neutral particles. Here, we use a linearized two-fluid model to describe the influence of this collisional interaction on the properties of small-amplitude waves propagating in a uniform and static background. Mainly focusing on the case of waves generated by a periodic driver, we perform a detailed study of the dependence of the wavenumbers and damping rates on the ionization degree of the plasma, the strength of the collisional coupling, and the angle of propagation. We describe how the different wave modes (fast, slow, acoustic) are related to the individual properties of each fluid in a wide range of physical conditions. In addition, we derive analytical approximations for the damping rates due to charge-neutral collisions in the limits of weak and strong coupling and check their range of validity in comparison with the exact numerical results. These approximations can be generally applied to a large variety of astrophysical and laboratory partially ionized plasmas, but here we also discuss the particular application to plasmas only composed of hydrogen.
Integrated devices that generate structured optical fields with non-trivial orbital angular momenta (OAMs) hold great potential for advanced optical applications, but have so far been restricted to complex nanostructures and static functionalities. Here, we demonstrate a reconfigurable OAM beam generator from a simple microring resonator without requiring grating-like nanostructures. Our approach harnesses the Brillouin interaction between confined phonon and optical modes, where the acoustic field is excited through a microwave input. The phonons stimulate the conversion of a guided optical mode into a free-space vortex beam. Under the selection rule of radiation, the OAM order of the emitted light is determined by the acousto-optic phase matching and is rapidly reconfigurable by simply tuning the microwave frequency. Furthermore, this all-microwave control scheme allows for the synthesis of arbitrary high-dimensional OAM superposition states by programming the amplitudes and phases of the driving fields. Analytical and numerical models predict a radiation efficiency over 25\% for experimentally feasible on-chip microcavities. This work introduces a novel paradigm for chip-to-free-space interfaces, replacing fixed nanophotonic structures with programmable acousto-optic interactions for versatile structured light generation.
A key challenge in multiphase flow through porous media is to understand and predict the conditions under which trapped fluid clusters become mobilized. Here, we investigate the stability of such clusters in two-phase flow and present a simple, quasistatic model that accurately determines the critical Bond number (that is, the critical ratio between the average pressure gradient of the flow and the surface tension) for the onset of cluster mobilization. The model is derived by combining elementary geometrical considerations with mass conservation and a mechanical equilibrium condition, resulting in a system of coupled differential equations. Our derivation sheds new light on the mechanisms that govern cluster stability. In addition, since the number of equations equals the number of cluster openings, our model is significantly faster than direct numerical simulations of the same problem and enables efficient exploration of the system's parameter space. Using this approach, we highlight a discrepancy with current mean-field theories, which predict that the largest stable cluster size scales in proportion to $r/\text{Bo}^\alpha$, where $r$ is a typical pore size, $\text{Bo}$ is the Bond number and $\alpha$ is a fixed exponent. We discuss the mechanisms that explain the breakdown of the mean-field theories, and we show that a scaling law of this form can only exist if $\alpha$ is allowed to depend on a broad set of flow and geometric parameters.
Nanoporous metals are extensively investigated as platforms for applications in plasmonics. They present high surface areas and strong local electric fields that can be tuned to different energies by varying the choice of metals and the morphology of the porous layers. Until recently, research in the field of plasmonics has primarily focused on porous metals composed of a single element, with limited attention given to the impact of alloy composition. The investigation of bi-metallic systems has only just begun to emerge in the literature. In particular, by combining two or more different plasmonic metals, it becomes possible to explore the interactions between two metals excited at specific energies. This involves plasmonic coupling, electron transfer, band hybridization at the interface, electromagnetic field interactions, and possibly thermal and electronic energy transfer depending on separation, size, and materials involved. The analysis of bi-metal systems can also be of interest for biomolecule detection, such as in the case of Surface Enhanced Raman Scattering (SERS). Here we report, for the first time, a detailed study (comprising morphological analyses, numerical modelling, and optical spectroscopies) of bi-metal nanoporous platforms prepared with a dry-synthesis method enabling the easy and controllable fabrication of bilayers combining different metals such as Au, Ag, and Cu.
We present a lattice Boltzmann formulation for the simulation of compressible, non-ideal fluid flows. The method employs first-neighbor lattices and introduces a consistent set of correction terms through quasi-equilibrium attractors, ensuring positive-definite and Galilean-invariant Navier-Stokes dissipation rates. This construction circumvents the need for extended stencils or ad hoc regularization, while maintaining numerical stability and thermodynamic consistency across a broad range of flow regimes. The resulting model accurately reproduces both Euler- and Navier-Stokes-level hydrodynamics. As a stringent validation, we demonstrate, for the first time within a lattice Boltzmann framework, quantitatively accurate simulations of drop-shock interactions at Mach numbers up to 1.47. The proposed approach thus extends the applicability of lattice Boltzmann methods to high-speed, non-ideal compressible flows with a minimal kinetic stencil.
Technological advances in the fabrication of nanophotonic circuits have driven the scientific community to increasingly focus on the precise tailoring of their key optical properties, over a broadband spectral domain. In this context, the modulation of the local refractive index can be exploited to customize an effective reflectivity by the use of distributed Bragg mirrors, enabling the on-chip integration of Fabry-Pérot resonators. The resulting cavity length is strongly wavelength-dependent, offering practical solutions to the growing demand of dispersion engineering. Owing to their typically high core-to-cladding refractive index contrast and exceptional nonlinear properties, III-V semiconductor-based platforms represent promising candidates for the fabrication of Bragg reflectors. In this work, we propose an AlGaAs-on-insulator linear resonator based on distributed Bragg mirrors. We discuss the first experimental demonstration of a systematic, shape-constrained inverse design technique which tailors a prescribed dispersion profile, showing a strong agreement between simulations and measurements. In perspective, the proposed approach offers an efficient and general response to the challenge of dispersion engineering in integrated optical circuits.
The ability of our semi-empirical irregular dipole-moment functions (2022) and (2025) to predict the intensities of the yet unobserved lines, as well as to describe the observed ones not used in the fitting, is demonstrated by comparison with recent measurements in the 0-0, 1-0, 3-0, and 7-0 bands.
Rotations are detrimental to achieving ultra-high-performance inertial measurements with an atom interferometer, potentially leading to a total loss of interference contrast and the emergence of dominant phase shift biases. This becomes particularly significant when considering operation in dynamic conditions such as those encountered in Earth-orbiting satellites, in view of future space gravity missions carrying a cold atom accelerometer. We study in this context the impact of rotation on the phase shift and contrast of an atom interferometer and investigate mitigation strategies. An analytical model is derived and compared to experimental demonstrations carried out using an original setup in which the well-controlled proof-mass of a space electrostatic accelerometer is used as the retro-reflection mirror of a cold atom gravimeter. By properly counter-rotating the electrostatic proof-mass, we demonstrate for instance the possibility of recovering the interferometer contrast, otherwise equal to zero, to a level better than 90%, both for constant angular velocities and in the presence of angular accelerations. Our results demonstrate the possibility of performing high-performance inertial measurements with a cold atom interferometer in challenging environments.
Thermomechanical infrared (IR) detectors have emerged as promising alternatives to traditional photon and thermoelectric sensors, offering broadband sensitivity and low noise without the need for cryogenic cooling. Despite recent advances, the field still lacks a unified framework to guide the design of these nanomechanical systems. This work addresses that gap by providing a comprehensive design guide for IR thermal detectors based on silicon nitride drumheads and trampolines. Leveraging a validated analytical model, we systematically explore how geometry, tensile stress, and optical properties influence key performance metrics such as thermal time constant, noise-equivalent power, and specific detectivity. The analysis encompasses both bare silicon nitride and structures with broadband absorber layers, revealing how different parameter regimes affect the trade-off between sensitivity and response speed. Rather than focusing on a single device architecture, this study maps out a broad design space, enabling performance prediction and optimisation for a variety of application requirements. As such, it serves not only as a reference for benchmarking existing devices but also as a practical tool for engineering next-generation IR sensors that can operate close to the fundamental detection limit. This work is intended as a foundational resource for researchers and designers aiming to tailor IR detectors to specific use cases.
The coherent-state initial-value representation (IVR) for the semi-classical real-time propagator of a quantum system, developed by Herman and Kluk (HK), is widely used in computational studies of chemical dynamics. On the other hand, the Boltzmann operator $e^{-\hat{H}/(k_B T)}$, with $\hat{H}$,$k_B$, and $T$ representing the Hamiltonian, Boltzmann constant, and temperature, respectively, plays a crucial role in chemical physics and other branches of quantum physics. One might naturally assume that a semi-classical IVR for the matrix element of this operator in the coordinate representation (i.e., $ \langle \tilde{x} | e^{-\hat{H}/(k_B T)} | x \rangle$, or the imaginary-time propagator) could be derived via a straightforward ``real-time $\rightarrow$ imaginary-time transformation'' from the HK IVR of the real-time propagator. However, this is not the case, as such a transformation results in a divergence in the high-temperature limit $(T \rightarrow \infty)$. In this work, we solve this problem and develop a reasonable HK-like semi-classical IVR for $\langle \tilde{x} | e^{-\hat{H}/(k_B T)} | x \rangle$, specifically for systems where the gradient of the potential energy (i.e., the force intensity) has a finite upper bound. The integrand in this IVR is a real Gaussian function of the positions $x$ and $\tilde{x}$, which facilitates its application to realistic problems. Our HK-like IVR is exact for free particles and harmonic oscillators, and its effectiveness for other systems is demonstrated through numerical examples.
We experimentally and numerically investigate Raman-driven power depletion in the fundamental mode of few-mode fibers (FMFs) excited by visible ultrashort pulses. Using a tunable femtosecond laser and SMF-28 fibers operated below the single-mode cutoff wavelength, we explore nonlinear mode dynamics through precise coupling, holographic mode decomposition, and spectral analysis. Experiments on 12-meter and 50-meter fibers reveal significant energy transfer to higher-order modes via intermodal Raman scattering. These findings advance our understanding of nonlinear propagation in FMFs, with important implications for high-power laser delivery and next-generation optical communication systems.
We focus on reflections and suggestions of five college quantum educators from four different institutions (two from the same institution) regarding what can be done to diversify the second quantum revolution. They are leading QIST researchers who are very passionate about improving quantum education. The educators were asked about their thoughts on whether the interdisciplinary nature of the field, in which nobody can claim to be an expert in all aspects of QIST, may make it easier to create a better culture from the beginning, supportive of the equitable participation of diverse groups, unlike physics. This is because disciplines such as physics have an ingrained inequitable culture based on brilliance attribution that is a major impediment to diversity, equity and inclusion. Educators were interviewed on Zoom using a semi-structured think-aloud protocol about various issues related to QIST education, including those pertaining to how to diversify the second quantum revolution. Their suggestions can be invaluable and can help other educators adapt and implement strategies to diversify QIST.
In 1978, Delayen showed how Self-Excited Loops (SEL) can be used to great advantage for controlling narrow-band SRF cavities. Its key capability is establishing closed-loop amplitude control early in the setup process, stabilizing Lorentz forces to allow cavity tuning and phase loop setup in a stable environment. As people around the world implement this basic idea with modern FPGA DSP technology, multiple variations and operational scenarios creep in that have both obvious and non-obvious ramifications for latency, feedback stability, and resiliency. This paper will review the key properties of a Delayen-style SEL when set up for open-loop, amplitude stabilized, and phase-stabilized modes. Then the original analog circuit will be compared and contrasted with the known variations of digital CORDIC-based implementations.
Electro-momentum coupling in piezoelectric metamaterials with broken inversion symmetry enables asymmetric elastic wave transport by linking macroscopic electric fields to momentum, an effect analogous to Willis coupling in elastic media. A one-dimensional layered piezoelectric metamaterial integrated with shunt circuits, consisting of a resistor, inductor, and strain-proportional voltage feedback gain, is proposed to achieve dynamic control of frequency-dependent stiffness and damping through electromechanical interactions. Tuning the circuit parameters yields direction-dependent wave scattering at targeted frequencies. Dynamic homogenization reveals macroscopic constitutive relations exhibiting both Willis and electro-momentum couplings. Non-Hermitian exceptional points are identified, where scattering eigenmodes coalesce and produce extreme asymmetries in wave response. Near these points, the system realizes unidirectional zero reflection (UZR) and unidirectional perfect absorption (UPA), achieving complete absorption from one direction and total reflection from the opposite side. The findings demonstrate a compact and reconfigurable platform for tunable, directional elastic wave control using passive-active hybrid metamaterials, opening new avenues for programmable devices in acoustic isolation, wave-based computing, sensing, and energy manipulation in solid media.
Non-diffracting (ND) beams are often cited as a promising solution to mitigate blockage in millimeter wave (mmWave) systems. However, a quantitative answer to the fundamental question of under what specific conditions ND beams actually outperform conventional pencil beams has remained elusive, especially in the emerging context of near-field communications. This paper provides the first systematic answer by mapping the performance advantage regimes of ND beams for blockage-resilient near-field links. We propose a unified holographic generator that synthesizes various structured beams (e.g., Bessel, Mathieu) under the physical constraints of a planar phased array, ensuring a fair comparison against a boresight baseline with identical EIRP and aperture. Through extensive, unbiased Monte Carlo simulations, we construct advantage regime maps that delineate the specific regions where ND beams offer a tangible link-level gain. Our key finding is that the advantage of ND beams is a powerful but conditional near-field phenomenon. While offering a positive average gain, its performance is highly variable, with a 60-70% probability of outperforming the baseline in its optimal range. Crucially, this performance is strongly modulated by the obstacle's geometry, revealing a significant weakness against large blockers. These findings provide not just a practical roadmap for judiciously employing ND beams but also a clear motivation for future work in environment-aware, adaptively shaped structured beams.
The Density Compensation Function (DCF) is widely used in non-Cartesian MRI reconstruction, either for direct Non-Uniform Fast Fourier Transform (NUFFT) reconstruction or for iterative undersampled reconstruction. Current state-of-the-art methods require tens of time-consuming iterations, which is one of the main hurdles for widespread application of the highly efficient non-Cartesian MRI. In this paper, we propose an efficient, non-iterative method to calculate DCF for arbitrary non-Cartesian $k$-space trajectories using Fast Fourier Deconvolution. Simulation experiments demonstrate that the proposed method is able to yield DCF for 3D non-Cartesian reconstruction in around 20 seconds, achieving orders of magnitude speed improvement compared to the state-of-the-art method while achieving similar reconstruction quality.
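For context, the sketch below shows the kind of conventional iterative density-compensation scheme (in the spirit of Pipe and Menon) that the proposed non-iterative Fast Fourier Deconvolution method is designed to replace: the weights are repeatedly divided by their convolution with a gridding kernel. The Gaussian kernel, its width, and the brute-force pairwise evaluation are illustrative simplifications.

```python
import numpy as np

def iterative_dcf(kx, ky, kernel_width=0.05, n_iter=30):
    """Conventional iterative DCF estimate for 2D sample locations (kx, ky):
    w <- w / (C w), where C is a convolution kernel evaluated between sample
    pairs.  Brute-force O(N^2) version, for illustration only."""
    pts = np.stack([kx, ky], axis=1)
    d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
    C = np.exp(-d2 / (2.0 * kernel_width ** 2))    # toy Gaussian gridding kernel
    w = np.ones(len(kx))
    for _ in range(n_iter):
        w = w / (C @ w)
    return w
```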
The use of auditory masking has long been of interest in psychoacoustics and for engineering purposes, in order to cover sounds that are disruptive to humans or to species whose habitats overlap with ours. In most cases, we seek to minimize the disturbances to the communication of wildlife. However, in the case of pathogen-carrying insects, we may want to maximize these disturbances as a way to control populations. In the current work, we explore candidate masking strategies for a generic model of active auditory systems and a model of the mosquito auditory system. For both models, we find that masks with all acoustic power focused into just one or a few frequencies perform best. We propose that masks based on rapid frequency modulation are most effective for maximal disruption of information transfer and minimizing intelligibility. We hope that these results will serve to guide the avoidance or selection of possible acoustic signals for, respectively, maximizing or minimizing communication.
Precise control of plasmonic resonances across a broad spectral range is central to the development of tunable optical devices. Yet, achieving both redshifts and blueshifts within a single nanostructure has remained elusive. Here we introduce a metal-dielectric-metal (MDM) nanodisk array that enables bidirectional tuning of resonance wavelengths throughout the near-infrared (NIR) region. The observed spectral evolution follows the plasmon ruler relationship, with unprecedented tuning properties. In particular, we report a record blueshift response of 457.82 nm for a small nanodisk thickness variation of only 5-10 nm, the highest blueshift response demonstrated in plasmonic architectures to date. This platform offers finely tunable resonances spanning an exceptionally wide NIR range, providing new insights into electromagnetic (EM) coupling mechanisms and establishing a foundation for next-generation tunable devices in sensing, optical communications, and dynamic displays.
The international Facility for Antiproton and Ion Research (FAIR) is under construction at the GSI Helmholtz Centre in Darmstadt. The first project stage includes the superconducting 100 Tm heavy-ion synchrotron SIS100, the Super Fragment Separator, and associated beam transport lines. Part of GSI's existing accelerator chain, comprising UNILAC and SIS18, will serve as injector. Installation work in the FAIR accelerator tunnels and supply buildings has been ongoing since early 2024. As progress continues, special attention is now on the start of commissioning, beginning in 2025 with the cryogenic plant. Commissioning of the transport line will follow at the end of 2025, and beam commissioning is scheduled for the second half of 2027. This paper outlines the current status of the project, commissioning strategy and timeline.
We investigate magnetic-field amplification driven by the nonresonant hybrid (NRH or Bell) instability and its impact on cosmic-ray (CR) acceleration at reverse shocks of ultrafast outflows (UFOs) from active galactic nuclei (AGN). Previous kinetic studies based on particle-in-cell simulations have demonstrated that when the maximum CR energy is near the injection scale, the NRH instability efficiently amplifies the magnetic field up to the saturation level. However, the efficiency of the NRH instability decreases as the maximum energy increases, since the CR current is carried by escaping CRs near the maximum energy. We employ a one-dimensional MHD--CR framework solving telegraph-type diffusion--convection equations to trace the coupled evolution of CRs, magnetic fields, and shock dynamics under realistic parameters. We find a distinct transition with magnetic field strength: for weak background fields ($B_{0}\!\lesssim\!10^{-4}\,\mathrm{G}$), the NRH instability efficiently amplifies upstream turbulence, driving a self-regulated state where $E_{\max}$ becomes independent of the initial strength of the magnetic turbulence. In contrast, for stronger background fields ($B_{0}\!\gtrsim\!10^{-3}\,\mathrm{G}$), the escaping CR current is too weak to drive the NRH instability, and the magnetic turbulence further decays through parametric instabilities, potentially reducing the acceleration efficiency. We give a physical interpretation of the transition and discuss conditions for PeV--EeV acceleration at UFO reverse shocks.
We revisit and invalidate all dark photon dark matter constraints from resonant conversion of dark photons into photons (plasmons) in the early universe. These constraints rely on the resonant transfer of a substantial portion of the dark photon energy density into the SM plasma, heating the plasma in the process. We demonstrate that this resonant transfer saturates because of plasma nonlinearities. Dark photon dark matter resonantly converts into $k \simeq 0$ Langmuir waves in the early universe electron-ion plasma. Once the Langmuir-wave energy approaches the thermal energy of the plasma, nonlinear effects driven by the ponderomotive force become significant. In particular, we show using dedicated Particle-in-Cell simulations that large-amplitude $k = 0$ Langmuir waves excite higher-k Langmuir and ion acoustic waves, producing strong spatial variations in density and plasma frequency. These inhomogeneities suppress further resonant conversion, limiting the deposited energy to about the thermal energy of the electrons at the time of conversion, orders of magnitude below observable cosmological thresholds. Consequently, the dark photon dark matter constraints are weaker by factors of $3000$ to $10^7$ across ten orders of magnitude in dark photon mass.
We present forecasts for constraints on the matter density ($\Omega_m$) and the amplitude of matter density fluctuations at 8h$^{-1}$Mpc ($\sigma_8$) from CMB lensing convergence maps and galaxy weak lensing convergence maps. For CMB lensing convergence auto statistics, we compare the angular power spectra ($C_\ell$'s) to the wavelet scattering transform (WST) coefficients. For CMB lensing convergence $\times$ galaxy weak lensing convergence statistics, we compare the cross angular power spectra to wavelet phase harmonics (WPH). This work also serves as the first application of WST and WPH to these probes. For CMB lensing convergence, we find that WST and $C_\ell$'s yield similar constraints in forecasts for the $\textit{Simons}$ Observatory and the South Pole Telescope. However, WST gives a tighter constraint on $\sigma_8$ by a factor of $1.7$ for $\textit{Planck}$ data. When CMB lensing convergence is crossed with galaxy weak lensing convergence projected from $\textit{Euclid}$ Data Release 2 (DR2), we find that WPH outperforms cross-$C_\ell$'s by factors between $2.4$ and $3.8$ for individual parameter constraints. To compare these different summary statistics we develop a novel learned binning approach. This method compresses summary statistics while maintaining interpretability. We find this leads to improved constraints compared to more naive binning schemes for $C_\ell$'s, WST, and most significantly WPH. By learning the binning and measuring constraints on distinct data sets, our method is robust to overfitting by construction.
Recovering true signals from noisy measurements is a central challenge in inverse problems spanning medical imaging, geophysics, and signal processing. Current solutions balance prior assumptions regarding the true signal (regularization) with agreement to noisy measured data (data-fidelity). Conventional data-fidelity loss functions, such as mean-squared error (MSE) or negative log-likelihood, seek pointwise agreement with noisy measurements, often leading to overfitting to noise. In this work, we instead evaluate data-fidelity collectively by testing whether the observed measurements are statistically consistent with the noise distributions implied by the current estimate. We adopt this aggregated perspective and introduce distributional consistency (DC) loss, a data-fidelity objective that replaces pointwise matching with distribution-level calibration using model-based probability scores for each measurement. DC loss acts as a direct and practical plug-in replacement for standard data consistency terms: i) it is compatible with modern regularizers, ii) it is optimized in the same way as traditional losses, and iii) it avoids overfitting to measurement noise even without the use of priors. Its scope naturally fits many practical inverse problems where the measurement-noise distribution is known and where the measured dataset consists of many independent noisy values. We demonstrate efficacy in two key example application areas: i) in image denoising with deep image prior, using DC instead of MSE loss removes the need for early stopping and achieves higher PSNR; ii) in medical image reconstruction from Poisson-noisy data, DC loss reduces artifacts in highly-iterated reconstructions and enhances the efficacy of hand-crafted regularization. These results position DC loss as a statistically grounded, performance-enhancing alternative to conventional fidelity losses for inverse problems.
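One plausible instantiation of such a distribution-level fidelity term, for known Gaussian measurement noise, is sketched below: each residual is mapped through the assumed noise CDF to a probability score, and the loss penalises the distance between the empirical distribution of these scores and the uniform distribution they should follow if the estimate were correct. The Gaussian assumption and the Cramér-von-Mises-style distance are illustrative choices, not the paper's exact DC loss.

```python
import numpy as np
from scipy.stats import norm

def dc_loss_gaussian(model_pred, measurements, noise_std):
    """Distribution-level data-fidelity term for i.i.d. Gaussian noise:
    scores_i = Phi((y_i - f_i)/sigma) should be uniform on [0, 1] when the
    estimate f matches the true signal; we penalise the squared distance
    between their empirical CDF and the uniform CDF (Cramer-von-Mises style)."""
    z = (np.asarray(measurements) - np.asarray(model_pred)) / noise_std
    scores = np.sort(norm.cdf(z))
    n = len(scores)
    uniform_quantiles = (np.arange(1, n + 1) - 0.5) / n
    return float(np.mean((scores - uniform_quantiles) ** 2))
```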
Magnetism is thought to play an important role in the evolution and dynamics of stars, though little is known about magnetic fields deep within stellar interiors. A promising avenue for probing these fields uses asteroseismic observations of global oscillations that result from the coupling of acoustic waves in the convective zone to internal gravity waves (IGWs) in the radiative interior. Recent modeling efforts implicate deep magnetic fields in the suppression of dipole mixed modes observed in 20% of red giants and a number of high-mass main sequence stars. Previous numerical and theoretical work shows that core magnetic fields could suppress axisymmetric global modes by refracting down-going IGWs into slow-magnetosonic (SM) waves that damp at magnetic cutoff heights. Here, we extend these results to the non-axisymmetric case, for which the IGWs and SM waves are coupled to a continuous spectrum of Alfven waves (AWs). We consider a Cartesian model of the radiative interior with uniform stratification and a spatially-varying, current-free magnetic field. Using a Wentzel-Kramers-Brillouin approximation to solve for the vertical mode structure, corroborated with numerical simulations, we show that IGWs convert to up-going SM waves, which resonate with the Alfven spectrum and produce mixed SM-AW modes. We find cutoff heights (as in the axisymmetric case), above which the SM/SM-AWs convert to AWs. Latitudinal variations of the background magnetic field lead to phase mixing of the AWs, resulting in rapid damping. Our results suggest that energy in both axisymmetric and non-axisymmetric IGWs is lost via interactions with a strong magnetic field.
As the scope of Computational Fluid Dynamics (CFD) grows to encompass ever larger problem scales, so does the interest in whether quantum computing can provide an advantage. In recent years, Quantum Lattice Gas Automata (QLGA) and Quantum Lattice Boltzmann Methods (QLBM) have emerged as promising candidates for quantum-native implementations of CFD solvers. Though the progress in developing QLGA and QLBM algorithms has been significant, it has largely focused on the development of models rather than applications. As a result, the zoo of QLGA and QLBM algorithms has grown to target several equations and to support many extensions, but the practical use of these models is largely limited to quantum state tomography and observable measurement. This limitation is crucial in practice, because unless very specific criteria are met, such measurements may cancel out any potential quantum advantage. In this paper, we propose an application based on discrete optimization and quantum search, which circumvents flow field measurement altogether. We propose methods for simulating many different lattice configurations simultaneously and describe how the usage of amplitude estimation and quantum search can provide an asymptotic quantum advantage. Throughout the paper, we provide detailed complexity analyses of gate-level implementations of our circuits and consider the benefits and costs of several encodings.
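For orientation, the standard query-complexity scalings that such proposals build on (quoted as background, not as the specific counts derived for the circuits in this paper) are
\[
N_{\mathrm{Grover}} = O\!\left(\sqrt{N}\right), \qquad N_{\mathrm{AE}} = O\!\left(1/\epsilon\right),
\]
for searching over $N$ lattice configurations and for estimating an amplitude to additive error $\epsilon$, compared with $O(N)$ evaluations and $O(1/\epsilon^{2})$ samples classically; this quadratic gap underlies the asymptotic advantage discussed above.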
Low power consumption is critical for smart windows used for temperature control and privacy. The recently discovered ferroelectric nematic liquid crystals exhibit strong coupling of the ferroelectric polarization with electric fields, making them promising candidates for energy-efficient electrochromic devices. Here we investigate the electrochromic properties of a room-temperature chiral ferroelectric nematic liquid crystal in films with in-plane electrodes, where the electric field is perpendicular to the helical axis. We demonstrate that smart windows based on this material can regulate interior temperatures within a 10~$^{\circ}$C range using a specific power of only 50 mW/m$^2$, achieving 50\% larger temperature modulation and 50-100 times lower power consumption than polymer-dispersed and polymer-stabilized liquid crystal windows. These findings suggest that chiral ferroelectric nematic liquid crystals offer a highly efficient approach for smart window applications, potentially surpassing existing electrochromic technologies in energy efficiency and thermal regulation.
Gyrochronology, a method for dating aged field stars ($\gtrsim$ a few Gyr) based on their rotation rate, has recently been shown to fail for many stars older than the sun. The explanation most often put forth is that a shutdown or mode change in the stellar dynamo leads to a sharp decrease in angular momentum loss in magnetized coronal winds. In this paper, we explore an alternate possibility, namely a collapse of the wind itself through a reduction of coronal heating. We show that in the low coronal temperature ($T_0$) limit, even at solar-like low rotation rates ($\Omega$) and coronal magnetic field strength ($B_{r0}$), magnetocentrifugal effects are important and preclude expression of the mass and angular momentum loss rates as power-laws of $T_0$ or $\Omega$ when $T_0$ drops below $\simeq 1.5\,$MK. Mass loss is found to scale linearly with power input into the wind at all coronal temperatures. Introducing an ad hoc power law relationship $T_0\propto B_{r0}^\sigma$ while retaining the ``standard'' dynamo relationship $B_{r0}\propto\Omega$, we show that reproducing the observed break in gyrochronology requires an exponent $\sigma\gtrsim 1.5$, with which is associated a drop by over 3 orders of magnitude in power input into the quiet corona. This appears physically unrealistic, given current observations of chromospheric and coronal non-thermal emission in aged solar-type stars.
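Combining the two scaling relations invoked above gives, as a simple sketch under those ad hoc assumptions,
\[
T_0 \propto B_{r0}^{\sigma}, \qquad B_{r0} \propto \Omega \;\;\Longrightarrow\;\; T_0 \propto \Omega^{\sigma},
\]
so that with $\sigma \gtrsim 1.5$ even a modest decrease in rotation rate implies a steep drop in coronal temperature, and hence in the power input to the wind.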
We present a comprehensive computational framework for simulating nonadiabatic molecular dynamics with explicit inclusion of geometric phase (GP) effects. Our approach is based on a generalized two-level Hamiltonian model that can represent various electronic state crossings -- conical intersections, avoided crossings, and elliptic intersections -- through appropriate parameterization. We introduce a novel prelooping trajectory initialization scheme, allowing us to encode the memory as an initial phase accumulated during the adiabatic evolution over the potential energy surface. This unified framework handles different types of level crossings by incorporating Berry curvature-based force corrections to Ehrenfest dynamics, ensuring an accurate representation of topological effects. For conical intersections, our method incorporates the theoretically expected phase of $\pi$, while for elliptic intersections it yields a parametrically tunable phase, different from $\pi$, that is independent of the loop radius (energy). We also include an eccentricity parameter ($e$) in the diabatic coupling to model more realistic molecular systems. Numerical simulations demonstrate the consistency of our approach with theoretical predictions for the mixing of states and the inhibition of mixing due to geometric phase effects. This framework provides a valuable tool for studying quantum-classical interactions in molecular systems where geometric phase effects play a significant role. The elliptic intersection and its geometric phase effect open an avenue for the design and discovery of degenerate materials, and offer a fresh perspective for developing new kinds of spectroscopy and potential qubit applications. This simple Hamiltonian also reveals a pathological phase-protection effect, $E = kr$ with $k$ real, that could be of great utility in designing a new spectroscopy.
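As a minimal numerical illustration of the conical-intersection case, consider a generic real two-level Hamiltonian (not the generalized parameterization of this work): transporting the adiabatic ground state around a loop enclosing the degeneracy returns it with a sign flip, i.e. a geometric phase of $\pi$.

import numpy as np

def ground_state(x, y):
    """Lower adiabatic eigenvector of H = [[x, y], [y, -x]], a real conical intersection at the origin."""
    w, v = np.linalg.eigh(np.array([[x, y], [y, -x]]))
    return v[:, 0]

# Parallel-transport the ground state around a loop of radius r enclosing the intersection.
r = 1.0
angles = np.linspace(0.0, 2.0 * np.pi, 2001)
psi0 = ground_state(r * np.cos(angles[0]), r * np.sin(angles[0]))
psi = psi0.copy()
for t in angles[1:]:
    nxt = ground_state(r * np.cos(t), r * np.sin(t))
    if np.dot(psi, nxt) < 0:  # fix the arbitrary eigenvector sign so the state stays continuous
        nxt = -nxt
    psi = nxt
print("overlap with initial state:", np.dot(psi0, psi))  # ~ -1, i.e. a geometric phase of pi

The elliptic-intersection case discussed above differs precisely in that the accumulated phase is no longer pinned to $\pi$.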
Halide perovskites have emerged as promising candidates for high-performance solar cells. This study investigates the temperature-dependent optoelectronic properties of mixed-cation mixed-halide perovskite solar cells using electroluminescence (EL) and photoluminescence (PL) hyperspectral imaging, along with current-voltage analysis. Luminescence images, which were converted to EL and PL external radiative efficiency (ERE) maps, revealed significant changes in the optoelectronic behavior of these devices at low temperatures. Specifically, we found that a significant source of heterogeneity in the low-temperature EL ERE maps below 240 K is related to local charge injection and extraction bottlenecks, whereas PL ERE maps show suppressed non-radiative recombination and significant improvements in efficiency throughout the investigated temperature range. The spatial distribution of ERE and its variation with applied current were analyzed, offering insights into charge-carrier dynamics and defect behavior. Our results reveal that while the perovskite layer exhibits enhanced ERE at low temperatures, charge injection barriers at the interfaces of the perovskite solar cells significantly suppress EL and degrade the fill factor below 240 K. These findings reveal that a deeper understanding of the performance of perovskite solar cells under low-temperature conditions is an essential step toward their potential application in space power systems and advanced semiconductor devices.
Understanding the spectral properties of kernels offers a principled perspective on generalization and representation quality. While deep models achieve state-of-the-art accuracy in molecular property prediction, kernel methods remain widely used for their robustness in low-data regimes and transparent theoretical grounding. Despite extensive studies of kernel spectra in machine learning, systematic spectral analyses of molecular kernels are scarce. In this work, we provide the first comprehensive spectral analysis of kernel ridge regression on the QM9 dataset, covering molecular fingerprint, pretrained transformer-based, and global and local 3D representations across seven molecular properties. Surprisingly, richer spectral features, measured by four different spectral metrics, do not consistently improve accuracy. Pearson correlation tests further reveal that for transformer-based and local 3D representations, spectral richness can even have a negative correlation with performance. We also implement truncated kernels to probe the relationship between spectrum and predictive performance: in many kernels, retaining only the top 2% of eigenvalues recovers nearly all performance, indicating that the leading eigenvalues capture the most informative features. Our results challenge the common heuristic that "richer spectra yield better generalization" and highlight nuanced relationships between representation, kernel features, and predictive performance. Beyond molecular property prediction, these findings inform how kernel and self-supervised learning methods are evaluated in data-limited scientific and real-world tasks.
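A sketch of the truncated-kernel probe on synthetic data (the paper's molecular representations, kernels, and truncation protocol are not reproduced here): keep only the leading fraction of eigenpairs of the training kernel before solving kernel ridge regression.

import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def truncated_krr(K_train, K_test, y, keep_frac=0.02, reg=1e-6):
    """Kernel ridge regression using only the leading eigenpairs of the training kernel."""
    w, V = np.linalg.eigh(K_train)                 # eigenvalues in ascending order
    r = max(1, int(np.ceil(keep_frac * len(w))))
    w_r, V_r = w[-r:], V[:, -r:]                   # top-r part of the spectrum
    K_r = (V_r * w_r) @ V_r.T                      # low-rank reconstruction of the kernel
    alpha = np.linalg.solve(K_r + reg * np.eye(len(y)), y)
    return K_test @ alpha

# Toy comparison of the full and truncated kernels on a smooth target.
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(400, 3)); Xt = rng.uniform(-2, 2, size=(100, 3))
y, yt = np.sin(X).sum(axis=1), np.sin(Xt).sum(axis=1)
K, Kt = rbf_kernel(X, X), rbf_kernel(Xt, X)
for frac in (1.0, 0.02):
    rmse = np.sqrt(np.mean((truncated_krr(K, Kt, y, keep_frac=frac) - yt) ** 2))
    print(f"keep_frac={frac}: test RMSE = {rmse:.3f}")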
High-performing medical Large Language Models (LLMs) typically require extensive fine-tuning with substantial computational resources, limiting accessibility for resource-constrained healthcare institutions. This study introduces a confidence-driven multi-model framework that leverages model diversity to enhance medical question answering without fine-tuning. Our framework employs a two-stage architecture: a confidence detection module assesses the primary model's certainty, and an adaptive routing mechanism directs low-confidence queries to Helper models with complementary knowledge for collaborative reasoning. We evaluate our approach using Qwen3-30B-A3B-Instruct, Phi-4 14B, and Gemma 2 12B across three medical benchmarks: MedQA, MedMCQA, and PubMedQA. Results demonstrate that our framework achieves competitive performance, with particularly strong results on PubMedQA (95.0\%) and MedMCQA (78.0\%). Ablation studies confirm that confidence-aware routing combined with multi-model collaboration substantially outperforms single-model approaches and uniform reasoning strategies. This work establishes that strategic model collaboration offers a practical, computationally efficient pathway to improve medical AI systems, with significant implications for democratizing access to advanced medical AI in resource-limited settings.
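A minimal sketch of the two-stage routing idea, with hypothetical model callables and a simple majority-vote aggregation standing in for the collaborative-reasoning step; the confidence detector and thresholds used in the study may differ.

from collections import Counter
from typing import Callable, Iterable, Tuple

Answer = Tuple[str, float]  # (answer, confidence in [0, 1])

def answer_with_routing(question: str,
                        primary: Callable[[str], Answer],
                        helpers: Iterable[Callable[[str], Answer]],
                        threshold: float = 0.8) -> str:
    """Stage 1: query the primary model and assess its confidence.
    Stage 2: if confidence is low, consult helper models and aggregate by majority vote."""
    answer, confidence = primary(question)
    if confidence >= threshold:
        return answer
    votes = [answer] + [helper(question)[0] for helper in helpers]
    return Counter(votes).most_common(1)[0][0]

# Usage with stub callables standing in for the Qwen3 / Phi-4 / Gemma 2 backends.
primary = lambda q: ("A", 0.55)   # low confidence, so the query is routed
helper1 = lambda q: ("B", 0.90)
helper2 = lambda q: ("B", 0.85)
print(answer_with_routing("Which agent is first-line for ...?", primary, [helper1, helper2]))  # "B"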
Diamonds hosting color centers possess intrinsically high thermal conductivity; therefore, laser-induced heating has often received little attention. However, when placed on substrates with low thermal conductivity, localized heating of diamonds under laser excitation can become significant, and the presence of an interfacial polymer layer between substrate and diamond further amplifies this effect. Yet, the relationship between substrate thermal conductivity, polymer thickness, and laser heating remains to be established. Here, a systematic investigation is presented on laser-induced heating of silicon-vacancy diamond on substrates with varying thermal conductivity and interfacial polymer thickness. Results reveal that even at a low excitation power of 737~$\mu$W/$\mu$m$^2$, thin amorphous holey carbon -- the lowest-conductivity substrate ($\sim$0.2~W~m$^{-1}$~K$^{-1}$) studied -- exhibits substantial heating, while glass ($\sim$1.4~W~m$^{-1}$~K$^{-1}$) and polydimethylsiloxane (PDMS, $\sim$0.35~W~m$^{-1}$~K$^{-1}$) show noticeable heating only above 2.95~mW/$\mu$m$^2$. For polymer interlayers, a thickness of just 2.2~$\mu$m induces significant heating at 2.95~mW/$\mu$m$^2$ and above, highlighting the strong influence of both the substrate and the polymer thickness on the local heating response. Experimental findings are further validated using COMSOL Multiphysics simulations with a steady-state 3D heat transfer model. These results provide practical guidance for substrate selection and sample preparation, enabling optimization of conditions for optical thermometry and quantum sensing applications.
The distribution of resources in the subsurface is deeply linked to the variations of its physical properties. Generative modeling has long been used to predict those physical properties while quantifying the associated uncertainty. But current approaches struggle to properly reproduce geological structures, and fluvial deposits in particular, because of their continuity. This study explores whether a generative adversarial network (GAN) -- a type of deep-learning algorithm for generative modeling -- can be trained to reproduce fluvial deposits simulated by a process-based model -- a more expensive model that mimics geological processes. An ablation study shows that developments from the deep-learning community to generate large 2D images are directly transferable to 3D images of fluvial deposits. Training remains stable, and the generated samples reproduce the non-stationarity and details of the deposits without mode collapse or pure memorization of the training data. Using a process-based model to generate those training data allows us to include valuable properties other than the usual physical properties. We show how the deposition time lets us monitor and validate the performance of a GAN by checking that its samples honor the law of superposition. Our work joins a series of previous studies suggesting that GANs are more robust than they are given credit for, at least for training datasets targeting specific geological structures. Whether this robustness transfers to larger 3D images and multimodal datasets remains to be seen. Exploring how deep generative models can leverage geological principles like the law of superposition holds considerable promise.
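One simple way to phrase the superposition check mentioned above, given here as an illustrative stand-in for the paper's actual validation (it assumes the vertical axis of the generated volume points upward and that larger deposition times mean younger sediment):

import numpy as np

def superposition_violation_rate(dep_time, vertical_axis=0):
    """Fraction of vertically adjacent cell pairs whose deposition time decreases upward."""
    n = dep_time.shape[vertical_axis]
    upper = np.take(dep_time, range(1, n), axis=vertical_axis)
    lower = np.take(dep_time, range(0, n - 1), axis=vertical_axis)
    return float(np.mean(upper < lower))

# A volume that ages downward honors the law of superposition; flipping it does not.
consistent = np.cumsum(np.random.rand(32, 64, 64), axis=0)
print(superposition_violation_rate(consistent))        # 0.0
print(superposition_violation_rate(consistent[::-1]))  # ~1.0

A generated sample whose deposition-time channel yields a low violation rate is, in this sense, geologically self-consistent.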
This work provides a detailed mapping of various mechanisms of surface-trap-induced gate leakage in GaN HEMTs across a temperature range from room to cryogenic levels. Two-dimensional variable-range hopping is observed at small gate bias. Under higher reverse gate bias, the leakage is dominated by the Poole--Frenkel emission above 220 K, but gradually transitions to the trap-assisted tunneling below 220 K owing to the frozen-trap effect. The trap barrier height extracted from the gate leakage current under the upward gate sweep is 0.65 V, which is 12\% higher than that from the downward sweep. The gate leakage current as a function of the gate bias exhibits clockwise hysteresis loops above 220 K but counterclockwise ones below 220 K. This remarkable opposite hysteresis phenomenon is thoroughly explained by the trap mechanisms.
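For reference, leakage data of this kind are usually tested against the standard Poole--Frenkel form (quoted as background; the exact expression and prefactors fitted in this work may differ),
\[
J_{\mathrm{PF}} \;\propto\; E \,\exp\!\left[-\frac{q\left(\phi_B - \sqrt{qE/(\pi\varepsilon)}\right)}{k_B T}\right],
\]
so that $\ln(J/E)$ is linear in $\sqrt{E}$ with a slope set by the dynamic permittivity $\varepsilon$, whereas trap-assisted tunneling shows a much weaker temperature dependence.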
We investigate the asymmetric integrable turbulence and rogue waves (RWs) emerging from the modulation instability (MI) of plane waves for the DNLS equation. The $n$-th moments and ensemble-averaged kinetic and potential energy exhibit oscillatory convergence towards their steady-state values. Specifically, the amplitudes of oscillations for these quantities decay asymptotically with time as $t^{-1.36}$, while the phase shifts demonstrate a nonlinear decay with a rate of $t^{-0.78}$. The frequency of these oscillations is observed to be twice the maximum growth rate of MI. These oscillations can be classified into two distinct types: one is in phase with the ensemble-averaged potential-energy modulus $|\langle H_4\rangle|$, and the other is anti-phase. This behavior is also reflected in the wave-action spectrum $S_k(t)$ for a given $k$, the auto-correlation function $g(x,t)$ for a given $x$, as well as the PDF $P(I,t)$. The critical feature of the turbulence is the wave-action spectrum, which follows a power-law distribution $|k+3|^{-\alpha}$ except for $k=-3$. Unlike in the NLS case, the turbulence in the DNLS setting is asymmetric, primarily due to the asymmetry between the wave number of the plane wave from the MI and the perturbation wave number. As the asymptotic peak value of $S_k$ is observed at $k = -3$, the auto-correlation function exhibits a nonzero level as $x \to \pm L/2$. The PDF of the wave intensity asymptotically approaches the exponential distribution in an oscillatory manner. However, during the initial stage of the nonlinear phase, MI slightly increases the occurrence of RWs. This happens at the moments when the potential modulus is at its minimum, where the probability of RWs occurring in the range $I\in [12, 15]$ is significantly higher than in the asymptotic steady state.
The International Academy of Astronautics (IAA) SETI Committee has long provided guiding principles for responding to a potential detection of a SETI signal. The foundational Declaration of Principles Concerning Activities Following the Detection of Extraterrestrial Intelligence, first formulated in 1989, has been widely recognised by the international scientific community. A supplemental set of draft protocols addressing the possibility of a reply to an extraterrestrial signal was prepared in 1995 by the IAA SETI Permanent Committee, with both documents presented in a position paper to the UN Committee on the Peaceful Uses of Outer Space in 2000. In keeping with the evolving landscape of SETI research, the IAA Declaration of Principles was streamlined and updated in 2010. Recognising the need for continued adaptation, the IAA SETI Committee established a Task Group in 2022 to re-examine the protocols in light of recent advances in search methodologies, the expansion of international participation in SETI, and the increasing complexity of the global information environment. The Group recognises the living-document nature of the protocols, which will require ongoing refinement to remain relevant and effective in a rapidly changing world. A draft revised Declaration of Principles was presented at the IAC 2024 in Milan, and initial feedback was received from the community, particularly members of the IAA SETI Committee. Since then, we have continued to seek broader community input in a structured process, refining the proposed updates based on further discussions and consultations. A Revised Declaration of Principles is presented here.
Magnetic reconnection is a multiscale phenomenon where fluid- and particle-scale processes interact. The particle-in-cell (PIC) method, capable of resolving kinetic (particle-scale) physics, is extensively used to study the kinetic effects in magnetic reconnection. However, because of their high computational cost, PIC simulations cannot capture the interaction between kinetic and fluid dynamics, which poses a major obstacle to understanding magnetic reconnection in large-scale phenomena such as solar flares. A multi-hierarchy simulation that combines Magnetohydrodynamics (MHD) and PIC provides a promising means to overcome these spatial and temporal scale gaps. We developed a multi-hierarchy simulation code, KAMMUY (Kinetic And Magnetohydrodynamic MUlti-hierarchY simulation code), in which an ideal MHD simulation for a large domain and a PIC simulation for a smaller domain are solved in parallel with mutual information exchange. To validate the code, we conducted test simulations of MHD wave propagation and the shock tube problem. The results demonstrate that short-wavelength, high-frequency waves generated in the PIC region do not propagate into the MHD region, whereas MHD-scale structures propagate smoothly into the PIC region, highlighting the capability of our code for numerical studies of magnetic reconnection. By applying the KAMMUY code to magnetic reconnection while varying the PIC domain size, we find that the reconnection rate remains unchanged, regardless of the extent of the PIC region where the Hall magnetic field is present. This suggests that the spatial extension of the Hall magnetic field on the scale of $10 \sim 100 \lambda_i$ does not influence the reconnection rate.
Light-matter interaction in the regime of strong quantum coupling is usually treated within the framework of the Hopfield model. However, the picture of coupling well-defined modes of light and matter is correct only as long as the shapes of these eigenmodes are not substantially modified by the interaction. Moreover, parameters of theoretical models are usually obtained by fitting to experimental data. To date, there has been no straightforward method to determine a quantum master equation corresponding to a system with specific dielectric structure, which may lead to incompatibility of theoretical descriptions and physical realizations. We present a recipe for obtaining a quantum model in the polariton eigenmode basis based on Bogoliubov transformation in the conservative case and third quantization technique in the dissipative case. We show how this method can be used for boosting interaction strength and engineering nonlocal many-body interactions in carefully designed nanostructures, resulting in strongly nonclassical correlations of emitted light.
Human skin oils are a major sink for ozone in densely occupied indoor environments. Understanding how the resulting volatile and semivolatile organic oxidation products influence indoor air chemistry requires accurate representations not only of their emission into indoor air but also of their transport across the outermost skin barrier, the stratum corneum. Using molecular dynamics simulations, we investigate the passive permeation of acetone, 6-methyl-5-hepten-2-one, and water -- two representative products of skin-oil oxidation and a reference compound -- through a model stratum corneum lipid membrane. We determine position-dependent diffusivities using two complementary analyses based on the same set of simulations and evaluate their accuracy through a propagator analysis. The two approaches provide upper and lower bounds for the true diffusivity, which, when combined with previously reported free-energy profiles, yield permeabilities relevant for modeling macroscopic skin transport. Our results show that permeation is governed primarily by energetic barriers rather than by molecular mobility, and that the predicted transport coefficients vary by about one order of magnitude depending on the chosen diffusivity estimator. These findings provide molecular-level constraints for parameters used in indoor air chemistry models and establish a transferable framework for linking atomistic transport mechanisms to large-scale simulations of human exposure and indoor air quality.
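The reported quantities combine in the standard inhomogeneous solubility-diffusion expression, assumed here to be the link between the computed profiles and the permeabilities,
\[
\frac{1}{P} \;=\; \int_{z_1}^{z_2} \frac{e^{\,\beta \Delta F(z)}}{D(z)}\,\mathrm{d}z ,
\]
with $\Delta F(z)$ the free-energy profile relative to the reference phase, $D(z)$ the position-dependent diffusivity, and $\beta = 1/k_B T$; the upper and lower bounds on $D(z)$ obtained from the two estimators therefore translate directly into bounds on $P$.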
The bulk properties of convection in stellar and giant planet interiors are often assumed to be independent of the molecular diffusivities, which are very small. By contrast, simulations of this process in rotating, spherical shells, which are typically driven by conductive boundary heat fluxes, generally yield results that depend on the diffusivity. This makes it challenging to extrapolate these simulation results to real objects. However, laboratory models and Cartesian-box simulations suggest diffusion-free dynamics can be obtained if convection is driven using prescribed internal heating and cooling instead of boundary fluxes. Here, we apply this methodology to simulations of Boussinesq, hydrodynamic rotating spherical shell convection. We find that this set-up unambiguously yields diffusion-free behaviour for bulk 'thermal' properties of the convection, such as the radial temperature contrast and the convective heat transport. Moreover, the transition from prograde to retrograde equatorial zonal flow is diffusion-free and only depends on the convective Rossby number. The diffusivity dependence of other bulk 'kinematic' properties is regime-dependent. In simulations that are rotationally constrained, the convective velocities, and the strength and structure of the zonal flow, are diffusion-dependent, although the zonal flow appears to approach a diffusion-free state for sufficiently high supercriticality. In simulations that are uninfluenced by rotation, or are only influenced by rotation at large scales, diffusion-free convective velocities and zonal flow amplitudes are obtained. The result that many aspects of our idealised simulations are diffusion-free has promising implications for the development of realistic stellar and giant planet convection models that can access diffusion-free regimes.
Datasets often possess an intrinsic multiscale structure with meaningful descriptions at different levels of coarseness. Such datasets are naturally described as multi-resolution clusterings, i.e., not necessarily hierarchical sequences of partitions across scales. To analyse and compare such sequences, we use tools from topological data analysis and define the Multiscale Clustering Bifiltration (MCbiF), a 2-parameter filtration of abstract simplicial complexes that encodes cluster intersection patterns across scales. The MCbiF can be interpreted as a higher-order extension of Sankey diagrams and reduces to a dendrogram for hierarchical sequences. We show that the multiparameter persistent homology (MPH) of the MCbiF yields a finitely presented and block decomposable module, and its stable Hilbert functions characterise the topological autocorrelation of the sequence of partitions. In particular, at dimension zero, the MPH captures violations of the refinement order of partitions, whereas at dimension one, the MPH captures higher-order inconsistencies between clusters across scales. We demonstrate through experiments the use of MCbiF Hilbert functions as topological feature maps for downstream machine learning tasks. MCbiF feature maps outperform information-based baseline features on both regression and classification tasks on synthetic sets of non-hierarchical sequences of partitions. We also show an application of MCbiF to real-world data to measure non-hierarchies in wild mice social grouping patterns across time.
We examine the theoretical properties of the index of agreement loss function $L_W$, the negatively oriented counterpart of Willmott's index of agreement, a common metric in environmental sciences and engineering. We prove that $L_W$ is bounded within [0, 1], translation and scale invariant, and estimates the parameter $\Bbb{E}_{F}[\underline{y}] \pm \Bbb{V}_{F}^{1/2}[\underline{y}]$ when fitting a distribution. We propose $L_{\operatorname{NR}_2}$ as a theoretical improvement, which replaces the denominator of $L_W$ with the sum of Euclidean distances, better aligning with the underlying geometric intuition. This new loss function retains the appealing properties of $L_W$ but also admits closed-form solutions for linear model parameter estimation. We show that as the correlation between predictors and the dependent variable approaches 1, parameter estimates from squared error, $L_{\operatorname{NR}_2}$ and $L_W$ converge. This behavior is mirrored in hydrologic model calibration (a core task in water resources engineering), where performance becomes nearly identical across these loss functions. Finally, we suggest potential improvements for existing $L_p$-norm variants of the index of agreement.
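For concreteness, the commonly used form of Willmott's index of agreement, of which $L_W$ is the negatively oriented counterpart (the paper's notation and conventions may differ slightly), is
\[
d \;=\; 1 - \frac{\sum_{i=1}^{n}\left(p_i - o_i\right)^{2}}{\sum_{i=1}^{n}\left(\lvert p_i - \bar{o}\rvert + \lvert o_i - \bar{o}\rvert\right)^{2}}, \qquad L_W = 1 - d,
\]
with predictions $p_i$, observations $o_i$, and observation mean $\bar{o}$; the proposed $L_{\operatorname{NR}_2}$ replaces this denominator with a sum of Euclidean distances.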
Frictional motion is harder to initiate than to sustain, as evident when pushing a heavy object. This disparity between static and kinetic friction drives instabilities and stick-slip dynamics in systems ranging from nanodevices and MEMS to squealing brakes, glaciers and tectonic faults, yet its origin and the transition mechanism remain poorly understood. Empirical rate-and-state friction laws predict that during the static-to-kinetic transition, friction increases for nanometer-per-second slip rates, but decreases for micrometers-per-second rates and above. These transients are believed to be associated with contact strengthening (aging) at static interfaces, although their physical basis is unclear and the crossover between regimes has never been observed directly. Here we show, through nanometer-resolution sliding experiments on macroscopic rough surfaces, that these transients are segments of a single, universal non-monotonic response whose peak defines static friction. We show that this behavior arises from mechanical reorganization of interlocking surface asperities under shear, fundamentally distinct from contact aging, which is governed by thermal molecular processes. We derive, from first principles and without invoking any empirical postulates, a differential equation that quantitatively captures the friction peak. These results unify frictional transients across scales and speeds, and establish a physics-based framework for understanding frictional instabilities and failure processes in engineering and geosciences.
The grain size distribution (GSD) plays an important role in the mechanical properties of amorphous disordered systems and complex granular materials. Varying the GSD causes segregation issues and alters critical behaviors. This work uses the discrete element method (DEM) to investigate the rheological and critical behaviors of sheared granular flows with various GSDs. The results show that, while a unified rheological relation can be obtained, a characteristic length scale, which is associated with the contact probability and can be obtained from any GSD, is embedded within such a polydisperse disordered system. We further obtain a correlation function between critical solid fractions and dimensionless grain volume distributions. This work elucidates the effect of particle volumes on the rheology and micromechanics of dry granular systems and provides further insight into how the influence of other particle properties can be incorporated into a unified framework, which is valuable for the corresponding engineering and geophysical problems.
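As background for the unified rheological relation mentioned above (quoted here in its standard form, not as the paper's exact result), the $\mu(I)$ rheology reads
\[
\mu(I) \;=\; \mu_s + \frac{\mu_2 - \mu_s}{1 + I_0/I}, \qquad I \;=\; \frac{\dot{\gamma}\, d}{\sqrt{P/\rho_s}},
\]
with shear rate $\dot{\gamma}$, confining pressure $P$, grain density $\rho_s$, and a grain-scale length $d$; for polydisperse systems the appropriate choice of $d$ is nontrivial, which is where a GSD-derived characteristic length can enter.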
Electric dipole moment (EDM) measurements using paramagnetic molecules have significantly advanced over the last decade. Traditionally, these experiments have been analyzed in terms of the electron EDM. However, paramagnetic molecules are also sensitive to hadronic sources of charge-parity (CP) violation, highlighting the need for a new framework to interpret the experimental results. In this Letter, we introduce an effective field theory framework to relate molecular EDMs to the EDMs of neutrons and protons. We identify the dominant contributions through power counting and pinpoint the necessary nuclear matrix elements. As a practical application, we employ the nuclear shell model to calculate these nuclear matrix elements for the polar molecule BaF. Finally, we estimate the limits on the nucleon EDMs set by current molecular EDM experiments.
X-ray photon-counting computed tomography (PCCT) for extremity imaging allows multi-energy high-resolution (HR) imaging, but its radiation dose can be further reduced. Despite the great potential of deep learning techniques, their application in HR volumetric PCCT reconstruction has been challenged by the large memory burden, training data scarcity, and domain gap issues. In this paper, we propose a deep learning-based approach for PCCT image reconstruction at halved dose and doubled speed, validated in a New Zealand clinical trial. Specifically, we design a patch-based volumetric refinement network to alleviate the GPU memory limitation, train the network with synthetic data, and use model-based iterative refinement to bridge the gap between synthetic and clinical data. Our results in a reader study of 8 patients from the clinical trial demonstrate great potential to cut the radiation dose to half that of the clinical PCCT standard without compromising image quality and diagnostic value.
The modified Born series (MBS) is a fast and accurate method for simulating wave propagation in complex structures. In the current implementation of the MBS, the simulation size is limited by the working memory of a single computer or graphics processing unit (GPU). Here, we present a domain decomposition method that enhances the scalability of the MBS by distributing the computations over multiple GPUs, while maintaining its accuracy, memory efficiency, and guaranteed monotonic convergence. With this new method, the computations can be performed in parallel, and a larger simulation size is possible as it is no longer limited to the memory size of a single computer or GPU. We show how to decompose large problems over subdomains and demonstrate our approach by solving the Helmholtz problem for a complex structure of $3.28\cdot 10^7$ cubic wavelengths ($320 \times 320 \times 320$ wavelengths) in just $45$ minutes with a dual-GPU simulation.
Paleoclimate records provide a critical long-term perspective on natural climate variability, essential for understanding contemporary climate change. However, existing paleoclimate proxies lack the spatial-temporal coverage for studying changes in high-impact weather extremes like tropical cyclones (TCs). Here we introduce a multi-source framework that confronts the contemporary changes in TC landfalls in East Asia with a multi-century baseline (1368-1911) reconstructed from historical documents. Leveraging pre-industrial and contemporary data, the analysis reveals a relatively small shift toward earlier landfalls in the contemporary era (1946-2020). However, this shift falls well within the range of natural fluctuations documented historically (1651-1900). This low signal-to-noise ratio indicates that the forced anthropogenic signal in TC landfall timing remains challenging to detect. Besides providing a template for assessing seasonality changes in extremes, our work shows consistent natural controls of TC timing in contemporary and pre-industrial eras, lending credibility to pre-industrial observational datasets and climate simulations.
Exceptional points (EPs) in non-Hermitian photonic systems have attracted considerable research interest due to their singular eigenvalue topology and associated anomalous physical phenomena. These properties enable diverse applications ranging from enhanced quantum metrology to chiral light-matter interactions. Practical implementation of high-order EPs in optical platforms, however, remains fundamentally challenging, requiring precise multi-parameter control that often exceeds conventional design capabilities. This work presents a novel framework for engineering high-order EPs through transformation optics (TO) principles, establishing a direct correspondence between mathematical singularities and physically controllable parameters. Our TO-based paradigm addresses critical limitations in conventional Hamiltonian approaches, where abstract parameter spaces lack explicit connections to experimentally accessible degrees of freedom, while simultaneously providing full-field mode solutions. In contrast to prevailing parity-time-symmetric architectures, our methodology eliminates symmetry constraints in EP design, significantly expanding the possibilities in non-Hermitian photonic engineering. The proposed technique enables unprecedented control over EP formation and evolution in nanophotonic systems, offering new pathways for developing topological optical devices with enhanced functionality and robustness.
We report laser cooling and trapping of ytterbium atoms in a two-color magneto-optical trap (MOT). Benefiting from both the broad singlet transition ($^1\text{S}_0\rightarrow {}^1\text{P}_1$) and the narrow intercombination transition ($^1\text{S}_0\rightarrow {}^3\text{P}_1$) of ytterbium atoms, the two-color MOT enables rapid loading and efficient cooling. We systematically investigate the shielding effect of the intercombination transition by examining the atom loading and loss rates of single-color and two-color MOTs. Our findings are general and can be extended to other alkaline-earth(-like) atoms.
Individuals who do not comply with public health safety measures pose a significant challenge to effective epidemic control, as their risky behaviours can undermine public health interventions. This is particularly relevant in urban environments because of their high population density and complex social interactions. In this study, we employ detailed contact networks, built using a data-driven approach, to examine the impact of non-compliant individuals on epidemic dynamics in three major Italian cities: Torino, Milano, and Palermo. We use a heterogeneous extension of the Susceptible-Infected-Recovered model that distinguishes between ordinary and non-compliant individuals, who are more infectious and/or more susceptible. By combining electoral data with recent findings on vaccine hesitancy, we obtain spatially heterogeneous distributions of non-compliance. Epidemic simulations demonstrate that even a small proportion of non-compliant individuals in the population can substantially increase the number of infections and accelerate the timing of their peak. Furthermore, the impact of non-compliance is greatest when disease transmission rates are moderate. Including the heterogeneous, data-driven distribution of non-compliance in the simulations results in infection hotspots forming with varying intensity according to the disease transmission rate. Overall, these findings emphasise the importance of monitoring behavioural compliance and tailoring public health interventions to address localised risks.
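A stripped-down sketch of a heterogeneous network SIR of this kind, with hypothetical parameter values and a generic synthetic network (the study's data-driven contact networks and spatial non-compliance distributions are not reproduced):

import numpy as np
import networkx as nx

def run_sir(G, beta, gamma, noncompliant, boost=2.0, i0=5, steps=200, seed=0):
    """Discrete-time SIR in which non-compliant nodes are `boost` times more
    susceptible and more infectious than ordinary nodes."""
    rng = np.random.default_rng(seed)
    state = dict.fromkeys(G, "S")
    for n in rng.choice(list(G), size=i0, replace=False):
        state[n] = "I"
    infected = []
    for _ in range(steps):
        new_inf, new_rec = [], []
        for n in G:
            if state[n] != "I":
                continue
            if rng.random() < gamma:
                new_rec.append(n)
            src = boost if n in noncompliant else 1.0
            for m in G.neighbors(n):
                if state[m] == "S":
                    tgt = boost if m in noncompliant else 1.0
                    if rng.random() < min(1.0, beta * src * tgt):
                        new_inf.append(m)
        for n in new_inf:
            state[n] = "I"
        for n in new_rec:
            state[n] = "R"
        infected.append(sum(v == "I" for v in state.values()))
    return infected

G = nx.watts_strogatz_graph(2000, 10, 0.1, seed=1)
noncompliant = set(np.random.default_rng(2).choice(2000, size=100, replace=False))
curve = run_sir(G, beta=0.03, gamma=0.1, noncompliant=noncompliant)
print("peak prevalence:", max(curve), "at step", int(np.argmax(curve)))

Increasing the number of non-compliant nodes (or their boost) raises and advances the epidemic peak in this toy setting, the qualitative effect the study quantifies on data-driven urban networks.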
We study the predictability of turbulent velocity signals using probabilistic analog forecasting. Here, predictability is defined by the accuracy of forecasts and the associated uncertainties. We study the Gledzer--Ohkitani--Yamada (GOY) shell model of turbulence as well as experimental measurements from a fully developed turbulent flow. In both cases, we identify the extreme values of velocity at small scales as localized unpredictable events that lead to a loss of predictability: worse mean predictions and increased uncertainties. The GOY model, with its explicit scale separation, allows us to evaluate the prediction performance at individual scales, and thus to better relate the intensity of extreme events to the loss of forecast performance. Results show that predictability decreases systematically from large to small scales. These findings establish a statistical connection between predictability loss across scales and intermittency in turbulent flows.
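A minimal probabilistic analog forecast for a scalar signal, given as a sketch (the delay embedding, distance metric, and scoring used in the paper may differ): the $k$ nearest past analogs of the current delay vector are found, and the spread of their successors quantifies the forecast uncertainty.

import numpy as np

def analog_forecast(series, m=8, k=30, horizon=1):
    """Forecast the value `horizon` steps ahead as the mean of the successors of the
    k nearest historical analogs of the latest length-m delay vector; the standard
    deviation of those successors serves as the forecast uncertainty."""
    x = np.asarray(series, dtype=float)
    n_lib = len(x) - m - horizon + 1          # number of usable historical delay vectors
    library = np.array([x[i:i + m] for i in range(n_lib)])
    successors = np.array([x[i + m + horizon - 1] for i in range(n_lib)])
    dist = np.linalg.norm(library - x[-m:], axis=1)
    analogs = successors[np.argsort(dist)[:k]]
    return analogs.mean(), analogs.std()

# Toy usage on a noisy oscillation.
t = np.linspace(0, 200, 8000)
u = np.sin(t) + 0.1 * np.random.default_rng(3).standard_normal(t.size)
mean, spread = analog_forecast(u[:-1])
print(f"forecast {mean:.3f} +/- {spread:.3f}, truth {u[-1]:.3f}")

Localized extreme events have few good analogs, so both the error of the mean forecast and its spread grow, which is the predictability loss discussed above.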
Non-Hermitian systems host exotic phenomena absent in their Hermitian counterparts, including the recently predicted self-healing effect (SHE) of non-Hermitian skin modes. To date, the SHE of skin modes in non-Hermitian systems has not been observed experimentally. Here we propose a feasible scheme to realize SHE in photonic Floquet lattices by exploiting skin mode tunability (SMT), a mechanism in which the spectrum of skin modes localized at one boundary can be tuned via a potential applied at the opposite boundary. Such tunability arises from the non-Hermitian biorthogonality of the eigenstates. We demonstrate that a certain skin mode is exceptionally sensitive to remote-boundary potentials in an array of $100$ coupled helical waveguides, allowing broad-range spectral control and the generation of SHE with experimentally accessible parameters. Our results establish a general framework for engineering skin modes via local perturbations, thereby expanding the toolbox for non-Hermitian wave control.
This work developed an accurate and robust absorption-based method for spatially resolved measurements of gas temperatures in flames and reacting flows, with typical single-measurement uncertainties on the order of 1\%. This method exploits narrow-linewidth laser absorption of hot CO$_2$ molecules, which can be generated from combustion or artificially seeded into the flow. A collinear dual-laser setup allowed for periodic scans over tens of CO$_2$ absorption transitions near the $\nu_3$ bandhead every 100 $\mu$s, from which gas temperatures (as well as CO$_2$ concentrations) were determined with high sensitivity and robustness. Spatially resolved measurements were achieved using an electrically driven high-speed beam scanning system consisting of a 2-D galvo scanner and a pair of off-axis parabolic mirrors. An effective spatial resolution of 1 mm was achieved at a planar field measurement speed of 200 Hz and a volumetric field measurement speed of 2 Hz. A physically constrained nonlinear inference framework was also developed for the quantitative analysis of the measurement data. Proof-of-concept experiments were performed on axisymmetric flames stabilized on a McKenna burner at various equivalence ratios and flow rates, and the results agreed asymptotically with the theoretical value of the adiabatic flame temperature. An additional experiment on a flame of complex geometry demonstrated the excellent level of resolution, precision, and contrast achieved by the current thermometry method. This method promises to be a useful tool in future combustion studies owing to its high performance and relative ease of use.
Articular cartilage is a musculoskeletal soft tissue renowned for its unique mechanical properties. Understanding both its hierarchical structure and the interplay between its constituents could shed light on the mechanical competence of the tissue. Therefore, rheologic approaches based on high-resolution non-destructive imaging techniques are desired. In this context, X-ray imaging could ideally accomplish this task. Nevertheless, the nature of articular cartilage translates into poor contrast using conventional absorption modality. To overcome this limitation, several approaches can be embraced. X-ray visibility of articular cartilage can be increased with the use of radiopaque contrast agents. Therefore, further discrimination of structures could be provided by spectral techniques, pivoting on either multi-energy acquisitions or photon-counting technology. Alternatively, phase-contrast techniques unveil details typically undetected with conventional approaches. Phase-contrast imaging, based on the intrinsic decrement in the refractive index of the tissue, can be achieved with different configurations and implementations, including distinct X-ray sources and optical elements. Additionally, some phase-contrast techniques retrieve the small-angle scattering-based dark-field signal, relatable to sub-pixel structures. This scoping review aims to catalogue the application of these advanced X-ray techniques to articular cartilage imaging, following PRISMA guidelines. It discusses their advantages, limitations, and includes an overview of rheologic applications to articular cartilage.
This study examines the detection of oligonucleotide-specific signals in sensitive optomechanical experiments. Silica nanoparticles were functionalized using ZnCl$_2$ and 25-mers of single-stranded deoxyadenosine and deoxythymidine monophosphate which were optically trapped by a 1550 nm wavelength laser in vacuum. In the optical trap, silica nanoparticles behave as harmonic oscillators, and their oscillation frequency and amplitude can be precisely detected by optical interferometry. The data was compared across particle types, revealing differences in frequency, width and amplitude of peaks with respect to motion of the silica nanoparticles which can be explained by a theoretical model. Data obtained from this platform was analyzed by fitting Lorentzian curves to the spectra. Dimensionality reduction detected differences between the functionalized and non-functionalized silica nanoparticles. Random forest modeling provided further evidence that the fitted data were different between the groups. Transmission electron microscopy was carried out, but did not reveal any visual differences between the particle types.
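A sketch of the spectral-fitting step on synthetic data (the trap parameters, detection chain, and downstream classifiers of the study are not reproduced): a Lorentzian peak is fitted to a power spectral density, and its center frequency, width, and amplitude are the features compared between particle types.

import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, amplitude, f0, width, offset):
    """Lorentzian peak used to summarize an oscillator's power spectral density."""
    return amplitude * width**2 / ((f - f0)**2 + width**2) + offset

# Synthetic PSD of a trapped-nanoparticle-like resonance on a noise floor.
rng = np.random.default_rng(4)
f = np.linspace(20e3, 80e3, 2000)                                  # Hz
psd = lorentzian(f, 1.0, 50e3, 1.5e3, 0.02) * rng.gamma(5, 1 / 5, f.size)

p0 = [psd.max(), f[np.argmax(psd)], 2e3, np.median(psd)]           # rough initial guess
popt, pcov = curve_fit(lorentzian, f, psd, p0=p0)
amplitude, f0, width, offset = popt
print(f"f0 = {f0:.0f} Hz, width = {width:.0f} Hz, amplitude = {amplitude:.2f}")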
Paraxial light skyrmions are topological configurations that map a spatial domain of the field onto the full Poincaré sphere of polarization states. While optical skyrmions have been explored in continuous-wave regimes, their realization in the ultrafast domain remains open. Here we demonstrate that attosecond skyrmion pulses can be generated via high-harmonic generation. Advanced simulations combining single-atom strong-field theory and macroscopic propagation reveal that an infrared linearly polarized vector beam with fractional orbital angular momentum produces extreme-ultraviolet harmonic fields with nearly identical skyrmion polarization distributions across a broad spectral range. Using 1.2 $\mu$m driving fields and experimentally feasible spectral filtering, we show that the coherent superposition of consecutive harmonics centered at 70 eV yields a train of skyrmion pulses with $\sim500$ attosecond duration. Our results open opportunities to use structured attosecond light with topological polarization textures in fields such as ultrafast control, imaging, and spectroscopy.
A new comprehensive study on the Cs${_2}$ZrCl${_6}$ (CZC) crystal scintillating properties under different types of irradiation was performed over a wide temperature range from 5 to 300 K. The light yield (LY) at room temperature (RT), measured under irradiation by 662 keV $\gamma$ quanta of $^{137}$Cs, was evaluated to be 53,300 $\pm$ 4,700 photons/MeV corresponding to approximately 71% of its estimated absolute value. The maximum light emission was observed in the temperature interval 135-165 K, where the LY reached 56,900 photons/MeV and 19,700 photons/MeV for $\gamma$ quanta and $\alpha$ particles, respectively. The quenching factor (QF) for $\alpha$ particles increases smoothly from QF = 0.30 at RT to QF = 0.36 at 135 K. The shape of scintillation pulses induced by $\alpha$ particles is characterized by three time-constants (0.3, 2.5 and 11.8 $\mu$s at RT), whereas the average pulse of $\gamma$ induced events is characterized by two time-constants (1.3 and 11.5 $\mu$s at RT). At the same time, scintillating properties and pulse-shape discrimination capability of the CZC exhibit an acute deterioration at temperatures below 135 K. The optimal operating conditions to maximize the scintillating performance of undoped CZC crystals are discussed.
Positron Emission Tomography (PET) is a medical imaging modality that utilizes positron-emitting isotopes, such as Ga-68 and F-18, for many diagnostic purposes. The positron annihilates with an electron from the surrounding area, creating two photons of 511 keV energy and opposite momenta, entangled in their orthogonal polarizations. When each photon undergoes a Compton scattering process, the difference of their azimuthal scattering angles reflects the initial orthogonality of their polarizations, peaking at $\pm$90$^{\circ}$. This type of correlation is not yet utilized in conventional PET scanners, but could potentially offer an energy-independent method for background reduction. Measurements of these kinds of correlations can be achieved using Compton polarimeters, built from a single layer of segmented scintillating crystals such as Gadolinium Aluminium Gallium Garnet doped with Cerium (GAGG:Ce), read out by silicon photomultipliers (SiPMs). In this paper, we study the signal-to-random background ratios in measurements of these correlated annihilation photons from coincidence time spectra across clinically relevant source activities, from $\sim$200 MBq to $\sim$378 MBq. These are then compared to the standard single-pixel (photoelectric) measurements. We find that the signal-to-random background ratios (SBRs) obtained from the polarization-correlated events for Compton scattering angles $\theta_{1,2}\in[72^{\circ}, 90^{\circ}]$ and azimuthal angle difference $\Delta\phi=90^{\circ}\pm20^{\circ}$ are consistently higher than those from single-pixel events, with the ratio of their SBR values of 1.23. The SBR of the selected events also increases with the polarimetric modulation factor $\mu$, gaining $\sim$50\% in value during the experiment.
Symmetry considerations suggest that moire superlattices formed by twisted two-dimensional materials should preserve overall inversion symmetry. However, experiments consistently report robust ferroelectricity in systems such as twisted bilayer h-BN, posing a fundamental discrepancy between theory and experiment regarding its microscopic origin. Here, using large-scale finite-field molecular dynamics simulations, we challenge the prevailing defect-pinning hypothesis and instead identify an out-of-plane bending field, induced by in-plane compressive strain, as the key symmetry-breaking mechanism. This strain-induced rippling drives spatially heterogeneous interlayer sliding and distorts the moire domain wall network, resulting in a four-state ferroelectric system. Remarkably, we show this mechanism can be harnessed at the nanoscale, where localized nanobubbles designate the moire lattice's fundamental hexagonal domain clusters as the smallest individually addressable ferroelectric bits, thereby imposing local control on an otherwise globally defined structure. Our findings establish a geometry-driven framework for understanding and engineering moire ferroelectrics, offering not only a route toward ultra-high-density, rewritable memory, but also a strategy for locally tuning the moire potential itself, a critical step for manipulating emergent correlated and topological quantum phases.
The abundance of information on social media has reshaped public discussions, shifting attention to the mechanisms that drive online discourse. This study analyzes large-scale Twitter (now X) data from three global debates--Climate Change, COVID-19, and the Russo-Ukrainian War--to investigate the structural dynamics of engagement. Our findings reveal that discussions are not primarily shaped by specific categories of actors, such as media or activists, but by shared ideological alignment. Users consistently form polarized communities, where their ideological stance in one debate predicts their positions in others. This polarization transcends individual topics, reflecting a broader pattern of ideological divides. Furthermore, the influence of individual actors within these communities appears secondary to the reinforcing effects of selective exposure and shared narratives. Overall, our results underscore that ideological alignment, rather than actor prominence, plays a central role in structuring online discourse and shaping the spread of information in polarized environments.
The model of localized fermions on the triangular lattice is analyzed by means of Monte Carlo simulations in the grand canonical ensemble. The Hamiltonian of the system has the form of the extended Hubbard model (in the atomic limit) with nearest-neighbor Ising-like magnetic $J$ interactions and onsite Coulomb $U$ interactions. The model is investigated for both signs of $J$, arbitrary $U$ interaction, and arbitrary chemical potential $\mu$ (or, equivalently, arbitrary particle concentration $n$). Based on analyses of the specific heat capacity and the sublattice magnetization, the phase diagrams of the model are determined. For the ferromagnetic case ($J<0$), the transition from the ordered phase (which is a standard ferromagnet and can be stable up to $k_{B}T/|J| \approx 0.61$) is found to be second-order (for sufficiently large temperatures $k_{B}T/|J| \gtrsim 0.2$) or first-order (for $-1<U/|J|<-0.65$ at half-filling, i.e., $n=1$). In the case of $J>0$, the ordered phase occurs in the range $-1/2<U/|J|<0$ (for $n=1$), while for larger $U$ a state with short-range order is also found (also for $n \neq 1$). The ordered phase is characterized by an antiferromagnetic arrangement of magnetic moments in two sublattices forming the hexagonal lattice. The transition from this ordered phase, which is found also for $\mu \neq 0$ ($n \neq 1$) and $U/|J|>-1/2$, is always second-order for any model parameters. The ordered phase for $J>0$ can be stable up to $k_{B}T/|J| \approx 0.06$.
Breathers have been experimentally and theoretically found in many physical systems -- in particular, in integrable nonlinear-wave models. A relevant problem is to study the \textit{breather gas}, which is the limit, for $N\rightarrow \infty $, of $N$-breather solutions. In this paper, we investigate the breather gas in the framework of the focusing nonlinear Schrödinger (NLS) equation with nonzero boundary conditions, using the inverse scattering transform and Riemann-Hilbert problem. We address aggregate states in the form of $N$-breather solutions, when the respective discrete spectra are concentrated in specific domains. We show that the breather gas coagulates into a single-breather solution whose spectral eigenvalue is located at the center of the circle domain, and a multi-breather solution for the higher-degree quadrature concentration domain. These coagulation phenomena in the breather gas are called \textit{breather shielding}. In particular, when the nonzero boundary conditions vanish, the breather gas reduces to an $n$-soliton solution. When the discrete eigenvalues are concentrated on a line, we derive the corresponding Riemann-Hilbert problem. When the discrete spectrum is uniformly distributed within an ellipse, it is equivalent to the case of the line domain. These results may be useful to design experiments with breathers in physical settings.
Run-and-tumble (RT) motion is commonly observed in flagellated microswimmers, arising from synchronous and asynchronous flagellar beating. One such example is a biflagellated alga, called \textit{Chlamydomonas reinhardtii}. Its flagellar synchronization is not only affected by hydrodynamic interactions but also through contractile stress fibers that mechanically couple the flagella, enabling adaptable swimming behavior. To explore this, we design a macroscopic mechanical system that comprises dry, self-propelled robots linked by a rigid rod to model this organism. By varying the attachment points of the two ends of the rod on each robot, the model incorporates the effect of fiber contractility observed in the real organism. To mimic a low Reynolds number environment, we program each robot to undergo overdamped active Brownian (AB) motion. We find that such a system exhibits RT-like behavior, characterized by sharp, direction-reversing tumbles and exponentially distributed run times, consistent with the real organism. Moreover, we quantify tumbling frequency and demonstrate its tunability across experimental parameters. Additionally, we provide a theoretical model that reproduces our results, elucidating physical mechanisms governing RT dynamics. Thus, our robotic system not only replicates RT motion but also captures several subtle characteristics, offering valuable insights into the underlying physics of microswimmer motility.
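A minimal Euler-Maruyama sketch of the overdamped active Brownian motion programmed into each robot (illustrative parameter values; the robots' actual control loop and the rigid-rod coupling are not modeled here):

import numpy as np

def active_brownian(v0=1.0, Dr=0.5, Dt=0.01, dt=1e-3, steps=50_000, seed=0):
    """Overdamped active Brownian particle in 2D: constant speed v0 along a heading
    undergoing rotational diffusion Dr, plus translational diffusion Dt."""
    rng = np.random.default_rng(seed)
    pos = np.zeros((steps, 2))
    theta = 2 * np.pi * rng.random()
    for i in range(1, steps):
        theta += np.sqrt(2 * Dr * dt) * rng.standard_normal()
        drift = v0 * dt * np.array([np.cos(theta), np.sin(theta)])
        noise = np.sqrt(2 * Dt * dt) * rng.standard_normal(2)
        pos[i] = pos[i - 1] + drift + noise
    return pos

# Sanity check: the long-time mean-squared displacement approaches 4*(Dt + v0**2/(2*Dr))*t in 2D.
runs = [active_brownian(seed=s) for s in range(10)]
msd = np.mean([np.sum((r[-1] - r[0]) ** 2) for r in runs])
t_total = 50_000 * 1e-3
print("measured MSD:", msd, " theory:", 4 * (0.01 + 1.0 / (2 * 0.5)) * t_total)

The run-and-tumble statistics reported above emerge only once two such units are coupled through the rod, which is the point of the robotic experiment.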
In this study, we unveil a new AI model, termed PhyE2E, to discover physical formulas through symbolic regression. PhyE2E simplifies symbolic regression by decomposing it into sub-problems using the second-order derivatives of an oracle neural network, and employs a transformer model to translate data into symbolic formulas in an end-to-end manner. The resulting formulas are refined through Monte-Carlo Tree Search and Genetic Programming. We leverage a large language model to synthesize extensive symbolic expressions resembling real physics, and train the model to recover these formulas directly from data. A comprehensive evaluation reveals that PhyE2E outperforms existing state-of-the-art approaches, delivering superior symbolic accuracy, precision in data fitting, and consistency in physical units. We deployed PhyE2E to five applications in space physics, including the prediction of sunspot numbers, solar rotational angular velocity, emission line contribution functions, near-Earth plasma pressure, and lunar-tide plasma signals. The physical formulas generated by AI demonstrate a high degree of accuracy in fitting the experimental data from satellites and astronomical telescopes. We have successfully upgraded the formula regarding solar activity proposed by NASA in 1993 and, for the first time, provided an explicit explanation for the long cycle of solar activity. We also found that the decay of near-Earth plasma pressure is proportional to $r^2$, where $r$ is the distance to Earth; subsequent mathematical derivations are consistent with satellite data from another independent study. Moreover, we found physical formulas that can describe the relationships between emission lines in the extreme ultraviolet spectrum of the Sun, temperatures, electron densities, and magnetic fields. The formula obtained is consistent with the properties that physicists had previously hypothesized it should possess.
Reservoir computers can be used to predict time series generated by spatio-temporal chaotic systems. Using multiple reservoirs in parallel has been shown to improve these predictions by effectively reducing the input dimensionality of each reservoir. Similarly, one may further reduce the dimensionality of the input data by transforming it to a lower-dimensional latent space. Combining both approaches, we show that using dimensionality-reduced latent-space predictions for parallel reservoir computing not only reduces computational costs, but also leads to better prediction results for small to medium reservoir sizes. For the combined approach we further demonstrate that dimensionality reduction improves small-reservoir predictions regardless of whether the training data are contaminated by noise. The benefit of dimensionality-reduced parallel reservoir computing is illustrated and evaluated on the prediction of the one-dimensional Kuramoto-Sivashinsky equation.
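A minimal sketch of the basic pipeline, assuming a PCA latent space, a single echo-state reservoir, and a ridge readout for one-step prediction (the parallel multi-reservoir decomposition and the Kuramoto-Sivashinsky data are omitted, and all hyperparameters are illustrative):

```python
import numpy as np

# Minimal sketch: PCA-reduce the input, drive one echo-state reservoir with the
# latent series, train a ridge readout for one-step prediction. (The parallel,
# multi-reservoir setup and the Kuramoto-Sivashinsky data are omitted here.)
rng = np.random.default_rng(0)
T, D, d, N = 2000, 64, 8, 300           # time steps, input dim, latent dim, reservoir size
U = rng.standard_normal((T, D))          # stand-in for the spatio-temporal data

# PCA to a d-dimensional latent space
U0 = U - U.mean(0)
_, _, Vt = np.linalg.svd(U0, full_matrices=False)
Z = U0 @ Vt[:d].T                        # latent trajectory, shape (T, d)

# Echo-state reservoir
W_in = rng.uniform(-0.5, 0.5, (N, d))
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W_in @ Z[t] + W @ x)
    states[t] = x

# Ridge readout: predict the next latent state from the current reservoir state
X, Y = states[:-1], Z[1:]
beta = 1e-4
W_out = np.linalg.solve(X.T @ X + beta * np.eye(N), X.T @ Y)
pred = X @ W_out
print("one-step latent NRMSE:", np.sqrt(np.mean((pred - Y) ** 2)) / Z.std())
```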
Recently, triangular-lattice models have received much attention since they can describe a number of strongly correlated materials that exhibit superconductivity and various magnetic and charge orders. In this work we present an extensive analysis of the charge-ordering phenomenon in the triangular-lattice extended Hubbard model with repulsive onsite and nearest-neighbor interactions, arbitrary charge concentration, and a $\sqrt{3}\times\sqrt{3}$ supercell (three-sublattice assumption). The model is solved in the ground state within the mean-field approximation, which allows us to identify $8$ charge-ordered phases and a large variety of phase transitions. An exotic pinball-liquid phase is found and described. Moreover, a strong particle-hole asymmetry of the phase diagram is found to play an important role for triangular lattices. The detailed analysis of band structures, unavailable to more advanced methods such as dynamical mean-field theory, allows us to interpret the triangular-lattice phases found here and provides insight into the mechanisms behind the phase transitions, mechanisms that may also be encountered when correlation effects are taken into account. The complexity of the mean-field phase diagram shows the importance and usefulness of these results for further research that includes correlation effects; together with the atomic-limit approximation, they can serve as both a starting point and a tool for interpreting such results.
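A toy, real-space Hartree sketch of the three-sublattice ansatz is given below; the parameters, filling, and simple mixing scheme are illustrative assumptions and do not reproduce the paper's full mean-field treatment or phase diagram.

```python
import numpy as np

# Minimal real-space Hartree sketch of the 3-sublattice (sqrt(3) x sqrt(3))
# charge-order ansatz for an extended Hubbard model on a periodic L x L
# triangular lattice. Parameters and filling are illustrative only.
L, t, U, V = 9, 1.0, 4.0, 2.0
n_total = 2.0 / 3.0                      # electrons per site (both spins)
Ns = L * L
idx = lambda m, n: (m % L) * L + (n % L)
sub = np.array([(m - n) % 3 for m in range(L) for n in range(L)])

# hopping matrix on the triangular lattice (6 nearest neighbors per site)
H0 = np.zeros((Ns, Ns))
for m in range(L):
    for n in range(L):
        i = idx(m, n)
        for dm, dn in [(1, 0), (0, 1), (1, -1)]:
            j = idx(m + dm, n + dn)
            H0[i, j] = H0[j, i] = -t

# neighbor list for the Hartree shift from the nearest-neighbor repulsion V
nbrs = [[idx(m + dm, n + dn) for dm, dn in
         [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]]
        for m in range(L) for n in range(L)]

dens = np.full(Ns, n_total) + 0.05 * np.cos(2 * np.pi * sub / 3)  # seed the order
Ne_per_spin = round(n_total * Ns / 2)
for _ in range(200):
    shift = U * dens / 2 + V * np.array([dens[js].sum() for js in nbrs])
    w, v = np.linalg.eigh(H0 + np.diag(shift))
    new = 2 * (np.abs(v[:, :Ne_per_spin]) ** 2).sum(axis=1)       # both spins
    new = np.array([new[sub == s].mean() for s in range(3)])[sub]  # enforce ansatz
    dens = 0.5 * dens + 0.5 * new                                  # simple mixing
print("sublattice densities:", [round(dens[sub == s][0], 3) for s in range(3)])
```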
Scalable superconducting quantum processors require balancing critical constraints in coherence, control complexity, and spectral crowding. Fixed-frequency architectures suppress flux noise and simplify control via all-microwave operations but remain limited by residual ZZ crosstalk. Here we propose a microwave-activated three-qubit gate protocol for fixed-frequency transmon qubits in the large-detuning regime ($|\Delta| \gg g$), leveraging the third-order nonlinear interaction to coherently exchange $\ket{001} \leftrightarrow \ket{110}$ states. By incorporating a phase-compensated optimization protocol, numerical simulations demonstrate a high average gate fidelity exceeding $99.9\%$. Systematic error analysis identifies static long-range ZZ coupling as the dominant error source in multi-qubit systems, which can be suppressed via operations in the large-detuning regime ($\sim 1$ GHz). The protocol maintains process fidelities exceeding $98\%$ under decoherence, while demonstrating intrinsic robustness to fabrication-induced parameter variations and compatibility with existing all-microwave two-qubit gate architectures. This hardware-efficient strategy advances scalable quantum computing systems by improving coherence properties, reducing spectral congestion, and expanding the experimental toolkit for error-resilient quantum operations in the noisy intermediate-scale quantum era.
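As a hedged illustration of the exchange mechanism described above, the following sketch reduces the $\ket{001} \leftrightarrow \ket{110}$ transition to an effective two-level Rabi problem; the effective coupling strength and the zero detuning are placeholders, not the third-order parameters derived in the paper.

```python
import numpy as np
from scipy.linalg import expm

# Reduced two-level sketch of the |001> <-> |110> exchange driven by an
# effective third-order coupling g_eff. The coupling value and zero detuning
# are illustrative placeholders, not the parameters derived in the paper.
g_eff = 2 * np.pi * 0.5e6          # effective coupling (rad/s), assumed 0.5 MHz
delta = 0.0                        # effective detuning, set on resonance
H = np.array([[delta / 2, g_eff],
              [g_eff, -delta / 2]])          # basis {|001>, |110>}

t_gate = np.pi / (2 * g_eff)       # complete population transfer at resonance
psi0 = np.array([1.0, 0.0])        # start in |001>
psi = expm(-1j * H * t_gate) @ psi0
print("P(|110>) after t_gate =", abs(psi[1]) ** 2)   # ~1 for delta = 0
```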
In this work, we introduce Phi-Module, a universal plugin module that enforces Poisson's equation within the message-passing framework to learn electrostatic interactions in a self-supervised manner. Specifically, each atom-wise representation is encouraged to satisfy a discretized Poisson's equation, making it possible to acquire a potential $\phi$ and corresponding charges $\rho$ linked to the learnable Laplacian eigenbasis coefficients of a given molecular graph. We then derive an electrostatic energy term, crucial for improved total energy predictions. This approach integrates seamlessly into any existing neural potential with negligible computational overhead. Our results underscore how embedding a first-principles constraint in neural interatomic potentials can significantly improve performance while remaining hyperparameter-friendly, memory-efficient, and lightweight to train. Code will be available at this https URL.
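A minimal sketch of the discrete Poisson relation underlying the constraint above: on a toy molecular graph, the graph Laplacian links a potential vector $\phi$ to charges $\rho$ through $L\phi = \rho$, from which an electrostatic-style energy $\tfrac{1}{2}\phi^{\top}\rho$ can be formed. The graph and charges are assumptions for illustration; the learnable eigenbasis coefficients and the message-passing integration are omitted.

```python
import numpy as np

# Minimal sketch of the discrete Poisson relation on a molecular graph:
# L phi = rho, from which an electrostatic-style energy 0.5 * phi^T rho follows.
# The toy graph and charges are illustrative; the learnable Laplacian-eigenbasis
# coefficients and the message-passing integration of Phi-Module are omitted.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]    # toy 4-atom graph
n = 4
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A                       # graph Laplacian

rho = np.array([0.3, -0.1, -0.4, 0.2])               # charges sum to zero
phi = np.linalg.lstsq(L, rho, rcond=None)[0]         # potential (up to a constant)
E_elec = 0.5 * phi @ rho
print("residual:", np.linalg.norm(L @ phi - rho), " E_elec:", E_elec)
```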
Large Language Models demonstrate substantial promise for advancing scientific discovery, yet their deployment in disciplines demanding factual precision and specialized domain constraints presents significant challenges. Within molecular design for pharmaceutical development, these models can propose innovative molecular modifications but frequently generate chemically infeasible structures. We introduce VALID-Mol, a comprehensive framework that integrates chemical validation with LLM-driven molecular design, improving the rate of valid chemical structure generation from 3% to 83%. Our methodology combines systematic prompt optimization, automated chemical verification, and domain-adapted fine-tuning to ensure dependable generation of synthesizable molecules with enhanced properties. Our contribution extends beyond implementation details to provide a transferable methodology for scientifically constrained LLM applications with measurable reliability enhancements. Computational analyses indicate our framework generates promising synthesis candidates with up to 17-fold predicted improvements in target binding affinity while preserving synthetic feasibility.
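The validation step can be illustrated with RDKit, whose SMILES parser returns None for chemically unparsable strings; the candidate strings and the simple filter below are illustrative, not the paper's full verification pipeline.

```python
from rdkit import Chem

# Minimal validation filter for LLM-proposed SMILES strings. The candidates are
# illustrative; the paper's prompt optimization, fine-tuning and property
# prediction stages are omitted.
candidates = ["CCO", "c1ccccc1O", "C1=CC=CC", "not_a_smiles"]

def is_valid(smiles: str) -> bool:
    return Chem.MolFromSmiles(smiles) is not None    # None means unparsable/invalid

valid = [s for s in candidates if is_valid(s)]
print(f"{len(valid)}/{len(candidates)} valid:", valid)
```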
Efficient sampling from the Boltzmann distribution given its energy function is a key challenge for modeling complex physical systems such as molecules. Boltzmann Generators address this problem by leveraging continuous normalizing flows to transform a simple prior into a distribution that can be reweighted to match the target using sample likelihoods. Despite the elegance of this approach, obtaining these likelihoods requires computing costly Jacobians during integration, which is impractical for large molecular systems. To overcome this difficulty, we train an energy-based model (EBM) to approximate likelihoods using both noise contrastive estimation (NCE) and score matching, which we show outperforms the use of either objective in isolation. On 2d synthetic systems where failure can be easily visualized, NCE improves mode weighting relative to score matching alone. On alanine dipeptide, our method yields free energy profiles and energy distributions that closely match those obtained using exact likelihoods while achieving $100\times$ faster inference. By training on multiple dipeptide systems, we show that our approach also exhibits effective transfer learning, generalizing to new systems at inference time and achieving at least a $6\times$ speedup over standard MD. While many recent efforts in generative modeling have prioritized models with fast sampling, our work demonstrates the design of models with accelerated likelihoods, enabling the application of reweighting schemes that ensure unbiased Boltzmann statistics at scale. Our code is available at this https URL.
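A hedged sketch of the combined objective on a toy 2D system is given below: an MLP energy trained with noise contrastive estimation against a known Gaussian noise distribution plus denoising score matching (the normalization constant is absorbed into the energy network, a common simplification). The data, noise scale, and loss weighting are illustrative assumptions rather than the paper's setup.

```python
import torch
import torch.nn as nn

# Sketch of the combined objective on a toy 2-D system: an MLP energy E(x)
# trained with (i) noise contrastive estimation (NCE) against a known Gaussian
# noise distribution (normalization constant absorbed into the network, a common
# simplification) and (ii) denoising score matching (DSM). The data, noise scale
# and loss weighting are illustrative, not the paper's setup.
torch.manual_seed(0)
energy = nn.Sequential(nn.Linear(2, 64), nn.SiLU(),
                       nn.Linear(64, 64), nn.SiLU(),
                       nn.Linear(64, 1))
opt = torch.optim.Adam(energy.parameters(), lr=1e-3)

def sample_data(n):                                  # toy two-mode target
    centers = torch.tensor([[-2.0, 0.0], [2.0, 0.0]])
    return centers[torch.randint(2, (n,))] + 0.3 * torch.randn(n, 2)

noise = torch.distributions.MultivariateNormal(torch.zeros(2), 4.0 * torch.eye(2))
sigma = 0.1                                          # DSM noise level

for step in range(2000):
    x = sample_data(256)
    # NCE: classify data vs. noise samples, logit = -E(x) - log q(x)
    y = noise.sample((256,))
    logit_x = -energy(x).squeeze(-1) - noise.log_prob(x)
    logit_y = -energy(y).squeeze(-1) - noise.log_prob(y)
    loss_nce = (nn.functional.softplus(-logit_x).mean()
                + nn.functional.softplus(logit_y).mean())
    # DSM: model score -grad E(x_t) should match (x - x_t) / sigma^2
    x_t = (x + sigma * torch.randn_like(x)).requires_grad_(True)
    score = -torch.autograd.grad(energy(x_t).sum(), x_t, create_graph=True)[0]
    loss_dsm = ((score - (x - x_t) / sigma ** 2) ** 2).sum(-1).mean()
    loss = loss_nce + 0.1 * loss_dsm                 # illustrative weighting
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final combined loss:", float(loss))
```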
We present a $GW$ space-time algorithm for periodic systems in a Gaussian basis including spin-orbit coupling. We employ lattice summation to compute the irreducible density response and the self-energy, while we employ $k$-point sampling for computing the screened Coulomb interaction. Our algorithm enables accurate and computationally efficient quasiparticle band structure calculations for atomically thin transition-metal dichalcogenides. For monolayer MoS$_\text{2}$, MoSe$_\text{2}$, WS$_\text{2}$, and WSe$_\text{2}$, computed $GW$ band gaps agree on average within 50 meV with plane-wave-based reference calculations. $G_0W_0$ band structures are obtained in less than two days on a laptop (Intel i5, 192 GB RAM) or in less than 30 minutes using 1024 cores. Overall, our work provides an efficient and scalable framework for $GW$ calculations on atomically thin materials.
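The "space-time" aspect mentioned above refers to evaluating the self-energy as a pointwise product in imaginary time rather than a frequency convolution; the toy arrays below only illustrate that bookkeeping and carry no physics.

```python
import numpy as np

# Illustration of the "space-time" bookkeeping only: on an imaginary-time grid
# the GW self-energy is a pointwise product,
#   Sigma(r, r'; i*tau) = i * G(r, r'; i*tau) * W(r, r'; i*tau),
# followed by a transform to frequency, instead of a frequency convolution.
# G and W below are random placeholders with plausible shapes, not physical data.
n_r, n_tau = 40, 64
rng = np.random.default_rng(0)
G = rng.standard_normal((n_tau, n_r, n_r))   # stand-in Green's function
W = rng.standard_normal((n_tau, n_r, n_r))   # stand-in screened interaction
Sigma_tau = 1j * G * W                        # elementwise product in r, r', tau
Sigma_w = np.fft.fft(Sigma_tau, axis=0)       # schematic tau -> frequency transform
print(Sigma_tau.shape, Sigma_w.shape)
```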
Orbital angular momentum (OAM) as both classical and quantum states of light has proven essential in numerous applications, from high-capacity information transfer to enhanced precision and accuracy in metrology. Here, we extend OAM metrology to relativistic scenarios to determine the Lorentz factor of a moving reference frame, exploiting the fact that OAM is not Lorentz invariant. We show that the joint OAM spectrum from entangled states is modified by length contraction when measured by two observers moving relative to the entanglement source. This relative motion rescales the spatial dimensions, thus breaking the orthogonality of the OAM measurement process and resulting in a broadening of the joint OAM spectrum that can precisely determine the Lorentz factor. We experimentally simulate velocities up to $0.99c$, confirm the predicted broadening, and use the measurement outcomes to extract the Lorentz factor. Our work provides a pathway for novel measurement techniques suitable for relativistic conditions that leverage OAM structured light as a resource.
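A hedged numerical sketch of the broadening mechanism: a Laguerre-Gaussian mode of OAM $\ell$, rescaled along one axis to mimic length contraction, loses orthogonality to the unscaled OAM basis and its decomposition spreads over neighbouring (same-parity) $\ell'$. The grid, beam waist, and scaling factor are illustrative assumptions.

```python
import numpy as np

# Sketch of the broadening mechanism: a p = 0 Laguerre-Gaussian mode of OAM l,
# length-contracted along x by a factor gamma, is no longer orthogonal to the
# unscaled OAM basis, so its decomposition spreads over same-parity l'.
# Grid size, waist and gamma are illustrative.
N, w, l, gamma = 512, 1.0, 3, 2.0
x = np.linspace(-4, 4, N)
X, Y = np.meshgrid(x, x)

def lg(X, Y, l, w):
    """Unit-norm (on the grid) p = 0 Laguerre-Gaussian mode with OAM l."""
    r, phi = np.hypot(X, Y), np.arctan2(Y, X)
    f = (np.sqrt(2) * r / w) ** abs(l) * np.exp(-r**2 / w**2) * np.exp(1j * l * phi)
    return f / np.sqrt(np.sum(np.abs(f) ** 2))

mode = lg(gamma * X, Y, l, w)        # length contraction: x -> gamma * x
spectrum = {lp: abs(np.sum(np.conj(lg(X, Y, lp, w)) * mode)) ** 2
            for lp in range(l - 4, l + 5)}
print({lp: round(p, 3) for lp, p in spectrum.items()})
```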
Chiral active materials are abundant in nature, including the cytoskeleton with attached motor proteins, rotary clusters of bacterial flagella, and self-spinning starfish embryos. These materials break both time-reversal and mirror-image (parity) symmetries due to the injection of torques at the microscale. Recently, it was found that chiral active materials may show a new type of elastic response termed `odd' elasticity. Currently, odd elasticity is understood microscopically only in ordered structures, e.g., lattice designs of metamaterials. How odd elasticity can emerge in natural or biological systems, which are usually disordered, remains to be explored. To address this, we propose a minimal generic model for disordered `odd solids', using micropolar (Cosserat) elasticity in the presence of local active torques. We find that odd elasticity naturally emerges as a nonlinear effect of internal particle rotations. Exploring the viscoelasticity of this solid when immersed in an active self-spinning solvent (an `odd fluid'), we discover both dynamically unstable regions and regions in which bulk waves can propagate even in an overdamped solid.
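The hallmark of odd elasticity invoked above is that an antisymmetric part of the elastic tensor lets closed quasistatic strain cycles perform nonzero work. A minimal numerical check of this statement in the two-dimensional shear sector is sketched below; the moduli, cycle amplitude, and sign convention are illustrative.

```python
import numpy as np

# Hallmark of odd elasticity: with an antisymmetric ("odd") modulus K_odd in the
# stress-strain relation, a closed quasistatic strain cycle performs nonzero work.
# The 2x2 block acts on the two shear strains; values and signs are illustrative.
mu, K_odd, eps0 = 1.0, 0.3, 0.1
C = np.array([[mu, K_odd],
              [-K_odd, mu]])                     # sigma = C @ eps (shear sector)

t = np.linspace(0.0, 2 * np.pi, 2000, endpoint=False)
dt = t[1] - t[0]
eps = eps0 * np.stack([np.cos(t), np.sin(t)])    # closed cycle in shear space
deps = eps0 * np.stack([-np.sin(t), np.cos(t)])  # d(eps)/dt along the cycle
sigma = C @ eps
work = np.sum(np.sum(sigma * deps, axis=0)) * dt # loop integral of sigma . d(eps)
print("work per cycle:", work)                   # equals -2*pi*K_odd*eps0**2 here
```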
Temporal networks consist of timestamped directed interactions that may appear continuously in time, yet few studies have directly tackled the continuous-time modeling of networks. Here, we introduce a maximum-entropy approach to temporal networks; under basic assumptions on the constraints, the corresponding network ensembles admit a modular and interpretable representation: a set of global time processes and a static maximum-entropy edge (i.e., node-pair) probability. This time-edge-label factorization yields closed-form log-likelihoods and expectations for degrees, clustering, and motifs, and defines a whole class of effective generative models. We provide a maximum-entropy derivation of a non-homogeneous Poisson process (NHPP) edge intensity for temporal networks via functional optimization over the path entropy, connecting NHPP modeling to maximum-entropy network ensembles. NHPPs consistently improve the log-likelihood over generic Poisson processes, while the maximum-entropy edge labels recover strength constraints and reproduce the expected unique-degree curves. We discuss the limitations of this framework and how it can be integrated with multivariate Hawkes calibration procedures, renewal theory, and neural kernel estimation in graph neural networks.
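A hedged generative sketch of the factorized picture above: a static edge probability of a simple fitness (soft configuration-model) form, assumed here purely for illustration, combined with a global inhomogeneous Poisson intensity whose event times are drawn by thinning.

```python
import numpy as np

# Sketch of the factorized picture: a static maximum-entropy edge probability of
# a simple fitness (soft configuration-model) form, assumed purely for
# illustration, combined with a global inhomogeneous Poisson intensity lambda(t);
# event times are drawn by thinning. All parameters are illustrative.
rng = np.random.default_rng(0)
n, T = 20, 10.0
theta = rng.uniform(0.2, 1.0, n)                        # node "fitness"
P = np.outer(theta, theta) / (1 + np.outer(theta, theta))
np.fill_diagonal(P, 0.0)

lam = lambda t: 3.0 * (1 + np.sin(2 * np.pi * t / T))   # global intensity
lam_max = 6.0                                           # upper bound for thinning

def thinning(rate, rate_max, horizon):
    """Sample an inhomogeneous Poisson process on [0, horizon] by thinning."""
    t, times = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t > horizon:
            return times
        if rng.uniform() < rate(t) / rate_max:
            times.append(t)

events = [(i, j, s) for i in range(n) for j in range(i + 1, n)
          if rng.uniform() < P[i, j] for s in thinning(lam, lam_max, T)]
print(len(events), "timestamped interactions on",
      len({(i, j) for i, j, _ in events}), "active edges")
```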
We trace the history of the conformal bootstrap from its early days to the present, a great example of the unity of physics. We start by describing little-known details about the origins of conformal field theory in the study of strong interactions and critical phenomena in the 1960s and 1970s. We describe similarities and differences between the approaches and results of the main groups in Moscow, Rome, and Sofia. Then come the breakthroughs in the 1980s and the 1990s, in particular 2D CFT and holography. Finally, we describe the genesis of the numerical conformal bootstrap, from the conformal technicolor bounds in the 2000s to the determination of the 3D Ising critical exponents in the 2010s. We conclude with some outstanding challenges. We stress that conformal invariance is a symmetry of nature.
The field of gravitational wave (GW) detection is progressing rapidly, with several next-generation observatories on the horizon, including LISA. GW data is challenging to analyze due to highly variable signals shaped by source properties and the presence of complex noise. These factors emphasize the need for robust, advanced analysis tools. In this context, we have initiated the development of a low-latency GW detection pipeline based on quantum neural networks (QNNs). Previously, we demonstrated that QNNs can recognize GWs simulated using post-Newtonian approximations in the Newtonian limit. We then extended this work using data from the LISA Consortium, training QNNs to distinguish between noisy GW signals and pure noise. Currently, we are evaluating performance on the Sangria LISA Data Challenge dataset and comparing it against classical methods. Our results show that QNNs can reliably distinguish GW signals embedded in noise, achieving classification accuracies above 98\%. Notably, our QNN identified 5 out of 6 mergers in the Sangria blind dataset. The remaining merger, characterized by the lowest amplitude, highlights an area for future improvement in model sensitivity. This can potentially be addressed using additional mock training datasets, which we are preparing, and by testing different QNN architectures and ansatzes.
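A generic variational quantum classifier of the kind described above can be sketched in PennyLane as follows; the embedding, ansatz, qubit count, and random stand-in features are illustrative assumptions, not the architecture trained on the LISA data.

```python
import numpy as np
import pennylane as qml

# Generic variational quantum classifier sketch (not the paper's trained model):
# angle-encode a feature vector, apply entangling variational layers, and read
# out one expectation value as a "signal vs. noise" score. The features below
# are random placeholders for preprocessed GW data segments.
n_qubits, n_layers = 4, 3
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(weights, features):
    qml.AngleEmbedding(features, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

shape = qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_qubits)
weights = np.random.default_rng(0).uniform(0, 2 * np.pi, size=shape)
features = np.random.default_rng(1).uniform(-1, 1, size=n_qubits)
score = circuit(weights, features)   # threshold the score to get a binary label
print("classifier score:", float(score))
```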