Accurate digital rock modeling of carbonate rocks is limited by the difficulty of acquiring morphological information on small-scale pore structures. Defined as microporosity phases in computed tomography (micro-CT) images, these small-scale pore structures may provide crucial connectivity between resolved pores (macroporosity). However, some carbonate rocks are heterogeneous, and high-resolution scans are resource-intensive, impeding comprehensive sampling of microporosity phases. In this context, we propose using the ensemble smoother with multiple data assimilation (ESMDA) algorithm to infer the multiphase flow properties of microporosity phases from experimental observations for digital rock modeling. The algorithm's effectiveness and compatibility are validated through a case study on a set of mm-scale Estaillades drainage image data. The case study applies ESMDA to two capillary pressure models to infer the multiphase flow properties of microporosity phases. The capillary pressure curve and saturation map are used as observations to predict wetting-phase saturation at six capillary pressure steps during iterative data assimilation. The ESMDA algorithm demonstrates improved performance with increasingly comprehensive observation data inputs, achieving better predictions than recently published alternative techniques. Additionally, ESMDA can assess the consistency between various forward physical models and experimental observations, serving as a diagnostic tool for future characterization. Given the diverse application conditions, we propose that ESMDA can serve as a general method in the characterization workflow of carbonate rocks.
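As context for the assimilation step this abstract relies on, the following is a minimal NumPy sketch of one ESMDA update in the standard Emerick-Reynolds formulation; it is a generic illustration, not the paper's petrophysical setup, and all array shapes and parameters are assumptions.

```python
import numpy as np

def esmda_update(m, d_pred, d_obs, C_d, alpha, rng):
    """One ESMDA assimilation step (generic sketch).

    m      : (n_ens, n_param) ensemble of model parameters
    d_pred : (n_ens, n_obs)   forward-model predictions per member
    d_obs  : (n_obs,)         observed data
    C_d    : (n_obs, n_obs)   observation-error covariance
    alpha  : inflation factor for this step (the 1/alpha_i sum to 1)
    """
    n_ens = m.shape[0]
    dm = m - m.mean(axis=0)
    dd = d_pred - d_pred.mean(axis=0)
    C_md = dm.T @ dd / (n_ens - 1)   # parameter-data cross-covariance
    C_dd = dd.T @ dd / (n_ens - 1)   # data auto-covariance
    # Perturb observations with inflated measurement noise.
    noise = rng.multivariate_normal(np.zeros(len(d_obs)), alpha * C_d, size=n_ens)
    K = C_md @ np.linalg.inv(C_dd + alpha * C_d)   # Kalman-like gain
    return m + (d_obs + noise - d_pred) @ K.T
```

In the paper's setting, `m` would hold the multiphase flow parameters of the microporosity phases and `d_pred` the simulated capillary pressure curve and saturation map; here those are stand-in names.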
The electric grid is increasingly vital, supporting essential services such as healthcare, heating and cooling, transportation, telecommunications, and water systems. This growing dependence on reliable power underscores the need for enhanced grid resilience. This study presents Eversource's Climate Vulnerability Assessment (CVA) for bulk distribution substations in Massachusetts, evaluating risks from storm surge, sea level rise, precipitation, and extreme temperatures. The focus is on developing a cost-efficient model to guide targeted resilience investments. This is achieved by overcoming the limitations of single-variable analyses through hazard-specific assessments that integrate spatial, climate, electrical asset, and other relevant data, and by applying sensitivity analysis to establish data-driven thresholds for actionable climate risks. By integrating geospatial analysis and data modeling with power engineering principles, this study provides a practical and replicable framework for equitable, data-informed climate adaptation planning. The results indicate that thresholds for certain climate hazards can be highly sensitive, resulting in significantly larger sets of substations requiring mitigation measures to adapt adequately to climate change; this underscores the critical importance of high-fidelity long-term climate projections.
The emergence of Large Language Models (LLMs) demonstrates their potential to encapsulate the logic and patterns of human behavior by leveraging extensive pre-training on web data. However, the boundaries of LLM capabilities in social simulation remain unclear. To further explore the social attributes of LLMs, we introduce the CiteAgent framework, designed to generate citation networks based on human-behavior simulation with LLM-based agents. CiteAgent successfully captures predominant phenomena in real-world citation networks, including power-law distributions, citational distortion, and shrinking diameter. Building on this realistic simulation, we establish two LLM-based research paradigms in social science: LLM-SE (LLM-based Survey Experiment) and LLM-LE (LLM-based Laboratory Experiment). These paradigms facilitate rigorous analyses of citation network phenomena, allowing us to validate and challenge existing theories. Additionally, we extend the research scope of traditional science of science studies through idealized social experiments, with the simulation results providing valuable insights for real-world academic environments. Our work demonstrates the potential of LLMs for advancing science of science research in social science.
Use is made of rigorous definitions of the terms normal, natural, and harmonic to reveal a number of unfamiliar aspects of them. The Gaussian distribution is not sufficient to determine who is normal, and fluctuations above or below a natural-growth curve may or may not be natural. A recipe for harmonically sustained natural growth requires that the overlap during the substitution process be limited. As a consequence, the overall growth process must experience good as well as bad 'seasons'.
Radiofrequency ablation is widely used to prevent ventricular tachycardia (VT) by creating lesions to inhibit arrhythmias; however, current surface ablation catheters are limited in their ability to create lesions deep within the left ventricle (LV) wall. Intramyocardial needle ablation (INA) addresses this limitation by penetrating the myocardium and delivering energy from within. Yet, existing INA catheters lack adequate dexterity to navigate the highly asymmetric, trabeculated LV chamber and steer around papillary structures, limiting precise targeting. This work presents a novel dexterous INA (d-INA) toolset designed to enable effective manipulation and creation of deep ablation lesions. The system consists of an outer sheath and an inner catheter, both bidirectionally steerable, along with an integrated ablation needle assembly. Benchtop tests demonstrated that the sheath and catheter reached maximum bending curvatures of 0.088~mm$^{-1}$ and 0.114~mm$^{-1}$, respectively, and achieved stable C-, S-, and non-planar S-shaped configurations. Ex-vivo studies validated the system's stiffness modulation and lesion-creation capabilities. In-vivo experiments in two swine demonstrated the device's ability to reach previously challenging regions such as the LV summit, and achieved a 219\% increase in ablation depth compared with a standard ablation catheter. These results establish the proposed d-INA as a promising platform for achieving deep ablation with enhanced dexterity, advancing VT treatment.
Rapid change and increasing climatic variability across the diverse Köppen-Geiger regions of northern Europe create a significant need for adaptation, and regional planning requires high-resolution temperature projections. This work presents an integrative downscaling framework that incorporates Vision Transformer (ViT), Convolutional Long Short-Term Memory (ConvLSTM), and Geospatial Spatiotemporal Transformer with Attention and Imbalance-Aware Network (GeoStaNet) models. The framework is evaluated with a multicriteria decision system, Deep Learning-TOPSIS (DL-TOPSIS), for ten strategically chosen meteorological stations encompassing the temperate oceanic (Cfb), subpolar oceanic (Cfc), warm-summer continental (Dfb), and subarctic (Dfc) climate regions. Norwegian Earth System Model (NorESM2-LM) Coupled Model Intercomparison Project Phase 6 (CMIP6) outputs were bias-corrected over the 1951-2014 period and subsequently validated against observations of day-to-day temperature metrics and diurnal range statistics. The ViT showed the strongest performance (Root Mean Squared Error (RMSE): 1.01 degrees C; R^2: 0.92), allowing for production of credible downscaled projections. Under the SSP5-8.5 scenario, the Dfc and Dfb climate zones are projected to warm by 4.8 degrees C and 3.9 degrees C, respectively, by 2100, with the diurnal temperature range expanding by more than 1.5 degrees C. The Time of Emergence signal first appears in subarctic winter seasons (Dfc: approximately 2032), signifying an urgent need for adaptation measures. The presented framework offers station-based, high-resolution estimates of uncertainties and extremes, with direct uses for adaptation policy over high-latitude regions undergoing fast environmental change.
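The model-comparison step rests on the classical TOPSIS procedure. The abstract does not specify how DL-TOPSIS derives its criterion weights, so the sketch below shows only standard TOPSIS with hypothetical weights and the two reported metrics (RMSE as a cost criterion, R^2 as a benefit criterion).

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with classical TOPSIS.

    matrix  : (n_alt, n_crit) decision matrix
    weights : (n_crit,) criterion weights summing to 1
    benefit : (n_crit,) bool, True where larger values are better
    Returns closeness scores in [0, 1]; higher is better.
    """
    norm = matrix / np.linalg.norm(matrix, axis=0)   # vector normalization
    v = norm * weights                               # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)        # distance to ideal point
    d_neg = np.linalg.norm(v - anti, axis=1)         # distance to anti-ideal
    return d_neg / (d_pos + d_neg)
```

For example, scoring three downscaling models on (RMSE, R^2) rows such as `[[1.01, 0.92], [1.30, 0.85], [1.55, 0.78]]` with equal weights ranks the first model highest, since it dominates on both criteria.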
Bound states in the continuum (BIC) are localized waves in electronic, photonic and acoustic systems, which remain decoupled from surrounding propagating waves and hence maintain their oscillation for extraordinarily long times [Nat Rev Mater 1, 16048 (2016)]. In photonic crystals, symmetry-protected quasi-BICs (SP-qBIC) have been realized at high-symmetry points of the Brillouin zone and utilized in photonic crystal and distributed feedback lasers. In the present work, we measure wavevector-resolved photoluminescence (PL) of monolayer WSe2 weakly coupled to a photonic slab, consisting of a square array of aluminum nanodisks. The results show that the slab supports a continuous band of symmetry-protected quasi-bound states along the Gamma-X direction, extending from the previously reported SP-qBIC at the Gamma point. The spectral width of this quasi-bound band in the continuum remains narrow through at least half of the Brillouin zone, indicating its long lifetime.
Designing miniaturized optical spectrometers is an increasingly active area of research as spectrometers are crucial components for a wide range of applications including chemical and material analysis, medical diagnostics, classical and quantum sensing, characterization of light sources, and radio frequency (RF) spectrum analysis. Among these applications, designing on-chip spectrometers for RF spectrum analysis is particularly challenging since it requires combining high resolution and large bandwidth with a fast update rate. Existing chip-scale spectrometers cannot achieve the resolution required for RF analysis, setting aside challenges in maintaining a fast update rate and broad bandwidth. In this work, we address these challenges by introducing a silicon photonic integrated circuit (PIC)-based RF spectrum analyzer that combines an ultra-high-resolution speckle spectrometer with an interferometric RF-to-optical encoding scheme. The PIC-based speckle spectrometer uses a path-mismatched multimode interferometer with inverse designed splitters to compensate for waveguide loss, enabling a record-high resolution of 100 MHz (0.8 pm at a wavelength of 1550 nm). To further improve the resolution of the overall RF spectrum analyzer, we modify the RF-to-optical encoding scheme by directing the RF signal through a path mismatched interferometer and encoding the outputs of the RF interferometer on separate optical carriers. This further reduces the RF spectral correlation width of the combined system, enabling the RF spectrum analyzer to resolve RF tones separated by 10 MHz across a bandwidth of 10 GHz. Since this approach operates as a single-shot spectrometer, it can support fast update rates, providing a path to compact, persistent wideband RF spectrum analysis.
We present a single-shot near-field technique to reconstruct the isofrequency surfaces of metamaterials in the microwave regime. In our approach, we excite resonant modes using a fixed source in a resonator composed of the material under test and map the in-plane field distribution with a movable probe. Applying a fast Fourier transform (FFT) to the measured field reveals the sample's in-plane dispersion. By extending this analysis over multiple frequencies and comparing the results with Fabry-Pérot resonances, we retrieve the full three-dimensional dispersion relation. When we apply the method to a double non-connected wire metamaterial, it accurately captures the low-frequency hyperbolic isofrequency surface, providing both a precise experimental tool and conceptual insight into spatially dispersive metamaterials.
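The core of the dispersion-retrieval step is a 2D FFT of the probe-mapped field, whose spectral peaks trace the isofrequency contour at the excitation frequency. The sketch below illustrates this on a synthetic stand-in for a measured field map (a single in-plane mode); the scan size, probe pitch, and wavevector are assumed, not taken from the paper.

```python
import numpy as np

# Synthetic stand-in for the measured in-plane field map: one mode with
# wavevector (kx, ky) sampled on the probe-scan grid (parameters assumed).
n, step = 64, 5e-3                                      # 64 x 64 scan, 5 mm pitch
kx_true, ky_true = 2 * np.pi * 37.5, 2 * np.pi * 25.0   # rad/m
x = np.arange(n) * step
X, Y = np.meshgrid(x, x, indexing="ij")
field = np.cos(kx_true * X + ky_true * Y)

# FFT of the field map: the spectral peak marks a point on the
# in-plane isofrequency contour at this excitation frequency.
spec = np.abs(np.fft.fftshift(np.fft.fft2(field)))
k_axis = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n, d=step))
ix, iy = np.unravel_index(np.argmax(spec), spec.shape)
kx_meas, ky_meas = abs(k_axis[ix]), abs(k_axis[iy])
```

Repeating this peak extraction over many excitation frequencies, and combining it with the Fabry-Pérot analysis mentioned above, is what assembles the full three-dimensional dispersion relation.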
Disturbances in gravitational wave (GW) observational data are often caused by non-stationary noise in the detector itself, such as back-scattering of laser stray light into the signal field. Unlike GW signals, non-stationary noise can appear in both the GW-signal quadrature and the orthogonal quadrature, which is usually not measured. Simultaneous sensing of this orthogonal quadrature provides a witness channel that can be used to reconstruct the disturbance in the signal quadrature, enabling subtraction of non-stationary noise. Here, we present the concept of a quadrature witness that is compatible with frequency-dependent squeezing, which is already used to simultaneously reduce photon shot noise and photon radiation pressure noise. We demonstrate that implementing this approach in a GW detector could reduce noise caused by loud back-scatter events, thereby improving the overall sensitivity and robustness of GW observatories.
Geothermal field development typically involves complex processes that require multi-disciplinary expertise in each process. Thus, decision-making often demands the integration of geological, geophysical, reservoir engineering, and operational data under tight time constraints. We present Geothermal Analytics and Intelligent Agent, or GAIA, an AI-based system for automation and assistance in geothermal field development. GAIA consists of three core components: GAIA Agent, GAIA Chat, and GAIA Digital Twin, or DT, which together constitute an agentic retrieval-augmented generation (RAG) workflow. Specifically, GAIA Agent, powered by a pre-trained large language model (LLM), designs and manages task pipelines by autonomously querying knowledge bases and orchestrating multi-step analyses. GAIA DT encapsulates classical and surrogate physics models, which, combined with built-in domain-specific subroutines and visualization tools, enable predictive modeling of geothermal systems. Lastly, GAIA Chat serves as a web-based interface for users, featuring a ChatGPT-like layout with additional functionalities such as interactive visualizations, parameter controls, and in-context document retrieval. To ensure GAIA's specialized capability for handling complex geothermal-related tasks, we curate a benchmark test set comprising various geothermal-related use cases, and we rigorously and continuously evaluate the system's performance. We envision GAIA as a pioneering step toward intelligent geothermal field development, capable of assisting human experts in decision-making, accelerating project workflows, and ultimately enabling automation of the development process.
The FastRICH ASIC provides high-precision, triggerless readout for the LS3 Enhancements and Upgrades II of the LHCb RICH detector. The demands of continuous data acquisition and varying hit rates across the detector impose unique challenges on the ASIC's design and verification. This work presents the verification strategy for FastRICH, focusing on functional correctness, timing performance, and operational robustness. The methodology includes simulations across occupancy scenarios, validation of timing precision, and stress testing under pile-up and high-rate conditions. Results demonstrate that FastRICH meets its performance requirements over the full range of expected occupancies. Key design and verification challenges specific to triggerless, fast-timing ASICs are discussed, along with lessons learned for future developments.
Achieving spatiotemporal control of light at subwavelength and subcycle scales is an important milestone in the development of new photonic materials and technologies. Ultrafast spatiotemporal light modulation currently relies on electronic interband and intraband transitions that yield pronounced refractive index changes but typically suffer from slow, picosecond response times due to carrier relaxation. Here we show that by leveraging resonant light-matter interactions in a high-quality factor metasurface it is possible to use the optical Kerr effect, a weaker, but instantaneous optoelectronic polarization effect, to achieve ultrafast, reconfigurable light modulation with unprecedented spatial and temporal control. By the subwavelength all-optical tuning of the refractive index of the dielectric metasurface unit cells, we experimentally demonstrate pulse-limited beam steering with a 74-fs response time at angles up to $\pm $13° in the near-infrared. The steering originates from the Kerr effect with a background contribution arising from slower two-photon-excited free carrier absorption. Additionally, we observe spatial back-action, linear frequency conversion, and demonstrate arbitrary ultrafast spatial light modulation in two dimensions. Our findings open the possibility of realizing new ultrafast physics in metastructures with applications in signal processing, pulse shaping, and ultrafast imaging.
Semiconductor photonic devices operating in the midwave infrared (mid-IR, which we roughly define here as wavelengths spanning 3 to 14 microns) uniquely address a wide range of current practical needs. These include chemical sensing, environmental monitoring, industrial process control, medical diagnostics, thermal imaging, LIDAR, free space optical communication, and security monitoring. However, mid-IR device technologies are currently still works in progress that are generally much less mature than their near infrared and visible counterparts. Not only are most of the relevant materials more difficult to grow and process, but attainment of the desired optical device performance is often fundamentally more challenging. This Roadmap will review the leading applications for mid-IR optoelectronics, summarize the status and deficiencies of current device technologies, and then suggest possible roadmaps for improving and maturing the performance, manufacturability, and cost of each device type so the critical needs that are uniquely addressed by mid-IR photonics can be satisfied.
Compound flooding from the combined effects of extreme storm surge, rainfall, and river flows poses significant risks to infrastructure and communities -- as demonstrated by Hurricanes Isaac and Harvey. Yet, existing methods to quantify compound flood risk lack a unified probabilistic basis. Copula-based models capture the co-occurrence of flood drivers but not the likelihood of the flood response, while coupled hydrodynamic models simulate interactions but lack a probabilistic characterization of compound flood extremes. The Joint Probability Method (JPM), the foundation of coastal surge risk analysis, has never been formally extended to incorporate hydrologic drivers -- leaving a critical gap in quantifying compound flood risk and the statistical structure of compound flood transition zones (CFTZs). Here, we extend the JPM theory to hydrologic processes for quantifying the likelihood of compound flood depths across both tropical and non-tropical storms. This extended methodology incorporates rainfall fields, antecedent soil moisture, and baseflow alongside coastal storm surge, enabling: (1) a statistical description of the flood depth as the response to the joint distribution of hydrologic and coastal drivers, (2) a statistical delineation of the CFTZ based on exceedance probabilities, and (3) a systematic identification of design storms for specified return period flood depths, moving beyond design based solely on driver likelihoods. We demonstrate this method around Lake Maurepas, Louisiana. Results show a CFTZ more than double the area of prior event-specific delineations, with compound interactions increasing flood depths by up to 2.25 feet. This extended JPM provides a probabilistic foundation for compound flood risk assessment and planning.
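The JPM logic above -- sample the joint distribution of drivers, push each sample through a flood-response function, and read return-period depths off the resulting exceedance distribution -- can be sketched in a few lines. Everything below is a toy: the driver distributions, the response surface (a real application would use a coupled hydrodynamic model), and the storm rate are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical joint driver samples (marginals chosen for illustration only):
surge = rng.gumbel(3.0, 1.0, n)      # peak coastal surge, ft
rain = rng.gamma(2.0, 2.0, n)        # storm-total rainfall, in
soil = rng.uniform(0.2, 0.9, n)      # antecedent soil moisture, -

# Toy response surface mapping drivers to flood depth at one site;
# the JPM treats depth as the response to the joint driver distribution.
def flood_depth(surge, rain, soil):
    runoff = rain * soil * 0.25                       # crude rainfall-runoff depth
    return np.maximum(surge, runoff) + 0.3 * np.minimum(surge, runoff)

depth = flood_depth(surge, rain, soil)

def return_period_depth(depth, storms_per_year, T):
    """Depth whose annual exceedance rate is 1/T (the T-year flood)."""
    p_exceed = 1.0 / (T * storms_per_year)            # per-storm exceedance prob.
    return np.quantile(depth, 1.0 - p_exceed)

d100 = return_period_depth(depth, storms_per_year=0.5, T=100)
```

The second term in the toy response is a stand-in for compound interaction: depths exceed what either driver would produce alone, which is the effect the CFTZ delineation quantifies.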
Gauss's principle of least constraint transforms a dynamics problem into a pure minimization problem, where the total magnitude of the constraint force is the cost function, minimized at each instant. Newton's equation is the first-order necessary condition for minimizing the Gaussian cost, subject to the given kinematic constraints. The principle of minimum pressure gradient (PMPG) is to incompressible fluid mechanics what Gauss's principle is to particle mechanics. The PMPG asserts that an incompressible flow evolves from one instant to another by minimizing the L2-norm of the pressure gradient force. A candidate flow field whose evolution minimizes the pressure gradient cost at each instant is guaranteed to satisfy the Navier-Stokes equation. Consequently, the PMPG transforms the incompressible fluid mechanics problem into a pure minimization framework, allowing one to determine the evolution of the flow field by solely focusing on minimizing the cost. In this paper, we show that the resulting minimization problem is a convex Quadratic Programming (QP) problem-one of the most computationally tractable classes in nonlinear optimization. Moreover, leveraging tools from analytical mechanics and the Moore-Penrose theory of generalized inverses, we derive an analytical solution for this QP problem. As a result, we present an explicit formula for the projected dynamics of the spatially discretized Navier-Stokes equation on the space of divergence-free fields. The resulting ODE is ready for direct time integration, eliminating the need for solving the Poisson equation in pressure at each time step. It is typically an explicit nonlinear ODE with constant coefficients. This compact form is expected to be highly valuable for both simulation and theoretical studies, including stability analysis and flow control design. We demonstrate the framework on the lid-driven cavity problem.
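The closed-form solution described above can be illustrated in finite dimensions. With the momentum balance written as a = f - G p, G = D^T, and continuity D a = 0, minimizing the pressure-gradient cost ||G p||^2 = ||f - a||^2 subject to D a = 0 amounts to projecting the unconstrained right-hand side f onto the null space of the discrete divergence operator D via the Moore-Penrose pseudoinverse. The sketch below is a generic linear-algebra illustration of that projection, not the paper's specific discretization.

```python
import numpy as np

def pmpg_project(f, D):
    """Closed-form solution of the instantaneous PMPG QP (sketch).

    minimize ||f - a||^2  over accelerations a, subject to D a = 0,
    where D is a (hypothetical) discrete divergence operator.
    pinv(D) @ D projects onto row(D); I minus that projects onto null(D).
    """
    P = np.eye(D.shape[1]) - np.linalg.pinv(D) @ D   # divergence-free projector
    return P @ f
```

Applying `pmpg_project` to the discretized Navier-Stokes right-hand side at each instant yields exactly the pressure-free ODE the abstract describes: the projector has constant coefficients, so no Poisson solve is needed per time step.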
The amplitude of resonant oscillations in a non-Hermitian environment can either decay or grow in time, corresponding to a mode with either loss or gain. When two coupled modes have a specific difference between their loss or gain, a feature termed an exceptional point emerges in the excitations' energy manifold, at which both the eigenfrequencies and eigenmodes of the system coalesce. Exceptional points have intriguing effects on the dynamics of systems due to their topological properties. They have been explored in contexts including optical, microwave, optomechanical, electronic and magnonic systems, and have been used to control systems including optical microcavities, the lasing modes of a PT-symmetric waveguide, and terahertz pulse generation. A challenging problem that remains open in all of these scenarios is the fully deterministic and direct manipulation of the systems' loss and gain on timescales relevant to coherent control of excitations. Here we demonstrate the rapid manipulation of the gain and loss balance of excitations of a magnonic hybrid system on durations much shorter than their decay rate, allowing us to exploit non-Hermitian physics for coherent control. By encircling an exceptional point, we demonstrate population transfer between coupled magnon-polariton modes, and confirm the distinctive chiral nature of exceptional point encircling. We then study the effect of driving the system directly through an exceptional point, and demonstrate that this allows the coupled system to be prepared in an equal superposition of eigenmodes. We also show that the dynamics of the system at the exceptional point are dependent on its generalised eigenvectors. These results extend the established toolbox of adiabatic transfer techniques with a new approach for coherent state preparation, and provide a new avenue for exploring the dynamical properties of non-Hermitian systems.
Super-Kamiokande [SK] was upgraded through the addition of gadolinium sulfate to its ultrapure water, initiating the SK-Gd program. This development enables efficient neutron tagging via the large capture cross section of gadolinium, greatly improving the identification of inverse beta decay events, the primary channel for detecting the diffuse supernova neutrino background [DSNB]. The upgrade also enhances sensitivity to galactic and pre-supernova neutrinos, as well as atmospheric neutrino interactions. To realize this capability, extensive work was performed, including the construction and operation of the EGADS demonstrator, the refurbishment of the SK tank, the development of radiopure gadolinium production methods, and the validation of the loading and uniformity of gadolinium in solution. Early SK-Gd operation has demonstrated high neutron-tagging efficiency, reduced backgrounds, and world-leading limits on the DSNB flux. With these advances, SK-Gd now stands at the threshold of discovering the DSNB and opens a wide range of new opportunities in astrophysics and neutrino physics.
Solenoid-free tokamak startup techniques are essential for spherical tokamaks and offer a pathway to cost reduction and design simplification in fusion energy systems. Local helicity injection (LHI) is one such approach, employing compact edge current sources to drive open field line current that initiates and sustains tokamak plasmas. The recently commissioned Pegasus-III spherical tokamak provides a platform for advancing this and other solenoid-free startup methods. This study investigates the effect of LHI on magnetic topology in Pegasus-III plasmas. A helical filament model represents the injected current, and the linear plasma response to its 3D field is calculated with M3D-C1. Poincaré mapping reveals substantial flux surface degradation in all modeled cases. The onset of overlapping magnetic structures and large-scale surface deformation begins at $\Psi_{N} \approx 0.37$, indicating a broad region of perturbed topology extending toward the edge. In rotating plasmas, both single-fluid and two-fluid models exhibit partial screening of the $n = 1$ perturbation, with two-fluid calculations showing stronger suppression near the edge. In contrast, the absence of rotation leads to strong resonant field amplification in the single-fluid case, while the two-fluid case with zero electron rotation mitigates this amplification and preserves edge screening. Magnetic probe measurements indicate that modeling the stream with spatial spreading$-$representing distributed current and/or oscillatory motion$-$better reproduces measured magnetic power profiles than a rigid filament model. The results underscore the role of rotation and two-fluid physics in screening stream perturbations and point to plasma flow measurements and refined stream models as key steps toward improving predictive fidelity.
Electron microscopy (EM) is a foundational tool for directly assessing the structure of materials. Recent advances in direct electron detectors have improved signal-to-noise ratios via single-electron counting. However, accurately counting electrons at high fluence remains challenging. We developed a new method of electron counting for direct electron detectors, Back-Propagation Counting (BPC). BPC uses machine learning techniques designed for mathematical operations on large tensors but does not require large training datasets. In synthetic data, we show BPC is able to count multiple electron strikes per pixel and is robust to increasing occupancy. In experimental data, frames counted with BPC are shown to reconstruct diffraction peaks corresponding to individual nanoparticles with higher intensity and produce images with improved contrast when compared to a standard counting method. Together, these results show that BPC excels in experiments where pixels see a high flux of electron irradiation, such as in situ TEM movies and diffraction.
Theoretical predictions of photochemical processes are essential for interpreting and understanding spectral features. Reliable quantum dynamics calculations of vibronic systems require precise modeling of anharmonic effects in the potential energy surfaces and off-diagonal nonadiabatic coupling terms. In this work, we present the n-mode quantization of all vibronic Hamiltonian terms, expressed as general high-dimensional model representations. This results in a second-quantized framework for accurate vibronic calculations employing the density matrix renormalization group algorithm. We demonstrate the accuracy and reliability of this approach by calculating the excited-state quantum dynamics of maleimide. We analyze convergence and the choice of parameters of the underlying time-dependent density matrix renormalization group algorithm for the n-mode vibronic Hamiltonian, demonstrating that it enables accurate calculations of complex photochemical dynamics.
Over 125 years ago, Henry Selby Hele-Shaw realized that the depth-averaged flow in thin gap geometries can be closely approximated by two-dimensional (2D) potential flow, in a surprising marriage between the theories of viscous-dominated and inviscid flows. Hele-Shaw flows allow visualization of potential flows over 2D airfoils and also undergird important discoveries in the dynamics of interfacial instabilities and convection, yet they have found little use in modeling flows in microfluidic devices, although these devices often have thin gap geometries. Here, we derive a Hele-Shaw approximation for the flow in the kinds of thin gap geometries created within microfluidic devices. Although these equations have been reported before, prior work used a less direct derivation. Here, we obtain them via a modified Method of Weighted Residuals (MWR), interpreting the Hele-Shaw approximation as the leading term of an orthogonal polynomial expansion that can be systematically extended to higher-order corrections. We provide substantial numerical evidence showing that the approximate equations can successfully model real microfluidic and inertial-microfluidic device geometries. By reducing three-dimensional (3D) flows to 2D models, our validated model will allow for accelerated device modeling and design.
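The leading-order Hele-Shaw reduction the abstract builds on is the depth-averaged law u = -(h^2 / 12 mu) grad p with p harmonic in the plane. A minimal finite-difference illustration for a straight channel follows; the geometry, gap, and fluid parameters are made up for the example, and only the leading-order (zero-correction) model is shown.

```python
import numpy as np

# Hele-Shaw channel: solve Laplace(p) = 0 with p fixed at inlet/outlet and
# no-flux (dp/dn = 0) side walls, then u = -(h^2 / 12 mu) dp/dx.
# Illustrative parameters (assumed, SI units):
h, mu = 50e-6, 1e-3            # 50 um gap, water-like viscosity
L = 1e-3                       # 1 mm long, 0.5 mm wide domain
nx, ny = 41, 21                # square cells, dx = dy = 25 um
p_in, p_out = 10.0, 0.0        # inlet/outlet pressures, Pa

p = np.zeros((nx, ny))
p[0, :], p[-1, :] = p_in, p_out
for _ in range(20_000):        # Jacobi iteration on the interior
    p[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1]
                            + p[1:-1, 2:] + p[1:-1, :-2])
    p[1:-1, 0] = p[1:-1, 1]    # no-flux side walls
    p[1:-1, -1] = p[1:-1, -2]

dx = L / (nx - 1)
u = -(h**2 / (12 * mu)) * np.gradient(p, dx, axis=0)   # depth-averaged velocity
```

For this geometry the pressure drop is linear, so the computed velocity matches the analytic Poiseuille-gap value (h^2 / 12 mu)(p_in - p_out)/L, which is a convenient sanity check before applying the 2D model to a real device layout.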
Quantum entanglement is a phenomenon in which two physical systems are correlated in such a way that they appear to instantaneously affect one another, regardless of the distance between them. As commonly understood, Bell's Theorem famously demonstrates that any causal explanation of entanglement must discard either locality (the principle that nothing, including information, travels faster than light) or classical notions of realism (or both). Drawing on this concept, several legal scholars have metaphorically described 'entangled' legal concepts. For instance, if a state's highest court redefines the concept of 'foreseeability' in negligence law, this redefinition alters the concept of 'reasonable care' immediately in the eyes of the law. Godfrey (2024) is the first work to mathematically model entangled legal concepts, particularly in the context of legal interpretation. Here, we extend the quantification to the formulation and delineation of law (lawmaking) and the adjudication of law (judgment). In so doing, we connect legal entanglement to Sichelman's (2022) work on legal entropy, complexity, and the informational content of law. In addition to quantifying entanglement across various legal contexts, our approach provides broader insights. For example, it offers a more comprehensive analysis of the uses and limits of 'modularity' in law--specifically, the role legal boundaries (spatial or intangible) play in reducing information costs within legal systems. Moreover, we discuss how our model can improve theories of legal artificial intelligence. Finally, we explore the application of legal theory back to physics. If quantum physical entanglement operates analogously to legal entanglement, it requires discarding both locality and classical realism, though not in the manner commonly imagined.
We report on an R&D study to improve the photon detection efficiency of water Cherenkov detectors by doping ultra-pure water with 4-methylumbelliferone (4-MU), a wavelength-shifting additive. Cherenkov light yields from cosmic-ray muons were measured for various 4-MU concentrations and compared with those from pure water. At a concentration of 1 ppm, the detected light yield increased by approximately a factor of three. This enhancement can be attributed to wavelength shifting and improved photon collection efficiency. No noticeable degradation in optical transparency was observed across the tested concentrations of 0.5 and 1 ppm with different concentrations of ethanol. These results suggest that 4-MU is a promising additive for improving the performance of water Cherenkov detectors.
We report watt-level femtosecond pulses in the 1.75 $\mu$m region using a thulium-doped core, terbium-doped cladding fluoride (Tm:Tb:ZBLAN) fiber laser system. The seed pulse is generated through stimulated Raman scattering in a silica fiber pumped by an erbium-doped fiber laser. The soliton is subsequently amplified through a multi-stage Tm:Tb:ZBLAN amplifier. A tunable chirped fiber Bragg grating stretcher, matched with a Treacy compressor, compresses the pulse to 217 fs. Our system generates ~250 nJ of single-pulse energy, with a corresponding average power of ~1 W at a 4 MHz repetition rate. The laser system is suitable for multiphoton microscopy.
Soliton microcombs are evolving towards octave-spanning for $f$-$2f$ self-referencing and expanding applications in spectroscopy and timekeeping. As spectra broaden and pulses shorten, the Raman-induced soliton self-frequency shift (SSFS) becomes a principal limitation: it reduces pump-to-comb conversion efficiency, constrains achievable span, and can, in extreme cases, preclude stationary operation. We develop a complementary theory of SSFS in microresonators that remains valid when the soliton duration $\tau_s$ is shorter than the Raman response timescale. The theory predicts a reduced dependence of the SSFS on $\tau_s$, which also expands the soliton existence range. These predictions are validated by numerical simulations and by experiments on Si$_3$N$_4$ microresonators. Our results provide practical guidelines for engineering efficient and broadband soliton microcombs.
With the rapid development of nanophotonics and cavity quantum electrodynamics, there has been growing interest in how confined electromagnetic fields modify fundamental molecular processes such as electron transfer. In this paper, we revisit the problem of nonadiabatic electron transfer (ET) in confined electromagnetic fields studied in [J. Chem. Phys. 150, 174122 (2019)] and present a unified rate theory based on Fermi's golden rule (FGR). By employing a polaron-transformed Hamiltonian, we derive analytic expressions for the ET rate correlation functions that are valid across all temperature regimes and all cavity mode time scales. In the high-temperature limit, our formalism recovers the Marcus and Marcus-Jortner results, while in the low-temperature limit it reveals the emergence of the energy gap law. We further extend the theory to include cavity loss by using an effective Brownian oscillator spectral density, which enables closed-form expressions for the ET rate in lossy cavities. As applications, we demonstrate two key cavity-induced phenomena: (i) resonance effects, where the ET rate is strongly enhanced at certain cavity mode frequencies, and (ii) electron-transfer-induced photon emission, arising from the population of cavity photon Fock states during the ET process. These results establish a general framework for understanding how confined electromagnetic fields reshape charge transfer dynamics, and suggest novel opportunities for controlling and probing ET reactions in nanophotonic environments.
At steady laminar Y-junctions (Murray family), the observed branching-radius law follows from a single ratio extremum: entropy production per information cost (EPIC). Structure is priced by an effective bit energy $E_{b,\mathrm{eff}} = \zeta k_B T \ln 2$ (J per bit); normalizing viscous entropy production by this tariff defines an information-priced entropy flux $\Phi_b = \sigma_s / E_{b,\mathrm{eff}}$. Extremizing at fixed demands gives $Q \propto r^\alpha$ with $\alpha = (m+4)/2$ and the node rule $r_0^\alpha = r_1^\alpha + r_2^\alpha$, where $m$ encodes how the tariff scales with radius ($m=2$, volume-priced, gives $\alpha = 3$; $m=1$, surface-priced, gives $\alpha = 2.5$). Mixed surface/volume pricing implies a local $\alpha_{\mathrm{eff}}$ in the range 2.5-3 without changing the fluid physics and leads to a weighted Murray law for heterogeneous tariffs. The ratio extremum is equivalent to an additive functional via fractional programming (Dinkelbach) and reduces, for uniform tariffs, to the familiar near-equilibrium extremum of classical theory (minimum entropy production). The framework is falsifiable: measure how stabilized-bit counts and the overhead $\zeta$ scale with radius, and $\alpha$ must track $(m+4)/2$. EPIC recasts branching selection as maximizing entropy throughput per paid bit and provides a platform-agnostic lever to predict and tune morphology.
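The exponent and node rule above can be checked numerically. The following is a minimal sketch (illustrative only, not the paper's code); the function names are invented, while the symbols m, alpha, r0, r1, r2 follow the abstract:

```python
# A minimal numerical check of the EPIC exponent and node rule
# (illustrative only, not the paper's code; symbols follow the abstract).

def branching_exponent(m):
    """alpha = (m + 4) / 2, where m sets how the bit tariff scales with radius."""
    return (m + 4.0) / 2.0

def parent_radius(r1, r2, alpha):
    """Node rule r0^alpha = r1^alpha + r2^alpha, solved for r0."""
    return (r1 ** alpha + r2 ** alpha) ** (1.0 / alpha)

alpha_vol = branching_exponent(2.0)   # volume-priced tariff -> classical Murray alpha = 3
alpha_surf = branching_exponent(1.0)  # surface-priced tariff -> alpha = 2.5

# Symmetric bifurcation at alpha = 3 recovers the classical r0 = 2^(1/3) * r.
r0 = parent_radius(1.0, 1.0, alpha_vol)
```

With alpha interpolating between 2.5 and 3 for mixed tariffs, the same `parent_radius` rule yields the weighted Murray law mentioned above.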
In this study, global nonlinear electromagnetic gyrokinetic simulations are conducted to investigate turbulence in the internal transport barrier (ITB) region of an EAST tokamak discharge with weakly reversed magnetic shear. Linear simulations reveal two dominant ion temperature gradient (ITG) modes: a higher frequency mode at the $q=1$ surface, which dominates in the electrostatic limit, and a lower frequency mode near the $q_{\min}$ surface, which prevails under the experimental $\beta$ (the ratio of plasma pressure to magnetic pressure). Finite $\beta$ effects effectively suppress the higher frequency ITG mode: once $\beta_i$ on axis exceeds 0.5\%, this mode is no longer dominant, and the ITG mode near the $q_{\min}$ surface becomes the primary instability. Therefore, electromagnetic effects play a crucial role in stabilizing ITG modes and in shifting the most unstable mode between different radial positions. The linear growth rate of the unstable mode in the electrostatic limit is approximately 1.25 times higher than that of the dominant mode in the electromagnetic case. However, in the electromagnetic nonlinear regime, the thermal ion heat conductivity is reduced by at least a factor of 4. This reduction primarily results from nonlinear electromagnetic effects enhancing the shearing effect of zonal flows, thereby further suppressing microturbulence. Finally, energetic particles exert a slight stabilizing effect on ITG turbulence due to dilution and finite $\beta$ contributions. It is emphasized that the electromagnetic effect on ITG with weak magnetic shear should be included to accurately calculate the transport coefficients.
Global gyrokinetic simulations are performed for the first time to investigate cross-scale interactions between electromagnetic ion temperature gradient (ITG) turbulence and fishbone instability in tokamak plasmas. The investigation of the fluctuation response in the multiscale simulation including both instabilities indicates a strong impact of the fishbone on ITG turbulence. Detailed analysis reveals that fishbone-driven zonal radial electric fields at nonlinear saturation significantly suppress electromagnetic ITG turbulence, reducing ion thermal transport close to the neoclassical level. The simulation results agree well with experimental observations of turbulence suppression during fishbone bursts. These findings advance understanding of multiscale interactions that enhance thermal confinement in fusion plasmas.
A neural network model based on the Transformer architecture has been developed to predict the nonlinear evolution of optical pulses in an Er-doped fiber amplifier under conditions of limited experimental data. To address data scarcity, a two-stage training strategy is employed. In the first stage, the model is pretrained on a synthetic dataset generated through numerical simulations of the amplifier's nonlinear dynamics. In the second stage, the model is fine-tuned using a small set of experimental measurements. This approach enables accurate reproduction of the fine spectral structure of optical pulses observed in experiments across various nonlinear evolution regimes, including the development of modulational instability and the propagation of high-order solitons.
This paper presents an enhanced version of the subgroup method for resonance self-shielding treatment, termed the robust subgroup method, which integrates Robust Estimation (RE) with a Differential Evolution (DE) algorithm. The RE approach is employed to handle model misspecification and data contamination, while the DE algorithm serves as an optimization tool within the RE framework to obtain constrained solutions. Numerical validation against experimental benchmarks shows that the proposed method removes a systematic absorption bias in conventional subgroup fits that would otherwise depress reactivity. This bias appears only in benchmarks sensitive to U-238. Mechanistically, it reflects a threshold-like conditioning failure: strong self-shielding leverage dominates the loss and is magnified by dilution-induced multicollinearity. This adverse conditioning appears to be seeded by a narrow, sparse resonance structure at low energies in fertile even-even nuclides, thereby causing rapid self-shielding response saturation and weak Doppler broadening. By bounding influence and enforcing feasibility within an RE-DE framework, the inferred subgroup parameters track the underlying physics more faithfully, improving the predictive fidelity of subsequent transport simulations.
Accurately modeling seismic wave attenuation is critical for ground response analyses (GRAs), which aim to replicate local site effects in ground motions. However, theoretical transfer functions (TTFs) from GRAs often overestimate empirical transfer functions (ETFs) when the small-strain damping ratio ($D_{\text{min}}$) is set equal to laboratory measurements. Prior studies addressed this by inflating $D_{\text{min}}$ in one-dimensional (1D) GRAs to account for apparent damping mechanisms such as diffraction and mode conversions that cannot be captured in 1D. Although this approach improved fundamental-mode predictions, it often overdamped higher modes. This study explores more direct modeling of apparent damping using two-dimensional (2D) GRAs at four downhole array sites: Delaney Park (DPDA), I-15 (I15DA), Treasure Island (TIDA), and Garner Valley (GVDA). At each site, three numerical damping formulations, Full Rayleigh, Maxwell, and Rayleigh Mass, were implemented using both conventional $D_{\text{min}}$ and an inflated $D_{\text{min}}$ ($m \times D_{\text{min}}$) obtained from site-specific calibration. Results show that the appropriate $D_{\text{min}}$ multiplier ($m$) correlates with the site's velocity contrast. Using inflated $D_{\text{min}}$, Full Rayleigh and Maxwell damping systematically overdamped higher modes, with Maxwell damping also shifting modal peaks. In contrast, Rayleigh Mass damping consistently achieved the closest match to ETFs at three of the four sites while offering faster computational performance. These findings demonstrate that inflated $D_{\text{min}}$ can represent unmodeled attenuation in 2D GRAs, particularly at sites with low velocity contrast, and that frequency-dependent formulations such as Rayleigh Mass damping can more accurately predict site response than traditional frequency-independent approaches.
This study analyzes pass networks in football (soccer) using a stochastic model known as the Pólya urn. By focusing on preferential selection, it theoretically demonstrates that the time evolution of networks can be characterized by a single parameter. Building on this result, a data analysis method is proposed and applied to a large-scale public dataset of professional football matches. The statistical properties of the preferential-selection parameter are examined, demonstrating its correlation with pass accuracy and with mean pass difficulty. This method is applicable to various evolving networks.
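The preferential-selection mechanism can be made concrete with a minimal Pólya-urn-style simulation. This is an illustration only, not the paper's exact model: the linear weight `count + a`, the function name, and all parameter values are assumptions for the sketch.

```python
import random

def simulate_pass_network(n_players, n_passes, a, seed=0):
    """Minimal Polya-urn-style pass simulation (illustrative assumption,
    not the paper's exact model).

    Each pass is received by player j with probability proportional to
    (count_j + a), where count_j is the number of passes j has already
    received; small `a` means strong preferential (rich-get-richer)
    selection, while large `a` approaches uniform selection.
    """
    rng = random.Random(seed)
    counts = [0] * n_players
    for _ in range(n_passes):
        weights = [c + a for c in counts]
        total = sum(weights)
        x = rng.random() * total
        acc = 0.0
        for j, w in enumerate(weights):
            acc += w
            if x < acc:
                counts[j] += 1
                break
        else:
            counts[-1] += 1  # numerical guard against x rounding up to total

    return counts

counts = simulate_pass_network(n_players=11, n_passes=500, a=1.0)
```

Fitting a single parameter like `a` to observed pass sequences is the spirit of the data-analysis method described above: it compresses the time evolution of the network into one preferential-selection strength.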
The diffusion of ideas and language in society has conventionally been described by S-shaped models, such as the logistic curve. However, the role of sub-exponential growth (a slower-than-exponential pattern known in epidemiology) has been largely overlooked in broader social phenomena. Here, we present a piecewise power-law model to characterize complex growth curves with a few parameters. We systematically analyzed a large-scale dataset of approximately one billion Japanese blog articles linked to Wikipedia vocabulary, and observed consistent patterns in web search trend data (English, Spanish, and Japanese). Our analysis of the 2,965 selected items reveals that about 55% (1,625 items) had no abrupt jumps and were well captured by one or two segments. For single-segment curves, we found that (i) the mode of the shape parameter alpha was near 0.5, indicating prevalent sub-exponential growth; (ii) the ultimate diffusion scale is primarily determined by the growth rate R, with minor contributions from alpha or the duration T; and (iii) alpha showed a tendency to vary with the nature of the topic, being smaller for niche/local topics and larger for widely shared ones. Furthermore, a micro-behavioral model distinguishing outward contact with strangers from inward interaction within one's community suggests that alpha can be interpreted as an index of the preference for outward-oriented communication. These findings suggest that sub-exponential growth is a common pattern of social diffusion, and our model provides a practical framework for consistently describing, comparing, and interpreting complex and diverse growth curves.
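The sub-exponential regime can be illustrated with the standard generalized-growth equation dC/dt = R * C**alpha from the epidemiology literature; using this specific form is our assumption for the sketch, not the paper's exact piecewise model.

```python
import math

def generalized_growth(t, c0, r, alpha):
    """Cumulative diffusion under dC/dt = r * C**alpha (a standard
    generalized-growth form; an illustrative assumption, not the
    paper's exact piecewise model).

    alpha = 1 gives exponential growth; alpha < 1 gives the slower,
    sub-exponential growth discussed above. For alpha != 1 the closed
    form is C(t) = (c0**(1 - alpha) + (1 - alpha)*r*t)**(1/(1 - alpha)).
    """
    if alpha == 1.0:
        return c0 * math.exp(r * t)
    return (c0 ** (1.0 - alpha) + (1.0 - alpha) * r * t) ** (1.0 / (1.0 - alpha))

# Same rate r: sub-exponential (alpha = 0.5) stays far below exponential,
# which is the signature pattern reported for most diffusion curves.
sub = generalized_growth(t=50.0, c0=1.0, r=0.2, alpha=0.5)
exp_ = generalized_growth(t=50.0, c0=1.0, r=0.2, alpha=1.0)
```

For alpha = 0.5 the solution grows only quadratically in time, which is why the ultimate diffusion scale ends up dominated by the rate R rather than by alpha or the duration T.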
Predicting phenomena that mix few-photon quantum optics with strong field nonlinear optics is hindered by the use of separate theoretical formalisms for each regime. We close this gap with a unified effective field theory valid for frequencies lower than the material-dependent cutoff set by the band gap, plasma frequency, or similar scale. The action couples the electromagnetic gauge field to vector polarisation modes. An isotropic potential generates the optical susceptibilities, while a higher-dimension axion-like term captures magnetoelectric effects; quantisation on the Schwinger-Keldysh contour with doubled BRST ghosts preserves gauge symmetry in dissipative media. One-loop renormalisation-group equations reproduce the measured dispersion of the third-order susceptibility from terahertz to near-visible frequencies after matching a single datum per material. Real-time dynamics solved with a matrix-product-operator engine yield two to four percent agreement with published results for GaAs polariton cavities, epsilon-near-zero indium-tin-oxide films and superconducting quarton circuits. The current formulation is limited to these 1-D geometries and sub-cut-off frequencies; higher-dimensional or above-cut-off phenomena will require additional degrees of freedom or numerical methods.
We develop a deep reinforcement learning framework for controlling a bio-inspired jellyfish swimmer to navigate complex fluid environments with obstacles. While existing methods often rely on kinematic and geometric states, a key challenge remains in achieving efficient obstacle avoidance under strong fluid-structure interactions and near-wall effects. We augment the agent's state representation within a soft actor-critic algorithm to include the real-time forces and torque experienced by the swimmer, providing direct mechanical feedback from vortex-wall interactions. This augmented state space enables the swimmer to perceive and interpret wall proximity and orientation through distinct hydrodynamic force signatures. We analyze how these force and torque patterns, generated by walls at different positions, influence the swimmer's decision-making policy. Comparative experiments with a baseline model without force feedback demonstrate that the force-feedback model achieves higher navigation efficiency in two-dimensional obstacle-avoidance tasks. The results show that explicit force feedback facilitates earlier, smoother maneuvers and enables the exploitation of wall effects for efficient turning behaviors. With an application to autonomous cave mapping, this work underscores the critical role of direct mechanical feedback in fluid environments and presents a physics-aware machine learning framework for advancing robust underwater exploration systems.
The Global Natural Orbital Functional (GNOF) provides a straightforward approach to capture most electron correlation effects without needing perturbative corrections or limited active-space selection. In this work, we evaluate both the original GNOF and its modified variant GNOFm on a set of twelve 5- and 6-membered molecular rings, systems characterized primarily by dynamic correlation. This reference set is vital as it comprises essential substructures of more complex molecules. We report complete-basis-set limit correlation energies for GNOF, GNOFm, and the benchmark CCSD(T) method. Across the Dunning basis sets, both functionals deliver a balanced and accurate description of the molecular set, with GNOFm showing small but systematic improvements while preserving the overall robustness of the original formulation. These results confirm the reliability of the GNOF family and its ability to capture dynamic correlation effects.
A simplification of the VV10 van der Waals density functional [J. Chem. Phys. 133, 244103 (2010)] is made by an approximation of the integrand of the six-dimensional integral in terms of a few products of three-dimensional density-like distributions and potential-like functions of the interelectronic distance only, opening the way for its straightforward computation by fast multipole methods. An even faster computational scheme for molecular systems is implemented where the density-like distributions are fitted by linear combinations of usual atom-centered basis functions of Gaussian type and the six-dimensional integral is then computed analytically, at a fraction of the overall cost of a typical calculation. The simplicity of the new approximation is commensurate with that of the original VV10 functional, and the same level of accuracy is seen in tests on molecules.
Real-time and accurate monitoring of humidity and pH is of great significance in daily life and industrial production. Existing humidity and pH measurement methods suffer from limitations such as low sensitivity, signal crosstalk, complex system structures, and an inability to achieve real-time monitoring. In this work, the surface of a polarization-maintaining fiber (PMF) was functionalized with a composite humidity-sensitive polymer composed of polyvinyl alcohol (PVA) and carbon nanosheets (CNs). A humidity-sensitive film with a microporous structure was prepared on the PMF cladding through high-temperature rapid film formation and laser processing, enhancing humidity sensitivity and stability. To enable pH sensing, poly(allylamine hydrochloride) (PAH) and poly(acrylic acid) (PAA) were successively adsorbed onto the PMF surface via electrostatic self-assembly, forming a pH-sensitive nanofilm structure. By connecting a temperature-compensated PMF within the same Sagnac loop and combining it with a multi-wavelength matrix, simultaneous real-time monitoring of humidity, pH, and temperature was achieved, effectively solving the issue of temperature crosstalk and extending toward a universal optical fiber multi-parameter measurement platform.
Current interstitial techniques of tumor ablation face challenges that ultrasound technologies could meet. The ablation radius and directionality of the ultrasound beam could improve efficiency and precision. Here, a 9-gauge MR-compatible dual-mode ultrasound catheter prototype was experimentally evaluated for Ultrasound Image-guided High Intensity Focused Ultrasound (USgHIFU) conformal ablations. The prototype consisted of 64 piezocomposite linear array elements and was driven by an open research programmable dual-mode ultrasound platform. After verifying the US-image guidance capabilities of the prototype, the HIFU output performances (dynamic focusing and HIFU intensities) were quantitatively characterized, together with the associated 3D HIFU-induced thermal heating in tissue phantoms (using MR thermometry). Finally, the ability to robustly produce HIFU-induced thermal ablations in in-vitro liver was studied experimentally and compared to numerical modeling. Investigations of several HIFU dynamic focusing strategies allowed overcoming the challenges of miniaturizing the device: mono-focal focusing maximized deep energy deposition, while multi-focal strategies eliminated grating lobes. The linear-array design of the prototype made it possible to produce interstitial ultrasound images of tissue and tumor mimics in situ. Multi-focal pressure fields were generated without grating lobes, and transducer surface intensities reached up to $I_{\mathrm{sapa}} = 14$ W$\cdot$cm$^{-2}$. Seventeen elementary thermal ablations were performed in vitro. Rotation of the catheter proved the directionality of ablation, sparing non-targeted tissue. This experimental proof of concept demonstrates the feasibility of treating volumes comparable to those of primary solid tumors with a miniaturized USgHIFU catheter whose dimensions are close to those of tools traditionally used in interventional radiology, while offering new functionalities.
Achieving state-of-the-art optical data storage requires raising device capacity well above commercial standards. This requires media structured at a much smaller scale and enabling readout at a shorter wavelength. Current CDs, DVDs and Blu-rays are read with visible light, and are based on metallic reflection gratings and phase-change recording layers structured at the few-hundred-nm scale. Herein, we introduce 10-nm structured silicon as a promising UV-readable data storage platform. Recording on it harnesses the amorphous-to-crystalline phase-change of silicon, the two phases presenting well-contrasted UV optical properties. Furthermore, the phase-change contrast is strongly enhanced in the Vacuum UV thanks to the distinct interband plasmon resonances of the amorphous and crystalline nanostructures, which have an epsilon-near-zero and surface plasmonic character, respectively. Silicon nanogratings with a 10 nm width and a 20 nm period resonate near the wavelength of 120 nm, at which phase-change induces a 600% maximum optical transmittance contrast. This paves the way toward UV-readable data storage platforms with a 10 to 100 times increased data density, which could be implemented by harnessing the well-established silicon nanotechnology.
Ionization by a sequence of extreme ultraviolet pulses is investigated based on the rigorous numerical solution of the time-dependent Schrödinger equation, with the driving laser field treated exactly. This goes beyond the typically used first-order nondipole approximation and reveals the effects of radiation pressure to their full extent. Specifically, we observe comb structures in both the momentum and the energy distributions of photoelectrons. The comb peaks are shifted, however, depending on the emission angle of the electrons. While a similar effect is already observed in the first-order nondipole approximation, the discrepancy with our exact results becomes more pronounced as the laser field strength increases. We also observe an additional substructure of the comb peaks arising in the angle-integrated energy distributions of photoelectrons. Finally, as our numerical calculations account for the atomic potential in the entire interaction region, we observe a loss of coherence of the comb structures with an increasing number of laser pulses, which we attribute to rescattering.
Caribou is a versatile data acquisition (DAQ) system developed within several collaborative frameworks (CERN EP R&D, DRD3, AIDAinnova, and Tangerine) to support laboratory and test-beam characterization of novel silicon pixel detectors. It combines a custom Control and Readout (CaR) board with a Xilinx Zynq System-on-Chip (SoC) running project-wide shared firmware and software stacks. The system architecture emphasizes reusability, flexibility, and ease of integration. The CaR board provides essential interfaces such as programmable power supplies, voltage and current references, high-speed ADCs, and configurable I/O lines for detector control and readout. The SoC runs an embedded Linux distribution built with PetaLinux and integrates two main components: Peary, a C++ embedded DAQ application providing hardware abstraction, configuration management, logging, and multi-device control through Command Line (CLI) and Python interfaces; and Boreal, a common Caribou FPGA firmware framework offering reusable modules and automated build workflows for user-specific bit files. The next major milestone in Caribou's evolution is the transition to version 2.0, based on a Zynq UltraScale+ System-on-Module (SoM) architecture. This paper presents the recent progress and future prospects of the project and describes recent hardware, firmware, and software developments preparing the system for the upcoming CaR board v2.0.
Accurate thermal analysis is crucial for modern spacecraft, driving demand for reliable modeling tools. This research advances space thermal modeling by improving the simulation accuracy and efficiency of radiative heat transfer, the dominant mode of heat exchange in space. To this end, we incorporate diffuse reflectivity using the Gebhart method, which computes radiative exchange factors (REFs) from geometric view factors. The view factors, obtained via Monte Carlo ray tracing (MCRT), require post-processing to mitigate statistical errors. Critically, existing correction schemes cannot simultaneously enforce closure and reciprocity for open systems. This research addresses this gap by proposing two novel enforcement methods: (i) a least-squares optimization with non-negativity rectification (NNR) and small positive value avoidance (SPVA), and (ii) an iterative enforcement algorithm. To ensure consistency across different discretization levels, this work also introduces the multi-node surface model relations to formalize the connection between sub-face, face, and node representations of view factors and REFs. A simple case study demonstrates a substantial reduction in mean absolute error (MAE): the least-squares method achieves an 81% MAE reduction, while the iterative method offers the best balance of accuracy (56% MAE reduction) and computational efficiency. A second case study shows that including diffuse reflections decreases the steady-state temperature of a plate by $4^{\circ}C$, reinforcing that reflected radiation reduces net absorption. This work introduces and validates computationally efficient methods for integrating diffuse reflectivity into space thermal analyses and for consistently coupling multi-node surface radiative models. The results enable more accurate and robust thermal predictions for spacecraft systems.
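One way to realize the iterative-enforcement idea from the abstract above is to alternate a reciprocity projection with a row rescaling that restores the target (open-system) closure sums. The following is a hedged sketch of such a loop, not the paper's exact algorithm; the function names, the loop structure, and the small noisy test matrix are invented for illustration.

```python
# Illustrative sketch: alternately enforce reciprocity (A_i*F_ij = A_j*F_ji)
# and closure (sum_j F_ij = s_i, with s_i < 1 allowed for open systems) on a
# Monte-Carlo-estimated view factor matrix F. Not the paper's exact method.

def reciprocity_error(F, A):
    n = len(F)
    return max(abs(A[i] * F[i][j] - A[j] * F[j][i])
               for i in range(n) for j in range(n))

def enforce_view_factors(F, A, s, n_iter=200):
    n = len(F)
    for _ in range(n_iter):
        # Reciprocity step: average the two Monte Carlo estimates of the
        # exchanged quantity A_i * F_ij and redistribute it symmetrically.
        for i in range(n):
            for j in range(i + 1, n):
                flux = 0.5 * (A[i] * F[i][j] + A[j] * F[j][i])
                F[i][j] = flux / A[i]
                F[j][i] = flux / A[j]
        # Closure step: rescale each row to its target sum s_i.
        for i in range(n):
            row_sum = sum(F[i])
            if row_sum > 0.0:
                F[i] = [f * s[i] / row_sum for f in F[i]]
    return F

# A noisy 3x3 "raytraced" matrix that violates reciprocity (invented data).
F = [[0.00, 0.30, 0.20],
     [0.16, 0.00, 0.10],
     [0.21, 0.19, 0.00]]
A = [1.0, 2.0, 1.0]                  # face areas
s = [sum(row) for row in F]          # keep the raw row sums as closure targets
err_before = reciprocity_error(F, A)
F = enforce_view_factors(F, A, s)
err_after = reciprocity_error(F, A)
```

Because the closure step is applied last, row sums match their targets to machine precision, while the alternation drives the reciprocity residual down geometrically; the least-squares variant in the abstract instead solves for both constraints simultaneously.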
Parity mixing in photoionization, i.e., when emitted electrons have different parities but the same energy, causes interference observable only in angle-resolved measurements. The interference typically manifests as a symmetry violation in the photoelectron angular distributions. The traditional HHG-based RABBITT scheme, with high-order harmonics separated by twice the seed field energy, precludes parity mixing. In contrast, a free-electron laser provides the possibility to generate even harmonics. Using triple the fundamental frequency as a seed, one obtains a comb of alternating even and odd harmonics, separated by three times the initial frequency [Nature 578, 386-391 (2020)] (2-SB RABBITT). In this setup, there are two sidebands between the main photoelectron lines, versus one in the traditional scheme. In this paper, we examine the general properties of the two-sideband scheme and analyze the symmetry breakdown of photoelectron angular distributions for various polarization geometries of the incident pulse. We find a crucial difference in symmetries between 2-SB RABBITT and other photoionization schemes with parity mixing. Illustrative calculations are carried out for neon with pulse parameters typical of modern facilities. The possibility of reconstructing the temporal profile of the pulse from the angle-resolved measurements is discussed.
Comparing abstract concepts (such as electric circuits) with familiar ideas (plumbing systems) through analogies is central to the practice and communication of physics. Contemporary research suggests that self-generated analogies facilitate students' learning better than taught ones. "Spontaneous" and "self-generated" analogies represent the two ways through which students construct personalized analogies. However, facilitating them, particularly in large-enrollment courses, remains a challenge, and recent developments in generative artificial intelligence (AI) promise potential to address this issue. In this qualitative study, we analyze around 800 student responses to explore the extent to which students spontaneously leverage analogies while explaining the Morse potential curve in a language suitable for second graders, and self-generate analogies in their preferred everyday contexts. We also compare the student-generated spontaneous analogies with AI-generated ones prompted by students. Lastly, we explore the themes associated with students' perceived ease and difficulty in generating analogies across both cases. Results highlight that, unlike AI responses, student-generated spontaneous explanations seldom employ analogies. However, when explicitly asked to explain the behavior of the curve in terms of their everyday contexts, students employ diverse analogical contexts. A combination of disciplinary knowledge, agency to generate customized explanations, and personal attributes tends to influence students' perceived ease in generating explanations across the two cases. Implications of these results for the potential of AI to facilitate students' personalized analogical reasoning, and the role of analogies in making students notice gaps in their understanding, are discussed.
We present an analytical theory of second harmonic generation (SHG) in hybrid structures combining a nonlinear 2D crystal with a dielectric metasurface waveguide. The theory describes the excitation spectrum and enhancement of SHG at both leaky mode and quasi-bound state in the continuum (quasi-BIC) resonances in terms of the material parameters. For low-loss systems, the SHG efficiency at leaky resonances is determined by their radiative broadening, governed by the relevant Fourier harmonics of the metasurface polarizability, whereas the SHG enhancement at quasi-BIC resonances is ultimately limited by inhomogeneous broadening and absorption in the system. We also describe the emergence and polarization properties of second harmonic diffracted beams. These beams appear even if both the 2D crystal and the meta-waveguide are centrosymmetric owing to the nonlocal mechanism of SHG. The developed framework provides a systematic theoretical basis for optimizing the resonant nonlinear frequency conversion in hybrid 2D-material-metasurface platforms and identifies the fundamental limitations of the SHG efficiency.
Cathodoluminescence (CL) enables optical-frequency analysis of samples with nanometer resolution, originating from the interaction of a focused electron beam with radiative electronic states, or directly with the optical modes of the sample. Here we decompose the various mechanisms underlying CL generation and emission from an archetypal spherical resonator using its spectrally, angularly, and spatially resolved features. We investigate radiation of optical whispering-gallery modes in regimes of coherent and incoherent luminescence. The use of different experimental regimes allows us to disentangle the different contributions to the CL in spheres, namely, photon absorption, generation and radiative leakage, and conclude that the photon generation occurs precisely on the sphere's surface. In addition, the spheres serve as high-NA collimating lenses for CL, resulting in mode quality unprecedented for CL in free space. We believe that such collimated and directed CL in free space will enhance existing quantum measurements of CL and facilitate new ones, such as high-rate electron-photon entangled pairs, CL from quantum emitters, and homodyne analysis of CL.
We present a Monte Carlo method for simulating the inception of electric discharges in gases. The input consists of an unstructured grid containing the electrostatic field. The output of the model is the estimated probability of discharge inception per initial electron position, as well as the estimated time lag between the appearance of the initial electron and discharge inception. To obtain these quantities, electron avalanches are simulated for initial electron positions throughout the whole domain, also including regions below the critical electric field. Avalanches are assumed to propagate along field lines, and they can produce additional avalanches due to photon and ion feedback. If the number of avalanches keeps increasing over time, we assume that an electric discharge will eventually form. A statistical distribution for the electron avalanche size is used, which is also valid for gases with strong electron attachment. We compare this distribution against the results of particle simulations. Furthermore, we demonstrate examples of inception simulations in 2D Cartesian, 2D axisymmetric and 3D electrode geometries.
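The inception criterion described above — a discharge forms if the avalanche population keeps growing — can be illustrated with a toy Galton-Watson branching process. This is a deliberate simplification of the paper's field-line avalanche model: the Poisson offspring mean `mu` below is an assumed stand-in for the combined photon and ion feedback.

```python
import numpy as np

def inception_probability(mu, trials=10000, max_pop=200, max_gen=60, seed=0):
    """Estimate the probability that an avalanche population keeps growing.

    Each avalanche spawns Poisson(mu) secondary avalanches via feedback.
    If the population exceeds max_pop we count the trial as inception;
    if it dies out we count it as no discharge.
    """
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        pop = 1
        for _ in range(max_gen):
            if pop == 0 or pop >= max_pop:
                break
            pop = rng.poisson(mu * pop)  # sum of pop Poisson(mu) offspring
        hits += pop >= max_pop
    return hits / trials

# For Poisson offspring, the extinction probability q solves q = exp(mu*(q-1)),
# so the inception probability is 1 - q.
mu = 1.5
q = 0.5
for _ in range(200):
    q = np.exp(mu * (q - 1.0))
print(inception_probability(mu), 1.0 - q)  # MC estimate vs. analytic value
```

The fixed-point iteration gives the analytic survival probability of the branching process, which the capped Monte Carlo estimate reproduces to within statistical error.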
We present the physical design and systematic optimization of a high-performance storage ring tailored for the generation of high-power coherent radiation, with particular emphasis on the extreme ultraviolet (EUV) regime. The proposed ring adopts a Double Bend Achromat (DBA) lattice configuration and integrates 12 superconducting wigglers to significantly enhance radiation damping and minimize the natural emittance. A bypass line is also adopted to generate high-power coherent radiation. Comprehensive linear and nonlinear beam dynamics analyses have been conducted to ensure beam stability and robustness across the operational parameter space. The optimized design achieves a natural emittance of approximately 0.8 nm and a longitudinal damping time of around 1.4 ms, enabling the efficient buildup of coherent radiation. Three-dimensional numerical simulations, incorporating the previously proposed angular dispersion-induced microbunching (ADM) mechanism, further confirm the system's capability to generate high-power EUV coherent radiation, with output powers reaching the order of several hundred watts. These results underscore the strong potential of the proposed design for applications in coherent photon science and EUV lithography.
Homodyne Quadrature Interferometers (HoQIs) are an interferometric displacement-sensing scheme proven to have excellent noise performance, making them a strong candidate for sensing and control schemes in gravitational wave detector seismic isolation. Like many interferometric schemes, HoQIs are prone to nonlinear effects when measuring displacements. These nonlinearities, if left unsuppressed, would substantially limit the use cases of HoQIs. This paper first shows a means of measuring and quantifying nonlinearities using a working HoQI and a mechanical resonator. We then demonstrate a method for real-time correction of these nonlinearities and several approaches for accurately calibrating the correction technique. By correcting in real time, we remove one of the biggest obstacles to including HoQIs in upgrades to future gravitational wave detectors. Finally, we discuss how to post-correct data from HoQIs, further suppressing the nonlinearity-induced errors and broadening the appeal of such sensors to other applications where measurement data can be reconstructed after the fact. We demonstrate all of this on a working HoQI system and show the measured suppression of nonlinear effects from each of these methods. Our work makes HoQIs a more broadly applicable tool for displacement sensing.
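The paper's specific correction and calibration schemes are not reproduced here, but a widely used approach to the same class of quadrature nonlinearity is Heydemann-style ellipse fitting: offsets, gain imbalance, and quadrature phase error turn the ideal Lissajous circle into an ellipse, and fitting that ellipse lets one map the signals back onto a circle. A minimal noiseless sketch, with all signal parameters hypothetical:

```python
import numpy as np

# Synthetic quadrature signals with offsets, gain imbalance and a
# quadrature error delta -- the classic sources of periodic nonlinearity.
phi = np.linspace(0.0, 4 * np.pi, 2000, endpoint=False)   # true phase
a, b, delta, x0, y0 = 1.0, 0.8, 0.2, 0.1, -0.05           # assumed distortions
x = x0 + a * np.cos(phi)
y = y0 + b * np.sin(phi + delta)

# Fit the general conic A x^2 + B xy + C y^2 + D x + E y = 1 by least squares.
design = np.column_stack([x**2, x * y, y**2, x, y])
A, B, C, D, E = np.linalg.lstsq(design, np.ones_like(x), rcond=None)[0]
F = -1.0

# Ellipse centre, then a whitening transform that maps the ellipse to a circle.
den = B**2 - 4 * A * C
xc = (2 * C * D - B * E) / den
yc = (2 * A * E - B * D) / den
k = A * xc**2 + B * xc * yc + C * yc**2 - F
Q = np.array([[A, B / 2], [B / 2, C]]) / k
W = np.linalg.cholesky(Q).T              # W.T @ W == Q
z = W @ np.vstack([x - xc, y - yc])      # points now lie on the unit circle

phi_corr = np.unwrap(np.arctan2(z[1], z[0]))
# The corrected phase is linear in the true phase (up to an offset),
# i.e. the periodic nonlinearity has been removed.
coeffs = np.polyfit(phi, phi_corr, 1)
residual = phi_corr - np.polyval(coeffs, phi)
print(np.max(np.abs(residual)))
```

With noisy data the same conic fit acts as a least-squares calibration, which is the spirit of the calibration approaches the paper compares.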
3D-printed materials are used in many different industries (automotive, aviation, medicine, etc.). Most of these 3D-printed materials are based on ceramics or polymers whose mechanical properties vary with frequency. For numerical modeling, it is crucial to characterize this frequency dependence accurately to enable realistic finite-element simulations. At the same time, the damping behavior plays a key role in product development, since it governs a component's response at resonance and thus impacts both performance and longevity. In current research, inverse material characterization methods are becoming increasingly popular. However, their practical validation and applicability to real measurement data have not yet been widely discussed. In this work, we demonstrate the identification of two different materials, POM and additively manufactured sintered ceramics, and validate the results with experimental data from a well-established measurement technique (dynamic mechanical analysis). The material identification process considers state-of-the-art reduced-order modeling and constrained particle swarm optimization, which are used to fit the frequency response functions of point measurements obtained by a laser Doppler vibrometer. This work shows the quality of the method in identifying the parameters defining the viscoelastic fractional derivative model, including their uncertainty. It also illustrates the applicability of this identification method in the presence of practical difficulties that come with experimental data, such as boundary conditions and noise.
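The fractional derivative viscoelastic model is not fully specified in the abstract; a common four-parameter form is the fractional Zener model, whose complex modulus is E*(ω) = (E0 + E∞(iωτ)^α) / (1 + (iωτ)^α). A sketch with hypothetical parameter values (not the paper's identified values):

```python
import numpy as np

def fractional_zener_modulus(omega, E0, Einf, tau, alpha):
    """Complex modulus of the fractional Zener model.

    E0    : relaxed (low-frequency) modulus
    Einf  : unrelaxed (high-frequency) modulus
    tau   : relaxation time
    alpha : fractional derivative order (0 < alpha <= 1)
    """
    s = (1j * omega * tau) ** alpha
    return (E0 + Einf * s) / (1.0 + s)

# Hypothetical values loosely in the range of a stiff polymer such as POM.
omega = np.logspace(-2, 6, 200)                  # angular frequency [rad/s]
E = fractional_zener_modulus(omega, E0=2.5e9, Einf=5.0e9, tau=1e-3, alpha=0.4)
storage, loss = E.real, E.imag                   # storage and loss moduli [Pa]
loss_factor = loss / storage                     # tan(delta), the damping measure
print(storage[0] / 1e9, storage[-1] / 1e9, loss_factor.max())
```

The fractional order α controls how broadly the damping peak is smeared over frequency, which is exactly the kind of parameter the paper's inverse identification recovers together with its uncertainty.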
We demonstrate dark-field x-ray microtomography in a compact, laboratory-based system capable of resolving attenuation, phase, and anisotropic scattering signals with micrometer-scale resolution across centimetre-scale samples. The method is based on two-directional beam tracking (2DBT), which requires only a single optical element and is compatible with standard x-ray sources and detectors. We validate the system's capabilities through imaging of a custom-built phantom, a fibre-reinforced composite and ex-vivo biological tissues, including a bovine intervertebral disc, a rat heart, and a porcine meniscus. The results show that dark-field tomography provides complementary information to attenuation as well as to phase tomography, by revealing sub-resolution features such as fibre orientation and microstructural heterogeneity at length scales that are well below the voxel size. A key element of our system is its sensitivity to scattering along two orthogonal directions in the image plane, enabling the measurement of scattering anisotropy with a single exposure. Besides being simple and robust, our approach is sensitive and precise. These findings demonstrate the potential of 2DBT for non-destructive and three-dimensional structural characterisation of samples and materials in engineering, materials science and biomedical applications.
Contemporary schemes for waveform-resolved characterization are constrained by setup-specific requirements, which severely limit their adaptability and prevent the establishment of standard procedures for routine in-line diagnostics. This work reports a comprehensive experimental demonstration that relative yield measurements from a broad variety of media and nonlinear observables, combined with our family of open-source reconstruction algorithms (CRIME and lazyCRIME), allow for robust waveform retrieval with attosecond accuracy on a standard workstation in just minutes. We have further adapted this framework to multiple configurations -- including non-invasive, simultaneous waveform characterization during an attosecond transient absorption spectroscopy (ATAS) experiment -- showcasing the low-cost and non-intrusive nature of the new pulse characterization approach. Together, this work establishes an easy-to-implement universal characterization scheme for in-line diagnostics of ultrashort pulses that is readily accessible to the broader ultrafast science community.
As a critical component of power supply systems, low-voltage distribution networks directly affect grid stability and user power supply reliability, yet they face significant threats from lightning-induced faults. Transient simulations are more economical and adaptable than experiments for investigating lightning-induced faults in low-voltage distribution networks. A hybrid Variable Time Step (VTS)-Partial Element Equivalent Circuit (PEEC) method, validated in a previous study, is used for Lightning-induced Electromagnetic Pulse (LEMP) simulation and fault analysis. The lightning-induced faults in extended unequal-length double-circuit low-voltage distribution networks are analyzed in this paper. The impact of lightning stroke location on overvoltage and fault risk is the primary focus of this study. Key findings indicate that, for ground strokes in front of the center of one double circuit, similar three-phase negative and bipolar oscillatory waveforms that are linked to fault initiation emerge. Closer strokes promote bipolar waveforms with a negative main peak, as well as higher overvoltages and fault risk. These results provide essential insights for understanding lightning-induced fault mechanisms, thereby laying a foundation for formulating more targeted and effective lightning protection measures.
This study presents a method for deterministic Er3+ doping of x-cut thin-film lithium niobate (TFLN) using focused ion beam (FIB) implantation with sub-100 nm spatial precision, enabling seamless integration of active rare-earth ions into this technologically relevant platform for integrated nanophotonics. Photoluminescence (PL) measurements from implanted regions reveal Stark-split 4f-4f transitions consistent with bulk Er-doped lithium niobate, indicating similar lattice occupation. Temperature-dependent PL measurements from 300 K to 5 K exhibit conventional behaviour down to approximately 50 K, followed by a marked decrease in the emission intensity and lifetime. This anomaly is attributed to a suppression of the pyroelectric response in LiNbO3 at low temperatures, which affects local electric fields and, consequently, Er3+ emission. The sensitivity of the PL response to the modulation frequency and polarization of the 980 nm excitation light is also consistent with possible mechanisms linking thermal effects and internal fields arising in the thin film. The results demonstrate a method for targeted doping of Er3+ ions into the most widely used cut of TFLN for integrated photonic devices and provide further important considerations for their exploitation in cryogenic quantum devices.
This paper briefly presents an order statistic approach to the time distribution of the first detected event after a primary avalanche breakdown from a mixture of correlated and dark counting processes. The well-known order statistic method, commonly used to describe the time resolution of scintillation detectors, is applied to the arrival times of correlated events. The established model of crosstalk as a branching Poisson process is extended to afterpulsing, and correlated events are considered starting from their seeds -- free (de-trapped or diffused) charge carriers capable of triggering secondary avalanche breakdowns. The proposed approach enables the extraction of timing information for delayed crosstalk and afterpulsing events mixed with dark counts and predicts that the distribution of the first arrival time narrows as the number of seeds increases, corresponding to a higher probability of correlated events.
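The narrowing of the first-arrival-time distribution with seed number can be reproduced with a minimal order-statistics toy model: if each of n seeds independently triggers a secondary breakdown after an exponentially distributed delay, the first detected event is the minimum of the n arrival times. The exponential delay and its rate are illustrative assumptions here, not the paper's full mixture of correlated and dark counting processes.

```python
import numpy as np

def first_arrival_times(n_seeds, rate=1.0, trials=100000, seed=0):
    """Sample the first (minimum) arrival time among n_seeds i.i.d.
    exponential delays.  For Exp(rate) delays the minimum is exactly
    Exp(n_seeds * rate), so both mean and spread shrink as 1/n_seeds."""
    rng = np.random.default_rng(seed)
    delays = rng.exponential(1.0 / rate, size=(trials, n_seeds))
    return delays.min(axis=1)

for n in (1, 4, 16):
    t = first_arrival_times(n)
    print(n, t.mean(), t.std())   # both shrink roughly as 1/n
```

This is the basic mechanism behind the paper's prediction that the first-arrival-time distribution narrows as the number of seeds (and hence the correlated-event probability) increases.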
Optical spectrometers are indispensable tools across various fields, from chemical and biological sensing to astronomical observations and quantum technologies. However, the integration of spectrometers onto photonic chips has been hindered by low spectral resolution or by large device footprints with complex multi-channel operation. Here, we introduce a novel chip-integrated spectrometer that leverages acoustically stimulated Brillouin scattering in a hybrid photonic-phononic chip. The Brillouin interaction provides a dynamic reflection grating with a high reflectivity up to 50% and a fast switching time on the microsecond scale, achieving an unprecedented spectral resolution of 0.56 nm over a 110 nm bandwidth using just a single 1 mm-long straight waveguide. This remarkable performance approaches the fundamental limit of resolution for a given device size, validating the potential of the hybrid photonic-phononic device for efficient and dynamically-reconfigurable spectral analysis, and thus opens up new avenues for advanced optical signal processing and sensing applications.
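The claim that the resolution approaches the fundamental limit for the device size can be checked with the textbook estimate δλ ≈ λ²/(2 n_g L), where n_g is the group index and L the device length. The operating wavelength and group index below are assumed values for illustration, not taken from the paper:

```python
# Fundamental spectrometer resolution limit for a 1 mm-long waveguide.
lam = 1550e-9   # operating wavelength [m] (assumed, telecom band)
n_g = 2.2       # group index [-] (assumed value for the hybrid waveguide)
L = 1e-3        # waveguide length [m] (from the abstract)

dlam = lam**2 / (2 * n_g * L)   # smallest resolvable wavelength step
print(dlam * 1e9, "nm")         # sub-nanometer, same order as the reported 0.56 nm
```

Under these assumptions the limit comes out around half a nanometer, consistent with the reported 0.56 nm resolution being near-fundamental for a 1 mm device.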
Quaternions provide a unified algebraic and geometric framework for representing three-dimensional rotations without the singularities that afflict Euler-angle parametrisations. This article develops a pedagogical and conceptual analysis of the \emph{Gimbal lock} phenomenon and demonstrates, step by step, how quaternion algebra resolves it. Beginning with the limitations of Euler representations, the work introduces the quaternionic rotation operator $v' = q\,v\,q^{*}$, derives the Rodrigues formula, and establishes the continuous, singularity-free mapping between unit quaternions and the rotation group $SO(3)$. The approach combines historical motivation, formal derivation, and illustrative examples designed for advanced undergraduate and graduate students. As an extension, Appendix~A presents the geometric and topological interpretations of quaternions, including their relation to the groups $\mathbb{Q}_8$ and $SU(2)$, and the Dirac belt trick, offering a visual analogy that reinforces the connection between algebra and spatial rotation. Overall, this work highlights the educational value of quaternions as a coherent and elegant framework for understanding rotational dynamics in physics.
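The rotation operator v' = q v q* discussed in the article can be sketched in a few lines; this is a minimal illustration of the quaternionic rotation, not the article's own code:

```python
import numpy as np

def q_mul(p, q):
    """Hamilton product of two quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(v, axis, angle):
    """Rotate vector v by `angle` about unit `axis` via v' = q v q*."""
    axis = np.asarray(axis, dtype=float) / np.linalg.norm(axis)
    q = np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])
    q_conj = q * np.array([1, -1, -1, -1])
    v_quat = np.concatenate([[0.0], v])      # embed v as a pure quaternion
    return q_mul(q_mul(q, v_quat), q_conj)[1:]

# 90 degrees about z sends the x-axis to the y-axis -- with no intermediate
# Euler-angle decomposition, hence no gimbal lock for any axis choice.
print(rotate(np.array([1.0, 0.0, 0.0]), [0, 0, 1], np.pi / 2))
```

Because the axis-angle pair maps continuously to a unit quaternion, this parametrisation has none of the singular configurations that Euler angles exhibit.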
Efficient identification of promising drug candidates for nanomaterial-based delivery systems is essential for advancing next-generation therapeutics. In this work, we present a synergistic framework combining density functional theory (DFT) and machine learning (ML) to explore the adsorption behavior and electronic interactions of drugs on a novel 2D graphene allotrope, termed Graphsene (GrS). Graphsene, characterized by its porous ring topology and large surface area, offers an excellent platform for efficient adsorption and strong electronic coupling with drug molecules. A dataset comprising 67 drugs adsorbed on various 2D substrates was employed to train the ML model, which was subsequently applied to predict suitable drug candidates for GrS based on molecular size and adsorption energy criteria (database link provided in a later section). The ML model exhibited robust predictive accuracy, achieving a mean absolute error of 0.075 eV upon DFT validation, though its sensitivity to initialization highlighted the need for larger and more diverse datasets. DFT-based analyses, including adsorption energetics, projected density of states (PDOS), and Bader charge calculations, revealed pronounced charge transfer and electronic coupling between the drug molecules and the GrS surface, elucidating the fundamental nature of drug-substrate interactions. The study reveals that the integrated DFT-ML strategy offers a rapid, cost-efficient approach for screening and understanding drug-nanomaterial interactions, paving the way for data-driven design of advanced nanomaterial-enabled drug delivery systems.
EIRENE [1] is a Monte Carlo neutral transport solver heavily used in the fusion community. EIRENE does not implement domain decomposition, making it impossible to use for simulations where the grid data does not fit on one compute node (see e.g. [2]). This paper presents a domain-decomposed Monte Carlo (DDMC) algorithm implemented in a new open source Monte Carlo code, Eiron. Two parallel algorithms currently used in EIRENE are also implemented in Eiron, and the three algorithms are compared by running strong scaling tests, with DDMC performing better than the other two algorithms in nearly all cases. On the supercomputer Mahti [3], DDMC strong scaling is superlinear for grids that do not fit into an L3 cache slice (4 MiB). The DDMC algorithm is also scaled up to 16384 cores in weak scaling tests, with a weak scaling efficiency of 45% in a high-collisional (heavier compute load) case, and 26% in a low-collisional (lighter compute load) case. We conclude that implementing this domain decomposition algorithm in EIRENE would improve performance and enable simulations that are currently impossible due to memory constraints.
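The scaling figures quoted above follow the standard definitions: strong-scaling efficiency T(1)/(N·T(N)) at fixed total work, and weak-scaling efficiency T(1)/T(N) at fixed work per core. A small helper (the timings below are made-up placeholders, not Eiron measurements):

```python
def strong_scaling_efficiency(t1, tn, n):
    """Fixed total problem size: ideal speedup is n, so efficiency is
    t1 / (n * tn).  Values above 1.0 indicate superlinear scaling."""
    return t1 / (n * tn)

def weak_scaling_efficiency(t1, tn):
    """Fixed work per core: ideal runtime is constant, efficiency t1 / tn."""
    return t1 / tn

# Hypothetical timings: superlinear strong scaling (efficiency > 1) can
# occur when the per-core working set starts fitting into cache, as
# reported for grids smaller than an L3 cache slice.
print(strong_scaling_efficiency(t1=100.0, tn=0.70, n=128))
print(weak_scaling_efficiency(t1=10.0, tn=22.2))
```

The cache effect is why superlinear strong scaling is plausible here: dividing the grid across more nodes eventually makes each subdomain cache-resident, so per-core throughput rises faster than the core count.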
The analysis of the Reynolds Stress Transport Equation (RSTE) provides fundamental physical insights that are essential for the development and validation of advanced turbulence models. However, a comprehensive and validated tool for computing the complete RSTE budget is absent in the widely-used open-source Computational Fluid Dynamics (CFD) framework, OpenFOAM. This work addresses this gap by presenting the implementation and a posteriori validation of a function object library for calculating all terms of the resolved RSTE budget in Large-Eddy Simulations (LES). The library is applied to simulate two canonical wall-bounded turbulent flows: a channel flow and a pipe flow, both at a friction Reynolds number of Re$_{\tau}=180$. The implementation is validated through a mesh refinement study where the results from the LES simulations are systematically compared against high-fidelity Direct Numerical Simulation (DNS) data. The computed budget terms are observed to converge systematically towards the DNS reference data. This validation demonstrates that the implemented library accurately captures the intricate balance of all budget terms. This contribution provides the open-source CFD community with a powerful utility for detailed turbulence analysis, thereby facilitating deeper physical understanding and accelerating the development of next-generation turbulence models.
This article describes the design and construction of a portable, compact, and cost-effective microspectrophotometer (MSP) that operates in the range of 200-800 nm. This microscope spectrophotometer records high-resolution absorption and emission spectra in situ. The dual-head design of this MSP enables simultaneous real-time imaging and spectral recording of heterogeneous samples with high selectivity and micrometer spatial resolution. Our compact, portable MSP design reduces construction costs by more than 20 times compared to commercial benchtop alternatives, primarily due to its innovative illumination system and microscope objective design. The performance of the UV-vis-NIR MSP was confirmed by comparing the absorption and fluorescence spectra of an aqueous solution of Ru(bpy) obtained with our system to those measured by commercial spectroscopic systems. The high accuracy and reliability of our system in measuring absorbance and fluorescence were confirmed by R-squared values of 0.998 and 0.990, respectively, from colorimetric and fluorometric tests. The MSP was further used to record absorption and fluorescence spectra from a variety of samples, including dyes and protein crystals, in both the solution and solid state, as well as individual living cells. This compact instrument is ideal for rapid, in situ spectroscopic measurements and is expected to find on-site applications across various fields, such as environmental monitoring, biological research, forensic analysis, and materials characterization.
The classical one-component plasma (OCP) bounded by a spherical surface reflecting ions (BOCP) is studied using molecular dynamics (MD). Simulations performed for a series of sufficiently large BOCPs make it possible to establish the size dependencies of the investigated quantities and extrapolate them to the thermodynamic limit. In particular, the total electrostatic energy per ion is estimated in the limit of an infinite BOCP over a wide range of the Coulomb coupling parameter $\Gamma$ from 0.03 to 1000 with a relative error of the order of 0.1%. The calculated energies are about 0.5% lower than the modern Monte Carlo (MC) simulation data obtained by different authors at $\Gamma<30$ and almost coincide with the MC results at $\Gamma>175$. We introduce two more converging characteristic energies, the excess interatomic electrostatic energy and the excess ion-background electrostatic energy, which enable us to calculate the ionic compressibility factor inaccessible in conventional MC and MD simulations of the OCP with periodic boundary conditions. The derived wide-range ionic equation of state can be recommended for testing OCP simulations with various effective interaction potentials. Based on this equation, we propose an improved cutoff radius for the interionic forces implemented in LAMMPS and perform MD simulations of the OCP to demonstrate that the location of the metastable region of the fluid-solid phase transition depends sensitively on this radius.
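The extrapolation of per-ion quantities to the thermodynamic limit can be sketched with a finite-size fit. Here we assume a leading surface-to-volume correction of order N^(-1/3), a typical ansatz for a spherically bounded system; the synthetic data and coefficients are illustrative only, not the paper's values:

```python
import numpy as np

# Synthetic per-ion energies with an assumed E(N) = E_inf + c * N**(-1/3)
# finite-size correction, mimicking a series of increasingly large BOCPs.
E_inf_true, c_true = -0.8963, 0.35
N = np.array([1e3, 4e3, 1.6e4, 6.4e4, 2.56e5])
E = E_inf_true + c_true * N ** (-1.0 / 3.0)

# Linear fit of E against N^(-1/3); the intercept is the bulk-limit energy.
slope, intercept = np.polyfit(N ** (-1.0 / 3.0), E, 1)
print(intercept)   # recovers E_inf
```

In practice the fit would include statistical error bars on each E(N), and the quality of the linear trend in N^(-1/3) justifies the chosen form of the size dependence.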
We introduce SeismoStats, a Python package that enables essential statistical seismology analyses, with a focus on well-established methods. The package provides user-friendly tools to download and manipulate earthquake catalogs, plotting functionalities to visualize them, and means to perform analyses such as estimating the a- and b-values of the Gutenberg-Richter law or the magnitude of completeness of any earthquake catalog. This is the first well-tested, well-documented, and openly accessible Python package with all these features. It is intended to serve as the nucleus of a long-term community effort, continually expanding in functionality through shared contributions. We invite seismologists and developers to contribute ideas and code to support and shape its future development.
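As an example of the kind of analysis the package targets, the Gutenberg-Richter b-value can be estimated with the classical Aki maximum-likelihood formula. We implement the estimator directly here rather than through the SeismoStats API, whose exact function names we do not assume:

```python
import numpy as np

def aki_b_value(magnitudes, mc, dm=0.0):
    """Aki (1965) maximum-likelihood b-value for magnitudes >= mc.

    dm is the magnitude binning width; the dm/2 term corrects the bias
    introduced by binned catalogs (dm=0 for continuous magnitudes).
    """
    m = np.asarray(magnitudes)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

# Synthetic Gutenberg-Richter catalog: N(>=M) ~ 10**(-b*M) means that
# M - mc is exponentially distributed with rate b * ln(10).
rng = np.random.default_rng(1)
b_true, mc = 1.0, 2.0
mags = mc + rng.exponential(1.0 / (b_true * np.log(10)), size=20000)
print(aki_b_value(mags, mc))   # close to b_true = 1.0
```

A reliable magnitude of completeness mc is a prerequisite for this estimate, which is why the package bundles completeness estimation alongside the b-value tools.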
Versatile, ultracompact, easy-to-handle, high-sensitivity sensors are compelling tools for pivotal in situ applications, such as medical diagnostics, security and safety assessments, and environmental control. In this work, we combine photoacoustic spectroscopy and feedback interferometry, proposing a novel trace-gas sensor equipped with a self-mixing readout. This scheme demonstrates a readout sensitivity comparable to that of bulkier state-of-the-art balanced Michelson-interferometric schemes, achieving the same spectroscopic performance in terms of signal-to-noise ratio (SNR) and minimum detection limit (MDL). At the same time, the self-mixing readout benefits from a reduced size and a lower baseline, paving the way for future system downsizing and integration while offering a higher detectability for lower gas concentrations. Moreover, the intrinsic wavelength independence of both the self-mixing and photoacoustic techniques makes the sensor applicable and tailorable to any desired spectral range.
Despite decades of ship-based observations at the Bermuda Atlantic Time-series Study (BATS) site, ambiguities linger in our understanding of the region's annual carbon cycle. Difficulties reconciling geochemical estimates of annual net community production (ANCP) with direct measurements of nutrient delivery and carbon exports (EP) imply either an insufficient understanding of these processes or that they play out on shorter time and spatial scales than resolved by monthly sampling. We address the latter concern using autonomous underwater gliders equipped with biogeochemical sensors to quantify ANCP from mass balances of oxygen (O2) and nitrate (NO3) over a full annual cycle. The timing, amplitude and distribution of O2 production, consumption, and NO3 fluxes reaffirm ideas about strong seasonality in physical forcing and trophic structure creating a dual system: i.e., production fueled by NO3 supplied to the photic zone from deeper layers in the first half of the year, versus production recycled within the upper ocean during the second half. The evidence also supports recently proposed hypotheses regarding the production and recycling of carbon with non-Redfield characteristics, depleted in nitrogen and phosphorus, to explain observed patterns of high NCP in the absence of significant NO3 supply. It further identifies significant contributions to ANCP and EP potentially linked to vertically migrating communities of salps in spring after all convective activity has ceased. The improved resolution of the datasets, combined with more precise definitions of photic and subphotic integration depths, brings the estimates of ANCP and EP into better alignment with each other.
We present new electromagnetic plasma modes that propagate in one time and one space coordinate. Unlike the usual plane-wave solution, which is written in terms of separation of variables, all our solutions are expressed in light-cone coordinates. This allows us to find several new wavepacket solutions whose functional properties depend on the conditions imposed on their light-cone coordinate dependence. The presented wavepacket solutions are constructed as products of Airy functions, parabolic cylinder functions, Mathieu functions, or Bessel functions. We thoroughly analyze the case of a double Airy solution, which has new electromagnetic properties, such as a well-defined wavefront and a velocity faster than that of the electromagnetic plane-wave counterpart solution. We also mention how more general structured wavepackets can be constructed from these new solutions.
Physics-informed machine learning (PIML) integrates partial differential equations (PDEs) into machine learning models to solve inverse problems, such as estimating coefficient functions (e.g., the Hamiltonian function) that characterize physical systems. This framework enables data-driven understanding and prediction of complex physical phenomena. While coefficient functions in PIML are typically estimated on the basis of predictive performance, physics as a discipline does not rely solely on prediction accuracy to evaluate models. For example, Kepler's heliocentric model was favored owing to small discrepancies in planetary motion, despite its similar predictive accuracy to the geocentric model. This highlights the inherent uncertainties in data-driven model inference and the scientific importance of selecting physically meaningful solutions. In this paper, we propose a framework to quantify and analyze such uncertainties in the estimation of coefficient functions in PIML. We apply our framework to a reduced model of magnetohydrodynamics; it shows that uncertainties are indeed present and that unique identification becomes possible under geometric constraints. Finally, we confirm that the reduced model can be estimated uniquely by incorporating these constraints.
Constructing reduced models for turbulent transport is essential for accelerating profile predictions and enabling many-query tasks such as uncertainty quantification, parameter scans, and design optimization. This paper presents machine-learning-driven reduced models for Electron Temperature Gradient (ETG) turbulence in the Wendelstein 7-X (W7-X) stellarator. Each model predicts the ETG heat flux as a function of three plasma parameters: the normalized electron temperature radial gradient ($\omega_{T_e}$), the ratio of normalized electron temperature and density radial gradients ($\eta_e$), and the electron-to-ion temperature ratio ($\tau$). We first construct models across seven radial locations using regression and an active machine-learning-based procedure. This process initializes models using low-cardinality sparse-grid training data and then iteratively refines their training sets by selecting the most informative points from a pre-existing simulation database. We evaluate the prediction capabilities of our models using out-of-sample datasets with over $393$ points per location, and $95\%$ prediction intervals are estimated via bootstrapping to assess prediction uncertainty. We then investigate the construction of generalized reduced models, including a generic, position-independent model, and assess their heat flux prediction capabilities at three additional locations. Our models demonstrate robust performance and predictive accuracy comparable to the original reference simulations, even when applied beyond the training domain.
This work develops quantized local reduced-order models (ql-ROMs) of the turbulent Minimal Flow Unit (MFU) for the analysis and interpretation of intermittent dissipative dynamics and extreme events. The ql-ROM combines data-driven clustering of the flow state space with intrusive Galerkin projection on locally defined Proper Orthogonal Decomposition (POD) bases. This construction enables an accurate and stable low-dimensional representation of nonlinear flow dynamics whilst preserving the structure of the governing equations. The model is trained on direct numerical simulation data of the MFU. When deployed, the ql-ROM is numerically stable for long-term integration and correctly infers the statistical behavior of the kinetic energy and dissipation observed in the full-order system. A local modal energy-budget formulation is employed to quantify intermodal energy transfer and viscous dissipation within each region of the attractor. The analysis reveals that dissipation bursts correspond to localized energy transfer from streamwise streaks and travelling-wave modes toward highly dissipative vortical structures, consistent with the self-sustaining process of near-wall turbulence. Beyond reduced-order modeling, ql-ROMs offer an interpretable and computationally efficient framework for the reduced-space characterization and potential prediction of extreme events in turbulent flows.
We propose an improved Path Integral Monte Carlo (PIMC) algorithm called Harmonic PIMC (H-PIMC) and its generalization, Mixed PIMC (M-PIMC). PIMC is a powerful tool for studying quantum condensed phases. However, it often suffers from a low acceptance ratio for solids and dense confined liquids. We develop two sampling schemes especially suited for such problems by dividing the potential into its harmonic and anharmonic contributions. In H-PIMC, we generate the imaginary-time paths for the harmonic part of the potential exactly and accept or reject them based on the anharmonic part. In M-PIMC, we restrict the harmonic sampling to the vicinity of a local minimum and use standard PIMC otherwise, to optimize efficiency. We benchmark H-PIMC on systems with increasing anharmonicity, improving the acceptance ratio and lowering the autocorrelation time. For weakly to moderately anharmonic systems, at $\beta \hbar \omega=16$, H-PIMC improves the acceptance ratio by a factor of 6-16 and reduces the autocorrelation time by a factor of 7-30. We also find that the method requires a smaller number of imaginary-time slices for convergence, which leads to another two- to four-fold acceleration. For strongly anharmonic systems, M-PIMC converges with a similar number of imaginary-time slices as standard PIMC, but allows the optimization of the autocorrelation time. We extend M-PIMC to periodic systems and apply it to a sinusoidal potential. Finally, we combine H- and M-PIMC with the worm algorithm, allowing us to obtain similar efficiency gains for systems of indistinguishable particles.
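The core idea — sample the harmonic part exactly and accept or reject on the anharmonic remainder — can be illustrated on a single degree of freedom with an independence Metropolis sampler. This is a toy classical analogue of H-PIMC, not the paper's path-space algorithm, and the quartic coupling strength is an arbitrary choice:

```python
import numpy as np

def harmonic_proposal_sampler(lam, beta=1.0, n=50000, seed=0):
    """Sample exp(-beta*(x^2/2 + lam*x^4)) by proposing from the exact
    harmonic (Gaussian) part; the Metropolis ratio then reduces to the
    anharmonic contribution alone, mirroring the H-PIMC accept/reject."""
    rng = np.random.default_rng(seed)
    x = 0.0
    accepted = 0
    samples = np.empty(n)
    for i in range(n):
        xp = rng.normal(0.0, 1.0 / np.sqrt(beta))   # exact harmonic draw
        # Gaussian factors cancel; only the anharmonic part remains.
        if rng.random() < np.exp(-beta * lam * (xp**4 - x**4)):
            x = xp
            accepted += 1
        samples[i] = x
    return samples, accepted / n

samples, acc = harmonic_proposal_sampler(lam=0.02)
print(acc, samples.var())   # high acceptance; variance below the harmonic value 1
```

For weak anharmonicity the acceptance stays near unity, which is the mechanism behind the reported 6-16x acceptance improvements; as the anharmonicity grows, the proposal matches the target less well, motivating the mixed M-PIMC scheme.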
We establish a comprehensive probability theory for coherent transport of random waves through arbitrary linear media. The transmissivity distribution for random coherent waves is a fundamental B-spline with knots at the transmission eigenvalues. We analyze the distribution's shape, bounds, moments, and asymptotic behaviors. In the large n limit, the distribution converges to a Gaussian whose mean and variance depend solely on those of the eigenvalues. This result resolves the apparent paradox between bimodal eigenvalue distribution and unimodal transmissivity distribution.
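The headline result — the transmissivity density of random coherent waves is the fundamental B-spline with knots at the transmission eigenvalues — can be checked numerically in the simplest nontrivial case of three eigenvalues, where the B-spline reduces to a normalized hat function. The eigenvalues below are arbitrary choices, and random coherent input is modeled by flat Dirichlet mode weights (an assumption consistent with isotropically random input amplitudes):

```python
import numpy as np

# Three transmission eigenvalues (arbitrary, distinct).
tau = np.array([0.2, 0.5, 0.9])

# Random coherent input: uniformly random weights on the simplex, so the
# transmissivity is T = sum_i w_i * tau_i with w ~ Dirichlet(1, 1, 1).
rng = np.random.default_rng(0)
T = rng.dirichlet(np.ones(3), size=400000) @ tau

def bspline_density(t):
    """Fundamental (normalized) linear B-spline with knots tau: a hat
    function peaking at tau[1] with height 2 / (tau[2] - tau[0])."""
    t0, t1, t2 = tau
    up = 2 * (t - t0) / ((t1 - t0) * (t2 - t0))
    down = 2 * (t2 - t) / ((t2 - t1) * (t2 - t0))
    return np.where((t >= t0) & (t <= t2), np.minimum(up, down), 0.0)

hist, edges = np.histogram(T, bins=35, range=(tau[0], tau[-1]), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - bspline_density(centers))))   # small deviation
```

With n eigenvalues the same construction yields a degree-(n-2) B-spline, and in the large-n limit the histogram approaches the Gaussian described in the abstract.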
Coherent motions associated with extreme wall shear stress events are investigated for adverse pressure gradient turbulent boundary layers (APG-TBLs). The analyses are performed using wall-resolved large eddy simulations of a NACA0012 airfoil at angles of attack of 9 and 12 deg. and Reynolds number 400000. The suction side exhibits attached TBLs which develop under progressively stronger APGs. A quadrant decomposition of Reynolds shear stress shows that sweeps and ejections dominate the momentum exchange between the mean and fluctuating fields, with the intensity of sweeps near the wall growing more rapidly with APG strength. Probability density functions of wall shear stress reveal a higher frequency of backflow events and an increased distribution symmetry with stronger APGs. Extreme positive and backflow events are examined using space--time correlations and conditional statistics. Conditional averages show that backflow events originate from inner-layer sweep motions bringing high-momentum fluid toward the wall, followed by ejections that drive local deceleration. In such cases, the intensity of ejections is modulated by the APG strength. The dynamics of coherent turbulent structures and their interactions are examined using conditional flow field analyses. For extreme positive events, stronger APGs lead to shorter high-speed streaks, while the associated sweep motions generate spanwise velocities that increasingly influence the near-wall dynamics. In the case of backflows, stronger APGs shorten low-speed streaks and amplify high-speed structures associated with sweep motions, promoting spanwise alignment of vortical structures. Overall, APGs modify the structure and dynamics of extreme near-wall events by reshaping the balance and spatial organization of sweep- and ejection-dominated motions.
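The quadrant decomposition used above can be sketched on synthetic fluctuation data: (u', v') samples are split into four quadrants, with Q2 (u'<0, v'>0) ejections and Q4 (u'>0, v'<0) sweeps carrying most of a negatively correlated Reynolds shear stress. The joint-Gaussian fluctuations below are a stand-in for actual boundary-layer data:

```python
import numpy as np

def quadrant_contributions(u, v):
    """Fractional contribution of each quadrant to the mean u'v'."""
    uv_mean = np.mean(u * v)
    quads = {
        "Q1 (outward interaction)": (u > 0) & (v > 0),
        "Q2 (ejection)":            (u < 0) & (v > 0),
        "Q3 (inward interaction)":  (u < 0) & (v < 0),
        "Q4 (sweep)":               (u > 0) & (v < 0),
    }
    return {name: np.mean(u * v * mask) / uv_mean for name, mask in quads.items()}

# Synthetic fluctuations with the negative u'v' correlation typical of TBLs.
rng = np.random.default_rng(0)
u, v = rng.multivariate_normal([0, 0], [[1.0, -0.5], [-0.5, 0.5]],
                               size=200000).T
frac = quadrant_contributions(u, v)
for name, f in frac.items():
    print(f"{name}: {f:+.2f}")   # Q2 and Q4 dominate the momentum exchange
```

The fractions sum to one by construction; Q1 and Q3 contribute with opposite sign, so the combined ejection and sweep contribution exceeds unity, the signature of their dominance noted in the abstract.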
Designing frictional interfaces to exhibit prescribed macroscopic behavior is a challenging inverse problem, made difficult by the non-uniqueness of solutions and the computational cost of contact simulations. Traditional approaches rely on heuristic search over low-dimensional parameterizations, which limits their applicability to more complex or nonlinear friction laws. We introduce a generative modeling framework using Variational Autoencoders (VAEs) to infer surface topographies from target friction laws. Trained on a synthetic dataset composed of 200 million samples constructed from a parameterized contact mechanics model, the proposed method enables efficient, simulation-free generation of candidate topographies. We examine the potential and limitations of generative modeling for this inverse design task, focusing on balancing accuracy, throughput, and diversity in the generated solutions. Our results highlight trade-offs and outline practical considerations when balancing these objectives. This approach paves the way for near-real-time control of frictional behavior through tailored surface topographies.
We present a bifidelity Karhunen-Loève expansion (KLE) surrogate model for field-valued quantities of interest (QoIs) under uncertain inputs. The approach combines the spectral efficiency of the KLE with polynomial chaos expansions (PCEs) to preserve an explicit mapping between input uncertainties and output fields. By coupling inexpensive low-fidelity (LF) simulations that capture dominant response trends with a limited number of high-fidelity (HF) simulations that correct for systematic bias, the proposed method enables accurate and computationally affordable surrogate construction. To further improve surrogate accuracy, we develop an active learning strategy that adaptively selects new HF evaluations based on the surrogate's generalization error, estimated via cross-validation and modeled using Gaussian process regression. New HF samples are then acquired by maximizing an expected improvement criterion, targeting regions of high surrogate error. The resulting BF-KLE-AL framework is demonstrated on three examples of increasing complexity: a one-dimensional analytical benchmark, a two-dimensional convection-diffusion system, and a three-dimensional turbulent round jet simulation based on Reynolds-averaged Navier--Stokes (RANS) and enhanced delayed detached-eddy simulations (EDDES). Across these cases, the method achieves consistent improvements in predictive accuracy and sample efficiency relative to single-fidelity and random-sampling approaches.
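The expected improvement (EI) acquisition step can be sketched as follows. This is the standard closed-form EI applied to a GP model of the surrogate's cross-validation error (the toy posterior values below are hypothetical, not the paper's model):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best, xi=0.0):
    """Closed-form EI for maximizing a GP-modeled quantity (here: surrogate CV error)."""
    sigma = np.maximum(sigma, 1e-12)
    z = (mu - best - xi) / sigma
    return (mu - best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

# Toy GP posterior of the cross-validation error over 5 candidate inputs.
mu = np.array([0.10, 0.30, 0.25, 0.05, 0.28])     # predicted error
sigma = np.array([0.02, 0.01, 0.15, 0.02, 0.05])  # predictive std
best = mu.max()                                    # current worst observed error

ei = expected_improvement(mu, sigma, best)
pick = int(np.argmax(ei))  # next HF simulation goes where EI is largest
```

Note that EI favors the candidate with large predictive uncertainty (index 2) over the one whose mean merely ties the current best, which is how the framework steers HF samples toward poorly resolved regions.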
We demonstrate that the slot between parallel metal gates placed above a two-dimensional electron system (2DES) forms a plasmonic cavity with unconventional mode quantization. The resonant plasmon modes are excited when the slot width $L$ and the plasmon wavelength $\lambda$ satisfy the condition $L = \lambda/8 + n \times \lambda/2$, where $n = 0, 1, 2, \ldots$. The lowest resonance occurs at a surprisingly small cavity size, specifically one eighth of the plasmon wavelength, which contrasts with the conventional half-wavelength Fabry-Perot cavities in optics. This unique quantization rule arises from a non-trivial phase shift of $-\pi/4$ acquired by the 2D plasmon upon reflection from the edge of the gate. The slot plasmon modes exhibit weak decay into the gated 2DES region, with the decay rate being proportional to the square root of the separation between the gate and the 2DES. The absorption cross-section of such slots reaches $\sim 50\%$ of the fundamental dipole limit without any matching strategies, aided by field enhancement at the metal edges.
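Solving the quantization condition $L = \lambda/8 + n\lambda/2$ for $\lambda$ gives the resonant wavelengths $\lambda_n = 8L/(1 + 4n)$, which a few lines of code make concrete:

```python
import numpy as np

def resonant_wavelengths(L, n_max=5):
    """Plasmon wavelengths satisfying L = lambda/8 + n*lambda/2, n = 0, 1, ..."""
    n = np.arange(n_max + 1)
    return 8.0 * L / (1.0 + 4.0 * n)

L = 1.0  # slot width (arbitrary units)
lam = resonant_wavelengths(L)
# Fundamental mode: lam[0] = 8*L, i.e. the cavity is only lambda/8 long,
# in contrast to the half-wavelength condition of a Fabry-Perot cavity.
```

The $n = 0$ mode immediately shows the unusually small cavity: the slot is one eighth of the resonant wavelength.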
We resolve Loschmidt's paradox -- the apparent contradiction between time-reversible microscopic dynamics and irreversible macroscopic evolution -- including the long-standing puzzle of the thermodynamic arrow of time. The resolution: entropy increases not because dynamics are asymmetric, but because information accessibility is geometrically bounded. For Hamiltonian systems (conservative dynamics), Lyapunov exponents come in positive-negative pairs ($\{\lambda_i, -\lambda_i\}$) due to symplectic structure. Under time reversal these pairs flip ($\lambda_i \to -\lambda_i$), but stable manifolds contract below quantum resolution $\lambda = \hbar/\sqrt{mk_BT}$, becoming physically indistinguishable. We always observe only unstable manifolds where trajectories diverge. Hence information loss proceeds at the same rate $h_{KS} = \frac{1}{2}\sum_{\text{all } i}|\lambda_i|$ in both time directions, resolving the arrow of time: ``forward'' simply means ``where we observe expansion,'' which is universal because stable manifolds always contract below measurability. Quantitatively, for N$_2$ gas at STP with conservative estimates ($h_{KS} \sim 10^{10}$ s$^{-1}$), time reversal at $t = 1$ nanosecond requires momentum precision $\sim 10^{-13}$ times quantum limits -- geometrically impossible. At macroscopic times, the precision requirement becomes $\sim 10^{-10^{10}}$ times quantum limits. This framework preserves microscopic time-reversal symmetry, requires no special initial conditions or Past Hypothesis, and extends to quantum systems (OTOCs) and black hole thermodynamics.
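A back-of-envelope check of two numbers quoted above: the quantum resolution scale $\lambda = \hbar/\sqrt{m k_B T}$ for N$_2$ at STP, and the per-degree-of-freedom error amplification $e^{h_{KS} t}$ over one nanosecond. (The abstract's $10^{-13}$ figure aggregates over the full many-particle system, so the single-exponent factor below is only one ingredient of that estimate.)

```python
import numpy as np

hbar = 1.054_571_817e-34     # J s
k_B = 1.380_649e-23          # J/K
m_N2 = 28.0 * 1.660_539e-27  # kg, molecular mass of N2

T = 273.15  # K (STP)
lam = hbar / np.sqrt(m_N2 * k_B * T)  # quantum resolution scale from the text
# lam ~ 8e-12 m: stable-manifold contraction below this few-picometre scale
# is what the argument calls "physically indistinguishable".

h_KS = 1e10   # s^-1, the paper's conservative Kolmogorov-Sinai rate estimate
t = 1e-9      # s, one nanosecond
stretch = np.exp(h_KS * t)  # per-degree-of-freedom error growth, e^10 ~ 2.2e4
```

Even this single-mode factor of $\sim 2\times 10^4$ over a nanosecond indicates how quickly the precision demanded for time reversal outruns the quantum resolution scale.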
A biofilm is a self-contained community of bacteria that uses signaling molecules called autoinducers (AIs) to coordinate responses through the process of quorum sensing. Biofilms exhibit a dual role that drives interest in both combating antimicrobial resistance (AMR) and leveraging their potential in bioprocessing, since their products can have commercial potential. Previous work has demonstrated how the distinct anisotropic channel geometry in some biofilms affects AI propagation within them. In this paper, a 2D anisotropic biofilm channel model is extended to a time-varying channel (TVC) to represent the diffusion dynamics during the maturation phase when water channels develop. Since maturation is associated with the development of anisotropy, the time-varying model captures the shift from isotropic to anisotropic diffusion. Particle-based simulation results illustrate how the TVC is a hybrid scenario incorporating propagation features of both isotropic and anisotropic diffusion. This hybrid behavior aligns with biofilm maturation. Further study of the TVC includes characterization of the mutual information (MI), which reveals that an increased AI count, reduced transmitter--receiver distance, greater degree of anisotropy, and shorter inter-symbol interference lengths increase the MI. Finally, a brief dimensional analysis demonstrates the scalability of the anisotropic channel results for larger biofilms and timescales.
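The isotropic-to-anisotropic transition can be illustrated with a minimal particle-based sketch: a 2D random walk whose transverse diffusivity ramps down as the channel "matures" (all parameters below are illustrative, not taken from the paper's channel model):

```python
import numpy as np

rng = np.random.default_rng(2)

n_particles, n_steps, dt = 5000, 200, 0.01

def diffusivities(t, t_mature=1.0):
    """Ramp the y-diffusivity down as the channel matures (isotropic -> anisotropic)."""
    s = min(t / t_mature, 1.0)
    Dx = 1.0
    Dy = 1.0 - 0.8 * s  # ends at Dy = 0.2: channels favour x-transport
    return Dx, Dy

pos = np.zeros((n_particles, 2))
for k in range(n_steps):
    Dx, Dy = diffusivities(k * dt)
    step = np.sqrt(2 * np.array([Dx, Dy]) * dt)
    pos += step * rng.standard_normal((n_particles, 2))

# Time-integrated diffusivities predict var = 2 * int D dt per direction:
# here var_x ~ 4.0 while var_y ~ 1.6, a hybrid of the two regimes.
var_x, var_y = pos.var(axis=0)
```

The final spread is wider along the channel direction than across it, with the transverse variance set by the time-averaged (hybrid) diffusivity, matching the qualitative TVC behavior described above.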
Background: Non-invasive imaging-based assessment of blood flow plays a critical role in evaluating heart function and structure. Computed Tomography (CT) is a widely used imaging modality that can robustly evaluate cardiovascular anatomy and function, but direct methods to estimate blood flow velocity from movies of contrast evolution have not been developed. Purpose: This study evaluates the impact of CT imaging on Physics-Informed Neural Networks (PINN)-based flow estimation and proposes an improved framework, SinoFlow, which uses sinogram data directly to estimate blood flow. Methods: We generated pulsatile flow fields in an idealized 2D vessel bifurcation using computational fluid dynamics and simulated CT scans with varying gantry rotation speeds, tube currents, and pulse mode imaging settings. We compared the performance of PINN-based flow estimation using reconstructed images (ImageFlow) to SinoFlow. Results: SinoFlow significantly improved flow estimation performance by avoiding the propagation of errors introduced by filtered backprojection. SinoFlow was robust across all tested gantry rotation speeds and consistently produced lower mean squared error and velocity errors than ImageFlow. Additionally, SinoFlow was compatible with pulsed-mode imaging and maintained higher accuracy with shorter pulse widths. Conclusions: This study demonstrates the potential of SinoFlow for CT-based flow estimation, providing a promising approach for non-invasive blood flow assessment. The findings aim to inform future applications of PINNs to CT data and offer a sinogram-based alternative to image-based estimation, with reasonable acquisition parameters yielding accurate flow estimates.
Recent discoveries in semi-metallic multi-gap systems featuring band singularities have galvanized enormous interest, in particular due to the emergence of non-Abelian braiding properties of band nodes. This previously uncharted set of topological phases necessitates novel approaches to probe them in laboratories, a pursuit that intricately relates to evaluating non-Abelian generalizations of the Abelian quantum geometric tensor (QGT) that characterizes geometric responses. Here, we pioneer the direct measurement of the non-Abelian QGT. We achieve this by implementing a novel orbital-resolved polarimetry technique to probe the full Bloch Hamiltonian of a six-band two-dimensional (2D) synthetic lattice, which grants direct experimental access to non-Abelian quaternion charges, the Euler curvature, and the non-Abelian quantum metric associated with all bands. Quantum geometry has been highlighted to play a key role in macroscopic phenomena ranging from superconductivity in flat bands to optical responses, transport, metrology, and quantum Hall physics. Therefore, our work unlocks the experimental probing of a wide phenomenology of multi-gap systems, at the confluence of topology, geometry and non-Abelian physics.
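For readers new to the QGT: in the single-band (Abelian) case it can be computed gauge-invariantly from the band projector as $Q_{ij} = \mathrm{tr}[P\, \partial_i P\, \partial_j P]$, with quantum metric $g = \mathrm{Re}\,Q$ and Berry curvature $F_{xy} = -2\,\mathrm{Im}\,Q_{xy}$; the non-Abelian version measured above promotes this scalar to a matrix over a band subspace. A sketch on a standard two-band model (the Qi-Wu-Zhang model, used here purely as a stand-in for the six-band lattice):

```python
import numpy as np

# Pauli matrices; H(k) = d(k) . sigma (Qi-Wu-Zhang model, m = 1: Chern |C| = 1).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def projector(kx, ky, m=1.0):
    H = np.sin(kx) * sx + np.sin(ky) * sy + (m - np.cos(kx) - np.cos(ky)) * sz
    w, v = np.linalg.eigh(H)
    u = v[:, 0]  # lower band
    return np.outer(u, u.conj())

def qgt(kx, ky, eps=1e-5):
    """Abelian QGT Q_ij = tr[P dP_i dP_j] via finite differences of the projector."""
    P = projector(kx, ky)
    dPx = (projector(kx + eps, ky) - projector(kx - eps, ky)) / (2 * eps)
    dPy = (projector(kx, ky + eps) - projector(kx, ky - eps)) / (2 * eps)
    dP = [dPx, dPy]
    return np.array([[np.trace(P @ dP[i] @ dP[j]) for j in range(2)]
                     for i in range(2)])

# Berry curvature F = -2 Im Q_xy integrates to 2*pi*(Chern number) over the BZ.
N = 60
ks = np.linspace(-np.pi, np.pi, N, endpoint=False)
F = [[-2 * qgt(kx, ky)[0, 1].imag for ky in ks] for kx in ks]
chern = np.sum(F) * (2 * np.pi / N) ** 2 / (2 * np.pi)
```

Because the projector is gauge-invariant, the finite-difference evaluation avoids the eigenvector phase ambiguities that plague naive $\langle\partial_i u|\partial_j u\rangle$ formulas.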
During mid-May 2024, active region (AR) 13664 produced a series of M- and X-class flares along with several coronal mass ejections (CMEs) that resulted in exceptionally strong aurora at Earth. This study presents in-situ solar energetic particle (SEP) ion composition data from Solar Terrestrial Relations Observatory Ahead (STA), Advanced Composition Explorer (ACE), and Parker Solar Probe (PSP) as their magnetic connectivity to AR 13664 varied throughout the event period. Between 08 and 24 May, STA was separated by 12° in longitude from ACE at 0.96 AU. SEP intensities rose gradually due to merged CMEs from AR 13664. On 13 May, an M6 flare was followed by a rapid-onset SEP event at STA, although velocity dispersion analysis yielded no clear path length or release time. PSP, 95° longitudinally separated from Earth at 0.74 AU, observed gradually increasing SEP intensities beginning 11 May, followed by a jump in both SEP intensity and magnetic field (>100 nT) on 16 May. These early event intervals display stepwise SEP increases, consistent with the passage of successive CMEs. On 20 May, an X16.5 flare from AR 13664 produced an Fe-rich SEP event observed at all three spacecraft despite their wide longitudinal separations. Throughout the period, Fe/O ratios ranged from <0.01 to >0.8 and increased with energy between 1 and 100 MeV/nuc. This trend deviates from the typical energy-dependent decrease expected from diffusive shock acceleration and suggests more complex scenarios, possibly involving variable suprathermal seed populations or species-dependent transport.
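The velocity dispersion analysis (VDA) mentioned above fits onset times against inverse particle speed, $t_{\rm onset}(E) = t_0 + L/v(E)$: the slope gives the path length $L$ and the intercept the release time $t_0$. A sketch on synthetic proton onsets (the path length and release time below are made-up inputs, recovered by the fit):

```python
import numpy as np

AU_KM = 1.495978707e8  # km per AU
c = 2.998e5            # km/s

# Synthetic onset times for a path length of 1.2 AU and release at t0 = 100 s.
E_MeV = np.array([5.0, 10.0, 20.0, 40.0, 80.0])  # proton kinetic energies
E0 = 938.272                                      # proton rest energy, MeV
gamma = 1.0 + E_MeV / E0
beta = np.sqrt(1.0 - 1.0 / gamma**2)
v = beta * c                                      # relativistic speed, km/s

L_true, t0_true = 1.2 * AU_KM, 100.0
t_onset = t0_true + L_true / v

# VDA: linear fit of t_onset vs 1/v; slope = path length, intercept = release time.
slope, intercept = np.polyfit(1.0 / v, t_onset, 1)
L_fit_AU = slope / AU_KM  # ~1.2 AU
t0_fit = intercept        # ~100 s
```

In practice, scattering and instrumental onset-determination errors blur this linear relation, which is why the 13 May event above yielded no clear path length or release time.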
Quantum photonic networks require two distinct functionalities: bright single-photon sources and long-lived quantum memories. III-V semiconductor quantum dots excel as deterministic and coherent photon emitters, while rare-earth ions such as erbium (Er$^{3+}$) in crystalline oxides offer exceptional spin and optical coherence at telecom wavelengths. Combining these systems and their functionalities via direct epitaxy is challenging due to lattice mismatch and incompatible growth conditions. Here we demonstrate low-temperature pulsed laser deposition of Er$^{3+}$-doped TiO$_{2}$ thin films directly on GaAs and GaSb substrates. Controlled surface preparation with an arsenic cap and an oxygen-deficient buffer layer enables the growth of epitaxial anatase TiO$_{2}$ (001) at 390$^{\circ}$C with sub-300 pm surface roughness, while avoiding interface degradation. In contrast, high-temperature oxide desorption or elevated growth temperatures drive a transition to a rough, polycrystalline rutile film, as confirmed by transmission electron microscopy. Minimal coincident interface area (MCIA) modeling explains the orientation-selective growth on GaAs and GaSb. Raman and cryogenic photoluminescence excitation spectroscopy verify the crystal phase and optical activation of Er$^{3+}$ ions. This multi-parameter growth strategy helps preserve III-V quantum dot functionality and yields smooth surfaces suitable for low-loss nanophotonic structures. Our results establish a materials platform for monolithically integrating rare-earth quantum memories with semiconductor photon sources, paving the way toward scalable hybrid quantum photonic chips.
Quantum imaging is emerging as a transformative approach for biomedical applications, exploiting nonclassical properties of light, such as entanglement, squeezing, and quantum correlations, to overcome fundamental limits of conventional techniques. These methods promise superior spatial resolution, enhanced signal-to-noise ratios, improved phase sensitivity, and reduced radiation dose, for potentially safer and more precise imaging of delicate biological samples. Here, we present an overview of quantum optical biomedical imaging technologies as well as quantum-inspired imaging methods, including quantum optical coherence tomography, quantum optical microscopy, ghost imaging, multi-parameter quantum imaging, and imaging with quantum-grade cameras. We describe the operating principles, biomedical applications, and unique advantages of each approach, along with the specific challenges for their translation into real-life practice. This review aims to guide future research toward advancing quantum imaging from experimental demonstrations to impactful biomedical tools.
We numerically study the fast spatial transport of a trapped Bose-Einstein condensate (BEC) using shortcuts-to-adiabaticity (STA) by counterdiabatic driving (CD). The trapping potential and the required auxiliary potential were simulated as painted potentials. We compared STA transport to transport that follows a constant-acceleration scheme (CA). Experimentally feasible values of trap depth and atom number were used in the 2D Gross-Pitaevskii equation (GPE) simulations. Different transport times, trap depths, and trap lengths were investigated. In all simulations, there exists a minimum amount of time necessary for fast transport, which is consistent with previous results from quantum speed limit studies.
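A common choice for STA transport (not necessarily the trajectory used in the simulations above) is a fifth-order polynomial reference trajectory with vanishing velocity and acceleration at both endpoints; the counterdiabatic term then reduces to a time-dependent linear potential with force $-m\ddot{x}_0(t)$. A minimal sketch:

```python
import numpy as np

def x0(s):
    """Smooth transport trajectory with zero velocity/acceleration at both ends."""
    return 10 * s**3 - 15 * s**4 + 6 * s**5

def a0(s, d, T):
    """Trap acceleration d^2/dt^2 of d*x0(t/T) for distance d in time T."""
    return d / T**2 * (60 * s - 180 * s**2 + 120 * s**3)

d, T = 50e-6, 10e-3   # 50 um in 10 ms (illustrative values)
m = 1.443e-25         # kg, 87Rb atomic mass (illustrative species)

s = np.linspace(0, 1, 1001)
x = d * x0(s)
F_cd = -m * a0(s, d, T)  # counterdiabatic compensation: linear potential -F_cd * x

# Boundary conditions guarantee the cloud starts and ends at rest:
# x(0) = 0, x(T) = d, and the auxiliary force vanishes at both endpoints.
```

Because the auxiliary force switches off at $t = 0$ and $t = T$, the trap needs no discontinuous jumps, which is what makes such trajectories convenient to paint experimentally.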
The Moon has long been regarded as a natural resonator of gravitational waves (GWs), an idea dating back to 1960, showing great potential to fill the frequency gap left by ground- and space-based laser-interferometric GW detection. However, the spatial variation of this amplification capacity across the Moon remains unclear. Here, we numerically simulate the lunar response to GWs by fully considering the fluctuant topography and laterally heterogeneous interior structures. Our results show that most regions on the Moon can amplify GWs by a factor of more than 2, significantly higher than previous estimates. Particularly, the amplification ratio can even reach factors of tens at the resonant frequency of ~0.015 Hz on the highlands surrounding the South Pole-Aitken (SPA) basin, where the regional crust is the thickest. Our findings establish the thick-crust regions as critical zones of GW amplification, which is essential for future landing site selection and instrument deployment for GW detection on the Moon.
Habitat fragmentation, often driven by human activities, alters ecological landscapes by disrupting connectivity and reshaping species interactions. In such fragmented environments, habitats can be modeled as networks, where individuals disperse across interconnected patches. We consider an intraspecific competition model, where individuals compete for space while dispersing according to a nonlinear random walk, capturing the heterogeneity of the network. The interplay between asymmetric competition, dispersal dynamics, and spatial heterogeneity leads to nonuniform species distribution: individuals with stronger competitive traits accumulate in central (hub) habitat patches, while those with weaker traits are displaced toward the periphery. We provide analytical insights into this mechanism, supported by numerical simulations, demonstrating how competition and spatial structure jointly influence species segregation. In the large-network limit, this effect becomes extreme, with dominant individuals disappearing from peripheral patches and subordinate ones from central regions, establishing spatial segregation. This pattern may create favorable conditions for speciation, as physical separation can reinforce divergence within the population over time.
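The baseline mechanism behind hub accumulation can be seen already in a linear random walk on a network, whose stationary occupation is proportional to node degree; the paper's nonlinear walk and asymmetric competition then bias which competitors occupy the hubs. A minimal sketch on a hypothetical hub-and-ring network (the linear walk is used here as an illustration, not the paper's model):

```python
import numpy as np

# Hub-and-ring network: node 0 is a hub connected to 5 peripheral nodes,
# and the periphery forms a ring (illustrative adjacency matrix).
A = np.zeros((6, 6))
A[0, 1:] = A[1:, 0] = 1
for i in range(1, 6):
    j = 1 + (i % 5)
    A[i, j] = A[j, i] = 1

deg = A.sum(axis=1)
P = A / deg[:, None]          # row-stochastic transition matrix

# Iterate the master equation p_{t+1} = p_t P from a uniform start.
p = np.full(6, 1 / 6)
for _ in range(500):
    p = p @ P

stationary = deg / deg.sum()  # detailed balance: occupation ~ degree
```

The walkers pile up on the hub (node 0), illustrating why, once competition is added, stronger competitors monopolizing central patches displaces weaker ones to the periphery.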
The recent rapid deployment of datacenter infrastructures for running large language models (LLMs) and related artificial intelligence (AI) applications in the cloud is predicted to incur exponentially growing energy consumption in the near future. In this paper, we propose and analyze the implementation of the transformer model, which is the cornerstone of modern LLMs, with novel large-scale optoelectronic neurons (OENs) constructed over the commercially available complementary metal-oxide-semiconductor (CMOS) image sensor (CIS) platform. With all of the required optoelectronic devices and electronic circuits integrated in a chiplet only about 2 cm by 3 cm in size, the 175 billion parameters in the case of GPT-3 are shown to perform inference at an unprecedented speed of 12.6 POPS using only a 40 nm CMOS process node, along with a high power efficiency of 74 TOPS/W and a high area efficiency of 19 TOPS/mm2, both surpassing comparable digital electronics by roughly two orders of magnitude. The influence of quantization formats and hardware-induced errors is numerically investigated and shown to have a minimal impact. Our study presents a new yet practical path toward analog neural processing units (NPUs) to complement existing digital processing units.
Multistability, the coexistence of multiple stable states, is a cornerstone of nonlinear dynamical systems, governing their equilibrium, tunability, and emergent complexity. Recently, the concept of hidden multistability, where certain stable states evade detection via conventional continuous parameter sweeping, has garnered increasing attention due to its elusive nature and promising applications. In this Letter, we present the first experimental observation of hidden multistability using a programmable acoustic coupled-cavity platform that integrates competing self-focusing and self-defocusing Kerr nonlinearities. Beyond established bistability, we demonstrate semi- and fully-hidden tristabilities by precisely programming system parameters. Crucially, the hidden stable states, typically inaccessible via the traditional protocol, are unambiguously revealed and dynamically controlled through pulsed excitation, enabling flexible transitions between distinct types of stable states. These experimental findings not only offer new insights into the fundamental physics of emerging hidden multistability, but also unlock new avenues for applications in information storage, information encryption, and safety precautions, where multi-state dynamics could enable advanced control techniques.
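The basic phenomenology of Kerr multistability can be sketched with the steady states of a single driven Kerr cavity, whose intracavity intensity satisfies a cubic equation; a continuous sweep of the drive follows one branch and never visits the others, which is the sense in which coexisting states can stay "hidden". (Single-cavity bistability with illustrative parameters, a far simpler setting than the coupled-cavity tristabilities reported above.)

```python
import numpy as np

def steady_intensities(P, delta=3.0, kappa=1.0, chi=1.0):
    """Real, positive solutions of I*((delta - chi*I)^2 + (kappa/2)^2) = P."""
    # Expanded: chi^2 I^3 - 2 delta chi I^2 + (delta^2 + kappa^2/4) I - P = 0
    coeffs = [chi**2, -2 * delta * chi, delta**2 + kappa**2 / 4, -P]
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-9].real
    return np.sort(real[real > 0])

# Number of coexisting steady intensities at low, intermediate, and high drive:
n_states = [len(steady_intensities(P)) for P in (0.3, 2.0, 6.0)]
# -> [1, 3, 1]: three coexisting intensities in the bistable window
```

A slow power sweep traces the branch it started on until it loses stability, so the middle and far branches at intermediate drive are only reached by a sudden (pulsed) perturbation, mirroring the pulsed-excitation protocol used in the experiment.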
We investigate energy propagation in a one-dimensional stub lattice in the presence of both disorder and nonlinearity. In the periodic case, the stub lattice hosts two dispersive bands separated by a flat band; however, we show that sufficiently strong disorder fills all intermediate band gaps. By mapping the two-dimensional parameter space of disorder and nonlinearity, we identify three distinct dynamical regimes (weak chaos, strong chaos, and self-trapping) through numerical simulations of initially localized wave packets. When disorder is strong enough to close the frequency gaps, the results closely resemble those obtained in the one-dimensional disordered discrete nonlinear Schrödinger equation and Klein-Gordon lattice model. In particular, subdiffusive spreading is observed in both the weak and strong chaos regimes, with the second moment $m_2$ of the norm distribution scaling as $m_2 \propto t^{0.33}$ and $m_2 \propto t^{0.5}$, respectively. The system's chaotic behavior follows a similar trend, with the finite-time maximum Lyapunov exponent $\Lambda$ decaying as $\Lambda \propto t^{-0.25}$ and $\Lambda \propto t^{-0.3}$. For moderate disorder strengths, i.e., near the point of gap closing, we find that the presence of small frequency gaps does not exert any noticeable influence on the spreading behavior. Our findings extend the characterization of nonlinear disordered lattices in both weak and strong chaos regimes to other network geometries, such as the stub lattice, which serves as a representative flat-band system.
Membraneless droplets or liquid condensates formed via liquid-liquid phase separation (LLPS) play a pivotal role in cell biology and hold potential for biomedical engineering. While membraneless droplets are often studied in the context of interactions between passive components, it is increasingly recognized that active matter inclusions, such as molecular motors and catalytic enzymes in cells, play important roles in the formation, transport and interaction of membraneless droplets. Here we developed a bacteria-polymer active phase separation system to study the nonequilibrium effect of active matter inclusions on the LLPS dynamics. We found that the presence of bacterial active matter accelerated the initial condensation of phase-separated liquid droplets but subsequently arrested the droplet coarsening process, resulting in a stable suspension of membraneless active droplets packed with motile bacterial cells. The arrested phase separation of the bacterial active droplet system presumably arises from anti-phase entrainment of interface fluctuations between neighboring droplets, which reduces the frequency of inter-droplet contact and suppresses droplet coarsening. In addition, the active stresses generated by cells within the droplets give rise to an array of nonequilibrium phenomena, such as dominant long-wavelength fluctuations and enhanced droplet transport with short-term persistent motion due to spontaneous symmetry breaking. Our study reveals a unique mechanism for arrested phase separation and long-term stability in membraneless droplet systems. The bacteria-polymer active phase separation system opens a new avenue for studying the dynamics of membraneless active droplets relevant to non-equilibrium LLPS in cells and in biomedical engineering applications.
We introduce a quantum key distribution (QKD) primitive based on charge teleportation: by Local Operations and Classical Communication (LOCC) on an entangled many-body ground state, Alice's one-bit choice steers the sign of a local charge shift at Bob, which directly encodes the key bit. Relative to energy teleportation schemes, the charge signal is bit-symmetric, measured in a single basis, and markedly more robust to realistic noise and model imperfections. We instantiate the protocol on transverse-field Ising models, star-coupled and one-dimensional chain, obtain closed-form results for two qubits, and for larger systems confirm performance via exact diagonalization, circuit-level simulations, and a proof-of-principle hardware run. We quantify resilience to classical bit flips and local quantum noise, identifying regimes where sign integrity, and hence key correctness, is preserved. These results position charge teleportation as a practical, low-rate QKD primitive compatible with near-term platforms.
Biological tissues exhibit distinct mechanical and rheological behaviors during morphogenesis. While much is known about tissue phase transitions controlled by structural order and cell mechanics, key questions regarding how tissue-scale nematic order emerges from cell-scale processes and influences tissue rheology remain open. Here, we develop a minimal vertex model that incorporates a coupling between active forces generated by cytoskeletal fibers and their alignment with local elastic stress in solid epithelial tissues. We show that this feedback loop induces an isotropic--nematic transition, leading to an ordered solid state that exhibits soft elasticity. Further increasing activity drives collective self-yielding, leading to tissue flows that are correlated across the entire system. This remarkable state, that we dub plastic nematic solid, is uniquely suited to facilitate active tissue remodeling during morphogenesis. It fundamentally differs from the well-studied fluid regime where macroscopic elastic stresses vanish and the velocity correlation length remains finite, controlled by activity. Altogether, our results reveal a rich spectrum of tissue states jointly governed by activity and passive cell deformability, with important implications for understanding tissue mechanics and morphogenesis.
The way living tissues respond to external mechanical forces is crucial in physiological processes like embryogenesis, homeostasis or tumor growth. Providing a complete description across length scales which relates the properties of individual cells to the rheological behavior of complex 3D-tissues remains an open challenge. The development of simplified biomimetic tissues capable of reproducing essential mechanical features of living tissues can help achieving this major goal. We report in this work the development of a microfluidic device that enables the sequential assembly of biomimetic prototissues and their rheological characterization. We synthesize prototissues by the controlled assembly of Giant Unilamellar Vesicles (GUVs), for which we can tailor the sizes and shapes as well as the level of GUV-GUV adhesion. We provide a multiscale rheological description, comprising analyses at the local scale of individual GUVs and at the global scale of the prototissue. The flow behavior of prototissues ranges from purely viscous to viscoelastic for increasing levels of adhesion. At low adhesion the flow response is dominated by viscous dissipation, which is mediated by GUV spatial reorganizations at the local scale, whereas at high adhesion the flow is viscoelastic, resulting from a combination of internal reorganizations and deformation of individual GUVs. Such multiscale characterization of model biomimetic tissues provides a robust framework to rationalize the role of cell adhesion in the flow dynamics of living tissues.
Multispecies ecosystems modelled by generalized Lotka-Volterra equations exhibit stationary population abundances, where a large number of species often coexist. Understanding the precise conditions under which this is at all feasible and what triggers species extinctions is a key, outstanding problem in theoretical ecology. Using standard methods of random matrix theory, I show that distributions of species abundances are Gaussian at equilibrium, in the weakly interacting regime. One consequence is that feasibility is generically broken before stability, for a large enough number of species. I further derive an analytic expression for the probability that $n=0,1,2,...$ species go extinct and conjecture that a single-parameter scaling law governs species extinctions. These results are corroborated by numerical simulations in a wide range of system parameters.
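The setting above is easy to reproduce numerically: with random interactions, the generalized Lotka-Volterra equilibrium solves the linear system $A\,x^* = r$, and in the weakly interacting regime the resulting abundances are approximately Gaussian around the non-interacting value. A sketch with hypothetical parameters:

```python
import numpy as np

rng = np.random.default_rng(7)

S, sigma = 400, 0.15                  # species count, weak interaction strength
A = np.eye(S) + sigma / np.sqrt(S) * rng.standard_normal((S, S))
np.fill_diagonal(A, 1.0)              # unit self-regulation
r = np.ones(S)                        # common intrinsic growth rate

# Generalized Lotka-Volterra equilibrium: dx/dt = x * (r - A x) = 0  =>  A x* = r
x = np.linalg.solve(A, r)

feasible = bool(np.all(x > 0))        # feasibility: no species at negative abundance
spread = x.std()                      # grows with sigma
skew = np.mean(((x - x.mean()) / x.std()) ** 3)  # ~0 for a Gaussian
```

Increasing `sigma` widens the abundance distribution until some components of $x^*$ cross zero, which is precisely how feasibility breaks before the equilibrium itself destabilizes.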
Mixed-state phase transitions have recently attracted growing attention as a new frontier in nonequilibrium quantum matter and quantum information. In this work, we introduce the measurement-dressed imaginary-time evolution (MDITE) as a novel framework to explore mixed-state quantum phases and decoherence-driven criticality. In this setup, alternating imaginary-time evolution and projective measurements generate a competition between coherence-restoring dynamics and decoherence-inducing events. While reminiscent of monitored unitary circuits, MDITE fundamentally differs in that the physics is encoded in decoherent mixed states rather than in quantum trajectories. We demonstrate that this interplay gives rise to a new class of mixed-state phase transitions, using numerical simulations of the one-dimensional transverse-field Ising model and the two-dimensional dimerized Heisenberg model. Furthermore, we provide a diagrammatic representation of the evolving state, which naturally enables efficient studies of MDITE with quantum Monte Carlo and other many-body numerical methods, thereby extending investigations of mixed-state phase transitions to large-scale and higher-dimensional Hamiltonians. Our results highlight MDITE as a powerful paradigm for investigating non-unitary dynamics and the fundamental role of decoherence in many-body quantum systems.
Exactly solvable models of topologically ordered phases with non-abelian anyons typically require complicated many-body interactions which do not naturally appear in nature. This motivates the "inverse problem" of quantum many-body physics: given microscopic systems with experimentally realistic two-body interactions, how to design a Hamiltonian that realizes a desired topological phase? Here we solve this problem on a platform motivated by Rydberg atoms, where elementary two-level systems couple via simple blockade interactions. Within this framework, we construct Hamiltonians that realize topological orders described by non-abelian quantum double models. We analytically prove the existence of topological order in the ground state, and present efficient schemes to prepare these states. We also introduce protocols for the controlled adiabatic braiding of anyonic excitations to probe their non-abelian statistics. Our construction is generic and applies to quantum doubles $\mathcal{D}(G)$ for arbitrary finite groups $G$. We illustrate braiding for the simplest non-abelian quantum double $\mathcal{D}(S_3)$.
Understanding the interactions between microstructure, strain, phase, and material behavior is crucial in many scientific fields. However, quantifying these correlations is challenging, as it requires the use of multiple instruments and techniques, often separated by space and time. The Dual Imaging And Diffraction (DIAD) beamline at Diamond is designed to address this challenge. DIAD allows its users to visualize internal structures, identify compositional/phase changes, and measure strain. DIAD provides two independent beams combined at one sample position, allowing quasi-simultaneous X-ray Computed Tomography and X-ray Powder Diffraction. A unique functionality of the DIAD configuration is the ability to perform image-guided diffraction, where the micron-sized diffraction beam is scanned over the complete area of the imaging field of view without moving the specimen. This moving beam diffraction geometry enables the study of fast-evolving and motion-susceptible processes and samples. Here, we discuss the novel moving beam diffraction geometry, presenting the latest findings on the reliability of both the geometry calibration and the data reduction routines used. Our measurements confirm that diffraction is most sensitive to the moving geometry for a detector positioned downstream, normal to the incident beam. The observed data confirm that the motion of the KB mirror coupled with a fixed aperture slit results in a rigid translation of the beam probe, without affecting the angle of the incident beam path to the sample. Our measurements demonstrate that a nearest-neighbour calibration can achieve the same accuracy as a self-calibrated geometry when the distance between the calibrated and probed sample regions is smaller than or equal to the beam spot size. We show that the absolute error of the moving beam diffraction geometry remains below 0.0001, which is the accuracy we observe for the beamline with stable beam operation.
Inorganic crystal materials have broad application potential owing to their excellent physical and chemical properties, with elastic properties (shear modulus, bulk modulus) crucial for predicting materials' electrical conductivity, thermal conductivity, and mechanical properties. Traditional experimental measurement suffers from high cost and low efficiency, while theoretical simulation and graph neural network-based machine learning methods, especially crystal graph convolutional neural networks (CGCNNs), have become effective alternatives, achieving remarkable results in predicting material elastic properties. This study trained two CGCNN models using shear modulus and bulk modulus data of 10987 materials from the Matbench v0.1 dataset; the models exhibit high accuracy (mean absolute error <13, coefficient of determination R-squared close to 1) and good generalization ability. Materials were screened to retain those with band gaps between 0.1 and 3.0 eV and to exclude compounds containing radioactive elements. The final predicted dataset comprises two parts: 54359 crystal structures from the Materials Project database and 26305 crystal structures discovered by Merchant et al. (2023 Nature 624 80). Ultimately, this study completed the prediction of shear modulus and bulk modulus for 80664 inorganic crystals. This work enriches existing material elastic data resources and provides robust support for material design, with all data openly available at this https URL.
The precise identification of neurotransmitters is essential for comprehending cerebral function, detecting neurological conditions, and formulating successful therapeutic approaches. The present work investigates the electrochemical detection of serotonin (SR) with the hybrid electrocatalyst $Cu_2S/H{\beta}cd-rGO$. $Cu_2S$, with significant features such as improved catalytic activity and enhanced charge transfer when combined with $H{\beta}cd-rGO$, enhances the sensing performance. The integration of $Cu_2S$ with $H{\beta}cd-rGO$, regulated by van der Waals forces and electrostatic interactions, makes it a stable catalyst without disrupting the composite structure. In addition, the aggregation of $Cu_2S/H{\beta}cd$ with the layered sheets of rGO is greatly reduced, resulting in improved conductivity. These features together yield an improved oxidation response current when the composite is fabricated over the glassy carbon electrode (GCE). SR showed a sensitive response over the broad linear ranges of 0.019 to 0.299 $\mu$M and 4.28 to 403.14 $\mu$M, resulting in a low limit of detection (LOD) of 1.2 nM (0.0012 $\mu$M) and a sensitivity of about 15.9 $\mu$A ${\mu}M^{-1}$ $cm^{-2}$. The sensor demonstrated excellent selectivity against common interferents, including aminophenol, dopamine, epinephrine, hydroquinone, melatonin, and chlorine. Real-sample studies in biological samples show good recovery values, demonstrating the effectiveness of the as-fabricated sensor. Thus, the cost-efficient and straightforward $Cu_2S/H{\beta}cd-rGO$ composite will be an outstanding electrocatalyst for detecting SR.
Fluorescence Molecular Tomography (FMT) is a promising technique for non-invasive 3D visualization of fluorescent probes, but its reconstruction remains challenging due to the inherent ill-posedness and reliance on inaccurate or often-unknown tissue optical properties. While deep learning methods have shown promise, their supervised nature limits generalization beyond training data. To address these problems, we propose $\mu$NeuFMT, a self-supervised FMT reconstruction framework that integrates implicit neural scene representation with explicit physical modeling of photon propagation. Its key innovation lies in jointly optimizing both the fluorescence distribution and the optical properties ($\mu$) during reconstruction, eliminating the need for precise prior knowledge of tissue optics or pre-conditioned training data. We demonstrate that $\mu$NeuFMT robustly recovers accurate fluorophore distributions and optical coefficients even with severely erroneous initial values (0.5$\times$ to 2$\times$ of ground truth). Extensive numerical, phantom, and in vivo validations show that $\mu$NeuFMT outperforms conventional and supervised deep learning approaches across diverse heterogeneous scenarios. Our work establishes a new paradigm for robust and accurate FMT reconstruction, paving the way for more reliable molecular imaging in complex clinical scenarios, such as fluorescence-guided surgery.
Freeze-thaw cycles of water are regularly observed in nature and are important in industry and science. Objects present in the medium interact with either an advancing solidification front during freezing or a retracting solidification front, i.e., an advancing melting front, during thawing. It is well known that objects show complex behaviours when interacting with an advancing solidification front, but the extent to which they are displaced during the retraction of the solid-liquid interface is less well understood. To study potential hysteresis effects during freeze-thaw cycles, we exploit experimental model systems of oil-in-water emulsions and polystyrene (PS) particle suspensions, in which a water-ice solidification front advances and retracts over an individual immiscible (and deformable) oil droplet or over a solid PS particle. We record several interesting hysteresis effects, resulting in non-zero relative displacements of the objects between freezing and thawing. PS particles tend to migrate further and further away from their initial position, whereas oil droplets tend to return to their starting positions during thawing. We rationalize our experimental findings by comparing them to our prior theoretical model of Meijer, Bertin & Lohse, Phys. Rev. Fluids (2025), finding qualitatively good agreement. Additionally, we examine the reversibility of the droplet's deformation and re-shaping throughout one freeze-thaw cycle, which turns out to be remarkably robust.
Reduced-order models (ROMs) can efficiently simulate high-dimensional physical systems, but lack robust uncertainty quantification methods. Existing approaches are frequently architecture- or training-specific, which limits flexibility and generalization. We introduce a post hoc, model-agnostic framework for predictive uncertainty quantification in latent space ROMs that requires no modification to the underlying architecture or training procedure. Using conformal prediction, our approach estimates statistical prediction intervals for multiple components of the ROM pipeline: latent dynamics, reconstruction, and end-to-end predictions. We demonstrate the method on a latent space dynamical model for cloud microphysics, where it accurately predicts the evolution of droplet-size distributions and quantifies uncertainty across the ROM pipeline.
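The split-conformal step described in this abstract can be sketched in a few lines. The ROM surrogate, synthetic data, and miscoverage level below are illustrative assumptions, not the authors' cloud-microphysics pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for any black-box ROM surrogate (the method is
# model-agnostic, so only predictions are needed, not internals).
def rom_predict(x):
    return 2.0 * x

# Synthetic calibration data: "truth" = surrogate output + observation noise.
x_cal = rng.uniform(0.0, 1.0, 500)
y_cal = 2.0 * x_cal + rng.normal(0.0, 0.1, 500)

# Nonconformity scores: absolute residuals on the held-out calibration set.
scores = np.abs(y_cal - rom_predict(x_cal))

alpha = 0.1  # target 90% marginal coverage
n = len(scores)
# Finite-sample-corrected empirical quantile of the scores.
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Post hoc prediction interval for a new input: [f(x) - q, f(x) + q].
x_new = 0.3
lo, hi = rom_predict(x_new) - q, rom_predict(x_new) + q
print(lo < 2.0 * x_new < hi)  # → True
```

Because only residuals on held-out data are used, the same recipe applies to any stage of a ROM pipeline (latent dynamics, reconstruction, or end-to-end) without touching architecture or training.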
Beyond the adiabatic regime, our understanding of quantum dynamics in coupled systems remains limited, and the choice of representation continues to obscure physical interpretation and simulation accuracy. Here we propose a natural and efficient basis for electron-nuclear dynamics by drawing on the concepts of pointer and preferred states from decoherence theory, adapted to systems where electrons and nuclei interact strongly. Within this framework, we show that (1) the independent dynamics exploited by mixed quantum-classical (MQC) methods is best understood as a manifestation of entanglement viewed in a preferred basis, rather than a consequence of decoherence, and (2) the adiabatic Born-Oppenheimer states satisfy the conditions of an approximate preferred basis. This perspective reconciles widely used approximations with a more fundamental structure of the theory and provides a systematic route to more reliable MQC strategies. In effect, we revisit MQC methods through the lens of preferred states, clarifying when they succeed and how they can be improved.
We experimentally investigate the superfluid properties of a two-dimensional, weakly interacting Bose-Einstein condensate in the zero-temperature regime, when it is subjected to a triangular optical lattice potential. We implement an original method, which involves solving the hydrodynamic continuity equation to extract the superfluid fraction tensor from the measured in situ density distribution of the fluid at rest. In parallel, we apply an independent dynamical approach that combines compressibility and sound velocity measurements to determine the superfluid fraction. Both methods yield consistent results in good agreement with simulations of the Gross-Pitaevskii equation as well as with the Leggett bounds determined from the measured density profiles.
Solid-fuel ramjets (SFRJs) offer a compact, energy-dense propulsion option for long-range, high-speed flight but pose significant challenges for thrust regulation due to strong nonlinearities, limited actuation authority, and complex multi-physics coupling between fuel regression, combustion, and compressible flow. This paper presents a computational and control framework that combines a computational fluid dynamics (CFD) model of an SFRJ with a learning-based adaptive control approach. A CFD model incorporating heat addition was developed to characterize thrust response, establish the operational envelope, and identify the onset of inlet unstart. An adaptive proportional-integral controller, updated online using the retrospective cost adaptive control (RCAC) algorithm, was then applied to regulate thrust. Closed-loop simulations demonstrate that the RCAC-based controller achieves accurate thrust regulation under both static and dynamic operating conditions, while remaining robust to variations in commands, hyperparameters, and inlet states. The results highlight the suitability of RCAC for SFRJ control, where accurate reduced-order models are challenging to obtain, and underscore the potential of learning-based adaptive control to enable robust and reliable operation of SFRJs in future air-breathing propulsion applications.
Classical phase-field theories of brittle fracture capture toughness-controlled crack propagation but do not account for the material's strength surface, which governs fracture nucleation in the absence of cracks. The phase-field formulation of Kumar et al. (2020) proposed a blueprint for incorporating the strength surface while preserving toughness-controlled propagation by introducing a nucleation driving force and presented results for the Drucker--Prager surface. Following this blueprint, Chockalingam (2025) recently derived a general driving-force expression that incorporates arbitrary strength surfaces. The present work implements this driving force within a finite-element framework and incorporates representative strength surfaces that span diverse mathematical and physical characteristics -- the Mohr--Coulomb, 3D Hoek--Brown, and Mogi--Coulomb surfaces. Through simulations of canonical fracture problems, the formulation is comprehensively validated across fracture regimes, capturing (i) nucleation under uniform stress, (ii) crack growth from large pre-existing flaws, and (iii) fracture governed jointly by strength and toughness. While the strength surfaces examined here already encompass a broad range of brittle materials, the results demonstrate the generality and robustness of the proposed driving-force construction for materials governed by arbitrary strength surfaces.
We investigate the threshold of collapse of a massless complex scalar field in axisymmetric spacetimes under the ansatz of Choptuik et al. (2004), in which a symmetry depending on the azimuthal parameter $m$ is imposed on the scalar field. This allows for both non-vanishing twist and angular momentum. We extend earlier work to include higher angular modes. Using the pseudospectral code bamps with a new adapted symmetry reduction method, which we call $m$-cartoon, and a generalized twist-compatible apparent horizon finder, we evolve near-critical initial data to the verge of black hole formation for the lowest nontrivial modes, $m=1$ and $m=2$. For $m=1$ we recover discrete self-similarity (DSS) with echoing period $\Delta\simeq0.42$ and power-law scaling with exponent $\gamma\simeq0.11$, consistent with earlier work. For $m=2$ we find that universality is maintained within this nonzero fixed-$m$ symmetry class but with smaller echoing period and critical exponent, $\Delta\simeq0.09$ and $\gamma\simeq0.035$, establishing an explicit dependence of the critical solution on the angular mode. Analysis of the relation between the angular momentum and the mass of apparent horizons at the instant of formation, $J_{\mathrm{AH}}{-}M_{\mathrm{AH}}$, shows that the effect of angular momentum is minimal at the threshold, with $\chi_{\mathrm{AH}}=J_{\mathrm{AH}}/M_{\mathrm{AH}}^2\to0$, and therefore excludes extremal black holes for the families under consideration. Our results demonstrate that while universality and DSS hold within each $m$-sector, the universal critical values vary with $m$, and neither extremality nor bifurcation occurs in the complex scalar field model within the families considered here.
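The power-law scaling used here to extract the critical exponent can be illustrated with a minimal log-log fit. The family parameters and horizon masses below are synthetic stand-ins (with the exponent set to the quoted $m=1$ value), not bamps output:

```python
import numpy as np

# Synthetic stand-in for near-critical data: horizon masses obeying
# M = C (p - p*)^gamma, with gamma set to the m = 1 value quoted above.
p_star = 1.0
gamma_true = 0.11

p = p_star + np.logspace(-6, -2, 20)      # supercritical family parameters
M = 3.0 * (p - p_star) ** gamma_true      # apparent-horizon masses

# ln M = ln C + gamma ln(p - p*): the slope of a log-log fit gives gamma.
slope, intercept = np.polyfit(np.log(p - p_star), np.log(M), 1)
print(round(slope, 3))  # → 0.11
```

In practice the fit residuals also carry the DSS signal: the periodic wiggle superposed on the power law has period $\Delta/(2\gamma)$ in $\ln(p-p^*)$.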
Polarization-resolved near-infrared imaging adds a useful optical contrast mechanism to eye tracking by measuring the polarization state of light reflected by ocular tissues in addition to its intensity. In this paper we demonstrate how this contrast can be used to enable eye tracking. Specifically, we show that a polarization-enabled eye tracking (PET) system composed of a polarization-filter-array camera paired with a linearly polarized near-infrared illuminator can reveal trackable features across the sclera and gaze-informative patterns on the cornea, largely absent in intensity-only images. Across a cohort of 346 participants, convolutional neural network based machine learning models trained on data from PET reduced the median 95th-percentile absolute gaze error by 10--16\% relative to capacity-matched intensity baselines under nominal conditions and in the presence of eyelid occlusions, eye-relief changes, and pupil-size variation. These results link light--tissue polarization effects to practical gains in human--computer interaction and position PET as a simple, robust sensing modality for future wearable devices.
Extreme precipitation nowcasting demands high spatiotemporal fidelity and extended lead times, yet existing approaches remain limited. Numerical Weather Prediction (NWP) and its deep-learning emulations are too slow and coarse for rapidly evolving convection, while extrapolation and purely data-driven models suffer from error accumulation and excessive smoothing. Hybrid 2D radar-based methods discard crucial vertical information, preventing accurate reconstruction of height-dependent dynamics. We introduce a gray-box, fully three-dimensional nowcasting framework that directly processes volumetric radar reflectivity and couples physically constrained neural operators with data-driven learning. The model learns vertically varying 3D advection fields under a conservative advection operator, parameterizes spatially varying diffusion, and introduces a Brownian-motion--inspired stochastic term to represent unresolved motions. A residual branch captures small-scale convective initiation and microphysical variability, while a diffusion-based stochastic module estimates uncertainty. The framework achieves more accurate forecasts at lead times up to three hours across precipitation regimes and ranked first in 57\% of cases in a blind evaluation by 160 meteorologists. By restoring full 3D dynamics with physical consistency, it offers a scalable and robust pathway for skillful and reliable nowcasting of extreme precipitation.
We present KGB-evolution, a relativistic $N$-body simulation code that extends the $k$-evolution code by incorporating an effective field theory parameterization of kinetic gravity braiding, while also including the $k$-essence model as a limiting case. As a first step, we implement the linearized dark energy stress-energy tensor and scalar field equations, providing the groundwork for a future full Horndeski theory extension. We validate KGB-evolution by comparing its power spectra against linear predictions from hi$\_$class, finding excellent agreement on large scales at low redshifts and over all scales at high redshifts. We demonstrate that nonlinear growth of matter and metric perturbations on small scales drives the linearized dark energy field into a nonlinear clustering regime, which in turn feeds back on the growth of cosmic structure. In contrast to the $k$-essence limit, a nonzero braiding considerably amplifies this backreaction, producing a significantly stronger alteration of structure formation in the kinetic gravity braiding model.
It would seem that the present lean economic times impose a very precise focus for science in general and physics in particular: research, preferably of an applied type. However, in doing so, two basic pillars of a healthy future for science are being undermined: fundamental research and public engagement. The first is what makes applications possible in the first place, often along a path from inception to implementation that is as long and indirect as it is poorly advertised. The second pillar, public engagement, is mostly regarded as a commodity: if there is a good level of funding, scientists may consider spending money on public relations; otherwise it is the first thing they cut, because it is seen as the least necessary. On the contrary, public engagement in science is very much needed, at the very least because the public is either an enemy or an ally, as testified respectively by climate-change denial and by the 2009 Shuttle mission that the public demanded in order to service the Hubble Space Telescope one last time. In this article I make the case for why popularizing science should be a funding priority, rather than a commodity, for both nation-wide organizations and local research institutions. I take examples from my personal background in the hope that they will serve to illuminate a more general picture and to frame the discussion around concrete issues and practical avenues to be pursued immediately.
We present BIGSTICK, a flexible, open-source configuration-interaction shell-model code for the many-fermion problem. Written mostly in Fortran 90 with some later extensions, BIGSTICK utilizes a factorized on-the-fly algorithm for computing many-body matrix elements, has both MPI (distributed-memory) and OpenMP (shared-memory) parallelization, and can run on platforms ranging from laptops to the largest parallel supercomputers. It uses a flexible yet efficient many-body truncation scheme and reads input files in multiple formats, allowing one to tackle both phenomenological (major valence shell space) and ab initio (the so-called no-core shell model) calculations. BIGSTICK can generate energy spectra, static and transition one-body densities, and expectation values of scalar operators. Using the built-in Lanczos algorithm, one can compute transition probability distributions and decompose wave functions into components defined by group theory. This manual provides a general guide to compiling and running BIGSTICK, which comes with numerous sample input files, as well as some of the basic theory underlying the code. Updated November 2025 to version 8.0.0.
Imaging systems have traditionally been designed to mimic the human eye and produce visually interpretable measurements. Modern imaging systems, however, process raw measurements computationally before or instead of human viewing. As a result, the information content of raw measurements matters more than their visual interpretability. Despite the importance of measurement information content, current approaches for evaluating imaging system performance do not quantify it: they instead either use alternative metrics that assess specific aspects of measurement quality or assess measurements indirectly with performance on secondary tasks. We developed the theoretical foundations and a practical method to directly quantify mutual information between noisy measurements and unknown objects. By fitting probabilistic models to measurements and their noise characteristics, our method estimates information by upper bounding its true value. By applying gradient-based optimization to these estimates, we also developed a technique for designing imaging systems called Information-Driven Encoder Analysis Learning (IDEAL). Our information estimates accurately captured system performance differences across four imaging domains (color photography, radio astronomy, lensless imaging, and microscopy). Systems designed with IDEAL matched the performance of those designed with end-to-end optimization, the prevailing approach that jointly optimizes hardware and image processing algorithms. These results establish mutual information as a universal performance metric for imaging systems that enables both computationally efficient design optimization and evaluation in real-world conditions. A video summarizing this work can be found at: this https URL
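The estimation idea described in this abstract, fitting a probabilistic model to measurements and subtracting the known noise entropy to upper bound the mutual information, can be sketched for the simplest additive-Gaussian case. All parameters below are assumed for illustration; this is not the IDEAL implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Additive Gaussian channel: object values x, measurements y = x + noise.
signal_var, noise_var = 4.0, 1.0
x = rng.normal(0.0, np.sqrt(signal_var), 200_000)
y = x + rng.normal(0.0, np.sqrt(noise_var), x.size)

# The Gaussian maximizes entropy for a given variance, so fitting a Gaussian
# to the measurements upper bounds H(Y); subtracting the known noise entropy
# H(Y|X) then upper bounds the mutual information I(X; Y) = H(Y) - H(Y|X).
h_y_upper = 0.5 * np.log(2.0 * np.pi * np.e * y.var())
h_noise = 0.5 * np.log(2.0 * np.pi * np.e * noise_var)
mi_upper = h_y_upper - h_noise  # in nats

# For this channel the bound is tight: I = 0.5 * log(1 + SNR).
exact = 0.5 * np.log(1.0 + signal_var / noise_var)
print(abs(mi_upper - exact) < 0.01)  # → True
```

With richer fitted models (e.g. mixtures or autoregressive densities) the same entropy-bound argument applies, and the bound tightens as the fit improves, which is what makes the estimate differentiable and usable inside a gradient-based design loop.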
A recent report by Barik et al. [Nature Chemistry 14, 1098, 2022] on ambient-light-induced intermolecular Coulombic decay (ICD) in unbound pyridine monomers proposes the formation of a pyridine cation via ICD following a three-body association/collision, wherein all three pyridine molecules are in the excited state. The collision-free conditions of the free-jet expansion, the abysmally low probability of finding three independently excited pyridine molecules in the vicinity of each other, and the extremely short excited-state lifetimes negate the possibility of ICD in unbound pyridine monomers. Based on translational-energy measurements of the pyridine cation, an alternate mechanism is proposed, wherein the pyridine monomer cation originates from the dissociative ionization of pyridine dimers following a three-photon absorption process.
Accurate numerical modeling of surface tension has been a challenging aspect of multiphase flow simulations. The integral formulation for modeling surface tension forces is known to be consistent and conservative, and to be a natural choice for the simulation of flows driven by surface tension gradients along the interface. This formulation was introduced by Popinet and Zaleski [1] for a front-tracking method and was later extended to level set methods by Al-Saud et al. [2]. In this work, we extend the integral formulation to a volume of fluid (VOF) method for capturing the interface. Specifically, we propose three different schemes distinguished by the way they calculate the geometric properties of the interface, namely curvature, tangent vector, and surface fraction, from the VOF representation. We propose a coupled level set volume of fluid (CLSVOF) method, in which we use a signed-distance function coupled with VOF; a height function (HF) method, in which we use the height functions calculated from VOF; and a height function to distance (HF2D) method, in which we use a signed-distance function calculated from height functions. For validation, these methods are rigorously tested on several problems with constant as well as varying surface tension. We find that, from an accuracy standpoint, CLSVOF has the least numerical oscillations, followed by HF2D and then HF. However, from a computational-speed standpoint, the HF method is the fastest, followed by HF2D and then CLSVOF. Therefore, the HF2D method is a good compromise between speed and accuracy for obtaining fast and correct results. Keywords: Multiphase flows; Surface tension modeling; Marangoni flows
Caribou is a versatile data acquisition system used in multiple collaborative frameworks (CERN EP R&D, DRD3, AIDAinnova, Tangerine) for laboratory and test-beam qualification of novel silicon pixel detector prototypes. The system is built around a common hardware, firmware and software stack shared across different projects, thereby drastically reducing the development effort and cost. It consists of a custom Control and Readout (CaR) board and a commercial Xilinx Zynq System-on-Chip (SoC) platform. The SoC platform runs a full Yocto distribution integrating the custom software framework (Peary) and a custom FPGA firmware built within a common firmware infrastructure (Boreal). The CaR board provides a hardware environment featuring various services such as powering, slow-control, and high-speed data links for the target detector prototype. Boreal and Peary, in turn, offer firmware and software architectures that enable seamless integration of control and readout for new devices. While the first version of the system used a SoC platform based on the ZC706 evaluation board, migration to a Zynq UltraScale+ architecture is progressing towards the support of the ZCU102 board and the ultimate objective of integrating the SoC functionality directly into the CaR board, eliminating the need for separate evaluation boards. This paper describes the Caribou system, focusing on the latest project developments and showcasing progress and future plans across its hardware, firmware, and software components.
This review consolidates experimental, theoretical, and simulation work examining the behavior of high-field devices and the fundamental process of vacuum arc initiation, commonly referred to as breakdown. Detailed experimental observations and results relating to a wide range of aspects of high-field devices, including conditioning, field and temperature dependence of breakdown rate, and the ability to sustain high electric fields as a function of device geometry and materials, are presented. The different observations are then addressed theoretically and with simulation, capturing the sequence of processes that lead to vacuum breakdown and explaining the major observed experimental dependencies. The core of the work described in this review was carried out by a broad multi-disciplinary collaboration in a program, spanning more than a decade, to develop high-gradient, 100 MV/m-range accelerating structures for the CLIC project, a possible future linear-collider high-energy physics facility. Connections are made to the broader linear-collider, high-field, and breakdown communities.
Understanding how internal community structure shapes the course of epidemics remains a fundamental challenge in modeling real-world populations. Standard metapopulation models often assume uniform mixing within communities, overlooking how internal heterogeneity affects global outcomes. Here, we develop a general framework for epidemic spreading in hierarchically structured metapopulations, where individuals interact locally within dense communities and move across a broader network. We show that transmission dynamics are governed by the mesoscale organization of these communities: highly connected groups accelerate and amplify outbreaks, while less connected ones dampen spread. Through a combination of mean-field theory, spectral analysis, and stability methods, we reveal a direct link between internal connectivity and the emergence of uneven, spatially structured epidemic patterns. We further validate these predictions using real-world data, where social contact networks capture the local scale of transmission while spatial transport networks govern global connectivity, confirming the robustness of our framework across scales. These results demonstrate how community structure fundamentally governs the shape of epidemics in complex, networked populations, offering new insights into vulnerability, containment, and epidemic control.
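The mechanism described in this abstract, communities with different internal connectivity coupled through a broader network, can be sketched with a discretized SIR metapopulation model. All rates, sizes, and the mobility matrix below are assumed for illustration only:

```python
import numpy as np

# Two communities with different internal contact rates, coupled by a
# row-stochastic mobility matrix C (fraction of contacts made in each patch).
beta = np.array([0.30, 0.15])   # internal transmission rates (dense vs sparse)
gamma = 0.10                    # recovery rate
C = np.array([[0.9, 0.1],
              [0.1, 0.9]])

N = np.array([1000.0, 1000.0])
S = N - 1.0
I = np.array([1.0, 1.0])
R = np.zeros(2)

dt = 0.1
for _ in range(int(400 / dt)):
    # Force of infection in patch i: sum_j C[i, j] * beta_j * I_j / N_j.
    lam = C @ (beta * I / N)
    new_inf = lam * S * dt
    new_rec = gamma * I * dt
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec

attack = R / N  # final epidemic size per community
print(attack[0] > attack[1])  # → True: the denser community is hit harder
```

Even this toy version reproduces the qualitative claim: the highly connected community amplifies the outbreak and seeds the sparser one, producing an uneven final attack rate across patches.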
Radiation therapy is one of the most common cancer treatments, and dose optimization and targeting of radiation are crucial since both cancerous and healthy cells are affected. Different mathematical and computational approaches have been developed for this task. The most common mathematical approach, dating back to the late 1970s, is the linear-quadratic (LQ) model for the survival probability at a given radiation dose. Most simulation models consider tissue as a continuum rather than as consisting of discrete cells. While reasonable for large-scale models (e.g., human organs), continuum approaches necessarily neglect cellular-scale effects, which may play a role in the growth, morphology, and metastasis of tumors. Here, we propose a method for modeling the effect of radiation on cells based on the mechanobiological \textsc{CellSim3D} simulation model for the growth, division, and proliferation of cells. To model the effect of a radiation beam, we incorporate a Monte Carlo procedure into \textsc{CellSim3D} with the LQ model by introducing a survival probability at each beam delivery. Effective removal of dead cells by phagocytosis was also implemented. Systems with two types of cells were simulated: stiff, slowly proliferating healthy cells and soft, rapidly proliferating cancer cells. For model verification, the results were compared to prostate cancer (PC-3 cell line) data for different doses, and we found good agreement. In addition, we simulated proliferating systems and analyzed the probability density of the contact forces. We determined the state of the system with respect to the jamming transition and found very good agreement with experiments.
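The Monte Carlo survival step described in this abstract follows directly from the LQ model $S(D)=\exp(-(\alpha D+\beta D^2))$. The $\alpha$, $\beta$, and dose values below are illustrative assumptions, not the PC-3 fit used in the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Linear-quadratic survival probability S(D) = exp(-(alpha*D + beta*D^2)).
alpha, beta = 0.15, 0.05   # Gy^-1 and Gy^-2, assumed for illustration
D = 2.0                    # dose per beam delivery, in Gy

def survival(dose):
    return np.exp(-(alpha * dose + beta * dose ** 2))

n_cells = 100_000
alive = np.ones(n_cells, dtype=bool)

# One beam delivery: each cell independently survives with probability S(D);
# cells failing the draw are marked dead (and would later be phagocytosed).
alive &= rng.random(n_cells) < survival(D)

frac = alive.mean()
print(abs(frac - survival(D)) < 0.01)  # → True: Monte Carlo matches S(D)
```

Repeating the draw once per delivery gives fractionated schedules for free, since survival probabilities multiply across independent fractions.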
We report the potential energy curve, the diagonal Born-Oppenheimer, non-adiabatic mass, relativistic, and leading-order QED corrections for the ground electronic state of the helium dimer cation; the higher-order QED and finite-nuclear-size effects are also estimated. The computations are carried out with improved error control and over a much broader configuration range compared to earlier work [D. Ferenc, V. I. Korobov, and E. Mátyus, Phys. Rev. Lett. 125, 213001 (2020)]. As a result, all rovibrational bound states are reported with an estimated accuracy of 0.005 cm$^{-1}$.
Surface bubbles in environmental and engineering configurations, such as at the ocean-atmosphere interface, in sparkling wine, or during volcanic eruptions, typically live on contaminated surfaces. A particularly common type of contamination is surface-active agents (surfactants). We consider the effect of an insoluble surfactant on jet drop formation by bubble bursting. Contrary to the observed trend that surfactants decrease the ejected drop radius for bubbles with precursor capillary waves, we find that surfactants increase the ejected drop radius for bubbles without precursor capillary waves, a regime characteristic of small bubbles. Consequently, the results have fundamental implications for understanding aerosol distributions in contaminated conditions. We find that the trend reversal is due to the effect of Marangoni stresses on the focusing of the collapsing cavity. We demonstrate quantitative agreement in jet velocity and drop size between laboratory experiments and numerical simulations by using the measured dependence of surface tension on surfactant concentration as the equation of state for the simulations. *Jun Eshima and Tristan Aurégan contributed equally to this work.
The $GW$ approximation has become a method of choice for predicting quasiparticle properties in solids and large molecular systems, owing to its favorable accuracy-cost balance. However, its accuracy is the result of a fortuitous cancellation of vertex corrections in the polarizability and self-energy. Hence, when attempting to go beyond $GW$ through inclusion of vertex corrections, the accuracy can deteriorate if this delicate balance is disrupted. In this work, we explore an alternative route that theoretically goes beyond $GW$: the parquet formalism. Unlike approaches that focus on a single correlation channel, such as the electron-hole channel in $GW$ or the particle-particle channel in $T$-matrix theory, parquet theory treats all two-body scattering channels on an equal footing. We present the formal structure of the parquet equations, which couple the one-body Green's function, the self-energy, and the two-body vertex. We discuss the approximations necessary to solve this set of equations, the advantages and limitations of this approach, outline its implementation for molecular systems, and assess its accuracy for principal ionization potentials of small molecular systems.
We have developed a silicon-nitride-based photonic integrated circuit (PIC) that is responsible for the cooling, pumping, and imaging of cold rubidium-87 atoms. The photonic integrated circuit consists of two chips placed next to each other and has a total area of 2x2~cm$^2$. This greatly minimizes the area needed while still providing all the optical control functions to create, control, and measure a magneto-optical trap (MOT). The piezoelectric material lead zirconate titanate (PZT) on the PIC is employed for phase shifting in a Mach-Zehnder-type configuration, where extinction ratios up to 50 dB and switching speeds of 1 MHz are achieved. For the first time, two- and three-dimensional rubidium-87 MOTs are realized using an active PIC. For the three-dimensional MOT, we measure $7\cdot 10^7$ atoms with a temperature of 270~$\mu$K.
Based on a rigorous thermodynamic framework, this work develops a two-fluid magnetohydrodynamic model grounded in the Helmholtz free energy formalism. The model maintains full thermodynamic consistency by simultaneously satisfying energy conservation and entropy production laws in two-fluid systems. By analyzing the convex-concave structure of the Helmholtz free energy density, we systematically derive key thermodynamic variables (chemical potential, entropy density, and internal energy) in a self-consistent manner. Building on this foundation, we construct a temporally discrete numerical scheme that inherits the thermodynamic consistency of the continuous model. The scheme is proven to adhere rigorously to both the first and second laws of thermodynamics. For the implemented two-dimensional degenerate system, we establish comprehensive a priori error estimates in space and time. Numerical simulations validate the model's effectiveness in capturing essential plasma phenomena, demonstrating its applicability to complex physical scenarios.
Experimental studies of ultra-relativistic heavy ion collisions at the Large Hadron Collider (LHC) depend crucially on Zero Degree Calorimeters (ZDCs) that measure neutrons produced at near-beam rapidity in nucleus-nucleus collisions. In hadronic nuclear collisions these neutrons are mainly spectator neutrons, those that do not scatter from opposing nucleons during the collision. As a result, the ZDCs provide a vital probe of heavy ion collision geometry. The ZDCs are also essential in the study of ultra-peripheral collisions that are initiated by photons associated with the electric fields of one or both nuclei. Coherent photon emission typically leaves the photon emitter intact, making the observation of no ZDC signal, on one or both sides, a tag of such processes. The ATLAS ZDCs, built prior to Run 1, were substantially upgraded for LHC Run 3. The primary upgrades included replacement of the quartz Cherenkov radiator with $\text{H}_2$-doped fused silica rods; installation of fast air-core signal cables between the ZDC and the ATLAS USA15 cavern; a new LED-based calibration system; and new electronics implemented for readout and fully digital triggering. The ZDCs were also augmented with new "Reaction Plane Detectors" (RPDs) designed to measure the transverse centroid of multi-neutron showers to allow event-by-event reconstruction of the directed-flow plane in nuclear collisions. The Run~3 ZDC detectors, including the RPDs, are described in detail with emphasis on aspects that are new for Run~3.
Plasmons facilitate a strong confinement and enhancement of near-field light, offering exciting opportunities to enhance nonlinear optical responses at the nanoscale. However, despite significant advancements, the electrically tunable range of the nonlinear optical responses at nanometer-scale plasmonic structures remains limited to a few percent per volt. Here, we transcend the limitation of the nanometer regime by extending the concept of electrophotonics to an angstrom-scale platform, enabling high-performance modulation of near-field nonlinear optical responses inaccessible in prior architectures. We demonstrate a ~2000% enhancement in second-harmonic generation (SHG) within 1 V of applied voltage by utilizing an angstrom-scale plasmonic gap between a metallic tip and a flat metal substrate in a scanning tunneling microscope. Extending this near-field SHG scheme to sum-frequency generation, which is accompanied by large frequency upconversion, we also find that such giant electrical modulation of plasmon-enhanced nonlinear optical phenomena is effective over a broad wavelength range from the mid-infrared to the visible. Our results and concepts lay the foundation for developing near-field-based angstrom-scale nonlinear electrophotonics with significant modulation depth at low driving voltage.
To explore the physicochemical hydrodynamics of phase-separating ternary liquids (Ouzo-type), a binary oil-ethanol mixture is introduced into a co-flowing stream of water. Oil droplets nucleate at the interface between the two liquids, leading to a larger oil droplet interacting with the ethanol-rich jet. Although buoyancy forces and hydrodynamic drag forces push the droplet in the downstream direction, we observe an upstream motion. Using computational fluid dynamics simulations of a simplified model system, we identify the nucleation zone for oil droplets and find Marangoni forces to be responsible for the upstream motion of the droplet. A semi-analytical model allows us to identify the key parameters governing this effect. A general conclusion is that Marangoni stresses can reverse the motion of droplets through channels where the surrounding liquid is a multi-component mixture. The insights from this work are not only relevant for channel flow, but more generally for the physicochemical hydrodynamics of multiphase, multi-component systems.
In 2024, after thirty years of research on this subject, I published a book entitled \textit{Poincaré, Einstein and the discovery of special relativity. An end to the controversy} \cite{Ginoux2024}. In September 2025, Galina Weinstein published a review of this book entitled \textit{Convergences and Divergences: Einstein Poincaré and Special Relativity} (arXiv:2509.09361), in which she harshly criticized my work in an unfair and error-filled manner. Then, she published a second comment (arXiv:2510.03793) in which she added insults about me on top of all her mistakes, falsehoods, and misleading criticisms. I have therefore decided to reply to her comments in an academic way (as she should normally have done), in order to demonstrate that her allegedly ``novel way'' of reconstructing the history of the theory of special relativity is purely based on her own interpretation of the facts and not on the facts themselves. To this aim, I will follow the structure of each of Weinstein's comments (arXiv:2509.09361 \& 2510.03793) and highlight, section by section, all the erroneous claims she has reported and repeated.
Accurate wellbore trajectory prediction is a paramount challenge in subsurface engineering, governed by complex interactions between the drilling assembly and heterogeneous geological formations. This research establishes a comprehensive, mathematically rigorous framework for trajectory prediction that moves beyond empirical modeling to a geomechanically informed, data-driven surrogate model. The study leverages Log ASCII Standard (LAS) and wellbore deviation (DEV) data from 14 wells in the Gulfaks oil field, treating petrophysical logs not merely as input features, but as proxies for the mechanical properties of the rock that fundamentally govern drilling dynamics. A key contribution of this work is the formal derivation of wellbore kinematic models, including the Average Angle method and Dogleg Severity, from the first principles of vector calculus and differential geometry, contextualizing them as robust numerical integration schemes. The core of the predictive model is a Gated Recurrent Unit (GRU) network, for which we provide a complete, step-by-step derivation of the forward propagation dynamics and the Backpropagation Through Time (BPTT) training algorithm. This detailed theoretical exposition, often omitted in applied studies, clarifies the mechanisms by which the network learns temporal dependencies. The methodology encompasses a theoretically justified data preprocessing pipeline, including feature normalization, uniform depth resampling, and sequence generation. Trajectory post-processing and error analysis are conducted using Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and the Coefficient of Determination ($R^2$).
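The Average Angle method and dogleg severity mentioned above are standard wellbore kinematics; a minimal sketch of their conventional forms (function names, units, and the 30 m reference interval are illustrative assumptions, not taken from the paper):

```python
import math

def average_angle_step(md_step, inc1, inc2, azi1, azi2):
    """Average Angle method: displacement between two survey stations.
    Inclinations/azimuths in degrees, measured-depth step in metres."""
    inc = math.radians((inc1 + inc2) / 2.0)   # mean inclination
    azi = math.radians((azi1 + azi2) / 2.0)   # mean azimuth
    d_tvd = md_step * math.cos(inc)           # true vertical depth increment
    d_north = md_step * math.sin(inc) * math.cos(azi)
    d_east = md_step * math.sin(inc) * math.sin(azi)
    return d_north, d_east, d_tvd

def dogleg_severity(md_step, inc1, inc2, azi1, azi2, interval=30.0):
    """Dogleg severity in degrees per `interval` metres of measured depth."""
    i1, i2 = math.radians(inc1), math.radians(inc2)
    da = math.radians(azi2 - azi1)
    cos_dl = math.cos(i1) * math.cos(i2) + math.sin(i1) * math.sin(i2) * math.cos(da)
    dl = math.degrees(math.acos(max(-1.0, min(1.0, cos_dl))))  # clamp for round-off
    return dl * interval / md_step
```

For a vertical segment the step is purely vertical and the dogleg vanishes, which is a convenient sanity check for any implementation.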
As the quantum information science and engineering (QISE) workforce grows, there is an anticipated need for professionals with bachelor's and master's degrees who can fill a wide range of roles in the quantum industry. This report identifies the experimental skills needed for individuals with bachelor's or master's degrees to succeed in quantum industry roles. Through semi-structured interviews with quantum industry employers, we gathered data on 22 distinct positions spanning hardware, software, and business functions. While employers describe varying expectations of quantum expertise, the unifying requirement across these roles is proficiency in experimental skills, which fall into four key categories: instrumentation, computation and data analysis, experimental and project design, and communication and collaboration. Positions open to bachelor's and master's graduates use all four skill areas, but the balance of experimental skills needed differs. Bachelor's roles lean toward instrumentation, computation and data analysis, and experimental and project design skills. Individuals in these roles build, operate, and troubleshoot hardware, and they gather and interpret data to design and carry out experiments. Master's roles stand out for the communication and collaboration skills needed on top of the other three skill categories. Individuals in these roles oversee experiments, coordinate teams, and align efforts with company and client needs. By articulating the experimental skills needed for bachelor's and master's roles in the quantum industry, this report provides actionable insights for educators developing QISE courses and programs.
By returning to the topological basics of fusion target design, Generative Artificial Intelligence (genAI) is used to specify how to initially configure and drive the optimally entangled topological state, and how to stabilize that topological state against disruption. This can be applied to all methods, including tokamaks, laser-driven schemes, and pulsed-power-driven schemes. The result is practical, room-temperature targets that can yield up to 10 GJ of energy, driven by as little as 3 MJ of absorbed energy. The genAI is based on the concept of Ubuntu, which replaces the Deep Convolutional Neural Network approximation of a functional with the formula for the generating functional of a canonical transformation from the domain of the canonical field momenta and fields to the domain of the canonical momenta and coordinates, that is, the Reduced Order Model. This formula is a logical process of renormalization, enabling Heisenberg's canonical approach to field theory, via calculation of the S-matrix, given observation of the fields. This can be viewed as topological characterization and control of collective, that is complex, systems.
We investigate a process of growth of a signed network that strictly adheres to Heider structural balance rules, resulting in two opposing, growing factions. New agents make contact with a random existing agent and join one of the factions with a bias $p$ towards the group they made contact with. The evolution of the group sizes can be mapped to a randomized Pólya urn model. Aside from $p=1$, the relative sizes of the two factions always tend towards $1/2$, but the behavior differs between the anti-bias regime ($p<1/2$) and the biased one ($p>1/2$). In the anti-bias regime, the expected faction sizes converge toward equality, regardless of initial differences, while in the biased regime, the initial size difference persists over time. This difference is obscured by fluctuations, with the faction size distribution remaining unimodal even for $p>1/2$, up until a characteristic point $p^{ch}$, where it becomes bimodal, with the initially larger and smaller factions featuring their own distinguishable peaks. We discuss several approaches to estimate this characteristic value. At $p=1$, differences between the relative sizes of factions can persist indefinitely, although still subject to fluctuations.
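The growth process described above can be sketched as a direct simulation (function name, defaults, and seeding are illustrative assumptions): each new agent contacts a uniformly random existing agent and joins the contacted agent's faction with probability $p$, the opposing faction otherwise.

```python
import random

def grow_factions(steps, p, sizes=(1, 1), seed=0):
    """Simulate biased faction growth: each new agent contacts a uniformly
    random existing agent and joins that agent's faction with probability p,
    the opposing faction with probability 1 - p."""
    rng = random.Random(seed)
    a, b = sizes
    for _ in range(steps):
        contact_in_a = rng.random() < a / (a + b)   # contact is uniform over agents
        join_contacted = rng.random() < p
        # joins A if (contact in A and follows bias) or (contact in B and defects)
        if contact_in_a == join_contacted:
            a += 1
        else:
            b += 1
    return a, b
```

Averaging the final fraction `a / (a + b)` over many seeds for a grid of `p` values would reproduce the qualitative regimes discussed in the abstract.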
The present investigation is directed at exploring southern polar ionospheric responses to intense space weather events and their correlations with plasma convection and auroral precipitation. The main phases of six geomagnetic storms occurring in the year 2023 (ascending phase of the present solar cycle) are considered for this study. The ionospheric Total Electron Content (TEC) measurements derived from GPS receivers covering the Antarctic region are used for probing the electron density perturbations during these events. Auroral precipitation maps are shown to illustrate the locations of the GPS stations relative to particle precipitation. SuperDARN maps are shown to understand the effects of plasma convection over these locations. Correlation of the enhanced TEC observations with the auroral precipitation (R $\sim$ 0.31) and the plasma convection (R $\sim$ 0.88) reveals that the latter is more responsible for causing significant enhancements in the diurnal maximum values of TEC over the Antarctic region. This work thus presents correlation studies between two physical processes and ionospheric density enhancements over the under-explored southern polar region under strong levels of geomagnetic activity during 2023.
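The quoted R values are presumably Pearson correlation coefficients between the diurnal maximum TEC and each driver; a minimal stdlib implementation of that statistic (the function name and its use here are illustrative, not from the study):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```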
This study investigates the adsorption sensing capabilities of single-walled (5,5) boron nitride nanotubes (BNNTs) towards environmental pollutant gas molecules, including CH2, SO2, NH3, H2Se, CO2 and CS2. Employing linear-combination-of-atomic-orbitals density functional theory (DFT) and the spin-polarized generalized gradient approximation (GGA), the investigation reveals the nanotube's robust adsorption behavior without compromising its structural integrity. Thermodynamic and chemical parameters, such as adsorption energy, HOMO-LUMO gap, vertical ionization energy, and vertical electron affinity, highlight the (5,5) BNNTs' potential as efficient adsorbents for pollutant molecules. Infrared spectroscopy confirms the formation of distinct BNNT-gas complexes. These findings underscore the promising application of BN nanotubes as adsorbents for common gaseous pollutants, essential for developing sensors to enhance indoor air quality.
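The reported adsorption energy and HOMO-LUMO gap are presumably defined in the standard way for such DFT studies (the exact sign convention used in the paper may differ):

```latex
E_{\mathrm{ads}} = E_{\mathrm{BNNT+gas}} - E_{\mathrm{BNNT}} - E_{\mathrm{gas}},
\qquad
E_{\mathrm{gap}} = E_{\mathrm{LUMO}} - E_{\mathrm{HOMO}}
```

A negative $E_{\mathrm{ads}}$ then indicates energetically favorable adsorption of the gas molecule on the nanotube.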
We calculate bound and scattering properties of a system of two neutral atoms and an ion near an atom-ion Feshbach resonance. Our results indicate that long-range atom-ion interactions lead to significant deviations from the universal behavior derived from contact or van der Waals potentials. We find that ionic systems display an overall suppression of inelastic transitions, leading to recombination rates and Efimov-state lifetimes orders of magnitude smaller than those for neutral atoms. We further characterize the dense spectra of triatomic molecular ions with extended lifetimes. Our results provide deeper insight into the universality and structure of three-body ionic systems and establish them as a promising platform for exploring novel few- and many-body phenomena with long-range interactions.
Representation learning for high-dimensional, complex physical systems aims to identify a low-dimensional intrinsic latent space, which is crucial for reduced-order modeling and modal analysis. To overcome the well-known Kolmogorov barrier, deep autoencoders (AEs) have been introduced in recent years, but they often suffer from poor convergence behavior as the rank of the latent space increases. To address this issue, we propose the learnable weighted hybrid autoencoder, a hybrid approach that combines the strengths of singular value decomposition (SVD) with deep autoencoders through a learnable weighted framework. We find that the introduction of learnable weighting parameters is essential: without them, the resulting model would either collapse into a standard POD or fail to exhibit the desired convergence behavior. Interestingly, we empirically find that our trained model has a sharpness thousands of times smaller than that of other models. Our experiments on classical chaotic PDE systems, including the 1D Kuramoto-Sivashinsky and forced isotropic turbulence datasets, demonstrate that our approach significantly improves generalization performance compared to several competing methods. Additionally, when combined with time series modeling techniques (e.g., Koopman operator, LSTM), the proposed technique offers significant improvements for surrogate modeling of high-dimensional multi-scale PDE systems.
Solar wind, classified by its bulk speed and the Alfvénic nature of its fluctuations, generates the heliosphere. The elusive physical processes responsible for the generation of the different types of this wind are a topic of active debate. Recent observations reveal intermittent jets, with kinetic energy in the picoflare range, emerging from dark areas of a polar coronal hole threaded by open magnetic field lines. These could substantially contribute to solar wind. However, their ubiquity and direct links to solar wind have not been established. Here, we report a unique set of remote-sensing and in situ observations from the Solar Orbiter spacecraft that establish a unified picture of fast and Alfvénic slow wind, connected to the similar widespread picoflare jet activity in two coronal holes. Radial expansion of coronal holes ultimately regulates the speed of the emerging wind.
We introduce a classical computational method for quantum dynamics that relies on a global-in-time variational principle. Unlike conventional time-stepping approaches, our scheme computes the entire state trajectory over a finite time window by minimizing a loss function that enforces the Schrödinger equation. The variational state is parametrized with a Galerkin-inspired ansatz based on a time-dependent linear combination of time-independent Neural Quantum States. This structure is particularly well-suited for exploring long-time dynamics and enables bounding the error with respect to the exact evolution via the global loss function. We showcase the method by simulating global quantum quenches in the paradigmatic Transverse-Field Ising model in both 1D and 2D, uncovering signatures of ergodicity breaking and absence of thermalization in two dimensions. Overall, our method is competitive with state-of-the-art time-dependent variational approaches, while unlocking previously inaccessible dynamical regimes of strongly interacting quantum systems.
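Schematically, and under the assumption that the global loss is the time-integrated Schrödinger residual (normalization and constants may differ from the paper's actual definition), the objective and the Galerkin-inspired ansatz read:

```latex
\mathcal{L}[\psi_\theta] \;=\; \int_0^T \mathrm{d}t\,
\bigl\| \left( i\hbar\,\partial_t - \hat{H} \right) \lvert \psi_\theta(t) \rangle \bigr\|^2,
\qquad
\lvert \psi_\theta(t) \rangle \;=\; \sum_{k=1}^{K} c_k(t)\, \lvert \Phi_{\theta_k} \rangle
```

Here the $\lvert \Phi_{\theta_k} \rangle$ are time-independent Neural Quantum States and only the coefficients $c_k(t)$ carry the time dependence; minimizing $\mathcal{L}$ over the whole window $[0,T]$ is what makes the scheme global-in-time rather than a time-stepping method.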
Topological matter offers opportunities for control of charge and energy flow with implications for chemistry still incompletely understood. In this work, we study an ensemble of adsorbates with an empty frontier level (LUMO) coupled to the edges, domain walls (solitons), and bulk of a Su-Schrieffer-Heeger polyacetylene chain across its trivial insulator, metallic, and topological insulator phases. We find that two experimentally relevant observables, charge donation into the LUMO and the magnitude of adsorbate electronic friction, are significantly impacted by the electronic phase of the SSH chain and show clear signatures of the topological phase transition. Localized, symmetry-protected midgap states at edges and solitons strongly enhance electron donation relative to both the metallic and trivial phases, whereas the metal's extended states, despite a larger total DOS near the Fermi energy, hybridize more weakly with a molecular adsorbate near a particular site. Electronic friction is largest in the metal, strongly suppressed in gapped regions, and intermediate at topological edges where hybridization splits the midgap resonance. These trends persist with disorder, highlighting their robustness, and suggest engineering domain walls and topological boundaries as pathways for employing topological matter in molecular catalysis and sensing.
The syntactic structure of a sentence can be described as a tree that indicates the syntactic relationships between words. In spite of significant progress in unsupervised methods that retrieve the syntactic structure of sentences, guessing the right direction of edges is still a challenge. As edges in a syntactic dependency structure are oriented away from the root, the challenge of guessing the right direction can be reduced to finding an undirected tree and the root. The limited performance of current unsupervised methods demonstrates the lack of a proper understanding, from first principles, of what a root vertex is. We consider an ensemble of centrality scores, some that only take into account the free tree (non-spatial scores) and others that take into account the position of vertices (spatial scores). We test the hypothesis that the root vertex is an important or central vertex of the syntactic dependency structure. We confirm the hypothesis in the sense that root vertices tend to have high centrality and that vertices of high centrality tend to be roots. The best performance in guessing the root is achieved by novel scores that only take into account the position of a vertex and that of its neighbours. We provide theoretical and empirical foundations towards a universal notion of rootness from a network science perspective.
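The root-guessing idea can be sketched with closeness centrality as one representative non-spatial score (the function names are illustrative, and the paper's best-performing scores are spatial ones, which this sketch does not implement):

```python
from collections import deque

def closeness_root(adj):
    """Guess the root of an undirected (free) tree as the vertex of maximum
    closeness centrality, i.e. minimum total BFS distance to all other vertices.
    `adj` maps each vertex to a list of its neighbours."""
    def total_distance(src):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return sum(dist.values())
    return min(adj, key=total_distance)
```

On a path graph this picks the middle vertex, illustrating why centrality alone is only a proxy for rootness: real syntactic roots need not sit at the tree's center.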
The radiative linewidth of a two-level emitter (TLE) fundamentally limits the bandwidth available for quantum information processing. Despite its importance, no prior experiment has systematically examined how driving detuning affects the indistinguishability of photons scattered from a TLE - a parameter critical for photonic quantum computing. Here, we perform post-selective two-photon interference measurements between mutually detuned resonance fluorescence signals from an InAs quantum dot embedded in a micropillar cavity. At small mutual laser detunings ($\leq 0.5$ GHz), the results are accurately described by the pure-state model [Nat. Commun. 16, 6453 (2025)], which treats all resonance-fluorescence photons as spontaneous emission. At larger detunings, we uncover an anomalous feature in the two-photon interference, where the normalised second-order correlation function under orthogonal polarisations yields $g^{(2)}_{\perp}(0) < 0.5$.
Altermagnets with nonrelativistic momentum-dependent spin splitting and compensated net magnetic moments have recently garnered significant interest in spintronics, particularly as pinning layers in magnetic tunnel junctions (MTJs). However, room-temperature (RT) altermagnet-based MTJs with tunable tunneling magnetoresistance (TMR) or electroresistance (TER) modulated by multiferroicity remain largely unexplored. Here, we propose an experimentally fabricable above-RT multiferroic MTJ comprising an altermagnetic metal, a ferroelectric barrier, and a ferromagnetic metal, epitomized by a CrSb/In2Se3/Fe3GaTe2 heterostructure. Our calculations combining first-principles methods with the nonequilibrium Green function method indicate that the architecture enables magnetically switchable TER, electrically tunable TMR, and dual-mode controllable spin filtering. To disentangle the roles of ferroelectricity and the tunnel barrier, nonferroelectric Sb2Se3 and a vacuum gap are exploited as control cases. Remarkably, the system achieves TMR up to 2308%, TER of 707%, and near-perfect spin filtering efficiency. Both TMR and TER are considerable for CrSb/In2Se3/Fe3GaTe2 with either a Cr or an Sb interface. The transport performance is robust under bias voltage. These findings demonstrate above-RT multiferroic altermagnet-based MTJs and highlight their exciting potential as a versatile platform for next-generation spin dynamics, magnetic sensing, and quantum logic nanodevices.
Fundamental physics often confronts complex symbolic problems with few guiding exemplars or established principles. While artificial intelligence (AI) offers promise, its typical need for vast datasets to learn from hinders its use in these information-scarce frontiers. We introduce learning at criticality (LaC), a reinforcement learning (RL) scheme that tunes Large Language Models (LLMs) to a sharp learning transition, addressing this information scarcity. At this transition, LLMs achieve peak generalization from minimal data, exemplified by 7-digit base-7 addition -- a test of nontrivial arithmetic reasoning. To elucidate this peak, we analyze a minimal concept-network model (CoNet) designed to capture the essence of how LLMs might link tokens. Trained on a single exemplar, this model also undergoes a sharp learning transition. This transition exhibits hallmarks of a second-order phase transition, notably power-law distributed solution path lengths. At this critical point, the system maximizes a ``critical thinking pattern'' crucial for generalization, enabled by the underlying scale-free exploration. This suggests LLMs reach peak performance by operating at criticality, where such explorative dynamics enable the extraction of underlying operational rules. We demonstrate LaC in quantum field theory: an 8B-parameter LLM, tuned to its critical point by LaC using a few exemplars of symbolic Matsubara sums, solves unseen, higher-order problems, significantly outperforming far larger models. LaC thus leverages critical phenomena, a physical principle, to empower AI for complex, data-sparse challenges in fundamental physics.
Spin splitting in emerging altermagnets is non-relativistic and momentum-dependent, yet energy-independent and localized in momentum space, posing challenges for practical applications. Here, we propose an intercalation-driven paradigm for altermagnets to attain improved electronic structures, multiferroic characteristics, and anomalous and spin transport functionalities. As a representative system, we investigate electrochemistry- and self-intercalated V2Se2O bilayers, building on the recently reported room-temperature K- and Rb-intercalated V2Se2O family [Nat. Phys. 2025, 21, 754; Nat. Phys. 2025, 21, 760], utilizing density functional theory, Wannier function analyses, Monte Carlo simulations, and non-equilibrium Green function methods. Intercalation induces room-temperature intralayer ferrimagnetic and interlayer ferromagnetic order (358 K for Li-intercalation and 773 K for V-intercalation), ferroelasticity (~1 % signal intensity), in-plane uniaxial magnetic anisotropy, and metallization, while also modifying the anomalous Hall effect. Notably, Li- and V-intercalated V2Se2O bilayers exhibit enhanced spin splitting and half-metallic behavior, respectively, yielding near-perfect spin filtering efficiency. Intercalation substantially enhances spin transport in V2Se2O-based devices, enabling giant magnetoresistance (877 %), ultra-high thermal tunneling magnetoresistance (~12000 %), and observable spin Seebeck and temperature negative differential resistance effects. This intercalation-driven paradigm expands altermagnetic functionalities through multifunctional integration, offering promising avenues for advanced, miniaturized, room-temperature exploitation of anomalous, electron, and spin transport properties.
In two- and higher-dimensional non-Hermitian lattices, systems can exhibit geometry-dependent bands, where the spectrum and eigenstates under open boundary conditions depend on the bulk geometry even in the thermodynamic limit. Although geometry-dependent bands are widely observed, the underlying mechanism for this phenomenon remains unclear. In this work, we address this problem by establishing a higher-dimensional non-Bloch band theory based on the concept of "strip generalized Brillouin zones" (SGBZs), which describe the asymptotic behavior of non-Hermitian bands when a lattice is extended sequentially along its linearly independent axes. Within this framework, we demonstrate that geometry-dependent bands arise from the incompatibility of SGBZs and, for the first time, derive a general criterion for the geometry dependence of non-Hermitian bands: non-zero area of the complex energy spectrum or the imaginary momentum spectrum. Our work opens an avenue for future studies on the interplay between geometric effects and non-Hermitian physics, such as non-Hermitian band topology.
We investigate the magnetic-field dependence of the interaction between two Rydberg atoms, $|nS_{1/2}, m_J\rangle$ and $|(n+1)S_{1/2}, m_J\rangle$. In this setting, the effective spin-1/2 Hamiltonian takes the form of an {\it XXZ} model. We show that the anisotropy parameter of the {\it XXZ} model can be tuned by applying a magnetic field and, in particular, that it changes drastically near the Förster resonance points. Based on this result, we propose experimental realizations of spin-1/2 and spin-1 Heisenberg-type quantum spin models in Rydberg atom quantum simulators, without relying on Floquet engineering. Our results provide guidance for future experiments of Rydberg atom quantum simulators and offer insight into quantum many-body phenomena emerging in the Heisenberg model.
Atomistic simulation methods have evolved through successive computational levels, each building upon more fundamental approaches: from quantum mechanics to density functional theory (DFT), and subsequently, to machine learning interatomic potentials (MLIPs). While universal MLIPs (u-MLIPs) offer broad transferability, their computational overhead limits large-scale applications. Task-specific MLIPs (ts-MLIPs) achieve superior efficiency but require prohibitively expensive DFT data generation for each material system. In this paper, we propose LightPFP, a data-efficient knowledge distillation framework. Instead of using costly DFT calculations, LightPFP generates a distilled ts-MLIP by leveraging u-MLIP to generate high-quality training data tailored for specific materials and utilizing a pre-trained light-weight MLIP to further enhance data efficiency. Across a broad spectrum of materials, including solid-state electrolytes, high-entropy alloys, and reactive ionic systems, LightPFP delivers three orders of magnitude faster model development than conventional DFT-based methods, while maintaining accuracy on par with first-principles predictions. Moreover, the distilled ts-MLIPs further sustain the computational efficiency essential for large-scale molecular dynamics, achieving 1-2 orders of magnitude faster inference than u-MLIPs. The framework further enables efficient precision transfer learning, where systematic errors from the u-MLIP can be corrected using as few as 10 high-accuracy DFT data points, as demonstrated for MgO melting point prediction. This u-MLIP-driven distillation approach enables rapid development of high-fidelity, efficient MLIPs for materials science applications.
We present the design and testing of a compact, low-cost stellar spectrometer developed for undergraduate and outreach applications. The instrument employs a 600 lines/mm diffraction grating, a CMOS monochrome sensor, and a 3D-printed mount integrated with reflecting telescopes. Calibration was performed using helium emission sources in the laboratory and Vega as a spectrophotometric standard, supported by a custom Python-based image-processing pipeline for wavelength calibration and spectral stacking. The spectrometer successfully recorded usable spectra of bright stars including Vega, Sirius, Procyon, Capella, and Betelgeuse, covering spectral types A through M. The results demonstrate that meaningful stellar spectroscopy can be achieved with accessible, low-cost equipment, providing a practical framework for student-led astronomical instrumentation projects.
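A wavelength-calibration step of the kind such a pipeline typically performs can be sketched as a least-squares linear dispersion solution fitted to known helium lines (a first-order fit and these function names are assumptions for illustration; the actual pipeline may use a higher-order polynomial):

```python
def linear_wavelength_solution(pixels, wavelengths):
    """Least-squares fit of lambda = a * pixel + b from calibration lines
    (e.g. helium emission lines at known laboratory wavelengths)."""
    n = len(pixels)
    sx, sy = sum(pixels), sum(wavelengths)
    sxx = sum(p * p for p in pixels)
    sxy = sum(p * w for p, w in zip(pixels, wavelengths))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # dispersion (nm/pixel)
    b = (sy - a * sx) / n                           # zero-point offset
    return a, b

def pixel_to_wavelength(pixel, a, b):
    """Map a detector column to wavelength using the fitted solution."""
    return a * pixel + b
```

Once `a` and `b` are fitted from the laboratory helium frames, every column of a stacked stellar spectrum can be mapped to wavelength with `pixel_to_wavelength`.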
Perovskite materials, known for their structural versatility and multifunctional properties, continue to draw interest for advanced electronic and optoelectronic applications. In this study, we investigate the elastic properties and the strain-engineered mechanical, electronic, and optical properties of the orthorhombic La2AlGaO6 (LAGO) hybrid perovskite using first-principles quantum mechanical calculations based on density functional theory (DFT). Structural optimizations were performed using both the local density approximation (LDA) and the generalized gradient approximation (GGA). The mechanical stability of LAGO was confirmed through the Born-Huang criteria, and key elastic constants (C11, C12, C33, C44, and C66) were evaluated. These constants were further used to derive mechanical parameters such as Young's modulus, bulk modulus, shear modulus, Poisson's ratio, Cauchy's pressure, and the anisotropy factor, offering insights into the material's ductility, hardness, and elastic anisotropy. Crucially, we explored the influence of biaxial strain on the electronic band structure, DOS/PDOS, and Fermi energy, revealing significant band gap modulation under compressive and tensile strain and, hence, variation of the optical properties. The coupling between elastic response and electronic structure highlights LAGO's potential for tunable device applications, where mechanical stimuli can be employed to tailor its electronic functionality.
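The derived mechanical parameters the abstract lists follow from standard relations: once averaged bulk (B) and shear (G) moduli are obtained from the elastic constants, the isotropic formulas E = 9BG/(3B+G) and nu = (3B-2G)/(2(3B+G)) give Young's modulus and Poisson's ratio, and Pugh's B/G ratio indicates ductility. The numerical values below are placeholders for illustration, not LAGO's computed moduli.

```python
def youngs_modulus(B, G):
    """Isotropic Young's modulus E = 9BG / (3B + G), in GPa."""
    return 9.0 * B * G / (3.0 * B + G)

def poisson_ratio(B, G):
    """Isotropic Poisson's ratio nu = (3B - 2G) / (2(3B + G))."""
    return (3.0 * B - 2.0 * G) / (2.0 * (3.0 * B + G))

# Placeholder moduli (GPa), purely for illustration.
B, G = 150.0, 80.0
E = youngs_modulus(B, G)
nu = poisson_ratio(B, G)
print(f"E = {E:.1f} GPa, nu = {nu:.3f}")

# Pugh's criterion: B/G > 1.75 conventionally signals ductile behavior.
print(f"B/G = {B / G:.2f} -> {'ductile' if B / G > 1.75 else 'brittle'}")
```

For these placeholder inputs E is about 204 GPa and nu about 0.274, with B/G = 1.88 on the ductile side of Pugh's threshold.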
This monograph presents the results of experimental and theoretical studies of binary and ternary crystalline and glassy silicon tellurides. It provides a detailed description of the methods for synthesizing and growing bulk and nanostructured binary crystals of Si2Te3, SiTe2, and ternary crystals known in the M-Si-Te systems (M = Na, K, Cu, Ag, Al, In), as well as sodium-silicon and tellurium-silicon clathrates. Significant attention is paid to the results of investigations into their electronic structure and their optical, electrical, photoelectric, and photoluminescent properties. The publication is intended for researchers and specialists in the fields of semiconductor materials science, physics, and semiconductor technology, as well as lecturers, postgraduate students, and students of relevant specialties.
Artificial intelligence is transforming the sciences, yet general conversational AI systems often generate unverified "hallucinations," undermining scientific rigor. We present OceanAI, a conversational platform that integrates the natural-language fluency of open-source large language models (LLMs) with real-time, parameterized access to authoritative oceanographic data streams hosted by the National Oceanic and Atmospheric Administration (NOAA). Each query, such as "What was Boston Harbor's highest water level in 2024?", triggers real-time API calls that identify, parse, and synthesize relevant datasets into reproducible natural-language responses and data visualizations. In a blind comparison with three widely used AI chat-interface products, only OceanAI produced NOAA-sourced values with original data references; the others either declined to answer or provided unsupported results. Designed for extensibility, OceanAI connects to multiple NOAA data products and variables, supporting applications in marine hazard forecasting, ecosystem assessment, and water-quality monitoring. By grounding its outputs in verifiable observations, OceanAI advances transparency, reproducibility, and trust, offering a scalable framework for AI-enabled decision support in the ocean sciences. A public demonstration is available at this https URL.
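The parameterized retrieval pattern the abstract describes can be illustrated against NOAA's public CO-OPS Data API, which serves verified water-level observations. The station ID and parameter choices below are our assumptions for a Boston Harbor example, not OceanAI's actual internals; the sketch only builds the reproducible query URL rather than fetching it.

```python
from urllib.parse import urlencode

# NOAA CO-OPS Data API endpoint (public; no API key required).
BASE = "https://api.tidesandcurrents.noaa.gov/api/prod/datagetter"

def water_level_query(station, begin_date, end_date):
    """Build a reproducible query URL for observed water levels."""
    params = {
        "product": "water_level",  # observed water-level time series
        "station": station,        # CO-OPS station ID
        "begin_date": begin_date,  # YYYYMMDD
        "end_date": end_date,      # YYYYMMDD
        "datum": "MLLW",           # reference datum
        "units": "metric",
        "time_zone": "gmt",
        "format": "json",
    }
    return f"{BASE}?{urlencode(params)}"

# 8443970 is NOAA's Boston, MA tide station (assumed for this example).
url = water_level_query("8443970", "20240101", "20240131")
print(url)
```

Because the full query URL is emitted alongside the answer, any value the system reports can be re-derived from the same authoritative source, which is the reproducibility property the comparison study tests for.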