This study evaluates the accessibility of public EV charging stations in the Washington metropolitan area using a comprehensive measure that accounts for both destination-based and en route charging opportunities. By incorporating the full spectrum of daily travel patterns into the accessibility evaluation, our methodology offers a more realistic measure of charging opportunities than destination-based methods that prioritize proximity to residential locations. Results from spatial autocorrelation analysis indicate that conventional accessibility assessments often overestimate the availability of infrastructure in central urban areas and underestimate it in peripheral commuting zones, potentially leading to misallocated resources. By highlighting significant clusters of high-access and low-access areas, our approach identifies spatial inequalities in infrastructure distribution and provides insights into areas requiring targeted interventions. This study underscores the importance of incorporating daily mobility patterns into urban planning to ensure equitable access to EV charging infrastructure and suggests a framework that other regions could adopt to enhance sustainable transportation networks and support equitable urban development.

To enhance the understanding of air quality within underground railway stations (URS), a methodology has been developed to establish a baseline profile of particle concentrations (PM10 and PM2.5). This approach incorporates an extensive data-cleaning process based on the identification of URS operation periods, the removal of physically inconsistent or mathematically aberrant data, and the comparison of each day's profile to an average profile. The versatility of this methodology allows its application to different particle classes within various URS. The results obtained from the three studied URS indicate that reliable typical daily profiles can be obtained even over short measurement periods (up to one or two weeks).
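The cleaning logic described above can be sketched in a few functions: drop physically inconsistent readings, build the hour-by-hour average profile, and keep only days that stay close to it. The thresholds and function names below are illustrative assumptions, not values from the study.

```python
# Illustrative sketch of the three cleaning stages described in the abstract.
# Thresholds (valid PM range, 50% deviation) are assumed, not from the study.

def clean_series(values, low=0.0, high=2000.0):
    """Replace physically inconsistent PM values (ug/m3) with None."""
    return [v if (v is not None and low <= v <= high) else None for v in values]

def mean_profile(days):
    """Hour-by-hour average over a list of 24-value daily profiles."""
    n_hours = len(days[0])
    prof = []
    for h in range(n_hours):
        vals = [d[h] for d in days if d[h] is not None]
        prof.append(sum(vals) / len(vals) if vals else None)
    return prof

def is_typical(day, avg, max_rel_dev=0.5):
    """Keep a day only if every hour stays within 50% of the average."""
    for v, m in zip(day, avg):
        if v is None or m is None or m == 0:
            continue
        if abs(v - m) / m > max_rel_dev:
            return False
    return True
```

A day whose profile deviates strongly from the mean (for example, a maintenance day) would be excluded before computing the final typical profile.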

Liquid-liquid equilibrium (LLE) phase diagrams have been determined, by means of the critical opalescence method with a laser scattering technique, for the mixtures 4-phenylbutan-2-one + CH$_3$(CH$_2$)$_n$CH$_3$ ($n = 10,12,14$) and for benzyl ethanoate + CH$_3$(CH$_2$)$_n$CH$_3$ ($n = 12,14$). The systems are characterized by having an upper critical solution temperature (UCST), which increases with $n$. The corresponding LLE curves show a rather horizontal top and become skewed toward higher mole fractions of the polar compound when $n$ is increased. Calorimetric and LLE measurements show that, for mixtures with molecules with a given functional group, interactions between aromatic molecules are stronger than those between homomorphic linear molecules (aromaticity effect). This has been ascribed to proximity effects arising from the presence of the polar group and the aromatic ring within the same molecule. Proximity effects become weaker in the sequence 1-phenylpropan-2-one > 4-phenylbutan-2-one > 1-phenylethanone and are more important in benzyl ethanoate than in ethyl benzoate molecules. Values of the critical compositions and temperatures calculated with the DISQUAC group contribution model are in good agreement with the experimental results. Accordingly, the shape of the LLE curves is also correctly described by DISQUAC.

In this work, effects of constant and time-dependent vaccination rates on the Susceptible-Exposed-Infected-Recovered-Susceptible (SEIRS) seasonal model are studied. Computing the Lyapunov exponent, we show that typical complex structures, such as shrimps, emerge for given combinations of constant vaccination rate and another model parameter. In some specific cases, the constant vaccination does not act as a chaotic suppressor and chaotic bands can exist for high levels of vaccination (e.g., $> 0.95$). Moreover, we obtain linear and non-linear relationships between one control parameter and constant vaccination to establish a disease-free solution. We also verify that the total infected number does not change whether the dynamics is chaotic or periodic. The introduction of a time-dependent vaccine is made by the inclusion of a periodic function with a defined amplitude and frequency. For this case, we investigate the effects of different amplitudes and frequencies on chaotic attractors, yielding low, medium, and high seasonality degrees of contacts. Depending on the parameters of the time-dependent vaccination function, chaotic structures can be controlled and become periodic structures. For a given set of parameters, these structures are accessed mostly via crisis and in some cases via period-doubling. After that, we investigate how the time-dependent vaccine acts in bi-stable dynamics when chaotic and periodic attractors coexist. We identify that this kind of vaccination acts as a control by destroying almost all the periodic basins. We explain this by the fact that chaotic attractors exhibit more desirable characteristics for epidemics than periodic ones in a bi-stable state.
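The model structure described above can be illustrated with a minimal SEIRS integrator that includes a seasonal contact rate and a constant vaccination rate moving susceptibles directly to the recovered class. All parameter values and names below are generic assumptions for illustration, not the values studied in the paper.

```python
import math

# Minimal SEIRS sketch with seasonal contacts and constant vaccination v.
# All rates are illustrative assumptions (per day), not the paper's values.

def seirs_step(state, t, dt, beta0=1.5, delta=0.2, sigma=1/8, gamma=1/5,
               alpha=1/90, v=0.1):
    S, E, I, R = state
    beta = beta0 * (1 + delta * math.cos(2 * math.pi * t / 365))  # seasonality
    dS = -beta * S * I + alpha * R - v * S   # infection, waning, vaccination
    dE = beta * S * I - sigma * E            # incubation
    dI = sigma * E - gamma * I               # recovery
    dR = gamma * I - alpha * R + v * S       # immunity loss, vaccination
    return (S + dt * dS, E + dt * dE, I + dt * dI, R + dt * dR)

def simulate(days=365, dt=0.1, state=(0.99, 0.0, 0.01, 0.0)):
    """Forward Euler integration; compartments are population fractions."""
    t = 0.0
    while t < days:
        state = seirs_step(state, t, dt)
        t += dt
    return state
```

A time-dependent vaccination of the kind studied in the paper would replace the constant `v` with a periodic function of `t` with chosen amplitude and frequency.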

This paper explores the advanced mathematical frameworks used to analyze symmetry breaking in high-dimensional field theories, emphasizing the roles of Laurent series, residues, and winding numbers. Symmetry breaking is fundamental in various physical contexts, such as high-energy physics, condensed matter physics, and cosmology. The study addresses how these mathematical tools enable the decomposition of complex field behaviors near singularities, revealing the intricate dynamics of symmetry breaking. Laurent series facilitate the expansion of fields into manageable terms, particularly around critical points. Residues provide a direct link between local field behavior and global physical properties, playing a crucial role in effective action formulations and renormalization processes. Winding numbers offer a topological perspective, quantifying how fields wrap around singularities and identifying stable topological structures like vortices, solitons, and monopoles. Extending these methods to (3+1) dimensions highlights the complexity of symmetry breaking in higher-dimensional scenarios, where advanced group theory and topological invariants are necessary to describe non-linear interactions. The findings underscore the importance of integrating these mathematical techniques into modern theoretical physics, with potential applications in quantum gravity, string theory, and the study of topological phases of matter. Future directions include further exploration of higher-dimensional extensions and their implications for understanding the fundamental nature of symmetry, topology, and field dynamics.

In this paper, the metallic behavior of magneto-optic materials in slit arrays under the incidence of a TM wave is analyzed by the mode-matching technique and confirmed by full-wave simulations. It is shown that the tensor electric permittivity of such materials under a static magnetic-field bias can result in a negative effective permittivity and, if the material is placed in a periodic two-dimensional structure known as a slit array, in extraordinary optical transmission. It is also shown that this structure acts as a highly frequency-selective surface, tunable by changing the magnetic bias.

Peer-customer is a mechanism for pairing student teams with customers in hands-on curriculum courses. Each student pitches a problem they want someone else in the class to solve for them. The use of peer-customers gives students practical, scalable access to a customer with a real-world need for their final project. The peer-customer, despite being a student in the class, does not work on the project with the team. This dissociation forces a student team to practice customer needs assessment, testing, and surveying, which can often be lacking in self-ideated final projects that lack the resources to curate external customers, as capstone courses do. We prototyped the use of peer-customers in an introductory physical prototyping course focused on basic embedded systems design and Python programming. In this paper, we present a practical guide on how best to use peer-customers, supported by key observations made during two separate offerings of the course with a total of N=64 students (N=29 in Y1 and N=35 in Y2).

Mechanical testing with sub-sized specimens plays an important role in the nuclear industry, facilitating tests in confined experimental spaces with lower irradiation levels and accelerating the qualification of new materials. The reduced size of specimens results in different material behavior at the microscale, mesoscale, and macroscale, in comparison to standard-sized specimens, which is referred to as the specimen size effect. Although analytical models have been proposed to correlate the properties of sub-sized specimens to standard-sized specimens, these models lack broad applicability across different materials and testing conditions. The objective of this study is to create the first large public dataset of tensile properties for sub-sized specimens used in nuclear structural materials. We performed an extensive literature review of relevant publications and extracted over 1,000 tensile testing records comprising 54 parameters including material type and composition, manufacturing information, irradiation conditions, specimen dimensions, and tensile properties. The dataset can serve as a valuable resource to investigate the specimen size effect and develop computational methods to correlate the tensile properties of sub-sized specimens.

The past few centuries have witnessed a dramatic growth in scientific and technological knowledge. However, the nature of that growth - whether exponential or otherwise - remains controversial, perhaps partly due to the lack of quantitative characterizations. We evaluated knowledge as a collective thinking structure, using citation networks as a representation, by examining extensive datasets that include 213 million publications (1800-2020) and 7.6 million patents (1976-2020). We found that knowledge - which we conceptualize as the reduction of uncertainty in a knowledge network - grew linearly over time in naturally formed citation networks that themselves expanded exponentially. Moreover, our results revealed inflection points in the growth of knowledge that often corresponded to important developments within fields, such as major breakthroughs, new paradigms, or the emergence of entirely new areas of study. Around these inflection points, knowledge may grow rapidly or exponentially on a local scale, although the overall growth rate remains linear when viewed globally. Previous studies that concluded knowledge grows exponentially may have focused primarily on these local bursts of rapid growth around key developments, leading to the misconception of a global exponential trend. Our findings help to reconcile the discrepancy between the perceived exponential growth and the actual linear growth of knowledge by highlighting the distinction between local and global growth patterns. Overall, our findings reveal major trends in the development of science that are relevant to policymaking, showing that producing knowledge is far more challenging than producing papers.

Silicon nitride (SiN) formed via low-pressure chemical vapor deposition (LPCVD) is an ideal material platform for on-chip nonlinear photonics owing to its low propagation loss and competitive nonlinear index. Despite this, the scalability of LPCVD SiN is limited by film stress when the large thicknesses required for nonlinear dispersion engineering are deposited. This stress in turn leads to film cracking and makes integrating such films in silicon foundries challenging. To overcome this limitation, we propose a bilayer waveguide scheme comprising a thin LPCVD SiN layer underneath a low-stress, low-index PECVD SiN layer. We show group-velocity dispersion tuning at 1550 nm without concern for film cracking while enabling low-loss resonators with intrinsic quality factors above 1 million. Finally, we demonstrate a locked, normal-dispersion Kerr frequency comb with our bilayer waveguide resonators spanning 120 nm in the C-band with an on-chip pump power of 350 mW.

Diffractive Neural Networks (DNNs) leverage the power of light to enhance computational performance in machine learning, offering a pathway to high-speed, low-energy, and large-scale neural information processing. However, most existing DNN architectures are optimized for single tasks and thus lack the flexibility required for the simultaneous execution of multiple tasks within a unified artificial intelligence platform. In this work, we utilize the polarization and wavelength degrees of freedom of light to achieve optical multi-task identification using the MNIST, FMNIST, and KMNIST datasets. Employing bilayer cascaded metasurfaces, we construct dual-channel DNNs capable of simultaneously classifying two tasks, using polarization and wavelength multiplexing schemes through a meta-atom library. Numerical evaluations demonstrate performance accuracies comparable to those of individually trained single-channel, single-task DNNs. Extending this approach to three-task parallel recognition reveals an expected performance decline yet maintains satisfactory classification accuracies of greater than 80% for all tasks. We further introduce a novel end-to-end joint optimization framework to redesign the three-task classifier, demonstrating substantial improvements over the meta-atom library design and offering the potential for future multi-channel DNN designs. Our study could pave the way for the development of ultrathin, high-speed, and high-throughput optical neural computing systems.

We predict the existence of stable bound states between pairs of ultracold diatomic molecules with the aid of a static electric field and 1D harmonic confinement. We focus on collisions of NaK-NaK identical fermions, for which we find that currently achievable experimental parameters allow the observation of these confinement-induced field-linked bound states as scattering resonances. The bound state is highly stable with lifetimes estimated to be tens of seconds long. With the diatomic molecules bound at distances a fraction of the dipolar length scale, these complexes allow for explorations of polyatomic chemistry and Fermi gas superfluid pairing.

Mathematically modelling diffusive and advective transport of particles in heterogeneous layered media is important to many applications in computational, biological and medical physics. While deterministic continuum models of such transport processes are well established, they fail to account for randomness inherent in many problems and are valid only for a large number of particles. To address this limitation, this paper derives a suite of equivalent stochastic (discrete-time discrete-space random walk) models for several standard continuum (partial differential equation) models of diffusion and advection-diffusion across a fully- or semi-permeable interface. Our approach involves discretising the continuum model in space and time to yield a Markov chain, which governs the transition probabilities between spatial lattice sites during each time step. Discretisation in space is carried out using a standard finite volume method while two options are considered for discretisation in time. A simple forward Euler discretisation yields a stochastic model taking the form of a local (nearest-neighbour) random walk with simple analytical expressions for the transition probabilities while an exact exponential discretisation yields a non-local random walk with transition probabilities defined numerically via a matrix exponential. Constraints on the size of the spatial and/or temporal steps are provided for each option to ensure the transition probabilities are non-negative. MATLAB code comparing the stochastic and continuum models is available on GitHub (https://github.com/elliotcarr/Carr2024c) with simulation results demonstrating good agreement for several example problems.
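For the simplest case of pure diffusion away from the interface, the forward Euler option reduces to a familiar nearest-neighbour rule. The sketch below (our notation, not the paper's) shows the interior-node transition probabilities and the time-step constraint that keeps them non-negative; the paper's MATLAB code handles the full interface and advection cases.

```python
# Forward Euler transition probabilities for 1D diffusion at an interior
# lattice node: finite volume in space gives a tridiagonal generator, and
# one Euler step yields a nearest-neighbour random walk. Illustrative
# sketch only; notation is ours.

def euler_transition_probs(D, dx, dt):
    """Nearest-neighbour jump probabilities (left, stay, right).

    p_left = p_right = D*dt/dx**2 and p_stay = 1 - 2*D*dt/dx**2,
    so non-negativity requires dt <= dx**2 / (2*D).
    """
    p_move = D * dt / dx**2
    p_stay = 1.0 - 2.0 * p_move
    if p_stay < 0:
        raise ValueError("time step too large: need dt <= dx**2/(2*D)")
    return p_move, p_stay, p_move
```

For example, with `D = 1`, `dx = 0.1` and `dt = 0.004`, the walker jumps left or right with probability 0.4 and stays put with probability 0.2; doubling `dt` beyond the bound makes `p_stay` negative and the discretisation invalid.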

Manipulation of small-scale particles across streamlines is the elementary task of microfluidic devices. Many such devices operate at very low Reynolds numbers and deflect particles using arrays of obstacles, but a systematic quantification of the relevant hydrodynamic effects has been lacking. Here, we explore an alternative approach, rigorously modeling the displacement of force-free spherical particles in vortical Stokes flows under hydrodynamic particle-wall interaction. Certain Moffatt-like eddy geometries with broken symmetry allow for systematic deflection of particles across streamlines, leading to particle accumulation at either Fax\'en field fixed points or limit cycles. Moreover, particles can be forced onto trajectories approaching channel walls exponentially closely, making quantitative predictions of particle capture (sticking) by short-range forces possible. This rich, particle-size-dependent behavior suggests the versatile use of inertia-less flow in devices with a long particle residence time for concentration, sorting, or filtering.

The Brewster effect, dating back to the pioneering work of Sir David Brewster in 1815, offers a crucial route to achieve 100% energy conversion between the incident and transmitted propagating waves at an optical interface and is of fundamental importance to many practical applications, such as polarization filtering, beam steering, and optical broadband angular selectivity. However, whether the Brewster effect of surface waves can be implemented without the involvement of negative-permittivity or negative-permeability materials remains elusive. This is due to the formidable challenge of fully suppressing both the parasitic scattering into propagating waves and the reflection into surface waves under the incidence of surface waves. Here, we reveal a feasible route to achieve a scattering-free plasmonic Brewster effect via isotropic metasurfaces, together with the use of positive-permittivity and positive-permeability metamaterials with both anisotropic and magnetic responses. In essence, the anisotropic response of metamaterials is judiciously designed to fully suppress the parasitic scattering into propagating waves, while the magnetic response of metamaterials facilitates the full suppression of the reflection into surface waves supported by metasurfaces. Moreover, we find that this plasmonic Brewster effect via metasurfaces can be further engineered to occur for arbitrary incident angles, giving rise to the exotic phenomenon of an all-angle scattering-free plasmonic Brewster effect.

We present a theoretical framework for temperature imaging from long-wavelength infrared thermal radiation (e.g. 8-12 $\mu$m) through the end-to-end design of a metasurface-optics frontend and a computational-reconstruction backend. We introduce a new nonlinear reconstruction algorithm, ``Planck regression'', that reconstructs the temperature map from a grayscale sensor image, even in the presence of severe chromatic aberration, by exploiting blackbody and optical physics particular to thermal imaging. We combine this algorithm with an end-to-end approach that optimizes a manufacturable, single-layer metasurface to yield the most accurate reconstruction. Our designs demonstrate high-quality, noise-robust reconstructions of arbitrary temperature maps (including completely random images) in simulations of an ultra-compact thermal-imaging device. We also show that Planck regression is much more generalizable to arbitrary images than a straightforward neural-network reconstruction, which requires a large training set of domain-specific images.
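The blackbody physics that such a reconstruction exploits can be illustrated in the simplest possible setting: at a single wavelength, Planck's law can be inverted in closed form to recover temperature from measured spectral radiance. This one-band toy sketch is our own illustration; the paper's actual algorithm fits across the full optical and sensor response.

```python
import math

# Single-wavelength inversion of Planck's law: given a spectral radiance B
# at wavelength lam, recover the temperature. A toy illustration of the
# physics behind temperature reconstruction, not the paper's algorithm.

H = 6.62607015e-34   # Planck constant (J s)
C = 2.99792458e8     # speed of light (m/s)
KB = 1.380649e-23    # Boltzmann constant (J/K)

def planck_radiance(lam, T):
    """Spectral radiance B(lam, T) in W sr^-1 m^-3."""
    return (2 * H * C**2 / lam**5) / math.expm1(H * C / (lam * KB * T))

def invert_planck(lam, B):
    """Temperature (K) from radiance at a single wavelength."""
    return (H * C / (lam * KB)) / math.log1p(2 * H * C**2 / (lam**5 * B))

lam = 10e-6                      # 10 um, inside the 8-12 um LWIR band
B = planck_radiance(lam, 300.0)  # simulate a measurement at 300 K
T = invert_planck(lam, B)        # round-trips to 300 K
```

With chromatic aberration, each sensor pixel mixes many wavelengths and spatial positions, which is why the full problem requires a regression over the imaging model rather than this pointwise inversion.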

Atomically thin two-dimensional (2D) hexagonal boron nitride (hBN) has emerged as an essential material for encapsulation layers in van der Waals heterostructures and for efficient deep-ultraviolet optoelectronics. This is primarily due to its remarkable physical properties and its ultrawide bandgap (close to 6 eV, and even larger in some cases). Color centers in hBN refer to intrinsic vacancies and extrinsic impurities within the 2D crystal lattice, which result in distinct optical properties in the ultraviolet (UV) to near-infrared (IR) range. Furthermore, each color center in hBN exhibits a unique emission spectrum and possesses various spin properties. These characteristics open up possibilities for the development of next-generation optoelectronics and quantum information applications, including room-temperature single-photon sources and quantum sensors. Here, we provide a comprehensive overview of the atomic configurations, optical and quantum properties, and different techniques employed for the formation of color centers in hBN. A deep understanding of color centers in hBN allows for advances in the development of next-generation UV optoelectronic applications, solid-state quantum technologies, and nanophotonics by harnessing the exceptional capabilities offered by hBN color centers.

Nearly constant mean angular momentum profiles are widely observed in curved turbulent flows, including the bulk region of Taylor--Couette (TC) flows, where the inner and outer cylinders have weakly counter-rotating and co-rotating conditions. For high-Reynolds-number TC flows under these conditions, both the bulk and boundary layers become turbulent without Taylor rolls, referred to as the featureless ultimate regime (UR). In this study, we examine Reynolds-averaged Navier--Stokes (RANS) models to predict the nearly constant mean angular velocity as a one-dimensional problem in the featureless UR of TC turbulence. High-Reynolds-number experiments of TC turbulence are performed for reference, where the radius ratio is $\eta = r_\mathrm{in}/r_\mathrm{out} = 0.732$ and angular velocity ratio $a = -\omega_\mathrm{out}/\omega_\mathrm{in}$ is in the range $-0.5 \le a \le 0.1$. Verification of the RANS model using the algebraic Reynolds stress model (ARSM) suggests that convection of the Reynolds stress is essential for predicting the angular momentum profile. We introduce the Jaumann derivative as a covariant time derivative to develop ARSMs that incorporate the convection effect in a covariant manner. The proposed ARSM using the Jaumann derivative of the term composed of the strain and vorticity tensors successfully predicts the nearly constant mean angular momentum for a wide range of angular velocity ratios in the co-rotating case. The modelling approach incorporating time-derivative terms is a candidate for expressing curvature effects while satisfying the covariance of the Reynolds stress tensor.

Precision timekeeping is fundamental to modern technologies such as Global Navigation Satellite Systems (GNSS), communication networks, financial transactions, and power grid management. Over the past 50 years, microwave atomic clocks have been the standard for timing precision. The new generation of optical atomic clocks has demonstrated orders-of-magnitude better performance and is now transitioning from research to practical applications. We provide a web resource that tracks the performance of these optical atomic clocks, measured in terms of their Allan deviation at various integration times, against their size, weight, and power (SWaP) requirements via an interactive plot. The most current data and additional resources are available online, providing a continuously updated reference for those interested in precision timing.

Wake and force characteristics of an oscillating cylinder in inline steady currents are investigated numerically over a wide parameter space of dimensionless oscillation amplitude ($A^* = 0.01 - 0.50$) and wavelength ($\lambda^* = 0.4 - 25$) at a fixed Reynolds number $Re = 500$. Fundamental issues addressed in this study are the interactions of wakes induced by the steady approaching flow and cylinder oscillations and the influences of the governing parameters $A^*$ and $\lambda^*$ on such interactions. Whilst the collinear flow is dominated by wakes induced by cylinder oscillation at $\lambda^* \leq 1.5$ and by the steady current at $\lambda^* \geq 10$, it exhibits characteristics of nonlinear interactions of wakes induced by the cylinder oscillation and steady current at $\lambda^* = 1.5 - 10$, such as the formation of multiple synchronized modes interleaved with desynchronized modes. The synchronized mode varies with both $\lambda^*$ and $A^*$, forming an inclined Arnold's tongue across the $\lambda^*$-$A^*$ space. There is a wide variability of the vortex shedding pattern in each synchronized mode. Variations of different hydrodynamic force coefficients with $\lambda^*$ and $A^*$ are investigated with physical interpretations based on the wake characteristics. The applicability of the Morison equation in predicting inline force fluctuations is examined. We find that the Morison equation shows reasonable accuracy only for a small range of $\lambda^* \leq 1.5$. Beyond this range, its performance deteriorates due to the influence of the steady current on wake characteristics.
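The Morison equation mentioned above decomposes the inline force into a drag term proportional to $u|u|$ and an inertia term proportional to $\mathrm{d}u/\mathrm{d}t$ of the relative velocity. The sketch below illustrates this decomposition for a cylinder oscillating in a steady current; the coefficient values and parameters are generic illustrations, not the fitted values from this study.

```python
import math

# Morison-type inline force on a cylinder: drag ~ u|u| plus inertia ~ du/dt.
# Density, diameter, and force coefficients are illustrative assumptions.

RHO = 1000.0   # fluid density (kg/m^3)
D = 0.1        # cylinder diameter (m)

def morison_force(u, dudt, Cd=1.2, Cm=2.0):
    """Inline force per unit length (N/m) from relative velocity u (m/s)."""
    drag = 0.5 * RHO * Cd * D * u * abs(u)
    inertia = RHO * Cm * (math.pi * D**2 / 4) * dudt
    return drag + inertia

def relative_velocity(t, U=0.5, A=0.05, w=2.0):
    """Relative velocity and acceleration for a cylinder oscillating with
    amplitude A (m) at angular frequency w (rad/s) in a steady current U."""
    u = U - A * w * math.cos(w * t)    # current minus cylinder velocity
    dudt = A * w**2 * math.sin(w * t)
    return u, dudt
```

In the regime where the oscillatory wake dominates ($\lambda^* \leq 1.5$), such a two-coefficient fit tracks the inline force reasonably well; once the steady-current wake interacts nonlinearly with the oscillation, no single pair of coefficients captures the force history.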

Geophysical and astrophysical fluid flows are typically buoyantly driven and are strongly constrained by planetary rotation at large scales. Rapidly rotating Rayleigh-B\'enard convection (RRRBC) provides a paradigm for direct numerical simulations (DNS) and laboratory studies of such flows, but the accessible parameter space remains restricted to moderately fast rotation (Ekman numbers $\rm Ek \gtrsim 10^{-8}$), while realistic $\rm Ek$ for astro-/geophysical applications are significantly smaller. Reduced equations of motion, the non-hydrostatic quasi-geostrophic equations describing the leading-order behavior in the limit of rapid rotation ($\rm Ek \to 0$), cannot capture finite-rotation effects, leaving the physically most relevant part of parameter space, with small but finite $\rm Ek$, currently inaccessible. Here, we introduce the rescaled incompressible Navier-Stokes equations (RiNSE), a reformulation of the Navier-Stokes-Boussinesq equations informed by the scalings valid for $\rm Ek\to 0$. We provide the first full DNS of RRRBC at unprecedented rotation strengths down to $\rm Ek=10^{-15}$ and below and show that the RiNSE converge to the asymptotically reduced equations.

Exceptional points (EPs) are spectral singularities in non-Hermitian systems where eigenvalues and their corresponding eigenstates coalesce simultaneously. In this study, we calculate scattering poles in an open spherical solid and propose a depth-first-search-based method to identify EPs. Using the proposed method, we numerically identify multiple EPs in a parameter space and confirm the simultaneous degeneracy of scattering poles through numerical experiments. The proposed method and findings enable the exploration of applications in practical three-dimensional models.

In this article, we propose a simple approach for the precision calculation of the Bethe logarithm. The leading contributions are obtained using specific operators, while the remaining terms are eliminated by adjusting the parameter $\lambda$. Through the use of dimensional regularization, singular divergences are cancelled algebraically. Compared to the standard form of the Bethe logarithm, our approach significantly reduces the complexity of constructing pseudostates in numerical evaluations. Using this approach, we obtain a highly precise value of the Bethe logarithm for the ground state of hydrogen, accurate to 49 significant digits. For multi-electron systems, the approach retains its simplicity and efficiency.

Because the surface layers of cold molecular clouds absorb high-energy ultraviolet (UV) photons, only low-energy photons are able to penetrate into the inner regions of these clouds. This leads to a lower photo-ionization yield for molecules of higher ionization potential in these environments. However, here we experimentally demonstrate the ionization of the benzonitrile molecule using 266 nm (4.66 eV) photons. Low-intensity, unfocused laser irradiation of benzonitrile molecules results in extensive fragmentation. Moreover, ion-neutral reactions among the cationic and neutral fragments show promising molecular mass growth.

X-ray Thomson scattering (XRTS) has emerged as a powerful tool for the diagnostics of matter under extreme conditions. In principle, it gives one access to important system parameters such as the temperature, density, and ionization state, but the interpretation of the measured XRTS intensity usually relies on theoretical models and approximations. In this work, we show that it is possible to extract the Rayleigh weight -- a key property that describes the electronic localization around the ions -- directly from the experimental data without the need for any model calculations or simulations. As a practical application, we consider an experimental measurement of strongly compressed Be at the National Ignition Facility (NIF) [D\"oppner \emph{et al.}, \textit{Nature} \textbf{618}, 270-275 (2023)]. In addition to being interesting in their own right, our results will open up new avenues for diagnostics from \emph{ab initio} simulations, help to further constrain existing chemical models, and constitute a rigorous benchmark for theory and simulations.

To streamline fast-track processing of large data volumes, we have developed a deep learning approach to deblend seismic data in the shot domain, based on a practical strategy for generating high-quality training data along with a list of data conditioning techniques to improve the performance of the data-driven model. We make use of unblended shot gathers acquired at the end of each sail line, access to which requires no additional time or labor costs beyond the blended acquisition. By manually blending these data, we obtain training data with good control of the ground truth, fully adapted to the given survey. Furthermore, we train a deep neural network using multi-channel inputs that include adjacent blended shot gathers as additional channels. The prediction of the blending noise is added as an auxiliary task related to the network's main task of predicting the primary-source events. Blending noise in the ground truth is scaled down during training and validation because of its excessively strong amplitudes. As part of the process, the to-be-deblended shot gathers are aligned by the blending noise. Implementation on field blended-by-acquisition data demonstrates that introducing the suggested data conditioning steps can considerably reduce the leakage of primary-source events in the deep part of the blended section. The complete proposed approach performs almost as well as a conventional algorithm in the shallow section and shows a great advantage in efficiency. It performs slightly worse at larger traveltimes but still removes the blending noise efficiently.

Processing marine seismic data is computationally demanding and consists of multiple time-consuming steps. Neural-network-based processing can, in theory, significantly reduce processing time and has the potential to change the way seismic processing is done. In this paper, we use deep convolutional neural networks (CNNs) to remove seismic interference noise and to deblend seismic data. Training such networks requires a significant amount of computational memory, since a single shot gather consists of more than $10^6$ data samples. Preliminary results are promising both for denoising and deblending. However, we also observed that the results are affected by the signal-to-noise ratio (SNR). Moving to the common-channel domain is a way of breaking the coherency of the noise while also reducing the input volume size. This makes it easier for the network to distinguish between signal and noise. It also increases the efficiency of GPU memory usage by enabling better utilization of multi-core processing. Deblending in the common-channel domain with a CNN yields relatively good results and is an improvement over deblending in the shot domain.

The effects of the aortic geometry on its mechanics and blood flow, and subsequently on aortic pathologies, remain largely unexplored. The main obstacle lies in obtaining patient-specific aorta models, an extremely difficult procedure in terms of ethics and availability, segmentation, mesh generation, and all of the accompanying processes. By contrast, idealized models are easy to build but do not faithfully represent patient-specific variability. Additionally, a unified aortic parametrization in clinic and engineering has not yet been achieved. To bridge this gap, we introduce a new set of statistical parameters to generate synthetic models of the aorta. The parameters possess geometric significance and fall within physiological ranges, effectively bridging the disciplines of clinical medicine and engineering. Smoothly blended realistic representations are recovered with convolution surfaces. These enable high-quality visualization and biological appearance, whereas the structured mesh generation paves the way for numerical simulations. The only requirements of the approach are one patient-specific aorta model and the statistical data for parameter values obtained from the literature. The output of this work is SynthAorta, a dataset of ready-to-use synthetic, physiological aorta models, each containing a centerline, a surface representation, and a structured hexahedral finite element mesh. The meshes are structured and fully consistent between different cases, making them eminently suitable for reduced order modeling and machine learning approaches.

It is a truth universally acknowledged, that space charge effects in ultrarelativistic electron storage rings are irrelevant, owing to the steep inverse dependence of their strength on the Lorentz factor. Yet, with the push towards the diffraction limit, state-of-the-art light sources are approaching the point where their emittance becomes so small that the space charge force can no longer be ignored. In this paper, we demonstrate how space charge affects the injection dynamics, dynamic aperture, and collective beam stability, using the fourth-generation light sources PETRA IV and SOLEIL II as examples.

Chemo-mechanical waves on active deformable surfaces are a key component of many vital cellular functions. In particular, these waves play a major role in force generation and long-range signal transmission in cells that dynamically change shape, as encountered during cell division or morphogenesis. Reconstituting and controlling such chemically driven cell deformations is a crucial but unsolved challenge for the development of synthetic cells. Here, we develop an optogenetic method to elucidate the mechanism responsible for coordinating the surface contraction waves that occur in oocytes of the starfish Patiria miniata during meiotic cell division. Using spatiotemporally patterned light stimuli as a control input, we create chemo-mechanical cortical excitations that are decoupled from meiotic cues and drive diverse shape deformations ranging from local pinching to surface contraction waves and cell lysis. We develop a quantitative model that captures the hierarchy of chemical and mechanical dynamics, which allows us to relate the variety of mechanical responses to the optogenetic stimuli. Our framework systematically predicts and explains transitions of programmed shape dynamics. Finally, we qualitatively map the observed shape dynamics to elucidate how the versatility of intracellular protein dynamics can give rise to a broad range of mechanical phenomenologies. More broadly, our results pave the way toward real-time control over dynamical deformations in living organisms and can advance the design of synthetic cells and life-like cellular functions.

In magnetic resonance imaging (MRI), the spectrometer is a fundamental component of the system, responsible for transmitting the radiofrequency (RF) pulses that excite hydrogen nuclei and for acquiring the resulting MR signals for processing. However, detailed knowledge about this component remains largely inaccessible due to the proprietary nature of commercial systems. To address this gap, we present an FPGA-based platform specifically designed for MRI signal transmission and reception in low-field MRI applications; with appropriate chip replacements, the platform can also be adapted for mid- and high-field MRI systems. The platform leverages Direct Digital Synthesis (DDS) technology to generate RF pulses, offering the flexibility to quickly and precisely adjust soft-pulse parameters to meet the specific requirements of the MRI system. It processes MRI signals through digital downconversion and utilizes CIC and FIR filters to obtain baseband signals. Experimental testing of the platform has yielded promising results. We hope that this work will inspire further research and development in the field of MRI spectrometer design.

Ionising radiation interactions in matter can trigger a cascade of processes that underpin long-lived damage in the medium. To date, however, a lack of suitable methodologies has precluded our ability to understand the role that material nanostructure plays in this cascade. Here, we use transient photoabsorption to track the lifetime of free electrons ($t_c$) in bulk and nanostructured SiO$_2$ (aerogel) irradiated by picosecond-scale ($10^{-12}$ s) bursts of X-rays and protons from a laser-driven accelerator. Optical streaking reveals a sharp increase in $t_c$ from $<1$ ps to $>50$ ps over a narrow average density ($\rho_{av}$) range spanning the expected phonon-fracton crossover in aerogels. Numerical modelling suggests that this discontinuity can be understood as a quenching of rapid, phonon-assisted recovery in irradiated nanostructured SiO$_2$, which leads to an extended period of enhanced energy density in the excited electron population. Overall, these results open a direct route to tracking how low-level processes in complex systems can underpin macroscopically observed phenomena and, importantly, the conditions that permit them to emerge.

Confocal Raman microscopy, a highly specific and label-free technique for the microscale study of thick samples, often presents difficulties due to weak Raman signals. Inhomogeneous samples introduce wavefront aberrations that further reduce these signals, requiring even longer acquisition times. In this study, we introduce adaptive optics to confocal Raman microscopy for the first time to counteract such aberrations, significantly increasing the Raman signal and image quality. The method is designed to integrate seamlessly with existing commercial microscopes without hardware modifications. It uses a wavefront sensorless approach to derive aberrations using an optofluidic, transmissive spatial light modulator that can be attached to the microscope nosepiece. Our experimental results demonstrate the compensation of aberrations caused by artificial scatterers and mouse brain tissue, improving spatial resolution and achieving up to 3.5-fold signal enhancements. Our results provide a basis for the molecular label-free study of biological systems at greater imaging depths.

The diffusional dynamics and vibrational spectroscopy of molecular hydrogen (H$_2$) in myoglobin (Mb) are characterized. Hydrogen has been implicated in a number of physiologically relevant processes, including cellular aging and inflammation. Here, the internal diffusion through the protein matrix was characterized and the vibrational spectroscopy was investigated using conventional empirical energy functions and improved models able to describe higher-order electrostatic moments of the ligand. H$_2$ can occupy the same internal defects as already found for Xe or CO (Xe1 to Xe4 and the B-state). Furthermore, four additional sites were found, some of which had been discovered in earlier simulation studies. The vibrational spectra obtained with the most refined energy function indicate that the spectroscopy of H$_2$ differs between docking sites. The maxima of the absorption spectra span $\sim 20$ cm$^{-1}$, indicative of a pronounced effect of the surrounding protein matrix on the vibrational spectroscopy of the ligand. Electronic structure calculations show that H$_2$ forms a stable complex with the heme iron (stabilized by $\sim -12$ kcal/mol), but splitting of H$_2$ is unlikely due to a high activation energy ($\sim 50$ kcal/mol).

The Pound-Drever-Hall (PDH) technique is routinely used to stabilize the frequency of a laser to a reference cavity. The electronic sideband (ESB) locking scheme, a PDH variant, helps bridge the difference between the quantized frequencies enforced by the cavity and the laser frequency of interest. Here we use quadrature amplitude modulation (QAM), a technique from digital signal communication, to engineer the high-quality phase-modulated radio-frequency (rf) signal required for the ESB locking scheme. We develop a theoretical framework to analyze the effects of in-phase/quadrature-phase (I/Q) impairments on the ESB error signal for ultra-narrow-linewidth lasers. We design and implement two baseband-sampling software-defined radio variants for implementing QAM that compensate for these I/Q impairments. Using these variants, we engineer high-quality phase-modulated rf signals with a large phase modulation index of 1.01 radians, a maximum modulation frequency of 3 MHz, a tunable carrier frequency range of 450 MHz to 4 GHz, and I/Q errors of less than 2.25% over the entire carrier frequency range.
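The QAM synthesis of a phase-modulated carrier described in this abstract rests on the identity $\cos(\omega_c t + \varphi(t)) = I(t)\cos(\omega_c t) - Q(t)\sin(\omega_c t)$ with $I = \cos\varphi$ and $Q = \sin\varphi$. The following minimal sketch (all parameters are illustrative, not the paper's hardware values, apart from the modulation index of 1.01 rad) confirms numerically that the baseband I/Q construction reproduces a directly phase-modulated signal:

```python
import math

# Illustrative parameters (hypothetical; only beta comes from the text).
beta = 1.01    # phase-modulation index in radians
f_c = 10e6     # carrier frequency, scaled down for this simulation
f_m = 1e6      # modulation frequency
f_s = 200e6    # sampling rate
n = 2000       # number of samples to compare

def direct_pm(t):
    """Ideal phase-modulated carrier: cos(w_c*t + beta*sin(w_m*t))."""
    return math.cos(2 * math.pi * f_c * t + beta * math.sin(2 * math.pi * f_m * t))

def iq_pm(t):
    """Same signal built from baseband I/Q components, as in QAM synthesis:
    I = cos(beta*sin(w_m*t)), Q = sin(beta*sin(w_m*t))."""
    phase = beta * math.sin(2 * math.pi * f_m * t)
    i_t, q_t = math.cos(phase), math.sin(phase)
    return i_t * math.cos(2 * math.pi * f_c * t) - q_t * math.sin(2 * math.pi * f_c * t)

# Maximum deviation between the two constructions over n samples.
max_err = max(abs(direct_pm(k / f_s) - iq_pm(k / f_s)) for k in range(n))
```

I/Q impairments (gain imbalance, quadrature skew, DC offsets) enter precisely as deviations of `i_t` and `q_t` from this ideal pair, which is why their compensation is central to the ESB error-signal quality.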

An electronically variational approach to the calculation of atomic hyperfine structure transition energies under the influence of static external electric fields is presented. The method avoids the calculation of intermediate atomic states entirely and requires only the wavefunctions of the electronic states involved in the respective hyperfine levels. These wavefunctions are obtained through relativistic general-excitation-rank configuration interaction theory. The method also enables calculations on atoms with the most complicated of shell structures. Initial applications include $^{87}$Rb and $^{133}$Cs, for which very good agreement with literature results is established. For $^{169}$Tm, which is used in the development of atomic clocks, the differential static electric dipole polarizability between the ground levels $J=\frac{7}{2}$ and $J=\frac{5}{2}$ is calculated to be $\Delta\alpha = -0.23 \pm 0.11$ a.u. The hyperfine Stark coefficient for the hyperfine levels belonging to the ground term with $J=\frac{7}{2}$ is found to be $k = (1.3 \pm 1.0) \times 10^{-13}$ Hz/(V/m)$^2$. This coefficient is several orders of magnitude smaller than the corresponding coefficients in $^{87}$Rb and $^{133}$Cs.

Climate change and rapid urbanization have led to more frequent and severe flooding, causing significant damage. The existing literature on flood risk encompasses a variety of dimensions, such as physical, economic, social, political, environmental, infrastructural, and managerial aspects. This paper provides an extensive review of proposed conceptual frameworks and their components used in flood risk assessment. Initially, conceptual frameworks were extracted to configure the components of flood risk, including hazard, vulnerability, exposure, resilience, and susceptibility. Subsequently, a comprehensive set of criteria addressing the risk components was identified from the literature. In this paper, the risk conceptual framework is defined by the intersection of vulnerability and hazard. Vulnerability, shaped by exposure and susceptibility, can be reduced by enhancing resilience, which comprises coping and adaptive capacities. In total, 102 criteria/subcriteria were identified and classified into three hierarchical structures of hazard, susceptibility, and resilience. Finally, flood risk assessment methods were reviewed, with an emphasis on their applicability and characteristics, highlighting the strengths and limitations of the various methods and their suitability for different scenarios. The outcomes of this review can serve as a valuable reference for professionals involved in flood risk assessment, aiding in the identification of the most appropriate risk concepts, assessment criteria, and quantification methods for a given study area and its data availability.

Fluids at supercritical pressures exhibit large variations in density near the pseudo critical line, such that buoyancy plays a crucial role in their fluid dynamics. Here, we experimentally investigate heat transfer and turbulence in horizontal hydrodynamically developed channel flows of carbon dioxide at 88.5 bar and 32.6{\deg}C, heated at either the top or bottom surface to induce a strong vertical density gradient. In order to visualise the flow and evaluate its heat transfer, shadowgraphy is used concurrently with surface temperature measurements. With moderate heating, the flow is found to strongly stratify for both heating configurations, with bulk Richardson numbers Ri reaching up to 100. When the carbon dioxide is heated from the bottom upwards, the resulting unstably stratified flow is found to be dominated by the increasingly prevalent secondary motion of thermal plumes, enhancing vertical mixing and progressively improving heat transfer compared to a neutrally buoyant setting. Conversely, stable stratification, induced by heating from the top, suppresses the vertical motion leading to deteriorated heat transfer that becomes invariant to the Reynolds number. The optical results provide novel insights into the complex dynamics of the directionally dependent heat transfer in the near-pseudo-critical region. These insights contribute to the reliable design of heat exchangers with highly property-variant fluids, which are critical for the decarbonisation of power and industrial heat. However, the results also highlight the need for further progress in the development of experimental techniques to generate reliable reference data for a broader range of non-ideal supercritical conditions.

This paper presents a novel volume-of-fluid ghost-cell immersed boundary (IB) method for two-phase free-surface flow interacting with structures. To circumvent the disturbance that arises around the intersection of the IB and the free surface when interpolation is used for variable reconstruction, the fluid-structure interaction (FSI) is first considered with an orthogonal IB, mimicking the imposition of boundary conditions in a body-conformal grid method. Treatments are subsequently introduced to account for the non-orthogonal effect in accurately simulating the FSI, including the newly proposed flux-scaling and IB velocity re-evaluation methods. Further, a variable smoothing process and a flux correction method are adapted to handle moving-boundary cases. A two-phase flow solver has been developed based on OpenFOAM. Both stationary and moving immersed boundary cases are used for validation. The numerical results agree reasonably with the corresponding laboratory data and other numerical simulation results, demonstrating that the disturbance is effectively suppressed and that the solver accurately captures fluid-structure interactions involving free-surface flow.

We develop a deep reinforcement learning method for training a jellyfish-like swimmer to effectively track a moving target in a two-dimensional flow. The swimmer is a flexible object equipped with a muscle model based on torsional springs. We employ a deep Q-network (DQN) that takes the swimmer's geometry and dynamic parameters as inputs and outputs actions, namely the forces applied to the swimmer. In particular, we introduce an action regulation to mitigate the interference from complex fluid-structure interactions. The goal of these actions is to navigate the swimmer to a target point in the shortest possible time. In the DQN training, the data on the swimmer's motions are obtained from simulations conducted using the immersed boundary method. While tracking a moving target, there is an inherent delay between the application of forces and the corresponding response of the swimmer's body, due to hydrodynamic interactions between the shed vortices and the swimmer's own locomotion. Our tests demonstrate that the swimmer, with the DQN agent and action regulation, is able to dynamically adjust its course based on its instantaneous state. This work extends the application scope of machine learning in controlling flexible objects within fluid environments.

An essential requirement for universal quantum control of the centre-of-mass motion of levitated objects is a precise readout of all three translational degrees of freedom. Improving that precision presents one key challenge: collecting all the information on the object's position encoded in the scattered light, which is equivalent to minimising the measurement imprecision. Here, we propose a new detection technique based on spatial mode decomposition, which addresses this problem with a simple integrated setup in which all of the light back-scattered from a levitated nanoparticle couples into a spatial mode sorter. We observe that each mode of the sorter pairs predominantly with the inelastically scattered field generated by the object's motion along a particular spatial axis. As a result, each translational degree of freedom is selectively encoded in the amplitude of an orthogonal information channel. Using this approach, we report measurement efficiencies ($\eta_{\mathrm{tot}}^x$, $\eta_{\mathrm{tot}}^y$, $\eta_{\mathrm{tot}}^z$) = (0.17, 0.15, 0.30), implying that our technique is capable of reaching the 3D quantum ground state.

Homodyne Quadrature Interferometers (HoQIs) are compact, low-noise, high-dynamic-range displacement sensors designed for use in gravitational wave observatories. Their lower noise compared to the displacement sensors currently in use makes them valuable for improving seismic isolation in current and future detectors. This paper outlines the progression of this sensor from initial production and benchtop tests to in-vacuum static performance and installation in a gravitational wave detector prototype facility. A detailed design description is provided, including the full signal and optical chain required for implementation in detectors. The measured in-vacuum static performance indicates a noise floor of $3-4\times10^{-13}\,m/\sqrt{\rm{Hz}}$ at 10 Hz. Three HoQIs were installed on the beamsplitter suspension at the AEI 10 m prototype. They measured the motion of the intermediate mass across the entire measured bandwidth and showed minimal non-linearities and good robustness to motion in unmeasured degrees of freedom, both important for practical use in dynamic systems such as seismic isolation.

The effects of kinetic-energy preservation errors due to Runge-Kutta (RK) temporal integrators have been analyzed for the case of large-eddy simulations of incompressible turbulent channel flow. Simulations have been run using the open-source solver Xcompact3D with an implicit spectral vanishing viscosity model and a variety of temporal Runge-Kutta integrators. Explicit pseudo-symplectic schemes, with improved energy preservation properties, have been compared to standard RK methods. The results show a marked decrease in the temporal error for higher-order pseudo-symplectic methods; on the other hand, an analysis of the energy spectra indicates that the dissipation introduced by the commonly used three-stage RK scheme can lead to significant distortion of the energy distribution within the inertial range. A cost-vs-accuracy analysis suggests that pseudo-symplectic schemes could be used to attain results comparable to traditional methods at a reduced computational cost.
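The kinetic-energy-preservation errors discussed in this abstract can be illustrated on a toy energy-conserving system. The sketch below uses a generic explicit Runge-Kutta integrator (not Xcompact3D's implementation, and not the paper's pseudo-symplectic schemes) to integrate a harmonic oscillator, whose energy is exactly conserved, with a standard three-stage third-order scheme and with classical RK4, and compares the spurious energy drift of the two:

```python
def rk_step(f, y, h, A, b):
    """One generic explicit Runge-Kutta step given a Butcher tableau (A, b)."""
    s = len(b)
    k = []
    for i in range(s):
        yi = [y[j] + h * sum(A[i][m] * k[m][j] for m in range(i))
              for j in range(len(y))]
        k.append(f(yi))
    return [y[j] + h * sum(b[i] * k[i][j] for i in range(s))
            for j in range(len(y))]

def f(y):
    """Harmonic oscillator u'' = -u; its energy E = (u^2 + v^2)/2 is conserved."""
    u, v = y
    return [v, -u]

# Kutta's three-stage third-order scheme vs the classical four-stage RK4.
RK3 = ([[0, 0, 0], [0.5, 0, 0], [-1, 2, 0]], [1/6, 2/3, 1/6])
RK4 = ([[0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1, 0]],
       [1/6, 1/3, 1/3, 1/6])

def energy_error(A, b, h=0.1, n=1000):
    """Absolute drift of the discrete energy after n steps from (u, v) = (1, 0)."""
    y = [1.0, 0.0]
    for _ in range(n):
        y = rk_step(f, y, h, A, b)
    return abs(0.5 * (y[0]**2 + y[1]**2) - 0.5)

err3 = energy_error(*RK3)   # visible numerical dissipation
err4 = energy_error(*RK4)   # far smaller energy drift
```

The three-stage scheme dissipates energy at a clearly measurable rate even at this modest step size, mirroring the spectral distortion the abstract attributes to the commonly used three-stage RK scheme, while the higher-order method keeps the drift orders of magnitude smaller.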

The Finke-Watzky model is the reaction set consisting of autocatalysis, A + B $\rightarrow$ 2B, and the first-order process A $\rightarrow$ B. It has been widely used to describe phenomena as diverse as the formation of transition metal nanoparticles and protein misfolding and aggregation. It can also be regarded as a simple model for the spread of a non-fatal but incurable disease. The deterministic rate equations for this reaction set are easy to solve, and the solution is used in the literature to fit experimental data. However, some applications of the Finke-Watzky model may involve systems with a small number of molecules or individuals. In such cases, a stochastic description using a Chemical Master Equation or Gillespie's Stochastic Simulation Algorithm is more appropriate than a deterministic one, all the more so because for this particular set of reactions the differences between deterministic and stochastic kinetics can be very significant. Here, we derive an analytical solution of the Chemical Master Equation for the Finke-Watzky model. We consider both the original formulation of the model, where the reactions are assumed to be irreversible, and its generalization to the case of reversible reactions. For the former, we obtain analytical expressions for the time dependence of the probabilities of the number of A molecules. For the latter, we derive the corresponding steady-state probability distribution. Our findings may have implications for modeling the spread of epidemics and chemical reactions in living cells.
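As a minimal illustration of the stochastic description mentioned in this abstract, the sketch below runs Gillespie's Stochastic Simulation Algorithm for the irreversible Finke-Watzky reactions (the rate constants and initial counts are arbitrary illustrative values, not fitted parameters). Note that both channels have the same stoichiometric effect, converting one A into one B, so in the irreversible model they differ only through the waiting-time statistics:

```python
import random

def gillespie_fw(a0, b0, k1, k2, rng):
    """Gillespie SSA for the irreversible Finke-Watzky model:
       A -> B        with propensity k1 * a
       A + B -> 2B   with propensity k2 * a * b
    Runs until all A is consumed; returns (final time, final B count)."""
    a, b, t = a0, b0, 0.0
    while a > 0:
        r1 = k1 * a          # first-order channel
        r2 = k2 * a * b      # autocatalytic channel
        rtot = r1 + r2
        t += rng.expovariate(rtot)   # exponential waiting time
        # Either channel fires: one A molecule becomes one B molecule.
        a -= 1
        b += 1
    return t, b

rng = random.Random(0)
t_end, b_end = gillespie_fw(100, 1, k1=0.01, k2=0.001, rng=rng)
```

Because the state update is the same for both channels, the final composition is deterministic (all A converted to B); the stochasticity the abstract refers to lives entirely in the distribution of conversion times and of the A-count trajectory, which is where the deterministic and stochastic kinetics can diverge strongly at small copy numbers.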

Blood rheology and microcirculation are strongly influenced by red blood cell (RBC) aggregation. The aggregability of RBCs can vary significantly due to factors such as their mechanical and membrane surface properties, which are affected by cell aging in vivo. In this study, we investigate RBC aggregability as a function of their density, a marker of cell age and mechanical properties, by separating RBCs from healthy donors into different density fractions using Percoll density gradient centrifugation. We examine the dissociation rates of aggregates in a controlled medium supplemented with Dextran, employing an extensional flow technique based on hyperbolic microfluidic constrictions and image analysis, assisted by a convolutional neural network (CNN). In contrast to other techniques, our microfluidic experimental approach highlights the behavior of RBC aggregates in dynamic flow conditions relevant to microcirculation. Our results demonstrate that aggregate dissociation is strongly correlated with cell density and that aggregates formed from the denser fractions of RBCs are significantly more robust than those from the average cell population. This study provides insight into the effect of RBC aging in vivo on their mechanical properties and aggregability, underscoring the importance of further exploration of RBC aggregation in the context of cellular senescence and its potential implications for hemodynamics. Additionally, it suggests that this technique can complement existing methods for improved evaluation of RBC aggregability in health and disease.

Sterile neutrinos are a minimal extension of the Standard Model of particle physics and a promising candidate for dark matter if their mass is in the keV range. The Karlsruhe Tritium Neutrino experiment (KATRIN), equipped with a novel multi-pixel silicon drift detector array, the TRISTAN detector, will be capable of searching for these keV-scale sterile neutrinos by investigating the kinematics of the tritium $\beta$-decay. This measurement will be performed after the completion of the neutrino mass measurement campaign. To detect a sterile neutrino signal with high sensitivity, a profound understanding of the detector response is required. In this work, we report on the characterization of a 7-pixel TRISTAN prototype detector with a laser system. We present the experimental results obtained in high-resolution scans of the detector surface with a focused laser beam and demonstrate how the charge collection and the timing of the signals generated in the detector are related to the detector geometry. A comparison of the experimental data with simulations shows good agreement.

Quantum computation for chemical problems will require the construction of guiding states with sufficient overlap with a target state. Since easily available and initializable mean-field states are characterized by an overlap that is reduced for multi-configurational electronic structures and even vanishes with growing system size, we here investigate the severity of state preparation for reaction chemistry. We emphasize weaknesses in current traditional approaches (even for weakly correlated molecules) and highlight the advantage of quantum phase estimation algorithms. An important result is the introduction of a new classification scheme for electronic structures based on orbital entanglement information. We identify two categories of multi-configurational molecules. Whereas class-1 molecules are dominated by very few determinants and often found in reaction chemistry, class-2 molecules do not allow one to single out a reasonably sized number of important determinants. The latter are particularly hard for traditional approaches and an ultimate target for quantum computation. Some open-shell iron-sulfur clusters belong to class 2. We discuss the role of the molecular orbital basis set and show that true class-2 molecules remain in this class independent of the choice of the orbital basis, with the iron-molybdenum cofactor of nitrogenase being a prototypical example. We stress that class-2 molecules can be built in a systematic fashion from open-shell centers or unsaturated carbon atoms. Our key result is that it will always be possible to initialize a guiding state for reaction chemistry in the ground state based on initial low-cost approximate electronic structure information, which is facilitated by the finite size of the atomistic structures to be considered.

High-speed imaging is central to the experimental investigation of fast phenomena such as flapping flags. Event-based cameras use new types of sensors that address typical challenges (low illumination conditions, large data transfer, and the trade-off between repetition rate and measurement duration) more efficiently and at reduced cost compared to classical frame-based fast cameras. Event-based cameras output unstructured data that frame-based algorithms cannot process. This paper proposes a general method to reconstruct the motion of a slender object, such as the centreline of a flapping flag, from raw streams of event data. Our algorithm relies on a coarse chain-like structure that encodes the current state of the line and is updated as new events occur. The algorithm is first applied to synthetic data generated from known motions, demonstrating that the method is accurate to within one percent error for tip-based, shape-based, and modal-decomposition metrics. The reconstruction accuracy degrades under simulated defects only when the defect intensities become more than two orders of magnitude larger than the values expected in experiments. The algorithm is then applied to experimental data of flapping flags, and we obtain relative errors below one percent when comparing the results with data from laser distance sensors. The reconstruction of line deformation from event-based data is accurate and robust, and unlocks the ability to perform autonomous measurements in experimental mechanics.

We demonstrate two-step phase-shifting interferometry (holography) of complex laser modes generated by a spatial light modulator (SLM), in which the amplitude and phase of the signal are determined directly from measurements of phase-shifted interferograms. The reference and signal beams are generated and phase-controlled with a single composite hologram on the SLM and propagated collinearly. This requires no additional optics and leads to measurements that are more accurate and less prone to noise, which we demonstrate with collinearly-referenced measurements of various Laguerre-Gaussian modes and structured images.

The extreme heat fluxes in the divertor region of tokamaks may require an alternative to solid plasma-facing components (PFCs) for the extraction of heat and the protection of the surrounding walls. Flowing liquid metals are proposed as an alternative, but raise additional challenges that require investigation and numerical simulation. Free-surface designs are desirable for PFCs, but steady flow profiles and surface stability must be ensured to limit undesirable interactions with the plasma. Previous studies have mainly used steady-state, 2D, or simplified models for internal flows and have not been able to adequately model free-surface liquid metal (LM) experiments. FreeMHD has therefore been developed as an open-source magnetohydrodynamics (MHD) solver for free-surface, electrically conductive flows subject to a strong external magnetic field. The solver computes incompressible free-surface flows with multi-region coupling for the investigation of MHD phenomena involving fluid and solid domains, using the finite-volume OpenFOAM framework under the low-magnetic-Reynolds-number approximation. FreeMHD is validated against analytical solutions for the velocity profiles of closed-channel flows at various Hartmann numbers and wall conductance ratios. Experimental measurements are then used to verify FreeMHD through a series of cases involving dam breaking, 3D magnetic fields, and free-surface LM flows. These results demonstrate that FreeMHD is a reliable tool for the design of LM systems under free-surface conditions at the reactor scale. Furthermore, it is flexible, computationally inexpensive, and can be used to solve fully 3D transient MHD flows.

The change of the vibrational energy within a molecule after collisions with another molecule plays an essential role in the evolution of molecular internal energy distributions, which is also the limiting process in the relaxation of the gas towards equilibrium. Here we investigate the energy transfer between the translational motion and the vibrational motion of the diatom during the atom-diatom collision, the simplest case involving the transfer between inter-molecular and intra-molecular energies. We are interested in the situation when the translational temperature of the gas is high, in which case there are significant probabilities for the vibrational energy to change over widely separated energy levels after a collision. Data from quasi-classical trajectory simulations of the N+N$_2$ system with \textit{ab initio} potential energies suggest that the transition probability dependence on the collisional energy possesses an ``activation-saturation'' behavior and can be described by a simple model. The model allows for explicit evaluation of the vibrational state-to-state transition rate coefficients, from which the evolution of the vibrational energy distribution from any initial conditions can be solved by the master equation approach. An example of the vibrational energy relaxation in the N+N$_2$ system mimicking the gas behind strong shocks in a hypersonic flow is shown and the results are in good agreement with available data.
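The master-equation approach mentioned in this abstract can be sketched on a toy three-level vibrational ladder. The rate model below is a placeholder (constant downward rates with up-rates fixed by detailed balance), not the activation-saturation model of the paper; the point is only that explicit integration of the master equation relaxes any initial distribution to the Boltzmann distribution:

```python
import math

# Toy three-level ladder; energies in units of kT. All rates are hypothetical.
levels = [0.0, 1.0, 2.0]
k_down = 1.0  # assumed constant downward rate coefficient

def rate(i, j):
    """Transition rate from level i to level j; up-rates follow detailed balance."""
    if i == j:
        return 0.0
    dE = levels[j] - levels[i]
    return k_down if dE < 0 else k_down * math.exp(-dE)

def relax(p, dt=0.01, steps=20000):
    """Explicit-Euler integration of the master equation
       dp_i/dt = sum_j (k_{j->i} p_j - k_{i->j} p_i)."""
    n = len(p)
    for _ in range(steps):
        dp = [sum(rate(j, i) * p[j] - rate(i, j) * p[i] for j in range(n))
              for i in range(n)]
        p = [p[i] + dt * dp[i] for i in range(n)]
    return p

# Start with all population in the highest level and relax to equilibrium.
p_eq = relax([0.0, 0.0, 1.0])
Z = sum(math.exp(-e) for e in levels)
boltzmann = [math.exp(-e) / Z for e in levels]
```

In the paper's setting the same machinery applies with state-to-state rate coefficients from the activation-saturation model in place of `rate`, and many more vibrational levels; the gain/loss structure of the equation guarantees probability conservation throughout the relaxation.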

Polaritonic chemistry has garnered increasing attention in recent years due to pioneering experimental results, which show that site- and bond-selective chemistry at room temperature is achievable through strong collective coupling to field fluctuations in optical cavities. Despite these notable experimental strides, the underlying theoretical mechanisms remain unclear. In this focus review, we highlight a fundamental theoretical link between the seemingly unrelated fields of polaritonic chemistry and spin glasses, exploring its profound implications for the theoretical framework of polaritonic chemistry. Specifically, we present a mapping of the dressed electronic structure problem under collective vibrational strong coupling to the iconic Sherrington-Kirkpatrick model of spin glasses. This mapping uncovers a collectively induced instability in the dressed electronic structure (spontaneous replica symmetry breaking), which could provide the long-sought seed for significant local chemical modifications in polaritonic chemistry. This mapping paves the way to incorporate, adjust and probe numerous spin glass concepts in polaritonic chemistry, such as frustration, aging dynamics, excess of thermal fluctuations, time-reversal symmetry breaking or stochastic resonances. Ultimately, the mapping also offers fresh insights into the applicability of spin glass theory beyond condensed matter systems and it suggests novel theoretical directions such as polarization glasses with explicitly time-dependent order parameter functions.

Vibrational strong light-matter coupling offers a promising approach for controlling chemical reactivity with infrared microcavities. This study explores the dynamics of Blackbody Infrared Radiative Dissociation (BIRD) in microcavities under weak and strong light-matter interaction regimes. Using a Master equation approach, we simulate the effects of infrared field confinement and vibrational strong coupling on BIRD rates for diatomic molecules. We present a framework explaining how infrared microcavities influence BIRD kinetics, highlighting the importance of overtone transitions in the process. Our findings reveal conditions for significant enhancement and mild suppression of radiative dissociation, establishing upper bounds for BIRD rates under weak and strong coupling. These results provide new strategies and limitations for controlling reactive processes with infrared resonators.

Super-resolution ultrasound (SRUS) visualises microvasculature beyond the ultrasound diffraction limit (wavelength($\lambda$)/2) by localising and tracking spatially isolated microbubble contrast agents. SRUS phantoms typically consist of simple tube structures, in which channel diameters below 100 $\mu$m are not available. Furthermore, these phantoms are generally fragile and unstable, have limited ground truth validation, and their simple structure limits the evaluation of SRUS algorithms. To aid SRUS development, robust and durable phantoms with known and physiologically relevant microvasculature are needed for repeatable SRUS testing. This work proposes a method to fabricate durable microvascular phantoms that allow optical gauging for SRUS validation. The method embeds a negative print of the microvasculature in polydimethylsiloxane (PDMS) to fabricate a microvascular phantom. Branching microvascular phantoms with variable microvascular density were demonstrated, with optically validated vessel diameters down to $\sim$ 60 $\mu$m ($\lambda$/5.8; $\lambda \approx$ 350 $\mu$m). SRUS imaging was performed and validated with optical measurements. The average SRUS error was 15.61 $\mu$m ($\lambda$/22) with a standard deviation of 11.44 $\mu$m. The average error decreased to 7.93 $\mu$m ($\lambda$/44) once the number of localised microbubbles surpassed 1000 per estimated diameter. In addition, the less-than-10$\%$ variation of acoustic and optical properties, and the mechanical toughness of the phantoms, measured a year after fabrication, demonstrated their long-term durability. This work presents a method to fabricate durable and optically validated complex microvascular phantoms which can be used to quantify SRUS performance and facilitate its further development.

Lattice thermal conductivity (kL) is a crucial physical property of crystals with applications in thermal management, such as heat dissipation, insulation, and thermoelectric energy conversion. However, accurately and rapidly determining kL poses a considerable challenge. In this study, we introduce a formula that achieves high precision (mean relative error = 8.97%) and provides fast predictions, taking less than one minute, for kL across a wide range of inorganic binary and ternary materials. Our interpretable, dimensionally consistent and physically grounded formula forecasts kL values for 4,601 binary and 6,995 ternary materials in the Materials Project database. Notably, we predict an undiscovered high kL value for AlBN2 (kL = 101 W/m/K) and an undetected low kL value for Cs2Se (kL = 0.98 W/m/K) at room temperature. This method for determining kL streamlines the traditionally time-consuming process associated with complex phonon physics. It provides insights into microscopic heat transport and facilitates the design and screening of materials with targeted and extreme kL values through the application of phonon engineering. Our findings offer opportunities for controlling and optimizing macroscopic transport properties of materials by engineering their bulk modulus, shear modulus, and Grüneisen parameter.

We present a theoretical study of van der Waals interaction forces in disordered linear molecule chains. We demonstrate that the interaction energy depends strongly and nonmonotonically on the disorder correlation length. Semianalytical expressions for the interaction energy are obtained.

The description of the dynamics of complex systems, in particular the capture of the interaction structure and causal relationships between elements of the system, is one of the central questions of interdisciplinary research. While the characterization of pairwise causal interactions is a relatively mature field with established theoretical concepts, and the current focus is on technical issues of their efficient estimation, it turns out that standard concepts such as Granger causality or transfer entropy may not faithfully reflect possible synergies or interactions of higher orders, phenomena highly relevant for many real-world complex systems. In this paper, we propose a generalization and refinement of the information-theoretic approach to causal inference, enabling the description of truly multivariate, rather than multiple pairwise, causal interactions, thus moving from causal networks to causal hypernetworks. In particular, while keeping the ability to control for mediating variables or common causes, in the case of purely synergetic interactions such as the exclusive disjunction, it ascribes the causal role to the multivariate causal set but \emph{not} to individual inputs, thus distinguishing it from the case of e.g. two additive univariate causes. We demonstrate this concept by application to illustrative theoretical examples as well as a biophysically realistic simulation of biological neuronal dynamics recently reported to employ synergetic computations.
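The exclusive-disjunction example invoked above can be made concrete: for Z = X XOR Y with independent fair bits, each input alone carries no information about Z, while the pair determines it completely. A short illustration using a plug-in mutual-information estimate (the code and names here are illustrative, not from the paper):

```python
import numpy as np
from collections import Counter

def mutual_info(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum(c / n * np.log2((c / n) / (px[a] / n * py[b] / n))
               for (a, b), c in pxy.items())

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 100000)
y = rng.integers(0, 2, 100000)
z = x ^ y                                 # exclusive disjunction: purely synergetic target

mi_x = mutual_info(x, z)                  # a single input alone is uninformative
mi_xy = mutual_info(list(zip(x, y)), z)   # the pair fully determines Z
print(round(mi_x, 3), round(mi_xy, 2))    # → 0.0 1.0
```

Pairwise measures see no dependence between either input and the output, yet the multivariate set carries one full bit — the situation in which the proposed hypernetwork description assigns the causal role to the set rather than to individual inputs.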

We are merging a large participatory science effort with machine learning to enhance the Hobby-Eberly Telescope Dark Energy Experiment (HETDEX). Our overall goal is to remove false positives, allowing us to use lower signal-to-noise data and sources with low goodness-of-fit. With six million classifications through Dark Energy Explorers, we can confidently determine if a source is not real at over the 94% confidence level when classified by at least ten individuals; this confidence level increases for higher signal-to-noise sources. To date, we have only been able to apply this direct analysis to 190,000 sources. The full sample of HETDEX will contain around 2-3 million sources, including nearby galaxies ([O II] emitters), distant galaxies (Lyman-alpha emitters or LAEs), false positives, and contamination from instrument issues. We can accommodate this tenfold increase by using machine learning with visually vetted samples from Dark Energy Explorers. This paper expands on our previous pilot study, increasing the visually vetted sample more than ten-fold, from 14,000 LAE candidates to 190,000. In addition, using our currently visually vetted sample, we generate a real or false positive classification for the full candidate sample of 1.2 million LAEs. We currently have approximately 17,000 volunteers from 159 countries around the world. Thus, we are applying participatory or citizen scientist analysis to our full HETDEX dataset, creating a free educational opportunity that requires no prior technical knowledge.

While much of the attention devoted to neural network methods concerns high-dimensional PDE problems, in this work we consider methods designed to work for elliptic problems on domains $\Omega \subset \mathbb{R}^d$, $d=1,2,3$, in association with more standard finite elements. We propose connecting finite elements and neural network approximations through training, i.e., using finite element spaces to compute the integrals appearing in the loss functionals. This approach retains the simplicity of classical neural network methods for PDEs, uses well-established finite element tools (and software) to compute the integrals involved, and gains in efficiency and accuracy. We demonstrate that the proposed methods are stable and, furthermore, we establish that the resulting approximations converge to the solutions of the PDE. Numerical results indicating the efficiency and robustness of the proposed algorithms are presented.

Entangled two-photon absorption (ETPA) may be a viable technique to continuously drive an excited state population in plasma for high-bandwidth spectroscopy measurements of localized plasma turbulence or impurity density. Classical two-photon absorption commonly requires a high-intensity, pulsed laser, but entangled photons with short entanglement time and high time correlation may allow for ETPA using a lower intensity, continuous-wave laser. Notably, ETPA with non-collinear entangled photon generation allows for cross-beam spatial localization of the absorption or fluorescence signal using a single laser source. Entangled photon generation, the ETPA cross-section, candidate transitions for an Ar-II species, and plans for a proof-of-principle measurement in a helicon plasma are discussed.

The analysis of event time series is in general challenging, and most time series analysis tools are of limited use for this kind of data. Recurrence analysis, a powerful concept from nonlinear time series analysis, provides several opportunities to work with event data, even for the most challenging task of comparing event time series with continuous time series. Here, the basic concept is introduced, the challenges are discussed, and the future perspectives are summarised.
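As a minimal illustration of the recurrence concept (shown here for a continuous scalar series; event data would require an event-adapted distance, e.g. an edit distance, which is not shown), a recurrence matrix marks pairs of times whose states are closer than a threshold eps:

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence matrix R[i, j] = 1 when |x_i - x_j| < eps."""
    x = np.asarray(x, dtype=float)
    return (np.abs(x[:, None] - x[None, :]) < eps).astype(int)

# A periodic signal produces the characteristic diagonal-line structure
# that recurrence quantification analysis builds on.
t = np.linspace(0.0, 4.0 * np.pi, 200)
R = recurrence_matrix(np.sin(t), eps=0.1)
print(R.shape, int(R.diagonal().min()))  # → (200, 200) 1
```

The matrix is symmetric and its main diagonal is always recurrent; structures parallel to the diagonal indicate times when the trajectory revisits earlier states.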

Detecting non-classical light is a central requirement for photonics-based quantum technologies. Unrivaled high efficiencies and low dark counts have positioned superconducting nanowire single photon detectors (SNSPDs) as the leading detector technology for fiber and integrated photonic applications. However, a central challenge lies in their integration within photonic integrated circuits regardless of material platform or surface topography. Here, we introduce a method based on transfer printing that overcomes these constraints and allows for the integration of SNSPDs onto arbitrary photonic substrates. We prove this by integrating SNSPDs and showing through-waveguide single-photon detection in commercially manufactured silicon and lithium niobate on insulator integrated photonic circuits. Our method eliminates bottlenecks to the integration of high-quality single-photon detectors, turning them into a versatile and accessible building block for scalable quantum information processing.

Subsurface oxygen in oxide-derived copper catalysts significantly influences CO$_2$ activation. However, its effect on the molecular charging process, the key to forming the CO$_2^{\delta-}$ intermediate, remains poorly understood. We employ many-body perturbation theory to investigate the impact of the structural factors induced by subsurface oxygen on the charged activation of CO$_2$. By computing the molecular single-particle state energy of the electron-accepting orbital ($\sigma^*$) on the Cu(111) surface, we examined how this molecular quasi-particle (QP) energy changes with varying adsorption sites and multiple subsurface oxygen configurations. We demonstrate that subsurface oxygen impairs CO$_2$ charging, with its presence and density being influential factors. The non-local potential proves substantial for accurate excitation energy predictions yet is not sensitive to minor atomic structural changes. More importantly, state delocalization and hybridization are critical for determining the QP energy. These insights are enlightening for designing atomic architectures to optimize catalytic performance on modified surfaces.

Future electronics require aggressive scaling of channel material thickness while maintaining device performance. Two-dimensional (2D) semiconductors are promising candidates, but despite over two decades of research, experimental performance still lags theoretical expectations. Here, we develop an oxygen-free approach to push the electrical transport of 2D field-effect transistors toward the theoretical phonon-limited intrinsic mobility. We achieve record carrier mobilities of 91 (132) cm$^2$V$^{-1}$s$^{-1}$ for mono- (bi-) layer MoS$_2$ transistors on a SiO$_2$ substrate. Statistics from over 60 devices confirm that oxygen-free fabrication enhances key figures of merit by more than an order of magnitude. While previous studies suggest that 2D transition metal dichalcogenides such as MoS$_2$ and WS$_2$ are stable in air, we show that short-term ambient exposure can degrade their device performance through irreversible oxygen chemisorption. This study emphasizes the criticality of avoiding oxygen exposure, offering guidance for device manufacturing for fundamental research and practical applications of 2D materials.

The COVID-19 pandemic accelerated the use of preprints, aiding rapid research dissemination but also facilitating the spread of misinformation. This study analyzes media coverage of preprints from 2014 to 2023, revealing a significant post-pandemic decline. Our findings suggest that heightened awareness of the risks associated with preprints has led to more cautious media practices. While the decline in preprint coverage may mitigate concerns about premature media exposure, it also raises questions about the future role of preprints in science communication, especially during emergencies. Balanced policies based on up-to-date evidence are needed to address this shift.

We integrate neural operators with diffusion models to address the spectral limitations of neural operators in surrogate modeling of turbulent flows. While neural operators offer computational efficiency, they exhibit deficiencies in capturing high-frequency flow dynamics, resulting in overly smooth approximations. To overcome this, we condition diffusion models on neural operators to enhance the resolution of turbulent structures. Our approach is validated for different neural operators on diverse datasets, including a high Reynolds number jet flow simulation and experimental Schlieren velocimetry. The proposed method significantly improves the alignment of predicted energy spectra with true distributions compared to neural operators alone. Additionally, proper orthogonal decomposition analysis demonstrates enhanced spectral fidelity in space-time. This work establishes a new paradigm for combining generative models with neural operators to advance surrogate modeling of turbulent systems, and it can be used in other scientific applications that involve microstructure and high-frequency content. See our project page: vivekoommen.github.io/NO_DM

Daily activity monitoring systems used in households provide vital information about health status, particularly for aging residents. Multiple approaches have been introduced to achieve such goals, typically classified as obtrusive or non-obtrusive. Among the obtrusive approaches are wearable devices, and among the non-obtrusive approaches are movement detection systems, including motion sensors and thermal sensor arrays (TSAs). TSA systems are advantageous in that they preserve a person's privacy while capturing their precise spatial location. In this study, human daily living activities were monitored day and night using a TSA system, and the corresponding activity time series and spatial probability distributions were constructed. The monitored activities are classified into two categories: sleeping and daily activity. Results showed the possibility of distinguishing between the classes regardless of day and night. The obtained sleep activity duration was compared with previous research using the same raw data. Results showed that the duration of sleep activity, on average, was 9 hours/day, and daily life activity was 7 hours/day. The person's spatial probability distribution was determined using the bivariate distribution for the monitored location. In conclusion, the results showed that sleeping activity was dominant. Our study showed that TSAs were the optimum choice for monitoring human activity. Our proposed approach tackled limitations encountered by previous human activity monitoring systems, such as preserving a person's privacy while still capturing their precise spatial location.

The core of quantum metrology lies in utilizing entanglement to enhance measurement precision beyond standard quantum limit. Here, we utilize the Floquet-engineered two-axis twisting (TAT) and turn dynamics to generate non-Gaussian states for quantum metrology. By employing both analytically semi-classical and quantum approaches, we find that the desired $N$-particle non-Gaussian state can be produced within a remarkably short time $t_\mathrm{opt}\propto \ln{N}/{N}$, and its quantum Fisher information $F^\mathrm{opt}_\mathrm{Q}\propto N^2$ approaches the Heisenberg limit. Moreover, using the Floquet-engineered anti-TAT-and-turn, we may implement an efficient interaction-based readout protocol to extract the signal encoded in this non-Gaussian state. This Floquet-engineered anti-TAT-and-turn approach offers a viable method to achieve effective time-reversal dynamics for improving measurement precision and resilience against detection noise, all without the need to invert the sign of the nonlinear interaction. This study paves the way for achieving entanglement-enhanced quantum metrology via rapid generation of cat-like states at high particle numbers through continuous Floquet engineering.

Atomic gravimeters are the most accurate sensors for measuring gravity; however, a significant challenge is how to achieve high precision even in the presence of noise. Here, we develop a protocol for robust high-precision atomic gravimetry based upon adaptive Bayesian quantum estimation. Our protocol incorporates a sequence of interferometry measurements taken with short to long interrogation times and offers several key advantages. Firstly, it enables a high dynamic range without the need to scan multiple fringes for pre-estimation, making it more efficient than the conventional frequentist method. Secondly, it enhances robustness against noise, allowing for a significant improvement in measurement precision in noisy environments. The enhancement can be more than $5$ times for a transportable gravimeter and up to an order of magnitude for a state-of-the-art fountain gravimeter. Notably, by optimizing the interferometry sequence, our approach can improve the scaling of the measurement precision ($\Delta g_{est}$) versus the total interrogation time ($\tilde{T}$) to $\Delta g_{est} \propto \tilde{T}^{-2}$ or even better, in contrast to the conventional $\Delta g_{est} \propto \tilde{T}^{-0.5}$. Our approach offers superior precision, increased dynamic range, and enhanced robustness, making it highly promising for a range of practical sensing applications.
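The short-to-long interrogation idea can be sketched with a toy grid-based Bayesian update (a simplified stand-in, not the authors' protocol; the fringe model, parameter range, and shot counts below are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
g_true = 0.3                              # hypothetical parameter to estimate
grid = np.linspace(0.0, 2.0, 2001)        # prior support
prior = np.full(grid.size, 1.0 / grid.size)

for T in [1, 2, 4, 8, 16]:                # short-to-long interrogation times
    for _ in range(50):                   # repeated shots at each T
        p1 = np.cos(g_true * T / 2) ** 2  # fringe model: outcome probability
        outcome = rng.random() < p1       # simulated binary measurement
        like = np.cos(grid * T / 2) ** 2  # likelihood over the grid
        prior = prior * (like if outcome else 1.0 - like)
        prior = prior / prior.sum()       # Bayesian update after each shot

g_est = grid[np.argmax(prior)]            # maximum a posteriori estimate
```

Short interrogation times give an unambiguous (if coarse) estimate, which then disambiguates the dense fringes at long times — the mechanism behind the high dynamic range described above.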

A joint image reconstruction and segmentation approach based on disentangled representation learning was trained to enable cardiac cine MR imaging in real-time and under free-breathing. An exploratory feasibility study tested the proposed method in undersampled real-time acquisitions based on an in-house developed spiral bSSFP pulse sequence in eight healthy participants and five patients with intermittent atrial fibrillation. Images and predicted LV segmentations were compared to the reference standard of ECG-gated segmented Cartesian cine in repeated breath-holds and corresponding manual segmentation. On a 5-point Likert scale, image quality of the real-time breath-hold approach and Cartesian cine was comparable in healthy participants (RT-BH: 1.99 $\pm$ .98, Cartesian: 1.94 $\pm$ .86, p=.052), but slightly inferior in free-breathing (RT-FB: 2.40 $\pm$ .98, p<.001). In patients with arrhythmia, image quality from both real-time approaches was favourable (RT-BH: 2.10 $\pm$ 1.28, p<.001, RT-FB: 2.40 $\pm$ 1.13, p<.001, Cartesian: 2.68 $\pm$ 1.13). Intra-observer reliability was good (ICC=.77, 95%-confidence interval [.75, .79], p<.001). In functional analysis, a positive bias was observed for ejection fractions derived from the proposed model compared to the clinical reference standard (RT-BH mean EF: 58.5 $\pm$ 5.6%, bias: +3.47%, 95%-confidence interval [-.86, 7.79%], RT-FB mean: 57.9 $\pm$ 10.6%, bias: +1.45%, [-3.02, 5.91%], Cartesian mean: 54.9 $\pm$ 6.7%). The introduced real-time MR imaging technique is capable of acquiring high-quality cardiac cine data in 1-2 minutes without the need for ECG gating and breath-holds. It thus offers a promising alternative to the current clinical practice of segmented acquisition, with shorter scan times, higher patient comfort and increased robustness to arrhythmia and patient incompliance.

We provide the first counter-example showing that the ground state energy of electrons in an external Coulomb potential is not always a convex function of the number of electrons. This property had been conjectured to hold for decades and it plays an important role in quantum chemistry. Our counter-example involves an external potential generated by six nuclei of small fractional charges, placed far away from each other. The ground state energy of 3 electrons is proved to be higher than the average of the energies for 2 and 4 electrons. In addition, we show that the nuclei can bind 2 or 4 electrons, but not 3. Although the conjecture remains open for real nuclei (of integer charges), our work sets some doubt on the validity of the energy convexity for general atoms and molecules.

This work tackles the critical challenge of mitigating "hardware noise" in deep analog neural networks, a major obstacle in advancing analog signal processing devices. We propose a comprehensive, hardware-agnostic solution to address both correlated and uncorrelated noise affecting the activation layers of deep neural models. The novelty of our approach lies in its ability to demystify the "black box" nature of noise-resilient networks by revealing the underlying mechanisms that reduce sensitivity to noise. In doing so, we introduce a new explainable regularization framework that harnesses these mechanisms to significantly enhance noise robustness in deep neural architectures.

Chondritic components such as chondrules and matrix are the key time capsules that can help us understand the evolution and dynamics of the protoplanetary disk from which the Solar System originated. Knowledge of where and how these components formed and to what extent they were transported in the gaseous disk provides major constraints to astrophysical models that investigate planet formation. Here, we explore whether chondrules and matrix are genetically related to each other and formed from single reservoirs per chondrite group or if every chondrite represents a unique proportion of components transported from a small number of formation reservoirs in the disk. These static versus dynamic disk interpretations of cosmochemical data have profound implications for the accretion history of the planets in the Solar System. To fully understand the relationship between chondrules and matrix and their potential complementarity, we dive into the petrological nature and origin of matrix, the chemical and isotopic compositions of chondrules and matrix and evaluate these data considering the effect of secondary alteration observed in chondrites and the potential complexity of chondrule formation. Even though we, the authors, have used different datasets and arrived at differing interpretations of chondrule-matrix relationships in the past, this review provides clarity on the existing data and has given us new directions towards future research that can resolve the complementarity debate.

Although impurities are unavoidable in real-world and experimental systems, most numerical studies on nucleation focus on pure (impurity-free) systems. As a result, the role of impurities in phase transitions remains poorly understood, especially for systems with complex free energy landscapes featuring one or more metastable intermediate phases. In this study, we employed Monte-Carlo simulations to investigate the effects of static impurities (quenched disorder) of varying length scales and surface morphologies on the nucleation mechanism and kinetics in the Gaussian Core Model (GCM) system, a model for soft colloidal systems. We first explored how the nucleation free energy barrier and critical cluster size are influenced by the fraction of pinned particles ($f_{\rm p}$) and the pinned cluster size ($n_{\rm p}$). Both the nucleation free energy barrier and critical cluster size increase sharply with increasing $f_{\rm p}$ but decrease as $n_{\rm p}$ grows, eventually approaching the homogeneous nucleation limit. On examining the impact of surface morphology on nucleation kinetics, we observed that the nucleation barrier significantly decreases with increasing the spherical pinned cluster (referred to as "seed") size of face-centred cubic (FCC), body-centred cubic (BCC), and simple cubic (SC) structures, with BCC showing the greatest facilitation. Interestingly, seeds with random surface roughness had little effect on nucleation kinetics. Additionally, the polymorphic identity of particles in the final crystalline phase is influenced by both seed surface morphology and system size. This study further provides crucial insights into the intricate relationship between substrate-induced local structural fluctuations and the selection of the polymorphic identity in the final crystalline phase, which is essential for understanding and controlling crystallization processes in experiments.

In this paper we present an analysis of the mean flow velocities, and related mass transport, which are induced by certain Equatorially-trapped water waves. In particular, we examine a recently-derived exact and explicit solution to the geophysical governing equations in the $\beta$-plane approximation at the Equator which incorporates a constant underlying current.

The thermoelectric characteristics of lead selenide (PbSe) doped with gallium (Ga) are investigated in this study. When tuned with appropriate dopants, PbSe exhibits satisfactory ZT values, making it a promising thermoelectric material. This study examines the electrical conductivity, Seebeck coefficient, thermal conductivity, and power factor of PbSe with varying amounts of added Ga. Results indicate that incorporating Ga into PbSe improves its thermoelectric performance, with a maximum ZT value of approximately 1.2 at 873 K for the optimal doping concentration of 0.005 atomic percent. This improvement is attributed to the combined effects of increased electrical conductivity and reduced thermal conductivity. These findings suggest that Ga-doped PbSe is a promising candidate for mid-temperature thermoelectric applications.

Measurements of the speed of sound in gaseous cis-1,3,3,3-tetrafluoroprop-1-ene, (R1234ze(Z)), are presented. The measurements were performed using a quasi-spherical acoustic resonator at temperatures between 307 K and 420 K and pressures up to 1.8 MPa. Ideal-gas heat capacities and acoustic virial coefficients over the same temperature range were directly calculated from the results. The relative accuracy of our determinations of the speed of sound $w$($p$,$T$) of R1234ze(Z) was approximately $\pm$ 0.02%. The accuracy of the determination of the ideal gas heat capacity ratio ${\gamma}^{0}$($T$) was approximately $\pm$ 0.25%. These data were found to be mostly consistent with the predictions of a fundamental equation of state of R1234ze(Z).

Synchronization is an important phenomenon in a wide variety of systems comprising interacting oscillatory units, whether natural (like neurons, biochemical reactions, cardiac cells) or artificial (like metronomes, power grids, Josephson junctions). The Kuramoto model provides a simple description of these systems and has been useful in their mathematical exploration. Here we investigate this model in the presence of two characteristics that may be important in applications: an external periodic influence and higher-order interactions among the units. The combination of these ingredients leads to a very rich bifurcation scenario in the dynamics of the order parameter that describes phase transitions. Our theoretical calculations are validated by numerical simulations.
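A minimal numerical sketch of such a forced Kuramoto model with a higher-order term is given below; the specific mean-field form of the triadic coupling and all parameter values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def simulate(N=2000, K1=1.5, K2=1.0, F=0.5, gamma=0.1, dt=0.01, steps=4000):
    """Euler integration of a forced Kuramoto model with a triadic term."""
    rng = np.random.default_rng(2)
    theta = rng.uniform(0.0, 2.0 * np.pi, N)
    omega = gamma * rng.standard_cauchy(N)      # Lorentzian natural frequencies
    for _ in range(steps):
        Z = np.exp(1j * theta).mean()           # Kuramoto order parameter
        pair = K1 * np.abs(Z) * np.sin(np.angle(Z) - theta)
        triad = K2 * np.abs(Z) ** 2 * np.sin(2.0 * np.angle(Z) - 2.0 * theta)
        drive = F * np.sin(0.0 - theta)         # external drive, phase 0 in its co-rotating frame
        theta = theta + dt * (omega + pair + triad + drive)
    return np.abs(np.exp(1j * theta).mean())    # final degree of synchrony R

R_final = simulate()  # well above the incoherent value for this coupling strength
```

With pairwise coupling well above the incoherence threshold, the drive and the triadic term reinforce a strongly synchronized state; sweeping K1, K2 and F is how the bifurcation structure of the order parameter would be explored numerically.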

Global estuaries and coastal regions, acting as critical interfaces for mitigating nitrogen flux to marine environments, concurrently contend with contamination from tire wear particles (TWPs). However, the effects of pristine and photoaged TWPs (P-TWP and A-TWP) and their leachates (P-TWPL and A-TWPL) on key nitrogen removal processes in estuarine sediments remain unclear. This study explored the responses of the denitrification rate, anammox rate, and nitrous oxide (N2O) accumulation to P-TWP, A-TWP, P-TWPL, and A-TWPL exposures in estuarine sediments, and assessed the potential biotoxic substances in TWPL. Results indicate that P-TWP inhibited the denitrification rate and increased N2O accumulation without significantly impacting the anammox rate. A-TWP intensified the inhibition of the denitrification rate by further reducing narG gene abundance and NAR activity, and also decreased the hzo gene abundance, HZO activity, and Candidatus Kuenenia abundance, thereby slowing the anammox rate. N2O accumulation was lower after A-TWP exposure than after P-TWP exposure, with the NIR/NOS and NOR/NOS activity ratios closely associated with N2O accumulation. Batch experiments indicated that photoaging promoted Zn release from TWPL, significantly contributing to the inhibited denitrification rate and increased N2O accumulation caused by TWPs. In addition, TWPs drive changes in microbial community structure through released additives, with the abundance of denitrifying bacteria (DNB) and anammox bacteria (AnAOB) closely linked to the Zn, Mn, and As concentrations in TWPL. This study offers insights into assessing the environmental risks of TWPs in estuarine ecosystems.

The XENONnT experiment, located at the INFN Laboratori Nazionali del Gran Sasso, Italy, features a 5.9 tonne liquid xenon time projection chamber surrounded by an instrumented neutron veto, all of which is housed within a muon veto water tank. Due to extensive shielding and advanced purification to mitigate natural radioactivity, an exceptionally low background level of (15.8 $\pm$ 1.3) events/(tonne$\cdot$year$\cdot$keV) in the (1, 30) keV region is reached in the inner part of the TPC. XENONnT is thus sensitive to a wide range of rare phenomena related to Dark Matter and Neutrino interactions, both within and beyond the Standard Model of particle physics, with a focus on the direct detection of Dark Matter in the form of weakly interacting massive particles (WIMPs). From May 2021 to December 2021, XENONnT accumulated data in rare-event search mode with a total exposure of one tonne $\cdot$ year. This paper provides a detailed description of the signal reconstruction methods, event selection procedure, and detector response calibration, as well as an overview of the detector performance in this time frame. This work establishes the foundational framework for the `blind analysis' methodology we are using when reporting XENONnT physics results.

Most of the novel energy materials contain multiple elements occupying a single site in their lattice. The exceedingly large configurational space of these materials imposes challenges in determining their ground-state structures. Coulomb energies of possible configurations generally show a satisfactory correlation to computed energies at higher levels of theory and thus allow screening for minimum-energy structures. Employing a second-order cluster expansion, we obtain an efficient Coulomb energy optimizer using Monte Carlo and genetic algorithms. The presented optimization package, GOAC (Global Optimization of Atomistic Configurations by Coulomb), can achieve a speed-up of several orders of magnitude compared to existing software. Our code is able to find low-energy configurations of complex systems involving up to $10^{920}$ structural configurations. The GOAC package thus provides an efficient method for constructing ground-state atomistic models for multi-element materials with gigantic configurational spaces.

Out-of-equilibrium fermionic quantum impurity models (QIM), describing a small interacting system coupled to a continuous fermionic bath, play an important role in condensed matter physics. Solving such models is a computationally demanding task, and a variety of computational approaches are based on finding approximate representations of the bath by a finite number of modes. In this paper, we formulate the problem of finding efficient bath representations as that of approximating the kernel of the bath's Feynman-Vernon influence functional by a sum of complex exponentials, with each term defining a fermionic pseudomode. Under mild assumptions on the analytic properties of the bath spectral density, we provide an analytic construction of pseudomodes and prove that their number scales polylogarithmically with the maximum evolution time $T$ and the inverse approximation error $1/\varepsilon$. We then demonstrate that the number of pseudomodes can be significantly reduced by an interpolative matrix decomposition (ID). Furthermore, we present a complementary approach, based on constructing rational approximations of the bath's spectral density using the ``AAA'' algorithm, followed by compression with ID. The combination of the two approaches yields a pseudomode count scaling as $N_\text{ID} \sim \log(T)\log(1/\varepsilon)$, and the agreement between the two approaches suggests that the result is close to optimal. Finally, to relate our findings to QIM, we derive an explicit Liouvillian that describes the time evolution of the combined impurity-pseudomodes system. These results establish bounds on the computational resources required for solving out-of-equilibrium QIMs, providing an efficient starting point for tensor-network methods.
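The paper's pseudomode construction and ID/AAA compression are considerably more involved, but the core idea of approximating a decaying kernel by a sum of complex exponentials can be illustrated with a classical Prony fit (an illustrative stand-in, not the authors' algorithm):

```python
import numpy as np

def prony_fit(samples, p):
    """Fit samples f_n ≈ sum_{k=1..p} c_k z_k**n via the classical Prony method."""
    n = len(samples)
    # Step 1: linear prediction f[m] = a_1 f[m-1] + ... + a_p f[m-p], m >= p.
    A = np.column_stack([samples[p - j : n - j] for j in range(1, p + 1)])
    a = np.linalg.lstsq(A, samples[p:], rcond=None)[0]
    # Step 2: roots of the prediction polynomial are the (complex) mode factors.
    z = np.roots(np.concatenate(([1.0], -a)))
    # Step 3: least-squares fit of the mode amplitudes on a Vandermonde matrix.
    V = np.vander(z, N=n, increasing=True).T        # V[m, k] = z_k**m
    c = np.linalg.lstsq(V, samples.astype(complex), rcond=None)[0]
    return c, z

# Example: a kernel with two decay rates, sampled on a uniform time grid.
t = np.arange(60) * 0.1
kernel = 2.0 * np.exp(-t) + 0.5 * np.exp(-3.0 * t)
c, z = prony_fit(kernel, p=2)
recon = (np.vander(z, N=len(t), increasing=True).T @ c).real
```

Each recovered pair $(c_k, z_k)$ corresponds to one exponential term, i.e. one "mode" in the sense used above; robust schemes for noisy or near-degenerate kernels need the ID/AAA machinery the paper develops.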

Current research on thermoelectricity focuses primarily on exploring materials with enhanced performance, leaving the fundamental understanding of the thermoelectric effect underdeveloped; this hinders further improvement of thermoelectric conversion efficiency. Moreover, the available physical pictures behind the derivation of the Kelvin relations are ambiguous, and the derivation processes are complex, demanding a deeper understanding of thermoelectric conversion phenomena. In this paper, a new physical quantity, the 'thermoelectric potential', is proposed from the physical nature of thermoelectric conversion. The quantity is expressed as the product of the Seebeck coefficient and the absolute temperature, i.e., ST. Based on the thermoelectric potential, we clarify how the various forms of energy are converted in the thermoelectric effect and present a clear physical picture. Analysis of the physical mechanism of the Seebeck effect indicates that the thermoelectric potential, rather than the temperature-gradient field, exerts the force on the charge carriers in a thermoelectric material. Based on the thermoelectric potential, the Peltier effects at different material interfaces can also be described macroscopically. Finally, the Kelvin relations are rederived using the proposed quantity, which simplifies the derivation and elucidates the physical picture of thermoelectric conversion.
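A compact sketch of how the Kelvin relations follow once $\Phi = ST$ is adopted as the potential (consistent with the abstract, but not the paper's full derivation):

```latex
% Thermoelectric potential and the Kelvin relations (sketch).
\begin{align}
  \Phi &\equiv S\,T,\\
  % Peltier heat released at an A--B junction held at temperature T:
  \Pi_{AB} &= \Phi_A - \Phi_B = (S_A - S_B)\,T
  \;\Longrightarrow\; \Pi = S\,T \quad \text{(second Kelvin relation)},\\
  % Thomson coefficient from the temperature dependence of \Phi:
  \tau &= \frac{d\Phi}{dT} - S = T\,\frac{dS}{dT} \quad \text{(first Kelvin relation)}.
\end{align}
```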

PiNNAcLe is an implementation of our adaptive learn-on-the-fly algorithm for running machine-learning potential (MLP)-based molecular dynamics (MD) simulations -- an emerging approach to simulating the large-scale, long-time dynamics of systems for which empirical forms of the PES are difficult to obtain. The algorithm addresses the challenge of parameterizing MLPs for long-time-scale MD simulations by validating simulation results at adaptive time intervals. This approach eliminates the need for uncertainty-quantification methods when labelling new data, and thus avoids their additional computational cost and arbitrariness. The algorithm is implemented in the Nextflow workflow language (Di Tommaso et al., 2017). Components such as the MD simulation and MLP engines are designed in a modular fashion, and the workflows are agnostic to the implementation of these modules. This makes it easy to apply the same algorithm to different reference methods and to scale the workflow across a variety of computational resources. The code is published under the BSD 3-Clause License; the source code and documentation are hosted on GitHub. It currently supports MLP generation with the atomistic machine learning package PiNN (Shao et al., 2020), electronic structure calculations with CP2K (K\"uhne et al., 2020) and DFTB+ (Hourahine et al., 2020), and MD simulation with ASE (Larsen et al., 2017).
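The validate-at-adaptive-intervals idea can be sketched schematically as a control loop; this is an illustrative reimplementation with hypothetical callables, not the PiNNAcLe/Nextflow interface:

```python
# Schematic learn-on-the-fly loop: run MD in chunks, validate sampled frames
# against a reference calculator, and retrain the MLP only when validation
# fails. The validation interval stretches on success and shrinks on failure.
def learn_on_the_fly(run_md, validate, retrain, total_steps,
                     interval=10, grow=2.0, shrink=0.5,
                     min_interval=1, max_interval=1000):
    step = 0
    while step < total_steps:
        chunk = min(interval, total_steps - step)
        frames = run_md(chunk)            # propagate MD with the current MLP
        if validate(frames):              # reference check on sampled frames
            step += chunk                 # accept trajectory, stretch interval
            interval = min(int(interval * grow), max_interval)
        else:
            retrain(frames)               # label failed frames, refit the MLP
            interval = max(int(interval * shrink), min_interval)
    return step
```

Because retraining is triggered only by failed validation, no per-frame uncertainty estimate is needed, matching the motivation stated above.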

We propose a method to realize microwave-activated CZ gates between two remote spin qubits in quantum dots using an offset-charge-sensitive transmon coupler. The qubits are longitudinally coupled to the coupler, so that the transition frequency of the coupler depends on the logical qubit states; a capacitive network model using first-quantized charge operators is developed to illustrate this. Driving the coupler transition then implements a conditional phase shift on the qubits. Two pulsing schemes are investigated: a rapid, off-resonant pulse with constant amplitude, and a pulse with envelope engineering that incorporates dynamical decoupling to mitigate charge noise. We develop non-Markovian time-domain simulations to accurately model gate performance in the presence of $1/f^\beta$ charge noise. Simulation results indicate that a CZ gate fidelity exceeding 90% is possible with realistic parameters and noise models.
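Time-domain simulations of this kind require realizations of $1/f^\beta$ noise. One standard way to generate them (a generic sketch, not the authors' simulation code) is spectral synthesis with random phases:

```python
import numpy as np

def one_over_f_noise(n, beta=1.0, dt=1.0, seed=0):
    """Real time series of length n whose power spectrum scales as 1/f**beta."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, d=dt)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2.0)    # amplitude ~ f^(-beta/2) => power ~ f^(-beta)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=freqs.size)
    spectrum = amp * np.exp(1j * phases)
    spectrum[0] = 0.0                       # zero DC component => zero-mean trace
    return np.fft.irfft(spectrum, n=n)
```

With $\beta = 1$ this yields classic pink charge noise; sweeping $\beta$ lets one probe how gate fidelity degrades for different noise colors.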

Recently, physics-informed neural networks (PINNs) have emerged as a flexible and promising application of deep learning to partial differential equations in the physical sciences. While PINNs offer strong performance and competitive inference speeds on forward and inverse problems, their black-box nature limits interpretability, particularly regarding alignment with expected physical behavior. In the present work, we explore the application of influence functions (IFs) to validate and debug PINNs post hoc. Specifically, we apply variations of IF-based indicators to gauge the influence of different types of collocation points on the predictions of a PINN applied to a 2D Navier-Stokes fluid flow problem. Our results demonstrate how IFs can be adapted to PINNs, revealing their potential for post-hoc model validation and motivating further study.
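For intuition, the classic influence-function score can be computed in closed form for a least-squares model. The sketch below (hypothetical names, a plain linear model rather than a PINN) flags an outlying training point by its self-influence:

```python
import numpy as np

def influence_scores(X, y, x_test, y_test, damping=1e-6):
    """Influence of each training point on a test loss for least-squares regression.

    score_i = grad L_test . H^{-1} grad L_i (classic influence function); the
    self-influence, with (x_test, y_test) = (X[i], y[i]), is largest for points
    the fit is most sensitive to, e.g. outliers.
    """
    n, d = X.shape
    w = np.linalg.lstsq(X, y, rcond=None)[0]          # fitted parameters
    H = 2.0 * X.T @ X / n + damping * np.eye(d)       # Hessian of the mean loss
    g_test = 2.0 * (x_test @ w - y_test) * x_test     # gradient at the test point
    g_train = 2.0 * (X @ w - y)[:, None] * X          # per-sample training gradients
    return g_train @ np.linalg.solve(H, g_test)

# Example: clean linear data with one corrupted label at index 5.
xg = np.linspace(-1.0, 1.0, 20)
X = np.column_stack([xg, np.ones_like(xg)])           # fit slope and intercept
y = 1.0 * xg + 0.5
y[5] += 10.0                                          # inject an outlier
self_influence = np.array([influence_scores(X, y, X[i], y[i])[i] for i in range(20)])
```

For PINNs the Hessian-vector products must be approximated (the models are non-linear and large), which is where the IF variants studied above come in.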

Ensuring content compliance with community guidelines is crucial for maintaining healthy online social environments. However, traditional human-based compliance checking struggles to scale with the increasing volume of user-generated content and a limited number of moderators. Recent advances in natural language understanding demonstrated by large language models (LLMs) unlock new opportunities for automated content compliance verification. This work evaluates six AI agents built on open LLMs for automated rule-compliance checking in decentralized social networks, a challenging environment due to heterogeneous community scopes and rules. Analyzing over 50,000 posts from hundreds of Mastodon servers, we find that AI agents effectively detect non-compliant content, grasp linguistic subtleties, and adapt to diverse community contexts. Most agents also show high inter-rater reliability and consistency in score justification and suggestions for compliance. Human evaluation by domain experts confirms the agents' reliability and usefulness, rendering them promising tools for semi-automated or human-in-the-loop content moderation systems.

The way media reports on legal cases can significantly shape public opinion, often embedding subtle biases that influence societal views on justice and morality. Analyzing these biases requires a holistic approach that captures the emotional tone, moral framing, and specific events within the narratives. In this work, we introduce E2MoCase, a novel dataset designed to facilitate the integrated analysis of emotions, moral values, and events within legal narratives and media coverage. By leveraging advanced models for emotion detection, moral value identification, and event extraction, E2MoCase offers a multi-dimensional perspective on how legal cases are portrayed in news articles.