Developing complex, reliable advanced accelerators requires a coordinated, extensible, and comprehensive approach to modeling, from the source to the end of the beam lifetime. We present highlights of exascale computing work that scales accelerator modeling software to the requirements set by contemporary science drivers. In particular, we present the first laser-plasma modeling on an exaflop supercomputer using the US DOE Exascale Computing Project code WarpX. Leveraging these exascale developments, the new DOE SciDAC-5 Consortium for Advanced Modeling of Particle Accelerators (CAMPA) will advance numerical algorithms and accelerate community modeling codes in a cohesive manner: from the beam source, through energy boost, transport, injection, and storage, to application or interaction. Such start-to-end modeling will enable the exploration of hybrid accelerators, with conventional and advanced elements, as the next step for advanced accelerator modeling. Following open community standards, we seed an open ecosystem of codes that can be readily combined with each other and with machine learning frameworks. These will cover ultrafast to ultraprecise modeling for future hybrid accelerator design, even enabling virtual test stands and digital twins of accelerators that can be used in operations.
In a laser wakefield accelerator (LWFA), an intense laser pulse excites a plasma wave that traps and accelerates electrons to relativistic energies. When the pulse overlaps the accelerated electrons, it can enhance the energy gain through direct laser acceleration (DLA) by resonantly driving the betatron oscillations of the electrons in the plasma wave. The particle-in-cell (PIC) algorithm, although often the tool of choice to study DLA, contains inherent errors due to numerical dispersion and the time staggering of the electric and magnetic fields. Further, conventional PIC implementations cannot reliably disentangle the fields of the plasma wave and laser pulse, which obscures interpretation of the dominant acceleration mechanism. Here, a customized field solver that reduces errors from both numerical dispersion and time staggering is used in conjunction with a field decomposition into azimuthal modes to perform PIC simulations of DLA in an LWFA. Comparisons with traditional PIC methods, model equations, and experimental data show improved accuracy with the customized solver and convergence with an order of magnitude fewer cells. The azimuthal-mode decomposition reveals that the most energetic electrons receive comparable energy from DLA and LWFA.
We demonstrate the accuracy of ground-state energies of the transcorrelated Hamiltonian, employing sophisticated Jastrow factors obtained from variational Monte Carlo, together with the coupled-cluster and distinguishable-cluster methods at the level of singles and doubles excitations. Our results show that, already with the cc-pVTZ basis, the transcorrelated distinguishable-cluster method yields relative energies close to the complete-basis-set limit and of near full configuration interaction quality for over thirty atoms and molecules. To gauge the performance in different correlation regimes, we also investigate the breaking of the nitrogen molecule with transcorrelated coupled-cluster methods. Numerical evidence is presented to further justify an efficient way to incorporate the major effects of the three-body integrals without explicitly introducing them into the amplitude equations.
The upcoming NASA mission HelioSwarm will use nine spacecraft to make the first simultaneous multi-point measurements of space plasmas spanning multiple scales. Using the wave-telescope technique, HelioSwarm's measurements will allow for both the calculation of the power in wavevector-and-frequency space and the characterization of the associated dispersion relations of waves present in the plasma at MHD and ion-kinetic scales. This technique has been applied to the four-spacecraft Cluster and MMS missions, and its effectiveness has previously been characterized in a handful of case studies. We expand this uncertainty quantification analysis to arbitrary configurations of four through nine spacecraft for three-dimensional plane waves. We use Bayesian inference to learn equations that approximate the error in reconstructing the wavevector as a function of relative wavevector magnitude, spacecraft configuration shape, and number of spacecraft. We demonstrate the application of these equations to data drawn from a nine-spacecraft configuration, both to improve the accuracy of the technique and to expand the range of wavevector magnitudes that can be characterized.
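The idea of learning an equation for a reconstruction error can be illustrated with a minimal regression sketch. This is not the paper's actual Bayesian-inference pipeline: the power-law error model, the synthetic data, and all parameter values below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's method): fit a power-law error model
#   err ≈ a * (kL)^b
# by least squares in log-log space, mimicking the idea of learning an
# equation for wavevector reconstruction error as a function of the
# relative wavevector magnitude kL. The data are synthetic.
rng = np.random.default_rng(0)
kL = np.logspace(-2, 0, 50)                  # relative wavevector magnitude
a_true, b_true = 0.05, 2.0
err = a_true * kL**b_true * rng.lognormal(0.0, 0.1, kL.size)

# Linear regression on log-transformed data: log err = log a + b * log kL
A = np.vstack([np.ones_like(kL), np.log(kL)]).T
coef, *_ = np.linalg.lstsq(A, np.log(err), rcond=None)
a_fit, b_fit = np.exp(coef[0]), coef[1]
print(f"a ≈ {a_fit:.3f}, b ≈ {b_fit:.2f}")
```

In the actual analysis the regressors would also include configuration-shape parameters and the number of spacecraft, but the fitting principle is the same.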
In this study, molecular dynamics simulations were conducted to investigate the relaxation of the internal energy in nano-sized particles and its impact on the nucleation of atomic clusters. Quantum-mechanical potentials were utilized to analyze the growth and collisional relaxation of the internal energy of Ar$_n$H$^+$ clusters in a metastable Ar gas. The results revealed that small nano-clusters are formed in highly excited rotational-vibrational states, and that the relaxation of internal energy and the growth of these nascent clusters are concurrent processes with a strong mutual influence. Under non-equilibrium growth conditions, the relaxation of internal energy can delay the cluster growth process. The rates of cluster growth and internal energy relaxation were found to be influenced by energy-transfer collisions between cluster particles and free Ar atoms of the bath gas. Furthermore, the non-equilibrium growth and internal energy relaxation of small nano-clusters were found to depend on the structure of the cluster's atomic shells. An ensemble of molecular dynamics simulations was conducted to investigate the growth and the time evolution of the kinetic and total energies of Ar$_n$H$^+$ clusters with specified $n \leq 11$, and the results were explained by collisional relaxation processes described by the Boltzmann equation. Finally, the general relationship between the rates of internal energy relaxation and non-equilibrium growth of nano-particles is discussed.
Isaac Newton, in popular imagination the Ur-scientist, was an outstanding humanist scholar. His research on, among other subjects, ancient philosophy is thorough and appears to be connected to, and to fit within, his larger philosophical and theological agenda. It is therefore relevant to take a closer look at Newton's intellectual choices, at how and why precisely he would occupy himself with specific text-sources, and at how this interest fits into the larger picture of his scientific and intellectual endeavours. In what follows, we shall follow Newton into his study and look over his shoulder while he reads compendia and original source-texts in his personal library at Cambridge, meticulously investigating and comparing fragments and commentaries, and carefully keeping track in private notes of how they support his own developing ideas. Indeed, Newton was convinced that precursors to his own insights and discoveries were present already in Antiquity, even before the Greeks, in ancient Egypt, and he put a lot of time and effort into making this point, especially, and not incidentally, in the period between the first and the second edition of the Principia. A clear understanding of his reading of the classical sources therefore matters to our understanding of the Principia's content and gestation. In what follows we confine ourselves to the classical legacy and investigate Newton's intellectual intercourse with it.
We theorize an effect, termed the acousto-thermoelectric effect, in which temperature gradients are driven by acoustic modulation. The effect produces a dynamic and spatially varying voltage. Adiabatic acoustic fluctuations in a solid cause temperature variations and temperature gradients that generate quasi-static thermoelectric effects correlated with the time and spatial scales of the acoustic fluctuations. This phenomenon is distinct from the static thermoelectric effect in that the hot spots (heat sources) and cold spots (heat sinks) change locations and vary over short time scales. Predictions are made for a semiconductor material, indium antimonide, showing that the effect is measurable under laboratory conditions. The sample is excited by a resonant acoustic mode with a frequency of 230 kHz, a wavelength of 1.37 cm, and a pressure amplitude of 2.23 MPa (rms). The predicted peak voltage between the positions where the maximum and minimum temperatures occur is 2.6 µV. The voltage fluctuates with the same frequency as the acoustic resonance.
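As a back-of-envelope consistency check (using an assumed room-temperature Seebeck coefficient magnitude of order $300~\mu\mathrm{V\,K^{-1}}$ for indium antimonide, a typical literature value not stated in the abstract), the quasi-static thermoelectric relation $V \approx |S|\,\Delta T$ implies that the predicted 2.6 µV peak corresponds to an acoustically driven temperature difference of only a few millikelvin:

```latex
\Delta T \approx \frac{V}{|S|}
        = \frac{2.6~\mu\mathrm{V}}{300~\mu\mathrm{V\,K^{-1}}}
        \approx 9~\mathrm{mK}.
```

This small $\Delta T$ is consistent with the statement that the effect, while measurable, requires laboratory conditions.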
A shadow molecular dynamics scheme for flexible charge models is presented, where the shadow Born-Oppenheimer potential is derived from a coarse-grained approximation of range-separated density functional theory. The interatomic potential, including the atomic electronegativities and the charge-independent short-range part of the potential and force terms, is modeled by the linear atomic cluster expansion (ACE), which provides a computationally efficient alternative to many machine learning methods. The shadow molecular dynamics scheme is based on extended Lagrangian (XL) Born-Oppenheimer molecular dynamics (BOMD) [Eur. Phys. J. B 94, 164 (2021)]. XL-BOMD provides stable dynamics while avoiding the costly computational overhead associated with solving an all-to-all system of equations, which is normally required to determine the relaxed electronic ground state prior to each force evaluation. To demonstrate the proposed shadow molecular dynamics scheme for flexible charge models using the atomic cluster expansion, we emulate the dynamics generated from self-consistent charge density functional tight-binding (SCC-DFTB) theory using a second-order charge equilibration (QEq) model. The charge-independent potentials and electronegativities of the QEq model are trained for a supercell of uranium oxide (UO$_2$) and a molecular system of liquid water. The combined ACE + XL-QEq dynamics are stable over a wide range of temperatures, both for the oxide and the molecular systems, and provide a precise sampling of the Born-Oppenheimer potential energy surfaces. Accurate ground-state Coulomb energies are produced by the ACE-based electronegativity model during an NVE simulation of UO$_2$, predicted to be within 1 meV of those from SCC-DFTB on average during comparable simulations.
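The core of a second-order QEq model can be sketched in a few lines. This is a minimal illustration of the charge-equilibration idea only, not the paper's ACE + XL-QEq implementation: the electronegativities and hardness matrix below are made-up values, and the energy form is the textbook second-order expansion.

```python
import numpy as np

# Minimal charge-equilibration (QEq) sketch: minimize
#   E(q) = sum_i chi_i q_i + 0.5 * q^T J q   subject to   sum_i q_i = Q_tot
# with a Lagrange multiplier, which gives the augmented linear system
#   [ J    1 ] [ q   ]   [ -chi  ]
#   [ 1^T  0 ] [ lam ] = [ Q_tot ]
# chi and J are made-up illustrative values, not fitted parameters.
chi = np.array([4.0, 2.5, 2.5])          # atomic electronegativities (eV)
J = np.array([[10.0, 3.0, 3.0],          # hardness / Coulomb matrix (eV)
              [ 3.0, 8.0, 2.0],
              [ 3.0, 2.0, 8.0]])
Q_tot = 0.0                              # neutral system

n = len(chi)
A = np.zeros((n + 1, n + 1))
A[:n, :n] = J
A[:n, n] = 1.0                           # constraint column
A[n, :n] = 1.0                           # constraint row
b = np.concatenate([-chi, [Q_tot]])

sol = np.linalg.solve(A, b)
q, lam = sol[:n], sol[n]
print("charges:", q, "sum:", q.sum())
```

In the paper's scheme this all-to-all solve is precisely what XL-BOMD avoids at every time step; the sketch only shows what the fully relaxed charges satisfy.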
Solving complex fluid-structure interaction (FSI) problems, which are described by nonlinear partial differential equations, is crucial in various scientific and engineering applications. Traditional solvers based on computational fluid dynamics are inadequate to handle the increasing demand for large-scale and long-period simulations. The ever-increasing availability of data and the rapid advancement of deep learning (DL) have opened new avenues to tackle these challenges through data-enabled modeling. The seamless integration of DL and classic numerical techniques through a differentiable programming framework can significantly improve data-driven modeling performance. In this study, we propose a differentiable hybrid neural modeling framework for efficient simulation of FSI problems, where the numerically discretized FSI physics based on the immersed boundary method is seamlessly integrated with sequential neural networks using differentiable programming. All modules are programmed in JAX, where automatic differentiation enables gradient back-propagation over the entire model rollout trajectory, allowing the hybrid neural FSI model to be trained as a whole in an end-to-end, sequence-to-sequence manner. Through several FSI benchmark cases, we demonstrate the merit and capability of the proposed method in modeling FSI dynamics for both rigid and flexible bodies. The proposed model also demonstrates its superiority over baseline purely data-driven neural models, weakly-coupled hybrid neural models, and purely numerical FSI solvers in terms of accuracy, robustness, and generalizability.
GMP-Featurizer is a lightweight, accurate, efficient, and scalable software package for calculating the Gaussian Multipole (GMP) features \cite{GMP} of a variety of atomic systems with elements across the periodic table. Starting from the GMP feature computation module of AmpTorch \cite{amptorch}, the capability of GMP-Featurizer has since been greatly improved, in both accuracy and efficiency, as well as in the ability to parallelize across different cores and even across machines. Moreover, this Python package has very few dependencies, all of which are standard Python libraries, plus cffi for C++ code interfacing and Ray \cite{Ray} for parallelization, making it lightweight and robust. A set of unit tests is designed to ensure the reliability of its outputs. Extensive examples and tutorials, as well as two sets of pseudopotential files (needed for specifying the GMP feature set), are also included in this package for its users. Overall, this package is designed to serve as a standard implementation for chemical and materials scientists who are interested in developing models based on GMP features. The source code for this package is freely available to the public under the Apache 2.0 license.
Numerical simulation and analysis are carried out of the interactions between a 2D/3D conical shock wave and an axisymmetric boundary layer, with reference to the experiment by Kussoy et al., in which the shock was generated by a 15-deg half-angle cone in a tube at 15-deg angle of attack (AOA). Based on the RANS equations and Menter's SST turbulence model, the present study uses the newly developed WENO3-PRM211 scheme and the PHengLEI CFD platform for the computations. First, computations are performed for the 3D interaction corresponding to the conditions of the experiment by Kussoy et al., and these are then extended to cases with AOA = 10-deg and 5-deg. For comparison, 2D axisymmetric counterparts of the 3D interactions are investigated for cones coaxial with the tube and having half-cone angles of 27.35-deg, 24.81-deg, and 20.96-deg. The shock wave structure, vortex structure, variable distributions, and wall separation topology of the interaction are computed. The results show that in 2D/3D interactions, a new Mach reflection-like event occurs and a Mach stem-like structure is generated above the front of the separation bubble, which differs from the model of Babinsky for 2D planar shock wave/boundary layer interactions. A new interaction model is established to describe this behavior. The relationship between the length of the circumferentially unseparated region in the tube and the AOA of the cone indicates the existence of a critical AOA at which the length is zero; a prediction of this angle is obtained using an empirical fit and verified by computation. The occurrence of side overflow in the windward meridional plane is analyzed, and quantitative results are obtained. To elucidate the characteristics of the 3D interaction, the scale and structure of the vortex and the pressure and friction force distributions are presented and compared with those of the 2D interaction.
Under spatially-coherent light, a diffractive optical network composed of structured surfaces can be designed to perform any arbitrary complex-valued linear transformation between its input and output fields-of-view (FOVs) if the total number (N) of optimizable phase-only diffractive features is greater than or equal to ~2 Ni x No, where Ni and No refer to the number of useful pixels at the input and the output FOVs, respectively. Here we report the design of a spatially-incoherent diffractive optical processor that can approximate any arbitrary linear transformation in time-averaged intensity between its input and output FOVs. Under spatially-incoherent monochromatic light, the spatially-varying intensity point spread function, H, of a diffractive network, corresponding to a given, arbitrarily-selected linear intensity transformation, can be written as H(m,n;m',n')=|h(m,n;m',n')|^2, where h is the spatially-coherent point spread function of the same diffractive network, and (m,n) and (m',n') define the coordinates of the output and input FOVs, respectively. Using deep learning, supervised through examples of input-output profiles, we numerically demonstrate that a spatially-incoherent diffractive network can be trained to all-optically perform any arbitrary linear intensity transformation between its input and output if N is greater than or equal to ~2 Ni x No. These results constitute the first demonstration of universal linear intensity transformations performed on an input FOV under spatially-incoherent illumination and will be useful for designing all-optical visual processors that can work with incoherent, natural light.
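The relation H = |h|^2 and the resulting linearity in intensity can be checked with a toy 1-D example. This sketch only illustrates the stated mathematical relation; the random matrix h stands in for a diffractive network's coherent point spread function and is not a designed network.

```python
import numpy as np

# Toy 1-D illustration of the relation H = |h|^2 used in the text:
# a system with coherent point-spread function h maps spatially-incoherent
# input intensities linearly through the intensity PSF H(m, m') = |h(m, m')|^2.
rng = np.random.default_rng(1)
Ni, No = 8, 6
h = rng.normal(size=(No, Ni)) + 1j * rng.normal(size=(No, Ni))  # coherent PSF
H = np.abs(h) ** 2                                              # intensity PSF

I_in1 = rng.uniform(size=Ni)          # nonnegative input intensities
I_in2 = rng.uniform(size=Ni)
I_out = H @ (I_in1 + I_in2)           # time-averaged output intensity

# The intensity mapping is linear and preserves nonnegativity:
print(np.allclose(I_out, H @ I_in1 + H @ I_in2), (I_out >= 0).all())
```

Because every entry of H is a squared modulus, the realizable intensity transformations have nonnegative kernels, which is exactly why approximating an *arbitrary* linear intensity transformation requires training the underlying coherent network.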
The tau neutrino is the least studied lepton of the Standard Model (SM). The NA65/DsTau experiment aims to investigate $D_s$, the parent particle of the $\nu_\tau$, using a nuclear emulsion-based detector, and to decrease the systematic uncertainty of the $\nu_\tau$ flux prediction from over 50% to 10% for future beam-dump experiments. In the experiment, the emulsion detectors are exposed to the CERN SPS 400 GeV proton beam. To provide optimal conditions for the reconstruction of interactions, the protons are required to be uniformly distributed over the detector's surface with an average density of $10^5~\rm{cm^{-2}}$ and a fluctuation of less than 10%. To address this requirement, we developed a new proton irradiation system called the target mover. The new target mover provided irradiation with a proton density of $0.98\times 10^5~\rm{cm^{-2}}$ and a density fluctuation of $2.0\pm 0.3$% in the DsTau 2021 run.
Chip-based optical frequency combs address the demand for compact, bright, coherent light sources of equidistant phase-locked lines. Traditionally, the Fourier Transform Spectroscopy (FTS) technique has been considered a suboptimal choice for resolving comb lines in chip-based sensing applications due to the requirement of long optical delays and the spectral distortion introduced by the instrumental line shape. Here, we develop a sub-nominal resolution FTS technique that precisely extracts the comb's offset frequency in any spectral region directly from the measured interferogram, without resorting to nonlinear $f$-to-$2f$ interferometry. This in turn enables MHz-resolution spectrometry with millimeter optical retardations. Low-pressure MHz-wide absorption lines probed by widely-tunable, electrically pumped chip-scale mid-infrared OFCs are fully resolved over a span of tens of nanometers. This versatile technique paves the way for compact, electrostatically-actuated, or even all-on-chip high-fidelity FTS, and can be readily applied to boost the resolution of existing commercial instruments several hundred times.
Integration of thin-film oxide piezoelectrics on glass is imperative for the next generation of transparent electronics to attain sensing and actuating functions. However, their crystallization temperature (above 650 °C) is incompatible with most commercial glasses. Guided by finite element analysis, we developed a low-temperature flash lamp process for direct growth of piezoelectric lead zirconate titanate films. The process enables crystallization on various types of glasses in only a few seconds. The ferroelectric, dielectric, and piezoelectric properties (e$_{33,f}$ of -5 C m$^{-2}$) of these films are comparable to those of films processed with standard rapid thermal annealing at 700 °C. To demonstrate applicability, a surface haptic device was fabricated with a 1 µm-thick film. Its ultrasonic surface deflection reached 1.5 µm at 60 V, which is sufficient for its use in surface rendering applications. This flash lamp annealing process is compatible with large glass sheets and roll-to-roll processing and therefore has the potential to significantly expand the applications of piezoelectric devices on glass.
The Hamiltonian describes how a system evolves, while the scattering coefficients describe how it responds to inputs. Recent studies in non-Hermitian physics have revealed many unconventional effects. In all cases, however, non-Hermiticity such as material loss is considered only for the Hamiltonian, even when studying scattering properties. Another important component, the scattering channel, is always assumed to be lossless and time-reversal symmetric. This assumption hinders the exploration of more general and fundamental properties of non-Hermitian scattering. Here, we identify a novel kind of scattering channel that obeys time-reversal anti-symmetry. Such diffusive scattering channels overturn the conventional understanding of scattering symmetry by linking positive and negative frequencies. By probing non-Hermitian systems with diffusive channels, we reveal a hidden anti-parity-time (APT) scattering symmetry, which is distinct from the APT symmetry of Hamiltonians studied before. The symmetric and symmetry-broken scattering phases are observed for the first time as the collapse and revival of temperature oscillations. Our work highlights the overlooked role of scattering channels in the symmetry and phase transitions of non-Hermitian systems, thereby giving diffusion new life as a signal carrier. Our findings can be applied to the analysis and control of strongly dissipative phenomena such as heat transfer and charge diffusion.
With wave-particle decomposition, a unified gas-kinetic wave-particle (UGKWP) method has been developed for multiscale flow simulations. The UGKWP method captures the transport process in all flow regimes without the kinetic solver's constraint that the numerical mesh size and time step be less than the particle mean free path and collision time. In the current UGKWP method, the cell's Knudsen number, defined as the ratio of collision time to numerical time step, is used to distribute the components in the wave-particle decomposition. However, the particles in UGKWP serve mainly to capture non-equilibrium transport, and the cell's Knudsen number alone is not enough to identify the non-equilibrium state. For example, in the equilibrium flow regime with a Maxwellian distribution function, even at a large cell Knudsen number, the flow evolution can still be modelled by a Navier-Stokes solver. Therefore, to further improve efficiency, an adaptive UGKWP (AUGKWP) method is developed with the introduction of an additional Knudsen number that depends on the local flow-variable gradients. As a result, the wave-particle decomposition in UGKWP is determined by both the cell's and the gradient's Knudsen numbers, and the particles in UGKWP are used solely to capture the non-equilibrium flow transport. The AUGKWP is much more efficient than the previous method, which used only the cell's Knudsen number to determine the wave-particle composition. Many numerical tests, including the Sod tube, shock structure, flow around a cylinder, flow around a reentry capsule, and an unsteady nozzle plume flow, have been conducted to validate the accuracy and efficiency of AUGKWP. Compared with the original UGKWP, AUGKWP achieves the same accuracy but has advantages in memory reduction and computational efficiency in simulations of flows with co-existing multiple regimes.
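The adaptive decision described above can be sketched as a simple two-criterion switch. This is a hedged illustration of the idea only: the specific form of the gradient-based Knudsen number and the threshold values below are assumptions for illustration, not the paper's exact criteria.

```python
# Hedged sketch of the adaptive decomposition idea: a cell needs the
# particle (non-equilibrium) component only when BOTH the cell Knudsen
# number Kn_cell = tau / dt AND a gradient-based Knudsen number
# Kn_grad ~ mfp * |grad rho| / rho exceed thresholds. The definition of
# Kn_grad and the threshold values are illustrative assumptions.
def needs_particles(tau, dt, mfp, rho, grad_rho,
                    kn_cell_c=1.0, kn_grad_c=0.05):
    kn_cell = tau / dt                     # collision time / time step
    kn_grad = mfp * abs(grad_rho) / rho    # mean free path * relative gradient
    return kn_cell > kn_cell_c and kn_grad > kn_grad_c

# Equilibrium region: large cell Kn but vanishing gradients -> wave (NS) only
print(needs_particles(tau=1e-2, dt=1e-3, mfp=1e-3, rho=1.0, grad_rho=0.0))
# Shock region: large cell Kn and strong gradients -> particles needed
print(needs_particles(tau=1e-2, dt=1e-3, mfp=1e-3, rho=1.0, grad_rho=100.0))
```

The first case is exactly the Maxwellian-equilibrium example from the abstract: a large cell Knudsen number alone no longer triggers the costly particle treatment.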
A new formulation of the lateral optical coherence tomography (OCT) imaging process, and a new differential contrast method designed using this formulation, are presented. The formulation is based on a mathematical sample model called the dispersed scatterer model (DSM), in which the sample is represented as a material with a spatially slowly varying refractive index and randomly distributed scatterers embedded in the material. It is shown that the formulation represents a meaningful OCT image and speckle as two independent mathematical quantities. The new differential contrast method is based on complex signal processing of OCT images, and the physical and numerical imaging processes of this method are jointly formulated using the same theoretical strategy as in the case of OCT. The resulting formula shows that the method provides a spatially differential image of the sample structure. This differential imaging method is validated by measuring in vivo and in vitro samples.
Exhaled droplet- and aerosol-mediated transmission of respiratory diseases, including SARS-CoV-2, is exacerbated in poorly ventilated environments where body-heat-driven airflow prevails. Employing large-scale simulations, we reveal how human body heat can potentially spread pathogenic species between occupants in a room. A morphological phase transition in the airflow takes place as the distance between human heat sources is varied, which shapes novel patterns of disease transmission: for sufficiently large distances, each individual buoyant plume creates a natural barrier, forming a ``thermal armour'' that blocks suspension spread between occupants. However, for small distances, a collective effect emerges and the thermal plumes condense into a super-structure, facilitating long-distance suspension transport via crossing between convection rolls. Our quantitative analysis demonstrates that infection risk increases significantly at critical distances due to this collective behavior and phase transition. This highlights the importance of maintaining reasonable social distancing indoors to minimize viral particle transmission and offers new insights into the critical behavior of pathogen spread.
Understanding degrees of freedom is fundamental to characterizing physical systems. Counting them is usually straightforward, especially if we can assign them a clear meaning. For example, a particle moving in three-dimensional space has three degrees of freedom, one for each independent direction of motion. However, for more complex systems like spinning particles or coupled harmonic oscillators, things get more complicated, since there is no longer a direct correspondence between the degrees of freedom and the number of independent directions in the physical space in which the system exists. This paper delves into the intricacies of degrees of freedom in physical systems and their relationship with configuration and phase spaces. We first establish the well-known fact that the number of degrees of freedom is equal to the dimension of the configuration space, but show that this is only a local description. A global approach reveals that this space can have non-trivial topology and, in some cases, may not even be a manifold. By leveraging this topology, we gain a deeper understanding of the physics, and vice versa: intuition about the configuration space of a physical system can be used to better understand non-trivial topological spaces.
Recommendation algorithms play a central role in shaping the online experience of users. On the one hand, they help retrieve content that best suits users' tastes, but on the other hand, they may give rise to the so-called "filter bubble" effect, favoring the rise of polarization. In the present paper we study how a user-user collaborative-filtering algorithm affects the behavior of a group of agents repeatedly exposed to it. By means of analytical and numerical techniques, we show how the stationary state of the system depends on the strength of the similarity and popularity biases, quantifying respectively the weight given to the most similar users and to the best rated items. In particular, we derive a phase diagram of the model, where we observe three distinct phases: disorder, consensus, and polarization. In the latter, users spontaneously split into different groups, each focused on a single item. We identify, at the boundary between the disorder and polarization phases, a region where recommendations are nontrivially personalized without leading to filter bubbles. Finally, we show that our model can reproduce the behavior of users on the online music platform last.fm. This analysis paves the way to a systematic study of recommendation algorithms by means of statistical physics methods and opens the possibility of devising less polarizing recommendation algorithms.
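A user-user collaborative filter with tunable similarity and popularity biases can be sketched in a few lines. This is a generic illustration of the mechanism, not the paper's exact model: the cosine similarity, the exponents `beta` and `gamma`, and the toy rating matrix are all assumptions.

```python
import numpy as np

# Minimal user-user collaborative-filtering sketch. The exponent beta
# weights the most similar users (similarity bias) and gamma weights the
# best rated items (popularity bias); both are illustrative parameters,
# not the paper's exact model.
def recommend_scores(R, user, beta=2.0, gamma=1.0):
    # R: (n_users, n_items) nonnegative rating matrix
    norms = np.linalg.norm(R, axis=1, keepdims=True)
    sims = (R / norms) @ (R[user] / norms[user])      # cosine similarities
    sims[user] = 0.0                                  # exclude the user itself
    weights = sims ** beta                            # similarity bias
    return weights @ (R ** gamma)                     # popularity-biased scores

R = np.array([[5.0, 4.0, 0.0],
              [4.0, 5.0, 1.0],
              [0.0, 1.0, 5.0]])
scores = recommend_scores(R, user=0)
print(scores)  # user 0 is closest to user 1, so items 0 and 1 score highest
```

Raising `beta` concentrates the recommendation on the nearest-neighbor user, which is the mechanism that, in the model, can drive the system into the polarized phase.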
Score-based stochastic denoising models have recently been demonstrated to be powerful machine learning tools for conditional and unconditional image generation. The existing methods are based on a forward stochastic process wherein the training images are scaled toward zero over time and white noise is gradually added, such that the final time step is approximately zero-mean, identity-covariance Gaussian noise. A neural network is then trained to approximate the time-dependent score function, i.e., the gradient of the logarithm of the probability density, at each time step. Using this score estimator, it is possible to run an approximation of the time-reversed stochastic process to sample new images from the training data distribution. These score-based generative models have been shown to outperform generative adversarial networks on standard benchmarks and metrics. However, one issue with this approach is that it requires a large number of forward passes of the neural network. Additionally, the images at intermediate time steps are not useful, since the signal-to-noise ratio is low. In this work we present a new method called Fourier Diffusion Models, which replaces the scalar operations of the forward process with shift-invariant convolutions and the additive white noise with additive stationary noise. This allows for control of the modulation transfer function (MTF) and noise power spectrum (NPS) at intermediate time steps. Additionally, the forward process can be crafted to converge to the same MTF and NPS as the measured images, so that we can model a continuous probability flow from true images to measurements, and the sample time can be used to control the trade-off between measurement uncertainty and generative uncertainty of posterior estimates. We compare Fourier diffusion models to existing scalar diffusion models and show that they achieve a higher level of performance and allow for a smaller number of time steps.
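A single forward step of such a process can be sketched in 1-D. This is a toy illustration of the structural change only (scalar scaling replaced by a shift-invariant filter, white noise replaced by stationary colored noise); both transfer functions below are made-up, not the MTF/NPS design of the paper.

```python
import numpy as np

# Toy 1-D forward step of a "Fourier diffusion" process: the scalar
# scaling of standard diffusion becomes a shift-invariant convolution
# (a multiplication in Fourier space), and the additive white noise is
# shaped by a second filter so that it is stationary but not white.
rng = np.random.default_rng(0)
N = 64
freqs = np.fft.fftfreq(N)
signal_tf = np.exp(-(freqs / 0.1) ** 2)   # low-pass "MTF-like" transfer fn
noise_tf = 0.1 * np.abs(freqs)            # noise-shaping ("NPS-like") filter

def forward_step(x, rng):
    white = rng.normal(size=x.shape)
    x_f = np.fft.fft(x) * signal_tf               # shift-invariant convolution
    n_f = np.fft.fft(white) * noise_tf            # stationary colored noise
    return np.real(np.fft.ifft(x_f + n_f))

x0 = np.sin(2 * np.pi * np.arange(N) / N)         # a clean "image"
x1 = forward_step(x0, rng)
print(x0.shape == x1.shape)
```

Recovering standard scalar diffusion is a special case: a flat `signal_tf` reduces the convolution to a scalar multiply, and a flat `noise_tf` makes the added noise white.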
This paper describes the flow past two-dimensional porous plates at a Reynolds number ($Re$) of 30 and for a range of Darcy number ($Da$) and flow incidence ($\alpha$) values. For a plate normal to the stream and vanishing $Da$, the wake shows a vortex dipole with negligible separation from the plate. With increasing $Da$, the separation between the vortex dipole and the plate increases; the vortex dipole shortens and is eventually annihilated at a critical $Da$ between $8 \times 10^{-4}$ and $9 \times 10^{-4}$. The drag is found to decrease monotonically with $Da$. For any value of $Da$ below the critical one, the vortex dipole disappears with decreasing $\alpha$. However, this occurs through different topological stages for low and high $Da$. At low $Da$, such as $5 \times 10^{-5}$, as $\alpha$ decreases, first one saddle-node pair merges, forming a single recirculating region with negative circulation; thereafter, the second saddle-node pair merges, annihilating that region. At high $Da$, such as $5 \times 10^{-4}$, the two saddle-node pairs merge at the same critical incidence, $\alpha=44^\circ$. The magnitudes of the lift, drag, and torque decrease with $Da$. However, there exists a range of $Da$ and $\alpha$ where the magnitude of the plate-wise force component increases with $Da$, driven by the shear on the plate's pressure side. The present findings will be directly beneficial in understanding the role of permeability and solidity on small porous and bristled wings.
The nuclear polarizability effects in the hyperfine splitting of light atomic systems are not well known. The only system for which they have previously been calculated is the hydrogen atom, where these effects were shown to contribute about 5\% of the total nuclear correction. One generally expects the polarizability effects to become more pronounced for composite nuclei. In the present work we determine the nuclear polarizability correction to the hyperfine splitting in He$^+$ by comparing the effective Zemach radius deduced from the experimental hyperfine splitting with the Zemach radius obtained from electron scattering. We obtain the surprising result that the nuclear polarizability of the helion yields just 3\% of the total nuclear correction, which is smaller than for the proton.
The 3D Discrete Fourier Transform (DFT) is a technique used to solve problems in disparate fields. Nowadays, the commonly adopted implementation of the 3D-DFT is derived from the Fast Fourier Transform (FFT) algorithm. However, evidence indicates that the distributed-memory 3D-FFT algorithm does not scale well due to its use of all-to-all communication. Here, building on the work of Sedukhin \textit{et al}. [Proceedings of the 30th International Conference on Computers and Their Applications (CATA 2015), pp. 193-200 (2015)], we revisit the possibility of improving the scaling of the 3D-DFT by using an alternative approach that relies on point-to-point communication, albeit at a higher arithmetic complexity. The new algorithm exploits tensor-matrix multiplications on a volumetrically decomposed domain via three specially adapted variants of Cannon's algorithm. It has been implemented here as a C++ library called S3DFT and tested on the JUWELS Cluster at the J\"ulich Supercomputing Center. Our implementation of the shared-memory tensor-matrix multiplication attained 88\% of the theoretical single-node peak performance. One variant of the distributed-memory tensor-matrix multiplication shows excellent scaling, while the other two show poorer performance, which can be attributed to their intrinsic communication patterns. A comparison of S3DFT with the Intel MKL and FFTW3 libraries indicates that Intel MKL currently performs best overall, followed in order by FFTW3 and S3DFT. This picture might change with further improvements of the algorithm and/or when running on clusters that use network connections with higher latency, e.g. on cloud platforms.
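The algorithmic core — computing a 3D DFT as a sequence of tensor-matrix multiplications rather than nested 1D FFTs — can be sketched serially in a few lines. The sketch below only illustrates the arithmetic (the dense-matrix route, at $O(n^4)$ cost per axis sweep, that S3DFT distributes via Cannon's algorithm); it is not the library's actual C++ implementation, and the function names are ours.

```python
import numpy as np

def dft_matrix(n):
    """Dense DFT matrix F with F[j, k] = exp(-2*pi*1j*j*k/n)."""
    j = np.arange(n)
    return np.exp(-2j * np.pi * np.outer(j, j) / n)

def dft3_tensor(x):
    """3D DFT as three successive tensor-matrix multiplications,
    one per axis -- the serial analogue of the volumetric scheme."""
    n0, n1, n2 = x.shape
    x = np.einsum('ij,jkl->ikl', dft_matrix(n0), x)  # transform axis 0
    x = np.einsum('ij,kjl->kil', dft_matrix(n1), x)  # transform axis 1
    x = np.einsum('ij,klj->kli', dft_matrix(n2), x)  # transform axis 2
    return x

rng = np.random.default_rng(0)
a = rng.standard_normal((8, 8, 8))
# Agrees with the FFT-based reference, at higher arithmetic cost.
assert np.allclose(dft3_tensor(a), np.fft.fftn(a))
```

The appeal of the tensor-matrix formulation is that each axis sweep is a regular matrix product, which maps naturally onto block-distributed multiplication schemes with point-to-point communication.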
In the PANDA experiment's hypernuclear and hyperatom setup, a positioning system is required for the primary production target, which will be located in the center of the solenoid magnet, in ultra-high vacuum, and exposed to high radiation levels. In this work, a prototype positioning sensor was built using a bisected light guide for infrared light and a low-priced readout system based on microcontrollers. In contrast to many modern positioning systems that require electronics in direct proximity, this setup has no active electronic components close to the moving parts. The prototype system was operated with a resolution of better than 5 $\mu$m and a repeatability of better than $\pm$18 $\mu$m in a total of 14000 measurements. The demonstrated performance far exceeds the positioning requirement of $\pm$300 $\mu$m in the hypernuclear and hyperatom setup at PANDA.
We propose an analytical approximation for the modified Bessel function of the second kind $K_\nu$. The approximation is derived from an exponential ansatz imposing global constraints. It yields local and global errors of less than one percent and a speed-up in computing time of three orders of magnitude in comparison with traditional approaches. We demonstrate the validity of our approximation for the task of generating long-range correlated random fields.
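As context for the speed/accuracy trade-off, the snippet below compares SciPy's reference $K_\nu$ with the simplest exponential-type form, the leading large-argument asymptotic. This textbook expression is only a stand-in to illustrate what an exponential ansatz looks like; it is not the globally constrained approximation proposed in the paper.

```python
import numpy as np
from scipy.special import kv  # reference implementation of K_nu

def kv_large_x(nu, x):
    """Leading-order large-x asymptotic K_nu(x) ~ sqrt(pi/(2x)) exp(-x).
    An illustrative exponential-type form, NOT the paper's global ansatz."""
    return np.sqrt(np.pi / (2.0 * x)) * np.exp(-x)

x = np.linspace(5.0, 20.0, 50)
# For nu = 1/2 the asymptotic form happens to be exact:
err_half = np.max(np.abs(kv_large_x(0.5, x) / kv(0.5, x) - 1.0))
# For nu = 0 the relative error decays like 1/(8x), i.e. ~2.5% at x = 5:
err_zero = np.max(np.abs(kv_large_x(0.0, x) / kv(0.0, x) - 1.0))
```

A closed-form expression of this kind avoids the series and recurrence evaluations inside library routines, which is where the reported orders-of-magnitude speed-up originates.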
Meshfree Lagrangian frameworks for free surface flow simulations do not conserve fluid volume. Meshfree particle methods like SPH are not mimetic, in the sense that discrete mass conservation does not imply discrete volume conservation. On the other hand, meshfree collocation methods typically do not use any notion of mass. As a result, they are neither mass conservative nor volume conservative at the discrete level. In this paper, we give an overview of various sources of conservation errors across different meshfree methods. The present work focuses on one specific issue: unreliable volume and mass definitions. We introduce the concept of representative masses and densities, which are essential for accurate post-processing, especially in meshfree collocation methods. Using these, we introduce an artificial compression or expansion in the fluid to rectify errors in volume conservation. Numerical experiments show that the introduced frameworks significantly improve volume conservation behaviour, even for complex industrial test cases such as automotive water crossing.
Educators must make decisions about learner expectations and skills on which to focus when it comes to laboratory activities. There are various approaches, but the general pattern is to encourage students to measure ordered pairs, plot a graph to establish linear dependence, and then compute the slope of the best-fit line for an eventual scientific conclusion. To assist educators who also want to include a slope uncertainty, derived from the measurement uncertainties, as part of the expected analysis, we demonstrate a physical approach that gives both educators and their students a convenient roadmap to follow. A popular alternative is to rely solely on statistical metrics to establish the tolerance of the technique, but we argue that the statistical strategy can distract students from the true meaning of the uncertainty that is inherent in the act of making the measurements. We carry the measurement error bars from their points of origin through the regression analysis to consistently establish the physical error bars for the slope and the intercept. We then demonstrate the technique using an introductory physics experiment whose purpose is to measure the speed of sound in air.
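The propagation of per-point measurement error bars into slope and intercept error bars can be made concrete with the standard weighted least-squares formulas found in introductory error-analysis textbooks (e.g. Taylor or Bevington). This is a generic sketch of that textbook route, not the paper's specific roadmap, and the speed-of-sound-style numbers are hypothetical.

```python
import numpy as np

def weighted_fit(x, y, sigma_y):
    """Straight-line fit y = m*x + b with per-point measurement
    uncertainties sigma_y, propagating the error bars into the
    slope and intercept (standard error-analysis textbook formulas)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = 1.0 / np.asarray(sigma_y, float) ** 2
    S, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
    Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
    delta = S * Sxx - Sx**2
    m = (S * Sxy - Sx * Sy) / delta       # slope
    b = (Sxx * Sy - Sx * Sxy) / delta     # intercept
    sigma_m = np.sqrt(S / delta)          # slope error bar
    sigma_b = np.sqrt(Sxx / delta)        # intercept error bar
    return m, b, sigma_m, sigma_b

# Hypothetical speed-of-sound-style data (distance per unit time axis),
# so the slope should come out near 343 m/s:
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([343.8, 687.1, 1030.5, 1372.2, 1716.0])
m, b, sm, sb = weighted_fit(x, y, sigma_y=np.full(5, 2.0))
```

Because the weights come directly from the measurement error bars, the returned `sigma_m` and `sigma_b` are physical uncertainties traceable to the act of measurement, rather than purely statistical scatter metrics.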
In this work, we identify and characterize intra-pulse intensity noise shaping by saturable absorbers applied in mode-locked lasers and ultra-low-noise nonlinear fiber amplifiers. Reshaped intra-pulse intensity noise distributions are shown to be inevitably interconnected with self-amplitude modulation, the fundamental physical mechanism for the initiation and stabilization of ultrashort pulses in the steady state of a mode-locked laser. A theoretical model is used to describe the ultrafast saturation dynamics by an intra-pulse noise transfer function for widely applied slow and fast saturable absorbers. For experimental verification of the theoretical results, spectrally resolved relative intensity noise measurements are applied to chirped input pulses to enable the direct measurement of intra-pulse noise transfer functions using a versatile experimental platform. It is further demonstrated how the characterized intra-pulse intensity noise distribution of ultrafast laser systems can be utilized for quantum-limited intensity noise suppression via tailored optical bandpass filtering.
The detection and cross section measurement of Coherent Elastic Neutrino-Nucleus Scattering (CE$\nu$NS) is vital for particle physics, astrophysics, and nuclear physics. Therefore, a new CE$\nu$NS detection experiment is proposed in China. Undoped CsI crystals, each coupled with two photomultiplier tubes (PMTs), will be cooled down to 77 K and placed at the China Spallation Neutron Source (CSNS) to detect the CE$\nu$NS signals produced by neutrinos from stopped pion decays in the tungsten target of CSNS. Owing to the extremely high light yield of pure CsI at 77 K, the detectable signal event rate is still expected to be 0.14 kg$^{-1}$ day$^{-1}$, even though the neutrino flux is 60\% lower than at COHERENT. Low-radioactivity materials and devices will be used to construct the detector, and strong shielding will be applied to reduce the radioactive and neutron backgrounds. Dual-PMT readout should be able to reject PMT-related backgrounds such as Cherenkov light and PMT dark noise. With all the strategies above, we expect to reach a 5.1$\sigma$ signal detection significance with half a year of data taking using a 12 kg CsI detector. In this presentation, the design of the experiment will be presented. In addition, the estimation of the signal, the various kinds of background, and the expected signal sensitivity will be discussed.
A multiphase flowmeter (MPFM) is used in the upstream oil and gas industry for continuous, in-line, real-time, oil-gas-water flow measurement without fluid separation. An MPFM typically consists of phase-fraction (holdup) and velocity (or flow rate) measurements. It is desirable to have homogeneous flow at the measurement location so that the phase-fraction measurement is representative. A horizontal blind-tee pipe section is often installed to homogenize the flow entering the downstream vertical Venturi-based flowmeter; however, little information is available on the effect of horizontal blind-tee depth (HBD) on flow homogeneity. In addition, the Venturi vertical entrance length (VEL), leading to the Venturi inlet from the horizontal blind-tee outlet, is another design parameter that may affect the downstream phase distribution. The phase-fraction measurement principle requires knowledge of liquid properties (e.g. water salinity). The local liquid richness makes the horizontal blind-tee an ideal location for measuring liquid properties; however, an excessive HBD may affect the reliability of these measurements, because local vortices may degrade their representativeness if the local liquid velocity is too low. This study uses a computational fluid dynamics approach to evaluate the effect of HBD and VEL on multiphase flow measurement, including the Venturi differential pressure, the phase fractions at the Venturi inlet and throat, and the local liquid properties at the end of the horizontal blind-tee. The computational results are validated with experimental data collected in a multiphase flow facility. Appropriate HBD and VEL values are recommended.
We present a model of the ionization efficiency, or quenching factor, for low-energy nuclear recoils, based on a solution to the Lindhard integral equation with binding energy, and apply it to the calculation of the relative scintillation efficiency and charge yield for nuclear recoils in noble liquid detectors. The quenching model incorporates a constant average binding energy together with an electronic stopping power proportional to the ion velocity, and is an essential input to an analysis of charge recombination processes to predict the ionization and scintillation yields. Our results are comparable to NEST simulations of LXe and LAr and are in good agreement with available data. These studies are relevant to current and future experiments using noble liquids as targets for neutrino physics and direct searches for dark matter.
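For orientation, the classic Lindhard parametrization — with an electronic stopping power proportional to the ion velocity, but without the binding-energy correction that distinguishes the present model — can be evaluated in a few lines. The snippet below is the textbook form, not the paper's integral-equation solution.

```python
import numpy as np

def lindhard_quenching(E_keV, Z, A):
    """Textbook Lindhard ionization efficiency for a nuclear recoil of
    energy E_keV in a target of atomic number Z and mass number A.
    Classic parametrization WITHOUT the binding-energy term that the
    paper adds via the full integral equation."""
    eps = 11.5 * E_keV * Z ** (-7.0 / 3.0)      # reduced (dimensionless) energy
    k = 0.133 * Z ** (2.0 / 3.0) / np.sqrt(A)   # electronic stopping ~ ion velocity
    g = 3.0 * eps**0.15 + 0.7 * eps**0.6 + eps
    return k * g / (1.0 + k * g)

# Quenching of 10 keV and 50 keV nuclear recoils in liquid xenon (Z=54, A=131):
q_10 = lindhard_quenching(10.0, 54, 131)
q_50 = lindhard_quenching(50.0, 54, 131)
```

The quenching factor rises with recoil energy, so only a fraction of a low-energy nuclear recoil's energy appears as ionization and scintillation — the quantity the binding-energy-corrected model refines at the lowest energies.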
One of the challenges in tailoring the dynamics of active, self-propelling agents lies in arresting and releasing these agents at will. Here, we present an experimental system of active droplets with thermally controllable and reversible states of motion, ranging from unsteady to meandering to persistent to arrested motion. These states depend on the P\'eclet number of the chemical reaction driving the motion, which we can tune by using a temperature-sensitive mixture of surfactants as a fuel medium. We quantify the droplet dynamics by analysing the flow and chemical fields for the individual states, comparing them to canonical models for autophoretic particles. In the context of these models, we are able to observe in situ the fundamental first transition between the isotropic, immotile base state and self-propelled motility.
Power law distributions are widely observed in chemical physics, geophysics, biology, and beyond. The independent variable $x$ of these distributions has an obligatory lower bound and in many cases also an upper bound. Estimating these bounds from sample data is notoriously difficult, with a recent method involving $O(N^3)$ operations, where $N$ denotes sample size. Here I develop an approach for estimating the lower and upper bounds that involves $O(N)$ operations. The approach centers on calculating the mean values, $x_{min}$ and $x_{max}$, of the smallest $x$ and the largest $x$ in $N$-point samples. A fit of $x_{min}$ or $x_{max}$ as a function of $N$ yields the estimate for the lower or upper bound. Application to synthetic data demonstrates the accuracy and reliability of this approach.
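The procedure can be sketched on synthetic data: draw bounded power-law samples, record the mean of the smallest value in $N$-point samples, and watch it approach the true lower bound as $N$ grows; a fit of that curve versus $N$ then yields the bound estimate. The sampler and parameter values below are illustrative choices, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_powerlaw(n, alpha, xmin, xmax):
    """Inverse-CDF sampling of a bounded power law p(x) ~ x**(-alpha)."""
    u = rng.random(n)
    a = 1.0 - alpha
    return (u * (xmax**a - xmin**a) + xmin**a) ** (1.0 / a)

# Mean of the smallest value in an N-point sample approaches the true
# lower bound (here 1.0) as N grows.  Cost per sample is O(N): a single
# pass to find the minimum (and, symmetrically, the maximum).
Ns = [10, 100, 1000, 10000]
mean_min = [np.mean([sample_powerlaw(N, 2.5, 1.0, 100.0).min()
                     for _ in range(200)]) for N in Ns]
```

The same construction with `.max()` tracks the upper bound; in practice one would fit `mean_min` (or `mean_max`) against $N$ with the appropriate finite-$N$ form and read off the asymptote.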
The diffusive transport in two-dimensional incompressible turbulent fields is investigated with the aid of high-quality direct numerical simulations. Three classes of turbulence spectra that are able to capture both short and long-range time-space correlations and oscillating features are employed. We report novel scaling laws that depart from the $\gamma=7/10$ paradigm of percolative exponents and are dependent on the features of turbulence. A simple relation between diffusion in the percolative and frozen regimes is found. The importance of discerning between differential and integral characteristic scales is emphasized.
The UV photochemistry of small heteroaromatic molecules serves as a testbed for understanding fundamental photoinduced transformations in moderately complex compounds, including isomerization, ring-opening, and molecular dissociation. Here, a combined experimental-theoretical study of 268 nm UV light-induced dynamics in 2-iodothiophene (C$_4$H$_3$IS) is performed. The dynamics are experimentally monitored with a femtosecond XUV probe pulse that measures iodine N-edge 4d core-to-valence transitions. Experiments are complemented by density functional theory calculations of both the pump-pulse-induced valence excitations and the XUV probe-induced core-to-valence transitions. Possible intramolecular relaxation dynamics are investigated by ab initio molecular dynamics simulations. Gradual absorption changes up to $\sim$0.5-1 ps after excitation are observed for both the parent molecular species and emerging iodine fragments, with the latter appearing with a characteristic rise time of 160$\pm$30 fs. Comparison of spectral intensities and energies with the calculations identifies an iodine dissociation pathway initiated by a predominant $\pi\to\pi^*$ excitation. In contrast, initial excitation to a nearby n$_\perp\to\sigma^*$ excited state appears unlikely based on a significantly smaller oscillator strength and the absence of any corresponding XUV absorption signatures. Excitation to the $\pi\to\pi^*$ state is followed by contraction of the C-I bond, enabling a nonadiabatic transition to a dissociative $\pi\to\sigma_\textrm{C-I}^*$ state. For the subsequent fragmentation, a narrow bond-length region along the C-I stretch coordinate between 260 and 280 pm is identified, where the transition between the parent molecule and the thienyl radical + iodine atom products becomes prominent in the XUV spectrum due to rapid localization of two singly-occupied molecular orbitals on the two fragments.
The size distributions of planned and forced outages in power systems have been studied for almost two decades and have drawn great interest because they display heavy tails. This phenomenon has been interpreted with various threshold models, which are self-tuned at their critical points; however, as many papers have pointed out, the explanations remain intuitive, and more empirical data are needed to support the hypotheses. In this paper, we analyze outage data collected from various public sources to calculate the outage energy and outage duration exponents of possible power-law fits. Temporal thresholds are applied to identify crossovers from initial short-time behavior to power-law tails. We revisit and add to the possible explanations of the uniformity of these exponents. By performing power spectral analyses on the outage event time series and the outage duration time series, it is found that, on the one hand, while being overwhelmed by white noise, outage events show traits of self-organized criticality (SOC), which may be modeled by a crossover from random percolation to a directed-percolation branching process with dissipation. On the other hand, in the response to outages, the heavy tails in outage duration distributions could be a consequence of the highly optimized tolerance (HOT) mechanism, based on the optimized allocation of maintenance resources.
Multiple exposures, of a single illuminated non-configurable mask that is transversely displaced to a number of specified positions, can be used to create any desired distribution of radiant exposure. An experimental proof-of-concept is given for this idea, employing hard X-rays. The method is termed "ghost projection", since it may be viewed as a reversed form of classical ghost imaging. The written pattern is arbitrary, up to a tunable constant offset, together with a limiting spatial resolution that is governed by the finest features present in the illuminated mask. The method, which is immune to both proximity-correction and aspect-ratio issues, can be used to make a universal lithographic mask in the hard-X-ray regime. Ghost projection may also be used as a dynamically-configurable beam-shaping element, namely the hard-X-ray equivalent of a spatial light modulator. The idea may be applied to other forms of radiation and matter waves, such as gamma rays, neutrons, electrons, muons, and atomic beams.
We investigate the nature of quantum phases arising in chiral interacting Hamiltonians recently realized in Rydberg atom arrays. We classify all possible fermionic chiral spin liquids with $\mathrm{U}(1)$ global symmetry using parton construction on the honeycomb lattice. The resulting classification includes six distinct classes of gapped quantum spin liquids: the corresponding variational wave functions obtained from two of these classes accurately describe the Rydberg many-body ground state at $1/2$ and $1/4$ particle density. Complementing this analysis with tensor network simulations, we conclude that both particle filling sectors host a spin liquid with the same topological order of a $\nu=1/2$ fractional quantum Hall effect. At density $1/2$, our results clarify the phase diagram of the model, while at density $1/4$, they provide an explicit construction of the ground state wave function with almost unit overlap with the microscopic one. These findings pave the way to the use of parton wave functions to guide the discovery of quantum spin liquids in chiral Rydberg models.
The non-Hermitian skin effect, by which the eigenstates of the Hamiltonian are predominantly localized at the boundary, has revealed a strong sensitivity of non-Hermitian systems to the boundary condition. Here we experimentally observe a striking boundary-induced dynamical phenomenon known as the non-Hermitian edge burst, which is characterized by a sharp boundary accumulation of loss in non-Hermitian time evolutions. In contrast to the eigenstate localization, the edge burst represents a generic non-Hermitian dynamical phenomenon that occurs in real time. Our experiment, based on photonic quantum walks, not only confirms the prediction of the phenomenon, but also unveils its complete space-time dynamics. Our observation of the edge burst paves the way for studying the rich real-time dynamics in non-Hermitian topological systems.
Deep learning (DL) has been extensively researched in the field of computed tomography (CT) reconstruction with incomplete data, particularly in sparse-view CT reconstruction. However, applying DL to sparse-view cone beam CT (CBCT) remains challenging. Many models learn the mapping from sparse-view CT images to ground truth but struggle to achieve satisfactory performance in terms of global artifact removal. Incorporating sinogram data and utilizing dual-domain information can enhance anti-artifact performance, but this requires storing the entire sinogram in memory. This presents a memory issue for high-resolution CBCT sinograms, limiting further research and application. In this paper, we propose a cube-based 3D denoising diffusion probabilistic model (DDPM) for CBCT reconstruction using down-sampled data. A DDPM network, trained on cubes extracted from paired fully sampled sinograms and down-sampled sinograms, is employed to inpaint down-sampled sinograms. Our method divides the entire sinogram into overlapping cubes and processes these cubes in parallel using multiple GPUs, overcoming memory limitations. Experimental results demonstrate that our approach effectively suppresses few-view artifacts while preserving textural details faithfully.
We measure the quantum efficiency (QE) of individual dibenzoterrylene (DBT) molecules embedded in para-dichlorobenzene at cryogenic temperatures. To achieve this, we apply two distinct methods based on the maximal photon emission and on the power required to saturate the zero-phonon line. We find that the outcomes of the two approaches are in good agreement, reporting a large fraction of molecules with QE values above 50\%, with some exceeding 70\%. Furthermore, we observe no correlation between the observed lower bound on the QE and the lifetime of the molecule, suggesting that most of the molecules have a QE exceeding the established lower bound. This confirms the suitability of DBT for quantum optics experiments. In light of previous reports of low QE values at ambient conditions, our results hint at the possibility of a strong temperature dependence of the QE.
Inorganic solid-state battery electrolytes show high ionic conductivities and enable the fabrication of all-solid-state batteries. In this work, we present the temperature dependence of the spin-lattice relaxation time (T1), the spin-spin relaxation time (T2), and the resonance linewidth of the 7Li nuclear magnetic resonance (NMR) for four solid-state battery electrolytes (Li3InCl6 (LIC), Li3YCl6 (LYC), Li1.48Al0.48Ge1.52(PO4)3 (LAGP), and Li6PS5Cl (LPSC)) from 173 K to 403 K at a 7Li resonance frequency of 233 MHz, and from 253 K to 353 K at a 7Li resonance frequency of 291 MHz. Additionally, we measured the spin-lattice relaxation rates at an effective 7Li resonance frequency of 133 kHz using a spin-locking pulse sequence in the temperature range of 253 K to 353 K. In LPSC, the 7Li NMR relaxation is consistent with the Bloembergen-Pound-Purcell (BPP) theory of NMR relaxation of dipolar nuclei. In LIC, LYC, and LAGP, the BPP theory does not describe the NMR relaxation rates for the temperature range and frequencies of our measurements. The presented NMR relaxation data assist in providing a complete picture of Li diffusion in the four solid-state battery electrolytes.
Traditionally, it has been assumed that the stopping of a swift ion travelling through matter can be understood in terms of two essentially independent components, i.e. electronic and nuclear. Performing extensive Ehrenfest MD simulations of proton irradiation of water ice that accurately describe the non-adiabatic dynamics not only of the electrons but also of the nuclei, we have found a stopping mechanism involving the interplay of the electronic and nuclear subsystems. This effect, which consists of a kinetic-energy transfer from the projectile to the target nuclei mediated by the perturbations of the electronic density caused by the irradiation, is fundamentally different from the atomic displacements and collision cascades characteristic of nuclear stopping. Moreover, it shows a marked isotopic effect depending on the composition of the target, being relevant mostly for light water as opposed to heavy water. This result is consistent with long-standing experimental observations that have so far remained unexplained.
We performed a series of molecular dynamics simulations on monodisperse polymer melts to investigate the formation of shear banding. Under high shear rates, shear banding occurs and is intimately accompanied by entanglement heterogeneity. Interestingly, the same linear relationship between the end-to-end distance $R_{ee}$ and entanglement density $Z$ is observed both in homogeneous flow before the onset of shear banding and in the stable shear banding state, where $R_{ee} \sim [\ln(W_i^2)- \xi_0]Z$ is proposed as the criterion describing the dynamic force balance of a molecular chain in high-rate flow. We establish a scaling relation between the disentanglement rate $V_d$ and the Weissenberg number $W_i$ as $V_d \sim W_i^2$ for stable flow. Deviating from this relation leads to force imbalance and results in the emergence of shear banding. The formation of shear banding prevents chains from further stretching and disentangling. The transition from homogeneous shear to shear banding partially dissipates the increased free energy from shear and reduces the free energy of the system.
We have studied self-sustained, deformable, rotating liquid He cylinders of infinite length. In the normal fluid $^3$He case, we have employed a classical model where only surface tension and centrifugal forces are taken into account, as well as the Density Functional Theory (DFT) approach in conjunction with a semi-classical Thomas-Fermi approximation for the kinetic energy. In both approaches, if the angular velocity is sufficiently large, it is energetically favorable for the $^3$He cylinder to undergo a shape transition, acquiring an elliptic-like cross section which eventually becomes two-lobed. In the $^4$He case, we have employed a DFT approach that takes into account its superfluid character, limiting the description to vortex-free configurations where angular momentum is exclusively stored in capillary waves on a deformed cross section cylinder. The calculations allow us to carry out a comparison between the rotational behavior of a normal, rotational fluid ($^3$He) and a superfluid, irrotational fluid ($^4$He).
Stacking and twisting atom-thin sheets create superlattice structures with unique emergent properties, while tailored light fields can manipulate coherent electron transport on ultrafast timescales. The unification of these two approaches may lead to the ultrafast creation and manipulation of band structure properties, a crucial objective for the advancement of quantum technology. Here, we address this by demonstrating a tailored lightwave-driven analogue of twisted layer stacking. This results in sub-femtosecond control of time-reversal symmetry breaking and thereby band structure engineering in a hexagonal boron nitride monolayer. The results practically demonstrate the realization of the topological Haldane model in an insulator. Twisting the lightwave relative to the lattice orientation enables switching between band configurations, providing unprecedented control over the magnitude and location of the band gap and over the band curvature. The resultant asymmetric population at complementary quantum valleys leads to a measurable valley Hall current, detected via optical harmonic polarimetry. The universality and robustness of the demonstrated sub-femtosecond control open a new route to band structure engineering on the fly, paving the way towards large-scale ultrafast quantum devices for real-world applications.
A fundamental open problem in deep learning theory is how to define and understand the stability of stochastic gradient descent (SGD) close to a fixed point. Conventional literature relies on the convergence of statistical moments, especially the variance, of the parameters to quantify the stability. We revisit the definition of stability for SGD and use the \textit{convergence in probability} condition to define the \textit{probabilistic stability} of SGD. The proposed stability directly answers a fundamental question in deep learning theory: how SGD selects a meaningful solution for a neural network from an enormous number of solutions that may overfit badly. To achieve this, we show that only under the lens of probabilistic stability does SGD exhibit rich and practically relevant phases of learning, such as the phases of the complete loss of stability, incorrect learning, convergence to low-rank saddles, and correct learning. When applied to a neural network, these phase diagrams imply that SGD prefers low-rank saddles when the underlying gradient is noisy, thereby improving the learning performance. This result is in sharp contrast to the conventional wisdom that SGD prefers flatter minima to sharp ones, which we find insufficient to explain the experimental data. We also prove that the probabilistic stability of SGD can be quantified by the Lyapunov exponents of the SGD dynamics, which can easily be measured in practice. Our work potentially opens a new avenue for addressing the fundamental question of how the learning algorithm affects the learning outcome in deep learning.
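The gap between moment convergence and convergence in probability is easy to demonstrate on a one-dimensional toy model of SGD near a fixed point, where each step multiplies the parameter by a random factor. The factors below are illustrative choices, not taken from the paper; the point is that the Lyapunov exponent $\mathbb{E}[\log|a|]$ can be negative (trajectories converge in probability) even while $\mathbb{E}[a^2] > 1$ (the variance diverges).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SGD dynamics near a fixed point: theta_{t+1} = a_t * theta_t, where
# the random factor a_t mimics minibatch curvature noise (illustrative values):
factors = np.array([0.5, 4.0])
probs = np.array([0.9, 0.1])

# Lyapunov exponent: negative => theta_t -> 0 in probability.
lam = np.sum(probs * np.log(np.abs(factors)))
# Second moment of the factor: > 1 => the variance of theta_t diverges.
m2 = np.sum(probs * factors**2)

# Simulate many trajectories and check that almost all of them collapse,
# even though the moment (variance) criterion declares the dynamics unstable.
steps = rng.choice(factors, size=(2000, 1000), p=probs)
log_theta = np.sum(np.log(np.abs(steps)), axis=1)  # log|theta_T| with theta_0 = 1
frac_converged = np.mean(log_theta < np.log(1e-6))
```

Here `lam` plays the role of the measurable Lyapunov exponent: estimating the mean of `log|a_t|` along a single trajectory suffices to diagnose probabilistic stability, whereas no finite-moment statistic would certify it.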
We theoretically investigate the preparation of pure-state single-photon sources from 14 birefringent crystals (CMTC, THI, LiIO$_3$, AAS, HGS, CGA, TAS, AGS, AGSe, GaSe, LIS, LISe, LGS, and LGSe) and 8 periodically poled crystals (LT, LN, KTP, KN, BaTiO$_3$, MgBaF$_4$, PMN-0.38PT, and OP-ZnSe) in a wavelength range from 1224 nm to 11650 nm. The three kinds of group-velocity-matching (GVM) conditions, the phase-matching conditions, the spectral purity, and the Hong-Ou-Mandel interference are calculated for each crystal. This study may provide high-quality single-photon sources for quantum sensing, quantum imaging, and quantum communication applications in the mid-infrared wavelength range.
Radiation tolerance is defined as the ability of crystalline materials to withstand the accumulation of radiation-induced disorder. Based on the magnitudes of such disorder levels, semiconductors are commonly grouped into low- or high-radiation-tolerant classes. Nevertheless, upon exposure to sufficiently high fluences, all materials known so far end up with either extremely high disorder levels or amorphization. Here we show that gamma/beta double-polymorph Ga2O3 structures exhibit unprecedentedly high radiation tolerance. Specifically, in room-temperature experiments, they tolerate a disorder equivalent to hundreds of displacements per atom without severe degradation of crystallinity, in contrast with, e.g., Si, which amorphizes already when each lattice atom has been displaced just once. We explain this behavior by an interesting combination of the Ga- and O-sublattice properties in gamma-Ga2O3. In particular, the O sublattice exhibits a strong recrystallization trend, recovering the face-centered-cubic stacking despite the high mobility of O atoms in collision cascades compared to Ga. Concurrently, the characteristic structure of the Ga sublattice is nearly insensitive to the accumulated disorder. Jointly, this explains the macroscopically negligible structural deformations observed in gamma-Ga2O3 in experiment. Notably, we also explain the origin of the beta-to-gamma Ga2O3 transformation as a function of increasing disorder in beta-Ga2O3, and we study these phenomena as a function of the chemical nature of the implanted atoms. As a result, we conclude that, in terms of their radiation tolerance, gamma/beta double-polymorph Ga2O3 structures benchmark a new class of universally radiation-tolerant semiconductors.
In addition to a component of the emission that originates from clearly distinguishable coronal loops, the solar corona also exhibits extreme-ultraviolet (EUV) and X-ray ambient emission that is rather diffuse and is often considered undesirable background. Importantly, unlike the generally more structured transition region and chromosphere, the diffuse corona appears to be rather featureless. The magnetic nature of the diffuse corona, and in particular its footpoints in the lower atmosphere, are not well understood. We study the origin of the diffuse corona above the quiet-Sun network on supergranular scales. We identified regions of diffuse EUV emission in coronal images from SDO/AIA. To investigate their connection to the lower atmosphere, we combined these SDO/AIA data with transition region spectroscopic data from IRIS and with the underlying surface magnetic field information from SDO/HMI. The region of diffuse emission is of supergranular size and persists for more than five hours, during which it shows no obvious substructure. It is associated with plasma at about 1 MK that is located within and above a magnetic canopy. The canopy is formed by unipolar magnetic footpoints that show highly structured spicule-like emission in the overlying transition region. Our results suggest that the diffuse EUV emission patch forms at the base of long-ranging loops and overlies spicular structures in the transition region. Heated material might be supplied to it by means of spicular upflows, conduction-driven upflows from coronal heating events, or perhaps by flows originating from the farther footpoint. The question of how the diffuse EUV patch is sustained therefore remains open. Nevertheless, our study indicates that heated plasma trapped by long-ranging magnetic loops might substantially contribute to the featureless ambient coronal emission.
The generation of ultrashort light pulses is essential for the advancement of attosecond science. Here, we show that attosecond pulses approaching the Fourier limit can be generated through optimized optical driving of tunneling particles in solids. We propose an ansatz for the wave function of tunneling electron-hole pairs based on a rigorous expression for massive Dirac fermions, which enables efficient optimization of the waveform of the driving field. It is revealed that the dynamic sign change in the effective mass due to optical driving is crucial for shortening the pulse duration, which highlights a distinctive property of Bloch electrons that is not present in atomic gases, i.e., the periodic nature of crystals. These results show the potential of utilizing solid materials as a source of attosecond pulses.
We propose a novel device concept that uses spin-orbit torques to realize a magnetic field sensor, in which the sensor offset is eliminated through a differential measurement scheme. We derive a simple analytical formulation for the sensor signal and demonstrate its validity with numerical investigations using macrospin simulations. The sensitivity and the measurable linear sensing range of the proposed concept can be tuned either by varying the effective magnetic anisotropy or by varying the magnitude of the injected currents. We show that undesired perturbation fields normal to the sensitive direction preserve the zero-offset property and only slightly modulate the sensitivity of the proposed sensor. Higher-harmonic voltage analysis on a Hall cross experimentally confirms the linearity and the tunability via current strength. Additionally, the sensor exhibits a non-vanishing offset in the experiment, which we attribute to the anomalous Nernst effect.
Precision measurements of galactic cosmic-ray protons from PAMELA and AMS are reproduced using a well-established 3D numerical model for the period July 2006 to November 2019. The resulting modulation parameters are applied to simulate the modulation of cosmic antiprotons over the same period, which includes the minimum modulation before and after 2009, the maximum modulation from 2012 to 2015 including the reversal of the Sun's magnetic field polarity, and the approach to a new modulation minimum in 2020. Apart from their local interstellar spectra, the modulation of protons and antiprotons differs only in their charge sign and the consequent drift pattern. The lowest proton flux occurred in February-March 2014, but the lowest simulated antiproton flux is found in March-April 2015. These simulated fluxes are used to predict proton-to-antiproton ratios as a function of rigidity. The trends in these ratios help clarify, to a large extent, the charge-sign dependence of heliospheric modulation during vastly different phases of the solar activity cycle. This is reiterated and emphasized by displaying so-called hysteresis loops. It is also illustrated how the values of the parallel and perpendicular mean free paths, as well as the drift scale, vary with rigidity over this extensive period. The drift scale is found to be at its lowest level during the polarity-reversal period, while the lowest levels of the mean free paths are found in March-April 2015.
The fractional Fourier transform (FrFT), a fundamental operation in physics that corresponds to a rotation of phase space by an arbitrary angle, is also an indispensable tool in digital signal processing for noise reduction. Processing optical signals directly in their time-frequency degree of freedom bypasses the digitization step and presents an opportunity to enhance many protocols in quantum and classical communication, sensing, and computing. In this letter, we present the experimental realization of the fractional Fourier transform in the time-frequency domain using an atomic quantum-optical memory system with processing capabilities. Our scheme performs the operation by imposing programmable interleaved spectral and temporal phases. We verified the FrFT by analyzing chronocyclic Wigner functions measured via a shot-noise-limited homodyne detector. Our results hold prospects for temporal-mode sorting, processing, and super-resolved parameter estimation.
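The interleaved-phase idea has a standard numerical counterpart: an FrFT of angle theta factors into a quadratic temporal phase (chirp), an ordinary scaled Fourier transform, and a quadratic spectral phase. The sketch below is illustrative only; the grid choices, sign conventions, and function names are ours, not the experiment's.

```python
import numpy as np

def frft(f, t, theta):
    """Fractional Fourier transform of angle theta via the standard
    chirp -> scaled FT -> chirp decomposition (interleaved quadratic
    temporal and spectral phases). Assumes a uniform, symmetric grid t
    and 0 < theta < pi."""
    N = len(t)
    dt = t[1] - t[0]
    ct = 1.0 / np.tan(theta)                  # cot(theta)
    # Output grid chosen so the csc(theta)-scaled FT maps onto a plain FFT.
    du = 2.0 * np.pi * np.sin(theta) / (N * dt)
    u = (np.arange(N) - N // 2) * du
    g = f * np.exp(0.5j * ct * t**2)          # first (temporal) chirp
    # Symmetric-grid DFT approximating the continuous integral.
    G = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(g))) * dt
    A = np.sqrt((1.0 - 1j * ct) / (2.0 * np.pi))   # unitary amplitude factor
    return u, A * np.exp(0.5j * ct * u**2) * G     # second (spectral) chirp

# A Gaussian e^{-t^2/2} is an FrFT eigenfunction, so its magnitude is
# preserved for any rotation angle; at theta = pi/2 the ordinary unitary
# Fourier transform is recovered.
t = (np.arange(512) - 256) * (20.0 / 512)
u, F = frft(np.exp(-t**2 / 2), t, np.pi / 3)
```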
Damage caused by freezing wet, porous materials is a widespread problem, but it is hard to predict or control. Here, we show that polycrystallinity makes a great difference to the stress build-up process that underpins this damage. Unfrozen water in grain-boundary grooves feeds ice growth at temperatures below the freezing temperature, leading to the fast build-up of localized stresses. The process is highly variable, which we ascribe to local differences in ice-grain orientation and to the surprising mobility of many grooves, which further accelerates stress build-up. Our work will help in understanding how freezing damage occurs and in developing accurate models and effective damage-mitigation strategies.
The thermal conductivity of a $d=1$ lattice of ferromagnetically coupled planar rotators is studied through molecular dynamics. Two different types of anisotropies (local and in the coupling) are assumed in the inertial XY model. In the limit of extreme anisotropy, both models approach the Ising model and its thermal conductivity $\kappa$, which, at high temperatures, scales like $\kappa\sim T^{-3}$. This behavior reinforces the result obtained in various $d$-dimensional models, namely $\kappa \propto L\, e_{q}^{-B(L^{\gamma}T)^{\eta}}$ where $e_q^z \equiv[1+(1-q)z]^{\frac{1}{1-q}}\;(e_1^z=e^z)$, $L$ being the linear size of the $d$-dimensional macroscopic lattice. The scaling law $\frac{\eta \,\gamma}{q-1}=1$ guarantees the validity of Fourier's law, $\forall d$.
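The $q$-exponential in the conductivity scaling above can be evaluated directly from its definition, $e_q^z = [1+(1-q)z]^{1/(1-q)}$ with $e_1^z = e^z$. A minimal sketch follows; the parameter values in `kappa_scaled` are illustrative placeholders, not fitted constants from the paper. Note that for $q = 4/3$ the large-argument tail of $e_q^{-x}$ decays as $x^{-3}$, the same power as the quoted high-temperature $\kappa \sim T^{-3}$ behavior.

```python
import math

def q_exp(z, q):
    """q-exponential e_q^z = [1 + (1-q) z]^(1/(1-q)); reduces to exp(z) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(z)
    base = 1.0 + (1.0 - q) * z
    if base <= 0.0:
        return 0.0  # usual cutoff convention when the bracket turns negative
    return base ** (1.0 / (1.0 - q))

def kappa_scaled(L, T, A=1.0, B=1.0, q=4.0/3.0, gamma=1.0, eta=1.0):
    """Conductivity scaling kappa ∝ L * e_q^{-B (L^gamma T)^eta} from the
    abstract; A, B, q, gamma, eta here are illustrative values only."""
    return A * L * q_exp(-B * (L**gamma * T)**eta, q)
```

For example, `q_exp(-2.0, 4/3)` evaluates the closed form `(1 + 2/3)**(-3)`, and the ratio `q_exp(-1000, 4/3) / q_exp(-2000, 4/3)` approaches `2**3 = 8`, exhibiting the asymptotic power-law tail.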
We demonstrate a strategy for simulating wide-range X-ray scattering patterns that spans the small- and wide-angle scattering regimes as well as the scattering angles typically used for pair distribution function (PDF) analysis. Such simulated patterns can be used to test holistic analysis models and, since the diffraction intensity is on the same scale as the scattering intensity, may offer a novel pathway for determining the degree of crystallinity. The "Ultima Ratio" strategy is demonstrated on a 64-nm metal-organic framework (MOF) particle, calculated from below Q = 0.01 1/nm up to nearly Q = 150 1/nm with a resolution of 0.16 Angstrom. The computations exploit a modified 3D fast Fourier transform (3D-FFT), whose modifications enable the transformation of matrices at least up to 8000^3 voxels in size. Several of these modified 3D-FFTs are combined to improve the low-Q behaviour. The resulting curve is compared to a wide-range scattering pattern measured on a polydisperse MOF powder. While computationally intensive, the approach is expected to be useful for simulating scattering from a wide range of realistic, complex structures, from (poly-)crystalline particles to hierarchical, multicomponent structures such as viruses and catalysts.
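The core of such a calculation, the scattering intensity as the orientationally averaged squared modulus of the 3D Fourier transform of a density model, can be sketched with a plain numpy FFT. This is a small-scale illustration of the general technique under our own conventions, not the modified large-volume 3D-FFT the strategy relies on.

```python
import numpy as np

def scattering_intensity(rho, dx, nbins=None):
    """Spherically averaged scattering intensity I(q) from a cubic 3D
    density grid rho with voxel size dx, via I(q) = <|FT[rho](q)|^2>."""
    N = rho.shape[0]
    F = np.fft.fftn(rho) * dx**3                 # approximate continuous FT
    I = np.abs(F)**2
    q1 = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)   # per-axis wavevector grid
    qx, qy, qz = np.meshgrid(q1, q1, q1, indexing="ij")
    qmag = np.sqrt(qx**2 + qy**2 + qz**2).ravel()
    I = I.ravel()
    if nbins is None:
        nbins = N // 2
    edges = np.linspace(0.0, qmag.max(), nbins + 1)
    counts, _ = np.histogram(qmag, edges)        # points per |q| shell
    sums, _ = np.histogram(qmag, edges, weights=I)
    qc = 0.5 * (edges[1:] + edges[:-1])          # bin-center |q| values
    Iq = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
    return qc, Iq

# Toy model: an 8^3-voxel cube of unit density inside a 32^3 box.
rho = np.zeros((32, 32, 32))
rho[12:20, 12:20, 12:20] = 1.0
qc, Iq = scattering_intensity(rho, 1.0)
```

The forward-scattering limit I(q→0) approaches the squared total scattering mass, (Σρ · dx³)², which is a convenient sanity check on the normalization.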