Solution of the discretized Lippmann-Schwinger equation in the spatial frequency domain involves the inversion of a linear operator specified by the scattering potential. To regularize this inevitably ill-conditioned problem, we propose a machine learning approach: a recurrent neural network with long short-term memory (LSTM) and with the null-space projection of the Lippmann-Schwinger kernel on the recurrence path. The network is trained using examples of typical scattering potentials and their corresponding scattered fields. We test the proposed method in two cases: electromagnetic scattering by dielectric objects, and electron scattering by multiple screened Coulomb potentials. In both cases the solutions to test examples, disjoint from the training set, were obtained with fewer iterations than standard linear solvers and with comparable accuracy. We also observed surprising generalization ability: in the electromagnetic case, an LSTM trained with random arrangements of dielectric spheres was able to obtain the correct solutions for general topologically similar objects, such as polygons. This suggests that the LSTM successfully incorporates the physics of scattering into the inversion algorithm.

Recent advances in acquisition equipment are providing experiments with growing numbers of precise yet affordable sensors. At the same time, improved computational power from new hardware resources (GPU, FPGA, ACAP) has become available at relatively low cost. This led us to explore the possibility of completely renewing the acquisition chain of a fusion experiment, where many high-rate data sources from different diagnostics can be combined in a wide framework of algorithms. While adding new data sources from different diagnostics enriches our knowledge of the physics, the dimension of the overall model grows, making the relations among variables more and more opaque. A new approach to the integration of such heterogeneous diagnostics, based on the composition of deep \textit{variational autoencoders}, could ease this problem by acting as a structural sparse regularizer. This approach has been applied to RFX-mod experiment data, integrating the soft X-ray linear images of plasma temperature with the magnetic state. However, to ensure real-time signal analysis, these algorithmic techniques must be adapted to run on well-suited hardware. In particular, it is shown that, by quantizing the neuron transfer functions, such models can be modified to create an embedded firmware. This firmware, which approximates the deep inference model with a set of simple operations, fits well with the simple logic units that are abundant in FPGAs. This is the key factor that permits affordable hardware to host complex deep neural topologies and operate them in real time.

Being a general wave phenomenon, bound states in the continuum (BICs) appear in acoustic, hydrodynamic, and photonic systems of various dimensionalities. Here, we report the first experimental observation of an accidental electromagnetic BIC in a one-dimensional periodic chain of coaxial ceramic disks. We show that the accidental BIC manifests itself as a narrow peak in the transmission spectra of the chain placed between two loop antennas. We demonstrate a linear growth of the radiative quality factor of the BIC with the number of disks, which is well described by a tight-binding model. We estimate the number of disks at which the radiation losses become negligible in comparison to material absorption, so that the chain can be considered practically infinite. The presented analysis is supported by near-field measurements of the BIC profile. The obtained results provide useful guidelines for practical implementations of structures with BICs, opening new horizons for the development of radio-frequency and optical metadevices.

The inverse problem of designing component interactions to target emergent structure is fundamental to numerous applications in biotechnology, materials science, and statistical physics. Equally important is the inverse problem of designing emergent kinetics, but this has received considerably less attention. Using recent advances in automatic differentiation, we show how kinetic pathways can be precisely designed by directly differentiating through statistical-physics models, namely free energy calculations and molecular dynamics simulations. We consider two systems that are crucial to our understanding of structural self-assembly: bulk crystallization and small nanoclusters. In each case we are able to assemble precise dynamical features. Using gradient information, we manipulate interactions among constituent particles to tune the rate at which these systems yield specific structures of interest. Moreover, we use this approach to learn non-trivial features about the high-dimensional design space, allowing us to accurately predict when multiple kinetic features can be simultaneously and independently controlled. These results provide a concrete and generalizable foundation for studying non-structural self-assembly, including kinetic properties as well as other complex emergent properties, in a vast array of systems.
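The core idea of differentiating through a dynamics loop to tune a design parameter can be sketched with a minimal forward-mode autodiff (dual-number) implementation. The relaxation dynamics, target, and learning rate below are invented for illustration and are a toy stand-in for the paper's molecular dynamics and free energy pipelines, not the authors' code:

```python
# Minimal forward-mode autodiff (dual numbers): every arithmetic operation
# propagates a derivative alongside the value, so the gradient of the
# simulation output with respect to a design parameter comes out for free.
from dataclasses import dataclass

@dataclass
class Dual:
    val: float
    dot: float  # derivative w.r.t. the design parameter
    def __add__(self, o): return Dual(self.val + o.val, self.dot + o.dot)
    def __sub__(self, o): return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o): return Dual(self.val * o.val,
                                      self.val * o.dot + self.dot * o.val)

def simulate(k):
    """Integrate overdamped relaxation x' = -k x with 100 Euler steps and
    return the final state, carrying d(x_final)/dk through the loop."""
    x, dt = Dual(1.0, 0.0), Dual(0.01, 0.0)
    for _ in range(100):
        x = x - dt * (k * x)
    return x

# gradient descent on the design parameter k so that the state after the
# run hits a target value (a stand-in for tuning an assembly rate)
k, target = 2.0, 0.5
for _ in range(200):
    out = simulate(Dual(k, 1.0))            # seed dk/dk = 1
    loss_grad = 2 * (out.val - target) * out.dot
    k -= 2.0 * loss_grad
print(round(k, 3))  # ~0.691, the discrete analogue of ln 2
```

The same pattern, with reverse-mode autodiff and a real force field, is what frameworks built on automatic differentiation scale up to full simulations.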

A previously published neural network potential for the description of protonated water clusters up to the protonated water tetramer, H$^+$(H$_2$O)$_4$, at essentially converged coupled cluster accuracy (J. Chem. Theory Comput. 16, 88 (2020)) is applied to the protonated water hexamer, H$^+$(H$_2$O)$_6$, in its extended Zundel conformation -- a system that the neural network has never seen before. Although operating in the extrapolation regime, the potential is shown not only to allow stable quantum simulations from ultra-low temperatures of $\sim$1 K up to 100 K, but also to describe the new system very accurately compared to explicit coupled cluster calculations. Compared to the interpolation regime, the quality of the model is reduced by roughly one order of magnitude, but most of the difference from the coupled cluster reference comes from global shifts of the potential energy surface, while local energy fluctuations are well recovered. These results suggest that applying neural network potentials in extrapolation regimes can provide useful results and might be more generally applicable than usually thought.

Total rotation is a quantity that has been used for years in RSA. However, its definition is not mathematically well founded, since Euler angles do not form a vector space: finite rotations do not commute, so their composition cannot be described by componentwise addition of angles. In this work I give a mathematical definition of the total rotation by connecting the Euler description of rotations with the helical axis. The small-angle approximation is used to connect the Euler angles with the helical angle; in this approximation the Euler angles acquire the properties of a vector space and the meaning of this parameter can be justified. Validation tests showed that the total rotation has an approximation error between 5\% and 7\% for angles in the range $\left[-\frac{\pi}{6}, \frac{\pi}{6} \right]$. Since RSA usually uses smaller angle ranges, the approximation is perfectly suitable for use in RSA.
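The small-angle linearization behind this argument is the standard first-order identity for rotation matrices (a textbook fact, not taken from the validation data above):

```latex
R(\boldsymbol{\theta}) \;\approx\; I + [\boldsymbol{\theta}]_{\times},
\qquad
[\boldsymbol{\theta}]_{\times} =
\begin{pmatrix}
0 & -\theta_z & \theta_y \\
\theta_z & 0 & -\theta_x \\
-\theta_y & \theta_x & 0
\end{pmatrix},
\qquad
R(\boldsymbol{\theta}_1)\,R(\boldsymbol{\theta}_2) \;\approx\; R(\boldsymbol{\theta}_1 + \boldsymbol{\theta}_2) + O(\theta^2).
```

To first order, composition commutes and the Euler angles add componentwise like vector components; in the same limit the helical (axis-angle) vector coincides with the Euler-angle triple, which is what gives the total rotation its meaning.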

A new imaging technique for $\alpha$-particles using a fast optical camera focused on a thin scintillator is presented. As $\alpha$-particles interact in a thin layer of LYSO fast scintillator, they produce a localized flash of light. The light is collected by a lens onto an intensified optical camera, Tpx3Cam, with single-photon sensitivity and excellent spatial and temporal resolution. The interactions of photons with the camera are reconstructed by means of a custom algorithm capable of discriminating single photons using time and spatial information.

Context. The first studies with Parker Solar Probe (PSP) data have made significant progress toward understanding the fundamental properties of ion cyclotron waves in the inner heliosphere. The survey-mode particle measurements of PSP, however, did not make it possible to measure the coupling between electromagnetic fields and particles on the time scale of the wave periods. Aims. We present a novel approach to study wave-particle energy exchange with PSP. Methods. We use the Flux Angle operation mode of the Solar Probe Cup in conjunction with the electric field measurements and present a case study in which the Flux Angle mode measured the direct interaction of the proton velocity distribution with an ion cyclotron wave. Results. Our results suggest that the energy transfer from fields to particles on the timescale of a cyclotron period is equal to approximately 3-6% of the electromagnetic energy flux. This rate is consistent with the hypothesis that the ion cyclotron wave was locally generated in the solar wind.

Achieving a sustained burn through inertial confinement fusion (ICF) has been an ongoing challenge for over 50 years. Mitigating engineering limitations and improving the current design requires an understanding of the complex coupling of physical processes. While sophisticated simulation codes are used to model ICF implosions, these tools contain necessary numerical approximations and miss physical processes, which limits their predictive capability. Identifying relationships between controllable design inputs to ICF experiments and measurable outcomes (e.g. yield, shape) from performed experiments can help guide the future design of experiments and the development of simulation codes, potentially improving the accuracy of the computational models used to simulate ICF experiments. We use sparse matrix decomposition methods to identify clusters of a few related design variables. Sparse principal component analysis (SPCA) identifies groupings that are related to the physical origin of the variables (laser, hohlraum, and capsule). A variable importance analysis finds that, in addition to variables highly correlated with neutron yield such as picket power and laser energy, variables that represent a dramatic change of the ICF design, such as the number of pulse steps, are also very important. The obtained sparse components are then used to train a random forest (RF) surrogate for predicting total yield. The RF performance on the training and testing data is comparable to that of an RF surrogate trained using all design variables considered. This work is intended to inform design changes in future ICF experiments by augmenting expert intuition and simulation results.
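The SPCA step can be sketched as follows. This is a generic truncated-power-method implementation (power iteration alternated with soft-thresholding) run on synthetic variables grouped into two blocks; it is not the authors' code, and the block labels are invented for illustration:

```python
import numpy as np

def sparse_pc(X, l1=0.3, n_iter=200, seed=0):
    """Leading sparse principal component via the truncated power method:
    alternate a power-iteration step with soft-thresholding (L1 shrinkage)."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)              # center the design matrix
    C = Xc.T @ Xc / len(Xc)              # sample covariance
    v = rng.normal(size=C.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        w = C @ v
        thr = l1 * np.abs(w).max()       # threshold relative to largest loading
        w = np.sign(w) * np.maximum(np.abs(w) - thr, 0.0)
        v = w / np.linalg.norm(w)
    return v

# synthetic design matrix: a strong block of 3 correlated variables
# (think "laser") and a weaker independent block of 3 (think "capsule")
rng = np.random.default_rng(1)
a = rng.normal(size=(500, 1))
b = rng.normal(size=(500, 1))
X = np.hstack([a + 0.05 * rng.normal(size=(500, 3)),
               0.3 * b + 0.05 * rng.normal(size=(500, 3))])
v = sparse_pc(X)
print(np.round(v, 2))  # loadings concentrate on the first block; the rest shrink to ~0
```

The shrinkage drives loadings of uncorrelated variables exactly to zero, which is what produces interpretable groupings tied to the physical origin of the variables.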

In the current study, model expressions for fifth-order velocity moments, obtained from the truncated Gram-Charlier series expansion model for the probability density function of a turbulent flow field, are validated using data from direct numerical simulation (DNS) of a planar turbulent flow in a strained channel. The simplicity of the model expressions, the lack of unknown coefficients, and their applicability to non-Gaussian turbulent flows make this approach attractive for closing turbulence models based on the Reynolds-averaged Navier-Stokes equations. The study confirms the validity of the model expressions. It also shows that the imposed flow strain improves agreement between the model and DNS profiles for the fifth-order moments in the flow buffer zone, including when the flow separates. Investigation of this phenomenon reveals the sensitivity of odd velocity moments to the grid resolution. A new length scale is proposed as a criterion for grid generation near walls and other areas of high velocity gradients when higher-order statistics are collected from DNS.

The uncertainty relations in hydrodynamics are studied numerically. We first review the formulation of the generalized uncertainty relations in the stochastic variational method (SVM), following the paper by two of the present authors [Phys. Lett. A 382, 1472 (2018)]. In this approach, the origin of the finite minimum value of uncertainty is attributed to the non-differentiable (virtual) trajectory of a quantum particle, and both the Kennard and Robertson-Schr\"{o}dinger inequalities of quantum mechanics are then reproduced. The same non-differentiable trajectory is applied to the motion of fluid elements in hydrodynamics. By introducing the standard deviations of position and momentum for fluid elements, the uncertainty relations in hydrodynamics are derived. These are applicable even to the Gross-Pitaevskii equation, for which the field-theoretical uncertainty relation is reproduced. We further investigate the derived relations numerically and find that the behaviors of the uncertainty relations for liquid and gas are qualitatively different. This suggests that the uncertainty relations in hydrodynamics can be used as a criterion to distinguish liquid from gas in a fluid.
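For reference, the two quantum-mechanical inequalities recovered in the SVM framework are, in their standard textbook forms (not reproduced from the paper's notation):

```latex
\sigma_x\,\sigma_p \;\ge\; \frac{\hbar}{2}
\quad \text{(Kennard)},
\qquad
\sigma_A^2\,\sigma_B^2 \;\ge\;
\left( \frac{1}{2}\langle \{\hat{A},\hat{B}\} \rangle
      - \langle \hat{A} \rangle \langle \hat{B} \rangle \right)^{2}
+ \left( \frac{1}{2i}\langle [\hat{A},\hat{B}] \rangle \right)^{2}
\quad \text{(Robertson--Schr\"odinger)}.
```

The hydrodynamic relations of the paper generalize these by replacing the particle position and momentum with the corresponding standard deviations for fluid elements.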

An explicitly solvable model of sound-source detection and sound-wave scattering by the lateral line of a fish or amphibian is proposed and discussed. The model is based on the theory of self-adjoint perturbations of the Laplace operator by arrays of additional boundary conditions at isolated points.

Kerr microresonators driven in the normal dispersion regime typically require the presence of localized dispersion perturbations, such as those induced by avoided mode crossings, to initiate the formation of optical frequency combs. In this work, we experimentally demonstrate that this requirement can be lifted by driving the resonator with a pulsed pump source. We also show that controlling the desynchronization between the pump repetition rate and the cavity free spectral range (FSR) provides a simple mechanism to tune the center frequency of the output comb. Using a fiber mini-resonator with a radius of only 6 cm, we experimentally demonstrate spectrally flat combs with a bandwidth of 3 THz whose center frequency can be tuned by more than 2 THz. By driving the cavity at harmonics of its 0.54 GHz FSR, we are able to generate combs with line spacings selectable between 0.54 and 10.8 GHz. The ability to tune both the center frequency and the line spacing of the output comb highlights the flexibility of this platform. Additionally, we demonstrate that, under conditions of large pump-cavity desynchronization, the same cavity also supports a new form of Raman-assisted anomalous-dispersion cavity soliton.

Computational modeling and accurate simulations of localized surface plasmon resonance (LSPR) absorption properties are reported for gold nanobipyramids (GNBs), a class of metal nanoparticles that features highly tunable, geometry-dependent optical properties. GNB bicone models with spherical tips performed best in reproducing experimental LSPR spectra, while the comparison with other geometrical models provided a fundamental understanding of base-shape and tip effects on the optical properties of GNBs. Our results demonstrated the importance of averaging all geometrical parameters determined from transmission electron microscopy images to build representative models of GNBs. By assessing the performance of LSPR absorption spectra simulations based on a quasi-static approximation, we provided an applicability range for this approach as a function of nanoparticle size, paving the way to the theoretical study of the coupling between molecular electron densities and metal nanoparticles in GNB-based nanohybrid systems, with potential applications in the design of nanomaterials for bioimaging, optics, and photocatalysis.

We present a non-destructive beam profile imaging concept that utilizes machine learning tools, namely genetic algorithm with a gradient descent-like minimization. Electromagnetic fields around a charged beam carry information about its transverse profile. The electrodes of a stripline-type beam position monitor (with eight probes in this study) can pick up that information for visualization of the beam profile. We use a genetic algorithm to transform an arbitrary Gaussian beam in such a way that it eventually reconstructs the transverse position and the shape of the original beam. The algorithm requires a signal that is picked up by the stripline electrodes, and a (precise or approximate) knowledge of the beam size. It can visualize the profile of fairly distorted beams as well.
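A minimal sketch of the reconstruction idea follows. It assumes the standard 2D image-current formula for a line charge centered in a round pipe as the forward model for the eight electrode signals, and uses a toy genetic algorithm (elitist selection with annealed Gaussian mutation) to recover only the beam position; the paper's method also reconstructs the transverse shape, and its forward model and GA details differ:

```python
import math, random

R = 1.0                                        # beam-pipe radius (arbitrary units)
ANGLES = [k * math.pi / 4 for k in range(8)]   # eight stripline electrodes

def signals(x, y):
    """Wall-current density sampled at each electrode for a pencil beam at (x, y)
    (standard 2D image-current formula for a line charge in a round pipe)."""
    r, phi = math.hypot(x, y), math.atan2(y, x)
    return [(R*R - r*r) / (R*R + r*r - 2*R*r*math.cos(t - phi)) for t in ANGLES]

def fitness(cand, meas):
    return -sum((a - b) ** 2 for a, b in zip(signals(*cand), meas))

def ga(meas, pop=60, gens=120, sigma=0.1, seed=2):
    rng = random.Random(seed)
    P = [(rng.uniform(-.5, .5), rng.uniform(-.5, .5)) for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda c: fitness(c, meas), reverse=True)
        elite = P[:pop // 4]                   # selection: keep the best quarter
        P = elite + [(p[0] + rng.gauss(0, sigma), p[1] + rng.gauss(0, sigma))
                     for p in elite for _ in range(3)]   # Gaussian mutation
        sigma *= 0.97                          # anneal the mutation step
    return max(P, key=lambda c: fitness(c, meas))

true = (0.21, -0.13)                           # hypothetical beam position
est = ga(signals(*true))
print(est)                                     # recovers (x, y) to a few percent of R
```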

The paper describes the results achieved in the development of a compact, transportable, fully automated optical clock based on a single 171Yb+ ion in a radio-frequency (RF) quadrupole trap. The measurements demonstrated a relative instability of the output RF signal of 4.9E-16 at 1000 s integration time, for a device with a weight of 298.1 kg, a volume of 0.921, and an input power consumption of 2.766 kW. The ultrastable optical signal was transferred to the RF range via an optical frequency comb with a supercontinuum fiber laser generator, without loss of the initial stability and accuracy characteristics of the signal.

A multilayer perceptron (MLP) neural network is built to analyze the Cs-137 concentration in seawater from gamma-ray spectra measured by a LaBr3 detector. The MLP is trained and tested on a large data set generated by combining measured and Monte Carlo simulated spectra, under the assumption that all the measured spectra have zero Cs-137 concentration. The performance of the MLP is evaluated and compared with the traditional net-peak-area method. The results show an improvement of 7% in accuracy and 0.036 in the area under the ROC curve over the net-peak-area method. The influence of the assumed Cs-137 concentration in the training data set on the classification performance of the MLP is also evaluated.
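The baseline net-peak-area method can be sketched as follows: sum the counts in a window around the photopeak and subtract a linear (trapezoidal) baseline estimated from the channels just outside the window. This is a generic textbook implementation run on a synthetic spectrum, not the authors' analysis chain, and the channel numbers are invented for illustration:

```python
import numpy as np

def net_peak_area(spectrum, lo, hi):
    """Net peak area: gross counts in [lo, hi] minus a linear baseline
    interpolated from the channels adjacent to the peak window."""
    gross = spectrum[lo:hi + 1].sum()
    n = hi - lo + 1
    baseline = 0.5 * (spectrum[lo - 1] + spectrum[hi + 1]) * n
    return gross - baseline

# toy spectrum: smooth continuum plus a Gaussian photopeak at channel 662
chan = np.arange(1024)
continuum = 200.0 * np.exp(-chan / 400.0)
peak = 500.0 * np.exp(-0.5 * ((chan - 662) / 4.0) ** 2)
spec = continuum + peak

area = net_peak_area(spec, 646, 678)  # +/- 4 sigma window around the peak
print(round(area))  # close to the injected peak integral 500*4*sqrt(2*pi) ~ 5013
```

The method is simple and fast but degrades at low concentrations, where the peak barely rises above the continuum; that weakness is what the MLP classifier targets.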

We present a novel solution for automated beam-alignment optimization. The device is based on a Raspberry Pi computer, stepper motors, commercial optomechanics and electronics, and the open-source machine learning package M-LOOP. We provide schematic drawings of the custom hardware necessary to operate the device and discuss diagnostic techniques to determine its performance. The beam auto-aligning device has been used to improve the coupling of a laser beam into a single-mode optical fiber beyond manually optimized fiber alignment, with a typical iteration time of 20~minutes. We present example data from one such measurement to illustrate device performance.

In this work, Rossi-alpha measurements were simultaneously performed with a $^3$He-based detection system and an organic scintillator-based detection system. The assembly is 15 kg of plutonium (93 wt$\%$ $^{239}$Pu) reflected by copper and moderated by lead. The goal of Rossi-alpha measurements is to estimate the prompt neutron decay constant, alpha. Simulations estimate $k_\text{eff}$ = 0.624 and $\alpha$ = 52.3 $\pm$ 2.5 ns for the measured assembly. The organic scintillator system estimated $\alpha$ = 47.4 $\pm$ 2.0 ns, having a 9.37$\%$ error (though the 1.09 standard deviation confidence intervals overlapped). The $^3$He system estimated $\alpha$ = 37 $\mu$s. The known slowing down time of the $^3$He system is 35-40 $\mu$s, which means the slowing down time dominates and obscures the prompt neutron decay constant. Consequently, the organic scintillator system should be used for assemblies with alpha much less than 35 $\mu$s.

In 1844, the Austrian mineralogist Wilhelm von Haidinger reported he could see the polarization of light with the naked eye. It appears as a faint, blurry, transient, yellow hourglass shape superimposed on whatever one looks at. It is now commonly called Haidinger's brushes. To our surprise, even though the paper is well cited, we were unable to find a translation of it from its difficult, nineteenth-century German into English. We provide one, with annotations to set the paper into its scientific and historical context.

As radiation detector arrays in nuclear physics applications become larger and physically more separated, the time synchronization and trigger distribution between many channels of detector readout electronics become more challenging. Clocks and triggers are traditionally distributed through dedicated cabling, but newer methods such as the IEEE 1588 Precision Time Protocol and White Rabbit allow clock synchronization through the exchange of timing messages over Ethernet. Consequently, we report here the use of White Rabbit in a new detector readout module, the Pixie-Net XL. The White Rabbit core, data capture from multiple digitizing channels, and subsequent pulse processing for pulse height and constant fraction timing are implemented in a Kintex 7 FPGA. The detector data records include White Rabbit time stamps and are transmitted to storage through the White Rabbit core's gigabit Ethernet data path or a slower diagnostic/control link using an embedded Zynq processor. The performance is characterized by time-of-flight style measurements and by time correlation of high energy background events from cosmic showers in detectors separated by longer distances. Software for the Zynq processor can implement "software triggering", for example to limit recording of data to events where a minimum number of channels from multiple modules detect radiation at the same time.

We propose a feasible waveguide design optimized for harnessing stimulated Brillouin scattering with long-lived phonons. The design consists of a fully suspended ridge waveguide surrounded by a 1D phononic crystal that mitigates losses to the substrate while providing the homogeneity needed for the build-up of the optomechanical interaction. The coupling factor of these structures was calculated to be 0.54 (W.m)$^{-1}$ for intramodal backward Brillouin scattering with the fundamental TE-like mode and 4.5 (W.m)$^{-1}$ for intramodal forward Brillouin scattering. The addition of the phononic crystal provides a 30 dB attenuation of the mechanical displacement after only five unit cells, possibly leading to a regime where the acoustic losses are limited only by fabrication. As a result, the total Brillouin gain, which is proportional to the product of the coupling and acoustic quality factors, is nominally equal to that of the idealized fully suspended waveguide.

We propose a fast and robust scheme for the direct minimization of the Ohta-Kawasaki energy that characterizes the microphase separation of diblock copolymer melts. The scheme employs a globally convergent modified Newton method with line search which is shown to be mass-conservative, energy-descending, asymptotically quadratically convergent, and three orders of magnitude more efficient than the commonly used gradient flow approach. The regularity and the first-order condition of minimizers are analyzed. A numerical study of the chemical-substrate-guided directed self-assembly of diblock copolymer melts, based on a novel polymer-substrate interaction model and the proposed scheme, is provided.
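The flavor of such a scheme can be sketched on a toy energy. The following is a generic globally convergent modified Newton method with Armijo backtracking line search (indefinite Hessians are shifted to guarantee a descent direction), applied to a two-dimensional double-well; it is not the Ohta-Kawasaki solver itself, and the test energy is invented for illustration:

```python
import numpy as np

def minimize_newton(E, grad, hess, x0, tol=1e-10, max_iter=100):
    """Modified Newton with backtracking (Armijo) line search: the Hessian
    is shifted to be positive definite, so every step descends the energy."""
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        H = hess(x)
        lam = np.linalg.eigvalsh(H).min()
        if lam <= 0:                       # shift an indefinite Hessian
            H = H + (1e-3 - lam) * np.eye(len(x))
        p = -np.linalg.solve(H, g)
        t, e0 = 1.0, E(x)
        while E(x + t * p) > e0 + 1e-4 * t * g.dot(p):   # Armijo condition
            t *= 0.5
        x = x + t * p
    return x

# toy "energy": a two-dimensional double-well, E = (x^2 - 1)^2 + 2 y^2
E    = lambda v: (v[0]**2 - 1)**2 + 2 * v[1]**2
grad = lambda v: np.array([4 * v[0] * (v[0]**2 - 1), 4 * v[1]])
hess = lambda v: np.array([[12 * v[0]**2 - 4, 0.0], [0.0, 4.0]])

xmin = minimize_newton(E, grad, hess, [0.2, 1.0])
print(xmin)  # converges to a well at (+-1, 0)
```

The energy-descent guarantee comes from the line search, and the asymptotic quadratic convergence from the unmodified Newton steps once the iterate is near a minimizer, mirroring the properties claimed for the full scheme.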

Exploration of the impact of synthetic material landscapes featuring tunable geometrical properties on physical processes is a research direction of great current interest because of the outstanding phenomena that are continually being uncovered. Twistronics and the properties of wave excitations in moir\'e lattices are salient examples. Moir\'e patterns bridge the gap between aperiodic structures and perfect crystals, thus opening the door to the exploration of effects accompanying the transition from commensurate to incommensurate phases. Moir\'e patterns have revealed profound effects in graphene-based systems [1-5], they are used to manipulate ultracold atoms [6,7] and to create gauge potentials [8], and they are observed in colloidal clusters [9]. Recently, it was shown that photonic moir\'e lattices enable observation of the two-dimensional localization-to-delocalization transition of light in purely linear systems [10,11]. Here, we employ moir\'e lattices optically induced in photorefractive nonlinear media [12-14] to elucidate the formation of optical solitons under different geometrical conditions controlled by the twisting angle between the constitutive sublattices. We observe the formation of solitons in lattices that smoothly transition from fully periodic geometries to aperiodic ones, with threshold properties that are a direct manifestation of flat-band physics [11].

The grid point requirements of Chapman [AIAA J., 17, 1293, (1979)] and Choi and Moin [Phys. Fluids, 24, 011702 (2012)] are refined. We show that the grid requirement for DNS is $N\sim Re_{L_x}^{2.05}$ rather than $N\sim Re_{L_x}^{2.64}$ as suggested by Choi and Moin, where $L_x$ is the length of the plate. In addition, we estimate the time step requirement for DNS, WRLES, and WMLES. Requiring that the convective CFL$\leq 1$ and the diffusive CFL$\leq 1$, the number of time steps required for converged statistics is $n_t\sim Re_{L_x}/Re_{x_0}^{6/7}$ for WMLES and $n_t\sim Re_{L_x}/Re_{x_0}^{1/7}$ for WRLES and DNS (with different prefactors), where $Re_{x_0}$ is the inlet Reynolds number.
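A quick piece of arithmetic shows what the refined exponent buys; the Reynolds number below is a hypothetical flight-scale value chosen only for illustration, not a number from the paper:

```python
# Illustrative arithmetic: gap between the two DNS grid-count estimates
# at a hypothetical flight-scale Reynolds number Re_Lx = 1e7.
Re = 1e7
N_refined   = Re ** 2.05    # this work
N_choi_moin = Re ** 2.64    # Choi & Moin (2012)
print(f"{N_choi_moin / N_refined:.3g}")  # ratio = Re^0.59 ~ 1.35e4
```

At this Reynolds number the refined estimate is roughly four orders of magnitude smaller, which substantially changes the outlook for feasible DNS.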

The data acquisition console is an important component of the EAST data acquisition system, which provides unified data acquisition and long-term data storage for diagnostics. The console manages the data acquisition configuration information and controls the data acquisition workflow. It has been developed over many years, and with the increasing number of data acquisition nodes and the emergence of new control nodes, its configuration management function has become inadequate and is now being upgraded. The upgraded console, based on LabVIEW, is oriented to the data acquisition administrator, with functions for managing data acquisition nodes, managing control nodes, setting and publishing configuration parameters, batch management, database backup, monitoring the status of data acquisition nodes, controlling the data acquisition workflow, and shot-simulation data acquisition tests. The upgraded data acquisition console has been designed and is currently under testing.

Axion helioscopes like the International Axion Observatory (IAXO) search for evidence of axions and axion-like particles (ALPs) from the Sun. A strong magnetic field is used to convert ALPs into photons via the generic ALP-photon coupling. To observe the resulting photons, X-ray detectors with low background and high efficiency are necessary. In addition, good energy resolution and a low energy threshold would allow the ALP properties to be investigated by studying the X-ray spectrum after their discovery. We propose to use low-temperature metallic magnetic calorimeters (MMCs). Here we present the first detector system based on MMCs developed for IAXO and discuss the results of its characterization. The detector consists of a two-dimensional 64-pixel array covering an active area of 16 mm$^2$ with a filling factor of 93%. We achieve an average energy resolution of 6.1 eV FWHM, allowing for energy thresholds below 100 eV. This detector is the first step towards a larger 1 cm$^2$ array matching the IAXO X-ray optics. We determine the background rate in the energy range between 1 keV and 10 keV to be $3.2(1) \times 10^{-4}$ keV$^{-1}$ cm$^{-2}$ s$^{-1}$ from events acquired over 30 days. During this measurement the detector system was unshielded; in the future, active and passive shields will significantly reduce the background rate. Our results demonstrate that MMCs are a promising technology to discover and study ALPs in helioscopes.

When a fluid system is subject to strong rotation, centrifugal fluid motion is expected, i.e., denser (lighter) fluid moves outward (inward) from (toward) the axis of rotation. Here we demonstrate, both experimentally and numerically, the existence of an unexpected outward motion of warm and lighter vortices in rotating turbulent convection. This anomalous vortex motion occurs under rapid rotations when the centrifugal buoyancy is sufficiently strong to induce a symmetry-breaking in the vorticity field, i.e., the vorticity of the cold anticyclones overrides that of the warm cyclones. We show that through hydrodynamic interactions the densely populated vortices can self-aggregate into coherent clusters and exhibit collective motion in this flow regime. Interestingly, the correlation of the vortex velocity fluctuations within a cluster is scale-free, with the correlation length being about 30% of the cluster length. Such long-range correlation leads to the collective outward motion of cyclones. Our study provides new understanding of vortex dynamics that are widely present in nature.

Scientific experiments rely on measurements that provide the data required to extract the desired information or conclusions. Data production and analysis are therefore essential components at the heart of any scientific experimental application. Traditionally, efforts on detector development for photon sources have focused on the properties and performance of the detection front-ends. In many cases the data acquisition chain, as well as data processing, is treated as a complementary component of the detector system and added at a late stage of the project. In most cases, data processing tasks are entrusted to CPUs, thus achieving the minimum bandwidth requirements while keeping the hardware relatively simple in terms of functionality; this also minimizes design effort, complexity, and implementation cost. This approach has been changing in recent years, as it does not fit new high-performance detectors: FPGAs and GPUs are now used to perform complex image-manipulation tasks such as image reconstruction, image rotation, accumulation, filtering, and data analysis, freeing up CPUs for simpler tasks. The objective of this paper is to present both the implementation of real-time FPGA-based image-manipulation techniques and the performance of the ESRF data acquisition platform RASHPA, integrated into the back-end board of the SMARTPIX photon-counting detector developed at the ESRF.

We present in this paper the main structural features and enthalpy details of the energy profiles of the title reactions, both for the exothermic (forward) path to NH$_{3}$ formation and for the endothermic (reverse) reaction to NH$_{2}^{-}$ formation. Both systems are relevant to nitrogen chemistry in the interstellar medium (ISM) and help document the possible role of H$^{-}$ in molecular clouds at temperatures well below room temperature. The structural calculations are carried out using ab initio methods and are further employed to obtain the reaction rates down to the interstellar temperatures probed in earlier experiments. The reaction rates are obtained from the computed Minimum Energy Path (MEP) using the Variational Transition State Theory (VTST) approach. The results are in very good agreement with the experiments at room temperature, while the measured low-temperature data down to 8 K are well described once we analyse in detail the physics of the reactions and modify the VTST approach accordingly. This is done by employing a T-dependent scaling, from room-temperature conditions down to the lower ISM temperatures, which acknowledges the non-canonical behavior of the fast, barrierless exothermic reaction; this feature was also suggested in the earlier work discussed in our main text. The physical reasons for the experimental behavior, and the need to improve on the VTST method when it is used away from room temperature, are discussed in detail.

The Belle II experiment at the SuperKEKB collider at KEK, Tsukuba, Japan successfully started data taking with the full detector in March 2019. Belle II is a new-generation luminosity-frontier experiment that searches for physics beyond the Standard Model of elementary particles through precision measurements of a huge number of B and charm mesons and tau leptons. In order to read out events at a high rate from the seven subdetectors of Belle II, we adopt a highly unified readout system, including a unified trigger timing distribution system (TTD), a unified high-speed data link system (Belle2link), and a common backend system to receive Belle2link data. Each subdetector front-end readout system has a field-programmable gate array (FPGA) in which unified firmware components of the TTD receiver and Belle2link transmitter are embedded. The system is designed for data taking at a trigger rate up to 30 kHz with a dead-time fraction of about 1% in the front-end readout system. The trigger rate during nominal operation is still much lower than the design value. However, the background level is already high due to the initial vacuum conditions and other accelerator parameters, and it is the most limiting factor of accelerator and detector operation. Hence the occupancy and the stress on the front-end electronics are rather severe, and they cause various kinds of instabilities. We present the performance of the system, including the achieved trigger rate, dead-time fraction, and stability, and discuss the experience gained during operation.

We present a design for an atomic oven suitable for loading ion traps, which is operated via optical heating with a continuous-wave multimode diode laser. The absence of the low-resistance electrical connections necessary for Joule heating allows the oven to be extremely well thermally isolated from the rest of the vacuum system, and for an oven filled with calcium we achieve a number density suitable for rapid ion loading in the target region with ~200 mW of laser power, limited by radiative losses. With simple feedforward to the laser power, the turn-on time for the oven is less than 20 s, while the oven contains enough calcium to operate continuously for many thousands of years without replenishment.

If a particle has to fall first vertically 1 m from A and then move horizontally 1 m to B, it takes a time $t(=\tau_1+\tau_2=\tau_3=3/\sqrt{2g})=0.67$ s. Under gravity and without friction, if it slides down a linear track inclined at $45^\circ$ between two points A and B of 1 m height, it takes time $t(=\tau_4=2/\sqrt{g})=0.63$ s. Between these two extremes, historically, Bernoulli (1718) proved that the fastest track between the points A and B is a cycloid, with the least time of descent $t=\tau_B=0.58$ s. Apart from other interesting cases, here we study the frictionless motion of a particle/bead on an interesting track/wire between A and B given by $y(x)=(1-x^{\nu})^{1/\nu}$. For $\nu > 1$ the track becomes convex and $t \gg \tau_4$, and when $\nu > 1.22$, motion with zero initial speed is not possible. We find that when $\nu \in (0.09653, 0.31749)$, $\tau_4$
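The descent times quoted above can be checked numerically by quadrature of $t=\int_0^1\sqrt{(1+y'^2)/(2g(1-y))}\,dx$ along the track. The sketch below (assuming $g=9.81$ m/s$^2$; the function name is invented for the example) recovers $\tau_4=2/\sqrt{g}\approx 0.64$ s for $\nu=1$, the straight $45^\circ$ track:

```python
import numpy as np
from scipy.integrate import quad

g = 9.81  # m/s^2

def descent_time(nu):
    """Frictionless descent time from A = (0, 1) to B = (1, 0) along the
    track y(x) = (1 - x**nu)**(1/nu), starting from rest:
    t = integral of sqrt((1 + y'^2) / (2 g (1 - y))) dx."""
    def integrand(x):
        y = (1.0 - x**nu) ** (1.0 / nu)
        # dy/dx = -x^(nu-1) (1 - x^nu)^(1/nu - 1)
        dy = -(x ** (nu - 1.0)) * (1.0 - x**nu) ** (1.0 / nu - 1.0)
        return np.sqrt((1.0 + dy * dy) / (2.0 * g * (1.0 - y)))
    # the integrand has an integrable 1/sqrt singularity at x = 0,
    # which adaptive Gauss-Kronrod quadrature handles well
    t, _ = quad(integrand, 0.0, 1.0)
    return t

print(descent_time(1.0))  # straight 45-degree track: 2/sqrt(g) ~ 0.64 s
```

Varying `nu` in the same routine reproduces the convex ($\nu>1$) slowdown described above.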

A conservative primitive variable discrete exterior calculus (DEC) discretization of the Navier-Stokes equations is performed. An existing DEC method (Mohamed, M. S., Hirani, A. N., Samtaney, R. (2016). Discrete exterior calculus discretization of incompressible Navier-Stokes equations over surface simplicial meshes. Journal of Computational Physics, 312, 175-191) is modified to this end, and is extended to include energy-preserving time integration and the Coriolis force, to enhance its applicability to investigating the late-time behavior of flows on rotating surfaces, i.e., planetary flows. The simulation experiments show second-order accuracy of the scheme for structured-triangular meshes, and first-order accuracy for otherwise unstructured meshes. The method exhibits a second-order kinetic-energy relative-error convergence rate with mesh size for inviscid flows. The test case of flow on a rotating sphere demonstrates that the method preserves the stationary state and conserves the inviscid invariants over an extended period of time.

A deep learning-based wavelength controllable forward prediction and inverse design model of nanophotonic devices is proposed. Both the target time-domain and wavelength-domain information can be utilized simultaneously, which enables multiple functions, including power splitter and wavelength demultiplexer, to be implemented efficiently and flexibly.

Background and controlled electromagnetic radiation (EMR) incident on biological cells and tissues induces thermal effects, non-thermal effects, and changes in dielectric properties. After EMR interacts with cells/tissues, the resulting signal is used for imaging, bio-molecular response, and photo-biomodulation studies in the infrared regime, and for therapeutic use. We present a review of the current literature, compiling published experimental results for each regime, viz. microwave (extremely low frequency, ELF, to 3 GHz), cellular communication frequencies (100 kHz to 300 GHz), millimeter wave (300 GHz to 1 THz), and the infrared band extending up to 461 THz. A unique graphical representation of frequency effects and their significance for the detection of direct biological effects, therapeutic applications, and biophysical interpretation is presented. A total of seventy research papers from peer-reviewed journals were used to compile this information, all presented in a narrative style; 63 of these journal articles were published between 2000 and 2020. Physical, biological, and therapeutic mechanisms of the thermal, non-thermal, and complex dielectric effects of EMR on cells are explained in the relevant sections of this paper. A broad, up-to-date review for the EMR range kHz-NIR (kilohertz to near-infrared) is thus provided. Published reports indicate that the number of biological cell irradiation studies falls off rapidly beyond a few THz, leaving relatively few studies in the FIR and NIR bands, which cover most of the thermal, microthermal, and rotational-vibrational effects.

We report the quantitative experimental observation of the weak inertial-wave turbulence regime of rotating turbulence. We produce a statistically steady homogeneous turbulent flow that consists of nonlinearly interacting inertial waves, using rough top and bottom boundaries to prevent the emergence of a geostrophic flow. As the forcing amplitude increases, the temporal spectrum evolves from a discrete set of peaks to a continuous spectrum. Maps of the bicoherence of the velocity field confirm such a gradual transition between discrete wave interactions at weak forcing amplitude, and the regime described by weak turbulence theory (WTT) for stronger forcing. In the former regime, the bicoherence maps display a near-zero background level, together with sharp localized peaks associated with discrete resonances. By contrast, in the latter regime the bicoherence is a smooth function that takes values of the order of the Rossby number, in line with the infinite-domain and random-phase assumptions of WTT. The spatial spectra then display a power-law behavior, both the spectral exponent and the spectral level being accurately predicted by WTT at high Reynolds number and low Rossby number.

We report Alfv\'en-wave experiments with liquid rubidium at the Dresden High Magnetic Field Laboratory (HLD). Reaching up to 63 T, the pulsed magnetic field exceeds the critical value of 54 T at which the Alfv\'en speed becomes equal to the sound speed (plasma-$\beta$ unity). At this threshold we observe a period doubling of an applied 8 kHz CW excitation, a clear footprint for a parametric resonance between magnetosonic waves and Alfv\'en waves.

We present a novel approach to the mathematical modeling of information processes in biosystems. It exploits the mathematical formalism and methodology of quantum theory, especially quantum measurement theory. This approach is known as {\it quantum-like} and should be distinguished from the study of genuine quantum physical processes in biosystems (quantum biophysics, quantum cognition). It is based on a quantum information representation of the biosystem's state and modeling of its dynamics in the framework of the theory of open quantum systems. This paper starts with a non-physicist-friendly presentation of quantum measurement theory, from the original von Neumann formulation to the modern theory of quantum instruments. The latter is then applied to model combinations of cognitive effects and the gene regulation of glucose/lactose metabolism in the Escherichia coli bacterium. The most general construction of quantum instruments is based on the scheme of indirect measurement, in which the measurement apparatus plays the role of the environment for a biosystem. The biological essence of this scheme is illustrated by a quantum formalization of Helmholtz's sensation-perception theory. We then move to open-system dynamics and consider the quantum master equation, concentrating on quantum Markov processes. In this framework, we model the functioning of biological functions such as psychological functions and epigenetic mutation.

Nanolasers are considered ideal candidates for communications and data processing at the chip level thanks to their extremely reduced footprint, low thermal load, and potentially outstanding modulation bandwidth, which in some cases has been numerically estimated to exceed hundreds of GHz. The few experimental implementations reported to date, however, have so far fallen very short of such predictions, whether because of technical difficulties or of overoptimistic numerical results. We propose a methodology to study the physical characteristics which determine the system's robustness and apply it to a general model, using numerical simulations of large-signal modulation. Changing the DC pump values and modulation frequencies, we further investigate the influence of intrinsic noise, considering, in addition, the role of cavity losses. Our results confirm that significant modulation bandwidths can be achieved, at the expense of large pump values, while the often-targeted low-bias operation is strongly noise- and bandwidth-limited. This fundamental investigation suggests that technological efforts should be oriented towards enabling large pump rates in nanolasers, whose performance promises to surpass microdevices in the same range of photon flux and input energy.

In this paper I explain how I usually introduce the Schr\"odinger equation during the quantum mechanics course. My preferred method is the chronological one. Since the Schr\"odinger equation is a special case of wave equations, I start the course by introducing the wave equation. The Schr\"odinger equation is derived with the help of the two quantum concepts introduced by Max Planck, Einstein, and de Broglie, i.e., the energy of a photon $E=\hbar\omega$ and the wavelength of the de Broglie wave $\lambda=h/p$. Finally, the difference between the classical wave equation and the quantum Schr\"odinger one is explained in order to help the students grasp the meaning of the quantum wavefunction $\Psi({\bf r},t)$. A comparison of the present method with the approaches given by the authors of quantum mechanics textbooks, as well as with that of the original Nuffield A level, is presented. It is found that the present approach differs from those given by these authors, except for those of Weinberg and of Dicke and Wittke. However, the approach is in line with the original Nuffield A level one.

The rapid-scan technique has recently been reviving in NMR and EPR because of its benefits of zero dead time and low RF power. The signal baseline, however, remains a major problem in such experiments; the time-share method has been used to avoid it indirectly, but it is obviously not a truly zero-dead-time method. Other data-processing methods have also been adopted to deal with the raw data. Here we use the single-sideband technique at 11.4 MHz to mitigate this obstacle. The prospect is that the single-sideband technique can be used in rapid-scan experiments for low/high-field imaging.

Many biomolecules have flexible structures, requiring distributional estimates of their conformations. Experiments to acquire distributional data typically measure pairs of labels separately, losing information on the joint distribution. These data are assumed independent when estimating the conformational ensemble. We developed a method to estimate the true joint distribution from separately acquired measurements, testing it on two biological systems. This method accurately reproduces the joint distribution where known and generates testable predictions about complex conformational ensembles.

Wave absorption in time-invariant, passive thin films is fundamentally limited by a trade-off between bandwidth and overall thickness. In this work, we investigate the use of temporal switching to reduce signal reflections from a thin grounded slab over broader bandwidths. We extend quasi-normal mode theory to time switching, developing an ab initio formalism that can model a broad class of time-switched structures. Our formalism provides optimal switching strategies to maximize the bandwidth over which minimal reflection is achieved, showing promising prospects for time-switched nanophotonic and metamaterial systems to overcome the limits of time-invariant, passive structures.

We analyse the nonlinear dynamics of the large scale flow in Rayleigh-B\'enard convection in a two-dimensional, rectangular geometry of aspect ratio $\Gamma$. We impose periodic and free-slip boundary conditions in the streamwise and spanwise directions, respectively. As Rayleigh number Ra increases, a large scale zonal flow dominates the dynamics of a moderate Prandtl number fluid. At high Ra, in the turbulent regime, transitions are seen in the probability density function (PDF) of the largest scale mode. For $\Gamma = 2$, the PDF first transitions from a Gaussian to a trimodal behaviour, signifying the emergence of reversals of the zonal flow where the flow fluctuates between three distinct turbulent states: two states in which the zonal flow travels in opposite directions and one state with no zonal mean flow. Further increase in Ra leads to a transition from a trimodal to a unimodal PDF which demonstrates the disappearance of the zonal flow reversals. On the other hand, for $\Gamma = 1$ the zonal flow reversals are characterised by a bimodal PDF of the largest scale mode, where the flow fluctuates only between two distinct turbulent states with zonal flow travelling in opposite directions.

The Mu3e experiment aims to find or exclude the lepton flavour violating decay $\mu^+\to e^+e^-e^+$ with a sensitivity of one in 10$^{16}$ muon decays. The first phase of the experiment is currently under construction at the Paul Scherrer Institute (PSI, Switzerland), where beams with up to 10$^8$ muons per second are available. The detector will consist of an ultra-thin pixel tracker made from High-Voltage Monolithic Active Pixel Sensors (HV-MAPS), complemented by scintillating tiles and fibres for precise timing measurements. The experiment produces about 100 GBit/s of zero-suppressed data which are transported to a filter farm using a network of FPGAs and fast optical links. On the filter farm, tracks and three-particle vertices are reconstructed using highly parallel algorithms running on graphics processing units, leading to a reduction of the data to 100 MByte/s for mass storage and offline analysis. The paper introduces the system design and hardware implementation of the Mu3e data acquisition and filter farm.

The FERS-5200 is the new CAEN Front-End Readout System for large detector arrays. It consists of a compact, distributed, and easily deployable solution integrating ASIC-based front-ends, A/D conversion, data processing, synchronization, and readout. With the appropriate front-end, the solution fits a wide range of detectors such as SiPMs, multianode PMTs, GEMs, silicon strip detectors, wire chambers, gas tubes, etc. The first member of the FERS family is the A5202 unit, a 64-channel readout card for SiPMs based on the CITIROC ASIC by Weeroc SaS. The DT5215 Concentrator board can manage the readout of up to 128 cards at once, i.e., 8192 readout channels in the case of the A5202.

The characteristics of the Solid-state Neutron Detector, under development for neutron-scattering measurements at the European Spallation Source, have been simulated with a Geant4-based computer code. The code models the interactions of thermal neutrons and ionising radiation in the 6Li-doped scintillating glass of the detector, the production of scintillation light, and the transport of optical scintillation photons through the scintillator, en route to the photo-cathode of the attached multi-anode photomultiplier. Factors which affect the optical-photon transport, such as surface finish, pixelation of the glass sheet, provision of a front reflector, and optical coupling media, are compared. Predictions of the detector response are compared with measurements made with neutron and gamma-ray sources, a collimated alpha source, and finely collimated beams of 2.5 MeV protons and deuterons.

The Shape method, a novel approach to obtain the functional form of the $\gamma$-ray strength function ($\gamma$SF) in the absence of neutron resonance spacing data, is introduced. When used in connection with the Oslo method the slope of the Nuclear Level Density (NLD) is obtained simultaneously. The foundation of the Shape method lies in the primary $\gamma$-ray transitions which preserve information on the functional form of the $\gamma$SF. The Shape method has been applied to $^{56}$Fe, $^{92}$Zr, $^{164}$Dy, and $^{240}$Pu, which are representative cases for the variety of situations encountered in typical NLD and $\gamma$SF studies. The comparisons of results from the Shape method to those from the Oslo method demonstrate that the functional form of the $\gamma$SF is retained regardless of nuclear structure details or $J^\pi$ values of the states fed by the primary transitions.

A classical system analogous to the quantum one exhibiting a backflow of probability is proposed. The system consists of a chain of masses interconnected by springs, and also attached by further springs to fixed supports. Thanks to the latter springs, a cutoff frequency and dispersion appear in the spectrum of waves propagating along the chain. It is shown that this dispersion contributes to the appearance of a backflow of energy. In the case of the interference of two waves, the magnitude of this backflow is an order of magnitude higher than the value of the probability backflow in the aforementioned quantum problem. The Green's-function equation is considered, and it is shown that the backflow of energy is also possible when the system is excited by two consecutive short pulses. This classical backflow phenomenon is explained by the branching of the energy flow into local modes, which is confirmed by the results for the forced damped oscillator. It is shown that even in such a simple system the backflow of energy takes place (both instantaneously and on average) and the energy returns to the external force.

We demonstrate laser wakefield acceleration of quasi-monoenergetic electron bunches up to 15 MeV at 1 kHz repetition rate with 2.5 pC charge per bunch and a core with < 7 mrad beam divergence. Acceleration is driven by 5 fs, < 2.7 mJ laser pulses incident on a thin, near-critical density hydrogen gas jet. Low beam divergence is attributed to reduced sensitivity to laser carrier envelope phase slip, achieved in two ways using laser polarization and gas jet control: (1) electron injection into the wake on the gas jet's plasma density downramp, and (2) use of circularly polarized drive pulses. Under conditions of mild wavebreaking in the downramp, electron beam profiles have a 2D Lorentzian shape consistent with a kappa electron energy distribution. Such distributions had previously been observed only in space or dusty plasmas. We attribute this shape to the strongly correlated collisionless bunch confined by the quadratic wakefield bubble potential, where transverse velocity space diffusion is imparted to the bunch by the red-shifted laser field in the bubble.

A discussion on Li et al. (2019) [Numerical computations of resonant sloshing using the modified isoAdvector method and the buoyancy-modified turbulence closure model. Appl. Ocean Res. 93, article no. 101829, DOI:10.1016/j.apor.2019.05.014] is provided. Some mis-characterizations regarding the work of Larsen and Fuhrman (2018) [On the over-production of turbulence beneath surface waves in Reynolds-averaged Navier-Stokes models. J. Fluid Mech. 853, 419-460, DOI:10.1017/jfm.2018.577], on stabilizing two-equation turbulence closures beneath surface waves, are clarified.

The GOSIP (Gigabit Optical Serial Interface Protocol) provides communication via optical fibres between multiple kinds of front-end electronics and the KINPEX PCIe receiver board located in the readout host PC. In recent years a stack of device-driver software has been developed to utilize this hardware in several data-acquisition scenarios. On top of this driver foundation, several graphical user interfaces (GUIs) have been created. These GUIs are based on the Qt graphics libraries and are designed in a modular way: all common functionalities, such as generic I/O with the front-ends, handling of configuration files, and window settings, are treated by a framework class GosipGUI. In the Qt workspace of such a GosipGUI frame, specific subclasses may implement additional windows dedicated to operating different GOSIP front-end modules. These readout modules, developed by the GSI Experiment Electronics department, are for instance FEBEX sampling ADCs, TAMEX FPGA-TDCs, or POLAND QFWs. For each kind of front-end, the GUIs allow the user to monitor specific register contents, to set up the working configuration, and to interactively change parameters such as sampling thresholds during data acquisition. The latter is extremely useful when qualifying and tuning the front-ends in the electronics lab or detector cave. Moreover, some of these GosipGUI implementations have been equipped with features for mostly automatic testing of ASICs in prototype mass production. This has been applied to the APFEL-ASIC component of the PANDA experiment currently under construction, and to the FAIR beam-diagnostics readout system POLAND.

Taking inspiration from the brain, neuromorphic computing promises to actualize the transformative potential of Artificial Intelligence (AI) by providing a path for ultra-low power AI implementation. Moreover, mimicking the complex and advanced properties of the brain can deliver a more powerful form of computation than is currently available. Here, we design and simulate a novel artificial neuron that incorporates two advanced neural behaviors: oscillatory dynamics and neuromodulation. Neuromodulation is the self-adaptive ability of a neuron to regulate its dynamics in response to its environment and contextual cues. The artificial neuron is implemented with a lattice of five magnetic skyrmions in a bilayer of insulating thulium iron garnet (TmIG) and platinum (Pt). The oscillatory dynamics of the coupled skyrmions has a multi-frequency spectrum which provides the neuron with a rich basis for information representation. Neuromodulation is enabled by the reconfigurability of the skyrmion lattice: individual skyrmions can be manipulated by electrical currents to change their arrangement in the lattice, which shifts the resonant frequencies and modulates the amplitudes of the oscillatory outputs of the neuron in response to the same external excitation. Bio-mimicking dynamics such as bursting are shown. The results can be used to implement advanced neuromorphic applications including burst coding, motion detection, cognition, brain-machine interfaces, and attention-based learning.

Transcranial ultrasound therapy is increasingly used for the non-invasive treatment of brain disorders. However, conventional numerical wave solvers are currently too computationally expensive to be used online during treatments to predict the acoustic field passing through the skull (e.g., to account for subject-specific dose and targeting variations). As a step towards real-time predictions, in the current work, a fast iterative solver for the heterogeneous Helmholtz equation in 2D is developed using a fully-learned optimizer. The lightweight network architecture is based on a modified UNet that includes a learned hidden state. The network is trained using a physics-based loss function and a set of idealized sound speed distributions with fully unsupervised training (no knowledge of the true solution is required). The learned optimizer shows excellent performance on the test set, and is capable of generalizing well outside the training examples, including to much larger computational domains and more complex source and sound-speed distributions, for example those derived from x-ray computed tomography images of the skull.

Cooling of hadron beams is critically important in the next generation of hadron storage rings for delivery of unprecedented performance. One such application is the electron-ion collider presently under development in the US. The desire to develop electron coolers for operation at much higher energies than previously achieved necessitates the use of radio-frequency (RF) fields for acceleration, as opposed to the conventional electrostatic approach. While electron cooling is a mature technology at low energy utilizing a DC beam, RF acceleration requires the cooling beam to be bunched, thus extending the parameter space to unexplored territory. It is important to experimentally demonstrate the feasibility of cooling with electron bunches and further investigate how the relative time structure of the two beams affects the cooling properties; thus, a set of four pulsed-beam cooling experiments was carried out by a collaboration of Jefferson Lab and the Institute of Modern Physics (IMP). The experiments have successfully demonstrated cooling with a beam of electron bunches in both the longitudinal and transverse directions for the first time. We have measured the effect of the electron bunch length and longitudinal ion focusing strength on the temporal evolution of the longitudinal and transverse ion beam profiles, and demonstrated that, if the synchronization can be accurately maintained, the dynamics are not adversely affected by the change in time structure.

A common challenge in scientific and technical domains is the quantitative description of geometries and shapes, e.g. in the analysis of microscope imagery or astronomical observation data. Frequently, it is desirable to go beyond scalar shape metrics such as porosity and surface-to-volume ratios, because the samples are anisotropic or because direction-dependent quantities such as conductances or elasticity are of interest. Minkowski Tensors are a systematic family of versatile and robust higher-order shape descriptors that allow for shape characterization of arbitrary order and promise a path to systematic structure-function relationships for direction-dependent properties. Papaya2 is a software package for calculating 2D higher-order shape metrics, with a library interface, support for Irreducible Minkowski Tensors, and interpolated marching squares. Extensions to Matlab, JavaScript and Python are provided as well. While the tensor of inertia is computed by many tools, we are not aware of other open-source software which provides higher-rank shape characterization in 2D.
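As a minimal illustration of the kind of rank-two interfacial shape tensor involved (a simplified sketch, not Papaya2's actual API; the function names are invented for the example), one can sum $\ell_e\,\mathbf{n}_e\otimes\mathbf{n}_e$ over the edges of a polygon and use the eigenvalue ratio of the result as an anisotropy index:

```python
import numpy as np

def interface_tensor(vertices):
    """Rank-2 interfacial tensor of a polygon, summed over its edges:
    W = sum_e |e| n_e (x) n_e, with n_e the unit normal of edge e."""
    W = np.zeros((2, 2))
    V = np.asarray(vertices, dtype=float)
    for a, b in zip(V, np.roll(V, -1, axis=0)):
        e = b - a
        L = np.linalg.norm(e)
        n = np.array([e[1], -e[0]]) / L   # unit normal to the edge
        W += L * np.outer(n, n)
    return W

def anisotropy(vertices):
    """beta = lambda_min / lambda_max of the interface tensor:
    1 for isotropic shapes, < 1 for elongated ones."""
    lam = np.linalg.eigvalsh(interface_tensor(vertices))  # ascending order
    return lam[0] / lam[-1]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
rect   = [(0, 0), (2, 0), (2, 1), (0, 1)]
print(anisotropy(square), anisotropy(rect))  # 1.0 and 0.5
```

A 2:1 rectangle scores 0.5 because twice as much boundary length carries y-oriented normals as x-oriented ones, which is the direction-dependence scalar metrics like porosity cannot capture.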

The orbital architecture of the Solar System is thought to have been sculpted by a dynamical instability among the giant planets. During the instability a primordial outer disk of planetesimals was destabilized and ended up on planet-crossing orbits. Most planetesimals were ejected into interstellar space but a fraction were trapped on stable orbits in the Kuiper belt and Oort cloud. We use a suite of N-body simulations to map out the diversity of planetesimals' dynamical pathways. We focus on two processes: tidal disruption from very close encounters with a giant planet, and loss of surface volatiles from repeated passages close to the Sun. We show that the rate of tidal disruption is more than a factor of two higher for ejected planetesimals than for surviving objects in the Kuiper belt or Oort cloud. Ejected planetesimals are preferentially disrupted by Jupiter and surviving ones by Neptune. Given that the gas giants contracted significantly as they cooled but the ice giants did not, taking into account the thermal evolution of the giant planets decreases the disruption rate of ejected planetesimals. The frequency of volatile loss and extinction is far higher for ejected planetesimals than for surviving ones and is not affected by the giant planets' contraction. Even if all interstellar objects were ejected from Solar System-like systems, our analysis suggests that their physical properties should be more diverse than those of Solar System small bodies as a result of their divergent dynamical histories. This is consistent with the characteristics of the two currently-known interstellar objects.

Searches for gravitational waves from compact binaries focus mostly on quasi-circular motion, with the rationale that wave emission circularizes the orbit. Here, we study the generality of this result when astrophysical environments (e.g., accretion disks) or other fundamental interactions are taken into account. We are motivated by possible electromagnetic counterparts to binary black hole coalescences and orbits, but also by the possible use of eccentricity as a smoking gun for new physics. We find that: i) backreaction from radiative mechanisms, including scalars, vectors and gravitational waves, circularizes the orbital motion; ii) by contrast, environmental effects such as accretion and dynamical friction increase the eccentricity of binaries. Thus, it is the competition between radiative mechanisms and environmental effects that dictates the eccentricity evolution. We study this competition within an adiabatic approach, including gravitational radiation and dynamical-friction forces. We show that there is a critical semi-major axis below which gravitational radiation dominates the motion and the eccentricity of the system decreases. However, the eccentricity inherited from the environment-dominated stage can be substantial, and in particular can affect LISA sources. We provide examples for GW190521-like sources.
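The circularizing effect of gravitational radiation alone (point i above) can be illustrated by integrating the classic orbit-averaged Peters (1964) equations. This is a sketch in units $G=c=1$ for an equal-mass binary, not the paper's full model, which also includes accretion and dynamical-friction forces:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Peters (1964) orbit-averaged evolution of semi-major axis a and
# eccentricity e under gravitational-wave emission, with G = c = 1.
def peters_rhs(t, y, m1, m2):
    a, e = y
    pre = m1 * m2 * (m1 + m2)
    da = -(64.0 / 5.0) * pre / (a**3 * (1 - e**2) ** 3.5) * (
        1 + (73.0 / 24.0) * e**2 + (37.0 / 96.0) * e**4)
    de = -(304.0 / 15.0) * e * pre / (a**4 * (1 - e**2) ** 2.5) * (
        1 + (121.0 / 304.0) * e**2)
    return [da, de]

# equal-mass binary starting at a = 100 M_total/2, e = 0.6,
# integrated over part of the inspiral
sol = solve_ivp(peters_rhs, (0.0, 1.0e5), [100.0, 0.6],
                args=(1.0, 1.0), rtol=1e-8)
a_f, e_f = sol.y[0, -1], sol.y[1, -1]  # both decrease: radiation circularizes
```

Adding an environmental term with the opposite sign in `de` would reproduce the competition the abstract describes, with the critical semi-major axis set by where the two contributions balance.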

Many different simulation methods for Stokes flow problems involve a common computationally intense task---the summation of a kernel function over $O(N^2)$ pairs of points. One popular technique is the Kernel Independent Fast Multipole Method (KIFMM), which constructs a spatial adaptive octree and places a small number of equivalent multipole and local points around each octree box, and completes the kernel sum with $O(N)$ performance. However, the KIFMM cannot be used directly with nonlinear kernels, can be inefficient for complicated linear kernels, and in general is difficult to implement compared to less-efficient alternatives such as Ewald-type methods. Here we present the Kernel Aggregated Fast Multipole Method (KAFMM), which overcomes these drawbacks by allowing different kernel functions to be used for specific stages of octree traversal. In many cases a simpler linear kernel suffices during the most extensive stage of octree traversal, even for nonlinear kernel summation problems. The KAFMM thereby improves computational efficiency in general and also allows efficient evaluation of some nonlinear kernel functions such as the regularized Stokeslet. We have implemented our method as an open-source software library STKFMM with support for Laplace kernels, the Stokeslet, regularized Stokeslet, Rotne-Prager-Yamakawa (RPY) tensor, and the Stokes double-layer and traction operators. Open and periodic boundary conditions are supported for all kernels, and the no-slip wall boundary condition is supported for the Stokeslet and RPY tensor. The package is designed to be ready-to-use as well as being readily extensible to additional kernels. Massive parallelism is supported with mixed OpenMP and MPI.
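For reference, the $O(N^2)$ pairwise sum that such fast multipole methods accelerate can be written down directly for the free-space Stokeslet, $u(x)=\frac{1}{8\pi\mu}\sum_j\left(\frac{I}{r}+\frac{rr^{T}}{r^{3}}\right)f_j$ with $r=x-y_j$. The sketch below uses an invented function name and is the brute-force baseline, not the STKFMM interface:

```python
import numpy as np

def stokeslet_sum(src, forces, trg, mu=1.0):
    """Direct O(N^2) free-space Stokeslet summation:
    u(x) = (1 / 8 pi mu) * sum_j (f_j / r + r (r . f_j) / r^3)."""
    u = np.zeros_like(trg)
    for j, y in enumerate(src):
        r = trg - y                      # (N, 3) displacements to all targets
        rn = np.linalg.norm(r, axis=1)
        mask = rn > 1e-14                # skip the singular self-interaction
        rr = r[mask]
        rnm = rn[mask, None]
        f = forces[j]
        u[mask] += f / rnm + rr * (rr @ f)[:, None] / rnm**3
    return u / (8.0 * np.pi * mu)

rng = np.random.default_rng(0)
pts = rng.random((50, 3))
f = rng.standard_normal((50, 3))
u = stokeslet_sum(pts, f, pts)
```

Every target interacts with every source, which is exactly the $O(N^2)$ cost that octree-based kernel aggregation reduces to $O(N)$.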

Non-abelian gauge fields emerge naturally in the description of adiabatically evolving quantum systems having degenerate levels. Here we show that they also play a role in Thouless pumping in the presence of degenerate bands. To this end we consider a photonic Lieb lattice having two degenerate non-dispersive modes, and we show that, when the lattice parameters are slowly modulated, the propagation of the photons bears the fingerprints of the underlying non-abelian gauge structure. The non-dispersive character of the bands enables a high degree of control over photon propagation. Our work paves the way to the generation and detection of non-abelian gauge fields in photonic and optical lattices.

Modern data-driven tools are transforming application-specific polymer development cycles. Surrogate models that can be trained to predict the properties of new polymers are becoming commonplace. Nevertheless, these models do not utilize the full breadth of the knowledge available in datasets, which are oftentimes sparse; inherent correlations between different property datasets are disregarded. Here, we demonstrate the potency of multi-task learning approaches that exploit such inherent correlations effectively, particularly when some property dataset sizes are small. Data pertaining to 36 different properties of over $13,000$ polymers (corresponding to over $23,000$ data points) are coalesced and supplied to deep-learning multi-task architectures. Compared to conventional single-task learning models (that are trained on individual property datasets independently), the multi-task approach is accurate, efficient, scalable, and amenable to transfer learning as more data on the same or different properties become available. Moreover, these models are interpretable. Chemical rules that explain how certain features control trends in specific property values emerge from the present work, paving the way for the rational design of application-specific polymers meeting desired property or performance objectives.

Hydraulic fracturing stimulates fracture swarms in reservoir formations through pressurized fluid injection. However, restricted by the availability of formation data, the variability within a reservoir remains uncertain, driving unstable gas recovery and low resource efficiency, which in turn contributes to resource scarcity, water contamination, and injection-induced earthquakes. Resource efficiency is quantified through a newly defined energy efficiency, a measure of recovery against the associated environmental footprint. To maximize energy efficiency while minimizing its variation, we select candidate designs with optimal probabilities that depend on reservoir conditions, assembling high-efficiency portfolios and low-risk portfolios whose combination balances variation against efficiency by adjusting the proportion of each portfolio. Relative to a conventional single-well design, the optimal portfolio combination applied across multiple wells achieves a remarkable reduction in variation together with a substantial increase in energy efficiency, answering the call for more recovery per unit investment and less environmental cost per unit of natural gas extracted.

In this work, we introduce the escape measure, a finite-time version of the natural measure, to investigate the transient dynamics of escape orbits in open Hamiltonian systems. In order to numerically calculate the escape measure, we cover a region of interest of the phase space with a grid and compute the visitation frequency of a given orbit on each box of the grid before the orbit escapes. Since open systems are not topologically transitive, we also define the mean escape measure, an average of the escape measure over an ensemble of initial conditions. We apply these concepts to study two physical systems: the single-null divertor tokamak, described by a two-dimensional map; and the Earth-Moon system, as modeled by the planar circular restricted three-body problem. First, by calculating the mean escape measure profile, we visually illustrate the paths taken by the escape orbits within each system. We observe that the choice of the ensemble of initial conditions may lead to distinct dynamical scenarios in both systems. In particular, different orbits may experience different stickiness effects. After that, we analyze the mean escape measure distributions and find that these vary greatly between the cases, highlighting the differences between our systems as well. Lastly, we define two parameters: the escape correlation dimension, which is independent of the grid resolution, and the escape complexity coefficient, which takes into account additional dynamical aspects, such as the orbit's escape time. We show that both of these parameters can quantify and distinguish between the diverse transient scenarios that arise.
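The grid-based computation described above admits a compact sketch. Function and parameter names are illustrative, not the paper's.

```python
import numpy as np

def escape_measure(orbit, xlim, ylim, n_boxes):
    """Visitation frequency of a single orbit on an n_boxes x n_boxes grid
    covering the region of interest, accumulated until the orbit escapes
    and normalized to a probability measure."""
    counts = np.zeros((n_boxes, n_boxes))
    for x, y in orbit:  # points visited before escape
        i = int((x - xlim[0]) / (xlim[1] - xlim[0]) * n_boxes)
        j = int((y - ylim[0]) / (ylim[1] - ylim[0]) * n_boxes)
        if 0 <= i < n_boxes and 0 <= j < n_boxes:
            counts[i, j] += 1
    total = counts.sum()
    return counts / total if total > 0 else counts

def mean_escape_measure(orbits, xlim, ylim, n_boxes):
    """Average of the escape measure over an ensemble of initial conditions,
    needed because open systems are not topologically transitive."""
    return np.mean([escape_measure(o, xlim, ylim, n_boxes)
                    for o in orbits], axis=0)
```

Each orbit contributes a normalized histogram, so the mean escape measure is again a probability measure on the grid, from which box-counting quantities such as a correlation dimension can be estimated.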

The quantum analogue of ptychography, a powerful coherent diffractive imaging technique, is a simple method for reconstructing $d$-dimensional pure states. It relies on measuring partially overlapping parts of the input state in a single orthonormal basis and feeding the outcomes to an iterative phase-retrieval algorithm for postprocessing. We provide a proof of concept demonstration of this method by determining pure states given by superpositions of $d$ transverse spatial modes of an optical field. A set of $n$ rank-$r$ projectors, diagonal in the spatial mode basis, is used to generate $n$ partially overlapping parts of the input and each part is projectively measured in the Fourier transformed basis. For $d$ up to 32, we successfully reconstructed hundreds of random states using $n=5$ and $n=d$ rank-$\lceil d/2\rceil$ projectors. The extension of quantum ptychography for other types of photonic spatial modes is outlined.
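The iterative loop can be sketched as follows; a PIE-style amplitude-replacement update is assumed here as a stand-in for the paper's phase-retrieval algorithm, and all names are illustrative.

```python
import numpy as np

def ptycho_reconstruct(probs, supports, d, n_iter=200, seed=0):
    """Sketch of ptychographic state reconstruction: probs[n] are the
    measured probabilities of the n-th windowed part of the state in the
    Fourier-transformed basis, supports[n] the 0/1 diagonal of the rank-r
    projector. A PIE-style update is used; the paper's exact algorithm
    may differ."""
    rng = np.random.default_rng(seed)
    F = np.fft.fft(np.eye(d)) / np.sqrt(d)       # unitary DFT matrix
    psi = rng.normal(size=d) + 1j * rng.normal(size=d)
    psi /= np.linalg.norm(psi)
    amps = np.sqrt(probs)
    for _ in range(n_iter):
        for a, s in zip(amps, supports):
            part = s * psi                        # project onto the window
            phi = F @ part
            phi = a * np.exp(1j * np.angle(phi))  # impose measured moduli
            psi = psi + s * (F.conj().T @ phi - part)
    return psi / np.linalg.norm(psi)
```

The overlap between successive windows is what lets the phases of the different parts be stitched into a single consistent global phase profile.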

This paper considers the dominant dynamical, thermal and rotational balances within the solar convection zone. The reasoning is as follows: Coriolis forces balance pressure gradients. Background vortex stretching, baroclinic torques and nonlinear advection balance jointly. Turbulent fluxes convey the part of the solar luminosity that radiative diffusion cannot. These four relations determine estimates for the dominant length scales and dynamical amplitudes strictly in terms of known physical quantities. We predict that the dynamical Rossby number for convection is less than unity below the near-surface shear layer, indicating strong rotational constraint. We also predict a characteristic convection length scale of roughly 30 Mm throughout much of the convection zone. These inferences help explain recent observations that reveal weak flow amplitudes at 100-200 Mm scales.

A particular strength of ultracold quantum gases is the versatile set of detection methods available. Since these are based on atom-light interactions, the whole quantum optics toolbox can be used to tailor the detection process to the specific scientific question to be explored in the experiment. Common methods include time-of-flight measurements to access the momentum distribution of the gas, the use of cavities to monitor global properties of the quantum gas with minimal disturbance, and phase-contrast or high-intensity absorption imaging to obtain local real-space information in high-density settings. Even the ultimate limit of detecting each and every atom locally has been realized in two dimensions using so-called quantum gas microscopes. In fact, these microscopes have revolutionized not only the detection, but also the control of lattice gases. Here we provide a short overview of this technique, highlighting new observables as well as key experiments that have been enabled by quantum gas microscopy.

A new approach is theoretically proposed to study the glass transition of active pharmaceutical ingredients and a glass-forming anisotropic molecular liquid at high pressures. We describe amorphous materials as a fluid of hard spheres. Effects of nearest-neighbor interactions and cooperative motions of particles on glassy dynamics are quantified through local and collective elastic barriers calculated using the Elastically Collective Nonlinear Langevin Equation theory. Inserting the two barriers into Kramers' theory gives the structural relaxation time. We then formulate a new mapping based on the thermal expansion process under pressure to intercorrelate particle density, temperature, and pressure. This analysis allows us to determine the pressure and temperature dependence of the alpha relaxation. From this, we estimate an effective elastic modulus of the amorphous materials and capture the effects of conformation on the relaxation process. Remarkably, our theoretical results agree well with experiments.

Knowledge of evolving physical fields is of paramount importance in science, technology, and economics. Dynamical field inference (DFI) addresses the problem of reconstructing a stochastically driven, dynamically evolving field from finite data. It relies on information field theory (IFT), the information theory for fields. Here, the relations of DFI, IFT, and the recently developed supersymmetric theory of stochastics (STS) are established in a pedagogical discussion. In IFT, field expectation values can be calculated from the partition function of the full space-time inference problem. The partition function of the inference problem invokes a functional Dirac function to guarantee the dynamics, as well as a field-dependent functional determinant, to establish proper normalization, both impeding the necessary evaluation of the path integral over all field configurations. STS replaces these problematic expressions via the introduction of fermionic ghost and bosonic Lagrange fields, respectively. The action of these fields has a supersymmetry, which means there exists an exchange operation between bosons and fermions that leaves the system invariant. In contrast, measurements of the dynamical fields do not adhere to this supersymmetry. The supersymmetry can also be broken spontaneously, in which case the system evolves chaotically. This affects the predictability of the system and thereby makes DFI more challenging.

Atomic layers of black phosphorus (BP) present unique opto-electronic properties dominated by a direct tunable bandgap in a wide spectral range from the visible to the mid-infrared. In this work, we investigate the infrared photoluminescence of BP single crystals at very low temperature. Near-band-edge recombinations are observed at 2 K, including a dominant excitonic transition at 0.276 eV and a weaker one at 0.278 eV. The free-exciton binding energy is calculated with an anisotropic Wannier-Mott model and found to be 9.1 meV. In contrast, the PL intensity quenching of the 0.276 eV peak at high temperature exhibits a much smaller activation energy, attributed to the localization of free excitons on a shallow impurity. This analysis leads us to attribute the 0.276 eV and 0.278 eV PL lines to bound excitons and free excitons in BP, respectively. As a result, the value of the bulk BP bandgap is refined to 0.287 eV at 2 K.

Coherent optical states consist of a quantum superposition of different photon number (Fock) states, but because they do not form an orthogonal basis, no photon number states can be obtained from them by linear optics. Here we demonstrate the reverse: by manipulating a random continuous single-photon stream using quantum interference in an optical Sagnac loop, we create engineered quantum states of light with tunable photon statistics, including approximately coherent states. We demonstrate this experimentally using a true single-photon stream produced by a semiconductor quantum dot in an optical microcavity, and show that we can obtain light with $g^{(2)}(0)\rightarrow 1$ in agreement with our theory, which can only be explained by quantum interference of at least 3 photons. The produced artificial light states are, however, much more complex than coherent states, containing quantum entanglement of photons, making them a resource for multi-photon entanglement.

We study the numerical solution of scalar time-harmonic wave equations on unbounded domains which can be split into a bounded interior domain of primary interest and an exterior domain with separable geometry. To compute the solution in the interior domain, approximations to the Dirichlet-to-Neumann (DtN) map of the exterior domain have to be imposed as transparent boundary conditions on the artificial coupling boundary. Although the DtN map can be computed by separation of variables, it is a nonlocal operator with dense matrix representations, and hence computationally inefficient. Therefore, approximations of DtN maps by sparse matrices, usually involving additional degrees of freedom, have been studied intensively in the literature using a variety of approaches including different types of infinite elements, local non-reflecting boundary conditions, and perfectly matched layers. The entries of these sparse matrices are derived analytically, e.g., from transformations or asymptotic expansions of solutions to the differential equation in the exterior domain. In contrast, in this paper we propose to `learn' the matrix entries from the DtN map in its separated form by solving an optimization problem as a preprocessing step. Theoretical considerations suggest that the approximation quality of learned infinite elements improves exponentially with increasing number of infinite element degrees of freedom, which is confirmed in numerical experiments. These numerical studies also show that learned infinite elements outperform state-of-the-art methods for the Helmholtz equation. At the same time, learned infinite elements are much more flexible than traditional methods as they work similarly well for exterior domains involving strong reflections, for example the atmosphere of the Sun, which is strongly inhomogeneous and exhibits reflections at the corona.
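The idea of fitting transparent-boundary data to the DtN map in separated form can be illustrated with a scalar toy problem of our own making: approximate the half-space Helmholtz DtN symbol $\mathrm{dtn}(\lambda) = -i\sqrt{k^2-\lambda}$ over a range of transverse eigenvalues by a low-order rational function, with coefficients obtained from a linearized least-squares problem. The paper learns matrix-valued infinite elements; this scalar fit only conveys the "optimize against the separated DtN data" step.

```python
import numpy as np

# Toy 'learning' of a transparent boundary condition: fit the half-space
# Helmholtz DtN symbol dtn(lam) = -i*sqrt(k^2 - lam) by a [1/1] rational
# function dtn(lam) ~ (a0 + a1*lam) / (1 + b1*lam). Linearizing
#   dtn(lam) * (1 + b1*lam) = a0 + a1*lam
# gives a least-squares problem for (a0, a1, b1).
k = 2.0
lam = np.linspace(0.0, 0.5 * k**2, 50)       # propagating-mode range
dtn = -1j * np.sqrt(k**2 - lam)

A = np.column_stack([np.ones_like(lam), lam, -dtn * lam]).astype(complex)
coef, *_ = np.linalg.lstsq(A, dtn, rcond=None)
a0, a1, b1 = coef
fit = (a0 + a1 * lam) / (1 + b1 * lam)
rel_err = np.linalg.norm(fit - dtn) / np.linalg.norm(dtn)
```

Higher-order rational (or matrix-pencil) approximations play the role of additional infinite-element degrees of freedom, which is consistent with the exponential improvement in approximation quality noted above.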

In this paper, we study the problem of large-strain consolidation in poromechanics with deep neural networks. Given different material properties and different loading conditions, the goal is to predict pore pressure and settlement. We propose a novel method, the "multi-constitutive neural network" (MCNN), such that one model can solve several different constitutive laws. We introduce a one-hot encoding vector as an additional input, which labels the constitutive law we wish to solve. We then build a DNN which takes (X, t) as input along with the constitutive model label and outputs the corresponding solution. To our knowledge, this is the first time that multiple constitutive laws can be evaluated through a single training process while still obtaining good accuracy. We find that an MCNN trained to solve multiple PDEs outperforms individual neural network solvers each trained on a single PDE.
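A minimal sketch of the one-hot conditioning described above: the constitutive law is selected by concatenating a one-hot label to the coordinates before the first layer. Layer sizes and names are ours, not the paper's.

```python
import numpy as np

def init_params(n_laws, hidden=32, seed=0):
    """Random weights for a small MLP taking (X, t) plus a one-hot
    constitutive-law label; output is (pore pressure, settlement)."""
    rng = np.random.default_rng(seed)
    sizes = [2 + n_laws, hidden, hidden, 2]
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(m))
            for n, m in zip(sizes[:-1], sizes[1:])]

def mcnn_forward(x, t, law_id, params, n_laws):
    """Forward pass: one shared network serves all constitutive laws,
    distinguished only by the one-hot part of the input."""
    h = np.concatenate([[x, t], np.eye(n_laws)[law_id]])
    for W, b in params[:-1]:
        h = np.tanh(W @ h + b)
    W, b = params[-1]
    return W @ h + b
```

Training would then mix samples from all constitutive laws into one loss, so a single optimization pass covers every law.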

During metastatic dissemination, streams of cells collectively migrate through a network of narrow channels within the extracellular matrix, before entering into the blood stream. This strategy is believed to outperform other migration modes, based on the observation that individual cancer cells can take advantage of confinement to switch to an adhesion-independent form of locomotion. Yet, the physical origin of this behaviour has remained elusive and the mechanisms behind the emergence of coherent flows in populations of invading cells under confinement are presently unknown. Here we demonstrate that human fibrosarcoma cells (HT1080) confined in narrow stripe-shaped regions undergo collective migration by virtue of a novel type of topological edge currents, resulting from the interplay between liquid crystalline (nematic) order, microscopic chirality and topological defects. Thanks to a combination of in vitro experiments and the theory of active hydrodynamics, we show that, while heterogeneous and chaotic in the bulk of the channel, the spontaneous flow arising in confined populations of HT1080 cells is rectified along the edges, leading to long-ranged collective cell migration with broken chiral symmetry. These edge currents are fuelled by layers of +1/2 topological defects, orthogonally anchored at the channel walls and acting as local sources of chiral active stress. Our work highlights the profound correlation between confinement and collective migration in multicellular systems and suggests a possible mechanism for the emergence of directed motion in metastatic cancer.

We analyze the memory capacity of a delay-based reservoir computer with a Hopf normal form as nonlinearity and numerically compute the linear as well as the higher-order recall capabilities. A possible physical realisation could be a laser with an external cavity, for which the information is fed via electrical injection. A task-independent quantification of the computational capability of the reservoir system is performed via a complete orthonormal set of basis functions. Our results suggest that even for constant readout dimension the total memory capacity depends on the ratio between the information input period, also called the clock cycle, and the time delay in the system. Optimal performance is found for a time delay of about 1.6 times the clock cycle.
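The linear part of the recall capability can be sketched for a generic discrete-time reservoir: for each delay k, train a linear readout to recall the input k steps back and sum the squared correlations. A simple echo-state network stands in here for the delay-based Hopf system; all names and sizes are illustrative.

```python
import numpy as np

def linear_memory_capacity(u, states, k_max, washout=100):
    """Sum over delays k of the squared correlation between the input
    k steps back and its best linear reconstruction from the state."""
    total = 0.0
    X = states[washout:]
    for k in range(1, k_max + 1):
        y = u[washout - k:len(u) - k]            # target: delayed input
        w, *_ = np.linalg.lstsq(X, y, rcond=None)
        c = np.corrcoef(X @ w, y)[0, 1]
        total += c**2
    return total

# toy echo-state reservoir used only to exercise the function
rng = np.random.default_rng(1)
T, N = 2000, 30
u = rng.uniform(-1, 1, T)
Win = rng.uniform(-0.5, 0.5, N)
W = rng.normal(0, 1, (N, N))
W *= 0.8 / np.max(np.abs(np.linalg.eigvals(W)))  # echo-state scaling
states = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + Win * u[t])
    states[t] = x
mc = linear_memory_capacity(u, states, k_max=40)
```

The total linear capacity is bounded by the readout dimension, which is why the dependence on the delay-to-clock-cycle ratio at constant readout dimension is a nontrivial result.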

With a noisy environment caused by fluorescence and additive white noise, as well as complicated spectral fingerprints, the identification of complex mixture materials remains a major challenge in Raman spectroscopy applications. In this paper, we propose a new scheme based on the continuous wavelet transform (CWT) and a deep network for classifying complex mixtures. The scheme first transforms the noisy Raman spectrum into a two-dimensional scale map using the CWT. A multi-label deep neural network model (MDNN) is then applied to classify the materials. The proposed model accelerates feature extraction and expands the feature graph using a global average pooling layer. The sigmoid function is implemented in the last layer of the model. The MDNN model was trained, validated and tested with data collected from samples prepared from substances in palm oil. During the training and validation process, data augmentation was applied to overcome the imbalance of the data and enrich the diversity of the Raman spectra. The test results show that the MDNN model outperforms previously proposed deep neural network models in terms of Hamming loss, one-error, coverage, ranking loss, average precision, and F1 macro and micro averaging. The average detection time obtained from our model is 5.31 s, which is much faster than that of the previously proposed models.
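The two stages of the scheme, spectrum to 2-D scale map to per-substance sigmoid scores, can be sketched as follows. A Ricker wavelet and a single dense head stand in for the paper's wavelet and network; all names are illustrative.

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet, a stand-in mother wavelet."""
    t = np.arange(points) - (points - 1) / 2
    return (2 / (np.sqrt(3 * a) * np.pi**0.25)) \
        * (1 - (t / a)**2) * np.exp(-t**2 / (2 * a**2))

def to_scale_map(spectrum, widths=range(1, 33)):
    """CWT of a 1-D Raman spectrum: one convolution per scale
    stacks into a 2-D scale map."""
    return np.array([np.convolve(spectrum,
                                 ricker(min(10 * w, len(spectrum)), w),
                                 mode='same')
                     for w in widths])

def multilabel_scores(scale_map, W, b):
    """Global average pooling over the scale map followed by a sigmoid,
    giving one independent probability per substance in the mixture."""
    f = scale_map.mean(axis=1)               # pool along the wavenumber axis
    return 1 / (1 + np.exp(-(W @ f + b)))    # sigmoid, not softmax
```

Using a sigmoid per label (rather than a softmax) is what allows several substances to be flagged as present in the same mixture spectrum.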

This article analyses the convergence of the Lie-Trotter splitting scheme for the stochastic Manakov equation, a system arising in the study of pulse propagation in randomly birefringent optical fibers. First, we prove that the strong order of the numerical approximation is 1/2 if the nonlinear term in the system is globally Lipschitz. Then, we show that the splitting scheme has convergence order 1/2 in probability and almost sure order 1/2- in the case of a cubic nonlinearity. We provide several numerical experiments illustrating the aforementioned results and the efficiency of the Lie-Trotter splitting scheme. Finally, we numerically investigate the possible blowup of solutions for some power-law nonlinearities.
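For a scalar cubic Schrödinger equation, a deterministic stand-in for the stochastic Manakov system studied above, one Lie-Trotter step composes the two exactly solvable flows: the linear part in Fourier space and the cubic part in physical space.

```python
import numpy as np

def lie_trotter_step(u, dt, k):
    """One Lie-Trotter step for i u_t + u_xx + |u|^2 u = 0: the linear
    flow is solved exactly in Fourier space, the cubic flow exactly in
    physical space. (The paper treats the stochastic Manakov system;
    this scalar deterministic equation only illustrates the splitting.)"""
    u = np.fft.ifft(np.exp(-1j * k**2 * dt) * np.fft.fft(u))  # linear flow
    return u * np.exp(1j * np.abs(u)**2 * dt)                 # exact cubic flow

# usage: a Gaussian pulse on a periodic grid
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
k = np.fft.fftfreq(n, d=x[1] - x[0]) * 2 * np.pi
u0 = np.exp(1j * x) * np.exp(-(x - np.pi)**2)
u1 = lie_trotter_step(u0, 1e-2, k)
```

Both sub-flows conserve the discrete L2 norm exactly, a structural property that splitting schemes preserve and that plain one-step methods generally do not.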

We examine how the randomness of behavior and the flow of information between agents affect the formation of opinions. Our main focus is the process of opinion evolution, the formation of opinion clusters, and the probability of sustaining an opinion. The results show that opinion formation (the clustering of opinions) is influenced both by the flow of information between agents (interactions outside the closest neighbors) and by randomness in adopting opinions.

The aim of this work is to investigate the lunisolar perturbations affecting the long-term dynamics of a Molniya satellite. Some numerical experiments on the doubly-averaged model, including the expansion of the lunisolar disturbing functions up to the third order, are carried out in order to detect the terms dominating the long-term evolution. The analysis focuses on the following significant indicators: the amplitude of the harmonic coefficients, the periods of the arguments involved and, in particular, the ratio between the amplitudes and the corresponding frequencies. The results show that the second-order lunisolar perturbation gives the dominant contribution to the long-term dynamics. The second part of this work aims to study the resonant regions associated with the dominant terms identified so far by using both the ideal resonance model and an alternative approach. The results obtained show when the standard method fails to capture the main features of the dynamical structure of the resonant regions. Finally, the maximum overlapping region is identified in the proximity of the Molniya orbital environment.

In this colloquium, we review the research on excitons in van der Waals heterostructures from the point of view of variational calculations. We first present the current and past literature, followed by a discussion on the connections between experimental and theoretical results. In particular, we focus our review of the literature on the absorption spectrum and polarizability, as well as the Stark shift and the dissociation rate. Afterwards, we begin the discussion of the use of variational methods in the study of excitons. We initially model the electron-hole interaction as a soft-Coulomb potential, which can be used to describe interlayer excitons. Using an \emph{ansatz} based on the solution for the two-dimensional quantum harmonic oscillator, we study the Rytova-Keldysh potential, which is appropriate to describe intralayer excitons in two-dimensional (2D) materials. These variational energies are then recalculated with a different \emph{ansatz}, based on the exact wavefunction of the 2D hydrogen atom, and the obtained energy curves are compared. Afterwards, we discuss the Wannier-Mott exciton model, reviewing it briefly before focusing on an application of this model to obtain both the exciton absorption spectrum and the binding energies for certain values of the physical parameters of the materials. Finally, we briefly discuss an approximation of the electron-hole interaction in interlayer excitons as a harmonic potential and compare the obtained results with existing values from both first-principles calculations and experimental measurements.

Major solar eruptions occasionally direct interplanetary coronal mass ejections (ICMEs) to Earth and cause significant geomagnetic storms and low-latitude aurorae. While a single extreme storm is a significant threat to modern civilization, storms occasionally appear in sequence and, acting synergistically, cause 'perfect storms' at Earth. The stormy interval in January 1938 was one such case. Here, we analyze the contemporary records to reconstruct the time series of the source active regions, solar eruptions, ICMEs, geomagnetic storms, low-latitude aurorae, and cosmic-ray (CR) variations. Geomagnetic records show that three storms occurred successively: on 17/18 January (Dcx ~ -171 nT), on 21/22 January (Dcx ~ -328 nT), and on 25/26 January (Dcx ~ -336 nT). The amplitudes of the cosmic-ray variations and sudden storm commencements (SSCs) show that the impact of the first ICME was the largest (~ 6% decrease in CR and 72 nT in SSC) and that the ICMEs associated with the storms that followed were more moderate (~ 3% decrease in CR and 63 nT in SSC; ~ 2% decrease in CR and 63 nT in SSC). Interestingly, a significant solar proton event occurred on 16/17 January, and the Cheltenham ionization chamber showed a possible ground level enhancement. During the first storm, aurorae were less visible at mid-latitudes, whereas during the second and third storms, the equatorward boundaries of the auroral oval extended down to 40.3{\deg} and 40.0{\deg} in invariant latitude. This contrast suggests that the initial ICME was probably faster, with a higher total field magnitude but a smaller southward component.

We implement a computational periporomechanics model for simulating localized failure in unsaturated porous media. The coupled periporomechanics model is based on the peridynamic state concept and the effective force state concept. The coupled governing equations are integral-differential equations without assuming the continuity of solid displacement and fluid pressures. The fluid flow and effective force states are determined by nonlocal fluid pressure and deformation gradients through the recently formulated multiphase constitutive correspondence principle. The coupled periporomechanics model is implemented numerically for high-performance computing by an implicit multiphase meshfree method utilizing the message passing interface. The numerical implementation is validated by simulating classical poromechanics problems and comparing the numerical results with analytical solutions and experimental data. Numerical examples are presented to demonstrate the robustness of the fully coupled periporomechanics model in modeling localized failures in unsaturated porous media.

The Standard Quantum Limit (SQL) restricts the sensitivity of atom interferometers employing unentangled ensembles. Inertially sensitive light-pulse atom interferometry beyond the SQL requires the preparation of entangled atoms in different momentum states. So far, such a source of entangled atoms that is compatible with state-of-the-art interferometers has not been demonstrated. Here, we report the transfer of entanglement from the spin degree of freedom of a Bose-Einstein condensate to well-separated momentum modes. A measurement of number and phase correlations between the two momentum modes yields a squeezing parameter of -3.1(8) dB. The method is directly applicable for a future generation of entanglement-enhanced atom interferometers as desired for tests of the Einstein Equivalence Principle and the detection of gravitational waves.

The JEM-EUSO Collaboration aims at studying Ultra High Energy Cosmic Rays (UHECR) from space. To reach this goal, a series of pathfinder missions has been developed to prove the observation principle and to raise the technological readiness level of the instrument. Among these, EUSO-SPB2 (Extreme Universe Space Observatory on a Super Pressure Balloon, mission two) foresees the launch of two telescopes on an ultra-long duration balloon. One is a fluorescence telescope designed to detect UHECR via the UV fluorescence emission of the showers in the atmosphere. The other one measures direct Cherenkov light emission from lower energy cosmic rays and other optical backgrounds for cosmogenic tau neutrino detection. In this paper, we describe the data processing system which has been designed to perform data management and instrument control for the two telescopes. It is a complex system which controls the front-end electronics, tags events with arrival time and payload position through a GPS system, provides signals for time synchronization of the events, and measures the live and dead time of the telescopes. In addition, the data processing system manages mass memory for data storage, performs housekeeping monitoring, and controls the power-on and power-off sequences. The target flight duration for the NASA super pressure program is 100 days; consequently, the requirements on the electronics and the data handling are quite severe. The system operates at high altitude in an unpressurised environment, which introduces a technological challenge for heat dissipation.