We present a study of the effects of biofouling and sedimentation on pathfinder instrumentation for the Pacific Ocean Neutrino Experiment (P-ONE), which will be located in the Cascadia Basin region of the North Pacific Ocean. P-ONE will look for high-energy neutrinos by observing the light produced when these neutrinos interact in the water, detecting and digitizing single-photon signals in the ultraviolet-visible range. We measure a decrease in the transparency of upward-facing optical surfaces, caused by biofouling and sedimentation, over 5 years of operation. The majority of downward-facing optical surfaces, which will dominate P-ONE's sensitivity to astrophysical sources, showed no visible biofouling. Extrapolations motivated by biological growth models indicate that these losses began around 2.5 years after deployment and suggest a final equilibrium transparency between 0$\%$ and 35$\%$ of the original value for the upward-facing modules.
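As a minimal sketch of the kind of growth-model extrapolation described, one can fit a logistic transparency curve with an onset time and an equilibrium value to monitoring points; the function form, data values, and parameters below are illustrative assumptions, not the P-ONE analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def transparency(t, t_onset, rate, T_eq):
    """Logistic loss of transparency: ~1.0 before onset, settling to T_eq.

    t_onset : time (years) at which fouling losses are centered
    rate    : logistic growth rate of the fouling layer (1/years)
    T_eq    : final equilibrium transparency (fraction of original)
    """
    return T_eq + (1.0 - T_eq) / (1.0 + np.exp(rate * (t - t_onset)))

# Illustrative monitoring data: (years since deployment, relative transparency)
t_obs = np.array([3.0, 3.5, 4.0, 4.5, 5.0])
T_obs = np.array([0.95, 0.85, 0.70, 0.55, 0.45])

popt, pcov = curve_fit(transparency, t_obs, T_obs, p0=[2.5, 1.0, 0.2],
                       bounds=([0, 0, 0], [10, 10, 1]))
print("midpoint = %.1f yr, equilibrium transparency = %.0f%%"
      % (popt[0], 100 * popt[2]))
```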
The Jiangmen Underground Neutrino Observatory (JUNO) is a large-scale neutrino experiment with multiple physics goals, including determination of the neutrino mass hierarchy, precise measurement of neutrino oscillation parameters, and detection of neutrinos from supernovae, the Sun, and the Earth. The Central Detector (CD) of JUNO uses 20 kilotons of liquid scintillator (LS) as target mass, with a 3% energy resolution at 1 MeV and a low radioactive background. To ensure smooth LS filling and safe detector operation, JUNO's liquid Filling, Overflowing, and Circulating System (FOC) features a control system based on a Programmable Logic Controller equipped with highly reliable sensors and actuators. This paper describes the design of the FOC automatic monitoring and control system, including hardware and software. The control logic, comprising Proportional-Integral-Derivative (PID) control, sequential control, and safety interlocks, ensures precise regulation of key parameters such as flow rate, liquid level, and pressure. Test results demonstrate that the FOC system satisfies the requirements of JUNO detector filling and running operations.
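To illustrate the PID portion of such control logic, here is a minimal discrete PID loop in Python; the gains, setpoint, and toy plant model are assumptions for illustration, not the FOC PLC implementation.

```python
class PID:
    """Minimal discrete PID controller (illustrative, not the FOC PLC code)."""

    def __init__(self, kp, ki, kd, setpoint, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint, self.dt = setpoint, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement):
        error = self.setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Regulate a toy filling flow rate toward 5 m^3/h with 1 s control steps.
pid = PID(kp=0.8, ki=0.1, kd=0.05, setpoint=5.0, dt=1.0)
flow = 0.0
for _ in range(60):
    valve = pid.update(flow)            # controller output -> valve opening
    flow += 0.2 * (valve - 0.1 * flow)  # crude first-order plant response
print(round(flow, 2))                   # should settle near the setpoint
```

In the real system this loop would run alongside sequential control and safety interlocks that, for example, close valves when level or pressure limits are exceeded.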
The super $\tau$-charm facility (STCF) is a next-generation high-luminosity electron-positron collider proposed in China. The higher luminosity leads to an increased background level, posing significant challenges for the track reconstruction of charged particles. Particularly in the low transverse momentum region, the current track reconstruction algorithm is notably affected by background, resulting in suboptimal reconstruction efficiency and a high fake rate. To address this challenge, we propose a Graph Neural Network (GNN)-based noise filtering algorithm (GNF Algorithm) as a preprocessing step for the track reconstruction. The GNF Algorithm introduces a novel method to convert detector data into graphs and applies a tiered threshold strategy to map GNN-based edge classification results onto signal-noise separation. A study based on Monte Carlo (MC) data shows that with the GNF Algorithm, the reconstruction efficiency with the standard background is comparable to that without background, while the fake rate is significantly reduced. The GNF Algorithm thus provides essential support for the STCF tracking software.
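A minimal sketch of how a tiered-threshold mapping from GNN edge scores to per-hit signal/noise decisions could look; the two thresholds and the rescue rule below are illustrative assumptions, not the GNF Algorithm itself.

```python
import numpy as np

def filter_hits(edges, edge_scores, n_hits, t_high=0.9, t_low=0.5):
    """Map GNN edge-classification scores to per-hit signal/noise labels.

    edges       : (E, 2) integer array of hit indices forming each graph edge
    edge_scores : (E,) GNN output, probability that an edge is signal-like
    Tiered rule (illustrative): a hit is kept if it touches any edge above
    t_high, or if it touches >= 2 edges above the looser threshold t_low.
    """
    keep = np.zeros(n_hits, dtype=bool)
    strong = edge_scores >= t_high
    keep[edges[strong].ravel()] = True            # high-confidence tier

    loose = edges[edge_scores >= t_low].ravel()   # low-confidence tier
    counts = np.bincount(loose, minlength=n_hits)
    keep |= counts >= 2                           # rescue well-connected hits
    return keep

edges = np.array([[0, 1], [1, 2], [2, 3], [3, 4]])
scores = np.array([0.95, 0.60, 0.55, 0.10])
print(filter_hits(edges, scores, n_hits=5))  # hit 4 is rejected as noise
```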
TRIDENT is a planned multi-cubic-kilometer deep-sea neutrino telescope to be built in the South China Sea, designed to rapidly discover high-energy astrophysical neutrino sources with sensitivity to all neutrino flavors. Achieving this at scale requires a detector design that balances performance with power, cost, and mechanical simplicity. This study presents a cost-effective optimization of TRIDENT's hybrid Digital Optical Module (hDOM) design, comparing configurations using high-quantum-efficiency (QE) 3-inch PMTs and larger 4-inch PMTs, the latter evaluated with both baseline and enhanced QE assumptions. Using full-chain detector simulations incorporating site-specific seawater optical properties and realistic backgrounds, we assess performance in all-flavor neutrino detection efficiency, directional reconstruction, and tau neutrino flavor identification from 1 TeV to 10 PeV. We find that if 4-inch PMTs can achieve QE comparable to 3-inch PMTs, their performance matches or improves upon that of the 3-inch design, while significantly reducing channel count, power consumption, and cost. These findings support the 4-inch PMT hDOM as a promising and scalable choice for TRIDENT's future instrumentation.
While visualization plays a crucial role in high-energy physics (HEP) experiments, existing detector-description formats, including Geant4, ROOT, GDML, and DD4hep, face compatibility limitations with modern visualization platforms. This paper presents a universal interface that automatically converts these four kinds of detector descriptions into FBX, an industry-standard 3D model format that can be seamlessly integrated into advanced visualization platforms like Unity. This method bridges the gap between HEP detector-display frameworks and industrial-grade visualization ecosystems, enabling HEP experiments to harness rapid technological advancements. Furthermore, it lays the groundwork for the future development of additional HEP visualization applications, such as event displays, virtual reality, and augmented reality.
By analyzing $(2367.0\pm11.1)\times10^6$ $\psi(3686)$ events collected in $e^+e^-$ collisions at $\sqrt{s}=3.686~\rm GeV$ with the BESIII detector at the BEPCII collider, we report the first search for the charged lepton flavor violating decay $\psi(3686)\to e^{\pm}\mu^{\mp}$. No signal is found. An upper limit on the branching fraction $\mathcal{B}(\psi(3686)\to e^{\pm}\mu^{\mp})$ is determined to be $1.4\times10^{-8}$ at the 90\% confidence level.
This short article is the first in a series describing the scientific journeys of exceptional women scientists in experimental particle physics. We interviewed Halina Abramowicz, who started her career in hadron-hadron interactions and neutrino physics, became an expert in strong interactions, guided the European Particle Physics Strategy Update in 2020, and has now moved to an experiment in strong-field QED.
Machine learning (ML) techniques have recently enabled enormous gains in sensitivity across the sciences. In particle physics, much of this progress has relied on excellent simulations of a wide range of physical processes. However, due to the sophistication of modern ML algorithms and their reliance on high-quality training samples, discrepancies between simulation and experimental data can significantly limit the effectiveness of ML techniques. In this work, we present a solution to this ``mis-specification'' problem: a calibration approach based on optimal transport, which we apply to high-dimensional simulations for the first time. We demonstrate the performance of our approach through jet tagging, using a CMS-inspired dataset. A 128-dimensional internal jet representation from a powerful general-purpose classifier is studied; after calibrating this internal ``latent'' representation, we find that a wide variety of quantities derived from it for downstream tasks are also properly calibrated, so that this calibrated high-dimensional representation enables powerful new applications of jet flavor information in LHC analyses. This is a key step toward allowing properly-calibrated ``foundation models'' in particle physics. More broadly, this calibration framework is applicable to correcting high-dimensional simulations across the sciences.
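A schematic of latent-space calibration with optimal transport, using the Sinkhorn-based domain-adaptation tools of the POT library; the Gaussian stand-ins for the 128-dimensional jet representations and the specific OT variant are assumptions, not the paper's dataset or method details.

```python
import numpy as np
from ot.da import SinkhornTransport  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
dim = 128                                  # latent dimensionality, as in the paper
sim = rng.normal(0.0, 1.0, (1000, dim))    # stand-in simulated latent vectors
data = rng.normal(0.1, 1.1, (1000, dim))   # stand-in "data" latent vectors

# Entropically regularized OT map from simulation to data
# (costs normalized by their maximum for numerical stability).
ot_map = SinkhornTransport(reg_e=0.1, norm="max")
ot_map.fit(Xs=sim, Xt=data)
sim_cal = ot_map.transform(Xs=sim)         # calibrated simulation

# Quantities derived from the calibrated latent space should now agree
# with data; as a crude check, compare first moments:
print(sim.mean(), sim_cal.mean(), data.mean())
```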
This Letter reports the event-by-event observation of Cherenkov light from sub-MeV electrons in a high scintillation light-yield liquid argon (LAr) detector by the Coherent CAPTAIN-Mills (CCM) experiment. The CCM200 detector, located at Los Alamos National Laboratory, instruments 7 tons (fiducial volume) of LAr with 200 8-inch photomultiplier tubes (PMTs), 80% of which are coated with a wavelength-shifting material and 20% of which are uncoated. In the prompt time region of an event, defined as $-6 \leq t \leq 0$ ns relative to the event start time $t=0$, the uncoated PMTs are primarily sensitive to visible Cherenkov photons. Using gamma rays from a $^{22}$Na source to produce sub-MeV electrons, we isolated prompt Cherenkov light with $>5\sigma$ confidence and developed a selection to obtain a low-background electromagnetic sample. This is the first event-by-event observation of Cherenkov photons from sub-MeV electrons in a high-yield scintillator detector and represents a milestone in low-energy particle detector development.
The Coherent CAPTAIN-Mills (CCM) experiment is a liquid argon (LAr) light collection detector searching for MeV-scale neutrino and Beyond Standard Model physics signatures. Two hundred 8-inch photomultiplier tubes (PMTs) instrument the 7 ton fiducial volume with 50% photocathode coverage to detect light produced by charged particles. CCM's light-based approach reduces requirements on LAr purity, compared to other detection technologies, such that sub-MeV particles can be reliably detected without additional LAr filtration and with O(1) parts-per-million of common contaminants. We present a measurement of LAr light production and propagation parameters, with uncertainties, obtained from a sample of MeV-scale electromagnetic events. The optimization of this high-dimensional parameter space was facilitated by a differentiable optical photon Monte Carlo simulation and a detailed characterization of the PMT response. The result accurately predicts the timing and spatial distribution of light due to scintillation and Cherenkov emission in the detector. This is the first description of photon propagation in LAr to include several effects: anomalous dispersion of the index of refraction near the ultraviolet resonance, Mie scattering from impurities, and Cherenkov light production.
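A toy illustration of why differentiability helps here: once the simulation is differentiable in its physics parameters, the high-dimensional fit reduces to gradient descent. The two-parameter arrival-time model below is an assumption for illustration only, not the CCM optical model.

```python
import torch

# Toy stand-in for a differentiable optical model: the predicted photon
# arrival-time distribution mixes a fixed prompt ("Cherenkov-like")
# component with a late component whose time constant tau is fitted.
t = torch.linspace(0.0, 100.0, 200)

def model(tau, f):
    late = torch.exp(-t / tau) / tau
    prompt = torch.exp(-t / 2.0) / 2.0
    pdf = f * prompt + (1.0 - f) * late
    return pdf / pdf.sum()

target = model(torch.tensor(25.0), torch.tensor(0.3)).detach()  # "data"

tau = torch.tensor(10.0, requires_grad=True)
f = torch.tensor(0.5, requires_grad=True)
opt = torch.optim.Adam([tau, f], lr=0.05)
for _ in range(2000):
    opt.zero_grad()
    loss = torch.sum((model(tau, f) - target) ** 2)
    loss.backward()   # gradients flow through the simulation
    opt.step()
print(float(tau), float(f))  # should approach 25.0 and 0.3
```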
We present the third part of a systematic calculation of the two-loop anomalous dimensions for the low-energy effective field theory below the electroweak scale (LEFT): insertions of dimension-six operators that conserve baryon number. In line with our previous publications, we obtain the results in the algebraically consistent 't Hooft-Veltman scheme for $\gamma_5$, corrected for evanescent as well as chiral-symmetry-breaking effects through finite renormalizations. We compute the renormalization of the dimension-six four-fermion and three-gluon operators, as well as the power corrections to lower-dimension operators in the presence of masses, i.e., the down-mixing into dimension-five dipole operators, masses, gauge couplings, and theta terms. Our results are of interest for a broad range of low-energy precision searches for physics beyond the Standard Model.
We investigate the production mechanism for doubly charmed tetraquark mesons within a coupled-channel formalism. The two-body Feynman kernel amplitudes are constructed using effective Lagrangians that respect heavy quark symmetry, chiral symmetry, SU(3) flavor symmetry, and hidden local symmetry. The fully off-shell coupled scattering equations are solved within the Blankenbecler-Sugar (BbS) reduction scheme. We find three positive-parity tetraquark states and one negative-parity tetraquark state, all with total spin $J=1$. Among them, two positive-parity states appear as bound states in the isoscalar and isovector $DD^*$ channels, while another appears as a resonance in the $D^*D^*$ channel. A negative-parity resonance is also predicted in the isoscalar channel. We analyze the coupling strengths of these tetraquark states to various channels. The dependence of the results on the reduced cutoff mass $\Lambda_0$ is examined. The most significant tetraquark state remains stable within the range of $\Lambda_0=(600-700)$ MeV.
As an auxiliary system within the calibration system of the Jiangmen Underground Neutrino Observatory, a calibration house is designed to provide interfaces for connecting to the central detector and accommodating various calibration sub-systems. Onsite installation has demonstrated that the calibration house interfaces are capable of effectively connecting to the central detector and supporting the installation of complex and sophisticated calibration sub-systems. Additionally, controlling the levels of radon and oxygen within the calibration house is critical: radon can increase the experimental background, while oxygen can degrade the quality of the liquid scintillator. The oxygen concentration can be maintained below 10 parts per million, and the radon concentration can be kept below 15 mBq/m$^{3}$. This paper provides detailed information on the calibration house and its methods for radon and oxygen concentration control.
The dark photon has been postulated as a potential constituent of dark matter, exhibiting notable similarities to the axion. The primary distinction between the two particles lies in the nature of their fields: the dark photon field is a vector field whose polarization direction remains undetermined. This work explores the prospect of using three degenerate modes to scan the three dimensions of space, mitigating the low form factor expected in dark photon detection due to its unknown polarization. We demonstrate that a haloscope with three orthogonal, degenerate modes, combined with a coherent sum of the signals, can enhance the dark photon form factor up to the axion form factor and determine the direction of the dark photon polarization vector. We show that the maximum form factor is achieved in cavities of cubic, spherical, and cylindrical geometries, considering the introduction of tuning elements. For this to work, certain conditions reviewed in this article must be fulfilled in the resonant cavity, leading to uncertainties in the final measurement. Finally, this technique allows the simultaneous search for dark matter axions and dark photons and, to the knowledge of the authors, is the most effective method for detecting dark photons with microwave resonant cavities.
The first measurement of pseudorapidity and azimuthal angle distributions relative to the momentum vector of a Z boson for low transverse momentum ($p_\mathrm{T}$) charged hadrons in lead-lead (PbPb) collisions is presented. By studying the hadrons produced in an event with a high-$p_\mathrm{T}$ Z boson ($40 < p_\mathrm{T} < 350$ GeV), the analysis probes how the quark-gluon plasma (QGP) medium created in these collisions affects the parton recoiling opposite to the Z boson. Utilizing PbPb data at a nucleon-nucleon center-of-mass energy $\sqrt{s_{_\mathrm{NN}}}$ = 5.02 TeV from 2018 with an integrated luminosity of 1.67 nb$^{-1}$ and proton-proton (pp) data at the same energy from 2017 with 301 pb$^{-1}$, the distributions are examined in bins of charged-hadron $p_\mathrm{T}$. A significant modification of the distributions for charged hadrons in the range $1 < p_\mathrm{T} < 2$ GeV in PbPb collisions is observed when compared to reference measurements from pp collisions. The data provide new information about the correlation between hard and soft particles in heavy ion collisions, which can be used to test predictions of various jet quenching models. The results are consistent with expectations of a hydrodynamic wake created when the QGP is depleted of energy by the parton propagating through it. Based on comparisons of PbPb data with pp references and predictions from theoretical models, this Letter presents the first evidence of medium-recoil and medium-hole effects caused by a hard probe.
In this work, the concept of QCD dynamical entropy is extended to heavy ion systems. This notion of entropy can be understood as a relative entropy and can also be used to estimate the initial entropy density in ultra-relativistic heavy ion collisions. The key quantity used to calculate this entropy is the nuclear unintegrated gluon distribution (nUGD), which provides a transverse momentum probability density. In the numerical analysis, both the geometric scaling phenomenon and the Glauber-Gribov approach have been used to evaluate realistic models for the nUGD. It is shown that the normalization procedure and the geometric scaling property make the dynamical entropy almost independent of the nucleus mass number $A$. Results are presented for the dynamical entropy density, $dS_D/dy$, in terms of the rapidity.
Unfolding, in the context of high-energy particle physics, refers to the process of removing detector distortions in experimental data. The resulting unfolded measurements are straightforward to use for direct comparisons between experiments and a wide variety of theoretical predictions. For decades, popular unfolding strategies were designed to operate on data formatted as one or more binned histograms. In recent years, new strategies have emerged that use machine learning to unfold datasets in an unbinned manner, allowing for higher-dimensional analyses and more flexibility for current and future users of the unfolded data. This guide comprises recommendations and practical considerations from researchers across a number of major particle physics experiments who have recently put these techniques into practice on real data.
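A single-iteration sketch of the classifier-based reweighting that underlies unbinned ML unfolding methods such as OmniFold; the toy dataset, classifier choice, and one-step simplification are assumptions, not any experiment's production workflow.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def reweight(sim_reco, data_reco):
    """One reweighting step of classifier-based unbinned unfolding:
    train a classifier to separate simulation from data at detector
    level, then convert its output into likelihood-ratio weights."""
    X = np.vstack([sim_reco, data_reco])
    y = np.concatenate([np.zeros(len(sim_reco)), np.ones(len(data_reco))])
    clf = GradientBoostingClassifier().fit(X, y)
    p = clf.predict_proba(sim_reco)[:, 1]
    return p / (1.0 - p)          # w(x) = p(data|x) / p(sim|x)

rng = np.random.default_rng(1)
sim = rng.normal(0.0, 1.0, (10000, 1))   # simulated detector-level events
data = rng.normal(0.3, 1.0, (10000, 1))  # observed events
w = reweight(sim, data)
print(np.average(sim, weights=w, axis=0))  # ~0.3 after reweighting
```

Full methods iterate such steps, pulling the learned weights back to particle level, which is what makes the unfolded result unbinned and high-dimensional.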
The Giant Radio Array for Neutrino Detection (GRAND) is an envisioned observatory of ultra-high-energy particles of cosmic origin, with energies in excess of 100 PeV. GRAND uses large surface arrays of antennas to look for the radio emission from extensive air showers that are triggered by the interaction of ultra-high-energy cosmic rays, gamma rays, and neutrinos in the atmosphere or underground. In particular, for ultra-high-energy neutrinos, the future final phase of GRAND aims to be sensitive enough to detect them in spite of their plausibly tiny flux. Three prototype GRAND radio arrays have been in operation since 2023: GRANDProto300 in China, GRAND@Auger in Argentina, and GRAND@Nançay in France. Their goals are to field-test the GRAND detection units, understand the radio background to which they are exposed, and develop tools for diagnostics, data gathering, and data analysis. This list of contributions to the 39th International Cosmic Ray Conference (ICRC 2025) presents an overview of GRAND, in its present and future incarnations, and a first look at data collected by GRANDProto300 and GRAND@Auger, including the first cosmic-ray candidates detected by them.
We extend the study of exotic matter formation via the {\tt TQ4Q1.1} set of collinear, variable-flavor-number-scheme fragmentation functions for fully charmed or bottomed tetraquarks in three quantum configurations: scalar ($J^{PC} = 0^{++}$), axial vector ($J^{PC} = 1^{+-}$), and tensor ($J^{PC} = 2^{++}$). We adopt single-parton fragmentation at leading power and implement a nonrelativistic QCD factorization scheme tailored to tetraquark Fock-state configurations. Short-distance inputs at the initial scale are modeled using updated calculations for both gluon- and heavy-quark-initiated channels. A threshold-consistent DGLAP evolution is then applied via HFNRevo. We provide the first systematic treatment of uncertainties propagated from the color-composite long-distance matrix elements that govern the nonperturbative hadronization of tetraquarks. To support phenomenology, we compute NLL/NLO$^+$ cross sections for tetraquark-jet systems at the HL-LHC and FCC using (sym)JETHAD, incorporating angular multiplicities as key observables sensitive to high-energy QCD dynamics. This work connects the investigation of exotic hadrons with state-of-the-art precision QCD.
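For orientation, the timelike DGLAP equation governing the scale evolution of a collinear fragmentation function $D_i(z,\mu^2)$, written schematically (splitting-kernel index conventions vary in the literature), reads
\[
\frac{\partial D_i(z,\mu^2)}{\partial \ln \mu^2}
  = \sum_j \int_z^1 \frac{dx}{x}\,
    P_{ji}\!\left(x,\alpha_s(\mu^2)\right)\,
    D_j\!\left(\frac{z}{x},\mu^2\right),
\]
which is the equation evolved here, with heavy-flavor threshold matching, via HFNRevo.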
The IceCube Neutrino Observatory has observed a sample of high-purity, primarily atmospheric, muon neutrino events over 11 years from all directions below the horizon, spanning the energy range 500 GeV to 100 TeV. While this sample was initially used for an eV-scale sterile neutrino search, its purity and the parameter space it spans also make it suitable for earth tomography. This flux of neutrinos traverses the earth and is attenuated by varying amounts depending on the energy of the event and the column density it traverses. By parameterizing the earth as multiple constant-density shells, IceCube can measure the upgoing neutrino flux as a function of declination, yielding an inference of the density of each shell. In this talk, the latest sensitivities of this analysis and comparisons with the previous measurement are presented. In addition, the analysis procedure, details about the data sample, and systematic effects are explained. This analysis is one of the latest weak-force-driven, non-gravitational measurements of the earth's density and mass.
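A minimal sketch of the column-density ingredient: integrating a piecewise-constant shell profile along a neutrino's chord through the earth. The three-shell profile below is purely illustrative; in the analysis the shell densities are the fitted parameters.

```python
import numpy as np

# Illustrative 3-shell earth: (outer radius in km, density in g/cm^3).
R_EARTH = 6371.0
shells = [(1221.5, 13.0), (3480.0, 11.0), (R_EARTH, 4.5)]

def column_density(cos_zenith, n_steps=10000):
    """Column density (g/cm^2) along the chord of an upgoing neutrino
    arriving with cos(zenith) < 0, for piecewise-constant shells."""
    chord = -2.0 * R_EARTH * cos_zenith       # chord length through earth (km)
    s = np.linspace(0.0, chord, n_steps)      # path positions from entry point
    # radius from the earth's center at each point along the chord
    r = np.sqrt(R_EARTH**2 + s**2 + 2.0 * R_EARTH * s * cos_zenith)
    rho = np.zeros_like(r)
    r_inner = 0.0
    for r_outer, density in shells:
        rho[(r >= r_inner) & (r < r_outer)] = density
        r_inner = r_outer
    return rho.sum() * (s[1] - s[0]) * 1e5    # km -> cm

print(column_density(-1.0))  # vertical chord: ~1.2e10 g/cm^2, maximal attenuation
```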
We present a comprehensive study of triply heavy baryons ($\Omega_{ccc}$, $\Omega_{bbb}$, $\Omega_{bcc}$, and $\Omega_{bbc}$) within the nonrelativistic quark model, employing the Gaussian expansion method to calculate mass spectra up to $D$-wave states. Our analysis represents the most complete treatment to date for this model, incorporating full angular momentum mixing effects. While our predictions for low-lying states agree well with lattice QCD results, we find systematically lower masses for excited states compared to lattice calculations. Using the obtained wave functions, we estimate radiative decay widths up to $1D$ states, revealing significant differences from previous theoretical work. Additionally, we identify and resolve several misconceptions in prior treatments of triply heavy baryon spectroscopy, particularly symmetry constraint and wave function construction in three-quark systems. These results provide crucial information for future experimental searches and theoretical investigations of triply heavy baryon systems.
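For reference, the Gaussian expansion method builds each relative wave function from Gaussians whose ranges form a geometric progression,
\[
\phi^{G}_{nlm}(\mathbf{r}) = N_{nl}\, r^{l}\, e^{-\nu_n r^2}\, Y_{lm}(\hat{r}),
\qquad
\nu_n = \frac{1}{r_n^2}, \quad r_n = r_1 a^{\,n-1} \ (n = 1,\dots,n_{\max}),
\]
and the baryon mass spectrum follows from diagonalizing the three-quark Hamiltonian in this basis, with all allowed angular momentum couplings included.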
The past several decades have seen significant advancement in applications using cosmic-ray muons for tomography scanning of unknown objects. One of the most promising developments is the application of this technique in border security for the inspection of cargo inside trucks and sea containers in order to search for hazardous and illicit hidden materials. This work focuses on the optimization studies for a muon tomography system similar to that being developed within the framework of the `SilentBorder' project funded by the EU Horizon 2020 scheme. Current studies are directed toward optimizing the detector module design, following two complementary approaches. The first leverages TomOpt, a Python-based end-to-end software that employs differentiable programming to optimize scattering tomography detector configurations. While TomOpt inherently supports gradient-based optimization, a Bayesian Optimization module is introduced to better handle scenarios with noisy objective functions, particularly in image reconstruction-driven optimization tasks. The second optimization strategy relies on detailed GEANT4-based simulations, which, while more computationally intensive, offer higher physical fidelity. These simulations are also employed to study the impact of incorporating secondary particle information alongside cosmic muons for improved material discrimination. This paper presents the current status and results obtained from these optimization studies.
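As a sketch of the Bayesian-optimization ingredient, here is a minimal noisy-objective optimization with scikit-optimize's Gaussian-process minimizer; the two-parameter geometry and figure of merit are hypothetical stand-ins for a TomOpt or GEANT4 evaluation.

```python
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

rng = np.random.default_rng(0)

def objective(params):
    """Stand-in for a noisy image-reconstruction figure of merit as a
    function of two hypothetical detector-panel gaps (in m). In the real
    study this would wrap a TomOpt or GEANT4-based evaluation."""
    gap_top, gap_bottom = params
    quality = -np.exp(-((gap_top - 0.8) ** 2 + (gap_bottom - 1.2) ** 2))
    return quality + rng.normal(0.0, 0.02)   # evaluation noise

result = gp_minimize(
    objective,
    dimensions=[Real(0.2, 2.0, name="gap_top"),
                Real(0.2, 2.0, name="gap_bottom")],
    n_calls=40,
    noise=0.02**2,      # tell the GP the objective is noisy
    random_state=0,
)
print(result.x, result.fun)
```

The Gaussian-process surrogate averages over the evaluation noise, which is precisely why this approach is attractive when the objective is a noisy image-reconstruction metric rather than a smooth differentiable loss.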
In the present work, the scotogenic model is constructed by applying non-invertible $Z_M$ symmetries. The stability of dark matter and the scotogenic structure of the neutrino mass matrix are achieved via the new non-group symmetry. The non-group scotogenic model is presented with minimal content, yielding a one-zero structure of the neutrino mass matrix, and a numerical analysis of the lepton mixing angles and related physics is presented. Other relevant constraints are also studied.
We investigate the $\Xi^*$ resonances within the molecular picture, where these states are dynamically generated as poles in the unitarized scattering amplitudes arising from the coupled-channel interactions of $K^{*-} \Lambda$, $K^{*-} \Sigma^0$, $\rho^- \Xi^0$, $\overline{K}{}^{*0} \Sigma^-$, $\rho^0 \Xi^-$, $\omega \Xi^-$, and $\phi \Xi^-$. The interaction kernel is derived from the local hidden gauge formalism, while the unitarization procedure employs a hybrid method that combines cutoff and dimensional regularizations in the evaluation of the loop function. From a detailed spectroscopic analysis, we identify two $S = -2$ baryon states whose properties are compatible with some of the $\Xi^*$ resonances listed in the Review of Particle Physics. To explore their possible experimental signatures, we compute the femtoscopic correlation functions for all the vector-baryon pairs considered in the present study, using realistic estimates of production weights and varying source sizes $R = 1, 1.1, 1.2, 1.3, 1.5$ fm.
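Schematically, the unitarization solves the on-shell Bethe-Salpeter equation in coupled channels,
\[
T(\sqrt{s}) = \left[\,1 - V(\sqrt{s})\, G(\sqrt{s})\,\right]^{-1} V(\sqrt{s}),
\]
where $V$ is the interaction kernel from the local hidden gauge formalism and $G$ is the diagonal matrix of regularized vector-baryon loop functions; the dynamically generated $\Xi^*$ states appear as poles of $T$ in the complex-energy plane, and their couplings to each channel are read off from the pole residues.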
We derive bounds on the flavor-violating (FV) couplings of the $Z$ boson to quarks and present future sensitivity projections. Our analysis shows that the current bounds on the FV couplings are $\mathcal{O}(10^{-9})$ for the $Z$ couplings to $cu$ and $sd$, $\mathcal{O}(10^{-7})$ for $bd$, $\mathcal{O}(10^{-6})$ for $bs$, and $\mathcal{O}(10^{-3})$ for $tu$ and $tc$. Overall, low-energy flavor experiments provide significantly stronger constraints on these FV couplings than current collider searches.
A particle traversing a crystal aligned with one of its crystallographic axes experiences a strong electromagnetic field that is constant along the direction of motion over macroscopic distances. For $e^\pm$ and $\gamma$-rays with energies above a few $\mathrm{GeV}$, this field is amplified by the Lorentz boost, to the point of exceeding the Schwinger critical field $\mathcal{E}_0 \sim 1.32 \times 10^{16}~\mathrm{V/cm}$. In this regime, nonlinear quantum-electrodynamical effects occur, such as the enhancement of intense electromagnetic radiation emission and pair production, so that the electromagnetic shower development is accelerated and the effective shower length is reduced compared to amorphous materials. We have investigated this phenomenon in lead tungstate (PbWO$_4$), a high-$Z$ scintillator widely used in particle detection. We have observed a substantial increase in scintillation light at small incidence angles with respect to the main lattice axes. Measurements with $120$-$\mathrm{GeV}$ electrons and $\gamma$-rays between $5$ and $100~\mathrm{GeV}$ demonstrate up to a threefold increase in energy deposition in oriented samples. These findings challenge the current models of shower development in crystal scintillators and could guide the development of next-generation accelerator- and space-borne detectors.
The LHCb Ring-Imaging Cherenkov detectors are built to provide charged hadron identification over a large range of momentum. The upgraded detectors are also capable of providing an independent measurement of the luminosity for the LHCb experiment during LHC Run 3. The modelling of the opto-electronics chain, the application of the powering strategy during operations, the calibration procedures and the proof of principle of a novel technique for luminosity determination are presented. In addition, the preliminary precision achieved during the 2023 data-taking year for real-time and offline luminosity measurements is reported.
A new algorithm has been developed at LHCb which is able to reconstruct and select very displaced vertices in real time at the first level of the trigger (HLT1). It makes use of the Upstream Tracker (UT) and the Scintillating Fibre detector (SciFi) of LHCb and is executed on GPUs inside the Allen framework. In addition to an optimized strategy, it uses a Neural Network (NN) implementation to increase the track efficiency and reduce the ghost rate, with very high throughput within a limited time budget. Besides serving to reconstruct $K_{s}^{0}$ and $\Lambda$ particles from the Standard Model, the Downstream algorithm and the associated two-track vertexing could greatly increase the LHCb physics potential for detecting long-lived particles during Run 3.
This study demonstrates a proof-of-concept application of a deep neural network for particle identification in simulated high transverse momentum proton-proton collisions, with a focus on evaluating model performance under controlled conditions. A model trained on simulated Large Hadron Collider (LHC) proton-proton collisions at $\sqrt{s} = 13\,\mathrm{TeV}$ is used to classify nine particle species based on seven kinematic-level features. The model is then tested on simulated high transverse momentum Relativistic Heavy Ion Collider (RHIC) data at $\sqrt{s} = 200\,\mathrm{GeV}$ without any transfer learning, fine-tuning, or weight adjustment. It maintains accuracy above 91% for both LHC and RHIC sets, while achieving above 96% accuracy for all RHIC sets, including the $p_T > 7\,\mathrm{GeV}/c$ set, despite never being trained on any RHIC data. Analysis of per-class accuracy reveals how quantum chromodynamics (QCD) effects, such as the leading-particle effect and kinematic overlap at high $p_T$, shape the model's performance across particle types. These results suggest that the model captures physically meaningful features of high-energy collisions, rather than simply overfitting to the kinematics of the training data. This study demonstrates the potential of simulation-trained deep neural networks to remain effective across lower energy regimes within a controlled environment, and motivates further investigation in realistic settings using detector-level features and more advanced network architectures.
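A minimal version of the kind of model described: a small feed-forward classifier mapping seven kinematic features to nine species, trained on one sample and applied unchanged to another. Layer sizes, training details, and the random stand-in data are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Small feed-forward classifier: 7 kinematic-level features -> 9 species.
model = nn.Sequential(
    nn.Linear(7, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 9),                    # one logit per particle species
)
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

X = torch.randn(512, 7)                  # stand-in for LHC training events
y = torch.randint(0, 9, (512,))          # stand-in species labels
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# "Cross-energy" test: apply to RHIC-like events with no fine-tuning
# or weight adjustment, exactly as in the study's evaluation protocol.
X_rhic = torch.randn(128, 7)
pred = model(X_rhic).argmax(dim=1)
```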
The running of the top quark mass ($m_\mathrm{t}$) is probed at the next-to-next-to-leading order in quantum chromodynamics for the first time. The result is obtained by comparing calculations in the modified minimal subtraction ($\mathrm{\overline{MS}}$) renormalisation scheme to the CMS differential measurement of the top quark-antiquark ($\mathrm{t\bar{t}}$) production cross section at $\sqrt{s} = 13~\mathrm{TeV}$. The scale dependence of $m_\mathrm{t}$ is extracted as a function of the invariant mass of the $\mathrm{t\bar{t}}$ system, up to an energy scale of about $0.5~\mathrm{TeV}$. The observed running is found to be in good agreement with the three-loop solution of the renormalisation group equations of quantum chromodynamics.
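Schematically, the extracted running follows the QCD renormalisation group equation for the $\mathrm{\overline{MS}}$ mass,
\[
\mu^2 \frac{\mathrm{d}\, m_\mathrm{t}(\mu)}{\mathrm{d}\mu^2}
  = -\gamma_m\!\big(\alpha_s(\mu)\big)\, m_\mathrm{t}(\mu),
\qquad
\gamma_m = \frac{\alpha_s}{4\pi}\,\gamma_m^{(0)}
  + \left(\frac{\alpha_s}{4\pi}\right)^{\!2}\gamma_m^{(1)} + \dots,
\]
with the mass anomalous dimension $\gamma_m$ known perturbatively to the loop order used here (normalization conventions for $\gamma_m$ vary in the literature).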
By measuring, modeling and interpreting cosmological datasets, one can place strong constraints on models of the Universe. Central to this effort are summary statistics such as power spectra and bispectra, which condense the high-dimensional data into low-dimensional representations. In this work, we introduce a modern set of estimators for computing such statistics from three-dimensional clustering data, and provide a flexible Python/Cython implementation; PolyBin3D. Working in a maximum-likelihood formalism, we derive general estimators for the two- and three-point functions, which yield unbiased spectra regardless of the survey mask and weighting scheme. These can be directly compared to theory without the need for mask-convolution. Furthermore, we present a numerical scheme for computing the optimal (minimum-variance) estimators for a given survey, which is shown to reduce error bars on large scales. Our Python package includes both general "unwindowed" estimators and their idealized equivalents (appropriate for simulations), each of which is efficiently implemented using fast Fourier transforms and Monte Carlo summation tricks, and additionally supports GPU acceleration using JAX. These are extensively validated in this work, with Monte Carlo convergence (relevant for masked data) achieved using only a small number of iterations (typically $<10$ for bispectra). This will allow for fast and unified measurement of two- and three-point functions from current and upcoming survey data.
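To make the idealized limit concrete, here is a numpy illustration of a binned power-spectrum estimator for a periodic box with no mask or weights; this is conceptually what the idealized estimators compute, not the PolyBin3D API itself.

```python
import numpy as np

def idealized_pk(delta, boxsize, nbins=20):
    """Idealized binned power spectrum of an overdensity field in a
    periodic box (no mask/weights); a numpy sketch of the underlying
    FFT-based estimator, not the PolyBin3D interface.

    delta   : (N, N, N) real overdensity field
    boxsize : box side length (e.g. Mpc/h)
    """
    n = delta.shape[0]
    dk = np.fft.rfftn(delta) * (boxsize / n) ** 3   # FT with volume norm
    power = np.abs(dk) ** 2 / boxsize ** 3          # |delta_k|^2 / V

    k = 2 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kz = 2 * np.pi * np.fft.rfftfreq(n, d=boxsize / n)
    kmag = np.sqrt(k[:, None, None] ** 2
                   + k[None, :, None] ** 2
                   + kz[None, None, :] ** 2)

    edges = np.linspace(kmag[kmag > 0].min(), kmag.max() / 2, nbins + 1)
    idx = np.digitize(kmag.ravel(), edges)
    pk = np.array([power.ravel()[idx == i].mean() for i in range(1, nbins + 1)])
    return 0.5 * (edges[1:] + edges[:-1]), pk

delta = np.random.default_rng(0).normal(size=(64, 64, 64))
k, pk = idealized_pk(delta, boxsize=1000.0)
```

The unwindowed estimators generalize this by deconvolving the mask and weights at the estimator level, which is what allows direct comparison to unconvolved theory.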
When a gravitational wave (GW) passes through a DC magnetic field, it couples to the conducting wires carrying the currents which generate the magnetic field, causing them to oscillate at the GW frequency. The oscillating currents then generate an AC component through which the GW can be detected - thus forming a resonant mass detector or a Magnetic Weber Bar. We quantify this claim and demonstrate that magnets can have exceptional sensitivity to GWs over a frequency range demarcated by the mechanical and electromagnetic resonant frequencies of the system; indeed, we outline why a magnetic readout strategy can be considered an optimal Weber bar design. The concept is applicable to a broad class of magnets, but can be particularly well exploited by the powerful magnets being deployed in search of axion dark matter, for example by DMRadio and ADMX-EFR. Explicitly, we demonstrate that the MRI magnet that is being deployed for ADMX-EFR can achieve a broadband GW strain sensitivity of $\sim$$10^{-20}/\sqrt{\text{Hz}}$ from a few kHz to about 10 MHz, with a peak sensitivity down to $\sim$$10^{-22}/\sqrt{\text{Hz}}$ at a kHz exploiting a mechanical resonance.
Gravitational wave (GW) observations offer a promising probe of new physics associated with a first-order electroweak phase transition. Precision studies of the Higgs potential, including Fisher matrix analyses, have been extensively conducted in this context. However, significant theoretical uncertainties in the GW spectrum, particularly those due to renormalization scale dependence in the conventional daisy-resummed approach, have cast doubt on the reliability of such precision measurements. These uncertainties have been highlighted using the Standard Model Effective Field Theory (SMEFT) as a benchmark. To address these issues, we revisit Fisher matrix analyses based on the daisy-resummed approach, explicitly incorporating renormalization scale uncertainties. We then reassess the prospects for precise new physics measurements using GW observations. Adopting the SMEFT as a benchmark, we study the effects of one-loop RGE running of dimension-six operators on the Higgs effective potential via the Higgs self-couplings, top Yukawa coupling, and gauge couplings, in addition to the SMEFT tree-level effects. We find that future GW observations can remain sensitive to various dimension-six SMEFT effects, even in the presence of renormalization scale uncertainties, provided that the SMEFT $(H^{\dagger}H)^3$ operator is precisely measured, e.g., by future collider experiments.
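Schematically, such forecasts rest on a Fisher matrix of the form (up to detector-specific factors)
\[
F_{ij} = T_{\rm obs} \int \mathrm{d}f\;
  \frac{\partial_{\theta_i}\Omega_{\rm GW}(f)\;
        \partial_{\theta_j}\Omega_{\rm GW}(f)}{\Omega_{\rm n}^2(f)},
\]
where $\Omega_{\rm n}$ is the effective detector noise expressed as a GW energy density and $\theta_i$ are the model parameters, here including the SMEFT Wilson coefficients, with the renormalization scale entering as an additional source of theoretical uncertainty on $\Omega_{\rm GW}$.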
The transverse momentum ($p_T$) spectra of identified light charged hadrons, specifically bosons ($\pi^{\pm}$ and $K^{\pm}$) as well as fermions [$p(\bar p)$], produced in small collision systems, namely deuteron-gold (d+Au) and proton-proton (p+p) collisions at the top energy of the Relativistic Heavy Ion Collider (RHIC) with a center-of-mass energy of $\sqrt{s_{NN}}=200$ GeV, are investigated in this paper. In the present study, d+Au collisions are categorized into three centrality classes: central (0--20\%), semi-central (20--40\%), and peripheral (40--100\%) collisions. Various types of distributions, including standard [Bose-Einstein (Fermi-Dirac) and Boltzmann] and Tsallis distributions, are employed to fit the same $p_T$ spectra to derive different effective temperatures, denoted $T_{eff}$. The results indicate that the $T_{eff}$ values obtained from the Bose-Einstein, Boltzmann, Fermi-Dirac, and Tsallis distributions exhibit a systematically decreasing trend. These $T_{eff}$ values also decrease with decreasing collision centrality. Furthermore, based on the spectra of given particles, a perfect linear relationship is observed between different pairwise combinations of $T_{eff}$ derived from both Boltzmann and Bose-Einstein (Fermi-Dirac) distributions, as well as between Tsallis and Bose-Einstein (Fermi-Dirac) distributions.
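For reference, in one common convention the fitted forms read
\[
f_{\rm std}(p_T) \propto m_T \left[\exp\!\left(\frac{m_T}{T_{eff}}\right) \mp 1\right]^{-1},
\qquad
f_{\rm Tsallis}(p_T) \propto m_T \left[1 + (q-1)\,\frac{m_T}{T_{eff}}\right]^{-\frac{q}{q-1}},
\]
with $m_T = \sqrt{p_T^2 + m_0^2}$, where $-1$ ($+1$) applies to bosons (fermions) and dropping the $\mp 1$ gives the Boltzmann case; conventions in the literature differ in normalization, in the Tsallis exponent, and in the use of $m_T$ versus $m_T - m_0$.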
The collective-flow-assisted nuclear shape-imaging method in ultra-relativistic heavy-ion collisions has recently been used to characterize nuclear collective states. In this paper, we assess the foundations of the shape-imaging technique employed in these studies. We conclude that, on the whole, the discussion regarding low-energy nuclear physics is confusing and the suggested impact on nuclear structure research is overstated. Conversely, efforts to incorporate existing knowledge on nuclear shapes into analysis pipelines can be beneficial for benchmarking tools and calibrating models used to extract information from ultra-relativistic heavy ion experiments.