A new Highly Granular time-of-flight Neutron Detector (HGND) is being developed and constructed to measure azimuthal neutron flow and neutron yields in nucleus-nucleus collisions at energies up to 4A GeV in the fixed-target experiment BM@N at JINR. Details of the detector design and results of performance studies for neutron identification and reconstruction are presented. Simulations of different options for the HGND layout at BM@N are compared, and several proposed neutron-reconstruction methods, including machine-learning and clustering approaches, are discussed.
Using $(10087 \pm 44) \times 10^6$ $J/\psi$ events collected with the BESIII detector in 2009, 2012, 2018 and 2019, the tracking efficiency of charged pions is studied using the decay $J/\psi \rightarrow \pi^+ \pi^- \pi^0$. The systematic uncertainty of the tracking efficiency and the corresponding correction factors are evaluated in bins of the transverse momentum and polar angle of the charged pions.
A direct search for new heavy neutral Higgs bosons A and H in the $\mathrm{t\bar{t}}$Z channel is presented, targeting the process pp $\to$ A $\to$ ZH with H $\to$ $\mathrm{t\bar{t}}$. For the first time, the channel with decays of the Z boson to muons or electrons in association with all-hadronic decays of the $\mathrm{t\bar{t}}$ system is targeted. The analysis uses proton-proton collision data collected at the CERN LHC with the CMS experiment at $\sqrt{s}$ = 13 TeV, which correspond to an integrated luminosity of 138 fb$^{-1}$. No signal is observed. Upper limits on the product of the cross section and branching fractions are derived for narrow resonances A and H with masses up to 2100 and 2000 GeV, respectively, assuming A boson production through gluon fusion. The results are also interpreted within two-Higgs-doublet models, complementing and substantially extending the reach of previous searches.
According to perturbative quantum chromodynamics calculations, in $pp$ collisions at $\sqrt{s} = $~200 and 510 GeV studied at RHIC, jet production at mid-pseudorapidity, $|\eta| < 1$, is dominated by quark-gluon and gluon-gluon scattering processes. Jets at RHIC are therefore direct probes of the gluon parton distribution function (PDF) for momentum fractions $0.01 < x < 0.5$. Moreover, the $W$ boson cross-section ratio, $\sigma(W^+)/\sigma(W^-)$, in $pp$ collisions at $\sqrt{s} = 510$~GeV is an effective tool to explore the anti-quark PDF ratio $\bar{d}/\bar{u}$. Finally, di-$\pi^{0}$ correlations at forward pseudorapidity, $2.6 < \eta < 4.0$, are an important probe of the non-linear gluon dynamics at low $x$, where the gluon density in protons and nuclei is high. In these proceedings, we present recent STAR results on mid-pseudorapidity inclusive jet cross sections at $\sqrt{s} =$~200 and 510 GeV in $pp$ collisions, $W$ boson cross-section ratios at $\sqrt{s} = 510$~GeV in $pp$ collisions, and forward di-$\pi^0$ correlations in $pp$, $p\textrm{Al}$ and $p\textrm{Au}$ collisions at $\sqrt{s_{\textrm{\tiny NN}}} = 200$~GeV.
For most high-precision experiments in particle physics, it is essential to know the luminosity with the highest possible accuracy. The luminosity is determined by the convolution of the particle densities of the colliding beams. In dedicated van der Meer transverse beam-separation scans, the convolution function is sampled along the horizontal and vertical axes to determine the beam convolution and obtain an absolute luminosity calibration. For this purpose, the van der Meer luminometer-rate data are fitted separately in the two directions with the analytic functions giving the best description. Under the assumption that the 2D convolution shape is factorizable, it can be calculated from the two 1D fits. The task of XY-factorization analyses is to test this assumption and give a quantitative measure of the effect of non-factorizability on the calibration constant, improving the accuracy of luminosity measurements.

We perform a dedicated analysis of XY non-factorization on proton-proton data collected in 2022 at $\sqrt{s} = 13.6$~TeV by the CMS experiment. A detailed examination of the shape of the bunch convolution function is presented, studying various biases and choosing the best-fit analytic 2D functions to finally obtain the correction and its uncertainty.
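As a toy illustration of the calibration principle (not CMS analysis code; the function name and numbers below are invented for the sketch), the effective convolved beam width $\Sigma$ can be extracted from a 1D scan as $\Sigma = \int R\,\mathrm{d}\Delta / (\sqrt{2\pi}\,R_\mathrm{peak})$, which for a Gaussian rate curve simply returns its width:

```python
import math

def vdm_sigma(separations, rates):
    """Effective convolved beam width Sigma from a 1D van der Meer scan:
    Sigma = integral(R dDelta) / (sqrt(2*pi) * R_peak), via the trapezoid rule."""
    area = 0.0
    for i in range(len(separations) - 1):
        area += 0.5 * (rates[i] + rates[i + 1]) * (separations[i + 1] - separations[i])
    peak = max(rates)
    return area / (math.sqrt(2.0 * math.pi) * peak)

# Synthetic factorizable scan: Gaussian rate curve with width 0.12 mm (illustrative)
sigma_true = 0.12
seps = [-0.6 + 0.01 * i for i in range(121)]
rates = [math.exp(-0.5 * (d / sigma_true) ** 2) for d in seps]
sigma_x = vdm_sigma(seps, rates)  # recovers ~0.12
```

In the factorizable approximation the calibration then uses the product $\Sigma_x\Sigma_y$; the analysis described above quantifies how much a genuinely 2D, non-factorizable bunch shape shifts this product.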
A measurement of off-shell Higgs boson production in the $H^*\to ZZ\to 4\ell$ decay channel is presented. The measurement uses 140 fb$^{-1}$ of proton-proton collisions at $\sqrt{s}=13$ TeV collected by the ATLAS detector at the Large Hadron Collider and supersedes the previous result in this decay channel using the same dataset. The data analysis is performed using a neural simulation-based inference method, which builds per-event likelihood ratios using neural networks. The observed (expected) off-shell Higgs boson production signal strength in the $ZZ\to 4\ell$ decay channel at 68% CL is $0.87^{+0.75}_{-0.54}$ ($1.00^{+1.04}_{-0.95}$). The evidence for off-shell Higgs boson production using the $ZZ\to 4\ell$ decay channel has an observed (expected) significance of $2.5\sigma$ ($1.3\sigma$). The expected result represents a significant improvement relative to that of the previous analysis of the same dataset, which obtained an expected significance of $0.5\sigma$. When combined with the most recent ATLAS measurement in the $ZZ\to 2\ell 2\nu$ decay channel, the evidence for off-shell Higgs boson production has an observed (expected) significance of $3.7\sigma$ ($2.4\sigma$). The off-shell measurements are combined with the measurement of on-shell Higgs boson production to obtain constraints on the Higgs boson total width. The observed (expected) value of the Higgs boson width at 68% CL is $4.3^{+2.7}_{-1.9}$ ($4.1^{+3.5}_{-3.4}$) MeV.
Neural simulation-based inference is a powerful class of machine-learning-based methods for statistical inference that naturally handles high-dimensional parameter estimation without the need to bin data into low-dimensional summary histograms. Such methods are promising for a range of measurements, including at the Large Hadron Collider, where no single observable may be optimal to scan over the entire theoretical phase space under consideration, or where binning data into histograms could result in a loss of sensitivity. This work develops a neural simulation-based inference framework for statistical inference, using neural networks to estimate probability density ratios, which enables the application to a full-scale analysis. It incorporates a large number of systematic uncertainties, quantifies the uncertainty due to the finite number of events in training samples, develops a method to construct confidence intervals, and demonstrates a series of intermediate diagnostic checks that can be performed to validate the robustness of the method. As an example, the power and feasibility of the method are assessed on simulated data for a simplified version of an off-shell Higgs boson couplings measurement in the four-lepton final states. This approach represents an extension to the standard statistical methodology used by the experiments at the Large Hadron Collider, and can benefit many physics analyses.
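The core density-ratio trick behind such methods can be stated in a few lines (a schematic, idealized sketch: the Bayes-optimal classifier score stands in for a trained neural network, and the two Gaussian densities are invented for illustration):

```python
import math

def gauss(x, mu, sigma):
    """Normal probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def optimal_classifier(x, p_num, p_den):
    """Bayes-optimal classifier score s(x) = p_num / (p_num + p_den),
    the target a well-trained, calibrated network approximates."""
    a, b = p_num(x), p_den(x)
    return a / (a + b)

def likelihood_ratio(s):
    """Invert the score into the density ratio: r(x) = s / (1 - s)."""
    return s / (1.0 - s)

p1 = lambda x: gauss(x, 0.5, 1.0)  # "alternative hypothesis" density (illustrative)
p0 = lambda x: gauss(x, 0.0, 1.0)  # "reference hypothesis" density (illustrative)

x = 1.3
s = optimal_classifier(x, p1, p0)
r = likelihood_ratio(s)  # equals p1(x)/p0(x) exactly for the Bayes-optimal score
```

A neural network trained to separate samples drawn from the two hypotheses approximates `s(x)`, so the same inversion yields per-event likelihood ratios without binning.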
P-type point-contact germanium (pPCGe) detectors have been widely adopted in searches for low-energy physics events such as neutrinos and dark matter, owing to their enhanced background rejection, sensitivity at energies down to the sub-keV range, and particularly fine energy resolution. Nonetheless, the pPCGe is subject to irregular behaviour caused by surface effects for events near the passivated surface. These surface events can, in general, be distinguished from events occurring in the germanium crystal bulk by their slower pulse rise times. Unfortunately, the rise-time spectra of bulk and surface events begin to overlap at sub-keV energies. In this work, we propose a novel method based on cross-correlation shape matching combined with a low-pass filter to constrain the initial parameter estimates of the signal pulse. This improvement at the lowest level leads to a 50% reduction in computation time and refinements in the rise-time resolution, which in turn enhance the overall analysis. To evaluate the performance of the method, we simulate artificial pulses that resemble bulk and surface pulses using a programmable pulse generator module (pulser). The pulser-generated pulses are then used to examine pulse behaviour at near-threshold energies, suggesting a roughly 70% background-leakage reduction in the bulk spectrum. Finally, the method is tested on data collected from the TEXONO experiment, where the results are consistent with our pulser observations and demonstrate the possibility of lowering the analysis threshold by at least 10 eV.
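A minimal sketch of the idea, with invented waveforms and filter parameters (not the TEXONO pipeline): low-pass filter both the pulse and a template, then take the cross-correlation argmax as the initial estimate of the pulse timing:

```python
import math

def lowpass(samples, width=5):
    """Moving-average low-pass filter (crude stand-in for the analysis filter)."""
    half = width // 2
    n = len(samples)
    out = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out

def best_shift(pulse, template, max_shift=40):
    """Return the lag that maximizes the cross-correlation with the template."""
    best, best_score = 0, float("-inf")
    for lag in range(-max_shift, max_shift + 1):
        score = 0.0
        for i, t in enumerate(template):
            j = i + lag
            if 0 <= j < len(pulse):
                score += pulse[j] * t
        if score > best_score:
            best, best_score = lag, score
    return best

def bump(i, center, sigma=8.0):
    """Illustrative smooth pulse shape."""
    return math.exp(-0.5 * ((i - center) / sigma) ** 2)

# Template centered at sample 60; test pulse delayed by 12 samples,
# plus small deterministic pseudo-noise.
template = [bump(i, 60) for i in range(200)]
pulse = [bump(i, 72) + 0.001 * ((i * 31) % 7 - 3) / 3.0 for i in range(200)]
shift = best_shift(lowpass(pulse), lowpass(template))  # recovers the 12-sample delay
```

The recovered shift seeds the subsequent pulse fit, which is what shrinks the fit's search space and cuts the computation time.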
For decades, researchers have developed task-specific models to address scientific challenges across diverse disciplines. Recently, large language models (LLMs) have shown enormous capabilities in handling general tasks; however, these models encounter difficulties in addressing real-world scientific problems, particularly in domains involving large-scale numerical data analysis, such as experimental high energy physics. This limitation is primarily due to the inefficacy of byte-pair-encoding (BPE) tokenization with numerical data. In this paper, we propose a task-agnostic architecture, BBT-Neutron, which employs a binary tokenization method to facilitate pretraining on a mixture of textual and large-scale numerical experimental data. The project code is available at https://github.com/supersymmetry-technologies/bbt-neutron. We demonstrate the application of BBT-Neutron to Jet Origin Identification (JoI), a critical categorization challenge in high-energy physics that distinguishes jets originating from various quarks or gluons. Our results indicate that BBT-Neutron achieves performance comparable to state-of-the-art task-specific JoI models. Furthermore, we examine the scaling behavior of BBT-Neutron's performance with increasing data volume, suggesting the potential for BBT-Neutron to serve as a foundation model for particle physics data analysis, with possible extensions to a broad spectrum of scientific computing applications for Big Science experiments, industrial manufacturing and spatial computing.
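A byte-level tokenizer of the kind described can be sketched in a few lines (illustrative only; the actual BBT-Neutron tokenizer may differ in its serialization details). Every token is one byte value, so a number is never split along arbitrary learned BPE merge boundaries:

```python
def binary_tokenize(record):
    """Byte-level ("binary") tokenization: every token is a byte value in 0..255,
    giving a fixed vocabulary that treats digits of numerical data uniformly.
    Illustrative sketch, not the BBT-Neutron implementation."""
    if isinstance(record, (int, float)):
        record = repr(record)          # serialize numbers to text first
    return list(record.encode("utf-8"))

tokens = binary_tokenize("pT=3.141 GeV")   # one token per byte
decoded = bytes(tokens).decode("utf-8")    # lossless round trip
```

The fixed 256-entry vocabulary makes the scheme task-agnostic: text and large numerical arrays pass through the same pipeline without a learned merge table.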
In this work, we present an evaluation of subleading effects in the hadronic light-by-light contribution to the anomalous magnetic moment of the muon. Using a recently derived optimized basis, we first study the matching of axial-vector contributions to short-distance constraints at the level of the scalar basis functions, finding that the tails of the pseudoscalar poles and the tensor mesons also play a role. We then develop a matching strategy that allows for a combined evaluation of axial-vector and short-distance constraints, supplemented by an estimate of tensor-meson contributions based on simplified assumptions for their transition form factors. Uncertainties are primarily propagated from the axial-vector transition form factors and the variation of the matching scale, but we also consider estimates of the low-energy effect of hadronic states not explicitly included. In total, we obtain $a_\mu^\text{HLbL}\big|_\text{subleading}=33.2(7.2)\times 10^{-11}$, which in combination with previously evaluated contributions in the dispersive approach leads to $a_\mu^\text{HLbL}\big|_\text{total}=101.9(7.9)\times 10^{-11}$.
We bootstrap the leading-order hadronic contribution to the muon anomalous magnetic moment. This leading hadronic contribution comes from the hadronic vacuum polarization (HVP) function. We explore the bootstrap constraints, namely unitarity, analyticity, crossing symmetry, and finite-energy sum rules (FESR) from quantum chromodynamics (QCD). Unitarity appears as a positive semi-definite condition among the pion partial waves, the form factor, and the spectral density function of the HVP, which establishes a lower bound on the leading-order hadronic contribution to the muon anomalous magnetic moment. We also impose chiral symmetry breaking to improve the bound slightly. By combining the lower bound with the remaining, extensively calculated contributions, we obtain a bound on the anomalous magnetic moment, $a_\mu^\text{bootstrap-min}=11659176.3^{+3}_{-3}\times 10^{-10}$, and the Standard Model prediction saturates this bound within the error bars. We also present a possible improvement that is saturated by both the lattice computation and the measured value within the error bars.
Hadronic light-by-light scattering (HLbL) is one of the critical contributions to the Standard-Model prediction of the anomalous magnetic moment of the muon. In this work, we present a complete evaluation using a dispersive formalism, in which the HLbL tensor is reconstructed from its discontinuities, expressed in terms of simpler hadronic matrix elements that can be extracted from experiment. Profiting from recent developments in the determination of axial-vector transition form factors, short-distance constraints for the HLbL tensor, and the vector-vector-axial-vector correlator, we obtain $a_\mu^\text{HLbL}=101.9(7.9)\times 10^{-11}$, which meets the precision requirements set by the final result of the Fermilab experiment.
We explore spontaneous CP violation (SCPV) in the minimal non-supersymmetric SO(10) grand unified theory (GUT), with a scalar sector comprising a CP-even $45_H$, a $126_H$, and a complex $10_H$. All renormalizable couplings are real due to CP symmetry, and the Kobayashi-Maskawa phase arises solely from complex electroweak vacuum expectation values. The model requires an additional Higgs doublet fine-tuned below 500 GeV and constrains new Yukawa couplings, linking certain flavor-violating (FV) processes. Future proton decay observations may reveal correlated FV decay ratios, offering insights into minimal SO(10).
In this study, we investigate the impact of new LHC inclusive jet and dijet measurements on parton distribution functions (PDFs) that describe the proton structure, with a particular focus on the gluon distribution at large momentum fraction, $x$, and the corresponding partonic luminosities. We assess constraints from these datasets using next-to-next-to-leading-order (NNLO) theoretical predictions, accounting for a range of uncertainties from scale dependence and numerical integration. Among the scale choices available for the calculations, our analysis shows that the central predictions for inclusive jet production exhibit a smaller scale dependence than those for dijet production. We examine the relative constraints on the gluon distribution provided by the inclusive jet and dijet distributions and also explore the phenomenological implications for inclusive $H$, $t\bar{t}$, and $t\bar{t}H$ production at the LHC at 14 TeV.
The spin correlation of final-state hadrons provides a novel platform to explore the hadronization mechanism of polarized partons in unpolarized high-energy collisions. In this work, we investigate the helicity correlation of two hadrons originating from the same single parton. The production of such a dihadron system is formally described by the interference dihadron fragmentation function, in which the helicity correlation between the two hadrons arises from both long-distance nonperturbative physics and perturbative QCD evolution. Beyond the extraction of the dihadron fragmentation function, we demonstrate that it is also a sensitive observable to the longitudinal spin transfer, characterized by the single-hadron fragmentation function $G_{1L}$. This intriguing connection opens up new opportunities for understanding the spin dynamics of hadronization and provides a complementary approach to corresponding studies using polarized beams and targets.
Under the assumption that the various pieces of evidence for a `95 GeV' excess, seen in data at the Large Electron Positron (LEP) collider as well as the Large Hadron Collider (LHC), correspond to actual signals of new physics Beyond the Standard Model (BSM), we characterise the underlying particle explaining these in terms of its charge-parity (CP) quantum numbers in a model-independent way. In doing so, we assume the new object has spin 0 and test the CP-even (scalar) and CP-odd (pseudoscalar) hypotheses, as well as superpositions of these, in its $\tau^+\tau^-$ decays. We show that the High-Luminosity LHC (HL-LHC) will be in a position to disentangle the CP nature of such a new particle.
Understanding the strong interactions between baryons, especially hyperon-nucleon ($Y$-$N$) interactions, is crucial for understanding the equation of state (EoS) of nuclear matter and the inner structure of neutron stars. In these proceedings, we present high-statistics measurements of $p$-$\Xi^{-}$ ($\bar{p}$-$\bar{\Xi}^{+}$) correlation functions in isobar (Ru+Ru, Zr+Zr) and Au+Au collisions at $\sqrt{s_{\mathrm{NN}}}$ = 200 GeV by the STAR experiment. With the Lednick\'y-Lyuboshitz approach, the source size and strong-interaction parameters of $p$-$\Xi^{-}$ ($\bar{p}$-$\bar{\Xi}^{+}$) pairs are extracted.
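For context (the standard form of the model, not copied from these proceedings), the Lednick\'y-Lyuboshitz approach relates the correlation function for a Gaussian source of radius $r_0$ to the $s$-wave scattering amplitude $f(k^*)$ in the effective-range expansion:

```latex
C(k^*) \simeq 1
 + \frac{1}{2}\left|\frac{f(k^*)}{r_0}\right|^2\left(1-\frac{d_0}{2\sqrt{\pi}\,r_0}\right)
 + \frac{2\,\Re f(k^*)}{\sqrt{\pi}\,r_0}\,F_1(2k^*r_0)
 - \frac{\Im f(k^*)}{r_0}\,F_2(2k^*r_0),
\qquad
f(k^*) = \left(\frac{1}{f_0} + \frac{1}{2}d_0\,k^{*2} - i k^*\right)^{-1},
```

with $F_1(z)=\frac{1}{z}\int_0^z e^{x^2-z^2}\,\mathrm{d}x$ and $F_2(z)=(1-e^{-z^2})/z$; fits of the measured $C(k^*)$ then yield the source size $r_0$, the scattering length $f_0$, and the effective range $d_0$.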
The yields and ratios of light nuclei in heavy-ion collisions offer a method to distinguish between the thermal and coalescence models. Ratios such as $\rm N_t \times N_p/N_d^2$ and $\rm N_{^3He} \times N_p/N_d^2$ are suggested as potential probes of critical phenomena in the QCD phase diagram. The significantly larger datasets from STAR BES-II compared to BES-I, combined with enhanced detector capabilities, allow for more precise measurements. In these proceedings, we present the centrality and energy dependence of the transverse momentum spectra and particle yields of (anti-)protons, (anti-)deuterons, and $\rm ^3He$ at BES-II energies ($\sqrt{s_{\rm NN}}$ = $7.7 - 27$ GeV), as well as the light-nuclei-to-proton yield ratios and the coalescence parameters $B_2(\rm d)$ and $B_3(\rm ^3He)$.
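For reference (the textbook coalescence definition, not specific to these proceedings), the parameters $B_A$ quoted above relate the invariant yield of a nucleus with mass number $A$ to that of its constituent nucleons, conventionally assuming neutron spectra equal to proton spectra:

```latex
E_A\frac{\mathrm{d}^3N_A}{\mathrm{d}p_A^3}
 = B_A\left(E_p\frac{\mathrm{d}^3N_p}{\mathrm{d}p_p^3}\right)^{A}
 \Bigg|_{\vec{p}_p = \vec{p}_A/A},
```

so $B_2(\mathrm{d})$ and $B_3(\mathrm{^3He})$ correspond to $A=2$ and $A=3$, respectively, and their system-size and energy dependence probes the coalescence picture directly.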
We examine the correlations between Higgs decays to photons and electric dipole moments (EDMs) in the CP-violating flavor-aligned two-Higgs-doublet model (2HDM). It is convenient to work in the Higgs basis $\{H_1,H_2\}$ where only the first Higgs doublet field $H_1$ acquires a vacuum expectation value. In light of the LHC Higgs data, which agree well with Standard Model (SM) predictions, it follows that the parameters of the 2HDM are consistent with the Higgs alignment limit. In this parameter regime, the observed SM-like Higgs boson resides almost entirely in ${H}_1$, and the other two physical neutral scalars, which reside almost entirely in ${H}_2$, are approximate eigenstates of CP (denoted by the CP-even $H$ and the CP-odd $A$). In the Higgs basis, the scalar potential term $\bar{Z}_7 {H}_1^\dagger {H}_2 {H}_2^\dagger {H}_2+{\rm h.c.}$ governs the charged-Higgs loop contributions to the decay of $H$ and $A$ to photons. If $\Re \bar{Z}_7 \Im\ \bar{Z}_7 \neq 0$, then CP-violating effects are present and allow for an $H^+ H^- A$ coupling, which can yield a sizable branching ratio for $A\to\gamma\gamma$. These CP-violating effects also generate non-zero EDMs for the electron, the neutron and the proton. We examine these correlations for the cases of $m_{A}=95$ GeV and $m_{A}=152$ GeV where interesting excesses in the diphoton spectrum have been observed at the LHC. These excesses can be explained via the decay of $A$ while being consistent with the experimental bound for the electron EDM in regions of parameter space that can be tested with future neutron and proton EDM measurements. This allows for the interesting possibility where the 95 GeV diphoton excess can be identified with $A$, while $m_H\simeq 98$ GeV can account for the best fit to the LEP excess in $e^+e^-\to ZH$ with $H\to b\bar b$.
Studying hyper-nuclei yields and their collectivity can shed light on their production mechanism as well as the hyperon-nucleon interactions. Heavy-ion collisions from the RHIC beam energy scan phase II (BES-II) provide a unique opportunity to understand these at high baryon densities. In these proceedings, we present a systematic study of the energy dependence of the directed flow ($v_{1}$) of $\Lambda$ and hyper-nuclei ($^{3}_{\Lambda}{\rm H}$, $^{4}_{\Lambda}{\rm H}$) in mid-central Au+Au collisions at $\sqrt{s_{\mathrm{NN}}}$ = 3.2, 3.5, 3.9 and 4.5 GeV, collected by the STAR experiment in fixed-target mode during BES-II. The rapidity ($y$) dependence of the hyper-nuclei $v_{1}$ is studied in mid-central collisions. The extracted $v_{1}$ slopes ($\mathrm{d}v_{1}/\mathrm{d}y|_{y=0}$) of the hyper-nuclei are positive and decrease gradually as the collision energy increases. These hyper-nuclei results are compared to those of light nuclei, including p, d, t/$\rm ^{3}He$ and $\rm ^{4}He$. Finally, these results are compared with a hadronic transport model including a coalescence afterburner.
After reviewing the sound speeds in various forms and conditions of matter, we investigate the sound speed of hadronic matter that has decoupled from the hot and dense system formed in high-energy collisions. We comprehensively consider factors such as the energy loss of the incident beam, the rapidity shift of leading nucleons, and the Landau hydrodynamic model for hadron production. The sound speed is related to the width, or standard deviation, of the Gaussian rapidity distribution of hadrons. The extracted squared speed of sound lies within the range 0 to 1/3 in most cases. For scenarios exceeding this limit, we also provide an explanation.
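Assuming the commonly used Landau-model relation $\sigma_y^2 = \tfrac{8}{3}\,\frac{c_s^2}{1-c_s^4}\,\ln\!\frac{\sqrt{s_{NN}}}{2m_p}$ between the Gaussian rapidity width and the speed of sound (a hedged sketch; the exact formula used in the paper may differ), the extraction can be inverted numerically:

```python
import math

M_P = 0.938  # proton mass in GeV (natural units assumed)

def rapidity_width_sq(cs2, sqrt_snn):
    """Landau-model squared width of the Gaussian rapidity distribution,
    sigma_y^2 = (8/3) * cs2/(1 - cs2^2) * ln(sqrt(s_NN)/(2 m_p)),
    where cs2 is the squared speed of sound."""
    L = math.log(sqrt_snn / (2.0 * M_P))
    return (8.0 / 3.0) * cs2 / (1.0 - cs2 ** 2) * L

def extract_cs2(sigma_y, sqrt_snn, tol=1e-10):
    """Invert the relation for cs2 by bisection on (0, 1);
    the width is monotonically increasing in cs2 on this interval."""
    target = sigma_y ** 2
    lo, hi = 1e-9, 1.0 - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rapidity_width_sq(mid, sqrt_snn) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Consistency check: the ideal-gas value cs2 = 1/3 round-trips through
# its own predicted width at sqrt(s_NN) = 200 GeV (illustrative numbers).
sig = math.sqrt(rapidity_width_sq(1.0 / 3.0, 200.0))
cs2 = extract_cs2(sig, 200.0)
```

A fitted rapidity width narrower (wider) than the ideal-gas prediction then maps to $c_s^2$ below (above) 1/3, which is the comparison the abstract describes.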
We determine the nucleon axial, scalar and tensor charges and the nucleon $\sigma$-terms using twisted mass fermions. We employ three ensembles with approximately equal physical spatial extent of about 5.5~fm, three values of the lattice spacing, approximately 0.06~fm, 0.07~fm and 0.08~fm, and with the masses of the degenerate up and down, strange and charm quarks tuned to approximately their physical values. We compute both isovector and isoscalar charges and $\sigma$-terms and their flavor decomposition, including the disconnected contributions. We use the Akaike Information Criterion to evaluate systematic errors due to excited states and the continuum extrapolation. For the nucleon isovector axial charge we find $g_A^{u-d}=1.250(24)$, in agreement with the experimental value. Moreover, we extract the nucleon $\sigma$-terms and find for the light quark content $\sigma_{\pi N}=41.9(8.1)$~MeV and for the strange $\sigma_{s}=30(17)$~MeV.