A measurement of the associated production of a top-quark pair with the Higgs boson ($t\bar{t}H$) in multilepton final states is presented. The analysis is based on a data sample of proton-proton collisions at $\sqrt{s}=13$ TeV recorded with the ATLAS detector at the CERN Large Hadron Collider, corresponding to an integrated luminosity of 140 fb$^{-1}$. Six final states, defined by the number and flavour of reconstructed charged leptons, are combined in a simultaneous likelihood fit to extract the $t\bar{t}H$ signal and constrain the most relevant backgrounds. The measured $t\bar{t}H$ cross-section normalised to the Standard Model (SM) prediction is $\sigma_{t\bar tH}/\sigma^{\text{SM}}=0.63^{+0.20}_{-0.19}$, corresponding to an observed (expected) significance of 3.3$\sigma$ (5.3$\sigma$). Two additional fits are performed: one measures the $t\bar{t}H$ cross-section differentially in bins of the Higgs boson transverse momentum in the simplified template cross-section framework, and the other extracts the cross-section for the associated production of a single top quark with the Higgs boson ($tH$) together with that of $t\bar{t}H$. The $CP$ structure of the top quark-Higgs boson Yukawa coupling is probed through an analysis of $t\bar{t}H$ and $tH$ events. The results are compatible with the SM hypothesis, and values of the mixing angle between $CP$-even and $CP$-odd top-Higgs Yukawa couplings of $| \alpha | > 62^\circ$ are excluded at the 68$\%$ confidence level.
Systematic uncertainties in high energy physics and astrophysics are often significant contributions to the overall uncertainty in a measurement, in many cases being comparable to the statistical uncertainties. However, consistent definitions and practice remain elusive: there are few formal definitions, and significant ambiguity exists in what a given analysis classifies as a systematic versus a statistical uncertainty. I will describe current practice and recommend a definition and classification of systematic uncertainties that allows one to treat these sources of uncertainty in a consistent and robust fashion. Classical and Bayesian approaches will be contrasted.
Transverse position reconstruction in a Time Projection Chamber (TPC) is crucial for accurate particle tracking and classification, and is typically accomplished using machine learning techniques. However, these methods often exhibit biases and limited resolution due to incompatibility between real experimental data and simulated training samples. To mitigate this issue, we present a domain-adaptive reconstruction approach based on a cycle-consistent generative adversarial network. In the prototype detector, the application of this method led to a 60.6% increase in the reconstructed radial boundary. When the method is scaled to a simulated 50-kg TPC, an evaluation of the resolution of simulated events shows an additional improvement of at least 27%.
Using $(10087\pm44)\times10^{6}$ $J/\psi$ events collected with the BESIII detector, a full angular distribution analysis is carried out on the process $J/\psi\rightarrow\Lambda\bar{\Lambda}\rightarrow n\pi^{0}\bar{p}\pi^{+}+c.c.$ The decay parameters $\alpha_{0}$ for $\Lambda\rightarrow n\pi^{0}$ and $\bar{\alpha}_{0}$ for $\bar{\Lambda}\rightarrow \bar{n}\pi^{0}$ are measured to be $0.668\pm0.007\pm0.002$ and $-0.677\pm0.007\pm0.003$, respectively, yielding the most precise test of $CP$ symmetry in neutral $\Lambda$ decays: $A_{CP}^{0}=(\alpha_{0}+\bar{\alpha}_{0})/(\alpha_{0}-\bar{\alpha}_{0})=-0.006\pm0.007\pm0.002$. The ratios $\alpha_{0}/\alpha_{-}$ and $\bar{\alpha}_{0}/\alpha_{+}$ are determined to be $0.884\pm0.013\pm0.006$ and $0.885\pm0.013\pm0.004$, where $\alpha_{-}$ and $\alpha_{+}$ are the decay parameters of $\Lambda\rightarrow p\pi^{-}$ and $\bar{\Lambda}\rightarrow\bar{p}\pi^{+}$, respectively. The ratios, found to be smaller than unity by more than $5\sigma$, confirm the presence of the $\Delta I = 3/2$ transition in the $\Lambda$ and $\bar{\Lambda}$ decays, which is expected to improve the theoretical calculations of the strong and weak phases, and of $A_{CP}$, in hyperon decays. In all results, the first and second uncertainties are statistical and systematic, respectively.
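As an illustrative cross-check (not part of the analysis above), the quoted $CP$ asymmetry can be recomputed from the rounded central values of the two decay parameters; the small difference from the published $-0.006$ reflects rounding of the inputs.

```python
# Illustrative arithmetic check of the quoted CP asymmetry,
# A_CP^0 = (alpha0 + alpha0bar) / (alpha0 - alpha0bar),
# using the rounded central values from the abstract.
alpha0 = 0.668      # Lambda -> n pi0
alpha0bar = -0.677  # anti-Lambda -> anti-n pi0

a_cp = (alpha0 + alpha0bar) / (alpha0 - alpha0bar)
print(f"A_CP^0 = {a_cp:.4f}")  # about -0.0067, consistent with -0.006 +/- 0.007
```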
Transformers are very effective in capturing both global and local correlations within high-energy particle collisions, but they present deployment challenges in high-data-throughput environments, such as the CERN LHC. The quadratic complexity of transformer models demands substantial resources and increases latency during inference. To address these issues, we introduce the Spatially Aware Linear Transformer (SAL-T), a physics-inspired enhancement of the Linformer architecture that maintains linear attention. Our method incorporates spatially aware partitioning of particles based on kinematic features, thereby computing attention between regions of physical significance. Additionally, we employ convolutional layers to capture local correlations, informed by insights from jet physics. In addition to outperforming the standard Linformer in jet classification tasks, SAL-T also achieves classification results comparable to full-attention transformers, while using considerably fewer resources with lower latency during inference. Experiments on a generic point cloud classification dataset (ModelNet10) further confirm this trend. Our code is available at this https URL.
We study Big Bang Nucleosynthesis (BBN) constraints on heavy QCD axions. BBN offers a powerful probe of new physics that modifies the neutron-to-proton ratio during the process, thanks to the precisely measured primordial Helium-4 abundance. A heavy QCD axion is an attractive target for this probe: it is not only a hypothetical particle well motivated by the strong CP problem, but it also decays dominantly to hadrons if kinematically allowed. A range of axion lifetimes is thus excluded in which the hadronic decays would significantly alter the neutron-to-proton ratio. We compute the axion-induced modification of the neutron-to-proton ratio and obtain robust upper bounds on the axion lifetime, as low as 0.017 s for axion masses above 300 MeV. Remarkably, this is stronger than projected future CMB bounds via $N_{\rm eff}$. Our bounds are largely insensitive to uncertainties in hadronic cross sections and the axion's branching fractions into various hadrons, as well as to the precise value of the initial axion abundance. We also incorporate, for the first time, several key improvements, such as scattering processes by energetic $K_L$ and secondary hadrons, that can also be important for studying general hadronic injections during BBN, not limited to those from axion decays.
In models of strongly-interacting dark sectors, the production of dark quarks at accelerators can give rise to dark showers with multiple dark mesons in the final state. If some of these dark mesons are sufficiently light and long-lived, they can be detected with searches for displaced vertices at beam-dump experiments and electron-positron colliders. In this work we focus on the case in which dark quark production proceeds via effective operators, while the dark sector analogue of the $\rho^0$ meson can decay via kinetic mixing. We evaluate current constraints from NA62 and BaBar as well as sensitivity projections for SHiP and Belle II. We find that there exists a sizable parameter region where SHiP may detect several displaced vertices in a single event and thus obtain valuable information about the structure of the dark sector.
We investigate the neutrino sector in the framework of flavor deconstruction with an inverse-seesaw realization. This setup naturally links the hierarchical charged-fermion masses to the anarchic pattern of light-neutrino mixing. We determine the viable parameter space consistent with oscillation data and study the phenomenology of heavy neutral leptons (HNLs) and lepton-flavor-violating (LFV) processes. Current bounds from direct HNL searches and LFV decays constrain the right-handed neutrino scale to a few TeV, while future $\mu \to e$ experiments will probe most of the region with $\Lambda \lesssim 10~\text{TeV}$. Among possible realizations, models deconstructing $\mathrm{SU}(2)_\mathrm{L} \times \mathrm{U}(1)_\mathrm{B-L}$ or $\mathrm{SU}(2)_\mathrm{L} \times \mathrm{U}(1)_\mathrm{R} \times \mathrm{U}(1)_\mathrm{B-L}$ allow the lowest deconstruction scale.
The decay $A\to ZH$ is a characteristic signal of two-Higgs-doublet models (2HDMs), where $A$ and $H$ lie primarily within the same $SU(2)_L$ multiplet, leading to a coupling of order $g_2$ to the $Z$ boson. The subsequent decay $H \to tt^{(*)}$ is particularly promising, as it gives rise to distinct final states involving multiple leptons and $b$-jets. The required splitting between $m_A$ and $m_H$ can naturally occur near the electroweak scale while being consistent with perturbative unitarity. Whereas dedicated ATLAS and CMS searches focused on the region with both top quarks on-shell, we cover lower masses, where one top quark is off-shell, by recasting Standard Model $t\bar{t}Z$ measurements of ATLAS and CMS. The obtained limits on $\sigma(A\to ZH)\times {\rm Br} (H\to t\bar t)$ are between $0.12$ pb and $0.62$ pb. When interpreted within the type-I 2HDM, a sizable part of the so-far-unconstrained low-mass region is excluded. Interestingly, we observe these stringent limits despite a preference (up to $2.5\sigma$) for a non-zero new physics signal, most pronounced for $m_A \approx 450-460$ GeV and $m_H\approx 290$ GeV, with a best-fit value of $\sigma(A \to ZH) \times {\rm Br}(H \to t\bar t) \approx 0.3$ pb. This cross section can be accommodated within a top-philic 2HDM for a top-Yukawa coupling of the second Higgs doublet of $\mu_t \gtrsim 0.16$.
The introduction of the color quantum number is conventionally narrated as a linear progression from the quark-model statistics paradox to quantum chromodynamics (QCD). This paper challenges that teleology by arguing that "color" emerged as two conceptually distinct constructs during the Cold War. The first, originating with Han and Nambu and culminating in QCD, conceived of color as a local gauge charge, the source of a fundamental force mediated by gluons. The second, developed at the Joint Institute for Nuclear Research (JINR) in Dubna, treated color as a hidden, three-valued label--a statistical and structural property within a composite, S-matrix-inflected hadron model. We trace these parallel narratives, linking the Dubna approach to a holist epistemology that prioritizes observable amplitudes and global constraints, and the QCD approach to a reductionist program grounded in micro-dynamics. A case study of Fermilab's E-36 experimental chain (1970--78) shows how an observables-first design--tuned to S-matrix and Regge constraints on forward elastic scattering--performed robustly within its natural domain but was ultimately discontinued amid declining theoretical interest and involvement. The subsequent hegemony of QCD retroactively projected its gauge-theoretic conception of color onto history, erasing this epistemic diversity. We conclude that the marginalization of Dubna's structural color was not merely a political outcome of the Cold War but a result of deep ontological and philosophical divergences, advocating for a domain-sensitive pluralism in the historiography of particle physics.
The profile of the pion valence quark distribution function (DF) remains controversial. Working from the concepts of QCD effective charges and generalised parton distributions, we show that, because the pion elastic electromagnetic form factor is well approximated by a monopole, the pion valence quark DF at large light-front momentum fraction is a convex function described by a large-$x$ power law that is practically consistent with expectations based on quantum chromodynamics.
Tritium from tritiated methane (CH$_3$T) calibration is a significant impurity that restricts the sensitivity of the PandaX-4T dark matter detection experiment in the low-energy region. Because CH$_3$T is a critical component of low-energy calibration, its subsequent removal is essential for PandaX-4T and other liquid xenon dark matter direct detection experiments. To eliminate CH$_3$T, the xenon in the detector is recovered, leaving 1.8 bar of xenon gas inside, and the detector is flushed with heated xenon gas. Concurrently, leveraging the lower boiling point of methane relative to xenon, the PandaX-4T cryogenic distillation system is used to extract CH$_3$T from xenon after optimizing the operational parameters. Following the commissioning run, 5.7 tons of xenon are purified via the distillation method. Recent data indicate that the CH$_3$T concentration is reduced from $3.6\times10^{-24}$ mol/mol to $5.9\times10^{-25}$ mol/mol, demonstrating that gas purging and distillation are effective in removing CH$_3$T, even at concentrations on the order of $10^{-24}$ mol/mol.
We present a new model of the dark sector involving Dirac fermion dark matter, with axial coupling to a dark photon which provides a portal to Standard Model particles. In the non-relativistic limit, this implies that the dominant effective operator relevant to direct detection is ${\cal O}_8$. The resulting event rate for direct detection is suppressed by either the dark matter velocity or the momentum transfer. In this scenario there are much wider regions of the dark parameter space that are consistent with all of the existing constraints associated with thermal relic density, direct detection and collider searches.
In this paper, we present the design and characterization of a photosensor system developed for the RELICS experiment. A set of dynamic readout bases was designed to mitigate photomultiplier tube (PMT) saturation caused by intense cosmic muon backgrounds in the surface-level RELICS detector. The system employs dual readout from the anode and the seventh dynode to extend the PMT's linear response range. In particular, our characterization and measurements of Hamamatsu R8520-406 PMTs confirm stable operation under positive high-voltage bias, extending the linear response range by more than an order of magnitude. Furthermore, a model of PMT saturation and recovery was developed to evaluate the influence of cosmic muon signals in the RELICS detector. The results demonstrate the system's capability to detect coherent elastic neutrino-nucleus scattering (CE$\nu$NS) signals under surface-level cosmic backgrounds, and suggest the potential to extend the scientific reach of RELICS to MeV-scale interactions.
High-energy neutrino astronomy has advanced rapidly in recent years, with IceCube, KM3NeT, and Baikal-GVD establishing a diffuse astrophysical flux and pointing to promising source candidates. These achievements mark the transition from first detections to detailed source studies, motivating next-generation detectors with larger volumes, improved angular resolution, and full neutrino-flavour sensitivity. We present a performance study of large underwater neutrino telescopes, taking the proposed TRIDENT array in the South China Sea as a case study, with a focus on comparing the performance of various detector configurations against the TRIDENT baseline design. Both track-like events primarily from muon neutrinos, which provide precise directional information, and cascade events from all flavours, which offer superior energy resolution, diffuse-source sensitivity, and all-sky flavour coverage, are included to achieve a balanced performance across source types. The time to discover potential astrophysical sources with both track- and cascade-like events is used as the figure of merit to compare a variety of detector design choices. Our results show that, for a fixed number of optical modules, simply enlarging the instrumented volume does not inherently lead to improved performance, while taller strings can provide modest gains across all detector channels, within engineering constraints. Distributing dense clusters of strings over a large volume is found to generally worsen discovery potential compared to the baseline layout. Finally, the optical properties of the seawater emerge as the key factor dictating the optimisation of the detector layout, highlighting the need for in-situ measurements and early deployment of optical modules to guide the final array configuration.
We stack 3.75 Megaseconds of early XRISM Resolve observations of ten galaxy clusters to search for unidentified spectral lines in the $E=$ 2.5-15 keV band (rest frame), including the $E=3.5$ keV line reported in earlier, low spectral resolution studies of cluster samples. Such an emission line may originate from the decay of the sterile neutrino, a warm dark matter (DM) candidate. No unidentified lines are detected in our stacked cluster spectrum, with a $3\sigma$ upper limit on the decay rate of an $m_{\rm s}\sim$ 7.1 keV DM particle (corresponding to an $E=3.55$ keV emission line) of $\Gamma \sim 1.0 \times 10^{-27}$ s$^{-1}$. This upper limit is 3-4 times lower than the one derived by Hitomi Collaboration et al. (2017) from the Perseus observation, but still 5 times higher than the XMM-Newton detection reported by Bulbul et al. (2014) in the stacked cluster sample. XRISM Resolve, with its high spectral resolution but a small field of view, may reach the sensitivity needed to test the XMM-Newton cluster sample detection by combining several years' worth of future cluster observations.
Nuclear $\beta$ decay, a sensitive probe of nuclear structure and weak interactions, has become a precision test bed for physics beyond the Standard Model (BSM), driven by recent advances in spectroscopic techniques. Here we introduce tracking spectroscopy of $\beta$-$\gamma$ cascades, a method that reconstructs decay vertices while simultaneously detecting $\beta$ particles and all associated de-excitation energies. Using the PandaX-4T detector operated as a tracking spectrometer, we obtain a precise and unbiased decay scheme of $^{214}$Pb, a key background isotope in searches for dark matter and Majorana neutrinos. For the first time, transitions of $^{214}$Pb to both the ground and excited states of $^{214}$Bi are measured concurrently, revealing discrepancies in branching ratios of up to 4.7$\sigma$ relative to previous evaluations. Combined with state-of-the-art theoretical spectral shape calculations, these results establish a new benchmark for background modeling in rare-event searches and highlight the potential of tracking spectroscopy as a versatile tool for fundamental physics and nuclear applications.
In this contribution, new developments for Standard Model Higgs-boson decays will be summarized.
This report describes the experimental strategy and technologies for XLZD, the next-generation xenon observatory sensitive to dark matter and neutrino physics. In the baseline design, the detector will have an active liquid xenon target of 60 tonnes, which could be increased to 80 tonnes if the market conditions for xenon are favorable. It is based on the mature liquid xenon time projection chamber technology used in current-generation experiments, LZ and XENONnT. The report discusses the baseline design and opportunities for further optimization of the individual detector components. The experiment envisaged here has the capability to explore parameter space for Weakly Interacting Massive Particle (WIMP) dark matter down to the neutrino fog, with a 3$\sigma$ evidence potential for WIMP-nucleon cross sections as low as $3\times10^{-49}\rm\,cm^2$ (at 40 GeV/c$^2$ WIMP mass). The observatory will also have leading sensitivity to a wide range of alternative dark matter models. It is projected to have a 3$\sigma$ observation potential of neutrinoless double beta decay of $^{136}$Xe at a half-life of up to $5.7\times 10^{27}$ years. Additionally, it is sensitive to astrophysical neutrinos from the sun and galactic supernovae.
The axion, which has yet to be discovered, is a promising candidate for dark matter that emerges from Peccei-Quinn theory. This article presents a search for axion dark matter with the "Student Project for an Axion Cavity Experiment" (SPACE), the first such experiment in Germany. The hypothetical particle was searched for in the mass range from $16.626~\mathrm{\mu eV}$ to $16.653~\mathrm{\mu eV}$, corresponding to a frequency range of 4.020 GHz to 4.027 GHz, using a resonant cavity in a peak magnetic field of 14 T. No significant signal was found, allowing us to exclude axion-photon couplings $g_{a\gamma\gamma} \geq 14.6 \cdot 10^{-13}~\mathrm{GeV}^{-1}$ over the full mass range and $g_{a\gamma\gamma} \geq 2.811 \cdot 10^{-13}~\mathrm{GeV}^{-1}$ at peak sensitivity, at the 95% confidence level. This limit surpasses previous constraints by more than two orders of magnitude.
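The quoted mass window maps onto the stated cavity frequency range via $f = mc^2/h$. A minimal sketch of this conversion (my own illustrative check, not code from the experiment):

```python
# Illustrative conversion between axion mass and photon frequency, f = m c^2 / h.
# The constant below is the CODATA Planck constant in eV*s; the masses are the
# window endpoints quoted in the abstract.
H_EV_S = 4.135667696e-15  # Planck constant, eV*s

def mass_to_freq_ghz(mass_uev: float) -> float:
    """Convert an axion mass in micro-eV to the corresponding photon frequency in GHz."""
    return mass_uev * 1e-6 / H_EV_S / 1e9

f_lo = mass_to_freq_ghz(16.626)  # ~4.020 GHz
f_hi = mass_to_freq_ghz(16.653)  # ~4.027 GHz
print(f"{f_lo:.3f} GHz to {f_hi:.3f} GHz")
```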
We review the current status and techniques used in precision measurements of the effective leptonic weak mixing angle $\sin^2\theta^\ell_{\rm eff}$, a fundamental parameter of the Standard Model (SM), in the region of the Z pole, with emphasis on hadron colliders. We also build on these techniques to extract the most precise single measurement to date of $\sin^2\theta^\ell_{\rm eff}$ from a new analysis of the published forward-backward asymmetry ($A_{\rm FB}$) in Drell-Yan dilepton production in proton-proton collisions at a center-of-mass energy of 13 TeV measured by the CMS collaboration at the Large Hadron Collider. The uncertainty in $\sin^2\theta^\ell_{\rm eff}$ published by CMS is dominated by uncertainties in Parton Distribution Functions (PDFs), which are reduced by PDF profiling using the dilepton mass dependence of $A_{\rm FB}$. Our new extraction of $\sin^2\theta^\ell_{\rm eff}$ from the CMS values of $A_{\rm FB}$ includes profiling with additional new CMS measurements of the $W$-boson decay lepton asymmetry and the W/Z cross-section ratio at 13 TeV. We obtain the most precise single measurement of $\sin^2\theta^\ell_{\rm eff}$ to date, 0.23156$\pm$0.00024, in excellent agreement with the SM prediction of 0.23161$\pm$0.00004. We also discuss the outlook for future measurements at the LHC, including more precise measurements of $\sin^2\theta^\ell_{\rm eff}$, a measurement of $\sin^2\theta^\ell_{\rm eff}$ for b-quarks in the initial state, and a measurement of the running of $\sin^2\theta^{\overline{\rm MS}}(\mu)$ up to 3 TeV.
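The quoted agreement with the SM can be made concrete with a simple pull calculation, an illustrative sketch that combines the two quoted uncertainties in quadrature:

```python
# Illustrative pull between the extracted sin^2(theta_eff) and the SM prediction,
# using the central values and uncertainties quoted above.
import math

meas, sig_meas = 0.23156, 0.00024  # extracted value
sm, sig_sm = 0.23161, 0.00004      # SM prediction

pull = abs(meas - sm) / math.sqrt(sig_meas**2 + sig_sm**2)
print(f"pull = {pull:.2f} sigma")  # about 0.2 sigma: excellent agreement
```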
Dark matter makes up approximately 85% of the total matter in our universe, yet it has never been directly observed in any laboratory on Earth. The origin of dark matter is one of the most important questions in contemporary physics, and a convincing detection of dark matter would be a Nobel-Prize-level breakthrough in fundamental science. The ABRACADABRA experiment was specifically designed to search for dark matter. Although it has not yet made a discovery, ABRACADABRA has produced several dark matter search results widely endorsed by the physics community. The experiment generates ultra-long time-series data at a rate of 10 million samples per second, where the dark matter signal would manifest itself as a sinusoidal oscillation mode within the ultra-long time series. In this paper, we present TIDMAD -- a comprehensive data release from the ABRACADABRA experiment including three key components: an ultra-long time series dataset divided into training, validation, and science subsets; a carefully-designed denoising score for direct model benchmarking; and a complete analysis framework which produces a community-standard dark matter search result suitable for publication as a physics paper. This data release enables core AI algorithms to extract the dark matter signal and produce real physics results, thereby advancing fundamental science. The data downloading and associated analysis scripts are available at this https URL
We present an updated global analysis of neutrino oscillation data as of September 2024. The parameters $\theta_{12}$, $\theta_{13}$, $\Delta m^2_{21}$, and $|\Delta m^2_{3\ell}|$ ($\ell = 1,2$) are well-determined with relative precision at $3\sigma$ of about 13\%, 8\%, 15\%, and 6\%, respectively. The third mixing angle $\theta_{23}$ still suffers from the octant ambiguity, with no clear indication of whether it is larger or smaller than $45^\circ$. The determination of the leptonic CP phase $\delta_{CP}$ depends on the neutrino mass ordering: for normal ordering the global fit is consistent with CP conservation within $1\sigma$, whereas for inverted ordering CP-violating values of $\delta_{CP}$ around $270^\circ$ are favored against CP conservation at more than $3.6\sigma$. While the present data have in principle $2.5$--$3\sigma$ sensitivity to the neutrino mass ordering, there are different tendencies in the global data that reduce the discrimination power: T2K and NOvA appearance data individually favor normal ordering, but they are more consistent with each other for inverted ordering. Conversely, the joint determination of $|\Delta m^2_{3\ell}|$ from global disappearance data prefers normal ordering. Altogether, the global fit including long-baseline, reactor and IceCube atmospheric data results in an almost equally good fit for both orderings. Only when the $\chi^2$ table for atmospheric neutrino data from Super-Kamiokande is added to our $\chi^2$ does the global fit prefer normal ordering, with $\Delta\chi^2 = 6.1$. We also provide updated ranges and correlations for the effective parameters sensitive to the absolute neutrino mass from $\beta$-decay, neutrinoless double-beta decay, and cosmology.
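Assuming Wilks' theorem with one effective degree of freedom (a common rule of thumb, used here only as an illustrative conversion, not a statement from the analysis), the quoted $\Delta\chi^2 = 6.1$ preference for normal ordering corresponds to roughly $2.5\sigma$:

```python
# Illustrative conversion of Delta-chi^2 into an equivalent Gaussian significance,
# assuming Wilks' theorem with one degree of freedom (a simplifying assumption).
import math

delta_chi2 = 6.1
n_sigma = math.sqrt(delta_chi2)
print(f"~{n_sigma:.1f} sigma")  # about 2.5 sigma
```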
We use a Hybrid Deep Neural Network (HDNN) to identify a boosted dark photon jet as a signature of a heavy vector-like fermionic portal matter (PM) connecting the visible and the dark sectors. In this work, the fermionic PM, which mixes only with the Standard Model (SM) third-generation up-type quark, predominantly decays into a top quark and a dark photon pair. The dark photon then promptly decays to a pair of standard model fermions via the gauge kinetic mixing. We have analyzed two different final states, namely, (i) exactly one tagged dark photon and exactly one tagged top quark jet, and (ii) at least two tagged dark photons and at least one tagged top quark jet, at the 13 and 14 TeV LHC center-of-mass energies. Both these final states receive significant contributions from the pair and single production processes of the top partner. The rich event topology of the signal processes, i.e., the presence of a boosted dark photon and top quark jet pair, along with the fact that the invariant mass of the system corresponds to the mass of the top partner, helps us to significantly suppress potential SM backgrounds. We have shown that one can set a $2\sigma$ exclusion limit of $\sim 2.3$ TeV on the top partner mass with $\sin\theta_L=0.1$, assuming a $100\%$ branching ratio of the top partner, in the final state with exactly one tagged dark photon and exactly one tagged top quark jet at the 14 TeV LHC center-of-mass energy with 300 fb$^{-1}$ of integrated luminosity.
Tritium, predominantly produced through spallation reactions caused by cosmic ray interactions, is a significant radioactive background for silicon-based rare event detection experiments, such as dark matter searches. We have investigated the feasibility of removing cosmogenic tritium from high-purity silicon intended for use in low-background experiments. We demonstrate that significant tritium removal is possible through diffusion by subjecting silicon to high-temperature (> 400 °C) baking. Using an analytical model for the de-trapping and diffusion of tritium in silicon, our measurements indicate that cosmogenic tritium diffusion constants are comparable to previous measurements of thermally-introduced tritium, with complete de-trapping and removal achievable above 750 °C. This approach has the potential to alleviate the stringent constraints of cosmic ray exposure prior to device fabrication and significantly reduce the cosmogenic tritium backgrounds of silicon-based detectors for next-generation rare event searches.
We present the calculation of next-to-next-to-leading-order (NNLO) QCD corrections to hadron multiplicities in light-quark jets at lepton colliders, employing the ``projection-to-Born'' (P2B) method implemented in the FMNLO program. Taking the next-to-leading-order result as an example, we rigorously establish the validity of our P2B-based calculation. We then present NNLO predictions for the normalized asymmetry $D_{K^{-}}$ between hadron and antihadron production in light-quark jets and compare them with SLD data. We find that a suppression of these SLD measurements relative to NPC23 predictions for $D_{K^{-}}$ emerges in the intermediate $z_h$ domain ($0.2 \lesssim z_h \lesssim 0.7$). We expect that incorporating these SLD data into global QCD fits will enable improved determination of fragmentation functions.
The Taishan Antineutrino Observatory (TAO) is a tonne-scale gadolinium-doped liquid scintillator satellite experiment of the Jiangmen Underground Neutrino Observatory (JUNO). It is designed to measure the reactor antineutrino energy spectrum with unprecedented energy resolution, better than 2% at 1 MeV. To fully achieve its designed performance, precise vertex reconstruction is crucial. This work reports two distinct vertex reconstruction methods, the charge center algorithm (CCA) and the deep learning algorithm (DLA). We describe the efforts in optimizing and improving these two methods and compare their reconstruction performance. The results show that the CCA and DLA methods can achieve vertex position resolutions better than 20 mm (bias < 5 mm) and 12 mm (bias < 1.3 mm) at 1 MeV, respectively, fully meeting the requirements of the TAO experiment. The reconstruction algorithms developed in this study not only prepare the TAO experiment for its upcoming real data but also hold significant potential for application in other similar experiments.
We investigate the photon structure functions via the photon-photon and photon-vector meson scattering within the framework of holographic QCD, focusing on the small Bjorken $x$ region and assuming that the Pomeron exchange dominates. The quasi-real photon structure functions are formulated as the convolution of the known U(1) vector field wave function with the Brower-Polchinski-Strassler-Tan (BPST) Pomeron exchange kernel in the five-dimensional AdS space. Assuming vector meson dominance, the photon structure functions can also be calculated in a different way, with the BPST kernel and the vector meson gravitational form factor, which can be obtained in a bottom-up AdS/QCD model, for the Pomeron-vector meson coupling. It is shown that the $F_2$ structure functions obtained in both ways agree with the experimental data, which implies the realization of vector meson dominance within the present model setup. Calculations for the longitudinal structure function and the longitudinal-to-transverse ratio are also presented.
We propose a new Scotogenic-type model based on a global $Z_4$ symmetry involving dark matter candidates. After the breaking of $Z_4$ to $Z_2$ via the singlet scalar vacuum expectation value (VEV), the lightest Majorana fermion serves as a viable thermal freeze-out dark matter (DM) candidate, and the mass terms for active neutrinos are generated as a finite quantum correction at the 1-loop level. A key point in realising our Scotogenic structure is the introduction of two types of Majorana fermions (heavy right-handed neutrinos) and inert Higgs doublets with opposite $Z_4$ parities. Since a large VEV for the singlet scalar does not spoil an appropriate realisation of the Higgs mechanism for the SM gauge symmetry, we can naturally realise a TeV-scale fermionic DM candidate, for which constraints from direct detection experiments are weaker than those for sub-TeV DM. Our scenario involves Higgs-portal DM interactions, which help realise the correct DM relic abundance. Owing to the structure of the model, it is possible to find a natural partner for coannihilation. Our scenario can be investigated via the measurement of the Higgs trilinear self-coupling at the Large Hadron Collider. The simplest way to evade the domain-wall problem, adding a tiny soft $Z_2$-breaking term, works while keeping the lifetime of the decaying DM sufficiently long.
We conduct a comprehensive study of the semileptonic decay process \(\Lambda \to p\,\ell\,\bar{\nu}_{\ell}\), focusing on the determination of all six vector and axial-vector form factors that govern the low-energy hadronic matrix elements of the underlying theory. These invariant form factors constitute the essential inputs for describing the decay, and their dependence on the momentum transfer \(q^{2}\) is analyzed across the entire physical kinematic region. To parameterize the \(q^{2}\)-dependence, we adopt both the \(z\)-expansion formalism and a polynomial fitting approach. Utilizing these parameterizations, we compute the exclusive decay widths for both the electron and muon channels and subsequently extract the corresponding branching ratios. Furthermore, we evaluate the ratio of decay widths between the muon and electron channels, defined as $R^{\mu e} \equiv \frac{\Gamma(\Lambda \to p\,\mu\,\bar{\nu}_{\mu})}{\Gamma(\Lambda \to p\,e\,\bar{\nu}_{e})}$, obtaining \(R^{\mu e} = 0.196^{+0.009}_{-0.012}\) from the polynomial fit and \(R^{\mu e} = 0.174^{+0.002}_{-0.005}\) from the \(z\)-expansion. While both ratios are compatible with previously reported values in the literature, the result from the \(z\)-expansion exhibits particularly strong agreement with the averages reported by the Particle Data Group (PDG).
We perform an improved perturbative QCD study of the decays $B_c^+ \to \eta_c L^+$ ($L$ denotes the light ground state pseudoscalar and vector mesons and the corresponding $p$-wave scalar, axial-vector, and tensor ones) and predict their branching ratios (BRs), together with relative ratios, at leading order in the strong coupling $\alpha_s$. Our results ${\rm BR}(B_c^+ \to \eta_c \pi^+) =(2.03^{+0.53}_{-0.41}) \times 10^{-3}$ and ${\rm BR}(B_c^+ \to \eta_c \pi^+)/{\rm BR}(B_c^+ \to J/\psi \pi^+) = 1.74^{+0.66}_{-0.50}$ are consistent with several available predictions in different approaches within uncertainties. Inputting the measured $\eta_c \to p\bar p$ and $\eta_c \to \pi^+\pi^- (\pi^+\pi^-, K^+ K^-, p\bar p)$ BRs, with $p$ here being a proton, we derive the multibody $B_c^+ \to \eta_c (\pi, \rho)^+$ BRs through secondary decay chains via the resonance $\eta_c$ under the narrow-width approximation, which might facilitate the (near) future tests of $B_c \to \eta_c$ decays. Under the $q\bar q$ assignment for light scalars, in contrast to $B_c$ decays into $J/\psi$ plus a scalar meson and to other $B_c^+ \to \eta_c L^+$ modes, surprisingly small $\Delta S =0$ BRs, around ${\cal O}(10^{-7}-10^{-9})$, and very large ratios, near ${\cal O}(10^{2})$, between the $\Delta S=1$ and $\Delta S=0$ BRs are found in the $B_c$ decays to $\eta_c$ plus light scalars, where $S$ denotes strangeness. Many large BRs and interesting ratios presented in this work could be tested by the Large Hadron Collider experiments, which would help us to examine the reliability of this improved perturbative QCD formalism for $B_c$-meson decays and further understand the QCD dynamics in the considered decay modes, as well as in the related hadrons.