Measurements of jet substructure are key to probing the energy frontier at colliders, and many of them use track-based observables that take advantage of the angular precision of tracking detectors. Theoretical calculations of track-based observables require `track functions', which characterize the transverse momentum fraction $r_q$ carried by charged hadrons from a fragmenting quark or gluon. This letter presents a direct measurement of $r_q$ distributions in dijet events using the 140 fb$^{-1}$ of proton-proton collisions at $\sqrt{s}=13$ TeV recorded with the ATLAS detector. The data are corrected for detector effects using machine-learning methods. The scale evolution of the moments of the $r_q$ distribution is sensitive to the non-linear renormalization group evolution equations of QCD and is compared with analytic predictions. When incorporated into future theoretical calculations, these results will enable a precision program of theory-data comparison for track-based jet substructure observables.
The LHCb experiment at CERN has been upgraded for Run 3 operation of the Large Hadron Collider (LHC). As part of this upgrade, a new tracking detector based on Scintillating Fibres (SciFi) read out with multichannel silicon photomultipliers (SiPMs) was installed. One of the main challenges the SciFi tracker will face during Run 4 operation of the LHC is the harsher radiation environment, dominated by fast neutrons, in the region where the SiPMs are located. To cope with the increased radiation, cryogenic cooling with liquid nitrogen is being investigated as a possible means of mitigating the radiation-induced performance degradation of the SiPMs. To this end, a detailed performance study of different layouts of SiPM arrays produced by Fondazione Bruno Kessler (FBK) and Hamamatsu Photonics K.K. is being carried out. These SiPMs have been designed to operate at cryogenic temperatures. Several SiPMs have been tested in a dedicated cryogenic setup down to 100 K. Key performance parameters such as breakdown voltage, dark count rate, photon detection efficiency, gain and direct cross-talk are characterized as a function of temperature. The main results of this study are presented here.
In high-energy physics, the increasing luminosity and detector granularity at the Large Hadron Collider are driving the need for more efficient data processing solutions. Machine learning has emerged as a promising tool for reconstructing charged particle tracks, due to its potentially linear computational scaling with the number of detector hits. The recent implementation of a graph neural network-based track reconstruction pipeline in the first-level trigger of the LHCb experiment on GPUs serves as a platform for comparative studies between computational architectures in the context of high-energy physics. This paper presents a novel comparison of the throughput of ML model inference between FPGAs and GPUs, focusing on the first step of the track reconstruction pipeline -- an implementation of a multilayer perceptron. Using HLS4ML for FPGA deployment, we benchmark its performance against the GPU implementation and demonstrate the potential of FPGAs for high-throughput, low-latency inference without requiring expertise in FPGA development, while consuming significantly less power.
This study investigates the influence of seismic activity on the optical synchronization system of the European X-ray Free-Electron Laser. We analyze the controller I/O data of phase-locked loops in length-stabilized links, focusing on their response to earthquakes, ocean-generated microseism and anthropogenic noise. By comparing the controller data with external data, we are able to identify disturbances and their effects on the control signals. Our results show that seismic events influence the stability of the phase-locked loops: even earthquakes approximately \qty{5000}{\km} away cause noticeable fluctuations in the in-loop control signals. Ocean-generated microseism in particular has a strong influence on the in-loop control signals due to its constant presence. The optical synchronization system is so sensitive that it can even detect vibrations caused by human activity, such as road traffic or major events like concerts or sporting events. The phase-locked loops manage to eliminate more than 99% of the observed interference.
Effective self-supervised learning (SSL) techniques have been key to unlocking large datasets for representation learning. While many promising methods have been developed using online corpora and captioned photographs, their application to scientific domains, where data encodes highly specialized knowledge, remains in its early stages. We present a self-supervised masked modeling framework for 3D particle trajectory analysis in Time Projection Chambers (TPCs). These detectors produce globally sparse (<1% occupancy) but locally dense point clouds, capturing meter-scale particle trajectories at millimeter resolution. Starting from PointMAE, this work proposes volumetric tokenization to group sparse ionization points into resolution-agnostic patches, as well as an auxiliary energy infilling task to improve trajectory semantics. This approach -- which we call Point-based Liquid Argon Masked Autoencoder (PoLAr-MAE) -- achieves track and shower classification F-scores of 99.4% and 97.7%, respectively, matching those of supervised baselines without any labeled data. While the model learns rich particle trajectory representations, it struggles with sub-token phenomena such as overlapping or short-lived particle trajectories. To support further research, we release PILArNet-M -- the largest open LArTPC dataset (1M+ events, 5.2B labeled points) -- to advance SSL in high energy physics (HEP). Project site: https://youngsm.com/polarmae/
Polar materials with optical phonons in the meV range are excellent candidates for both dark matter direct detection (via dark photon-mediated scattering) and light dark matter absorption. In this study, we propose, for the first time, the metal halide perovskites MAPbI$_3$, MAPbCl$_3$, and CsPbI$_3$ for these purposes. Our findings reveal that CsPbI$_3$ is the best-performing material, significantly improving exclusion limits compared to other polar materials. For scattering, CsPbI$_3$ can probe dark matter masses down to the keV range. For absorption, it enhances sensitivity to dark photon masses below $\sim 10~{\rm meV}$. The only previously investigated material that could provide competitive bounds is CsI, which, however, is challenging to grow in kilogram-scale sizes due to its considerably lower stability compared to CsPbI$_3$. Moreover, CsI is isotropic, while the anisotropic structure of CsPbI$_3$ enables a daily modulation analysis; we show that a daily modulation amplitude exceeding 1% is achievable for dark matter masses below $40~{\rm keV}$.
Recent advances in machine learning have opened new avenues for optimizing detector designs in high-energy physics, where the complex interplay of geometry, materials, and physics processes has traditionally posed a significant challenge. In this work, we introduce the $\textit{end-to-end}$ optimization framework AIDO, which leverages a diffusion model as a surrogate for the full simulation and reconstruction chain, enabling gradient-based design exploration in both continuous and discrete parameter spaces. Although this framework is applicable to a broad range of detectors, we illustrate its power using the specific example of a sampling calorimeter, focusing on charged pions and photons as representative incident particles. Our results demonstrate that the diffusion model effectively captures critical performance metrics for calorimeter design, guiding the automatic search for layer arrangements and material compositions that align with known calorimeter design principles. The success of this proof-of-concept study provides a foundation for future applications of end-to-end optimization to more complex detector systems, offering a promising path toward systematically exploring the vast design space of next-generation experiments.
In recent years, the gain suppression mechanism for large localized charge deposits has been studied in Low-Gain Avalanche Detectors (LGADs). LGADs are thin silicon detectors with a highly doped gain layer that provides moderate internal signal amplification. Using the CENPA Tandem accelerator at the University of Washington, the response of LGADs of different thicknesses to MeV-range energy deposits from a proton beam was studied. Three LGAD prototypes with thicknesses of 50~$\mu$m, 100~$\mu$m and 150~$\mu$m were characterized. The devices' gain was determined as a function of bias voltage, beam incidence angle, and proton energy. This study was conducted within the scope of the PIONEER experiment, proposed at the Paul Scherrer Institute to perform high-precision measurements of rare pion decays. LGADs are being considered for the active target (ATAR), for which energy linearity is an important property for particle identification capabilities.
Recent advancements in particle physics demand pixel detectors that can withstand the increased luminosity of future collider experiments. In response, MALTA, a novel monolithic active pixel detector, has been developed with a cutting-edge readout architecture. This class of monolithic pixel detectors exhibits exceptional radiation tolerance, high hit-rate capability, fine spatial resolution and precise timing, making it well suited for experiments at the LHC. To optimize the performance of these sensors before their deployment in actual detectors, comprehensive electrical characterization has been conducted. This study also includes comparative DAC analyses among sensors of varying thicknesses, providing crucial insights for performance enhancement. To further understand the effects of radiation, the sensors are being exposed to different fluences using a high-intensity X-ray source.
We study neutrino-induced charged current coherent pion production ($\nu_\mu\text{CC-Coh}\pi$) as a tool for constraining the neutrino flux at the Deep Underground Neutrino Experiment (DUNE). The neutrino energy and flavor in this process can be directly reconstructed from the outgoing particles, making it especially useful for constraining the muon neutrino component of the total flux. The cross section of this process can be obtained using the Adler relation with the $\pi$-Ar elastic scattering cross section, taken either from external data or, as we explore, from a simultaneous measurement in the DUNE near detector. We develop a procedure that leverages $\nu_\mu\text{CC-Coh}\pi$ events to fit for the neutrino flux while simultaneously accounting for relevant effects in the cross section. We project that this method has the statistical power to constrain the uncertainty on the normalization of the flux at its peak to a few percent. This study demonstrates the potential utility of a $\nu_\mu\text{CC-Coh}\pi$ flux constraint, though further work will be needed to determine the range of validity and precision of the Adler relation upon which it relies, as well as to measure the $\pi$-Ar elastic scattering cross section to the requisite precision. We discuss the experimental and phenomenological developments necessary to unlock the $\nu_\mu\text{CC-Coh}\pi$ process as a ``standard candle'' for neutrino experiments.