The correlation between the mean transverse momentum, $[p_{\mathrm{T}}]$, and the squared anisotropic flow coefficient, $v_{n}^{2}$, on an event-by-event basis has been suggested to be sensitive to the initial conditions in heavy-ion collisions. We present measurements of the variances and covariance of $[p_{\mathrm{T}}]$ and $v_{n}^{2}$, along with their dimensionless ratio, for Au+Au collisions at beam energies of $\sqrt{s_{NN}} = 14.6$, 19.6, 27, 54.4, and 200 GeV. The variances and covariance show a distinct energy dependence, and the dimensionless ratio follows a similar trend across beam energies. We compare our measurements with hydrodynamic models and with similar measurements from Pb+Pb collisions at the Large Hadron Collider (LHC). These findings provide valuable insight into the beam-energy dependence of the specific shear viscosity ($\eta/s$) and initial-state effects, and allow different initial-state models to be distinguished.
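The observables above can be illustrated with a minimal sketch. Real analyses construct $[p_{\mathrm{T}}]$ and $v_{n}^{2}$ from particle-level correlations (e.g. multi-particle cumulants); here the per-event values, their means, and the covariance structure are all toy assumptions, used only to show how the variances, covariance, and their dimensionless (Pearson-like) ratio are formed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-event observables: mean transverse momentum [pT] and squared
# anisotropic flow v_n^2. The means and covariance below are hypothetical,
# chosen only to illustrate how the dimensionless ratio is built.
n_events = 100_000
mean = [0.65, 0.004]                    # assumed [pT] (GeV/c) and v_2^2 means
cov = [[1e-4, 5e-7], [5e-7, 4e-6]]      # assumed event-by-event covariance
pt_mean, vn2 = rng.multivariate_normal(mean, cov, size=n_events).T

var_pt = np.var(pt_mean, ddof=1)                  # variance of [pT]
var_vn2 = np.var(vn2, ddof=1)                     # variance of v_n^2
cov_pt_vn2 = np.cov(pt_mean, vn2, ddof=1)[0, 1]   # their covariance

# Dimensionless ratio: covariance normalized by both standard deviations,
# so detector- and energy-dependent scales largely cancel.
rho = cov_pt_vn2 / np.sqrt(var_pt * var_vn2)
print(f"var([pT]) = {var_pt:.2e}, var(v_n^2) = {var_vn2:.2e}, rho = {rho:.3f}")
```

Because the ratio is dimensionless, it can be compared directly across beam energies and collision systems, which is what makes it useful for confronting initial-state models.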
High-energy, large-scale particle colliders generate data at extraordinary rates. Developing real-time, high-throughput data-compression algorithms that reduce the data volume to meet the bandwidth available for storage has become increasingly critical, and deep learning is a promising technology for this task. At the newly constructed sPHENIX experiment at the Relativistic Heavy Ion Collider, a Time Projection Chamber (TPC) serves as the main tracking detector, recording three-dimensional particle trajectories in a gas-filled cylindrical volume. The resulting data stream can be very sparse, with occupancy reaching $10^{-3}$ for proton-proton collisions. Such sparsity presents a challenge to conventional learning-free lossy compression algorithms such as SZ, ZFP, and MGARD. In contrast, emerging deep learning-based models, particularly those using convolutional neural networks, have outperformed these conventional methods in both compression ratio and reconstruction accuracy. However, research on how well these models handle sparse datasets, like those produced in particle colliders, remains limited. Furthermore, most deep learning models do not adapt their processing speed to the data's sparsity, which limits their efficiency. To address these issues, we propose a novel approach to TPC data compression via key-point identification facilitated by sparse convolution. Our algorithm, BCAE-VS, achieves a $75\%$ improvement in reconstruction accuracy together with a $10\%$ increase in compression ratio over the previous state-of-the-art model, while being over two orders of magnitude smaller. Finally, we verify experimentally that the model's throughput increases with data sparsity.
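The role of sparsity can be made concrete with a minimal sketch. The grid shape, ADC range, and the simple keep-nonzero-voxels scheme below are all assumptions for illustration; learned models such as BCAE-VS additionally decide *which* voxels to keep and regress their values, but even the naive coordinate-plus-value (COO) representation shows why occupancy near $10^{-3}$ leaves so much room for compression:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy TPC-like voxel frame. Real sPHENIX TPC frames are much larger; this
# shape and the ADC value range are assumptions for illustration only.
shape = (64, 64, 64)
occupancy_target = 1e-3                 # sparsity comparable to p+p collisions
frame = np.zeros(shape, dtype=np.uint16)
n_hits = int(occupancy_target * frame.size)
idx = rng.choice(frame.size, size=n_hits, replace=False)
frame.flat[idx] = rng.integers(1, 1024, size=n_hits)   # nonzero ADC values

occupancy = np.count_nonzero(frame) / frame.size

# Simplest "key-point" compression: store only the coordinates and values of
# nonzero voxels (a COO representation). Indices fit in uint8 for a 64^3 grid.
coords = np.argwhere(frame).astype(np.uint8)   # (n_hits, 3) voxel coordinates
values = frame[frame != 0]                     # nonzero ADC values (uint16)
compression_ratio = frame.nbytes / (coords.nbytes + values.nbytes)
print(f"occupancy = {occupancy:.1e}, compression ratio = {compression_ratio:.1f}x")
```

The sparser the frame, the fewer coordinates and values need to be stored or processed, which is also why a sparse-convolution model can gain throughput as occupancy drops.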