The increasing use of 3D imaging technologies in the biological sciences is generating vast repositories of anatomical data, yet significant barriers prevent this data from reaching its full potential in educational and collaborative contexts. While sharing raw CT and MRI scans has become routine, distributing value-added segmented datasets, in which anatomical structures are precisely labeled and delineated, remains difficult and rare. Current repositories function primarily as static archives, lacking mechanisms for iterative refinement, community-driven curation, standardized orientation protocols, and the controlled terminology essential for downstream computational applications, including artificial intelligence, that could help analyze and interpret these unprecedented data resources. We introduce MorphoDepot, a framework that adapts the "fork-and-contribute" model, a cornerstone of modern open-source software development, for collaborative management of 3D morphological data. By integrating git version control and GitHub's "social" collaborative infrastructure with 3D Slicer and its SlicerMorph extension, MorphoDepot transforms segmented anatomical datasets from static resources into dynamic, community-curated projects. This approach directly addresses the challenges of distributed collaboration, enforces transparent provenance tracking, and creates high-quality, standardized training data for AI model development. The result is a system that embodies FAIR (Findable, Accessible, Interoperable, and Reusable) data principles while creating powerful new opportunities for remote learning and collaborative science in the biological sciences in general and evolutionary morphology in particular.
Biological Petri Nets (Bio-PNs) require extensions beyond the classical formalism to capture biochemical reality: multiple reactions simultaneously affect shared metabolites through convergent production or regulatory coupling, while signal places carry hierarchical control information distinct from material flow. We present a unified 13-tuple Extended Bio-PN formalism integrating two complementary theories: Weak Independence Theory (enabling coupled parallelism despite place-sharing) and Signal Hierarchy Theory (separating information flow from mass transfer). The extended definition adds a signal partition (Ψ ⊆ P), arc type classification (A), regulatory structure (Σ), environmental exchange (Θ), dependency taxonomy (Δ), heterogeneous transition types (τ), and biochemical formula tracking (ρ). We formalize signal token consumption semantics through two-phase execution (enabling vs. consumption) and prove weak independence correctness for continuous dynamics. Application to Vibrio fischeri quorum sensing demonstrates how energy metabolism (ENERGY signals) orchestrates binary ON/OFF decisions through hierarchical constraint propagation to regulatory signals (LuxR-AHL complex), with a 133-fold difference separating the two states. Analysis reveals signal saturation timing as the orchestrator forcing threshold-crossing, analogous to the bacteriophage lambda lysogeny-lysis decision. This work establishes formal foundations for modeling biological information flow in Petri nets, with implications for systems biology, synthetic circuit design, and parallel biochemical simulation.
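The enabling-versus-consumption distinction above can be illustrated with a minimal sketch, assuming one common reading in which signal places gate firing without losing their tokens (read-arc semantics); the class, place, and transition names here are our own illustration, not the paper's formalism.

```python
# Toy two-phase firing: ordinary input places are checked AND consumed,
# while signal places are only checked (phase 1) and keep their tokens.
class Transition:
    def __init__(self, inputs, outputs, signals):
        self.inputs, self.outputs, self.signals = inputs, outputs, signals

    def enabled(self, marking):
        # phase 1 (enabling): material inputs and signal places must hold tokens
        return all(marking[p] > 0 for p in self.inputs + self.signals)

    def fire(self, marking):
        # phase 2 (consumption): material tokens move; signal tokens persist
        assert self.enabled(marking)
        for p in self.inputs:
            marking[p] -= 1
        for p in self.outputs:
            marking[p] += 1

m = {"substrate": 2, "product": 0, "ENERGY": 1}   # ENERGY acts as a signal place
t = Transition(inputs=["substrate"], outputs=["product"], signals=["ENERGY"])
t.fire(m)
print(m)  # substrate consumed, product made, ENERGY untouched
```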
Vertical federated learning (VFL) enables multi-laboratory collaboration on distributed multi-omics datasets without sharing raw data, but exhibits severe instability under extreme data scarcity (P ≫ N) when applied generically. Here, we investigate how domain-aware design choices, specifically gradient-saliency-guided feature selection with biologically motivated priors, affect the stability and interpretability of VFL architectures in small-sample coral stress classification (N = 13 samples, P = 90,579 features across transcriptomics, proteomics, metabolomics, and microbiome data). We benchmark a domain-aware VFL framework against two baselines on the Montipora capitata thermal stress dataset: (i) a standard NVFlare-based VFL and (ii) LASER, a label-aware VFL method. Domain-aware VFL achieves an AUROC of 0.833 ± 0.030 after reducing dimensionality by 98.6%, significantly outperforming NVFlare VFL, which performs at chance level (AUROC 0.500 ± 0.125, p = 0.0058). LASER shows modest improvement (AUROC 0.600 ± 0.215) but exhibits higher variance and does not reach statistical significance. Domain-aware feature selection yields stable top-feature sets across analysis parameters. Negative control experiments using permuted labels produce AUROC values below chance (0.262), confirming the absence of data leakage and indicating that the observed performance arises from genuine biological signal. These results motivate design principles for VFL in extreme P ≫ N regimes, emphasizing domain-informed dimensionality reduction and stability-focused evaluation.
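As a rough illustration of the gradient-saliency idea (without the biological priors, and not the authors' implementation): for a linear classifier the input gradient of the logit equals the weight vector, so ranking features by |w| is the simplest instance of saliency-guided selection. All data and parameters below are synthetic.

```python
# Sketch: recover a handful of informative features from a P >> N matrix by
# training a small logistic model and keeping the top-k saliency scores.
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 200, 500, 10
X = rng.standard_normal((n, p))
w_true = np.zeros(p); w_true[:5] = 3.0            # features 0-4 are informative
y = (X @ w_true + 0.1 * rng.standard_normal(n) > 0).astype(float)

# logistic regression by plain gradient descent with a small L2 term
w = np.zeros(p)
for _ in range(500):
    pred = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * (X.T @ (pred - y) / n + 1e-3 * w)

saliency = np.abs(w)        # input-gradient magnitude for a linear model
top_k = np.argsort(saliency)[-k:]
print(sorted(int(i) for i in top_k))
```

In the full framework this ranking would be computed across vertically partitioned feature blocks and combined with domain priors; the sketch only shows the saliency step.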
Bioprinting technology has advanced significantly in the fabrication of tissue-like constructs with complex geometries for regenerative medicine. However, maintaining the structural integrity of bioprinted materials remains a major challenge, primarily due to the frequent and unexpected formation of hidden defects. Traditional defect detection methods often require physical contact that may not be suitable for hydrogel-based biomaterials due to their inherently soft nature, making non-invasive and straightforward structural evaluation necessary in this field. To advance the state of the art, this study presents a novel non-contact method for non-destructively detecting structural defects in bioprinted constructs using video-based vibration analysis. Ear-shaped constructs were fabricated from a bioink composed of sodium alginate and κ-carrageenan via extrusion-based bioprinting. To simulate printing defects, controlled geometric, interlayer, and pressure-induced defects were systematically introduced into the samples. The dynamic response of each structure was recorded using a high-speed camera and analyzed via phase-based motion estimation techniques. Experimental results demonstrate that all defective samples exhibit consistent changes in dynamic characteristics compared to baseline samples, with increasingly pronounced deviations observed as defect severity increases; these shifts reflect changes in effective stiffness and mass distribution induced by internal anomalies, even when such defects are not detectable through surface inspection. The experimental trends were also validated through finite element simulations. Overall, this work demonstrates that video-based vibrometry is a powerful approach for assessing the quality of bioprinted constructs, offering a practical pathway toward robust structural health monitoring in next-generation bio-additive manufacturing workflows.
Alzheimer's disease (AD) is a multifactorial neurodegenerative disorder characterized by progressive cognitive decline and widespread epigenetic dysregulation in the brain. DNA methylation, as a stable yet dynamic epigenetic modification, holds promise as a noninvasive biomarker for early AD detection. However, methylation signatures vary substantially across tissues and studies, limiting reproducibility and translational utility. To address these challenges, we develop MethConvTransformer, a transformer-based deep learning framework that integrates DNA methylation profiles from both brain and peripheral tissues to enable biomarker discovery. The model couples a CpG-wise linear projection with convolutional and self-attention layers to capture local and long-range dependencies among CpG sites, while incorporating subject-level covariates and tissue embeddings to disentangle shared and region-specific methylation effects. In experiments across six GEO datasets and an independent ADNI validation cohort, our model consistently outperforms conventional machine-learning baselines, achieving superior discrimination and generalization. Moreover, interpretability analyses using linear projection, SHAP, and Grad-CAM++ reveal biologically meaningful methylation patterns aligned with AD-associated pathways, including immune receptor signaling, glycosylation, lipid metabolism, and endomembrane (ER/Golgi) organization. Together, these results indicate that MethConvTransformer delivers robust, cross-tissue epigenetic biomarkers for AD while providing multi-resolution interpretability, thereby advancing reproducible methylation-based diagnostics and offering testable hypotheses on disease mechanisms.
Single-cell data analysis has the potential to revolutionize personalized medicine by characterizing disease-associated molecular changes at the single-cell level. Advanced single-cell multimodal assays can now simultaneously measure various molecules (e.g., DNA, RNA, protein) across hundreds of thousands of individual cells, providing a comprehensive molecular readout. A significant analytical challenge is integrating single-cell measurements across different modalities. Various methods have been developed to address this challenge, but there has been no systematic evaluation of these techniques with different preprocessing strategies. This study examines a general pipeline for single-cell data analysis, which includes normalization, data integration, and dimensionality reduction. The performance of different algorithm combinations often depends on dataset sizes and characteristics. We evaluate six datasets across diverse modalities, tissues, and organisms using three metrics: Silhouette Coefficient Score, Adjusted Rand Index, and Calinski-Harabasz Index. Our experiments involve combinations of seven normalization methods, four dimensionality reduction methods, and five integration methods. The results show that Seurat and Harmony excel in data integration, with Harmony being more time-efficient, especially for large datasets. UMAP is the most compatible dimensionality reduction method with the integration techniques, and the choice of normalization method varies depending on the integration method used.
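The three evaluation metrics named above are all available in scikit-learn; a minimal sketch on toy labeled data (the blob data stands in for an integrated, dimensionality-reduced embedding and is our own illustration):

```python
# Compute Silhouette, Adjusted Rand Index, and Calinski-Harabasz scores
# for a clustering of a toy embedding with known ground-truth labels.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import (silhouette_score,
                             adjusted_rand_score,
                             calinski_harabasz_score)

centers = [[0, 0], [6, 0], [0, 6], [6, 6]]        # well-separated toy clusters
X, true_labels = make_blobs(n_samples=300, centers=centers,
                            cluster_std=1.0, random_state=0)
pred_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

sil = silhouette_score(X, pred_labels)            # cohesion vs. separation, in [-1, 1]
ari = adjusted_rand_score(true_labels, pred_labels)  # agreement with known labels
ch = calinski_harabasz_score(X, pred_labels)      # between/within dispersion ratio
print(f"Silhouette={sil:.2f}  ARI={ari:.2f}  CH={ch:.1f}")
```

ARI requires ground-truth labels (e.g., annotated cell types), whereas the Silhouette and Calinski-Harabasz scores are label-free and judge only the geometry of the embedding.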
Many agent-based mathematical models of cranial neural crest cell (CNCC) migration impose a binary phenotypic partition of cells into either leaders or followers. In such models, the movement of leader cells at the front of collectives is guided by local chemoattractant gradients, while follower cells behind leaders move according to local cell-cell guidance cues. Although such model formulations have yielded many insights into the mechanisms underpinning CNCC migration, they rely on fixed phenotypic traits that are difficult to reconcile with evidence of phenotypic plasticity in vivo. A more recent agent-based model of CNCC migration aimed to address this limitation by allowing cells to adaptively combine chemotactic and cell-cell guidance cues during migration. In this model, cell behaviour adapts instantaneously in response to environmental cues, which precludes the identification of a persistent subset of cells as leader-like over biologically relevant timescales, as observed in vivo. Here, we build on previous leader-follower and adaptive phenotype models to develop a polarity-based agent-based model of CNCC migration, in which all cells evolve according to identical rules, interact via a pairwise interaction potential, and carry polarity vectors that evolve according to a dynamical system driven by time-averaged exposure to chemoattractant gradients. Numerical simulations of this model show that a leader-follower phenotypic partition emerges spontaneously from the underlying collective dynamics of the model. Furthermore, the model reproduces behaviour that is consistent with experimental observations of CNCC migration in the chick embryo. Thus, we provide an experimentally consistent, mechanistically grounded mathematical model that captures the emergence of leader and follower cell phenotypes without their imposition a priori.
Many of the most consequential dynamics in human cognition occur \emph{before} events become explicit: before decisions are finalized, emotions are labeled, or meanings stabilize into narrative form. These pre-event states are characterized by ambiguity, contextual tension, and competing latent interpretations. Rogue Variable Theory (RVT) formalizes such states as \emph{Rogue Variables}: structured, pre-event cognitive configurations that influence outcomes while remaining unresolved or incompatible with a system's current representational manifold. We present a quantum-consistent information-theoretic implementation of RVT based on a time-indexed \emph{Mirrored Personal Graph} (MPG) embedded into a fixed graph Hilbert space, a normalized \emph{Quantum MPG State} (QMS) constructed from node and edge metrics under context, Hamiltonian dynamics derived from graph couplings, and an error-weighted ``rogue operator'' whose principal eigenvectors identify rogue factor directions and candidate Rogue Variable segments. We further introduce a \emph{Rosetta Stone Layer} (RSL) that maps user-specific latent factor coordinates into a shared reference Hilbert space to enable cross-user comparison and aggregation without explicit node alignment. The framework is fully implementable on classical systems and does not assume physical quantum processes; \emph{collapse} is interpreted as informational decoherence under interaction, often human clarification.
Aim/background: Continuous glucose monitoring (CGM) generates dense time-series data, posing challenges for efficient storage, transmission, and analysis. This study evaluates novel encoding strategies that reduce CGM profiles to a compact set of landmark points while maintaining fidelity in reconstructed signals and derived glycemic metrics. Methods: We utilized two complementary CGM datasets, synthetic data generated via a Conditional Generative Adversarial Network (CGAN) and real-world measurements from a randomized crossover trial, to develop and validate three encoding approaches: (1) Peaks & Nadirs (PN), (2) Peaks, Nadirs, and Support Points (PN+), and (3) Uniform Downsampling. Each method compresses CGM profiles by selecting key timestamps and glucose values, followed by signal reconstruction via interpolation. Performance was assessed using compression ratio, mean absolute error (MAE), and R^2 between original and reconstructed clinically relevant CGM-derived metrics. Statistical analyses evaluated the preservation of clinically relevant glucose features. Results: Across varying compression settings, PN+ consistently outperformed PN and downsampling, achieving the highest R^2 and lowest MAE. At a compression ratio of 13 (22 landmark points per 24-hour profile), PN+ reduced MAE by a factor of 3.6 compared to downsampling (0.77 vs. 2.75), with notable improvements in metrics sensitive to glucose excursions. Encoding and decoding required an average of 0.13 seconds per profile. Validation on real-world data confirmed these trends. Conclusions: The proposed PN+ method produces a compact CGM representation that retains critical glycemic dynamics while discarding redundant portions of the profiles. The CGM signal can be reconstructed with high precision from the encoded representation.
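The landmark idea can be sketched in a few lines (an illustration of the general peaks-plus-support-points approach, not the authors' exact PN+ algorithm; the synthetic profile and parameters are ours):

```python
# Encode a CGM trace by keeping local peaks/nadirs plus a few uniformly
# spaced support points, then reconstruct by linear interpolation.
import numpy as np

def encode_landmarks(t, g, n_support=8):
    d = np.sign(np.diff(g))                       # slope signs
    extrema = np.where(np.diff(d) != 0)[0] + 1    # local peaks and nadirs
    support = np.linspace(0, len(g) - 1, n_support).astype(int)
    idx = np.unique(np.concatenate(([0, len(g) - 1], extrema, support)))
    return t[idx], g[idx]

def decode(t_land, g_land, t_full):
    return np.interp(t_full, t_land, g_land)

# 24 h of 5-minute samples from a smooth synthetic glucose profile (mg/dL)
t = np.arange(0, 24 * 60, 5, dtype=float)
g = (120 + 40 * np.sin(2 * np.pi * t / (6 * 60))
         + 15 * np.sin(2 * np.pi * t / (24 * 60)))
tl, gl = encode_landmarks(t, g)
mae = np.mean(np.abs(decode(tl, gl, t) - g))
print(f"{len(t)} samples -> {len(tl)} landmarks, MAE={mae:.2f}")
```

On this smooth toy profile a couple of dozen landmarks suffice for a small reconstruction error; real CGM traces are noisier, which is why the support points matter.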
Background: Accurate week-ahead forecasts of continuous glucose monitoring (CGM) derived metrics could enable proactive diabetes management, but the relative performance of modern tabular learning approaches is incompletely defined. Methods: We trained and internally validated four regression models (CatBoost, XGBoost, AutoGluon, tabPFN) to predict six week-ahead CGM metrics (TIR, TITR, TAR, TBR, CV, MAGE, and related quantiles) using 4,622 case-weeks from two cohorts (T1DM n=3,389; T2DM n=1,233). Performance was assessed with mean absolute error (MAE) and mean absolute relative difference (MARD); quantile classification was summarized via confusion-matrix heatmaps. Results: Across T1DM and T2DM, all models produced broadly comparable performance for most targets. For T1DM, MARD for TIR, TITR, TAR and MAGE ranged 8.5 to 16.5% while TBR showed large MARD (mean ~48%) despite low MAE. AutoGluon and tabPFN showed lower MAE than XGBoost for several targets (e.g., TITR: p<0.01; TAR/TBR: p<0.05 to 0.01). For T2DM, MARD ranged 7.8 to 23.9% and TBR relative error was ~78%; tabPFN outperformed other models for TIR (p<0.01), and AutoGluon/tabPFN outperformed CatBoost/XGBoost on TAR (p<0.05). Inference time per 1,000 cases varied markedly (tabPFN 699 s; AutoGluon 2.7 s; CatBoost 0.04 s; XGBoost 0.04 s). Conclusions: Week-ahead CGM metrics are predictable with reasonable accuracy using modern tabular models, but low-prevalence hypoglycemia remains difficult to predict in relative terms. Advanced AutoML and foundation models yield modest accuracy gains at substantially higher computational cost.
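The MAE/MARD contrast reported for TBR (low absolute error, large relative error) follows directly from how the two metrics are defined; a minimal sketch with hypothetical values:

```python
# MAE vs. MARD on hypothetical time-in-range (TIR) and time-below-range (TBR)
# values, in percent of the week. TBR's small denominators inflate MARD.
import numpy as np

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

def mard(y_true, y_pred):
    # mean absolute relative difference, in percent; assumes y_true > 0
    return 100 * np.mean(np.abs(y_true - y_pred) / y_true)

tir_true = np.array([70.0, 55.0, 82.0])   # hypothetical % time-in-range
tir_pred = np.array([65.0, 60.0, 80.0])
tbr_true = np.array([2.0, 1.0, 4.0])      # hypothetical % time-below-range
tbr_pred = np.array([1.0, 2.0, 3.0])

print("TIR:", mae(tir_true, tir_pred), round(mard(tir_true, tir_pred), 1))
print("TBR:", mae(tbr_true, tbr_pred), round(mard(tbr_true, tbr_pred), 1))
```

Here TBR has the smaller MAE (1.0 vs. 4.0 percentage points) yet a MARD near 58% versus about 6% for TIR, mirroring the pattern in the results above.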
We have previously shown that Good-Turing statistics can be applied to molecular dynamics trajectories to estimate the probability of observing completely new (thus far unobserved) biomolecular structures, and that the method is stable and dependable and its predictions verifiable. The major problem with that initial algorithm was the requirement for calculating and storing in memory the two-dimensional RMSD matrix of the currently available trajectory, which precluded the application of the method to very long simulations. Here we describe a new variant of the Good-Turing algorithm whose memory requirements scale linearly with the number of structures in the trajectory, making it suitable even for extremely long simulations. We show that the new method gives essentially identical results to the older implementation, and present results obtained from trajectories containing up to 22 million structures. A computer program implementing the new algorithm is available from standard repositories.
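The core Good-Turing estimate underlying the method can be stated in a few lines (a generic illustration of the estimator, not the paper's RMSD-based clustering pipeline; the toy labels are ours):

```python
# Good-Turing estimate of the probability that the next observation is a
# completely new category: n1/N, the fraction of observations whose
# category has been seen exactly once.
from collections import Counter

def p_unseen(cluster_labels):
    counts = Counter(cluster_labels)
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / len(cluster_labels)

# toy "trajectory": each label is the structural cluster of one frame
frames = ["A", "A", "B", "C", "A", "B", "D", "E"]
print(p_unseen(frames))  # C, D, E each seen once out of 8 frames -> 0.375
```

Note that only the counts per cluster are needed, which is why a streaming formulation can avoid holding a full pairwise RMSD matrix in memory.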
Background: Understanding electronic interactions in protein active sites is fundamental to drug discovery and enzyme engineering, but remains computationally challenging due to exponential scaling of quantum mechanical calculations. Results: We present a quantum-classical hybrid framework for simulating protein fragment electronic structure using variational quantum algorithms. We construct fermionic Hamiltonians from experimentally determined protein structures, map them to qubits via the Jordan-Wigner transformation, and optimize ground state energies using the Variational Quantum Eigensolver implemented in pure Python. For a 4-orbital serine protease fragment, we achieve chemical accuracy (< 1.6 mHartree) with 95.3% correlation energy recovery. Systematic analysis reveals three-phase convergence behaviour with exponential decay (α = 0.95), power law optimization (γ = 1.21), and asymptotic approach. Application to SARS-CoV-2 protease inhibition demonstrates predictive accuracy (MAE = 0.25 kcal/mol), while cytochrome P450 metabolism predictions achieve 85% site accuracy. Conclusions: This work establishes a pathway for quantum-enhanced biomolecular simulations on near-term quantum hardware, bridging quantum algorithm development with practical biological applications.
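The VQE loop itself is conceptually simple; a toy single-qubit version in pure Python/NumPy (far smaller than the 4-orbital fragment above, and with a Hamiltonian chosen by us for illustration):

```python
# Toy VQE: minimize <psi(theta)|H|psi(theta)> for H = Z + 0.3 X over a
# hardware-efficient Ry ansatz, and compare with exact diagonalization.
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=float)
X = np.array([[0, 1], [1, 0]], dtype=float)
H = Z + 0.3 * X

def energy(theta):
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])  # Ry(theta)|0>
    return psi @ H @ psi

# classical outer loop: here a dense grid scan stands in for an optimizer
thetas = np.linspace(0, 2 * np.pi, 2001)
e_vqe = min(energy(t) for t in thetas)
e_exact = np.linalg.eigvalsh(H)[0]
print(round(e_vqe, 4), round(e_exact, 4))  # both ≈ -1.0440
```

A real application replaces the grid scan with a gradient-based or gradient-free optimizer and the 2×2 matrix with a Jordan-Wigner-mapped molecular Hamiltonian over many qubits.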
The outbreak of mutant strains and vaccination behaviors have been the focus of recent epidemiological research, but most existing epidemic models fail to capture viral mutation while also accounting for the complexity and behavioral dynamics of vaccination. To address this gap, we develop an extended SIRS model that distinguishes infections with the original strain and a mutant strain, and explicitly introduces a vaccinated compartment. At the behavioral level, we employ evolutionary game theory to model individual vaccination decisions, where strategies are determined by both neighbors' choices and the current epidemiological situation. This process corresponds to the time-varying vaccination rate at which susceptible individuals transition to vaccinated individuals at the epidemic spreading level. We then couple the epidemic and vaccination behavioral processes through the microscopic Markov chain approach (MMCA) and finally investigate the evolutionary dynamics via numerical simulations. The results show that our framework can effectively mitigate outbreaks across different disease scenarios. Sensitivity analysis further reveals that vaccination uptake is most strongly influenced by vaccine cost, efficacy, and perceived risk of side effects. Overall, this behavior-aware modeling framework captures the co-evolution of viral mutation and vaccination behavior, providing quantitative and theoretical support for designing effective public health vaccination policies.
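A standard building block for the evolutionary-game layer described above is the Fermi imitation rule; a minimal sketch (the paper's exact update rule and payoff values may differ, and the numbers below are hypothetical):

```python
# Fermi rule: individual i imitates neighbor j's vaccination strategy with
# probability 1 / (1 + exp(-(payoff_j - payoff_i) / K)), where K sets the
# noise (irrationality) of the decision.
import math

def imitation_probability(payoff_i, payoff_j, K=0.1):
    return 1.0 / (1.0 + math.exp(-(payoff_j - payoff_i) / K))

# A vaccinated neighbor who avoided infection out-earns an infected
# non-vaccinator, so switching to vaccination is very likely.
p = imitation_probability(payoff_i=-1.0, payoff_j=-0.3)  # hypothetical payoffs
print(round(p, 3))
```

Averaging such switching probabilities over a susceptible individual's neighborhood yields the time-varying vaccination rate that feeds the MMCA epidemic layer.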
Serine hydroxymethyltransferase (SHMT) is an essential enzyme in the Escherichia coli folate pathway, yet it has not been adopted as an antibacterial target, unlike DHFR, DHPS, or thymidylate synthase. To investigate this discrepancy, we applied a multi-scale computational framework that integrates large-scale sequence analysis of 1,000 homologs, coevolutionary interaction mapping, structural community analysis, intrinsic disorder profiling, and adaptive fitness modelling. These analyses converge on a single conclusion: the catalytic core of SHMT forms an exceptionally conserved and tightly coupled structural unit. This region exhibits dense coevolution, strong intramolecular connectivity, minimal disorder, and extremely low mutational tolerance. Peripheral loops and termini, in contrast, are far more flexible. Relative to established folate-pathway antibiotic targets, the SHMT active site is even more rigid and evolutionarily constrained. This extreme constraint may limit the emergence of resistance-compatible mutations, providing a plausible explanation for the absence of natural-product inhibitors. Fitness trajectory modelling supports this interpretation, showing that nearly all active-site residues tolerate only rare or neutral substitutions. Together, these findings identify SHMT as a structurally stable and evolutionarily restricted enzyme whose catalytic architecture is unusually protected. This makes SHMT an underexplored yet promising target for the rational design of next-generation antibacterial agents.
Both abiotic self-organization and biological mechanisms have been put forward as the origin of a number of geological patterns. It is important to comprehend the formation mechanisms of such structures both to understand geological self-organization and in order to differentiate them from biological patterns -- fossils and bio-influenced structures -- seen in geological systems. Being able to distinguish the traces of biological activity from geological self-organization is fundamental both for understanding the origin of life on Earth and for the search for life beyond Earth.
Various theoretical and empirical studies have sought to explain why humans cooperate in competitive environments. Although prior work has revealed that network structure and multiplex interactions can promote cooperation, most theory assumes that individuals play similar dilemma games in all social contexts. However, real-world agents may participate in a diversity of interactions, not all of which present dilemmas. We develop an evolutionary game model on multilayer networks in which one layer supports the prisoner's dilemma game, while the other follows constant-selection dynamics, representing biased but non-dilemmatic competition, akin to opinion or fad spreading. Our theoretical analysis reveals that coupling a social dilemma layer to a non-dilemmatic constant-selection layer robustly enhances cooperation in many cases, across different multilayer networks, updating rules, and payoff schemes. These findings suggest that embedding individuals within diverse networked settings -- even those unrelated to direct social dilemmas -- can be a principled approach to engineering cooperation in socio-ecological and organizational systems.
Assembly theory (AT) introduces a concept of causation as a material property, constitutive of a metrology of evolution and selection. The physical scale for causation is quantified with the assembly index, defined as the minimum number of steps necessary for a distinguishable object to exist, where steps are assembled recursively. Observing countable copies of high assembly index objects indicates that a mechanism to produce them is persistent, such that the object's environment builds a memory that traps causation within a contingent chain. Assembly index and copy number together underlie the standardized metrology: the assembly index detects causation, while copy number evidences contingency. Together, these allow the precise definition of a selective threshold in assembly space, understood as the set of all causal possibilities. This threshold demarcates life (and its derivative agential, intelligent and technological forms) as structures with persistent copies beyond the threshold. In introducing a fundamental concept of material causation to explain and measure life, AT represents a departure from prior theories of causation, such as interventional ones, which have so far proven incompatible with fundamental physics. We discuss how AT's concept of causation provides the foundation for a theory of physics where novelty, contingency and the potential for open-endedness are fundamental, and determinism is emergent along assembled lineages.
The efficacy of cryopreservation is constrained by the difficulty of achieving sufficiently high intracellular concentrations of cryoprotective solutes without inducing osmotic injury or chemical toxicity during loading. This thermodynamic study introduces a new conceptual mechanism for cryoprotectant delivery into cells directly or through vascular perfusion. In this framework, effective cryoprotection could be achieved through the in situ generation of high intracellular concentrations of cryoprotective solutes via pressure-activated disassembly of membrane-permeant supramolecular assemblies composed of cryoprotectant monomers or oligomers. These supramolecules, present initially at low concentrations, are envisioned to enter cells through passive partitioning or endocytosis with minimal osmotic effect, and subsequently transform into a high intracellular concentration of cryoprotectants upon disassembly. We propose that elevated hydrostatic pressure, generated intrinsically during isochoric (constant-volume) freezing or applied externally under isobaric (constant-pressure) conditions, can destabilize supramolecular assemblies whose dissociated state occupies a smaller molar volume than the assembled state. Under isochoric freezing, ice formation within a fixed volume produces a substantial pressure increase as a thermodynamic consequence of phase change, rendering pressure a dependent variable governed by the Helmholtz free energy. Under isobaric conditions, pressure acts as an externally controlled variable through the Gibbs free energy. In both formulations, pressure-activated disassembly decouples membrane transport from cryoprotectant availability and enables synchronized solute generation precisely during cooling or freezing, without pre-loading of osmotically active solutes.
Large Protein Language Models have shown strong potential for generative protein design, yet they frequently produce structural hallucinations, generating sequences with high linguistic likelihood that fold into thermodynamically unstable conformations. Existing alignment approaches such as Direct Preference Optimization are limited in this setting, as they model preferences as binary labels and ignore the continuous structure of the physical energy landscape. We propose Physio-DPO, a physics-informed alignment framework that grounds protein language models in thermodynamic stability. Physio-DPO introduces a magnitude-aware objective that scales optimization updates according to the energy gap between native structures and physics-perturbed hard negatives. Experiments show that Physio-DPO consistently outperforms strong baselines including SFT, PPO, and standard DPO, reducing self-consistency RMSD to 1.28 Å and increasing foldability to 92.8%. Qualitative analysis further demonstrates that Physio-DPO effectively mitigates structural hallucinations by recovering biophysical interactions such as hydrophobic core packing and hydrogen bond networks.
We study self-organization in a minimally nonlinear model of large random ecosystems. Populations evolve over time according to a piecewise linear system of ordinary differential equations subject to a non-negativity constraint resulting in discrete time extinction and revival events. The dynamics are generated by a random elliptic community matrix with tunable correlation strength. We show that, independent of the correlation strength, solutions of the system are confined to subsets of the phase space that can be cast as time-varying Gardner volumes from the theory of learning in neural networks. These volumes decrease with the diversity (i.e. the fraction of extant species) and become exponentially small in the long-time limit. Using standard results from random matrix theory, the changing diversity is then linked to a sequence of contractions and expansions in the spectrum of the community matrix over time, resulting in a sequence of May-type stability problems determining whether the total population evolves toward complete extinction or unbounded growth. In the case of unbounded growth, we show the model allows for a particularly simple nonlinear extension in which the solutions instead evolve towards a new attractor.
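For context, the May-type stability criterion invoked above can be stated in its classic form (our paraphrase of May's 1972 result; the symbols below are not taken from the abstract):

```latex
% An equilibrium governed by a large random community matrix is almost
% surely stable when
\sigma \sqrt{N C} < d ,
% where N is the number of species, C the connectance (fraction of nonzero
% interactions), \sigma the standard deviation of interaction strengths,
% and d the magnitude of the diagonal self-regulation terms.
```

In the model above, extinctions shrink the effective N (the diversity), so the left-hand side contracts and expands over time, producing the sequence of stability problems described.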
Metastatic prostate cancer is one of the leading causes of cancer-related morbidity and mortality worldwide, and it carries a poor prognosis. In this work, we explore how a clinical oncologist can apply a Stackelberg game-theoretic framework to prolong metastatic prostate cancer survival, or even make it chronic in duration. We utilize a Bayesian optimization approach to identify the optimal adaptive chemotherapeutic treatment policy for a single drug (abiraterone) to maximize the time before the patient begins to show symptoms. We show that, with precise adaptive optimization of drug delivery, it is possible to significantly prolong the cancer suppression period, potentially converting metastatic prostate cancer from a terminal disease to a chronic disease for most patients, as supported by clinical and analytical evidence. We suggest that clinicians might explore high-level tight control (HLTC) treatment, in which the trigger signals (i.e., biomarker levels) for drug administration and cessation are both high and close together; such policies typically yield the best outcomes, as demonstrated through both computation and theoretical analysis. This simple insight could serve as a valuable guide for improving current adaptive chemotherapy treatments in other hormone-sensitive cancers.
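The on/off trigger logic behind HLTC can be sketched with a toy hysteresis controller on a logistic tumor-burden model (all dynamics and parameter values below are hypothetical illustrations, not the paper's calibrated model, and the sketch shows only the switching mechanism, not the resistance dynamics that make tight control advantageous):

```python
# Hysteresis drug controller: drug ON when the biomarker exceeds an upper
# trigger, OFF when it falls below a lower trigger.
def simulate(upper, lower, r=0.05, kill=0.12, x0=0.2,
             dt=0.1, t_max=2000.0, fail=0.9):
    x, t, on = x0, 0.0, False
    while t < t_max:
        if x >= upper:
            on = True
        elif x <= lower:
            on = False
        dx = r * x * (1 - x) - (kill * x if on else 0.0)  # logistic growth - kill
        x += dt * dx
        t += dt
        if x >= fail:
            return t              # time until burden reaches the failure level
    return t_max                  # controlled for the whole horizon

controlled = simulate(upper=0.8, lower=0.75)   # high, close-together triggers
untreated = simulate(upper=10.0, lower=9.0)    # triggers never reached
print(controlled, untreated)
```

In this toy, the controlled burden cycles between the two triggers for the whole horizon while the untreated burden quickly reaches the failure level; the paper's analysis concerns which trigger placement is optimal once resistance evolution is included.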
In this paper, a class of reaction-diffusion equations for Multiple Sclerosis is presented. These models are derived by means of a diffusive limit starting from a proper kinetic description, taking into account the underlying microscopic interactions among cells. At the macroscopic level, we discuss the necessary conditions for Turing instability phenomena and the formation of two-dimensional patterns, whose shape and stability are investigated by means of a weakly nonlinear analysis. Numerical simulations that confirm and extend the theoretical results are presented for a specific scenario.
Terracettes, striking, step-like landforms that stripe steep, vegetated hillslopes, have puzzled scientists for more than a century. Competing hypotheses invoke either slow mass-wasting or the relentless trampling of grazing animals, yet no mechanistic model has linked hoof-scale behavior to landscape-scale form. Here we bridge that gap with an active-walker model in which ungulates are represented as stochastic foragers moving on an erodible slope. Each agent weighs the energetic cost of climbing against the benefit of fresh forage; every hoof-fall compacts soil and lowers local biomass, subtly reshaping the energy landscape that guides subsequent steps. Over time, these stigmergic feedbacks concentrate traffic along cross-slope paths that coalesce into periodic tread-and-riser bands, morphologically analogous to natural terracettes. Our model illustrates how local foraging rules governing movement and substrate feedback can self-organize into large-scale topographic patterns, highlighting the wider role of decentralized biological processes in sculpting terrestrial landscapes.
Linearly transforming stimulus representations of deep neural networks yields high-performing models of behavioral and neural responses to complex stimuli. But does the test accuracy of such predictions identify genuine representational alignment? We addressed this question through a large-scale model-recovery study. Twenty diverse vision models were linearly aligned to 4.5 million behavioral judgments from the THINGS odd-one-out dataset and calibrated to reproduce human response variability. For each model in turn, we sampled synthetic responses from its probabilistic predictions, fitted all candidate models to the synthetic data, and tested whether the data-generating model would re-emerge as the best predictor of the simulated data. Model recovery accuracy improved with training-set size but plateaued below 80%, even at millions of simulated trials. Regression analyses linked misidentification primarily to shifts in representational geometry induced by the linear transformation, as well as to the effective dimensionality of the transformed features. These findings demonstrate that, even with massive behavioral data, overly flexible alignment metrics may fail to guide us toward artificial representations that are genuinely more human-aligned. Model comparison experiments must be designed to balance the trade-off between predictive accuracy and identifiability, ensuring that the best-fitting model is also the right one.
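The model-recovery procedure described above can be caricatured with Bernoulli "models": sample synthetic responses from one candidate's trial-wise probabilities, score every candidate by likelihood on the simulated data, and check whether the generator wins. This is a schematic stand-in; the actual study fits linear probes on deep-net features to odd-one-out triplets.

```python
import math
import random

def simulate_recovery(models, gen_idx, n_trials=2000, seed=0):
    """Toy model-recovery check. Each 'model' is a list of Bernoulli
    response probabilities per trial; we sample data from the model
    at gen_idx and return the index of the candidate with the
    highest log-likelihood on the simulated data."""
    rng = random.Random(seed)
    probs = models[gen_idx][:n_trials]
    data = [1 if rng.random() < p else 0 for p in probs]
    def loglik(m):
        return sum(math.log(p if yi else 1.0 - p)
                   for p, yi in zip(m, data))
    scores = [loglik(m) for m in models]
    return max(range(len(models)), key=scores.__getitem__)

# two candidates with distinct, well-separated trial-wise predictions
rng = random.Random(42)
m_a = [min(max(rng.random(), 0.05), 0.95) for _ in range(2000)]
m_b = [min(max(rng.random(), 0.05), 0.95) for _ in range(2000)]
recovered = simulate_recovery([m_a, m_b], gen_idx=0)
```

With such distinct candidates recovery is easy; the study's hard cases arise because linearly transformed deep-net representations make many candidates nearly interchangeable.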
Recent quantum models of cognition have successfully simulated several interesting effects in human experimental data, from vision to reasoning and, recently, even consciousness. The latter, consciousness, has proven a particularly challenging phenomenon to model, and most efforts have relied on abstract mathematical quantum methods focused mainly on conceptual issues. Classical (non-quantum) models of consciousness-related experiments exist, but they generally fail to align well with human data. We developed a straightforward quantum model to simulate conscious reporting of seeing or missing competing stimuli within the famous attentional blink paradigm. In an attentional blink task, a target stimulus (T2) that appears after a previous one (T1) can be consciously reported if the delay between their presentations is short enough (called lag 1); otherwise, it can be rendered invisible during the so-called refractory period of attention (lags 2 to 6 and even longer). To model this phenomenon, we employed a three-qubit entanglement ansatz circuit in the form of a deep teleportation channel instead of the well-known EPR channel. While reports of the competing stimuli were taken as the classical measurement outcomes, the effect of distractor stimuli (i.e., masks, if any) was encoded simply as random angle rotations. The simulation outcomes for different states were measured, and the classical outcome probabilities were then used as inputs to a simple linear neural network. The results revealed a non-linear, alternating state pattern that closely mirrors human responses in conscious stimulus reporting. The main result was a successful simulation of lag 1 sparing, lag 7 divergence, and masking effects through the probabilistic measurement outcomes in different conditions.
Prenatal maternal stress alters maternal-fetal heart rate coupling, as demonstrated by the Fetal Stress Index derived from bivariate phase-rectified signal averaging. Here, we extend this framework using information-theoretical measures to elucidate underlying mechanisms. In 120 third-trimester pregnancies (58 stressed, 62 control), we computed transfer entropy (TE), entropy rate (ER), and sample entropy (SE) under multiple conditioning paradigms, employing mixed linear models for repeated measures. We identify dual coupling mechanisms at the short-term (0.5-2.5 s), but not long-term (2.5-5 s), time scales: (1) stress-invariant state-dependent synchronization, with maternal decelerations exerting approximately 60% coupling strength on fetal heart rate complexity, a fundamental coordination conserved across demographics; and (2) stress-sensitive temporal information transfer (TE), showing exploratory associations with maternal cortisol that require replication. A robust sex-by-stress interaction emerged in TE from mixed models, with exploratory female-specific coupling patterns absent in males. Universal acceleration predominance was observed in both maternal and fetal heart rates, stronger in fetuses and independent of sex or stress. We provide insight into the dependence of these findings on the sampling rate of the underlying data, identifying 4 Hz, commonly used for ultrasound-derived fetal heart rate recordings, as the necessary and sufficient sampling rate regime to capture the information flow. Information-theoretical analysis reveals that maternal-fetal coupling operates through complementary pathways with differential stress sensitivity, extending the Fetal Stress Index by elucidating causal foundations. Future studies should explore additional information-theoretical conditional approaches to resolve stress-specific and time-scale-specific differences in information flow.
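As a minimal illustration of the TE measure used above, here is a plug-in transfer entropy estimator for symbolized (e.g. binarized) series with one-step histories. The study's conditioning schemes, state spaces, and time scales are far richer; this sketch only shows the quantity being estimated.

```python
import random
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Plug-in estimate of TE(X -> Y) with history length 1:
    sum over observed (y_next, y_prev, x_prev) of
    p(joint) * log2[ p(y_next|y_prev, x_prev) / p(y_next|y_prev) ]."""
    n = len(y) - 1
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles = Counter(y[:-1])
    te = 0.0
    for (yn, yp, xp), c in triples.items():
        p_full = c / pairs_yx[(yp, xp)]             # p(y_next | y_prev, x_prev)
        p_self = pairs_yy[(yn, yp)] / singles[yp]   # p(y_next | y_prev)
        te += (c / n) * log2(p_full / p_self)
    return te

# toy check: y copies x with a one-step delay, so TE(x -> y) is ~1 bit
rng = random.Random(0)
x = [rng.randint(0, 1) for _ in range(5000)]
y = [0] + x[:-1]
te_xy = transfer_entropy(x, y)
```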
Biological systems encode function not primarily in steady states, but in the structure of transient responses elicited by time-varying stimuli. Overshoots, biphasic dynamics, adaptation kinetics, fold-change detection, entrainment, and cumulative exposure effects often determine phenotypic outcomes, yet are poorly captured by classical steady-state or dose-response analyses. This paper develops an input-output perspective on such "dynamic phenotypes," emphasizing how qualitative features of transient behavior constrain underlying network architectures independently of detailed parameter values. A central theme is the role of sign structure and interconnection logic, particularly the contrast between monotone systems and architectures containing antagonistic pathways. We show how incoherent feedforward (IFF) motifs provide a simple and recurrent mechanism for generating non-monotonic and adaptive responses across multiple levels of biological organization, from molecular signaling to immune regulation and population dynamics. Conversely, monotonicity imposes sharp impossibility results that can be used to falsify entire classes of models from transient data alone. Beyond step inputs, we highlight how periodic forcing, ramps, and integral-type readouts such as cumulative dose responses offer powerful experimental probes that reveal otherwise hidden structure, separate competing motifs, and expose invariances such as fold-change detection. Throughout, we illustrate how control-theoretic concepts, including monotonicity, equivariance, and input-output analysis, can be used not as engineering metaphors, but as precise mathematical tools for biological model discrimination. Thus we argue for a shift in emphasis from asymptotic behavior to transient and input-driven dynamics as a primary lens for understanding, testing, and reverse-engineering biological networks.
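The IFF mechanism highlighted above is easy to make concrete: when the output is proportional to the input divided by a slowly tracking repressor, a step input produces a transient overshoot that adapts back to the same baseline for any fold change (fold-change detection). A minimal Euler-integration sketch with illustrative parameters:

```python
def iff_response(fold=2.0, t_end=10.0, dt=0.001):
    """Incoherent feedforward sketch: input x activates output z
    directly and represses it through a slow intermediate y.
    Returns the (time, z) trace for a step of size `fold` at t = 1."""
    x, y = 1.0, 1.0
    t, trace = 0.0, []
    while t < t_end:
        if t >= 1.0:
            x = fold          # step input at t = 1
        y += dt * (x - y)     # slow repressive arm tracks the input
        z = x / y             # fast activating arm / slow repressor
        trace.append((t, z))
        t += dt
    return trace
```

The non-monotonic pulse in z, absent from any monotone architecture, is exactly the kind of transient signature the text argues can falsify whole model classes.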
Elevated kurtosis values have been observed in subcortical grey matter structures of patients with neurodegenerative diseases. Here we tested whether these elevated values are related to iron content, which generates magnetic fields that add to the diffusion encoding gradients. Multi-shell diffusion and multi-echo gradient echo acquisitions were used to derive mean kurtosis and iron measures (R2* and magnetic susceptibility), respectively, in subcortical grey matter nuclei and white matter tracts in a discovery cohort (110 older and 63 younger adults) and a replication cohort (72 healthy older adults). Iron-rich grey matter regions exhibited higher mean kurtosis, R2*, and magnetic susceptibility, and white matter regions had lower mean kurtosis in the older adult group. In both cohorts, mean kurtosis was significantly correlated with R2* and magnetic susceptibility in iron-rich grey matter nuclei. No association was seen between signal-to-noise ratio and mean kurtosis in any grey matter region, indicating that the increase in mean kurtosis was not due to reduced signal-to-noise. Finally, a phantom experiment found higher mean kurtosis as iron concentration increased. Our findings indicate that kurtosis is associated with iron-sensitive metrics in iron-rich grey matter structures, suggesting that kurtosis may be sensitive to iron deposits.