### Globular structure of the hypermineralized tissue in human femoral neck

Bone becomes more fragile with ageing. Among many structural changes, a thin layer of highly mineralized and brittle tissue covers part of the external surface of the thin femoral neck cortex in older people and has been proposed to increase hip fragility. However, reports on this hypermineralized tissue in the femoral neck, especially on its ultrastructure, remain very limited. Such information is critical to understanding both the mineralization process and the tissue's contribution to hip fracture. Here, we use multiple advanced techniques to characterize the ultrastructure of the hypermineralized tissue in the neck across various length scales. Synchrotron radiation micro-CT revealed larger but less densely distributed cellular lacunae in the hypermineralized tissue than in lamellar bone. Under FIB-SEM, the hypermineralized tissue was composed mainly of mineral globules ranging in size from sub-micron to a few microns. Nano-sized channels were present within the mineral globules and aligned with the surrounding organic matrix. Transmission electron microscopy showed that the apatite inside the globules was poorly crystalline, while that at the boundaries between globules had a well-defined lattice structure with crystallinity similar to the apatite mineral in lamellar bone. No preferred mineral orientation was observed either inside the globules or at their boundaries. Collectively, these new observations lead us to conclude that the hypermineralized tissue is non-lamellar and has less organized mineral, which may contribute to its high brittleness.

### Event-based update of synapses in voltage-based learning rules

Due to the point-like nature of neuronal spiking, efficient neural network simulators often employ event-based simulation schemes for synapses. Yet many types of synaptic plasticity rely on the membrane potential of the postsynaptic cell as a third factor in addition to pre- and postsynaptic spike times. Such synapses therefore require continuous information to update their strength, which a priori necessitates a continuous, time-driven update. The latter hinders scaling of simulations to realistic cortical network sizes and to time scales relevant for learning. Here, we derive two efficient algorithms for archiving postsynaptic membrane potentials, both compatible with modern simulation engines based on event-based synapse updates. We theoretically contrast the two algorithms with a time-driven synapse update scheme to analyze their advantages in terms of memory and computation. We further present a reference implementation in the spiking neural network simulator NEST for two prototypical voltage-based plasticity rules: the Clopath rule and the Urbanczik-Senn rule. For both rules, the two event-based algorithms significantly outperform the time-driven scheme. Depending on the amount of data to be stored for plasticity, which differs greatly between the rules, a strong performance increase can be achieved by compressing or sampling the membrane-potential information. Our results on the computational efficiency of archiving provide guiding principles for the future design of learning rules, so that they remain practically usable in large-scale networks.
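The control flow behind event-based synapse updates can be illustrated with a toy sketch (the class names and the simple threshold rule below are illustrative assumptions, not the NEST implementation or the actual Clopath or Urbanczik-Senn rules): the neuron archives its filtered membrane potential at every time step, and a synapse touches that archive only when a presynaptic spike event arrives.

```python
class Neuron:
    """Time-driven postsynaptic neuron that archives a low-pass-filtered
    membrane potential for later, event-based consumption by synapses."""

    def __init__(self, dt=0.1, tau=10.0, u_rest=-70.0):
        self.dt, self.tau = dt, tau
        self.u_bar = u_rest          # filtered membrane potential (mV)
        self.archive = []            # one sample of u_bar per time step

    def step(self, u):
        # exponential low-pass filter of the raw membrane potential u
        self.u_bar += self.dt / self.tau * (u - self.u_bar)
        self.archive.append(self.u_bar)


class Synapse:
    """Event-driven synapse: updated only when a presynaptic spike arrives,
    at which point it replays all archived voltage samples at once."""

    def __init__(self, w=1.0, lr=1e-3, theta=-55.0):
        self.w, self.lr, self.theta = w, lr, theta
        self.last_read = 0           # index into the neuron's archive

    def on_presynaptic_spike(self, neuron):
        # toy voltage-based rule: potentiate in proportion to how far the
        # filtered voltage exceeded a plasticity threshold since last event
        for u_bar in neuron.archive[self.last_read:]:
            self.w += self.lr * max(u_bar - self.theta, 0.0)
        self.last_read = len(neuron.archive)
```

Between spikes the synapse does no work at all; the cost of plasticity is paid only at events. The two algorithms in the paper differ in how the archive is stored and pruned (e.g. compressed or sampled), not in this basic control flow.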

### Setting up and modelling of overflowing fed-batch cultures of Bacillus subtilis for the production and continuous removal of lipopeptides

This work concerns the setup of overflowing exponential fed-batch cultures (O-EFBC), derived from carbon-limited EFBC, dedicated to the production of mycosubtilin, an antifungal lipopeptide of the iturin family. O-EFBC permits the continuous removal of the product from the bioreactor, achieving complete extraction of mycosubtilin. This paper also provides a dynamical Monod-based growth model of this process that is accurate enough to simulate the evolution of the specific growth rate and to correlate it with the mycosubtilin specific productivity. Two distinct but interdependent phenomena related to the foam overflow are taken into account by the model: the outgoing broth flow rate and the loss of biomass. Interestingly, the biomass concentration in the foam was found to be lower than that in the bioreactor, making this process akin to a cell-recycling one. The parameters of the model are the growth yield on substrate and the maximal specific growth rate, estimated from experiments conducted at feed rates of 0.062, 0.071, and 0.086 h$^{-1}$. The model was extrapolated to five additional experiments carried out at feed rates of 0.008, 0.022, 0.040, 0.042, and 0.062 h$^{-1}$, enabling the correlation of the mean specific growth rates with the productivity results. Finally, a feed rate of 0.086 h$^{-1}$, corresponding to a mean specific growth rate of 0.070 h$^{-1}$, allowed a specific productivity of 1.27 mg of mycosubtilin g$^{-1}$ of dried biomass h$^{-1}$.
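A minimal sketch of such a growth model is given below, with invented parameter values (the actual yield, rate constants, and foam-partition coefficient would be estimated from the experiments): Monod kinetics are coupled to a foam-overflow term that removes broth at the feed rate and carries biomass at a reduced concentration.

```python
def simulate_oefbc(hours=24.0, dt=0.01,
                   mu_max=0.20,   # maximal specific growth rate (1/h), invented
                   Ks=0.1,        # Monod half-saturation constant (g/L), invented
                   Yxs=0.5,       # growth yield on substrate (g/g), invented
                   Sin=50.0,      # substrate concentration in the feed (g/L)
                   F=0.05,        # feed rate (L/h)
                   k_foam=0.3):   # biomass in foam / biomass in broth (< 1)
    """Euler integration of a Monod fed-batch with foam overflow.

    The foam overflow is assumed to balance the feed, keeping the broth
    volume constant, and to carry biomass at a lower concentration than
    the broth (k_foam < 1)."""
    V, X, S = 1.0, 0.5, 5.0       # volume (L), biomass and substrate (g/L)
    for _ in range(int(hours / dt)):
        mu = mu_max * S / (Ks + S)            # Monod specific growth rate
        D = F / V                             # dilution rate; foam out = feed in
        dX = mu * X - D * k_foam * X          # growth minus biomass lost in foam
        dS = -mu * X / Yxs + D * (Sin - S)    # consumption plus fresh feed
        X, S = X + dX * dt, max(S + dS * dt, 0.0)
    return X, S
```

Because `k_foam < 1`, biomass leaves the reactor more slowly than broth does, so the culture behaves like a cell-recycling process, consistent with the lower biomass concentration measured in the foam.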

### Chemical Property Prediction Under Experimental Biases

The ability to predict the chemical properties of compounds is crucial for discovering novel materials and drugs with specific desired characteristics. Recent significant advances in machine learning have enabled automatic predictive modeling from past experimental data reported in the literature. However, these datasets are often biased for various reasons, such as experimental plans and publication decisions, and prediction models trained on such biased datasets often over-fit to the biased distributions and perform poorly on subsequent uses. The present study focuses on mitigating bias in experimental datasets. To this end, we adopt two techniques, from causal inference and domain adaptation, combined with graph neural networks capable of handling molecular structures. Experimental results in four possible bias scenarios show that the inverse propensity scoring-based method makes solid improvements, while the domain-invariant representation learning approach fails.

### GuessTheMusic: Song Identification from Electroencephalography response

A music signal comprises different features such as rhythm, timbre, melody, and harmony; its impact on the human brain has been an exciting research topic for the past several decades. Electroencephalography (EEG) enables non-invasive measurement of brain activity. Leveraging recent advances in deep learning, we propose a novel approach for song identification from EEG responses using a convolutional neural network (CNN). We recorded EEG signals from a group of 20 participants while they listened to a set of 12 song clips, each of approximately 2 minutes, presented in random order. The repetitive nature of music is captured by a data slicing approach that treats 1 second of brain signal as representative of each song clip. More specifically, we predict the song corresponding to one second of EEG data given as input, rather than a complete two-minute response. We also discuss pre-processing steps to handle the large dimensionality of the dataset and various CNN architectures. For all experiments, each participant's EEG response to each song appears in both the train and test data. We obtained an accuracy of 84.96\%. The observed performance supports the notion that listening to a song creates specific patterns in the brain, and that these patterns vary from person to person.
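The slicing step can be sketched as follows (the shapes and the 128 Hz sampling rate are illustrative assumptions, not the paper's recording parameters): a channels-by-samples recording is cut into non-overlapping one-second windows, each inheriting the label of the song it came from.

```python
import numpy as np

def slice_eeg(eeg, song_id, fs=128, win_s=1.0):
    """Cut a (channels, samples) EEG recording into non-overlapping
    windows of win_s seconds, each labelled with the source song.

    Returns X of shape (n_windows, channels, fs * win_s) and a label
    vector y of length n_windows."""
    win = int(fs * win_s)
    n = eeg.shape[1] // win                      # number of whole windows
    X = (eeg[:, :n * win]                        # drop the ragged tail
         .reshape(eeg.shape[0], n, win)          # split the time axis
         .transpose(1, 0, 2))                    # -> (window, channel, time)
    y = np.full(n, song_id)
    return X, y
```

Each window then becomes an independent training example for the CNN, multiplying the number of examples per recording by the clip length in seconds.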

### Physiologically valid 3D-0D closed loop model of the heart and circulation -- Modeling the acute response to altered loading and contractility

Computer models of cardiac electro-mechanics (EM) have the potential to enhance our understanding of underlying mechanisms in the analysis of normal and dysfunctional hearts and show promise as guidance in designing and developing new treatments. Here, pre- and afterload conditions play a critical role in governing and regulating heart function under both normal and pathological conditions. The common approach of imposing loads upon the heart with isolated pre- and afterload models does not fully capture all properties of the circulatory system, as important feedback mechanisms are missing. Closed-loop models can capture these features and also guarantee the conservation of blood volume throughout the circulatory system over multiple cardiac cycles. In this study, we present the coupling of a 3D bi-ventricular EM model to the sophisticated 0D closed-loop model CircAdapt. We demonstrate the capabilities of this advanced framework by comparing a control case of a healthy subject to different pathological scenarios. Such simulation setups are not feasible with isolated pre- and afterload or simple closed-loop models and are hitherto unprecedented in the literature.

### Review of Machine-Learning Methods for RNA Secondary Structure Prediction

Secondary structure plays an important role in determining the function of non-coding RNAs. Hence, identifying RNA secondary structures is of great value to research. Computational prediction is a mainstream approach for predicting RNA secondary structure. Unfortunately, even though new methods have been proposed over the past 40 years, the performance of computational prediction methods has stagnated in the last decade. Recently, with the increasing availability of RNA structure data, new methods based on machine-learning technologies, especially deep learning, have alleviated the issue. In this review, we provide a comprehensive overview of RNA secondary structure prediction methods based on machine-learning technologies and a tabularized summary of the most important methods in this field. The current pending issues in the field of RNA secondary structure prediction and future trends are also discussed.

### PANDA: Predicting the change in proteins binding affinity upon mutations using sequence information

Accurately determining the change in protein binding affinity upon mutation is important for the discovery and design of novel therapeutics and for assisting mutagenesis studies. Determining this change requires sophisticated, expensive, and time-consuming wet-lab experiments that can be aided by computational methods. Most computational prediction techniques require protein structures, which limits their applicability to complexes with known structures. In this work, we explore sequence-based prediction of the change in protein binding affinity upon mutation, using protein sequence information together with machine learning techniques instead of protein structures. Our novel sequence-based predictor of the change in protein binding affinity, called PANDA, gives better accuracy than existing methods on the same validation set as well as on an external independent test dataset. On the external test dataset, PANDA achieves a maximum Pearson correlation coefficient of 0.52, compared with 0.59 for the state-of-the-art structure-based method MutaBind. Our sequence-based method thus has wide applicability and performance comparable to existing structure-based methods. A cloud-based webserver implementation of PANDA and its Python code are available at https://sites.google.com/view/wajidarshad/software and https://github.com/wajidarshad/panda.

### An infection process near criticality: Influence of the initial condition

We investigate how the initial number of infected individuals affects the behavior of the critical susceptible-infected-recovered process. We analyze the outbreak size distribution, duration of the outbreaks, and the role of fluctuations.
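A toy illustration of why the initial condition matters at criticality is the standard branching-process caricature of critical SIR (a simplification for illustration, not the paper's model): each infected individual infects a Poisson-distributed number of others with mean exactly 1, and the total outbreak size is tracked from $n_0$ initial cases.

```python
import math
import random

random.seed(1)

def poisson1():
    """Draw from a Poisson distribution with mean 1 (Knuth's algorithm)."""
    L, k, p = math.exp(-1.0), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

def outbreak_size(n0, cap=100_000):
    """Total progeny of a critical branching process started from n0
    infected individuals -- a caricature of the critical SIR outbreak,
    truncated at `cap` to keep heavy-tailed runs finite."""
    size = active = n0
    while active and size < cap:
        # each currently infected individual infects Poisson(1) new ones
        new = sum(poisson1() for _ in range(active))
        size += new
        active = new
    return size
```

At criticality the outbreak-size distribution is heavy-tailed, so the initial number of infected individuals shifts the whole distribution rather than merely its mean, which is the kind of dependence on the initial condition that the paper investigates.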

### A computational framework for evaluating the role of mobility on the propagation of epidemics on point processes

This paper is focused on SIS epidemic dynamics (also known as the contact process) on stationary Poisson point processes of the Euclidean plane, where the infection rate of a susceptible point is proportional to the number of infected points in a ball around it. Two models are discussed: the first with a static point process, and the second where points are subject to random motion. For both models, we use conservation equations for moment measures to analyze the stationary point processes of infected and susceptible points. A heuristic factorization of the third moment measure is then proposed to derive simple polynomial equations that yield closed-form approximations for the fraction of infected nodes in the steady state. These polynomial equations also lead to a phase diagram that tentatively delineates the regions of parameter space (population density, infection radius, infection and recovery rates, and motion rate) where the epidemic survives from those where it goes extinct. According to this phase diagram, the survival of the epidemic is not always an increasing function of the motion rate. These results are substantiated by simulations on large two-dimensional tori, which show that the polynomial equations accurately predict the fraction of infected nodes when the epidemic survives. The phase diagram is also partly substantiated by simulations of the mean survival time of the epidemic on large tori; it accurately predicts the parameter regions where the mean survival time increases or decreases with the motion rate.

### An Analysis by Synthesis Method that Allows Accurate Spatial Modeling of Thickness of Cortical Bone from Clinical QCT

Osteoporosis is a skeletal disorder that leads to increased fracture risk due to decreased strength of cortical and trabecular bone. Even with state-of-the-art non-invasive assessment methods, there is still a high rate of underdiagnosis. Quantitative computed tomography (QCT) permits the selective analysis of cortical bone; however, the low spatial resolution of clinical QCT leads to overestimation of the thickness of cortical bone (Ct.Th) and of bone strength. We propose a novel, model-based, fully automatic image analysis method that allows accurate spatial modeling of the cortical thickness distribution from clinical QCT. In an analysis-by-synthesis (AbS) fashion, a stochastic scan is synthesized from a probabilistic bone model, and the optimal model parameters are estimated using a maximum a posteriori approach. By exploiting the different characteristics of the in-plane and out-of-plane point spread functions of CT scanners, the proposed method is able to assess the spatial distribution of cortical thickness. The method was evaluated on eleven cadaveric human vertebrae, scanned by clinical QCT and analyzed using both standard methods and AbS, each compared to high-resolution peripheral QCT (HR-pQCT) as the gold standard. While standard QCT-based measurements overestimated Ct.Th by 560% and did not show a significant correlation with the gold standard ($r^2 = 0.20,\, p = 0.169$), the proposed method eliminated the overestimation and showed a significant, tight correlation with the gold standard ($r^2 = 0.98,\, p < 0.0001$) with a root mean square error below 10%.

### ACSS-q: Algorithmic complexity for short strings via quantum accelerated approach

In this research, we present a quantum circuit for estimating algorithmic complexity via the coding theorem method, which accelerates the inference of algorithmic structure in data for discovering causal generative models. The computation model is restricted in time and space resources to keep the approximation of the target metrics computable. We present a quantum circuit design, based on our earlier work, that allows executing a superposition of automata. As a use case, an application framework for a protein-protein interaction ontology based on algorithmic complexity is proposed. Using small-scale quantum computers, this approach has the potential to enhance the results of the classical block decomposition method, bridging the causal gap in entropy-based methods.

### Economy Versus Disease Spread: Reopening Mechanisms for COVID 19

We study mechanisms for reopening economic activities that explore the trade-off between containing the spread of COVID-19 and maximizing economic impact. This is of current importance as many organizations, cities, and states formulate reopening strategies. Our mechanisms, referred to as group scheduling, are based on partitioning the population into groups and scheduling each group on appropriate days, with possible gaps (when all are quarantined). Each group interacts with no other group and, importantly, any person in a group who is symptomatic is quarantined. Specifically, our mechanisms are characterized by three parameters $(g,d,t)$, where $g$ is the number of groups, $d$ is the number of days a group is continuously scheduled, and $t$ is the gap between cycles. We show that our mechanisms effectively trade off economic activity for more effective control of the COVID-19 virus. In particular, we show that the $(2,5,0)$ mechanism, which partitions the population into two groups that alternately work for five days each, flatlines the number of COVID-19 cases quite effectively while still maintaining economic activity at 70% of the pre-COVID-19 level. We also study mechanisms such as $(2,3,2)$ and $(3,3,0)$ that achieve somewhat lower economic output (about 50%) at the cost of more aggressive control of the virus; these could be applicable when the disease is spreading more rampantly in the population. We demonstrate the efficacy of our mechanisms through theoretical analysis and extensive experimental simulations on various epidemiological models. Our mechanisms prove beneficial simply by regulating human interactions. Moreover, our results show that if the disease transmission (reproductive) rate is lowered by social distancing, mask wearing, and other public health guidelines, the efficacy of our mechanisms increases further.
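One natural reading of the $(g,d,t)$ parameters is a cyclic calendar of length $g \cdot d + t$ days: each group works $d$ consecutive days in turn, followed by $t$ days in which everyone is quarantined. The sketch below encodes this reading of the scheduling rule (it is our interpretation for illustration, not the authors' code).

```python
def scheduled_group(day, g, d, t):
    """Return the index of the group scheduled on a given day, or None
    if the day falls in the all-quarantine gap at the end of the cycle."""
    cycle = g * d + t          # one full rotation through all groups plus the gap
    pos = day % cycle
    if pos >= g * d:
        return None            # gap: everyone is quarantined
    return pos // d            # which group's d-day stint this day falls in

# the (2,5,0) mechanism: two groups alternating five-day work stints
week = [scheduled_group(day, 2, 5, 0) for day in range(10)]
```

Under this reading, $(2,3,2)$ gives an eight-day cycle: group 0 works days 0-2, group 1 works days 3-5, and days 6-7 are a full quarantine gap, which is what lowers its economic output relative to $(2,5,0)$.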

### Large Deviation Approach to Random Recurrent Neuronal Networks: Rate Function, Parameter Inference, and Activity Prediction

Statistical field theory captures the collective non-equilibrium dynamics of neuronal networks, but it does not address the inverse problem of finding the connectivity that implements a desired dynamics. Here we show, for an analytically solvable network model, that the effective action of statistical field theory is identical to the rate function of large deviation theory, and we derive this rate function using field-theoretical methods. It takes the form of a Kullback-Leibler divergence and enables data-driven inference of model parameters and Bayesian prediction of time series.

### Predicting molecular phenotypes from histopathology images: a transcriptome-wide expression-morphology analysis in breast cancer

Molecular phenotyping is central to cancer precision medicine but remains costly, and standard methods provide only a tumour-average profile. Microscopic morphological patterns observable in histopathology sections from tumours are determined by the underlying molecular phenotype and are associated with clinical factors. This relationship between morphology and molecular phenotype can potentially be exploited to predict the molecular phenotype from the morphology visible in histopathology images. We report the first transcriptome-wide Expression-MOrphology (EMO) analysis in breast cancer, in which gene-specific models were optimised and validated for predicting mRNA expression both as a tumour average and in a spatially resolved manner. Individual deep convolutional neural networks (CNNs) were optimised to predict the expression of 17,695 genes from hematoxylin and eosin (HE) stained whole slide images (WSIs). Predictions for 9,334 (52.75%) genes were significantly associated with RNA-sequencing estimates (FDR-adjusted p-value < 0.05). 1,011 of these genes were brought forward for validation, with 876 (87%) and 908 (90%) successfully replicated in internal and external test data, respectively. Predicted spatial intra-tumour variability in expression was validated for 76 genes, of which 59 (77.6%) showed a significant association (FDR-adjusted p-value < 0.05) with spatial transcriptomics estimates. These results suggest that the proposed methodology can predict both tumour-average gene expression and intra-tumour spatial expression directly from morphology, providing a scalable approach to characterising intra-tumour heterogeneity.