For each poster contribution there will be one poster wall (width: 97 cm, height: 250 cm) available.
Please do not feel obliged to fill the whole space.
Posters can be put up for the full duration of the event.
Antonacci, Yuri
We introduce a novel approach to evaluate redundant and synergistic interactions in network systems through information-theoretic analysis of multivariate processes. The approach, based on the Partial Information Decomposition (PID), decomposes the information shared between the current states of a group of random processes and their own past states into distinct contributions. These contributions include unique components originating from the past of subgroups of processes, as well as redundant and synergistic components resulting from dynamic interactions among the subgroups. The framework is illustrated for pairs of processes and mathematically formulated for a network of N random processes. To demonstrate the approach's effectiveness, we provide an example of linearly interacting Gaussian processes, indicating that unidirectional coupling and bidirectional coupling with internal dynamics are the primary factors related to redundancy and synergy. We then apply the method to time series data representative of the activity of the cardiovascular and cardiorespiratory systems. The results illustrate how postural stress can alter the synergistic and redundant contributions of the interactions between different physiological systems.
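As a rough illustration of the kind of decomposition described above, the following sketch computes a bivariate PID for jointly Gaussian variables using the minimum-mutual-information (MMI) redundancy, the closed form Barrett derived for Gaussian systems. This is an illustrative stand-in, not the authors' specific measure, and the covariance example is hypothetical.

```python
import numpy as np

def gaussian_mi(cov, ix, iy):
    """I(X;Y) in nats for jointly Gaussian variables, from their covariance matrix."""
    def h(idx):  # log-det part of the Gaussian entropy; constants cancel in the MI
        sub = cov[np.ix_(idx, idx)]
        return 0.5 * np.log(np.linalg.det(sub))
    return h(ix) + h(iy) - h(ix + iy)

def mmi_pid(cov, s1, s2, t):
    """Bivariate PID with the MMI redundancy R = min(I(S1;T), I(S2;T))."""
    i1 = gaussian_mi(cov, s1, t)
    i2 = gaussian_mi(cov, s2, t)
    i12 = gaussian_mi(cov, s1 + s2, t)
    r = min(i1, i2)
    return {"redundancy": r, "unique1": i1 - r, "unique2": i2 - r,
            "synergy": i12 - i1 - i2 + r}

# hypothetical 3-variable system: sources S1, S2 (indices 0, 1), target T (index 2)
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
cov = A @ A.T + np.eye(3)       # any positive-definite covariance will do
pid = mmi_pid(cov, [0], [1], [2])
```

With the MMI redundancy, all four atoms are nonnegative and sum to the joint mutual information I(S1,S2;T), which is what makes the decomposition consistent.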
Camino-Pontes, Borja
A network is highly modular when different communities of nodes have high intra-connectivity within them and low inter-connectivity between them. Different methods and strategies have been used to maximize modularity in brain networks, see for instance [1] and references therein, resulting in a list indicating which node belongs to which community. Despite some strengths and weaknesses of the different methods, most of them start from a connectivity matrix that defines pairwise interactions between network nodes. Following previous work [2]–[4], we built here connectivity matrices defining high-order interactions between nodes, from triplets to n-plets, and compared the communities obtained across different modularity methods. We analyzed K = 148 healthy, unrelated subjects from the Human Connectome Project (HCP). Resting-state fMRI data were pre-processed with the ICA-FIX pipeline provided by HCP. We extracted region-level time series based on various atlases, such as the 68-region Desikan atlas and the 100-region Schaefer atlas. We first calculated the O-information, which accounts for high-order interactions in n-plets of regions [2], and particularized subsequent analyses to n=3 (X,Y,Z), i.e. the high-order interactions in triplets, also known as interaction information [5], [6]. If the value of the O-information is greater than 0, the interaction in the triplet is said to be redundant, and if it is lower than 0, it is said to be synergistic. Next, we built connectivity matrices for R and S, respectively the redundancy and the synergy in the triplet interaction between any two regions (X,Y) when interacting with region Z. Finally, considering each region Z as one possible node in the network, we maximized modularity using the Louvain method. Once the communities for each region Z were obtained, we calculated their correlation with the structural connectivity matrices and discussed the results. [1] O. 
Sporns and R. F. Betzel, "Modular Brain Networks", Annu. Rev. Psychol., vol. 67, no. 1, pp. 613-640, Jan. 2016, doi: 10.1146/annurev-psych-122414-033634. [2] F. E. Rosas, P. A. M. Mediano, M. Gastpar, and H. J. Jensen, "Quantifying high-order interdependencies via multivariate extensions of the mutual information", Phys. Rev. E, vol. 100, no. 3, p. 032305, Sep. 2019, doi: 10.1103/PhysRevE.100.032305. [3] M. Gatica et al., "High-Order Interdependencies in the Aging Brain", Brain Connect., vol. 11, no. 9, pp. 734-744, Nov. 2021, doi: 10.1089/brain.2020.0982. [4] F. Battiston et al., "Networks beyond pairwise interactions: Structure and dynamics", Phys. Rep., vol. 874, pp. 1-92, Aug. 2020, doi: 10.1016/j.physrep.2020.05.004. [5] A. Erramuzpe et al., "Identification of redundant and synergetic circuits in triplets of electrophysiological data", J. Neural Eng., vol. 12, no. 6, p. 066007, Dec. 2015, doi: 10.1088/1741-2560/12/6/066007. [6] B. Camino-Pontes et al., "Interaction Information Along Lifespan of the Resting Brain Dynamics Reveals a Major Redundant Role of the Default Mode Network", Entropy, vol. 20, no. 10, p. 742, Sep. 2018, doi: 10.3390/e20100742.
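The sign convention described above (O-information > 0 for redundancy, < 0 for synergy) can be illustrated with a minimal sketch. The following computes the O-information of Rosas et al. [2] under a Gaussian model of the time series (a simplifying assumption; the two toy triplets are hypothetical, not fMRI data):

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy (nats) of a Gaussian with covariance `cov`."""
    cov = np.atleast_2d(cov)
    k = cov.shape[0]
    return 0.5 * (k * np.log(2 * np.pi * np.e) + np.linalg.slogdet(cov)[1])

def o_information(data):
    """O-information of an (n_samples, n_vars) array under a Gaussian model.
    Positive values indicate redundancy-dominated n-plets, negative synergy."""
    cov = np.cov(data, rowvar=False)
    n = cov.shape[0]
    omega = (n - 2) * gaussian_entropy(cov)
    for i in range(n):
        rest = [j for j in range(n) if j != i]
        omega += gaussian_entropy(cov[i, i]) - gaussian_entropy(cov[np.ix_(rest, rest)])
    return omega

rng = np.random.default_rng(1)
n = 20000
z = rng.standard_normal(n)
# redundant triplet: three noisy copies of a shared signal
redundant = np.stack([z + 0.1 * rng.standard_normal(n) for _ in range(3)], axis=1)
x, y = rng.standard_normal(n), rng.standard_normal(n)
# synergy-dominated triplet: the third variable mixes the first two
synergistic = np.stack([x, y, x + y + 0.1 * rng.standard_normal(n)], axis=1)
```

For n=3 the O-information reduces to the interaction information of [5], [6], so the two toy triplets land on opposite sides of zero.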
Castro Novoa, Samy
The functional connectivity (FC) between cortical regions can be divided into bottom-up and top-down pathways, each using a specific frequency band. Those frequencies emerge at the regional scale from oscillations in different cortical layers. Explanations for the origin and use of this frequency-specific communication range from structure, such as specific cell types (PV, SOM) for each frequency band, to function, such as multiplexed communication in high frequencies for bottom-up and low frequencies for top-down signaling. Using rate models of coupled regions embedded in a realistic multi-layer cortical organization, we show that layers display specific and segregated frequencies: deep-layer oscillations show low frequencies and superficial layers show high frequencies. This is surprising because each layer has the same excitatory and inhibitory populations. We show that this frequency segregation is a by-product of self-organized collective dynamics rather than hardwired anatomy or interneuronal diversity. Then, we evaluated communication between two hierarchically different cortical regions using spectral Granger causality. The model shows, in agreement with empirical data, that communication is possible bottom-up in high frequencies and top-down in low frequencies. But the model also shows that the top-down pathway can change its frequency of communication from low to high. This modulation is flexibly adjusted by contextual inputs (i.e. salience/attention) or by the weight of interregional connections. Then, we evaluated the informational structure of these communication dynamics, using S-information for the interdependencies and O-information for the synergies/redundancies. The system shows high interdependencies overall but, more interestingly, synergies are maximized in the regimes of high/low-frequency communication. 
Those synergies also show a relationship with the dynamic regime of the system, in which the frequencies and phase relationships are related to the level of synergy found. In conclusion, our work on coupled cortical regions suggests that frequency-specific communication is not only a result of hardwired anatomy but also an emergent property of self-organized collective dynamics with roots in synergistic interactions.
Celotto, Marco
A key element for understanding neural computations in task-related contexts is the ability to measure how much information neural populations exchange about specific external variables. Current methodologies rely on the Wiener-Granger causality principle as a criterion to measure directed information flow between simultaneously recorded neural signals, i.e. how much the past of the sender neural variable $X_{past}$ predicts the present of a receiver neural variable $Y_{t}$ beyond what can be predicted by the past of the receiver $Y_{past}$. In information-theoretic terms this principle is captured by Transfer Entropy, $TE=I(X_{past}; Y_{t} |Y_{past})$. However, while $TE$ allows quantifying the magnitude and directionality of information flow, it provides no insight into communication content. Here, we define a novel measure, which we call Feature-specific Information Transfer ($FIT$). $FIT$ is a directed measure of information transfer that uses Partial Information Decomposition (PID) to quantify the part of $TE$ that is about a specific target variable, such as a feature of a sensory stimulus $S$. The definition of $FIT$ depends on four variables: $X_{past}$, $Y_{past}$, $Y_{t}$ and $S$. Conceptually, $FIT$ is the information about $S$ that is shared by $X_{past}$ and $Y_{t}$ and is not present in $Y_{past}$. Mathematically, we wanted $FIT$ to be simultaneously upper bounded by $TE$, by the amount of feature information encoded in the past of the emitter signal, $I(S;X_{past})$, and by the information encoded in the present of the receiver, $I(S;Y_{t})$. Since none of the PID information atoms appearing on single lattices with three source variables and one target satisfied these properties, we defined $FIT$ by relating atoms appearing on lattices with different targets that are constrained by classical Shannon information quantities, as previously done in the trivariate case (Pica et al., Entropy 2017). 
We validated $FIT$ on simulated data, showing that it only captures information transfer that is about specific features, correctly discarding the transfer of feature-unrelated activity, while $TE$ is sensitive to the overall information transmission. Then, we tested $FIT$ on several real datasets, including magnetoencephalographic recordings from humans during a decision-making task and local field potential recordings from rats during motor learning. This analysis revealed task-relevant patterns of sensory and motor information transmission that were hidden when measuring the overall information flow using $TE$. In summary, $FIT$ extends previous methodologies, using PID to combine the Wiener-Granger causality principle of information transfer with content specificity about a target variable of interest in a single measure.
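The transfer entropy at the core of this construction, $TE = I(X_{past}; Y_t | Y_{past})$, can be estimated for binary time series with simple plug-in entropies. The sketch below (order-1 histories, a simplifying assumption) illustrates the $TE$ quantity itself, not the authors' $FIT$ estimator; the delayed-copy channel is a hypothetical example.

```python
import numpy as np
from collections import Counter

def plugin_entropy(counter):
    """Plug-in Shannon entropy (bits) from a Counter of observed symbols."""
    p = np.array(list(counter.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

def transfer_entropy(x, y):
    """TE(X -> Y) = I(X_past; Y_t | Y_past) with order-1 histories,
    written as a combination of plug-in joint entropies."""
    xp, yp, yt = x[:-1], y[:-1], y[1:]
    h_yp = plugin_entropy(Counter(zip(yp)))
    h_yt_yp = plugin_entropy(Counter(zip(yt, yp)))
    h_xp_yp = plugin_entropy(Counter(zip(xp, yp)))
    h_all = plugin_entropy(Counter(zip(yt, yp, xp)))
    return (h_yt_yp - h_yp) - (h_all - h_xp_yp)

rng = np.random.default_rng(2)
n = 50000
x = rng.integers(0, 2, n)
y = np.zeros(n, dtype=int)
y[1:] = x[:-1]                       # y copies x with a one-step delay...
flips = rng.random(n - 1) < 0.1
y[1:][flips] ^= 1                    # ...corrupted by 10% bit flips
te_xy = transfer_entropy(x, y)       # close to 1 - H(0.1), about 0.53 bits
te_yx = transfer_entropy(y, x)       # close to 0: x is i.i.d.
```

The asymmetry between the two directions is the Wiener-Granger signature: the sender's past improves prediction of the receiver, but not vice versa.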
Dohnany, Sebastian
Accurately characterising information flow between brain regions is a central challenge in neuroscience. Previous work has successfully used transfer entropy to determine information flow between different parts of the brain and to establish a hierarchy of information processing. Here, we use the framework of partial information decomposition (PID) to decompose transfer entropy into two qualitatively different components: synergistic (dependent on the previous state of the target region) and unique (independent of that state). We find that synergy dominates brain region interactions, a result we replicate across different imaging modalities and brain states, in monkeys as well as humans, and validate using multiple PID measures. Overall, the results enable a more nuanced reinterpretation of information flow in the brain and suggest that the brain might be using synergy for efficient information transfer.
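A minimal sketch of the decomposition described above, under two simplifying assumptions: binary time series and the minimum-mutual-information redundancy (the abstract validates across several PID measures; this is only one of them). In this decomposition, transfer entropy splits into a unique part carried by the sender's past alone and a synergistic part that requires the target's own past. The XOR-driven target is a hypothetical example chosen to make the transfer purely synergistic.

```python
import numpy as np

def entropy_bits(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mi_from_joint(pxy):
    """I(X;Y) in bits from a 2-D joint probability table."""
    return (entropy_bits(pxy.sum(axis=1)) + entropy_bits(pxy.sum(axis=0))
            - entropy_bits(pxy.ravel()))

def decompose_te(x, y):
    """Split TE(X -> Y) into a unique part (independent of the target's past)
    and a synergistic part, using the minimum-mutual-information redundancy."""
    xp, yp, yt = x[:-1], y[:-1], y[1:]
    joint = np.zeros((2, 2, 2))
    for a, b, c in zip(xp, yp, yt):
        joint[a, b, c] += 1
    joint /= joint.sum()
    i_x = mi_from_joint(joint.sum(axis=1))       # I(X_past; Y_t)
    i_y = mi_from_joint(joint.sum(axis=0))       # I(Y_past; Y_t)
    i_xy = mi_from_joint(joint.reshape(4, 2))    # I(X_past, Y_past; Y_t)
    te = i_xy - i_y                              # I(X_past; Y_t | Y_past)
    unique = i_x - min(i_x, i_y)                 # unique information of X_past
    return te, unique, te - unique               # synergy = TE - unique

rng = np.random.default_rng(3)
x = rng.integers(0, 2, 100000)
y = np.zeros_like(x)
for t in range(1, len(x)):
    y[t] = x[t - 1] ^ y[t - 1]   # XOR update: the transfer is purely synergistic
te, unique, synergy = decompose_te(x, y)
```

Here neither the sender's past nor the target's past alone predicts the target's next state, so nearly all of the 1 bit of transfer entropy lands in the synergistic component.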
Erez, Amir
A ubiquitous way that cells share information is by exchanging molecules. Yet, the fundamental ways that this information exchange is influenced by intracellular dynamics remain unclear. Here we use information theory to investigate a simple model of two interacting cells with internal feedback. We show that cell-to-cell molecule exchange induces a collective two-cell critical point and that the mutual information between the cells peaks at this critical point. Information can remain large far from the critical point on a manifold of cellular states but scales logarithmically with the correlation time of the system, resulting in an information-correlation time trade-off. This trade-off is strictly imposed, suggesting the correlation time as a proxy for the mutual information.
Gökmen, Doruk Efe
Recently, an equivalence between real-space RG and optimal compression theory was established for translation-invariant systems. It was used to formulate the optimal coarse-graining as a model-dependent solution to a variational problem, eliminating arbitrary heuristics and choices when identifying the relevant degrees of freedom. An efficient machine learning implementation of this principle enabled us to map out phase diagrams, extract order parameters and relevant operators, and discover their emergent symmetries. Here we tackle the long-standing problem of constructing a general model-driven real-space RG for higher-dimensional inhomogeneous systems. We extend the compression-theoretic approach and develop the RSMI-NE code package for systems on arbitrary static graphs. We apply our method to the problem of dimer coverings on the quasiperiodic Ammann-Beenker tiling, which was suspected to host exotic criticality. The numerically computed coarse-graining rules (1) vary depending on the location of the block and (2) map collections of microscopic degrees of freedom into emergent "super-dimers" obeying an effective dimer constraint at successive scales. These results explicitly reveal a discrete scale invariance and the proximity of the original model to an RG fixed point, demonstrating the power of our method. More broadly, our method enables applying RG to high-dimensional complex systems.
Herzog, Rubén
Introduction: A major challenge in the neuroscientific study of consciousness is to identify and characterize different states of consciousness based only on neurophysiological signatures, which could be used to deepen our understanding of the brain-consciousness relationship and to develop effective biomarkers for neuropsychiatric conditions. Typically, these signatures reflect different aspects of the collective activity of the brain, such as functional connectivity, oscillations, or complexity. Because different states of consciousness involve functional reconfigurations of brain dynamics with widespread effects at multiple spatial scales, measures limited to the analysis of single regions and/or their pairwise interactions may miss information encoded in higher-order interactions. However, accounting for all possible combinations of high-order interactions among regions becomes prohibitive even for small brain atlases (e.g. 20 regions imply $\sim 10^6$ potential interactions). This combinatorial explosion greatly hinders the use of high-order functional connectivity (HOFC) as a practical signature of states of consciousness. In this work, we study the specific changes in the collective activity of the brain induced by various mind-altering drugs via HOFC, combined with a greedy algorithm to circumvent the combinatorial problem. Methods: We analyzed resting-state fMRI data from subjects who received placebo or one of four drugs (LSD, N=16; psilocybin, N=13; MDMA, N=18; ketamine, N=14). The data were preprocessed using FSL, partitioned with the AAL90 atlas, and band-pass filtered between 0.01 and 0.1 Hz. We calculated the dual total correlation (DTC) between time series of 3 to 20 brain regions to measure HOFC. A greedy algorithm was used to identify drug-affected interactions in terms of decreased or increased HOFC (hypo- or hyperconnectivity), quantified by Cohen's d effect size. 
Results: All drugs induced significant changes in HOFC ($p < 10^{-3}$) at different orders of interaction (from 3 to 20), both in terms of hypo- and hyperconnectivity. Serotonergic drugs (LSD, psilocybin and MDMA) showed mainly hypoconnectivity, while ketamine was dominated by hyperconnectivity. The topographic analysis of these interactions revealed that each drug elicited a specific pattern of hypo- and hyperconnectivity, but all of them involved hypoconnectivity of frontal regions. Conclusion: Our results show that combining HOFC with a greedy approach can reveal valuable information about brain interactions across different states of consciousness. Each of the studied drugs displayed a unique pattern of hypo- and hyperconnectivity. Crucially, drugs with similar pharmacological mechanisms showed mainly hypoconnectivity, while ketamine showed predominant hyperconnectivity. These promising findings suggest that HOFC could also reflect the underlying mechanisms behind different states of consciousness, and should be further studied with different drugs, multimodal recordings, and larger sample sizes.
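The two ingredients named in the Methods (DTC as the HOFC measure, plus a greedy search to dodge the combinatorial explosion) can be sketched as follows. This is a toy version under a Gaussian model with a simple "grow the best subset" heuristic and synthetic data; the authors' actual algorithm and effect-size scoring are not reproduced here.

```python
import numpy as np

def gaussian_entropy(cov):
    cov = np.atleast_2d(cov)
    k = cov.shape[0]
    return 0.5 * (k * np.log(2 * np.pi * np.e) + np.linalg.slogdet(cov)[1])

def dtc(cov, subset):
    """Dual total correlation DTC = H(X) - sum_i H(X_i | X_rest) of the
    variables in `subset`, under a Gaussian model."""
    sub = list(subset)
    h_all = gaussian_entropy(cov[np.ix_(sub, sub)])
    value = h_all
    for i in sub:
        rest = [j for j in sub if j != i]
        value -= h_all - gaussian_entropy(cov[np.ix_(rest, rest)])  # H(X_i|X_rest)
    return value

def greedy_dtc(cov, size):
    """Grow a high-DTC set of regions greedily, seeded by the most correlated pair."""
    n = cov.shape[0]
    d = np.sqrt(np.diag(cov))
    corr = np.abs(cov / np.outer(d, d))
    np.fill_diagonal(corr, 0)
    i, j = np.unravel_index(np.argmax(corr), corr.shape)
    subset = [int(i), int(j)]
    while len(subset) < size:
        candidates = [k for k in range(n) if k not in subset]
        subset.append(max(candidates, key=lambda k: dtc(cov, subset + [k])))
    return sorted(subset), dtc(cov, subset)

rng = np.random.default_rng(4)
common = rng.standard_normal(5000)
data = rng.standard_normal((5000, 8))
data[:, :4] += 2 * common[:, None]   # regions 0-3 form a strongly interacting group
cov = np.cov(data, rowvar=False)
subset, value = greedy_dtc(cov, 4)   # recovers the interacting group
```

Instead of scanning all subsets of up to 20 regions, the greedy loop evaluates only O(n) candidates per step, which is the point of the heuristic.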
Kroupa, Tomas
In joint work with Jaroslav Hlinka, we show how to approximate the so-called connected information of order k. This is an information-theoretic quantity depending on the solution of maximum entropy problems of orders k and k-1, where the constraints of the problems are given by fixed values of the entropies of marginals of order k and k-1, respectively. The resulting non-convex optimization problem is hard to solve numerically. We show how to approximate this problem by linear programming techniques in which the variables form an entropic vector and the constraints are the polymatroid constraints (outer approximation), together with additional information inequalities.
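The LP approximation itself is not reproduced here, but the quantity being approximated can be computed exactly for a small system, which helps fix ideas. A sketch for three binary variables, where the order-2 maximum-entropy distribution (with pairwise marginals held fixed) is obtained by iterative proportional fitting; the XOR example is a standard illustrative choice.

```python
import numpy as np

def entropy_bits(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def maxent_pairwise(p, iters=200):
    """Maximum-entropy distribution over 3 binary variables matching all
    pairwise marginals of `p`, via iterative proportional fitting."""
    q = np.full_like(p, 1.0 / p.size)
    for _ in range(iters):
        for pair in [(0, 1), (0, 2), (1, 2)]:
            other = ({0, 1, 2} - set(pair)).pop()
            target = p.sum(axis=other, keepdims=True)
            current = q.sum(axis=other, keepdims=True)
            ratio = np.divide(target, current, out=np.ones_like(current),
                              where=current > 0)
            q = q * ratio
    return q

def connected_information_2(p):
    """I_C(2) = H[maxent, 1st-order marginals] - H[maxent, 2nd-order marginals]."""
    m = [p.sum(axis=tuple(a for a in range(3) if a != i)) for i in range(3)]
    p1 = m[0][:, None, None] * m[1][None, :, None] * m[2][None, None, :]
    return entropy_bits(p1.ravel()) - entropy_bits(maxent_pairwise(p).ravel())

# XOR distribution: all pairwise marginals look independent, so the whole
# dependence is of order 3: I_C(2) = 0 while I_C(3) = 1 bit
p = np.zeros((2, 2, 2))
for a in (0, 1):
    for b in (0, 1):
        p[a, b, a ^ b] = 0.25
ic2 = connected_information_2(p)
ic3 = entropy_bits(maxent_pairwise(p).ravel()) - entropy_bits(p.ravel())
```

For larger systems this direct maximum-entropy computation becomes intractable, which motivates the LP relaxation over entropic vectors described in the abstract.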
Merkley, Amanda
Bivariate Partial Information Decomposition (PID) describes how information about a message is decomposed between two agents into unique, redundant, and synergistic components. We study this problem for scalar messages and Poisson-distributed sources, and show that unique information exists in at most one source under a new channel model. Mirroring Barrett's work on bivariate PID for Gaussian sources, our result provides closed-form solutions for PID terms when the sources are Poisson-distributed, facilitating estimation.
Mijatovic, Gorana
The dynamics of physiological variables are typically analyzed in discrete time by using measures of information dynamics. However, some physiological processes can be better characterized by using information-theoretic measures developed in continuous time. This is the case of point-process data that include information on the timing of events, such as neural spike trains or heartbeat timings. In this work, we present an information-theoretic framework developed to compute individual and pairwise model-free information-theoretic measures for point processes. The framework includes measures of the memory utilization capacity within a single point process, as well as directed (causal) and undirected (symmetric) measures of interaction between two processes. These measures are computed through an estimation procedure based on nearest-neighbor statistics, which includes a strategy for bias compensation based on surrogate data. The framework is first tested on simulations of both independent and coupled point processes, and then applied to different datasets of physiological point processes, including in-vitro preparations of spontaneously growing cultures of cortical neurons, cardiovascular variability, and functional magnetic resonance events. We conclude by outlining the extension of the framework towards the definition of multivariate measures allowing us to perform information decomposition and higher-order analysis of multiple point processes.
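The nearest-neighbor statistics mentioned above are the same family of estimators as the classic Kozachenko-Leonenko entropy estimator, which the following sketch illustrates in the simplest 1-D case (this is a generic illustration, not the authors' estimator, and the Poisson inter-spike intervals are hypothetical):

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329

def kl_entropy_1d(samples):
    """Kozachenko-Leonenko 1-nearest-neighbour estimator of differential
    entropy (nats) for 1-D data, e.g. inter-event intervals of a point process."""
    x = np.sort(np.asarray(samples, dtype=float))
    n = x.size
    gaps = np.diff(x)
    nn = np.empty(n)                 # distance to each sample's nearest neighbour
    nn[0], nn[-1] = gaps[0], gaps[-1]
    nn[1:-1] = np.minimum(gaps[:-1], gaps[1:])
    nn = np.maximum(nn, 1e-300)      # guard against exact ties
    # H ~ mean(log nn) + log(2) + log(n - 1) + Euler-Mascheroni constant
    return np.mean(np.log(nn)) + np.log(2.0) + np.log(n - 1) + EULER_GAMMA

# hypothetical inter-spike intervals from a Poisson process with rate 1:
# the true differential entropy of Exp(1) is exactly 1 nat
rng = np.random.default_rng(5)
isi = rng.exponential(scale=1.0, size=20000)
h_hat = kl_entropy_1d(isi)
```

The appeal of this estimator family is that it is model-free: no binning or parametric density is assumed, which matters for the irregular timing data the framework targets.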
Munn, Brandon
Conscious awareness is thought to emerge from coordinated neural activity; however, the neurobiological mechanisms underlying this process remain poorly understood. Recent causal evidence suggests that the coupling between apical and basal dendrites in thick-tufted layer 5 pyramidal neurons is crucial for both the state and contents of consciousness. Further, this coupling is mediated in part by nonspecific thalamocortical axons that have been separately related to the modulation of conscious states. Here, we demonstrate that a network model of thick-tufted layer 5 pyramidal neurons with thalamocortically-controlled apical-basal dendritic coupling can transition the system into a regime characterised by an admixture of spatiotemporally coordinated bursting and spiking that maximises integrated information, a quantitative metric of conscious awareness. Further, we show that the largest gain in integrated information occurs at an intermediate apical-basal dendritic coupling that coincides with empirically observed layer 5 pyramidal bursting and signatures of criticality. Our results outline a potential neurobiological mechanism for conscious awareness that integrates microscale theories and macroscale signatures of consciousness.
Neuhaus, Valentin
The limited understanding of living neural networks is a major obstacle to developing more effective and efficient artificial neural networks. The strong constraint of locality, which is crucial in the context of complex tasks that rely on the interplay of many neurons, poses a particularly difficult challenge. Our project seeks to overcome this challenge by developing a general framework for learning under the constraint of locality. At the core of our framework is Partial Information Decomposition (PID), a cutting-edge extension of the classical Shannon information-theoretic framework. PID provides a powerful set of tools for separating the information about a target into the contributions of the individual sources, including unique, redundant, and synergistic contributions. By employing PID, we can build neural networks with neurons that optimize these information contributions with respect to their sources, given a predefined goal function. To further improve the computational capabilities of our system, neurons with three output states are incorporated in addition to the usual binary neurons. This type of neuron is also observed in living neural networks in the form of tonic, bursting, and silent spiking behavior. We use these neurons to encode error signals that serve as feedback to other neurons. By doing so, we are able to build more complex networks that can operate under a less stringent constraint of locality. Overall, our project represents a significant step forward in understanding the principles underlying neural computation, particularly in the context of the constraint of locality. By leveraging PID as an approach for locally learning neurons and implementing feedback signals with neurons with three output states, we aim to develop more advanced and flexible artificial neural networks that better reflect the structure and function of living neural networks.
Panda, Rajanikant
Authors: Rajanikant Panda, Felipe Branco De Paiva, Yonatan Sanz Per, Jean-Flory Luaba Tshibanda, Nathalie Maquet, Marie-Elisabeth Faymonville, Gustavo Deco, Steven Laureys, Olivia Gosseries, Aminata Bicego, Audrey Vanhaudenhuyse Abstract: Background: Hypnosis is a non-ordinary state of consciousness characterized by a decreased awareness of the environment and modulation of self-awareness. Hypnosis has been shown to be of clinical utility in chronic pain disorders such as fibromyalgia; however, its underlying neural mechanisms remain unclear. We postulated that brain spatiotemporal complex dynamics, which are crucial to the understanding of the consciousness process [1,2], might serve as a biomarker to characterize the effects of hypnosis on the neural dynamics of the brain. Method: To test our hypothesis, we studied resting-state functional MRI in ten healthy subjects and 14 patients with fibromyalgia during the eyes-closed ordinary state of consciousness and the hypnotic state. Hypnosis was induced by muscle relaxation and eye fixation, accompanied by suggestions to experience a pleasant autobiographical memory. To assess brain spatiotemporal complex dynamics, we used data-driven voxel-to-voxel intrinsic connectivity, dynamic functional connectivity, and turbulence approaches. Brain turbulence (i.e., turbulence and information cascade) is based on fluid dynamics; it indicates how well brain networks display nonlinearity and spatiotemporal complexity in the neural dynamics of the information-flow cascade. Between-condition differences were assessed by a two-tailed paired t-test with FDR correction (p<0.05). Results: Our results showed that hypnosis increased intrinsic brain connectivity in occipital regions in both healthy participants and patients with fibromyalgia. However, decreased connectivity in inferior frontal areas was only found in healthy participants. 
Further assessing the underlying neural structure, we found that mean dynamic functional connectivity and brain turbulence increased during the hypnotic state (p<0.05) in healthy participants and patients with fibromyalgia. These increased brain spatiotemporal complex dynamics were more prominent in healthy participants than in patients with fibromyalgia. Conclusion: Hypnosis increased brain connectivity and complex spatiotemporal dynamics in healthy individuals, in line with other non-ordinary states of consciousness such as meditation [1,2]. The effect of hypnosis on brain dynamics was more pronounced in healthy individuals than in patients. We speculate that fibromyalgia patients may need more hypnosis sessions than healthy controls to show prominent neural effects. Our findings suggest that hypnosis strengthens brain network flexibility and could rewire altered network dynamics in clinical populations, specifically in patients with pain-related and emotion-related deficits such as depression. References: [1] Escrichs, A. et al. (2019). Characterizing the dynamical complexity underlying meditation. Frontiers in Systems Neuroscience. [2] Escrichs, A. et al. (2022). Unifying turbulent dynamics framework distinguishes different brain states. Communications Biology.
Raglio, Sofia
Transitive inference (TI) is a form of deductive reasoning that allows one to infer unknown relations among premises. It is believed that the task is cognitively solved by resorting to a linear mental workspace, the mental line, in which the stimuli are mapped according to their arbitrarily assigned rank. This means that, if one experiences that A>B and B>C, the relationship A>C can be transitively inferred, since after learning the items A, B and C are located at adjacent positions along this linear workspace. An open question is whether this mental line is encoded somewhere in the brain and, if so, where and how. Here, we investigated the possible role of the dorsal premotor cortex (PMd) in representing the hypothesized mental line during the acquisition of the relations between items, eventually leading to successful performance of a TI task. Two rhesus monkeys were tested, being required to select the higher-ranked item, presented alternately at the left or right position of a computer monitor, while the neural activity of the PMd was recorded simultaneously by 96 probes. We analyzed the multi-unit neural activity (MUA) by relying on a mathematical framework in which the needed mental line can be constructed as a linear combination of the representations of the stimuli/items in the neural state space. The applicability of this theoretical model relies on the hypothesis that both the identity and the spatial position of the stimuli are encoded in the probed network. As a first result, we found that PMd represents this information in its neural activity, together with a correlate of the difficulty of the motor decision. In agreement with the model, we found that the PMd representations of the stimuli, once projected on the theoretical mental line (a linear neural subspace), are predictive of the motor decision. 
Finally, we found striking evidence that the representations of both the stimuli and the motor plan are plastic, as they change in time according to the behavioral output. The mental line is thus realigned, increasingly overlapping with the axis that decodes the motor responses. Our results therefore provide evidence that a TI task can be solved as a linear transformation of the neural representations of arbitrarily ranked stimuli. PMd appears to have a leading role in manipulating such representations, efficiently transforming the ordinal knowledge of the stimulus relations into the motor output decision.
Rajpal, Hardik
The problem of inferring the technological structure of production processes and, hence, quantifying technological sophistication remains largely unsolved, at least in a generalizable way. This is reflected in an empirical literature that either focuses on outputs instead of transformative processes, or preemptively assumes the nature of technology. Building on recent advances in information theory, we develop a method to quantify technological sophistication. Our approach explicitly addresses the transformative process in which inputs interact to generate outputs, providing a more direct inference about the nature of technology. More specifically, we measure the degree to which an industry's inputs interact in a synergistic fashion. Synergies create novel outputs, so we conjecture that synergistic technologies are more sophisticated. We test this conjecture by estimating synergy scores for industries across nearly 150 countries using input-output datasets. We find that these scores predict popular export-based measures of economic complexity that are commonly used as proxies for economic sophistication. The method yields synergistic interaction networks that provide further insights into the structure of industrial processes. For example, they reveal that industries from the tertiary sector tend to be disassortative sector-wise. To the best of our knowledge, our findings are the first data-driven inference of technological sophistication within production processes (on an industrial scale). Thus, they provide the technological foundations of economic complexity and represent an important step toward its empirical microfoundations.
Ray Panda, Sneha
Introduction: Olfaction is the sensory modality through which individuals consciously experience different odors. Individuals with Parkinson's disease (PD) have been reported to show early signs of altered olfactory perception, termed "hyposmia", as one of various non-motor symptoms. However, the dynamic nature of information flow in local as well as global networks remains poorly understood. Therefore, in this study, we aimed to assess the alteration of information flow related to olfactory perception, in both local and global networks, in PD patients with hyposmia, in comparison to age- and gender-matched cognitively normal PD patients and healthy individuals. Methods: We selected brain structural and functional MRI of 15 Parkinson's patients with severe hyposmia (PD-SH), 15 cognitively normal Parkinson's patients (PD-CN) and 15 healthy control subjects (HC) from an open-access dataset (https://www.openfmri.org/dataset/ds000245/) made available by a recent study. Due to high head movements (>2 mm), three healthy subjects were excluded. Brain dynamics were assessed using dynamic brain states, which characterize the brain's spontaneous spatiotemporal network alterations, and information-theoretic measures (synergy and redundancy), which characterize the capacity of the brain for higher-order information exchange. Results: The probability of appearance of a dynamic brain state consisting of complex, long-range global connections was significantly decreased in PD-SH and PD-CN compared to HC. Furthermore, higher-order information (i.e., synergy and redundancy) was significantly reduced in the bilateral superior temporal, parahippocampal and cerebellar areas in the PD patients (both PD-CN and PD-SH). Though no significant differences were found between PD-SH and PD-CN, PD-SH showed a greater reduction in higher-order information exchange across broad brain areas, specifically in the bilateral frontal, insula and left sensorimotor areas. 
An important finding of our study is that the brain state with prominent modular, local clusters consisting of sensorimotor and frontal areas had an increased probability of occurrence in the patient groups compared to HC, as well as in PD-SH compared to PD-CN, which we also confirmed using a feature-ranking approach. Conclusion: This study suggests that reduced dynamic functional connectivity in different brain states might lead to altered olfaction. The brain states specific to the sensorimotor system are altered in Parkinson's patients with severe hyposmia. Our findings suggest that these alterations associated with olfactory perception could be used as markers to diagnose PD with hyposmia, and open a new perspective for targeted treatment in those patients.
René, Alexandre
As we seek to understand more complex neural phenomena, so too must our models become more complex. Increasingly, such models can be fitted directly in a data-driven manner (René et al., 2020). However, because the fitting problem is typically non-convex, this yields many possible parameter sets which cannot be differentiated using established model comparison methods. Our proposed solution begins with an ontological argument: we argue that it is essential to distinguish the physical model, which is grounded in theory, from the observation model. The parameters of the latter are less important, since its role is only to account for the mismatch between theory and data. In fact, we propose to forego parameterising the observation model altogether, to allow for "unknown unknowns". Instead, we compare parameter sets based on an average over the space of all possible observation models. We then proceed to develop a practical method for computing the expectation over unparameterised observation models. This is built upon three main ideas. The first is to cast expectations as path integrals in a space of 1d quantile functions. The second is a method for sampling quantile paths, and thus estimating the path integral numerically. The third is a calibration procedure, which allows practitioners to adjust the criterion's sensitivity to the specifics of the model and the data. Finally, we apply the method to compare fitted Hodgkin-Huxley and Wilson-Cowan models.
Reva, Maria
Neuronal membrane potential is sculpted by the numerous ionic currents present in the cell. Different values of neuronal electrical features can arise from different combinations of ionic current densities. While electrophysiological experiments provide us with a general understanding of the contribution of individual currents to certain electrical features, we lack a systematic exploration of how the interplay between ionic currents shapes neuronal electrical behaviour. Detailed biophysical models, however, allow us to tackle this issue by unravelling the inner mechanisms of the interactions between currents. In this study, we identify modules of ionic currents and their role in shaping neuronal electrical features using statistical methods and information theory. Using Markov Chain Monte Carlo methods, we generated thousands of non-identical electrical models with experimentally plausible electrophysiological characteristics for cells with various firing patterns. Based on the obtained sets of parameter and feature values, we used information-theoretic methods to detect interactions between model parameters and features at all orders, beyond pairwise correlations. We show that most features depend on synergetic or redundant combinations of several parameters. Only a small fraction of the features was influenced by one or two ionic currents, while most depended on more than five model parameters. In our models, we were able to map the cooperative modules of ionic currents and their relation to the electrical features. Further, we aim to assess the stability of the identified modules and their possible relation to gene expression levels and morphological properties of the cells.
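As a hedged illustration of what "beyond pairwise" interaction detection can mean, the interaction information – the three-variable case of such higher-order measures – can be estimated from discrete samples with plug-in entropies; XOR-coupled variables yield the textbook synergistic value of about −1 bit (the plug-in estimator and the toy data are assumptions for illustration, not the authors' pipeline):

```python
import numpy as np
from collections import Counter

def entropy(samples):
    # Plug-in Shannon entropy (bits) of a list of discrete samples.
    counts = Counter(samples)
    n = len(samples)
    return -sum(c / n * np.log2(c / n) for c in counts.values())

def interaction_information(x, y, z):
    # II(X;Y;Z) = H(X)+H(Y)+H(Z) - H(X,Y) - H(X,Z) - H(Y,Z) + H(X,Y,Z)
    # With this sign convention (matching the O-information at n = 3),
    # positive values indicate redundancy and negative values synergy.
    hx, hy, hz = entropy(list(x)), entropy(list(y)), entropy(list(z))
    hxy = entropy(list(zip(x, y)))
    hxz = entropy(list(zip(x, z)))
    hyz = entropy(list(zip(y, z)))
    hxyz = entropy(list(zip(x, y, z)))
    return hx + hy + hz - hxy - hxz - hyz + hxyz

# Toy data: z is determined by (x, y) jointly but by neither alone -> synergy.
rng = np.random.default_rng(1)
x = rng.integers(0, 2, 10_000)
y = rng.integers(0, 2, 10_000)
z = x ^ y
ii = interaction_information(x, y, z)  # ≈ -1.0 bit (synergistic triplet)
```

The same estimator applied to copies of one variable (z = x) would instead return a positive, redundant value.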
Rode, Julian
Biological cells and simple organisms can navigate in spatial fields of chemical cues. Sensing and motility noise prevalent at small scales render this chemotaxis a non-trivial problem, and it remains an open question when exactly different strategies to measure concentration gradients work best. We develop an information theory of chemotactic navigation that combines two fundamental gradient-sensing strategies: spatial comparison across an agent’s diameter, and temporal comparison due to its active motion, enabling us to dissect the optimal weighting of both information sources as a function of noise levels. Our model builds on seminal work on infotaxis by Vergassola et al. to maximize the expected information gain [1] and extends it to agents of finite size capable of both temporal and spatial comparison [2]. In the absence of motility noise, agents behave stereotypically: the entropy of the estimated direction of a target emitting chemical cues collapses on a master curve parameterized by a signal-to-noise ratio. Motility noise (not present in [1]) sets up a competition between continuous information gain due to chemical sensing and information loss due to stochastic rotational diffusion, resulting in a quasi-steady state of directional entropy, which again follows an empirical power law [2,3]. An ad-hoc information decomposition allows us to determine the relative contributions from spatial and temporal comparison, revealing a characteristic agent size above which spatial comparison pays off. References [1] M. Vergassola, E. Villermaux, and B. I. Shraiman: ‘Infotaxis’ as a strategy for searching without gradients. Nature 445, 406–409 (2007) [2] A. Auconi, M. Novak and B. M. Friedrich: Gradient sensing in Bayesian chemotaxis. EPL 138, 12001 (2022) [3] M. Novak and B. M. Friedrich: Bayesian gradient sensing in the presence of rotational diffusion. New J. Phys. 23, 043026 (2021)
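A minimal sketch of the competition described above – Bayesian information gain from sensing versus information loss from rotational diffusion, acting on a discretised posterior over target direction – might look like this (the cosine cue likelihood, bin count, and noise width are illustrative assumptions, not the models of refs. [1–3]):

```python
import numpy as np

def directional_entropy(p):
    # Shannon entropy (bits) of a discretised posterior over target direction.
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def sensing_update(prior, theta, theta_true, kappa=0.8):
    # Bayesian update with a toy cue likelihood peaked at the true direction
    # (illustrative assumption): sensing sharpens the posterior.
    likelihood = 0.5 * (1.0 + kappa * np.cos(theta - theta_true))
    post = prior * likelihood
    return post / post.sum()

def rotational_diffusion(p, sigma_bins=4.0):
    # Circular Gaussian blur via FFT: motility noise degrades directional
    # information by smearing the posterior around the circle.
    n = len(p)
    k = np.arange(n)
    kernel = np.exp(-0.5 * (np.minimum(k, n - k) / sigma_bins) ** 2)
    kernel /= kernel.sum()
    q = np.real(np.fft.ifft(np.fft.fft(p) * np.fft.fft(kernel)))
    q = np.clip(q, 0.0, None)
    return q / q.sum()

n = 64
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
p = np.full(n, 1.0 / n)                       # uniform prior: 6 bits over 64 bins
h0 = directional_entropy(p)
p = sensing_update(p, theta, theta_true=1.0)  # chemical cue: entropy drops
h1 = directional_entropy(p)
p = rotational_diffusion(p)                   # motility noise: entropy rises again
h2 = directional_entropy(p)
```

Iterating the update/diffusion cycle drives the directional entropy toward the kind of quasi-steady state the abstract describes, with its level set by the balance of sensing strength and rotational noise.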
Thestrup Waade, Peter
Research on joint action and social coordination has traditionally focused on action synchronicity (i.e., similar temporal ordering of actions) in simplistic experimental paradigms. While synchronicity is an interesting feature of social coordination, particularly in activities where synchronicity is the explicit goal, it is only one among many interesting features. In this experiment we move beyond these methodological limitations by a) using motion capture to investigate improvised partner dance (Lindy Hop and Argentine Tango) as a case of naturalistic and open-ended but also physically measurable joint coordination, and b) using integrated information decomposition methods to measure the amount of synergy in the dance (i.e., the emergence of non-reducible pair-level dynamics). We investigate how synergy relates to synchronicity and the degree of directed coupling between the dancers, as well as to the sense of togetherness and joint agency, and how this is modulated by manipulating leader-follower roles and musical stimulus.