Poster Contributions

For each poster contribution there will be one poster wall (width: 97 cm, height: 250 cm) available. Please do not feel obliged to fill the whole space. Posters can be put up for the full duration of the event.

Seeing faces in noise: Predicting perceptual decision by prestimulus brain oscillations

Bhattacharya, Joydeep

The perception of an external stimulus is not just stimulus-dependent but is also influenced by the ongoing brain activity prior to stimulus presentation. In this work, we directly tested whether spontaneous electrical brain activity in the prestimulus period could predict the perceptual outcome in face pareidolia (seeing faces in white-noise images) on a trial-by-trial basis. Participants were presented with noise images only, but with the prior information that some faces would be hidden in these images, while their EEG signals were recorded. Participants reported their perceptual decision, face or no-face, on each trial. Using features based on large-scale neural oscillations in a machine learning classifier, we demonstrated that prestimulus brain oscillations could achieve 74% classification accuracy. The time-frequency features representing hemispheric asymmetry yielded the best classification performance, and prestimulus alpha oscillations were found to be most crucially involved in predicting the perceptual decision. These findings suggest a mechanism by which prior expectation in the prestimulus period may shape post-stimulus decision making.

An integrate-and-fire network model for grid cell dynamics

Bonilla Quintana, Mayte

In 2005, Hafting et al. at the Moser lab discovered grid cells in the medial entorhinal cortex (MEC). These cells fire at multiple locations while an animal is wandering through an environment, defining a periodic triangular array that covers the entire surface, hence the name. Furthermore, grid cells fire at the same positions regardless of changes in the animal's speed and direction, and their firing persists in the absence of visual input. Grid cell activity is therefore believed to correspond to the animal's own sense of location. For the discovery of this type of cell, May-Britt and Edvard Moser won the 2014 Nobel Prize in Physiology or Medicine. Since the discovery of grid cells, many models have been developed to address the mechanism of grid cell firing. However, only a few models link the firing of grid cells to data on intracellular resonance and rebound spiking in layer II stellate cells of the MEC, which represent 70% of the total MEC II neural population and therefore a large fraction of the grid cell population. We propose an integrate-and-fire neural field model with a hyperpolarisation-activated cation current (h-current). The model is motivated by previous ones in which wave generation in spiking neural networks is hypothesised to underlie the formation of grid cell firing fields, but within a framework that allows for analytical tractability. Furthermore, inspired by relevant MEC data, we consider only inhibitory neural connectivity. Simulations of our model show sustained rebound spiking that propagates across the network after an initial hyperpolarising current is injected into a small fraction of the neurons. Our aim is to show that the experimentally observed difference in the h-current time constant along the dorso-ventral axis of the MEC can produce a difference in the size of and spacing between grid cell firing fields. To achieve this, we first perform a piecewise-linear reduction of our model that preserves its dynamics.
Such a reduction allows us to obtain a self-consistent solution for a periodic travelling wave. We developed a wave stability analysis using the theory of nonsmooth systems and observed a strong dependence of the period on the h-current time constant.
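The rebound-spiking mechanism at the heart of this model can be illustrated with a minimal single-neuron sketch (not the authors' network model; all names and parameter values here are illustrative): a leaky integrate-and-fire unit with a slow h-current gate that opens under hyperpolarisation and, once the hyperpolarising input is released, transiently depolarises the cell past threshold.

```python
import numpy as np

def simulate(g_h=4.0, i_pulse=-60.0, t_end=800.0, dt=0.1):
    """Leaky integrate-and-fire neuron with a hyperpolarisation-activated
    cation current (h-current). A strong hyperpolarising pulse (100-400 ms)
    slowly activates the h-gate; on release, the lingering inward current
    depolarises the cell and triggers rebound spikes."""
    e_leak, e_h = -65.0, -30.0     # leak / h-current reversal potentials (mV)
    tau_m, tau_h = 10.0, 100.0     # membrane / h-gate time constants (ms)
    v_th, v_reset = -50.0, -70.0   # spike threshold and reset (mV)
    v, h, spikes = e_leak, 0.0, 0
    n = int(t_end / dt)
    vs = np.empty(n)
    for i in range(n):
        t = i * dt
        inj = i_pulse if 100.0 <= t < 400.0 else 0.0   # hyperpolarising pulse
        h_inf = 1.0 / (1.0 + np.exp((v + 80.0) / 5.0))  # gate opens when hyperpolarised
        v += dt * (-(v - e_leak) + g_h * h * (e_h - v) + inj) / tau_m
        h += dt * (h_inf - h) / tau_h
        if v >= v_th:
            v, spikes = v_reset, spikes + 1
        vs[i] = v
    return vs, spikes
```

Because the h-gate (time constant `tau_h`) is much slower than the membrane (`tau_m`), it is still open when the pulse ends, which is what produces the rebound; in the full network model it is a gradient in this time constant along the dorso-ventral axis that is proposed to set the size and spacing of the firing fields.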

Simulating large-scale human brain networks with a mean-field model of EIF neurons: exploring resting state FC and stimulation with electric fields

Cakan, Caglar

The use of whole-brain network models for understanding the dynamics of the interactions between brain regions has risen in popularity in the last few years. Here, we calibrate a whole-brain network model to human resting-state data and use it to explore the effects of weak electric fields on the network dynamics. The structural connectivity of the brain network is extracted from parcellated brain scans using an atlas with 68 regions (Desikan et al., 2006) and DTI tractography of long-range axons to estimate coupling strengths and delays between regions, averaged over 48 individuals (Ritter et al., 2013; Schirner et al., 2015). The mean activity of each brain region is described by a mean-field population model of EIF neurons (Ladenbauer, 2015). After fitting local parameters, such as recurrent coupling strengths and delays, and the global parameters (coupling strength, axonal signal transmission speed, and external noise intensity), our model can produce simulated BOLD functional connectivity (FC) with a high Pearson correlation (mean .55, max/min .78/.25) to the empirical 20-minute resting-state BOLD FC of these individuals. A local model with a limit cycle at gamma frequencies and a bistability between a low- and a high-activity fixed point was found to produce good fits. A range of global parameters can produce good grand-average FC fits. However, the FC in the resting state is not stationary. To capture the brain's dynamical properties in the resting state, the FC fit is complemented by a fit of the FCD matrix (Hansen et al., 2015). We show that the simulated FCD matrix agrees well with empirical data (Kolmogorov distance around 0.1) on several timescales. Clustering of the power spectra of the local nodes shows that the nodes of the brain graph can be divided into two sets, with dominant alpha and gamma frequencies respectively, and that contralateral regions end up in the same cluster.
Lastly, we present results of modeling the effect of external tACS-like brain stimulation on the global network activity. By modifying the dynamics of a subset of nodes, the global dynamics of the brain network can be shaped. We stimulate the bilateral entorhinal cortices, the main interface of the cortex to the hippocampus. We show that, on the global network level, transitions from a DOWN state to an UP state tend to lock to the onset of oscillatory stimulation, and we relate these results to experimental findings in the rat brain (Battaglia et al., 2004).
References: Desikan, R. S., Ségonne, F., Fischl, B., Quinn, B. T., Dickerson, B. C., Blacker, D., … Killiany, R. J. (2006). An automated labeling system for subdividing the human cerebral cortex on MRI scans into gyral based regions of interest. NeuroImage, 31(3), 968–80. http://doi.org/10.1016/j.neuroimage.2006.01.021 Ritter, P., Schirner, M., McIntosh, A. R., & Jirsa, V. K. (2013). The Virtual Brain Integrates Computational Modeling and Multimodal Neuroimaging. Brain Connectivity, 3(2), 121–145. http://doi.org/10.1089/brain.2012.0120 Schirner, M., Rothmeier, S., Jirsa, V. K., McIntosh, A. R., & Ritter, P. (2015). An automated pipeline for constructing personalized virtual brains from multimodal neuroimaging data. NeuroImage, 117, 343–57. http://doi.org/10.1016/j.neuroimage.2015.03.055 Ladenbauer, J. (2015). The Collective Dynamics of Adaptive Neurons: Insights from Single Cell and Network Models. PhD Thesis. http://dx.doi.org/10.14279/depositonce-4791 Hansen, E. C. A., Battaglia, D., Spiegler, A., Deco, G., & Jirsa, V. K. (2015). Functional connectivity dynamics: Modeling the switching behavior of the resting state. NeuroImage, 105, 525–535. http://www.sciencedirect.com/science/article/pii/S1053811914009033 Battaglia, F. P., Sutherland, G. R., & McNaughton, B. L. (2004). Hippocampal sharp wave bursts coincide with neocortical “up-state” transitions. Learning & Memory (Cold Spring Harbor, N.Y.), 11(6), 697–704. http://doi.org/10.1101/lm.73504

Reconstructing networks of pulse-coupled oscillators from non-invasive observations

Cestnik, Rok

We present a method for reconstructing a network of pulse-coupled oscillators from non-invasive observations of the system's output. Assuming that the pulse trains of all nodes are known and that the coupling between the elements is sufficiently weak to justify a phase dynamics description, we recover the connectivity of the network and the properties of its nodes. Our basic model for a network node is a phase oscillator that issues a spike when its phase $\varphi$ reaches $2\pi$. (We consider the phases in the $[0, 2\pi)$ interval, i.e., after spike generation the phase of the unit is reset to zero.) This spike affects all other units of the network according to the strengths of the corresponding connections. Let the size of the network be $N$ and let the connectivity be described by an $N \times N$ coupling matrix ${\cal E}$, whose elements $\epsilon_{km}$ quantify the strength of the coupling from unit $m$ to unit $k$. Between spiking events, the phases of all units obey $\dot{\varphi}_k = \omega_k$, where $\omega_k$ are the natural frequencies. If unit $k$ receives a spike from oscillator $m$, it reacts to the stimulus according to its phase response curve (PRC), $Z_k(\varphi)$: the phase of the stimulated unit is instantaneously shifted, $\varphi_k \rightarrow \varphi_k + \epsilon_{km} Z_k(\varphi_k)$. Our approach is based on a preliminary estimate of the network connectivity, obtained by evaluating the impact one oscillator has on another oscillator's inter-spike intervals. This estimate, although crude, gives us enough insight into the network that, by linearly approximating the phases and representing the PRC as a finite Fourier series, we can obtain an approximation of the PRC. Once both approximations (connectivity and PRC) are available, they can be used to better approximate the phases, which in turn yields better approximations of the connectivity and PRC in the next iteration of the process.
The more iterations one does, the better the recovery.
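The forward model that this procedure inverts can be sketched in a few lines of naive Euler time-stepping (our illustration with illustrative parameter names, not the authors' reconstruction code):

```python
import numpy as np

def simulate_network(omega, eps, prc, t_end=200.0, dt=1e-3):
    """Forward model: phase oscillators that spike when phi reaches 2*pi.
    eps[k, m] is the coupling strength from unit m to unit k;
    prc(phi) is the phase response curve Z(phi).
    Returns per-unit lists of spike times."""
    n = len(omega)
    phi = np.zeros(n)
    spikes = [[] for _ in range(n)]
    t = 0.0
    while t < t_end:
        phi += omega * dt                      # free phase advance
        fired = np.where(phi >= 2 * np.pi)[0]
        for m in fired:
            phi[m] = 0.0                       # reset the spiking unit
            spikes[m].append(t)
            for k in range(n):                 # pulse-couple to the others
                if k != m:
                    phi[k] += eps[k, m] * prc(phi[k])
        t += dt
    return spikes
```

With zero coupling each unit's inter-spike interval is its natural period $2\pi/\omega_k$ (up to the discretisation error of `dt`); with a positive phase response and nonzero `eps`, incoming spikes advance the receiving unit's phase and shorten its intervals, which is exactly the signature the reconstruction method reads out.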

Influence of Inherent Prior Values in Decision-Making

Chien, Samson

Reinforcement learning (RL) has become the predominant model for predicting a subject’s choices based on the expected reward value (EV) of each cue, which is continuously adjusted during learning in proportion to a reward prediction error (PE). The common experimental setup utilizes value-neutral cues (e.g., fractal images) to study the emergence of EVs in pure form. However, most environmental cues are not value-neutral but carry certain inherent values. Here we investigate how these inherent values affect the learning of new (reward-based) EVs. One possible mechanism is that inherent values differentially affect learning rates, such that congruent cue-outcome associations, in which the inherent values and the EVs are similar, are learned more quickly (i.e., with a higher learning rate) than incongruent pairings. We tested this hypothesis in a 2x2 factorial design, using the facial attractiveness (high/low) of a visual cue as a proxy for inherent value and reward probability (0.7/0.3) as a target for newly learned EVs. Subjects were shown both attractive and unattractive face pictures of the opposite gender. Each picture was paired with a positive or negative monetary reward, either congruently or incongruently. Subjects were instructed to select the pictures with the goal of maximizing the overall monetary reward. Computational RL models were fitted to the behavioral data to derive cue-specific learning rates. Concurrent fMRI data were correlated with these learning rates, EVs, and PEs. The behavioral results indicated both faster response times and a higher learning rate for the congruent cue-outcome pairings. The model-based fMRI data analysis revealed well-established brain regions involved in decision making, such as the ventromedial prefrontal cortex for EVs and the ventral striatum for PEs.
In addition, we identified a previously unreported correlation between the cue-specific learning rates and the BOLD activity in a sub-region of the ventral striatum distinct from those representing the PEs and rewards. Our results complement earlier findings and further establish the role of the ventral striatum in decision-making.
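The congruency hypothesis can be stated compactly with the standard delta-rule value update underlying such RL models (a generic sketch, not the authors' fitted model; the `alpha` values are illustrative):

```python
def rescorla_wagner(rewards, alpha):
    """Delta-rule update: EV <- EV + alpha * PE, with PE = reward - EV."""
    ev = 0.0
    for r in rewards:
        pe = r - ev          # reward prediction error
        ev += alpha * pe     # higher alpha => faster learning
    return ev

# A congruent cue (higher learning rate) tracks the true reward faster:
ev_congruent = rescorla_wagner([1.0] * 10, alpha=0.5)    # ~0.999 after 10 trials
ev_incongruent = rescorla_wagner([1.0] * 10, alpha=0.1)  # ~0.651 after 10 trials
```

Fitting separate `alpha` values per cue type, as in the study, then turns the behavioral congruency effect into a parameter contrast that can be correlated with BOLD activity.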

Chimera states in hierarchical networks of FitzHugh-Nagumo oscillators and their role in epileptic seizures

Chouzouris, Teresa

Authors: Teresa Chouzouris, Iryna Omelchenko, Anna Zakharova, Eckehard Schöll; Institut für Theoretische Physik, Technische Universität Berlin, Germany. The collective behavior of networks of oscillators is of great current interest. Besides various zero-lag, cluster, or group synchronization patterns and oscillation death, special attention has recently been paid to chimera states, in which incoherent and coherent oscillations occur in spatially coexisting domains. Surprisingly, this symmetry-breaking behavior exists for identical elements and symmetric coupling configurations. One important application of chimera states in nature is the study of neural networks. Synchronization and desynchronization of neural activity are essential for explaining brain disorders, such as epileptic seizures and Parkinson's disease. During an epileptic seizure the electrical activity in the brain is excessive or synchronous, and studying chimera states can give further insight into the underlying mechanisms of the generation and termination of epileptic seizures. Recent studies on the architecture of neuron interconnectivity in the human and mammalian brain have shown that the axonal connectivity network has a hierarchical structure. Furthermore, the existence of chimera states in hierarchically coupled systems has recently been discovered [1,2]. We present a systematic analysis and comparison of the transition from asynchronous behaviour to synchrony via chimera states, both in structural neural networks derived from diffusion magnetic resonance imaging and in hierarchical networks. For this, the paradigmatic FitzHugh-Nagumo oscillator, describing the activation and inhibition dynamics of a spiking neuron, is used. The parameter values shifting the network closer to the synchronous epileptic state are investigated and simulations of epileptic seizures are presented. References: [1] Omelchenko, I., Provata, A., Hizanidis, J., Schöll, E. and Hövel, P., Robustness of chimera states for coupled FitzHugh-Nagumo oscillators, Phys. Rev. E 91, 022917 (2015). [2] Krishnagopal, S., Lehnert, J., Poel, W., Zakharova, A. and Schöll, E., Synchronization Patterns: From Network Motifs to Hierarchical Networks, Phil. Trans. R. Soc. A 375, 20160216 (2017).
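For reference, the local FitzHugh-Nagumo dynamics used in such studies can be sketched in a few lines; this is a plain ring with diffusive nearest-neighbour coupling rather than the hierarchical or dMRI-derived topologies of the abstract, and all parameter values are illustrative:

```python
import numpy as np

def fhn_ring(n=50, k=0.1, a=0.5, eps=0.05, t_end=50.0, dt=0.001, seed=0):
    """Ring of FitzHugh-Nagumo units: eps*du/dt = u - u^3/3 - v + coupling,
    dv/dt = u + a. For |a| < 1 each isolated unit is self-oscillatory.
    Returns the activator trace u(t) of unit 0."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(-1.0, 1.0, n)
    v = rng.uniform(-0.5, 0.5, n)
    steps = int(t_end / dt)
    trace = np.empty(steps)
    for i in range(steps):
        lap = np.roll(u, 1) + np.roll(u, -1) - 2.0 * u  # nearest-neighbour coupling
        u_new = u + dt * (u - u**3 / 3.0 - v + k * lap) / eps
        v += dt * (u + a)
        u = u_new
        trace[i] = u[0]
    return trace
```

Replacing the ring Laplacian with a hierarchical or empirically derived coupling matrix, as in the study, is what opens the door to chimera states between the fully asynchronous and fully synchronous regimes.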

Collective oscillations and neuronal avalanches in a network of noisy excitatory and inhibitory neurons

Dalla Porta Dornelles, Leonardo

Neuronal avalanches are bouts of spontaneous spatio-temporal activity with complex emergent properties that have been observed both in vitro and in vivo. The probability distributions of avalanche size and duration decay as power laws, P(s)~s^{-3/2} and P(d)~d^{-2}, respectively, suggesting that the network operates near a critical point [1]. The exponents observed for the probability distributions of neuronal avalanches are compatible with the critical exponents observed in models from the directed percolation (DP) universality class. However, these models do not take the dynamics of inhibitory neurons into account and, moreover, their phase transition is between an absorbing state and an active phase, which makes these models difficult to reconcile with the long-range temporal correlations observed experimentally at different spatial scales [2,3]. In an attempt to overcome some of these issues, Poil et al. proposed a computational model composed of excitatory and inhibitory neurons in a two-dimensional disordered network. They claim that a phase transition between an active and an oscillatory phase occurs in their model. At the critical line in parameter space, oscillations and neuronal avalanches emerge jointly. Furthermore, long-range temporal correlations (1/f noise) also emerge at criticality [4]. In the present study we investigate this model further. We have analyzed the model's robustness against changes in system size, interaction range, and the activity threshold that defines an avalanche. To test the hypothesis of a phase transition in the model, besides analyzing the temporal auto-correlation, we have sought to identify an order parameter for the model. References: [1] J. M. Beggs and D. Plenz, J. Neurosci. 23, 11167 (2003). [2] Linkenkaer-Hansen, K. et al., J. Neurosci. 21, 1370 (2001). [3] Ribeiro, T. et al., PLoS ONE 5, 14129 (2010). [4] Poil, S.-S. et al., J. Neurosci. 32, 9817 (2012).
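The cited size exponent can be reproduced with the simplest toy model of this universality class, a critical branching process (our illustration, not the Poil et al. network): each active unit activates a Poisson-distributed number of descendants, and at branching ratio sigma = 1 the avalanche size distribution follows P(s) ~ s^{-3/2}.

```python
import numpy as np

def avalanche_sizes(n_trials=10000, sigma=1.0, max_size=10**5, seed=1):
    """Sizes of avalanches in a branching process with mean offspring
    number sigma per active unit; sigma = 1 is the critical point."""
    rng = np.random.default_rng(seed)
    sizes = np.empty(n_trials)
    for i in range(n_trials):
        active, size = 1, 1
        while active > 0 and size < max_size:
            active = rng.poisson(sigma * active)  # total offspring this generation
            size += active
        sizes[i] = size
    return sizes
```

At criticality the heavy tail means a sizable fraction of avalanches span orders of magnitude, while for sigma < 1 (subcritical) the mean size stays small, at 1/(1 - sigma); the interesting feature of the Poil et al. model is that its critical line additionally carries oscillations, which this toy model lacks.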

Synaptic noise facilitates transitions from sleep-like state to awake-like state

de Oliveira Pena, Rodrigo Felipe

Neuronal activity recorded in the mammalian cortex displays several typical regimes. Being related to the instantaneous state of the brain, these regimes are separated by repeated transitions. In the case of sleep and wakefulness, transitions are commonly treated within a stochastic framework. Although many characteristics, such as the mean duration of each state (EPL, v. 57, n. 5, p. 625, 2002), have been determined, the mechanisms responsible for the regulation of transitions are still under debate. We tackle this problem by studying a recently introduced random network model capable of producing self-sustained oscillations (Front. in Comp. Neuro., v. 10, p. 23, 2016). Previous analysis of that network was based on deterministic equations. On adding Ornstein–Uhlenbeck synaptic noise, we observed that the sleep-like state was replaced by a regime strongly reminiscent of wakefulness. Systematic analysis of firing rates, power spectra and voltage series confirms that two basic states switch stochastically and that their characteristics are indeed very similar to those of wakefulness and sleep: a state of very low collective firing rates in which the neurons are weakly correlated, and a state of oscillatory activity in which the voltage fluctuates between a hyperpolarized and a depolarized state ("up" and "down" states). By varying the noise intensity and using the mean duration of each state as the criterion, we unambiguously confirm that noise facilitates transitions from the awake-like to the sleep-like state and hinders the reverse transitions. Our results suggest that synaptic noise can be viewed as part of the mechanism behind sleep–wake transitions.
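For concreteness, Ornstein–Uhlenbeck noise of the kind added to the synaptic input can be generated with the exact discrete-time update rule (a generic sketch; the parameter names are ours, and the values are illustrative rather than those of the study):

```python
import numpy as np

def ou_noise(n_steps, dt, tau, sigma, mu=0.0, seed=0):
    """Ornstein-Uhlenbeck process with mean mu, correlation time tau and
    stationary standard deviation sigma, using the exact AR(1) update."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = mu
    decay = np.exp(-dt / tau)
    noise_amp = sigma * np.sqrt(1.0 - decay**2)   # keeps variance exact for any dt
    for i in range(1, n_steps):
        x[i] = mu + (x[i - 1] - mu) * decay + noise_amp * rng.standard_normal()
    return x
```

Unlike white noise, this process has a finite correlation time `tau` (autocorrelation exp(-lag/tau)), which is what makes it a plausible stand-in for filtered synaptic bombardment.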

Local and global effects of transcranial direct current brain stimulation (tDCS) on resting brain activity

Di Bernardi Luft, Caroline

The possibility of using transcranial direct current brain stimulation (tDCS) to improve cognitive functions, treat affective disorders, and probe the functional role of certain cortical areas has attracted extensive interest. However, the optimism towards tDCS applications has recently been tempered by the large variability of its effects across individuals. At the heart of the matter is that there is still little knowledge of, and control over, the actual effects of tDCS on brain activity at the local and global scales. Most behavioural studies assume that the effects of tDCS are local, as they claim to probe the role of certain brain areas in cognition, whereas neuroimaging evidence suggests that tDCS affects the global dynamics of brain states across multiple time scales. In this study, we investigated the effects of anodal and cathodal high-definition tDCS on resting brain activity at the local (oscillations) and global (directed connectivity) scales. We recorded the EEG before and after 15 minutes of anodal, cathodal, or sham tDCS over the right temporal region (T8, 1 mA current). The return current was spread over 5 electrodes in order to increase the focality of the stimulation. We measured brain oscillations using the Better Oscillation Detection Method (BOSC) and directed connectivity using the Phase Slope Index (PSI). We measured the oscillatory episodes (%) and the phase synchronization from 1 to 40 Hz. We analysed the data based on individual peak frequencies, but we also compared oscillations and phase synchronization over traditional frequency bands (delta: 1-4 Hz, theta: 4-8 Hz, alpha: 8-12 Hz, beta: 12-30 Hz, gamma: 30-40 Hz). The oscillation results showed that anodal stimulation was associated with an increase in brain oscillations, but only at the individually predominant frequency, i.e. the preferred rhythm.
Cathodal tDCS was associated with a decrease in the proportion of oscillatory events at the predominant frequency at the stimulation site (a topographically localised effect). Sham was associated with no significant change in the predominant oscillations. When comparing the conditions in each frequency band rather than at the individualised peaks, there were no significant effects. The connectivity results showed a significant increase in theta phase synchronization at the stimulation site after anodal tDCS compared to cathodal and sham. Importantly, this increase was specific to the input connections driving the stimulation site, indicating that the stimulated area became more “driven” by other regions after anodal tDCS. We conclude that the local effects of tDCS depend on individual brain oscillations. Furthermore, we observed changes at the global scale, as an increase in directed connectivity indicating that the stimulated area becomes more sensitive to inputs from other regions after anodal tDCS.

How incentive motivation improves the number sense – insights from a developmental and multi-methodological approach

Dix, Annika

The number sense is an evolutionarily old system of numerosity perception shared by human adults, infants and various non-human species (Dehaene, 1997). In line with Francis Galton’s sensory discrimination hypothesis of intellectual abilities, individual differences in the precision of the number sense are predictive of math achievement in different age groups. In previous research on individual and developmental differences, the acuity of the number sense has been quantified psychophysically by the Weber fraction (i.e., the just-noticeable difference between numerical quantities) and, more recently, computationally by the drift rate (i.e., the rate of accumulating perceptual evidence) of the diffusion model. Surprisingly, prior research has systematically investigated neither the effect of incentive motivation on tuning the precision of the number sense nor the interplay of such effects with processes of brain maturation. In the present study, we aimed at (1) enhancing the precision of the number sense using incentive motivation, (2) determining the elementary processes underlying the effects of motivation, and (3) identifying developmental differences in such modulations. Participants from two age groups (adolescents and young adults) performed a dot comparison task and a non-symbolic addition task. In both tasks, participants were presented with two clouds of dots differing in numerosity. They had to either decide which numerosity was larger or approximate the sum of both numerosities. Performance on the tasks served as an indicator of the precision of the number sense and of the ability to operate on numbers, respectively. The opportunity to collect points in some trials to win vouchers at the end of the experiment served as the manipulation of incentive motivation. Besides behavioral data, participants’ phasic pupil dilation during the task was recorded.
Measures of pupil size are commonly considered proxies of the efficacy of the norepinephrine (NE) and dopamine (DA) systems; the incentive-salience theory proposes that reward-related mesolimbic DA signals can enhance the representational salience of perceptual stimuli, drawing attention to them (Berridge & Robinson, 1998), which is assumed to correlate with reward-related performance enhancement. First results are in line with Francis Galton’s sensory discrimination hypothesis and indicate that the precision of the number sense was predictive of non-symbolic arithmetic ability in both age groups. Further, the parameters of an EZ-diffusion model characterizing the decision process in the dot comparison task suggest that raising incentive motivation increases the rate of perceptual evidence accumulation about numerosity; thus, incentive motivation appears to be a mechanism for modulating the discriminative precision of the number sense. Regarding physiological measures reflecting the efficacy of the NE and DA systems, we expect the incentive to increase pupil sizes in both age groups, with stronger effects in adults, and with the extent of this increase correlating with incentive-induced gains in the precision of the number sense. The interplay between exogenous factors (e.g., incentive motivation) and endogenous factors (e.g., brain maturation) influencing numerosity perception, and the responsiveness of participants’ pupils to the different task conditions as an indicator of this interplay, will also be discussed. Authors: Annika Dix & Shu-Chen Li. Affiliation: TU Dresden, Germany. Author emails: annika.dix@tu-dresden.de; shu-chen.li@tu-dresden.de
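The EZ-diffusion estimates referenced above have closed-form expressions (Wagenmakers et al., 2007). A sketch with the conventional scaling s = 0.1 and inputs in seconds; the input values below are illustrative, not the study's data:

```python
import math

def ez_diffusion(p_correct, var_rt, mean_rt, s=0.1):
    """EZ-diffusion model: closed-form estimates of drift rate v, boundary
    separation a and non-decision time Ter from accuracy, the variance of
    correct response times and their mean. Assumes 0.5 < p_correct < 1."""
    L = math.log(p_correct / (1.0 - p_correct))          # logit of accuracy
    x = L * (L * p_correct**2 - L * p_correct + p_correct - 0.5) / var_rt
    v = math.copysign(1.0, p_correct - 0.5) * s * x**0.25
    a = s**2 * L / v                                     # boundary separation
    y = -v * a / s**2
    mdt = (a / (2.0 * v)) * (1.0 - math.exp(y)) / (1.0 + math.exp(y))
    return v, a, mean_rt - mdt                           # Ter = MRT - MDT
```

Higher accuracy at equal RT variance maps to a higher drift rate, which is the quantity the study uses as a computational index of number-sense precision; the edge cases p_correct = 0.5 (zero drift) and p_correct = 1 require the usual corrections and are not handled here.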

Leveraging heterogeneity for neural computation with fading memory

Duarte, Renato

Computational studies addressing the dynamics and computational properties of biologically inspired spiking neurons and networks tend to assume (often for the sake of analytical tractability) a great degree of homogeneity in both neuronal and connectivity parameters. The biophysical reality, however, is radically different from a homogeneous system and multiple levels of complex heterogeneous properties co-exist and shape a local circuit’s emergent collective dynamics and information processing properties. Within each cortical module, the characteristic patterning of the microcircuit’s building blocks and their mechanistic interactions give rise to rich dynamics, which subserves local computation by shaping the spatiotemporal features of population responses. Despite their varying molecular, morphological and physiological features, cortical modules can be seen as variations on a common theme. In essence, cortical modules are large recurrently coupled neuronal networks, whose interactions are achieved primarily via spike-triggered excitatory and inhibitory transmission. The combined complexity of these heterogeneous building blocks can be leveraged by cortical microcircuits to provide a rich dynamical space where complex relational constructs, spanning multiple timescales, can be learned, represented and used for online information processing. In this study, we set out to systematically evaluate the role played by different sources of heterogeneity (structural, neuronal and synaptic) in the characteristics of population dynamics and the circuit’s capacity for online stimulus processing with fading memory, using cortical layer 2/3 microcircuits as a core inspiration for the circuit specification. We cross-reference various sources of experimental data regarding the composition and patterning of these microcircuits, accounting for different phenomena of interest (e.g. 
neuron types and corresponding sub-threshold characteristics, conductance properties of different receptor types, circuit-level connectivity and activity statistics, etc.), across different cortical regions, assuming a certain degree of generalization is possible. The methods applied in this study to quantify the dynamics and generic processing properties, being system-independent, can provide a valuable set of tools for microcircuit benchmarking. As carefully curated and organized datasets become increasingly available, it will become possible in the near future to apply increasingly realistic constraints and to comparatively study the properties of realistic microcircuits built to model specific cortical regions and their input-output relations.

Distinguishing between discrete and continuous attractor dynamics as a source of sequential activity in oscillatory networks

Gönner, Lorenz

The sequential activity of rodent hippocampal place cells during off-line behavioral states has been proposed as a neural correlate of both memory processes and evaluation-based decision-making (Carr et al., 2011), as these sequences often represent either previously experienced trajectories or currently relevant future paths (Pfeiffer and Foster, 2013). These activity sequences occur during high-frequency oscillatory events termed sharp-wave ripples (SWRs), often accompanied by gamma-frequency oscillations. Both their origin and their precise functional role remain unclear. Recent high-density recordings of large populations of place cells have identified a pattern of discretized movement of the location represented during place-cell sequences. This has been viewed as evidence that the underlying dynamics are best characterized as discrete attractor dynamics, in which autoassociation alternates with heteroassociation, each associated with different phases of slow gamma oscillations (Pfeiffer and Foster, 2015). However, we have previously observed a similar phenomenon in a large-scale spiking network based on continuous, rather than discrete, attractor dynamics (Gönner, Vitay and Hamker, in preparation): the movement speed of the activity bump was strongly modulated by the phase of the population oscillation. These results potentially challenge the conclusion drawn by Pfeiffer and Foster (2015) and suggest that both discrete and continuous attractor dynamics may generate similar population dynamics when strong oscillations are present. To identify alternative criteria that may distinguish between discrete and continuous attractor dynamics in future experimental studies of hippocampal sequential activity, we provide a direct comparison between recurrent networks with different connectivity patterns associated with discrete vs. continuous attractor dynamics. [1] Carr, M., Jadhav, S., and Frank, L. (2011).
Hippocampal replay in the awake state: a potential substrate for memory consolidation and retrieval. Nat Neurosci 14, 147–153 [2] Pfeiffer, B. and Foster, D. (2013). Hippocampal place-cell sequences depict future paths to remembered goals. Nature 497, 74–79 [3] Pfeiffer, B. and Foster, D. (2015). Autoassociative dynamics in the generation of sequences of hippocampal place cells. Science 349, 180–183 [4] Gönner, L., Vitay, J., and Hamker, F.H. (2017). Predictive Place-cell Sequences for Goal-finding Emerge from Goal Memory and the Cognitive Map: A Computational Model. Manuscript in preparation.

Principles of structural macroscale connectivity of the mammalian cortex

Goulas, Alexandros

Understanding the blueprint of mammalian cortico-cortical connectivity is a fundamental challenge in neuroscience. Previous studies of the cat, mouse, macaque monkey and human cortex have indicated that the cytoarchitectonic similarity and physical distance of cortical areas are closely related to the existence of cortico-cortical connections. These results suggest that cytoarchitecture and distance constitute species-general mammalian wiring principles. Here we adopted a quantitative cross-species framework that allows us to examine whether and how these principles depend on brain size and phylogenetic differences. Our results show that the existence of connections could be faithfully reconstructed in one species from information on the cytoarchitecture and distance of cortical areas in another species. The quality of these cross-species predictions did not depend on phylogenetic or brain size differences. Furthermore, a cytoarchitecture-based model better predicted the laminar origin of connections at the whole-cortex level than an organization scheme based on the rostrocaudal axis, thereby allowing cross-species predictions and the estimation of the laminar origin of connections in the human cortex. Our findings highlight specific, potentially evolutionarily conserved, neurogenetic and cellular phenomena that give rise to the close relation between the connectional, physical and cytoarchitectonic organization of the mammalian cortex.

Dynamic reconfiguration of spontaneous network activity in the mouse brain

Gutierrez, Daniel

Resting-state functional magnetic resonance imaging (rsfMRI) has proven a powerful tool to investigate functional neural networks via the spontaneous Blood-Oxygen-Level Dependent (BOLD) signal. Human rsfMRI research has shown that spontaneous network activity undergoes spatio-temporal reconfiguration recapitulating a finite number of behaviourally relevant functional systems, which can be related to higher-order cognitive processes. By mapping the repertoire of spontaneous co-activation among brain regions, here we show that analogous transitions occur in rsfMRI datasets of the mouse brain. We used clustering analysis to sort and selectively average rsfMRI activity time-frames into distinct and recurrent patterns of non-stationary co-activation and co-deactivation. We describe a reduced number of reproducible co-activation patterns encompassing the co-occurrence of previously described intrinsic connectivity networks of the mouse brain, including integrative (default-mode, salience) as well as motor-sensory and subcortical networks. Notably, inverse co-occurrence of default-mode and lateral cortical network activity was a prominent feature of many of the identified states, suggesting a competing relationship between these two macroscale neural systems and recapitulating a cardinal feature of human network organization. We also show that the identified co-activation patterns are characterized by smooth transitions from one state to another, as well as gradual assembly and disassembly. Collectively, these findings suggest the presence of dynamic reconfiguration of spontaneous network activity as a fundamental, evolutionarily conserved principle of mammalian cortical activity, and pave the way for targeted investigations of the neural drivers of these states via interventional approaches in rodent models.
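The frame-clustering step (sort time-frames, then selectively average within clusters) can be illustrated on toy data; the region count, number of states, and noise level are arbitrary stand-ins for preprocessed BOLD time series:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Toy stand-in for rsfMRI data: frames (time points) x regions, generated as
# noisy repetitions of three ground-truth "co-activation patterns" (CAPs).
n_regions, n_frames_per_state = 20, 100
patterns = rng.normal(0, 1, (3, n_regions))
frames = np.vstack([p + 0.3 * rng.normal(0, 1, (n_frames_per_state, n_regions))
                    for p in patterns])

# Cluster time-frames, then selectively average frames within each cluster;
# the per-cluster averages are the recovered co-activation patterns.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(frames)
caps = np.array([frames[km.labels_ == k].mean(axis=0) for k in range(3)])

# Each recovered CAP should correlate strongly with one ground-truth pattern
# (cluster labels are arbitrary, so we match by best absolute correlation).
best_match = [max(abs(np.corrcoef(c, p)[0, 1]) for p in patterns) for c in caps]
print([round(m, 2) for m in best_match])
```

On real data the number of clusters is a modelling choice; the abstract's "reduced number of reproducible patterns" corresponds to selecting a k at which the averaged patterns are stable across splits.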

Exploring Brain Network Integration across the Lifespan: An Application of Minimum Spanning Tree Graphs

Klados, Manousos

Authors: Manousos A. Klados1, Basilis Pezoulas2, Michail Zervakis2, and Shu-Chen Li1

Affiliations: 1 Lifespan Developmental Neuroscience, Dept. of Psychology, TU Dresden; 2 Dept. of Electrical and Electronic Engineering, TU of Crete, Chania, Greece

Abstract: Graph theory has increasingly been adopted as a promising method for analyzing brain dynamics and complexity. However, applying graph theoretical approaches to investigate age-related differences in brain dynamics and complexity may be confounded by age differences in average connectivity. In this context, the minimum spanning tree (MST) has been proposed as one method for overcoming this bias. Here we applied the MST methodology to a large electroencephalography (EEG) dataset (total sample size > 150) covering the age range from mid-childhood to old age to explore age differences in brain network integration. Specifically, networks were derived from EEG signals using the imaginary part of coherence, and MSTs were then constructed from these networks. As indicators of network integration, MST parameters such as tree hierarchy (TH), leaf number (LN) and maximum degree (K) are of particular relevance. Preliminary results show that these graph parameters reflect age differences in the optimization of network configuration across the lifespan. All three parameters in the EEG theta (4 to 8 Hz) and upper alpha (10-12 Hz) ranges increase from childhood to adulthood and decline in old age. Furthermore, individual differences in these parameters seem to be associated with behavioral performance and other brain event-related potentials (ERPs). For instance, in developing brains, higher TH (a more optimal tree configuration) derived from the theta band is associated with better behaviorally measured processing efficiency; whereas, in adult brains, higher TH derived from the upper alpha band reflects better processing efficiency in older adults and is associated with a larger ERP component of inhibition in younger adults.
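The MST construction and the simpler MST parameters can be sketched as follows; the random symmetric matrix stands in for a real imaginary-coherence network, and the conversion of connectivity to distance is one common convention, not necessarily the authors':

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(2)

# Toy stand-in for a functional connectivity matrix (e.g. imaginary part of
# coherence between EEG channels): symmetric, values in (0, 1).
n = 12
w = rng.uniform(0.05, 1.0, (n, n))
conn = np.triu(w, 1) + np.triu(w, 1).T

# The MST algorithm minimizes total edge weight, so strong connections are
# first converted to short "distances" (here simply 1 - connectivity).
dist = 1.0 - conn
np.fill_diagonal(dist, 0.0)
mst = minimum_spanning_tree(dist).toarray()
adj = (mst + mst.T) > 0  # undirected MST adjacency (n - 1 edges)

# MST parameters used as integration indicators in the abstract:
degree = adj.sum(axis=1)
leaf_number = int(np.sum(degree == 1))  # LN: nodes with exactly one link
max_degree = int(degree.max())          # K: hub dominance
print(leaf_number, max_degree)
```

Because every MST of an n-node network has exactly n - 1 edges regardless of average connectivity strength, comparisons of LN, K and TH across age groups are not confounded by overall connectivity level, which is the bias the abstract targets.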

Power law properties of ECoG signals at different states of consciousness

Krzemiński, Dominik

Consciousness is identified with the binding and integration of information, both on the phenomenological and the neuronal level. Whereas the neuronal basis of feature binding and multi-sensory integration has been extensively investigated, not much is known about the mechanisms of temporal integration and their role in maintaining consciousness. In the present study, spontaneous brain activity was recorded in different states of consciousness and analysed in terms of long-range temporal correlations (LRTCs), which indicate that signals possess a long “memory” and are modulated across multiple time-scales. We hypothesized that loss of consciousness during general anaesthesia would be related to weaker LRTCs in brain activity. Resting-state electrocorticography (ECoG) was recorded from four macaque monkeys during wakefulness and general anaesthesia. We estimated amplitude envelopes of brain oscillations and used Detrended Fluctuation Analysis to estimate the LRTCs of amplitude modulations. We report two main findings. Firstly, spontaneous brain activity exhibits significant LRTCs, which span a wide range of time-scales. The strongest LRTCs were found in motor and frontal regions. Secondly, LRTCs diminish during loss of consciousness. Topographically, anaesthesia-induced changes can be described in terms of a “normalization effect”, as brain regions characterized by the strongest LRTCs during wakefulness exhibited the greatest decrease during anaesthesia. Therefore, our results complement and extend previous studies, which demonstrated LRTCs in non-invasive M/EEG recordings. Further, we revealed that during consciousness brain activity is modulated across a wide range of time-scales, which is indicative of a long temporal memory and might reflect the process of temporal integration. Conversely, during general anaesthesia the brain’s `temporal memory' is much shorter, suggesting that extensive temporal integration does not occur during loss of consciousness.
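Detrended Fluctuation Analysis, the LRTC measure used here, can be sketched in a few lines; the window scales and the white-noise test signal are illustrative choices of ours (real analyses would use band-limited amplitude envelopes):

```python
import numpy as np

def dfa_exponent(signal, scales):
    """Detrended Fluctuation Analysis: slope of log F(n) versus log n,
    with first-order (linear) detrending in non-overlapping windows."""
    profile = np.cumsum(signal - np.mean(signal))  # integrated signal
    fluct = []
    for n in scales:
        n_windows = len(profile) // n
        windows = profile[: n_windows * n].reshape(n_windows, n)
        x = np.arange(n)
        rms = []
        for w in windows:
            coef = np.polyfit(x, w, 1)             # linear trend per window
            rms.append(np.sqrt(np.mean((w - np.polyval(coef, x)) ** 2)))
        fluct.append(np.mean(rms))                 # mean fluctuation at scale n
    slope, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return slope

# White noise has no temporal memory: DFA exponent ~ 0.5. LRTCs in amplitude
# envelopes, as reported in the abstract, correspond to exponents between
# 0.5 and 1, and "shorter temporal memory" to exponents closer to 0.5.
rng = np.random.default_rng(3)
white = rng.normal(0, 1, 10000)
print(round(dfa_exponent(white, [16, 32, 64, 128, 256]), 2))
```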

Biologically inspired simulation of neuron-astrocyte networks

Lenk, Kerstin

Astrocytes are known to affect synaptic transmission and blood flow. However, the roles and functions of astrocytes in neuronal communication at the network level, in health and disease, call for new insights. Astrocytes support the supply of nutrients to neurons and are responsible for cerebrospinal fluid regulation in the brain. They actively influence the behavior of the surrounding neuronal network, including changes in synaptic plasticity and neuronal excitability. These dynamics are altered in diseases like Alzheimer’s, where the release of the gliotransmitter GABA is increased by affected, so-called reactive astrocytes. Here, we aim to simulate a neuronal network in the healthy condition and with altered astrocytic GABA release. To this end, we use our neuron-astrocyte model, which includes astrocyte-controlled tripartite synapses and astrocyte-astrocyte interactions. The basis of the model is the spiking neuronal network model INEX, which consists of inhibitory and excitatory neurons. The probability of each neuron to spike follows an inhomogeneous Poisson process. To model the effects of astrocytes on tripartite synapses, we used a modified version of the presynapse-astrocyte interface by De Pittà et al. for excitatory synapses, which is based on the Tsodyks-Markram model of synaptic activity. We further modified the presynaptic model so that astrocytes can increase or decrease synaptic strength based on the gliotransmission model introduced by De Pittà et al. This modification takes into account the different time scales of different transmitters; thus, in our model, which we call INEXA, the effect of gliotransmission depends on time scales and neuronal activity. Astrocytic IP3 and calcium were modeled using simple exponential equations.
To combine in each astrocyte the effect of synaptic inputs from all its excitatory synapses, the local astrocytic responses to each synapse were summed into a global astrocytic calcium response. The propagation of calcium waves in the astrocyte network was then modeled according to the simplified UAR calcium signaling model introduced by Lallouette et al. Astrocytes, when activated, signal back locally to their connected synapses by releasing glutamate, and globally by GABA. We simulated 2D networks modelling neuronal cell cultures on in vitro multielectrode arrays, with 200 excitatory and 50 inhibitory neurons at 10 per cent connectivity. We then compared the neuronal activity when no astrocytes and when 107 astrocytes (30% of all cells) are present. The network topology was defined by a rule-based stochastic process, resulting in a network in which an astrocyte is connected to approximately 120 nearby excitatory synapses. The simulated spike trains had a length of 5 minutes. We performed ten repetitions of the simulations for both scenarios, without and with astrocytes, using the same parameters for the neuronal network. Furthermore, we varied the astrocytic GABA inhibition of the postsynapse, where a high release rate resembles a pathological state. Our INEXA model is the first biologically inspired neuron-astrocyte network model with astrocyte network effects on neuronal behavior. We simulated neuronal networks without and with astrocytes. As expected, the overall neuronal network activity is reduced when astrocytes are present, since the release of astrocytic GABA in response to high activity reduces the overall activity. This protects the system from excitotoxicity, a mechanism that is dysfunctional in astrocyte-related diseases like epilepsy and Huntington’s disease. While the spike rate is decreased, the burst rate remains the same and we see less interburst spiking, which implies more synchronous bursting than in pure neuronal populations.
Our results also show that a high GABA release by astrocytes may be responsible for synchronous inhibition of postsynaptic neurons. With increased GABA inhibition, the spike and burst rates decreased while the burst duration and the number of spikes per burst remained similar. Using our neuron-astrocyte model INEXA, we showed that adding astrocytes to neuronal networks leads to more synchronous bursting with less interburst spiking. To our knowledge, this is the first time that the effect of the gliotransmitter GABA on a neuronal network has been simulated.

Neuronal circuit analysis with spike-triggered non-negative matrix factorization

Liu, Jian

Many neurons throughout different sensory systems integrate their input signals in a nonlinear fashion. These nonlinearities of signal integration are often critical for how the neurons extract sensory features and encode sensory information. Computational models typically aim at capturing these nonlinear integration characteristics by partitioning receptive fields into subunits whose signals are nonlinearly combined. Such subunit models are commonly used, for example, to describe visual responses of neurons in the retina or primary visual cortex, but also find application in other sensory systems. Yet, detailed investigations of subunit models and their connections to the neural circuitry suffer from the difficulty of identifying the concrete set of subunits from the activity of recorded neurons. To this end, we developed spike-triggered non-negative matrix factorization (STNMF), a novel data analysis technique, which is based on applying non-negative matrix factorization to the ensemble of spike-eliciting stimuli, obtained under white-noise stimulation. Using simple neural circuit simulations as well as recordings from retinal ganglion cells, we demonstrate that the method faithfully identifies the relevant subunits without the need for prior specification of the number, size, or shape of the subunits or of their nonlinear interactions. Furthermore, we show (1) that the obtained subunit layouts help in predicting ganglion cell responses under natural stimulation, (2) that the identified ganglion cell subunits correspond to presynaptic bipolar cell receptive fields, as verified by combined recordings from bipolar and ganglion cells, and (3) that comparing subunit layouts across populations of simultaneously recorded ganglion cells reveals which cells and cell types share input from the same bipolar cells. Thus, STNMF provides a promising new tool for connecting neural circuitry and sensory encoding, applicable to a wide range of sensory systems.
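The core STNMF pipeline (simulate or record spikes under white noise, collect the spike-eliciting stimuli, factorize them) can be sketched on a toy subunit model; the receptive-field geometry, threshold, and fixed module count are our simplifications, and the published method additionally infers the number of subunits from the data:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(4)

# Toy simulation of a subunit model: a cell pools two rectified subunits,
# each sensitive to one half of a 1D "receptive field" of 16 pixels.
n_trials, n_pix = 5000, 16
stim = rng.uniform(0, 1, (n_trials, n_pix))  # non-negative white-noise stimuli
sub1 = np.maximum(stim[:, :8].mean(axis=1) - 0.5, 0)  # rectified subunit drive
sub2 = np.maximum(stim[:, 8:].mean(axis=1) - 0.5, 0)
spikes = (sub1 + sub2) > 0.08  # spike whenever pooled drive is high

# STNMF: non-negative matrix factorization applied to the ensemble of
# spike-eliciting stimuli; the factorization's components are candidate
# spatial subunit layouts.
ste = stim[spikes]
modules = NMF(n_components=2, init="nndsvda", max_iter=500,
              random_state=0).fit(ste).components_
print(modules.shape)  # (2, 16): one spatial module per row
```

Non-negativity is what makes the decomposition parts-based: each spike-triggered stimulus is explained as an additive combination of localized modules, matching the intuition that a spike reflects activation of one or more subunits.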

Automatic human sleep stage scoring using recurrent neuronal networks

Malafeev, Alexander

Introduction: The gold standard of sleep analysis is polysomnography (PSG), which includes electroencephalography (EEG) and other signals such as eye movements and muscle tone. Classically, sleep stages are visually scored in 20-s or 30-s epochs according to standard criteria. Scoring of sleep is time-consuming and has to be performed by a trained specialist. It would be helpful for sleep medicine and research to have a tool capable of performing reliable automatic scoring. In the last few years, Recurrent Neuronal Networks (RNNs) were shown to be superior to other machine learning methods on datasets with temporal structure. One of the most widely applied RNNs is the Long Short-Term Memory (LSTM) Artificial Neuronal Network (ANN). An important property of RNNs is that the temporal sequence of the data is considered. We expected that taking temporal information into account would improve the automatic classification of sleep stages. Methods: PSG data were obtained in an experiment with vestibular stimulation. Three nights (8 h) of 18 healthy young males (age: 20-28 y; mean: 23.7 y) were recorded: two motion nights (rocking until sleep onset; rocking for 2 hours) and a baseline without motion (54 recordings in total). Sleep stages were scored according to the AASM rules. We computed spectrograms (20-s epochs) and manually engineered 43 features. Using these features and spectrograms (only spectrograms, only engineered features, and their combinations), along with sleep stages marked by an expert, we trained several LSTM ANNs to score sleep. The networks contained different numbers of layers and different numbers of neurons per layer. We used the 10-fold leave-one-out method for cross-validation and the F1-score as a performance measure to validate the trained networks. Results: The above-mentioned methods yielded reasonably high F1-scores for all sleep stages (0.85-0.95) except stage 1 (~0.5), which is well known to be difficult to score.
F1-scores obtained with our LSTM ANNs were comparable to the performance of experts. We also observed comparable performance using different combinations of features. Conclusions: We demonstrated good performance of LSTM ANNs applied to the automatic scoring of healthy sleep in young adults. Performance on stage 1 detection was lowest, but with F1-scores similar to those of human experts. LSTM ANNs worked much better for scoring than other machine learning methods that do not consider the temporal structure. We think that the algorithm can be further improved with larger training datasets. Supported by nano-tera.ch (grant 20NA21_145929) and the Swiss National Science Foundation (grant 32003B_146643).

Information transfer analysis in behavioral (sign language) data

Malaia, Evie

Approaches to the puzzle of language acquisition have tended to focus on issues related to segmentation of the auditory stream using statistical, prosodic, and social cues (Johnson et al., 2014; Seidl et al., 2015). However, experimental evidence shows that babies can identify the information-carrying channel during the language acquisition period even if it is visual: for example, hearing babies of deaf parents try to “babble” using their hands (Petitto et al., 2001). We take a first step toward characterizing the universal communicative properties of the linguistic signal by approaching it from the point of view of information transfer. The quantifiable measure of information is entropy: the uncertainty involved in predicting the next data point in a time series (Shannon, 1948). In the auditory domain, where the linguistic signal is described as a series of sounds with specific characteristics, the world's languages are described as having modulation spectra of moderate fractal complexity. However, the underlying properties of the visual linguistic signal that allow babies to identify a specific channel/modality as communicative have not been described. Based on previous work identifying motion as a key component of the syntax and semantics of sign languages (Brentari, 1998), we characterized the information-carrying property of sign language in terms of the fractal complexity of motion, building on mathematical analyses of information transfer between complex systems (West & Grigolini, 2010). The comparison between video clips of hand movements in everyday activities (e.g. Lego building) and hand motion in ASL narratives indicates significantly higher fractal complexity in sign language (Fig. 1). These results suggest that more information can be transferred by hand movement in ASL than by everyday motion.
The comparison indicated significantly higher fractal complexity in sign language across the tested frequency bands (0.01-15 Hz), as compared to everyday human motion. Interestingly, both everyday motion and sign language appeared to have a scale-free distribution of fractal complexity, a feature not unexpected in a biological system but never previously documented for sign language. Similarly, investigation of neuronal tuning shows that neurons in area V1 of the macaque brain are tuned to respond optimally to 1/f signal complexity in visual signals, as compared to 1/f^0 or 1/f^2 (Yu et al., 2005), suggesting a fundamental biological basis for neural sensitivity to a specific range of fractal complexity in visual stimuli. The question of whether language as a communicative device overlaps with the complexity ranges preferred for art in the respective domains (visual and auditory) will require further study.
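The 1/f^0 versus 1/f^2 distinction can be made concrete by estimating a spectral slope; this is a generic illustration with synthetic signals, not the fractal-complexity measure used in the study, and the sampling rate and band are arbitrary:

```python
import numpy as np
from scipy.signal import welch

def spectral_slope(x, fs=100.0):
    """Slope of log-power versus log-frequency: a 1/f^b signal gives ~ -b."""
    f, pxx = welch(x, fs=fs, nperseg=1024)
    keep = (f > 0.1) & (f < 15.0)  # band loosely matching the abstract's range
    return np.polyfit(np.log(f[keep]), np.log(pxx[keep]), 1)[0]

rng = np.random.default_rng(5)
white = rng.normal(0, 1, 100000)  # 1/f^0: flat spectrum, slope near 0
brown = np.cumsum(white)          # 1/f^2: integrated noise, slope near -2
print(round(spectral_slope(white), 1), round(spectral_slope(brown), 1))
```

Signals of intermediate complexity (slope near -1, i.e. 1/f) fall between these two extremes, which is the range the cited V1 tuning result singles out.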

Change point models and hierarchical Gaussian filters: Bayesian model comparison

Markovic, Dimitrije

Behavioral models based on approximate Bayesian inference have in recent years been successfully applied in model-based functional magnetic resonance imaging (fMRI) studies to identify the functional properties of specific brain areas typically involved in decision making under uncertainty, e.g. [1,2]. However, as different studies considered different models of decision making under uncertainty, it is unclear which of these computational models provides the best account of the behavioral and neuroimaging data. This is an important issue, as not accounting for alternative descriptions of the underlying computational processes may tempt researchers to over-interpret results based on a single model. Here we performed a comparative analysis of two well-established hierarchical probabilistic models that capture the evolution of beliefs in changing environments: (i) Hierarchical Gaussian Filters (HGF) [3] and (ii) Change Point Models (CPM) [4]. To perform model comparison on both simulated and empirical data, we formulated both perceptual models within a meta-Bayesian framework [5]. This allowed us to assess, using Bayesian model comparison, whether it is in principle possible to disambiguate between the two models in behavioral or neuroimaging experiments. To test the accuracy of the Bayesian estimation, we simulated a large number of behavioral experiments. We found that meta-Bayesian inference achieves high accuracy of model identification and parameter estimation. Importantly, it provides significant improvements in model identification when compared to maximum-likelihood schemes (e.g. the Akaike Information Criterion). Furthermore, a preliminary analysis of previously published behavioral data suggests that the CPM provides a better account of subjects’ behavior than the HGF. These results stress the relevance of Bayesian model comparison for model-based data analysis [6]. References [1] Iglesias S, Mathys C, Brodersen KH, Kasper L, Piccirelli M, et al.
(2013) Hierarchical Prediction Errors in Midbrain and Basal Forebrain during Sensory Learning. Neuron 80: 519-530. [2] Nassar MR, Rumsey KM, Wilson RC, Parikh K, Heasly B, et al. (2012) Rational regulation of learning dynamics by pupil-linked arousal systems. Nat Neurosci 15: 1040-1046. [3] Mathys CD, Lomakina EI, Daunizeau J, Iglesias S, Brodersen KH, et al. (2014) Uncertainty in perception and the Hierarchical Gaussian Filter. Frontiers in Human Neuroscience 8. [4] Nassar MR, Wilson RC, Heasly B, Gold JI (2010) An approximately Bayesian delta-rule model explains the dynamics of belief updating in a changing environment. The Journal of Neuroscience 30: 12366-12378. [5] Daunizeau J, Den Ouden HE, Pessiglione M, Kiebel SJ, Stephan KE, et al. (2010) Observing the observer (I): meta-Bayesian models of learning and decision-making. PLoS One 5: e15554. [6] Meyniel F, Schlunegger D, Dehaene S (2015) The sense of confidence during probabilistic learning: A normative account. PLoS Comput Biol 11: e1004305. [7] Marković D, Kiebel SJ (2016) Comparative Analysis of Behavioral Models for Adaptive Learning in Changing Environments. Frontiers in Computational Neuroscience 10: 33.

Localization of complex sound stimuli (short words): experiments and model

Marsalek, Petr

Listening to many human speakers at the same time, paying attention to one of them, and trying to understand the speech is the typical "cocktail party" problem, which has inspired many studies. In the experiment, several loudspeakers in a given spatial arrangement presented parts of speech, short words, at the same time. The task of the experimental subject was to indicate the direction of one of the voices. We have previously studied models of human sound localization. The auditory periphery, in the auditory brainstem, contains a neural circuit which processes all sounds regardless of their origin and ecological relevance. The central part of the auditory pathway, the auditory cortex, is involved in the attentional selection of relevant sounds. Specialized cortical areas deal with the detection and discrimination of speech parts. Any model of cocktail-party-like sound processing must therefore describe both the auditory periphery and central processing. We will present a progress report on models following psychoacoustical experiments.

Types of cognition in large-scale cognitive brain systems and their implications for future high-level cognitive machines

Miguel, Camilo

This work summarizes current knowledge about high-level cognitive processes and their relation to brain systems and neural networks. From this, it is possible to identify some paradoxes with an impact on the development of future technologies and artificial intelligence based on the study of complex systems: we may only be able to build a high-level cognitive machine by sacrificing the principal attribute of a machine, its accuracy.

Multiscale analysis of simultaneously recorded neural microstates

Mishra, Ashutosh

The study of neural processing is typically conducted on separate scales using different methods. On the one hand, coarse, large-scale, high-temporal-resolution imaging techniques, like EEG and MEG, have provided neuroscientists with tools to ''read'' the brain on the order of milliseconds. On the other hand, recording techniques at the single- or multi-neuron level have elucidated the dynamics at smaller scales and with higher accuracy. However, the units of information processing are largely unknown, because the brain operates simultaneously at multiple scales, which can only be understood by recording the scales in parallel. A potential 'unit' of information processing is the so-called ''microstate'', a term first coined at the level of large-scale EEG potentials. Microstates have been characterized as quasi-stable (~100 ms) topographies. We speculate that microstates on the EEG level are directly related to corresponding microstates at the neural population and LFP level; however, due to the scarcity of multiscale data, their relation has not been systematically studied. We are recording multilevel microstates in the mouse and developing a set of techniques to discover, classify and relate microstates in multiscale data. Our initial results indicate that a combination of dimensionality reduction and clustering reliably identifies states at both the population activity and the EEG level. We are now building a pipeline to model the sequential structure on each level and link the resulting Markov chains across levels.

Dynamic mode decomposition of resting state EEG data - a dynamical systems approach to identifying epilepsy characteristics

Mora, Karin

Time series of neural activity recorded by $10 - 100$s of electrodes over several minutes constitute large-scale data. \emph{Either} their temporal \emph{or} spatial properties can be analysed by current methods such as the discrete Fourier transform or principal component analysis, respectively. We, however, employ a more recently developed method called \emph{dynamic mode decomposition} (DMD) to capture the spatio-temporal dynamics and patterns of such high-dimensional data, and demonstrate its novel application to electroencephalography (EEG) data. EEG is commonly used to record electrical neural activity and then diagnose epilepsy. However, such diagnoses are difficult at best unless the EEG is performed during or shortly after an epileptic seizure. Recent studies have suggested that the functional connectivity in the resting state is altered in patients with temporal lobe epilepsy (TLE). Motivated by these findings, we relate the spatio-temporal characteristics of the dynamical system extracted with DMD to the electrode topology and thus to the functional network. We show how novel numerical indicators can be derived from such an analysis to identify network differences and hence distinguish between TLE patients and controls.
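Exact DMD itself is compact enough to sketch; the two-channel damped oscillation below is synthetic stand-in data (real EEG would have many channels), and the fixed rank r is an illustrative choice:

```python
import numpy as np

def dmd_modes(X, r):
    """Exact dynamic mode decomposition of a data matrix whose columns are
    successive snapshots (channels x time). Returns eigenvalues and modes."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]              # rank-r truncation
    A_tilde = U.conj().T @ X2 @ Vh.conj().T / s     # projected linear operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vh.conj().T @ np.diag(1 / s) @ W   # exact DMD modes
    return eigvals, modes

# Toy "EEG": two channels spanning one damped 5 Hz oscillation plus noise.
rng = np.random.default_rng(6)
t = np.arange(500) * 0.01
decay = np.exp(-0.5 * t)
X = np.vstack([decay * np.cos(2 * np.pi * 5 * t),
               decay * np.sin(2 * np.pi * 5 * t)])
X = X + 0.001 * rng.normal(size=X.shape)
eigvals, modes = dmd_modes(X, r=2)
# |eigenvalue| < 1 indicates a decaying mode; its angle encodes the frequency.
print(np.abs(eigvals).round(3))
```

Each DMD mode carries a spatial weighting over electrodes together with one temporal eigenvalue, which is what lets the method tie oscillatory or decaying dynamics back to the electrode topology as described in the abstract.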

Intermittent phase synchronization in epileptic brain

Moskalenko, Olga

One of the most interesting types of synchronous behavior observed in neurophysiological systems is the phase synchronization regime. Phase synchronization is the generalization of the classical synchronization of periodic oscillations to the case of non-autonomous or coupled chaotic systems, and means the presence of phase locking between interacting systems in the absence of any correlation of their amplitudes. Near the boundary of phase synchronization, intermittent behavior is observed. In this case the phase locking condition is satisfied only in certain time intervals, called laminar (synchronous) phases, which are persistently interrupted by phase slips, called turbulent (asynchronous) phases. This regime is called intermittent phase synchronization. It is a generic type of synchronous behavior observed in physical, biological and physiological systems. In the present report we discuss intermittent phase synchronization in the epileptic brain. As the objects of investigation we used electroencephalogram signals of both WAG/Rij rats, which have a genetic predisposition to epilepsy, and humans. We show that in all considered cases the episodes of synchronous activity correspond to epileptic seizures. Moreover, synchronous phases can also be observed during periods of background activity. We estimate the degree of intermittent phase synchronization between different channels of epileptic electroencephalograms, analyze the influence of medical drugs on epileptic seizure duration and synchronization degree, and reveal universal regularities across different animals. We have also shown that epileptic seizures are characterized by a higher degree of synchronization than periods of background activity. To estimate the degree of synchronization, a method based on estimating the zero conditional Lyapunov exponent from time series has been proposed.
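The abstract's own synchronization measure is based on the zero conditional Lyapunov exponent; as a simpler, standard illustration of the phase-locking condition itself, the phase-locking value (PLV) between two channels can be computed via the Hilbert transform (all signals below are synthetic):

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(7)

# Two toy channels sharing a common 10 Hz oscillation (phase-locked) plus
# independent noise; a third channel with drifting phase is an asynchronous
# control.
t = np.arange(5000) * 0.001
common = np.sin(2 * np.pi * 10 * t)
x = common + 0.2 * rng.normal(size=t.size)
y = common + 0.2 * rng.normal(size=t.size)
z = np.sin(2 * np.pi * 10 * t + np.cumsum(0.05 * rng.normal(size=t.size)))

def plv(a, b):
    """Phase-locking value: 1 = perfect phase locking, near 0 = no locking."""
    dphi = np.angle(hilbert(a)) - np.angle(hilbert(b))
    return np.abs(np.mean(np.exp(1j * dphi)))

print(round(plv(x, y), 2), round(plv(x, z), 2))
```

Laminar and turbulent phases correspond to epochs where the instantaneous phase difference dphi stays bounded versus epochs where it slips by 2*pi, so windowed versions of this quantity can segment a recording into synchronous and asynchronous intervals.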

Artefactual origin of biphasic cortical spike-LFP correlation

Okun, Michael

Electrophysiological data acquisition systems introduce various distortions into the signals they record. While such distortions have been discussed previously, their effects are often not appreciated. Here I show that the biphasic shape of the cortical spike-triggered LFP average (stLFP), reported in multiple studies, is likely an artefact introduced by the high-pass filter of the neural data acquisition system when the actual stLFP has a single trough around zero lag.

Scaling of distributed large-scale simulations of synchronous slow-wave and asynchronous awake-like cortical activity

Pastorelli, Elena

The multiscale organization of the brain is multifaceted: despite an invariance of its structural properties, like the connectivity between its components, the ongoing collective dynamics of this complex system can display disparate and qualitatively different states. A sleeping brain, for instance, expresses at the single-neuron and local network levels slow oscillations of activity which, at a macroscopic scale, appear synchronized in space and time as traveling waves (slow-wave activity, SWA). The same brain during quiet wakefulness (QW) shows an asynchronous distributed activity which, even in the absence of interactions with the environment, displays a multiscale correlation structure continuously changing in time, eventually highlighting the existence of competing subnetworks. Transitions between SWA and QW, like those naturally expressed during the wake-sleep cycle, are a challenge for the parallel simulation of large multiscale models of the brain. During SWA, part of the network is synchronously active with a very high rate of exchanged spikes while the rest of the system is almost silent. On the other hand, during QW an asynchronous state involves the whole network homogeneously. This poses the question of whether the computational and inter-process communication loads of simulation platforms are fully exploited in both of these brain states. To address this issue, we ran a set of simulations of spiking neural networks organized in two-dimensional grids of modules, each aiming to model a cortical column, including up to ~50G synapses connecting ~46M point-like neurons (leaky integrate-and-fire with spike-frequency adaptation) distributed over a large set of MPI processes. The execution platform was a server composed of up to 64 dual-socket nodes, each socket equipped with Intel Xeon Haswell E5-2630 v3 processors (8 cores @ 2.40GHz clock).
Speed-up measures and strong and weak scaling analyses have been performed to demonstrate the scalability of simulations run using from 1 to 1024 MPI processes. Distributed simulations were executed on the proprietary mixed time- and event-driven DPSNN engine (Distributed Simulator of Plastic Spiking Neural Networks) (P.S. Paolucci et al. 2016. Journal of Systems Architecture 69:29–53), showing similar performance under both SWA- and QW-like brain states. For comparison, the same scaling analyses are going to be performed on the NEST simulation platform running the same large-scale spiking neural network models, in the framework of the Human Brain Project. The DPSNN engine has been validated by comparing (on small problem sizes) its results with simulations executed on a previous-generation scalar simulator (Mattia M, Del Giudice P. 2000. Neural Comput. 12:2305–2329).

Catecholamines alter long-range temporal correlations of cortical alpha-band activity through changes in cortical excitation-inhibition balance

Pfeffer, Thomas

How information is processed in the cortex depends not only on the type of information, but also heavily on the internal state of the cortex, which is subject to continuous fluctuations across multiple timescales. Previous work points to two key factors involved in shaping cortical state: (i) continuous (thalamic) drive of the network, for instance due to sensory stimulation; in this ‘high-input regime’, strong excitatory drive of the cortical network is counterbalanced by even stronger inhibition, effectively reducing the ratio between excitation and inhibition; and (ii) ascending neuromodulatory systems, such as the noradrenergic and cholinergic systems, mediated through an intricate circuit involving changes in both excitation and inhibition. Recent computational modelling work has demonstrated that the excitation-inhibition balance in neural networks, and alterations thereof, can manifest in the long-range temporal correlations of collective network oscillations. In this study, we combined computational modeling, pharmacological intervention, and MEG in healthy humans to investigate if and how the cortical levels of neuromodulators (catecholamines and acetylcholine) as well as strong sensory/task drive shape the long-range temporal correlations of cortical alpha-band oscillations. We find that (i) strong sensory drive is associated with significant decreases in the long-range temporal correlations of alpha-band activity, presumably due to an increase in inhibition, and (ii) catecholamines (but not acetylcholine) significantly increase long-range temporal correlations during both rest and task, presumably due to a decrease in inhibition.

Up and down statistics of Gamma oscillations

Powanwe, Arthur

Gamma oscillations are ubiquitous in the brain and are believed to be useful for perceptual and cognitive behaviour, coding properties, or communication between brain areas. Usually, recorded LFPs or EEGs show self-sustained oscillations at gamma frequency (30-90 Hz) which exhibit epochs of high-amplitude oscillation (up states or gamma bursts) alternating with epochs of low amplitude (down states). Such spontaneous up and down states are believed to be involved in working memory, modulation of neuronal excitability with attention, or generation of spontaneous activity during sleep. Moreover, they have been shown to arise from local cortical circuits which operate through a proportional balance of excitation and inhibition and are able to generate self-sustained activity that can be turned on and off by synaptic inputs, or by recurrent excitation with synaptic depression and no inhibition [1-2]. But statistics of these up and down states, such as transition rates or first-passage times between the two states, probability densities, or even serial correlation coefficients between states, are still lacking. The goal of the poster is to give a full characterization of up and down states by deriving exact analytical expressions for the quantities cited above. To this end, we consider a network of two-state neurons, where each neuron can be either active or quiescent and can shift from the active to the quiescent state and back according to a Poisson law; moreover, each neuron can be excitatory or inhibitory. At the network level the model can generate avalanche dynamics or oscillations at gamma frequency [3-4]. The activities of the excitatory and inhibitory populations can be described by a stochastic version of the Wilson-Cowan equations. If the network exhibits oscillations, the corresponding stochastic Wilson-Cowan equations possess a stochastic limit cycle or a quasi-cycle.
In the quasi-cycle case of self-induced oscillations, the corresponding deterministic Wilson-Cowan equations (obtained in the thermodynamic limit of the stochastic version) possess a fixed point whose complex conjugate eigenvalues share a common negative real part. Oscillations are then the consequence of the stochastic nature of the transitions of single neurons between their two states and of the finite size of the system. Moreover, such oscillations show up and down states. To describe the dynamics of the system around its fixed point, we can express the activities of the excitatory and inhibitory populations as the sums of the fixed-point activities plus deviations scaled by the inverse square roots of the population sizes: a technique called the linear noise approximation (LNA). The equations obtained from the LNA are coupled stochastic linear equations, which then describe gamma oscillations. Going further in the analysis, we are able to describe gamma oscillations by uncoupled phase and amplitude equations. The amplitude equation is related to the parameters of the system and allows us to gain a clear understanding of the up and down states of gamma oscillations and of the transitions between them. We can then assume that up and down states are the consequence of transitions of the amplitude of gamma oscillations above and below a fixed threshold. Using the amplitude equation derived from the LNA, we can derive analytical expressions for the stationary probability densities for the oscillations to be in the up and down states, as well as the transition rates or mean first-passage times from down to up state and vice versa. Moreover, we can also derive analytical expressions for higher-order statistics such as serial correlation coefficients (SCCs) between states. Numerical simulations were also performed and confirm the analytical expressions obtained. References: 1. Y. Shu, A. Hasenstaub, and D. A. McCormick. Turning on and off recurrent balanced cortical activity. Nature, 423:288-293, 2003. 2. D. Holcman and M. Tsodyks. The emergence of up and down states in cortical networks. PLoS Comput. Biol., 2:e23, 2006. 3. M. Benayoun, J. Cowan, W. van Drongelen, and E. Wallace. Avalanches in a stochastic model of spiking neurons. PLoS Comput. Biol., 6(7):e1000846, 2010. 4. E. Wallace, M. Benayoun, W. van Drongelen, and J. Cowan. Emergent oscillations in networks of stochastic spiking neurons. PLoS Comput. Biol., 5(6):e14804, 2011.
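A minimal numerical sketch of the quasi-cycle picture, with illustrative parameter values rather than those of the poster: a noise-driven, damped 40 Hz rotation of the kind produced by the LNA, whose envelope crosses a threshold to define up and down states.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, steps = 1e-4, 200_000                   # 20 s of simulated time
lam, omega, sigma = -5.0, 2*np.pi*40, 1.0   # decay, 40 Hz rotation, noise
xi = rng.standard_normal((steps, 2)) * sigma * np.sqrt(dt)
x = np.zeros(steps); y = np.zeros(steps)
for k in range(steps - 1):                  # Euler-Maruyama integration
    x[k+1] = x[k] + dt*(lam*x[k] - omega*y[k]) + xi[k, 0]
    y[k+1] = y[k] + dt*(lam*y[k] + omega*x[k]) + xi[k, 1]

amp = np.hypot(x, y)                        # oscillation envelope
up = amp > np.median(amp)                   # up/down states by thresholding
n_trans = int(np.count_nonzero(np.diff(up.astype(int))))  # state switches
```

Histograms of run lengths of `up` and `~up` give empirical dwell-time distributions, the quantities for which the poster derives exact analytical expressions.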

Self-sustained oscillatory activity in a spiking cortical network model: phenomenology and mechanisms

Roque, Antonio

A question that arises when studying brain dynamics at rest is what mechanisms are responsible for the observed cortical self-sustained activity (SSA) states in the absence of external inputs. Most previous theoretical studies have addressed this question by considering random networks of leaky integrate-and-fire neurons. Here, this problem is studied using a cortical network model with more realistic architecture and more realistic neuron models. The model has a hierarchical and modular architecture and is made of excitatory and inhibitory neurons belonging to five electrophysiological cortical cell classes: regular spiking (RS), intrinsically bursting (IB), chattering (CH), fast spiking (FS), and low-threshold spiking (LTS) neurons. The first three neuron types are excitatory and the other two are inhibitory. Neurons were modeled by the Izhikevich model and synapses were modeled as conductance-based with exponentially decaying conductances. Results show that in the region of parameter space where there is a balance between excitation and inhibition, the detected SSA states display network activity oscillations alternating between epochs of high and low global activity, followed by an abrupt, unpredictable decay toward the resting state. The life expectancy of these states depends on network modularity, the mixture of neuronal types, and the excitatory and inhibitory synaptic strengths. To shed some light on the mechanisms responsible for this behavior, an experimental procedure was devised to numerically probe the local structure of the network phase space where transient SSA states occur. This was coupled to analyses of neuronal dynamics in the single-neuron phase spaces. The procedure allowed us to qualitatively explain the global network oscillations, their unpredictable breakdown, and the roles of modularity and different neuronal mixtures.
In conclusion, this work suggests that the properties of oscillatory self-sustained cortical activity depend on both the topology and the neuronal composition of the cortical network and provides qualitative explanations for their roles in this dynamic phenomenon.
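A minimal sketch of such a network, reduced for brevity to two of the five cell classes (RS and FS) with parameters from Izhikevich (2003); the conductance-based synapses of the full model are simplified here to current-based pulses:

```python
import numpy as np

rng = np.random.default_rng(2)
Ne, Ni = 80, 20                               # RS (excitatory), FS (inhibitory)
a = np.r_[0.02*np.ones(Ne), 0.10*np.ones(Ni)]
b = np.r_[0.20*np.ones(Ne), 0.20*np.ones(Ni)]
c = np.r_[-65.0*np.ones(Ne), -65.0*np.ones(Ni)]
d = np.r_[8.0*np.ones(Ne), 2.0*np.ones(Ni)]
W = np.c_[0.5*rng.random((Ne+Ni, Ne)), -rng.random((Ne+Ni, Ni))]  # weights

v = -65.0*np.ones(Ne + Ni)                    # membrane potentials
u = b*v                                       # recovery variables
spikes = 0
for t in range(1000):                         # 1 s at 1 ms resolution
    I = np.r_[5.0*rng.standard_normal(Ne), 2.0*rng.standard_normal(Ni)]
    fired = v >= 30.0
    spikes += int(fired.sum())
    v[fired] = c[fired]                       # reset after a spike
    u[fired] += d[fired]
    I += W[:, fired].sum(axis=1)              # synaptic input from spikes
    v += 0.5*(0.04*v**2 + 5.0*v + 140.0 - u + I)  # two 0.5 ms sub-steps
    v += 0.5*(0.04*v**2 + 5.0*v + 140.0 - u + I)  # for numerical stability
    u += a*(b*v - u)
```

Setting the `a`, `b`, `c`, `d` vectors per neuron is how the five-class mixture of the full model would be encoded; the external current `I` here plays the role of the background drive.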

Mutual information rate of a synapse during short-term depression

Salmasi, Mehrdad

Information transmission in the brain is mediated mainly through chemical synapses. The release mechanism in a synapse is not a reliable process, due to spontaneous release and intermittent unresponsiveness of the synapse to incoming action potentials. In addition to the stochastic nature of release, short-term depression reduces the release probability of the synapse. Neurons employ multiple release sites to compensate for the unreliability of single sites and to transfer information to neighboring neurons more reliably. We are interested in the information efficacy of a synapse during short-term depression and seek to quantify the compensatory effect of multiple release sites. We model a release site as a binary asymmetric channel whose state is determined by the release history of the site. The short-term depression dynamics are implemented by a Markov chain in which the state transitions follow the depression and exponential recovery of the release site. First, we derive the mutual information rate of a single release site and show that depression can increase the information rate of the synapse, provided that spontaneous release is depressed more than spike-evoked release. Then, we consider multiple release sites with separately released vesicles and analytically derive the mutual information rate between the input spike process and the release outcomes of the release sites. Using the derived expression, we assess how increasing the number of release sites compensates for the unreliability of individual sites. Finally, we take into account the synaptic energy consumption and demonstrate the trade-off between the rate of information transmission and the energy expenditure of the synapse.
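The single-site picture can be sketched as a memoryless binary asymmetric channel (the Markov depression dynamics of the poster are omitted, and the probabilities below are illustrative):

```python
from math import log2

def mutual_information(p_spike, p_evoked, p_spont):
    """I(X;Y) for a binary asymmetric channel: input X = spike / no spike,
    output Y = release / no release.  p_evoked is the release probability
    given a spike, p_spont the spontaneous release probability otherwise."""
    def h(p):  # binary entropy in bits
        return 0.0 if p in (0.0, 1.0) else -p*log2(p) - (1-p)*log2(1-p)
    p_rel = p_spike*p_evoked + (1 - p_spike)*p_spont   # P(Y = release)
    return h(p_rel) - p_spike*h(p_evoked) - (1 - p_spike)*h(p_spont)

# A perfectly reliable site transmits the full input entropy (1 bit for
# equiprobable spiking); unreliability reduces the rate.
i_reliable = mutual_information(0.5, 1.0, 0.0)
i_noisy = mutual_information(0.5, 0.8, 0.1)
```

Lowering `p_spont` more strongly than `p_evoked` in this expression increases the mutual information, which is the intuition behind the abstract's claim about depression.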

Emergence of functionally distinct frequency bands in networks of quadratic integrate-and-fire neurons

Schmidt, Helmut

We study a neural mass model that describes the macroscopic firing rate and membrane potential of a network of all-to-all coupled quadratic integrate-and-fire (QIF) neurons, and its response to external oscillatory forcing. In particular, we study the effect of periodic forcing that can be either sinusoidal or non-sinusoidal ("burst-like"). We find that this model exhibits frequency-specific responses that can either evoke or clear states of large firing rates in single neural masses and in networks of such neural masses. The structure of the model equations permits the use of techniques based on Fourier analysis to study the linear and weakly nonlinear response; in the weakly nonlinear regime we use a recursive scheme to compute the waveforms of the macroscopic variables. In the nonlinear regime the system can exhibit multiple stable limit cycles and chaotic behavior, the emergence of which is organized by saddle-node and period-doubling bifurcations. We demonstrate that it is possible to approximate the loci of these bifurcations analytically in the limit of large frequencies. One remarkable finding is that one can activate and clear episodes of high firing-rate activity by frequency tuning, especially when the forcing is non-sinusoidal. Whilst slow oscillations serve to initiate high firing activity through a quasi-stationary response, forcing from an intermediate range of frequencies brings the system back to a low firing rate through nonlinear resonance. We apply these results to two-alternative forced-choice decision-making attractor networks as well as to networks that can maintain multiple memory states. Decision-making networks consist of two mutually inhibitory neural masses that support only one active population at a time. We study the effect of the interplay of noise and oscillatory forcing on the response time and discrimination performance in a psychophysical setting.
Memory networks, on the other hand, may consist of multiple neural masses and are able to encode multiple memory items. We find that periodic forcing can selectively gate and clear memory states. In general, our findings point to potential functional roles of oscillations in information processing and short-term memory.
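Macroscopic firing-rate equations of this exact form for all-to-all QIF networks were derived by Montbrió, Pazó and Roxin (2015); a minimal forced simulation, with illustrative parameter values rather than those of the poster:

```python
import numpy as np

def simulate(eta=-5.0, J=15.0, delta=1.0, amp=3.0, freq=2.0,
             dt=1e-4, T=5.0):
    """Euler integration of the QIF neural mass equations
       dr/dt = delta/pi + 2 r v
       dv/dt = v^2 + eta + J r + I(t) - (pi r)^2
    with sinusoidal forcing I(t) (parameters are illustrative)."""
    n = int(T / dt)
    r, v = 0.1, -2.0                 # macroscopic rate and potential
    rates = np.empty(n)
    for k in range(n):
        I = amp * np.sin(2*np.pi*freq*k*dt)     # periodic forcing
        dr = delta/np.pi + 2.0*r*v
        dv = v*v + eta + J*r + I - (np.pi*r)**2
        r += dt*dr
        v += dt*dv
        rates[k] = r
    return rates

rates = simulate()
```

Sweeping `freq` and `amp` in such a simulation is the numerical counterpart of the frequency-tuning experiments described above; replacing the sine with a pulse train gives the "burst-like" forcing case.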

Network topology determines seizure generation in generalized epilepsy

Schmidt, Helmut

We introduce a modular network model of phase-coupled oscillators that gives a phenomenological description of the emergent electrographical activity of the cerebral cortex at rest as well as during epileptic seizures. This model is used to generate both focal and generalized model seizures in neural networks derived from electrophysiological data from subjects with epilepsy as well as from healthy control subjects. It presents a novel, computationally efficient approach to studying epileptogenic activity in networks. The parameters that define the network structure and those that define the model dynamics are derived from resting-state EEG, so there is no need to observe seizures in electrophysiological recordings. We apply this model to two different scenarios. First, we study the global mechanisms of seizure generation in epilepsy in comparison to healthy controls, and demonstrate that networks from subjects with epilepsy have a statistically higher propensity to generate seizures than those from healthy controls. Second, we demonstrate that this model-based approach can be used as a diagnostic tool for epilepsy. By generating localized model seizures and recording the resultant activity in the network, we identify optimal sites and tuning parameters to distinguish between healthy subjects and drug-naive subjects who were recently diagnosed with generalized epilepsy. We find that a reliable diagnosis of epilepsy (i.e. excluding the possibility of false positives) can be given in more than half of all cases.
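A hypothetical sketch of the ingredients named above: a modular coupling matrix for phase-coupled (Kuramoto-type) oscillators and the order parameter used to quantify seizure-like synchronisation. All values are invented for illustration and are not derived from EEG.

```python
import numpy as np

rng = np.random.default_rng(3)
n, modules = 32, 4
size = n // modules
block = np.kron(np.eye(modules), np.ones((size, size)))
K = 1.0*block + 0.05*(1.0 - block)     # strong within-, weak between-module
omega = rng.normal(0.0, 0.1, n)        # natural frequencies
theta = rng.uniform(0.0, 2*np.pi, n)   # initial phases

dt = 0.01
for _ in range(5000):
    # dtheta_i/dt = omega_i + (1/n) sum_j K_ij sin(theta_j - theta_i)
    coupling = (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1) / n
    theta += dt * (omega + coupling)

R_global = abs(np.exp(1j*theta).mean())         # global synchrony
R_module = abs(np.exp(1j*theta[:size]).mean())  # synchrony of one module
```

In a model of this family, the seizure propensity of a subject-derived network could be probed by measuring how often (or how easily) the order parameter escapes into a highly synchronised state.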

Control of prosthetic hand using state-space models

Siadat, Sohail

This poster presents the use of state-space models to decode peripheral spike trains for the control of a hand prosthesis.
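A common state-space decoding scheme in this setting is the Kalman filter; a minimal sketch of one filter step, assuming a hypothetical two-dimensional hand state observed through three binned firing rates (all matrices are illustrative, not taken from the poster):

```python
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])                   # state transition (position, velocity)
C = np.array([[1.0, 0.0],
              [0.5, 1.0],
              [0.0, 1.0]])                   # observation: 3 units' firing rates
Q = 0.01*np.eye(2)                           # process noise covariance
R = 0.10*np.eye(3)                           # observation noise covariance

def kalman_step(x, P, y):
    """One predict-update cycle of the Kalman filter."""
    x_pred = A @ x                           # predict state
    P_pred = A @ P @ A.T + Q
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)    # update with observed rates
    P_new = (np.eye(2) - K @ C) @ P_pred
    return x_new, P_new

x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, np.array([0.2, 0.3, 0.1]))
```

In practice `A`, `C`, `Q`, and `R` would be fit from training data (recorded spike trains paired with hand kinematics) before running the filter online.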

Using personalised brain models for predicting epileptic activity

Sip, Viktor

The propagation of epileptic activity in a diseased brain is affected by the underlying brain network structure, and taking this individual variability into account should improve treatment outcomes. Advances in non-invasive neuroimaging techniques now make it possible to reconstruct a patient-specific structural connectome and reveal its anomalies. Here we focus on the use of these individualized connectomes in large-scale brain network models. We show how computer simulations can be used to improve the localization of the epileptogenic zone and to predict the outcome of resective surgery aimed at the removal of the epileptogenic zone. The simulations are performed in The Virtual Brain, a large-scale brain network simulator.

The Temporal Dynamics of Criticality in the Brain during Sleep and Wake

Thomas, Christopher

Objectives: The brain putatively operates at or near a critical point. It has been suggested that sleep is necessary for homeostatic maintenance of criticality, but experimental evidence is lacking. Furthermore, it is unclear whether and to what extent critical dynamics differ between and within prolonged periods of spontaneous waking and sleep. Methods: Continuous recordings of neuronal activity and EEG were obtained from the frontal (motor) cortex in 11 freely-behaving male C57BL/6J mice over 12-hour dark periods. “Neuronal avalanches” were identified 1) in spiking activity from putative single units (5-22 neurones per animal, 4-second resolution) and 2) in high-amplitude negative deflections of the local field potential (nLFPs) (16 channels per animal, 10-minute resolution). Results: Power-law relationships indicative of criticality were obtained with nLFP avalanches but not with spiking avalanches. Spiking avalanche mean size was smallest in waking (4.26 ± 0.2 spikes per avalanche), larger in REM (4.49 ± 0.2) and largest in NREM (5.23 ± 0.49). However, nLFP avalanches were smallest in NREM (7.39 ± 0.92 channels per avalanche), larger in REM (8.42 ± 1.09) and largest in wake (9.10 ± 0.62). 24/37 prolonged (at least 1 h) wake bouts showed significant changes in spiking avalanche mean size from the first to the last half of the bout (rate = -0.15 to 0.13 spikes per avalanche per hour, p < 0.05). Furthermore, this measure correlated significantly (R = 0.09-0.78, p < 0.0001) with EEG slow wave activity (0.5-4 Hz) in all recording periods, particularly within NREM (R = 0.18-0.47, p < 0.0001). No such significant trends were found in nLFP avalanches. Conclusions: The properties of neuronal avalanches differ between sleep and wake but are consistent with near-critical dynamics in both states.
These properties show small but significant temporal trends which vary in rate and direction across different local networks and are associated with slow-wave activity, particularly in sleep. Spiking and LFP neuronal avalanches represent very different forms of dynamics, as their properties undergo opposite changes simultaneously. Acknowledgements: BBSRC (BB/K011847/1), MRC (MR/L003635/1), FP7-PEOPLE-CIG (PCIG11-GA-2012-322050), Wellcome Trust (098461/Z/12/Z).
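Avalanche detection of this kind can be sketched on a binned raster, following the usual convention that an avalanche is a run of consecutive non-empty time bins and its size the total spike count in the run; the raster below is synthetic, not the recorded data:

```python
import numpy as np

rng = np.random.default_rng(4)
raster = rng.random((10, 1000)) < 0.05   # 10 units x 1000 time bins
counts = raster.sum(axis=0)              # spikes per time bin

sizes, current = [], 0
for c in counts:
    if c > 0:
        current += c                     # extend the ongoing avalanche
    elif current > 0:                    # an empty bin ends the avalanche
        sizes.append(current)
        current = 0
if current > 0:                          # close an avalanche at the end
    sizes.append(current)

mean_size = float(np.mean(sizes))
```

A histogram of `sizes` is what is tested against a power law; the mean of `sizes` per vigilance state corresponds to the avalanche mean sizes reported above.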

Thermodynamics and signatures of criticality in a network of neurons

Tkacik, Gasper

The activity of a neural network is defined by patterns of spiking and silence from the individual neurons. Because spikes are (relatively) sparse, patterns of activity with increasing numbers of spikes are less probable, but, with more spikes, the number of possible patterns increases. This tradeoff between probability and numerosity is mathematically equivalent to the relationship between entropy and energy in statistical physics. We construct this relationship for populations of up to N = 160 neurons in a small patch of the vertebrate retina, using a combination of direct and model-based analyses of experiments on the response of this network to naturalistic movies. The form of this function corresponds to the distribution of activity being poised near an unusual kind of critical point. Based on this observation, we define a new class of probabilistic models that can be tractably inferred from data, and are designed to properly capture code ensembles near the critical point. We show that our model is a learnable generalization of a recently proposed toy mechanism involving a fluctuating latent field, which produces Zipfian distributions of responses without fine tuning.
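The probability-numerosity tradeoff can be made concrete in the simplest case of independent neurons, which is only a caricature of the correlated retinal data analysed in the poster:

```python
from math import comb, log

# N independent neurons, each spiking with probability p in a time bin.
# A pattern with k spikes has probability p^k (1-p)^(N-k) but there are
# C(N, k) such patterns: "energy" E = -log P falls as probability falls,
# while log-multiplicity (entropy) first rises with k.
N, p = 10, 0.1
energy, entropy = [], []
for k in range(N + 1):
    prob = p**k * (1 - p)**(N - k)     # probability of one k-spike pattern
    energy.append(-log(prob))          # E = -log P
    entropy.append(log(comb(N, k)))    # log number of k-spike patterns

# sanity check: the full distribution is normalised
total = sum(comb(N, k) * p**k * (1 - p)**(N - k) for k in range(N + 1))
```

Plotting `entropy` against `energy` gives the entropy-energy curve whose shape, for the real (correlated) retinal data, signals proximity to a critical point.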

Semantic grounding in a neurocomputationally constrained model including spiking neurons and realistic connectivity

Tomasello, Rosario

Previous neurocomputational work has addressed the question of why and how multiple cortical areas contribute to semantic processing and, specifically, why semantic hubs involved in all types of semantics contrast with category-specific areas that preferentially process certain subtypes of meaning. However, much of the pre-existing work used either basic neuron models or much-simplified connectivity, so a more sophisticated and biologically realistic model is desirable. Here, we applied a neural-network model replicating anatomical and physiological features of a range of cortical areas in the temporal-occipital and frontal lobes to simulate the learning of semantic relationships between word forms and specific object perceptions and motor movements of one's own body. The two neuronal architectures differed in the level of detail with which cortico-cortical connectivity was implemented. Furthermore, model (A) adopted a mean-field approach by using graded-response neurons, whereas model (B) implemented leaky integrate-and-fire neurons. Equipped with correlation-based learning rules and under the impact of repeated sensorimotor pattern presentations, both models showed the spontaneous emergence of specific, tightly interlinked cell assemblies within the larger networks, linking the processing of word-form information to that of sensorimotor semantic information. Both models also showed category-specificity in the cortical distribution of word-related circuits, with high-degree connection-hub areas central to the network architecture exhibiting involvement in all types of semantic processing and only moderate category-specificity (see Figure 1). The present simulations account for the emergence of both category-specific and general-semantic hub areas in the human brain and show that realistic neurocomputational models at different levels of detail consistently provide such an explanation.
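The correlation-based learning at the heart of both models can be caricatured by a Hebbian outer-product rule on binary patterns; unit counts, learning rate, and patterns below are invented for illustration:

```python
import numpy as np

n = 20
word = np.zeros(n);   word[:5] = 1.0       # "word-form" units
sensor = np.zeros(n); sensor[10:15] = 1.0  # "sensorimotor" units
W = np.zeros((n, n))                       # learned weights
eta = 0.1                                  # learning rate

for _ in range(50):                        # repeated co-presentation
    pattern = word + sensor                # co-active units on each trial
    W += eta * np.outer(pattern, pattern)  # Hebbian: strengthen co-activity
np.fill_diagonal(W, 0.0)

within = W[:5, 10:15].mean()   # word-form <-> sensorimotor links grow
outside = W[:5, 5:10].mean()   # links to never-active units stay zero
```

The strongly interlinked `word`/`sensor` units are the seed of a cell assembly; in the full models, the cortico-cortical connectivity structure determines which areas such assemblies span, and hence where hubs emerge.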

Cold to the core: altered energy metabolism is associated with inattention, anxiety and aggression

van Heukelum, Sabrina

Aims: Antisocial behavior and aggression in childhood and adolescence, particularly as seen in conduct disorder (CD) and attention-deficit hyperactivity disorder (ADHD), represent an increasing socioeconomic burden due to the persistent and repeated nature of offences. Animal models may provide additional insights into the neural basis of these traits. Methods: In this study the BALB/cJ mouse was extensively phenotyped against the BALB/cByJ mouse (control), and physical factors were measured by telemetry. In addition, the neurometabolic status was assessed using single-voxel 1H magnetic resonance spectroscopy. Results: Using the resident-intruder task, increased pathological aggression, translational to the aggression observed in CD, was found in BALB/cJ mice. This was positively correlated with their level of anxiety as measured in the open field test. A global attention deficit was found in the BALB/cJ mice, as measured by an increased number of omissions in the 5-choice serial reaction time task. Furthermore, telemetric measurements demonstrated that BALB/cJ mice are hyperactive in the dark phase of the light/dark cycle and have a lower basal body temperature, indicating a difference in energy metabolism. This finding was confirmed by an increase in cytochrome c oxidase, the terminal oxidase of the mitochondrial respiratory pathway. This may be related to the decreased taurine and GABA concentrations found in the anterior cingulate cortex (ACC). In addition, follow-up studies indicated anatomical differences in the volume of the ACC between BALB/cJ and BALB/cByJ mice. Conclusions: These data indicate that the pathological aggression observed in BALB/cJ mice may be related to metabolic changes that decrease attention and increase anxiety, such that BALB/cJ mice express heightened stress reactivity resulting in inappropriate behavior such as aggression.

The Long-Term Effects of an Induced 'Experiential Awareness' Versus a 'Cognitive Reappraisal' in the Processing of Bottom-Up Generated Emotions: Revealed by Heart-Brain Coupling

Wang, Yulin

Over recent years, interest in emotion regulation research has grown, with a special focus on cognitive reappraisal. Cognitive reappraisal is assumed to be one of the most adaptive emotion regulation strategies. However, its effectiveness may not always be guaranteed, considering that emotionally arousing stimuli are bottom-up driven while reappraisal works as a top-down emotion regulation strategy. In contrast, experiential awareness, an emotion regulation strategy originating from experiential psychotherapy, has recently captured some research attention as an effective bottom-up strategy. Although abundant research has focused on top-down cognitive reappraisal, research is needed to validate the effectiveness of experiential awareness in the long run at the behavioral and neuroimaging levels. Therefore, our research sets out to compare the long-term effects of experiential awareness with those of cognitive reappraisal in the processing of bottom-up generated emotions, by exploring the neural correlates of heart rate alterations. 30 participants will undergo 3T fMRI scanning, with high-frequency heart rate variability acquired simultaneously throughout the experiment. We will adopt a within-subject design for the fMRI experiment. All subjects will receive the instruction "Kijk" (watch), "Doorleef" (experiential awareness), or "Herinterpreteer" (reappraise). In total, there are two sessions, each consisting of four blocks: 'watch negative', 'watch neutral', 'reappraise negative', 'experiential awareness negative'. We are currently collecting the data. Heart-brain coupling methods will be applied in the data analysis to reveal the long-term effects of emotion regulation. Keywords: Experiential Awareness; Reappraisal; Emotion Regulation; HRV; Long Term