Our brain plays jazz: Coordinated neuronal activity in self-organized complex networks with delays

Gordon Pipa

Max-Planck-Institut für Hirnforschung, Frankfurt/Main, Germany

My research is focused on understanding how neuronal information processing and cognitive phenomena can arise from the collective self-organization of elements interacting across spatial and temporal scales. The guiding research hypothesis is that self-organized temporal coordination of neuronal activity is a key element of information processing in the brain. In this talk I am going to present two principal mechanisms for the temporal coordination of activity and for information processing in complex neuronal networks.

Part One:
Characterizing neuronal encoding is essential for understanding information processing in the brain. Three methods are commonly used to relate neural spiking activity quantitatively to the features of putative stimuli: Wiener-Volterra kernel methods (WVK), the spike-triggered average (STA), and, more recently, the point-process generalized linear model (GLM). We compared the performance of these three approaches for describing firing patterns and estimating receptive-field properties and orientation tuning of V1 neurons. The GLM consisted of two formulations of the conditional intensity function for a point-process characterization of the spiking activity: one with a stimulus-only component and one with both stimulus and spike history. Goodness-of-fit was assessed by cross-validation with Kolmogorov-Smirnov (KS) tests based on the time-rescaling theorem, evaluating how accurately each model predicts the spiking activity of individual neurons for each movement direction (4016 models in total, for 251 neurons and 16 directions). We show that only the GLM with spike history describes the data adequately. In a second step we evaluate the importance of contextual and neuronal mass activity in a single unifying framework based on the GLM approach. We find that contextual information strongly modulates the encoding of single neurons, and that neuronal mass activity, used as a surrogate for the activity of the embedding neuronal network, accounts for the contextual influence to a major degree.
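The goodness-of-fit step above can be illustrated with a minimal sketch of the time-rescaling KS test: if a model's conditional intensity is correct, the rescaled inter-spike intervals are Exp(1), so mapping them through 1 − exp(−τ) yields Uniform(0, 1) samples that a KS test can check. The function name and the homogeneous-Poisson toy data below are illustrative, not the models from the study:

```python
import numpy as np
from scipy.stats import kstest

def time_rescaling_ks(spike_times, intensity, dt):
    """KS test of a point-process model via the time-rescaling theorem.

    spike_times : spike times in seconds; intensity : the model's
    conditional intensity lambda(t) sampled on a grid with step dt.
    Returns the KS statistic and p-value against Uniform(0, 1).
    """
    # Integrated intensity Lambda(t) = int_0^t lambda(u) du
    cum = np.concatenate([[0.0], np.cumsum(intensity) * dt])
    grid = np.arange(len(cum)) * dt
    # Rescaled inter-spike intervals are Exp(1) if the model is correct
    lam_at_spikes = np.interp(spike_times, grid, cum)
    tau = np.diff(lam_at_spikes)
    # Map Exp(1) to Uniform(0, 1) and compare with a KS test
    z = 1.0 - np.exp(-tau)
    return kstest(z, "uniform")

# Usage: a homogeneous Poisson model evaluated on matching Poisson data,
# which should pass the test (large p, small KS statistic)
rng = np.random.default_rng(0)
rate, T, dt = 20.0, 100.0, 1e-3
spikes = np.sort(rng.uniform(0, T, rng.poisson(rate * T)))
lam = np.full(int(T / dt), rate)
stat, p = time_rescaling_ks(spikes, lam, dt)
```

A misspecified intensity (e.g. the wrong rate) would instead push the KS statistic outside its confidence bounds, which is how the study discriminates the stimulus-only from the spike-history formulation.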

Part Two:
Reservoir computing, originally introduced in the context of echo state machines and liquid state machines (LSM), has been proposed as a promising computational model for information processing in complex networks. Reservoir computing is a universal framework for computation, such as prediction, classification, and memorization of information contained in time-varying input streams. It uses the dynamics of a complex, possibly random, dynamical system to map features into a high-dimensional state, similar to the idea of support vector machines in machine learning. Computations emerge from the properties of the individual coupled elements and the inherent network dynamics. While the original concept of the LSM was proposed for fixed networks, I am going to present an extension that incorporates self-organization into the network via neuronal plasticity. We considered two types: first, spike-timing-dependent plasticity (STDP), which changes synaptic strength and has been associated with sequence learning; and second, intrinsic plasticity (IP), which changes the excitability of individual neurons to maintain homeostasis. Based on extensive simulation studies, we demonstrate that the combination of both types, first, optimizes information processing; second, leads to self-organized criticality of the network dynamics; and third, that the intrinsic noise introduced by intrinsic plasticity increases the robustness of information processing in a high-noise regime.
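The basic reservoir idea can be sketched with a minimal echo state network: a fixed random recurrent network maps an input stream into a high-dimensional state, and only a linear readout is trained. This sketch omits the plasticity mechanisms (STDP, IP) discussed in the talk; all sizes, scalings, and the delayed-recall task are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes: a small random reservoir driven by a scalar input
n_res, n_in, T = 100, 1, 500

# Random recurrent weights, rescaled to spectral radius 0.9 so the
# reservoir has fading memory (the echo state property)
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))

# Drive the reservoir with a random input stream; the task is to
# recall the input from 3 time steps in the past (short-term memory)
u = rng.uniform(-1.0, 1.0, (T, n_in))
x = np.zeros(n_res)
states = np.empty((T, n_res))
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])  # tanh reservoir update
    states[t] = x

# Linear readout trained by ridge regression: the only trained part
delay = 3
X, y = states[delay:], u[:-delay, 0]
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
pred = X @ w_out
```

The recurrent weights `W` stay fixed here; the extension presented in the talk instead lets STDP and IP reshape the reservoir itself during operation.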

This work was supported by grants R01 MH59733, R01 DA015644, and EU 04330, and by the Hertie Foundation.
