For each poster contribution there will be one poster wall (width: 97 cm, height: 250 cm) available. Please do not feel obliged to fill the whole space. Posters can be put up for the full duration of the event.
Network theory has been a groundbreaking research field in science for the last 20 years, conceivably the only one able to glue together disparate and even contrasting disciplines such as physics, economics, biology and sociology. A network materializes the complex interactions between the constituent entities of a large system; it thus defines the natural structural backbone for describing complex systems, whose dynamics are unavoidably bound to the network properties. Based on a detailed study of a large set of empirical networks drawn from a wide spectrum of research fields, we claim that strong non-normality is a universal property in network science. Dynamical processes evolving on non-normal networks exhibit a peculiar behavior: small initial disturbances can undergo a transient phase during which they are strongly amplified, even though the system is linearly stable. We hence propose several models that generate complex non-normal networks and explain the origin of this property. Because of the non-normality of the networked support, understanding the dynamics goes beyond classical linear spectral methods; we show that the pseudospectrum is able to capture this behavior. This response is very general, and it challenges our understanding of natural processes grounded on real networks, as we illustrate with the generalised Lotka-Volterra model. References: M. Asllani and T. Carletti, arXiv:1803.11542 (2018); M. Asllani and T. Carletti, Phys. Rev. E 97, 042302 (2018).
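The transient amplification described above can be illustrated with a toy two-dimensional linear system (illustrative values, not taken from the referenced papers): both eigenvalues are negative, so the system is linearly stable, yet the norm of a small disturbance grows by an order of magnitude before decaying.

```python
import numpy as np

# Toy sketch: a linearly stable but strongly non-normal system dx/dt = A x.
A = np.array([[-1.0, 50.0],
              [ 0.0, -2.0]])   # eigenvalues -1 and -2: linearly stable

# Matrix exponential via eigendecomposition (A is diagonalizable here).
evals, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)

def propagate(x0, t):
    return (V @ np.diag(np.exp(evals * t)) @ Vinv).real @ x0

x0 = np.array([0.0, 1e-2])     # small initial disturbance
ts = np.linspace(0.0, 5.0, 200)
growth = [np.linalg.norm(propagate(x0, t)) / np.linalg.norm(x0) for t in ts]
print(f"peak amplification: {max(growth):.1f}")    # >> 1 despite stability
print(f"final amplification: {growth[-1]:.3f}")    # eventually decays
```

The peak amplification (here above 10) is invisible to the eigenvalues alone, which is why pseudospectral methods are needed.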
Motivated by the recent work of Mezard [Phys. Rev. E 95, 022117 (2017)], which addresses a generalized Hopfield model in which the memorized patterns are not independent but correlated through a combinatorial structure, we introduce a simple and rigorous approach to derive mean-field equations for such models. In particular, we make use of a powerful concept from random matrix theory called ``asymptotic freeness''. Our approach reproduces and generalizes Mezard's mean-field equations (which were obtained by a heuristic belief propagation on a dense graph). (This study is unpublished and is joint work with Manfred Opper.)
We propose a new algorithm to approximate statistical quantities of discrete models defined on graphs. The new scheme computes exact marginals on acyclic graphs; in addition, it takes into account approximate loop corrections on cyclic graphs, thus providing a new way of generalizing standard message passing algorithms like Belief Propagation. The method is similar to the well-known Expectation Consistency method (for instance, the approximating family is Gaussian), but in this case consistency is enforced not on the first moments but on the value of the density on the finite support of the discrete distribution. We provide results of numerical simulations on the ferromagnetic Ising and Edwards-Anderson models on both 2- and 3-dimensional lattices, showing large improvements with respect to the Bethe approximation, which are comparable to and, in certain regimes, even better than other approximation schemes like the Cluster Variation method.
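As a minimal sketch of the kind of message passing being generalized, the following implements standard Belief Propagation on a small ferromagnetic Ising chain (an acyclic graph, so BP marginals are exact) and checks the result against brute-force enumeration; the chain length, coupling and field are illustrative values.

```python
import numpy as np
from itertools import product

# BP on a 4-spin ferromagnetic Ising chain with coupling J and field h.
N, J, h = 4, 0.5, 0.2
edges = [(i, i + 1) for i in range(N - 1)]
spins = np.array([-1.0, 1.0])

def psi(si, sj):              # pairwise factor exp(J s_i s_j)
    return np.exp(J * si * sj)

def phi(si):                  # local field factor exp(h s_i)
    return np.exp(h * si)

neighbors = {i: [] for i in range(N)}
for i, j in edges:
    neighbors[i].append(j); neighbors[j].append(i)

# m[(i, j)] is the message from i to j, an array over the two values of s_j.
m = {(i, j): np.ones(2) for i in neighbors for j in neighbors[i]}
for _ in range(2 * N):        # enough in-place sweeps to converge on a chain
    for (i, j) in list(m):
        incoming = np.prod([m[(k, i)] for k in neighbors[i] if k != j], axis=0) \
                   if len(neighbors[i]) > 1 else np.ones(2)
        new = np.array([sum(psi(si, sj) * phi(si) * inc
                            for si, inc in zip(spins, incoming))
                        for sj in spins])
        m[(i, j)] = new / new.sum()

def bp_marginal(i):           # belief at node i from all incoming messages
    b = phi(spins) * np.prod([m[(k, i)] for k in neighbors[i]], axis=0)
    return b / b.sum()

# Exact marginal of spin 0 by brute-force enumeration, for comparison.
Z, p0 = 0.0, np.zeros(2)
for cfg in product([-1.0, 1.0], repeat=N):
    w = np.exp(h * sum(cfg) + J * sum(cfg[i] * cfg[j] for i, j in edges))
    Z += w
    p0[int(cfg[0] > 0)] += w
print(np.allclose(bp_marginal(0), p0 / Z))   # BP is exact on trees
```

On a cyclic graph the same fixed-point equations only give the Bethe approximation, which is the starting point the abstract improves upon.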
We present an approximate Bayesian inference approach for estimating the intensity of an inhomogeneous Poisson process, where the intensity function is modelled using a Gaussian process (GP) prior via a sigmoid link function. Augmenting the model with a latent marked Poisson process and P\'olya--Gamma random variables, we obtain a representation of the likelihood which is conjugate to the GP prior. We approximate the posterior using a free--form mean field approximation together with the framework of sparse GPs. Furthermore, as an alternative approximation we suggest a sparse Laplace approximation of the posterior, for which an efficient expectation--maximisation algorithm is derived to find the posterior's mode. Results of both algorithms compare well with exact inference obtained by a Markov chain Monte Carlo sampler and with the standard variational Gaussian approach, while being one order of magnitude faster.
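A small sketch of the forward model (not of the inference itself): an inhomogeneous Poisson process with sigmoid-link intensity can be sampled by thinning a homogeneous process; here a fixed toy function stands in for a GP draw, and all rates are illustrative.

```python
import numpy as np

# lambda(t) = lam_max * sigmoid(g(t)), sampled by Lewis-Shedler thinning.
rng = np.random.default_rng(0)
lam_max, T = 20.0, 10.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def g(t):                       # stand-in for a GP sample path
    return 2.0 * np.sin(t)

# Candidates from the homogeneous process with rate lam_max on [0, T] ...
n_cand = rng.poisson(lam_max * T)
cand = rng.uniform(0.0, T, size=n_cand)
# ... each kept with probability lambda(t) / lam_max.
events = cand[rng.uniform(size=n_cand) < sigmoid(g(cand))]

# Empirical event rate vs. the average intensity on a fine grid.
mean_intensity = lam_max * np.mean(sigmoid(g(np.linspace(0, T, 10_000))))
print(f"{len(events) / T:.2f} events per unit time "
      f"(average intensity {mean_intensity:.2f})")
```

The thinned representation is also the starting point for the latent marked Poisson process augmentation mentioned above.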
We construct and study the Google matrix of Bitcoin transactions during the time period from the very beginning in 2009 until April 2013. The Bitcoin network comprises up to a few million users, and we present its main characteristics, including the PageRank and CheiRank probability distributions, the spectrum of eigenvalues of the Google matrix and the related eigenvectors. We find that the spectrum has an unusual circle-type structure, which we attribute to hidden communities of nodes strongly linked between their members. We show that the Gini coefficient of the transactions for the whole period is close to unity, showing that the main part of the wealth of the network is captured by a small fraction of users.
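For readers unfamiliar with the construction, here is a minimal sketch of a Google matrix and PageRank by power iteration on a toy directed graph (CheiRank is the same computation on the link-reversed graph); the graph and damping factor are illustrative, not Bitcoin data.

```python
import numpy as np

alpha = 0.85                                 # standard damping factor
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}  # toy "transaction" graph
N = 4

S = np.zeros((N, N))
for i, outs in links.items():
    for j in outs:
        S[j, i] = 1.0 / len(outs)            # column-stochastic adjacency
# (Dangling nodes without out-links would get uniform columns here.)

G = alpha * S + (1 - alpha) / N * np.ones((N, N))  # Google matrix

p = np.ones(N) / N
for _ in range(200):                         # power iteration to the leading
    p = G @ p                                # eigenvector (eigenvalue 1)
print(np.round(p, 3), p.sum())               # PageRank vector, sums to 1
```

Node 2, which receives links from three other nodes, ends up with the largest PageRank; the full spectrum of G, not just the leading eigenvector, carries the community structure discussed above.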
Molecules that diffuse in space and undergo chemical reactions can be modeled as particles; in a similar way, we can describe the motion of agents and their interactions according to a set of rules. The Doi model is often considered the ground-truth model for the diffusion and reaction of particles, but sampling every particle trajectory is computationally expensive for many-particle systems. Here we present an approximation to the Doi model that is valid for systems with many (but not infinitely many) particles. From a description in terms of individual particles, we derive a model for the diffusion and reaction of particle densities (based on Kim et al. 2017). We introduce a simulation approach for the ground-truth model and its approximation, and identify the regimes in which each model gives the best compromise between accuracy and computational feasibility.
Wireless communication networks require reliable routing of messages, despite the fact that individual network links are unreliable. Multi-hop routing protocols offer a promising solution to overcome the issue of message loss. For these protocols, successful relay of a message defines a percolation problem. Here, we present a percolation theory for a minimal model in which individual links switch between an active and an inactive state according to a two-state Markov process. Using renormalization group theory, we analytically compute the complete statistics of failure events. We show how the time-dependent probability to find a path of active links between two designated nodes converges towards an effective Bernoulli process, i.e. one without memory, as the hop distance between the nodes increases. Our work extends classical percolation theory to the dynamic case. It elucidates temporal correlations of message losses, with implications for the design of communication protocols and control algorithms.
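A toy discrete-time caricature of the minimal model (the model above is continuous-time; the transition probabilities here are illustrative): each link in an n-hop chain is an independent two-state Markov chain, the end-to-end path is up only when all links are active, and the lag-1 autocorrelation of path availability shrinks with hop count, illustrating the loss of memory.

```python
import numpy as np

rng = np.random.default_rng(1)
p_up, p_down = 0.1, 0.05   # inactive->active and active->inactive probabilities
T = 100_000

def path_series(n_links):
    """End-to-end availability of a chain of independent Markov links."""
    state = np.ones(n_links, dtype=bool)     # start with all links active
    up = np.empty(T, dtype=bool)
    for t in range(T):
        flip = rng.uniform(size=n_links)
        state = np.where(state, flip > p_down, flip < p_up)
        up[t] = state.all()
    return up.astype(float)

results = {}
for n in (1, 4):
    s = path_series(n)
    r = float(np.corrcoef(s[:-1], s[1:])[0, 1])  # lag-1 autocorrelation
    results[n] = (s.mean(), r)
    print(f"{n} hops: availability={s.mean():.3f}, lag-1 corr={r:.3f}")
```

A single link keeps the full Markov memory (correlation 1 - p_up - p_down), while the 4-hop path is already measurably closer to a memoryless Bernoulli process.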
In this work we present an interacting variant of the well-known Geometric Brownian Motion model, which plays a central role in modern-day financial theory and practice. The model is attractive in that it is able to reproduce many of the so-called stylised facts of financial markets through the interplay of many metastable states and transitions between them. The focus, however, will be on how such dynamical regimes affect inference of the interaction network. We present results primarily for the synthetic case before finally applying the method to real data.
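For reference, a sketch of the standard non-interacting baseline, simulated exactly in log-space; in the interacting variant the drift of each asset would additionally depend on the states of its network neighbours (not shown here), and all parameter values below are illustrative.

```python
import numpy as np

# Standard GBM: dS = mu*S dt + sigma*S dW, with exact log-space solution
# log S_{t+dt} = log S_t + (mu - sigma^2/2) dt + sigma dW.
rng = np.random.default_rng(42)
mu, sigma, S0 = 0.05, 0.2, 1.0
T, n_steps, n_paths = 1.0, 252, 5000
dt = T / n_steps

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
logS = np.log(S0) + np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * dW, axis=1)
S_T = np.exp(logS[:, -1])

print(f"E[S_T] ~ {S_T.mean():.3f}  (theory: {S0 * np.exp(mu * T):.3f})")
```

Plain GBM produces log-normal returns with no volatility clustering; it is precisely the interaction terms that generate the metastable regimes and stylised facts mentioned above.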
Stochastic differential equations occur naturally in many fields of science and engineering, often arising as descriptions of systems with unresolved fast degrees of freedom. Usually, the underlying deterministic dynamics (i.e. the drift function) and the complete path trajectories of such systems are unknown, and instead we only have discrete-time partial state observations at hand. Existing inference methods for such systems either consider detailed parametric drift models or assume densely observed trajectories for non-parametric Gaussian process drift estimation. Here, we propose an approximate method for joint state path and drift estimation from sparse discrete-time observations by employing mean field arguments to decouple the latent variables (state path) and the drift function. A variational formulation of the likelihood in terms of probabilities of unobserved paths conditioned on the observations enables us to identify an effective drift that gives rise to the posterior measure over states. Given that measure, we estimate the drift through a sparse variational formulation of Gaussian process regression. An iterative optimisation alternating between minimisation with respect to path measures and drift functions efficiently determines the underlying drift and the most likely state path. These results enable us to identify stochastic systems where a specific parametric drift model is not known and only sparse discrete-time observations are available. [This is joint work with Manfred Opper]
Modelling spatio-temporal systems exhibiting multi-scale behaviour is a powerful tool in many branches of science, yet it still presents significant challenges. Here we consider a general two-layer (agent-environment) modelling framework, where spatially distributed agents behave according to external inputs and internal computation; this behaviour may include influencing their immediate environment, creating a medium over which agent-agent interaction signals can be transmitted. We propose a novel simulation strategy based on a statistical abstraction of the agent layer, which is typically the most detailed component of the model and can incur significant computational cost in simulation. The abstraction makes use of Gaussian Processes, a powerful class of non-parametric regression techniques from Bayesian Machine Learning, to estimate the agents' behaviour given the environmental input. We show on two biological case studies how this technique can be used to speed up simulations and provide further insights into model behaviour.
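A minimal numpy sketch of the core idea, with a toy response curve standing in for actual agent simulations: GP regression (squared-exponential kernel, assumed hyperparameters) learns the agents' aggregate response to an environmental input from a handful of examples.

```python
import numpy as np

rng = np.random.default_rng(3)

def kernel(a, b, ell=0.5, sf=1.0):
    """Squared-exponential covariance between 1-D input arrays a and b."""
    return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

# Toy "agent simulations": noisy observations of an unknown response curve.
x_train = np.linspace(0, 2 * np.pi, 15)
y_train = np.sin(x_train) + 0.1 * rng.normal(size=15)

# GP posterior mean and variance at test inputs (noise variance 0.01).
x_test = np.linspace(0, 2 * np.pi, 100)
K = kernel(x_train, x_train) + 0.01 * np.eye(15)
Ks = kernel(x_test, x_train)
mean = Ks @ np.linalg.solve(K, y_train)
var = kernel(x_test, x_test).diagonal() \
      - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))

rmse = np.sqrt(np.mean((mean - np.sin(x_test))**2))
print(f"surrogate RMSE vs. true response: {rmse:.3f}")
```

In the full scheme the surrogate replaces expensive agent-layer simulation inside the environment loop, and the predictive variance indicates where more agent simulations are needed.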
We apply hierarchical tensor representations to forward and inverse problems in uncertainty quantification. In the forward problem we use this approach to solve a partial differential equation, e.g. a diffusion equation with uncertain coefficients; we consider an affine as well as a log-normal coefficient model. For the inverse problem, we apply Bayesian inversion in parametric form. To represent high-dimensional functions, we use a hierarchical tensor representation, known as tree-tensor network states in quantum physics, to circumvent the curse of dimensionality. We give a brief description of hierarchical tensor representations and show the connection to deep neural networks.
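As a toy illustration of the simplest hierarchical format, the following computes a tensor-train (matrix-product-state) decomposition by sequential SVDs and reconstructs a rank-one 3-way array; sizes and ranks are illustrative and the code is a sketch, not the method used in the work above.

```python
import numpy as np

def tt_svd(T, max_rank):
    """Tensor-train decomposition of a dense array by sequential SVDs."""
    shape, d = T.shape, T.ndim
    cores, r = [], 1
    M = T.reshape(r * shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rk = min(max_rank, len(s))
        cores.append(U[:, :rk].reshape(r, shape[k], rk))   # 3-way core
        M = (s[:rk, None] * Vt[:rk]).reshape(rk * shape[k + 1], -1)
        r = rk
    cores.append(M.reshape(r, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    out = cores[0]
    for C in cores[1:]:
        out = np.tensordot(out, C, axes=([-1], [0]))       # contract ranks
    return out.squeeze(axis=(0, -1))

rng = np.random.default_rng(0)
a, b, c = rng.normal(size=4), rng.normal(size=5), rng.normal(size=6)
T = np.einsum('i,j,k->ijk', a, b, c)            # exact TT-rank-1 tensor
cores = tt_svd(T, max_rank=2)
err = np.linalg.norm(tt_reconstruct(cores) - T) / np.linalg.norm(T)
print(f"relative reconstruction error: {err:.2e}")
```

Storage scales linearly in the dimension d (one small core per mode) instead of exponentially, which is the sense in which such formats circumvent the curse of dimensionality.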
The necessity of a prior in Bayesian statistics has led to much controversy and criticism concerning the objectivity of the Bayesian method. We combine the methodology of Empirical Bayes, i.e. estimating the prior itself from cohort/big data, with the information-theoretic ideas of Objective/Reference Priors to derive a new nonparametric method for prior estimation, resulting in the Empirical Reference Prior (ERP). We furthermore develop a numerical scheme for the arising optimization problem, representing our ERP by Monte Carlo sampling, and apply the method to a high-dimensional systems biology model. (cf. https://arxiv.org/abs/1612.00064)
In many applications in finance, biology and sociology, complex systems involve entities interacting with each other. These processes have the peculiarity of evolving over time and of comprising latent factors, which influence the system without being explicitly measured. In this work we present the latent variable time-varying graphical lasso (LTGL), a method for multivariate time-series graphical modelling that accounts for the influence of hidden or unmeasurable factors. The estimation of the contribution of the latent factors is embedded in the model, which produces both a sparse and a low-rank component for each time point. The first component represents the connectivity structure of the observable variables of the system, while the second represents the influence of the hidden factors, assumed to be few with respect to the observed variables. Our model includes temporal consistency on both components, providing an accurate evolutionary pattern of the system. We derive a tractable optimisation algorithm based on the alternating direction method of multipliers (ADMM), and develop a scalable and efficient implementation which exploits proximity operators in closed form. LTGL is extensively validated on synthetic data, achieving optimal performance in terms of accuracy, structure learning and scalability with respect to ground truth and state-of-the-art methods for graphical inference. We conclude with the application of LTGL to real case studies from biology and finance, to illustrate how our method can be successfully employed to gain insights into multivariate time-series data.
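The two closed-form proximity operators central to such ADMM schemes can be sketched as follows (threshold and matrix size are illustrative): elementwise soft-thresholding, the prox of the l1 norm, yields sparsity, while singular-value thresholding, the prox of the nuclear norm, yields low rank.

```python
import numpy as np

def prox_l1(X, tau):
    """Soft-thresholding: prox of tau * ||X||_1, applied elementwise."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def prox_nuclear(X, tau):
    """Singular-value thresholding: prox of tau * ||X||_* (nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(7)
X = rng.normal(size=(6, 6))
S = prox_l1(X, 1.0)               # many entries become exactly zero
L = prox_nuclear(X, 1.0)          # small singular values are zeroed out
print(f"fraction of zeros in S: {(S == 0).mean():.2f}, "
      f"rank of L: {np.linalg.matrix_rank(L, tol=1e-8)}")
```

Inside an ADMM iteration these operators are applied to shifted residuals rather than to the raw matrix, but the closed forms above are what makes each iteration cheap.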
Accurate modelling and simulation of biochemical reaction kinetics is important for understanding the functionality of biological cells. Depending on the particle concentrations and their mobility in space, different mathematical models are appropriate. For well-mixed systems with multiple population scales there exist hybrid approaches which combine the different models in an efficient way. Based on the spatio-temporal master equation, such hybrid approaches can also be extended to spatially inhomogeneous settings. We give an overview of the existing approaches and discuss ideas for further recombinations, with the aim of appropriately modelling reaction-diffusion kinetics. Applications include the processes of gene expression and neurotransmission.
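A minimal sketch of the particle-based end of this model hierarchy: the Gillespie stochastic simulation algorithm for a well-mixed birth-death model of mRNA production and degradation (the rates are toy values); the stationary copy number is Poisson with mean k/g.

```python
import numpy as np

rng = np.random.default_rng(5)
k, g = 10.0, 1.0                 # production and per-molecule degradation rate
x, t, T, burn = 0, 0.0, 500.0, 50.0
vals, wts = [], []
while t < T:
    total = k + g * x            # total event rate in state x
    dt = rng.exponential(1.0 / total)
    if t > burn:                 # time-weighted samples after burn-in
        vals.append(x); wts.append(dt)
    t += dt
    if rng.uniform() < k / total:
        x += 1                   # production:  0 -> mRNA
    else:
        x -= 1                   # degradation: mRNA -> 0
mean_x = np.average(vals, weights=wts)
print(f"time-averaged copy number: {mean_x:.2f} (theory: {k/g:.2f})")
```

The cost of such exact event-by-event simulation grows with copy numbers, which is precisely what motivates the hybrid approaches above: abundant species can be handled by cheaper continuous approximations while rare species keep the discrete stochastic description.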