Unifying the Principles of Learning with and without Brains

For each poster contribution there will be one poster wall (width: 97 cm, height: 250 cm) available. Please do not feel obliged to fill the whole space. Posters can be put up for the full duration of the event.

to be announced

Álvarez Candás, Raúl

Training precise stress patterns in cellular packings

Ameen, Shabeeb

Embedded spheroids are an important in vitro model for studying tumor proliferation. We previously constructed a 3D computational model for embedded spheroids, in which the cellular packing of the spheroid is represented as a 3D vertex model. In this presentation, we consider the cell-level stress in such a model and investigate the training of specific patterns of cell stresses in the packing using physical learning rules. The feasibility of training such patterns may suggest novel biophysical mechanisms that drive cancer cells in a tumor to invade the surrounding extracellular matrix.
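
As a schematic of the kind of local rule involved (a deliberately minimal toy, not the authors' 3D vertex model), consider springs in parallel whose tensions play the role of cell stresses:

```python
import numpy as np

# Toy stand-in for training a stress pattern (not the authors' 3D vertex
# model): n springs in parallel between two plates held by a force F. The
# "stress" of element i is its tension tau_i, and the learning degrees of
# freedom are the rest lengths ell_i, adjusted by a purely local rule.

n, k, F, eta = 8, 1.0, 4.0, 0.1
ell = np.ones(n)                            # learnable rest lengths
target = np.full(n, F / n)                  # uniform base tension
target += 0.2 * np.sin(2 * np.pi * np.arange(n) / n)
target *= F / target.sum()                  # targets must balance F

for step in range(500):
    x = F / (n * k) + ell.mean()            # plate separation (force balance)
    tau = k * (x - ell)                     # element tensions at equilibrium
    ell -= eta * (target - tau) / k         # local rule: shortening ell_i raises tau_i

x = F / (n * k) + ell.mean()
tau = k * (x - ell)
print("max stress error:", np.abs(tau - target).max())
```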

Materials that learn periodic dynamics

Berneman, Marc

Disordered materials and structures possess a large number of different degrees of freedom. Harnessing these by combining plasticity and external manipulation allows for creating materials with complex responses comparable to those of neural networks. In the training-based approach, materials evolve on their own to attain desired responses, removing the need for computer-aided design and precise fabrication. While most work in this field has focused on the quasistatic limit, where the driving is much slower than the dynamics of the system, systems operating in the dynamic regime are no less important. Systems operating dynamically offer a range of responses not possible in quasistatics, such as frequency-dependent responses, responses at an arbitrary phase, and responses that depend on the temporal sequence of applied deformation. Applications include efficient AI processors, mechanical sensors, materials that act as filters, and passive mechanical structures that classify sounds or words. In cases where the system is governed by an energy minimization principle, the Equilibrium Propagation framework makes it possible to leverage the internal physics to calculate gradients of cost functions. In damped dynamics, however, Equilibrium Propagation cannot be applied directly due to the absence of an extremizing function. In our work, we circumvent this challenge by defining a “fake” energy function in Fourier space, allowing us to operate within the Equilibrium Propagation framework. While this approach is limited to linear systems in periodic motion, we also found that undamped nonlinear systems can be included in the category of trainable systems. As a proof of concept, we use our local learning rule to train an energy-based model to classify the spoken digits “zero” and “one”, achieving 93.7% accuracy. In theory, such an approach could be used to imbue a material with passive sensing for sound classification. Our work opens the door to smart mechanical materials that process dynamical signals and materials with a desired temporal evolution.
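
One way to read the “fake” energy construction (a paraphrase, assuming a linear mass-spring-damper network with symmetric matrices M, C, K): the steady periodic state is the stationary point of a complex quadratic form, even though damping rules out a true minimized energy.

\[
M\ddot{x} + C\dot{x} + Kx = \operatorname{Re}\!\left[F e^{i\omega t}\right], \qquad x(t) = \operatorname{Re}\!\left[X e^{i\omega t}\right],
\]
\[
\left(K - \omega^{2}M + i\omega C\right)X = F \quad\Longleftrightarrow\quad \frac{\partial \mathcal{L}}{\partial X} = 0, \qquad \mathcal{L}(X) = \tfrac{1}{2}\,X^{\top}\!\left(K - \omega^{2}M + i\omega C\right)X - X^{\top}F .
\]

Because the bilinear form uses the unconjugated transpose, \(\mathcal{L}\) is stationary (not minimal) at the physical steady state, which is the kind of extremality condition the Equilibrium Propagation machinery can exploit.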

Physical networks for disordered metamaterials

Bonamassa, Ivan

Recent advances in high-resolution 3D imaging have provided unprecedented insights into the architecture of biological systems, revealing morphological and structural patterns that challenge conventional material design. These discoveries have sparked growing interest in the design of mechanical metamaterials with adaptive elastic and electrical properties, laying the foundation for the creation of efficient, scalable and sustainable learning machines. Alongside these developments, physical networks (graphs whose nodes and links are volumetric objects bound by physical or geometric constraints) have emerged as a versatile framework to characterize the morphological complexity observed in real-world data, raising the prospect of novel data-driven strategies to design optimized disordered metamaterials. Here, I will review recent progress in the characterization of minimal models of physical networks, highlighting how mutual interactions among elements drive the emergence of correlated heterogeneities in their morphology and topology. In particular, I will show that a network-of-networks representation of biological physical networks (modeling, e.g., neural connectomes, where nodes (neurons) are extended objects occupying a certain volume and links (synapses) are localized) predicts the emergence of a morphological preferential attachment mechanism caused by volume exclusion and the random sequential nature of their growth. Similarly, I will show how related effects yield logarithmic kinetics and the spontaneous formation of bundles of locally oriented filaments in non-equilibrium packings of very elongated fibers. Given the differences between these model systems and more classic disordered solids, we aim to ignite a conversation on how such morphological and topological heterogeneities could influence their rheology or learning performance, opening paths towards the design of novel bio-inspired metamaterials.

Physical Landscapes of Learning: The Role of Active Flows in Biological Learning Systems

Büchl, Adrian

In recent years, experiments and simulations have shown that self-learning machines can exploit their own physical properties to facilitate learning. This learning is achieved by modifying local network parameters, such as conductances in flow networks or resistances in electrical circuits, according to local learning rules. However, most biological systems that we observe to be capable of learning have inherent active drivers that shape their behavior, such as active pumping in flow networks. These represent an additional set of learning degrees of freedom. Applying learning rules to the inherent flows translates the energy minima, a mechanism parallel to the changes in curvature around the minima achieved by modifying tube conductances. In this project, we investigate how physical landscapes evolve during learning as the learning degrees of freedom are modified. We compare the learning updates that adapt these different kinds of learning degrees of freedom and investigate how their interplay might enhance or inhibit learning performance. Our findings shed light on how biological systems such as slime molds and fungi regulate their foraging behavior and adapt to environmental conditions in an optimal way.
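
One concrete way to picture the two kinds of learning degrees of freedom (an illustrative energy, not necessarily the exact model used here): for a flow network with node pressures \(p\), conductances \(g_{ij}\), and pump strengths \(a_{ij}\),

\[
E(p) = \tfrac{1}{2}\sum_{\langle ij \rangle} g_{ij}\left(p_i - p_j - a_{ij}\right)^{2},
\]

changing the conductances \(g_{ij}\) reshapes the curvature of \(E\) around its minimum, while changing the pump strengths \(a_{ij}\) translates the minimum itself.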

Separatrix Finder: a numerical method for black-box dynamical systems

Dabholkar, Kabir

Many systems in nature exhibit multi-stability. The brain during decision making, for instance, can be modeled as a high-dimensional bistable system. While several tools are dedicated to the behavior of such models near their equilibria, their separatrices, the manifolds of states lying at the boundaries of basins of attraction, are more challenging to study. We develop a numerical tool based on Koopman theory to characterise separatrices of black-box dynamical systems. In particular, provided access to a dynamical function, i.e., a vector field, we use Deep Neural Networks to infer Koopman Eigenfunctions (KEFs) of the flow. To achieve this in the high-dimensional setting, we utilise several developments in machine learning. I will provide a brief introduction to the theory needed to understand how this works. In short: the zeroes of KEFs characterise the separatrices of the flow. We demonstrate our method on a suite of flows: synthetic examples, Recurrent Neural Networks (RNNs) trained on neuroscience tasks, and RNNs fit to mouse neural data during decision making. We show that the method can be used to design optimal inputs that perturb systems across their separatrices, e.g., for optogenetic stimulation experiments.
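
A one-dimensional illustration of the core idea (a hypothetical toy, with a polynomial ansatz standing in for the deep networks used in the actual method):

```python
import numpy as np

# Bistable flow f(x) = x - x**3 with attractors at +/-1 and a saddle at 0.
# A Koopman eigenfunction phi obeys phi'(x) * f(x) = lam * phi(x); its zero
# level set marks the separatrix. Here phi is a polynomial found by linear
# least squares (the actual method uses deep networks in high dimensions).

f = lambda x: x - x**3
lam = 1.0                                   # = f'(0), the saddle eigenvalue
xs = np.linspace(-0.9, 0.9, 400)            # sample points inside the basins

deg = 9
P = np.vander(xs, deg + 1, increasing=True)             # basis for phi
dP = np.hstack([np.zeros((xs.size, 1)),
                P[:, :-1] * np.arange(1, deg + 1)])     # basis for phi'
A = dP * f(xs)[:, None] - lam * P           # residual of the KEF equation
_, _, Vt = np.linalg.svd(A)
c = Vt[-1]                                  # best nontrivial coefficients

phi = P @ c
print("estimated separatrix at x =", xs[np.argmin(np.abs(phi))])   # ~ 0
```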

to be announced

Deiringer, Nora

Physical control

Eldar, Maor

The elastic properties of disordered materials depend on a multitude of internal degrees of freedom, such as particle positions, sizes, and interactions. Treating these as design parameters that can be varied makes it possible to attain complex response functions. However, manipulating a large number of microscopic degrees of freedom is often not feasible. The goal of our study is to explore whether material properties can be manipulated through low-dimensional external forcing. We build on ideas from physical learning to evaluate the required force, using two schemes: contrastive and Hebbian. The contrastive scheme requires perturbing the target degrees of freedom we wish to manipulate, while the Hebbian scheme only has access to their state. We develop learning schemes and characterize convergence as a function of the number of control and target degrees of freedom. We discuss the relation between the topology and properties of a material and its ability to be effectively controlled.

Learning to forage: mechanisms and hypotheses

Elhady, Ahmed

Foraging is a universal animal behavior. Most prior models and analyses of foraging behavior operate under the assumption that an animal already has knowledge of the distribution of resources in its environment. However, an animal that is new to an area must infer this statistical structure through an exploratory process. Refinement of such estimates can then improve foraging strategies over time. In this work, we develop a learning model describing an agent that learns the distribution of resources in a patchy environment, specifically the yield rate that sets the expected time to encounter a chunk of food. Estimates of the yield rate parameters are updated by Bayesian sequential updating. Agents learn the patch yield rate faster in patches where their foraging depletes the patch more quickly. This learning process can further be coupled with a patch-leaving rule, which can be varied to differentially weight reward versus information gain. Statistics of these gains, as well as of patch departures, can then be obtained as solutions to a first-passage-time problem, which can be solved analytically in some limits. Our stochastic model provides a framework for quantitatively studying learning in the context of patch foraging, introducing such hierarchical inference as a contributor to explore/exploit tradeoffs.
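
A minimal sketch of the inference step, under illustrative assumptions (Poisson food encounters at an unknown constant rate with a conjugate Gamma prior; the authors' model is richer, including depletion and patch leaving):

```python
import numpy as np

rng = np.random.default_rng(0)

# Food encounters in a patch as a Poisson process with unknown yield rate
# lam. A conjugate Gamma(a, b) prior on lam is updated after each observed
# inter-encounter time dt via (a, b) -> (a + 1, b + dt): Bayesian
# sequential updating of the yield-rate estimate.

lam = 2.0                              # true patch yield rate (items / time)
a, b = 1.0, 1.0                        # Gamma prior on lam, mean a/b

for n in range(1, 21):
    dt = rng.exponential(1.0 / lam)    # waiting time to the n-th food item
    a, b = a + 1.0, b + dt             # conjugate posterior update
print(f"posterior mean {a/b:.2f} +/- {np.sqrt(a)/b:.2f} (true rate {lam})")
```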

Learning and Adaptation in Particle Lenia

Elmeligy, Nada

Cellular automata exhibit a range of pattern-formation behaviors, from oscillations and seemingly chaotic patterns to extremely complex ones. Conway's Game of Life, with its three simple rules of birth, death, and survival, is one of the best-known cellular automata, and is Turing complete. Lenia is a powerful artificial-life simulation tool, initially introduced as an extension of Conway's Game of Life to continuous space, time, and states, and is therefore based on fields. Particle Lenia recasts the Lenia field as particles, with the system reflecting physical laws such as mass conservation and collision dynamics. We aim to investigate the potential of Particle Lenia for understanding the physics of learning and adaptive behavior in complex, dynamic environments. We achieve this by modifying the particle interaction rules to match bacterial behavior and by incorporating multiple species that serve as the particles' environment. We compare the behavior of this Bacterial Lenia to Particle Lenia to determine the conditions for learning and adaptivity. This work lays the groundwork for connecting concepts from physical learning and biological adaptation, using tools from Artificial Life, to contribute to a deeper understanding of learning emerging from simple algorithms.
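
For orientation, a single field-Lenia update step in its standard form (the kernel and growth parameters below are illustrative; Particle Lenia replaces the field with particles):

```python
import numpy as np

# Field Lenia: a real-valued world A in [0, 1] updated by
#   A <- clip(A + dt * G(K * A), 0, 1),
# where K * A is a convolution with a ring-shaped kernel and G is a
# Gaussian growth function. Parameters here are illustrative choices.

size, R, dt = 128, 13, 0.1
mu, sigma = 0.15, 0.015

yy, xx = np.mgrid[:size, :size]
r = np.hypot(xx - size // 2, yy - size // 2) / R
K = np.exp(-((r - 0.5) ** 2) / (2 * 0.15 ** 2)) * (r < 1)   # ring kernel
K /= K.sum()
fK = np.fft.fft2(np.fft.ifftshift(K))                       # kernel spectrum

G = lambda u: 2 * np.exp(-((u - mu) ** 2) / (2 * sigma ** 2)) - 1

rng = np.random.default_rng(5)
A = rng.random((size, size)) * (r < 2)      # random blob in the center

for _ in range(200):
    U = np.real(np.fft.ifft2(np.fft.fft2(A) * fK))          # neighborhood potential
    A = np.clip(A + dt * G(U), 0, 1)
print("total mass:", A.sum())
```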

Geometry of Accuracy Regimes in Neural Networks

Ersoy, Ibrahim Talha

When neural networks (NNs) are subject to L2 regularization, increasing the regularization strength beyond a certain threshold pushes the model into an under-parameterized regime. This transition manifests as a first-order phase transition in single-hidden-layer NNs and a second-order phase transition in NNs with two or more hidden layers. We investigate a framework for such transitions by integrating the Ricci curvature of the loss landscape with regularizer-driven deep learning. First, we show that a curvature change-point separates the model-accuracy regimes at the onset of learning and that it coincides with the critical point of the regularization-driven phase transition. Second, we show that for more complex data sets additional phase transitions exist between accuracy regimes, and that these again coincide with curvature change-points in the error landscape.
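
A toy probe of the transition described above (my sketch, not the authors' setup; whether and how sharply the norm drops as the regularization strength grows is exactly the question their analysis addresses):

```python
import numpy as np

rng = np.random.default_rng(1)

# Train a single-hidden-layer tanh network on a 1D regression task for a
# range of L2 strengths lam and record the final weight norm, a natural
# order parameter for the under-parameterization transition.

X = np.linspace(-1, 1, 64)[:, None]
y = np.sin(np.pi * X)

def final_norm(lam, h=16, lr=0.05, steps=5000):
    W1, b1 = rng.standard_normal((1, h)), np.zeros(h)
    W2, b2 = rng.standard_normal((h, 1)), np.zeros(1)
    for _ in range(steps):
        A = np.tanh(X @ W1 + b1)
        err = (A @ W2 + b2) - y                 # d(MSE/2)/d(prediction)
        dA = (err @ W2.T) * (1 - A**2)          # backprop through tanh
        W2 -= lr * (A.T @ err / len(X) + lam * W2)
        b2 -= lr * err.mean(0)
        W1 -= lr * (X.T @ dA / len(X) + lam * W1)
        b1 -= lr * dA.mean(0)
    return np.sqrt((W1**2).sum() + (W2**2).sum())

for lam in (0.0, 0.01, 0.03, 0.1, 0.3, 1.0):
    print(f"lam = {lam:4.2f}   final ||W|| = {final_norm(lam):.3f}")
```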

Contrastive learning through implicit non-equilibrium memory

Falk, Martin

The backpropagation method has enabled transformative uses of neural networks. Alternatively, for energy-based models, local learning methods involving only nearby neurons offer benefits in terms of decentralized training and allow for the possibility of learning in computationally constrained substrates. One class of local learning methods contrasts the desired, clamped behavior with spontaneous, free behavior. However, directly contrasting free and clamped behaviors requires explicit memory. Here, we introduce ‘Temporal Contrastive Learning’, an approach that uses integral feedback in each learning degree of freedom to provide a simple form of implicit non-equilibrium memory. During training, free and clamped behaviors are shown in a sawtooth-like protocol over time. When combined with integral feedback dynamics, these alternating temporal protocols generate the implicit memory necessary for comparing free and clamped behaviors, broadening the range of physical and biological systems capable of contrastive learning.
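
A minimal toy realization of the alternating protocol (my sketch under simplifying assumptions, not the authors' exact integral-feedback dynamics):

```python
# A single spring of stiffness k under force F, trained so that its free
# displacement x = F/k reaches a target. The local signal is the energy
# derivative dE/dk = x**2 / 2; integrating it with opposite signs during
# the clamped and free halves of each cycle accumulates the contrastive
# update without ever storing the free state explicitly.

F, target, beta, eta = 1.0, 0.5, 0.1, 0.5
k = 1.0                                  # initial stiffness (ideal: F/target = 2)
dt, half = 0.01, 50                      # time step, steps per half-cycle

for cycle in range(400):
    memory = 0.0                         # integral-feedback variable
    for phase, sign in (("clamped", -1.0), ("free", +1.0)):
        for _ in range(half):
            x = F / k                    # free equilibrium displacement
            if phase == "clamped":       # weakly nudge toward the target
                x += beta * (target - x)
            memory += sign * (x**2 / 2) * dt
    k += (eta / beta) * memory           # contrastive update from the integral

print(f"learned k = {k:.3f}, free displacement = {F/k:.3f} (target {target})")
```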

Understanding the Learning Dynamics of RNNs Trained in Closed-loop Environments

Ger, Yoav

Training recurrent neural networks (RNNs) on neuroscience-inspired tasks has proven to be a powerful way to generate hypotheses about the functionality of biological networks. However, most existing training paradigms focus on open-loop, fully observable tasks, whereas real-world learning typically unfolds in closed-loop, partially observable environments. In this study, we systematically examine the learning dynamics of RNNs trained in such closed-loop, partially observable settings. Remarkably, we find that even in one of the simplest possible tasks—a classic double integrator control problem—two identical RNNs with the same architecture and learning rule can exhibit strikingly different learning trajectories. Specifically, when trained in the closed-loop, partially observable setting, the networks display distinct phases in their learning, characterized by marked shifts in both the policy and internal representations. These phases do not emerge in the more conventional open-loop, fully observable version of the task. To probe these dynamics, we developed an analytically tractable framework using linear RNNs, which allowed us to explain the distinct phases observed in the closed-loop condition. These results offer novel insights into the interplay between policy development and representation learning in adaptive agents.
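
For concreteness, a sketch of the closed-loop, partially observable double-integrator setting (details assumed for illustration):

```python
import numpy as np

# Assumed form of the task: the state is (position, velocity), the control
# u is an acceleration (x'' = u), and the agent observes position only.

dt = 0.1

def step(state, u):
    x, v = state
    v += dt * u                      # double integrator: u drives velocity
    x += dt * v
    return np.array([x, v])

def observe(state):
    return state[:1]                 # partial observability: position only

# Closed loop with a memoryless policy: pure position feedback on a double
# integrator gives undamped oscillation, so the task cannot be solved
# without an internal estimate of the unobserved velocity.
state = np.array([1.0, 0.0])
for _ in range(200):
    u = -observe(state)[0]
    state = step(state, u)
print("final state:", state)         # still oscillating, far from rest
```

The memoryless feedback above never settles, which is one way to see why the partially observable version forces a recurrent agent to build an internal velocity estimate.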

Hierarchical Control of State Transitions in Dense Associative Memories

Grishechkin, Anton

The hierarchical organisation of cell identity is a fundamental feature of animal development with rich and well-characterized experimental phenomenology, yet the mechanisms driving its emergence remain unknown. The regulation of cell identity genes relies on a distinct mechanism involving higher-order interactions of transcription factors at distant regulatory regions called enhancers. These interactions are mediated by epigenetic regulators that are broadly shared between enhancers. By developing a new and predictive mathematical theory of the effects of epigenetic-regulator activity on gene network dynamics, we demonstrate that hierarchical identities are essential emergent properties of animal-specific gene regulatory mechanisms. Hierarchical identities arise from the interplay between enhancer competition for epigenetic readers and cooperation through activation of shared transcriptional programs. We show that epigenetic regulatory mechanisms provide the network with self-similar properties that enable multilineage priming and signal-dependent control of progenitor states. The stabilisation of progenitor states is predicted to be controlled by the balance of activities between epigenetic writers and erasers. Our model quantitatively predicts lineage relationships, reconstructs all known blood progenitor states from terminal states, and explains mechanisms of cell-identity dysregulation in cancer as well as the general differentiation effects of histone deacetylase inhibition. We identify non-specific modulation of enhancer competition as a central regulatory axis, with implications for developmental biology, cancer, and differentiation therapy.

Adapt to Bend: Ant Cooperative Transport of Soft Rods

Madar, Itai

Local interactions, external constraints, and an influx of information affect collective behavior in animal groups. In the context of cooperative transport by Paratrechina longicornis ants, a trade-off exists between the well-coordinated pull of uninformed pullers and the directional information contributed by informed leaders. When transporting rigid cargo, ants are strongly coupled to the load: each perceives identical local forces, ensuring perfect communication with zero delay. With flexible cargo, the material's compliance interferes with this coupling, introducing a delay in force perception among the ants. This lag in communication gives rise to complex emergent group behavior. Through experiments and theory, we investigate the mechanisms by which ants continue to transport large flexible cargo efficiently despite constrained communication.

to be announced

Maoutsa, Dimitra

Synchronization in Networks of Anticipatory Agents

Murat, Nazira

We investigate networks of coupled systems in which agents attempt to align their states with the anticipated states of their neighbors. This anticipation is based on past state information, which introduces history-dependent terms into the dynamics, manifesting as time delays. We analyze the stability of synchronized states in the presence of such delays and demonstrate that anticipation can induce synchronization in coupled map networks that would otherwise fail to coordinate their collective behavior. These findings provide valuable insight into the role of anticipation in the adaptive dynamics of physical and biological systems.
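
A toy version of the setup (the functional form of the anticipation is my assumption; whether it helps or hurts depends on parameters, which is the question the stability analysis answers):

```python
import numpy as np

rng = np.random.default_rng(2)

# Logistic maps on a ring, each coupled to an *anticipated* neighbor state
# extrapolated from past values, x_hat(t+1) = x(t) + (x(t) - x(t-1)). The
# extrapolation makes the dynamics history-dependent, which is how
# anticipation enters as an effective (negative) delay.

n, eps, a = 20, 0.4, 3.9
f = lambda x: a * x * (1 - x)

def sync_error(anticipate, T=2000):
    x_prev, x = rng.random(n), rng.random(n)
    for _ in range(T):
        nb = 0.5 * (np.roll(x, 1) + np.roll(x, -1))
        nb_prev = 0.5 * (np.roll(x_prev, 1) + np.roll(x_prev, -1))
        target = nb + (nb - nb_prev) if anticipate else nb
        target = np.clip(target, 0.0, 1.0)     # keep the map well defined
        x_prev, x = x, (1 - eps) * f(x) + eps * f(target)
    return x.std()                             # 0 means fully synchronized

print("spread without anticipation:", sync_error(False))
print("spread with anticipation:   ", sync_error(True))
```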

Adaptation under Dynamic Genotype-Phenotype Map as Out-of-Equilibrium Learning

Pham, Tuan

Genetic and neural networks are adaptive: they change slowly in response to the collective states of their constituent elements, genes or neurons. For genotypes encoded by such a slowly evolving network of gene regulation, subject to natural selection and mutation, adaptation is determined by the fitness of the phenotypes resulting from stochastic gene-expression dynamics on this network. By establishing a mathematical correspondence between fitness maximisation and a local non-equilibrium learning rule for regulatory connections, we show how the level of reciprocity of the network interactions results in a trade-off between genotype and phenotype. This trade-off, in turn, gives rise to phenotypes that are robust to noise within an intermediate range of external noise. We further use our analytical framework to compute the thermodynamic cost of adaptation within a full microscopic description of the underlying genotype-phenotype coupled dynamics on two different timescales, showing how robust adaptation corresponds to a noise-independent level of dissipation.

Fitness and Overfitness: Implicit Regularization in Evolutionary Dynamics

Rappeport, Hagai

Ever since Darwin, a common underlying assumption in evolutionary thought has been that adaptation drives an increase in biological complexity. However, the rules governing the evolution of complexity appear more nuanced, and a general theory remains elusive. Evolution is deeply connected to learning, where the notion of complexity, as well as its origins and consequences, is much better understood, with multiple results on the optimal complexity for a given learning task in various settings. The connection between evolution and learning can be formalized via several mathematical isomorphisms that allow one to port established results from learning theory to evolutionary dynamics. One useful such isomorphism is between the replicator equation of evolutionary dynamics and the Bayesian update rule, with evolving types corresponding to competing hypotheses and fitness in a given environment to the likelihood of observed evidence. In Bayesian learning, implicit regularization prevents overfitting and drives the inference of hypotheses whose complexity matches the learning challenge. We show how these results naturally carry over to the evolutionary setting, where they are interpreted as organism complexity evolving to match the complexity of the environment. Other aspects, peculiar to evolution rather than learning, reveal additional trends. One such trend is that frequently changing environments decrease selected complexity, a result with potential implications for both evolution and learning. This work offers new ways of thinking about the evolution of complexity, and suggests new potential causes for the evolution of increased or decreased complexity in different settings.
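
The isomorphism invoked here can be stated in one line: the discrete-time replicator update for type frequencies \(x_i\) with fitnesses \(f_i\) has the same form as Bayes' rule for hypotheses \(h_i\) with likelihoods \(P(e \mid h_i)\),

\[
x_i(t+1) = \frac{f_i\, x_i(t)}{\sum_j f_j\, x_j(t)} \qquad \longleftrightarrow \qquad P(h_i \mid e) = \frac{P(e \mid h_i)\, P(h_i)}{\sum_j P(e \mid h_j)\, P(h_j)},
\]

with type frequencies playing the role of prior probabilities and fitness that of likelihood.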

Geometric Control and Memory in Networks of Hysteretic Elements

Shohat, Dor

The response of driven frustrated media stems from interacting hysteretic elements. We derive explicit mappings from networks of hysteretic springs to their abstract representation as interacting hysterons. These mappings reveal how the physical network controls the signs, magnitudes, symmetries, and pairwise nature of the hysteron interactions. In addition, strong geometric nonlinearities can produce pathways that require excess hysterons or even break hysteron models. Our results pave the way for metamaterials with geometrically controlled interactions, pathways, and functionalities, and highlight fundamental limitations of abstract hysterons in modeling disordered systems.
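
A generic interacting-hysteron sketch of the kind of object being mapped onto (schematic: couplings are drawn at random here, whereas the paper derives them from actual networks of hysteretic springs):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hysteron i has state s_i in {0, 1} and switching thresholds H_up > H_dn;
# pairwise couplings J shift the local field each element feels.

n = 4
H_up = np.sort(rng.uniform(0.5, 1.5, n))        # up-switching thresholds
H_dn = H_up - rng.uniform(0.6, 1.0, n)          # down-switching thresholds
J = 0.1 * rng.standard_normal((n, n))           # weak pairwise interactions
np.fill_diagonal(J, 0)

def relax(s, H):
    # flip one unstable hysteron at a time until the configuration is stable
    # (couplings weak relative to the hysteresis gaps keep this well behaved)
    while True:
        h = H + J @ s
        unstable = np.flatnonzero(((s == 0) & (h > H_up)) | ((s == 1) & (h < H_dn)))
        if unstable.size == 0:
            return s
        s[unstable[0]] ^= 1

s, occupancy = np.zeros(n, dtype=int), []
drive = np.concatenate([np.linspace(-1, 2, 40), np.linspace(2, -1, 40)])
for H in drive:
    s = relax(s, H)
    occupancy.append(int(s.sum()))
print("up-sweep:  ", occupancy[:40])     # switching fields differ between
print("down-sweep:", occupancy[40:])     # the two sweeps: hysteresis
```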

Towards smart active materials

Simmchen, Juliane

A Hybrid Optical-Digital Implementation of Equilibrium Propagation Using Spatial Light Modulators

Vanden Abeele, Dimitri

Modern digital machine learning systems face two major challenges: high energy consumption and biologically implausible learning mechanisms. To tackle these issues, researchers have explored alternative learning approaches, such as Equilibrium Propagation (EP), which enables the training of energy-based networks that relax to an equilibrium. The main idea of EP is to consider a damped dynamical system in which inputs stay fixed while outputs are nudged toward target values. The local parameter update rule then compares the states of neighboring neurons, for different perturbations of the output neurons, once the network reaches its equilibrium. Meanwhile, significant advances have been made in the experimental realization of new neuromorphic hardware, such as Spatial Photonic Ising Machines (SPIMs), which exploit optical computing in free space to accelerate computations, offering parallelism, scalability, and low power consumption. This ongoing theoretical and experimental work aims to bridge the gap between these two fields by proposing a semi-optical implementation of EP using Spatial Light Modulators.
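
For reference, the two-phase Equilibrium Propagation update described above can be written (standard form of the rule; notation mine):

\[
\Delta\theta \propto -\frac{1}{\beta}\left(\frac{\partial E}{\partial \theta}\Big|_{s^{\ast}_{\beta}} - \frac{\partial E}{\partial \theta}\Big|_{s^{\ast}_{0}}\right),
\]

where \(s^{\ast}_{0}\) is the equilibrium reached with inputs fixed and outputs free, and \(s^{\ast}_{\beta}\) the equilibrium after nudging the outputs toward their targets with strength \(\beta\). For Hopfield-type energies, \(\partial E / \partial \theta\) for a weight depends only on the two neurons it connects, which is what makes the rule local and attractive for optical hardware.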

How to build compact and efficient networks for enhanced information processing capabilities?

Yadav, Manish

Understanding the structure-function relationship in complex networks is crucial for optimizing information processing. We propose a performance-dependent network evolution framework that reveals how minimal yet highly efficient network structures emerge through adaptive optimization. By leveraging reservoir computing principles, we show that evolved networks consistently outperform both traditional growth strategies and Erdős-Rényi random networks. These networks exhibit unexpected sparsity, asymmetric input-output node distribution, and adherence to scaling laws in node-density space, suggesting an inherent self-organizing principle that tailors network structure to computational demands. In the opposite direction, we propose a framework of task-specific network pruning that refines large reservoir networks by systematically identifying and removing redundant nodes. This approach demonstrates that computational efficiency is governed not merely by network size but by its topological organization. Pruning uncovers critical connectivity patterns that enhance information flow and memory retention, leading to compact, high-performing subnetworks. Our analysis highlights the role of density distributions, spectral radius, and asymmetric connectivity in maintaining computational capacity despite structural reduction. Together, these opposite yet complementary approaches of evolution and pruning offer a powerful paradigm for studying structure-function dynamics in complex networks. Our findings provide insights into task complexity and guide the design of more efficient, scalable, and interpretable machine learning architectures.
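
A minimal echo-state sketch of the pruning side of the story (generic reservoir computing with a deliberately crude importance criterion; the paper's evolution and pruning procedures are more sophisticated):

```python
import numpy as np

rng = np.random.default_rng(4)

# Task: reproduce the input signal delayed by 3 steps. Train a linear
# readout by ridge regression, then keep only the reservoir nodes with the
# largest readout weights and retrain, to see how much is redundant.

N, T, rho = 200, 2000, 0.9
u = rng.uniform(-0.5, 0.5, T)
y = np.roll(u, 3)                                  # target: 3-step memory

W = rng.standard_normal((N, N)) / np.sqrt(N)
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))    # fix the spectral radius
W_in = rng.standard_normal(N)

def run(idx):
    Wk, wk = W[np.ix_(idx, idx)], W_in[idx]
    x, X = np.zeros(idx.size), np.empty((T, idx.size))
    for t in range(T):
        x = np.tanh(Wk @ x + wk * u[t])
        X[t] = x
    return X

def fit(X, washout=100):
    Xw, yw = X[washout:], y[washout:]
    w = np.linalg.solve(Xw.T @ Xw + 1e-6 * np.eye(X.shape[1]), Xw.T @ yw)
    return w, np.sqrt(np.mean((Xw @ w - yw) ** 2))

full = np.arange(N)
w, err = fit(run(full))
keep = full[np.argsort(-np.abs(w))[: N // 4]]      # naive importance pruning
_, err_small = fit(run(keep))
print(f"RMSE full ({N} nodes): {err:.4f}   pruned ({keep.size} nodes): {err_small:.4f}")
```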

Learning in adaptive disordered solids

Zu, Mengjie

Across nature and technology, we encounter systems that have been shaped by experience or design to respond in specific, reliable ways. Yet, despite the widespread presence of such adaptive behavior, we still lack a clear, cross-disciplinary framework to define and compare what it means for a system to "learn." Our work aims to clarify this by studying learning in adaptive disordered solids. We introduce a simple, general formulation of independent response, which measures whether multiple features of a system can be adjusted separately without mutual interference. This concept offers a unified lens to evaluate learning capacity across diverse systems and provides a powerful tool for inverse design. Using this framework, we demonstrate that disordered solids can exhibit fully independent mechanical responses—achieving the theoretical maximum of tunability—unlike ordered systems such as crystals, which are inherently limited in this regard. Through cyclic training on sequences of target properties, such as Poisson’s ratio, the material adapts its internal parameters to encode and retrieve multiple functionalities. These learned states are stable and allow transitions without further structural change. Altogether, our results uncover how structural disorder enables enhanced learning, suggesting a broader physical basis for memory and adaptability in both natural and artificial systems.