Date
30 MAR 2026
Entrance fee
Free
Time
5:30 PM - 7:30 PM
Address
Cantersteen 16, 1000 Brussels
In person
This in-person event will take place on Monday 30 March at 17:30 in the FARI Test and Experience Center (Cantersteen 16, 1000 Brussels).
Andries Rosseau studied Physics and Astronomy at Ghent University, writing his master's thesis on machine learning and nuclear fusion. After working as a machine learning engineer at In The Pocket, he started his PhD at the VUB under the supervision of Prof. Ann Nowé, where he currently works on continual learning and plasticity for deep neural networks (i.e., how intelligent systems can solve many tasks without eventually losing their ability to learn). His work draws on concepts from statistical physics and extends to AI safety, reinforcement learning and large language models. Alongside his technical research, he speaks on the (existential) risks associated with advanced AI, often linking AI to broader themes such as geopolitical instability and energy. His broader interests include effective altruism, AI for science, cosmology, evolutionary theory, mindfulness and trail running.
Description of the session: Some of the most complex and intelligent systems in nature — from neural circuits in the brain to flocking birds and evolving ecosystems — operate in a special regime known as the edge of chaos. This regime lies on the critical boundary between order and disorder, where information flows freely and adaptive systems thrive. In this talk, I will show how we can apply this same principle to artificial neural networks, which underlie all of modern AI. Keeping neural networks at the edge of chaos has a surprising effect: it enables them to continually adapt and learn from new information, without losing the ability to pick up new skills over time. This continual-learning capacity is a core requirement for human-level and superintelligent AI, but one that current frontier models still lack.
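The edge-of-chaos idea above can be illustrated with a toy experiment (a minimal sketch for intuition, not code from the speaker's work): in a deep random tanh network, the weight scale `sigma_w` determines whether a small input perturbation dies out (the ordered regime), grows until the two signals decorrelate (the chaotic regime), or is roughly preserved (the critical edge, which for tanh networks with zero bias sits near `sigma_w = 1`).

```python
# Toy illustration of the edge of chaos in deep random networks:
# track how far apart two nearly identical inputs end up after
# passing through many random tanh layers, for different weight scales.
import numpy as np

rng = np.random.default_rng(0)

def perturbation_growth(sigma_w, width=512, depth=30):
    """Final distance between two nearby inputs pushed through a deep
    random tanh network with i.i.d. N(0, sigma_w^2 / width) weights."""
    x = rng.standard_normal(width)
    y = x + 1e-3 * rng.standard_normal(width)  # slightly perturbed copy
    for _ in range(depth):
        W = rng.standard_normal((width, width)) * sigma_w / np.sqrt(width)
        x, y = np.tanh(W @ x), np.tanh(W @ y)
    return np.linalg.norm(x - y)

for sigma_w in (0.5, 1.0, 2.0):  # ordered / near-critical / chaotic
    print(f"sigma_w={sigma_w}: final distance {perturbation_growth(sigma_w):.2e}")
```

In the ordered regime the distance collapses toward zero (the network forgets the perturbation), in the chaotic regime it blows up until the two signals are unrelated, and near the critical scale information about the input survives across many layers — the property the talk connects to continual learning.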
Whether we want this is a different question: many AI researchers now believe the creation of advanced AI could lead to a loss of control over systems more intelligent than ourselves, potentially even posing an existential risk to humanity. At the same time, nations are racing each other to be the first to create superintelligence. This seems reckless, given that we are still far from finding a robust solution to the problem of aligning AI with human values. Our work contributes by showing that the same critical regime that enables continual learning also enables effective propagation of alignment signals, forming a prerequisite for safe AI systems. This poses a fundamental tension: keeping AI at the edge of chaos appears necessary for safety, while simultaneously amplifying the very capabilities that make safety so urgent.
Is this for you? Administrators, industry professionals and AI enthusiasts are invited to join this session.
A balanced mix of learning and relaxation – Attendees will have the opportunity to engage in insightful discussions with Andries, followed by a casual happy hour with snacks and beverages!
This event is free to attend, but registration is compulsory. This event is organized by FARI – AI for the Common Good Institute.
We want everyone to feel welcome and able to participate fully in this event. If you have a disability, please let us know. We will do our utmost to accommodate you in the best possible conditions.