
AI, Robots, & Us: Living with Intelligent Agents?

Authors

Kenneth Hannaert, Laura Jousset

The FARI Brussels Conference 2025, held on 17 and 18 November, marked the fourth edition of this flagship event dedicated to Artificial Intelligence (AI), data and robotics for the common good. Over two days, at Studio Flagey (Conference Day, 17 November) and BeCentral (Partners Day, 18 November), the conference brought together researchers, policymakers, entrepreneurs and citizens to explore questions around this year’s theme “AI, Robots, & Us: Living with Intelligent Agents?”.

This 2025 edition focused on the growing presence of AI systems and robotic agents in everyday life – from domestic tasks and public services to scientific discovery, healthcare and even security. It invited participants to reflect on what defines an “agent”, what it means to share our environments with them, and how we can ensure that this technological shift remains compatible with democratic values, safety and ethics.

More than 800 participants attended this year’s conference, which featured 15 speakers across 10 sessions – illustrating the growing interest in responsible AI and robotics in Brussels and beyond.

In recent years, FARI – AI for the Common Good, a joint research institute led by the Vrije Universiteit Brussel (VUB) and the Université Libre de Bruxelles (ULB), has positioned itself as a key actor in bridging research, public administrations, industry and citizens around trustworthy AI, data and robotics.

With advancements in AI, data, and robotics, questions about their ethical and societal impacts are growing. The FARI Brussels Conference aimed to provide a platform for critical inquiry, bringing participants together to build a community of practice grounded in dialogue, shared learning, and collaboration. In 2025, the conference continued this mission by creating space for exchange and collective reflection on how intelligent agents can be developed and governed in a way that serves society as a whole, while also raising questions about autonomy, accountability and control.

This year’s conference was made possible thanks to the valued support of our sponsors and partners, including Wallonie Bruxelles International; visit.brussels with STIB/MIVB; lifetech.brussels (hub.brussels); BruBotics (Vrije Universiteit Brussel); the British Embassy; Red Hat; EY, and Fulbright. We also benefited from media partnerships with BRUZZ and RTBF.

© Thierry Geenen

November 17: FARI Brussels Conference 2025 at Studio Flagey

The first day of the FARI Brussels Conference 2025 at Studio Flagey offered a full programme of keynotes and panel discussions centered around the theme “AI, Robots, & Us: Living with Intelligent Agents?”.

Throughout the day, experts from computer science, robotics, law, social sciences, ethics and public policy shared their perspectives on how intelligent agents are reshaping our societies.

The atmosphere at Flagey was both reflective and hands-on: beyond presentations, participants engaged in Q&A, informal exchanges and networking moments, connecting different communities around shared concerns about the future we are building with, and through, intelligent agents.

Opening remarks

We were honoured to open the day with a warm welcome from Carl Morch, FARI Co-Director, followed by remarks from Marius Gilbert, Vice-Rector for Research and Valorization and Vice-Rector for Culture and Scientific Mediation at Université Libre de Bruxelles, who highlighted the strong collaboration between ULB and VUB. We were also pleased to welcome Anne Sherriff, British Ambassador to Belgium, who officially launched the event and set the tone for the conversations ahead.

Keynote Speeches: Re-centring Intelligent Agents on Human Agency

As in previous editions, the 2025 conference was punctuated by keynote talks from internationally recognized voices in AI, robotics, ethics, culture, and public policy. While coming from different disciplines, they converged on a common message: intelligent agents must remain at the service of people and the public interest, not the other way around.

The keynote sessions invited participants to look beyond the hype surrounding AI and robotics and focus instead on concrete capabilities, limits, and trade-offs. Speakers examined where intelligent agents can meaningfully help address societal challenges, where they risk introducing new forms of dependency or exclusion, and where choosing not to deploy certain systems may be the responsible option.

Another strong thread running through the day was the need for democratic oversight and citizen involvement. From the design of algorithms to their deployment in cities, workplaces or public institutions, keynote speakers highlighted that questions of power, representation and voice are central. Intelligent agents should augment human judgment and collective decision-making, not quietly replace them.

The conference began with an inspiring keynote speech from Marion Carré and a spotlight talk from Martin Isaksson, who together set the tone for the day. After lunch, renowned professor Arvind Narayanan gave his keynote speech.

Marion Carré

Marion Carré, founder of Ask Mona, works at the intersection of art and AI, creating tools that let visitors “talk” with artworks. Over the past decade, her NLP (Natural Language Processing) and generative AI systems have been adopted by major institutions – from French tourism services to the Musée national des beaux-arts du Québec and the Palace of Versailles. These tools make it easier for visitors of all backgrounds to ask questions, learn, and build lasting connections with museums. Drawing on her humanities background and her latest book, Marion reflected on AI’s growing role in everyday life and warned against a “moving walkway” model that fosters speed, dependence, and standardized content. She instead promoted a “treadmill” approach that preserves effort, critical thinking, diversity, and human agency through a better understanding of how AI systems are designed and used.

© Thierry Geenen

Martin Isaksson

Martin Isaksson, GTM lead at Red Hat, presented the company’s vision for sovereign AI – systems that align with a nation’s or organisation’s own laws, values, and strategic interests rather than relying on opaque foreign models. He noted that AI has shifted from a race for speed to a matter of control and trust, with governments increasingly demanding transparency in model training and governance. Sovereign AI, he argued, requires the ability to run models anywhere, on any hardware, with zero-trust security and strong local capabilities, all built on open source. Red Hat’s modular sovereign AI stack enables organizations to operate open-source models efficiently at scale, allowing them to move from model consumers to true model providers in control of their data, models, and infrastructure. He concluded by emphasizing that sovereign AI is about owning innovation rather than restricting it, and invited participants to continue the discussion.

Arvind Narayanan

Arvind Narayanan, computer science professor at Princeton University and co-author of the book AI Snake Oil, examined why LLM-based AI agents often fail and where they genuinely succeed. He argued that most agent deployments – whether for shopping, customer support, or even automating science – fall short due to a capability-reliability gap, weak interfaces, and efforts to automate tasks that do not need automation. He pointed to software engineering as a real, though early, success case, where agents augment developers like fast junior colleagues rather than replace them. Arvind emphasized designing agents for human collaboration and predicted gradual, decade-long workflow transformation rather than sudden job automation.


© Thierry Geenen

Key takeaways from the Panel Discussions

AI agents in a LLM world: what and why?

Professor Tom Lenaerts (FARI Academic Director) opened the session on “AI agents in an LLM world,” introducing a conversation that explored philosophical, ethical and technical perspectives on autonomous AI systems. Philosopher Xabier Barandiaran warned that LLM-based agents are “undead” interlocutors acting in our language-based society, shifting us from an attention to an intention economy and raising urgent questions about who controls this new automated agency. AI ethicist James Wilson argued that agents amplify both opportunity and risk, highlighting environmental costs, safety concerns, and the danger of de-humanising work if AI simply accelerates business “tick speed” without redesigning jobs around human flourishing. Researcher Zhijing Jin explained her work on causal and moral LLMs, building AI “scientists” for rigorous causal inference and auditing models for authoritarian bias, historical revisionism and threats to democracy. In the closing discussion, speakers agreed that technology alone cannot solve these challenges: democratic governance, regulation, open and public AI initiatives, and shared core values (such as human rights) are essential to ensure AI agents remain aligned with society rather than replacing human agency.


© Thierry Geenen

Do’s and Don’ts for Human, AI, and Robot Collaboration

Professor Ann Nowé (FARI Academic Director) moderated a panel on the “dos and don’ts” of human-AI-robot collaboration, bringing together perspectives from academia, big tech, and public administration. Roboticist Tony Belpaeme showed how robots are moving from purely physical tasks to deeply social roles, highlighting technical gaps (like poor speech recognition for non-English and atypical voices) and both the promise and ethical ambiguity of using companion robots to tackle loneliness. Sasha Vezhnevets from Google DeepMind traced the shift from reward-maximizing AI to foundation models steeped in human culture, arguing that future “agentic” AI should be governed by social norms, conventions and appropriateness rather than a single universal alignment goal. Francesco Raffaele Ferri described how the Emilia-Romagna region uses massive public compute and a digital twin (“Amartya”) to simulate inequality and policy impacts. He shared findings from an internal AI copilot trial showing big efficiency gains on well-defined tasks but potential harm to creativity and performance when misused. In the Q&A, speakers and audience debated AI’s role in loneliness, data ownership, worker wellbeing and inequality, stressing the need for democratic governance and public, European AI infrastructures so these technologies genuinely serve the common good.

Simulating and imagining a society with AI Agents 

This session on “simulating and imagining a society with AI agents”, led by Professor Geoffrey Aerts (FARI Academic Director), examined opportunities and risks across technical, institutional and geopolitical levels. Jordi Cabot (LIST) noted that multi-agent systems are emerging but LLMs remain biased, culturally narrow, and resource-intensive, calling for bias audits, better support for low-resource European languages, and clear governance for human-AI decision-making. Katrīna Kūkuma presented Riga City Council’s real-world AI experiments, framing AI as a systemic transformation rather than a plug-and-play solution. She underlined the importance of data and technological maturity, capacity-building across departments alongside central teams, and the central role of citizens as drivers of innovation.
Another important set of discussions revolved around access: who gets to design, study and benefit from intelligent agents? Speakers emphasized the need for open, high-quality data, shared infrastructures and inclusive training opportunities so that smaller organisations, public institutions and civil society can also experiment with and critically engage in AI and robotics.

Regulating and governing AI and Robotic agents

In this closing session on “regulating and governing AI and robotic agents”, moderated by Professor Gregory Lewkowicz (FARI Academic Director), the panel examined whether current laws can regulate emerging AI and robotic agents and how democratic oversight should guide their use. Ronald Leenes (Tilburg University) argued that AI blurs the line between humans and products, calling for layered regulation focused on design, liability, and embedding social norms. Jarmo Eskelinen (Data-Driven Innovation Initiative) stressed that cities need contextual rules on when AI may be used, ensuring human decision-making in sensitive domains. Claudia Chwalisz (DemocracyNext) highlighted that AI governance is a democratic challenge requiring citizen involvement and new data governance models. She emphasized practical sector-specific guidance, noting that different types of agents demand tailored oversight rather than new overarching laws. Finally, Patrice Latinne (EY) spoke about the growing autonomy of AI and robotic systems and the need to govern them by design, with clear human oversight and risk-based rules to ensure safety and trust.

Farewell talk

FARI’s closing keynote by one of FARI’s founders, ULB professor Hugues Bersini, was both a warm farewell and a manifesto for democratic AI. Reflecting on years of work at the intersection of algorithms and public life, he warned about the growing fusion of algorithmic and political power, which can erode democratic oversight and concentrate control if left unchecked. Using concrete examples from COVID-19 (vaccination systems, QR codes), school admissions, mobility in Brussels, and local energy communities, he advocated that the algorithms governing our daily lives should be treated as numerical commons: designed and overseen collaboratively by technical experts, domain specialists, randomly selected citizens, and, when needed, elected officials. He highlighted FARI’s mission to build AI for the Common Good at the local scale in Brussels and called for stronger cooperation between universities and public administrations to actually deploy these tools. In the Q&A, he also raised environmental concerns about Large Language Models (LLMs), urging restraint and critical reflection on their use.


© Thierry Geenen

The end of the conference was also the moment to thank Hans De Canck, co-founder and former co-director of FARI. He closed by reaffirming FARI’s commitment to transparency, collaboration, and responsible technological progress.

Collaboration for Ethical AI Governance

In her closing remarks, FARI Managing Director Karen Boers reaffirmed FARI’s mission to advance AI, data, and robotics for the common good by bridging research with public administrations, industry, and citizens. She also highlighted what is next, and most notably the launch of the Public Interest AI Network, inviting partners to join a growing global community focused on AI in the public interest.

The 2025 discussions confirmed that no single actor can shape the future of intelligent agents alone. Panels on multi-stakeholder governance emphasized cooperation between citizens, companies, researchers, public administrations and international organizations.

Examples showcased how participatory methods, civic dialogue and co-creation processes can help define acceptable uses of AI and robotics, support shared oversight and build long-term trust in technological transitions.

November 18: FARI Brussels Conference 2025 – Partners Day

The conference continued on 18 November with the Partners Day at BeCentral, co-organised with multiple partners and focused on deep-dive sessions, workshops, and hands-on collaborative formats.

Throughout the day, partners and participants explored concrete applications of intelligent agents through workshops and focused sessions, in sectors such as public administration, mobility, health, sustainable urban development and democratic participation. Discussions looked at how cities and regions experiment with AI-driven tools, how data infrastructures are built and governed, and how organizations can develop internal capacity to work with AI, data and robotics in a responsible way.


© Thierry Geenen

Across the day, the exchanges tackled a wide range of practical and policy-facing topics: how to build responsible AI strategies in organisations; how AI regulatory sandboxes can be implemented in practice; what “AI safety” means when framed around the public interest; and how international cooperation can strengthen trustworthy AI across borders. Participants also explored data stewardship and data spaces to enable better public services, debated the impact of AI on democratic processes, examined human–AI–robot cooperation and interoperability, and looked at concrete innovation pathways – from healthtech applications combining AI and robotics to accelerator pitches turning ideas into real-world solutions.

Partners Day was co-organised with a diverse group of organising partners, including the Knowledge Centre Data & Society; the AI Office, CybeRights, and the University of Bologna (EUSAIR); the Global AI Policy Research Network (represented by Delft University of Technology and the Uniarts Research Institute); the Québec Representation to Belgium; The Data Tank; BUDA; Make.org; NASK and the Łukasiewicz – Poznań Institute of Technology (INVEST project); lifetech.brussels; and BruBotics (VUB research lab). Additional sessions were co-organised with the Public Interest AI Network and the Fulbright Alumni Association of Belgium (FAAB), alongside FARI – AI for the Common Good Institute (ULB & VUB).

 


© Thierry Geenen

Food for thought

The sessions at the FARI Brussels Conference 2025 raised a number of questions:

Living with intelligent agents: As AI systems and robots become embedded in homes, streets, institutions and workplaces, how do we define the roles we are comfortable assigning to them – and which responsibilities must remain strictly human?

Democracy and agency: How can we ensure that the deployment of intelligent agents strengthens democratic participation, rather than enabling new forms of surveillance, nudging or concentration of power?

Safety, robustness and trust: What technical, organizational and legal safeguards are needed to guarantee that intelligent agents behave reliably, can be contested when necessary and remain under meaningful human control?

Sustainability and resources: Intelligent agents depend on data, energy and infrastructure. How can we design and use them in ways that support environmental and social sustainability, instead of increasing pressure on resources?

Inclusion and justice: Who is involved in deciding where and how intelligent agents are used? How do we prevent these technologies from exacerbating existing inequalities or creating new forms of exclusion?

Human creativity and autonomy: As intelligent agents assist with more cognitive and creative tasks, how do we preserve space for human curiosity, experimentation and dissent – and avoid treating complex social questions as mere optimisation problems?

Over two days, experts, policymakers, practitioners and citizens engaged with a central challenge: what kind of shared future do we want with intelligent agents, and under which conditions? The reflections and encounters of the FARI Brussels Conference 2025 offer an important basis for shaping technologies that support human rights, social justice and sustainable development, in Brussels and beyond.

Recordings of sessions from Conference Day are available on FARI’s YouTube channel, allowing a wider audience to revisit the debates and continue the conversation.

FARI will keep working with its partners and communities to turn its vision of AI, data and robotics for the common good into concrete projects, guidelines and opportunities for learning – and to ensure that, as intelligent agents become part of our daily lives, they do so in a way that genuinely serves us.

We look forward to returning in 2026 for more discussions and advancements – stay tuned and subscribe to our newsletter so you don’t miss further opportunities to connect with our communities at our events and activities.


© Thierry Geenen
