Author
Léa Rogliano
In 2025, FARI – AI for the Common Good Institute launched a new pilot format called "Anchoring Sessions" to support affiliated FARI researchers who wish to add a citizen engagement component to their research. A first session, shaped as a focus group, was organised with Anastasia Karagianni from the LSTS research group (VUB). If you are a FARI researcher willing to organise a workshop to raise the social readiness level of your project, please reach out to the team at ceh@fari.brussels.
Anastasia Karagianni, Doctoral Researcher at the LSTS research group of the Law and Criminology Faculty at VUB and former FARI Scholar, initiated a focus group on auditing smart wearables (e.g., connected rings and glasses) through a bottom-up approach based on equity and safety by design.
In this article, Léa Rogliano, head of FARI’s Citizen Engagement Hub (CEH), asks Anastasia about her experience and feedback.
What are we looking for when we open up our research to third parties? What can we expect from such collaborations? What are the best methods for achieving a satisfactory result?
The following article is a transcript of this conversation. The CEH’s mission is to stimulate exchanges between researchers and civil society and to build concerted innovation for the common good.
L.R: What does AI for the common good mean to you as a researcher?
A.K: AI is an open field that researchers across disciplines are exploring from different angles and perspectives. It is really important to challenge it from our own standpoints in order to help build a responsible and inclusive technological future. As a legal researcher, this means that I always have to ask myself how beneficial AI solutions are for the common good, for citizens, and for society as a whole. This also includes being attentive to gender implications, since AI systems can unintentionally reproduce or amplify gender biases if they are developed or deployed without a critical approach. My goal is to propose ways to close legal gaps so that AI can be developed and used in a legally sound and socially beneficial manner, one that respects equality, promotes fairness, and ensures that no group is disadvantaged by technological progress.
L.R: Thank you. Could you please say a word about the concept of “verifemmication” you invented in your PhD research?
A.K: Yes, of course. In my thesis, I apply feminist epistemologies. Throughout my research, I came across verification principles (e.g., model checking, data verification) and verification standards. These are used to ensure that an AI system or model is correctly designed and functions as intended. I asked myself how I could interpret these verification processes from a feminist point of view. This is how the concept of “verifemmication” came about.
You might wonder why I applied a feminist lens. Because one in three women experiences gender-based violence. It is therefore striking that AI designers and providers do not sufficiently account for the risk that their systems may be weaponised for such harms, including image-based sexual abuse. “Verifemmication” is therefore an approach that seeks to embed feminist, equity-based, and equality-driven criteria into verification processes prior to the deployment of an AI system. My point is that equity must be integrated into the very design approach of these technologies.
L.R: Thank you. Can you explain how you came up with the idea of organising a focus group?
A.K: I wanted to test the concept of “verifemmication” in order to evaluate whether it makes sense to other audiences and how they could use it in their fields. That is why I reached out to FARI’s Citizen Engagement Hub.
In the workshop we organised, we applied the “verifemmication” concept to smart wearables. We focused on the Ray-Ban Meta AI glasses recently launched on the European market. I chose this example because, although I consider this product innovative and potentially helpful in daily life, I am also wondering whether it raises significant privacy challenges. For instance, the glasses include a live-streaming feature, and I wondered what the privacy consequences might be for bystanders.
I also questioned whether this feature could create an entry point for gender-based violence, knowing that such violence unfortunately occurs in digital environments as well. I imagined how someone with harmful intentions might use this technology. Through this reflection, I realised that certain risks had not been adequately considered before the product was released to the public.
L.R: Could you tell us about the structure of the workshop?
A.K: The first part of the workshop focused on unpacking this technology and exploring the features of the glasses. For instance, we examined the extent to which AI technologies, such as facial recognition and voice assistance, are embedded in these glasses, how users can activate these features, and under which circumstances. We also assessed whether the glasses include any transparency mechanisms. For example, there is an LED light that switches on when the camera is recording.
In the second part, we tried to imagine how a bystander, meaning a person who is not using the glasses, might feel about the passive engagement he or she has with this technology. For instance, does the user ask for the consent of the person who might appear in the video? Would asking for consent be effective or not?
In the last part of the workshop, we zoomed in on the “verifemmication” approach. We put ourselves in the positions of the designers of such products and asked how we could address the concerns identified in the second part of the workshop. As a group, we asked ourselves the following question: which additional features would we embed in the device to prevent the previously identified risks?
L.R: What about the people who wear these glasses? Would you also like to organise testing groups with them? And what about the socio-economic dynamics, given that I assume this product is quite expensive?
A.K: Thank you for this question. Actually, the more I focus on this topic, the more I realise that we have very limited raw data about the interactions between the person using the product and the people around them. This is why we tried to provide this perspective in the workshop by conducting a questionnaire with a person who owns these glasses. It was really interesting because, to some extent, the person was aware of certain risks, but for other risks the comment was “Interesting, I didn’t think of this.” This clearly showed us that there is a need to raise awareness about the potential issues with this product.
I was recently at Valencia airport, where a Ray-Ban store was selling these Ray-Ban Meta AI glasses in many models. It seems to me that both companies, Ray-Ban and Meta, are constantly trying to improve these products.
And in their view, the improvement, the better version of this product, is the one that embeds more AI technology. But for me, the better version of this product would be the more privacy-friendly one.
L.R: Even if you have already given us a lot of elements, what are the main outcomes you take away from this workshop?
A.K: As a researcher, you sometimes feel isolated. You are working in a university, mostly behind a desk, reading theoretical articles. Yet, when we examine the law, we always need to put it into practice. The workshop showed me that we actually need more practical insights. We need to question who benefits from technologies and who does not, what the pros and cons are, what the business issues are, and how citizens and communities will be affected. Research should adopt a more ethical approach, including real-world testing to see how the law could work well or not, because the law ultimately applies to society. So we need to bring these societal and practical insights into the research.
Regarding the workshop itself, a key conclusion was that it would probably be better to test this approach with more targeted groups. We would probably gather different perspectives and feedback if we tested this approach only with policymakers or only with researchers. So for now, at this initial stage, we have an overview, a bigger picture of what the “verifemmication” approach is and how it could benefit citizens. But to be more specific, we would need to test it further with more targeted groups to generate more specific insights.
L.R: Did you face any difficulties, or was it an easy process?
A.K: I wouldn’t say that I faced a significant difficulty. However, since I tried to run this workshop in a hybrid format (both online and in person), it turned out to be a bit ambitious. I really wanted to engage with all the working groups, but we split the participants into two breakout groups, and I wasn’t always able to interact with everyone as much as I had hoped. Next time, I will organise one workshop in person and a separate online workshop for participants who are not in Brussels or can only participate remotely. I won’t combine these formats again, because it is not a seminar or a lecture, and I really want to pay attention to every piece of feedback.
L.R: How long did it take in total to organize this workshop?
A.K: It didn’t take up too much of my time, thanks to the support of FARI’s CEH. I would say that I spent between five and eight hours in total. We started organising the workshop one to two months in advance and tried to meet every two weeks with the FARI team (Alice Demaret). Through these meetings we summarised progress, provided updates, sent invitations, and coordinated all the practicalities of organising an event.
L.R: And what about the structure of your workshop? How much time did it take to shape the format?
A.K: I would say that most of the questions and the format were already prepared, because I had drafted them for a previous conference submission that was finally rejected. Since the proposal was already in place, I would estimate that it took me about two hours to adapt it for this workshop.
L.R: Will you integrate these outcomes in your thesis or in an article?
A.K: I would really like to include these insights in my thesis. However, as I mentioned earlier, my research is theoretical rather than empirical, and in order to avoid administrative barriers (I would probably have to explain the methodology, or why I invited these particular people), I plan to summarise the insights broadly in a paragraph. My thesis does not focus only on the Ray-Ban Meta glasses; the questions I address are legal in nature.
L.R: Would you be interested in writing an article about the focus group or in organising more workshops in the future?
A.K: I would like to organise more workshops in the future because, as I explained earlier, I would really like to have the possibility to test this approach with more targeted groups, for example only with policymakers, or exclusively with researchers or citizens. I would like to gather diverse perspectives in order to develop a more complete understanding of the challenges and risks associated with these AI products.
L.R: Okay. So let’s talk next year?
A.K: Yes, thank you. Looking forward!
L.R: What advice would you give to researchers who want to include citizen participation in their work?
A.K: There are only positive outcomes, such as many valuable insights. You will likely explore a new dimension of your research because, as I believe, researchers need to stay connected with society and bring those perspectives into their work.
In terms of organising the workshop, I would tell other researchers: “As long as you have a concrete idea in mind, I assure you that everything will work out. If you also have the support of FARI’s team, you will feel safe and supported. You are not alone, you have a team, and even if you forget something, the FARI team will likely cover it for you”.