The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> OLGA CRONIN: Hi, everyone, and thanks a million for joining us here today. We're very grateful to the organisers of the IGF. INCLO is a network of 17 national civil liberties and human rights organisations across the Global North and South that work together to support rights and freedoms. I won't mention all 17 members to save time, but they include the ACLU in the U.S., the Egyptian Initiative for Personal Rights in Egypt, Liberty in the UK, one in the West Bank and Conectas in Brazil. We also have member organisations in Ireland, Hungary and Argentina, which is why we're here today. My name is Olga and I am a member of ICCL, an INCLO member organisation. We have Adam Remport from our Hungarian member organisation and also Tomás Ignacio Griffa from CELS in Argentina. And we are also joined online by Victor Saavedra and by Timilehin, who is based in Toronto.
Most people in this room probably already know what FRT is, but very briefly: facial recognition is a biometric technology used to identify individuals through their facial features. FRT compares a faceprint, a template taken from an image that can be sourced from CCTV, for example, against a database of stored biometric templates of people who are known. The image of the unknown person is called a probe image. The database of stored templates of known people is called a reference database, and if you're wondering what kind of reference databases are stored and used by police, you can think of passport databases or police mugshot databases. These systems process unique biometric facial data, comparable to iris scans or fingerprint biometric data. Very quickly, there are three points I would like to make about FRT: the live and retrospective uses, the threshold values fixed for probable matches, and the fact that it is a probabilistic technology. Live FRT compares a live feed against a watchlist to find a possible match, which generates an alert for police to act upon.
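To make the probe image and reference database idea concrete, here is a minimal Python sketch of how a retrospective search conceptually works. It is illustrative only, not any vendor's actual pipeline: the 512-dimensional templates, the database contents and the cosine-similarity measure are all invented for the example, and a real system would extract templates with a trained face-embedding model rather than random vectors.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two biometric templates (face embeddings)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_reference_database(probe, reference_db):
    """Rank every known template against the probe, best match first."""
    scores = [(name, cosine_similarity(probe, template))
              for name, template in reference_db.items()]
    return sorted(scores, key=lambda item: item[1], reverse=True)

# Toy reference database: in a real deployment these templates would be
# produced by a face-embedding model from passport or mugshot photos.
rng = np.random.default_rng(0)
reference_db = {f"person_{i}": rng.normal(size=512) for i in range(1000)}

# A "probe" template from a CCTV still: here, a noisy copy of one entry.
probe = reference_db["person_42"] + rng.normal(scale=0.5, size=512)

for name, score in search_reference_database(probe, reference_db)[:3]:
    print(f"{name}: similarity {score:.2f}")
```

Even in this toy setup the output is a ranked candidate list with similarity scores, which is essentially what the "investigative lead report" described below looks like.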
Retrospective use means comparing still images of the faces of unknown people against a reference database to try to identify them. Now, the European Court of Human Rights and the European Union have taken the view that live use of FRT is more invasive than retrospective use, but tracking a person's movements over a significant length of time can be as invasive, if not more invasive, than one instance of realtime identification. For the FRT system to work, there is a threshold value that determines whether the software flags a possible match. If this is fixed too low it creates a high false positive rate; if it is fixed too high, genuine matches are missed. There is no single threshold that eliminates all errors. So when you think about what a police officer will get in their hand afterwards, if they use FRT they will essentially get a list of potential candidates: person A with a percentage similarity score next to them, person B with another similarity score, and so on. How long this list is can be anyone's guess, because it largely depends on the database and a number of other factors. Very quickly, I just included this picture of a man called Robert Williams from Detroit. This is called an investigative lead report, from the Michigan State Police, in respect of Robert Williams, who was wrongfully detained as a shoplifting suspect. We could do a whole session on Robert's case, but I thought it was interesting to show the probe image that was used in his case. You can see it there: it's a CCTV still, and the picture on the right is of his driver's license. Please forgive the slide, but it's important to note that Robert was arrested after an algorithm identified him as the ninth most likely match for the probe image. Two other algorithms were also run: one returned 240 candidates and Robert wasn't on that list, and the other returned no results at all, yet he was still arrested and detained. This is not the silver bullet solution it's often presented to be, and there are an increasing number of people who have been wrongly accused due to FRT. You'll notice all the people in these images are Black. They are all from the States except Shaun Thompson and Sara (not her real name), who are from the UK, and there are increasing numbers of these misidentifications happening all the time. In 2023 the U.S. Federal Trade Commission banned the retail chain Rite Aid from using FRT in its shops because it was wrongfully accusing thousands of people, predominantly people who were Black, of being shoplifters and telling them to leave stores, all misidentifications.
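The threshold trade-off described above is easy to see with a toy simulation. The sketch below is hypothetical: the score distributions are invented for illustration, but they show why a threshold fixed too low floods the candidate list with false positives, why one fixed too high misses genuine matches, and why the length of the candidate list is hard to predict in advance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical similarity scores. Most reference-database entries are
# non-matches clustered at low similarity; a handful of genuine matches
# score higher, but the two distributions overlap, so no threshold
# separates them cleanly.
non_match_scores = rng.normal(loc=0.30, scale=0.10, size=10_000)
true_match_scores = rng.normal(loc=0.65, scale=0.12, size=10)

for threshold in (0.40, 0.55, 0.70):
    false_positives = int((non_match_scores >= threshold).sum())
    missed_matches = int((true_match_scores < threshold).sum())
    returned = false_positives + int((true_match_scores >= threshold).sum())
    print(f"threshold {threshold:.2f}: {returned:5d} candidates returned, "
          f"{false_positives:5d} false positives, "
          f"{missed_matches:2d} true matches missed")
```

Under these made-up distributions, a low threshold returns well over a thousand candidates, almost all innocent people, while a high one returns almost nobody, including the people actually being sought.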
So with FRT there is an immediate danger of misidentification, with its bias and discriminatory aspect, but there are also larger, longer-term consequences, and that is the mass surveillance concern. FRT gives police a seismic shift in surveillance power: it turns us into walking license plates and tilts the power dynamic further into the hands of police. We know and have heard of the use of FRT against Palestinians. We know and have heard of the use of FRT against Muslims, and against protestors in Russia, but the most recent situation regarding the use of FRT, at least the one that has been in the news, is the use of FRT this weekend at Pride in Hungary, which Adam will talk to in a bit. This is a brief slide on the human rights at stake: the right to dignity, privacy, freedom of expression, peaceful assembly, equality and non-discrimination, the rights of people with disabilities, the presumption of innocence and the right to a fair trial and due process. INCLO has members in 17 jurisdictions, and when we brought out a report in 2021 we could see that this was becoming a significant issue.
We knew about the biometric database of Palestinians. Our member organisation brought a complaint to the European Court of Human Rights over Russia's use of FRT against protestors. There have been wrongful arrests in Argentina, there was the famous Bridges case in the UK, the Clearview AI scandal in Canada, all these various aspects, and we stood back and thought: in many of these jurisdictions there is no legislation to underpin this use of FRT.
Different jurisdictions have different data protection or privacy rights, or perhaps none at all, and you could see how patchwork it was. Different organisations within our membership were calling for different things, some for bans, some for moratoriums, so we decided to create a set of principles for police use of FRT in the hope that they could mitigate some of these harms. I won't stay too long on it, but basically our methodology was this: we created a steering group within the network; we met throughout and agreed what information we needed; we surveyed our members to find out what information is actually available in their jurisdictions; and we agreed on the harms and risks and looked at the cases that were coming through, and at media stories as well, because not everything ends up in court. Then we agreed upon a set of questions that we feel should always be asked when it comes to police use of FRT, and essentially the principles are an attempted answer to those questions. We had great feedback from a number of experts, academics and otherwise, gathered virtually and in person. These are the principles. I don't want to take up all the time, but essentially there are 18 principles. The first principle is about legal basis: any interference with rights, and FRT does interfere, must have a legal basis, and that basis must be of sufficient quality to protect against arbitrary interference. We say that police cannot use FRT unless there is a legal basis, and that it should never be used in certain circumstances, including to identify protestors or collect information about people attending peaceful assemblies, which is very pertinent to what Adam is going to talk about. The second principle concerns mandatory fundamental rights impact assessments: the police need to carry out a series of impact assessments with respect to all fundamental rights prior to any use of FRT, and these must include an assessment of the proportionality of the FRT's use. We have copies here if anyone wishes to go through them; I won't go into detail on each of them.
Those assessments must explicitly outline the specific parameters of use: who will use it, who it will be used against, where, why and how it will be used, the rights impacted, and the nature and extent of the risks.
They must also outline how those risks will be mitigated, how and why the deployment will outweigh the rights impacts, and the remedy available to someone misidentified or whose biometric data was processed when it should not have been, which speaks to Tomás' point in a minute. Principle three is what I just mentioned: the assessments have to be independent of the vendor. It's not enough for the vendor to say that this is X and this is Y and everything is okay. I'd like to mention here the Bridges case, the Court of Appeal case in the UK which our colleagues at Liberty took. In that case the Court of Appeal held that the public sector equality duty under the Equality Act requires public authorities to have regard to whether a policy could have a discriminatory impact, and it was held that South Wales Police had not taken reasonable steps to make inquiries as to whether or not the algorithm the police were using had bias on racial or sex grounds. The court heard from a witness employed by a company specializing in FRT who said these kinds of details are commercially sensitive and cannot be released, and we hear this a lot, but in the end the court held that while that was understandable, it was not good enough, and that the police had never satisfied themselves, either directly or by independent verification, that the software did not have an unacceptable bias. Principle four is no acquisition or deployment of any new FRT without a guarantee of future independence from the vendor. This is about vendor lock-in, the risk that a customer will be unable to transition to another vendor. Principle five says all assessments must be made public before deployment. Principle six is about the obligation of public consultation: before any policing authority deploys FRT, it must hold meaningful consultation. Principle seven: the public must be told how these systems are used. Eight is about the technical specifications of any FRT system, which must be made public. Nine: live FRT should be prohibited. We do believe that live FRT is dangerous and should be banned; it is a red line, but as I said before, retrospective use can be just as dangerous. Ten is about prior judicial authorization. Eleven is about records of use: the police must document each FRT search performed and provide this documentation to the oversight body; I haven't mentioned it yet, but principle 16 provides for an oversight body. Principle 12 ensures that FRT results alone are not enough for questioning or arrest, and principle 13 concerns disclosure to the individuals against whom FRT is applied. Principle 14: any FRT misidentification of a person must be reported, and principle 15 makes the reporting of those misidentifications mandatory. Principle 16 is the independent oversight body I mentioned before, and under principle 17 that oversight body must publish annual reports. Principle 18 is that the impact assessments must be made available to the oversight body before the system is deployed. I need to move on very quickly to hand over to Tomás, but basically the aim of the principles is both to help reduce FRT harms and to empower civil society and the general population to step forward, ask the right questions, push back, and advocate for safeguards with a clear understanding of these technologies. We hope the information can be used to voice opposition, but also as an advocacy tool when discussing FRT with policymakers.
Now I will pass it over to Tomás, who can speak to the situation in Argentina and how events there mesh with the principles.
>> TOMÁS IGNACIO GRIFFA: Thank you very much, Olga. Hello, everyone. I'm going to be talking about our experience in Argentina regarding FRT. We've been working since 2019 against the implementation of facial recognition technology in the city of Buenos Aires, and I think this provides an example of the importance of the ideas that Olga was explaining just a moment ago.
So briefly, I will talk about how the facial recognition system in the city of Buenos Aires works and what the legal framework looks like.
I will talk about the court case in which the constitutionality of the system was questioned. I'm going to explain the findings set forth in the rulings by the local judges, and finally I'm going to talk a little bit about how all this highlights the relevance of the principles we were talking about. So first, regarding the system. The facial recognition system was implemented in the city of Buenos Aires in April 2019. According to the resolution, the system was to be employed exclusively to look for fugitives, people with pending arrest warrants, and exceptionally for other tasks mandated by judges. The system worked with the national fugitive database, CONARC by its Spanish acronym, to search for the fugitives, and with the national identity database, which was supposed to provide the biometric data of the people who were to be searched for, the fugitives. The system was operated by the local police, and in 2020 the local legislative branch sanctioned a statute that provided a legal basis for the system. The case started with a focus on the risks that research on facial recognition technology around the world has repeatedly shown, that is to say, the risk of mistakes and wrongful identification, racial and gender biases, impacts on the right to privacy and so on.
However, as the judge started gathering information, it became clear there was a big problem with the practical implementation. As I said, the system was intended to work by crossing data between a national database of fugitives and wanted people, which consists of maybe 30-40,000 names, and the biometric data held by the national identity database. So the national identity database was supposed to provide the data on those 30-40,000 people; however, when the judge asked about the queries made by the government of the city of Buenos Aires, it turned out they covered more than 7 million people.
So clearly the police, and perhaps other offices, were accessing this data for purposes entirely different from searching for fugitives, and to this day we don't know how and why this data was accessed. During the trial, a group of experts performed an audit on the system and saw that thousands of people had been searched without a legal basis, that is to say, people who were not fugitives.
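Conceptually, the core of such an audit is a simple cross-check. The sketch below is a hypothetical reconstruction of the kind of comparison the experts could run, matching the identities actually queried against the list of people the system was legally allowed to search for; the IDs and the log are invented for illustration.

```python
# Hypothetical audit: flag every query that targeted someone who was
# not on the authorized watchlist (e.g. the CONARC fugitive database).
authorized_watchlist = {"id_101", "id_102", "id_103"}           # permitted targets
query_log = ["id_101", "id_555", "id_102", "id_555", "id_901"]  # recovered usage records

unauthorized = {qid for qid in query_log if qid not in authorized_watchlist}
print(f"{len(unauthorized)} people searched without a legal basis: "
      f"{sorted(unauthorized)}")
```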
The experts also found that information regarding the use of the system had been manually deleted in such a way that it was impossible to recover, and that it was impossible to trace which specific public officials had operated the system. So, with all this, the local judge ruled that the system was unconstitutional, finding that it had been implemented without complying with the legal provisions protecting the human rights of citizens. The ruling details that the legislative commission that was supposed to oversee how the system worked had never been created, and that the other local body, which was supposed to audit the system, was not provided with the information needed to perform this task.
There were no prior studies to ascertain the impact of the system on human rights, and there was no instance of public consultation before deployment. The ruling also explains that the system was in fact used to search for people who did not have pending arrest warrants even though, as I said before, searching for fugitives was the only way the system was allowed to be employed, and it held that millions of people were looked up under the guise of operating this system. Finally, briefly, the local chamber of appeals affirmed this decision and added that the implementation of the system had to be examined by experts for differential impact on people based on their race or gender. And very briefly, I'm going to talk about the latest developments in the case. The test to determine whether the system has a differential impact based on race and gender is still being carried out to this day. The government wanted to do a black-box test by selecting a number of people; we argue it is not enough to do a test of this kind, and that it is necessary for the government to disclose the details of the software and the datasets on which it was trained. The government is saying this is a trade secret, so this debate is still ongoing. And finally, going back to the principles: the case predates the principles, it started in 2019 as I said, but I think it's a very good example of the relevance of the ideas behind the principles and the consequences of ignoring them. What the judge found is, we could say, the exact opposite of what the principles stand for. So very briefly, before I pass to Adam: the system went directly against the legal basis requirements set forth in principle number one. The system was implemented without any prior assessment of its impact on fundamental rights, which brings our attention to principle number two. There were supposed to be two oversight bodies according to the legal framework of the facial recognition system; however, one was never created and the other was not provided with the information required to perform its function, contrary to principle 16. No public consultation took place before introducing and employing the facial recognition system, which goes against principle number six. The use of the system was not properly documented, the records could not be recovered, and it was not possible to tell which public officer performed each operation, which goes against principle eleven. And the latest developments I was talking about, regarding how the test ordered by the chamber of appeals plays out and the disclosure of technical information about the system, the source code, the data employed and so on, relate to principles eight and thirteen. Thank you very much.
>> OLGA CRONIN: Thanks a million, Tomás. I think it's safe to say that what you just described is exactly the kind of situation where, had the principles been known about and complied with beforehand, it could have been a very different scenario. And now we will turn to Hungary, where Adam will talk about the recent legislative change that effectively banned Pride this weekend and also allows the police to use FRT to identify people who defied that ban. Over to you.
>> ADAM REMPORT: I would like to bring to you a case that demonstrates the problems with FRT that have real-life consequences:
the case of the Hungarian government essentially banning the Pride parade. The background of the case is that Pride is not a new event, but in February of this year the prime minister said it would be banned because, in the government's view, that was necessary for child protection.
So new laws were enacted. They essentially banned any assembly displaying or promoting homosexuality, and another law made it possible for facial recognition technology to be deployed for all petty offenses; I will tell you more about what petty offenses are in this context. The legal background of the case is that Hungary has had a facial recognition technology act since 2016. It established the facial recognition database, which consists of pictures from IDs, passports and other documents with facial images on them. Specific authorities can request facial analysis in certain specified procedures from the Hungarian Institute of Forensic Sciences, which is responsible for operating the facial recognition system.
A novelty of Hungarian FRT use was that in 2024, FRT was made available for use in infraction procedures by the police, and in 2025 this was extended to all infraction procedures. The reason why this is important is that participating in a banned event, an event that has been previously banned by the police, is an infraction. So if demonstrators gathered at the Pride event after it had been banned by the police, they would collectively commit infractions, probably in the tens of thousands. So let's find out how the FRT system actually works in this scenario. The police are known to record demonstrations, and they can use CCTV and other available sources too, to gather photographs or images of a certain demonstration. If they find that there is an infraction happening, they can initiate an infraction procedure and, in the course of that procedure, send facial images to the central FRT system, which runs an algorithm and returns candidate images to the police; it is the police officer operating the system who has to decide whether there is a match or not. I have to point out that this system has never been used en masse, so it has never been used against tens of thousands of people, and it is not known how the system will technically, or how the capacities of the judiciary and the police will operationally, handle this kind of situation. So what we can tell about the case is that the first of the FRT principles, that it must be ensured that FRT is not used to identify protestors or collect information about people attending peaceful assemblies, is immediately violated by this kind of FRT use against peaceful demonstrators. Another of the uses banned under the principles is that no FRT system should be used on live or recorded moving images or video data, and we can see why it is a problem that the police record entire demonstrations. They don't even necessarily have to follow a demonstration with live facial recognition; for the chilling effect to take place, it is enough to find everyone taking part in the demonstration in the police's recordings and send fines to them, which is probably how this will play out in Hungary, or at least how the government plans it to play out. It is also an interesting case study of the lack of transparency around facial recognition. One of my conclusions will be that FRT in this present case is used actively to discourage people from attending demonstrations, but the lack of transparency, the lack of knowledge people have of what is actually going to happen to them, is as discouraging as the outright threats of using FRT. In the case of the Hungarian system, there was no public consultation whatsoever before the introduction of the entire system in 2016. The introduction of facial recognition as such was done in Hungary without any public consultation and without it being communicated to the public, which means there has been no public awareness, at least up until now as the situation has gotten worse, of the very existence of the FRT system. There was no consultation with the public, no data protection impact assessment, and no consultation with the data protection authority before the present broadening of the scope of FRT, which of course includes this massive crackdown on the freedom of assembly. This also goes against the principles.
It can also be said that there are no records of use or statistics available that tell you how the system works, how effective it is, or when it is deployed against which kinds of infractions. Persons prosecuted can almost never find out whether FRT has been used against them or not. There are no impact assessments before individual uses of the system, which means the police can simply initiate these searches without assessing the possible effects on someone. This is also against the principles. There is no vendor lock-in assessment either, which is important because the Hungarian Institute of Forensic Sciences, which operates the system, explicitly said that they are only clients using this facial recognition algorithm.
This raises the question of whether the data are being transferred to third countries or not. And of course, since there are no assessments, none have been made public either, which also goes against the principles. A lack of sufficient oversight is also something I would like to mention. There is no prior authorization, which is important because, as I have told you, it is necessary to start an infraction procedure before FRT can even be deployed, and it is not known, because the law is not clear, against how many people one infraction procedure can be initiated at the same time. So far the system has not really been used against more than three or four people at a time, which makes sense, but it has never been used on a scale of tens of thousands of people. A prior judicial authorization could act as a check on this kind of massive surveillance, if a judge could assess whether it was necessary to surveil tens of thousands of people at the same time, but that is not possible here. There is no independent oversight either, and of course there are no annual reports and no notification of impact assessments to the oversight body, since the oversight body does not exist. These all go against the principles in a very concrete manner, so you can see that they are not just abstract rules: when they are not met, that means actual harms in real life. My conclusion would be that what we can see is a weaponization of facial recognition technology; instead of mitigating the risks, there is a deliberate abuse of FRT's most invasive properties. Essentially, the government actively communicated that facial recognition would be used against people, that they cannot hide because they will be found with facial recognition and they will be fined. It is inevitable. This of course has a massive chilling effect on the freedom of assembly, and we could also say that even the lack of transparency is weaponized, because if there is a lack of information on the system, it is impossible for people to calculate the risks. This will have a chilling effect on them because they won't know whether it is true that they will actually all be found and fined. So I would like to conclude here. Some possible next steps are the legal avenues that can be taken, like the Law Enforcement Directive in the EU or the AI Act, and INCLO's principles can be used in advocacy at international forums. Thank you.
>> OLGA CRONIN: Thanks, Adam. I might ask a follow-up question. Given the situation happening in Hungary, which is so imminent this weekend and has gotten international attention: what is public opinion about the use of FRT? Has it changed? Did people care before, and how is it now?
>> ADAM REMPORT: Well, people really never cared about FRT because they didn't know it existed, precisely because of the lack of communication on the government's side. The situation had to become this bad and severe for people to start to care about the problem, but by now the system already exists, with the rules we have now, and it can be abused by the police and the government. Communication on the government's part should have come much earlier.
>> OLGA CRONIN: I wonder if we have any questions?
>> AUDIENCE: Hi, can you hear me? I'm from Brazil. The situation with facial recognition there is similar to what was happening in Argentina, but I was shocked by the Hungarian case. I'm part of a project that is trying to do some community activations around the use of facial recognition. So I wanted to hear from you whether you have ever done something with community activation. And I also wanted to ask if you believe there is a way to use facial recognition, or if you think it should be banned? Because in Brazil we are discussing a lot about banning all systems that use facial recognition, so I wanted to hear from you what you think about it. Thank you.
>> OLGA CRONIN: Thank you. Can I have a go at answering some of those questions? I think the idea of getting into communities and doing that education and awareness piece, which is what you're talking about, and maybe activating them or stirring them into taking action, is really, really important. Mainly because of the same issue Adam just mentioned: people don't really understand it, don't really know about it. And when people in positions of authority are talking about it, they speak of it as a silver bullet solution, as if there are no problems, it's like Ctrl+F, there are no issues, and they absolutely downplay the risks. So I think you have to get creative, maybe with local artists.
ICCL created a mural with a local artist in Dublin to highlight the dangers of FRT, but it's also about looping in other civil society organisations that might not work in this space.
Getting down to that kind of grassroots level, I think you just have to get imaginative. You have to get the word out there, using all the tools available to you that you would use in general for communications. When it comes to a ban: ICCL, I'm sorry, INCLO rather, has 17 members in 17 different jurisdictions, with 17 different sets of safeguards and protections in place. Some members are calling for a ban, some for a moratorium, and other groups are calling for legislation. It is really specific to the jurisdiction and what's happening there. What we do know is that the risks and the harms are present and pressing; the mass surveillance risk, and how quickly this can be deployed against us, is clear and obvious. So from many of our perspectives we call for a ban; we don't wish for the police to use it, but we know that fight has been lost in certain jurisdictions, and the principles are an attempt to at least make things better there. I hope that helps. All right.
>> AUDIENCE: Hello, my name is June Beck. Since we're talking about facial recognition technologies, there has also been a lot of movement to penalize wearing masks in public, masks being an attempt to protect yourself against facial recognition. I was wondering if any organisations have thoughts, processes, or any kind of discussions on how bans on face masks, for example, are in conversation with FRT.
>> OLGA CRONIN: I don't want to take over the conversation. You might have something to say.
>> TOMÁS IGNACIO GRIFFA: Maybe a little but if you have...
>> OLGA CRONIN: We're out of time, so I'll be brief. I would say that's happening: more and more laws are being introduced to ban face masks. It's happening in England, where the law is changing to become more restrictive, and it's impossible to see how that's not a response to police use of FRT and the public's response of covering their faces. It's not something we've worked on together specifically, but it is something our members are working on individually, if you like.
>> AUDIENCE: All right. Thank you.
>> OLGA CRONIN: Thanks a million, sorry, we've gone over time. We're happy that you joined us. And we have hard copies of the principles if you wish.
>> TOMÁS IGNACIO GRIFFA: Thank you very much, good‑bye.
>> ADAM REMPORT: Thank you, good‑bye.
>> (Music)