IGF 2023 – Day 3 – WS #559 Harnessing AI for Child Protection – RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> BABU RAM ARYAL: Good afternoon from Kyoto. It might be good morning somewhere around the world, good evening somewhere else, and late evening elsewhere. This is Babu Ram Aryal. I'm a lawyer by profession, I'm from Nepal, and I am from the Digital Freedom Coalition in Nepal. I'm moderating this session today, and I have a very distinguished panel here to discuss artificial intelligence and child protection issues in the contemporary world.

Let me briefly introduce my esteemed panelists. Next to me is Senior Advocate Gopal Krishna, who brings more than 30 to 35 years of litigation experience in Nepal. And Jutta Croll is a very senior child rights protection activist; she leads her organization and contributes through the Dynamic Coalition on Child Rights.

And Sarim Aziz is Policy Director at Meta, with long experience of platform issues, the protection of child rights, and other issues as well.

And next to Sarim is Michael Ilishebo. Michael deals with these issues directly: he is a senior official focusing on cybercrime investigation and digital forensic analysis.

Having introduced my panel, I should add that my colleague, Ananda Gautam, is moderating online participants. I would like to begin with a very brief concept of the objective of today's discussion. Right in front of me I am seeing two kids; coincidentally, they are my kids as well. They are very passionate about technology and very keen on using the Internet. And we have a big discussion about whether we allow our kids access to technology and connectivity. Our experience shows that allowing them onto the platforms is an opportunity for them. They are growing up in a new regime, a new world, and they have created a world of their own in their own way.

Sometimes the question appears whether I'm leading my kids into a very risky world or not, and this has led me to engage with this issue: technology and the risk, and technology and the opportunity.

Now, artificial intelligence has taken over much of the work of human intelligence in various areas, like education, law, and other professions, and artificial intelligence is giving opportunity. Lots of opportunities are there, but simultaneously there are some risks as well. So, in this discussion, we'll take up artificial intelligence issues, child protection issues, and harnessing artificial intelligence for child protection.

There are various tools available around the world, and these are accessible to all segments of people, including children and the elderly.

So, at the beginning, I'll go to Michael, whose responsibility is dealing with these kinds of issues regularly. Michael, what is your personal experience from your department? What are the major issues that you have experienced? Once we hear from you, we'll take this discussion to a further level. Michael?

>> MICHAEL ILISHEBO: Good afternoon, good morning, good evening. As a law enforcement officer dealing with cybercrime and digital forensic issues, moderation of content online, from both the human side and the AI side, has posed a challenge for our little ones. Speaking as somebody from the developing world, we are mostly consuming news, entertainment, and other content online that is not generated in our region. Of course, we're now generating our own content, but being a gatekeeper as parents, or using technology to filter content that is not supposed to be shown or exposed to little ones, has become a bit of a challenge.

I'll give you a simple example. If you're analysing a mobile device from a child who is maybe 16, you look at the content in their phone and the data in their browsing history, and there is no control. Whatever is exposed to an adult ends up being exposed to a little one. As a result, it has become a bit of a challenge to address issues of content moderation on both fronts.

Of course, there are aspects of AI that could help moderate some of this content, but if we remove the human factor from it, AI will not be able to address most of the challenges that we are facing right now.

Further on, in terms of combatting crime and child exploitation incidents, you will find that most of the sites hosting this content, despite having clear guidance and policies on gatekeeping by age, our children still find their way into places online where they're not supposed to be. There is no system that can tell, from the use of a phone, the user's age or gender, as a human being would. It remains a challenge in the sense that once a phone is in the hands of a minor, you don't have control over what they see or what they do with it. So, it has become a serious challenge for the little ones, and for those of us policing cyberspace, to ensure that minors are protected from content that is not supposed to be exposed to them.

>> BABU RAM ARYAL: Thanks, Michael. I would like to know your experience. I belong to Nepali society, and Zambian society might be similar in terms of education and related factors. What are the trends in abuse cases in Zambia? Do you remember any trends?

>> MICHAEL ILISHEBO: In terms of abuse, Zambia, like any other country, has those challenges. I'll give you an example. Of late, talk of child online protection has been gaining momentum. There have been clear guidelines from government to ensure that issues of child online protection, data privacy, and the safety and security of everyone online, including the little ones, are addressed through the enactment of various pieces of legislation.

For example, we have the Cyber Security and Cyber Crimes Act of 2021, which now clearly outlines the types of cyberbullying that are outlawed. So, if you go on social media platforms such as Facebook, TikTok, Instagram, and Snapchat, most of the bad actors engaging in activities like sending inappropriate images to children, or any other content that we deem inappropriate, have been either arrested or talked to, depending on the case. At times they are within the same age range, minors sharing things amongst themselves. If it's a minor, of course, you talk to them, you counsel them, you try to bring their thinking back to sanity. But if it's an adult, you have to establish their intentions.

From our experience, we are slowly addressing some of these challenges, but it does not stop there. There are a lot of cases and scenarios that remain unreported, so it is difficult for us to fully address those challenges. In a nutshell, I would tell you that the challenges are there, the problems are there, but addressing them is not a one-day issue. It's about continuous improvement and continuous use of the law and of technology, especially from the service providers, to address some of these challenges.

>> BABU RAM ARYAL: Yes, Michael. I will come to Jutta. Jutta, you have been engaged in child protection for a long time and have deep experience. We have seen you at the IGF several times, and you are also a member of the Dynamic Coalition on Child Rights. What is your personal experience with protection issues, and with legal issues around protecting children online, especially when AI is significantly contributing to and intervening on these platforms? Jutta.

>> JUTTA CROLL: Yeah, thank you so much for not only inviting me, but posing such an important question to me. First of all, you introduced me as an expert in child protection issues, and you may know that the Dynamic Coalition even changed its name from the Child Online Safety Coalition to the Children's Rights Coalition, reflecting children's rights in the digital environment. I think it's important to put that right from the beginning: children have a right to protection, to provision, and to participation. So, we always need to look for a balanced approach across these areas of rights.

And of course, when it comes to artificial intelligence, I would like to quote from General Comment 25 to the Convention on the Rights of the Child. You may know that the rights of the child were laid down in 1989, when, although the Internet existed, it was not designed to be used by children. The UN Convention doesn't refer in any way to the Internet as a means of communication, of access to information, and so on. That was the reason why, four or five years ago, the United Nations Committee on the Rights of the Child decided to issue such a General Comment on children's rights in relation to the digital environment: to take a closer look at what it means that children are now living in a world largely shaped by the use of digital media, and at how we can protect them.

And one of the very first articles of this General Comment says explicitly that artificial intelligence is part of the digital environment. It's not a single, separate thing; it's woven into everything that now makes up the digital environment. It's therefore necessary to look at whether artificial intelligence can be used to improve the digital environment for children; whether it can help us address the risks Michael has already mentioned; whether it can help detect content that is, on the one hand, harmful for children to watch on the Internet, but also content that is directly related to the abuse of children, which is where we are talking about child sexual abuse imagery. But nowadays, and that is also due to new functionalities and technologies, the Internet is used to perform online, live sexual abuse of children. That is also where we have to look at how artificial intelligence can be beneficial in detecting these things, but also where it might pose additional risks to children. I'll stop at that point, and I'm pretty sure we will go deeper into that.

>> BABU RAM ARYAL: I'll come to the detection side in the next round. Jutta, can you share some more issues from the ethical and legal side? Can you shed some light on this?

>> JUTTA CROLL: You mean the ethical and legal side of detection of harmful ‑‑ of child sexual abuse imagery in general?

>> BABU RAM ARYAL: Ethical and legal issues of use and misuse of technology and platforms.

>> JUTTA CROLL: Okay. I do think that the speaker on my left side has much more to say about the technology behind that. What I can say so far from research is that we need both. We need to deploy artificial intelligence to monitor content, to find and detect content, for the benefit of children; but still, I'm pretty much convinced that we cannot give that responsibility to the technology alone. We also need human intervention.

>> BABU RAM ARYAL: Thanks. Initially, in my sequence, Mr. Gopal was next to you, but as you just referred to the speaker on your left, I'll go to Sarim first and then come back to Gopal.

So, Sarim, now you have two very significant opinions on the plate to respond to. And I have the same questions for you. Meta's platforms are significant not only for kids but for all of us. Kids are also coming to various platforms, not only Meta's; we discuss other platforms neutrally as well. So, what are your thoughts on this? What are the major issues on the platforms, including Meta's, and the opportunities? Of course, as Jutta rightly mentioned, first come rights; then, if there is any violation, protection comes in. So, Sarim, can you share some of your thoughts?

>> SARIM AZIZ: Thank you, Babu. I'm honoured to be here to talk about this very serious issue, and humbled, obviously, to be with the speakers here.

As was previously said, I want to reiterate that this is a global challenge that requires a global response and a multistakeholder approach. Law enforcement alone can't solve this; the tech industry alone can't solve it. It's one where we need civil society, we need families, we need parents. That's how we at Meta have approached this issue, and so we work on all those fronts. I think the child rights and child safety field can actually be an example for many other areas, like terrorism and others, because we are part of the Tech Coalition, formed in 2014, which Microsoft and Google are also part of. That's been an excellent forum for us to collaborate, share best practices, and come together to address this challenge. And in 2020, as part of Project Protect, we committed to expanding its scope to protecting kids and thinking about child safety more broadly: not just preventing the most harmful material, which is CSAM and similar content, but also keeping kids safe in general.

So, to summarize Meta's approach, we look at the issue in three buckets, and AI has a role to play in all three areas. The first is prevention. When you think about prevention, we have something called, for example, search deterrence. When someone is out there on our platforms trying to look for such content -- I think Michael at one point talked about precrime, right? -- we actually use AI on what people are typing. We intercept searches within Facebook, Instagram, and other search mechanisms, and if people try to type this stuff, we give them warnings saying this is harmful and illegal content they're trying to look up, and divert them towards support mechanisms. So that's important for prevention.
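
To make the search-deterrence pattern Sarim describes concrete, here is a minimal sketch of the idea; it is not Meta's actual implementation, and the term list, helpline URL, and function names are hypothetical placeholders.

```python
# Minimal sketch of a search-deterrence flow: intercept a query before it
# reaches the search index, and divert known-harmful searches to a warning
# plus support resources. All terms and URLs below are placeholders.

BLOCKED_TERMS = {"example-banned-term-1", "example-banned-term-2"}  # hypothetical list
SUPPORT_URL = "https://example.org/get-help"  # placeholder helpline

def run_index_search(query: str) -> list:
    """Stand-in for the real search backend."""
    return []

def handle_search(query: str) -> dict:
    normalized = query.strip().lower()
    if any(term in normalized for term in BLOCKED_TERMS):
        # Do not run the search; show a deterrence interstitial instead.
        return {
            "results": [],
            "warning": "You may be searching for harmful and illegal material.",
            "redirect": SUPPORT_URL,
        }
    return {"results": run_index_search(normalized)}
```

In a real deployment the matching would be far more sophisticated than substring checks, but the control flow, deterring before the search runs rather than moderating afterwards, is the point of the pattern.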

But also, if you think about bad behavior: sometimes kids are vulnerable, and they might get friend requests from adults, people they're not even connected to, strangers. So, we now have in-app advice and warnings popping up, saying you shouldn't accept friend requests from strangers; this person isn't even connected to your network. Those are things AI can detect and surface: in-app advice, safety warnings, notices. We also prevent unwanted interactions; we actually intervene and disrupt those types of suspicious behaviors when we detect them, using AI. So, prevention is one bucket where we are optimistic and excited about what AI can do to stop harm from occurring.

The second bucket, which has been a large part of the discussion already, is detection. Detecting CSAM has been a major focus for the industry for over a decade, using technology like PhotoDNA, which was initially built by Microsoft. We've built on top of that, and we now have photo- and video-matching technology that Meta has open-sourced, I believe just recently. That's called PDQ, as well as TMK, which is for video matching. So, that's been open-sourced on GitHub. So, now --

>> BABU RAM ARYAL: A clarification: PDQ and TMK? The audience may want to know --

>> SARIM AZIZ: Acronyms are easier to Google. PDQ is built on top of PhotoDNA, but it's been open-sourced so any platform, any company can use it. This is something Meta truly believes in: open innovation. Bad actors will use technology in multiple ways, and I think our best defense is to open-source this technology and make it accessible to more safety experts and more companies out there. You don't have to be as large as Meta to be able to implement child safety measures. If you're an emerging platform in Zambia or any other country, you can take this technology and both prevent the spread of this type of CSAM content and detect it, sharing hashes and digital signatures that identify CSAM. It's called PDQ for photos and TMK+PDQF for videos, and it's been open-sourced on GitHub for any developers and other companies to take. This also helps with transparency -- you talked about ethics -- because it shows the technology we use, so there can be external auditing of the kind of technology we use to detect this. It's also technology we use internally for training our algorithms, our detection and machine learning systems, to ensure that we are able to detect these kinds of content.
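
As background on how the hash-based matching Sarim describes works: a perceptual hash such as PDQ reduces an image to a 256-bit fingerprint, and two images are treated as a match when the Hamming distance between their fingerprints is small. The sketch below illustrates only the matching step, with made-up hashes; a real deployment would compute hashes with an actual PDQ implementation (for example, the one open-sourced in Meta's ThreatExchange repository on GitHub) and screen uploads against hash lists shared by clearinghouses.

```python
# Illustration of perceptual-hash matching (the PDQ matching step only).
# PDQ hashes are 256 bits, hex-encoded as 64 characters; the threshold
# below is illustrative and would be tuned per deployment.

def hamming_distance(hash_a: str, hash_b: str) -> int:
    """Count differing bits between two equal-length hex-encoded hashes."""
    xor = int(hash_a, 16) ^ int(hash_b, 16)
    return bin(xor).count("1")

MATCH_THRESHOLD = 31  # assumed working value, out of 256 bits

def is_known_match(candidate: str, known_hashes: set) -> bool:
    """True if the candidate is within the threshold of any known hash."""
    return any(hamming_distance(candidate, h) <= MATCH_THRESHOLD
               for h in known_hashes)

# Made-up example: the two hashes differ in only 4 of 256 bits, so they match.
known = {"f" * 64}
print(is_known_match("f" * 63 + "0", known))  # True
```

The design choice behind perceptual (rather than cryptographic) hashing is that near-duplicates, such as resized or re-encoded copies of a known image, still land within the distance threshold.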

And the last important area where AI is helping is response. That's where law enforcement comes in, along with civil society and safety organizations like the National Center for Missing & Exploited Children, NCMEC. They're a very important partner for Meta and other companies: any time we detect CSAM content, we help them build a case, using the same technology I mentioned and others. And for youth dealing with non-consensual imagery issues, images they've put up themselves, there's a project called "Take It Down," launched by NCMEC, which helps; Meta is on there, TikTok, and other companies, so those images can be prevented from spreading. Those are important initiatives. And that response, closing the loop with NCMEC, which works with law enforcement around the world and runs a cyber tip line that supports law enforcement in their responses, is really critical. I'll pause there, but those are the three areas, prevention, detection, and response, where we see technology and AI playing a very important role in child safety.

>> BABU RAM ARYAL: Thank you, Sarim. One very interesting issue: governments in the developing world complain that platform operators do not cooperate in investigations when, in a developing country, they don't have much technology to catch the bad actors.

I'll come back to Gopal, but that complaint just sparked a question, so I'm going to Michael first. Michael, what is your experience in dealing with these kinds of issues, especially the responses from platform providers in online child abuse cases?

>> MICHAEL ILISHEBO: That depends on which platform the content is on. Facebook has been somewhat responsive; they are responding. Instagram, they're responding. TikTok, being a newer platform, we're still trying to find ways and means of engaging their law enforcement liaison department. Also, with local providers, it's much easier for them to bring down the content, and much easier for them to follow the guidelines of putting an age limit on whatever is posted.

If it's a short video that contains a bit of violence, some nudity, or any other material we may deem inappropriate for a child, they are required to do the correct thing and age-gate it in terms of access on Facebook, because if I joined Facebook and entered my age as under 18, that content would not appear on my timeline or in my feed because of my age. But as I said earlier, it's difficult to monitor a child who's opened a Facebook account, because they'll just make themselves 21. You've seen on Facebook that people are 150 years old; you check on their birthday and it says this person is 120 years old, because platforms like Facebook do not actually help us address the issue of age-gating.

So, as a way of addressing most of these challenges, I will stick to Meta, because they are on the panel and can answer any issue I raise; I can't discuss Google or any other platform that is not here. Meta has been responsive, though at times slow. Through the law enforcement portal, issues of child exploitation are given priority; issues to do with freedom of expression may be a little slower. But on child exploitation, I would still give Meta 100%, because when you request a takedown together with the information behind the account, they will provide it within the shortest period of time. So, my experience with Meta so far has been okay. Thank you.

>> SARIM AZIZ: Can I just ‑‑

>> BABU RAM ARYAL: Please.

>> SARIM AZIZ: Thank you for that. That was not scripted; I had no idea what Michael was going to say, but thank you for the feedback. I did want to comment on the age verification issue. That's something that's obviously in discussion with experts around the world, in different countries, with lots of different discussions going on. But we at Meta are testing some age verification tools, which we started testing in June in some countries. Based on initial results, we were able to stop about 96% of teens who tried to change their birth date. Again, I don't think any tech solution is going to be perfect, but attempts are being made to figure out what works; this is on Instagram, by the way, this age verification tool. Based on those results, we hope to expand it further, to prevent minors from seeing content they shouldn't be seeing, even if they've tried to change their age and things like that. Just wanted to comment on that.

>> BABU RAM ARYAL: Thanks, Sarim. Now, finally, I'll come to Mr. Gopal. We have discussed various issues from a technical perspective, and some from a direct enforcement perspective as well, and Jutta has discussed certain issues and referred to the Convention on the Rights of the Child.

As a long-practicing lawyer, what do you see from your country's perspective, the Nepalese context? What are the major legal protections for children, especially when we talk about online protection, protection on online platforms? Yeah, please.

>> GOPAL KRISHNA: Thank you. Thank you very much for giving me this opportunity to say something, first about my country. I am representing the Nepal Bar Association, which is a human rights protector. We focus on four subjects. First, human rights: we deal with human rights. Second, democracy. Third, the rule of law. And the fourth piece is the independence of the judiciary.

Of course, being a human rights protector, we have to focus on child rights issues, too. This is our duty.

You know, in our present Constitution, Article 39 explicitly sets out the rights of the child. Every child has the right to a name, birth registration, and recognition, along with his or her identity. Every child shall have the right to education, health, maintenance, proper care, sports, entertainment, and overall personality development from the family and the State, both. And every child shall have the right to elementary child development and child participation. No child shall be employed to work in any factory, mine, or similar hazardous work; this is an important right for a child in the Nepalese Constitution. No child shall be subjected to child marriage, transported illegally, abducted, or taken hostage. No child shall be recruited or used in the army, police, or any armed group, or be subjected, in the name of cultural or religious traditions, to abuse, exclusion, or physical, mental, sexual, or other forms of exploitation, or improper use by any means or in any manner. And no child shall be subjected to physical, mental, or any other form of torture at home, in school, or in any other place or condition whatsoever. So, these are the constitutional rights.

>> BABU RAM ARYAL: You made the protection of children very clear, including against abuse of children online. That is reflected in the Constitution.

>> GOPAL KRISHNA: Yes, yes. In our Constitution, we have clear provisions for the protection of child rights. And we have a Child Protection Act as well, which criminalizes child abuse activities, both online and offline. We have had paedophilia cases, and the courts in Nepal very strictly prohibit those types of activities. That is clearly the practice of our courts.

And we have online child safety guidelines as well, which explicitly provide recommendations for action to the different stakeholders. Though we have not yet dealt with AI, not even thought about it, I would say, our Constitution and whatever constitutional and legal provisions we have are very close to child protection issues, is what I would like to say.

I can say that child rights are our focus and the core concern of our constitutional and legal provisions.

>> BABU RAM ARYAL: Thank you. I will go to the next round of discussion in this session. When we proposed this workshop, we called it "Harnessing AI for Child Protection," right? So, I'll come to Sarim first. How is technology leveraging the protection of children online, especially now that AI tools are available? What are the tools, what are the models, and how can they be leveraged to protect children online?

>> SARIM AZIZ: Thanks, Babu. I'll dig deeper into the overview that I gave. As I mentioned, AI has been a critical component of online child safety prevention, detection, and response for a very long time. Even though the generative AI discussion has maybe hyped the interest around AI for child safety, it has long been a critical component of that response. The most obvious area, as I mentioned, is CSAM, child sexual abuse material. It started with Microsoft about ten years ago with the PhotoDNA technology, which has evolved, and we've open-sourced our own since then. That work on detection is the most crucial, because it also helps with prevention: detecting things at scale. We have platforms with 3.8 billion users, so we want to prevent such content from being seen or even uploaded. And it still requires a lot of human expertise; it's not as if humans are not involved. Making sure you've got huge, high-quality data sets of CSAM material to train the AI to detect this requires a lot of human intervention, and we still need humans for things that AI cannot detect. There's definitely a challenge with generative AI on the production side, where people might produce more of this material more easily, but on our side we've got the defenses ready to build on and improve, to make sure we're able to leverage AI to detect those kinds of things too. There's a lot more work to do in that space, but industry has done well in terms of leveraging AI on the detection side.

I think the prevention side is more exciting to us because it's something newer that we've focused on: user education, youth education, and preventing suspicious interactions with strangers and adults that kids shouldn't be having.

This issue of parental supervision is an interesting one. We have parental controls built into our products, into Facebook and Instagram. We believe that parents and guardians know best for their kids; but at the same time, there are obviously privacy issues we have to consider, so those privacy and ethics discussions are ongoing.

But yeah, prevention and detection are areas where AI excels. On the response side, child safety is one of the few areas where partnerships like NCMEC and multistakeholder responses are so critical, making sure we're able to work with safety partners and law enforcement around the world. We also have a Safety Advisory Group of 400 experts from around the world who advise us on these products and our responses.

>> BABU RAM ARYAL: A very good follow-up question, Sarim. You just mentioned that you have safety partners. How does that work, especially in protecting children? There are various community standards, and some countries have their own specific age thresholds between minority and majority. Even in my country there has been some debate in the recent past: although the CRC says 18 years and our local child legislation also says 18 years, there was discussion, even in Parliament, that we should reduce the minority-to-majority age threshold. So, dealing with different legislative regimes, how do platform operators work on combatting these kinds of issues?

>> SARIM AZIZ: Yeah, those discussions are ongoing as we speak in many countries: what is the right age, at what age do you require parental consent. Everyone will have a different opinion on that. On Meta's platforms, for most products you need to be at least 13 to create an account. In some jurisdictions it's different, and we obviously respect the law wherever we operate.

However, our focus is really on the nature of the interaction, regardless of whether you're 13 or 14, and on making sure that you are safe. If there's violent content, we have something we call "marked as disturbing," which applies even for adults, actually. So we make sure minors don't see content like that at all; and even for adults, AI helps us, because such content makes even people who are 18 uncomfortable, so we have technologies for that. So, 18 is a number, but at the same time we have to make sure the protections, the systems, are in place to protect youth in general, whether they're 13 or 14 or 17.

>> BABU RAM ARYAL: Thanks. Michael, you want to respond on this?

>> MICHAEL ILISHEBO: I'm just adding on to what has been said. Amongst the issues that are a little bit challenging is the classification of inappropriate content. I will give an example. On the Meta platforms, 13 years is the minimum age at which one can join Facebook. But based on our laws and standards in the countries we come from, a 13-year-old is deemed a child who probably can't even own a cell phone.

The second part is the content itself. An image of violence, perhaps in cartoon form, or music with vulgar, violent content, or anything else that may be deemed inappropriate in Zambia might actually be deemed appropriate in, say, the U.S. A child holding a gun in Zambia, or in Africa, whether under the guidance of their parent or not, is literally something unheard of. But in the U.S., we've heard of children going to school with guns, doing all sorts of things. We've seen images where, if you look at them as a parent, you'd be worried, but that image is there on Facebook and has been accessed by another child in another jurisdiction where it is not deemed offensive. So, issues of classification themselves have played a challenging role. Just to add to what was said. Thank you.

>> BABU RAM ARYAL: Thanks, Michael. Jutta. 

>> JUTTA CROLL: Yes. Thank you for giving me the floor again. I would like to go a bit deeper into where AI can be used, and probably also where it can't. Some of you may know that there is already new draft regulation on child sexual abuse in the European parliamentary process, which differentiates between three things to be addressed. The first is already-known child sexual abuse imagery, which Sarim has described very well. It is possible to detect that with a very low false positive rate, thanks to PhotoDNA, and the improvements that Meta and other companies have made over the last years have also made it possible to detect video content, which was quite difficult some years ago. It has become much better.

Then, the second part is not-yet-known child sexual abuse imagery: newly produced material, which is coming in huge amounts, with huge numbers of images and videos uploaded every day. Of course, it's much more difficult to detect imagery that has not previously been classified as child sexual abuse imagery, and the false positive rate is much higher in this regard.

And then, the third part, which is the most difficult, is detection of grooming processes, where children are groomed into contact with a stranger in order to be abused, either online or by producing sexualized content of themselves and sending it to the grooming person. We know that these different areas respond to different artificial intelligence strategies in different ways, and the most difficult part is grooming, where, obviously, if you don't have the means to look into the content, because the content of the communication might be encrypted, you would need other strategies to detect patterns, for example in the type of communication: one sender addressing 1,000 different profiles to get in contact, in the expectation that maybe at least 1% of those addressed will react to the grooming attempt and get in contact with the person. And that, I think, is where, talking about shared responsibility, this could not be done by the regulator, could not be done by the policymaker, but it could be done by the platform providers, because you have the knowledge, you have the resources, to look deeper into these new developments and try to find a technological way, based on AI, to address these issues. And I push the ball back to you, because I am pretty sure you can answer that.
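
The fan-out pattern Jutta describes, one sender contacting many unconnected profiles, lends itself to metadata-only detection that never reads message content. Below is a minimal sketch of that idea; the field names and thresholds are hypothetical, not any platform's real signals.

```python
# Sketch of metadata-only grooming-pattern detection: flag senders whose
# contact requests fan out to many unconnected minor accounts within a
# short window. No message content is inspected. Thresholds are illustrative.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ContactRequest:
    sender_id: str
    recipient_id: str
    recipient_is_minor: bool
    already_connected: bool
    timestamp: float  # seconds since epoch

FANOUT_THRESHOLD = 50        # distinct unconnected minors contacted...
WINDOW_SECONDS = 24 * 3600   # ...within one day (hypothetical values)

def flag_suspicious_senders(requests):
    """Return sender IDs whose recent fan-out to unconnected minors
    meets or exceeds the threshold."""
    if not requests:
        return set()
    now = max(r.timestamp for r in requests)
    targets = defaultdict(set)
    for r in requests:
        if (r.recipient_is_minor and not r.already_connected
                and now - r.timestamp <= WINDOW_SECONDS):
            targets[r.sender_id].add(r.recipient_id)
    return {s for s, t in targets.items() if len(t) >= FANOUT_THRESHOLD}
```

Flagged accounts would then go to human review, consistent with the panel's earlier point that the responsibility cannot be given to the technology alone.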

>> SARIM AZIZ: Thank you. That is actually exactly our area of focus in recent times: preventing grooming. And that's where AI is playing a very key role, as I mentioned, in preventing unwanted interaction between an adult and a teen. We've changed things, for example the default settings, so that a stranger cannot message a youth account. And even on comments on public content, a comment made by a youth, for example, will not be visible to an adult. So, we're trying to reduce that sort of unwanted interaction. It's still early days for this, but we've taken measures already; we haven't waited. We know this is the right thing to do in terms of ensuring that adults are not able to discover teen content.

On Instagram, for example, in the discovery surfaces, you won't see any youth content. And the same with friend requests: as I mentioned, if someone is not in your network, we give warnings to teens. That's an opportunity to educate, to say this person is a stranger, you shouldn't be accepting their friend request, and so discourage them. So, I think you're right: this is the right focus for us, to keep using technology and AI to prevent grooming and to protect against suspicious and unwanted interactions between teens and adults.

>> BABU RAM ARYAL: A very significant issue that Jutta just referred to: detection of the grooming process of a child on a platform. I have dealt with certain such cases in Nepal myself.

Michael also raised the age classification of platform use, and various age categories get connected to the platforms. My question to you as a platform provider, though it could also be a law enforcement issue, is this: from an accountability perspective, if it is seen that a platform has been used for the prolonged grooming of a child, leading to significant abuse of that child, do you see the platform also sharing accountability for that serious incident, as Jutta rightly mentioned, and not treating it only as a matter for law enforcement? Or is my question not clear? Sarim.

>> SARIM AZIZ: I think platforms definitely have a responsibility to keep their users safe. And as Michael alluded to, and as I said, this is a global issue that requires a global response. We have to do our part in that, and we do it by building products with safety by design. Some of the changes we're making are literally safety by design: when we're developing these features, we ask how youth would use them and how we could keep them safe.

For example, we don't suggest adults as friends to minors, things like that. That is safety by design in the product. But beyond that, when something bad happens, absolutely, we work very closely with law enforcement from around the world, including through NCMEC, when we see a situation where a child is in danger. Many times you won't read about it in the paper, but platforms do cooperate, and they reach out to law enforcement with the information they see to ensure that the child, or anyone, can be kept safe. At least, that's my view. I obviously can't speak on behalf of every platform, but that's how we operate at Meta.

>> BABU RAM ARYAL: I have two more questions for the panelists, dealing with privacy and with future strategy. But before going to those, I will take a few questions from the audience, and I open the floor for your questions. If you have any questions from the floor, you are welcome. Yes.

>> AUDIENCE: Thank you so much for the conversations. The question did ‑‑

>> BABU RAM ARYAL: Please introduce ‑‑

>> AUDIENCE: Sorry. I'm a parliamentarian from Nepal. When it comes to protecting children, one of the other things we also need to protect them from is bullying, right? So, we've got so many different languages. What is Meta, for example, doing about content moderation in different countries in which it's used? It will be great to know. Thank you.

>> SARIM AZIZ: Thank you for the question and for joining this discussion. Yeah, we have very clear policies against bullying and harassment on our platform, across all our surfaces. It's the same policy on Facebook, Instagram, and our other services; the same policies apply everywhere, because we want to extend the same protections to all youth, and to adults as well.

Of course, our threshold is much, much lower when it comes to kids and youth. If a minor is involved in that type of bullying situation, our policies are much harsher in terms of the enforcement action that we take, as well as the strikes against individuals who might be engaged in that behavior. We take a variety of enforcement actions, not just stopping the behavior but also restricting additional abuse from those types of accounts.

But bullying is a difficult one for AI, I have to say. It has made progress, but compared to CSAM, terrorism, and other areas, AI has not been completely successful there; we don't have a 99% action rate, because the nature of bullying can be so different. It may not be obvious to a stranger that bullying is going on, because context is so important: the relationship between the two individuals involved, the cultural context. So, the policies are clear, and we do enforce them; we remove and prevent that kind of content. But we largely rely on our human reviewers. We have people from around the world, including experts from Nepal, who review content in local languages and help us enforce against it. For this type of content we also rely on the community to report, because if no one reports it, platforms will not know that it is bullying. This is why that context matters, and why intervention includes safety partners and civil society partners.

We have partnerships in many countries with local safety organizations, including in Nepal, where victims of bullying can report such content to local partners, who can ensure that Meta's services take action against it quickly.

>> BABU RAM ARYAL: More questions? Audience? Online question? Okay. We have got one online question. 

>> There's a question from Stacy. What are the current accepted norms for balancing teens' human rights with privacy and security? Are we good at it?

>> BABU RAM ARYAL: And any specific resource person?

>> No, not mentioned.

>> BABU RAM ARYAL: Sarim and Jutta?

>> JUTTA CROLL: Okay. I'm going to take that one first. I already wanted to refer to the privacy rights of children, because I think privacy is the most ambivalent article of the UN Convention on the Rights of the Child: children have a right to privacy of their own. That also means, and it's made very clear in General Comment 25, that with the digital environment, with digital media, it has become more difficult for parents to strike the balance between, on the one hand, respecting the privacy of their children, which would mean not looking into their mobile phone, as Michael described before; and on the other hand, their task, their duty, to protect their children. So, it's very difficult in the social environment of the children, in the family, to balance their right to privacy and their right to be protected.

But also, when we look at regulation, for example the EU regulation under way that I've been quoting, and in other regards as well, it is quite difficult, because the moment we ask for monitoring of content, we know that is always an infringement of the privacy of the people who produced that content or who are communicating. Looking into people's private communication would be an infringement of their right to privacy, and that would also mean an infringement of the rights of children and young people, because they have that right to privacy as well.

And on the other hand, if we don't do that, how could a platform like Meta, or any other platform, fulfil its responsibility and accountability for protecting its users? I do think it's an equation that doesn't come to a clean solution. We need to tackle it from different directions to try to find a balance.

>> SARIM AZIZ: Yeah, just to add to that, I think this is a really important one. The question reminded me of the Google case where, I think, a parent took a nude photo of their child to send to a doctor during COVID, and Google's AI marked it as really harmful content and reported the situation to law enforcement. So, yes, there is definitely that balance, and the rights of the child versus the rights of parents is an interesting one.

But I do want to say that industry is also quite against this idea of scanning private messages, because the numbers seem to indicate that we don't actually need to do that. All the things I mentioned in terms of prevention and detection are based on behavior, on behavioral patterns, not necessarily on content; CSAM aside, of course, which does require content matching. If we focus our energy on public surfaces, where people comment, and on the behavior we are trying to prevent, grooming behavior, there is plenty of opportunity for technology, civil society, and experts to focus their efforts, and you don't need to break into private messaging.

In fact, a good statistic: in Q1 of this year, and I'm only quoting Meta's numbers, the global numbers across platforms are even higher, we sent 1.2 million reports of child-related CSAM material to NCMEC, without invading anybody's privacy. That's a staggering number, and that's just Meta. If you add the other platforms, it's even higher. So, I don't think we need to go there; that gets into a lot of unwanted side effects. If we focus our energy on behavioral patterns and public surfaces, there's enough opportunity to prevent grooming behavior and keep kids safe.

>> BABU RAM ARYAL: In an earlier exchange, Michael mentioned privacy, and before opening the floor I said I had a separate question on privacy. So let's discuss privacy further; I would like to ask more about it.

There was a big debate on COPA and CIPA, the Child Online Protection Act and the Children's Internet Protection Act, which were widely debated and went to the U.S. Supreme Court, and which made clear that child protection is one side and protection of adults is a different side, right? So, how can we arrive at a better position, especially from a developing country perspective, like Nepal and Zambia? What kind of legislative framework could be most efficient? Lots of countries don't have specific legislation on online child protection. There might be certain provisions in a general Child Protection Act, but no very clear position on online child protection issues.

And Michael, I'll come to you first to respond on this. What is your experience of the Zambian legal regime? How is the Zambian legislative framework addressing these kinds of issues?

>> MICHAEL ILISHEBO: As I alluded to earlier, in 2021 we split up our Electronic Communications and Transactions Act, which contained aspects of cybercrime, cybersecurity, electronic communications, and other legislative issues on ICT. We came up with two more pieces of legislation separated from the ECT Act: one is the Cyber Security and Cyber Crimes Act and the other is the Data Protection Act. The Data Protection Act covers matters and issues to do with privacy. But of course, privacy is a generic term. At the end of the day, what privacy does a 10-year-old child need when they are under the control of a guardian or a parent? They may not know what is good and what is bad, because of their age and state of mind.

Also, coming back to the issues of security and safety: children become vulnerable the moment issues of privacy come in. If you ask a child, "Let me have your phone. Let me see whom you've been communicating with," the child might say, "I have the right to privacy." What do you do? It's true: the moment we deem a child fit to own a phone, we have, in silence, allowed them a bit of privacy.

But again, it also depends on which platform they're using. I will give the example of my kids. For my kids back home, for YouTube or any product from Google, I use a family account. That allows me to regulate which apps they install: even if I'm not there, I will receive an email saying they want to install this application, and it's for me either to allow it or to block it. The same happens with YouTube. I've taken that step because, in terms of human oversight, I will not always be there to see what they're doing, but technology will help me, through AI, to filter and bring to my notice things the technology deems above their age. There are some games online that would appear innocent in the eyes of an adult, but if a child keeps playing them, a lot of bad things, images that may amount to sexual exploitation, are introduced in the middle of the game, and when you look at the game as an adult, you won't see anything. Providers like Google have a way of knowing which applications in the Play Store or any other platform are appropriate for a child. So, as a step to protect my kids, I've allowed them to use only a family-friendly account, where, despite being in Japan, I'm able to know if they've viewed a video that I may deem inappropriate. I'll either block it from here or talk to them and tell them never to visit that page.

Of course, Microsoft may also come up with their own policies, through their browser, on blocking certain sites, pages, or other things that kids may be doing online using their platform. But again, it comes back to the issue of human rights and privacy. To what extent are we able to control our kids? Are we controlling them when they use a single shared device in the house, one in the morning and another in the evening? Or when they each have their own devices? Or have we allowed them their own devices because we've installed a family-friendly account that enables you as a parent to monitor them? But it's not always enough, because a child is an adventurous person. They will always find ways and means to bypass every control. They seem to know more than we do.

The same also applies to crimes where a child is a victim. A child may be groomed by somebody they're chatting with. They may be told, place this here, place that there, and they'll bypass all of the controls you've put in place. As much as you've put privacy protections and safety rules around how they navigate their online space, there is a third party out there taking control of them and making them do and visit things they're not supposed to.

>> BABU RAM ARYAL: Sarim, same question.

>> SARIM AZIZ: I think this comes back to the prevention aspect, and to the last example Michael just mentioned. We've changed our default settings for youth accounts to prevent exactly that kind of interaction. Prevention is really a good strategy: making sure safety is there by design. And this is where AI is helping.

On the ongoing debate, as Michael said, kids are digital natives in this world, so they're good at circumventing all this stuff. But there is safety by design in the products and services we build, and we have parental supervision tools on Meta's platforms as well, so parents are aware of who their kids are communicating with and what type of content they're interacting with. By default, kids don't see any advertising on Facebook, right? So, obviously, that's important. At the same time, any content that is violent or graphically violent or inappropriate is not visible to them. As I said, it's disturbing even for adults, so we mark it as disturbing so that adults don't have to see such content by default. So, it's an ongoing discussion.

I think the solution is safety by design and youth safety by design in products, because kids are sometimes early adopters of these things that come in, and making sure that it's keeping them safe ‑‑ if we keep them safe, we actually keep everyone safe as well, not just kids. 

>> BABU RAM ARYAL: Jutta?

>> JUTTA CROLL: Yes, I have to respond to one thing that Sarim said. When you say kids don't see advertising on Meta, that is when they have been honest about their age. When they have lied about their age, they might see advertising.

We have already been talking about age verification, or age assurance. I would say it is key to solving the issue that we know the age, I would say, of all users. It's not only that we need to know whether a child is a child; we also need to know whether an adult is an adult, to know whether there is inappropriate communication going on. So, I'm pretty sure that in the near future we will have privacy-preserving methodologies to make sure we know the age of users, to better protect them.

But coming back to the question that you raised and posed to Michael as well, I could say it in one sentence: talk to each other. Parents and children have to talk to each other, and it's always better to negotiate what is appropriate for the child to do than to regulate. And I do think the same applies to policy, to platforms, and to the regulator: talk to each other and try to find a constructive solution between the two of you.

>> BABU RAM ARYAL: Jutta, I don't know whether it is proper to ask you this or not. Earlier you mentioned the upcoming European Union legislation. Can you share some of the domestic practices of the European Member States on online child protection? I wanted to ask that question before, but the sequence developed differently; sorry about that. If you can share any Member State perspective on online child protection.

>> JUTTA CROLL: So, do we have two more hours to talk about all the ‑‑

>> BABU RAM ARYAL: One more round, yeah.

>> JUTTA CROLL: ‑‑ strategies that we have. Of course, it's different in different countries. We see that countries that are starting right now, or started to legislate two, three, or five years ago, have a much stronger focus on the digital environment and on how to legislate against child sexual abuse in a way that is appropriate to the digital environment, while countries with longer-standing child protection laws that did not address the digital environment need to amend their laws, and that takes time. So, the newer the legislation, the better it is fit for purpose to address child sexual abuse in the online environment.

What we did in Germany was, in 2021, we got an amended Youth Protection Act that refers much more to the digital environment than it did before, and it takes the approach I was just talking about. It's called dialogic regulation: rather than simply imposing obligations on the platform providers, it asks for a dialogue with the platform providers, to try to find the best solution. And I think that is much more future-proof than regulating, because you can only ever regulate the situation that you are facing at the moment you do the legislation.

But we need to look forward. And again, I'm referring to the platform providers: you are in the position to know what is in the pipeline, which functionalities will be added to your service. So, if you do safety by design, or, as Sonia Livingstone put it in another session, child rights-based design, then probably the regulator would not have so much work to do.

>> BABU RAM ARYAL: Thanks, Jutta. Gopal, you wanted to say something on this?

>> GOPAL KRISHNA: For some time now, the question of what is the right age of adulthood has been debated in our context, including in the legal context. We now have (?) ‑‑ a while ago our parliamentarian was here; I think she has gone. What is the correct age for marriage? We have a provision that when a child completes 16 years of age, we provide him or her a citizenship certificate, a type of adulthood certificate, provided at 16 years of age. And a person can vote for their representative in Nepal when he or she has completed 18 years of age; that is when the right to vote is provided.

And the third one is: what is the age of marriage? The age of marriage is 20 years. In my practising life, many cases are now before the court ‑‑ rape cases are before the court. What is the age of consent? And many people are in jail nowadays ‑‑ in jail because they engaged in consensual relations before the age of 18, and that is questionable. That is why this is so important, and this matter is now being debated in our civil society, too: what is the proper age?

Interestingly, though we have settled principles and we have examples, what is the proper age for marriage, and could a similar age be settled internationally or not? This is a very important question for us now. That is why I raise these questions to our fellow ‑‑

>> BABU RAM ARYAL: To link this issue, I'll go to Sarim very briefly. Sarim, when litigation comes and a law enforcement agency sees content involving actors of different age groups, especially sexually relevant or similar content ‑‑ and, as Gopal was noting, different legislation allows this between people at different ages ‑‑ such content could be used as evidence. This is debated differently in different societies. So, how easy is it for platform providers to deal with these situations, and what is the platform providers' response to these kinds of issues?

>> SARIM AZIZ: Yeah, I think these child safety issues are definitely top of mind for our trust and safety teams at Meta, and I'm sure for other platforms, too. And I think the NCMEC number that I shared earlier is a good sort of proof point of how we cooperate with civil society and law enforcement.

Of course, there are some cases where we don't wait for NCMEC. If we believe a child is in imminent danger, we have child safety teams that look at these cases, and law enforcement outreach teams that directly reach out to law enforcement in the country. And there have been cases where we've helped bust these child exploitation rings as well. So, it's an ongoing effort. I wouldn't say it's easy. AI has helped in that effort, but it still requires human intervention and investigation.

I think the age verification piece is interesting. As I mentioned, that's where we are doing some tests, and AI does help, because one of the solutions being tested is where the young person sends a video of themselves for verification. So, you can rely on this to a certain extent, but there are data collection concerns: how much private citizen data are you going to collect? And then there are suggestions to link into government identity systems, but there are surveillance concerns with that. So, I don't think there's a silver bullet here, and I don't think any solution is going to be perfect, right?

We are doing tests with age verification, as I said, on Instagram, and we'll wait and see what the results say. There will be some level of verification, but again, I don't think anything will be perfect. And certainly, as Jutta said, we need to figure out whether an adult is an adult and whether a child is a child. And we have other things we need to detect as well, like suspicious behaviours and fake accounts. So, that's where AI is also definitely helping us out quite a bit.
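To illustrate the trade-off Sarim describes, here is a minimal Python sketch of how an AI age estimate with an uncertainty band might gate an account. Everything in it (the AgeEstimate fields, the thresholds, and the action labels) is a hypothetical illustration, not Meta's actual system.

```python
from dataclasses import dataclass

@dataclass
class AgeEstimate:
    # Output of a hypothetical face-based age estimation model
    age: float     # point estimate in years
    margin: float  # +/- uncertainty band reported by the model

def gate_account(estimate: AgeEstimate, adult_age: int = 18) -> str:
    """Decide how to treat an account based on an AI age estimate.

    The thresholds and the escalation step are assumptions for
    illustration only.
    """
    lower = estimate.age - estimate.margin
    upper = estimate.age + estimate.margin
    if lower >= adult_age:
        return "treat-as-adult"
    if upper < adult_age:
        return "apply-minor-protections"
    # Estimate straddles the threshold: don't trust the model alone;
    # escalate to another verification method or human review.
    return "request-additional-verification"

if __name__ == "__main__":
    print(gate_account(AgeEstimate(age=22.0, margin=2.5)))  # treat-as-adult
    print(gate_account(AgeEstimate(age=14.0, margin=2.5)))  # apply-minor-protections
    print(gate_account(AgeEstimate(age=17.5, margin=2.5)))  # request-additional-verification
```

The point of the uncertainty band is the one Sarim makes: no single estimate is perfect, so borderline cases should fall back to another method rather than a hard yes/no.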

>> BABU RAM ARYAL: A very brief follow-up question. You mentioned that you regularly report to NCMEC, and NCMEC is based in the U.S., right? So, what would be your response if other jurisdictions wanted to collaborate with you?

>> SARIM AZIZ: NCMEC collaborates with law enforcement around the world. So, if you're a law enforcement agency in any country, you can reach out to them. There's also another organization called ICMEC that they partner with, which is international, and they work with law enforcement to set up access to the CyberTipline. So, that gives local law enforcement access to that information.

>> BABU RAM ARYAL: Jutta.

>> JUTTA CROLL: Yes, I just wanted to add something, because I'm very grateful to you for bringing in the question of consent. The principle of the age of consent has also come under pressure in the digital environment, where the question is: what was consensual and what was not? The General Comment that I referred to before says, in paragraph 118, that self-generated sexualized imagery that is shared consensually between young people should not be criminalized. I think that is a very good provision, because young people also need to explore their sexuality and learn about each other. But when it comes to these images, AI would not be able to understand whether the sharing was consensual or not. That makes it very difficult to apply such a rule: what is consensual and what is not? As I said before, we can rely on artificial intelligence in many respects, and I'm pretty sure it will get better and help us better protect children, but there are certain issues where artificial intelligence cannot help, and we need human intervention. Thank you.

>> BABU RAM ARYAL: Thank you. Any questions from the audience? We have very little time left in the session. If you have any question, please.

>> AUDIENCE: As part of protecting the rights of the child, the companies or social media groups should be responsible enough, in terms of legislation, content, and the use of AI, to identify local languages so that there will not be any kind of misuse. I would like to give my own example.

About eight years ago, I got a friend request under the name of a beautiful child; the picture looked about 13 or 14 years old. I didn't accept it. She frequently contacted me, and I ignored it. Then I thought, I should take note of this and tell her parents what their child was doing. So I asked her, do you have a private number or phone number? I don't like talking on social media. Then it turned out that she was not a child. She was a woman who wanted some informal relationship with me or someone else.

Then I asked her, "Why? Why did you put up the picture of a child and a different name?" And she said, "Oh, if I put up the picture of a child, then people will like it." This is a case I experienced myself. Similar things might happen to others, so registration systems on social media should have some authentication mechanism. Without that, similar cases might happen to other people. So my request to the social media companies is to be accountable, responsible, and intelligent enough that their platforms are not misused. That is my suggestion. Thank you very much.

>> BABU RAM ARYAL: Thank you.

>> AUDIENCE: Thank you. So, I will move a little bit to the human side, while we're talking about AI.

>> BABU RAM ARYAL: Very briefly. Time.

>> AUDIENCE: Very briefly. Young people experience a lot of peer pressure in the digital era, and social media has amplified it even more; for example, increased depression, disordered eating, and harmful thoughts. When we look at the root causes of peer pressure, namely the need to fit in, fear of rejection, and the search for a sense of belonging, those are human aspects. And because of that, young people are very vulnerable to online exploitation, right? So, what are social media companies doing in terms of the human aspect, along with the technical one?

>> BABU RAM ARYAL: Thank you.

>> AUDIENCE: Just a short one. My question is also for Sarim. I'm Binod Bisnet from Nepal. Recently, there have been messages circulating on Facebook Messenger saying that you have been infringing some of Facebook's child protection policies, and that if you don't follow certain instructions, your account will be suspended, and so on. When you open the message, there is a photo with the Meta logo. It's very alarming for young users, and they tend to give their contact details, their ID, their password, and so on. But I think in reality these are phishing sites that are seeking your passwords.

So, my question is: what is Facebook doing with regard to those phishing attackers? What action or policy does Facebook take against those phishing sites? Thank you.

>> BABU RAM ARYAL: Thanks. Our time is almost up, but questions keep coming. So, Sarim, directly to you.

>> SARIM AZIZ: Questions all directed to me. I'm happy to continue the conversation after this discussion. But look, on all fronts ‑‑

>> BABU RAM ARYAL: Your concluding remarks.

>> SARIM AZIZ: My concluding remarks would be, going back to my introduction, that platforms alone cannot solve all of these problems. We have technology that can assist, but technology still requires human expertise, both from the platforms and from civil society, law enforcement, government, and parents and families.

So, phishing is a longstanding issue. Even the smartest people in this room have been phished, whether with a Meta logo or some other logo they recognise. When you're short of time and your attention span is shorter, you can be phished very easily. So, that's an issue where we need to increase digital literacy and education.

And actually, from a systems perspective, the way you fix that is authentication, right? One-time authentication codes, so that even if someone gets phished, their credentials alone are not enough; one-time authentication will prevent a phishing attack from getting access to systems. So, that systemically needs to change. That's a separate issue.
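As a rough sketch of the one-time authentication Sarim refers to, the following Python code implements a time-based one-time password (TOTP, RFC 6238) using only the standard library; the shared secret is an illustrative placeholder, not a real credential.

```python
# Minimal TOTP (RFC 6238) sketch using only the Python standard library.
# A stolen static password is useless without the current code, which
# expires with each time step.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, time_step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // time_step           # moving factor
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

if __name__ == "__main__":
    # Both the server and the user's authenticator app derive the same
    # short-lived code from this shared secret.
    print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code changes every 30 seconds, a password captured by a phishing page quickly becomes useless on its own; phishing-resistant methods such as hardware security keys go further, since a live phishing proxy can still relay a freshly typed code.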

And absolutely, in terms of safety, Meta cannot solve these issues alone. In terms of the human aspect, we work with the 400 safety advisors we have, and we are members of the WeProtect Global Alliance, along with other organizations and industry, because we all want to protect kids. And I mentioned earlier how we are using our platforms and AI to detect potential grooming or potential unwanted interactions, and to educate kids, to prevent kids from unwanted interactions with adults. So, those are some efforts, but there's lots more we can do, and we're open to ideas.
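As a purely illustrative sketch of the kind of behavioural detection Sarim mentions, here is a minimal Python example that scores an adult-to-minor interaction from a few signals; the features, weights, threshold, and action labels are all invented for this sketch and are not Meta's actual signals.

```python
# Hypothetical rule-based risk scoring for adult-to-minor interactions.
# All signals and weights are invented for illustration.
from dataclasses import dataclass

@dataclass
class Interaction:
    sender_is_adult: bool
    recipient_is_minor: bool
    accounts_are_connected: bool      # e.g. already friends/followers
    sender_recent_blocks: int         # times the sender was blocked recently
    sender_minor_contact_rate: float  # share of sender's outreach aimed at minors

def risk_score(x: Interaction) -> float:
    """Combine a few behavioural signals into a 0..1 risk score."""
    if not (x.sender_is_adult and x.recipient_is_minor):
        return 0.0
    score = 0.3 if not x.accounts_are_connected else 0.0
    score += min(x.sender_recent_blocks, 5) * 0.1
    score += x.sender_minor_contact_rate * 0.2
    return min(score, 1.0)

def action(x: Interaction, threshold: float = 0.5) -> str:
    # High-risk interactions are restricted and a safety notice is shown
    # to the minor; anything else passes through normally.
    return "restrict-and-warn" if risk_score(x) >= threshold else "allow"

if __name__ == "__main__":
    stranger = Interaction(True, True, False, 3, 0.8)
    friend = Interaction(True, True, True, 0, 0.0)
    print(action(stranger))  # restrict-and-warn
    print(action(friend))    # allow
```

In practice such scoring would be learned rather than hand-weighted, but the design point stands: the system flags patterns of behaviour for restriction or human review rather than making irreversible decisions on its own.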

The gentleman who mentioned that he was maybe phished, or that there was some attempt to connect with him: of course, we also rely on the community. One of the challenges we have is that people don't report. They think that platforms have the manpower or the ability to just know when something's wrong. We won't know until people, civil society, and users report things to us. That's where we rely on our partners. Reporting is key when these things happen, to protect yourself but also your community.

>> BABU RAM ARYAL: Closing, less than one‑minute response from Michael.

>> MICHAEL ILISHEBO: So, basically, we will not address any of the challenges we've discussed here without close collaboration between the private sector, governments, the public sector, and the tech companies. And inasmuch as we can put trust in AI to help us police cyberspace, the human factor and close collaboration will be the key to addressing most of these challenges. Thank you.

>> BABU RAM ARYAL: Thank you. Jutta?

>> JUTTA CROLL: Yes. Thank you for giving me the opportunity for one last statement. I would like to refer to Article 3 of the UN Convention on the Rights of the Child, which states the principle of the best interests of the child. In any decision we may take, that policymakers may take, that platform providers may take, just consider whether it is in the best interests of the child, and that means the individual child, if you are talking about a case like some of those we've heard, but also all children at large. Consider whether the decision to be made, the development to be made, the technological intervention to be made, is in the best interests of the child. Then, I do think we will achieve more child protection.

>> BABU RAM ARYAL: Thank you. Gopal, closing?

>> GOPAL KRISHNA: On the part of child rights, we have already signed the child rights treaty. So, being a very responsible part of society, and being the President of the Nepal Bar Association, I am committed to always being in favour of child rights protection, of the protection acts and their positive amendments, and so on. I am very thankful to you for giving me this opportunity. Thank you very much.

>> BABU RAM ARYAL: Thank you very much; as we are running out of time, I would like to thank my panelists, my team of organizers, and all of you who actively participated in this session. And of course, for the benefit of children, and dedicated to children, I would like to thank all of you, and I close this session. We hope we will have a good report of this discussion, and we will share the report with all of you through our channel. Thank you very much.