The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> EMILAR GANDHI: Let's quickly go through our experts today. If you want to introduce yourself as well.
>> JEFFREY HOWARD: Sure, my name is Jeff Howard, I'm at University College London.
>> EMILAR GANDHI: Thank you so much. Can we go to our three speakers online? If you could just quickly introduce yourselves.
>> TOMIWA ILORI: Okay, hi, I am the advisor for the Africa project at UN Human Rights. Thank you for having me.
>> EMILAR GANDHI: When you speak next, please increase your volume. Can we go to Conor, please?
>> CONOR SANCHEZ: Yes, hi, everybody. My name is Conor Sanchez and I'm with Meta on the stakeholder engagement team. Pleased to be with you all today, sitting in California.
>> EMILAR GANDHI: Thank you so much, Conor. And do we have our third speaker online? Maybe when she joins us, she can introduce herself. The first challenge, which I think is familiar to all of you, is identifying the experts. Who are these experts? How do we even define what expertise is? Can we look at lived experiences as expertise?
Beyond the external experts themselves, can we look at the impacted, the potentially impacted, the vulnerable, the underrepresented groups? So identifying experts is in itself a challenge. The second challenge is, once we identify the experts, how do we manage conflicting interests within the stakeholder maps that we have? What are the agendas that they have that can influence input and objectivity on our policies, on our product policies, on our content policies? And beyond just identifying the experts, it is acknowledging that there is a spectrum of experts. It's not just one type of expert; it's not just the academics or civil society groups that I'm seeing in the room and also online. The third challenge is the power dynamics. Not all NGOs are the same, not all stakeholders are the same. Different stakeholders have different levels of influence, both within stakeholder groups themselves and between the different stakeholder groups. And then, how do we communicate complex information? I'm happy to see, you know, Levin here, as a former Diplo employee and ambassador; I know you don't have your headphones.
It's important; I think we have benefitted, and personally I've benefitted, from the capacity building programs that the organisation has run. And for us as Meta, it's important to differentiate among the stakeholders we're engaging.
Not all of them are the same. They might have lived experiences, they might be experts in their field, but not everyone understands our policies, so we have to really work hard to ensure that before we communicate complex information or policy changes, we engage in capacity building. As for opportunities, I think there are many, and this panel will look at some of them, you know, access to specialized knowledge. We don't want this to be just an extractive process but one that is mutually beneficial.
That will improve our policies. It improves our policies, not only the substance and the process itself but also the credibility of the work that we are doing. Transparency, I think, is another opportunity. It sounds like a very easy concept, but obviously it is not, because with transparency comes accountability, and that's something I think we will need to talk about. And also building trust. We know that there's a trust deficit, you know, between us and stakeholders. Do we need intermediaries to help us build that trust?
Or is this something that we can work on ourselves? And we know that trust is not a sprint but a marathon, and we need to ensure that we are in it for the long haul. I will just end here, but I think there is a lot more we can say about opportunities, you know, about what we can gain from the process itself. Moving over, should I start with you, Jeff? What are some of the issues and your experiences working with Meta in terms of stakeholder engagement? And then we can go into specific questions.
>> JEFFREY HOWARD: That sounds great. I've been given a brief to speak for about 8-10 minutes about my experience, and I will be thinking in particular about the role of academics in this process. So consider just some of the questions that bedevil policymakers at platforms like Meta. Should platforms restrict veiled threats of violence, or only explicit threats of violence?
Should rules against praising violence be modified to include a carve-out for speech that justifies self-defense? When should graphic content depicting real-world violence be permitted for awareness-raising purposes? When should such content be permitted on newsworthiness grounds, for example because the speaker is an important politician? What kinds of violations should result in permanent bans from a platform, and which in temporary suspensions? How can platforms mitigate suspicious conduct by users to prevent abusive behaviour before it happens?
These are some of the topics I've engaged with over the years in my work as an academic researcher. I've engaged principally with various teams within Meta, but also with teams within the oversight board and with policymakers throughout the UK. The thing about these conversations is that they are normative questions about how to strike the balance between conflicting values, and the academic discipline of ethics is dedicated to exactly that issue. My role is to bring the tools of ethics, conceived widely, to bear on the proper governance of online speech and behaviour. What I want to do in the next couple of minutes is to sketch two alternative theories of the proper role of academics in undertaking this kind of work, tracing some of their implications for how we should engage with platforms.
So the first conception I will discuss is what I call the activist conception, and I think this is really common. On this view, the academic has made up his or her mind about what the right answer is on a particular issue and sees her role as lobbying the platform to adopt her view. So consider that question I mentioned about whether there should be a self-defense carve-out to the policy prohibiting advocacy of violence. On this approach, the academics have already made up their minds about whether the answer is yes or no, and the goal is simply to persuade platforms to go their way. Usually, academics who follow this approach have already written an academic paper publishing the view they hope to defend, and want to show that paper has had impact for professional incentive reasons, so they're really activists for their own research. I think this is common and completely misguided, the wrong way for academics to engage. I think we should reject the activist conception.
And I think we should reject it because it diminishes the distinctive role that I think academics can play in this process: it fails to distinguish the role that academics can play from the role of other stakeholders. If you work for an NGO dedicated to combating violence against girls, you have figured out what policy best serves the needs of those you represent, and it is perfectly appropriate for people in that position to advocate for that policy; that is where the activist view belongs. I would argue that the distinctive role of academics is not to be activists. It's something else, and that leads me to the second view, which is the one I will defend and which, for lack of a better term, I will call the educative view. The idea here is that the role of the academic is to educate decision makers.
In this way it draws on the way academics ideally already teach their classes, which is to inform students about the range of research pertinent to a particular topic. So when I teach a class in London on the ethics of counterterrorism policy, or the ethics of crime and punishment, I'm not just teaching my own preferred views on those various controversies; I teach the most reasonable arguments on each side of an issue, so that students are empowered to make up their own minds. Likewise for my colleagues in empirical science: when teaching the causes of political polarization, the professor doesn't just teach students his own favorite explanation that he's published in a recent article in the American Political Science Review. The right way to teach a class on that topic is to identify the range of potential causes in the academic literature, pointing out the evidence for and against each. He might also flag that he favors a particular view, but his goal isn't to ram his preferred theory into students' brains; it is to empower them with frameworks and insights to make up their own minds.

My thought for you today is that the educative conception should guide academics in how they engage with platforms and other decision makers. Our role isn't just to tell platforms what we think the right answer is, as if platforms were counting votes among stakeholders; and if they were, it's not clear that academics should get a vote, since we're not stakeholders, not affected by policies in the way particular constituents are. Our input is solicited because we have knowledge relevant to their decision. Our role is to give platforms the insight to make up their own minds.

Let me make that a little more concrete before I finish. When I first engaged with Meta on the topic of violent threats and whether veiled threats should be restricted, I saw my role as getting them up to date about what threats are, why speakers might have a moral duty to refrain from threatening language, and what legitimate role such language might play. I also saw my role as informing them about theories from legal philosophy about what to do in tricky cases where all the candidate rules in a given policy area are overinclusive, which I think happens quite a lot in the content moderation space. Likewise, when my team presents public comment to the oversight board, we of course indicate what result we think the oversight board should reach, but that's much less important than the framework of arguments we offer for reaching that conclusion. So, for example, one central critique of deploying international human rights norms for content moderation is that these norms fail to offer adequate guidance; but those who make this critique in the literature almost always overlook the fact that there's a huge amount of cutting-edge philosophical work which I think can be really, really helpful to decision makers. Part of my role is to help decision makers within platforms learn about that work.
Now, wrapping up, I would like to emphasize that the case for the educative model is bolstered by the obvious fact that experts disagree about what to do, and so academics simply cheerleading for one side of the argument is not particularly helpful. The role of academics is to supply platforms with the insights they need to exercise their own judgment about what to do, and I think judgment on ethical questions is essential. If I were to tell you that I was opposed to the death penalty, and you asked me why, and I said that I had asked some ethics professors, that would be an intellectually and morally unserious set of reasons for holding that view. We are all responsible for making our own judgment about what is right and wrong.
And while ethicists can help us think through the arguments, the judgment about which argument is most convincing must ultimately be ours, and that goes for the platform too. Platforms like Meta can consult experts, but it's their responsibility to decide what to do.
The last comment I will make is that many academics are reluctant to engage with decision makers in this space. I think that's a huge mistake, because working with these folks can inform what we write and think about, and it can also give us an opportunity to make a positive practical difference through our work. So that's how I see the role of academics in engaging with platforms. Thanks.
>> EMILAR GANDHI: Thank you so much, Jeff; this is really useful. I think one of the phrases I'm taking away is "intellectually and morally unserious views"; I think I will use it moving forward. You put forth a reason why academics should engage in these spaces, and I'm sure there are a lot of people who have questions for you, but if we could move on to our other speakers, we will get back to you. Now I want to move on to Conor, who leads our external engagement and who, with Jeff, is the brains behind this workshop, for him to take us through some case studies that show how our engagements with academics have impacted, you know, policy decisions. So, Conor, over to you.
>> JEFFREY HOWARD: Could we put ‑‑ oh, great, everyone can see it now. Super.
>> CONOR SANCHEZ: Wonderful, thank you so much can you hear me okay?
>> Yes, we can hear you.
>> CONOR SANCHEZ: Thanks for those first set of comments and provocation for this discussion. For everybody joining again my name is Conor Sanchez I'm on the stakeholder engagement team here at Meta. I will briefly share about how we carry out consultations with external stakeholders including academia as well as independent researchers we engage these experts for a variety of reasons and a wide variety of topics. This will help you see how this process runs and how we take the consultations into account as we work through particular policy. Just backing up for a second our content policy team is the team that is in charge of our community standards. The community standards at the simplest level are rules where people feel empowered, where they feel safe to communicate and importantly these standards are based on feedback. Feedback we received from a wide variety of individuals who use our platform, but also the advice of experts. And I have a few case studies that I think kind of exhibit exactly how these consultations have had an impact on our policy. An important detail about our community standards these are global, they apply to everyone around the world. And we've become increasingly transparent about where we draw the line on particular issues. And laying out these policies in detail allows us to have a more productive dialogue with stakeholders on how and where our policies can improve. As Emilar mentioned we do a lot of capacity building.
We realize that not everyone is savvy about how our rules work, so we also do a lot of education to make sure that people understand where the status quo is and why we've drawn the line in certain areas, even as we seek their feedback on improving our community standards. As you can see, this is a long list of what can be found there; it covers quite a bit. It contains everything from hate speech to violent and graphic content, to adult nudity and bullying on our platforms. The consequences for violating our community standards vary depending on the severity of the violation and the person's history on the platform, so violating any one of these rules can receive a different enforcement mechanism, and that in and of itself is something that we seek feedback on. What is the proportional response to somebody who violates a rule? What happens if it's violated twice? Three times, or seven times? We want people to learn about our rules, get better, come back and be responsible community members, so at what stage is which enforcement mechanism appropriate?

Just to give you a sense of how we involve experts in our policy development process, we bring them into a very robust process for developing a policy. We create an outreach strategy to make sure that we are including a wide range of stakeholders, and then we carry out that outreach. Ultimately, as Jeff said, the decision sits with us. We take everything that we've heard from our consultations, we provide that to our internal teams and to leadership, and we make a policy recommendation at what's called our policy forum. This is the preeminent space within the company where we consider some of the biggest questions facing our community standards and make a decision on the direction that we want to go in.

In terms of who we engage, this is the question I get the most: how do you decide who to engage with, how do you find relevant experts, how do you make sure that vulnerable groups or groups that haven't been heard are being heard in the process? There's no simple formula for this, but we have developed a structure and a methodology that helps guide us as we reach out externally. First, we can't meaningfully engage with billions of people, although our stakeholder base certainly includes billions of people, so we seek out organisations that represent the interests of others. We also look for expertise in particular fields, and these don't have to be experts in content moderation or content enforcement, or even internet governance or platform governance; they can be experts in psychology, for example. All of these things can be informative for a policy. In terms of the categories of stakeholders, we're looking at NGOs, academic researchers, human rights experts; they can also be people with lived experiences who are on our platforms, using our tools in certain ways. And in terms of guiding who we engage, we have three principles or values that we look for: inclusivity, expertise, and transparency, making sure that we are building trust with the stakeholder base as we speak with them. So, jumping into a few examples of how this has actually played a part in our policy development process.
In 2022, we published our crisis policy protocol, which codified our processes for responding to crisis situations. The framework we aimed to build would assess crisis situations that may require a specific policy response, so we explored how to strengthen our existing procedures and include new components, such as certain criteria for entry into and exit from a crisis designation. As we developed this, we sought consultations with global experts who had backgrounds in things like national security, international relations, humanitarian response, conflict and atrocity prevention, and human rights.
In these consultations, the stakeholders and experts that we spoke to really helped surface the key factors that would be used to determine whether a crisis threshold has been met.
So this included, you know, whether there were certain political events or large demonstrations in the streets, or certain states of exception or policies that were put into place; all of these factors were based on the experience and expertise of the experts that we consulted, and they really informed the criteria that we continue to use to this day.

Another example is our functional identification policy. This focused on how we treat content that could identify individuals through factors beyond a specific name or image. We already had policies so that if someone's name or image was shared in a certain context and that posed a risk to them, we would remove that content; but functional identification involves more subtle details that are shared about an individual without naming them, yet could still result in their being identified, and they could be put at risk as a result of that identification. The expertise that we sought for this policy development included privacy and data security experts, and journalists, who are often publishing the names of individuals in their stories and whose sources sometimes need to remain anonymous. From there we were really drawing on decades if not centuries of experience of individuals who have grappled with this question before: what details to provide in a publication that will be read by many, many people.
And therefore the types of guardrails that they need to put in place to protect those identities. We also spoke with a global women's safety expert advisory group that we manage; this includes not-for-profit leaders who focus on the safety of women on and offline. This stakeholder input, these engagements, helped our team to consider additional factors, including somebody's age, their ethnicity or their distinctive clothing: if all three of those are published online and we have a signal from a local NGO that says this could put somebody at risk, that would allow us to remove that content based on our new policy.
And the last example of how expert input has played a role in our policies: in 2022, we developed a policy on how to treat content soliciting human smuggling services. Our policies at that time, under our human exploitation policies, distinguished human smuggling from human trafficking, recognizing human smuggling as a crime against the state and human trafficking as a crime against a person. What we wanted to tease out with experts was really: what are the risks to people who solicit this content online? What are the risks associated with leaving this content up, and what are the risks associated with removing it? We heard a wide variety of insights from the stakeholders we spoke with. The experts included people who work at international organisations focused on migration, refugee protection, and organized crime. They also included academics who focus on irregular migration, human smuggling, and refugee and asylee rights, as well as criminologists, and we also spoke with former border enforcement officials.
People who have worked at borders around the world. We really drew on this expertise to figure out where we should draw the line on this policy. They highlighted the risk posed to individuals, especially vulnerable individuals, who solicit this content. They also highlighted what removal would mean for somebody who may be in a very vulnerable position, escaping conflict, oppression or otherwise unsafe conditions in their country of origin. Ultimately this led us to adopt a policy that minimized the risk of removing these types of posts by providing a safety page with information on immigration. So we would remove the solicitation of human smuggling services, but we would also provide a safety page for the individual who may be requesting them, and in developing that safety page we also consulted experts to determine what information would be most impactful for this vulnerable population. That concludes my remarks and I will pass it back to Emilar. (No audio).
>> EMILAR GANDHI: Can you hear me?
>> TOMIWA ILORI: Now I can.
>> EMILAR GANDHI: Can you take us through how platforms can learn from human rights experts to ensure a rights-centered model for content moderation? Over to you, Tomiwa.
>> TOMIWA ILORI: Thank you. Can you hear me clearly? Can everyone hear me? Shall I go on? Okay, okay. Quickly, to my question: my understanding is that it can be subdivided into two broad areas.
The first one is how platforms can, and the second is how platforms should, learn from human rights experts to ensure a rights-centered model for content moderation. As the speakers before me have said, there is usually really no one-size-fits-all, because, for example in the context of Meta and other major platforms, they operate in many contexts, including complex contexts. So saying that any single thing has to be the solution is going to be very problematic. But let me say that some of the things I think platforms can learn, based also on platforms I've worked with in the past, like Meta, are the following. The first is ensuring meaningful collaboration. What do I mean by meaningful collaboration? This involves, for example, increasing collaboration with established institutions and organisations to identify practical applications of human rights standards to content moderation and governance. It also involves, for example, moving away from a focus on Western institutions that purport to work on issues in contexts they do not experience, for example in the rules that they define, and instead working directly with institutions that have boots on the ground in those contexts. This could help identify specific pain points for platforms and these actors, and enable collaboration with these institutions and organisations to think through possible solutions. I think this was also mentioned earlier by both Conor and Jeff.
Number two is centering victims. What I mean by that is that this should involve broadening the scope of human rights expertise to include victim-centered feedback on the impact platforms have, especially on vulnerable persons. When we think of experts, I think we miss out on centering the victims whose experience is the focus of the engagement. One key way of learning from these experts is also to include the voices of the victims who are impacted by these activities, and who may or may not be experts in content moderation.
And the third one is adopting a contextual approach. What I mean by this is working with key actors and experts in specific domestic contexts, such as national institutions, civil society and academics. This provides more contextual nuance and understanding of the issues on the ground, how these actors are currently thinking about them, and how exactly platforms can learn from their impact on the ground. You saw the example given earlier by Conor regarding the crisis policy protocol,
which reviewed certain factors to consider in determining what qualifies as a crisis. A fourth way that platforms can learn from experts is, you know, by increasing access to platform data for independent research, especially in underserved contexts such as the majority world.
There's a need to shift not only the focus but also the tools for reaching such an understanding, such as making platform data available for analysis by researchers in the majority world. Lastly, another way that, you know, platforms can learn is by identifying the existing resources out there, and this includes both technical and non-technical outputs, such as those from the U.N., academic institutions and civil society organisations. Not only this: how these resources are adapted for platform use should also be made transparent. For example, where resources are applied in platforms, it should be clear what was applied and why, and in cases where feedback is sought but not utilized, it should also be clear why.

The second part of the question, which I'm going to rush through quickly because of time, is what platforms should learn from human rights experts. Number one is the practical application of standards, and I know this is a very, very difficult area, especially for companies. But since human rights experts draw from standards that bear on platform activities, it will be useful to look at the most proximate standards and, for example, in this context, the U.N. Guiding Principles on Business and Human Rights would easily apply.
Their application to technology companies provides useful ways to ensure that platform activities are rights-centered. For example, one way UN Human Rights has done this is in its project on the practical application of these principles, and there are quite a lot of resources in this area. Its focus areas include business models, end use, and a mix of measures involving regulatory and policy responses to human rights challenges linked to digital technologies. Another way platforms should learn from human rights experts is in ensuring an inclusive policy development process, and I was happy to listen to Conor earlier, because that was a very practical demonstration of what this development process refers to. Thirdly, there is proactive accountability. This helps to engender trust, and it involves operationalizing measures that make platforms accountable regarding human rights even before teams are aware of such harms. This includes, but is not limited to, impact assessments of products, assessments of which harms impact human rights, and the steps taken to remediate such harms. Lastly, platforms should learn agile adaptability from human rights experts. Platforms can learn to be agile and adaptive when it comes to emerging and cutting-edge content moderation challenges. For example, what should be the best standard practices already highlighted by human rights experts regarding moderation? Another example: in what ways can platforms fund or support research without impeding its independence or credibility? So, in my view, and of course I rushed the presentation, these are more or less some of the ways that I think platforms could and should learn from human rights experts. Thank you very much, Emilar.
>> EMILAR GANDHI: Thank you so much, Tomiwa. I think you raise an important point regarding platforms being transparent about the input that they take into consideration and why, and not just communicating the outcome. I'm not sure if Naomi is online; I can't see from here. Naomi, if you're online, would you like to jump in? Naomi Shiffman is from the oversight board; if she's online she will discuss how the oversight board contributes to policy development, and she will also highlight how she built the academic and research partnerships program. Is she online? No. Okay. If she's not online, I think we can move into the discussion phase. We have a few more minutes. We've been talking for the last, you know, few minutes; are there any questions for our experts? Yeah.
>> AUDIENCE: Hi, it's Michael from the U.N. refugee agency, and we've worked with a number of people on the call, so good to see you. My question is about capacity, and I love the kind of ground-up approach. I wondered how much capacity there is, both on Meta's side and on other platforms, for putting that resource where it's needed, and the issue of language comes up again and again.
In terms of capacity to support maybe a wide breadth of languages that are not supported now: how can we take that bottom-up approach, not just for policy development but also for content moderation, and make sure we have a really strong infrastructure there? I know lots of people are putting AI at the heart of this; maybe that can help us moderate content going forward, and that might be one answer.
But there are doubts in there. So, yes, what can we do? Is there enough capacity, and if not, how can we increase that capacity?
>> EMILAR GANDHI: Thank you so much, Mike, that's a great question. Before we get back to it, any other questions? Yes, please introduce yourself.
>> AUDIENCE: Thank you so much, I am from Iraq. Thank you so much for this interesting discussion; I actually learned from all of you. Last year I participated in one of Meta's events on community standards.
It was actually helpful. I have a similar question, because I am from Iraq and I know Iraq is a very diverse country.
And my question would be for Conor, regarding other languages: I know that the policies you mentioned are mostly in English, and I don't know if any of them are available in different languages so people can read about them. The next question is about the engagement of stakeholders at the local level. In Iraq, for example, I feel there is a lot of difficulty in reaching Meta when someone, a researcher or an NGO, wants to engage or ask a question. It's difficult to get to the experts. Thank you so much.
>> EMILAR GANDHI: Thank you so much, and good to see someone who attended our community summit.
Conor, can you take on some parts of the question? I'm happy to jump in.
>> CONOR SANCHEZ: I think language is a huge part of content moderation and enforcement. It's obviously something that we have invested in quite a bit over the last eight years or so. Just zooming out overall, we've invested 20 billion dollars in safety and security, and our trust and safety team at the company is made up of about 40,000 people who bring language expertise, but who also bring, you know, expertise in certain policy violation areas, in areas such as safety and cybersecurity. Content moderation includes thousands of reviewers
who moderate content across our platforms, so Facebook, Instagram and Threads, in about 70 different languages. We also do fact-checking with our third-party fact-checkers for misinformation in about 60 different languages. And then for coordinated inauthentic behaviour, which focuses on what many would consider foreign influence operations, taking down networks of operations, those takedowns have been done in about 42 different languages. So it's something that we are continually, you know, wanting to get better at. I think in addition to just the language differences, there are the cultural differences and colloquial nuances that come with every language; even with something like Spanish, you have certain terms and ways of speaking that differ from one part of Central America to South America. For that, another part of our content moderation is our trusted partner program, which is a network of hundreds of NGOs around the world that we manage, and which really provides that local context, that local insight, when there is maybe a particular trend or a term that may only be used in that jurisdiction or in that region. They can then be informative for our policies as we're developing something or taking an action on particular pieces of content. But, Emilar, anything else that I may have missed on that...
>> EMILAR GANDHI: I think you've spoken about a lot of things there which are really relevant. Just to add, on some of the questions that you asked, Mike, around capacity on both sides: Conor has mentioned that we have over 40,000 people in trust and safety, but I think you can never be at a point where you have full capacity, where you know everything. Cultural competence is also important. It's important to note that we have some external stakeholders or experts who are willing and able to engage with us, and some who are willing but unable to engage with us; unable because maybe connectivity, you know, an internet connection, is expensive, or because of language capacity. While we have some local team members, some people who can speak the languages, we also try to ensure that we either meet people where they are, where we can, or provide support, you know, connectivity support, to engage.
But, you know, we know that sometimes we need to sustain that and make sure we are able to do so.
We also look at the format of the engagement itself. On capacity, for us, we need to continuously look at the context and know where we have gaps, and also rely on our external experts to say, you know, you could have done better on this. And we learn a lot not only from academics like Jeff or from NGOs, but also from humanitarian organisations, because you are on the ground, you know what is happening and you deal with people every day. And talking of sustaining engagements, I want to come back to you, Jeff: how can we sustain engagement with academics so that it is as meaningful as we want it to be? How can we ensure that it's continuous?
>> JEFFREY HOWARD: Well, I think relationships are key to the story here, and making sure there's ongoing dialogue with the stakeholders over time. My experience participating in groups within Meta that have periodic meetings where they revisit policy areas over time has been extremely useful, and of course, as those relationships develop, they are reciprocated; I have been delighted to have lots of people from Meta and the oversight board participating in events at my university, and I think those relationships are absolutely crucial here. I do have a question for Conor if I can throw it in. Can I take you back to your point about content soliciting smuggling? You talked about the fact that a lot of your on-the-ground stakeholders with expertise on this issue counselled against banning that content, but in the end, you took the view that you should remove solicitations of smuggling, for example a post asking how to get out of Libya, while providing that information page of trusted third-party information. Can you talk us through how you made the decision not to defer to those on the ground who were saying leave this content up? What was that experience like? Because it does seem to me like the right judgment, but it went against what some people thought you should do. I wonder how you made that decision.
>> CONOR SANCHEZ: Yeah, that's a great question. I mean, this was an area where I can't say opinion was neatly divided on where people thought we should go with this policy. I think everybody we spoke with recognized, first and foremost, that this is a very, very difficult call. But I think the picture it painted for us is that people on the move receive information from a wide variety of sources, and they are making decisions based on a thousand different factors. So, yes, they're online, but they're also in person, in migrant shelters, and, you know, speaking with relatives in their hometown before they maybe start on their journey. They're making these decisions on a wide variety of different information points that they receive. I think the thing that they wanted us to really hone in on was to think about some universal human rights standards as we approached this, in terms of proportionality. We aren't the first entity to think about these challenges; there have been consultative processes in the past that we could take advantage of. And I think this comes back to Tomiwa's point about the way in which we can learn from international human rights legal frameworks: the protocol on human smuggling is something that we were urged to take a look at, along with that document's differentiation between human trafficking and human smuggling, making sure we understood those two definitions. Then, from our standpoint, we decided we don't necessarily need to make those distinctions as a binary decision of remove or keep up. We could still remove this content and still allow for some understanding of those who may be posting it by providing, you know, information through a safety page. So I think providing a safety page meant there was something we could introduce that would reduce the risk of removing, and once we went to stakeholders with that as an option, many of them, even the ones who were saying leave it up, were warm to the idea that at least we could provide this safety page that would serve to reduce the risk of just removing it.
>> EMILAR GANDHI: Thank you so much, Conor.
I think we only have a minute or so. Do you want to give closing remarks?
>> JEFFREY HOWARD: I think Conor, in his wonderfully detailed answer, which gave us a real sense of how the process works, illuminated a crucial feature of it, which is that in these policy areas it is often an iterative process where you go back to stakeholders with updates, and they might themselves change their minds, because people's views on the topics under discussion are not fixed but are the result of ongoing deliberation. So I think one of the things we're taking out of the panel is the importance of having conversations like these and improving conversations on these topics.
>> EMILAR GANDHI: I'm not sure if Tomiwa is still there. Do you want to give your closing remarks as well? Just a minute.
>> TOMIWA ILORI: Yes, thank you very much Emilar.
Yes, it's been a pleasure to be here and to listen to the questions that have been asked. These conversations will continue to happen, and we will continue to put in the work.
I don't think there will ever be a time when we come to the point where we say we have done everything, because issues will always crop up that need, you know, diversified and, you know, multistakeholder contributions. So it's a pleasure to be here and thank you very much. See you some other time.
>> EMILAR GANDHI: Yeah, thank you so much to everyone in the room and everyone else who joined us online as well. I know Professor Howard is still around, so for those who still want to engage with him onsite, please do. Conor, Tomiwa, thank you so much for participating in this. Bye for now.
>> JEFFREY HOWARD: Thanks, everybody.