The following are the outputs of the captioning taken during an IGF virtual intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> TRACY MANNERS: Great. Welcome, everybody, to today's session: The Oversight Board, one year on. Lessons in online content moderation. Today we'll be discussing digital rights, self‑regulation, and sharing learnings from the Oversight Board, one year since it began reviewing content moderation decisions made by Meta on Facebook and Instagram. The Oversight Board consists of three interlocking elements: the Board, the Trust, and the Administration. Each plays a critical role in ensuring the success of the Oversight Board. Today I'm very pleased to say we have representatives of each joining us on the panel: Thomas Hughes, Director of the Oversight Board Administration; Afia Asare‑Kyei, Oversight Board member; and Cherine Chalaby, Trustee.
I am Tracy Manners, and I work across global communications and engagement. Throughout the session, audience members are welcome to submit questions, which will be compiled and answered at the end. We'll talk through the Board, the decisions taken so far, and what lies ahead for social media regulation. The first part of the discussion will look at the challenges of building the Board, and for that I'll turn to Thomas first. Firstly, for those who may be less familiar with the Board's work, could you summarize its role and its work to date?
>> THOMAS HUGHES: Tracy, thank you. And likewise, it's a pleasure to be here today, and thank you to those who are joining us. So the Oversight Board is the first of its kind, an independent self‑regulatory model set up by Meta to act as a final check on the content moderation decisions that the company takes on Facebook and Instagram, so just those two platforms. Users can appeal to the Board directly if they believe Meta got a decision wrong in relation to their content, or if they see a piece of content they think should be taken down, and Meta can also refer cases to us or ask for guidance on a particular policy area as well.
The Board applies a global human rights framework to all of its decisions, looking at Meta's community standards and values as stated on its website, and seeking to understand how those align with global human rights standards. We currently have 20 Board members who come from a diverse range of backgrounds and professions, from journalism to politics, academia, and human rights advocacy.
And those Board members, the decision makers, are intentionally global. They speak approximately 29 languages and have lived in 27 countries between them, and they were all selected because they have a proven track record in advancing freedom of expression and other human rights. The Board selected its first cases in December last year, almost exactly a year ago today, and since then we've accepted 23 cases and issued 17 decisions. In 12 of those cases, the Board has overturned the original decision that Facebook took.
These cases are globally diverse, with more than half coming from Latin America, Asia, and Africa, and the Board's decisions upholding or reversing Meta's content decisions are binding on Facebook and Instagram. So the decision has to be implemented, and in all cases it has been.
Additionally, as mentioned, the Board offers policy guidance that Meta must consider and respond to publicly, and again, the implementation rate is well over half, with only a very small number having been rejected and the rest still under assessment.
This means the Board's decisions are not limited only to specific pieces of content or specific cases, because a policy recommendation can lead to much broader change in Meta's conduct. Many of these recommendations deal with some of the most complicated questions of content moderation, and do so, hopefully, in a transparent manner that allows increased public scrutiny as well.
>> TRACY MANNERS: Thank you, Thomas. You joined the Board in January 2020, before it began receiving cases, when there were just the four Co‑Chairs. Am I right in thinking there were no other staff? So now, one year on, there are 20 Board members, around 60 staff, and over a million people have submitted appeals to the Board. Can you just talk us through how you took the Board from a concept to an operation, and perhaps give us some insight into what the biggest challenges were in doing that?
>> THOMAS HUGHES: Certainly, Tracy. So when I joined in January 2020, yes, there were the Co‑Chairs, and I believe I was staff member number one, and we have since then obviously built out the institution.
Back in January 2020, we had a Charter and a set of Bylaws that were given to us by Meta. Meta had been through quite an extensive exercise in getting feedback when developing those, but they were just what you see online: the Charter and Bylaws. No processes had been put in place to operationalize them, to live them and put them into practice. We also had a website, but obviously there was no content on the website, and no colleagues to post content onto it. We were working to a timeline of having an initial cohort of the four Co‑Chairs plus 16 Board members by May 2020, so the turnaround time was very tight, and in addition we had to have the staff and systems in place to support them so they could start receiving cases by October. I think formally we took the first cases in early November, but basically that was the original timeline we'd foreseen. Of course, an added complexity is that we had to start all of this from scratch. Everyone was working remotely; only a couple of months in, all of the additional restrictions that COVID‑19 meant for everybody around the world kicked in. All of the staff, all the people, had not met in person or even worked with one another before, so everything was very new, and needless to say, the timeline was very ambitious.
But we announced those Board members in May, and we had five months to then work together to define the standards and the various processes that would be applied, and to build a working culture amongst the Board members as well. There were certainly big challenges and questions to resolve from the outset. There were also constant trade‑offs between getting things up and running versus deciding, as it were, the route that we would be taking.
We also had to access the knowledge and the materials that sit within the company, and to obtain them in a way that safeguarded the decision review process and the structural independence of the Oversight Board. We also had to build a framework for applying international human rights standards, and to look at how these could be applied vis‑a‑vis the community standards. There had been theoretical discussion, but we had to do that in practice and make sure it actually worked. To do that we ran simulations, test cases; the Board members went through case review processes, including mocking up draft decisions and testing the different processes and technology tools we had in place.
So by October, when the Board was actually open to receive appeals, there was a kind of collective breath‑holding moment when we switched the machine on and waited to see what the machine, as it were, spat out. And of course, by the time we selected our first cases in December, we had already had 200,000 appeals. We could say more about the number of appeals we now see, which is great, because users are really engaging with us, but those 200,000, those first cases, really gave us the assurance that the systems we had built worked. People could appeal, and we could see what was coming through.
I would say, again being slightly wary of blowing our own trumpet, that we achieved many significant things, and I think they have come to be accepted as the norm. We had to decide where our biggest impact could be for users, align that with the interpretation of the Bylaws, and align that also with international human rights standards, and the Board members had to make decisions about which ones to apply and where.
And we had to harmonize mindsets, approaches, and varying ideas around those standards, and to work out how all the elements of the Board that we see represented here today sit together and work together as a cohesive independent entity.
So these were significant issues to resolve, but they contributed to what we see now and I think we fully understand that there's still an enormous amount of work to do. This is just the tip of the iceberg. We've really just started but still I think it's a solid start.
And I think individually and collectively, we all saw the potential impact that these decisions could have: all of those who were engaged in it, all of the individuals around the world who submitted public comments, and the people who have appealed. I think that has all been driven and underpinned by a deep shared commitment to free expression and human rights.
>> TRACY MANNERS: Thank you, Thomas. I'm going to come to you, Cherine, next, if I may. Can you share with us your view on why Meta created the Oversight Board?
>> CHERINE CHALABY: Thank you. Can you hear me?
>> TRACY MANNERS: Loud and clear.
>> CHERINE CHALABY: First of all, I just have to say I'm delighted to be back at the IGF. I was a member of the ICANN Board for nine years, and we participated every year in the IGF. I've been at many, many of those meetings, so I'm delighted to be back, and I'm sure a lot of my ex‑colleagues are here listening or doing some good work somewhere else, and I wish them a really productive week.
So thank you for the question. The question is: Why did Facebook hand over some of its powers to an independent body? Why did Meta create the Oversight Board? It's a fascinating question, and really, to understand it, there are three aspects to consider.
The first is the rise in cyber‑sovereignty. We all know that policymakers and regulators have been, and are increasingly, looking for ways to address their deep concerns about the impact of social media platforms on the safety and health of billions of users around the world, whilst at the same time they want to protect the privacy and freedom of expression of those same users.
You can see how this is an immensely complex dilemma to solve; unfortunately, there's no quick or easy fix. On top of that, there are also no proven precedents for successfully regulating a virtual public sphere on such a global scale. So that's the first aspect to consider: the rise in cyber‑sovereignty.
The second aspect to consider is the intense criticism of, and I may say distrust in, Meta. The concerns I just mentioned have focused primarily on Meta as it grew into a powerful and integral part of the social fabric of most societies and countries. Almost every crisis, every headline today, plays out in some way across Meta's services, and the more these services become ubiquitous, the more Meta finds itself at the center of extensive criticism on everything, frankly, from the spread of misinformation to hate speech to concern about the company's power and approach to competition. And in such a climate of criticism, trust in Meta to act in the public interest rather than its own commercial interest is at a low.

The third aspect to consider, within this political and social context, is that Meta needed to renew and strengthen its legitimacy with its stakeholders and global community of users. To Mark Zuckerberg's and Meta's credit, and here I'm going to read verbatim because these are important statements, they recognized that decisions that have enormous consequences for our society, for human rights and freedom of expression, should not be made by social media companies acting alone. I'm going to repeat that: should not be made by social media companies acting alone. Furthermore, they recognized that these companies should not be the final arbiter of what can and cannot be said on their platforms. Users should have a voice, and their cases should be heard by an independent appeal body. The time was right, therefore, for Meta to act, and this is how the Oversight Board began: I may say a bold move, to create such an independent body in order to strengthen user rights against arbitrary decisions.
So that's the historical context of why the Oversight Board was created by Meta. Back to you, Tracy.
>> TRACY MANNERS: Thank you, Cherine, thank you. Afia, holding Meta to account for decisions on how it manages its platforms is not new to you, but can you tell us what made you decide to join the Board, and whether your expectations of it have shifted in the last year?
Can't hear you at the moment, Afia.
>> AFIA ASARE-KYEI: Thank you, Tracy, and I'm delighted to be here as well. The work of the Oversight Board is ultimately about protecting users and making sure that the online spaces they use on Facebook and Instagram are safe and healthy, and Meta has to be held accountable for ensuring this.
Now, I have worked for more than a decade, a decade and a half actually, to improve citizens' access to justice and democracy, and freedom of expression is central to these issues. Now, in Africa, where I come from and which I call home, Facebook is the Internet. People go there to connect and mobilize. They share their views in ways that they would not be able to do in other public forums. They come together to discuss issues and find solutions, and so in places where expression of opinion is not a freedom that comes easily, I believe we must fight to protect the few spaces where it exists. I have spoken out in the past about the frustration of social media users in Africa.
We have seen time and time again that the most important decisions are made outside, and in this case, on content moderation, they are made in Silicon Valley and then applied in Africa with no consideration of our context and how it impacts the right to freedom of expression of people here in Africa. So I took on this role because I wanted to fix this, which I saw as a problem, and the Oversight Board is very serious about global decision‑making.
I wanted the region that I call home to have a seat at the decision‑making table. I wanted to work with other leaders from around the world and together address these issues. And for the first time, decisions are being taken by a group of global experts, as Thomas said and Cherine also mentioned, with equal decision‑making power, and in consideration of the implications they have for the globe. Given Facebook's global footprint and the number of people who rely on the platform in the Global South, I think the time was right to redistribute this decision‑making power for everybody. When I joined the Board, we knew that we wanted to achieve great things, but I didn't expect that we would see this come together so quickly. We are testing decisions and applying global human rights standards to make sure that the rules uphold the rights of global users, that the rules work, and that they are accessible for everybody.
And so I'm very proud to be part of this organization, I'll just call it an experiment, that is leading this charge. I believe we have shown that global decision‑making, not just Western decision‑making, can have a regional input, which is very important. Ultimately, and this is my personal belief, I believe that Facebook is a formidable but imperfect platform, and that the Oversight Board can help maintain it as a viable source of information. Through our work we can reduce the harms so that users can enjoy the benefits of these platforms, and in so doing, people can exercise their right to freedom of expression even, and especially, in parts of the world where it is under threat.
And my commitment, my life investment, is about rights and justice. I think this is a right that is worth protecting across the world, and I see the Board as having the potential to make that impact here. Thank you.
>> TRACY MANNERS: Thank you, Afia. And I'll stay with you now as we discuss a bit more about the outputs of the Board over the last year, that is, the case decisions that have been taken so far.
The Board was set up to take some of the most significant and difficult content moderation decisions on Facebook and Instagram, and apply a critical review of them. Which decisions have stood out most to you over the last year? And what do you think ties them together?
>> AFIA ASARE-KYEI: Well, the Board has received over a million appeals from users across the world, and the cases we choose are based on three broad criteria. Thomas spoke earlier about some of them: their importance to public discourse, their potential to impact a high number of users, and whether they raise significant questions about Meta's policies on Facebook and Instagram.
So if I take the three criteria, first, on the importance to public discourse, I think, of course, the Trump case amplified the debate on content moderation, taking it to a global stage in a way that had not been seen before.
But our cases on misinformation related to COVID and the use of racial slurs also stand out to me as examples of debates that are incredibly meaningful, powerful, personal, and need to be had, and that are meaningful in the current global context. These are issues that we are talking about in our homes at the moment. On misinformation related to COVID‑19, there was a case from France where the Board insisted that there must be space for legitimate public commentary about the decisions Governments take in a pandemic. There must be a distinction between information which is harmful and must be removed, and information which is legitimately part of public discourse.
We find that automated removals may not always identify this fine nuance. The Board also recommended that Facebook consolidate its misinformation policies and define the harms that it was seeking to avoid by removing some of these posts. We have also pushed the company to publish a transparency report on how community standards have been enforced during the COVID global health crisis, including the proportion of removals that relied entirely on automation, on machine decision‑making, and the breakdown by source of detection, which Meta agreed to do.
In another case, we looked at the use of racial slurs in South Africa. We all know the history of South Africa, and here the Board said Meta was right to remove the content because, even though it was raising very important issues about South African society, the poster, the user, racialized this very critique by using a slur. In South Africa, this term is very harmful and extremely degrading to the people it targeted.
So the use of racial slurs on platforms should be taken seriously by Meta, particularly in a country such as South Africa that is still dealing with and reeling from the legacy of apartheid. The Board urged Meta to provide more information and greater transparency around the company's slur list, including how the list is enforced in different markets and why it is confidential. Why is it that users do not know which racial slurs are prohibited?
Secondly, among the cases that had potential to impact high numbers of users, and this of course applies to all cases, there are some we decided on as a Board, and one comes to mind: the one that dealt with the Sikh farmers' protest in India. As an outcome of this case, the Board called on Facebook to translate its community standards into Punjabi. That's a language spoken by over 100 million people, so as a result of one user appealing their case to the Board, over 100 million Facebook users can now read the community standards in their own language, in Punjabi.
Thirdly, on cases which have the potential to improve Meta's policies and the right to freedom of expression for users, on and off the platform, the Colombian case was remarkable. In this case, the Oversight Board overturned Meta's decision to remove a post that featured protesters in Colombia criticizing the country's President. I say it's remarkable because it highlighted very important issues, including the platform social media provides for information sharing about protests in environments where outlets for political expression are limited. While protesters in the video used a word that is designated as a slur under Facebook's hate speech community standards, the company could have and should have applied its newsworthiness allowance, because the content had public interest value. It was about citizens protesting their unhappiness with their President.
So these are some examples of the cases and the criteria that we use in selecting them.
Now, in terms of what ties them together: all of our cases, all of them, are based on respect for freedom of expression, and time and time again in our decisions we have pushed for greater transparency from Meta. We are testing the policies and determining whether they are fit for purpose, whether they hold up against real‑life examples.
We have looked at certain cases and asked ourselves, in terms of contextual appropriateness, how are these cases promoting global human rights? And our recommendations have repeatedly urged Facebook to follow central, key principles of transparency: make your rules accessible to your users, and put the rules in languages the users understand. Tell people very clearly how you make your decisions and how you enforce them. And then, when people break the rules, tell them exactly what they've done wrong, so that they will not repeat it and do not become repeat offenders on the platform. So transparency is not just about Meta being more open about its policies. It is also about Meta being more open about who influences the application of those policies and how. And the Board's work is indicating that we need to ensure there is space to understand how Governments, for example, are influencing content moderation decisions.
I'll give you just two examples. In August, as a result of the case related to the solitary confinement of the PKK founder Abdullah Öcalan, Facebook agreed to publish informal Government requests to remove content, because Governments do request that Facebook remove certain content, and we recommended that users must be informed if their content is removed due to a Government request. This for me is huge, and Facebook agreed to do this.
In October, Facebook said that it will fully implement our recommendation for an independent entity to conduct an examination into whether its content moderation in Hebrew and Arabic, including its use of automation, has been applied without bias, because of what was going on in the occupied Palestinian territories and Israel. And so I believe that people have a right to know what content their Governments are working hard to remove.
Now, you know, the other part of your question is also very interesting: did any of these cases test the Board? Yes. The Board accepted the Trump case just weeks after we had announced the first set of cases for review. Thomas mentioned that we started reviewing cases late in 2020, and in January of 2021, the Trump case came to us. So I think it's fair to say that it was a challenge for an institution as new as ours, and the timing was certainly a challenge, but I think it also goes to show what we are capable of, in that we were able to hear the case and come up with a decision. More generally, I think it was clear to all Board members when we took on this undertaking that we would not always agree with one another's opinions, but I know we all recognize the unique insights and perspectives that everybody on the Board brings to the table to serve Facebook's global community.
So the fact that many people will often disagree passionately with one another, and it happens even in our own families, speaks to the diversity of the Board and the importance of the endeavor that we have undertaken.
>> TRACY MANNERS: Thank you so much, Afia. Thomas, in your view as Director of the Administration, which of the Board's cases do you think have been the most consequential either in terms of the impacts on Meta or because of what users have come to know about Meta's decision‑making processes as a result?
>> THOMAS HUGHES: Thank you, Tracy. Good question. I think all cases in a way are born equal, so I think Afia made a very good point, which is that each case the Board has taken, it has taken for very good reason, based on criteria the Board has set, looking for cases that are important, consequential, and so on.
And it really means that, for whichever community is related to a case, or the individual related to that case, it has been extremely important for them. So I think each and every case has actually been very impactful in its own right, and Afia went through some of the cases that I would also highlight as ones that really demonstrate the kind of impact and change that's taking place within Meta, also noting that there is still more work to be done. But let me look at your question slightly differently: which cases have broadened global participation and scrutiny of Meta's decisions in a way that I think has really not been seen before, and which cases have really garnered international attention? I realize I'm reframing it slightly from what is consequential. I would go to the same example that Afia used at the end there. In one of its first cases, the Board reviewed Meta's decision to suspend former President Trump from its platforms. This was not only a case which raised significant questions on political speech, but it also obviously garnered an enormous amount of attention. As with all of these cases, it was assessed based on global international human rights standards, in a way that considered the impact policies and decisions have on communities around the world. In this way, it's hard to imagine another entity that could have scrutinized this issue through that international lens, and really opened it up to debate and feedback from the public through our public comments process, in any similar way.
I think more recently, the Board has also responded to concerns raised by former Facebook employee Frances Haugen, and the Board will be meeting with her, on the cross‑check system, which resulted in a review of that particular policy and how it's applied. Public comments on that opened last week, and this process will be done through consultation engaging international experts, academics, and Civil Society before the Board makes a recommendation to Meta on how to address the concerns that were raised.
And these processes, and ultimately decisions on Meta's policies, are matters of global public interest, and the Board has moved these decisions and discussions out from behind the closed doors of a Silicon Valley elite into the global arena, allowing everyone to engage with them in a way that is deliberative, and to make submissions for Board members and panels to discuss.
I think it's also important at the same time to call out that interest in content moderation naturally arises with breaking news. The challenge, of course, is that when public interest moves on, the commitments Meta has made may take some time to come to fruition, especially when they relate, as many of them do, to significant changes to the product, as it were, to the platform itself.
And the danger is that those recommendations, or Meta's commitments, get kicked into the long grass and forgotten about. So the Board is committed to scrutinizing the responses over the medium to long term, and has set up a dedicated team to do this monitoring. The Board has also created a new Committee, the Implementation Committee, that's going to be doing that tracking and monitoring.
Meanwhile, in our first set of transparency reports, we shared data on the issues users were appealing to us about, where the appeals were coming from, the extent to which Meta provided the data we needed, and so on. We're going to continue to push Meta to increase its transparency, and to share the data and research that we have with other groups who are also working in this space, toward what are basically a set of common goals: really improving the experience of the users of Meta's platforms.
>> TRACY MANNERS: Thank you, Thomas. As a reminder of what you said, the quarterly transparency reports are now moving forward, and the next one will be out very soon. I just want to come back to you, Afia. You came to the Board from Civil Society, with a history of leading on transformational advocacy and social change strategies.
I'm really curious to know what impact you think the Board will have on the people and communities most affected by some of the challenges you've been looking at on social media over the last year.
>> AFIA ASARE-KYEI: Thanks, Tracy. Well, the challenges I've seen through my two decades of activism in Africa revolve around the relationships between people, power, and access, and here, access to social media. Social media has become an essential space for citizens to express themselves, which in turn empowers them and allows them to mobilize around human causes and build solidarity.
Now, in my part of the world, Internet shutdowns are very regular, often legitimized through some very spurious claim of National Security, with no clear legal basis for why the Internet is being shut down. But citizens' voices are getting louder, and I see it as my responsibility to protect the space, to make sure that it exists.
The Board is currently working hard, scrutinizing the context of each case to understand whether freedom of expression is already suppressed, and how. In this way, we can truly start to understand new ways of moderating content on a global scale that are not just copied and pasted from one context to another with no consideration of how they can be misused. Through public comment, the Board can also stand with the voices of Civil Society and movements, raise the alarm, and make the calls that Civil Society has been making for years, and Meta has to respond to them.
This only works if users appeal to the Board, and if Civil Society organizations, academia, and other groups provide context through the public comments section. There's a question in the chat on how we get more people engaged. For me, it's really important that we increase the number of user appeals from Africa and other Global South spaces, as well as public comments. Although the connection between digital rights and democracy is recognized by activists in Africa, the backdrop is that there are many urgent issues vying for attention. On the Maslow hierarchy of needs, digital rights and questions of social media and Internet access may not seem like a priority, but it is one. There is much power in Africa. We saw this when CSOs came together in outcry when the Nigerian Government suspended Twitter a couple of months ago. And when it comes to freedom of expression, the stakes are much higher in Africa and in certain geographies in the Global South, so this is precisely why I'm imploring activists and Civil Society organizations to appeal to the Board, to provide the context that we need to understand the African context, the Asian context, the Latin American experience, in a way only people living there can truly understand.
>> TRACY MANNERS: Thank you, Afia. We will be sharing further details of how you can engage with the Board before the session is over. I'm going to move to Cherine. What can you tell us about the governance structure of the Oversight Board? How does it ensure the independence of the Board in its decision‑making?
>> CHERINE CHALABY: Thank you, Tracy. I can see a similar question came in through the chat, which is quite good, also asking about how the Oversight Board is funded. It's important to answer that so that people understand the governance model, which consists of the three interlocking elements Tracy mentioned in her opening remarks. There's the Board, which makes decisions about cases. There's the Administration, which supports the Board. And then there is the Trust, which is responsible for governing the totality of the Oversight Board.
Thomas and Afia have succinctly described how the Board works and how it makes decisions. But your question about independence, and the question in the chat, is in my view critical to the credibility of the Board's decisions, and you're not alone in asking it. Since the inception of the Board in 2020, there has been a high level of public scrutiny and questioning about how the Board can be truly independent of its creator.
I mean, after all, it's Meta that came up with the idea, and it's Meta that has funded it. To answer this question, you really have to understand that the answer lies within the Trust. The Trust is the second interlocking element of this model, and from where I sit, from my vantage point as a Trustee responsible, as I mentioned, for the governance of the Board, I can see how independence is truly rooted in everything we do. Let me explain that.
The Trust is basically a shield, think of a shield, between Meta and the Board, and remember, what we're trying to do here is protect the Board, which makes all these decisions, from any influence by Meta. So the Trust, in being the shield, protects the independence of the Board in three ways. The first way is that it protects the independent judgment of the Board and the integrity of its decision‑making process.
How? By keeping Meta at arm's length from Board members, and thus keeping the Board members free from Meta's influence. Meta cannot talk to the Board members, nor can the Board members talk to Meta, while they're deliberating and making decisions about cases. That's the first thing, and the Trust ensures that this happens.
The second way is that it protects the Board's operational independence. We want to make sure that the Board does not need Meta for any operational needs, and this is where Thomas and his Administration team come into play, because Thomas has a dedicated team of full‑time staff, independent of Meta, working around the clock to assist Board members with their research, their case selection and preparation, and communication of decisions. So we protect their independent judgment, and we protect their operational independence. And thirdly, we protect their financial independence.
When the Oversight Board started, Meta provided a large sum of money that went into the Trust. In trust jargon, Meta is known as the settlor, and the settlor has provided that money. Once the money is provided, it cannot go back, so that is an important thing to remember: Meta cannot withdraw this money at all. It's done, it's finished. It's now totally under the control of the Trust, and we safeguard these financial assets. Through these financial assets, of course, we pay the compensation of the Board, we pay all the operational expenses, and we manage the entire budget of the Oversight Board, of the Board itself and its members, so that they don't have to worry about the finances. They don't have to worry about operations. They don't have to worry about influence from Meta on their decision‑making.
And you've seen the impact of that. By protecting its independence, as you've seen from what Afia and Thomas have said, the Board is not afraid of calling out Meta when it fails to meet its responsibilities. And when you look at the decisions so far, you can see also how the Board is now an institution working not just to take down content or put it back up. No, it's working to shift Meta from making arbitrary decisions, or decisions that might be informed by the company's economic interests, towards decisions that promote freedom of expression, that treat all users fairly, and that are consistent with the company's standards and values. That is really the true mission of the Oversight Board: to change the user experience. All right?
So in summary, this governance model is unique. Think about it like this: it was designed from the outset to ensure that the Board is not just credible from the outside but also solid on the inside. In other words, externally it is recognized for the quality and timeliness of its decisions, while internally it has a robust structure and checks and balances that protect its independence. I hope I've answered the question about independence. Thank you. Back to you, Tracy.
>> TRACY MANNERS: Thank you so much, Cherine. We're going to move to audience Q&A very shortly, but I do have one more question for you, Cherine, before we do.
I'd like to hear from you. We've talked about the time prior to the Board becoming operational, what it took to set it up, right up to the case decisions that have been published so far. I'd like to touch upon the future. Does the Oversight Board have longevity? As regulatory proposals advance in the years ahead, where do you see the Board fitting in?
>> CHERINE CHALABY: Thank you. I think this is a complex question to answer, and in order to do it justice I have to briefly describe where the Oversight Board sits within the spectrum of regulation, because that's quite important; it cannot be working in isolation.
And this spectrum of regulation varies from country to country and from industry to industry, so I'm going to explain first where the Oversight Board sits, and then talk about the future immediately after that. I'm going to try to be brief. So on one end of the spectrum is self‑regulation, which occurs when private organizations or market‑based institutions govern their own actions through voluntary agreements. In other words, they self‑police, self‑regulate, by establishing voluntary standards, codes of conduct, or best practices by which they agree to abide.
I mentioned earlier that I served on the Board of ICANN for nine years, and I can tell you that its multistakeholder model of governance is an excellent example of self‑regulation. So that's one end of the spectrum: self‑regulation.
In the middle of the spectrum is what we call co‑regulation, which occurs when an industry and Government jointly administer the regulatory process. This would typically involve Government watchdogs that provide oversight of self‑regulatory organizations, and Government agencies that enforce penalties for violations of self‑regulation. An example of that would be the NTIA, the National Telecommunications and Information Administration of the United States, which had an oversight role over ICANN for a good part of 20 years.
And finally, we go to the far end of the spectrum, which is state regulation, which involves Governments regulating the actions of firms in the Private Sector. Typically, this would include legislation, Executive Orders, and top‑down rules issued by Governments. For example, the General Data Protection Regulation, the GDPR that we all know, is a regulation on data protection and privacy in European Union law.
Other notable examples are the U.K. Online Safety Bill and the EU Digital Services Act, which are new proposals for state regulation. So now that I've explained the spectrum, I'm sure you have figured out that the Oversight Board, and Thomas let the secret out early on, is a form of self‑regulation. I would be remiss not to mention that the Oversight Board was not designed to supplant policymakers and regulators. The Oversight Board, however, is an important, innovative model of self‑regulation.
It hasn't been tried before on such a large scale: one of the largest for‑profit corporations in the world has created an independent, not‑for‑profit institution to make binding decisions by which the for‑profit corporation must abide. I hope this is not too complicated. I'm going to repeat it one more time: one of the largest for‑profit corporations in the world has created an independent, not‑for‑profit institution to make binding decisions by which the for‑profit corporation must abide.
This new model was specifically designed to avoid both the commercial interest of for‑profit corporations and the potential abuse of state‑based regulation. Institutions such as the Oversight Board are in my view necessary. You may ask why.
Because we don't want for‑profit corporations regulating the global virtual public sphere in their own economic interest, nor do we want National or Regional political interests balkanizing that same sphere. In particular, we do not want less Democratic or authoritarian regimes to suppress freedom of expression online. Instead, what we really want, what we really need, is disinterested, I repeat the word disinterested, regulation of our virtual speech. That means impartial and unbiased regulation.
And in this regard, the Oversight Board is truly impartial, and its impartiality, as I mentioned earlier, is guaranteed by the Trust. As a self‑regulatory model, it aligns well with the other, more established models which I just mentioned and which are being discussed at this IGF meeting.
I'm sure you will agree with me that a one‑size‑fits‑all solution does not exist, and you'd also agree that no single Government, institution, or actor has all the answers. I therefore see the imperative for a collaborative effort between Governments, Civil Society, and the tech industry to agree on an ecosystem of solutions that are clearly grounded in human rights principles, which we critically need to manage the complex challenges of our borderless digital future.
And it is in that context of continuous evolution in technology and regulation that I'm confident the Oversight Board will also evolve over time. Back to you, Tracy.
>> TRACY MANNERS: I could ask you so many more questions, Cherine, but I feel that we should now give the floor over to our audience, who have been waiting quite patiently. There's a great question I'm going to refer to you first, Thomas. The question is: Is Meta developing these models, models like the Oversight Board, not only for itself, but will they also be available for other organizations to use?
>> THOMAS HUGHES: I think Meta's intention, and of course this is a question for Meta, but I think Meta's intention when the Board was created was that other actors may also choose to utilize the Board. But the Board's position on this, the Board's intention looking to the future, is not to become a Board for all social media companies, because that would bring back the same challenges of centralization. Certainly, though, the Board is open and willing to share the knowledge and experience that it has generated, because there are many complex questions that the Board has already started to answer, not only in terms of what the processes look like, but also in terms of the standards that are being set.
So certainly, as I say, we would be very open. And from a personal perspective, I think all platforms, particularly those on the large or very large side, should be creating, if not entities that look exactly like the Oversight Board, then independent self‑regulatory structures that take these difficult decisions on some of the hardest pieces of content.
>> TRACY MANNERS: Thank you, Thomas. There was a follow‑up question here from Laura, which I think you've answered, but Laura, do let us know if there was more that you wanted to know on this.
I'm wondering, Thomas, if you might just want to share with us your views on what you see the priorities for the Oversight Board being in the year ahead. I've heard you speak about creating new standards, new global standards, in content moderation, and I'm just wondering if you might be able to share a few of those insights with the audience here.
>> THOMAS HUGHES: I think there are a number of priorities for the Board, and some of that has been represented in what we've heard today. But the Board is a year old. By any stretch of the imagination, that's a very young life span, but it also feels like a very long one at the same time, because there's an enormous amount of work that still needs to be done. And there's also, as Cherine was outlining, a new regulatory space that is starting to be created, one in which statutory regulation will probably exist and co‑regulatory structures will be created, and I think there's a strong argument that self‑regulation should have a prominent place within that ecosystem at the same time.
So the priorities for the Board are, first of all, continuing to lean into the work that it's doing. Meta has clearly flagged that it intends to expand the scope of the Board and the work the Board does, so we have practical work to do to increase that scope and start bringing new and different forms of content into play. The Board will also increase in size. The Administration will also increase in size, to take on and support those new Board members but also to accommodate some of those new areas of scope. But as was pointed out by both Afia and Cherine, the Board sees itself contributing to the global discussion around standards across a myriad of social and rights issues, and the Board should not be setting these standards alone. The Board should, and I think seeks to, contribute to the wider discourse and to the variety of actors that are going to help create that clarity.
And my only analogy in this, particularly with my own background in human rights work, is that a lot of the discussion around these standards and how they should be applied has been very theoretical over the past decade, but we've moved very quickly into the practical. There's a mosaic in front of us, and the Board is putting together the pieces and creating clarity around what these standards begin to look like, what the correct application is, and where the lines should be drawn. That's converging with this regulatory drive and thrust, so we're seeing all these things starting to build, and I think that will continue into 2022 and, to be candid, 2023, 2024, and onwards, because these are difficult problems that won't be solved in January or February or March of next year. They need long‑term solutions.
So I guess the answer to the question, Tracy, is that a little bit, and an awful lot, of everything is really what's required in 2022.
>> TRACY MANNERS: Thank you, Thomas. And Cherine I know that you had some thoughts on another question in the chat. Please do jump right in.
>> Maybe it is possible?
>> CHERINE CHALABY: Indeed, indeed. I want to answer Alejandro's question. And hello, Alejandro. How are you? We worked together for a long time, so I know how much he really believes in multistakeholder models and self‑regulatory models.
So one of your questions is: would it be a good idea to get together with, I suppose you mentioned, other platforms, right? And I think, you see, this is a thought that we have to contemplate at some point in time, because what Facebook, or Meta, has done here is spend quite a lot of money in establishing this, and as Thomas and Afia mentioned, in the first year we've learned a lot. We've only been going one year, so this is going to progress, and our experience is going to evolve and develop.
It would be a pity, it would be a pity, if everything we know is just not shared with other platforms. Whether other platforms at some stage adopt a similar model, or this becomes an industry utility, I can't tell you at this date, because it's only year one, as you can imagine. But I think at some point in time, one ought to think about how this could develop and how the rest of the industry could benefit from it, because if they do, then I think we'll all be better off, definitely, 100%.
The other question you asked is whether the multistakeholder model, or something like it, can apply to this. I think it can. I think it can. Why? Because the Oversight Board cannot work in isolation, right? We have to work with Governments. We have to work with Civil Society. We have to work across all regions of the world, and take diversity and people's views into account. We have to work with the tech industry.
So it is inevitable, it is inevitable, that there comes a point where the collaboration of almost the same type of players that make up the multistakeholder model of ICANN will have to come into play in the evolution of this Oversight Board. Because if we don't, and if we stay isolated in our corner, we will be passé after a while. Right? We have to be constantly evolving, constantly changing, constantly reaching out, and constantly engaging with all stakeholders around the world. No doubt about that.
I don't know, Alejandro, did I answer your question? Or was there something else I didn't answer?
>> ALEJANDRO PISANTY: That has been very clear, and yet my point, let's say my larger point, is: do this before Governments or international, multilateral, intergovernmental organizations decide to create something on top of that. Show action, be seen solving problems. And politically, you maybe don't want to have Government participants inside the Board. That creates a lot of legal issues for the Government people themselves, as we learned in ICANN. They actually don't want to vote, because they would become liable, and it would incur all the legal liabilities and the reputational risks.
But get people whose voices are sensitive to what Governments want, so that you have the political connection. It takes a lot of politics. Thank you for taking this up. It's an important point for the industry.
>> CHERINE CHALABY: Just to add something so that you know: when Meta develops policies for its content, it doesn't do it in isolation. It has a public comment period, the same way ICANN did, and it goes around the world listening to the various stakeholders, who then provide input into the policymaking. A bottom‑up approach in the same way, not exactly the same, not as extensive as ICANN would do it, but nevertheless, it's definitely going in the right direction. Thank you. Your point is noted, Alejandro. Thank you.
>> Excuse me, please. Is it possible to give the floor to the people present in the Forum?
>> TRACY MANNERS: Of course, go right ahead.
>> Thank you very much. My name is Andrew Sherbowicz from McGill University in Canada. I'd like to ask about something that surprised me a lot. Working on a project together with Facebook Canada, I looked at the, let's say, jurisprudence of the Oversight Board, and I found that only 18 cases have been published from, I think, a year of the Board's existence, and it surprises me a lot that the number is so small. And the second question: do we need, in your opinion, external rules of content moderation on which everybody around the planet would agree, for example, under which content moderation would be operated in accordance with the human rights of all people? Or should there be rules depending on each platform or each moderator? Thank you very much.
>> May I also add one more question? Because it is related to what has been raised. My name is [ Inaudible ] from UNESCO. Thank you for such a fantastic sharing about the Oversight Board. We have recently launched a report on promoting transparency and accountability of Internet companies, particularly related to content moderation, so we follow very closely the results from this Oversight Board as an initiative of self‑regulation. What strikes me a lot is that among 200,000 appeals, the Board has accepted 23 cases. So what are the criteria for the Board to take in cases? Because I imagine there can be so many factors to juggle in terms of actors, themes, countries, whatever, so I'd like to have a clear picture: what makes a case fit to be considered by the Board?
And next to this is ‑‑
>> TRACY MANNERS: I'll have to stop you there, I'm so sorry, but we'll have to cap that question, if that's okay with you. I'd be delighted to pick up with you separately and continue this conversation further. I'm just conscious that we're losing some of our panelists here.
So I got the last part of your question. You mentioned 200,000 appeals; actually, it's a million now. 200,000 was in our last transparency report, which looked at the first quarters of our work, but we've just surpassed a million, so you're right to point out how many there are.
I think, in terms of the selection criteria, it also relates to the question the gentleman in the room with you was asking earlier, around how many cases we've taken and why. For that, Afia, you might want to speak to how the Board selects its cases; you're part of the group who select the cases as they come through, deciding which ones to take and what they should represent. And then I might just ask you, Cherine, to give final thoughts on the earlier part of the question we just heard, which was around shared global rules and whether that system should happen. So I'll start with you first, Afia, and then come to you, Cherine.
>> AFIA ASARE-KYEI: Thank you. And thank you for the question. I actually spoke about this earlier on, on the criteria used to choose cases: the first being the importance to public discourse, the second being the potential to impact a high number of users, and the third being whether or not the case raises significant questions about Meta's policies on Facebook and Instagram. So those three are our guiding principles in choosing cases.
>> AUDIENCE: Thank you. So, a last question: with the massive number, a million appeals, what more will the Board do to figure out maybe another solution? Because the capacity to handle a million cases sounds like mission impossible. Thank you.
>> TRACY MANNERS: Did you want to respond further on that, Afia? We really are over time, so I'd have to ask you to keep it very short, but perhaps I could just say, before we pick up this conversation separately, that it was never intended for the Board to take on all of the appeals that we receive. There is no entity that would be able to do that, and I think we'd clearly be setting ourselves up to fail if we tried to. What we hope is that, as we iterate our process and now that we're up and running, we take a holistic approach to understanding the nature of the appeals that we receive. We track the types of appeals that come in, where they come from, and which community standards they relate to, and of course we keep a close eye on how these are playing out in the external world.
I think where we can be helped by organizations and other actors is through constant dialogue and feedback, either through the public comments, through the appeals themselves, or directly with the engagement team, on the kinds of issues they're seeing that matter in communities globally, so that we can have that in mind as we're making those selections.
We certainly hope that we'll be able to identify trends as we move forward, and that the cases that we select and that the decisions that we take will eventually start to reflect those.
So I hope that gives you some answers in the interim. I may just ask you, Cherine, if you have any final comment, and then I really must close the session.