The following are the outputs of the real-time captioning taken during the Thirteenth Annual Meeting of the Internet Governance Forum (IGF) in Paris, France, from 12 to 14 November 2018. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record.
***
>> MODERATOR: Good morning, everyone. I'll take the floor to kick off this session on assessing hate speech and self‑regulation: who and how? I'm expecting some people to drop in, so I'll take that time to introduce myself and the setup of the session, so you can check that you're in the right room and can still flee to another one if there's need.
I work for the Council of Europe in the Anti‑Discrimination Department. I'm happy we have this session to look at assessing and taking action on hate speech and how to do that. One of the models that has been advocated lately is self‑regulation. But the question is: what is that, then? How does it work, and what kind of models could be put in place that actually work and address the needs of the user, the needs of the Internet industry, but also the needs of democratic society?
So I'm joined here by a group of, officially, panelists, but I would say co‑speakers, and I'm also inviting you to join in the discussion along the way in the hour that we have. We are joined here by Jeremy McBride, who is a consultant for the European Commission against Racism and Intolerance and a major contributor to General Policy Recommendation No. 15. I'm also joined by Tamas Dombos, who is a board member of the Hatter Society from Hungary. I'm joined here by Antonio Batisti, head of policy at Facebook France, and, on the far end, Miriam Estrin, policy manager for Europe, the Middle East and Africa at Google. I'm also joined by representatives of NGOs, various industries and governments, and we're joined online through webstreaming on YouTube. I also invite those who are watching online to contribute through the speakers' queue.
Technical things that I was asked to convey to you: if you have a phone like me, don't put it on the table unless you need it, as I do, to check the time, because if it vibrates it is picked up by the microphones and makes a lot of noise.
The setup will be that I will give the floor to Jeremy shortly to describe a little the context of this session, what concerns, issues or recommendations are out there that we can take in, which will hopefully start you thinking. Then I will put a question to you about self‑regulation and hate speech from your perspectives, which I will try to collect together, and then ask my colleagues to take them up, respond to them and also invite you to further elaborate your thoughts on self‑regulation and hate speech.
We have an hour; it's tight. So I'm hoping this is a kick‑off for a longer discussion around this new development and the call for self‑regulation. That's the setup.
I now give the floor to Jeremy to explain why we're here and the context of this discussion. Thank you.
>> JEREMY MCBRIDE: Thank you. The basic reason we're here is the increasing problem, or the recognition that there's an increasing problem, of the use of hate speech and the need to tackle it in some way.
There are various possibilities. What we're seeing beginning to emerge is the idea of criminalizing, in some instances, the use of hate speech, and the imposition of financial penalties of various kinds.
But that isn't the only possibility. I'm a lawyer and I generally think in terms of courts, but that's not necessarily the most effective way from the point of view of the people who are affected by the use of hate speech, and that's why the issue of self‑regulation, I think, becomes important. You see this being brought forward in a variety of ways. If you look at the general policy recommendation of ECRI, which was mentioned earlier, although it does envisage the possibility of civil and criminal liability for the use of hate speech, it's very much framed in terms of seeing that as a last resort, in favour of trying to prevent hate speech through education and, where it is used, of encouraging self‑regulation by those who are able to facilitate or actually also use it.
For example, in the context of parliaments, you have parliamentarians regulating themselves to prevent its use.
Through the recommendation, as well as various initiatives by the European Union, such as the code of conduct developed with Internet providers, more recently the European Commission recommendation, and also legislation in countries including Germany, there is a push to encourage self‑regulation.
First, what does self‑regulation mean and secondly, why is it valuable?
You have to have some standards which you expect to be adopted; generally people talk in terms of a code of conduct. You need some arrangement for monitoring the way in which material is made available, possibly some facilities for restricting it, for example content bots as a means of preventing it, but also a way of encouraging complaints from people outside the organization which is facilitating the use of the material, so that it can be alerted to the problem and then possibly take action in response.
The reason why self‑regulation is seen as desirable, insofar as it works, and that's a question we'll come back to, I'm sure, is first of all that there is considerable concern that if you start going to courts you may too readily have an interference with freedom of expression, which is an important value that has to be balanced against the interest in tackling hate speech.
Secondly, I think it's important because it potentially provides a remedy for those who are targeted by hate speech, and it does so in a manner which is much quicker than going to court. Court proceedings tend to take a long time, and moreover they can be expensive, so you have that advantage. And for those who provide Internet services, it puts them in a position to make sure they're compliant. It also ensures that there isn't undue disruption to the way in which they function, because if you can deal quickly with a problem you don't have long procedures afterwards which may be costly and also bad for reputation.
So those are the main considerations which I think drive forward the idea of self‑regulation. So I will stop there.
>> MODERATOR: I think you mentioned an important point which I would like to stress: there's more to be done against hate speech. The policy recommendation addresses the last resort, which is the court. I think the challenge here is to see, in the case of self‑regulation, what the potential of that tool is within a complete toolbox. This session is particularly trying to look at that specific tool within a complete toolbox.
Having said that, thank you for the short introduction, which hopefully started you reflecting on this question. And this is the moment to ask you to start thinking. I would like to ask you to look at your neighbour and discuss for one or two minutes what you think self‑regulation by the Internet industry on assessing hate speech should deliver. So what should it deliver? Why should self‑regulation be there? What should it deliver for you? I will give you one or two minutes to talk with your neighbour. Just reflect: what is it about? What should it deliver, do you think? And then I will collect your ideas.
OK. Thank you. Thank you. Yes, thank you. Hello. Active participation. A little dip in sound normally means people have finished their thoughts and are starting to add more, so that's the moment I start grabbing everyone's attention.
I saw a lot of engagement, a lot of talk. So I would like to go around the room and collect some of your responses to the question: what should self‑regulation by the Internet industry on assessing and taking action on hate speech deliver?
If other people have already said what you were discussing, then please say you had similar discussions so that we don't repeat the same point several times. I would like to do a quick round just to get a little bit of a feeling for the thoughts and ideas among you, and then we will continue the discussion all together.
I see a hand, so I will move this way around. So please. Please use the mic.
>> AUDIENCE: Hi. With my partner we talked about how we should report more offensive content when we see it. Also, when we see hate speech or something like that, we often just hide it because that option exists. If we are moderators of some group on Facebook or WhatsApp or whatever, we have to pay more attention to it and contact the authors of this hate speech to discuss with them, try to understand why they are doing it and maybe try to sensitize them. Thank you.
>> MODERATOR: Thank you very much. This way.
>> AUDIENCE: The thing we discussed was finding the right balance: who do we trust more in terms of regulation? Do we want governments to be responsible for what people can and cannot say in the global square? Do we want companies to have a massive secret system of censorship? We think there's probably some middle ground that could be found that's more appropriate.
>> AUDIENCE: I'm going to throw in a second point. One question was: are we witnessing more hate because there is actually more hate in society, or are we witnessing more hate because there are technologies out there enabling small groups with unacceptable views to find each other, form associations and use the Internet? Is it a question of technologically based empowerment, or is there more hate in society? That would inform a possible response to hate speech: would you address questions of association and publication, or would you address the social issue of hate itself? Thank you.
>> AUDIENCE: I came quite late so I'm going to say briefly what we discussed. The question I was trying to answer during the brief discussion was that self‑regulation looks great on paper, but talking about countries like Pakistan, it doesn't seem practical for the corporations to self‑regulate and not listen to the government as well.
The government is there to pressure Facebook and other corporations to take down content which is religiously offensive as well. So I'm not really sure what the answer is, but that is a concern I thought should be addressed as well.
>> MODERATOR: It's about the independence. Anything from this side?
>> AUDIENCE: Lucas from Brazil. We discussed a little bit about contextualization. Hate speech is not the same everywhere. It's necessary for service providers to be attentive even to language differences; if they want to enact global policies to counter hate speech, they should be aware of the regional differences they might encounter among their users.
>> MODERATOR: Thank you very much. I'll move down the line here, if there are any new points that have not been mentioned.
>> AUDIENCE: Thanks. We kind of launched into our respective interests, three of us, actually, discussing this topic of hate speech, and the point I learnt was the difficulty of definition: from the government perspective it is not a crime to hate something, but there might be ways of defining, with the help of the platforms and other intermediaries with responsibilities, where hate speech progresses into promoting a violent reaction in some way, so that it becomes a kind of criminal activity. But there is difficulty in defining that. That's what I led off on. Maybe you want to add something? No, OK.
>> AUDIENCE: We had a nice conversation and talked about what forms of self‑regulation we could consider: self‑regulation by the platforms, self‑regulation by the platforms under obligation from the government, which would in essence be co‑regulation, and self‑regulation by users. Then we tried to make a case and asked what would help against hate speech at the highest level, at the level of heads of State; we circled around the President of the United States. Can you have success against that kind of hate speech through self‑regulation?
>> AUDIENCE: Hello. My colleague is from Myanmar and I'm from Pakistan. We started to discuss what self‑regulation would look like and, when we started defining hate speech, whether there is room for self‑regulation to accommodate only the legal definition or to acknowledge that there is room for self‑regulation beyond it as well. So the way we approach and define hate speech is a good place to start.
>> AUDIENCE: We had two perspectives on this. One was that of course the definition is a difficult one, and it's also not new; the problem is maybe one of scale, which has kind of disrupted everything. One element of that is how much has been automated, how much hate speech is robotic, which gives a wider impression of the problem, and certain things can be done to make it more difficult to set up accounts in the first place that exist just for that purpose.
>> AUDIENCE: Thank you. I'm from the Internet Association of Kazakhstan. Over the last six years we have received over 11,000 reports, and hate speech reports came to about 49% of those. The crisis in Ukraine increased hate speech reports maybe twofold or threefold. Our hotline shows that an NGO can receive reports from end users and resolve those cases. Thank you.
>> AUDIENCE: Hi, I'm a researcher at the Institute for European Studies in Brussels. What we've been discussing is crowdsourcing solutions to the problem and the different attitudes towards expression in different countries. But I think that is the problem of self‑regulation itself.
>> AUDIENCE: We've been in the middle of several large debates about hate speech because our online computer security service protects a lot of speech from a lot of places, including some speech that people interpret as hate speech. We try to draw a line between hate speech and incitement, which is illegal when you're actually telling people to go out and hurt people and do damage. That's different from a lot of what Europeans define as hate speech.
I haven't heard anybody mention the First Amendment, but as an American I have to. We believe that there's a lot more good speech. The other thing we worry about is that deputizing commercial entities to filter the Internet will lead to overreaction. We've already seen this, and it will lead to censorship and suppression of new ideas.
Perversely, this turns into really bad people censoring good people by using these mechanisms to complain about speech that isn't hate speech, and the algorithms and mechanisms these sites use could very well knock off content that is useful. So we have to watch that we don't go far beyond the intent here, which is just to deal with the truly extremist speech and incitement, and we have to realize that different countries have radically different approaches. So a global approach to self‑regulation is going to be very hard.
>> MODERATOR: Thank you very much. Many points have already been raised and I will try to wrap them up in a few minutes, but maybe just a few more thoughts first. Anything to add?
>> AUDIENCE: We were also considering that if self‑regulation happens, it should take place in a concerted way to avoid having as many different regulations as there are platforms.
>> AUDIENCE: Hi. We were also talking about how the idea of educating people about positive communication styles, and the skills people need to develop more compassion and sympathy, is also important in this sphere of self‑regulation.
>> MODERATOR: OK, thank you very much. A lot of the points you have raised also crossed our minds when we were preparing this session, so I think it's a good hook to continue the discussion. The point raised about reporting needing the active participation of users themselves is something to reflect on. When it comes to self‑regulation, there are the challenges of balance: who imposes and who initiates the self‑regulation, what are the roles of the companies and of governments, the risk of over‑deleting, the need to balance freedom of expression and non‑discrimination, and things like that.
There's the challenge of the possibility of a global approach while there is national legislation; the First Amendment comes in here, just to repeat that point. And there is also the concern of whether we can have an approach that avoids a separate self‑regulatory system per company, which creates confusion for users.
Then the definition of hate speech: what do we use, how do we keep in mind the context, language, regional concerns, etc.? So there are a lot of challenges here.
Let's bring it to the panel. Can I give the floor to you to respond to some of these points and add your own considerations: the call for self‑regulation is out there, so what sort of models will work, and can we address some of the concerns raised here? Tamas, can I ask you to introduce your background and your contribution to the discussion?
>> TAMAS DOMBOS: Thank you for the floor. My name is Tamas Dombos. I'm from the Hatter Society. We offer various services to members of the community, including information and counseling and legal aid, as well as doing research and advocacy.
I've been invited to talk a little bit about the target groups of hate speech, that is, minority and vulnerable communities, how they experience hate speech and what their expectations are of self‑regulation on hate speech.
Members of these communities want a safe and welcoming environment when they go online, and if they meet calls for burning them alive, if they meet that kind of content every day, then they will not feel safe and they will not feel welcome.
Of course the best thing would be for such hate speech not to happen at all, but that's very unlikely. What these communities need is a collective response to this hate speech, to make it clear that the community as such, and the platform as such, does not welcome this kind of content.
Now, how that collective response should happen is a big question. I will give you an example. If a same‑sex couple walks into a restaurant and other guests at the restaurant shout at them, "You dirty faggot, you're not welcome here," you expect the restaurant to do something about it. You report it to the waiter: "Please tell that group of people to stop shouting that, because it's not nice." If the waiter or manager doesn't do anything, you expect the restaurant to be legally responsible for the non‑action. And I think that's also the case for social media platforms.
One form of self‑regulation that is currently happening in Europe is the European Commission's code of conduct, which it signed with social media companies back in 2016, in which these platforms agreed to assess reports within 24 hours, to remove content that is already illegal according to national legislation, and to introduce a system of trusted NGOs whose reports are taken more seriously, or handled in a more responsive way, by these companies.
The Commission is also monitoring how that code of conduct is implemented in practice, so there are dozens of NGOs that every six months report hundreds of pieces of content and assess how the companies take care of those reports. And the experiences are quite mixed.
We have already done three cycles of monitoring and are currently in the process of the fourth cycle, and there is improvement. When it started, the 24‑hour deadline was never met and content very often stayed online; nowadays there has been progress on that. However, there are huge differences between the platforms that have joined the code of conduct. There are companies that are doing better and there are companies which are still lagging behind, and if you're interested I can name them, but maybe that's not what matters.
There are also huge differences within the same company across countries: in some countries the companies are more responsive than in others. There are no globally consistent ways of dealing with it. Finally, even when companies remove content, very often they don't provide feedback about the removal, so the user who reported it doesn't know what happened to the report they made.
Of course, the more of a gray zone the hate speech falls into, the more difficult it is to assess whether it is actually illegal or not, and the more likely these companies are not to do anything about it.
So what are the expectations of minority or vulnerable groups when it comes to self‑regulation? I think there are three key words that are crucial here ‑ quick, transparent and accessible procedures.
These are the three things that other solutions, like public authorities, are not providing. So what do I mean by quick? If it takes months or years of litigation to remove content, that will not work; you want the content to be removed, or at least flagged as hate speech, quickly, in a decent amount of time, so that you can actually see that something is happening.
Transparency in terms of outcomes: what content was reported, and what content was removed? Transparency about the procedures followed: what kind of legal information is taken into consideration when making those assessments? Finally, accessible: reporting should be easy for users; if there are self‑regulatory bodies, reporting to them should be just as easy as reporting to the companies and should be part of the platforms themselves. And accessible also in terms of cost: ideally free, but at least not prohibitive in terms of cost.
My final comment is that these self‑regulatory solutions should not be alternatives to other legal methods, but should blend into them. If there is self‑regulation, it should not be used to steer people away from also making reports officially to public authorities, whether through prosecution or civil litigation. And removal of content, for example, should not prevent that content from being used to prove those cases, which is often the situation: the content is removed and there is no way to pursue other legal methods because the content is no longer available. I will stop here.
>> MODERATOR: Thank you very much. From a user perspective, you're basically summarizing it as quick, transparent and accessible, which I think is what a few people have mentioned: the procedures should be clear and useful. Thank you for that. I will give you the floor later on to respond.
Can I give the floor to Miriam.
>> MIRIAM ESTRIN: If you think about the concerns of platforms like YouTube, where I come from, what we want to do is to be able to appropriately address threats like hate speech and get ahead of them while remaining open, and I think it's worth saying why that openness is important. It's because these platforms have opened up and enabled major opportunities for expression and belonging, culturally, artistically, for news, for all sorts of educational content.
So remaining open is extremely important to us. And to appropriately address those threats while remaining open, I think we need a few things. One, we ourselves need a set of clear policies and guidelines; at the same time, we need appropriate legal frameworks, both liability regimes and definitions of hate speech that are clear, so that when we operate in different countries we can appropriately respect the local law.
As Tamas said, we need a system of notice that is user‑friendly, and then on our side we need a robust system of enforcement. For us that has been a mix of people and machines, and I can speak later about where the technology has been able to help us in an area like hate speech. But the technology is imperfect, especially in an area with a lot of nuance and context like hate speech, so there are real limitations there.
We need time to make the appropriate decision, and here we particularly worry about provisions and regulation that require a fast turnaround time backed up by large fines. We think that inappropriately incentivizes removal of content. So we need time to make the decision, and we need experts from the NGO community, from academia and others.
Then I think we need ways to deal with gray area content appropriately. At YouTube we previously had two options: we could either remove a piece of content when we found it or leave it up. Increasingly we found there was content that came close to violating our policies or close to violating laws but did not. So we introduced a new system where we could limit certain features. In other words, we could limit the ability for the content to be recommended by the algorithm, and limit the ability for users to comment on it or share it. We felt that was a way to strike an appropriate balance: people could still access content they may disagree with, without it being unduly spread.
Then finally, I'd say we need better systems of transparency, and I think we've all gotten better at that. We've long had transparency reports where we report on content that governments have referred to us for removal, and increasingly, as platforms, we've been publishing our own reports around community guidelines: what decisions we have made, in what cases, and what the results have been.
I think we will have time later to talk about a few of the examples, so let me just list them. Tamas mentioned the European Commission hate speech code of conduct, and I will just say we were one of the companies that got feedback from NGOs like Tamas's that we were not providing appropriate feedback to NGOs that flagged content. They felt their flags came to us and they didn't know what happened to them.
In response we developed a new tool, a dashboard that lets any user, whether you're a trusted flagger or not, see what happened to your flag. I think that's a positive outcome of what the Commission was able to foster: we heard the feedback and we were able to bring a new tool to bear.
The second is the NetzDG. This is the law in Germany around hate speech and other illegal content, and it provides the opportunity for companies to refer content to a separate self‑regulatory body that can issue opinions and rulings about whether the content is illegal or not under German law.
Those decisions are binding on companies. We have decided that we will work within that self‑regulatory framework, with more to follow on that in 2019.
>> ANTONIO BATISTI: I don't want to repeat too much because we share similar approaches, in that we also have rules. I encourage you to look at them; you can Google "Facebook community standards", that's the easiest way to find them. You might think they are quite long because we put everything on the table last spring, so you can definitely have a look.
You will see our criteria on hate speech and bullying and our definition of terrorism. You might disagree with them, but at least they are there, we can have a debate and you can provide us feedback. It's very important that we are transparent about what we convey to our users.
What we want to have, in our vision, is a chain of responsibility and accountability. We can't be responsible for everything that is on the platform, obviously, and it's important that we are not alone in deciding what should or should not be on the platform. There is the first layer of what is obvious, like incitement; nobody wants speech that calls for the murder of other people. But unfortunately it's not that simple, because you can have, for instance, conspiracy theories against Jews that are not purely inciting people to kill them, but at some point some guy in the world will just take a gun and go and attack a synagogue.
That doesn't mean that Facebook is responsible; it's not that simple. But it's the start of the equation. So today we really need to work with other people, citizens, NGOs, but also governments, to see what is best. Is it to take down content? That's something we need to continue doing when we have to. Or is it more education? Is it more counter‑speech? Many companies are joining efforts.
I'm only giving the French perspective on this when it comes to working with governments. Of course, President Macron the other day drew the lines between the Californian model, the Chinese model, and something in between where we strike the right balance between free speech and protecting users.
You probably heard that we are going to work with the French Government on these issues. The plan is not to have French Government officials coming into the office and telling us what we should do, what kind of content we should take down, or even accessing data; that's not the plan at all. The plan is to have a group of regulators join a Facebook task force for an intense engagement where we will show them what we do to combat hate speech.
Who are the people working on defining the policies? Who are the people taking down content? Who are the engineers working on AI and automation? Because the feedback we had from governments, especially in Europe, is: we are accountable to citizens, so can you please show us a little bit more, so that when we're asked about this we can say this company is making its best effort to combat hate speech online.
Show me things that allow me to say that.
That code of conduct is in the process of being tested. This is an innovation; we need to try something new if we don't want things like the law in Germany, which we are criticizing: it's not that efficient at the end of the day, it's not really helping companies take down hate speech online, and its weakness is also that we have to decide by ourselves whether content is illegal or not.
The plan is to give a role to governments in Europe; it's France today, but we never know how far it can go, to work together so that these people can say to others, especially to citizens, that this company is making its best effort.
We will never be able to take down 100% of the content that causes problems, to be honest, but we have to try to do better. So if you have any questions, I'm happy to answer.
>> MODERATOR: I'm sure there will be questions. I just wanted to ask Jeremy if there was anything to add, because you already introduced the context a bit, and there were concerns raised; maybe you have points to add.
>> JEREMY MCBRIDE: I think the definition is important. This is really what the ECRI recommendation was trying to do from a European perspective. It's problematic, therefore, when you have the European Commission focusing only on certain aspects, and that raises the question of whether you're only dealing with hate speech which is criminal or also with hate speech which is not. That's important.
The issue of freedom of expression is fundamental, but there are differences around the world in how the balancing is done, and that has to be reflected in how it's dealt with in different jurisdictions. The importance of freedom of expression means you need not only to protect those who are targeted, but also to ensure that legitimate expression is itself protected. Given the danger of overreaction, of getting it wrong as to whether something is hate speech or not, a self‑regulation scheme needs some possibility of appeal against decisions, so that people who are accused of using hate speech can actually vindicate themselves if that is not the case.
That also comes back to the question of training those involved in doing the self‑regulation, because they need to understand what hate speech is, and therefore training in the different kinds of context we're talking about is also important.
>> MODERATOR: Thank you. Listening to the various inputs, I think a few trends are emerging which partly address what people in the room have been mentioning: the call for transparency and the call for clear definitions are clearly there. What's interesting about what's happening in France is that companies have an internal process of assessment, and there is now an effort to provide transparency and to seek cooperation with regulatory bodies or others to actually look into this.
So it's an interesting development to see how this can address the need of democratic society for a transparent, clear system that people understand, with feedback too, so that we can meet the need for a quick process but also provide feedback on the decision making and the channels.
That's an interesting development and I'm very curious how it is going to play out in France. I think the next question is the gray zones. As a few people mentioned, some content is very clear and can be addressed with transparent protocols, and through the code of conduct we can follow companies and that feedback loop is there.
But this gray area, this gray hate speech, what can we do there? The self‑regulation model that has emerged tries to take the decision a bit away from the companies and bring it to another body, possibly with more light shed on it. Do you want to explain the idea a bit more, because I think this is an opportunity.
>> MIRIAM ESTRIN: Maybe let me step back and explain a little bit more about the NetzDG. It says that companies above a certain user threshold, in this case 2 million users in Germany, must remove flagged illegal content and illegal hate speech within 24 hours if that content is "obviously illegal", and within seven days if the content is not obviously illegal.
It then further gives companies the opportunity to avail themselves of a self‑regulatory scheme of their design, certified by the German Government as appropriate, whereby we can extend even that seven‑day turnaround time by referring a gray area case to a self‑regulatory body. In this case we'll work with a group called the FSM, which we've worked with in the context of child safety, and we'll expand that work under the NetzDG.
Once we figure out what that scheme will look like, you will be able to see what the mechanism is and who staffs it at the FSM. We do hope it will give us clarity about specific cases that we see on our platforms and how they fall under German law.
The benefit for the platform is that whatever the decision, it is binding on us, and we cannot thereafter be pursued under the NetzDG if a court finds the FSM was wrong. The body issues binding rulings, but it also protects the company from liability if a different ruling arises.
I think it will be an interesting model; we'll see how it works in practice, and I think we will need some time to evaluate that.
There's one other thing, if I may, that I wanted to discuss, which relates to the European Commission code of conduct. At the end of the exercise the Commission puts out a very nice report that says how each of the platforms did under a set of metrics and categories: turnaround times, removals and feedback. They do a really nice job at the Commission of explaining the process by which NGOs flag and refer content and companies remove it, but often what gets attention is the headline that says companies removed, on the whole, in the last cycle, 70% of the flagged content. At YouTube that number was 75%.
Often the response is, "Well, why did you miss 25%? Why did you miss the 30%?" And there I want to make a distinction. The standard isn't 100% removal of the flagged content; the standard is appropriate decisions. So I think it would be worthwhile looking at those areas where we didn't remove content to see: was it a failure of enforcement on the platform, or was it gray area content where lawyers and companies can disagree with NGOs? I think that's a really interesting area worth exploring.
>> MODERATOR: That's an interesting remark, which also addresses some of the needs expressed by people here: we need transparency, we need to be aware of when and why content isn't deleted. Sometimes people find that reported content is not deleted and they need to know why. Looking at numbers doesn't give you the whole answer. The report is all about numbers, but we can't tell whether that has led to over‑deleting or not, because we don't know what content has been deleted.
There's an interesting challenge here of how we gain transparency, because it's not only about numbers.
I promised I would give the floor to the other speakers in the room, and I already see hands.
>> AUDIENCE: I wanted to pick up on something that was said earlier comparing the Internet to a restaurant. I think this brings up a number of serious problems: we see a lot of people with different models for the Internet, and they just port over policies developed for those other settings.
A lot of the time when we're talking about hate speech, the model in people's heads seems to be television and newspapers. The assumption is that people are saying something and thousands or hundreds of thousands of people are hearing it. But most of the things that people publish are in email, to two or three people; for me it's more like a diary. If you look at Facebook pages, particularly people who create private groups on Facebook, this isn't a mass medium, and yet we're assuming we have to use the kinds of rules that we have for newspapers and television.
I really think there's a need to challenge this idea that the Internet is something that needs to be tightly self‑regulated or regulated because it gives everybody a mass audience. That's just not what is happening with a lot of the material that people are posting.
We wouldn't expect the Government to come in and tell email providers they have to filter every email in case someone says something hateful to an audience of two or three or five people. So there's a whole spectrum here, and I really cringe when I see the newspaper model applied to the entire Internet. I also worry a lot about small companies. You mentioned the NetzDG, which only applies to big companies; in some countries a new social media platform would be squashed like a bug because it wouldn't be able to provide the kind of services that you provide.
>> MODERATOR: Considering the time, what I want to do is collect a few of the ideas and responses and then give the panel as much time as possible.
>> AUDIENCE: Thank you. Based on our experience with over 6,000 reports related to hate speech and violence, 99% of them were handled through self‑regulation; we dealt with everything within our organization as an NGO. Only maybe two or three reports did we send to law enforcement, because they were close to criminal issues.
Also, in our work the biggest problem is with YouTube and Facebook. You don't know when you will get an answer. Where is the report? Is it processed or not? Who knows? It's the same situation with both YouTube and Facebook. YouTube takes reports only from government or law enforcement; we are an NGO, so what can we do, just send reports? No. With Facebook you can only send a report as a private person, from a private page or through the NGO page.
You just get a formal answer: OK, we've got your report, thank you, we will give you an answer. So the biggest problem is how you can react to hate speech reports. Thank you.
>> MODERATOR: Thank you very much. Very short, please.
>> AUDIENCE: Just to pick up on Mike's point that the Internet, or the platforms, should not be held to the mass publishing model of newspapers or television; nor should we use the model of the telephone. That's what makes this difficult: it's a new thing, it doesn't fit the historical models.
We would posit that probably the major opportunity for government is at the front end, in education. If we're really trying to get to self‑regulation, the issue is the lack of boundaries, the cultural boundaries that stop a person letting these sorts of things out, and building an understanding at younger ages of the effect of speech on others is something government can do.
>> MODERATOR: Thank you very much. Tamas, maybe you would like to respond to one of the inquiries, and maybe the others can pick up the other points too.
>> TAMAS DOMBOS: I probably agree that the Internet is not one thing. There are various platforms with very different types of content and very different reach. Our experience is mostly with social media platforms, where people share content but there is a body providing the platform, in which case I think the responsibility of the platforms themselves for maintaining community standards is clear. Of course it's very different when a company is just hosting other content, etc.
I think there is room for improvement for all the companies, and I fully agree about the difficulties in the monitoring. What is reported? It's not very clear, and the monitoring exercise should be improved on that point; I fully agree with your comment.
The only way to assess how the companies are doing is to actually see what is reported and what the outcome was. It doesn't have to be a print publication of thousands and thousands of pages, but these platforms make it possible to be transparent about the content that was reported and what happened with it, so people can go and check what happened with those reports.
>> MIRIAM ESTRIN: Let me quickly say that anybody can flag content on our platform, so I'm happy to sit and walk you through it; it's not just for government and law enforcement. Anyone can click the three dots on a video and report abuse. We have a system of trusted flaggers that can report in bulk, and then we have the dashboard that lets you see what has happened to those flags, so let's make sure to help you with that.
I just wanted to say one thing. Jeremy mentioned a system of appeals, and I neglected to say we also have that. If you feel your content has been improperly removed by YouTube, you can appeal that decision and we will review it, and I agree, it's quite important. Thanks.
>> It was an interesting point about focusing on content that has widespread reach rather than content with only a very limited audience. It's a way to improve.
>> MODERATOR: Due to time, I need to wrap up here. I think this is only the beginning of the discussion on self‑regulation and hate speech. There are two practices in development now, with the NetzDG in Germany and the French context, which probably gives us reason to reconvene next year to continue the discussions and review our experiences here in Europe.
Thank you, everyone, for your contributions. The report will go online within 24 hours so you can read back over the discussion. Thank you.
(Applause)
>> MODERATOR: The short version of the policy recommendation is on your table, and if you want the full publication there are a few copies at the back.