IGF 2017 - Day 2 - Room XXII - WS166 Combating Online Violence Against Politically Active Women

 

The following are the outputs of the real-time captioning taken during the Twelfth Annual Meeting of the Internet Governance Forum (IGF) in Geneva, Switzerland, from 17 to 21 December 2017. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> SANDRA PEPERA: So we'll just wait a couple of more minutes for people to roll in from lunch and start at five minutes past.

Okay, ladies and gentlemen, we'll get started. Good afternoon and welcome to a panel on combatting online violence against politically active women. My name is Sandra Pepera, from the National Democratic Institute, based in Washington, DC. We support democracy promotion around the world. We're present in about 50 to 55 countries, and one of our key principles has always been enhancing and increasing the equal and active participation of women in politics.

We're in an interesting political moment, I think, around the world. More than 50% of the global population lives in a democracy. There are a lot of people in China, so that probably explains some of the other 50%. And time and again, people vote, literally, for democracy. In Churchill's famous terms, it's still the least-worst form of government that we have.

What do people like about democracy? They like to participate. They like to be consulted. They appreciate leaders who are somewhat competent. Many, though, do not like the divisiveness of partisan politics, and that's a challenge, I think, we're seeing around the globe, but democracy still remains a successful form of government for the realization of rights and development. So, in a way, it's not surprising that it's under attack by those with less attachment to principles of participation, inclusion, transparency, and accountability. So at NDI, we've long viewed the participation of women as intrinsic to the integrity of democracy.

Research confirms that women's participation in decision making in their lives and communities changes political discourse, improves the relevance of public policy decisions, and makes peace more sustainable.

Women are 50% of the population, and participation is their right. But if you break open the space for women, who are 50% of every other social group as well -- 50% of youth, 50% of people with disabilities, 50% of minorities -- you're embracing a more inclusive politics of different groups.

The chair of NDI is Madeleine Albright. She gave us our slogan. She said that progress without democracy is improbable, but democracy without women is impossible. And women have made significant strides in all fields of human endeavor in the last hundred or so years. Our approach to supporting women's participation is to tackle the barriers that prevent them from getting into politics. We see violence as one of those barriers. Now, all violence is bad and must be stopped, but violence against politically active women has further impacts. It is an abuse of human rights. It is an abuse of civil and political rights, and it undermines democracy.

Do men face violence in politics? Yes, they do, but it is different in nature and in impact. The violence that women face is much more likely to be sexualized or, if you like, focused on their gender and their sex, and it drives women, especially young women, away from the political sphere. Even as women step forward, maybe because they're stepping forward, we are seeing a backlash against their participation. Now, what happens online is not new. In a meeting with counterpart national democracy partners this morning, I made the point that those of us working on marginalized communities, regardless of race, gender, or sexual identity, have been aware of this nature of violence because it existed and continues to exist in the political realm.

What the Internet has done is to accelerate, amplify, and somehow make permanent its impact, while providing many perpetrators with the cloak of anonymity. In our submission to the U.K. parliament's recent inquiry into the intimidation of parliamentary candidates, we described it as an area where an old problem has been given a new and toxic life.

So there are dangers for women's rights and for the quality of democratic institutions and power. At the moment, the absence of intrinsic democratic principles is failing many women who wish to be politically active around the world. I have to say -- and I would say this, wouldn't I? -- that we have constructed a stellar panel to discuss the issue today and share thoughts about how to combat this violence. We have met our own standards of diversity and inclusion, I think you will agree. We're a 50/50 panel, excluding the speaker. It is a multi-sector and multi-disciplinary panel.

The panelists will say a few words about the topic from their perspectives. Then I will curate a round of questions between them, and then we'll open things up to the floor. Please feel free to engage. We're also online. I welcome anybody who is listening from a distance. So now let me introduce our panelists to you.

On my extreme right is David Kaye, the U.N. Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression. He has served since his appointment in August of 2014. He's a clinical professor at the University of California, Irvine, School of Law, with a research and writing focus on accountability for serious human rights abuses and the law governing the use of force. He has, of course, collaborated with national governments, NGOs, international organizations, and everybody else in between.

To my right is Seyi Akiwowo, a local politician in east London. She's the youngest black female councillor there. After facing horrendous racist and sexist abuse online when a video of her speech in the European Parliament went viral this year, she founded an organization to tackle online abuse through advocacy and training. She lobbies companies to do more to stop online abuse. She has developed a set of recommendations for how they can adequately and consistently address violence against women and girls and online hate speech.

To my extreme left is Nate Matias, who is at Princeton University and works with millions of people -- this is the power of the Internet -- in citizen behavioral science to promote a fairer, safer, more understanding Internet. He researches practices that contribute to flourishing, fair participation online, making and evaluating interventions for safe, creative, and effective societies. Nathan's current projects include large-scale experiments on reducing discrimination and harassment online, as well as studies on social movements, civic participation, and social change.

And then to my left, and by no means the least, although she is last, is Nighat Dad, the founder of the Digital Rights Foundation in Pakistan. She's a human rights activist engaged at the policy level on issues regarding women, digital security, and women's empowerment. Nighat was included in the Next Generation Leaders list by TIME Magazine for her work helping women fight online harassment.

Ladies and gentlemen, that's the panel this afternoon. Please join me in welcoming them.

(Applause)

>> SANDRA PEPERA: So I've asked Seyi to start. We're not only talking about those elected. We're talking about activists and voters and party candidates and elected women and those in high office, if that's the case. While Seyi is an elected councillor, she's representative of a body of women who are basically just taking up their right to be active in the public decision making that affects their lives.

So, Seyi, tell us what happened to you, what your experiences have been, and what the main challenges and opportunities are in dealing with this violence against women in politics.

>> SEYI AKIWOWO: Thank you. Thank you, National Democratic Institute. I will talk about my experience, but if I could first touch on freedom of expression. During our training workshops and events, we face the same question: When does free speech become hate speech? For us, the answer is simple. Online abuse is not robust debate. It's about the intentional harassment of women to get them to leave the Internet, particularly social media, to modify their behavior to please the patriarchy, and to self-censor.

Sharing a video without someone's consent is a clear red line. We can then turn our attention to remarks that are not so clear-cut. Companies must do more to protect the rights of women and diverse groups to express themselves online. Sadly, this is not happening. Women are not free to express themselves, their opinions, or even post a selfie. Women aren't allowed to be strong and confident in their opinions online, especially women of color.

"Which STD will end your miserable life?" "If all whites agreed that the best course of action would be to exterminate blacks, we could do it in a week." "This is why monkeys don't belong here." "I hope you get lynched." These are just some of the words I received in a storm of abuse earlier this year, some time after I made a speech about the refugee crisis.

The online world seems to be a place where people behave in ways they know they can't offline. The trigger can be a selfie with my hair wrapped in braids, or proudly celebrating Black History Month, or calling for people of color to have a safe space to meet, or advocating for the human rights of people not to be badly treated or die in police custody. My experience is not uncommon. It's an indication of how far society has to go to achieve true equality.

There's an increasing number of attempts to silence women and diverse groups online through various forms of abuse, ranging from, but in no way limited to, harassment. There was one woman who was subjected to online abuse, body shaming, and harassment because -- wait for it -- she said, "I hate hummus." Driving women out of public space is not a new thing, but I agree with NDI: online abuse and harassment is a new challenge to democracy, digital inclusion, and progress toward gender equality, as well as to the integrity of the information space.

I cannot sit here in a room with you guys talking about violence against women in politics without talking about Diane Abbott, the U.K.'s first black woman MP and now shadow home secretary. Not only does she top the list of U.K. MPs receiving the largest number of abusive tweets, but she received ten times more abuse than any other woman MP. Many women have contacted me and Diane, telling us they are seriously rethinking a career in politics because of the abuse they see politicians who look like them receiving.

So I founded Glitch, a foundation to end online violence against women and girls. The Cambridge dictionary defines a glitch as a problem or a fault that prevents something from being successful or working as well as it should. We believe that sums up the current state of the Internet and social media. It's also something that can be fixed. As was mentioned before, we lobby social media companies and governments to do more. We developed a set of recommendations for social media platforms and hold training workshops for young people. We recently developed an infographic, which I think is going to go up. We train women who are in politics and those who aspire to have a career in public life. We have five approaches to combatting online violence against women and girls, which organizations here and governments can adopt. I'm happy to go through these in much more detail in the Q and A.

Number one, raise awareness of online abuse. It's a growing problem, and it has an impact. Amnesty International recently published a report on online abuse around the world. I'm proud to have been a media spokesperson for it. It revealed that 23% of women surveyed said they had experienced online abuse or harassment at least once. Of those, 41% feared for their physical safety, and more than three quarters made some changes to the way they use social media platforms as a result of the abuse. That is self-censorship.

Number two, increase understanding of our rights online. I was shocked at how well this did online on Saturday, when I was watching the dancing. I was shocked that people took to this. It goes to show that so many women are so desensitized and so used to having to deal with misogyny that they don't realize they shouldn't have to deal with it.

There's a significant problem with law enforcement across the world not taking reports of online violence seriously. I'm pleased to say an online hate crime hub was launched in London. The skills developed there need to be shared with all police officers so women are not prevented from reporting online abuse.

Number five, training. We need training for young people so they can be better online citizens and understand what online abuse is. We need training for those who work with young people; it's not good enough just banning phones in schools or blocking certain websites. We need to teach young people. And we need training for women in politics, and for those aspiring to a career in politics, to build the resilience to navigate online abuse.

Most of all, we need to train tech companies. They are developing new platforms, and they must learn from past mistakes and glitches. As I draw to a close -- I promise I'm coming to an end -- I would like to talk about diversity and inclusion when combatting violence against women in politics. Number one, when we talk about the online abuse women and politically active women face, we must be intersectional. We must look at women with multiple identities.

I will tell you a secret, just in case you didn't know: I'm a black woman. I don't just face misogyny. I face racism too. When reporting online abuse, users face a very white, male reporting system and response. Finally, there is a responsibility on women and men in politics to advocate on behalf of, and inclusively of, all women engaging in online space. Yes, these women are activists and politicians, but they're also journalists, models, bloggers, moms, senior leaders of companies, and the future generation. We all must stand up for their right to be a woman online too.

Thank you.

(Applause)

(Audio fading in and out)

>> DAVID KAYE: I don't know if I should thank you for having me follow Seyi after that talk, because so much of what you said resonated with me. Some of what I had prepared to say, Seyi has covered. I think some of the things I want to mention are a reflection on what she just, I think, powerfully noted about the reality of online abuse, but also some of the things you said about the responsibility of private actors, and about measures that are short of censorship and short of prohibition, which I think are important but also difficult, right?

Going down the path of dealing with online abuse in a really careful way, a way that allows robust debate but also tackles the problem of online abuse -- that's not an easy problem to solve. I think what you put up in the infographic from Glitch covered a lot of what is important, both from a violence against women perspective and from the perspective of freedom of expression. So I will just make maybe three or four points generally, and then move down the line here.

Abuse, in particular online abuse, against women is different than other forms of abuse -- against men, certainly, but even against minority communities. There's a particular kind of attack against women that's often sexualized and is often designed not only to push one particular person offline but to push a community offline. I think that's true for all sorts of attacks on minority communities or on vulnerable communities -- I don't like using "vulnerable," but it's the word I'm using right now. I think that's important to recognize: the purpose of online attacks is itself a censoring purpose. If we treat it only vaguely as speech that needs to be protected, we're missing how to resolve the problem. So I think the way Seyi framed it was excellent, because it's really critical for us to see online abuse as this kind of problem. And related to that is, again, maybe the fourth point on the infographic: tell the police.

So I think this is an important part of the issue and part of the solution. With offline abuse, offline harassment, and threats of violence -- we have problems offline in the real world too -- it's typically understood that threats are to be addressed by law enforcement. For whatever reason, or whatever set of reasons, that doesn't always transfer over into online space.

So it's really critical -- and I like the idea of training law enforcement about this as well -- that actual threats and abuse that would be penalized in offline space should also be subject to investigation in online space. Those are real harms. There shouldn't be a distinction between the two.

Okay. So that being said, I just want to put down a few points of caution. I will say them very quickly because I want to get to the rest of the panel. So one is the importance of the clarity of legal norms. I will say legal norms, but I mean legal norms as we traditionally understand them -- law that's adopted by governments, by city councils, by regular public authorities -- but also platform law.

So any kind of normative shaping of online expression should be clear. It shouldn't be subject to a lot of misunderstanding, so that a person who might abuse can't legitimately say, Well, the rules didn't suggest that I couldn't do this. There are a couple of double negatives in there. I'm sorry about that.

Victims, too, should understand what their rights are in offline space and in online space. They can do that better when they know what those rules are and those rules are clear. So it's really important that the rules themselves be clear. And related to that point: if the rules are not clear, it leaves significant discretion to both the platforms and law enforcement to actually tackle the wrong kind of expression.

So instead of attacking abuse, they might attack or take down material that's about sexual health, or that is taken totally out of context, where the context is a critique of abuse. That gets taken down. So the rules need to be very clear and also be designed not to overregulate the space.

A second point: it's critical, I think, for the online platforms to provide autonomy to individual users. Autonomy means there should be ways, in advance of or in the face of abuse -- and I'm talking now about abuse that's short of threats, let's say, but abuse that's clearly designed to knock people off platforms or get you to stop being on a platform -- for people to block those kinds of users, to block those kinds of abusers en masse, to do mass blocking. The platforms have been moving in this direction over the last year or so, and I want to recognize that, but at the same time there needs to be a significant amount of autonomy so actual subjects and victims of abuse have the tools online to manage their space.

And then the third point I want to make is a more general point about censorship -- so it's not about the clarity of the laws and their specificity, and it's not about autonomy. It's that, generally, I think we need to be careful. I liked the way Seyi divided between threats and abuse with this kind of bright line -- though I'm not convinced there's always a bright line, unfortunately. Sometimes those fuzzy lines are the hardest cases, but we need to be really cautious in addressing the problem, which is a very, very real problem.

It's a problem for democracy. It's not just a problem for women -- and this is a framing issue as well, sort of a footnote -- I think if we think of this as a problem of women's voices, we're missing the point. It's a problem of full public participation in our democracy. So we all lose; even those not subject to the abuse lose the value of a free and open democratic space. There needs to be a framing there. Going back to the point about clarity, we need to ensure that the steps we take are not designed to censor otherwise legitimate speech.

I think that is actually where the hardest sets of problems might be, but those might also be the places where there are not as many cases. I don't know, but there certainly needs to be research around that. I think if we have clarity on the enforcement of real threats and abuse, the kind that would be addressed in offline space, then we can at least get to that grayer, more uncertain area in a way that doesn't deal with it solely through censorship but uses some of the tools of training and education that Seyi was talking about.

Thank you.

(Applause)

>> SANDRA PEPERA: Thank you, David. Let me just make one small point here. The captioning robot keeps rendering the name of a previous panelist, Soraya. She's not here. I want to say that every time it says Soraya, it really means Seyi -- S-E-Y-I, A-K-I-W-O-W-O -- which the robot has just rendered as "nitrogen."

>> SANDRA PEPERA: Nitrogen -- she's combustive too.

(Laughter)

>> SANDRA PEPERA: I'm going to turn now to Nighat, who has been working on digital rights and the issue of creating an inclusive space. You're in Pakistan. The world must look quite different from there. Tell us what it looks like from where you are and how you're addressing some of these issues around women's participation in the public space.

>> NIGHAT DAD: I think it's important to look at different jurisdictions. Coming from Pakistan, a conservative society, even talking about violence against women is, you know, an issue. When you say online violence against women -- people don't even consider that an issue worth discussing.

So for us, I think it was a great challenge to initiate a discourse on online violence against women -- not only women using online spaces generally, but women who are politically active, women who are activists, women who are journalists and doing a lot of other things while using online spaces.

So what we basically did was start with the women's rights organizations who were already doing work on combatting violence against women. It was a real challenge to, you know, make them realize that the violence, or the harassment, happening in online spaces is actually a form of violence.

So starting with them, because these organizations had already been there for decades working on these issues, was our first step. But then, you know, we strongly felt at DRF that we really cannot do this work blind. We need evidence-based research. We started working on small reports and started working with UNSRs. Even just now, I think we're going to (?) on violence against women. So we found all these small spaces, not only in Pakistan but outside Pakistan as well, to make people realize that online violence against women is actually a form of violence.

Another thing we did was find champions in the parliament -- the senators or the women parliamentarians in the National Assembly. The task, again, was to actually make them understand what online violence is and why it is a matter of free speech and access to fundamental rights.

Another thing we did was start a cyber harassment helpline. I started the helpline because, as someone working on digital rights and the issues of violence against women, I started getting a lot of complaints from women Internet users about their experiences. They were seeking legal remedy or counseling, or anything that could support them, and we didn't have any of that. I mean, I ended up getting two or three complaints a day. I started thinking, this is not a solution. I cannot handle this; they cannot handle this.

So I brainstormed with some of my friends in the U.S., and I thought that maybe the solution that could work for Pakistan was to start a helpline. We have several other helplines addressing the issues of young people or, you know, violence against women or domestic violence. So we started the helpline last year, in December 2016, and we're going to release our report tomorrow. So far we have received more than 1,500 complaints, and among those, we have referred more than 300 to law enforcement. What we learned through the helpline is not only about taking calls or assisting women with their digital safety, how they can fight back against online harassment, or how they can use the legal remedies and laws we have; it also helped us generate data and a set of recommendations for law enforcement in Pakistan and for the Internet companies.

So we have seen that the Internet companies have their reporting mechanisms. They say these are global, but they're actually not. They treat people differently in different jurisdictions. In countries with stronger laws, the companies feel an obligation to follow those laws, but in countries like Pakistan, they can easily get away with it because we don't have any proper laws or policies. So these are the things we've done so far. We're still learning from our experience. The helpline is working for us, but it might not work in other jurisdictions; it's the solution that we have found so far.

I feel that there is no one solution to addressing, you know, online violence against women. In patriarchal settings, I thought that to at least give women a space to speak out, to share their experiences, is a lot, actually.

(Applause)

>> SANDRA PEPERA: Before we move on to Nate. Don't tell me you haven't faced this yourself.

>> NIGHAT DAD: Every single day, there's a threat on my timeline on Twitter. I don't go to the other folder on my Facebook, but when I do -- and I only go because there are women who reach out to me with complaints -- I find a lot of sexualized violence, body shaming, and threats. People are like, How dare you talk about this? Technology is not for women. You're ruining our culture, our values. So there's a lot more, but I feel like, you know, we need to keep reclaiming this space.

>> SANDRA PEPERA: Thank you.

Nate, one of the things that Nighat talked about was engaging broad social movements. I know this is what you're up to. Tell us about the work you do and how you're challenging people, as Nighat said, to provide the space and reclaim the open Internet for democratic participation.

>> NATHAN MATIAS: Delighted. I want to start with a question that has been posed by several people already: in our efforts to protect people, might we actually move into the space of censorship? This was a very big part of the debates in the early 2000s, as a number of governments debated whether or not they wanted to ban material online that promoted self-harm and eating disorders.

When governments decided not to pass these laws, the social media platform Instagram decided to take action in 2012. In a very early example of algorithmic governance, they altered their search engine to make this material harder to find. At the time, it was celebrated as an example of a platform taking action to protect people, that they had maybe found a way to balance these two values. They weren't prohibiting the material, but they were making it harder to find.

Unfortunately, four years later, when a group of researchers at Georgia Tech looked at this policy, they found that publishers who evaded Instagram's changes actually experienced increased engagement and participation. Instagram had taken actions that they thought would make progress on this important issue. They were trying to balance the censorship concerns with a very real public health need. It turns out there may not have been anything to balance -- it's at least possible that their best policy efforts increased the very thing they were trying to resolve.

Now, I bring this up in a conversation about violence against women because many of our discussions on this issue include hopes and assumptions about the outcomes of policies and social efforts to create change, when we are very much at the early stages of understanding those outcomes. I want to make two major points here in our conversation.

The first is that I think both platforms and anyone working on this issue have an obligation to evaluate our efforts to protect people and prevent violence, precisely because they may have outcomes that are very different from what we expect. Also, as that research happens, I argue that it must be independent from the companies and the governments who carry out these measures, so that we can be sure the research and the collective evidence we develop are accurate, reliable, trustworthy, and accountable to the public.

So usually when I bring up these ideas, people are less likely to question whether it's good to test the outcomes of policies and more likely to ask: How? Because, of course, tech platforms are, understandably, secretive around their own internal policies and trade secrets; there are all sorts of privacy questions that act as barriers; and companies tend to be very protective of their research and data. We only see a trickle of the research they do.

To give you a sense of the scale, there was just a study published by a team at Microsoft that reported on the last 21,000 behavioral experiments they did over the last two or three years. So when you think about the scale of behavioral research that's done every day on these platforms, and then the incredibly small share of research we have on the behavioral outcomes of our best efforts to prevent violence and keep people safe, it's a real tragedy.

In my research, I've been trying to build avenues for advocates and people I'm starting to call citizen behavioral scientists to do our own research, so we can guarantee that the questions are driven by the people most affected by an issue, and so we can work together to grow a shared pool of findings on what works in the contexts where we're finding them. I will share three examples very briefly. One was a 2014 study where I collaborated with an organization called Women, Action and the Media. Like some of Nighat's work, this involved collecting reports of harassment from women, predominantly in English-speaking countries, and reporting them to the Twitter platform.

As Nighat observed, that put them in a powerful position to collect data and analyze not only the kinds of harassment people were experiencing but also to do statistical analysis of the responses Twitter was likely to make. We didn't have internal data on the decisions Twitter was making, but because people were pooling information about their harassment reports, we were able to do statistics to get an average sense, at least for those cases, of the kinds of decisions the platform was making. That's one example of the way third parties are able to coordinate to collect data and grow transparency around a company.

Another example, which I finished this last year, was a collaboration with what is now an 18-million-subscriber community that discusses scientific knowledge on its platform. This community now has over 2,000 volunteer moderators. David noted earlier that companies are increasingly relying on third parties to make sense of these issues. This is something that has been central to how the Web works since the very earliest days of social technology in the 1970s and 1980s.

You might hear that companies have hired thousands of staff who do paid moderation. Usually those staff are dramatically dwarfed by the volunteers who flag content and who take formal moderation positions. Those are people who are similarly in a position to do their own behavioral research.

So I worked with the moderators of this science discussion community to test the effect of simply making policies against harassment more visible. We posted the policies at the top of some discussions and not others, observed over 2,200 discussions in that context, and found that simply making the rules more visible could reduce harassment from first-time participants by 7.5 percentage points -- around 1,800 cases per month.

Now, this is of course an incremental change in a community, but it's an example of the way a community that cares about protecting people and preventing problems can not only intervene but also do behavioral research to test the effects of its efforts. That's the broadest idea behind the project coming out of my dissertation at MIT, called Civil Servant, which is now being incubated by the citizen media organization Global Voices. We're building structures that we hope will achieve maybe not the same scale as (?), but hundreds or thousands of times the behavioral research we have now, actually guided by citizens in their own contexts trying to find out what works.

So it could be people in London trying to manage and prevent online harassment there. It could be some of the gaming communities I'm working with, which have millions of participants, trying to reduce hate speech in their context. Or it could be people trying to audit the social impact of platform interventions in our lives. So as we think about these higher-level questions about how we should frame our values and our policies, there's a tremendous opportunity for people to organize, actually test those assumptions, and discover over time what makes a difference.

(Applause)

>> SANDRA PEPERA: Now, I know that David has to leave us in 12 minutes' time, so I'm going to turn to you first, because I think a number of things that were said hang together. I looked at some of the things you've written, and you talk about the need to engage and develop a mechanism or a platform for an ongoing conversation with the Internet platform providers. How would you do that, taking on board, or not, Nate's caution about the need for independence and autonomy?

>> DAVID KAYE: So this is like a conference version of intersectionality or something. I'm going back to another panel, and the discussion there was focused on common sets of principles that the companies can think about in order to address how they're dealing with different problems of content regulation or content moderation. I will say two things about this.

One is that I think it's important for there to be common sets of principles, or at least common sources of norms. I think that's valuable and particularly important in a space where so many people are using multiple platforms. So common principles would make some sense. I also think there's room for independence, but the problem is that independence would make a lot more sense in an environment where you could imagine start-ups, where you could imagine competition, where you could imagine new entrants into the field, particularly of social and search.

Right now, I think there's a problem of dominance. I'm not taking a policy stand on whether there should be a break-up of the big companies, but the problem is that censorship, and some of the rules coming out of Europe right now, are reinforcing the market dominance of some of the companies, because it's very expensive to do the kind of moderation and regulation they're seeking to impose.

I think those rules may lead to less creativity about how to deal with some of these problems, and may even add to censorship. It's not that I disagree that there should be independent approaches, but in an environment where there are few big actors in markets, that independence may not have the same kind of salience as it might in a very competitive environment, where people can choose: I'm going to be on this platform because there are stronger restrictions on certain kinds of abuse, as opposed to going somewhere else. There's not a lot of movement right now because there aren't many other places to go where you get the audience you get on the big platforms.

>> SANDRA PEPERA: One more thing to put to you, because you have the sort of 50,000-foot view of some of this stuff, and I know you've been working with the (?), and she's going to be presenting her report on combatting online violence against women to the Council in June of next year. What is your view of the fundamental point, which is that issues of gender and other forms of exclusion are fundamentally issues of power? The people who are at the table, the people who are writing the algorithms, are not necessarily the people with the best perspective on that. How far does that go toward reinforcing the dominance you just spoke of?

>> DAVID KAYE: So that's a problem that goes across communities -- whether we're talking about victim communities or the engineering space. The main companies come from, you know, sort of the liberal parts of the United States, and they tend not to be demographically very diverse.

I think that goes across other questions too. One of the overarching problems is that much of the discussion over content regulation coming from the United States is kind of First Amendment oriented rather than human rights oriented, so the vocabulary doesn't resonate outside the United States.

So I think there are some problems around that, in addition to the fact that, as Seyi was saying before, it's male-dominated, and tends to be Caucasian male-dominated. I think there's interesting work in the IGF and in the Internet Engineering Task Force on integrating human rights into the engineering choices being made right now. I think that kind of step is important. Also, we were talking about training earlier; I think training and education around human rights norms is something that needs to be integrated into engineering choices as much as anywhere else.

One other point -- I just want to make sure I make it. It's like throwing a little bomb out there. There's been, on this panel, a discussion about violence against women and politically active women. One question I just want to throw out there -- and it's one I'm not going to answer -- is this: human rights law around freedom of expression tends to treat public figures differently than, let's say, regular individuals, in terms of speech and in terms of what they're expected to accept as criticism. And this is less about algorithms; it's a question of whether there should be two kinds of norms -- norms related to protection against online abuse of women, and whether those are any different for public figures, and for public figures who are women. I don't think there's much of a difference in the space of threats and real abuse, but I wonder whether, in that gray area, the gray space I was suggesting earlier, there might be variation. Because the law makes that kind of distinction, I thought it was important to put that out there as well.

>> SEYI AKIWOWO: The question was once asked whether politicians could be seen as a protected group on social media, therefore requiring a quicker response. And the response was that no country around the world recognizes politicians as a protected group. Maybe this is the beginning of pushing for that.

>> DAVID KAYE: The question is whether the rules should allow, in a way, almost more abuse of public figures. Normally, human rights law around freedom of expression accepts that politicians -- and not just politicians but public figures generally -- have less claim to challenge expression than others do.

>> SANDRA PEPERA: I think from our point of view, because we've been working on this a lot, the challenge with that challenge is that we know it doesn't really matter what a woman says; she's just going to get more abuse once she has said something online on a public issue. I was struck by Seyi's point. She was talking about a sensitive issue, migration and refugees. But a big analysis of one comment stream found that the top six commentators who got the most abuse were all women; numbers seven and eight happened to be black men. It didn't matter what the women were writing about. They were just abused.

So on your point about public space or public officials, I don't think that kind of categorization is safe, because this is about women speaking up in the political space, which it is their right to do. Anyway, you would have to almost double down on what you were asking for, if that were to be the case. I just put that out there. Just saying.

>> SEYI AKIWOWO: Just to come back on that point, though. I have to say this as a Jo Cox leadership graduate: Jo Cox was murdered because it was so easy to find out where she was. As an MP in the U.K., you can opt out of having your address online. If you're a councillor like me, your address is there. When I was going through the abuse, I had to have the local police officers come and check out the area. That freaked out my African mom. There's a view that public officials have to be transparent about all of their different houses and expenses and claims. I get that. But it means they're also the most vulnerable and can easily be targeted. We're seeing, time and again, women being easily attacked.

>> SANDRA PEPERA: David, thank you so much for being with us. We really, really appreciate your presence this afternoon. Thank you.

(Applause)

>> SANDRA PEPERA: So I wanted to turn back to Nighat. One of the things we're grappling with a lot is what we call the gateway effect. David even talked about it: a lot of law enforcement considers the online world a jurisdiction-free zone, but we know things step off the online platform into the physical world. Many women politicians and women activists, including Jo Cox, are stalked online and suffer (?) -- murder, in her case. What are some ways you think that element can be addressed?

>> NIGHAT DAD: It's a challenge how that will be addressed, but it's something we have seen problems with as well. Last year, one woman politician basically accused her party leader of sexual harassment, and she had to face a lot of abuse online from the party workers. One thing we saw was several threats of throwing acid in her face or stabbing her, and stuff like that. And then she made a complaint to the police that there were people stalking her.

We have seen that sometimes online abuse shifts into offline spaces, and sometimes offline abuse shifts into online spaces. So this is really interconnected. But the challenge we're seeing is that law enforcement in Pakistan don't see online abuse as a serious issue. They think, Oh, it's in the online space, it's virtual, and nothing is going to happen in the offline space.

I think the example of this woman politician gave us very strong evidence that, you know, this abuse can shift into the offline space as well. So, yeah, the challenge is there to convince law enforcement. Also, one thing I have seen with law enforcement in Pakistan is that they are not really trained.

So there are cyber crime acts and laws which address crimes happening in online spaces, but online abuse against any community is the least important category for them. They give much more preference to the other cyber crimes, like fraud.

So I think to make the case that it's not just online, that it has offline effects and impacts, one needs to bring solid evidence that they can see. That's the problem with law enforcement: they want to see the violation happening in the offline space before they think it's a crime. So the challenge is there. I don't know if anyone has a response to that, but, yeah, we're facing this challenge.

>> SANDRA PEPERA: So I will ask one question of Nate, and then we'll open up the floor. I know we have online contributors as well. Nate, I just wanted to ask you: What does it take to mobilize your citizen moderators and citizen activists -- really, to change them from bystanders to advocates and activists? What does it take to actually achieve that?

>> NATHAN MATIAS: Sometimes we underestimate the number of people who are already doing the work of being active bystanders. In the United States, the Data & Society Research Institute did a national study in which they found that around 46% of Internet users have taken action to support a person they saw facing harassment, or to engage directly with the person doing the harassing. Because citizens are distributed throughout society and we don't always have organizations coordinating us, that bystander work is not always easy to see. When I do work with bystanders, I often start with people who have already decided to dedicate substantial energy.

(Audio fading in and out)

>> NATHAN MATIAS: To a number of Facebook group administrators. If you count the number of moderators on a platform like Reddit, there are about 150,000 moderator positions. There are a lot of people who have taken it upon themselves, at least nominally, to monitor the spaces we're a part of. I try to organize and help those people realize that if they (?) in certain ways, they can make discoveries about what makes a difference, both in their own context and to inspire others to try the same. That's the kind of research that groups like the science discussion community have been doing with me.

>> SANDRA PEPERA: Okay. So now it's your turn. Anna, can we take some of the online questions or issues first and then go from there?

>> ANNA: Yes. We have one question for the entire panel: How do we address this? The member of the audience is asking, or suggesting, that we commit to a plan to move the discussion -- the DC framework -- to completion.

(Audio fading in and out)

>> ANNA: I'm not sure of the correct word to use here. I think they're asking, sort of: What is the next step? What is the next actionable step beyond what we're doing already?

>> PARTICIPANT: Thank you very much for that interesting discussion. I have three questions. I'm coming from Sri Lanka, and one of the things we see is that women are labeled as immoral. Their character is assassinated. To what extent have you seen this where you're coming from, and is this more of a global phenomenon?

And, second, to Nighat Dad: you said you've been looking for champions to speak out on this issue. In Sri Lanka -- I'm sorry, I keep referring to Sri Lanka because that's where I come from -- there was one instance where the issue was raised that some female parliamentarians had been subjected to physical violence, but none of them came forward to admit it. What kind of difficulties have you run into when looking for champions? What can we learn from you?

And my last question, again to you, Nighat. You kept saying that online violence is a form of violence. Is there a reason why you kept saying this? Is it normalized as not being violence, as maybe a (?) form of violence? I would like to know more about that.

My last thing is to Nate. If I remember right, you said you looked at online violence in European countries. I don't know if I understood right, but would there be any possibility for you to analyze the differences between Europe and other regions? Thank you.

(Audio fading in and out)

>> SANDRA PEPERA: Okay. Seyi, why don't you speak to -- I'm sure you can speak to at least the first two.

>> SEYI AKIWOWO: The next actionable step, I would say, is what Nighat has said and what Glitch has been campaigning on: get the U.K. government and all political parties to formally recognize online violence as a form of violence. And the language here is very important. We're not saying online violence equals physical violence. We're saying online violence is, in its own right, a form of violence, which has similar impacts and consequences to physical violence, like when you're in a domestically violent relationship.

The critique we tend to get is: how can you say certain words, or robust debate, are the same as physical violence? We're not saying it's the same. We're not saying there's a hierarchy either. We're saying it's a form of violence that needs to be recognized. That's the first actionable step: get political parties to adopt that; get the government to recognize that at the next (?) in November.

So "immoral" is really a hard word. When you said it, I thought, I don't think the word I've seen is immoral. It's around knowing your place. You shouldn't be confident in your opinion. There are a lot of shouldn'ts. I mentioned this: women can't do this or that. It's particularly so for women of color. Like, how dare I celebrate Black History Month? How dare I wear a head wrap? It goes to that kind of language. Not really a morality thing. I would be interested to hear what the morality argument is on that. I just want to say as well that I agree with Nighat, but I sadly think it's going to take a high-profile case of somebody being directly physically assaulted or abused following a threat online before the international community wakes up.

We saw a 10-year-old girl in America who was cyberbullied and then committed suicide, and that didn't even make mainstream media. Unfortunately, I think it's going to take another Jo Cox, or another person like Hillary Clinton not winning an election, for the media to start picking up on the impact of fake news and online violence against women.

>> SANDRA PEPERA: Thank you. What I can say is that in the countries we work in, immorality, or supposed sexual license, is hugely used against women who are politically active. I know a woman in Kosovo whose image was Photoshopped to show her having sex with a dog. It's huge out there, so it's very problematic; it is part of a global phenomenon. I can also say that one of the reasons we've had so much trouble with violence against women MPs is that the women won't come forward. They see it as a risk to their political career, for a start. And as long as it goes unrecorded, nobody will talk about it and nobody will take it up. We also make the link that all violence is violence: when we describe it, we talk about harassment, discrimination, psychological abuse online and in person, and physical and sexual assault, following the U.N. declaration on violence against women.

Nighat?

>> NIGHAT DAD: Yeah, I can resonate with your comment on "immoral." That's a massive comment in Pakistan when it comes to women parliamentarians. About the champions: we identified champions, not only women but men also, and especially not from the government but from the opposition political parties who were in power in the Senate. So I think you have to be very strategic in planning for champions. It worked for us in our context because the opposition was in power in the Senate, and the (?) was in power in the lower House, where they made the government with other political parties. So we used that leverage, and the senators who are in the opposition want to make it into the news. Sometimes they want to be in the media; at the same time, they really feel for the cause. So we were lucky to have champions who were reachable and with whom we could have individual meetings.

Another thing you asked about: why online violence against women wasn't considered violence. That's basically sad, because the notion in a patriarchal society is that even violence in offline spaces is not always considered violence. First of all, like, you have to be a woman. That's sufficient, and then abuse comes your way. In the online spaces, you're saying something, challenging the stereotypes, and especially challenging, you know, the power structures. People feel threatened. That's where the abuse comes from. Another thing is that in our society, I'm seeing change only after so many years.

When I started working on digital rights, I remember that when I talked about Internet freedom or online violence against women, people used to make fun of me, even people in the bigger social movements. So we've had to normalize the discourse: the abuse in the online space is not part of the package that comes with the Internet. It's violence, and one has to recognize it. The first step is recognizing this as violence, and then we can address it. So that's why I kept saying that it wasn't considered violence.

>> PANELIST: It's not so long ago that even psychological abuse wasn't considered violence; only the physical was. I think it's part of an evolution, but I think we really need to speed up this understanding.

>> SANDRA PEPERA: We can't slip into this cycle of --

>> SEYI AKIWOWO: When women were raped or sexually assaulted on the streets, the argument was that women shouldn't be on the streets without their husbands. What I'm annoyed and frustrated about is that we're now going to have those same arguments online. Is it going to take 50 years before we get society to wake up to the abuse? Somehow we have to speed that up, because there are women who are not engaging online every day, who are contemplating suicide or, sadly, committing suicide because of it.

>> SANDRA PEPERA: Nate, can you draw any comparison between the work you've done in North America and what's going on elsewhere?

>> NATHAN MATIAS: As the person with the question remarked, I think it's risky to take research from one context, particularly causal research from my work in one place, and just apply it elsewhere. Coming out of my dissertation work, which I finished this summer, I brought the Civil Servant project into an organization called Global Voices, which works in 40 languages and 160 countries around the world. My hope is that I will be able to collaborate with people in many cultures and countries, because I think that if there's anything universal to be found, the only way to understand it is to look at the particulars and understand what makes sense.

(Audio fading in and out)

>> NATHAN MATIAS: And how can that be understood and applied? I think one of the major challenges to doing that, at the moment, is that most of the resources in these areas come from the companies themselves.

(Audio fading in and out)

>> NATHAN MATIAS: Some of them are doing valuable work, and most of that work is a trade secret; it's reviewed by the legal teams before it's allowed to be released. So to better apply research and discover meaningful interventions, we need substantial growth in research capacities that allow us to ask these questions in collaboration with citizens around the world.

>> SANDRA PEPERA: So we're running slightly short on time, but I just want to address a couple of things, and then the last thing I will do is thank the panel. NDI is working with other partners. First of all, for the first time ever, we'll be presenting a report about combatting violence against women in politics to the assembly next October. We're really, really appreciative of the special rapporteur, who decided to use this report.

Secondly, on those involved in political parties: we've had a very strong partnership, and in the last year they have, at the global level and in the Africa region, adopted commitments that strongly focus on violence against politically active women (?) alongside the campaign on the Istanbul Convention. Again, those of you in liberal-leaning parties who are part of that broader party international will have something to hook onto for your national-level campaigning.

Thirdly, in the broader space, NDI has been working with its partners, the Center for International Media Assistance and the Center for International Private Enterprise, on a set of principles for an open and inclusive Internet. That has involved working with partners in civic organizations around the world to get at some of the normative issues that have been part of the discussion this afternoon. So there's a lot going on, and there's a lot more to be done.

I want to thank our panel, because I think they embraced the issue. Each of them is doing really important work that impacts women's ability to participate in politics, and therefore strengthens the resilience and sustainability of our democratic practice around the world. Thank you for being here. Please, if you are interested in this, go to NDI's website and follow the campaign. Thank you very much, everybody.

(Applause)

(Session concluded at 4:26 p.m.)