The following are the outputs of the real-time captioning taken during the virtual Fifteenth Annual Meeting of the Internet Governance Forum (IGF), from 2 to 17 November 2020. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record.
***
>> CHRISTIAN PERRONE: Hello. It's so interesting to be here, folks. It's fantastic for us to discuss such an interesting topic with people from all over the world, or at least across three or four continents, from South America, North America, Switzerland, and obviously Asia as well, and to discuss a topic that actually impacts all of us. We are discussing disinformation, and more specifically, we're discussing the role of bots and automated tools in this context.
So, before we dive into our specific topics, I would just like us to introduce ourselves a little bit, in about one minute each. I'm Christian Perrone from Brazil, and I work at ITS as Coordinator of the Rights and Technology Team. So, Deb, one minute about yourself and a little bit about your organization.
>> DEBORA ALBU: Definitely. Hello, everyone. Good morning, good afternoon, or good evening from wherever you are joining us. It's a pleasure to be here together with Christian, who is a fellow ITSer, and also with my fellow panelists. Unfortunately, we cannot be in the same room this year, but I'm glad to be sharing the screen with you at least. I'm Debora Albu, Coordinator of the Democracy and Technology Program at the Institute for Technology and Society. We are a Brazilian organization, specifically based in Rio, even though that doesn't mean much this year, in which we are all working remotely and safely from our homes.
At the Institute for Technology and Society, our mission is to make sure that the Global South also has a voice in the Internet governance discussion, in the discussion regarding digital rights, and in the discussion regarding how new technologies affect and impact us as societies, and how we as societies can also create more ethical and democratic technologies.
At the Democracy and Technology Team, we are very fond of experimenting with new technologies to strengthen democratic principles, so in that sense we take advantage of the opportunities offered by new technologies, working on developing civic technologies and enhancing civic empowerment.
However, we also know that there are lots of challenges in that sense, such as disinformation, which is the topic of our discussion here today, a topic that we have been working on, discussing, and researching for a couple of years now. We have been developing a lot of projects regarding disinformation and combating disinformation, from a Brazilian perspective but also from a global perspective. Thank you, and we hope to have a very fruitful discussion with all of you attending and participating in the chat.
>> CHRISTIAN PERRONE: Thank you, Deb. Can I ask you, Jenna, to say a few words about you and your organization?
>> JENNA FUNG: Sure. Hi, everyone. This is Jenna from Hong Kong. I am the Community Engagement Lead at DotAsia Organisation, which runs the NetMission Academy, a youth Internet governance academy for young people in the Asia-Pacific, and I'm also the Coordinator of the Asia-Pacific Youth IGF and the Hong Kong Youth IGF. I became engaged as an ambassador three years ago, and since then I've been actively joining IGF conferences and trying to make a change. Especially in these past two years, things have happened in the society that I live in that encouraged me even more to stay engaged in this community, because I realized how technologies and the Internet can help people make a change in their lives and fight for their rights.
So, I really am looking forward to the discussion we're going to have in this session, and thanks for inviting me. Thank you.
>> CHRISTIAN PERRONE: Thank you very much, Jenna. Can I ask you, Jan, to say a few words about yourself and your organization? Thank you.
>> JAN GERLACH: Absolutely. Thank you for having me. My name is Jan Gerlach. I'm a Lead Public Policy Manager at the Wikimedia Foundation, which is the nonprofit based in San Francisco that hosts and supports Wikipedia and other free knowledge projects. The organization also supports a larger movement ecosystem of individuals, organized groups, and chapters around the world who contribute to Wikipedia to make sure that the information and knowledge included in Wikipedia is up to date and lives up to high-quality standards, and who also promote Wikipedia in their respective contexts in various regions around the world.
I am a lawyer by training, and as my title suggests, I work in public policy, so I promote or advocate for laws around the world that create a positive environment in which Wikipedia can grow and flourish, but where people also have access to Wikipedia and can contribute to it freely, without fear of retribution or their own safety being threatened.
We come to disinformation with a pretty obvious connection. Many people go to Wikipedia when they try to find out what is true and what is happening in their country, what is on the news, or when they just want further information about something they have heard: who is this person that I see in the headlines, what is this concept that scientists are talking about, et cetera. So, yeah, Wikipedia is a great resource for combating disinformation in everyday life, and I'm really excited to be here and join this conversation today.
>> CHRISTIAN PERRONE: Thank you, Jan. Chris, now can I ask you to say a few words?
>> CHRISTOPHER TUCKWOOD: Sure. Hi, everyone. I'm Christopher Tuckwood, director of an organization called The Sentinel Project. We're an NGO focused on assisting communities threatened by mass atrocities around the world, and in doing that we place a strong emphasis on two things: one being direct cooperation with the people in harm's way, and the other being the innovative use of technology, which can take a lot of forms, whatever tools are available and appropriate to address the drivers of risk in a given situation. The connection with today's topic is that for several years now we've been working on what we call misinformation management, because we've recognized that misinformation and rumors, along with things like hate speech and other such dynamics, are really significant drivers of instability in different societies and can often trigger violence and other negative outcomes. So effectively countering them, both online and offline, is a really critical way to reduce the risk of this kind of instability and violence.
>> CHRISTIAN PERRONE: Fantastic. Thank you, Chris. So, we have all faced bots and automated tools in our trades and everyday lives, and we have seen them being used to support disinformation and to help spread misinformation in general. But what about using the same tools to fight disinformation? That is the question of today's workshop. I would like to start by asking Chris, since he has already mentioned a little bit of the work that The Sentinel Project does. When you are dealing with humanitarian crises and all of the things you were talking about, like fighting against genocide, you have probably seen these tools being used to foster malicious messages and disinformation in general. So, what made you think about using the very same tools to fight back? And what are the tools that you think are important, or at least useful, to actually fight misinformation and, to a certain extent, achieve truth? That's our point today, so can you talk a little bit about that?
>> CHRISTOPHER TUCKWOOD: Sure. I'll take a few minutes to try to address that. I'll start off by saying that, as you can probably guess by this point, our experience with this has been primarily in conflict-affected countries. Also, just for ease of terminology, whenever I say misinformation, that also includes disinformation. We don't necessarily always make, or have the ability to make, a really strong distinction between intentional versus unintentional false information, whether it's circulating online or offline.
So, you know, as we've already kind of established, rumors and misinformation can really drive the distrust and fear and hatred that destabilize societies and increase the risks of violence. In our experience, we admittedly don't have any concrete evidence, in any of the places where we have worked, of bots or other forms of automation being used as part of coordinated disinformation campaigns. That's not to say that it hasn't happened; it's just to say that it can be very difficult to identify when it is actually happening.
But from that perspective, as it stands right now, we kind of just treat all misinformation the same, although it's an open question, and maybe something we can get into later and even hear from some of the audience members on, whether or not it makes a difference in how we respond to things, whether it's disinformation versus misinformation.
But I would say that, generally speaking, our interest in this topic of starting to use, to keep using the term, "bots" or forms of automation in countering misinformation is because we know that the adversaries, so to speak, the malicious actors who are trying to drive this kind of instability, manipulate populations, or incite violence, are using or at least have the potential to use this technology, especially as the kinds of countries where we work start to become increasingly digitized, as is already happening, and as this reaches a larger scale in terms of people's online engagement and, therefore, online exposure to this kind of problem. The only way to respond to that potential risk is basically to respond in kind. It's almost like an arms race, I guess you could say. If there are malicious actors who are using automation to push out this sort of manipulative, fabricated content towards large numbers of people, then the only practical way is to use similar approaches, at least for monitoring and identifying those kinds of trends.
How we actually then respond to it at the same kind of scale is another question entirely, but that's basically it. The interest in this technology is to increase our capabilities, to keep up with the growing problem.
>> CHRISTIAN PERRONE: Fantastic. You are quite right that this is a very interesting way of doing it, to think of it as fighting fire with fire, to a certain extent. It's quite an interesting approach. So, now, Jan, continuing on the same topic: as you mentioned, Wikipedia is probably one of the places people go to for information they believe they can trust, so it is probably also a hotspot for people trying to push competing narratives and, to a certain extent, to foster disinformation on Wikipedia. We saw this in Venezuela a few years ago very strongly, but you have probably seen it in more discreet ways in many situations. And apparently, as far as I understand, you also use automated tools and bots to fight this. So, in this very understanding that Chris mentioned of fighting fire with fire, how do you approach bots, how do you feel about bots in general, and what are your thoughts on using them for good, to a certain extent?
>> JAN GERLACH: How do I feel about bots in general? That's a very interesting question; it's big. Let me just clarify that Wikimedia projects are curated, maintained, and grown by a community of people around the world, and as the foundation that hosts those projects, we merely enable and empower people around the world to do this. We don't make calls that this is disinfo, this is not true, this is low-quality content, and then remove it. It's the community doing this, right? And they work together collectively and collaboratively, deciding what can stay up, what should maybe be rephrased a little bit, what is not accurate, what needs to go, and we try to help them. Our belief is that technology should do what people cannot do and should allow people to do what they're good at more effectively and more efficiently.
So, there are many bots on Wikipedia that are run by the community. We don't place them out there; they are built open source by the community. They go through a process of registration where the community weighs in, the community does code review, and there is a certain policy for allowing bots. And those bots do various things. They remove vandalism, like "ha-ha" or slur words or just stuff that is obviously not a sentence. They flag copyright violations. Others just say thank you and hello to newcomers, and yet others make sure that there is no double linking between pages that refer to each other all the time, and stuff like that, which are the smaller, tedious tasks, right, that would really be a waste of time for the editors if they did this all the time.
What this means is that the bots actually free up time for the community to do the hard work. And the hard work, of course, is the ongoing decision‑making about what is good quality, what is accurate, what is verifiable, right? What is a good way to write neutrally about a person or an event, and that's where the community really spends their time and that's what they want to do, right? So, we have a very positive outlook, actually on bots to come to your big question. We think that bots and other technologies can really help humans to do the work that they want to do in a good way and free up time.
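To make the kind of tedious patrolling work Jan describes a little more concrete, here is a minimal sketch of a tool that scans Wikipedia's public recent-changes feed and flags edits that blank most of a page for a human to review. This is purely illustrative and not any actual Wikipedia bot; the heuristic, the threshold, and the decision to only flag rather than act are assumptions.

```python
# Illustrative sketch only: flag possible page blanking in recent changes
# for human review, using the public MediaWiki API on English Wikipedia.
import requests

API = "https://en.wikipedia.org/w/api.php"

def flag_suspicious_edits(limit: int = 50, removal_ratio: float = 0.8):
    params = {
        "action": "query",
        "list": "recentchanges",
        "rcprop": "title|ids|sizes|user|comment",
        "rctype": "edit",
        "rclimit": limit,
        "format": "json",
    }
    changes = requests.get(API, params=params).json()["query"]["recentchanges"]
    for change in changes:
        old, new = change.get("oldlen", 0), change.get("newlen", 0)
        # Crude heuristic: most of the page's text was removed in one edit.
        if old > 0 and (old - new) / old >= removal_ratio:
            # A real community bot would queue this for a human patroller
            # rather than revert automatically, which is the point Jan makes.
            print(f"Flag for review: {change['title']} (rev {change['revid']})")

if __name__ == "__main__":
    flag_suspicious_edits()
```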
Now, going away a bit from bots to machine learning tools: we do deploy machine learning tools ourselves. There is a system called ORES, the Objective Revision Evaluation Service, that is deployed on a few language versions of Wikipedia, in collaboration with the community again, and that flags bad edits and bad articles to the community. It's sort of like AI as a service. People can subscribe to it and then they get flags, notifications of potentially bad edits, and can review them.
And we know that this has reduced the time that the community spends just doing those evaluations by a factor of 10. So, if they spent 10 hours on that before, they only spend 1 hour now, and they can go do the harder revisions of information. So that's just a quick overview of how we approach technology, and bots specifically, in the fight against misinformation and disinformation.
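As a rough illustration of the "AI as a service" subscription Jan describes, a patrolling tool might ask ORES to score a single revision and surface high-risk edits to a human. This is a minimal sketch, not the Foundation's integration; the endpoint, the "damaging" model name, and the response fields are assumptions based on the publicly documented ORES API and may have changed.

```python
# Minimal sketch: ask ORES how likely a given English Wikipedia revision is
# to be "damaging". Endpoint and field names are assumptions from the
# public ORES documentation and should be checked before use.
import requests

ORES_URL = "https://ores.wikimedia.org/v3/scores/enwiki"  # assumed endpoint

def damaging_probability(rev_id: int) -> float:
    """Return the model's probability that the revision is damaging."""
    resp = requests.get(ORES_URL, params={"models": "damaging", "revids": rev_id})
    resp.raise_for_status()
    score = resp.json()["enwiki"]["scores"][str(rev_id)]["damaging"]["score"]
    return score["probability"]["true"]

if __name__ == "__main__":
    # Hypothetical revision ID; a real tool would take these from the
    # recent-changes feed and hand high scores to human reviewers.
    print(damaging_probability(123456789))
```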
>> CHRISTIAN PERRONE: Fantastic. Thank you very much, Jan. You're quite right; there is also this factor of reducing the need for human oversight, or at least the number of hours of human oversight required. So, talking about hours of oversight and how humans interact with bots, I would like to ask Jenna: as you are involved in youth engagement, you have probably seen how bots can have this impact in both a positive and a negative way. What are your thoughts on that, and how do you feel that bots can be used in a positive way?
>> JENNA FUNG: Thank you. That is actually basically the focus I was taking and trying to point out. Earlier, before we had the session, I was trying to prepare for the questions we are having, and I was still questioning myself whether using bots to block misinformation is a good way to tackle this problem, because sometimes, maybe through lack of context or other kinds of barriers, that might actually channel what we are receiving.
But then, when I think about it deeply, I think bots can actually help us increase the accuracy of so-called fact-checking mechanisms when we have these issues of misinformation. Why am I saying this? Let me give an example that I experienced, particularly in Hong Kong. A few months ago, there was a video of an incident that happened last year, on August 31, 2019. It was a raw video, a livestream of Hong Kong police storming a metro station, and it was posted either with no caption added or with a caption that was just some personal political opinion, which, by Facebook's community guidelines, should not be flagged as misinformation or false information.
But in fact it did get flagged and blurred as false information, because somewhere else online, on Twitter or other platforms, many people were saying that in China the police were trying to arrest people who got infected with COVID-19. That claim is actually false, but the video itself is not related to it; it was other information. In this example, the video was flagged as false information because of what was said on other platforms. The fact-checking platform actually wrote very clearly that it was not related, but there was still inaccuracy in this part. Of course, we wrote to Facebook, and that platform, called Boom Live, responded very accurately. But I think that is something we need to eliminate in the future, because if similar situations happen again, bad players might abuse this kind of mechanism to limit the dissemination of information, because they want to limit what kind of information you are receiving.
So, I think with bots we can lower our dependence on certain fact-checking platforms and our reliance on journalists for fact checking, because accuracy is something an algorithm can help us improve. Because, like I'm saying, in some territories, information that is so-called false by definition will never be delivered to people in that territory; they do it very precisely with AI or bots. So that's something that maybe Facebook, or we, need to think about. For example, earlier, before I finished my work and headed back home to attend this session, I was scrolling through my social media feed and I saw that on whatever Donald Trump is posting right now, Facebook attaches a notice saying that Joe Biden is the potential candidate to be the next President, or something like that. But it's kind of irrelevant to whatever information related to Donald Trump is on Facebook. So let's think about it: of course, in some territories they might want to block everything in the hope of delivering only the information they expect their audience to receive, but in other territories, where we have all kinds of freedom of expression and a very democratic world, there might be another extreme where we are actually practicing the same thing, flagging information inaccurately and wrongly.
So, I think that's a thing we also need to think about: even if we're trying to do the same thing better, we may be doing something that is actually the same as the other extreme that we are trying to avoid. I hope that my English can actually put together what I'm trying to explain. That's what I wanted to say in direct response to your question at this moment, and I hope we will have another interaction later; I will talk a little bit more on some other aspects then. Thank you.
>> CHRISTIAN PERRONE: Fantastic. You're quite right: if we're using automation to fight misinformation, we have to be careful that in our fight we are not ourselves producing disinformation to a certain extent, or at least blocking people's speech. So you're quite right that this is a topic that has to be addressed as well.
So, Deb, now it's over to you. Since we come from Brazil, we have seen firsthand how misinformation and disinformation campaigns work, even at election time, and we have seen that ITS has developed a specific tool to fight that in a very particular way. So my question to you is: first, why use these kinds of tools, bots and automation tools, to fight disinformation, and what are your thoughts on that? Why do this, and does it have an impact on misinformation campaigns?
>> DEBORA ALBU: Thank you, Christian, and thank you to the other panelists; you have already kind of unrolled our discussion here. To respond to your question, I think the first thing is to consider what kind of bots we are talking about when we talk about disinformation, and here we're talking specifically about what a lot of the literature calls social bots. Social bots are created to emulate human behavior; they are created, to a certain extent, to pass themselves off as if they were human, as if they were actual people saying actual things on social media.
So, in that sense, it is very difficult to identify or verify, or even place a very specific label saying that a specific account is or is not a bot, when we're talking about social bots.
So, in that sense, tools and methodologies to identify such bots cannot detect 100% of what is a bot and what is not. Rather, what they do is identify and analyze automated behavior, or, to coin a new word, "bot-likeness," which is the similarity between that behavior and automated, bot-like behavior.
Taking this into account, the Institute for Technology and Society has developed a tool called PegaBot, which is its name in Portuguese; in English it would be something like "bot catcher." It works very similarly to other tools that native English speakers can use more directly, such as Botometer or Bot Sentinel. What PegaBot does is give a probability of a Twitter account being a bot or not; we give a percentage result of that probability based on an algorithm.
So in one way, we are actually using automation to detect automation as well, but one of the main things for us with PegaBot is to give more transparency to the use of bots in social media. We're not here to say that an account is or is not a bot; we are here to give transparency to the use of automation, and this sort of automation, in social media.
Our rationale is based on the idea that the more people know about the existence of bots online, the better prepared they are to deal with such impersonations, right? And I see a comment here in the chat from Xavier that really speaks to this: he's talking about the idea that how we answer comments from bots is very different from the way we should be answering comments from humans, and our rationale is exactly that. We focus on transparency so that people understand how the algorithm works, how the results are given, and how this analysis comes to be.
So one of the focuses of our newly launched website, by the way, is a page on transparency in which we literally disaggregate the algorithm's criteria and all the parameters that are used to produce those results, to give those probabilities, so that people can better understand what those results mean, instead of just taking a probability percentage and saying, oh, it's 78% bot, then it's a bot. No. The idea is that ultimately there is a person, a human, actually interpreting those results and, as other panelists have said, complementing the abilities that AI, automated processes, methodologies, and tools have.
So, it's about people plus automation that actually gives us a better result and a better understanding of how to deal with disinformation in social media.
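To illustrate the idea of a transparent, explainable "bot-likeness" score that a human then interprets, here is a purely hypothetical sketch. It is not PegaBot's, Botometer's, or Bot Sentinel's actual algorithm; the features, weights, and threshold are invented only to show how disaggregated criteria could add up to a percentage that still requires human judgment.

```python
# Hypothetical bot-likeness scorer: every feature and weight below is an
# assumption for illustration, not any real tool's methodology.
from dataclasses import dataclass

@dataclass
class AccountFeatures:
    tweets_per_day: float      # posting frequency
    retweet_ratio: float       # share of posts that are retweets (0..1)
    has_default_avatar: bool   # profile never customized
    account_age_days: int

def bot_likeness(f: AccountFeatures) -> float:
    """Return a 0-100 score; higher means more automated-looking behavior."""
    score = 0.0
    score += min(f.tweets_per_day / 100.0, 1.0) * 40   # very high posting volume
    score += f.retweet_ratio * 30                      # mostly amplification
    score += 15 if f.has_default_avatar else 0         # low-effort profile
    score += 15 if f.account_age_days < 30 else 0      # very new account
    return round(score, 1)

# Example: a week-old account posting 200 times a day, mostly retweets.
suspect = AccountFeatures(tweets_per_day=200, retweet_ratio=0.9,
                          has_default_avatar=True, account_age_days=7)
print(bot_likeness(suspect))  # a high score, but a human still interprets it
```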
>> CHRISTIAN PERRONE: Thank you very much, Deb. I think this is a very good segue into the second part, talking about different approaches. You have all mentioned different bots and different ways to fight misinformation with bots, and to fight the bots that spread disinformation. The question behind this is that disinformation campaigns can happen in different ways. One can be more malicious, meaning disinformation itself; or it can be less intentional, more like misinformation in general. So, do you think we should use the same tools, the same automation, the same bots all the time, or should there be different approaches, different ways to deal with those two different levels of misinformation and disinformation?
My question will go first to Jan. You have probably seen, throughout Wikimedia's work and in particular on Wikipedia, the different ways of doing things: either misinformation that is less intentional, or sometimes very intentional attempts to change narratives, including using bots to do that. So, do you think we should take the same approach to these two levels of misinformation, or should they be different? And should the tools that we use be broad tools, or very specific, context-specific, and even speech-specific? Or does it depend?
>> JAN GERLACH: The lawyer in me will always say it depends, right? So, on Wikipedia, to bring it back to that, I think it matters little to an editor whether something is deliberate, a deliberate disinformation campaign, or just somebody including a false statement from a maybe not so trustworthy newspaper.
The ultimate goal is to provide verifiable and trustworthy information with a neutral point of view. So, I think it doesn't matter so much to the editor on Wikipedia where this really comes from, but it may matter from a tactical perspective, of course. Are there, say, 50 sock puppets, which are accounts created merely to vandalize things or shift a point of view, coordinating to bring up a point over and over again? Then you may need a different approach, because you may need more editors watching a certain series of articles, right?
Or is this really just one person innocently trying to edit something that is not up to the standards of Wikipedia? Then you may not need such a coordinated approach. I think it's merely a question of tactics, really. I don't think the technology on Wikipedia would be different for that.
When it comes to the bots, I think the work that they do frees up this capacity for the editors either way. But to go back to a point that was raised earlier about the human quality of a bot, which is something that Debora mentioned with PegaBot, like is this bot-like, right? I think that is really something people have taken issue with on Wikipedia: when a bot makes a decision, bots don't have patience; they make immediate decisions, and they say yes or no to most things. And in a collaborative context, when people try to give their time and work on a voluntary basis, being told no, just having your edit reverted, feels very harsh, right? That's where some people on Wikipedia are actually a little worried about having too many bots around, because they cause hurt and frustration in others. Ultimately, it can also lead to a situation where somebody who just comes on and wants to help with a certain project is so frustrated that they leave. That is the big problem, or the big issue, I would say, with Wikipedia working against disinformation: you need a lot of eyeballs to help with this, to help against disinformation, right?
And then you have people leaving because of, well, usually there are a lot of factors, right, but bots can be one. So maybe that is a bit of a negative aspect, where the technology helps in one way but also prevents newcomers from feeling comfortable and actually helping with the whole system.
And so, that's maybe a side where I can see that the technology may need to take a different approach over time, as it has to fit with the community.
>> CHRISTIAN PERRONE: Fantastic. I think you have a very good point that maybe it's not about the technology itself but about the way we use it, so it's a lot about tactics. I find that quite interesting.
So on this note, I would like to ask Chris: in your work at The Sentinel Project, you have early warning systems that use new technologies. Do you believe it's different when, inside those early warning systems, the technology is looking at misinformation that is less intentional versus a coordinated disinformation campaign? Do you think there is a difference in technology, a difference in approach, and are the tools you use for these early warning systems different for those different topics or different kinds of speech?
>> CHRISTOPHER TUCKWOOD: Yeah, so as of right now, the short answer is no; we don't really make a distinction between one or the other or have different tools for addressing one or the other, because this is, I think, still a very open question in terms of how we address the issue of whether something is misinformation or disinformation and whether that makes a difference in how we counter it. From the early warning perspective, misinformation, broadly including disinformation, is very relevant as an indicator, along with, again, something I mentioned earlier, which is hate speech or dangerous speech deliberately inciting violence. There is a very close relationship between the two phenomena; misinformation and hate speech are not two completely distinct things, and they interact very closely, in some cases in very dangerous ways.
So, I think for us, the biggest use of technology at this point would be in being able to better monitor and recognize both of those online, from a purely early warning perspective and not from an intervention perspective. This is something we've done more on the hate speech side with Hatebase, which is basically a multilingual automated system for recognizing hate speech online on platforms like Twitter, for example.
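As a rough sketch of the lexicon-based monitoring approach Chris describes, a tool might match incoming posts against a multilingual term list and surface hits for human analysts. The terms and structure below are made-up placeholders, not Hatebase data or The Sentinel Project's actual system.

```python
# Illustrative lexicon matcher: flag posts containing listed terms so a
# human analyst can review them. All terms below are placeholders.
import re

LEXICON = {
    "en": ["example_slur_a", "example_slur_b"],   # placeholder English terms
    "sw": ["mfano_wa_neno"],                      # placeholder Swahili term
}

def find_matches(text: str, language: str):
    """Return lexicon terms found in a post, for analyst review."""
    terms = LEXICON.get(language, [])
    if not terms:
        return []
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, terms)) + r")\b",
                         re.IGNORECASE)
    return pattern.findall(text)

posts = [("This is an example_slur_a aimed at a community.", "en")]
for text, lang in posts:
    hits = find_matches(text, lang)
    if hits:
        print(f"Flag for human review ({lang}): {hits}")
```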
I'm really interested in finding out whether there are ways that we can apply a similar methodology to recognizing rumors and misinformation circulating online, although I think there is a significant amount of difference in how that would have to be approached.
So, assuming that we can do all of that effectively, in terms of actually getting decent automation that can still support human judgment but take a lot of the work off of human analysts or human moderators, if we can get decent automation just to recognize and effectively monitor this stuff online, that would be a big step forward.
The question then is what we do about it. And I think, you know, for us there is definitely still a role for technology there. The trick is doing what will actually be effective. I think there is a risk if we start using this technology too much to try to influence public discourse itself, so instead we try to do something that I think Debora referenced with regard to PegaBot, which is to give people the means to better recognize these sorts of things for themselves and make their own judgments.
And, ultimately, that's what's going to address this problem. Technology can help, but it's a human problem that requires human solutions. Even in the more low-technology areas where we've worked, we've really tried to encourage more critical thinking among community members, to simply question things in the first place and actually think about whether something they're seeing might be true or false, and that can make a significant difference, I think, as well.
But then, of course, they also have to have the means to try to verify it. So I don't think that necessarily answers your question very well, and I don't want to take up more time from other people, but it's all just to say that it's a very open question at this point. I think that except in very extreme or specific cases, where it would be obvious that there is a coordinated disinformation campaign trying to incite violence or something like that, in the majority of cases it really just doesn't make that much of a difference in terms of how we respond.
>> CHRISTIAN PERRONE: Fantastic. You have a very good point. I think behind this is a huge discussion about how we humans will approach and understand whether something is coordinated content or just a matter of misinformation, versus a very specific case of disinformation.
On this note, I would like to ask Jenna: since you do a lot of this work and engage with different age groups, particularly with youth, do you think there is a different way of engaging with such tools, with bots and automated tools in general, for different age groups? Could we have a very broad approach, since one of the things we're trying to do here is identify what is a bot and what is not, what is disinformation and what is misinformation, or at least to flag it? Or is there a different approach for different age groups? How would you fashion the idea of having a tool for all ages, or should it be a tool for specific age groups, or specific groups as well? How do you feel about this?
>> JENNA FUNG: Well, first of all, I'm really not a very technical person, so if you're asking my opinion on how to identify bots and things like that, I would say I'm not quite familiar with that. But for sure, we all have a part in tackling what we see online or on social media, so I can add a little bit on top of that.
I personally do not think it's really necessary to have too many bots controlling the information that we are receiving, because, as the other panelists said earlier, it's actually very important to use human intelligence to educate people to judge whether information is accurate, whether it is right or wrong. I think that's the ideal way to make people think more critically, because, not that it's impossible, but what's the point of having bots or community moderators moderate every single piece of information online to decide whether it's violent, or right or wrong? Instead, I think the ideal way is to educate people. But then, talking about using bots, I think the more urgent and appropriate use would be for the underage. Right now, especially after the pandemic, very young kids need to get online, when they're actually underage, because they need to attend online classes, and kids get online very easily. I don't know about other places, but in Hong Kong, even young kids in primary school have an iPhone 6 or the latest model of smartphone, and then they lie about their age, get on social media, and receive the information. And there is another thing, whether it's young kids or adults: it's basic human nature that we tend to listen to things we believe we can trust. So what I'm trying to say is that even when certain associations, initiatives, or organizations try to provide trustworthy information to prove whether a message or piece of information online is right or wrong, people will still choose something that they want to hear, and that's what will still make things misleading, because they're just listening to what they want to hear.
And earlier Chris was talking about using bots to deal with hate speech and conspiracy theories, and of course we're working on it right now, trying to improve bots for tackling all of these issues, but I believe human intelligence evolves even faster; people will come up with new ideas to get around them. So sometimes I feel like we're working so hard on designing mechanisms or improving algorithms to screen out things we think an audience should not receive, but actually, how harmful is it for these people to receive this information if they can think critically, regardless of their age?
So, I think for sure, for young ages, maybe we try to develop some system or mechanism to protect them from misleading information, but that actually rests on an assumption: we're assuming that they're in a context that is democratic and totally free. So think about the other extreme: people can totally use these bots and mechanisms to screen out what you can receive, and people, especially the young, if they are taught and educated in such a system, will just believe what they're taught; whatever information you feed them, they think it is right.
So I think that's a point we also need to think of, especially if we're in a territory where you deserve and you have total freedom, because think about it: some people know what is happening in this world, but they can use this kind of system to limit what kind of information certain groups of people are receiving, and that group of people may not know that the information they're receiving is actually wrong, because they have been taught that the wrong is right. Yeah. So, I think that's the point I wanted to make in responding to your questions.
>> CHRISTIAN PERRONE: Fantastic. You have quite an interesting point about how humans interact with the different aspects of bots, and this is a very interesting segue to the question I'm going to ask Deb. You mentioned tools for media literacy, and these tools tend to be general tools meant to teach everyone what is a bot and what is not, how to identify them, how to monitor them. Do you think this same approach could apply across the different types of tools that we use to fight disinformation? Do you think all of them should have, at least to a certain extent, a component of media literacy as well?
>> DEBORA ALBU: Thank you, Chris. I think media and information literacy is definitely not only about disinformation or misinformation, so to speak. It is about trying to make sense of a world in which the information ecosystem is more and more complex and in which the volume of information is greater than ever. Consider that now-classic infographic on what happens on the Internet every minute: it's overwhelming, so whenever I show this to people who are not from our environment, from our community, they get completely overwhelmed. So, in that sense, navigating that ecosystem is definitely about facing disinformation, misinformation, hate speech, and forms of speech and expression that have not always existed; we're also talking about bot expression here. So, having media and information literacy as a basis, or as a skill that we need for the future, is not only about disinformation and misinformation; it's also about how to navigate this very complex world of information.
Bots and AI, using a very broad, general sense of artificial intelligence, can help us curate, select, and even digest and consume information, making sense of this whole world of data out there. So it's not only about understanding which kind of information we want to consume, from which areas, from which sources; it's also about where those filters come from. If we do use AI to filter information for us, which filters are we really using? If we rely only on the algorithms of social media platforms, for example, it means that we do not have control over which AI is selecting or rating information for us, or how. Rather, there is a private entity with private interests doing that for us, and ultimately nudging us to keep on consuming a specific kind of information, and only in those spaces.
So, I believe that even if we complement media and information literacy with other approaches, such as fact checking, or even couple that with good regulation and good public policy, it's fundamental to have media and information literacy as the basic approach, because it is definitely going to have more impact, as it promotes a deeper cultural and societal transformation.
So, I would definitely say that every tool or process has to be based on media and information literacy, not only because of disinformation but also because of the complex information ecosystem that we live in.
>> CHRISTIAN PERRONE: Fantastic. I think this is quite an interesting segue into our final block of discussion, and there is one question that leads us in this direction. The question is: isn't it naive to think that people always want the truth? So, my question is about the social risks of using such bots. I think Jenna started very well at the beginning, talking about the risk that a bot used to fight disinformation would itself limit speech or even spread disinformation. So my question is: do you believe it is possible, and legitimate, to use bots to fight disinformation? Shouldn't we be very aware that people might have a right to say the wrong things and a right to look at the wrong information? How can we strike an equilibrium between using these tools for what we think is good and people's right to see whatever they want to see, and maybe, why not say it, their right to be misinformed if they actually want to be? So how do you manage using such tools for good, do you think it is legitimate, and what are the social risks of using them? Can we start with Jenna and then go around?
>> JENNA FUNG: For sure. I think using bots is a good way to improve certain things, such as the accuracy of fact checking of misinformation and disinformation, but I think when using bots it is also important for us to engage stakeholders in it. Because think about it: if we are heavily depending on bots to tackle hate speech or disinformation, there must be certain groups of people defining certain things, defining, let's say for example, what is right, what is wrong, how we should improve this bot.
And in another sense, you could view this mechanism as a certain group of people trying to make everyone follow their definition, which is not democratic. So if we really want everyone to have the right to decide what they want to receive, I think even if we're using bots to tackle these issues, we have to engage all stakeholders in developing this kind of mechanism. But then again, if private entities own these kinds of bots or initiatives tackling these kinds of issues, then it comes back to that discussion on privately owned public spaces, and that's another workshop we could have. But in general, I think it would be good to use bots in tackling these issues, but not to the extreme where we depend only on them, because, as the other speakers said, digital literacy, and I think participation from all stakeholders, is also important in tackling these issues. Yeah.
>> CHRISTIAN PERRONE: Fantastic. All stakeholders should be a part of that and this is an interesting segue. Chris, can you give your thoughts?
>> CHRISTOPHER TUCKWOOD: Sure. So, leaving aside the purely technological question, I hope that I've established, and several other people have also said similar things, that the technology should be there to support human actions and just help humans to better cope with the scale of the issue.
But I think the question really takes on different forms depending on whether we're talking about government, private sector actors, or civil society. A company like Facebook or Twitter has the right to moderate content on its platform; legal rights to freedom of speech and that sort of thing don't necessarily apply there, although from a philosophical perspective, maybe they should.
When it comes to government involvement, I'm very cautious about the tendency of a lot of governments around the world to want to legislate this problem away and essentially take away people's right to be wrong, so to speak, by criminalizing the spreading of misinformation that people often don't even realize they're spreading. Often this is really just a sneaky way, I guess you could say, of cutting down on freedom of speech and dissent, because the government of course gives itself the power to decide what is true and what is false, and anything it doesn't consider true becomes illegal. We could go into more detail, but unfortunately we don't have time for it, so I'll leave it on the note of caution that the technology is there to be used by different parties, but we all have different responsibilities in doing that, and as much as possible, we should do it in the least restrictive way for the marketplace of ideas, so to speak.
>> CHRISTIAN PERRONE: Thank you very much, Chris. Jan?
>> JAN GERLACH: Yeah. What can I add to that? I just want to plus-one that, actually. But also, I think the typical Wikipedia approach to all of this is always transparency, right? Any article on Wikipedia has a so-called talk page, where you have a little discussion button up there, and you can see how people have actually made changes and why; they explain them. I think that is really helpful, speaking about media literacy, right? Understanding how a decision about content has been made; the same thing happens in newsrooms around the world, right?
And whatever technology we deploy, be it a bot that is run by the community or the ORES system that we run, which I mentioned before, we really believe in explainability as well, and audits, right? It should be open to anyone to really look under the hood and understand it; granted, you might need some technological understanding. But that understanding is out there, it's not just a black box, and people can look into it. And I think once that is there, once you comply with a few rules around transparency and auditability, technology can be a really good tool to help fight misinformation. And it will require a mix of tools, right? There is not just one silver bullet, to use that worn-out metaphor; there is really a mix of tools, and also people's contributions, that will help in this fight.
>> CHRISTIAN PERRONE: Fantastic. Thank you very much. Deb, can you wrap it up for us? What are your views on that? We have about two minutes.
>> DEBORA ALBU: Thank you, Chris. And thank you to the other panelists for kind of laying out the way for me. I think when we talk about truth and the right to be wrong, we're definitely talking about very philosophical questions, questions that have been discussed for a couple of thousand years, so I'm not going down that route because we don't have the time for it. I think ultimately it is a matter of power. Who is saying, who is pointing out, what is right and what is wrong? What position of power are those people or institutions in, that they can be sort of the fortress holders of what is true and what is not?
So, I would ultimately take a very Foucauldian approach to this, talking about power, and it's important that we bring power considerations into the discussion of the right to be right or wrong. We're not only talking about an individual's right to be wrong, but also the institutional right to be wrong. And I think I'll leave it at that and give it back to you, Chris.
>> CHRISTIAN PERRONE: Thank you very much. I think we had a very interesting discussion. We have mentioned that there are different types of bots that may fight disinformation and misinformation, and that they may take different approaches to it, though there might also be a general, broad approach; that media literacy would do well to sit at the bottom of any approach we take; and that there are different types of actors, which may lead to different levels of legitimacy in the use and deployment of these bots, with transparency being probably one of the most important tools we have available to understand whether a bot should be used to fight disinformation. Thank you very much to all the panelists here today, and thank you to the audience. Thank you for the many interesting questions, and please find us on social media so we can discuss this important topic a little further. Thank you very much. Let's use bots to fight disinformation as well.
>> DEBORA ALBU: Thank you, everyone. Have a great IGF.
>> CHRISTOPHER TUCKWOOD: It's been great. Thank you.
>> JAN GERLACH: Thank you. See you soon.
>> DEBORA ALBU: See you soon.