The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> MODERATOR: Good morning, good afternoon and good evening. We are still waiting for people to trickle in. We will get started in a moment.
>> ZOE DARME: Can you hear us?
>> KATIE HARBATH: Just checking in. Zoe, it's day 0, so people are trickling in slowly.
>> ZOE DARME: Katie, it's great to meet you.
>> KATIE HARBATH: Zoe, I feel like you should have been here in person.
>> ZOE DARME: Travel with priorities.
>> JIM PRENDERGAST: Okay, everyone. Thank you for joining. This is the session, Tackling Misinformation with Information Literacy. You will be taking a quiz. No pressure. There are no wrong answers. We want people to walk away with a few ideas and new thinking. And by all means, we welcome your questions and interaction. I will kick it off to Sarah, who leads Public Affairs for Google.
>> MODERATOR: Thank you, Jim. I know it's the end of the day, so thank you to those of you who have made it this far. Thank you for being here.
As Jim mentioned, I'm Sarah. I lead Public Affairs for Google in Saudi Arabia. I think this session is hugely timely with everything happening on the global scale and the amount of information present online.
So I hope you will be very engaged with our session today. We have a wonderful speaker lineup. With that being said, we will start with a presentation by Katie Harbath, Founder and CEO of Anchor Change, on the challenges of addressing misinformation.
We will then go over to Zoe Darme, who manages Trust Strategy for Knowledge and Information at Google, for a presentation on information literacy. And then we will go into a Q&A session.
I will take the prerogative of being the moderator on the first few questions and hand it over to the audience, who would like to engage with you as well. With that, I will hand it over to Katie. And Katie, thank you very much for joining us, and Zoe as well. Happy to have you with us.
>> KATIE HARBATH: Thank you so much for having me. I'm sorry I can't be there in person. I want to start today by sharing a little bit of the history of companies working on misinformation, some of the things they tried and how they approached this problem, just to sort of ground ourselves. The current iteration of working on misinformation really started after the 2016 election.
Misinformation has been around for quite some time in various forms, and companies have been trying to combat it for quite some time. But after the 2016 election in the United States there were a lot of stories, initially about Macedonian teenagers spreading fake news to make money, and it wasn't until later, in 2017, that we also started to realize and find the Russian Internet Research Agency's work on -- I worked at Facebook, so the Facebook platforms, but many other platforms too. And this is what started a lot of companies -- I can speak on behalf of Facebook -- working with fact checkers to try to combat this. And misinformation is not just around elections and issues. Some of the earlier things were hoaxes like "a celebrity died in your hometown," or those headlines that the companies were trying to fight.
The initial focus was also very much on foreign activity, foreign adversaries trying to influence different elections around the world. The other important thing to remember about a lot of this work is that it is sometimes focused on the behavior of these actors, not necessarily the content. So this means pretending to be someone they are not, and coordinating with other accounts to amplify things. So as they are looking at this, it's not just what these actors are saying and whether it's false, but also how they may be trying to amplify it in different ways.
Can you go to the next slide. There we go. A couple of other things to think about.
Misinformation is sticky and fast, which means it can spread very, very quickly, and it very much sticks with people; it can be very hard to change their minds. We also find that most of the time things are not completely false or completely true. There's usually a kernel of truth with a lot of misinformation around it, which makes it a lot trickier to figure out what to do.
Because you can't just fully label it false or true. You also have things like satire, parody and hyperbole, which exist in many places and are perfectly legal types of speech, and understanding the intention of the poster and what they mean for it to be, and doing that at scale, is incredibly tricky for many companies to do.
And overall these platforms very much do not want to be the arbiters of truth. They do not want to be the ones making decisions about whether or not something is true or false, or what the facts are. Because they have been accused of the risk of censorship, whether that's true or just perceived, and that has become a huge political problem, particularly in the United States but also around the world.
And sometimes subcategories of misinfo specifically can be a better way for platforms to prioritize, rather than a blanket approach. For health care, for instance, you may have more facts and authoritative information that you can refer to. The same thing with elections: where, when and how to vote is something election authorities have that is easier to point to than something that is more contested or where there are disagreeing opinions about what is happening. So sometimes you will see companies start to set policy based on the types of content they are seeing and the topic of it, in order to figure out how to combat this and mitigate the risks that appear to them -- sorry, I haven't had enough coffee -- the risks that might happen to them. Jim, if you could go to the next slide.
So, a couple of strategies that we have seen companies take. Most companies do not take down false information unless, again, it's about some very, very specific topics -- health and elections are two I can think of. But among the other strategies we have seen companies take, one is prebunking.
So, giving people a warning about the types of information that they might see, the types of stuff that could potentially be false, or directing them to authoritative information on these sensitive topics. We saw a lot of this during COVID: platforms would show where you could get more information about COVID. During election season they might point to authoritative election information there.
A lot of them, as I mentioned earlier, worked through fact checkers around the world. What that means is the platforms aren't making this decision. They are working with the fact checkers and giving them a dashboard; the fact checkers figure out what stories they want to fact check, they write their piece, and if they determine it's false or partially false, a label will be applied that takes people to the fact check. The other thing it will do is reduce the reach of this content, so fewer people see it. But it doesn't fully remove it.
And then, as I've been mentioning too, there's the labeling component of this, so people can see what these fact checkers are saying while they are consuming these different types of content.
And Jim if you can go to the next slide.
A couple notes about labels. So a lot of this work --
There's trial and error and experimentation to it.
Because as platforms have been implementing it -- and I know it's easy to say just put a label on it, that will help people understand it's not fully true or that they would like more context -- unfortunately how people interpret that, as we have been seeing with research, is a lot murkier. So some people, when it says "altered content," ask: does that mean it was made with AI? Edited with AI? Was it edited with Photoshop? There are a lot of ways to edit content and not all are bad. Many people use editing software in legitimate ways. So how do you distinguish between that and stuff that may be nefariously edited?
We find people have many interpretations of what labels mean. And the platform may not have enough information to label something.
If it's labeled, people assume it's false. And if it's unlabeled, they infer it's true, even if that may not be the case; maybe a fact checker just hasn't gotten to it.
So what are we training users to think in ways that are unintended? Platforms are very much trying to experiment with different ways of building up -- and I know Zoe will go into this -- information literacy. It's not just putting a label on it, because how people interpret that is very different across generations, across cultures and across many other factors. If you want to go to the next slide.
The one other thing I wanted to mention: this is a study that Google's Jigsaw division did earlier this year looking broadly at how Gen Z goes online and thinks about information. What these studies find is that there are seven different modes people are in when they go online. They plotted this on an axis: on the far right you have heavy content, news and politics that weighs heavily on your mind, versus on the far left more light-hearted content. Think cats on Roombas, stuff like that.
On the vertical axis, you have at the bottom things that have social consequences and affect others -- people think they need to do something -- and at the top things that only affect them, so it's not necessarily something they feel they have to act on. What they found is that most people are in that upper left quadrant, which is the time pass and lifestyle aspiration needs. This is where they are just scrolling at the end of the day.
They are trying to reach emotional equilibrium. They are trying to zone out a little bit and relax. And when they are in these modes they don't care if stuff is true or not. However, what they found is that as they were absorbing it over time, they did start to believe some of the things they were reading and consuming. They also found that people do still want that heavier news and information, but they want to be intentional about it. They want to know when they are going to get it.
And when they go in to get that information they want to get it quickly. They want a summary and they want to get out of it. So something to think about as we continue this conversation over the coming years is: how can we reach people where they are at? We also have to recognize that feelings play a huge role in trying to combat misinformation. And as a mutual friend of Zoe's and mine said in a recent paper, you can't bring logic to a feelings fight.
And this is something we are very much trying to think through when it comes to combating misinformation. Because logically we think: just label it, just tell them. What we have actually found is that is not how the human psyche works. I can't remember if I have one more slide or if we go over to Zoe.
>> MODERATOR: That is the final slide for you, Katie. And thank you very much for that.
There are a ton of safeguards for users, both active and proactive. And we will get into information literacy with Zoe in a second. Everybody in the room, if you haven't had a chance to get a headset, we also have the captioning behind us. Wonderful. With that, Zoe, I will hand it over to you for Google's approach to information literacy.
>> ZOE DARME: Great. Thank you so much, Sarah, and thanks, everybody. Jim did mention we would start with a quiz and that there are no wrong answers, but there actually are right and wrong answers for this next quiz. There are three simple questions, and I want you to basically keep track for yourself. The first question here -- and folks on the chat are free to put their answers in the chat. Which one of these has not been created with AI? Is it the photo on the left or the photo on the right? A or B? Which has not been created with AI?
>> MODERATOR: Not everybody has microphones in the audience, so maybe we will take a show of hands.
>> ZOE DARME: You can keep it to yourself.
>> JIM PRENDERGAST: And we are getting answers in chat and Zoom. So thank you.
>> ZOE DARME: Great. Now the next one is...
I see Jim struggling with the clicker. Now, which photo is more or less as it is described? Is it the photo on the left from the WWF, where the claim seems to be about deforestation? Or is it the photo on the right, which also seems to be somewhat climate related, with a warship finally being revealed because of low water levels? Which one is more as it is described?
Great. Next one. Which one of these is a real product? Is it the cheeseburger Oreo, or the spicy chicken wing flavour Oreo?
>> MODERATOR: Hopefully neither.
>> ZOE DARME: That was my answer, Sarah: hopefully neither. Jim, we can advance. The house on the left is a real place in Poland. The post about the sunken ship is unaltered and accurate. And the post on the left from the WWF is a cropped photo -- the same photo taken on the same day, not 2009 but 2019. And I hate to say it, but the spicy chicken wing Oreo was a real product; one reviewer said the worst part of the experience was "the grease that still haunts me."
I would love a show of hands, and maybe in the Zoom as well, to see who got all three correct? Anybody? I'm not seeing too many hands. And don't feel bad about yourself, because -- next slide -- we are actually all pretty bad at identifying misinfo when presented in this way. A group of researchers from Australia found that our accuracy is a little bit better than a coin toss. And not only are we not always able to easily identify whether an image is misleading or not.
We are also not able to identify very effectively what it is about that image that is wrong. This group of researchers actually tracked people's eye movements to see if they were focusing on the part of the image that had been altered, and we are just not trained visual photo authenticity experts. So if it's hard for us to do this even in a setting like this one, think about what Katie mentioned: when folks are just in time pass mode, they are not always going to do a great job at this. Next slide, please.
I think also, in this day and age when there's a lot of synthetic or generated content, we are perhaps getting caught up in the wrong question as well. As Katie mentioned, a lot of people just want us to label things -- misinfo or not misinfo, generated or not generated -- but "is this generated" does not always mean the same thing as "is this trustworthy." It really depends on the context. On the left here you see a photo that is a "real photo" of trash in Hyde Park, and the claim was that this trash was left by climate protesters -- this is classic misinfo, just an image taken out of context. This was actually a marijuana celebration for 4/20 day, and that makes a lot of sense; there would be a lot of trash left over.
And this photo on the right is a photo I created. So it not only depends on how something is created, but on how it's being used -- with the caption and label and everything like that.
Next slide, please. So we will still need your plain old vanilla information literacy tools. These will still need to evolve given that there is more generated and synthetic content out there. Certainly our tools need to evolve, but there's not going to be a technical silver bullet for generated content, just like there's not a silver bullet for misinformation overall.
So the way we are thinking about these things at Google is: inferred context over here -- these are your classic information literacy techniques, training users to think about when the image or claim first appeared, where it came from, who is behind it, what claim they are making, and what other sources say about that same claim. And the tools on the right, assertive provenance tools, which are either user visible or not: things like watermarking, fingerprinting, markup, metadata and labels. Next slide. Thank you.
We have set this out in a new white paper. You can scan to read it here, or if you give your email to Sarah, I can connect with you and we will make sure we send you a copy. This white paper sets out how we are thinking about both inferred context and assertive provenance, and how they both play a role in meeting the current moment around generated content, trustworthiness and misinformation. Next slide, please.
Now, what Katie talked about are a bunch of tools that apply across many different platforms.
I will focus on some of the tools and features that we have brought directly into Google Search. So first we have this tool: next to any blue link or web result on Google Search there is a set of three dots, and if you click on the three dots you get About This Result, which is designed to encourage easier information literacy practices like lateral reading, or basically doing more research on a given topic. It will tell you what a source says about itself, what other people say about the source, and what other people are saying about the same topic that you searched for.
So let's say there was misinfo about the King having a robot bodyguard. When you click on that, you get not only information about the source but about that topic. Spoiler alert: the King of (?) did not have a robot bodyguard, just in case you were wondering. Next slide, please.
This is just another layer of this tool. It brings all of this information into one page and helps with the COR and SIFT methods. SIFT says check the source, find other coverage and trace the claim. That is really hard for people to do when they are in time pass mode, so we wanted to put a tool directly into Search to make it as easy as possible for folks.
Because one of the criticisms of inferred provenance, or inferred context, is that it puts a lot of responsibility onto the user. So when we are thinking about all of those other modes -- let's say where it might be more important for users, like making a big financial decision, for example -- we want to make sure that users have the tools they need when they really feel motivated to go the extra mile. Next slide, please.
And we also built a similar feature into image results. In the image viewer you can click on a result and you will see three dots. This is like a supercharged reverse image search directly in the image viewer. It will tell you an image's history -- when Google first indexed the image -- because sometimes an old image will be taken out of context and go viral for a completely different reason.
It will also show you an image's metadata. And that brings us to -- next slide -- assertive provenance. For Google tools like Gemini or our image generation models in Cloud, we are providing SynthID watermarking, which embeds a watermark into the image so you can see it was produced by one of our image generation products.
Now, the reason it's difficult for Google to do that for every image out in the universe is the reason Katie mentioned earlier. Take the example of the Russian or Macedonian teens: they are probably not using tools that apply watermarks. If they are running an open model, for example, there's no way to force those other providers to watermark their content. And there's no motivation for the content creator in that example to use a watermark or a label.
And we are never going to have 100% accurate AI detectors that are able to sniff out all of the information on the internet, send it through an AI detector and spit out a watermark or label that is accurate 100% of the time.
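To make that scale point concrete, here is a back-of-the-envelope sketch in Python. The volumes and accuracy figures are purely illustrative assumptions, not Google's numbers; the point is only that even a highly accurate detector mislabels enormous amounts of content at web scale.

```python
# Back-of-the-envelope: why "almost perfect" AI detectors still mislabel
# huge amounts of content at web scale. All numbers are illustrative.

images_scanned = 1_000_000_000   # hypothetical images checked per day
share_ai_generated = 0.10        # hypothetical fraction that is AI-generated
true_positive_rate = 0.99        # detector catches 99% of AI images
false_positive_rate = 0.01       # and wrongly flags 1% of real photos

ai_images = images_scanned * share_ai_generated
real_images = images_scanned - ai_images

missed_ai = ai_images * (1 - true_positive_rate)     # AI images that slip through
wrongly_flagged = real_images * false_positive_rate  # real photos labeled "AI"

print(f"AI images missed per day:       {missed_ai:,.0f}")        # 1,000,000
print(f"Real photos mislabeled per day: {wrongly_flagged:,.0f}")  # 9,000,000
# Even at 99% accuracy, millions of real photos would get a wrong label
# every day, which is why detection alone cannot carry a labeling strategy.
```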
So really we need a holistic approach that involves inferred provenance and a whole-of-society solution. Next slide, please. The last thing I will say is that there is a lot of talk about the role of recommendations and algorithms, how they are designed, and whether that is what is creating or promoting or giving more reach to this misinformation that is sticky and fast. A study looking at Bing actually shows consistent evidence that user preferences drive engagement with unreliable information in search.
What does this mean? Searchers are coming across misinformation when there is high user intent to find it. That means they are searching explicitly for unreliable sources. So it's not searching "Taylor Swift" that brings up misinfo about Taylor Swift; it's searching for Taylor Swift plus the site that you like to go to -- which may not be a reliable news source -- that's when folks are likely to get unreliable information. These are what we call navigational queries; that's what is driving engagement, and it really has to do with what users are actively seeking out.
And that's a bit of an uncomfortable conversation because it goes to a question of, like, how do you get users on a more nutritional or healthy media diet rather than how do we just label something or how do we just fact check something? And that's a much harder problem to solve. So I will stop there and turn back to Sarah.
>> MODERATOR: Thank you very much, Zoe. It's great to see the Google tools helping people make the best decisions about their content consumption. So thank you. With that I will turn it over to the Q&A portion of the session. Maybe Jim first.
>> JIM PRENDERGAST: I have one online.
>> MODERATOR: Fantastic. Wonderful.
>> ZOE DARME: I see the question: does Google watermark AI generated images, like those created by Imagen, with metadata, and if so, can it be removed by stripping the metadata or is it embedded in the image itself? Yes -- and this is why the white paper may be a good one to go to. But essentially no, it doesn't rely on metadata. It does produce metadata that shows the image has been created with a Google generative AI product.
It is tamper resistant -- nothing is 100% tamperproof, but SynthID is tamper resistant, so it is hard to remove from the image, and cropping the image is not going to remove the watermark. This is a little bit different from other types of provenance solutions in the past. Some other types of metadata are easier to edit using very common image editing software.
So IPTC metadata you can edit, and it was not designed to be a provenance tool the way we are thinking about it now. There are conversations with both C2PA and IPTC about how durable that metadata should be. Where we have that metadata from C2PA or IPTC, we are including it in the image viewer the way I showed you in this image. Thank you for the question.
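As a rough illustration of why that distinction matters, the sketch below (assuming the open-source Pillow library and hypothetical file names) shows how easily EXIF/IPTC-style metadata can be read and then lost by simply re-saving the pixels -- which is exactly why a pixel-level watermark is more durable than a metadata tag.

```python
# Minimal sketch (assumes Pillow: pip install Pillow) of why metadata-based
# provenance is fragile: the metadata lives alongside the pixels and
# disappears as soon as the image is re-encoded without it.
from PIL import Image

original = Image.open("photo.jpg")   # hypothetical input file
exif = original.getexif()
print(dict(exif))                    # camera, software, dates, etc.

# "Stripping" the metadata is as easy as re-saving only the pixel data:
clean = Image.new(original.mode, original.size)
clean.putdata(list(original.getdata()))
clean.save("photo_no_metadata.jpg")  # metadata is gone, pixels unchanged

# A pixel-level watermark like SynthID survives this operation because the
# signal is embedded in the image content itself, not in a side channel.
```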
>> MODERATOR: Thank you so much. And Katie, back over to you. Misinformation and disinformation are very big problems with big impacts on society, so there's a lot of pressure on platforms to do something about these issues, but also concern about platforms overreaching and so on. Can you talk about the tools platforms have to address them?
>> KATIE HARBATH: Yes, there are a variety of approaches for platforms to try. One thing in particular is the question about people's right to say something versus the right to be amplified. So oftentimes you are seeing platforms not taking content down, but adding labeling and trying to reduce the reach -- and that has brought criticism as well, around the principles of that.
I think prebunking is really important as well, in trying to give people other information and context that they can see and be able to understand when they are consuming this content. You are starting to see new approaches from places like Bluesky, where not only can they label content, but anybody can label content, and the users decide what sort of content they do or do not want to see in their feed.
So it's very much putting the power back in the hands of the users rather than the platforms themselves making the decisions. I think a lot of that will change and evolve. AI plays a big role in how we summarize the information that we get, and people are also thinking about that and what types of information are pulled into it.
But this is sort of an ever evolving thing, as the pressure on them continues -- as you mentioned, some people are saying they should do more, but others are saying they are taking down too much. At the moment you are seeing more platforms taking less of a strong approach to the leave-it-up-or-take-it-down question and instead trying to find some of these other ways. Another one I should mention is X, the former Twitter.
They have community notes, so they have a larger number of people who can add a community note to give more context, to say if something is true or partially true, and they have mechanisms to make sure that cannot be gamed. But I think we will see a lot of experimentation on this as they try to balance freedom of expression with the safety of the people who are using these platforms.
>> MODERATOR: Fantastic. Thank you for that, Katie. That was really helpful.
Maybe shifting gears -- I see a few items on the chat. Maybe Zoe: can you talk about how the proliferation of generative AI content changes information literacy? And we will go and take the question in the chat after.
>> ZOE DARME: I think it's evolutionary, not revolutionary. It's more a change in terms of the volume of content people are seeing; edited content is an age-old problem, and one of the very first examples was a ghost daguerreotype.
So as long as images have been created, there's been an issue of whether they have been edited or altered, and that's why I'm a strong believer that our information literacy muscle needs to grow as a society. Because whether something is generated or not doesn't necessarily change the question: is this trustworthy or not?
And that's the key question we have to remind people of. Is this trustworthy? Whether it's generated or not is one element, and it really depends on the context. So what needs to change is: we need to ask, yes, is this generated or not, but still ask all of those other questions that we have always been asking ourselves when we encounter content that could potentially be suspect. I hope that answers your question, Sarah.
>> MODERATOR: Great. Thank you. I know, Zoe, you answered directly in the chat, but: does Google watermark images created with Imagen using metadata? If so, can it be removed by stripping the metadata, or is it embedded directly onto the image itself?
>> ZOE DARME: It provides metadata, but that metadata cannot be stripped easily. I say easily because nothing is 100% tamperproof, but it's very tamper resistant. It's not editable the way other metadata is. And I think that's a critical piece of what we as Google are doing. Again, neither Google nor OpenAI nor Meta -- none of us control all of the generation tools that are out there.
And it's very difficult to reach folks who are running derivatives of open source models, maybe smaller models, for example, or models that are being run by other companies. There's not a way for us to force other companies to watermark. And so this is where it becomes really difficult.
Because we will never have 100% coverage on the open web, even if the biggest players are all in C2PA and all watermarking or labeling where appropriate. There's always going to be some portion of content generated on the open web that does not include a watermark, for example.
>> MODERATOR: Fantastic. Thank you, Zoe, for those questions and answers. Maybe a question or two in the room?
>> QUESTION: Thank you very much.
This is Lena from Search for Common Ground and Tech and Social Cohesion. The tech is powerful, but you said it again, Zoe: you can't force others to do certain things, which in some ways pokes holes in your valiant effort.
So there is growing evidence about really harmful tech-facilitated gender-based violence. And I'm just curious: are we seeing attention on this growing? Because we do hear that there are specific things you put in place for health and elections, and a lot of that is because of the excellent work of the two of you, right. So what would it take for us to also begin to think differently about the levers around tech-facilitated GBV? Do we need to rally other companies so that there is standardization of watermarking of that kind of harmful content?
Where do you think the conversations are at right now? Thanks.
>> ZOE DARME: That's a fantastic question. On image-based sexual abuse, I will just say, in terms of "deepfake pornography" -- that's not what we call it; we call it involuntary synthetic pornographic imagery, or ISPI -- it is a problem that Google didn't necessarily create, right. We are not allowing our models to be used for deepfake pornography or ISPI. However, it is an issue we are deeply grappling with, especially where I work on Google Search, because a lot of that material is out there on the open web. So what we have done -- and I can only speak for ourselves -- is take an approach that relies on both technical and multi-stakeholder solutions. One of the things we have done is implement new ranking protections so we can better recognize that type of content, and not always by recognizing whether it is AI generated or not.
There are other signals we can use as well. For example, if the page itself is advertising it as deepfake celebrity pornography, we can detect that and apply ranking demotions so that it's not ranking highly or well. We also have long had a content policy to remove that type of imagery.
The other thing we are doing is providing more automated tools for victim-survivors. So when you report -- even just, you know, regular non-synthetic explicit imagery -- we do, in the background, if that image is violative, use hashing technology.
Now, hashing can be evaded with some alterations to the image itself. So we also give an option for reporting users to check a box to say that they want explicit images, to the best of our ability, removed for queries about them. So if the query that I'm reporting, for example, is "Zoe leaked images," I can check a box that says I also want explicit images removed for queries with my name -- "Zoe" et cetera. So that's another way we are addressing the problem through automation that doesn't necessarily rely on finding all of the generated imagery, but tackles the problem through another dimension. So those are a couple of ways.
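For context, hash-based matching of the kind described here can be sketched with the open-source ImageHash library. This is an illustrative stand-in, not Google's internal system; the file names and threshold are assumptions.

```python
# Hedged sketch of perceptual-hash matching for re-uploaded imagery
# (pip install ImageHash Pillow). Perceptual hashes tolerate small edits,
# but heavier alterations shift the hash enough to evade matching,
# which is the evasion problem noted above.
from PIL import Image
import imagehash

known_bad = imagehash.phash(Image.open("reported_image.jpg"))  # hypothetical
candidate = imagehash.phash(Image.open("new_upload.jpg"))      # hypothetical

distance = known_bad - candidate   # Hamming distance between 64-bit hashes
if distance <= 8:                  # illustrative threshold
    print("Likely a re-upload of the reported image")
else:
    print("No match; heavy edits can push the distance past any threshold")
```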
I will put in the chat our recent announcements on involuntary synthetic pornographic imagery and all of the ranking protections we are applying to demote such content and also raise authoritative content on queries like deepfake pornography, where we are really trying to return authoritative, trusted sources about that particular topic and issue rather than problematic sites promoting such content.
>> MODERATOR: Fantastic. Thank you. I think we have another question in the room before bouncing back to online.
>> Hello. Hi, do you hear me?
>> MODERATOR: Speak up.
>> QUESTION: Hello, Katie and Zoe. Thank you for the presentations and wonderful answers as well. I'm from the Brazilian association of internet providers. I have more of a doubt than a question.
Do we have any studies showing the effectiveness of information literacy in actually identifying and combating misinformation? Does it have an actual impact, and how much can we measure it already? Zoe, Katie, do you want to go?
>> KATIE HARBATH: I was going to say, isn't there a Jigsaw one on this, Zoe? I think there is preliminary research on prebunking. Where I started to see it realized was particularly when Russia invaded Ukraine, and there was prebunking ahead of that actually happening. But I will toss it to Zoe; I don't want to steal Google's thunder for some of the great research they have done on this too.
>> ZOE DARME: No, it wasn't my research either. So a big shoutout to my colleague Beth Goldberg; she did a lot of prebunking work. We can dig that up and throw it in the chat. Katie covered prebunking, so I will cover information literacy. Firstly, to answer your question: yes, these are evidence-based practices. There was a lot of evidence behind SIFT and COR, so we surfaced them in the product itself. SIFT was developed by Mike Caulfield, a researcher most recently with the University of Washington; I don't know his affiliation now.
And COR by another misinformation and information literacy researcher. When we have done user research about this tool, for example, we have actually seen folk theories decrease. So I will say that in the user research we have done -- and I will caveat this by saying small sample size; more research needs to be done -- the internal indication is that consistent use of these tools, for example, reduced folk theories about how somebody was being shown a certain result.
So for us that was a really positive indicator that they had a better understanding, not only of the results that they were seeing but of how those results were chosen for them. And what I will say is a lot of people will have a lot of theories about why they are being shown certain results.
Are we snooping in your Gmail and giving you results based on what's in your email -- all of those types of theories. And when you really just have to say it really just has to do with the keywords you are putting in the search box, people understand that that's why they are seeing those types of results. A lot of folks think that the results are so relevant to them that we must know something about them, when oftentimes we are just using what the user puts in the box.
People are unfortunately not as unique as they think they are. So we know a lot about what people want when they are searching for -- gosh, Katie and I were talking about beach umbrellas yesterday. So for people searching for beach umbrellas, we know a lot about them and are serving great results based on that. And people think: this must have to do with something about me.
The other new feature you can check for yourself: we are rolling out a new feature at the footer of the search results page. It says these results are personalized, or these results are not personalized, and if they are personalized you can try without personalization. I would encourage everybody to check that out.
Because a lot of the search results pages you will find are not personalized; a great amount out there are not. And the ones that are right now are on things like what to watch on Netflix, for example. So you can see it right at the bottom of your page, and you can even click out of personalization to see how those results would change, and check that you are not in an echo chamber or filter bubble.
>> MODERATOR: Thank you for the great answers and questions from both of our panelists. I'm being told we have 5 more minutes left, so one quick one from the room and a follow-up. I think, Zoe: what if a screenshot was taken? Would there be any way of tracking it in the context of watermarks? Could they be removed easily?
>> ZOE DARME: Yeah, that's a great question, and I hate to do this, but Google is 100,000 people and it was a different product area that created SynthID. Taking a screenshot was one way to strip metadata, so that was the classic example of evasion for some of the other metadata techniques that we talked about.
There are other ways to evade. We talked about evasion of hashing, for example, which is done by modifying the watermark or image in some way. There are always ways to get around technical tools for really motivated actors who want to do that.
So we have made it as difficult as possible to strip that metadata. But that is why we are saying, in a presentation like this, that we can't always rely on 100% technical solutions; we have to think about these other ecosystem solutions as well. And that's why I come to these presentations and always talk about them for context. However, I will say that we have made it tamper resistant -- you can't just go and remove it, for example, and things like that.
But I will get you an answer. It's a good question about screenshotting in particular.
>> MODERATOR: Fantastic. Thank you so much, Zoe. Any other questions from the room? If not I will wrap up with one more question. Fantastic. Katie, I would love to start with you. Given the platform of the IGF, and we are here for the event: what are the challenges for a multistakeholder approach to information literacy, and how can those challenges be overcome, especially at a platform like the IGF?
>> KATIE HARBATH: I think this work is absolutely multistakeholder and needs to be done from multiple different approaches; it's not enough just to ask the platforms to do this. And I think Taiwan is frankly a great example of a multistakeholder approach to disinformation in their country.
I think one of the biggest challenges that I have seen, and that I continue to want to work on, is trying to help those who have not been inside a company understand the scale and operational challenges of some of the solutions, and better thinking and brainstorming about how we might do that. And on the platform side, helping them to understand the approaches that civil society and others are taking when they are trying to combat all of this in their countries and regions. So I think continued cross-collaboration is really important. And the other thing is that this does need to continue to be experimental.
Because if there were a silver bullet to this, we would have all figured it out a long time ago. This is a really hard and tricky problem, and I think having dialogue and open conversations will continue to be important. Particularly, again, as we go into this new era of AI, which is very much going to change how we generate and consume information, now is really the time to be thinking about how we shape what those models and things are going to look like for at least the next 5 to 10 years.
>> MODERATOR: Fantastic. Thank you so much Katie and Zoe. Same question for you.
>> ZOE DARME: Before I answer the question, I actually just want to go back to the watermarking question, because I found the answer through the best product ever, Google Search. Just a quick search. So SynthID uses two neural networks.
One takes the image and embeds a pattern invisible to the human eye; that's the watermark. And the second can spot that pattern to detect whether the watermark is there or not. So SynthID means that the watermark can still be detected even if the image is screenshotted or edited in a way that rotates it. So that's what I was looking up. You can do your final wrap-up question.
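The underlying embed-then-detect idea can be sketched in a few lines of Python. This is a deliberately simplified toy, not SynthID: it swaps the two neural networks for a fixed pseudo-random pattern, so unlike SynthID it would not survive rotation or screenshots, but it shows the structure described above.

```python
# Toy spread-spectrum watermark (NOT Google's SynthID, whose details are
# not public): embed a faint secret pattern, then detect it by correlation.
import numpy as np

def _pattern(shape):
    # Secret key: only the provider can regenerate this exact pattern.
    return np.random.default_rng(seed=42).standard_normal(shape)

def embed(image, strength=2.0):
    """Add a faint pseudo-random pattern to the pixels (the 'watermark')."""
    return np.clip(image + strength * _pattern(image.shape), 0, 255)

def detect(image, threshold=1.0):
    """Marked images correlate with the secret pattern (score near
    `strength`); unmarked images score near zero."""
    centered = image - image.mean()
    score = float(np.mean(centered * _pattern(image.shape)))
    return score > threshold

photo = np.random.default_rng(seed=7).uniform(0, 255, (256, 256))  # stand-in
print(detect(embed(photo)))  # True: the embedded pattern is found
print(detect(photo))         # False: unmarked pixels correlate near zero
```

The reason a production system trains networks instead of using a fixed pattern is robustness: a learned embedding can be recovered after the crops, rotations and re-encodings that would destroy a naive pixel-space pattern like this one.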
>> MODERATOR: I think we are a little biased towards Google Search being the best. We drink the Kool-Aid for sure. Zoe, same question for you: what are the challenges of a multistakeholder approach to information literacy, and how can they be overcome?
>> ZOE DARME: I think multistakeholder approaches, even though they are called multistakeholder, are focused on what governments can do and what they can't do. And one of the things from this really fascinating talk -- and thank you very much, Sarah, for a great job moderating it -- is that the third leg is what users contribute and what users do: what users seek out, what they are consuming and how they are consuming it.
So I think that's the biggest challenge. One of the studies I mentioned earlier really focused on how much users' expressed preferences play into whether they are finding this information. Are users actively seeking out unreliable sources? That's a hard problem to solve, and there's a reason that multistakeholder approaches really want to focus on the governments or technology companies at the table.
They are the ones doing the talking, but we really are missing a huge piece of the puzzle if we are not talking about users' expressed preferences: what they want to find, what they are seeking out, how they are consuming it, and how we can get them to be stronger, more reliable consumers and creators in the information ecosystem. And that's a tough -- that's a tall order.
>> MODERATOR: Thank you for that, Zoe. I think we are at time, so maybe just to wrap up: a big, huge thank you to our panelists, and to Jim, our clicker, for stepping in with the internet connection. From the conversation today it is very apparent that we need a holistic approach to the shared responsibility of information literacy, protecting our users, and education with regard to our stakeholders and users, especially youth.
I think, as somebody who works with one of the largest youth populations in the world, it's sometimes something that is overlooked, but bringing them into the conversation is really important. Let me hand it back over to Jim.
>> JIM PRENDERGAST: Thanks, everybody, and thanks for the great questions both in person and online. We really do look forward to the interaction with everybody instead of talking amongst ourselves. Appreciate it. Everybody have a good evening, and see everybody tomorrow.
>> MODERATOR: Can our speakers stay online for a quick photo?
>> JIM PRENDERGAST: Thanks, everyone.