CORRECTION IGF 2024 - Day 2 - Workshop Room 7 - OF 76 Towards the WSIS+20 Review -- RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>>  Okay. Good? Yes? No. Wonderful. Hello. Thank you for welcoming us. My name is Judith Espinoza. This session is hosted in coordination with the Global Coalition for Digital Safety at the World Economic Forum. It is my pleasure to welcome you all, onsite and online. We have an incredible cast with us here today. To my left I have Dr. Oberoi. To his left I have the co-founder and director of a center for cybersecurity, and to her left I have Stephanie, who is joining us from Meta in Brazil for data integrity. I will go ahead and turn it over to my colleague, who is joining us remotely from Argentina today and will introduce the panelists who are joining us virtually.

>>  AGUSTINA CALLEGARI: I hope you can hear me well. Hi, everybody. It is a pleasure to be here online. My name is Agustina Callegari, and I am part of the Global Coalition for Digital Safety team at the World Economic Forum. Online we have David Sullivan. David, I saw you there. He is the Executive Director of the Digital Trust & Safety Partnership. We also have Saeed Aldhaheri joining us online. Once we get to you, you will have a chance to introduce yourselves. Thank you, Judith, back to you onsite.

>>  Thank you very much. Today's topic is disinformation, and it is certainly an evolving one. Maybe we can start with a question for Dr. Oberoi. You see a lot of these issues at Interpol. I want to ask about the real dangers of disinformation, and how you tackle it when it crosses physical boundaries: the internet cannot be limited by physical boundaries and traditional geopolitical lines the way an in-person harm can. How do you tackle this? Who is being affected, and who are the perpetrators of disinformation?

>>  Thank you, Judith. Disinformation is false and misleading, often synthetic, information created for the purpose of deceiving, of harming, and of wrongfully influencing opinion. The very intent is wrong.

                Now, as you rightly mentioned, new and emerging technologies are changing how realistic the synthetic media is that bad actors are able to create, and the speed at which they can disseminate it. The reach is multiplying manyfold. One of the biggest casualties, one of the biggest dangers, I would say, is trust. I think that is the factor that impacts everyone. David will be talking more about it when his turn comes.

                For us, trust in institutions and trust between individuals, citizens, and the public all get impacted. For a multilateral organization whose core basis is international collaboration, trust is impacted in a big way.

                The other impact, I would say, is on security, including the security of democratic processes. Synthetic media can be used to delegitimize elections. Equally important is the harm through hate crimes, which is an outcome of a more polarized society and more polarized individuals. And who are the people who do it? I would say anybody who wants to gain, wants to manipulate, or is looking for power or influence can be behind this. In terms of categories we have seen state actors, non-state actors, and terror organizations. Manipulation of markets, financial crimes, and hate crimes are also important drivers.

                I would also say that, to an extent, these are now industries, and that should also be taken care of. As for what needs to be done, you rightly mentioned that the multi-jurisdictional frame of reference needs to be taken care of. This is true of all digital crimes, since they are multi-jurisdictional. The problem is that criminals are not restricted by jurisdiction, but law enforcement is bound by it. That brings in bureaucratic procedures that have to be followed. There are also differences between legal frameworks, which compound the problem.

                What we need, firstly, is some sort of harmonization of legal frameworks. Second is interaction: the various relevant bodies interacting with the different stakeholders, and a harmonized body to coordinate that. One reason we are not able to do much about these disinformation campaigns is attribution: the problem of establishing who initiated a campaign so that action can be taken afterwards. There are also issues around platforms; Monica will respond on the liabilities of platforms and what platforms need to do. Those are important factors. There are also different ways technology can help us, whether it is AI or blockchain; different technologies are being talked about, and those can and should be used for these purposes.

                I will also say literacy. Educating users about what they need to be careful of would also be an important factor in handling this kind of threat. I will pause there and see if there are any questions I can answer.

>>  Thank you so much. I think that was a good overview. I wanted to touch on something you said: there has to be an intent to harm. That is the distinction between disinformation and misinformation. Citizens can be misinformed: I can read something online and share a story that is not factual. Who is disinforming people, and who is misinforming people? The two go hand in hand.

                You also talked about trust and how it affects institutions. That is a good point on which to bring in Dr. Hodi. You do a lot of work across multiple platforms. What do you think are the implications for trade systems?

>>  I think the implications are quite high. When we look, as you just said, Judith, at intent versus the lack of intent in the ecosystem, we see harm propagated at different scales across financial markets, affecting supply chains and thus trade in general.

                When we're talking about platforms that lack the ability to check in real time the reliability of information and the accuracy of the facts being generated, and we allow those platforms to mass-distribute false narratives, it takes a toll on different valuation structures in the market. We know that in the current age a lot of market value sits in intangible assets. Intangible assets can be the brand of a business, and a market position at a certain point in time can be such that a single tweet devalues the business in the market. There was a fake tweet about the White House that led to a drop in the stock market; it led to a $30 billion loss within minutes. When you interpret that within the greater framework, you can see that stock markets are regularly losing on that order, based on a study issued by the University of Baltimore and Oxford. Go further into analyzing the impact on brands that exist online, and at the end of the day disinformation disrupts the trust models of users and the different stakeholders that are (audio is garbled). You might sway away from using a specific e-commerce platform, disrupting global trade in general. On the supply chain side, we have heard of cases where there was supposedly going to be a lack of medicine or a lack of food, and people tended to go into panic shopping, panic-stockpiling supplies, which led to shortages at certain stages.

                I think we will see this happening more often across the platforms, and it will lead to disruption.

                The question here is: how can we use this understanding to safeguard stock markets? We can't change the valuation model as fast as we want, and we do have digital assets such as cryptocurrency that are highly exposed to this scale of misinformation. To get there, we need a pathway to make sure that trust indicators communicate, in a very transparent way, how much to trust a piece of information that comes onto a platform. To do this we need to incorporate many layers of algorithmic tooling and machine learning (audio is garbled) that are not on the map at the moment, maybe for lack of reduced complexity across different platforms. We are still trying to improve it, and that is going to add to building a better stack of trust models. I agree that we definitely have a global gap when it comes to governance and harmonization across the platforms and digital entities that are creating, generating, and communicating this information across the map.

                In today's day and age we need to find a way to make this part of the design structure, the design ethos, of any platform, social or otherwise, any type of communication platform across the map: to provide certain authenticity guarantees that are communicated to the public.

>>  Millions of dollars lost in the economy because of misinformation. This is a good transition to Monica; I used your middle name when I introduced you. Platforms are one way to spread misinformation, but it is not just that: there are so many sources of disinformation. This must be a challenge for platforms to tackle, or even to begin to uncover where these sources are. I want to ask you: what are some of the mitigation tools with which platforms are combatting misinformation, and how do you approach this? You have millions of users worldwide. How do you detect these threats, and in what ways do you tackle these challenges, which put society at real risk?

>>  Thank you, Judith. I'm Monica, head of public policy for integrity at Meta in Brazil. One of the areas I focus my attention on at Meta is misinformation and disinformation. I love that you drew the distinction between these two terms. It is very important to understand that wherever there is disinformation there is intent: you have bad actors behind the scenes who wish to harm people somewhere, or wish to gain some type of profit. Then there is misinformation. That is where people have access to the disinformation created by those first actors, but they don't intend to harm. They believe what they're seeing, and they spread it.

                As a big tech company, Meta of course has to tackle both of these sources of the problem.

                On the disinformation side, we have invested billions of dollars over the past 10 years to work against these actors. One way that bad actors spread disinformation is through fake accounts.

                We have an exceptional team of experts, and they work hand in hand with the very best technology, including artificial intelligence, to identify fake accounts and take them down before they can act, sometimes after existing for only a few seconds on our platforms. We have also worked over the past eight years to tackle coordinated inauthentic behavior on our platforms, disrupting coordinated attempts to create disinformation and spread misinformation. Bad actors are very good at this.

                We are constantly having to evolve and constantly having to invest in order to keep doing that. We publish transparency reports regularly, and they now include the numbers of fake accounts we take down. It is amazing to see that we're taking millions of fake accounts down every single day.

                That is the work no one sees. It is not causing harm, because we're tackling it before the harm can even happen. That is behind-the-scenes work that I'm happy to be able to talk about, because no one really gets to know about it. Of course, some things always escape.

                We also have users who are misinformed and who spread misinformation because they believe in it. One characteristic of misinformation is that it usually works with catchy openings, catchy phrases. I truly believe, on a personal level, that misinformation speaks to people's hearts and emotions. People see a headline they believe and are eager to spread it to their friends and family. Tackling the misinformation part of the problem, the users who somehow receive this disinformation and then spread it, is another important part of our work. We have several actions on that front, and I think one of the most important, which we run around the globe, is our work with fact-checking agencies.

                We work with over 100 fact-checking agencies across the globe, covering pretty much every language of the jurisdictions in which we offer our services. In Brazil alone we work with six different fact-checking agencies. The way it works is through our artificial intelligence, because of course we're talking about a huge volume; it would be humanly impossible to tackle misinformation online if we didn't have the technology on our side. Our algorithms are constantly scanning our platforms for signs that something could potentially be misinformation.

                For instance, if I post something on my Instagram or my Facebook and a lot of people comment underneath that post, "Oh, my God, I don't believe that" or "I think it is fake," that is a sign to our algorithms that something might be wrong. If you, Judith, go to my post and report it as fake news, that is another, very strong signal for our algorithms. They're looking at millions of signals. Whenever they reach a certain degree of certainty that material could potentially be misinformation, they send it to a queue that is accessed by our third-party fact-checking partners. The fact-checkers then choose the materials: links, news items, memes, even photographs. They check whether the information is false or accurate, label it, and send that signal back to Meta. If something is rated false by our partners, its distribution is significantly reduced in the feeds of all our social media applications, and I will see a filter in front of that information rather than the content itself, the same as a graphic-content filter. It says this piece of content has been rated as false; are you sure you want to access it? In technology jargon we call this adding friction: the user has to take additional steps to access the information. Most people stop when they see that filter, because most people don't actually realize that misinformation is misinformation.

                Even then, the user can choose to access the information; sometimes the user wants to know. I think it is fair to say that people need to know what is circulating in their networks, even when it is false. Sometimes they wish to warn others. I might, for example, have seen other people talking about that specific piece of content, and I want to warn them that it is misinformation. I can still access it. Then, if I want to share it, a pop-up will appear saying this piece of content has been rated as false; are you really sure you want to share it? We're not deleting the content, because we feel it is important for people to know that it is false, and we're giving users the knowledge, and the tools, to let other people know the information is not accurate.
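
                To make the pipeline just described concrete, here is a minimal sketch, in Python, of how weighted signals, a fact-check queue, reduced distribution, and friction interstitials could fit together. Every name, weight, and threshold below is a hypothetical illustration of the process described above, not Meta's actual system.

from dataclasses import dataclass, field

# Hypothetical signal weights: a user report is a stronger signal than a
# skeptical comment. All names, weights, and thresholds are illustrative.
SIGNAL_WEIGHTS = {"user_report": 0.30, "skeptical_comment": 0.05}
REVIEW_THRESHOLD = 0.80  # score above which content goes to fact-checkers

@dataclass
class Post:
    post_id: str
    signals: list = field(default_factory=list)  # e.g. ["user_report", ...]
    rating: str = ""                             # "" until fact-checked

    def suspicion_score(self) -> float:
        # Aggregate weighted signals, capped at 1.0.
        return min(1.0, sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in self.signals))

fact_check_queue: list[Post] = []

def scan(post: Post) -> None:
    """Automated pass: queue suspicious posts for human fact-checkers."""
    if post.suspicion_score() >= REVIEW_THRESHOLD and post not in fact_check_queue:
        fact_check_queue.append(post)

def apply_rating(post: Post, rating: str) -> None:
    """A fact-checker's verdict ('false' or 'accurate') flows back to the post."""
    post.rating = rating

def render(post: Post) -> str:
    # "Adding friction": rated-false content is demoted and sits behind an
    # interstitial instead of being deleted outright.
    if post.rating == "false":
        return ("[distribution reduced] This content has been rated false by "
                "independent fact-checkers. Click to view anyway.")
    return f"<post {post.post_id}>"

def try_share(post: Post, confirmed: bool = False) -> bool:
    # A second friction step: sharing rated-false content asks for confirmation.
    if post.rating == "false" and not confirmed:
        print("This content has been rated false. Are you sure you want to share?")
        return False
    return True

# Example: reports plus skeptical comments push a post over the threshold.
p = Post("p1", signals=["user_report"] * 3 + ["skeptical_comment"] * 2)
scan(p)                   # score 1.0 >= 0.80, so p enters the queue
apply_rating(p, "false")  # a human fact-checker rates it false
print(render(p))          # interstitial shown instead of the raw post
try_share(p)              # first share attempt is intercepted

                The design choice mirrored in this sketch is the one Monica describes: a false rating changes presentation and distribution rather than deleting the content, preserving users' ability to see it and warn others about it.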

                I know we have limited time, Judith, so I will not go further. This is one of the important actions we take toward combatting disinformation and misinformation.

>>  Thank you so much. Misinformation is targeted in an emotional way; it preys on people's vulnerability. And it is not humanly possible to look at every case, so we want to make sure our technologies work for us and are harnessed for good, not just weaponized to harm people. This is a good place to pass over to my colleague, Agustina Callegari.

>>  AGUSTINA CALLEGARI: I want to ask an important question of David. How does disinformation affect trust in institutions, and how can civil society help combat this? David, over to you.

>>  DAVID SULLIVAN: Thank you, Agustina. I'm David Sullivan, Executive Director of the Digital Trust & Safety Partnership. Trust and safety is the function within many platform and technology companies that works to ensure users have a safe experience and feel they can trust the service. I was thinking about this question of trust, and the definition of trust, in advance of this session. I found an article from more than 20 years ago by a Purdue University professor of media and communication, "In Community We Trust," about online security at eBay, one of the first companies to have a trust and safety team. It argued that for trust to exist there also has to be risk: what trust does is enable action in the face of risk.

                I think this is an important way of thinking about trust, not only for the role of technology companies but for other stakeholders as well.

                I also came across interesting work in the field of standardization at ISO, where there is a working group on trustworthiness, Working Group 13. Its definition of trustworthiness is the ability to meet stakeholders' expectations in a verifiable way. My fellow panelists talked about the definition of disinformation. We have a glossary of trust and safety terms, and our own definition of disinformation: false information that is spread intentionally and maliciously to encourage distrust and undermine political and social institutions.

                I think the ultimate goal of a spreader of disinformation is to effectively DDoS the institution, whether governments, companies, or academic institutions, by undoing the trust that individuals and societies have in those institutions. This can be a particularly challenging area.

                When we talk about misleading information more broadly, it can be hard to discern the intent behind misleading content, even with the tools of the trade that Monica mentioned companies have these days. To my mind, when we're talking about disinformation we should always be asking: disinformation to what end? What is the objective? What is the harm to which this intentional spread of disinformation contributes? Foreign election interference is very different from the kind of disinformation that drives scams and frauds, and the two require different responses. Trust and safety teams inside companies, doing the work Monica mentioned, are largely responding to things we all agree are harmful and awful, whether that is child exploitation, sextortion, or the scams and frauds we're seeing at the moment.

                The other case is the challenging areas of where to draw the line: what constitutes disinformation and what constitutes hateful speech. In our partnership we want to increase transparency about how companies engage in trust and safety operations, in a way that does not tell companies what type of content or conduct they should allow on their products or services. Instead we are organized around best practices that companies can use to address all different kinds of challenges when it comes to trust and safety. That certainly holds true for misleading content of the type we've been discussing.

                Turning to the second part of your question, Agustina, on the role of civil society around the challenges of disinformation: civil society and non-governmental organizations contribute by researching and advocating for policy solutions, whether with governments or companies. What is important is that civil society needs operating space to do this work independent of pressure and harassment from governments, companies, and other actors. That, to me, is the single most important point: we need to give civil society the space to do its work as a watchdog holding both governments and industry accountable.

                I do think the World Economic Forum's Global Coalition for Digital Safety plays an important role, bringing together these kinds of stakeholders in a trusted space to increase trust among governments, companies, and civil society, and increasing transparency through its public publications. I will stop there and pause for questions.

>>  AGUSTINA CALLEGARI: Thank you, David. As you highlighted, there are different kinds of misinformation and different roles for different stakeholders. Now I would like to put a question to Saeed, director of the Center for Future Studies at the University of Dubai. I think you are online; I can see you there. My question to you is: how has the UAE approached disinformation, and what lessons can we draw?

>>  SAEED ALDHAHERI: Thank you. I'm really very happy to be with you today remotely, and I can see some of my colleagues with you. Let me introduce myself: my name, as Agustina mentioned, is Saeed Aldhaheri. I'm president of the UAE Robotics society and a member of the civil society group eSafe Internet Society. I will speak a little on the UAE's efforts to fight disinformation. The UAE has adopted a multifaceted approach. Under its happiness and wellbeing program, the government launched a digital wellbeing online platform for children, parents, and UAE society at large to promote positive and safe digital usage, for example by supporting a good code of conduct and good citizenship and behavior in the digital world. The council has also issued a digital wellbeing policy and charter for all citizens and residents of the UAE.

                The charter has four components. It talks about our digital footprint, what we do in the digital world and how we can leave a good digital impact through our interactions. It talks about online harms. Digital ethics is part of it, and of course cyberbullying is also covered.

                The platform has conducted several webinars, online sessions, and community outreach efforts through lectures, school interactions, and workplace programs, all part of educating the population at large, whether people in the workplace or students at schools. This is where our role as the eSafe Internet Society comes in: we've been conducting sessions for children at schools and for parents, and trying to reach people in the workplace, to inform and educate about disinformation.

                Another thing: in 2021 the UAE published a law concerning the fight against rumors and cybercrime. A lot of this relates to disinformation. The law helps prevent false, malicious, or misleading information that contradicts official news or disturbs public peace, public interest, public order, or public health. There is a big penalty under this law: a person convicted can be imprisoned for one year or fined 100,000 UAE dirhams. Part of what we do is have members active on the boards of social media platforms. For example, TikTok has a board here in the UAE covering the region, and one of our members is active on it, discussing how to fight disinformation with social media platforms such as TikTok.

                So we've been taking a multifaceted approach, using the online platform as soft power for the UAE to reach the community at large and educate people on good conduct and good behavior. Of course, there is also regulation. I would love to see more regulation worldwide, because a lack of regulation and a lack of accountability allow actors to spread disinformation.

                And of course there is the part about educating people and building skills. I believe it is becoming very important that people understand media literacy and develop the critical thinking skills they need to discern what they're looking at. As everyone has mentioned, there is a mass of disinformation out there. One way to fight it is for people in society to have the critical judgment to tell whether something is authentic, or whether it is disinformation or misinformation. I will stop here.

>>  AGUSTINA CALLEGARI: Thank you. What you said about online literacy connects with what I'm going to share next about the work we're doing on digital safety. Judith, if we are fine on time, I'm going to give a brief introduction to our work, and after that we will open the floor for questions and comments onsite and online. David already mentioned some of the work of the Global Coalition for Digital Safety at the World Economic Forum, which started two years ago, identifying ways and creating spaces for tackling harmful content online. Put simply, we can divide that content into content that is harmful and illegal, which can include child sexual abuse material, and content that is harmful but legal in many places, which raises the regulatory challenges Saeed mentioned and can include mis- and disinformation. We have stakeholders working together to identify challenges and, most importantly, to promote the solutions we see working out there.

                That is how almost all our publications are framed: showcasing the different efforts our community is making to tackle some of these challenges.

                In terms of disinformation concretely, we have been taking what we call a whole-of-society approach. We've been focusing much of our work on the role media literacy plays in combatting disinformation, understanding, of course, that literacy is not a silver bullet. It is not going to solve the problem by itself; we have seen the complexity of the challenge in this panel, and there are different mitigation strategies taken by different stakeholders. But we wanted to go deeper into how literacy can help tackle the issue. This includes understanding how false information is produced, distributed, and consumed, and what skills are necessary at each stage.

                We have done two things, more than two things I would say. To start, as a group we have produced, very much in line with the glossary David mentioned, a typology. The first challenge we identified is the lack of a common language for what we mean, not only by misinformation; we have heard some definitions today that are helping advance the conversation, and that lack of common understanding makes the conversation challenging. So we built a typology that defines not only misinformation but other harms as well, cyberbullying and many others. We did that through a human rights lens, because we wanted to show how the different human rights frameworks, conventions, and principles should be and are applied, such as the UN Convention on the Rights of the Child and its general comments, the Convention on the Elimination of All Forms of Discrimination Against Women, and the covenant on economic, social, and cultural rights. With this focus on fundamental rights, what we want to acknowledge is that online harm can lead to a denial of expression, and that these rights must be balanced: an individual's freedom of expression against the right to be free from harm and the right to dignity.

                That is the framework we have taken in our work on the typology. And as I said, I think most people attending IGF really believe in the power of multistakeholder collaboration, and the way we work brings together all the stakeholders, tech companies, public officials, and organizations, to exchange practices and coordinate actions. Solutions are what we try to focus on, united under the goal of reducing online harms and mis- and disinformation. I will stop here to leave time for questions and comments. Judith, back to you to see if there are comments onsite; I will monitor the chat for any reactions online. Thank you very much.

>>  JUDITH: Thank you, Agustina. Tell us your name and where you're coming from.

>>  Okay. Thank you, Judith. I can't hear myself on the mic, but that is fine. I'm from Cyber Internet Lab. We work on disinformation and misinformation in Southeast Asia. In our observation, disinformation tends to be more prevalent in moments of crisis, be it a pandemic or a natural disaster. Do you reckon these technological efforts should be differentiated during periods of crisis, or should we find a solution that is resilient at all times? Thank you.

>>  I totally agree with you that there are cyclical variations. This is largely driven by the immense opportunities such times provide; malicious actors get more active then.

                In terms of efforts to counter this, the approach remains the same, but the quantum of effort needs to change accordingly. For example, if a political event is happening, efforts need to be scaled up. A change in the approach itself, though, may not be right.

>>  I would like to add to that; very insightful, I think. Attacks on cybersecurity and stability are opportunistic, and across different jurisdictions the indigenous fabric of the culture can sometimes be a door of opportunity to magnify the impact of misinformation and disinformation. There is a geographical element to that space.

                I think it is very important to build a holistic technological solution, a holistic approach or framework, that delivers the right precision regardless of the locality that absorbs it.

                I will give you an example. In the Middle East, over the past period, thanks to platform efforts there have been huge campaigns on de-escalating the impact of disinformation and misinformation. That was borderline, though: journalists felt their journalism was being censored. They countered it with counter-algorithm tactics, even though they didn't have fancy, deep knowledge of the technology.

                I think we are seeing this kind of counter-movement across the globe, where technology is sometimes countered by the common community. Knowledge of cultural nuances can stop these solutions from being adopted as widely. We have to sit together and co-create the solutions to the right level of precision. Thank you.

>>  Thank you.  

                I totally agree with my colleagues. We need sustained, constant work, not only around disinformation and misinformation but around all the problems mentioned here, especially by David. However, at certain times there is a spike, especially in the intent to harm, and especially as people get more emotional. We got more emotional during COVID. We get more emotional during elections. Again, our emotions do the talking.

                I can give you the example of Brazil. We just came out of a very big municipal election this October. Every other year we hold elections in Brazil, be they presidential or municipal. Election years are years in which we have a larger number of people working with us to tackle the misinformation and disinformation campaigns we see, because we need to act fast. Take misinformation about, for instance, the number people should type in to vote for a specific candidate. We need to act very, very fast, because if we take a little longer the harm is done and can have a political impact in the real world.

                So because we need to act so much faster, we have a larger number of people working with us, and we have very close collaborations with law enforcement agencies and with the electoral authorities in the country, for instance.

                Let me remind you that for big companies like Meta this is a global, ongoing effort, because the world is holding elections pretty much all year round. The good part is that we're learning as we go: with every election cycle we learn where we're not doing so well and how we can improve for the next one.

>>  A hand here.  

>>  Hi. Thank you. My name is Edgar Campling. I'm a trustee at the Internet Watch Foundation, which counters child sexual abuse material. The efforts of platform operators' trust and safety experts are welcome but often insufficient, especially when platforms don't enforce their terms of service effectively and people leverage the platforms' algorithms; reports from the Center for Countering Digital Hate and others highlight this. Should civil society groups be demanding more of governments: mandating standards to protect citizens from harm, especially vulnerable groups, with liability for companies and senior executives in the event that they don't take effective action? Thank you.

>>  In the interest of time, I'm going to ask just one of you to take the question.

>>  Thank you so much for the question.  

                We have done some work with the ITU, within their Child Online Protection type of initiatives for children's safety, together with the platforms as well.

                I totally hear what you're saying. In the research we have conducted, we have seen the impact of the different layers in trying to hedge the risk of misinformation and disinformation for children, and in trying to address the different scales of risk for the different vulnerable groups we have. Our de facto position is to advocate not just for technological solutions or policy work or regulations, but for an efficacy structure. We need to measure efficacy within all these solutions. We need to measure the loss of trust that happens because of misinformation, or because of what David just mentioned, a denial-of-service effect targeted at a specific structure. How can we help the affected actors? There is a compliance structure, but it is quite rigid in its types of reporting and in how it addresses risk, and globally this is not straightforward, especially when the targeted groups are children, women, and other vulnerable groups on the platform.

                What we suggested is to tie incentives in from the board level: incentives for the CEOs and rewards for the stakeholders of those platforms, with appraisals tied to their active participation in countering and hedging those risks for those groups. Of course, it will take a bit of time to advocate for that, but we've seen very impactful results coming out of it. It aligns with behavioral science and the idea of nudging toward high-stakes solutions.

>>  Thank you so much. I just got the three-minute mark to close the session, so I apologize to anyone with additional questions; we can stay at the end and chat more informally. I want to wrap up and address the last points made.

                First of all, I think Agustina is right: we have industry, we have academia, and it takes everyone. The other thing is that we're all agents in this new reality of the internet we live in. Saeed touched on this: we're not passive users or guests in that space. How we interact with and use what is online shapes the reach of this disinformation. And to the last gentleman's point, with emerging technology the line between physical and digital keeps blurring, and those are very, very real harms and risks. This can't be left to one platform or one single civil society organization.

                It takes academia too; we need a full-society approach. This is the work we try to do and that all of you, respectively, try to take on. It is efforts like these that inform citizens, inform users, and make people more resilient, creating a critical lens for engaging with material online, the same critical lens you use when reading a newspaper. I can't recount when misinformation first started, but one early example in the United States was the yellow press newspapers. I'm a native New Yorker. People bought those newspapers thinking they would give them useful information for their business endeavors at the docks when imports arrived. With that I leave you.

                I invite you to be active participants in the worlds you engage with and the communities around you. You are welcome to join these talks and dialogues. Thank you to our esteemed panelists. A round of applause.

[Applause].

>>  And to all of you as well, for choosing to spend time with us online. With that, I close. Thank you.