IGF 2023 - Day 1 - Town Hall #7 AI, Emerging Technologies and Human Rights- RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> MODERATOR: Ladies and gentlemen, good morning. I hope you can hear me well. Thank you for being here.

[ Speaking native language ]

[ Non-English speaker on English Channel ]

( Human Captioner standing by...)

>> MARIA RESSA: Hello, good morning. Oh my gosh, you made it here in time. We found our way here. Let me start; I'm very excited about this. The best part is, I hope, that you will listen to these incredible women leaders. And then we will leave the last 20 minutes for your questions.

So women leaders face increasing challenges around the world. And the online attacks against them, you know, we have now chronicled the data around this: women politicians, researchers, journalists. These online attacks against women, the sexism and misogyny against leaders, are often information operations or information warfare. They often herald the erosion of democracy around the world.

This is certainly true in my country, the Philippines. It is not a coincidence that the attacks came first against me, for example, as a journalist. I was getting 90 hate messages per hour.

It was followed a year later by the same attacks from the top down. In my country, former senator Leila de Lima is about to start her seventh year in prison.

I have to remind you of Iran, where this year's Nobel Peace Prize winner, Narges Mohammadi, remains in prison. There, the women lead the fight, like in many parts of the world, with their cry: woman, life, freedom.

So let me introduce you to the two with us today. The Rt. Honourable Jacinda Ardern became the prime minister of New Zealand at just 37 years old. She became pregnant and gave birth in office. As a woman leader she was also the target of information operations, at times information warfare, which analysts say came from both Russia and China.

She faced the challenges of a live-streamed domestic terror attack against New Zealand's Muslim community, a volcanic eruption, and the COVID-19 pandemic. Jacinda's focus on people, kindness, and what she called pragmatic idealism saw New Zealand achieve some of the lowest losses of life experienced by any developed nation through the pandemic.

She carried through a ban on semi-automatic style weapons in her country and created the Christchurch Call to Action to eliminate terrorist and violent extremist content online. She is now the Special Envoy for the Christchurch Call. She joins us from the United States.

On stage with me, please meet a fellow member of the Leadership Panel of the Internet Governance Forum. We were appointed by UN Secretary-General Antonio Guterres a little over a year ago.

Karoline Edtstadler is a former judge, former member of the European Parliament, and head of delegation of the Austrian People's Party. My name is Maria Ressa, co-founder and CEO of Rappler in the Philippines. Yes, my news organisation survived six years of harassment and lawfare. I am the Vice Chair of the Leadership Panel of the IGF. In order to be here in Kyoto with you this morning, I had to ask the courts, all the way to the Supreme Court, for permission to travel.

So thank you to all of you for joining us this morning.

We want to try to make this more free-flowing than the sessions we have had so far. We will leave time at the end for your questions, roughly 20 minutes. Jacinda and Karoline, please feel free to interrupt me and ask questions of each other. It's an honour to be speaking with you both. Let's see if you can see the former prime minister, Jacinda Ardern. Hello, Jacinda!

>> JACINDA ARDERN: Hopefully the technology is working.

>> MARIA RESSA: We hear you. Let me toss the first question.

I will start with women leaders. As a woman leader, you talked about pragmatic idealism. How did you shape that, and how has it given you the courage to make the difficult decisions I mentioned beforehand?

>> JACINDA ARDERN: Look, you could put any agenda item into this tumultuous period of time in which leaders are leading and being asked to govern. And I would say that, in my mind, one of the most important approaches is to always set out your destination. You know, be it the rapid technological development we face, or climate challenges, or issues of inequality.

It's very easy to get caught up in the day-to-day of what needs to be resolved with each of these issues. But I think it's critically important that we still signpost to our public and to our citizens what the end destination is.

And pragmatic idealism, in my mind, is being ambitious in that signaling. Set out your overall goal, but then be pragmatic about the time it may take or the steps you may take along the way.

Of course we will get into a discussion around the rapid technological development we are experiencing now, and how difficult it is to set out an endpoint when we really don't know.

So in that set of circumstances I would say it's probably more important that we set out our values for that journey, and the things we will seek to uphold in an environment of extreme uncertainty.

>> MARIA RESSA: Great, we will come back to that. Karoline, let me toss a similar question to you. You are helping lead through an extremely tumultuous time, from Ukraine to Hungary, and all the different issues you have had to handle. How do you maintain your footing and deal with these difficult situations?

>> KAROLINE EDTSTADLER: First of all, good morning, and thank you so much, Maria, for organising this. It's an honour to sit on the same panel with you, Jacinda, even if you are not here in person. You are a role model for women, and hearing what was just said about me, I get the impression I am being cast as a role model too; it's a pleasure.

I would say you can break it down with a joke, and it is of course only a joke, which goes as follows: for the last 2,000 years the world has been ruled by men; for the next 2,000 years the world will be ruled by women; it can only get better. But this is not the end of the story. We are living in a very diverse, challenging world. I think we need both: the approach of women and of men.

But the difference is, and Jacinda already mentioned that being ambitious is something very important, that we women are judged and seen in a different way. If you are ambitious as a woman, you are the pushy one. You are the one who wants to take the position of a man, and so on and so forth. I think what we as a society have to learn is that we need both ways of seeing the world.

And we women can make a difference because we give birth. We are mothers. We are really perceiving life. I think this is also why we are different from men, and that's good. There's nothing bad in it. Especially in times like these; you mentioned a few of the crises we are still going through. It's very important to have both ways of seeing the world, both assessments, female and male. One last thing: I think we women are still not as good as men at making networks, at holding together, at encouraging each other. That's why I founded a conference last year in August, called The Next Generation is Female. It's not about working against men; it has the support of strong men. And it's really for female leaders in Europe to get together, to network, to exchange, to have personal exchange, and to encourage each other, because it's not easy. And we will go into details also regarding hatred on the internet and being judged as a woman.

>> MARIA RESSA: That's where we will go. For the men in the room, I hope you find this inclusive. Part of the reason I started this way is because the attacks against women online are literally off the scale. When I talk to reporters who are, in some instances, covering male leaders who are misogynists, their editors tell them: buckle up, it's not our problem.

But I think one of the things we want to lay out is that it is a problem of the technology. It's an incitement by the technology, and it's knocking women's voices out of the public debate.

Let me bring it back to what we are talking about: the technology shaping our world today. One of the most interesting things Jacinda Ardern did was her strong reaction to the live streaming of a terrorist attack. It was the first time a government literally asked all news organisations around the world to take out the name of the attacker.

So I was surprised when we got this request. But when we thought about it, I was like, that kind of makes sense. There was also the effort to take down this footage from all around the world. Jacinda, you pointed to the Christchurch initiative as a multistakeholder solution for eliminating terrorist and extremist content online.

What did it succeed in doing? And where can you see it moving forward, given the landscape we are working in today?

>> JACINDA ARDERN: Thank you. A really big question, but I hope there are lessons to be learned. We have succeeded, but we have more work to do. I assume a number of people in the room will have prior knowledge of the Christchurch Call to Action, which is now over 115 strong, with members and supporters made up of the likes of governments, Civil Society, and technology platforms.

But taking a step back: why did we create this grouping in the first place? As you say, on the 15th of March 2019 we experienced a horrific domestic terror attack against our Muslim community. It was live streamed on Facebook for a total of 17 minutes, and then subsequently uploaded a number of times over the following days. It was just prolific. People were encountering it without seeking it. And you are right to acknowledge that in some cases it was in people's feeds because it was being reposted or referenced by news outlets.

In the aftermath there was a strong reaction: this should never have been able to happen, but now that it has happened to us, what can we do to prevent it from happening again? We took an approach that was not just about addressing the fact that live streaming itself became a platform for this horrific attack. Because if we just focused on that, that's a relatively narrow brief, and we know the tools used for violent extremism or terrorism online are going to change. Live streaming was the tool at that time, and the response across platforms was ill-coordinated, for a number of reasons. So work needed to be done there, yes. But we also wanted to make sure we were fit for purpose should other forms of new technology become the order of the day for violent extremists. So the Christchurch Call to Action has a number of objectives. One was creating a crisis response model so we are able to stand up quickly should anything like this occur again. We have not seen anything online at the scale and magnitude of Christchurch since, because we have this civil defense model.

We also asked: how does someone become radicalized in the first place? In our case, the terrorist involved acknowledged they believed themselves to have been radicalized by YouTube. People will debate whether they believe that is the case. Regardless, there are questions about what we can do as governments within our own societies, but also about how to better understand these pathways. You know, how do curated content and algorithmic outcomes play a part? We are understanding that better. And these, I think, are areas where our learnings will be beneficial much more broadly.

>> MARIA RESSA: That's fantastic. Let me follow-up with that.

A week or a week and a half ago, I taught a class with Hillary Clinton and the dean of SIPA about what happens with the ideology of terrorism, how it radicalizes people. One of the things we did in the class was to show how similar that is to what we are going through now, on a larger scale, with political extremism.

Are there any lessons from the Christchurch approach and the pillars you have created about how to deal with radicalization? For example, lessons we can use to combat the polarization we are dealing with globally?

>> JACINDA ARDERN: Good question. Where I come at it from: our starting point was, how did this individual become so radicalized that they were driven to fly to our country, embed themselves in the community, and plan an attack against our Muslim community that took 51 lives?

How is it that that can happen, and what can we do to prevent it? Now, the learnings from that may be applicable across a range of different areas and sources of motivation and rationales, whatever may be presented by the individual.

But one common denominator we determined was that, despite the ideology that may be driving the behaviour, we couldn't actually answer some of these questions. Because so often there would be this issue around, well, privacy, or intellectual property; it was very hard to get insight into how algorithms may be driving some of this behaviour, if indeed they are.

So we took a step back and, over time, pulled together a group of governments and platforms willing to put funding into creating a privacy-enhancing tool, which will enable researchers to look at the data we need to look at in order to understand these pathways. That will enable researchers across a range of fields to better understand usage and curated content, help us understand what successful off-ramps look like, and, I hope, prevent this.

>> MARIA RESSA: Karoline, you were in the E.U., and data is one of the key factors in how we are able to see the patterns and trends that influence behaviour.

Could you tell us about the EU's approach to the European Democracy Action Plan, and now the rollout of the Digital Services Act and Digital Markets Act?

>> KAROLINE EDTSTADLER: Well, I think in times like these we should do everything in parallel. There are so many crises and so many challenges we should find answers for, but it is really quite hard to do so. I really think the European Union is ahead with regard to the A.I. Act. Even if I say ahead, we are of course lagging behind, because we should have been quicker. But the developments were so quick in the last two years, I would say, that this is normal. So now we are really trying to do something regarding A.I.: to have a framework for A.I., to have a classification of the risks of A.I. And I think it is something very important to classify the risks, because there are some applications that do not harm us. We need them, I don't know, for spam filters; they pose no risk. But on the other hand we have A.I. which could really harm the whole of our society.

And this is the one thing. The other thing is that we already have the DSA and DMA in the European Union. I can proudly say we in Austria were pushing that a lot. We started a process in 2020 to have a legal framework in Austria. And, to put it diplomatically, I had discussions, because they were not happy; it would take at least two years to create it at the European level. We were quick in Austria: we had the Communication Platforms Act in place from the first of January 2021. And this is something where social media platforms have to deal with the issue. They have to report; they had to set up a system where someone who is attacked on the internet can push a button and say: this is against me, do something, delete it now. Because it goes around the world very quickly, and you as a victim should be helped the minute it comes across.

So now we have the DSA and DMA, and of course we have to redo our legislation, but this was also my goal: first the national level, then the European level, and now I'm here as a member of the Leadership Panel really trying to create something universal.

So this is for the whole international community, and it is not easy. Of course, different governments coming from different standpoints have different assessments of the situation. But in general, it's about human beings, and the need to treat this, yeah, big danger for our whole society. As Jacinda said, and as we saw in her country with this really horrifying terrorist attack.

>> MARIA RESSA: Yes. From the data from the Philippines that we have looked at and analyzed, in 2021 I called it a behaviour modification system. I will skip the data that shows that and its impact on our society. Let me ask our two leaders. Social media was the first time machine learning and artificial intelligence were allowed to insidiously manipulate humanity at scale. You are talking at that point about maybe 3.2 billion people, right, deployed at scale across platforms, because it doesn't just stay on one. There was a lot of public debate and a lot of lobbying money focused on downstream solutions, right? The way I think about it, there's a factory of lies, I mean you will have seen this already, spewing lies into our information ecosystem, the river. And what we tend to focus on in public is content moderation. Content moderation is like taking a glass of water from the river, cleaning it up, and dumping it back into the river.

So, you know, how can we move away from these downstream solutions like content moderation and toward structural problems like design? Lies spread six times faster on these technology platforms than really boring facts.

So that design allowed surveillance for profit, right? A business model that we didn't have a name for until Shoshana Zuboff wrote about it in 2019. We were retrofitting, reacting to problems after they materialized.

Now that we are in the age of generative A.I., I wonder how we can avoid being reactive. Why should the harm have to come first, before we protect people? I know it's a tough question to throw at you. But let me give you an example from the pharmaceutical industry. There was a COVID vaccine that we were all looking for.

Imagine if the pharmaceutical companies didn't have to test it first, and could test it on the public instead. So: this group, I will give you vaccine A; and this group here, I will give you vaccine B. Group A, I'm so sorry, you died. I only say that because it is exactly what happened in Myanmar, for example, where both the U.N. and Meta sent teams to study the genocide in Myanmar.

So can we do anything to prevent these types of harms from happening? Karoline first?

>> KAROLINE EDTSTADLER: I would say the first thing is to raise awareness. To raise awareness and to give people education and the skills to deal with it.

The second thing, and this is what we are trying to do, also in the Leadership Panel, is to set a legal framework in place. I would say it should be regulation that is not hindering innovation. Because we know developments are quick. They are needed. And they can be used for the best of all of us.

But we have to learn to handle them, and also to handle the downsides. Now, it's easily said: put a legal framework in place.

But it's not so easy, because I'm sure we will lag behind the future. I compare it to my former profession: in criminal cases, sitting in the courtroom, you never have all the information the perpetrator has.

And you are always behind. But in the end you have to deal with it, and you can deal with it. I think that's the same approach we have to use with regard to new technologies, A.I., and all the things coming along. We have already proved it is possible to do so, with the DSA and the DMA, and with the legal framework we put in place in Austria. Maybe two more sentences on that. When I started the process in 2020, I invited the social media platforms into a dialogue with me about hatred on the internet and what we can do against it, and about putting a legal framework in place from the parliamentary side, because we as democracies are represented by parliamentarians and ruled by governments.

They said: you don't need to do that, we are so good at handling the hate on the internet, we are good at deleting, we don't need a legal framework from the state. And now we have it. And now I think almost all of them are quite okay with it. Let's put it like that.

And we are now in a process, also here in Kyoto as in Addis Ababa, of engaging with the expectations of society, and this is a good development.

>> MARIA RESSA: Fantastic. Jacinda? Your thoughts, upstream solutions for generative A.I.?

>> JACINDA ARDERN: Here, I think the sentiment you shared in instigating this part of the conversation, around how we put guardrails in place before the fact, has to be one of our key take-homes from the last 10 years or more.

And I think we are naturally seeing a hesitancy or skepticism in the public as a result of the fact that we have been retrofitting solutions to try to prevent harms after the fact.

There was some research, I believe recent, demonstrating that roughly half of people were quite negative about the relative benefits of A.I., and those who know more were even more negative.

We are talking so much about the potential harms, and there isn't the same emphasis on the opportunities that exist. It speaks to recent times; it speaks to the public. And it's relatively rare to have a field of work where just because you can, you do: as in, we have the ability to develop this, so we push ahead, even though there are those who are flagging risks and flagging harm.

I'm an optimist though. What I find encouraging is we are having these open conversations and included in those conversations are those who are at the forefront of the tech itself.

This is where I come back to the fact that I, as a past regulator, am not in the best position to tell you precisely the prescription for those guardrails. But I can tell you from my experience the best methodology for developing them.

And in my mind, because we will always be in this fast-paced environment, it is not to solely take a regulatory approach, though that is an important part of the piece, given the pace at which we see these technologies developed. The multiple intersections and perspectives we need at the table, that multistakeholder approach which includes companies, government, and Civil Society, is incredibly important.

And you know, in my mind, even if I can't give you the prescription, I absolutely believe that will be the how.

One other thing I didn't anticipate when we convened a group of that nature: the companies themselves created a natural tension amongst themselves. Those who were willing to do the least were pulled up by those who were willing to do the most. There was full exposure over issues that might previously have sat in a one-on-one. You have a tension; they knew it wasn't possible just to speak to a regulator as though the regulator were unfamiliar with the tech, or with the parameters they were operating within.

We need that tension, I think, in the room as well.

The final thing I would say: there are opportunities here. A.I. may well help us in some areas where we have previously struggled with some of those existing issues we have spoken about, around content moderation, media, and so on. So many of these things collide in these conversations.

So we should keep looking for those opportunities. But I, for one, am always one to take a risk-based approach and I will always look for the guardrails.

>> MARIA RESSA: Fantastic. I will ask one more question, and then if you have questions, please just go to the microphones; we are coming up on the last 20 minutes. So, this last one. We have tackled our first contact with A.I., and we have looked at generative A.I.

There are lots of doctrines that have been pushed out. Let's talk about the use of A.I. in law enforcement and surveillance.

The concerns that have been raised about civil liberties, about privacy. What guardrails can we put in place to protect human rights? I will toss that first to Jacinda.

>> JACINDA ARDERN: Yeah, this is where we should not be starting from scratch. Human rights, privacy: these are well-established rules and norms.

Now, if indeed there is any nuance in that discussion for any particular area, and often it should be relatively black and white, but if there is any nuance, that's where Civil Society, in my mind, has to be at the table. And again, not to harp on about the importance of the multistakeholder approach, but let's first and foremost not forget we have well-established human rights practices and privacy laws. This should be our fall-back position. Any question mark over that should be a good pressure point in those conversations.

>> MARIA RESSA: This is where I would encourage Civil Society to come out stronger. We must, because of the use of Pegasus and Predator and the increasing conflicts around us. Karoline, the same question to you: what guardrails should we put in place?

>> KAROLINE EDTSTADLER: I agree with what Jacinda says. We shouldn't reinvent the wheel. There are human rights in the world, even if, as we have seen since February of last year, some are really disobeying what we concluded to follow. But coming back to the internet and technology side, I think we have to guarantee a rules-based approach in this regard.

And I am also fully for A.I. being used for the best of all of us. Think about operations: A.I. can do them much more precisely than a human person can. This helps us, of course. Also in law enforcement, as you asked. I recently heard a presentation when I was in Austria, before lawyers; the question is, to which point will we go? Will it in the end be not a judge but some technology sitting there, deciding if someone has to be sent to prison or not? This is really where we should draw a line. And this is what we are trying to do within the European Union with the A.I. Act: to structure the risks of A.I.

I think this is the way we can guarantee that these technologies are used for the best of us. And of course we also have to be clear: there is always a downside. But let's handle these downsides, and then it's better for all of us.

>> MARIA RESSA: Great.

The mic is open for any questions from the audience. Yes, please.

>> ATTENDEE: Okay, hi.

I'm Larry Magid, the C.E.O. of ConnectSafely. We are writing a guide to generative A.I. We have here a journalist and politicians who are good at talking to the general public.

So how would you explain to parents, educators, people who don't have technical knowledge of what generative A.I. is, that it's not the end of the world, but that there are risks, and what they can do within their own families and classrooms to mitigate the risks for the kids and themselves? Thank you.

>> MARIA RESSA: Karoline, you want to take it?

>> KAROLINE EDTSTADLER: Well, I think it's true. The reality sometimes is that children are explaining to parents how to use the phone. Or they are not doing so, and are simply using their phones to do things the parents didn't want them to do.

I think it's also something we as governments have to try to address, with legislation or, let's say, information campaigns, to get the knowledge and the skills to the people. And this is of course a big, big challenge, because we also have to train other people; they use these things, but again there is always a downside to them. And this is something we can only do together. We had some campaigns in Austria and some trainings for people. We had a lot of discussions also on how to train parents. And I don't have the answer for how to do it, but I think the way forward is to exchange our experiences across different countries: what works and how it can work.

>> MARIA RESSA: Great, thank you.

>> JACINDA ARDERN: This is such a good question.

I was in the generation that sat at that really interesting transition point where, you know, we went from being students who were taught to use the Dewey Decimal system to find a book in a library, and once you had done that, you had found your fact and your resource, to being in a period where we are, of course, inundated with the ability to seek information at our fingertips. But we weren't really taught as successfully, I think, that what we then found on that shelf might not necessarily be the fact we thought we were finding before.

A history teacher who was extremely influential to me growing up explained it as going from a hose to a fire hydrant for kids. Regardless of the tech at any particular time, be it generative A.I. or anything else we encounter in the future, I hope we teach our kids to be curious.

Not cynical, but curious. And now the tools we have may be giving the impression that we are going from a fire hydrant back down to a really well-refined hose. But that water has been drawn from a particular source in a particular way. And we need to teach kids to be curious about that: to go back, not just to the information in front of them, but to think a couple of layers back, and to think critically a couple of layers back.

So I would sum it up with just curiosity, in everything. I think that is going to help us with the age of disinformation and with rapid technological change. And I hope it creates a generation that is not cynical as a result.

>> MARIA RESSA: Fantastic.

>> ATTENDEE: I'm Michael, Executive Director of the Forum of  --  Democracy. It's intimidating to be in front of greatness, but I will try to ask a question. One of the themes I have heard today and yesterday is the importance of a multistakeholder approach to finding solutions. And my question is specifically around the participation of Civil Society. It's very easy for governments to show up. It's very easy for companies to show up, particularly in an environment where pay-to-play is so pervasive: you pay a few hundred thousand dollars, your CEO can show up and speak, you can capture the narrative. It's not so easy for Civil Society. You can't buy a business class ticket and show up the next day. What are some solutions to ensure Civil Society, especially those from the global south, can participate effectively?

>> MARIA RESSA: I like the global south. Yeah?

>> KAROLINE EDTSTADLER: I can only say we try to include Civil Society. I think the understanding is there that we can tackle these problems and issues only together. Not governments alone. Not parliamentarians alone. Not Civil Society, not tech enterprises, but only all of us as a society together, and I really mean all of us, including the government.

And we are doing that in Austria too. I'll give you an example from the implementation of the SDGs. We will have the third dialogue forum on the SDGs, where we invite Civil Society to, yeah, to contribute, to tell us what they are doing. It is the same here. You can't do it top-down; you can only do it bottom-up.

>> MARIA RESSA: Jacinda, and after Michael's question let's take three questions at a time and give them to both of our speakers. Jacinda, your response to that?

>> JACINDA ARDERN: Well, firstly, I was actually going to say, Maria, you would be a really good person to speak to this yourself, so maybe you should have a punt at the question. My very brief contribution, Michael: I totally agree with you. Early on in the Call, most of my interactions were with Civil Society at the table, because that was what we were building. We wanted it to be a structure with Civil Society at the table. As you say, there are some real practical things to overcome in creating a network of that nature.

There may well be, in the room, I can't see the room, but if anyone from our Christchurch Call network is there, I would ask them to give a quick raise of their hand and to share at some point, whenever it's appropriate, their experience. We certainly have learned over the last four years how we can make that engagement easier at a practical level, and meaningful. But the fact we are still going, and I think still seen as a valuable network, I hope means we are doing some things right. Learning as we go, because we are not perfect. But I would hand that back to you, dear moderator.

>> MARIA RESSA: Thanks, Jacinda. Michael, you know there are these times when Civil Society comes together. We have the Paris Peace Forum coming up.

Over the last few years that's been one way we have been able to get Civil Society together, but frankly not enough, I think. There are many different groups. Tallinn in Estonia has handed over the Open Government Partnership to Kenya, right. There are all these groups working together, platforms that could evolve to take this on. I'm a journalist, so to me information is power. And that is the top problem: if we do not solve the corruption of our information ecosystem, we cannot solve anything, let alone climate change, right?

Let's take three questions. And then our leaders can answer. Please?

>> ATTENDEE: Good morning, I'm Svetlana. I work on countries like China and Myanmar. All the European initiatives regarding controlling and, let's say, monitoring the private sector, especially the ICT sector, working in European territories are great. Of course they are human rights centric. I mean, some of the CSOs in Europe might not agree, but in comparison with Myanmar, for instance, they are very good points to follow.

My question: for the private sector regulated in Europe, especially under the Digital Services Act, how would you monitor their actions in countries with these regimes?

>> ATTENDEE: It's related to that. Many of the people most susceptible to misinformation and disinformation are the people who lack fundamental trust in structures and institutions. I'm sure there are strange conspiracies about what we are doing in this room today. How do you reach those people?

>> MARIA RESSA: Great. Trust. And we will take one more.

>> ATTENDEE: Yes, and I hope it's not too big a question, but we are being told as humanity that privacy and safety cannot co-exist in the online world. We are being told that, because the technology is the way it is and because of the design choices that currently exist, privacy cannot be absolute if there is any consideration of safety, and safety cannot be guaranteed to anybody because we have to really care about privacy.

My question to you is: how can we take a step back, think about human rights, and start from there? And then think about design choices, instead of ending up, to be honest, in very stupid debates about little technology choices, bits and pieces we need to work on to overcome challenges and get to a place where we could have both. We really need your help as thought leaders, so any thoughts on that would be welcome.

>> MARIA RESSA: Fantastic. Jacinda, Karoline and I will take some questions too.

>> JACINDA ARDERN: I will work through the last two and leave the first one to others. Starting with the last one, around the safety debate and the privacy debate: I will share very briefly one experience we had with that debate, which had persisted for years.

With the Christchurch Call, for instance, we didn't want to just look downstream; we wanted to look upstream at things that may be contributing to radicalization. Algorithms and privacy kept coming up, and we were told you can't overcome that debate. It did take some resource to establish, but the Christchurch Call Initiative on Algorithmic Outcomes is that tool. That research can prove meaningful, so we can say: this is what we are learning, now what are we going to do about it? That will be the critical next step. The learning for me is that there are ways through, though it took too long; it took us four years to really overcome that issue. But I hope it gives some encouragement that we are pushing past it, and sometimes that creative tension I mentioned, being in the room with other tech companies, is helpful for those debates.

The second issue hits the nail on the head: what do we do about those who are susceptible to disinformation? We have seen what it can do to liberal democracies when it is at large. We have had some recent examples in a number of countries, and it is devastating.

Here I track back again. There are some doing research on this, particularly the likes of Colombia. Instinct will probably tell us quite a lot as well. If you have an inherent distrust of the state, probably the state has failed you in some form. If there's a general view that your economic position comes from the state and you are in the lowest category, or you are disenfranchised, or you have had an experience with the state, for instance you have been in their care, these are some of the features, and educational attainment as well. We need to track back as governments and think about what we can do to re-establish that trust in institutions. That means actually delivering for people as they expect us to. It's as simple as that.

When it comes down to the one-on-one, I have tried to have conversations with people who are deep in conspiracy; it's a completely demoralizing experience. That's why I always go back to the beginning: how do we stop them from falling in the first place?

>> MARIA RESSA: Karoline?

>> KAROLINE EDTSTADLER: I would like to start with the second question. That's the main question for us as politicians: how can we regain trust in institutions, governments, democracy, and such? I would say this is also the most difficult question to answer. We are living in challenging times; this was mentioned several times already. And people are tired of crises, and they want to believe easy solutions. And this is really our problem. But democracy is hard work every day, and we have to fight for the trust of the people on a daily basis. So this is the only thing we can do. We all have to be aware of the fact that you normally cannot find a solution which is beloved by everyone. So there will always be a certain number of people, a group or something like that, you can name it, who are not happy with the decision. But democracy means that we find majorities, and this is something which was clear in the past, and now it's not so clear.

One of the reasons is, and this also goes to the first question, that you can find misinformation and disinformation on the internet, and that you find your group, with only your opinion. This is something we found out especially during the COVID pandemic. It's nearly impossible to get people out of such echo chambers once they are locked into their opinions and surrounded by people who have the same opinion.

So what we try to do is to regulate things in Europe, and we would like to be a role model for the world. That's why I'm very happy that I'm part of the Leadership Panel and can contribute from my experiences in Austria, but also at the European level. And again, we are not at the end of this story. Now to the third point: privacy versus safety.

I think we need both of them. And it has always been a challenge to guarantee human rights. You always have the situation where the human right of one person ends where the human right of another person is infringed.

This is something we have to do on a daily basis; it's what I did as a criminal judge in the courtroom every day. If someone wants to demonstrate, he can of course do that, but this ends when the body of, I don't know, another person or a policeman is injured. Here too you have to find the balance, and this is what we have to do. So I would not be as pessimistic as the person, I think it was a woman, who put the question to me: it's not that we can't do both, we have to do both.

>> MARIA RESSA: Jacinda has a hard stop at the top of the hour, so let me answer quickly, and then I want to ask you for your last thoughts before we let you go, Jacinda. The first question: the weakness of institutions in the global south. The countries you mentioned are the countries where we have seen the regression of democracy, right?

These regimes are using that to retain and gain more power. The second one, the cognitive bias you mentioned: it is there, but frankly smart people think they are immune from the behaviour modification aspects of information warfare or information operations. We are all susceptible. And sometimes the smarter you are, the harder you fall, right?

This is a problem, I think, for leaders. It's a problem for our shared reality; this is the reason I have spoken out a lot more about the dangers, because without a shared reality we cannot do anything together. Finally, the last one: oh my God, I love your question, because privacy by design, trust and safety by design. When the tech companies say they cannot, it just means they won't, because there is no regulation, no law, no Civil Society pressure to demand it.

We deserve better. Let me throw it back to Jacinda Ardern for her closing thoughts.

>> JACINDA ARDERN: Look, I think you have traversed a set of issues that confront all of us in different ways and cut across a range of other incredibly serious and important issues. How do you tackle climate change unless you have a shared reality around the definition of the problem? We see information integrity issues playing out in geostrategic issues, coupled with traditional forms of warfare. There's a polycrisis, and we see this extra layer added by the technological developments of recent times. But I'm an optimist. I'm an optimist because in the worst of times I have been exposed to the ability of humans to design solutions and rapidly adapt and implement them, ultimately, for the most part, to protect humanity. And we have that within our capability. We need to empower those who are specifically focused on doing that, who are dedicating themselves to it, often at great sacrifice. We need to support regulators who are focused on doing that. And we need to continue to rally one another in what is an incredibly difficult space.

My final note to those in the room working in those areas, I acknowledge you and the work you do. It's incredibly tough going. But you are in the right place at the right time and your grandchildren will thank you for it.

>> MARIA RESSA: Thank you. Thank you, Jacinda Ardern. Karoline, your thoughts?

>> KAROLINE EDTSTADLER: Well, I can only second what Jacinda said. Your grandchildren will thank you one day, because now is the time to create the future. These challenging and crucial times need all of us. And I'm coming back to what I already said: we cannot do it alone as governments. We cannot leave it to the tech enterprises. We cannot do it as politicians, no matter where we serve. We need all of us. We need the whole of society to be aware of the challenges ahead. And stay optimistic; I really would like to conclude with: stay optimistic. Thinking back and learning from history, it normally took about 100 years to get used to a new technology. We are talking about the internet, and we have the father of the internet, Vint Cerf, as our Chair on the Leadership Panel; he invented the internet. It's the right time to set the legislation for the internet. It's the right time for children, parents, and grandparents to be aware of what to do with the internet and all these applications we are already using in our daily life, and to see the positive things: how our lives have changed for the better since we have had these technologies in our daily lives.

So this is really what I try to do. I'm really proud that I have the opportunity to contribute at this level. But that doesn't mean it is more important than other levels. Everyone is needed in this process, and we can only do it together.

>> MARIA RESSA: Fantastic. And the last thing I would say is: everyone in this room, you are here for the Internet Governance Forum. It is a pivotal moment. They are so wonderfully optimistic; I'm probably a little more pessimistic, but it depends on what you do, right? It comes down to all of us. I hate to say it that way. But it is this moment in time. Thank you so much, Rt. Honourable Jacinda Ardern and Karoline Edtstadler.

The next session is coming in the room. Let's move.