IGF 2024 - Day 3 - Workshop Room 4 - WS255 AI and Disinformation: Safeguarding Elections - RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> MODERATOR: Welcome, everybody. We have a session on AI and disinformation: Safeguarding elections. I have two panelists on site here and one online as -- is there a problem? Can we have --

(Pause).

   Okay. Do we have Roxana online? Can't see her anywhere. Okay.

(Pause).

   >> DENNIS REDEKER: If I may jump in, Roxana Radu is online, but she's not able to unmute herself. It would be great if the local team could make her a co-host so that she can unmute herself during the discussion.

(Pause).

   >> MODERATOR: Okay, sorry about the little confusion here. So we have three distinguished panelists: a Member of Parliament of the Democratic Republic of the Congo; Roxana Radu, who is online, chair of the Global Internet Governance Academic Network and assistant professor at Oxford University; and Santosh Babu Sigdel from Nepal. And we're going to talk, as the title says, about elections, AI and disinformation. I presume most of you have heard that this has been a major election year around the world.

   I haven't been able to determine the exact number, but some 60 countries have had elections this year, and two more are to come: Chad and Croatia will hold elections later this month.

   It was widely predicted that disinformation generated by AI would be a major factor in elections. One question we have to talk about is: did that actually happen? And if so, what should be done about it?

   Now, you may have noticed there were some less than perfectly fair elections even before AI and the Internet; all kinds of election campaign meddling have happened in the past. Governments, the people in power, have, let's say, creatively used their power to influence the outcome of elections.

   So how big a difference does AI make on this?

   Maybe we'll start with that. Do you have fears about AI messing up our elections? Let's start with Saurav, is that okay? You go first.

   >> As we said, in 2024 there were a lot of elections this year, and there was a lot of concern about AI being used by some actors to gain better results in elections.

   But I think we had a lot of fear, and it didn't happen the way we feared. Maybe that is because people were also prepared to face AI and to take measures against it. On our side, as legislators, we didn't really do much about it; it was the people themselves, the political actors in that field, who took measures, by themselves and through their teams, to fight the use of AI on the open net.

   And those who were planning to use it maybe didn't use it as much as they would have liked, because the community was already prepared to recognize the use of AI.

   So I think we had a lot of fear compared to what really happened on the ground.

   >> MODERATOR: Okay. So you think it was more a scare that has not come to reality, at least not yet. Maybe I'll go to Roxana next. Perhaps you want to comment on the Romanian elections and whether AI had anything to do with that.

   >> ROXANA RADU: Absolutely. First of all, apologies for not being able to join you physically this year at the IGF, but thank you for the invitation to join online.

   I wanted to bring in the example of Romania. For the first part of the year, we heard quite a bit of comment about AI, and as we approached the end of the year, people started to feel that AI is just another tool in the toolbox of technologies we have available around elections.

   But the case of Romania changes the narrative completely. As you might have seen, about two weeks ago the constitutional court of Romania decided to cancel the results of the first round of the presidential elections. It's the first time this has happened in the country's history, and also the first time it has happened since the introduction of AI.

   Of course there are several reasons behind the decision, but it is very clearly linked to electoral interference from foreign states, in particular one, Russia, as revealed by the intelligence reports.

   It was also very clearly linked to algorithmic treatment, in particular preferential treatment of one of the 13 candidates in the elections.

   And the decision of the court cited the illegal use of technologies, including artificial intelligence. So this is a case that, in a way, is a wake-up call: all of this can be abused massively. It hasn't happened in other presidential or parliamentary elections, but that doesn't mean it's something we shouldn't have on our radar.

   I wrote a report with a colleague of mine looking at positive uses of AI in elections. In India, for instance, we could see some creative uses of AI, both to motivate people to go out and vote and to promote campaigns in ways that were fair.

   And also to promote inclusivity, translating some of the speeches in real time, useful ways to reach out to a larger voter base.

   At that point, this was May/June, the Indian elections, it didn't look like there was a lot to worry about. By the time we got to the American elections, quite a bit of attention was being paid to the use of AI. And yet it happened in a country that was not in the media spotlight. I think that's something we should also bring into the discussion: all elections have their own stakes.

   But I think it's useful to think about this use of AI for both the good uses and some of the really bad outcomes.

   I'll stop here, but happy to jump in later in the conversation.

   >> MODERATOR: So it's nice to observe that AI is a double-edged sword that can be used for good and bad in the election context as well.

   But maybe I'll hand over to Babu now for your notion of what happened, what could have happened, and what should have happened.

   >> BABU: Thank you very much. It's my pleasure to be here and talk about this interesting topic. I have the privilege of speaking alongside an honorable parliamentarian who has fought through this whole process. As to whether it's scary or normal, it's very [?] that it has two sides; it has a bad side as well.

   The benefit is that it has become very easy to make political advertisements for candidates, especially in running a campaign and developing content. But simultaneously, there is a big risk that the opposition or other stakeholders may influence the election campaign using similar content, which could be detrimental to candidates' characters and so on.

   So one of the major issues in political campaigns is political advertising. That is one issue.

   Another issue is the transparency of the campaign. We can see that the various platform providers have their own internal regulations, their own provisions about what limits there can be, and their own filtering of some content, using AI as well.

   If I recall, various platform providers, including Facebook, Twitter, and TikTok, themselves removed a lot of political content from the campaigns, and later on this was contested or challenged by the politicians themselves.

   So there is another risk in the platform providers' own use of AI in the process of filtering their content.

   And another issue was, as I mentioned, the development of content that could be useful or detrimental and that immediately damages an election campaign.

   But intervention on that content can come very late: content can damage a politician in a few seconds or a few minutes, and even if it is removed within a few hours, that may not be sufficient to repair the damage to the political campaign. That's another issue that can be seen around elections.

   And another thing: we have been talking from a campaign perspective, a content perspective, but a major issue also comes, as Roxana just mentioned, from foreign influence in the election process or on election day, such as interfering with data or with the systems for ballot papers or the ballot process.

   This is a very significant part. And another thing: when it comes to the remedy process, do we have a sufficient regulatory approach or not? Our election courts also need to be very clear on these kinds of [?] amplifying this content or its effects.

   So these issues are evolving around elections, disinformation and misinformation. If we have a proper regulatory framework, understanding, and literacy as well, we can manage the risks of AI and use it from a positive perspective.

   >> MODERATOR: Thank you. You made some keen observations there. Elections are very time-sensitive. And if an AI system trying to remove misinformation accidentally removed somebody's political advertisement, and it took days before it came back online, and they lost the election because of that, that would also be a problem. So it can cut both ways.

   So --

   >> ROXANA RADU: Excuse me, maybe I can jump in very quickly here.

   >> MODERATOR: Please do.

   >> ROXANA RADU: I definitely want to talk a little bit more about the question of transparency, because that has been part of the regulatory agenda for a while. Not necessarily in the context of elections, but platform transparency with regard to their practices has been on the minds of policymakers for a while now.

   And in the EU we do have a framework for that: the Digital Services Act. Right now, the European Commission has decided to open a formal investigation into TikTok with regard to the Romanian elections. So this was the platform that was scrutinized for this illegal use of AI.

   It turns out transparency was not really working in this case. One of the candidates received preferential treatment without ever having their electoral content labeled as such. It would appear in all sorts of feeds without ever mentioning that it was, in fact, part of the campaign.

   This is obviously in breach of the laws in place in Romania, which is why the court had to issue its decision.

   But we also now see the European Commission looking at this case; Romania is a member of the European Union, there's a framework in place, and the Commission has asked for a couple of things. First of all, already on the 5th of December it asked TikTok to retain all information related to elections for a particular period of time. I think it was from the end of November all the way to March 2025: TikTok is now under an obligation, as per this EU order, to retain all information that has to do with any national elections.

   So that will include the upcoming elections in Croatia as well.

   For the Romanian case, they said this will be a matter of priority, so they will complete the investigation in a speedy manner.

   They want to look at what content was recommended during the election period, and also at potential intentional manipulation of the platform.

   So there are quite a few aspects that will come into question with regard to TikTok's practices.

   The previous speaker also mentioned different platforms taking action throughout this year. It's true, we have seen lots of statements from Meta across their different platforms, from Instagram to Facebook. From Twitter we've had mixed messages in this period.

   But the truth is that many of these platforms have actually reduced the number of staff working on these issues, on monitoring electoral content.

   So at the end of the day, I think we have to weigh that. On the one hand, they've cut the funding they had for proper ways of dealing with this, including AI tools to detect some of this content; it turns out that doesn't work all that well.

   And on the other hand, they make all these statements about their proactive attitude towards preventing electoral interference.

   I think the truth sits somewhere in the middle, because it's a lot more mixed than we have seen. And the reality is that the AI tools we have today are getting better and better in particular languages, especially widely used languages, but they are not very good in languages that are not as well represented on the Internet.

   So ultimately, if AI is supposed to be in charge of monitoring how AI is used on platforms, we can't really trust that to be very accurate. Thank you, I'll stop here.

   >> MODERATOR: Thank you, Roxana. An interesting point here: historically, freedom of speech has included the freedom of newspaper owners to publish whatever they want, and of course an Internet platform can also have its own political position.

   It's just that they should be open about it: they might push something, a social agenda or whatever, as long as it is explicitly associated with the platform.

   But pretending to be neutral while not being neutral is definitely bad. Maybe I'll hand it over to you at this point: how do you feel, especially from the Congo's point of view? Do you have something different on this issue?

   >> As we said, part of the problem with having AI monitor content is language. Most of the content is in our local languages first, and words that also exist in English or French don't have the same meaning locally.

   We sometimes refer to a party by a name that is a common word in English but really means something different locally.

   For example, in our country there is a political group that we identify by a nickname, something like "Taliban." When you say Taliban in English, you may think of someone in the actual Taliban, but it has another meaning: it means a member of the majority, you see.

   So the AI will not see that context. That's why we need to have someone here, real people in the background, who know the local context.

To come back to the use of AI: what I was saying is not that AI was not used in the elections, but that it was not used in the way people were expecting. Everyone was watching the U.S. election for deep fakes, but as you say, it happened in Romania while people were looking at the U.S.

   And even in the U.S., AI was used not mainly to make deep fakes, but for candidates to promote themselves, like people using AI to make chatbots that respond to emails and phone calls automatically. In Pakistan, I even heard that one of the candidates, the former prime minister, used AI to give speeches: he was in prison but was able to deliver live speeches using AI to clone his voice.

   So AI was used, but because many people were waiting to see deep fakes, I think candidates shifted: instead of attacking opponents on the open net, they started to promote themselves, using AI to reinforce their own campaigns.

   Much of that use was to respond to emails, make calls, make speeches, and make advertising: nice videos of themselves, nice pictures of themselves.

   Congo had its election just before the end of '23, not in '24, but since it was in the very last days of '23, we were also part of that big game of elections in '24.

   But as I said, before the election we had meetings, and a team from Meta even came to see our election committee. They agreed to work with us and helped us put in place a team to monitor that content.

   And we consider that it worked, because in the '23 election we didn't have as many deep fakes as in the year before. The team from Meta came into the country, put strategies in place, and worked together with our Election Commission to see if they could fight the deep fakes and misinformation.

   >> MODERATOR: Okay. Thank you for that.

   It's an interesting observation that AI has been used as a tool for election campaigns. Then the question becomes: does it help more those who have had trouble getting heard, the underdogs, who now have the same tools and can multiply their voice, or does it help those who are already powerful even more?

   >> From what I saw, it helped mostly the small candidates who were under the radar, because they were able to put a lot of effort into AI. In the U.S., I know there was one small candidate in the campaign who was able to gain more voters than Joe Biden in a state just by using AI.

   He didn't have a budget like Joe Biden's, but he put a lot of effort into AI. It also worked for a small candidate in Japan, who likewise put a lot of effort into AI.

   So it really helped those who were seen as small candidates. It gives them the same tools as the powerful candidates.

   >> MODERATOR: That's interesting. It turns out that it can be [?]

   But maybe Babu has a point of view here and maybe things are different in Nepal and other situations.

   >> BABU: Not really. We had a similar situation in Nepal in '22, when we had our election; AI tools were not used that much then. But nowadays this is a big discussion: in three years we'll have a new election, and we are already discussing the importance and risks of using AI, especially its influence on election results.

   In this context, our speaker Roxana raised some issues of platform governance. Previously, we considered platforms to be trusted third-party media that did not take sides on content. Meta might have endorsed a candidate, but not through the content itself.

   But this time we observed, especially during the U.S. election, that the owner of X was repeatedly posting his own content, and those posts were reaching our accounts as well. It is said that this significantly influenced the result of the election.

   So my point is that if platform owners are using their platforms for their personally preferred candidates, that's a big risk. And if they use AI-based content in the process, that is even more dangerous for the democratic process; this is not the standard we expect in a democracy.

   So from that perspective, how we make these platforms more accountable is one question.

   Another very significant thing: platform operators are businesses, and they want to do business in the election context as well. If the Election Commission works with them, or is itself influenced by them, that is even riskier. We saw this in 2022 in Nepal.

   Some of the candidates had problems with the Election Commission, and they complained that the commission had asked the platform providers to remove their content.

   So that is a very big risk for platform governance and for the mechanisms of election [?] as well.

   These are very significant issues, and if they are influenced using AI, the risk is even greater.

   >> MODERATOR: Thank you.

   At this point, I understand we have some online questions. Maybe Dennis would like to read out some questions for us.

   >> DENNIS REDEKER: I'm happy to do so. This is a fantastic discussion, and we have some questions in the chat, both public and private, so I thought I would share those with you. Thank you to all the speakers so far.

   The first question, by Imad, identified only by a first name, asks: what is the role of eVoting in this? You might think this is a different conversation, but maybe it isn't, because it is also about trust: eVoting and matters of trust when it comes to elections.

   It would certainly be interesting for some of the speakers to pick up on how these combine: AI-powered disinformation online, and then also potentially casting your vote online. How do they play together?

   The second question is from someone asking about the positive uses of AI in elections. I think that refers in part to what Roxana has already presented about positive uses of AI in the context of India. Maybe that's something you could go into in more detail; I also saw that you already posted the link to the report in the chat.

   And the third question is about the risk of elections being canceled. We just had this in Romania, and it also relates, I think, to trust in election integrity. Under which conditions can elections be canceled, and what does it do to us as voters when we go into an election not knowing whether it will be fought fairly or whether it will be canceled by a court later on?

   So maybe this is a question for Roxana, but also for the others: what does it do to a community when you cannot trust that an election will go forward, and manipulation might mean having to retract, or take back, the results of an election?

   That's it from the online moderation team here.

   >> MODERATOR: Thank you for those. Does anybody want to pick up on the eVoting issue and how much, if at all, it relates to AI? eVoting has been going on in Estonia for a long time, for example, but I don't think we have any Estonians around to talk about that. Anything specific to AI?

   >> Can I take this question? Nepal is a neighbor of India; we share a border, and recently they had an election. In India, there were many challenges concerning the compromise of [?]. Elon Musk said in one statement that voting machines could be compromised, and that became a big debate in that context.

   Obviously this is a very challenging thing. In Nepal's context we may not have foreign influence in the election process, but taking the Indian perspective again: India has a range of policies and regulation very much in place, and yet it is still using voting machines. So in that context it is very risky when we use these.

   At the beginning I also mentioned that data systems and voting machines are very vulnerable and critical when we talk about things from an election perspective.

   And if our data systems and voting machine systems are not securely protected, then there is a big chance of compromise.

   As for the question about the positive side: at the beginning I also mentioned that this has given power to the common person to participate in the political process. We have seen lots of examples; even in Nepal we have seen a single person, without any incumbent group behind them, get elected as mayor or to parliament using only the platforms.

   So yes, it gives significant power. As our parliamentarian mentioned, an unknown person can also get elected by using this content and participating in the process.

   >> MODERATOR: Thank you. It seems like you have something to add to that.

   >> Yeah, I would like to say that the link between electronic voting and AI is not direct. With eVoting, the problem is that someone may corrupt the vote, change the vote: you vote for A and the machine counts it for B. That can be done by attacking the machine.

   But what AI brings is that it also lowers the barrier. To attack a machine normally requires certain skills, but AI gives those skills to normal people. You can attack, you can hack something, just by using AI; it gives you the skill.

   So voting machines are vulnerable not only to high-profile hackers; even ordinary people are now able to hack the system. For now, though, most of those who interfere with elections use deep fakes to change the voters themselves, so they can be convinced to vote for someone they would not normally vote for.

   And in that case it's hard to assess the impact: it's like advertising on TV. It's not easy to determine what the result of the election would have been if that kind of deep fake had not been there.

   But with tampering with a voting machine, it's very easy to see how many votes were changed by an attack. And that's where AI gives access: anyone is now able to hack and change the data on those machines.

   >> MODERATOR: I'm not sure if I'm reading between the lines correctly, but are you implying that AI could also be useful for detecting some kinds of tampering? Otherwise the link is definitely not direct.

   But thinking of the third question: canceling an election can be a problem. If AI can cause so much distrust in elections that they tend to be canceled too easily, that could be a problem in itself. Maybe Roxana would like to address that.

   >> ROXANA RADU: Yes, thank you very much for this question. I think it's a very important one, and it's definitely on everybody's mind back home in Romania, I can tell you that, with the court decision announced at the beginning of December. We still don't know the dates of the next election, but everybody is asking: can we actually trust the next round of presidential elections if we have proven post facto, after the fact, that there was so much interference? What are we putting in place to prevent this from happening next time?

   It's a big question, because we've just had parliamentary elections, and those elections were not challenged from the perspective of the process. But they showed that the vote was very split, so a coalition needs to be agreed before we have a date for the new presidential elections. It's going to take a while, and we'll see what happens in between and whether we have institutional measures to address this.

   But just on the question of trust: right now there is also an indirect undermining of the democratic process through the cancellation of elections. On the one hand, yes, this was a reaction to what had happened. But for many people the decision is perceived as a violation, in some respects, of the democratic process itself: a court decision comes in and annuls the vote of 52% of eligible voters.

   So this is something that needs to be addressed in a broader conversation about how democracy itself transforms with the rise of AI and digital technologies more broadly.

   In a way, the processes we've had in place for so long, including some of the institutions overseeing the democratic process, were created in an era with very little technology around. Right now we're talking about transforming these processes all together, and we have to rethink the relationship between the forms of democracy we have and the technology that is available.

   And very briefly, if I may jump in on the question of eVoting: if we look at the data on this, actually very few countries around the world have opted for eVoting.

We have very good examples in that category, Estonia being one of them.

   We have a couple of examples from outside the Western world as well. But altogether, many countries have stayed away from it, because the feeling is that we are not able to prevent the sorts of manipulation that might happen with eVoting. Most countries have had that conversation, and most have decided not to move their voting processes online.

   Ultimately, that may or may not make a big difference, because in the case of Romania we had paper ballots and the integrity of the whole process was still compromised. So before we get to that final stage of whether the vote is cast online or on paper, we need to think about the other, intermediary stages: electoral registration, the campaigning ahead of the elections, the vote counting itself, and verification and reporting. It seems there were cyberattacks at the time of the vote counting, as those paper ballots were being entered into the system, as well as during the post-election audit, which is another very important part of the democratic process. We have to have safeguards in place across the whole cycle of the electoral process, not just at the time of casting or counting the vote.

   >> MODERATOR: Thank you. It does occur to me that somebody might deliberately pretend to be attacking an election so as to get the vote canceled, in order to undermine trust in the system. Instead of actually trying to affect the result, they just create the impression of an attack so that people no longer trust the system. And AI may make that easier too; it might even be impossible to do effectively without it.

   And another interesting observation: in some countries the incumbents have so much power that they tend to win anyway, so foreign interference might actually be good for the democratic process there. But that's also something very difficult to, let's say, assess in any useful way.

   But maybe you want to carry on from that. If not, I might suggest you consider what kind of power AI actually adds to the specific issue of spreading disinformation.

   >> There's a question over there.

   >> MODERATOR: A question on this side? Okay. Hands up, who's first? Sorry for not noticing.

   >> Okay. My name is Nana -- can you hear me? Okay. I have a question, especially as someone who works specifically on AI and ethics.

   Considering the very big distinction between algorithms and AI, which are very different things, there's a lot of conversation around algorithmic discrimination against specific candidates, right?

   And from what I hear, there seems to be a lot of responsibility placed on the platforms. Beyond the responsibility, I'm also hearing a lot of trust, because words like "trusted partner" have been used. And I'm wondering, is it not too much? Because in the real-world sense, platforms are like vendors, right? They're businesses set up for profit. They're not NGOs. They're not Civil Society organizations. It's like expecting a newspaper to publish your views and not the views of those who pay, or of the people who set it up to push their own agenda.

   I'm wondering if it would not be more beneficial to push for algorithmic transparency, in the sense of publications that let people understand what the algorithm considered in pushing certain content into someone's feed, and so on.

   Because we have received a lot of feedback from very right-wing people about platforms like X, TikTok, and IG. That feedback says that previously these platforms pushed a very left-wing agenda, right?

   Very liberal, very "this is what we want to see, this is how the world should be run." They were run like an alternate universe to actual real life.

   But there has been a push, a shift in agenda, and now they feel that some sort of balance has been achieved. I disagree with this, but that's a different conversation.

   And I'm wondering: in demanding certain things from the platforms, are we not, one, trying to curb free speech, because that speech doesn't look like the speech we're used to or the speech we like?

   And two, why do we trust these platforms? Why do we expect these platforms to comply with things other than regulatory requirements? Why do we trust these platforms so much? That's my big question. Why do we trust these platforms so much?

   Thank you.

   >> MODERATOR: Thank you. I'll hand that over to you, but I'll ask you to be brief because we only have ten minutes left in the session.

   >> I'll be brief, although there's a story to be told about it. Thinking about your question, I'll start with an example: during the COVID pandemic there were conspiracy theories, and people believed there were larger agendas at work in the world. And what a lot of research found was that these people were generally ostracized in society; there was a social and economic problem that left people out.

   I feel like we see this also playing out in the election space, where people who are isolated come to believe in such disinformation campaigns, deepfakes, and different examples like that.

   So when talking about governance, and this is to all the speakers: to what extent, if any, do you think that the intervention should be more on a social level rather than tech governance or platform governance?

   >> MODERATOR: So two very good interventions and questions there. Who would like to go first? I think Babu looks like he wants to speak. Go ahead.

   >> BABU: I was also going to come to this very topic: influence and elections.

   So who is providing disinformation? Who is spreading disinformation in the election process?

   It has to be very clear. So now we are very clear that there is the possibility of misinformation and disinformation in the election process, or on common platforms. So the question is whom to trust, and whether we need to trust the platform providers at all.

   If we engage on our own terms, then we don't have to trust. It's our choice how far we confine our engagement on the platform. If you lock down your privacy settings and limit your engagement, then the process will be more secure, right?

   So it's very important that we ourselves decide what level of engagement we have on the platform.

   And when there is disinformation or misinformation, who is responsible for removing it? Now, platform providers have their own systems. There are two models. One is automated: millions of pieces of content are moderated by the platform providers based on their own standards. And there is another model based on reporting: when you complain, the platforms will respond, evaluate it, and if they think it has to be removed, then they'll remove it.

   And also there are now significant [?] out there, lots of [?] out there, and the role of responsible [?] is very significant during elections, and during regular times as well. But during an election, the actors in that election have to be very precise about how we fight this disinformation. By actors I mean the Election Commission, law enforcement, candidates, voters, and Civil Society. All of them have to be more careful than in regular times.

   Because there could be targeted disinformation supplied during that process.

   And it's not only about business; businesses should also be accountable. Accountability comes the moment you start a business, any kind of business. It comes together with it; doing business and being accountable are not separate things.

   It's very important that platform providers be more accountable too: when there is a sensitive period, they have to take more care, because they have more responsibility.

   So in this way we can address these issues. And it also takes a governance perspective: of course, we need a certain model of governance, a regulatory perspective, and in that way we can address this. Thank you very much.

   >> MODERATOR: I was just reminded that we have only five minutes left of the session, so we'll have to start wrapping up slowly. But let's do one more round of panelists commenting.

   >> I will be short, very short. I would say that previously some of those platforms were seen as left-wing, but the perception has changed, because we think something changed in the algorithms they use.

   And as a Member of Parliament, what we want from those platforms is one thing you mentioned: transparency. Just to know what is being run in the background, so we can see whether there is some fairness, some equity, in how they treat information coming from different sources.

   If we have that transparency, there will be more trust. Thank you.

   >> ROXANA RADU: Very briefly, on the first question, I agree with Babu that there's a need for more transparency over funding, over the labeling of that content, and also over promotion, right? These algorithms are not a different species; they should not be completely unaccountable. We need to look into how they promote content and why, and whether certain content gets preferential treatment or not.

   And whether that results in manipulation or not is the second part of the question. But there is funding involved, obviously, and that also has to be transparent and fully withstand scrutiny.

   Since platforms have become the new public sphere, they're not just businesses; they're more than businesses. They're the new public sphere. That's where communication actually happens. People might not turn on the TV anymore, but they'll receive their news from encrypted groups, different platforms, and so on. So they provide a public channel for communication during elections.

   Most countries have rules in place for how you promote yourself during elections, and the platforms cannot live in a different universe. They need to abide by those rules. They are bound to apply national legislation in these electoral cycles. So this is simply a question of respecting existing legislation.

   And on the second question, very briefly: should intervention be broader than just tech governance, should we look at the social aspect as well? Absolutely, I agree with you. I think we need to work on multiple levels. So far we've given quite a bit of attention to technology, and we have not found the right solutions to all of these problems, but we haven't looked at what could be done on a social level beyond literacy and raising awareness. I think we need to work on issues of poverty, issues of connectivity, and many other aspects, including welfare, to be able to give people equal chances in society.

   And that's going to make democracy a better place for everybody.

   >> MODERATOR: My watch says we have 45 seconds to go. I would like to hand over to Dennis, if you have a final comment to make.

   >> DENNIS REDEKER: Let me just say that this conversation has been thrilling. I really appreciate both the positive and scary scenarios for the use and also the misuse of AI in the context of elections.

   I think this is only the start of a conversation that we'll be having. We started planning this session at a time when we thought AI and elections in 2024 was going to be scary. This mirrors what Roxana said earlier: we had a phase where we thought we would have nothing to talk about in December because nothing was going to happen. And then came the Romanian elections, and there will be more, and more things that we have to deal with.

   So I think this is the start of a conversation, and also a start towards more regulation and more transparency in that field. Thank you, everyone, on behalf of the Internet Rights and Principles Coalition, and thank you to the speakers and moderators for jumping into this fray.

   >> MODERATOR: Thank you to the panelists, and to the audience as well for the great questions we had. Now we are 30 seconds over time, so let's close it here. Thank you.

   (Applause)