IGF 2023 - Day 2 - Town Hall #117 Protect people and elections, not Big Tech!

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> BRUNA MARTINS dos SANTOS:  I'm going to start off with the session.  Welcome to the town hall that's called Protect People and Elections, not Big Tech!  An initial disclaimer: this town hall is being organized by Digital Action.  We are conveners of the Global Coalition for Tech Justice, a brand new movement that is discussing big tech accountability and how to safeguard elections, and trying to start new conversations, or improve the current ones, about why we should care about elections and why we should bring this conversation even closer to social media companies. 

    The Global Coalition for Tech Justice is a movement with over 100 organizations and individuals from all over the world.  A few of them are with me at this panel.  We have more and more organizations and academics joining this space to discuss some of the things that we are planning for today. 

    As for those of you that don't know Digital Action, we were founded in 2019 with the mission to protect democracy from digital threats.  This is going to be one of those heavy conversations about how social media affects democracies, and how it works the other way around as well.  Our work has involved catalyzing collective action, building bridges, and also ensuring that those directly impacted by tech harms are the ones who are empowered, the ones that we are listening to. 

    The Global Coalition for Tech Justice and its Year of Democracy campaign have the general goal of bringing in the perspectives of the victims, and of the places in which social media companies invest less, or much less, in people's day-to-day lives.  So that is a little bit of what we want to do. 

    I want to first bring in Alexandra Pardal, the Global Campaigns Director at Digital Action.  She's going to open this panel for us and explain a little more about the Year of Democracy campaign and what we're all about.  Alex, I think you're in the room, right?

>> ALEXANDRA PARDAL:  Yes, I am.  Thank you, Bruna.  Wonderful to be with you here.  Welcome to all of our panelists and participants here today, and to those joining us remotely from elsewhere. 

    This is a global conversation on how to protect people and elections, not big tech.  I'm Alexandra Pardal from Digital Action, a globally connected, movement-building organization with a mission to protect democracy and rights from digital threats. 

    In 2024, the year of democracy, more than 2 billion people will be entitled to vote as US presidential and European parliamentary elections converge with national polls in India, Indonesia, South Africa, Uganda, Egypt, Mexico, and some 50 other countries, the largest megacycle of elections we've seen in our lifetimes.  Our information spaces, and our ability to maintain the integrity of our information and uphold truth and a shared understanding of reality, are more vulnerable than ever to foreign and malign influence in elections.  With new tech like generative AI making it easier for domestic or foreign actors to manipulate and lie, and a financially motivated, globally active disinfo industry, threats have never been bigger nor more pervasive.  Elections are flashpoints for online harms, and there are offline consequences. 

    Now, over the past four years, Digital Action has collaborated with hundreds of organizations on every continent, supporting the monitoring of digital threats to elections in the US and elsewhere, and has led large civil society coalitions demanding a strong Digital Services Act in the EU and better policy against hate and extremism from social media companies globally.  This experience has taught us that there is startling inequity between world regions when it comes to protections from harms, from disinformation, hate, and incitement to manipulation of democratic processes.  Online platforms just aren't safe for most people. 

    We know that the platforms run by the world's social media giants, Meta, Google, X, and TikTok, have the greatest global reach they've ever had and are at their most powerful, but safeguarding efforts to protect information integrity globally have been weak.  For instance, Facebook says it has invested $13 billion in its platform's safety and security since 2016, but internal documents show that in 2020, the company plowed 87 percent of its global budget for time spent on classifying false or misleading information into the US, even though 90 percent of its users live elsewhere.  This means there's a dearth of moderators with cultural and linguistic expertise, and Facebook has been unable to effectively tackle disinformation at all times, most consequentially during elections, when disinformation and other online harms peak. 

    Similarly, non-English languages have been a stumbling block for automated content moderation on YouTube, Facebook, and TikTok, which struggle to detect harmful posts in a number of languages in countries at risk of real-world violence and in democratic decline or autocracy.  What this means is that the risks on the horizon in 2024 are very serious indeed, at a time when social media companies are cutting costs, laying off staff, and pulling back from their responsibilities to stem the flow of disinformation and protect the information space from bad actors. 

    If some of the world's largest and most stable democracies, the United States and Brazil, have been rocked by bad actors mobilizing on social media platforms, spreading election disinfo and organizing violent assaults on the heart of their democracies, imagine next year, when we'll see elections in democracies under threat, like India, Indonesia, and Tunisia, alongside a whole swath of countries that are unfree or at risk, where citizens hope to hold on to spaces to resist the manipulation of the truth for autocratic purposes.  How can online platforms be made safe to uphold information and electoral integrity and protect people's rights? 

    The challenge of the 2024 elections megacycle is a call to all of us to show up, ideate, innovate, bring our skills, talents, and any power we have to the table, and collaborate. 

    As an example of what's in the works, and as background to the perspectives we're going to hear today, together with over 160 organizations, experts, and practitioners from across the world, we've convened the Global Coalition for Tech Justice to launch the 2024 Year of Democracy campaign, in order to foster collective action, collaboration, and coordination across election countries next year.  Together with our members, the Global Coalition for Tech Justice will campaign, research, investigate, and tell the stories of tech harm in global media, supporting and amplifying the efforts of those on the front lines and building policy solutions to address the global impacts of all social media companies. 

    So we're going to be actively collaborating with global stakeholders, and this conversation today is an opportunity to further these conversations and get collaborations off the ground with all of those who share the goal of safe online platforms for all. 

    I'm delighted to introduce this session for this important global conversation on how we protect 2024's megacycle of elections from tech harms and ensure social media companies fulfill their responsibilities to make their products and platforms safe for all. 

    I'm really happy to hand back to Bruna to introduce our panelists and the discussion this morning.  Thank you.

>> BRUNA MARTINS dos SANTOS:  Thank you so much.  Welcome to the session as well. 

    As she just brought up, this is really a global conversation we want to have.  We want to spark a discussion on how we can collectively ensure that big tech plays its part in protecting democracy and human rights in the 2024 elections.  And it's not just one; it's dozens of elections, as everybody has been talking about this week.  It's a rather key year for everyone. 

    We have two provocative kickoff questions for the panelists.  Ashnah, I'm going to bring you into the conversation first.  Ashnah is Programs Coordinator at CIPESA. 

    The first question for you is whether you consider that social media platforms and content moderation, or the lack of it, are shaping democratic elections, and if so, how?

>> ASHNAH KALEMERA:  Thank you.  Good evening everyone, or good morning, as Alex said.  I guess we're all in very different time zones at the moment. 

    It's a pleasure to be here.  Thank you to Digital Action for the invitation and the opportunity to have this very important discussion. 

    I'm Ashnah Kalemera.  I work for CIPESA, the Collaboration on International ICT Policy for East and Southern Africa.  We are based out of Kampala, Uganda, but work across Africa promoting effective and inclusive technology policy, and also its implementation, as it intersects with governance, good governance obviously, upholding human rights, as well as improved livelihoods. 

    I like to start off these conversations on a light note.  Very often, these panels are dense in terms of spelling doom and gloom, so first, I would like to emphasize that technology broadly, including social media platforms and the internet, has huge potential for electoral processes and systems.  It is critical in ensuring that voter registration is complete and accurate, enabling remote voting for excluded communities or remotely based voters.  It has been critical in supporting campaigns, as well as voter awareness and education, results transmission and tallying, and monitoring malpractice, all of them critical to electoral processes and lending themselves to promoting the legitimacy and inclusiveness of elections in states that have democratic deficits, which many African states do.  So I think that light note is very important to highlight before we go on to the doom and gloom that this conversation will likely take. 

    Now we start the doom and gloom.  Unfortunately, despite those opportunities, there are immense threats that technology poses for electoral processes in Africa and, I guess, for much of the world.      

    Increasingly, we're seeing states, authoritarian governments especially, leveraging the power of technology for self-serving interests.  A critical example there is network disruptions or shutdowns.  I see KeepItOn Coalition members in the room; they work to push back on those excesses. 

    On disinformation and hate speech: users, governments, platforms themselves, as well as private companies and PR firms, are actively influencing narratives during elections, undermining all the good stuff that I mentioned in the beginning.  Very often we ask ourselves at CIPESA, as I imagine everybody in the room does, why disinformation thrives, because pretty much everybody is aware of the challenge that it poses.  In Africa especially, it's thriving, and thriving to very worrying levels.  One of the reasons, again, is something positive: it's because technology is penetrating, and penetrating very well, on the continent.

    Previously unconnected communities are gaining access to information, which, again, in the context of elections is great, but in the case of disinformation, it's a significant challenge. 

    Second is the youth population on the continent, with many of them coming online via social media.  There are always jokes, in sessions I've attended where there is African representation, that for many Africans, the internet is social media.  That, too, is enabling disinformation and hate speech to thrive. 

    Third is conflicts.  The elections that we're talking about are happening in very challenging contexts that are characterized by ethnic, religious, and geopolitical conflicts.  Again, all the nice stuff I mentioned earlier on is then cast in a really dark shadow.  Like Alex mentioned, the context I just described is going to be a very significant stress test come 2024 and beyond for the continent, and we're likely to see responses that undermine the potential to uphold electoral legitimacy, but also for citizens to realize their human rights. 

    One of those reactions we're likely to see, from a state perspective, is the weaponization of laws to undermine voice or critical opinion online, which, again, undermines electoral processes and integrity. 

    Unfortunately, given the context around conflicts, we are likely to see a lot of fueling of politically motivated violence, which restricts access to critical information and ultimately perpetuates divisions and hate speech and leads to offline harms. 

    Bringing the conversation back to big tech: on the continent, unfortunately, we're seeing very limited collaboration between tech actors and media and civil society in, for instance, identifying, debunking or prebunking, depending on which side of the fence you sit on, and moderating disinformation. 

    Also, the processing and response times for reported complaints are really slow.  This discourages reporting and ultimately maximizes, in some cases, the circulation of disinformation and hate speech. 

    There are also significant challenges around opaqueness in moderation measures.  We've seen the case of Uganda during the previous election, where a huge number of, what's the word, automated accounts were taken down for otherwise not very clear reasons.  That led to a response from the State, i.e., shutting down access to Facebook, which remains inaccessible in Uganda to date. 

    Given those pros and cons, either side of the coin I just described for the African continent, it's important to have collaborative actions and movements just like the one Digital Action is spearheading, which we really wanted to be a part of.  Efforts in that regard should focus on showing up and participating in consultation processes, just like this one and others, where there are opportunities to challenge or provide feedback and comments.  I think that's really important.  Such spaces are not many. 

    We at CIPESA host the annual Forum on Internet Freedom in Africa; we marked 10 years a couple of days ago.  For the second time, we were able to have the Meta Oversight Board present and engaging.  They admitted that cases from the African continent are limited, but spaces like the Forum on Internet Freedom in Africa, which CIPESA hosts, provide that opportunity for users and other stakeholders to deliberate on these issues. 

    I cannot fail to mention that research and documentation remain important.  Of course, we're a research think tank, churning out pages and pages that are not necessarily always read, but I think it's important, because evidence-driven advocacy is critical to this cause.

    Skills-building, again, and digital literacy, fact-checking, and information verification remain critical, but so does leveraging norm-setting mechanisms and raising the visibility of big tech challenges in UN processes, the Universal Periodic Review, and the African Commission on Human and Peoples' Rights.  These conversations are not filtering up as much as they should.  There should also be interventions that promote and challenge the private sector to uphold its responsibilities and ethics through application of the UN Guiding Principles on Business and Human Rights. 

    Lastly, strategic litigation.  That is also an opportunity before us in terms of challenging the excesses that big tech poses for elections in the challenging contexts I've just described. 

    Thank you. 

    (Applause)

>> BRUNA MARTINS dos SANTOS:  Thank you very much.  Just picking up on two of the topics you spoke about, the weaponization of political processes and politically motivated violence, I think that bridges very well with the recent scenario in Brazil, with, unfortunately, yet another attack on the capital, after a lot of discussions on a fake news draft bill and regulation for social media companies. 

    Yasmin, I'm going to bring you in now.  Yasmin is from FGV Rio de Janeiro and also the co-coordinator of the Dynamic Coalition on Platform Responsibility.  Welcome.

>> YASMIN CURZI:  Thank you so much, Bruna.  Could you please display the slides?  Thank you so much. 

    Addressing the first question Bruna posed to us, are social media platforms and content moderation shaping current democratic elections?  To answer this question, I would just like to give some brief context about the Brazilian elections scenario regarding platform responsibilities.  There are two main pieces of legislation that deal with content moderation issues.  Specifically, since 2014, we have had the Brazilian civil rights framework for the internet, aka the Marco Civil, probably known by many of you here. 

    It establishes our basic principles for internet governance, such as free speech, net neutrality, and protection of private and personal data, but it also establishes liability regimes for platforms regarding user-generated content, in Articles 19 and 21. 

    To sum up really quickly, Article 19 created a general regime in which platforms are only liable for illegal digital content if they do not comply with a judicial order for the removal of specific content, if it is within the platform's capability to do so. 

    There are only two exceptions to this rule: one for copyright, and one for the non-authorized dissemination of intimate images, for which a mere notification from the user or their legal representative is sufficient. 

    The second piece is the Code of Consumer Defense, aka the CDC, which considers users to be hyposufficient and vulnerable in their relations with enterprises.  Article 14 of the CDC establishes a strict liability regime, in which enterprises or service providers are responsible, regardless of the existence of fault, for repairing damages caused to consumers by defects in the service, or by insufficient or inadequate information about its risks. 

    So, in a sense, these two pieces of legislation give users many protections online regarding harmful activities and illegal content.  Nevertheless, users are still unprotected from the many online harms that are not clearly illegal, such as disinformation, or that are not even perceived by them as harms, like opaque gatekeeping, shadow-banning, and the microtargeting of problematic content. 

    Regarding the first issue, given the nonexistence of legislation that deals specifically with coordinated disinformation, our Electoral Superior Court has been enacting resolutions to set standards for political campaign advertising.  Also, the Electoral Superior Court, in the scope of its program on fighting disinformation, established partnerships with the main platforms present in Brazil, such as Meta, Twitter, TikTok, WhatsApp, and Google, which signed official agreements stating what their initiatives would be.  Most of them committed to creating reporting channels, labeling electoral-related content, redirecting users to the Electoral Court's official website, and promoting official sources.  Instagram and Facebook also developed stickers to encourage users to vote, in spite of voting already being mandatory in Brazil.  Nevertheless, we don't have enough data to see the real impact of these measures, just generic data on how much content was removed on a given platform, and also generic data on how they are complying with the legislation.  This sort of data has been offered by the main platforms in Brazil since the establishment of partnership programs with fact-checking agencies in 2018.  I'm not saying that they are not removing enough content.  What I want to highlight here is that we don't have the data or metrics to understand what these generic numbers mean.  Nor do we have knowledge of whether the content is being removed fast enough not to reach too many users.     

    Furthermore, some of these efforts to combat falsehood on YouTube, for example, were themselves a risk for democracy in the 2022 elections.  Through the Official Sources program, as the slide displayed right now shows, a hyperpartisan media channel, Jovem Pan, was being actively recommended to YouTube users. 

    To give an example, on election day, Jovem Pan was disseminating a fake audio clip, allegedly from the famous Brazilian drug dealer Marcos Camacho, aka Marcola, in which he supported Lula's election. 

    Justice Alexandre de Moraes of the Brazilian Federal Supreme Court, who was presiding over the Electoral Superior Court, issued a court order for removal of the content, but not before it had already reached 1.17 million views.  Supporters also shared this video in at least 38 WhatsApp and Telegram groups monitored by the fact-checking agency Aos Fatos. 

    To Bruna's question, are social media platforms and their content moderation shaping democratic elections, I tend to answer no, or at least not significantly.  Either we have no significant data, or we do not have enough information on their actions and results. 

    That's it.  Thank you.

    (Applause)

>> BRUNA MARTINS dos SANTOS:  I'm going to bring in Lia right now.  Lia is representing IPANDETEC, is a fellow Latin American, and comes from yet another region of the world that's facing a lot of these discussions in terms of proper resource deployment and policymaking as well.  Welcome to the panel.

   (Audio muted)

>> LIA HERNANDEZ:  Okay.  Perfect.  Well, IPANDETEC is a digital rights organization based in Panama City but working across Central America, so I'm going to refer mainly to the recent electoral process in Guatemala and the next process in Panama, which will take place in May 2024. 

    The first thing is that I want to send all of my support to the Guatemalan people, who are mobilizing in the streets demanding that democracy be respected after their country's recent elections.  In Central America, digital platforms make tools available to our electoral public entities to help hold them to verified information and to avoid any degradation of our digital rights, our fundamental rights to protest, freedom of expression, freedom of the press, and privacy. 

    Currently, in countries such as Panama, my country, digital media platforms and journalists were ordered by the Tribunal Electoral, the electoral public entity, to remove information from their platforms, and they were fined because they were posting information about Ricardo Martinelli.  I don't know if you know about Ricardo Martinelli.  He's very famous, as famous as Lula or Bolsonaro in Brazil.  Well, he was a former president of Panama, and he's a candidate in the next election in Panama because he wants to be president again.  By the way, he's the biggest manipulator of privacy in the country. 

    So the electoral entity in Panama ordered these journalists to remove information about him, claiming it was against democracy in the country because it was against his privacy and his own image. 

    So the question is, if big tech is giving tools to our electoral public entities to promote democracy, to improve access to information, and to promote fundamental rights, why would electoral entities become a barrier to citizens, journalists, and communicators, whose main role is the legitimate duty to inform, the duty to communicate to citizens what is happening in their countries?  Even more so in this case, which is about corruption, because this former president is very corrupt. 

    So freedom of expression, freedom of information, and freedom of the press are limited in Panama when journalists try to report based on the principle of public interest, the interest we have in knowing the good, the bad, and the ugly of our candidates in our electoral process. 

    These platforms must match their words with their actions, because even though they don't have any authority in the country over the decisions of the electoral branch, they should not become part of the problem and limit constitutional guarantees such as freedom of the press. 

    So mainly, this is a very recent case that we are following in Panama.  And thank you so much, Bruna, for the space on this panel.

>> BRUNA MARTINS dos SANTOS:  Thank you so much.  It's very interesting to see this ongoing line of major interferences with expression and with conversations online.  It's not just one or two cases; sometimes the issue is responsiveness, sometimes it's the ongoing conversation or cooperation that social media platforms should have with authorities, and it would be interesting to develop that. 

    There are also downsides to those partnerships when they go down the path of further requests for data and access, or even privacy violations.  It is definitely a hard and deep conversation. 

    I'm going to go to Daniel Arnaudo from NDI.  Welcome, and the same question as for the others.

     >> DANIEL ARNAUDO:  Thank you for having me.  Thanks everyone for being here.  We're really pleased to be a part of this coalition.  For those that don't know, I'm from the National Democratic Institute.  We're a nonprofit, nonpartisan, nongovernmental organization that works in partnership with groups around the world to strengthen and safeguard democratic institutions, processes and values to secure a better quality of life for all. 

    We work globally to observe elections and strengthen electoral processes.  My work particularly is to support a more democratic information space.  In this work, we engage with platforms around the world, both through coalitions like this one and others such as the Global Network Initiative and the Design 4 Democracy Coalition.  We help highlight issues for platforms, perform social media monitoring, and engage in consultations on various issues ranging from online violence against women in politics to data access and crisis coordination. 

    As was mentioned, 2024 will be a massive year for democracy.  From our perspective, we're particularly concerned that the contexts we work in throughout the global majority, and particularly small and medium-sized countries, do not receive the same attention in terms of content moderation, policies, research tools, data access, and many other issues.  This is all in the context of what I think is a serious disinvestment in civic integrity, trust and safety, and related teams within these organizations.  

    Just in this region, you have Bangladesh, Indonesia, India, Pakistan, and Taiwan, which will all hold elections in the coming year.  I know there will be some resources devoted to the larger countries, which, on the other hand, have massive user bases, but the smaller ones are going to receive very little attention at all.  I think this is a consistent focus for our work and for considerations around these issues. 

    I think one of the main recommendations that I would have focuses on data access: in the context of this disinvestment, we're seeing a serious pullback from access for third-party researchers. 

    We are very concerned about changes in the APIs and in different forms of access to data on the platforms, as some of my fellow panelists have discussed, for research and other purposes, particularly at Meta and Twitter, now X, and about continued restrictions in other places. 

    They're building mechanisms for access for traditional academics in certain cases, but not for researchers or broader civil society who live and work in these contexts.  Access is often provisioned through mechanisms that are controlled within large countries, in the United States or in Europe, and there aren't really systems in place for documentation or for understanding those systems, so there are huge barriers to that kind of access even when it's enabled.  That's something I would really urge companies in the private sector, and groups such as ours, to coordinate around, figuring out ways of ensuring that access in the future to shine a light within those contexts. 

    Secondly, I think they're ignoring major threats to those who make up half or more of their user base, namely women, and particularly those involved in politics, whether as candidates, policymakers, or ordinary voters.  Research has shown that they face many more threats online, and platforms need to institute mechanisms that can support them, both to protect themselves and to understand threats, with support on these issues as necessary. 

    We have conducted research that shows the scale of the problem and also introduces a series of interventions and suggestions for companies and others that are working to respond to these issues.  But this is really a global problem that we see in every context we work in, and I think many in the room will understand this threat and this issue. 

    Finally, I think there's a need to consider critical democratic moments and to work within those specific situations, looking at how companies can work with the broader community to manage them: not only elections, but major votes or referenda, and also more critical moments such as coups, authoritarian contexts, and protests, really critical situations.

    If they cannot appropriately resource these contexts, in situations they may not have a great understanding of, they at least need to engage with organizations that understand them and can help them react and make decisions effectively in these challenging situations.

    I think the retreat from programs such as Trusted Partners, in the case of Meta, and the consistent whittling down of the teams that are addressing these issues, will have impacts on these places, on elections, on democratic institutions, and ultimately on these companies' bottom lines. 

    The private sector should understand that these are not only moral and political issues, but economic ones that will push people away from these spaces as they become hostile or toxic to them in different ways. 

    We understand the tradeoffs in terms of profit and organizing systems that are useful for the general public, but we would encourage companies to reflect that the democratic world is integral to the open and vibrant functioning of these platforms. 

    As with 2016 and 2020, 2024 will be a major election year and will also likely represent a new paradigm in content moderation, in information manipulation campaigns, and in regulation, which is another kind of threat that companies need to consider, along with a host of related themes that will have big implications for their profits as well as for democracy.  So I think they are going to ignore these realities at their peril.

>> BRUNA MARTINS dos SANTOS:  Thanks a lot.  Also, thanks for highlighting some of the asks of the Year of Democracy campaign.  The campaign has issued documents setting out what we would like to require from social media companies, such as mainstreaming human rights and bringing in more mechanisms to protect users, addressing the problem at its real scale.  We are not just saying issue plans for elections; we are also saying deploy the solutions, invest the money.  It's not just one country that matters; it's also Brazil, India, Kenya, Tanzania.  That's what's really core and relevant about this whole situation for sure.

    I would like to ask if anyone has any questions for the panelists or would like to add any thoughts to the conversation.  There is a microphone in the middle of the room. 

    >> Thank you for giving me some space and the ability to express myself.

    I'm from Russia.  We have a digital election system in Russia, and we are talking about threats posed by social media platforms all around the world, Meta, Facebook, Instagram, Google, Snapchat, but we didn't talk about deeper threats to these digital election systems. 

    For example, two months ago, we had elections all over Russia, and our digital election system was hit by a denial-of-service attack by a Ukrainian party seeking to disrupt the elections.  Voting was disrupted for three or four hours, and citizens were not able to actually vote.  So this is not about harming Russia as a state; it's about harming Russian citizens as citizens.  That's the number one problem. 

    The second problem, I think, was mentioned before, but I think it goes a little bit deeper, because we have talked a lot about global media platforms' involvement in information manipulation and the spread of fakes and disinformation, but we didn't talk about global media platforms' position, which tends to be neutral but is not always neutral in a conflict, because there are two sides, and sometimes global media platforms choose sides.  What we see and talk about a lot is that global media platforms have closed, secret recommendation algorithms which basically form the news feed for users.  For example, in some countries in Africa, and you can correct me, Facebook actually represents the internet for some people, and Facebook could start a revolution in a click just by altering users' news feeds through its recommendation algorithms. 

    And nobody knows how these algorithms work, and I think the internet society, the global international community, the IGF included, should put more pressure on global media platforms to make these algorithms more transparent, because people should know why they are seeing this or that content.  That's all.  Thank you so much for giving me some time. 

>> BRUNA MARTINS dos SANTOS:  Thanks a lot.  Any other questions?

>> LAURA:  Hello.  Thank you for the panel.  My name is Laura.  I'm from Brazil.  I'm here with the youth delegation, but I'm also a researcher at the School of Communication, Media and Information at the Getulio Vargas Foundation in Brazil. 

    I would like to hear more about the issue of access to data for academic research and civil society research as well.  As a center specializing in monitoring the public debate on social media, we are very concerned with the recent changes mentioned, by Yasmin as well, regarding data access for us.  I would like to hear more about what kinds of tools and mechanisms the academic community and civil society in general can access to fight those restrictions and to face these issues, not only in the regulatory sphere, where this debate is present, but also in a broader way.  Thank you.

>> BRUNA MARTINS dos SANTOS:  Thank you so much.      

   And the last question?

>> ALEXANDER:  I'm Alexander, from a country in which, in the spring of next year, 150, or rather 145, million people will elect Vladimir Putin as president.  I have two points.  First of all, I would like to say something about Bosch Digitalis, because the Russian Central Election Commission did not confirm anything to users about its electronic electoral assistance systems.

    Unfortunately, such systems in Russia were created by Russian big tech.  Kaspersky created one system used in Moscow, and Rostelecom, which could be considered big tech, created another one.  The systems are completely untransparent.  They do not comply with national commissions' recommendations or other kinds of recommendations for digital transparency and, in my view, are intended just for faking results.

    If you are interested in Bosch Digitalis, ask me later.  What I would like to ask, maybe not the panel, but everyone: has somebody participated in elections recently?  Yeah, okay.  Have you tried to use platforms for your promotion? 

    Now, I would also like to note that Facebook is no longer possible, no longer legal, to use for promotion in Russia.  Before, I created a page as a political activist, a political candidate, on Facebook and wanted to advertise myself to my constituency, about 20,000 voters.  So I asked Facebook, please make a suggestion, and they suggested two contacts for 10 bucks.  I think, in some cases, platforms don't understand the requirements of candidates who are not running for president.  We need to work with them on this; they want too much money for promotion, because, okay, if I were selling birthday cakes, maybe two contacts for 10 bucks would be reasonable, but not for someone who wants to advertise himself to a constituency.  So such work with platforms, helping candidates, especially in restrictive regimes where physical advertisements are no longer possible, also should be done. 

   Thank you very much.

>> BRUNA MARTINS dos SANTOS:  Thank you.  We have one extra question from the chat that I'm going to hand over to you all.  You don't need to answer all of them, just the ones that speak to you the most, I guess.  The one from the chat is: what should be done legally when cross-border digital platforms like Meta refuse to cooperate with national competent authorities regarding cybercrime cases, like incitement to violence, the promotion of child pornography and private images, and even more serious crimes, and refuse to establish official representatives in the country?  A rather dense question.  I will give the floor back to you.  And as we move to the very end of the session, we only have 12 more minutes, so I would ask you, in a tweet, to summarize what your main recommendation would be for addressing these so-called global asymmetries in big tech accountability.  It's difficult to summarize that, but if you have a tip, an idea, a pitch for that, it's very much welcome.  I'll start with you, Ashnah.

>> ASHNAH KALEMERA:  Thank you, Bruna, and thank you for the very rich questions.  I think they highlight that this conversation is not limited to elections and misinfo and disinfo or hate speech, but touches very many other aspects around them. 

    The DDoS attacks you speak of go to tech resilience, not just of civil society organizations but even of electoral bodies and commissions, entities that are state-owned or state-run and leverage technology as part of elections, as well as other conversations around accessibility and exclusion, because some of the technology around elections excludes key communities, which brings about apathy and low voter turnout, all of them critical to the conversations around elections. 

    Similarly, the point around platform positions and the power of these tech companies to literally start revolutions, to borrow your word, I think that, too, is an area that is critical to deliberate more on.  The answers are not very immediate.  Some of the work that we've done researching how disinfo manifests in varying contexts has highlighted that the agents, the pathways, and the effects vary from one context to the other.

    Like I mentioned in the beginning, in contexts where there are conflicts, religious or border conflicts or electoral conflicts, the manifestations are always very different.  The agents are always very different.  We're not necessarily pointing a finger only at big tech, but we are all mindful that this is a multistakeholder conversation that must be had and should be cognizant of all those challenges. 

    On the issue of research, I think that's something we have felt on the continent, the inaccessibility of data.  Previously, CIPESA leveraged data APIs, I believe that's the technical term, to document and monitor elections, social media, sentiment, and microtargeting.  That capacity is now significantly limited, so we're not able to highlight some of the challenges around big tech that emerge during elections.  That's not to say that documentation through stories or humanization would not have the same effect where access to data is limited. 

    What else did I want to talk about?  I forget, because these were such heavy conversations, heavy questions.  Yes, the conversation is much broader than just elections and big tech alone.  We all have a role to play, and engaging the least obvious actors, like electoral bodies, regional economic blocs, and human rights monitoring and norm-setting mechanisms, is also critical to the conversation.

>> YASMIN CURZI:  Regarding recommendations, I think it's only possible to have real accountability if we have specific legislation and regulation of platforms.  It's not possible to have a multistakeholder conversation when the power asymmetries are just too big for us to sit at the same table and discuss and talk with them.  They set all the rules that are on the table, so it's not possible to talk to them without regulation.

    In Brazil, for example, during the elections, the journalist Patricia Campos Mello asked Facebook, sorry, not only Facebook, Facebook and YouTube, how much they were investing in content moderation in Brazil and how far they were complying with their own agreements signed with the Superior Electoral Court, and they did not answer.  They just said that this was sensitive data. 

    And we were talking about aggregated data, how much they were investing financially to improve their content moderation in Portuguese.  So if we don't have this basic information, if we don't have ways to assess how much harmful content is being recommended by their platforms, it is quite difficult for us to be able to make public policies that address these issues. 

    I would just like to display these slides again, just some brief propaganda.  We have our Dynamic Coalition on Platform Responsibility.  Our outcome last year was a framework on meaningful transparency, meaningful electoral transparency, with some thoughts for policymakers and regulators worldwide, if they want to implement it, and also for platforms, if they are able and eager to improve their best practices, so they can also adopt this framework. 

    And this year, the outcome we are going to release tomorrow also focuses on human rights risk assessment analysis; that is our title.  It's a collaborative paper with best cases, also discussing legislation in India, the DSA, the DMA, and Brazilian legislation.  We are going to release it tomorrow; our session is at 8:30.  Thank you.  I'm sorry for doing this again; I just wanted to show the documents.  This is what I would recommend to people. 

>> DANIEL ARNAUDO:  Thanks for the questions.  Certainly, algorithmic transparency can be a good thing, but you have to be careful about how you do it and how you create systems to understand the algorithms.  They can also be gamed in different ways if you have a perfect understanding of them, so it's a tricky business. 

    I definitely think there's a need for better protections and systems for smaller candidates and different contexts.  It's a part of the system, not just the individual users, what they're seeing, and how these systems and networks can be manipulated.  Candidates need access to information about political advertising or even basic registration information. 

    I think every country in the world should have access to the same systems that are used by Meta and by other major companies, Google and others, to promote good political information, and I mean very basic information about voting processes, about political campaigns anywhere in the world. 

    On data access, certainly, you're seeing a revolution right now in terms of how the companies are providing access to their systems.  I think it's most visible at X, formerly Twitter; that has changed the way any sort of research is done on that platform.  It's much more expensive and more difficult to get at.  I think companies need to reconsider what they're doing in terms of revising those systems and making them more difficult for different groups to use.  Meta in particular will be really critical.  We need to work collectively to make sure that they make those kinds of systems, like APIs, available to as many kinds of people as possible. 

    Certainly, there are issues around placing company employees in certain countries around the world, and that can be problematic in certain ways, because these could be authoritarian contexts, and then the employees potentially become bargaining chips within certain kinds of regulations that states want to enforce.  You have to be careful about that, but I certainly understand the need to enforce regulations around privacy and content moderation and other issues.  It's something that has to be designed carefully. 

    Certainly, there's a huge crisis in terms of how companies are addressing different contexts, and ultimately, I think, they need to better staff and resource these issues and these different contexts: to have people who speak local languages, who understand these contexts, who can respond to issues and reporting, and who know what they're doing.  But this is expensive, and I don't think you're going to be able to work your way out of it through AI or something like that, as many have proposed. 

    So I just think they need to recognize that reality, or they're going to continue to suffer, as, unfortunately, will we all.

>> LIA HERNANDEZ:  Just one minute.  I think it is necessary not just to empower electoral authorities; it's even more necessary to empower citizens, civil society organizations, human rights defenders, and activists, because we are the ones really working to promote and conserve democracy in our countries.  This is my recommendation. 

    Regarding your question about data, for example, in our case we are working on monitoring gender-based digital violence against women candidates in the next election in Panama, and everything is very manual, because the digital platforms don't make the tools available to civil society; they are only available to the government.  So we are trying to reach an agreement with the electoral authority to maybe have access to the tools, because it's necessary to finish the work before the elections. 

    In our case, the data is not clean.  They don't use open data standards, so we sometimes have to find, or guess, the information that they hold and are not updating on their websites, so it's a bit difficult for us to work with these kinds of platforms.

>> BRUNA MARTINS dos SANTOS:  Thanks a lot to the four of you, and to Alex as well, following us directly from the UK.  Thanks, everybody, for sticking around.  If any of this conversation struck a chord with you, go to YearOfDemocracy.org, the website for the Global Coalition for Tech Justice campaign, and have a nice rest of the IGF.

     (applause)

`          INTERNET GOVERNANCE FORUM 2023

                    KYOTO, JAPAN             

            TUESDAY, 10TH OCTOBER, 2023  

 

    PROTECT PEOPLE AND ELECTIONS, NOT BIG TECH!

 

                    EVENT #117

                      ROOM 7

                     16:30 JST

 

This text is being provided in a rough draft format.  Communication Access Realtime Translation (CART) is provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings.  This text, document, or file is not to be distributed or used in any way that may violate copyright law.

 

>> BRUNA MARTINS dos SANTOS:  I'm going to start off with the session.  Welcome to the town hall that's called Protect People and Elections, not Big Tech! Initial disclaimer, this town hall is being organized by Digital Action.  We are conveners of the Global Coalition on Tech Justice, and this is a brand new movement that is discussing big tech accountability, how to safeguard elections, and trying to bring in new conversation or improve the current ones about why should we care about elections and why should we make this conversation even closer to social media companies. 

              The Global Coalition for Tech Justice movement with over 100 organization and individuals from all over the world.  Some of them are with we.  Actually, all of them, a few of them are with me at this panel.  We do have more and more organizations and academics joining this space to discuss some of the things that we are planning for today. 

    As for those of you that don't know Digital Action, we were founded in 2019 with the mission to protect democracy from digital threats.  This is going to be one of those heavy conversations about how social media affects democracies and the other way around works as well.  Our work has been involving some work, utilization of some collective action, building bridges, and also ensuring those directly impacted by tech harms are those that are actually in power, are those the ones that we are listening to. 

    Global Coalition for Tech Justice and Europe democracy campaign has the general goal of bringing in the perspectives of the victims or of the places in which social media companies invest less or much less in the day-to-day lives, so that is little bit of what we want to do. 

    I want to first bring in Alexandra Pardal, she's the Global Campaigns director at Digital Action.  She's going to open this panel for us and explain a little more about digital democracy campaign and what we're all about.  Alex, I think you're in the room, right?

>>ALEXANDRA PARDAL:  Yes, I am.  Thank you Bruna.   Wonderful to be with you here.  Welcome to all of our panelists and participants here today those joining us from elsewhere remotely. 

    This is a global conversation on how to protect people and elections, not big tech.  So I'm Alexandra Pardal from Digital Action, a globally connected, building movement organization with a mission to protect democracy and rights from digital threats. 

    In 2024, the year of democracy, more than 2 billion people will be entitled to vote as US presidential and European parliamentary election converge with national goals in India, Indonesia, South Africa, Uganda, Egypt, Mexico and some other 50 other countries, the largest megacycle of elections we've seen in our life times.  Our information spaces and the ability to maintain integrity of our information and uphold the truth and shared understanding of reality are more vulnerable than ever from foreign and maligning influence in elections, the use of new tech, like generative AI, making it easier for domestic or foreign actors to manipulate and lie to financially motivate a globally active disinfo industry, threats have never been bigger nor more pervasive.  Elections are flashpoints for online harms and there are offline consequence. 

    Now, over the past four years, Digital Action has collaborated with hundreds of organizations in every continent supporting the monitoring of digital threats to elections in the US and elsewhere and led large civil society coalitions demanding a strong digital services act in the EU and better policy against hate and extremist from social media companies globally.  This experience has taught us that there is startling inequity between world regions when it comes to protections from harms, from disinformation, hate and incitement to manipulation of democratic process.  Online platforms just aren't safe for most people. 

    We know that the platforms run by the world's social media giants Meta, Google, X, and TikTok, have the greatest global reach they've ever had and are at their most powerful, but safeguarding efforts have been weak to protect information integrity globally.  For instance, Facebook says it's invested $13 billion in its platforms safety and security since 2016, but internal documents show that in 2020, company plowed 87 percent of its global budget for time spent on classifying false or misleading information into the US even though 90 percent of it is users live elsewhere.  This means there's a derth of moderators with cultural and linguistics expertise where Facebook has been unable to effectively tackle disinformation at all times and most consequentially during elections where disinformation and other online harms peak. 

    Similarly, non-English languages have been a stumbling block for automated content iteration or YouTube, FCC, or TikTok.  Struggle to depict harmful posts in a number of languages in countries at risk of real-world violence and in democratic decline or autocracy.  What this means is that the risks on the horizon in 2024 very serious indeed at a time when social media companies are cutting costs, laying off staff, and pulling back from their responsibilities to stem the flow of disinformation and protect the information space from bad actors. 

    If some of the world's largest and most stable democracies, the United States, Brazil have been rocked by bad actors mobilizing on social media platforms spreading election disinfo and organizing violent assaults on the heart of their democracies, imagine next year where we'll see democracies under threat, like India, Indonesia, Tunisia, alongside a whole sway of countries that are unfree or at risk where citizens hope to hold on to spaces to resist the manipulation of the truth for autocratic purposes.  How can online platforms be made safe to uphold information and lateral integrity and protect people's rights? 

    The challenge of 2024 elections megacycle is a calling to all of us to show up, ideate, innovate, bring our skills, talents, and any power we have to the table and collaborate. 

   As an example of what's in the works and background to the perspectives we're going to hear today, together with over 160 organizations now, experts and practitioners from across the world, we've convened the Coalition for Tech Justice to launch the 2024 Year of Democracy Campaign in order to foster collective action, collaboration, and coordination across election countries next year.  Together with our members, the Global Coalition for Tech Justice will campaign, research, investigate, and tell the stories of tech harm in global media supporting and amplifying the efforts of those on the front lines and building policy solutions to address the global impacts of all social media companies.  

    So we're going to be actively collaborating with global stakeholders and this conversation today is an opportunity to further these conversations and get collaborations off the ground with all of those who share goals of safe online platforms for all. 

   I'm delighted to introduce this session for this important global conversation on how we protect 2024's megacycle elections from tech harms and ensure social media companies to fill their responsibilities to make their products and platforms safe for all. 

    I'm really happy to hand back to Bruna introduce our panelist and discussion this morning.  Than you.

>> BRUNA MARTINS dos SANTOS:  Thank you so much.  Welcome to the session as well. 

   As she just brought up, this is really a global conversation we want to do.  We want to spark a discussion on how can we collectively ensure that big tech plays its part in protecting democracy and human rights in 2024 elections.  It's not just one.  It seeks elections as everybody has been talking about this week.  It's a rather key year for everyone. 

    We have two provocative questions, kickoff questions for the panelists.  I'm going to bring you into the conversation first.  Ashnah is programs coordinator for CIPESA. 

    The first question for you would be whether, like if you consider that social media platforms and content moderation, or the lack of, are shaping democratic elections, and if so, how?

>>ASHNAH KALEMERA:  Thank you.  Good evening everyone, or good morning, like Alex said. I guess we're all in very different time zones at the moment. 

    It's a pleasure to be here.  Thank you for the invitation to digital action and the opportunity to have this very important discussion. 

    Ashnah Kalemera.  I work for CIPESA.  CIPESA is the collaboration of International ICT policies for East and Southern Africa.  We are based out of Kampala, Uganda, but work across Africa promoting effective but inclusive technology policy, but also its implementation as it intersects with governance, good governance obviously, upholding human rights as well as improved livelihoods. 

    I like to start off these conversation on very light notes.  Very often, these panels are dense in terms of spelling doom and gloom, so first, I would like to emphasize that technology broadly, including social media platforms and the internet, have huge potential for electoral processes and systems.  They are critical in ensuring that voter registration is complete and accurate enabling remote voting for excluded communities or remotely based voters.  They have been critical in supporting campaigns and encompassing as well voter awareness and education, results transmission and tallying, monitoring malpractice, all of them critical to electoral processes and lending themselves to promoting legitimacy and inclusion of elections in states that have democratic deficits, which most of Africa is many of the states.  So I think that light note is very important to highlight as we then go on to the doom and gloom that this conversation is will likely take. 

    Now we start the doom and gloom.  Unfortunately, despite those opportunities, there are immense threats that technology poses to electoral processes in Africa and, I guess, to much of the world.      

    Increasingly, we're seeing states, authoritarian governments especially, leveraging the power of technology for self-serving interests.  A critical example is network disruptions or shutdowns.  I see KeepItOn Coalition members in the room; they work to push back on those excesses. 

    On disinformation and hate speech: users, governments, platforms themselves, as well as private companies and PR firms, are actively influencing narratives during elections, undermining all the good stuff that I mentioned in the beginning.  Very often we ask ourselves at CIPESA, and I imagine everybody in the room does too, why disinformation thrives, because pretty much everybody is aware of the challenge it poses.  In Africa especially, it's thriving, and thriving to very worrying levels.  One reason, again something positive in itself, is that technology is penetrating, and penetrating very well, on the continent.

   Previously unconnected communities are gaining access to information, which, again, in the context of elections is great, but in the case of disinformation, it's a significant challenge. 

    Secondly, there is the youth population on the continent, with many of them coming online via social media.  There are always jokes in sessions I've attended with African representation that, for many Africans, the internet is social media.  That reality is enabling disinformation and hate speech to thrive. 

    Third is conflicts.  The elections that we're talking about are happening in very challenging contexts characterized by ethnic, religious, and geopolitical conflicts.  Again, that casts a really dark shadow over all the nice stuff I mentioned earlier.  Like Alex mentioned, the context I just described is going to face a very significant stress test come 2024 and beyond for the continent, and we're likely to see responses that undermine the potential to uphold electoral legitimacy, and also citizens' ability to realize their human rights. 

    One of those reactions we're likely to see, from a state perspective, is the weaponization of laws to undermine voice or critical opinion online, which again undermines electoral process and integrity. 

    Unfortunately, given the context around conflicts, we are also likely to see a lot of fueling of politically motivated violence, which restricts access to critical information, ultimately perpetuates divisions and hate speech, and leads to offline harms. 

    Bringing the conversation back to big tech: on the continent, unfortunately, we're seeing very limited collaboration between tech actors and media and civil society on, for instance, identifying, debunking or prebunking, depending on which side of the fence you sit, and moderating disinformation. 

    Also, the processing and response times for reported complaints are really slow, which discourages reporting and ultimately maximizes, in some cases, the circulation of disinformation and hate speech. 

    There are also significant challenges around opaqueness in moderation measures.  We saw the case of Uganda during the previous election, where a huge number of, what's the word, automated accounts were taken down for reasons that were otherwise not very clear.  That led to a response from the state, i.e., shutting down access to Facebook, which remains inaccessible in Uganda to date. 

    Given those pros and cons, the two sides of the coin I just described for the African continent, it's important to have collaborative actions and movements just like the one Digital Action is spearheading and that we really wanted to be a part of.  Efforts in that regard should focus on showing up and participating in consultation processes like this one and others where there are opportunities to challenge or provide feedback and comments.  I think that's really important; such spaces are not many. 

   We at CIPESA host the annual Forum on Internet Freedom in Africa; we marked 10 years a couple of days ago.  For the second time, we were able to have the Meta Oversight Board present and engaging.  They admitted that cases from the African continent are limited, but spaces like the Forum on Internet Freedom in Africa, which CIPESA hosts, provide an opportunity for users and other stakeholders to deliberate on these issues. 

    I cannot fail to say that research and documentation remain important.  Of course, we're a research think tank, churning out pages and pages that are not necessarily always read, but I think it's important because evidence-driven advocacy is critical to this cause.

    Skills-building, again, digital literacy, fact-checking, and information verification remain critical, but so does leveraging norm-setting mechanisms and raising the visibility of big tech challenges in UN processes such as the Universal Periodic Review and in the African Commission on Human and Peoples' Rights.  These conversations are not filtering up as much as they should.  There should also be interventions that promote and challenge the private sector to uphold its responsibilities and ethics through application of the UN Guiding Principles on Business and Human Rights. 

    Lastly, strategic litigation.  That is also an opportunity before us for challenging the excesses of big tech in elections, in the challenging contexts I've just described. 

    Thank you. 

    (Applause)

    >>BRUNA MARTINS dos SANTOS:  Thank you very much.  Picking up on two of the topics you spoke about, the weaponization of political processes and politically motivated violence, I think that bridges very well to the recent scenario in Brazil, with, unfortunately, yet another attack on the capital, after a lot of discussion of a fake news draft bill and regulation for social media companies. 

    Yasmin, I'm going to bring you in now.  Yasmin is from FGV in Rio de Janeiro and also the co-coordinator of the Dynamic Coalition on Platform Responsibilities.  Welcome.

>>YASMIN CURZI:  Thank you so much, Bruna.  Could you please display the slides?  Thank you so much. 

    Addressing the first question Bruna posed to us: are social media platforms and their content moderation shaping current democratic elections?  To answer it, I would like to give a brief context about the Brazilian elections scenario regarding platform responsibility.  There are two main pieces of legislation that deal with content moderation issues.  Since 2014, we have had the Brazilian civil rights framework for the internet, a.k.a. Marco Civil, probably known by many of you here. 

    It establishes our basic principles for internet governance, such as free speech, net neutrality, and the protection of private and personal data, but also establishes liability regimes for platforms regarding user-generated content, in Articles 19 and 21. 

    To sum up really quickly, Article 19 created a general regime in which platforms are only liable for illegal digital content if they do not comply with a judicial order for the removal of the specific content, provided it is within the platform's capability to do so. 

    There are only two exceptions to this rule: one for copyright, and one for non-authorized dissemination of intimate images, for which a mere notification by the user or their legal representative suffices. 

    The second one is the Code of Consumer Defense, a.k.a. CDC, which considers users hyposufficient and vulnerable in their relations with enterprises.  Article 14 of the CDC establishes a strict liability regime, in which enterprises or service providers are responsible, regardless of the existence of fault, for repairing damages caused to consumers by defects in the service, or by insufficient or inadequate information about its risks. 

    So, in a sense, these two pieces of legislation give users many protections online regarding harmful activities and illegal content.  Nevertheless, users are still unprotected against the many online harms that are not clearly illegal, such as disinformation, or that are not even perceived by them as harms, like opaque gatekeeping, shadow-banning, and the microtargeting of problematic content. 

    Regarding the first issue, given the nonexistence of legislation that deals specifically with coordinated disinformation, our Superior Electoral Court has been enacting resolutions to set standards for political campaign ads.  The Superior Electoral Court also established, within the scope of its program on fighting disinformation, partnerships with the main platforms in Brazil, such as Meta, Twitter, TikTok, WhatsApp, and Google, which signed official agreements stating what their initiatives would be.  Most of them committed to creating reporting channels, labeling content as electoral-related, redirecting users to the Electoral Court's official website, and promoting official sources.  Instagram and Facebook also developed stickers to encourage users to vote, in spite of voting already being mandatory in Brazil. 

    Nevertheless, we don't have enough data to see the real impact of these measures, just generic data on how much content was removed on a given platform, and generic data on how they are complying with the legislation.  This is the sort of data the main platforms in Brazil have offered since the establishment of partnership programs with fact-checking agencies in 2018.  I'm not saying that they are not removing enough content.  What I want to highlight is that we don't have the data or metrics to understand what these generic numbers mean, nor do we know whether content is being removed fast enough not to reach too many users.     

    Furthermore, some of these efforts to combat falsehood, on YouTube, for example, were themselves a risk for democracy in the 2022 elections.  Through the Official Sources program, as the slide displayed right now shows, a highly partisan media channel, Jovem Pan, was being actively recommended to YouTube users. 

    To give an example, on election day Jovem Pan was disseminating a fake audio clip allegedly from the famous Brazilian drug dealer Marcos Camacho, a.k.a. Marcola, in which he supported Lula's election. 

    Justice Alexandre de Moraes of the Brazilian Federal Supreme Court, who was presiding over the Superior Electoral Court, issued a court order for removal of the content, but not before it had already reached 1.17 million views.  Supporters also shared this video in at least 38 WhatsApp and Telegram groups monitored by the fact-checking agency Aos Fatos. 

    So, to Bruna's question, is social media platforms' content moderation shaping democratic elections, I tend to answer no, or at least not significantly.  Either there is no significant impact, or we do not have enough information on their actions and results to know. 

    That's it.  Thank you.

(Applause)

>> BRUNA MARTINS dos SANTOS:  I'm going to bring in Lia now.  Lia is representing IPANDETEC, is a fellow Latin American, and comes from yet another region of the world that's facing a lot of these discussions about proper resource deployment and policymaking.  Welcome to the panel.

   (Audio muted)

>>LIA HERNANDEZ:  Okay.  Perfect.  Well, IPANDETEC is a digital rights organization based in Panama City but working across Central America, so I'm going to refer mainly to the recent electoral process in Guatemala and the upcoming process in Panama, which will take place in May 2024. 

    First, I want to send all my support to the Guatemalan people, who are mobilizing in the streets to demand that democracy and the results of their recent elections be respected.  In Central America, digital platforms make tools available to our electoral public entities to help hold them to verified information and to avoid any degradation of our fundamental rights to protest, freedom of expression, freedom of the press, and privacy. 

    Currently, in countries such as Panama, my country, digital media platforms and journalists have been ordered by the Tribunal Electoral, the electoral public entity, to remove information from their platforms, and they got fined because they were posting information about Ricardo Martinelli.  I don't know if you know about Ricardo Martinelli.  He's very famous, as famous as Lula or Bolsonaro in Brazil.  He was a former president of Panama, and he's a candidate in the next election because he wants to be president again.  By the way, he's one of the biggest manipulators of privacy in the country. 

    So the electoral entity in Panama ordered these journalists to remove information about him, claiming it was against his privacy and his own image, which is really against democracy in the country. 

    So the question is: if big tech companies are giving tools to our electoral public entities to promote democracy, access to information, and fundamental rights, why would electoral entities put up barriers for citizens, journalists, and communicators, whose main role is to fulfill the legitimate duty to inform, to communicate to citizens what is happening in their countries?  All the more so in this case, a case of corruption, because this former president is very corrupt. 

    So freedom of expression, freedom of information, and freedom of the press are limited in Panama when journalists try to report based on the principle of public interest, the interest we all have in knowing the good, the bad, and the ugly of the candidates in our electoral process. 

     These platforms must match their words with their actions.  Even though they have no authority over the decisions of the electoral branch in the country, they should not become part of the problem and limit constitutional guarantees such as freedom of the press. 

    So mainly, this is a very recent case that we are following in Panama.  Thank you so much, Bruna, for the space on this panel.

>> BRUNA MARTINS dos SANTOS:  Thank you so much.  It's very interesting to see this ongoing line of major interferences with expression and with conversations online.  It's not just one or two cases.  Sometimes the problem is responsiveness, sometimes it's the ongoing dialogue or cooperation that social media platforms should have with authorities, and it would be interesting to develop that. 

    There are also downsides to those partnerships when they go down the path of further requests for data and access, or even privacy violations.  It is definitely a hard and deep conversation. 

     I'm going to go to Daniel Arnaudo from NDI.  Welcome, and the same question as the others.

     >> DANIEL ARNAUDO:  Thank you for having me.  Thanks everyone for being here.  We're really pleased to be a part of this coalition.  For those that don't know, I'm from the National Democratic Institute.  We're a nonprofit, nonpartisan, nongovernmental organization that works in partnership with groups around the world to strengthen and safeguard democratic institutions, processes and values to secure a better quality of life for all. 

    We work globally to support elections and strengthen electoral processes.  My work in particular is to support a more democratic information space.  In this work, we engage with platforms around the world, both through coalitions like this one and others such as the Global Network Initiative and the Design for Democracy Coalition.  We help highlight issues for platforms, perform social media monitoring, and engage in consultations on issues ranging from online violence against women in politics to data access and crisis coordination. 

    As was mentioned, 2024 will be a massive year for democracy.  From our perspective, we're particularly concerned that the contexts we work in throughout the global majority, particularly small and medium-sized countries, do not receive the same attention in terms of content moderation, policies, research tools, data access, and many other issues.  This is all in the context of what I think is a serious disinvestment in civic integrity, trust and safety, and related teams within these organizations. 

    Just in this region, you have Bangladesh, Indonesia, India, Pakistan, and Taiwan all holding elections in the coming year.  I know there will be some resources devoted to the larger countries, but they have massive user bases, and the smaller ones are going to receive very little attention at all.  This is a consistent focus of our work and of our considerations around these issues. 

    One of the main recommendations I would make focuses on data access: in the context of this disinvestment, we're seeing a serious pullback in access for third-party researchers. 

    We are very concerned about changes in the APIs and in other forms of access to data on the platforms, as some of my fellow panelists have discussed, for research and other purposes, particularly at Meta and Twitter/X, and about continued restrictions in other places. 

    They're building mechanisms for access for traditional academics in certain cases, but not for researchers or broader civil society who live and work in these contexts.  Access is often provisioned through mechanisms controlled within large countries, in the United States or in Europe, and there aren't really systems in place for documentation or for understanding those systems, so there are huge barriers to that kind of access even where it's nominally enabled.  That's something I would really urge companies in the private sector and groups such as ours to coordinate on, to figure out ways of ensuring that access in the future and to shine a light within those contexts. 

    Secondly, I think they're ignoring major threats to those who make up half or more of their user base, namely women, particularly those involved in politics either as candidates, policymakers, or ordinary voters.  Research has shown that they face many more threats online, and platforms need to institute mechanisms that support them, both to protect themselves and to understand the threats, with support on these issues as necessary. 

    We have conducted research that shows the scale of the problem and also introduces a series of interventions and suggestions for companies and others working to respond to these issues.  But this is really a global problem that we see in every context we work in, and I think many in the room will understand this threat. 

    Finally, I think there's a need to consider critical democratic moments and how companies can work with the broader community to manage those specific situations: not only elections, but major votes or referenda, and also more critical moments such as coups, authoritarian crackdowns, and protests, really critical situations. 

     If they cannot appropriately resource these contexts, then in situations they may not have a great understanding of, they at least need to engage with organizations that do understand them and can help them react and make decisions effectively in these challenging situations.

    I think the retreat from programs such as Meta's trusted partners, and the consistent whittling down of the teams addressing these issues, will have impacts on these places, on elections, on democratic institutions, and ultimately on these companies' bottom lines. 

    The private sector should understand that these are not only moral and political issues but economic ones: people will be pushed away from these spaces as they become hostile or toxic to them in different ways. 

    We understand the tradeoffs between profit and organizing systems that are useful for the general public, but we would encourage companies to reflect that the democratic world is integral to the open and vibrant functioning of these platforms. 

    As with 2016 and 2020, 2024 will be a major election year, and it will likely also represent a new paradigm in content moderation, in information manipulation campaigns, and in regulation, which is another kind of threat companies need to consider.  A host of related themes will have big implications for their profits as well as for democracy.  So I think they ignore these realities at their peril.

>> BRUNA MARTINS dos SANTOS:  Thanks a lot.  Also, thanks for highlighting some of the asks of the Year of Democracy Campaign.  The campaign has issued documents with the things we would like to require from social media companies, such as mainstreaming human rights, bringing in more mechanisms to protect users, and addressing the problem at its real scale.  We are not just saying "issue plans for elections."  We are also saying: deploy the solutions, invest the money.  It's not just the biggest markets that matter; it's also Brazil, India, Kenya, Tanzania.  That's what's really core and relevant about this whole situation.

    I would like to ask if anyone has any questions for the panelists or would like to add any thoughts to the conversation.  There is a microphone in the middle of the room. 

    >> Thank you for giving me some space and the ability to express myself.

    I'm from Russia.  We have a digital election system in Russia, and we are talking about threats posed by social media platforms all around the world, Meta, Facebook, Instagram, Google, Snapchat, but we haven't talked about deeper threats to these digital election systems. 

    For example, two months ago we had elections all over Russia, and our digital election system was hit by denial-of-service attacks by a Ukrainian party trying to disrupt the elections.  The elections were disrupted for three or four hours, and citizens were not able to vote.  This is not about harming Russia as a state; it's about harming Russian citizens as citizens.  That's the number one problem. 

    The second problem, I think, was mentioned before, but it goes a little deeper.  We have talked a lot about global media platforms' involvement in information manipulation and the spread of fakes and disinformation, but we didn't talk about global media platforms' position, which tends to be presented as neutral but is not always neutral in a conflict, because there are two sides and sometimes global media platforms choose sides.  What we see and talk about a lot is that global media platforms have closed, secret recommendation algorithms that basically form the news feed for users.  For example, in some countries in Africa, and you can correct me, Facebook actually represents the internet for some people, and Facebook could start a revolution in a click just by altering users' news feeds through these recommendation algorithms. 

    And nobody knows how these algorithms work.  I think the internet community and global international society, the IGF included, should put more pressure on global media platforms to make these algorithms more transparent, because people should know why they are seeing this or that content.  That's all.  Thank you so much for giving me some time. 
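
    To make the questioner's point concrete, here is a deliberately simplified, hypothetical sketch, invented for illustration and not any platform's actual code, of an engagement-weighted feed ranker.  Even in this toy version, a single hidden weight silently decides whether political content rises or sinks, which is exactly the kind of logic that calls for algorithmic transparency.

```python
# Toy feed ranker (hypothetical). Real recommender systems are far more
# complex, but the opacity problem is the same: one unseen parameter
# reorders what every user sees.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    likes: int
    shares: int
    comments: int
    is_political: bool

def score(post: Post, political_boost: float = 1.0) -> float:
    """Weighted engagement score; political content is scaled by one
    opaque parameter that users never see."""
    engagement = post.likes + 3 * post.shares + 2 * post.comments
    return engagement * (political_boost if post.is_political else 1.0)

def rank_feed(posts: list[Post], political_boost: float = 1.0) -> list[Post]:
    """Return posts ordered by score, highest first."""
    return sorted(posts, key=lambda p: score(p, political_boost), reverse=True)

posts = [
    Post("cat-video", likes=900, shares=10, comments=50, is_political=False),
    Post("party-rally", likes=400, shares=200, comments=100, is_political=True),
]
# The same feed under two different hidden settings:
print([p.post_id for p in rank_feed(posts, political_boost=1.0)])
# -> ['party-rally', 'cat-video']
print([p.post_id for p in rank_feed(posts, political_boost=0.3)])
# -> ['cat-video', 'party-rally']
```

    Transparency requirements of the kind the questioner asks for would, at minimum, let auditors inspect weights and rules like these and how they change over time.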

>> BRUNA MARTINS dos SANTOS:  Thanks a lot.  Any other questions?

   >> LAURA:  Hello.  Thank you for the panel.  My name is Laura.  I'm from Brazil.  I'm here with the youth delegation, but I'm also a researcher at the School of Communication, Media and Information at the Getulio Vargas Foundation in Brazil. 

    I would like to hear more about the issue of data access for academic research and civil society research.  As a center specializing in monitoring the public debate on social media, we are very concerned about the recent changes mentioned, by Yasmin as well, regarding data access for us.  I would like to hear more about what kinds of tools and mechanisms the academic community and civil society in general can use to fight those restrictions and to face these issues, not only in the regulatory sphere, where this debate is present, but also in a broader way.  Thank you.

>> BRUNA MARTINS dos SANTOS:  Thank you so much.      

   And the last question?

   >> ALEXANDER:  I'm Alexander, from a country in which, next spring, 145 or 150 million people will elect Vladimir Putin as president.  I have two points.  First of all, I would like to say something about "Bosch Digitalis," because the Russian Central Election Commission did not confirm any of the issues users had with electronic electoral assistance.

   Unfortunately, such systems in Russia were created by Russian big tech.  Kaspersky created one system, used in Moscow, and Rostelecom, which could be considered big tech, created another.  The systems are completely untransparent, do not comply with the commission's recommendations or other kinds of recommendations for digital transparency, and, in my point of view, are intended just for faking results.

    If you are interested in "Bosch Digitalis," ask me later.  What I would like to ask, maybe not the panel but everyone: has somebody participated in elections recently?  Yeah, okay.  Have you tried to use platforms for your promotion? 

    Now, I would also like to note that Facebook is no longer possible, no longer legal, to use for promotion there.  Before, I created a political candidate page on Facebook and wanted to advertise myself to a constituency of about 20,000 voters.  So I asked Facebook for a suggestion, and they suggested two contacts for 10 bucks.  I think that, in some cases, platforms don't understand the requirements of candidates who are not running for president.  They want too much money for promotion: if I were selling birthday cakes, maybe two contacts for 10 bucks would be reasonable, but not for someone who wants to advertise himself to a constituency.  So work with platforms on helping candidates, especially in restrictive regimes where physical advertising is no longer possible, should also be done. 

   Thank you very much.

>> BRUNA MARTINS dos SANTOS:  Thank you.  We have one extra question from the chat that I'm going to hand over to you.  You don't need to answer all of them; take the ones that speak to you the most.  The one from the chat is: what should be done legally when cross-border digital platforms like Meta refuse to cooperate with national competent authorities on cybercrime cases, like incitement to violence, the promotion of child pornography and private images, and even more serious crimes, and refuse to establish official representatives in the country?  Rather dense question.  I'll give the floor back to you.  And as we move to the very end of the session (we only have 12 more minutes), I would ask you, in a tweet, to summarize your main recommendation for addressing these global challenges around big tech accountability.  It's difficult to summarize, but if you have a tip, an idea, a pitch, it's very much welcome.  I'll start with you, Ashnah.

>>ASHNAH KALEMERA:  Thank you, Bruna, and thank you for the very rich questions.  I think they highlight that this conversation is not limited to elections and misinfo, disinfo, or hate speech, but covers very many other aspects around them. 

    The DDoS attacks you spoke about speak to the resilience of not just civil society organizations, but even electoral bodies and commissions, entities that are state-owned or state-run and that leverage technology as part of elections.  They also point to other conversations around accessibility and exclusion, because some of the technology around elections excludes key communities, which brings about apathy and low voter turnout, all of them critical to the conversations around elections. 

    Similarly, the point around the positions and the power of these tech companies to literally start revolutions, to borrow your word, I think that too is an area that is critical to deliberate on more.  The answers are not immediate.  Some of the work we've done researching how disinfo manifests in varying contexts has highlighted that the agents, pathways, and effects vary from one context to another.

    Like I mentioned in the beginning, in contexts where there are conflicts, religious or border conflicts or electoral conflicts, the manifestations are always very different, and the agents are always very different.  We're not necessarily pointing a finger only at big tech, but we are all mindful that this is a multistakeholder conversation that must be had and should be cognizant of all those challenges. 

    The issue of research, I think, is something we have felt on the continent: the inaccessibility of data.  Previously, CIPESA leveraged data APIs, I believe that's the technical term, to document and monitor elections on social media, including sentiment analysis and microtargeting.  That capacity is now significantly limited, so we're not able to highlight some of the challenges around big tech that emerge during elections.  That's not to say that documentation through stories or humanization would not have an effect where access to data is limited. 

     What else did I want to talk about?  I forget, because those were heavy questions.  Yes: the conversation is much broader than just elections and big tech alone.  We all have a role to play, and engaging the less obvious actors, like electoral bodies, regional economic blocs, and human rights monitoring and norm-setting mechanisms, is also critical to the conversation.
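
    As an aside on the monitoring work Ashnah describes, the sketch below shows, under stated assumptions, what a lexicon-based sentiment tally over election-related posts can look like.  The posts and word lists are invented placeholders; real projects of the kind CIPESA mentions would draw posts from platform research APIs, where access remains available, and use far richer models.

```python
# Minimal sentiment-tallying sketch. The lexicon and posts are illustrative
# stand-ins, not a production sentiment model or real election data.
from collections import Counter
import string

POSITIVE = {"free", "fair", "peaceful", "credible", "hope"}
NEGATIVE = {"rigged", "fraud", "violence", "hate", "shutdown"}

def sentiment(post: str) -> str:
    """Classify a post by counting lexicon hits after basic cleanup."""
    words = {w.strip(string.punctuation) for w in post.lower().split()}
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

# Hypothetical sample, standing in for keyword-filtered posts from an API.
posts = [
    "The vote was free and fair in my district",
    "This election is rigged, total fraud",
    "Long queues at the polling station today",
]
print(Counter(sentiment(p) for p in posts))
# -> Counter({'positive': 1, 'negative': 1, 'neutral': 1})
```

    Even a simple tally like this, run over time, can surface spikes in hostile narratives around polling days, which is precisely the capacity that restricted data access takes away.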

   >> YASMIN CURZI:  Regarding recommendations, I think it's only possible to have real accountability if we have specific legislation and regulation of platforms.  It's not possible to have a multistakeholder conversation when the power asymmetries are just too big for us to sit at the same table and discuss with them.  They set all the rules that are on the table, so it's not possible to talk to them without regulation.

    In Brazil, for example, during the elections, the journalist Patricia Campos Mello asked Facebook, sorry, Facebook and YouTube, how much they were investing in content moderation in Brazil and how far they were complying with the memoranda of agreement they had signed with the Superior Electoral Court, and they did not answer.  They just said that this was sensitive data. 

    And we were talking about aggregated data: how much they were investing financially to improve their content moderation in Portuguese.  If we don't have this basic information, if we are not able to assess how much harmful content is being recommended by their platforms, it is quite difficult for us to make public policies that address these issues. 

    I would just like to display these slides again for some brief propaganda.  We have our Dynamic Coalition on Platform Responsibilities.  Our outcome last year was a framework on meaningful transparency, meaningful electoral transparency, with some thoughts for policymakers and regulators worldwide if they want to implement it, and also for platforms, if they are able and eager to improve their best practices, so they too can adopt this framework. 

    And this year, the outcome we are going to release tomorrow focuses on human rights risk assessments and analysis.  It is a collaborative paper with best cases, also discussing legislation in India, the DSA, the DMA, and Brazilian legislation.  We are going to release it tomorrow; our session is at 8:30.  Thank you.  I'm sorry for doing this again; I just wanted to show the documents.  This is what I would recommend to people. 

    >> DANIEL ARNAUDO:  Thanks for the questions.  I think, certainly, algorithmic transparency can be a good thing, but you have to be careful about how you do it and how you create systems to understand the algorithms.  They can also be gamed in different ways if someone has a perfect understanding of them, so it's a tricky business. 

    There's definitely a need for better protections and systems for smaller candidates and different contexts.  It's a systemic issue, not just about individual users, what they're seeing, and how these systems and networks can be manipulated.  Candidates should have access to information about political advertising, or even basic registration information. 

    I think every country in the world should have access to the same systems that Meta and other major companies, Google and others, use to promote good political information, and I mean very basic information about voting processes and political campaigns, anywhere in the world. 

    On data access, certainly, you're seeing a revolution right now in how the companies provide access to their systems.  The focus has been on X/Twitter, which has changed the way any sort of research is done on that platform: it's much more expensive and more difficult to get at.  Companies need to reconsider what they're doing when they revise those systems and make them more difficult for different groups; Meta in particular will be really critical.  We need to work collectively to make sure they keep those kinds of systems, like APIs, available to as many kinds of people as possible. 

    Certainly, there are issues around requiring company employees to be placed in certain countries.  That can be problematic, because some of those are authoritarian contexts, and the employees can become bargaining chips within the regulations governments want to enforce.  You have to be careful about that, though I certainly understand the need to enforce regulations around privacy, content moderation, and other issues.  It's something that has to be designed carefully. 

    Certainly, there's a huge crisis in how companies are addressing different contexts, and ultimately they need to better staff and resource these issues and contexts: to have people who speak local languages, who understand these contexts, who can respond to issues and reporting, and who know what they're doing.  But this is expensive, and I don't think they're going to be able to work their way out of it through AI or something like that, as many have proposed. 

    So I just think they need to recognize that reality, or they're going to continue to suffer, as, unfortunately, will we all.
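
    To illustrate the kind of third-party researcher access Daniel argues platforms should preserve, here is a minimal sketch of a paginated keyword search against a research API.  The endpoint, parameters, and token are hypothetical placeholders, not any real platform's interface; they stand in for the programmatic access that civil society researchers are losing.

```python
# Sketch of researcher data access via a (hypothetical) platform research API.
import requests

API_URL = "https://api.example-platform.com/v1/posts/search"  # placeholder
TOKEN = "RESEARCHER_ACCESS_TOKEN"  # issued under a hypothetical research program

def fetch_election_posts(query: str, max_pages: int = 3) -> list[dict]:
    """Collect public posts matching `query`, following pagination cursors."""
    posts, cursor = [], None
    for _ in range(max_pages):
        params = {"q": query, "limit": 100}
        if cursor:
            params["cursor"] = cursor
        resp = requests.get(
            API_URL,
            params=params,
            headers={"Authorization": f"Bearer {TOKEN}"},
            timeout=30,
        )
        resp.raise_for_status()
        data = resp.json()
        posts.extend(data.get("posts", []))
        cursor = data.get("next_cursor")
        if not cursor:
            break
    return posts

# e.g. fetch_election_posts("election2024") -> list of post dicts for analysis
```

    When such endpoints are paywalled or shut down, monitoring work of the kind discussed on this panel either becomes manual, as Lia describes next, or stops entirely.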

    >> LIA HERNANDEZ:  Just one minute.  I think it is necessary not just to empower electoral authorities; it is even more necessary to empower citizens, civil society organizations, human rights defenders, and activists, because we are the ones really working to promote and preserve democracy in our countries.  That is my recommendation. 

    Regarding your question about data: for example, in our case we are working on monitoring digital violence against women candidates in the next election in Panama, and everything is very manual, because the digital platforms don't make the tools available to civil society.  They are only available to governments.  So we are trying to reach an agreement with the electoral authority to maybe get access to the tools, because we need to finish the work before the elections. 

   In our case, the data is not clean and doesn't use open data standards, so we sometimes have to guess at the information they hold, which is not updated on their websites.  So it's a bit difficult for us to work with these kinds of platforms.

>> BRUNA MARTINS dos SANTOS:  Thanks a lot to the four of you, and to Alex as well, following us directly from the UK.  Thanks, everybody, for sticking around.  If any of this conversation struck a chord with you, go to yearofdemocracy.org; that's the website for the Global Coalition for Tech Justice campaign.  Have a nice rest of the IGF.

     (Applause)