The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> AISHWARYA GIRIDHAR: Hi. Great, we're online. Hi, everybody. Welcome to this workshop discussion on Meaningful Platform Transparency in the Global South. I wanted to let everybody know what the broad agenda for this workshop will be. First, I'll say hello, give some context, do some introductions, and then we'll have maybe a couple rounds of questions and answers with the panelists, then we will open the floor up for questions and discussions from both the online and offline audiences.
So, to begin with, I just wanted to talk a little bit about the Centre for Communication Governance, where I work. My name is Aishwarya. We're an academic research center based at the National Law University Delhi, and we work on various themes of Internet governance, so things like platform governance and liability, privacy and data governance, emerging technology, AI, cybersecurity, national security, the whole range.
So, we do this in a bunch of different ways. One is academic research, and we also provide policy inputs to the government. We also facilitate capacity‑building for stakeholders, so government and judicial officers as well as young policy and legal professionals, through things like our fellowship, at both the international and national levels.
So, as to why we're here today, I think it's fairly uncontroversial to say that social media platforms have been linked to a range of harms. Typically, the way we have addressed these harms through regulation is through intermediary liability, basically imposing penalties on platforms, right? But we're seeing that this comes with its own set of issues, for example, the tendency to over‑remove content because of the risk of liability. And increasingly, we're finding that it's a fairly blunt tool and doesn't really address the root cause of a lot of these harms, which are sometimes linked to the way platforms are designed and the incentives and structures they operate within. And so, transparency is now becoming a core focus of regulatory intervention to address social media harms. The intention is to know more about platform functioning so that we can more easily address harms and also increase accountability by holding the right players responsible.
So, although everyone now broadly seems to agree that transparency is a good thing to have, I think there is a bit of debate on what that means and how to operationalize it. What kinds of information do we need? Should different kinds of information be provided by different kinds of social media platforms? Who is the information provided to? How does this intersect with trade secrets and other commercial considerations? These are all open questions. And then another set of concerns is about whether the same kinds of interventions will apply equally to Global North and Global South countries, because a lot of these regulations are coming from places like the U.S. and the EU, but they might not translate equally to other countries. So, what we primarily want to do in this session is to look at what meaningful transparency can look like and how considerations are likely to differ based on geography and sociopolitical conditions.
So, I'll begin with an introduction of the panelists. We have Fernanda Martins here with me, and the others are joining remotely. Fernanda is the Coordinator of Identities at InternetLab and focuses on the field of Internet policies, within which she is dedicated to gender, ethnic‑racial relations, violence, political violence, and hate speech. She is a PhD student in social sciences and has a master's degree in social psychology and a bachelor's degree in social sciences. She's also a member of two research centers related to gender that I'm not going to try to pronounce because I'm sure I will get it wrong, but that's Fernanda. And Fernanda, maybe we could start with you.
I think maybe a good place to start would be to talk about, based on your experience, what some of the most pressing issues are that we're trying to address. So, what are the kinds of harms that you're seeing, and what do current redress policies or processes look like in Brazil and maybe the larger Latin American context?
>> FERNANDA MARTINS: Hello, everyone. Thank you for the introduction. First of all, I would like to thank the organizers of this panel. It's a good moment to meet and exchange with you. So, my name is Fernanda. I am a director at InternetLab, a think tank based in São Paulo, Brazil. And this theme brings up so many important thoughts. I confess that I was thinking about what would be important to mention in terms of Brazil and Latin America, and I concluded that, considering the context of challenges to democracies globally, it would be important to reflect on the experience that we lived through in the last few months in the Brazilian elections.
These last elections made clear to us the necessities that we have. In terms of harm, we have been facing problems related to disinformation, political violence, and specifically, political gender‑based violence. Understanding these phenomena requires us to deepen the understanding that each of them is connected to the others. For example, it's impossible to think about disinformation without considering gender‑based violence. At the same time, it's impossible to think about gender‑based violence without considering the dynamics of disinformation. Although we are dealing with different topics, comprehending this from an intersectional perspective allows us to improve how we look at the problem.
This year, we lived through a process in which the platforms, civil society organizations, the Superior Electoral Court, journalists, and academics worked together to try to improve the fight against disinformation and political violence in Brazil. This process brought challenges and gaps, and made concrete the perception that we need more than what we had this year and in previous ones. One of the main points relates to the fact that the harms caused by these phenomena are not restricted to election moments. Because of that, to address the impacts of disinformation and political violence ‑‑ (silence) ‑‑ actions involving different social sectors and diverse platforms are always necessary. Still, it worked in Brazil not only because of legislation, but because we had, at that moment, specific people at the Superior Electoral Court, particular teams working at some platforms, and specific researchers looking at political violence and disinformation in civil society and academia.
We had a confluence of factors that enabled us to face political violence and disinformation dynamics as well as possible. When the result of the election was denied, we started a new process. Part of Brazilian society claimed that the elections were fraudulent. We found no proof, or anything in reality, that supported this claim; it is a problem that has yet to be solved.
In Brazil, many groups were asking for military intervention. Here, a point caught my attention. Part of the dialogues conducted during the elections included Twitter, but after Elon Musk acquired the platform, the doors to dialogue were no longer open. The example of Twitter is worth highlighting for one reason: The future of democracies in Global South countries must not rest only in the hands of the private sector, which can certainly change owners, and consequently, the parameters of the work that will be developed.
Likewise, we should not depend on specific personalities in the sectors that centralize how the elections will occur. Because of that, the process of maturing legislation to address how we will deal with huge issues such as disinformation, transparency, and political violence should be at the center of the discussion. That's it to start. Thank you.
>> AISHWARYA GIRIDHAR: Great. Thank you, Fernanda. I think what you were saying is especially important, that a lot of these harms aren't restricted only to friction points, things like elections or moments of mass violence, but rather build over time. And I think that's also why you need larger structures in place, so that even if Elon Musk takes over Twitter or something else happens, platforms don't change behavior in a significant way that affects rights.
So, Shashank, if I can come over to you now. Shashank Mohan is a Project Manager at the organization that I work at, at CCG. He works primarily on data protection, data governance, surveillance, and intermediary liability issues, and studies the effects of digitalization and the Internet on human rights, particularly the rights to privacy and free speech.
So, Shashank, if I can ask you a little bit about what the goal of transparency would be, right? For example, Fernanda has highlighted a bunch of things that can happen around election time and the kind of harm that can come from it. So, how do we link these harms to platform information? And what are some of the challenges that we've seen when we try to operationalize it? We recently had the IT Rules that imposed some transparency requirements, which have had somewhat limited value. So, if you could talk to some of that.
>> SHASHANK MOHAN: Yeah. Thanks. Thanks, Aishwarya. And I'm just going to try and break down your question and try and answer the first part of it as to, you know, what should be certain goals of transparency, and then touch upon what are some of the operational challenges that platforms are facing.
I think I'm going to do a bit of cheating. I'm going to look at, let's say, what the Santa Clara Principles lay down when they say, you know, what should transparency look like, because those principles have been sort of endorsed by various ‑‑
>> AISHWARYA GIRIDHAR: Shashank, I think we lost audio.
>> SHASHANK MOHAN: Can you hear me now?
>> AISHWARYA GIRIDHAR: Yes. Yes.
>> SHASHANK MOHAN: Great. Just tell me where the audio dropped.
>> AISHWARYA GIRIDHAR: At the beginning of your sentence.
>> SHASHANK MOHAN: Just that I am going to use a cheat sheet here and say that we can look at some of the principles laid down in the Santa Clara Principles, which have been endorsed by various private corporations, academics around the world, and human rights organizations. They are generally accepted as a reference point by people who are looking at what transparency should mean.
And I'm just going to say that transparency should lead to a fair, unbiased, proportional sort of experience on the Internet and should also respect user rights, right? Very, very broad. And the goals of transparency matter especially in the context of a heterogeneous society like India, where there are lots of communities and India is a community‑based society. In a complex society like ours, it's important to understand, and I'm now speaking specifically to the content moderation practices that large platform companies employ and deploy, how decisions are made when content is taken down or kept up. That is definitely one of the goals of transparency.
And Aishwarya, you were referring to the regulatory changes that have happened in India recently, since last year. One of the bigger changes is that the government has asked companies here to publish transparency reports on a monthly basis, and we've seen the challenges that have come with that.
When I talk about understanding how decisions are made by platforms on content, that has translated through the regulations in India only to mean sharing of certain numbers. Because of the broadly worded regulation that we have, companies have focused on sharing, let's say, takedown numbers, or how many complaints they receive and how many pieces of content they take down. What this regulation has been criticized for is basically not providing the nuance behind those numbers: let's say, if automated tools were used to take down certain content, what the evidence for these tools was, what tools were used, how much of that content was reinstated based on complaints, et cetera.
So, tying back to the goals of transparency, Indian regulation especially has not been able to achieve that. That's one insight I wanted to point to. And that's where some challenges also arise with government mandates: there's a tricky balance. If you're too specific, then it may not be flexible enough for various types of organizations to be transparent. When you're more broad‑based, like the Indian regulation, then you may end up seeing transparency that is not very meaningful.
Again, I think certain goals of transparency should be to ensure that users are getting opportunities to express themselves and to be heard. Often in India, either on their own or in response to government requests, platforms take down content and users are left without any recourse. There are specific regulations now in India that require certain platforms to provide such a mechanism, but we have seen that this is not enforced very rigorously.
So, to broadly answer your question, Aishwarya, about goals, I'll keep it at that. I'll quickly come to certain challenges that platforms face in being transparent, or general challenges that make it difficult for transparency to be operationalized.
One of the biggest challenges, I think, has been the rigidity of business models. Under the business models that platforms currently work with, they are not always incentivized to be more transparent. Their goal is to maximize engagement, maximize eyeballs on their platform, and ensure that people continue to use their platform. So, I think one of the biggest challenges for operationalizing transparency is business models.
There are a couple of other challenges, of course. You may need to add nuance to transparency measures, especially regulations. Regulations may need to differ from platform to platform, so you have certain basic principles for all platforms, but then also specificity depending on the size and complexity of platforms. For example, the transparency you may need from a platform like YouTube may be slightly different from the transparency you may need from a platform like WhatsApp, both platforms being very distinct and extremely popular in India, let's say. That's the second sort of challenge I wanted to point to.
The third challenge, I think, is jurisdictional, and we've seen it play out in various ways across the world, including India: what data are we asking companies to collect and be transparent about? We've seen that demands are made under laws that may be applicable in a certain country; for example, courts have asked for content to be taken down and have sought to apply certain orders globally. So, there are jurisdictional challenges there as well when it comes to transparency: how to apply certain measures of transparency uniformly around the world, and how to make them more nuanced jurisdiction by jurisdiction. But I'll stop there because there is much more to answer, and I'll give it back to you.
>> AISHWARYA GIRIDHAR: That's like five different areas I think we can go into, and we'll discuss this over the course of the rest of the session as well. What came through for me when you were talking about the challenges, especially, was that it can be very hard to standardize the kind of information that you're asking for from different kinds of platforms while also being specific enough to get the kind of information you need to actually target harms. So, that's a challenge, I guess. I guess if you were able to answer that question over the course of this session, then you would have solved platform governance.
But Emma, I'd like to go over to you now. Emma is the Director of the Center for Democracy and Technology's Free Expression Project, where she works to promote law and policy that supports Internet users' free expression in the United States, Europe, and around the world. She also works with user‑generated content services and other stakeholders to develop best practices, including meaningful transparency, appeals, and remedy procedures. And, like Shashank was mentioning, the Santa Clara Principles on Transparency: Emma was deeply involved in the development of those principles, which I know we have been referring to over the course of the transparency conversation.
So, Emma, if I could just ask you to talk about the kinds of transparency requirements we're seeing. I think Shashank mentioned a few of them. At least in India, the requirements we have focus on transparency reporting, but what are some of the other requirements we are seeing from regulators around the world?
And also, to address some of what Shashank was pointing to, how do you then make sure that the information regulators are asking platforms to provide is meaningful, so that we can work with it?
>> EMMA LLANSO: Great, yes, thank you. Thank you so much for having me on this panel. I really wish I could be there with you in person. And yeah, as far as the kinds of transparency that we're seeing come up in regulation, there are a variety of them. There, I think, is more of a recognition now than ever before that when we say "transparency," we potentially mean a lot of different things in the policy space. So, we often think about things like transparency reports that Shashank was talking about. You know, transparency reporting originally started as an industry initiative focused primarily on government demands for user data or content restriction that tech companies were facing kind of in countries where they operated around the world. There was a big push from civil society to say, this is potentially a matter of life and death for some people, to understand if governments are making demands for their personal data and for restricting their content or deactivating their accounts. So, that was sort of the first wave of transparency reporting.
Then we saw in 2018, following a lot of coordinated advocacy from groups around the world, at least a couple of social media companies start publishing reports on how they enforce their terms of service. Both of those have been, I think, useful initiatives for giving policymakers a sense of what transparency reporting might look like, but we're starting to see a lot of focus on particular kinds of transparency reports that regulators want to see. For example, under the Digital Services Act in the European Union, there will now be obligations for different kinds of service providers to provide that kind of content moderation and enforcement reporting, but also to give more specific information on how their algorithmic systems operate: to do regular reporting on what sorts of automated tools and systems they're using, but also on the kinds of criteria or considerations that go into developing algorithmic recommendation systems or content promotion tools. That way, people can have a better idea not just of how take‑down or leave‑up decisions are being made by the companies, but of what affects what gets promoted more widely on a service and what might get suppressed or removed from view.
There's also a lot of focus on reporting and other kinds of transparency around advertising in particular. I think it will not surprise anyone in this session that there's been a lot of focus on how online advertising systems, especially things like Facebook's advertising system or Google's kind of advertising network that spreads across the web, are potentially used for behavioral targeted ads and that this can be used to some beneficial extent by a lot of different speakers but also used in really manipulative ways, so a lot of focus on getting more information about who is posting ads, where the money is coming from, who those ads are being shown to, what kind of targeting criteria are being used, because this seems like a really sort of key element of the online information environment that people want to understand a lot better. So, that sort of transparency reporting in general.
But as Shashank was talking about, there are other kinds of transparency as well, things like user notice: information actually designed and delivered directly to users about what the content policies are, how you can report abusive content, what you can do if someone has flagged your content for removal or an automated system has taken it down, and how you can appeal that decision. This aligns with a lot of what was in the Santa Clara Principles: the tools users need to have, the information they need, and the awareness of where different functions on a platform even are, are all really important to users actually being able to exercise their own rights, make decisions for themselves about what services they use and how they use them, and take steps to get their content restored if they think it's been wrongfully taken down.
So, we do see some legislation, and I've been focusing primarily on the U.S. and Europe, that takes up some of this sort of transparency as well: this idea that users need a better idea of what the terms of using a website are, and that publishing a multi‑page, very‑tiny‑font, long‑and‑dense content policy, or stuffing it into the general terms of service for a website, is not exactly giving users the information that they need.
We also see a lot of focus in regulation around audits, and this is a little new. This is kind of more recent in the past couple of years, this idea of not just trying to get companies to report information, but really wanting some independent third party to be able to verify if that information is accurate, to be able to understand, what are the processes that a company went through to produce those numbers and to actually do some verification, some checking on what sorts of systems and processes companies have in place, how they develop these numbers for their reports and what the kind of descriptive elements or the more qualitative information that they provide, how that actually aligns with the real day‑to‑day practice. So, that's the whole question of like auditing social media companies and what that looks like and what that means is potentially a whole panel in and of itself, but it is, I think, a very interesting area because it logically fits into some of the broader conversations around regulation of really wanting to have sort of third‑party assurance of some of the information that we're getting from tech companies about how they operate.
And then the final kind of transparency or thing that sort of fits under the transparency umbrella that I'll highlight is the whole issue of enabling independent researchers to have access to data held by tech companies, including social media services and some other kinds of services, so that independent researchers can do research on it, can ask questions, can do investigations, can try to figure out and test hypotheses about how our information environment operates, how different kinds of interventions or mitigation measures that a company might roll out or that third parties might try to employ on a social media service ‑‑ are they actually working? What effect do they have? How did they shift what is being said or what information people have access to? Do any of the proposed solutions that people are rolling out, either independently or through regulation, actually make an impact? In my view, it's really crucial to have independent research happening on the information environment, and one of the big challenges for researchers right now, obviously, is that the tech companies are the ones with all the important data to use, and there's a lot of different barriers to getting access to that.
Some different social media companies have different APIs, tools that researchers can use to access some data. Those often don't provide access to the full suite of data that researchers might want, though, and there can be different kinds of limits and restrictions on how they are used. And these are systems that exist voluntarily. While I think it's very good that different companies are voluntarily trying to work with researchers or provide data or information, it's not a guarantee, and it's also something where we've seen in different high‑profile cases that sometimes a researcher's access gets revoked and it's not clear why, or there are suspicions that it's because the company doesn't agree with the direction that the researcher is taking their research. So, that's a big feature in discussions right now, including in the EU's Digital Services Act, to actually start looking at what a structure for requiring different tech companies to make data available to researchers really looks like, and how to put that in place while still ensuring safeguards for the privacy and security of that data. We've talked to a lot of researchers at CDT, and they are hungry to get access to some really sensitive data, including things like the content of private messaging communications.
If you're, for example, studying extremism and radicalization online and trying to understand what the path towards radicalization might look like for somebody who goes on to potentially commit offline violence, a lot of that interaction happens in the private direct messages or on private and even encrypted communication services, and so, there's technical questions of whether that content is even accessible, but there are enormous privacy implications about saying that, yes, your private communications could potentially just be exposed to a third party, because they have a really good idea of research that they want to do. So, there are a lot of different trade‑offs to be thinking through, and it's something where I think we're going to continue to see a lot of energy and attention, especially around questions like what data should people have access to, what are the privacy implications in making that data available, what kinds of technical transformations to data can you do to make it still useful for research but preserving of people's privacy, and also, who's a researcher; who actually gets to have access to different levels of data or data with different sensitivities. A lot of researchers would like data to be publicly available because they don't want to be sort of bound to any particular company's decisions about whether to make that data available to them or not. They just want it put out publicly, and then kind of no restrictions on what they can do with it. There's a lot of benefit to that, but there's also a lot of potential for abuse, depending on what kind of data that is.
On the other hand, ideas around saying that researchers must be associated with an academic institution or an official research institution could really cut out a lot of really important independent journalism, research by civil society organizations, or just research by researchers who aren't affiliated with any particular institution. So, trying to ‑‑ there's a lot of different kind of details there that are being worked out in a variety of different regulatory conversations. It's one where I really hope that policymakers and regulators can work together and think about developing systems that will work across jurisdictions and worldwide, because if we end up with multiple different standards and rules and different sort of regulations or, you know, vetting procedures, or any of that, it could become very difficult very quickly to actually implement any of these efforts, for researcher access, or more broadly, for any transparency. And I do worry that then it becomes something where companies focus on complying with, you know, the regulations in the countries that they're most concerned about and leave a lot of other countries without the kind of access or information because they're just not as big priorities.
So, to quickly answer this idea of how we make transparency meaningful, I think a lot of it ties into what Shashank was talking about: understanding what the goals of transparency are. We can't just take a one‑size‑fits‑all approach to this. We need to understand, for any given transparency proposal, what it is really trying to accomplish, who the audience for this transparency is, what kind of information they actually need, and what format is actually useful and gets that information to them as they need it.
And then I'd also flag that I think transparency regulation really needs to be ‑‑ or kind of any transparency initiatives really need to be iterative. A big part of why so many of us advocates want more transparency from tech companies is, it's really hard to do policy‑making when we don't have good information about how companies operate or the effect that they're having on the information environment, and that extends to policy‑making about transparency. We can't ‑‑ I don't think today we can say exactly what a good algorithmic transparency report should look like because nobody has really done one yet, or there aren't really solid examples or tested examples of where we could conclusively say, this kind of information is definitely useful, and this other kind of information is not useful.
I think we're really still in an experimentation phase. And so, for any policymaker thinking through kind of regulation in this space, I think it has to be flexible and iterative and something where, once we get initial information out of companies, we need to feed that back into the process and think about, okay, how does that change or shape what we want to ask for next or what the regulation should look like two years from now or five years from now, because I think it will probably change and will hopefully be much better informed from the initial efforts of transparency and help that actually improve the policy‑making around transparency itself.
>> AISHWARYA GIRIDHAR: Thanks, Emma. I think what you were saying in the end, especially about the unknown unknowns, is something we have to address in terms of transparency. It's hard to ‑‑ I mean, I guess from a regulator's perspective, it's easy to say just give me all the data, because we don't know what's useful yet or what we want yet. So, I think you're right that we do need to be prepared for this to be an iterative process.
I also think, like you said, two of the most interesting things that have come out of this set of regulations have been the focus on audits and researcher access. I know that that's where a lot of useful information can come from, but like you mentioned, I'm also a little bit nervous about how this would apply across different jurisdictions, where there might be different considerations for platforms as well as regulators, because there may very well be capacity concerns. If an audit, for example, is conducted by a regulator, they have to have the capacity to process that level of information as well as the technical expertise to make sense of the data, right? And similarly, in terms of who you categorize as researchers and how you accredit, for example, universities or whatever, I know there's a bunch of regulation around that, and how you provide access also makes a huge difference to the kind of information that you're obtaining from platforms, which will then inform future regulation as well. And hopefully we'll come back to that. I know we're running a bit over time. Maybe after Chris answers this question, we will open it up a little bit, in case people have questions. But Chris, let me just introduce you first.
Chris is a Research and Policy Manager at GNI, and he supports GNI's multistakeholder research, advocacy, and shared learning, with a particular focus on government laws and demands that could authorize censorship and surveillance. He's written about and represented GNI in different international conversations focused on good practices for digital transparency, with a particular focus on rights‑respecting responses to online extremism. So, Chris, I know that GNI's been doing a lot of transparency work in general and was also part of the Action Coalition on Transparency, and I know that the work you do also gives you a bit of an overview across jurisdictions of the kinds of regulations coming up in this space. So, if you could talk a little bit about the work that you do and also about what you are seeing come up across countries.
>> CHRIS SHEEHY: Thank you, Aishwarya, and thanks to my fellow panelists, steering group members, and the MAG and IGF for putting on this event. I think we've helpfully laid out some of the consensus: transparency is a response to some of the concerns about online harms, and there are barriers and, really, Global North‑centric conversations that underpin some of the work that brings us together to strive for more meaningful transparency efforts. So, I will just quickly walk through a couple of different multi‑stakeholder collaborations that are trying to take action and help drive more work on meaningful digital transparency.
So, just briefly, the Global Network Initiative, the organization where I work, is a multi‑stakeholder organization. We're made up of some of the world's leading information and communications technology companies, digital rights and press freedom groups, responsible investors, and academic experts and their institutions with a shared commitment to freedom of expression and privacy in the ICT sector. This commitment is embedded in the GNI principles on freedom of expression and privacy and corresponding implementation guidelines, which are rooted in international human rights law and informed by the UN Guiding Principles on Business and Human Rights.
So, this framework provides guidance for ICT companies on responsible decision‑making and multi‑stakeholder collaboration in the face of potential government restrictions on the rights to free expression and privacy, and it includes some corresponding commitments on transparency. The principles and implementation guidelines call on company members to disclose the applicable laws and policies which could require them to restrict access to content or services, as well as any personal information they collect. They also call on companies to outline policies and procedures for when those demands come in and to share those policies and procedures publicly. And finally, there are also provisions on notice to users when content is blocked or access to services is restricted.
Those broader transparency commitments are a subset of the GNI framework, which was first adopted in 2008. And so, in the almost decade and a half of experience implementing the framework, we have seen both important progress and some challenges in getting more effective transparency from ICT companies.
An important piece of the framework is that we note there's not a one‑size‑fits‑all approach, but I think there are some important markers of progress, and we've also had some really helpful conversations internally about some of the challenges and barriers, including some we've discussed today, that exist for more effective digital transparency. So, just some markers of progress.
I think, as Emma helpfully laid out, there has been a shift in transparency reporting from a strict focus on government demands to broader reporting on systems and policies and the enforcement of those systems and policies, and some more creative transparency mechanisms, including more real‑time communication about thinking around major events. We've also seen, and I think this is a challenging area where there's still more progress to be made, some progress in companies sharing public results of human rights impact assessments. And we've also seen companies make more efforts to help users understand the legal frameworks that might authorize the censorship and surveillance demands they receive; GNI leads a project, the Country Legal Frameworks Resource, which helps map out pertinent laws in countries where GNI members are present.
In this global engagement, some things that have come across, and that have really been pointed to today already, are that transparency is a core part of any regulatory or other response to concerns about digital harms, but a lot of these discussions are not centered in majority‑world perspectives, and there's an urgent need to consider more closely what the different regulatory proposals being put forth actually mean in practice in different contexts.
Another thing that can often happen in this meaningful transparency conversation, including in some of the smaller expert multi‑stakeholder workshops, is that we can get very bogged down in particular barriers and trade‑offs to more meaningful transparency, whether that's privacy considerations, trade secrets, et cetera, and miss opportunities for collaboration, as well as fail to align the many different actors who are working on these issues. So, given those concerns, GNI has worked with a diverse group of experts, again including my colleagues who represent the steering group, to help build a new initiative called the Action Coalition on Meaningful Transparency. This action coalition was launched under the auspices of the Danish Tech for Democracy initiative and its Year of Action, and is led by CSOs from India, Brazil, the U.S., the EU, Canada, and South Africa. It also has an arm's‑length advisory group, with representatives from industry, government, and international organizations, that helps inform the steering group's work but also offers an avenue to share and promote this work, and it is led by a project lead at the Brainbox Institute.
And we've had a series of public events. Perhaps a key focus of this group is also helping align and better clarify existing efforts. We're also beginning work on a portal to map the actors and initiatives working on meaningful digital transparency. I will put a link to that initiative in the chat and encourage folks who are interested to sign up. And I'm also happy to speak a little bit more about some of the specific comments and concerns we've seen on different content regulation approaches with transparency as a core component, but I think I can defer that until we open the floor.
>> AISHWARYA GIRIDHAR: Sure. I mean, we did have another round of questions, but we are massively over time. So, if anybody has questions at this stage, please let me know. Otherwise ‑‑ yes.
>> AUDIENCE: Hi. So, I'm Algin from SLFT, a digital rights organization in India. So, thank you for the panel. I think it's been a wonderful discussion. I had a question which I think goes to the core, which is essentially: in implementing the transparency principle within the legal framework that India has, you do not want to over‑comply with government demands, which may translate into over‑censorship of information and free speech. So, my question is, how do you operationalize this transparency principle within that framework without running into this over‑censorship issue that may arise?
>> AISHWARYA GIRIDHAR: You mean where there is no transparency around government action on platforms? Am I understanding that correctly?
>> AUDIENCE: Correct.
>> AISHWARYA GIRIDHAR: Shashank, do you want to take that, since you are in India?
>> SHASHANK MOHAN: Thank you for that question. Actually, yeah, I had written that as one of the things that countries in the Global South, when they look at regulation, or actually, platforms in the Global South ‑‑
>> AISHWARYA GIRIDHAR: Shashank, I'm sorry to interrupt. Could you just speak a little bit louder? You're a little hard to hear.
>> SHASHANK MOHAN: Is it better now?
>> AISHWARYA GIRIDHAR: Yes.
>> SHASHANK MOHAN: I was saying, actually, that was a point I had written down for the Global South to consider. And now I'll speak specifically about India. Platforms in India have been increasingly getting government takedown requests. You know, just looking at Facebook over the past year, there's been a year‑on‑year increasing trend of government takedowns.
So, I didn't hear Algin's question fully, but I guess what I'm making out from the question is, how can we get more transparency about government takedowns? Correct me if I'm wrong.
>> AISHWARYA GIRIDHAR: I think it's more just how do you square transparency obligations with ‑‑ yeah, basically not just government takedowns, but also how do you get transparency about government action on platforms?
>> SHASHANK MOHAN: Yeah. I mean, there are two points to make here about what platforms can do, and they're a bit tied in India because of the confidentiality requirement that Indian law places on government takedowns. But possibly what platforms can do is ensure that they give more granular detail about government takedowns. Currently, what platforms are reporting is, let's say, how many requests they get, how many takedowns they act upon, and broad‑level numbers like that. But it would be helpful for them to give more granular detail about, you know, the subject areas under which takedowns broadly happen.
One other thing platforms could do, and this is not uniformly applied, is ensure that they're communicating with their users when such requests come in and they act upon them, and again, giving more granular detail: not just saying that, hey, this post was removed due to a government request, but saying, as much as is permissible, what category of content, what category of law the content violated, and those kinds of details.
The other thing, of course, that Indians and Indian scholars have been asking of the government in India is to be more transparent about the requests it sends to social media companies, especially for takedowns. This is something that will need to change in law. Currently, in Indian law, there is a sort of confidentiality requirement around these requests, so that needs to change. So, yeah, I hope that answered the question.
>> AISHWARYA GIRIDHAR: I think Emma had something to say about this as well. So, Emma?
>> EMMA LLANSO: Yeah, no, I'd be happy to chime in. I think it's a really important question, and I think important for everyone to recognize that there are definitely kind of abusive uses of transparency obligations as well that we have already seen different governments trying to use transparency as a way to sort of coerce certain activity out of online services. We're encountering that in the United States, including from the Attorney General in the State of Texas using civil investigative demands targeted at Twitter to try to investigate, effectively, how it came to its decision to remove Donald Trump from its platform. This is pretty widely recognized to be a politically motivated effort, and it's framed around transparency. He just wants to understand how they made these decisions. But when you understand the broader context, you can see the political motivations there.
There are actually some lawsuits in the U.S. about some social media regulations passed by the states of Texas and Florida that are probably going to our Supreme Court this year that actually look at this question of is it constitutional? Does it comply with the First Amendment in the U.S. to actually require tech companies to essentially disclose their editorial processes? So, I think there are some real questions and concerns about how transparency obligations could be used to try to basically coerce companies into certain content moderation outcomes.
On the government transparency point, I think it's really important for all of us kind of in civil society, as advocates in these conversations about transparency to raise this point, to make sure that we're saying that, yes, as much as we want transparency from tech companies, we also need transparency from governments, because if we don't have transparency from both sides about what demands governments say they're making and what demands companies say they're receiving from government, we're really only getting half the story. And so far, we've really had to depend on the tech companies to provide us this information about government action against our speech online, and it's important that the companies do that, and I want to see them continue doing it.
But it's also something where there's just a real lack of reporting and transparency from most governments around the world about how they're using different elements of the law to seek restriction of people's speech or to access user data. So, I think that would be a good message for advocates around transparency to always carry along that it's not just company transparency that we're looking for, but also government transparency.
>> AISHWARYA GIRIDHAR: Chris, I saw your hand up. Did you want to pitch in as well?
>> CHRIS SHEEHY: Yes, and I'd echo Shashank and Emma's points, but one thing I failed to mention earlier is that part of the collaboration includes multi‑stakeholder advocacy. And so, we did put together a content regulation policy brief that surveyed a number of different regulatory efforts to address online harms around the world and put forth some human rights‑based recommendations. And I say that because one thing that's come up as part of this theme is that oftentimes there can be really thoughtful or aspirational transparency measures that we would want to see in different regulatory proposals that are also part of much broader packages with far more restrictive designs, whether that's broad definitions of content that could be subject to removal, overly strict enforcement, et cetera. And so, that can be a tough thing to balance, where you have requirements for a company to improve its reporting on a more regular basis that are paired with quick 24‑hour takedown requirements and really broad definitions. So, I think that's just a tension we have to battle, but not one that should discourage us from continuing to push for transparency as a core and effective approach.
And I would also echo the government transparency piece. I think even in Global North countries, you're seeing instances where, with some of the new mechanisms set up for regulation and enforcement, whether that's regulators tasked with putting together codes of conduct or other co‑regulatory mechanisms, sometimes there's no burden on the regulator to report to the public, or some of the reporting requirements that companies face aren't even necessarily public but may go directly to a regulator. And so, those types of mechanisms, while they can be promising, should still have opportunities for iteration, like Emma was mentioning earlier, and it's really important that we continue to hammer home those regulatory design questions about government transparency. If we don't, we risk creating model examples that more restrictive governments can easily point to. So, I just wanted to echo the government transparency piece and flag that there can be really thoughtful transparency measures inside larger, more restrictive content regulation packages.
>> AISHWARYA GIRIDHAR: Thanks, Chris. There is one question, I think, that's specifically for Fernanda: are there specific differences you see between Global North and Global South transparency? In the context of Brazil, were there any contextual issues which the general trends of transparency do not address?
>> FERNANDA MARTINS: Thank you. I think this question opens different avenues to think about Brazil and to think about Latin America. I think it's important to look at the differences between countries in Latin America, because we have a history that is (?) with the fact that our democracies are almost in crisis all the time. So, when we think about transparency in Latin American countries, we need to recognize that there is a difficulty in conducting the debate about transparency, and about what we are considering transparency, or what we are considering disinformation or political violence in the region. Obviously, some points of the discussion are similar to the Global North. But at the same time, we need to recognize, for example, that in Latin America, the concentration of media, of TV and social media, is not a new reality.
We need to consider the fact that when the regulatory debate happened in the last two years in Latin America, it was mainly about disinformation. And this discussion opened up other points that are important to consider. We have a lot of worries, because these bills, in countries such as Brazil, Chile, Costa Rica, Argentina, and Paraguay, also have requirements around content moderation or rules for public agents, but at the same time, they focus on punishment. And when we talk about countries that have a history of censorship, we need to consider the importance of the balance between transparency, the privacy of users, and the necessity of defending freedom of expression. And this context demonstrated to us in the last two years that in Latin America, we need to think more specifically and deeply about our reality, because that is not happening at this moment. We don't have a space to think about it, so we need to take some steps back and think about these questions differently.
>> AISHWARYA GIRIDHAR: Yeah, I think that's really important. Sometimes we forget that we're not always in a situation where we can just enforce transparency; we need to have conversations that build up to what that actually means and how you frame it in a way that protects rights, especially in places where that's maybe not the norm.
I can see that Shashank and Joanne have their hands up. Shashank, did you want to maybe comment first, and then maybe Joanne?
>> SHASHANK MOHAN: Yeah, sorry. I just wanted to make two quick points. Sorry, I am not able to switch on my video because of bandwidth problems at my end. But I was just saying that there are two developments in India that may be promising, especially when we look at the influence government has on content online. One is that one social media company, Twitter, has challenged government takedowns before a regional high court in India, challenging them on the grounds that it has not been given enough details and that the requests may be illegitimate in certain instances, and not asked for (?). And I think this is interesting; it's the first instance in India where a private company has challenged government takedown orders in court, and we'll have to see what comes out of that.
The other thing is that, as I was saying, by law in India, government takedowns are confidential in nature, but they include a mechanism under which the originator of the content should be given a hearing. And we often see that this hearing doesn't take place; the government does not give a hearing to the originator.
But in one instance, for the first time, a court has requested the government to provide a hearing and to provide details of why a particular website was taken down. So, I just wanted to chime in with those two examples; years of effort by various civil society organizations and academics may be bearing fruit, and things may be shifting slightly towards transparency even in India. Just wanted to make that point, Aishwarya. Thank you so much.
>> AISHWARYA GIRIDHAR: Sure. Joanne, did you have something?
>> JOANNE D'CUNHA: I just wanted to point to the second question in the chat, which I think might be interpreted as being about some of the incentives for transparency, especially with platforms.
>> AISHWARYA GIRIDHAR: Can you read out the question? Then any of the panelists who want to can take it. How is transparency feasible and achievable in an environment where you have owners of technology and users of technology? Don't we see this principle favoring the technical gurus? And how do we create a fair ground for transparency? Does anyone want to take this?
>> EMMA LLANSO: So, if I'm understanding the question correctly, one thing this makes me think of is the conversation we had a little bit about auditing and that idea of third‑party assurance of the information. I think we're seeing across different regulatory conversations about tech transparency a bit of a recognition that right now, a lot of the people who understand how technical services work are within companies, running and creating and administering those systems, and that it's very possible we will get information out of those systems that regulators or users don't really know what to do with. It reminds me a little bit of how glad I am that conversations about algorithmic transparency have mostly shifted away from "show us the source code," because frankly, most people would not know what to do with that source code even if it were provided in clear text to them.
So, as we think about making meaningful transparency out of all the information that's being sought from tech companies, we need those translators; we need people with the technical expertise who can help us ask the right questions, figure out which points to prod, ask follow‑up questions, and interpret the information that comes out. I think there are a lot of public interest technologists out there who are very eager to be involved in that sort of work, but it's a really important recognition in this question that there is a risk, if the transparency conversations go the wrong way, of all of this turning into a box‑checking exercise, where companies drop a lot of technical data that's hard for regulators to parse and everyone outside the companies says, well, I guess they must be doing an okay job. I don't think that's an outcome any of us are looking for, including companies, because then it's just a really useless exercise. But we do need to make sure regulators are building up the expertise and staffing appropriately to actually conduct the oversight that we hope goes along with transparency.
>> AISHWARYA GIRIDHAR: Thanks, Emma. Yes.
>> FERNANDA MARTINS: I think it is a really good question. From the perspective of the Global South, transparency is something we can consider as an umbrella concept, because when we talk about transparency, we are talking about hate speech, about content moderation, about data for researchers. So, I don't know if it's something that favors the technical gurus. I think it depends on the way we give meaning to the concept and what we will do with this data.
So, it's important to consider the dynamics of power and how, in different countries, we have different relationships with the platforms. In this case, for example, I think Brazil is very privileged, because we have many of the (?) platforms and direct contact with their teams. But at the same time, in neighboring countries in Latin America, the context changes suddenly when we cross different borders. So, it is important to think about a way to consider the entire region, and not just countries separately. Maybe that is a way to construct transparency, and to give meaning to transparency, in a way that is not about giving data to governments for censorship but about improving the way we construct, for example, public policies and our comprehension of dynamics such as disinformation, hate speech, and political violence. So, it depends on how we build this avenue.
>> AISHWARYA GIRIDHAR: Well, I know we're already over time, but I also know there was another round of questions that we didn't get to, so I just want to give all the panelists a chance to quickly make closing remarks, like a minute, tops, and then we can close. No pressure if you don't want to talk; that's fine, too. But I just wanted to make the space available. Well, then, I guess we're good to close. I just want to highlight a couple of themes that seem to have come up over the course of this discussion.
So, one is the need to develop enabling frameworks to have a meaningful conversation about what kind of transparency would be useful. The second is that any transparency regulation is likely to be iterative: as we get more information, we ask for more information and re‑tailor regulation, and that is generally to be expected, although it may not be the most efficient way to do things.
And the third theme that came out is the importance of applying transparency mandates to governments and making sure that we're also able to obtain data related to government use of platforms. So, that's it. Thank you. Thank you, everybody, for attending, and also specifically the panelists for sharing your expertise with all of us. Have a great rest of your day.