IGF 2024-Day 1-Workshop Room 5- WS41 Big Techs and Journalism- Disputes and Regulatory Models-- RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> MODERATOR:    Hi, everyone.    Do you hear us?    Online?    Yes?

>> NIKHIL PAHWA:    Hi.    I can hear you clearly.    Loud and clear.

>> MODERATOR:    I don't hear you.

>> JULIANA HARSIANTI:    Hello.    We can hear.

>> NIKHIL PAHWA:    Hi.    I can hear you loud and clear.    Can you hear us offline?

>> IVA NENADIC:    Same for me.

>> MODERATOR:    Oh.    Now it's good.    I'm going to listen to myself all the time but I think it's working.    Can any one of you online say something so we can check.

>> NIKHIL PAHWA:    Hi.    Just testing.    One, two, three, four, this is Nikhil.    Can you hear us?    Perfect.

>> IVA NENADIC:    Yes, hello, everyone.    Can you hear me?    Okay.    Perfect.

>> JULIANA HARSIANTI:    Hello.    It's Juliana.    Can you hear me?    Okay.    Thank you.    Perfect.

>> MODERATOR:    Good afternoon, everyone who is here in Saudi Arabia. My name is Bia Barbosa, a member of the Brazilian Internet Steering Committee, and I'm going to moderate this workshop in place of Rafael, who was supposed to be here but had problems getting into the country because of a visa issue. Thank you, everyone, for coming, and thank you to the people here in the room as well.

So welcome to the Big Techs and Journalism: Disputes and Regulatory Models workshop. The idea today is to have an open debate on what the alternatives are to promote journalism sustainability in the digital era, and what we can learn from regulatory endeavors on journalism and platforms in different countries. As a brief introduction, I would like to share that this issue is not new. There has been tension since the rise of large digital platforms, whose advertising ecosystem and targeted-advertising business models have profoundly impacted traditional journalism, and the systematic shift of advertising from journalism to these platforms has reshaped the landscape of media consumption and distribution.

This has also potentially widened power imbalances between those with access to information and those without, which is especially evident in times of crisis. At the core of the concerns lies the question of how journalism is remunerated by digital platforms, which has ignited a wave of regulatory proposals across many nations and mobilized multiple stakeholders. Australia notably passed legislation addressing this issue. In Canada, the approval of the Online News Act prompted Meta to remove news from its platforms. A decree has been issued in Indonesia, while South Africa is currently looking into this. In Brazil, where I come from, proposals have been at the forefront of the debate for some years: a law obliging platforms to negotiate with journalism companies, and the approval of a public sector fund financed by digital platforms.

Although these proposals do not necessarily contradict each other, the idea of a fund is defended by many actors as an alternative to the bargaining model, not as its complement.

This can be subject to years of negotiations, involving not only the executive and legislative branches but also the judiciary.

Digital platforms, media companies, researchers, journalists, and Civil Society Organizations and international bodies are taking part in this debate.

We have mapped out five controversies on the subject. The first one is: who should benefit? In other words, what should be the scope of any legislation regarding remuneration of journalism by platforms? The main legislative proposals delimit beneficiaries through criteria such as the number of employees or media turnover, but these criteria have been criticized because they potentially exclude individuals and small businesses. For some, journalists themselves should be paid directly; for others, this is not feasible.

The second controversy is: who should pay? There is different terminology to define the actors responsible for remuneration: digital platforms, online content service providers, and news intermediary companies are among the terms used in different places.

A third issue: pay for what? The understanding of what journalistic content is varies greatly. For example, a report published by the Organisation for Economic Co-operation and Development in 2021 defines news as information or commentary on contemporary issues, explicitly excluding entertainment news. However, this is a narrow view, and it can be read into some of the regulatory initiatives analysed in our report. In addition, an important part of the content made available by media outlets that generates high levels of engagement on social media platforms relates to sports and entertainment.

This controversy is related to the content of voluntary agreements between platforms and journalism companies, negotiated without the intermediation of a public authority. The confidentiality of these commercial agreements prevents evaluation of the criteria used to remunerate journalism and of their impact.

Therefore, there is a concern that the use of criteria such as the number of publications will serve as an incentive to reduce the quality of the content produced.

The fourth controversy highlighted is related to the demand for more transparency in the work of the platforms, whether regarding revenue or the algorithms used in the content recommendation systems for users. So, remuneration based on what data? And finally, what should the role of the State be? To what extent should the State interfere between news producers and platforms? There is a broad call for the parties to negotiate on their own; however, there is no consensus on whether this is the best model, especially considering the specifics of countries like Brazil, where free negotiation between the parties can result in an even greater concentration of resources and power in a small number of players.

The idea of a public sector fund financed by digital platforms and managed in a participatory way reflects a broader vision of the role of the State. In this case, decisions about the beneficiaries of the initiative would be part of the construction of public policies to support journalism.

So there is much to discuss in our workshop session, which will be divided into three parts. The first will consist of the speakers presenting their views and experiences. The second is a debate on the different perspectives raised by the speakers. And the third part will be Q&A. We would very much like to talk to our colleagues here in the room and in the online room on Zoom as well.

I'm not going to introduce all of you right now, but one at a time as you are about to speak. I think we could start with Iva. Iva studies content moderation policies of online platforms and the democratic implications of regulatory interventions at the European University Institute's Centre for Media Pluralism and Media Freedom, and she has been active in the Media Pluralism Monitor. It's great to have you with us, and it would be great for you to present your thoughts. Thank you very much. You have eight minutes.

>> IVA NENADIC:    Thank you very much for having me. I will try to stick to eight minutes, and maybe even be briefer so we have time for exchange. I apologise in advance because my view may be a bit more Eurocentric, as this is the main focus of our centre on media pluralism, which runs the Media Pluralism Monitor in all EU member states and candidate countries. But of course we regularly exchange with our colleagues and partners in South America, the U.S., and all over the globe. Basically, our work is on the health of the media system. The way we understand media pluralism as a concept is a little bit different than how it is perhaps understood in the U.S. or Australia or elsewhere in the world. We don't just speak about plurality in the market, but about the enabling environment for journalism and media, which supports freedom of expression. So we're looking at fundamental rights, like freedom of expression and access to information, and the role and safety of journalists, as well as social inclusiveness, or the representation of different social groups, not only in media content but also in management structures, and not only of media companies but increasingly of big tech, whichever terminology we want to use.

Then there's this element of political power.

Our work very much revolves around the concept of power. The way we approach and understand media pluralism, and the way we regulate in Europe to protect it, is to somehow curb or limit the concentration of opinion-forming power.

This is how we've been doing this for the media world that we had in the past.    Of course we are still not there when it comes to platforms but I think it's quite obvious and probably not just from this conversation that the opinion forming power has increasingly shifted from the media, if it's even still with the media, to online platforms or digital platforms or digital intermediaries.

So we live in an information environment in which digital platforms, the big technology companies, largely excluded from liability and accountability, actually do have power over shaping our information systems and over the distribution of media and journalistic content. The media, unlike digital platforms, do have liability for the content they produce and publish. So we're seeing a power paradigm shift, where these companies have become in many instances the key structures through which people engage with news and shape their opinions. They have tremendous power but very little responsibility with respect to that.

Because the focus of today's conversation is on the economic side, or the economic implications, I will focus more on that: the relationship between big tech and journalism in economic terms.

But I think it's important to emphasize that, even in economic terms, the rise of big tech has separated news production, which is very costly, especially if you think of analytical and investigative journalism and quality journalism in general, from distribution, which is cheap and easy these days, allowing platforms to distribute the content and then benefit or monetize from it.

The platforms have positioned themselves as intermediaries between the media and the audiences, and also between media and advertisers. We know the business model of media was traditionally developed as a two-sided market: providing news to audiences, or even charging them through subscriptions or payment for newspapers and similar, and then selling the attention of audiences to advertisers.

And now both sides of this market have been disrupted or controlled somehow by the digital platforms, by big tech.

And in the multisided market of big tech companies, the media are just one component of this value chain.    So I think this is also something important to keep in mind.

You opened with a relatively strong focus on online platforms, digital platforms, but I think what should also be introduced into this conversation is the increasingly relevant role of generative AI companies.

These companies are extensively using media content to train their models and to generate outputs, very often separating the content from the source, thereby diluting the visibility of media brands, which has implications, again, for the economic sustainability of the media. In that environment as well, we have negotiations, or at least attempts at negotiations, to establish a sort of level playing field, which is very difficult to establish, right? Because of tremendous imbalances in power between the tech side and the media side. But I think this is also very relevant to look at.

So looking at the balance of power: these big tech companies decide whether they want to carry media content or journalism at all. You mentioned, for example, the news bargaining code in Australia; you mentioned the initiatives in Brazil, India, South Africa, Canada, and the U.S., especially in California, which is a very interesting case, all trying to establish frameworks for negotiating fair remuneration that should go from big tech to the media. This is not easy because there is a tremendous imbalance of power. Even in the Australian example, the most advanced one, you see now there is a backsliding somehow. Australia recently published a review of the effectiveness of this framework for negotiations, which suggests that it is not strong enough to ensure the sustainability of this approach.

Because we've seen with the major platforms that they're withdrawing.    They don't want to renegotiate new deals.    They don't want to expand on these deals to include more local media, for example.    So again, it suggests that the power is still with platforms.    The power is still with big tech.

So very often as a response to regulatory intervention they even threaten or just ban news.    What we've been seeing from them through the years is they're segregating the news in specific tabs, for example, in specific areas on the services that they provide so that eventually they can just switch or shut it off.

So, regarding the conversation we have in Europe, one important point to make is that unlike Australia, which went with competition law, in Europe we focused a little bit more on copyright as the basis, the ground, for negotiations between the platforms and the media over fair remuneration.

I think this is also interesting for the conversation around generative AI and how to approach or deal with this problem in that area.

And we've seen a lot of issues with this.    Right?

With this negotiation, as you already emphasized, we don't really know what has been negotiated or who negotiates. In some countries we have big publishers negotiating first or separately, which has implications for media pluralism, because what the big ones negotiate somehow sets the benchmark for the others. If the big ones start negotiating and excluding the smaller ones, this can really have tremendous consequences for media or information pluralism more broadly.

The big markets and big languages, of course, are much better positioned to negotiate with big tech than the smaller ones.

And the same applies to the tension between the publishers, that is, the media companies, and the journalists. From many examples we've seen, they're not always aligned or on the same side. Who should benefit is indeed a big question.

We define media and journalism in a very broad sense, trying to take into account that there is a plurality of relevant voices, voices of reference in the contemporary information space, that should somehow be considered on an equal level with journalists; but of course this complicates the situation even further.

I don't know if I have any minutes left or should I    yeah?    I have.    Okay.    Good.

So basically, the main point I was trying to make concerns what we are seeing, what we've learned somehow, from these initiatives, mostly focusing on Australia and the Copyright Directive in Europe, because these two I think have the longest track record; even though they've only been around for a couple of years, we can reflect a little and look at their effectiveness.

I think there are a lot of shortcomings surfacing now that show we do not have sufficient instruments to deal with this enormous and still growing power of big tech; the negotiating power is still on the side of the platforms, so we haven't really managed to put the media at the same level to be able to negotiate equally. There are problems on the media side too. As I said, there is fragmentation between media companies, between media and journalists, between the big and the small ones, between bigger and smaller markets, big languages and small languages.

In Denmark, for example, they decided to form a coalition and negotiate collectively with big tech; they are really persistent with this and clear about setting their benchmarks high. I think another problem we should consider in this conversation is the lack of a clear methodology for what the value is, and who should be calculating it. What is fair remuneration in this context? We have several cases where this value is calculated in different ways. So it's not clear, and of course it's not clear from these deals, because the deals, as we said already, are not transparent.

So what we're seeing increasingly in the policy framework is a shift from these bargaining or negotiation frameworks to something that is a bit more direct regulatory or policy intervention in this area, speaking increasingly, for example, about the need to tackle the fact that platforms have the power to decide whether they even want to carry media content or not.

In Europe, for example, we have the European Media Freedom Act, which introduces a precedent somehow by putting forward the principle that media content is not just any content to be found on platforms. Platforms have to pay due regard to this content, and if they want to moderate it they need to follow a special procedure.

So I think this speaks to a policy conversation: if these platforms have indeed become key infrastructure for our relationship with news, media, and informative content more broadly, then maybe we should consider them as public utilities and have rules accordingly, or think of complete alternatives to break down these dependencies.

In terms of bargaining or negotiating frameworks for fair remuneration, given their shortcomings there has been a shift towards something more like a digital tax or digital levy.

But then, how do you distribute this money? Especially taking into account that not all states have the necessary checks and balances to make sure these kinds of processes are not abused.

I think I said a lot.    I will just stop here and look forward to the exchange.

>> MODERATOR:    Thank you so much.    We'll for sure have time for this exchange.

You mentioned the impact on small journalistic initiatives, and I think that is a good way to segue to Juliana Harsianti. I don't know if I pronounced your surname correctly. I would very much like to ask you to present your views on the effect of digital platforms on community development and the importance of journalism for these communities. To introduce you: Juliana Harsianti is a journalist and researcher from Indonesia who has worked mostly on the influence of digital technology in developing countries, contributing, for example, to Global Voices and international online media. And I'm sure that from her perspective there is much in common with our perspective in Brazil as well. So I give the floor to Juliana. Thank you very much for being with us. I don't know what time it is there in Indonesia, but thank you for being with us.

>> JULIANA HARSIANTI:    Yeah. Thank you. Can you hear me? I'm sorry, I cannot turn on the video because it is better for the sound connection. Thank you for inviting me as a speaker on this important issue. Good afternoon to everyone attending in Riyadh. It's almost 9:00 p.m. here in Jakarta, but it's okay to have a discussion with colleagues about the impact of big tech on journalism.

As mentioned in the opening remarks, this year Indonesia published a presidential decree regulating big tech and digital platforms so that they share revenue with publishers, because the government thinks that the presence of digital platforms in Indonesia has disrupted the business model of the media industry in Indonesia. This is still under discussion between the tech companies, the journalists' associations, and the government in Indonesia: whether this decree can be implemented shortly, or whether there will be some modification or adjustment in the future.

But this evening I will talk about how small media, mostly on digital platforms and social media, promote freedom of the press and spread information more freely in Indonesia.

I will give two examples, Magdalene and Project Multatuli; both are online media platforms based in Indonesia. Magdalene is more focused on gender issues, while Project Multatuli is more focused on in-depth journalism.

They highlight issues that have been avoided by the mainstream media in Indonesia. They chose digital platforms and social media because, even though the audience may be smaller, they can get more engagement from readers. This is not ideal from the business side, because they try to avoid having Google ads on their platforms and instead work organically to establish their websites in the Google search engine and keep their sites visible. But small media companies or community media have more independence, not having the bigger revenue of the big media companies, so they can act more freely to promote freedom of expression, run multilingual websites, and discuss more freely the issues that the mainstream media have been avoiding.

And how do they get money, how does the business run?

Yes, they have a business model to keep running. Most of them get money from readers, through subscriptions, though not exactly news subscriptions but mostly donations from those who have been supporting the platform, and in this way they keep the readers who want quality journalism and alternative media in Indonesia. I think this is enough from my side, and back to you.

>> MODERATOR:    Thank you very much, Juliana. For sure, there are other challenges that we will be able to discuss regarding the sustainability of small media initiatives. I think from the Global South perspective we still face challenges beyond those the Global North has in this regard.

Because, at least from the South American and Latin American perspective, we face the problem of media concentration. In very few countries do we have public media that can more or less guarantee some plurality in the media landscape in general. So I think we face other challenges beyond those the developed countries already have regarding the sustainability of journalism, and on top of those come the new challenges. So thank you very much for sharing your experience in Indonesia, and I think we can move forward with Nikhil. Nikhil, I would love to hear what you have to say about revenue and big tech companies, linking that to the legal cases against AI firms. I think that's a good connection to what Iva brought us at the beginning, relating how AI systems are using journalists' content to train models, specifically generative AI systems, but not only. So thank you very much for being here with us.

I'm going to introduce you and please feel free to complete any information.

Nikhil is a journalist, digital rights activist, and the founder of a digital news portal. He has been a key commentator on stories and debates around media companies, censorship, and regulation in India, and of course he has been studying big tech companies with regard to journalism revenue. Thank you very much for being with us. You have ten minutes.

>> NIKHIL PAHWA:    Thank you for inviting me to this very important discussion. I've been a media entrepreneur from India for about 16 years now, I've been a journalist for 18, and I have been blogging for about 21 years. I'm part of media-related committees in India that look at the impact of regulation on media, including the media regulation committee of the Editors Guild of India. I have built my entire career on an online platform. We have a small media company, with about 15 people working at our media organisation. But I still believe that journalism is not the exclusive privilege of traditional media, of formal journalists. Even today, news breaks on social media. Frankly, I see journalism as an act, and therefore people who publish verified content, even on social media, are also doing journalism.

So we can't really look at things purely from a mainstream media lens.

And you know, even today there are online news channels and online podcasts that run as online media businesses, and they're just an alternative to traditional media.

The primary challenge that media companies and especially traditional media companies face is the shift of advertising revenue from traditional media organisations which had restricted distribution, to digital platforms where now they face infinite competition because everyone can report.    Everyone can create content.

And, you know, big tech companies like Google and Facebook have built business models that rely heavily on data collection and targeted advertising, which means they compete as aggregators with the media companies on their platforms; but let's not forget that media companies also compete with all users on the same platforms.

So the real challenge for media is discovery. And what we also have to realize is that for media businesses, and I run one, the benefit these platforms create is traffic. For most media publications, the majority of their traffic comes from search and social media; that is the primary source of traffic for many news companies today, including us. What's also happening, just to cover the complete situation, is that we are facing a new threat with AI summaries, like what Google does on its search: unlike traditional search, which used to direct traffic to us, AI summaries potentially cannibalize traffic. They don't send us traffic anymore. Google isn't just an aggregator of links now; it's also turning into an answer engine. That's also a term used by Perplexity, which performs the same function. Perplexity and similar RAG models basically take facts from news companies and compile them into fresh articles that serve a user's need. A future threat for us, which we will see play out in the next two or three years, is that apps like Perplexity, which use our content, will start cannibalizing our traffic. All media analyse the traffic they have and review it. But it's important to remember that if we don't get traffic, we won't be sustainable.

So while most of this conversation has been focused on getting paid for linking out, I think that's a battle that should not be fought, because we actually benefit from search engines and social media platforms linking out to us. If we start forcing them to pay and they choose not to link out, which is what Facebook did in Canada, it will actually cost news companies significant revenue, because audiences will not discover them.

Australia's news bargaining code, as well, I feel has set the wrong precedent, because we benefit from traffic on social media. Paying for linking out should not be mandated. It breaks a fundamental principle of the Internet: the Internet is an interconnection of links. People go from link to link and discover new content, new innovation, new things to read.

And so, I think we should be very careful about forcing platforms to link out, because that is a mutually beneficial relationship.

The advertising issue is frankly a function of the media not building a direct relationship with its audience, like we have built with ours, and therefore losing out on monetization to big tech platforms. Let's not forget that publications like the Guardian chose to sign up with Facebook for its articles. They thought they were benefiting from Facebook, but they were also giving up their audience to Facebook. So I think we need to be careful and build our own direct relationships.

But I want to talk a lot more about AI, because I think that's where it becomes problematic. The tricky thing with AI is that facts are not under copyright. Media and news reporters like us basically report facts, and there's copyright in how we write things but no copyright in what we write about, because facts can't belong exclusively to one news company; the public good lies in the distribution and easy availability of facts.

So platforms like Perplexity actually take facts from us, from multiple news organisations, piece them together into a news article, and rewrite our content in a manner which, to be honest, can be much easier to read. Users can also query the same news article on sites like Perplexity, which means a user gets all of their answers, based on our reporting, on other platforms.

Now this is not copyright violation but it is plagiarism and unfortunately plagiarism is not illegal.    Only the copyright violation is.

Most of the cases being run, some in the U.S. and some in India, in the U.S. brought by "The New York Times" and in India by a news agency called ANI, focus on the fact that our content is being taken by AI companies and ingested by them to train their models, so the likelihood of them replicating our work is very high, and they've taken this content without a licence. I think this is an important one, because there is no licensing or compensation for using our work to train these models.

I'm aware that many news organisations around the world have actually signed up with AI companies for revenue sharing arrangements. This is a very short-term perspective, and usually AI companies will do exactly what, for example, Google has done with its Google News Initiative and its News Showcase: they will tie up with big media companies, and this will end up ensuring that smaller companies don't get any money.

In the case of AI, that's also what's going to happen. I'll give you a small example. When we moved our website to a new server, it crashed because of the number of AI bots that were hitting our servers and taking our content; because it had moved to a new server, they thought this was a new website. So this taking of our work is, I think, something we need to address through legislation and codes; there needs to be clarity on copyright and AI, and the outcomes in the U.S. with The New York Times and in India with ANI will address this.

There's a geopolitical battle going on right now about who comes out on top in the AI race. And they realize large language models need more and more written content and written facts, and a large repository of that lies with news organisations.

So while today we are trying to fight battles related to linking out, which I think is a battle that shouldn't be fought because linking out, like I said, is a fundamental, foundational principle of the Internet, the battle we need to fight, and fight early, is the one to ensure that we get compensated for content being used by AI companies, or else they need to remove our content from their databases. That's a battle I see being fought in courts, but not in legislatures.

And these legal frameworks are going to be very, very important to develop, because we need to create incentives for reporters to report and for news organisations to publish. Let's face the facts: the content that AI companies generate is based on our work. If we don't do original work, or aren't incentivized to do original work, and media companies start dying, they will effectively have nothing to build on top of. So I think this is the revenue relationship that regulation needs to address. Like I said multiple times, I strongly feel that the idea of paying for links is flawed.

And what's happened in Canada and in Australia is the wrong approach. Media companies are companies as well. They need to figure out mechanisms for monetization. They have moved from an environment of limited competition in traditional media to infinite competition in this new media, and they need to adapt to that change, not try to get a pittance from big tech firms. They should be competing with big tech firms.

>> MODERATOR:    Thank you very much. I think you brought us a very challenging perspective, because we haven't managed so far to address the challenge of journalistic content being used by platforms and news aggregators, and now we're already facing AI systems being trained on our journalistic content. I'd like to take a minute to ask you something: in Brazil there's a bill, a regulation, that has just passed the Senate. We still need the Chamber of Deputies to move forward and approve the bill, but it provides for copyright payment for journalistic content used in training and in the responses of AI systems as well. Do you think that could be interesting, even considering it takes a copyright approach, for solving at least the kind of problem you mentioned? I would like to hear from you a little bit, since we are examining all the perspectives that are on the table in different parts of the world to tackle this issue.

>> NIKHIL PAHWA:    I think that if it's legislated that there needs to be compensation for the use of copyrighted content, that is a correct approach. It's just that once you agree there should be compensation, the questions become: who gets compensated, and how much? What is the frequency at which they get compensated? Do you get paid for an entire data dump being handed over, or do you get compensated on the basis of how it is used? In which case, how do you validate that your content is actually being used by the AI? Because even Europe is struggling with algorithmic accountability. And by the way, on the linking-out part, while I said there shouldn't be a revenue mechanism there, I do believe we need algorithmic accountability for both social media and search to ensure there is no discrimination in how our content is surfaced. As a small media owner, I don't want someone else, big media or traditional media, to benefit at my expense.

So the fairness principles also need to be taken into consideration.    In the same way that fairness needs to be taken into consideration in case of the law in Brazil.

But the question you have to ask is: who is media today? How do you identify that, in supporting this organisation, you're actually supporting journalism? Because like I said at the beginning, journalism is not the exclusive privilege of journalists alone, right? I'm a blogger who started a media company. So I understand that bloggers also make money from advertising, and to that extent they don't get compensated either. Why should I as a blogger be treated differently from a media company?

I'm also running my own venture.    Right?

So we're seeing an infinite capacity for reporting today, because anyone can report. And in that scenario, who gets compensated and who doesn't? It becomes even trickier. If a blog is being scraped for AI just like a media publication, shouldn't the blogger also get compensated? That is the question. Why or why not? Right?

So these are not easy answers.    I don't even know if there are answers to some of these questions.

But when you're looking at defining laws you have to create that differentiation.    You have to break it up into who benefits and who doesn't benefit from that regulation.

If you look at most podcasters, they're doing open-ended journalism, in a sense. They're conducting interviews. Would you treat them as journalists under this law as well? If their transcripts are being aggregated by AI, should they be compensated for that too? Where do you draw the line? And that's the problem with laws. You don't know. It's very tricky to draw the lines in these cases.

>> MODERATOR:    And beyond the law, I think in countries where you don't have a democratic regulator to analyse how these kinds of laws are being implemented, there are even more challenges to deal with.

I don't know if Iva or Juliana wants to comment on that, or on any other aspect. Iva, besides anything else you would like to bring us, I would like to ask you to tell us about the coalition you mentioned in Denmark, where the media negotiated collectively with the digital platforms. One of the issues we had in Brazil as well, in the platform regulation bill that is now in the Chamber, was that, if approved, it would provide compensation based not on copyright but on the journalistic content used. How to negotiate, and how would it be possible for small initiatives to do that? There are already digital journalism associations in Brazil that try to represent most of the small initiatives, but they don't manage to represent all of them. So I thought it would be interesting to dig a little deeper into how this coalition in Denmark is working. But if you want to dive into the AI topic as well, feel free.

>> IVA NENADIC:    Thank you. Yeah, I'll start with the last point. I think Nikhil said many super interesting and relevant things, and I want to stay for a second with this last point about the complexity of defining media and journalism today. This is indeed one of the key obstacles for all of the attempts in this area, not only regulatory attempts but also soft policy measures, because it's the first step, the foundation. Who do we consider a journalist? Who should benefit from these frameworks and who shouldn't? How far can we stretch this? We've done a lot of work with the EU but also with the Council of Europe, which covers many more countries in Europe, and the Council of Europe has put forward some recommendations on how to define media and journalism in this new information world, or information sphere, we live in.

And it takes a very broad approach.    Right?    It's the freedom of expression that is at stake.

It's one of the key principles we nurture in Europe: the fact that the profession should be open and inclusive. So if this is the principle, how do we solve this practical obstacle? We do see a lot of paradoxes in information systems nowadays, right? The more open the debate is, the more misinformation we have. So we have, in a way, a plurality of voices in the news and information ecosystem, but not all of these voices are actually serving our democratic needs. Many of these voices are misleading, or extremely biased, or not professional, and do not respect ethical and professional principles. So this creates a lot of disorder in the information system, which confuses people, distorts trust, and has a lot of negative implications for our democratic systems.

I can give one example that may not be a good solution but is something to look at for solving this problem. It's something that was heavily discussed during the negotiation of the European Media Freedom Act, which does provide this special treatment to media service providers, including journalists, in content moderation by very large online platforms. We define those as platforms with more than ten per cent of the EU population as users, so more than 45 million people using them on a regular monthly basis.

First of all, the law provides a definition of media service providers, which is very broad; but in listing the criteria for which media can or should benefit from this special treatment, there is, for the first time in EU law, a mention of self-regulation, and an explicit reference to respect for professional standards. The law, and I don't recall the exact text now, says that media who comply with national laws and regulations, but also with widely recognized self-regulatory frameworks, can benefit from this. Of course this can be misused, if bodies with the wrong standards claim to have a certain number of media under their umbrella, but I think there is something in there. I think we need to find a way to revive self-regulation, respect for certain standards and ethical principles, for the different voices in this sphere. We can start from journalistic principles, but of course they can also evolve to meet new needs.

Another thing from that example that I think is useful for this kind of conversation is the transparency of the media who benefit from this. We battled heavily to have this clause explicitly mentioned in the legal text: the requirement that the media who benefit, who self-declare as media, are transparent, that the list is easily accessible for everyone to read, for civil society and academia, to make sure that bad actors are not misusing or abusing this legal provision.

So I think there is something to look into there.

On generative AI, I think this is a very relevant conversation, and again I would agree with Nikhil that this is a new battlefield somehow. We haven't resolved the old risks, the problems with the media, the political influences and so on, and even safety issues for journalists, and we've moved to the area of digital platforms. These two battles were fought in parallel, and now we also have generative AI, which is profoundly disrupting the information sphere.

I think the biggest change happening with generative AI is that we're moving from the fragmentation of the public sphere to what we call the audience of one. This is the extreme personalization of the interaction between an individual and the content this individual is exposed to, generated by these statistical models and systems that we don't really know how they operate, because there's a lack of transparency and accountability, and there are a lot of issues with the data they've been trained on, like biases, repetitiveness, and so on.

We're seeing cases such as Iceland, where the state strategically decided it is important for them, for their language and culture, to be represented, so they willingly gave everything they had in the digital world to AI for free, just to be represented, because they saw this as a priority. Right? Then on the other hand, unlike the New York Times case, we're seeing Le Monde, or El País in Spain, making deals with these companies. We don't know what these deals are, but Le Monde said it's a game-changing deal for them, for one company. This is probably not the best way forward, though, because it fragments and weakens the positioning of companies, and even more so of smaller companies.

So I think the Danish case is interesting. I think it's a trade union, but whether it's a professional association or a trade union, it was an existing organisation of journalists in the country who decided that the best approach is to go for collective negotiations with big tech, because this will make them stronger. They also decided to use all the legal instruments and regulatory frameworks in place in Europe to make their position stronger, and to ally somehow with the political power in the country to back them in this fight against the big tech giants.

Of course, this battle is ongoing. There is back and forth: sometimes they manage to progress and then there's a backlash from big tech. This is at a very early stage, very fresh, and I think it's a very interesting and relevant case to observe, to see how things can or should be done. Because I do believe that one of the lessons learned from the existing negotiating frameworks is that they don't really serve journalists and the media, so a collective approach is probably a better one, and we're seeing much more happening on that front: news media organisations coming together and finally starting to understand that they are stronger if they do this together. Yeah.

>> MODERATOR:    Thank you, Iva. Just for the record, I would like to mention that we tried to invite Google representatives to this conversation, but we didn't manage to convince them to come, which is what usually happens on different occasions.

I see Nikhil and Juliana have raised their hands. I'd just like to check whether anyone online is asking any questions. So Nikhil, do you mind if I give the floor to Juliana first?

>> NIKHIL PAHWA:    Of course.    Please go ahead.    I've said quite a bit.

>> JULIANA HARSIANTI:    Okay. I think our discussion has moved from digital platforms to AI, which has become the major thing. In Indonesia, generative AI, especially large language models, not only raises the copyright issues that Nikhil and Iva mentioned but also threatens the work of journalism itself. Journalists have started to generate news using ChatGPT, for example, or other large language models; they just do some editing and then publish it on their news sites. This is the problem. Well, not exactly a problem: it is still under debate among media companies and journalism associations whether it is good or ethical to publish generated news, or whether they should use large language models and generative AI only to find sources for the news and then write it themselves and publish it as their own. Is such news ethical in the media?

Yes, I think we need regulation by the state or the government, but the problem with regulation is that it takes time to discuss and to produce. Meanwhile, the technology is moving fast. By the time the government has published a regulation on generative AI, maybe we will already have a ChatGPT for the news area with more capabilities than the ChatGPT we know at the moment.

Well, what we think should be done is that associations, not only in journalism but also creative associations, have to discuss and create their own rules on what should and should not be done with generative AI in their work. I think this is more an ethical approach than regulation, and for the moment they think this is enough; but I think we need stronger regulation, with law enforcement, to overcome the impact of generative AI on journalism and creative work.

So back to you.

>> MODERATOR:    Thank you very much, Juliana.    Nikhil, please.

>> NIKHIL PAHWA:    Thanks. I'll just respond to one thing Juliana said. While we want strong regulation of AI, I think it's going to be very difficult to get, because geopolitically what's happening is that the EU is seen as too strong a regulatory player, and countries are afraid they will lose out on innovation in the regulatory battle. In India, what I see is a lot of pressure not to regulate AI.

>> MODERATOR:    This is the position in Brazilian parliament as well.

>> NIKHIL PAHWA:    Absolutely. The other thing to look at, responding to Iva: I think one way of ensuring that media owners get enough compensation is not to restrict compensation to media owners alone. If anybody's copyrighted content, whether it's musicians' or authors' or media owners' like ours, has been used for training models, they should get compensated.

I had a conversation with a lawyer a few months ago who said that AI ingesting our content is like any person reading it, because when they're giving an output it's not the exact same thing.    It's an understanding of our content.

I would actually say that a power law applies over here. The ability of AI to ingest vast amounts of our content from across the globe is far greater, and so there needs to be protection for creators, and that creator could be of any kind: media, movies, books, anything.

I would also say that there are other areas where AI does need to be regulated; for example, there has to be regulation for data protection. Iva mentioned bias, and I think bias is the trickiest one to regulate because it's about how one sees the world. Perhaps a plurality of AI systems needs to come in to ensure representation of different kinds, just like the different biases in society.

On The New York Times case, I'll be surprised if there's a verdict. I would not be surprised if they settle out of court, because they don't want a verdict, given that their information has been ingested by AI.

So this could be framed as use for research purposes. AI companies are trying to position the ingesting of our content as a mechanism for research, and in some countries there can be exceptions to copyright for research purposes.

This is another challenge that I think they're faced with. But there's a fourth thing that's emerging now.

From talking to a lot of AI founders, the use of synthetic data, data generated by AI itself, is also coming into the mix, so in the future our content may not be needed for large language models, because they're already trained on existing content. In that case, any compensation paid for future use may no longer exist. Because let's face it: these are language models, they're not necessarily fact models. Anyone who relies on AI for facts is probably going to get something or other wrong, and that's going to become problematic.

I still feel media has a responsibility for factual reporting in the future, where AI will always fail because its outputs are probabilistic in nature. This is still uncharted territory and evolving as we speak, so I'm not answering a lot of things, but we need to take all these factors into account. Thank you.

>> MODERATOR:    Thank you. I think another topic we didn't mention today is that, for the journalistic community, it is interesting to have journalistic content somehow training these AI systems; otherwise the results the AI systems bring us will be information we cannot trust in the end. So it's important to have journalists' content being used by AI systems, but the question is how it's going to be used: in a fair way, in a compensated way, or in a way that deals with copyright issues? I think for us, who support the integrity of information online, it's important to have at least some journalistic content being considered in the training of these systems.

I see that Iva has raised her hand. We are approaching the end of our session, so I'm going to give the floor to each one of you once and ask you to bring your final comments on this topic. Thank you again for being with us. Let's start with Iva.

>> IVA NENADIC:    Thank you very much. Yeah. I think this is probably just the beginning of the conversation, but it's excellent to have it at such a global scale, because I think exchanges like this are crucial to move us forward. I won't conclude anything, because it's very difficult to give final remarks on any of this; it's all open questions somehow. But I would like to put one more consideration forward. What we've seen from a lot of surveys is that trust in journalism is declining, and surveys have shown that people see journalism as a driver of polarization. There are multiple reasons for this, including smear campaigns by politicians who of course want to undermine the credibility of the profession because that works better for them. Right? But I think what we're not seeing sufficiently is self-reflection. Where have we failed as a profession? Especially in reconnecting with youth and young audiences, because clearly there's a gap there. Young people are departing from media in the traditional sense, departing from journalism in the traditional sense, and journalists somehow are ignoring this fact. We don't see enough self-reflection on that side. There's also the question of creating value for audiences. I don't think that media and journalism in the traditional sense are investing enough in this. There's this demand that journalism and media should be treated as a public good, and I do strongly support the idea that media and journalism, when professional and ethical, are definitely a public good and should be supported by public subsidies in a way that is transparent and fair and supports media pluralism; but at the same time there has to be a bit more self-reflection and more initiatives coming from within the profession. At the moment what we're seeing is a lot of complaints: we are captured by platforms, we are being destroyed by platforms, we need help. But the question of what value journalism actually offers to people has been pushed aside or forgotten a little bit. So I think the best case for journalism would be to revive, or remind us all, what this value actually is, and how journalists can create value with these new tools and technologies that are at everyone's disposal, including media and journalism.

I think that would make a stronger case for why people should go back to journalism and media and support them more.

>> MODERATOR:    Thank you very much, Iva.    Juliana, please?

>> JULIANA HARSIANTI:    Yeah. Thank you. I agree with what Iva said. We cannot draw a conclusion from our discussion, because this kind of discussion will continue in the future, and it needs to be a regular conversation, whether in developed or developing countries, in the Global South or the Global North, because this is important for journalists creating news on digital platforms. How to deal with big tech? How to deal with generative AI? And how to keep ethics among journalists amid the influence of digital platforms and generative AI, which have been challenging their work and the business model of media companies?

The conversation will, yeah, hopefully have an impact on policy, whether national policy or the policies of associations and media companies at the regional or national level, so that we have a better environment for journalists to keep creating and keep surviving in this digital era. Thank you.

>> MODERATOR:    Thank you very much, Juliana.    Nikhil please?

>> NIKHIL PAHWA:    Thank you for having me here. It's been a great conversation. I'm a journalist and an entrepreneur, and a capitalist in how I work, but I do that ethically. I do feel that as media we have to find our own business models rather than relying on subsidies, government support, or anything from the government, to be honest, because any time, and I feel this strongly, the government comes into a tripartite relationship between government, media, and big tech, two things happen. First, governments use the funds, and it may be different in Europe, but in the Global South governments use funds as a mechanism for influencing the media. And secondly, if the media pushes them to regulate big tech, then government creates regulations over big tech and uses that as a mechanism to regulate free speech. So to be honest, I don't want the government in this relationship, because it has an impact on democracy and on media freedom, whether directly or indirectly, whenever the government is involved. I'd rather we figure out our own business models, and if something has to be regulated it should be regulated across society, not specifically for the media. I don't feel we need special treatment, and I don't feel we should have special treatment. We have to adapt when times change: when we moved from traditional business models to online business models, and from online to AI. But at the same time, if someone is stealing our content, we need to go to court to protect our rights. I strongly believe I don't want the government in the picture and that we don't need protection. We need to fight our own battles, and we need to innovate on our own.

For far too long, we've allowed all the innovation to centre around big tech, when we've had the same opportunity to build audience relationships.

And I don't think expecting regulation and laws and policies to support us is going to solve the problem for us.

I know this is antithetical to what this conversation is about, but that's the way I run my media business.    Thank you.

>> MODERATOR:    And of course, the government is one thing and the role of the State, which we brought up at the beginning of our conversation, is another; that's one of the controversies addressed in the report we published at the Brazilian Internet Steering Committee. I totally agree about the risk of some governments regulating freedom of expression issues, or regulating technology in ways that touch freedom of expression. But I also believe we have to search for some kind of balance with the big companies. In countries like mine, Brazil, where you have the big national media companies and the global big techs, the public gets caught in the middle, the citizens get caught in the middle, and the State has a role to play as well, to bring at least more balance to the conversation.

But of course, it's not only governments that can bring this balance. We have the judiciary. We have independent regulatory bodies. So there are other alternatives that I think we have to put on the table to try to find solutions that reflect the specificities of each of the countries discussing this kind of problem.

But also in a global perspective because we're dealing with global companies and maybe some achievements that we have had in some countries may help us to deal with that in other realities.    From the Global South perspective, I think we can learn a lot from other countries that are tackling this problem.

Once again, thank you so much for your time, your insightful thoughts, and for spending some time with us here at the IGF. As was mentioned, this was only the beginning of the conversation. From the Brazilian Internet Steering Committee's perspective, I would like to thank you very much and to say that we remain available for any further exchange we might have. And to everyone listening and to those who are here with us, have a good evening. Thank you very much. Bye.

(Session ends at 5:56 p.m. AST.)