The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> Why don't we wait for another five minutes to give people time to come in.
Probably in three minutes, please.
>> BELTSAZAR KRISETYA: Good afternoon, everyone. Our session will start in two minutes, to give people additional time to come in, and thank you, everyone, for coming on time. Make sure that you have your headsets and are connected to channel number 1.
Thank you, and welcome. Also, before we begin, I want to check on Maria and Fitri whether I am audible and you can show your video already. That's fantastic.
>> FITRIANI: I can hear you. Will you show the slide from your side?
>> It can go either way. We can show the slides for you or you can present the slides yourself.
>> FITRIANI: I am happy if you can do it for me. Thanks.
>> BELTSAZAR KRISETYA: Sure. Fantastic.
>> FITRIANI: Hi, Maria.
>> MARIA ELIZE MENDOZA: Hi.
>> BELTSAZAR KRISETYA: Shall we begin, everyone? Fantastic. Hello and welcome again to IGF 2024. This is the last day, in Workshop Room 1. The title of the event is "Addressing Information Manipulation in Southeast Asia."
My name is Beltsazar Krisetya from the Jakarta-based Centre for Strategic and International Studies in Indonesia, and I will be your moderator for this afternoon's session.
Joining me are the speakers: Pieter Pandie, my colleague and fellow researcher from CSIS Indonesia, and Dr. Bich Tran, Postdoctoral Fellow at the Lee Kuan Yew School of Public Policy, National University of Singapore.
Also joining us online are Maria Mendoza, Assistant Professor at the Department of Political Science, University of the Philippines, as well as Dr. Fitriani, Senior Analyst at the Australian Strategic Policy Institute.
Before we begin our session, kindly allow me to provide a little bit of context about who we are as the organizers and why we picked this topic to present among the other ongoing research projects that we are conducting in Southeast Asia and the Asia Pacific in general. The Safer Internet Lab is a research programme that was co-constructed, if you will, by CSIS, our home institution, in partnership with Google, Google Indonesia back then, followed by Google Asia Pacific later on. It is a research hub that convenes researchers and practitioners working on the information ecosystem. In the first year, we tried to capture the whole supply chain, if you will, of disinformation, from production to dissemination and across all actors: we tried to cover how buzzers, cyber troopers and bots conducted influence operation campaigns in Indonesia.
We also conducted user-centred research, with surveys on public access to information and on political literacy, and we conducted platform-facing research: we wanted to explore further what co-governance models are acceptable as responses and mitigations, and can bring government actors, the platforms and civil society together in one forum and one institution.
We have now been doing this for the second year in a row, concurrent with the 2024 general elections that were conducted in Indonesia, and so we collaborated a lot with information actors as well as electoral actors in Indonesia.
We also shape the dialogue with international communities, joining as speakers and participants in the UNESCO forum, UN forums and events with diplomatic embassies. We have also hosted an academic conference on disinformation in Indonesia, as well as publishing reports; you can find the printed version of the report at our booth just outside this room. We have a booth for the entire IGF 2024, so feel free to drop in anytime.
And for this year, 2024, going forward, we will be focusing on three research streams. The first is the impact of deepfakes on online fraud: how generative AI could worsen the topography of online scams in Southeast Asia. The second takes a closer look into the impact of disinformation on democratic regimes: how the (?) of 2024 unfolded, and where information resilience plays a part in this.
And lastly, the one that we are going to present on this occasion is foreign information manipulation and interference.
We are also a part of the Global Network of Internet and Society Research Centers, or Network of Centers, in which institutions such as Harvard University, the Oxford Internet Institute and the CIS at Stanford convene in academic discussion globally. That's a short introduction to SAIL.
But we will delve further into one topic that is of probably growing interest across the region, which is information manipulation. We have Indonesian and Philippine case studies, plus some perspective from Australia. Without further ado, I will give Pieter probably 10 to 15 minutes to present the case on foreign information manipulation and foreign interference: what is happening in this part of the world, which is Southeast Asia, and whether there are parallels that can be drawn to instances happening elsewhere. Please, Pieter, the time is yours.
>> PIETER ALEXANDER PANDIE: Thank you very much. Thank you, everyone, for attending this session. My name is Pieter Pandie, researcher at the Safer Internet Lab and also at the Department of International Relations at CSIS Indonesia.
So, as Belts has very well introduced, the Safer Internet Lab this year has three research streams, which we have tried to conduct in the second year of this research lab. I will be focusing mostly on foreign information manipulation and interference (FIMI) and instances of it occurring in Southeast Asia. I will also cover a little bit of the information landscape in Indonesia specifically, and how foreign-based and domestically sourced disinformation relate there.
As part of the research stream for FIMI in SAIL this year, we have tried to create a database that records FIMI instances in Southeast Asia from 2019 to 2024. What we have done is, from open sources, build a database of cases where information operations, whether traditional, digital or offline, have been conducted in Southeast Asia.
For the dataset we have used those three categories. For traditional media influence, examples include influence actors placing advertisements, or hiring or paying influencers, journalists or opinion leaders to share their side of the story on social media, and so on and so forth.
For digital media influence, these would be cases such as coordinated inauthentic behaviour, or the creation of troll and bot networks to share narratives on digital media. Offline influence includes (?) influence, economic investment and so on, so forth. For this part of the research we will be focusing mostly on the digital aspect.
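[Editor's note: to make the three-category taxonomy concrete, here is a minimal sketch of what a single record in such a FIMI case database might look like. The field names, types and example values are illustrative assumptions for this transcript, not the actual SAIL schema.]

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class InfluenceChannel(Enum):
    """The three influence categories described above."""
    TRADITIONAL_MEDIA = "traditional"  # paid ads, hired journalists or opinion leaders
    DIGITAL_MEDIA = "digital"          # coordinated inauthentic behaviour, troll/bot networks
    OFFLINE = "offline"                # e.g. economic investment

@dataclass
class FIMICase:
    year: int                         # within the 2019-2024 study window
    target_country: str               # Southeast Asian state where the operation ran
    channel: InfluenceChannel         # one of the three categories above
    attributed_actor: Optional[str]   # None when no public attribution exists
    source: str                       # open-source report the case was drawn from

# Hypothetical example record (not a real case from the dataset)
example = FIMICase(
    year=2023,
    target_country="Indonesia",
    channel=InfluenceChannel.DIGITAL_MEDIA,
    attributed_actor=None,
    source="https://example.org/osint-report",
)
```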
So, part of the ongoing research: what we have found so far in our dataset is that while disinformation has been discussed openly by countries in Southeast Asia, FIMI has not been discussed that much across Southeast Asian states, and we will delve into the reasons why later. For disinformation specifically, what we have found is that countries in Southeast Asia tend to focus on disinformation as the topic but not FIMI. So, what they have tried to address by policy is disinformation that occurred domestically, with not much discussion of FIMI more broadly.
As part of our dataset, our early findings show a tale of two halves between 2019 and 2024. From 2019 to 2021, we found that cases of FIMI were not that high. Most of the disinformation cases that occurred in Southeast Asian states and were attributed were domestically sourced: mostly created by local actors or sometimes government actors. But from 2022 to 2024, we found that there has been an increase in reported FIMI cases and also a greater diversity of threat actors operating in Southeast Asia's information landscape. So the correlation we have drawn from these findings is that the increase in FIMI and influence operations in Southeast Asia is concurrent with rising geopolitical tensions between great powers and a rising number of international conflicts, such as the Russia-Ukraine conflict and the conflicts in the Middle East, even though the landscape is still mostly domestically focused.
So, in addressing disinformation, as I covered before, most countries still use national legislative approaches, rarely attribution: very few countries, if any, attribute the sources of disinformation when they are foreign. If they are domestic, attribution is more likely to occur.
And even more rarely through retaliation; I don't think we have found a case of that so far.
As part of our dataset, we have recorded cases from ten different countries in Southeast Asia, drawing on lessons from Taiwan and Australia as well. What we found is that it was quite difficult to find cases, because our team is quite small and mostly English-speaking, so most of our sources were English-language media and newspapers, and that was a great limitation in how we identified cases, particularly in countries where the information space is much smaller and much less exposed to English-language media. In countries such as Cambodia and Laos, we found it difficult to identify cases of foreign-based disinformation because, number one, attribution rarely occurred: a foreign actor was rarely attributed as part of a disinformation operation. And number two, if it did occur, it would most likely be in the local language. So the language would be localized.
Whereas in countries where the information landscape and social media users are much more exposed to international media, it was more likely that FIMI operations would be detected.
We identified a few foreign influence actors. From reported cases, these include China, Russia and Iran, as well as some non-state actors that were attributed, whether supported by a state actor or not. One of the examples we found was also the United States engaging in information operations directed at Southeast Asia.
So, to wrap up the dataset: sources of disinformation, and the information landscape more broadly, are very different and very contextual across Southeast Asian states, especially during election periods. There are different threat perceptions, particularly relating to FIMI. While disinformation is considered a challenge, and likely so for many states even outside Southeast Asia, not all governments consider FIMI a current threat. Some are quite comfortable leaving certain cases of FIMI to fester, because they are not deemed a big threat to the existing political regime, or they are not creating the social disturbances that domestic sources of disinformation might.
There is also, given the different cyber capabilities across Southeast Asian states, difficulty in attributing the source of disinformation. While there are cybersecurity cooperation agreements in ASEAN and so on, these are led or hosted by countries such as Singapore or Malaysia, which have higher cyber capabilities compared to other Southeast Asian states that are still building those capabilities. So not everyone is on the same page, either threat-perception-wise or capabilities-wise.
Moving on specifically to Indonesia: we just held presidential elections in 2024, and while the data is still very, very fresh because the election occurred in February this year, we found that most disinformation cases were still domestically sourced, whether by non-state actors paid by government actors or by certain political actors, but still domestically based. We also found differences in how disinformation was created compared to previous elections: relative to the 2016 or 2019 presidential and regional elections, the game in 2024 was a lot different.
In previous elections, prior to 2020, most of the disinformation created was text-based and image-based, and distributed on platforms suited to text and images.
Platforms such as Instagram, Twitter and Facebook carried image- or text-based disinformation, along with messaging apps like WhatsApp. In 2024, by contrast, we saw a greater proliferation of disinformation incidents that involved generative AI, in either visual or audio form. I have noted three examples. The first is a video deepfake of a former president of ours, who has passed away, stating support for one of the political candidates: a fabricated speech saying that you should support this certain candidate.
Two other examples, posted on TikTok, were audio-based. One was a supposed argument between a certain political candidate and the head of a party that supported him, which was very convincing for a lot of people.
And the third one was one of the presidential candidates giving a speech in fluent Arabic, a language he does not speak. These are three different ways in which disinformation has proliferated through generative AI. Our election bodies trying to deal with these disinformation cases are still playing from the playbook of 2019 and previous elections. They were not adequately prepared for how disinformation would proliferate in future elections because of generative AI, and I think this is a problem that will continue moving forward.
To wrap up the presentation: what is the way forward after this? I would identify three things. Number one, and this is for the Indonesian context (I can't speak for every country, since each has a different information landscape): for Indonesia specifically, a multistakeholder approach involving government, civil society and social media platforms will be needed to comprehensively address disinformation, whether during elections or in other instances.
Obviously, with generative AI developing the way it is, it will be very, very difficult to create policy that can serve as guardrails for it. With increasing geopolitical tensions and the tech competition between great powers, I think we are going to see rapid development of generative AI, so we need to do what we can and involve as many stakeholders as possible in that regard.
Number two, emerging technologies will intensify the speed, nature and spread of disinformation. While there are still cases of generative AI video and audio that are a little bit easier to identify as fake, moving forward the capabilities of these technologies will improve, and it will be increasingly difficult, even for the trained eye, to detect whether something is disinformation or not.
And lastly, and I think this is very important to say, especially for the Indonesian context: we need to strike a balance between effective governance of the information landscape and ensuring that democratic freedoms for civilians are still upheld. Drawing from previous research at the Safer Internet Lab: while there are policy responses from the government to address disinformation, oftentimes they can step on civil freedoms of expressing opinions and so on; they end up not addressing disinformation, but limiting freedom of expression.
So, that balance is, of course, very, very difficult to strike, but I think it's something we need to note moving forward. That will be it for my presentation. I will pass it back to Belts.
>> BELTSAZAR KRISETYA: Before we move on to Dr. Fitriani, allow me to delve further into something you said. You have explained really well how threat perceptions inhibit the effort against information manipulation; please also paint a picture of the different topography. What do the users look like? Does the Indonesian public serve as fertile ground for disinformation because they have been, quote/unquote, primed for disinformation by domestic actors, and does that make them more susceptible to foreign interference, in your opinion?
>> PIETER ALEXANDER PANDIE: I think with disinformation, it is most effective when it reinforces certain opinions or ideas that someone already has. This is something I have spoken about with counterparts from the U.S. and Australia as well: whether foreign or domestic, confirmation bias is a big thing in how disinformation spreads. When you already have pre-existing notions of a certain idea or a certain political position, disinformation can reinforce those ideas and, in fact, make them stronger. In the Indonesian context more specifically, we are one of the most populated countries in the world, I think number four right now; digitalization is occurring rapidly, and a lot of the youth are becoming more and more exposed to social media. While that increase has happened, digital literacy has not increased with it. And that is another challenge we need to tackle: improving digital literacy for social media users in Indonesia, whether young or old, so they are able to differentiate between fact and fiction, real information and hoaxes, is another important step forward. In a public opinion survey that SAIL conducted last year, the numbers were low for the amount of people who had participated in digital literacy programmes held by the government: even though these programmes existed for the public, not a lot of people were aware of them and even fewer were involved in them. So I think this is another challenge moving forward.
>> BELTSAZAR KRISETYA: Thank you. Moving on to Dr. Fitriani, Senior Analyst at the Australian Strategic Policy Institute, while the IT team prepares Dr. Fitriani's slides. Hello, Fitri.
>> FITRIANI: Hi, Belts. Thank you. Good afternoon. In Canberra it's 1:00 a.m., so apologies if I look sleepy.
Thank you for having me. It's an honour to be able to speak at the Internet Governance Forum 2024, and I would like to extend my gratitude to CSIS as well as Google for bringing this timely discussion on an issue that is essential, I think, for our digital future and security.
So, my presentation today, if the IT team can manage to pull up the slides, focuses on how we can tackle information manipulation in Southeast Asia by drawing lessons, including what does not work, from Australia's experience.
If I can go to the next slide, please.
Disinformation and information manipulation are a global challenge. As we know and have been discussing, they undermine democratic processes, deepen societal divides and weaken public trust in institutions. I would argue here that Australia is similar to Southeast Asia in that these threats find fertile ground where society is diverse in social and political background as well as opinion. In Australia, for example, we are open to protests in the street, and we have a large population coming from different parts of the world, people who have often left their home country but still have a connection to it.
And sometimes the government of that home country actually conducts information operations to influence them to say good things about the country they are from.
If I can go to the previous slide. I want to share how disinformation exploits sensitive issues across different political ideologies. It is not uncommon for state-sponsored actors to employ disinformation campaigns aimed at fostering division, confusion and mistrust among the population. In the Australian experience, this includes sowing distrust against allies. For example, in the recent U.S. election, BBC News reported that Mr. Simeon Boikov, an Australian-born individual known as a pro-Russian spokesperson in Australia, paid an X account (?) 100 U.S. dollars to post a fake AI video that falsely claimed immigrants were committing voter fraud in the swing state of Georgia.
This poses a concern for Australia, because such activities could tarnish Australia's reputation and its connections to allies, implying that Australia can be considered a launch pad for foreign interference in other countries. So this can be concerning. I am not saying ASEAN countries might be like this, but with increasing geopolitical tension we can see that such a situation might happen in the future.
Another example is how disinformation, as Pieter was just sharing, has become more sophisticated in leveraging social platforms. The second example, the photo below, is from Southeast Asia, where there are actually actors
>> BELTSAZAR KRISETYA: I think we are losing Fitriani in Zoom. Fitriani, are you still with us?
>> FITRIANI: They went to Boga's website channel, which carries news produced by AI, in a post that is unfounded, really fake. They used footage of a drone that was being used in Ukraine in an example about the South China Sea, actually trying to increase tension by saying that the U.S. is sending anti-attack missiles to support the Philippines. And they were copy-pasting from ChatGPT, because the posting said, "I am an AI language model and I cannot perform tasks that require real-time information." Concerningly, this news on the South China Sea was shared, one post over 25 times. We need to be aware that these campaigns not only exacerbate regional tension but pose significant risks to the security and stability of Southeast Asia, and here in my presentation I would like to share how Australia's recent experience could provide valuable insight into addressing this challenge.
And, perhaps, offer measures to combat information manipulation.
So, if we can go to the next slide, I will share how Australia dealt with information manipulation in last year's Voice to Parliament referendum, which asked whether the First Nations people, the Indigenous Aboriginal people of Australia, could have a direct, allocated voice in the Parliament. In this vote, unlike the earlier Russian operations in Australia, an alleged link with the Chinese Communist Party was identified, and TikTok and other social media were used to distribute false narratives that included racial segregation, with the narrative, as you can read there, that it was a way to change how Australia currently works.
So, after the Voice to Parliament referendum failed to provide a stronger position for the First Nations people of Australia, the government and the public have been trying to address this challenge in three main ways. If I can go to the next slide: the three main ways are, one, legislative efforts; two, public and joint attribution; and three, fact-checking and awareness campaigns.
Let me start with lawmaking. I know creating a law is a process that takes long, and I don't know whether the ASEAN 10, perhaps with Timor-Leste joining hopefully soon, can issue updated laws. But even in Australia, the proposed combating misinformation and disinformation bill was shut down by people who disagreed, saying that, perhaps, this was just a way of trying to silence the people.
So the misinformation and disinformation bill itself received a disinformation campaign. And one of the senators actually thanked Elon Musk, because Elon Musk shared the draft bill, saying that Australia was creating this bill.
After Elon Musk tweeted it, the government received a backlash, and behind that there was a local parliamentarian saying, if you want to disagree with this bill, this is how you do it. After that, there were 16,000 submissions saying the bill should not proceed.
So, that bill failed, although the effort should be appreciated.
The second is public and joint attribution. Attribution might be difficult, and sometimes cannot be done; for example, small and medium countries may ask: what is the benefit of saying that a major power is conducting an information operation against us when we cannot, you know, respond to it?
The way Australia responded to the APT40 cyber threat activities was by calling on other like-minded states that had also become victims of this advanced persistent threat, which was infiltrating government computer systems. The government called on the U.S., UK, Canada, New Zealand, South Korea and Japan to issue a joint attribution, directed at a specific Chinese state-sponsored group. And the way it was done was not political attribution but technical attribution. So maybe this is one of the ways it can be done.
And the third way was fact-checking campaigns, which the government endorses and supports, although the effort is done by independent institutions such as RMIT FactLab and AAP FactCheck that systematically debunk false claims. I think other countries in the Southeast Asian region have these too, such as Mafindo in Indonesia and (?) in the Philippines, which maybe Maria will share later.
If I can go to the next slide. How is this relevant for Southeast Asia? Southeast Asia also has diverse socio-political environments that present unique vulnerabilities to information manipulation. I think Australia's experience is similar to Southeast Asia's, but the difference is that in Southeast Asia fragmented regulation hinders platform accountability.
If I can give you an example: the top right is one example, where journalism programmes at several universities in Indonesia recently signed an agreement, an MOU, with the Russian state media Sputnik on how to do journalism. So that can be a bit concerning.
Meanwhile, other countries in Southeast Asia, for example Singapore, actually implement sanctions toward Russia. So there is a discrepancy in how regulation addresses certain actors, as Pieter was saying, that conduct operations in the region, and this can be a concern, especially when limited public awareness exacerbates susceptibility.
Given what has happened in the region, in Australia the government is then called on to play a greater role in verifying what is fact and what is disinformation. The bottom photo is an example where the Singapore law minister actually clarified how Israeli diplomats had been insensitive in posting a comment on how many times the word Israel is mentioned in the Quran; it was insensitive because that post was shared in the heat of the Gaza conflict. That is how Singapore managed to maintain control and harmony in the country and not escalate the issue.
So, I call for regional cooperation to counter shared threats: to actually communicate and share information about what has happened in one country. And content-sharing agreements, for example, may need to be something the region talks to each other about, because having a content-sharing agreement, for example with Sputnik or with other countries' state media that might not be democratic or might not report certain issues correctly, might increase tension in the region unnecessarily.
If I can go to the next slide, on the recommendations for Southeast Asia. There is a diagram here of what kinds of content can be addressed and regulated. First, the measure is to address the content that leads to the most harm, and the level of intervention should be proportional to that harm.
There are five steps I suggest for how Southeast Asia can address information operations or information influence. First, adopt clear regulations: if there is a violation on certain social media platforms, and the government has established clear and enforceable regulations, then that violation can be brought into the criminal justice process.
For example, the regulation should include minimum content moderation standards that are published, and mechanisms for holding platforms accountable.
The second is strengthening regional cooperation and intelligence sharing, as well as the capacity of governments to address disinformation campaigns.
The third is to enhance media literacy. ASEAN actually did that with train-the-trainer programmes on countering disinformation under the education ministers. We have a model in ASEAN; the next step is to translate that model into the different ASEAN languages.
The fourth, sorry, is to promote transparency by encouraging platforms to label trusted sources: for example, to label whether an image is AI-generated, or whether a video is AI-generated. More difficult, perhaps, is voice: how can we actually label a voice as AI-generated? But maybe we can find a way.
The last is to build multistakeholder frameworks with civil society and the private sector, because the technology that hosts the disinformation is owned by the private sector, civil society does most of the fact-checking, and the government supervises how the game is played.
I think that's the end of my presentation. Thank you so much for the time given to me. I hand back to the moderator.
>> BELTSAZAR KRISETYA: Thank you, Fitriani. Perhaps two minutes of elaboration: what lessons can Southeast Asian countries learn from the Australian experience of developing a code of conduct against misinformation and disinformation, and what parallels can ASEAN countries adopt, whether unilaterally or through a regional organization?
>> FITRIANI: A good practical question. One thing that can be done is asking, for example, Google, as well as other platforms, to rank the most credible websites to show first, like news from the government. And as happened during COVID times, there was a label, "this is related to COVID-19". That would actually help people be more aware.
If they can do that for COVID-19, I think they can do it for other things, for example scams, which are quite prevalent not only in Australia but also, perhaps, in Southeast Asia, because there are a lot of scams in general taking place on platforms as well. And when a platform showcases, for example, job opportunities or advertisements for discounts or sales somewhere, it needs to carry a verification, a government disclaimer saying please double-check before you input your details, for example.
I think those two are the ones that I recommend. Thank you.
>> BELTSAZAR KRISETYA: Thank you. Thank you, Fitriani.
Let's move on to Maria Elize from the University of the Philippines. You have 10 to 15 minutes, and please, the floor is yours.
>> MARIA ELIZE MENDOZA: Hi, good day, everyone. Good evening from Manila. I am pleased to be given this opportunity. I am Assistant Professor Maria Elize Mendoza from the Philippines, and I am here to present the case of the Philippines in terms of addressing information manipulation. I don't have slides, so I will be going through the suggested talking points.
First is to provide an overview of the Philippine information landscape. One thing the Philippines has been known for over many years is that we are the social media capital of the world. We are also known as patient zero of global disinformation, almost like the petri dish or the lab experiment of disinformation.
Filipinos are hyper-connected to social media and are among the top Internet users in the world, especially on Facebook, which is the top social media application used in our country.
Television, radio and the Internet are among people's top sources of information about politics and the government. But the 2016 presidential campaign of former President Rodrigo Duterte marked a pivotal shift toward social media campaigning. His victory was significantly influenced by coordinated digital campaigns on Facebook and YouTube, where content creators we have come to know as social media influencers or vloggers spread and amplified narratives supporting his policies, including the controversial and violent war against illegal drugs.
In the 2019 midterm elections, which were held in (?) for several national and local positions, the same playbook was adopted, and the opposition suffered an extreme blow in the senate race: no opposition candidate won in the senatorial election. All candidates allied with the Duterte administration won in the 2019 midterm elections.
In our most recent presidential elections, in 2022, the victory of Marcos Junior, the son of the late dictator Ferdinand Marcos Senior, was also largely attributed to the spread of online disinformation across different social media platforms. The content spread on social media did not necessarily promote Marcos Junior as a candidate but, rather, twisted historical narratives and attempted to cleanse the family name of the Marcoses, because they still have a lot to answer for regarding atrocities committed during the dictatorship; it also contributed to demonizing the political opposition.
Disinformation during this time also attempted to demonize the political opposition, and this continued until the 2022 presidential elections.
Investigative reports from civil society groups and independent media outlets show that Marcos Junior benefited the most from disinformation, at the expense of the main opposition candidate, our former vice president.
At present, the Philippines is saturated with independent (?): technically, vloggers and influencers who are not necessarily formally affiliated with any political party. These vloggers and influencers, followed, watched and heard by millions of Filipinos, are not covered by existing media accreditation policies or the regulations surrounding journalists, for example.
So they exert influence in shaping public opinion beyond the official campaign teams of candidates, because their online content is extensively consumed by the general public. There is also evidence that they have been hired by politicians in previous elections, and that millions of pesos, from thousands to almost millions of dollars, have been spent on these kinds of campaigns. What is troubling is that the social media domain of these vloggers and influencers remains largely unregulated. The content is there, and added to the poor content moderation policies of platforms such as Facebook and YouTube, this aggravates the problem.
As a result of this saturation of the information ecosystem, a survey conducted in 2022 found that the majority of Filipinos found it difficult to detect fake news. Similarly, despite the Internet being a top source of information about politics and the government, the Internet is also perceived as a top source of disinformation, mostly spread by influencers.
Moreover, Filipinos have developed a growing distrust toward traditional media and journalists, and these findings, together with the fact that Filipinos are among the top social media users in the world, are a dangerous combination.
So, how does foreign information manipulation and interference, or FIMI, enter the picture? We have had our share of FIMI in the past. FIMI in the form of Chinese-sponsored disinformation and propaganda has been around since the time of Duterte, who was relatively more friendly to China compared to previous Philippine presidents. From 2018 to 2020, China ran a disinformation campaign known as Operation Naval Gazing, an attempt by China to penetrate the Philippine information space. A network of fake accounts originating from China promoted and supported the Duterte family and Imee Marcos, the sister of the current president.
From 2018 to 2020, the network also attacked the accounts of senators and media outlets. However, platforms such as Facebook have taken this network down for coordinated inauthentic behavior.
In a nutshell, FIMI has not had an impact comparable to the domestic level of operations. There is a media outlet in the Philippines that is pro-China, but it was recently denied a legislative franchise to operate on television, so it mostly operates on social media.
So in the Philippines, disinformation and influence operations are mostly domestically created and spread by these social media influencers, vloggers, celebrities, digital workers, independent practitioners or even ordinary Filipinos who make a living out of creating and spreading disinformation or hyperpartisan content online.
The last part is interesting, the "hyperpartisan" in content, because not all of this content is fake or false. Some are facts, but exaggerated and twisted to suit a political agenda.
Still, the threat of FIMI must not be disregarded, because we have had a glimpse of it in the form of pro-China content. One thing we must also be wary of is the potential use and misuse of generative AI in the upcoming elections. Just a few months ago, our own president was a victim of this: an AI-generated audio clip of him ordering an attack against China, in light of the West Philippine Sea issue, was spread, and it was flagged by the government as well.
Given this, how has the Philippine government worked to address these challenges? Over the years, the Philippine government has failed to respond effectively: several electoral cycles have passed since 2016, yet we are still facing a worsening problem, and we have upcoming elections this coming May 2025. Legislative proposals to combat false information and regulate social media campaigns have not seen any progress. As a result, civil society actors, particularly media groups and academic institutions, have shouldered the responsibility of ensuring the integrity of elections by launching fact-checking initiatives, digital literacy campaigns and voter education programmes.
However, without robust government support, a comprehensive legal framework and systemic changes, the impact of these initiatives is limited. It was just last September 2024 when the country's election commission released a resolution providing guidelines on the use of artificial intelligence and the punishment for the use of mis- or disinformation in elections, just in time for the upcoming elections in 2025.
This September 2024 resolution also establishes that the Comelec, the Commission on Elections, will form collaboration networks with civil society actors. However, this is very late, and it remains to be seen whether it will really be implemented effectively given the extent of the problem we have now.
On the other hand, social media platforms such as Meta and TikTok have expressed their commitment to cooperate in the upcoming elections. This is badly needed, because proactive content moderation measures and accountability must be demanded from, and exercised by, social media platforms.
At present, content that is obviously false and hyperpartisan, even if posted in the last electoral cycle, is still present on these platforms. It has not been taken down despite multiple reports, so these content moderation policies really have to be looked at.
Moving forward, (?) must also sustain and strengthen its engagement with civil society. Civil society actors alone cannot solve this problem, and they have been shouldering the burden of fighting disinformation for the longest time. So strong cooperation between the government and civil society is needed.
Moreover, cybersecurity infrastructure in the country must also be strengthened. Outside of elections, Filipinos are highly susceptible to online scams, fraud, banking scams and phishing attempts. Multiple government websites have also been hacked recently, and there have been instances of data breaches in government agencies in which millions of records were allegedly sold on the dark web.
And lastly, to end my short presentation: in the long run, digital media literacy must be fully incorporated into basic and higher education, because at present, under the Philippine education system, only students in their last two years of high school have media literacy in their curriculum; the rest is not really institutionalized. This needs to be expanded across all levels of education to fully empower citizens in the fight against disinformation and information manipulation. That is my short presentation on the case of the Philippines. I am very much looking forward to the questions and the discussion later. Thank you very much for having me.
>> BELTSAZAR KRISETYA: Thank you so much. Thank you so much, Maria. Again, another quick question. I remember during COVID times there was an influence operation allegedly done by the Pentagon, targeting the Philippine public to sow disbelief in Chinese-issued vaccines, and the Filipino public bought that idea to the extent that they chose to wait for, you know, non-Chinese vaccines instead, and it created consequences for Filipino public health during that time.
Would you say, in the realm of influence operations, that what happens in the digital realm serves as an extension of geopolitical realities, particularly in the Philippines' relations with the great powers?
>> MARIA ELIZE MENDOZA: Probably yes, because in another forum I attended, there were in the room some analysts who looked at posts in China related to the Philippines. Some posts discredit the U.S.-Philippines alliance while still supporting Duterte, because Duterte is known as a president friendly to China, and Marcos Junior is not exactly that; it is widely perceived that Marcos Junior leans more toward the United States. So there are posts being spread on Chinese social media discrediting Marcos because he is pro-U.S. and discrediting the Philippines-U.S. alliance. I think these kinds of disinformation can also be related to geopolitical realities.
>> BELTSAZAR KRISETYA: Thank you. So, we have case studies from Indonesia, Australia and the Philippines, and none of them seems to bear good news. So we rely on you, Dr. Bich Tran, to tell us how the situation looks in Vietnam.
>> BICH TRAN: Thank you. I am grateful for the opportunity to be here. First, I would like to give a brief description of Vietnam's information landscape. There are three main components: domestic media, foreign media with Vietnamese-language services, and social media.
In terms of domestic media, most outlets are state-owned or related to the government, so they are heavily regulated by the Communist Party of Vietnam, and of course they adhere to official narratives.
In terms of foreign media with Vietnamese-language services, there are actually several of them, but I will give some examples from China and some Western media. For China, there is the China Global Television Network, or CGTN, and then (?) radio and TV; both of them have Vietnamese-language services.
For the Western media, from the UK there is the BBC, and there are U.S.-funded outlets as well, like Voice of America or Radio Free Asia.
The third component is social media. Unlike in China, you can actually access a lot of Western platforms in Vietnam; according to several sources, Facebook, YouTube and Instagram are among the top social media in Vietnam.
Besides that, there is the Vietnamese platform Zalo, a messaging app like WhatsApp, and TikTok is very popular too. So there are very many social media platforms that the Vietnamese can access and use.
In terms of foreign information manipulation and interference, I will focus on the foreign interference part. In Vietnam, because of its political system, FIMI in elections is actually not a big issue. Vietnam's government is mostly concerned about China's disinformation on the South China Sea disputes and also what they call "peaceful evolution" from the West.
Peaceful evolution is defined, roughly, as efforts by external forces seeking regime change without the use of military force.
Okay. On South China Sea issues, China has a lot of disinformation out there, but related to FIMI, I would say the first kind is that they sometimes misquote Vietnamese leaders. For example, in 2016, only two days after the ruling of the tribunal in the case initiated by the Philippines against China, the Vietnamese prime minister met his Chinese counterpart in Mongolia, and after the event a lot of Chinese media and newspapers reported that the Vietnamese prime minister had accepted that Vietnam supported China's stance regarding the ruling.
But actually, he did not say so. The Vietnamese media immediately, having got permission from the government, clarified that. They said that during the meeting the Vietnamese prime minister mentioned things like the 2011 agreement between Vietnam and China on basic principles for settling sea-related issues, and things like the Declaration on the Conduct of Parties, the Code of Conduct itself, and UNCLOS. He never said anything about Vietnam supporting China.
This kind of false information can undermine the legitimacy of the Vietnamese Communist Party. That is the concern here.
Another Chinese narrative tries to drive a wedge between Vietnam and its Western partners by saying that close relationships with external powers will not help Vietnam in the South China Sea disputes.
In terms of peaceful evolution, the Vietnamese government perceives almost any kind of criticism of the Communist Party as peaceful evolution. Sometimes it is a narrative, for example that the government is too weak in responding to China's behavior in the South China Sea, that tries to undermine their legitimacy. Sometimes even the promotion of human rights or democracy can be seen as peaceful evolution.
Other kinds of narratives try to tell the Vietnamese people that they should be anti-China or pro-U.S. This kind of discourse can cause (?) in the society. And sometimes, with the South China Sea disputes, there are certain groups that ask people to stand up and join protests.
With this, the Vietnamese government is concerned that protests against China could lead to other issues as well and cause (?) in the society.
Here I just want to emphasize that between disinformation and FIMI there is a very thin line to walk. They are related, but they are two different concepts. In the case of Vietnam, perceived FIMI can also be quite significant, because the government and the Communist Party of Vietnam have their own concerns. For that reason, I think it is sometimes very difficult for them to strike the balance between political (?) and freedom of speech.
In terms of what the Vietnamese government has done to deal with FIMI, I focus on the government part because there is not much from civil society itself. The government has repeatedly rebutted China's false narratives on the South China Sea, either through the spokespersons of the Ministry of Foreign Affairs or through state media. They try to do that every time they discover disinformation from China.
And to deal with peaceful evolution, in 2016 the Vietnamese Ministry of Defense created what they call Task Force 47 to counter "wrong views" on the Internet. After that, in 2017, only one year later, they created a cyber command. It is interesting because, compared to some other countries' cyber commands, the Vietnamese one is also in charge of countering peaceful evolution.
So, I will end here, and I hope to open the discussion. Thank you.
>> BELTSAZAR KRISETYA: Thank you, Dr. Bich. Before we get to the discussion part of the session, one little question for you. You mentioned the balance between regulation and freedom of expression, but I believe that's not the only balance the government is facing, because there is also the balance between countering information manipulation and economic dependence, or interdependence, on a certain actor.
How does the Vietnamese government balance this dependence with combating foreign interference? It's not working.
>> BICH TRAN: Can you hear me now?
>> BELTSAZAR KRISETYA: Yes.
>> BICH TRAN: Okay. I forgot to mention that, in dealing with FIMI, in Vietnam people can still access Chinese media, the Chinese newspapers with Vietnamese-language services. But they cannot access other media, for example the BBC or Voice of America.
I think for the Vietnamese government, and this connects to what Pieter and Maria already mentioned, no matter what the Chinese say about the South China Sea, the Vietnamese people will not believe it. So they are not too concerned about Chinese media.
But the Western media are a different issue, because Vietnam is a one-party state, so I think they are a little more sensitive in that area. And to your question about economic dependence on some partners, I think that could be one of the reasons, but I believe what I mentioned earlier is the main reason, because with Chinese media there is not much worry.
>> BELTSAZAR KRISETYA: Thank you.
So, I believe we have time for at least three questions. For anyone who wants to raise a question, please make yourself identifiable, and our staff will come to you. Yeah.
>> PARTICIPANT: Sorry. Hello, thank you for your presentations; they were very insightful. My name is Liza. I am an advisor for a German-Brazilian dialogue programme, and we also address disinformation as a topic.
I have not had much contact with the Southeast Asian context so far, so I wanted to ask whether you have any cases of disinformation having effects on the physical world, so to speak, because in Brazil we had the attack on the Supreme Court, and in South Africa I know there have been some complications with the electoral commission, et cetera.
So, are there any records of this in Southeast Asian countries?
>> BELTSAZAR KRISETYA: Thank you, Liza. So, that's one question, on the impact of disinformation on real-life incidents. Shall we gather two more questions? Please, sir. And then the lady in the back. Yeah.
>> PARTICIPANT: Okay. Thank you. My name is Koturo, from Japan. I work in cybersecurity. I have a few questions. First of all, I feel there is a contradiction, because on one hand we expect platforms to do more in this regard, and at the same time countries like Australia, the United States and others have already decided to ban certain online platforms from their markets.
So I would like to ask any of the panelists for their view on which is better: to expect more from platforms, or to ban them from your own economy. Of course, this session is funded by one giant platform. How can you trust one platform; how can you say one platform is more trustworthy than the others?
My last question: there is a movement to revitalize this discussion at the ASEAN digital forums. While listening to all the presentations, I was wondering which is the best platform to discuss our next steps on FIMI and disinformation, since at the ASEAN Regional Forum we have China, Russia and others.
Of course, the IGF might be a decent platform as well, but I would also like to ask the panelists where we should go for our next round of discussion. Thank you very much.
>> BELTSAZAR KRISETYA: Thank you very much, Koturo-san. And the last question for this round, please.
>> PARTICIPANT: Yes, my name is Nidi, and I have a question. When it comes to dealing with misinformation, we have all discussed how you can have digital literacy campaigns and technical solutions along those lines. But as you said, misinformation feeds on confirmation bias, and there is also something to be considered: the people in the most power actually tend to have a great role in spreading it. So even if you did manage to achieve digital literacy, for which I think there are a lot of technical solutions, this is a larger sociological problem at this point: if you are getting views from it, or getting power out of it, there is no reason for anybody to stop putting out disinformation, or, even knowing it is wrong, to stop believing it. So unless you have some way of tackling that larger sociological problem of what has become alternative truth, it won't really matter so much what technical solutions you come up with.
But I am not so sure how you would go about doing that, because nobody has an incentive to do that right now.
>> BELTSAZAR KRISETYA: Thank you for the intervention. Let's take these three questions first before we open another round. The first question, from Liza: has disinformation ever translated into real-life incidents in Southeast Asia? Then, specifically to Pieter: which is better, should platforms do more or should we ban them entirely? Also a question to all the speakers: what would be the best platform regionally to discuss this issue further, whether a multilateral platform such as the ASEAN Regional Forum or a Multistakeholder Forum such as the APrIGF, for example? And lastly: no matter what technical solutions are available, there are key opinion leaders who can cut through and play to the confirmation bias of the audience, so is there a means for us to curb the influence of these people in power, whether in government or in tech platforms? Please, Pieter, do you want to go first?
>> PIETER ALEXANDER PANDIE: Sure. For the first one, disinformation affecting the physical world: I think the case we discussed earlier of the U.S. influence operation in the Philippines, which was declassified by the Pentagon and reported in a Reuters investigation, is an example. The operation was more or less trying to sow distrust of the Chinese-made vaccines in the Philippines, which resulted in people not taking those vaccines and waiting for Western options. I think that's a big example of a foreign entity outside Southeast Asia creating an influence operation that had real-life physical effects, and I'm sure there are others as well, but off the top of my head that's a big one we could reference.
Then to the question from Koturo-san about the best platform to discuss FIMI in the Asia Pacific: we should take the discussion back a little, to whether countries in Southeast Asia or the Asia Pacific share the same threat perception. I don't think everyone is on the same page as far as FIMI. As I said before, ASEAN has a cybersecurity cooperation strategy and a lot of different cyber initiatives, but they mostly focus on cybercrime: financial scams, deepfakes, financial fraud and so on. For FIMI, especially in the Asia Pacific, where you have both victim governments and government threat actors, we shouldn't start at which platform is best; getting everybody on the same page first is the real challenge, because everyone has different threat perceptions in how they want to address FIMI.
And for the intervention from our colleague about confirmation bias and the broader socio-psychological problem of disinformation: I fully agree with your statement, and I think it's why Fitri, Maria, and I are, sort of, advocating for this research to take a more multistakeholder, multidisciplinary approach. Most of us on this panel are IR or cybersecurity specialists, and I think involving people from other lines of academia and elsewhere would be a good step forward in understanding the problem more broadly. So, yeah.
>> BELTSAZAR KRISETYA: Fantastic. You want to go next?
>> MARIA ELIZE MENDOZA: Even though confirmation bias means some readers have more appetite for disinformation, I still believe that digital literacy campaigns will help, especially for those who haven't formed their opinions yet. The skills to identify trusted sources will serve them on a lot of issues.
>> BELTSAZAR KRISETYA: Thank you. Fitri, the specific question on platforms.
>> FITRIANI: Thank you. In Australia, we have the Australian Communications and Media Authority, ACMA, with a voluntary code that calls on digital platforms to develop, and report on, safeguards to prevent harm that may arise from the propagation of mis- and disinformation on their services. So it's a voluntary code, but there's a concern about what happens if the code does not work, especially as we know there is a certain platform that, after a rich person bought it, is being used for disinformation.
And that's why in Australia there was a push for a misinformation and disinformation bill, but it failed to pass. It was shut down.
So, on whether we should regulate platforms or just do away with them: I think it's good to have a voluntary code. It's a mature approach, and if we expect platforms to show goodwill in doing business, they need to be able to show that they can prevent harm. But we know there are platforms, like Telegram, that very rarely respond to government requests even when there is, say, terrorism-related information, and that is quite concerning. So perhaps we can work from both sides: we can let the voluntary code allow platforms to safeguard themselves, and when that does not work, the government needs to have tools to actually intervene. So, that's one.
And if I may, I also want to answer how we can discuss this on a regional platform. In ASEAN, we have the ASEAN Task Force on Fake News, and the task force actually managed to issue a guideline on how governments can manage and combat fake news. The task force was only established last year, and the guideline is also very recent. So if ASEAN can do it, I would encourage other regions to do the same, because the guideline actually sets out, for example, what a government should do when fake news is detected. So, that's my insight, my suggestion. Thank you.
>> BELTSAZAR KRISETYA: Maria, do you want to respond to the questions?
>> MARIA ELIZE MENDOZA: On the effects of disinformation in the physical world, the example Pieter gave is a good one. And aside from the campaign against Chinese vaccines, disinformation around the side effects of vaccines in general has also had physical effects here in the Philippines, because there has been a high level of vaccine hesitancy in the past years stemming from a controversy over another vaccine before COVID. So that's one.
And also, the lies that the Marcos family spread about themselves were actually cited by supporters as their reasons for voting for them, especially when supporters attending campaign rallies were interviewed about why they were for Marcos. So I think it's also an effect of disinformation in the physical world that people wholeheartedly believe these lies spread on social media.
On the question of confirmation bias, the additional insight I can offer is that tech platforms still have a responsibility regarding this issue because of how they control the algorithm. We know that if we react to, or comment on, the same kinds of posts, those posts will keep appearing on our feeds.
So if hyper-partisan content keeps being fed into our feeds by the algorithms, it worsens the problem. Given that, tech platforms also have a responsibility with regard to the transparency of their algorithms, or to controlling the algorithms in general, because Facebook, for instance, has been under fire for allegedly prioritizing posts that draw more angry reactions. Posts that are really emotionally charged get more exposure on people's news feeds, and in that way the platforms also contribute to the problem. So even if this is a sociological issue, Pieter is correct: a multisectoral approach involving digital platforms and civil society could still be an important step toward solving this problem. Thank you.
>> BELTSAZAR KRISETYA: Thank you, Maria. One or two more questions before we close the session. Okay. The lady in the back.
>> PARTICIPANT: Hi, my name is Eleeza. I am from Vietnam and working in Germany. My question is actually addressed to the first speaker, but I welcome responses and contributions from the other speakers as well.
So, in your research, how do you define FIMI? Do you include the diasporic community as perpetrators of information manipulation?
In your findings you mentioned that there are state and nonstate actors. Can you please give us some examples of nonstate actors?
And in your research, did you also find evidence of the participation of the Islamic State in spreading disinformation in the case of Indonesia? I also want to add an input to Liza's question. When you asked about online disinformation and real-life incidents, I must emphasize that in the case of Vietnam, only the government can decide what is or is not disinformation. In a one-party state like Vietnam, the legislative, executive, and judiciary powers all belong to the state, which means the heads of all state agencies must be Communist Party members. When they say something is disinformation, they have the power to punish.
So I would say that on a monthly basis there are cases where online "disinformation", whether it's just a small post critical of a state-backed company or one small video mimicking a state leader, is punished. The highest punishment in the case of Vietnam is 20 years of imprisonment. So I would say that disinformation in Vietnam is very hard to detect.
Oh, sorry, I forgot one question, for Fitriani. How do you see the political will of ASEAN in fighting disinformation spread and created by governments themselves? You talked about ASEAN fighting disinformation in general. How about disinformation spread deliberately by governments? Thank you.
>> BELTSAZAR KRISETYA: Thank you. And one last question from the gentleman in the back. Preferably a quick question.
>> PARTICIPANT: Thank you so much. I am Fuaz, in New Delhi. We have been having very similar conversations there, so it has been very interesting to join this session. We also had a general election this year, and one problem I think we are facing across the board, which the last question also spoke to, is that the discourse around disinformation and misinformation has itself become weaponized: the narratives of fact-checking and countering disinformation are often appropriated by the very people who might be causing real-world harm.
So this is just a short intervention to say that we are seeing very real-world harm linked to online disinformation. At the same time, the lack of the kind of multistakeholder research we have been talking about is what makes this kind of appropriation possible. So we really do need not just multistakeholder but also inter-regional cooperation to bring out how disinformation is happening, how it relates to online elements, and how the discourse itself is being misappropriated.
>> BELTSAZAR KRISETYA: Thank you. The parallel I see in these interventions is the question of who is the arbiter of truth: with which actor should we endow that power, the government, civil society, or the tech platforms, and what kind of multistakeholder or multilateral cooperation can be brought to bear on it?
Last response from each of the speakers regarding these two interventions. You can go first.
>> PIETER ALEXANDER PANDIE: Hello. Can you hear me?
>> BELTSAZAR KRISETYA: Yes.
>> PIETER ALEXANDER PANDIE: On the question of defining FIMI: the way we have defined it is a pattern of mostly manipulative behavior that threatens, or has the potential to negatively impact, values and procedures in a country, conducted by a foreign state or nonstate actor and their proxies. Still, while conducting this research we also held focus group discussions with various experts from both Southeast Asia and external countries, and what we found through those discussions is that FIMI remains a very hard thing to define, even though the term was first coined by the European Union External Action Service. I think a step forward we can take is developing a more Southeast Asia- or Asia Pacific-specific definition of FIMI, and that's one research direction we could pursue: finding a definition of FIMI that is context-specific and more applicable, I would say, to different information landscapes.
So, I understand that it is a very difficult thing to define.
And another question was about the role of the Islamic State in information operations in Indonesia. Our research period was 2019 to 2024, and off the top of my head, while we are still early in building the dataset and still adding cases to it, I don't think we have found cases of the Islamic State perpetrating influence operations in Indonesia, although, with the disclaimer that the dataset is still at a very early stage, we could find reported cases later on. But so far I don't think we have found any. An explanation for that, and I'm not a terrorism studies expert, but broadly speaking, could be that terrorism and terrorist group activity in Indonesia have taken a downturn in the last few years. I could be wrong, but that is a broad assumption I could make as to why it has not occurred.
>> BELTSAZAR KRISETYA: Thank you. Bich, you want to add to that?
>> BICH TRAN: So, I just wanted to say something about your question about, like, who we should give the power to be the arbiter of truth. In the discussion we mentioned digital literacy campaigns. I think if we make them mandatory, taught in school, they will reach more people, of course. But then, which textbooks do we use, what kind of curriculum, and whose definitions? So that's actually a very big issue that we
>> BELTSAZAR KRISETYA: There you go. Okay. Okay.
Fitriani and Maria, quick response.
>> FITRIANI: Thank you. Difficult question. On the textbook point: the ASEAN Guideline on Management of Government Information in Combating Fake News and Disinformation in the Media is actually, strategically, framed as the perspective and (?) of the ASEAN governments.
But interestingly, there's a chapter there, if you want to take a look at it, on the types of approaches governments take to addressing disinformation. There's the whole-of-government approach, the strategic government approach, or a combination. The whole-of-government approach involves different agencies, and civil society as well. But in the strategic government approach, as Beltsazar mentioned, the government is the side that decides what is the truth and what people can listen to.
ASEAN embraced that, and it's aware of it; but being aware of it and having multiple ways of approaching the issue means it does not alienate countries that are nondemocratic but are also struggling with disinformation or with information operations coming from abroad.
Such operations try to drive a wedge between ASEAN countries, to set them against each other, and that's why ASEAN is actually trying to address disinformation.
>> BELTSAZAR KRISETYA: Thank you. Maria, one last remark.
>> MARIA ELIZE MENDOZA: Yeah, I would just like to agree with the last intervention regarding cooperation within the region, whether in Southeast Asia or the Indo-Pacific, because in the case of the Philippines, we really have a lot to catch up on in terms of addressing disinformation. As I kept mentioning in the presentation, there are no clear legislative frameworks in place to address this problem. But we also have to be very careful with passing legislation that might infringe on freedom of speech, because as far as I know, there are some countries with anti-fake news laws that are being weaponized by the incumbent government, such that anything that is dissent equals fake news. We must be careful regarding that. We really have a lot to learn from our neighbors in Southeast Asia and in the greater Pacific region in terms of addressing this problem, so I do agree that regional cooperation is important.
And I think a single country like ours engaging with tech platforms, calling on them to be more accountable, might have less effect