IGF 2023 - Day 1 - [NRIs] Community-driven governance for safe AI - RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> I'm just double checking all of our online speakers. Let me see if there's anybody else here that needs to be co-host.

>> MODERATOR: Thank you for your patience. We are waiting for our organiser and we will begin in a few minutes. We appreciate your patience. Thanks.

All right. Thank you so much for your patience and welcome to the NRI session on Community-driven governance for safe AI. My name is Jennifer Chung and I will be your moderator for this session. I wear a few hats in this space, but for this hat I am representing the Secretariat of the Asia Pacific regional IGF. We have a lot of interesting viewpoints from across the national, regional and subregional IGF initiatives, and they will share with you the different developments, topics, issues and all of the policy recommendations that come out of their meetings. The NRI network is a very strong network of 160-plus initiatives and, I think, still increasing.

Hopefully, people in the audience, if you would like to join the Zoom room you see our NRI colleagues who will be giving their interventions and discussions also remotely. Not everyone is here in Kyoto with us. But of course this is the Internet Governance Forum so it is a very good hybrid opportunity for everyone to be able to participate well.

Speaking about artificial intelligence, this morning during the opening ceremony we heard from the Prime Minister of Japan about the importance of A.I., and he unveiled the plans of the G7 to develop a code of conduct for developers of generative A.I. We also heard welcoming remarks from Secretary-General Antonio Guterres about the establishment of the High-Level Advisory Body on Artificial Intelligence. In this jam-packed schedule you will notice an increase in sessions that touch upon A.I. and emerging technology, and specifically the safe and appropriate governance of this emerging technology.

When we are talking about national, regional, subregional and youth initiatives, each of them has different topics that are important in their respective jurisdictions and home regions. It's really important to remember that A.I. holds immense importance for our societal development and offers a lot of transformative opportunities across various sectors. A.I. has the potential to enhance efficiency, productivity and innovation, drive a lot of economic growth and address a lot of the challenges we have in society.

From healthcare, of course coming out of the pandemic, to education, transport and energy, this emerging tech can really revolutionize processes and improve a lot of decision-making.

But amidst all of these really good things that A.I. can bring us, we always have to remember that there are harms. And we definitely need good governance to be able to leverage its actual benefits to humanity.

So talking about safe A.I., we need policies. We need appropriate regulations and ethical frameworks to guide the development of this. We need to ensure accountability, transparency, and fairness in the design, the deployment and use of A.I. systems.

So what is effective governance? And how can we promote responsible A.I. development?

So I'm very happy to introduce to you the panel of speakers we have to share with you the different learnings and discussions from pretty much around the globe. Online we have with us Ms. Pamela Chogo from the Tanzania IGF, and she will be presenting online. We have yet another online speaker, apologies if I pronounce your name wrong: it is Jorn Erbguth. We have Victor Lopez Cabrera, who will give us a flavour of the discussions that were had at the Panama IGF.

Also online we have Umut Pajaro Velasquez, and he will be presenting on the good discussions that were had at the youth LACIGF, as well as the youth Colombia IGF, which will have its meeting later on this year.

So much for the introductions of the speakers online.

Here on the panel we have with us, on the far end, Ms. Tanara Lochner, representing the Brazilian IGF. We have Kamesh, an ISOC ambassador for the IGF, another youth representative and delegate, and he will present viewpoints from the India IGF as well.

So I really want to start off with our speaker on stage, if that is okay. And then I will turn to our remote speakers as well online.

But first, I really want to ask Ms. Tanara: in terms of Brazil and developments there, what are the crucial points of discussion? Where are the pain points? Where are the recommendations coming from? And what are the most important issues right now facing, I guess, the Brazilian community and the LAC region?

>> Thank you, Jennifer. Hello everyone, I would like to thank the IGF Secretariat for organising this session. And I would also like to thank the host country, Japan. And the local Kyoto team. It's my first time here and I am excited to learn more about the country.

As the coordinator of the Brazilian IGF, I am keenly aware of the challenges faced by all the teams carrying out this event. Congratulations, everything has been executed perfectly.

I will try to address both questions, but I will definitely touch as well on other topics of the session. Regarding the theme of the session, A.I. should be recognised as an emerging technology with the potential to transform social dynamics and scientific discovery.

We are on the cusp of a pivotal shift in the trajectory of scientific exploration. As the world delves deeper into the digital era, our collective understanding is evolving concurrently. This revolution has resulted in an overwhelming volume of data, providing exciting opportunities for platforms.

This suggests artificial intelligence will bring a significant transformation. Recently, large language models have demonstrated outstanding capabilities in generating novel content and integrating ideas. They show potential in thinking not just with language, but in logical structures based on code, suggesting they might generate innovative concepts by merging and connecting diverse ideas.

However, alongside the strong abilities of large language models, there could be unintended consequences that lack transparent explanations.

In 2020, Brazil established an artificial intelligence strategy under the Ministry of Science, Technology and Innovation. The objective is fostering innovation, bolstering competitiveness and improving the quality of life for its citizens, always with an ethical and responsible lens.

As a multistakeholder committee since 1995, CGI.br has collected valuable insights throughout this journey. The multistakeholder approach is rooted in the belief that all stakeholders should collaboratively discuss, share ideas and forge consensus policies. It's a challenging endeavor, but in the same way, A.I. policy should be driven by multistakeholder discussions, while also ensuring the preservation of fundamental rights.

This multistakeholder model can further enhance international cooperation for ethical A.I. governance. One example is the establishment of an A.I. observatory in Brazil. This initiative aims to track A.I. governance strategies and regulations, provide training specific to Brazil, and elaborate indicators to track A.I. research, development and applications in the country.

This observatory is still under construction, and we hope it can contribute to A.I. research in the Internet Governance community. We must not forget innovation. The internet and digital ecosystem must be preserved and leveraged as a key catalyst for innovation and as a basis for development, addressing past, present and future concerns and technologies, including emerging technologies such as artificial intelligence, in order to extract benefits for people and drive the development of our world with responsibility, fairness, equality and opportunities for all.

All these aspects are also key when looking ahead to the global digital compact. But also to bear in mind the importance of the digital agenda to fulfill the Sustainable Development Goals. For now this concludes my input. Thank you very much.

>> MODERATOR CHUNG: Thank you, Tanara. A lot of very good developments, especially led by CGI.br, and interesting discussions that illustrate the very complex landscape that the Brazilian community is talking about right now.

I would like to stay in the LAC region, that's one of the highlights of the NRI networks. We are really spread all over the globe.

I would like now to turn to one of our remote presenters, Victor, who will give us a little bit of, I guess, the developments covering health and education. Victor, if you are able to take the mic, the floor is yours.

>> VICTOR LOPEZ CABRERA: Thank you. I thank the IGF for inviting us to share our ideas about these topics. While every Sustainable Development Goal matters, my emphasis is on enhancing healthcare and education, because they are the pillars of social development.

And in that regard, by 2053 Latin America is predicted to evolve into an aging society where those aged 60 and above will surpass other age groups in sheer numbers. A.I. possesses the potential to bolster the silver economy, crafting products and services tailored for the elderly. Ensuring the health and well-being of seniors is crucial, given the limited human resources we sometimes have, and A.I.-driven automation emerges as an ally. While the healthcare benefits extend to all citizens, our present focus zeroes in on the elderly, especially as this demographic grows rapidly. All sectors, from government to private enterprises, must collaborate on solutions. Recently Panama hosted the American -- silver economy event. The event facilitated discussions on digitizing health services with A.I. and enhancing tech literacy among seniors.

A standout pilot project features seniors working alongside younger tech aides, epitomizing the spirit of intergenerational learning. They explore tech usage and hone their personal skills. Our aspiration is to test telehealth technology tailored for tech-savvy seniors.

Through co-creation, seniors have realized the vital role of technology in their daily lives, particularly in activities they hold dear. Young people aged 17 to 21 experience firsthand the value of human-centered tech deployment, gaining insights into the trove of wisdom seniors possess. This collaboration is spawning new intergenerational initiatives. Furthermore, it isn't recent. It began some decades ago, predating sophisticated models like ChatGPT or any other LLM. I have been around 40 years to see this development.

We have witnessed varied outcomes over the years. Moreover, this week the Latin American Parliament, which is a body of decision makers headquartered in Panama, championed the establishment of an office of the future. Its mandate is to anticipate future challenges and prepare for them collectively. A.I. discussions dominate its agenda. In testament to this commitment, the assembly is focusing on collaborative and ethical A.I. deployment.

Now, there are some concerns, of course, because A.I. has its pros and cons, independently of how the A.I. -- is used, because A.I. is a set of different methodologies; an LLM is just one of them. Data remains paramount. In our age of data privacy, informed and enlightened consent becomes essential. The challenge lies in ensuring citizens have A.I. literacy now, not only digital literacy but A.I. literacy, to discern when to share data and when to abstain. Trust lies at the heart of this dynamic, necessitating trust-certified systems. A.I. is multifaceted, with its applications and methodologies continually evolving.

Keeping abreast is daunting; therefore I congratulate Brazil on the observatory. It's important that interaction with A.I. doesn't erode skills like empathy.

Regardless of its sophistication, A.I. doesn't replace genuine human connections.

So we must be careful with the use of A.I.

We cannot hinder development. Thank you.

>> MODERATOR CHUNG: Thank you very much, Victor. And thank you for reminding us of the importance of multistakeholder participation, both in the process of designing how A.I. and its governance should be, and in how A.I. should be used. There's a very fine nuance there, and it's very important. Thank you also for reminding us that data is paramount. When we are looking at emerging technologies, it's a set of technologies that then, of course, manipulate all the data that we have.

I would like to stay in Latin America and go to another remote presenter, Umut. He will give us the outcomes from the LACIGF as well as his own expertise on A.I.

Umut, if you are able to, the floor is yours.

>> Umut Pajaro Velasquez: Hello. This relates to A.I. and generative technologies. This is a summarized version of what people presented during the three days of discussions, and for me it was quite a demonstration of the comprehensive, multifaceted challenge of A.I. and other (?)

The main thing we discussed is (?) specifically, it was reiterated that (?) boundaries and should not be decided by governments alone. Because if we want to create a framework, meaningful partnership requires incorporating perspectives from the Global South, because the majority of the world lives in the Global South.

This is a way to ensure the development and regulation of these technologies consider the diverse needs of these populations.

Across the A.I. life cycle, this entails emphasizing the rights of marginalized communities: people of colour, the LGBTQI+ community, women and children.

It should have the ability to uplift these vulnerable groups, especially given the representations generative A.I. is producing on some social media platforms.

Also, stakeholders that were part of the conversation during the IGF said that (?) A.I. should not be the sole responsibility of governments, the private sector or academia acting independently.

(?) among all the stakeholders engaged in developing, implementing and using this technology.

Because only through (?) action can we tackle the multiple challenges A.I. is presenting to us.

Some participants expressed their concern regarding the multiple dangers of generative A.I., with emphasis on disinformation and how this can have severe repercussions, particularly in political campaigns.

Here in Latin America -- strategies are used a lot. The goal was to implement (?) to mitigate risk. That should be an essential element of A.I. governance frameworks.

That means it should not only be the responsibility of governments, but also of the private sector, to tackle this kind of (?) in order to preserve integrity, not only of information but also of the democratic process.

The environmental impact of A.I. and emerging technologies was also highlighted, in assessing the necessity of addressing the consequences of widespread A.I. implementation.

Moreover, it's crucial that the advancement of these technologies doesn't increase the digital divide, particularly in rural regions and among Indigenous populations. It's necessary to consider language barriers and sensitivities to guarantee these technologies are comprehensive and made for everyone.

This implies building universality in from the design stage of these technologies.

Finally, the need to review and strengthen the rules was addressed, not only for A.I. technologies but also for the future development of quantum computing, because that plays a central role in the internet and its governance. These are essential to secure not only privacy but also a more secure evolution of technology (?) internet. In general, those are the outcomes that came from the LACIGF this year.

>> MODERATOR CHUNG: Thank you. That was a fulsome output that the LACIGF discussed all the way ranging from ethical guidelines, regulatory frameworks, to even the environment and also really bringing it back to the most important part, it has to be human centric.

Now I want to move from Latin America, the GRULAC region to our region we are in now, Asia Pacific and specifically India.

So Kamesh, can you tell me, since I know your day job is also particularly looking at these policies, especially policies on A.I.: how can the development, design, deployment and auditing of A.I. be shaped to prioritize these ethical considerations?

>> KAMESH: Thanks for that question, Jennifer. Some great points have come out from diverse regions. I will try not to repeat myself and will give a unique perspective here. I will come to India toward the end of my intervention. I have three points to add here.

Starting with the very well articulated question in terms of development, deployment and auditing.

Here, I guess just coming from the status quo itself, we all know there are various frameworks out there that talk about different principles and what should be done, et cetera, at different stages.

But you know, one aspect that is very important as we move forward, thinking about how we can make A.I. technology utilization more responsible, is that we start to think about the intervention from an ecosystem perspective. When you talk about designing, at the design level there are key stakeholders involved. It starts from technology companies, and then come A.I. developers. And A.I. developers are not necessarily the people deploying such technologies. Then, when it comes to the deployment stage, there are A.I. deployers. You started with healthcare: within healthcare, maybe there is some tech company developing health technology based on A.I.

That might be bought by the hospitals, and ultimately it is operationally used by doctors or healthcare professionals.

So across this chain there are various stakeholders, and everybody has responsibilities towards ensuring, at the ecosystem level, that when A.I. is used, it is used responsibly.

So figuring out those nuances is what I think is the way forward in terms of making this technology use better.

No one is denying that this technology brings out the best possible outcomes in critical sectors. We just need to make it work better from the ecosystem perspective.

The second point is some of the principles we already have available. Like human-centered A.I. principles. Trustworthy A.I.

Or explainability et cetera.

To an extent we are seeing consensus across the globe, right? Even within domestic paradigms, there is a consensus on the principles we are striving toward for the utilization of A.I.

Or the way it has to be designed and deployed, et cetera. But I guess a little bit more nuance is needed in terms of the operationalization aspect; that work has to kick-start a little bit more. If I could give an example: we talk a lot about principles like human in the loop. But that principle, when it comes to operationalization across the life cycle, means different things, and for different stakeholders it also means different things. We need to bring such differences out. For instance, human in the loop might mean you have to engage with stakeholders, or bring in the people who will be impacted by this.

But at the same time, in the actual operationalization stage, at that moment maybe you need a human who is supervising.

So we need to bring such nuances out, so they can easily be picked up by those developing or deploying the technology, to understand their responsibilities and use it safely.

Now coming to my final point, in terms of what India is doing.

Like any other nation, obviously, India is also looking at how we can make the most of this technology's utilization. Here, as Victor pointed out a little bit, the question is how we can balance this with innovation. Right? A.I. is the innovation that is going to pick up, at least in the Global South, in terms of how they are going to excel in the future.

So we need to find the balance. They did a very good job with the personal data protection act, which came out recently, where we tried to look at different aspects: how to use the value of data as well as protecting privacy.

Something is happening within the A.I. ecosystem as well.

My second point: we know India is the Chair of the GPAI this year. What could be thought of as a regulatory framework? When we talk about a regulatory framework, there's a connotation that it means compliance and that it's going to come from the government. But the regulatory framework could be anything, including market-enabling mechanisms the government may be thinking about.

That's something that is going to be among the themes suggested through the GPAI that India will be hosting.

There, I guess one key aspect is that any of these forums, any bilateral or multilateral forum, should work toward consensus building. At this moment everybody is doing something when it comes to regulatory aspects and principles, et cetera.

But one aspect we see consistently is principles.

So we need to hold on to that point and try to see whether that can be a conversation-starter at the global level for us, to have a conversation like: okay, you also talk about this principle, I also talk about this principle, can we come together? I guess that is of key importance. I think that is also something we might see happening in the GPAI this year.

So I will stop there and I can come in later.

>> MODERATOR CHUNG: I really appreciate, Kamesh, you bringing in the flavour and nuance, especially with India becoming the Chair of the GPAI later on this year. And I'm going to abuse my hat as moderator just briefly to add to the context from Asia Pacific, since I wear the hat of the Secretariat of the Asia Pacific regional IGF -- this year, too, we had discussions on the contours of A.I. regulation. I want to highlight and pick up on a point you made. There are so many different regulatory frameworks, and everybody is trying very quickly right now to develop best practices. I think it is important that this is somehow coordinated, if not so much harmonized, so that we are leveraging and not redoing and reinventing the wheel as we move forward together. I cannot stress enough the importance of multistakeholder collaboration, echoing what Eshi said this morning about having a high-level body to oversee this, as well as important and strategic collaboration, both within the Asia Pacific region and with all the regions around the globe.

Enough about Asia for now. I would like to now move over to the Africa region. We are going to another online speaker, Pamela Chogo. If you are able to, we would like to hear more about the good discussions that came out of Tanzania, and also from the Africa region. Pamela, the floor is yours.

>> PAMELA CHOGO: Can you hear me?

>> MODERATOR CHUNG: Yes.

>> PAMELA CHOGO: Thank you. I'm a researcher (off microphone) so from our side, we had a long discussion during (?) with A.I. and what has happened. So far we are enjoying the benefits but at the same time we see a lot of challenges around.

So in our context, unfortunately, we are also still working on closing the digital gap. We see that technology is moving so fast: we are still trying to fill the digital gap, but now we are talking about A.I.

And we are a little scared, because the level of understanding of A.I. is still very low. Even among those in the digital space, some of the developers are not very aware of how to develop A.I. solutions.

Also, when it comes to users, we have users using A.I., but some of them, or many of them, are using it unknowingly. They don't even know they are using A.I.

We also have the community at large, which is also contributing towards A.I., but they don't really know or understand that in their daily activities they are contributing to A.I.

So this caused a long discussion, and we see there's a great need for increasing awareness of A.I.

Many see A.I. as a technical matter. But A.I. is really a socio-technical matter, and it needs to be looked at that way.

On our side there's a great need for A.I. advocacy and awareness so people can understand more. And here, it will help us ensure there's fairness, there is explainability, there's robustness, transparency and also privacy taken into consideration.

So we need to look at all aspects. From the people, the general understanding of A.I. but as the other speaker mentioned, considering other aspects such as culture, diversity, background in developing the use of A.I.

But we also have to look at the process of developing the A.I.

When you talk about A.I., you talk about data. So how are we gathering the data? Here it comes to the issue of consent during data collection.

It also comes into the data collection process: how ethically are we doing this? Also, on the technical tools, greater understanding is needed of the models and frameworks being used.

The developers should be in a position to share with us, be transparent, and tell us the models they are using and the frameworks they are using. And if possible, reuse of these frameworks and models should be easily done.

So as I mentioned, we emphasize awareness creation. It should consider policy makers, developers and users. Ethics in data collection should be ensured, as should validity and accuracy. I can share a little of my experience: I started with the (?) NLP (?) and I didn't get a straight line toward how I should go about the data collection process.

I had to go to seven developers to find the right path. If we have something that could guide all of us, that we could all follow, it would make the process better.

But also, what are the standards? For instance, when you train the model you have the aspect -- what would be the best time?

What happens? (Off microphone) So I'm happy that you briefly mentioned what was discussed in the opening session. You mentioned something about the code of conduct, and I am so happy to hear that. Because for me, my main suggestion was that the world should now look at A.I. the way we look at the environment and climate change, and maybe come up with an A.I. convention that everybody would have to (?) too. It doesn't matter which part of the world you are in, you would have to align and follow, and I think this would create a safe space where we can all enjoy the benefits of A.I.

Without affecting anyone. Thank you. So that's my contribution for now.

>> MODERATOR CHUNG: Thank you so much, Pamela. And thank you for also highlighting the importance of the different levels of development and awareness around artificial intelligence. We need to have, as Umut has already talked about, A.I. literacy, in addition to digital literacy, A.I. literacy is extremely important, the capacity building and awareness around that is very important. Because A.I., at the heart of it is a tool for us to improve our human societies. And how we learn to use this tool, how we are aware when we can deploy this tool is extremely important.

I want to wrap this first kind of opening remarks from speakers and end in the European region. And really, for Jorn, specifically how the IGF and global digital compact are looking at this in particular. I know, Jorn you also have some slides. And if you would like to share those, the floor is yours. Thank you.

>> JORN ERBGUTH: Thank you very much. I'm the subject matter expert for human rights and privacy, and affiliated with the University of Geneva. EuroDIG has participated in the U.N. Secretary-General's Global Digital Compact process and submitted one chapter on A.I., which you see here. We have mentioned all the issues: transparency, discrimination, data protection, privacy, explainability, all around trust. And the sad truth is that at least some of them cannot be achieved with the current technology.

We cannot have transparency and accountability. We do not know if systems will discriminate, and we have data protection issues. So these things have to be dealt with or solved in some way, but we cannot just say, well, let's do some regulation and then we will have it.

Interestingly, when the E.U. started to draft a regulatory framework for A.I. in 2021, they were not aware of large language models like ChatGPT. And when they became aware, they changed quite a lot of the already drafted A.I. Act. This means we have to be aware that new applications and new technology advances will change how we need to regulate.

And particularly also, new applications will require changes there. We have a global set of core principles, like that humans should ultimately remain in control, have oversight and remain responsible. And I agree, of course, with Umut that there is room for interpretation there, and that we need to have a flexible model to act on new developments and applications.

The multistakeholder approach and regional dialogue are key for ensuring harmonization for cross-border use cases. Some regions and states might also have different concerns, different attitudes and cultures; policy makers should be able to adapt these general principles quickly into concrete instruments for their own situation.

One rule that is carved in stone and is valid for everybody will not solve it. So we need to have a flexible tool. For example, when you look at education, EuroDIG has had a session on A.I. in education. And we see children are amongst the most vulnerable population. But children will also be required to use new technology in the future.

It doesn't make sense to teach them the skills that were required in the past. They need to work with new tools.

Students need to be able to study A.I.

Research is essential and should not be restricted by regulation.

Investment in educational programmes and raising awareness is needed to understand the technology: to understand the benefits of the technology as well as its risks.

Neighboring technologies like robotics, IoT and quantum computing also need to be taken into account when they become available.

Current and ongoing regional and global initiatives, collaboration and information sharing should be supported.

As was said already before, the multistakeholder approach here is very important and shouldn't be replaced by one uniform regulation that everybody has to adhere to in the same way. We need to continue with the multistakeholder approach, specifically because technology is changing fast and keeps evolving.

So I don't want to speak too long. Thank you. I will be looking forward to the further discussion.

>> MODERATOR CHUNG: Thank you so much, Jorn. I think EuroDIG had a very comprehensive discussion, especially on the implications of A.I.

I think we can all agree that having core principles when we are looking to create a regulatory framework, or any framework, is extremely important. You also jumped ahead to the discussion that hopefully we will have now with the audience on the floor and online about how NRIs can commit to actions. EuroDIG should be sharing this, and has probably shared this already, with the NRI network, but it is something we could probably build on and actually implement as well, I think.

So now I would like to open the floor to any questions we have. I already know that there is a question from the Bangladesh Remote Hub that they would like to take the floor.

But anybody in the audience, if you do have a question: I'm not sure if there are any roaming mics, but I think I see a mic stand. If you have questions or want to share your perspective from your region or jurisdiction, I think that would be very helpful.

First I would like to see if we can give the mic to the Bangladesh Remote Hub, if you would allow them to unmute themselves and ask their question.

If you are able to give Bangladesh Remote Hub co-host rights so they can unmute. Thank you.

>> Thank you, moderator Ms. Jennifer Chung, for giving us the opportunity to share in this important forum. Artificial intelligence is a field (?)

But it would be, I think, a very strong instrument to cross boundaries for learning and doing our daily business very quickly. My question is that in developing countries, in the context of socio-economic status and other scenarios, most people are far behind in internet connectivity and electronic devices. In this context, how can we benefit from these services that are to be shared (?) individual services. Thank you.

>> MODERATOR CHUNG: Thank you.

>> Any other questions?

>> MODERATOR CHUNG: From the same room, yes you can ask your question as well. Please, go ahead.

>> Bangladesh: My name is (?) I Chair, (?) (off microphone)

Thank you.

>> MODERATOR CHUNG: I'm so sorry. The second question was not quite audible.

If you could either repeat the question, or type it in chat.

>> Bangladesh: Can you hear me right now?

>> MODERATOR CHUNG: Yes, we can hear you.

>> Bangladesh: I'm repeating my question, from youth centre, how we can use artificial intelligence in agriculture (?) for youth in the developing country.

>> MODERATOR CHUNG: So we are also relying on the captioners to capture the question. I think it said: how can we leverage artificial intelligence in agriculture in developing countries? I'm just -- and education as well. Confirming, is that your question?

>> How can artificial intelligence (?) upskilling for youths in developing countries.

>> MODERATOR CHUNG: Got it, now we hear you. Upskilling for youth. There are two questions from the Bangladesh Remote Hub. One is regarding developing countries, where citizens have issues with connectivity: how can they leverage and benefit from artificial intelligence? The second one is regarding the upskilling of youth in this respect.

I see someone in the line. We will take this question and then go to the panelists, please go ahead.

>> AUDIENCE: Thank you, Jennifer. For my question, I want to reference the colleague from Colombia on the use of A.I. and data governance. How do you think, at the local level with national NRIs, we can use that to impact the growth of digital governance in our respective countries? Thank you.

>> MODERATOR CHUNG: Thank you, Ponselate. So do we have any panellist who wants to answer the first two questions? If not, I think I will go directly to Kamesh to answer the third one, since it was directed to you. And then we could go backwards to the other two questions. Kamesh?

>> KAMESH: Thank you so much. I didn't quite catch his name. Ponselate, yes, thank you so much for that question. I guess data governance is close to me, and yeah, that's a very interesting question. Sometimes we think legislation is linked to specific kinds of technologies, but it might not be.

How our legislation, like the personal data protection act of 2023, which is very new for India and which we just passed, is going to apply to A.I. technologies: I can't give much experience from India, because we only just got it, and we will be seeing it going forward. But one aspect, specifically when we talk about data governance and artificial intelligence, is how we can use publicly available personal information. That is a very key aspect and also something we have to be talking about. Because at this moment, the algorithms behind A.I. scrape data out there to provide the service. How such technologies can be used going forward under data protection regulation is something we have to look at.

Another aspect, since you asked about some learnings: in India we still have to learn. But one learning, at least globally, is that consent is used as an artifact for the utilization and processing of data. So I guess, with emerging technologies like artificial intelligence and (?) et cetera, a crucial question for us to answer is the way forward, right? We also need to start thinking, as we move forward and as the technologies are evolving and emerging, about new ways we can safeguard the data. The older mechanisms obviously have their merits, but we also have to evolve and have more options.

I can't really see how such an artifact can be applied in a generative A.I. kind of situation, right? I guess that's one learning for any new jurisdiction moving toward data protection regulations to consider: more innovation in regulations, which could work in tandem with the innovations happening in technologies. That's my answer, and I hope that helps. Thank you.

>> TANARA: Let me try to answer the first and the second questions. I think A.I. applications could be utilized as language translation tools and health diagnostic apps that function without online access. But it is difficult to imagine how people who don't have a connection and don't have money to access these kinds of devices can improve their lives with these solutions. But in agriculture -- I don't remember the name of the man who spoke -- A.I. can improve farming methods by predicting weather, helping farmers during sporadic internet availability. Furthermore, centres in remote areas might host A.I. services, offering local residents insights from educational to health-related applications.

Even without consistent online access, A.I. can still be harnessed in impactful ways.

>> MODERATOR CHUNG: Thank you, Tanara. That is extremely important when we are looking at developing countries: what are the benefits when we are still looking at problems with access, problems with connectivity, and problems of actually getting people online? How can they benefit from A.I.?

I would like to turn also to get a bit of a response from Pamela as well. Since she mentioned specifically from Tanzania the need to have this capacity building and awareness. Because there are already other issues and problems that are facing the community there. So Pamela, maybe a little bit of a response from you regarding that?

>> PAMELA CHOGO: Okay, thank you. As I mentioned earlier, as you know, countries are struggling (?) but I see A.I. being a great benefit. I would say (?) community solutions. So it might be difficult as an individual to access the service, but the service through A.I. can be beneficial in a community context. For instance, in health or in agriculture, you can have an A.I. solution that, let's say, can help with prediction in agriculture, or can also be used in health.

It is not necessary for an individual to have this device for A.I. to assist.

This device can be present in the hospital and hence can solve the problem of lack of experts as well as lack of other resources.

For instance, in my tests and studies, I am developing a chatbot to be used in agriculture, and the problem I'm solving is the lack of extension (?). This particular chatbot could be used in a community where they can access the information, where they can access the knowledge collectively. Yes, we have challenges, but I believe with what we already have we can still get benefits.
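Purely as an illustration of the kind of community chatbot Pamela describes, here is a minimal, hedged sketch of an offline, retrieval-style question answerer over a small local knowledge base. The crop entries, the keyword-overlap matching rule and all names are invented for this example; they are not the actual Tanzanian system.

```python
# A minimal sketch (not the actual system): an offline, keyword-overlap
# "chatbot" that answers farmers' questions from a small local knowledge
# base, so no internet connection is needed at query time.
import string

# Hypothetical knowledge base; a real deployment would load curated,
# locally validated extension advice, ideally in local languages.
KNOWLEDGE_BASE = {
    "maize planting season": "Plant maize at the onset of the long rains.",
    "tomato blight signs": "Dark leaf spots and wilting may indicate blight; remove affected plants.",
    "soil preparation beans": "Loosen the soil and add compost two weeks before sowing beans.",
}

def answer(question: str) -> str:
    """Return the entry whose key shares the most words with the question."""
    q_words = {w.strip(string.punctuation) for w in question.lower().split()}
    best_key, best_overlap = None, 0
    for key in KNOWLEDGE_BASE:
        overlap = len(q_words & set(key.split()))
        if overlap > best_overlap:
            best_key, best_overlap = key, overlap
    if best_key is None:
        return "Sorry, I don't have advice on that yet. Please ask an extension officer."
    return KNOWLEDGE_BASE[best_key]

if __name__ == "__main__":
    print(answer("When is the planting season for maize?"))
```

A shared device at a community centre or clinic could run something like this entirely locally, which is the point Pamela makes about community-level rather than individual access.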

>> MODERATOR CHUNG: Thank you so much, Pamela, that's extremely important as well to keep in mind.

For the question about upskilling for youth, could I ask Umut to elaborate more, since you had good learnings: what should we do to upskill youth in the general use of tools for A.I. and generative A.I., actually. Umut?

>> Umut Pajaro Velasquez: When it comes to youth, as was mentioned before, especially if we are in Global South countries, we pretty much now have access to technologies. The question was about how we benefit from these kinds of emerging technologies when we start to use A.I.

So for me, the main thing is to find ways to make it more accessible, because A.I. can be accessed in a way that is more engaging, and also to provide more hands-on experience, so they can understand how the tools are being applied in real life, how they can benefit from them, and how they can actually enrich processes in places like work, education and the like.

We know there are benefits, and we should focus on that aspect of A.I. too. But for that, we need to know how to use the different tools, and how to live with them in everyday life.

Another way to gain skills or abilities in A.I. is to reach out to people who already manage these kinds of tools and learn from them. And finally, what I can say is that when we try to upskill ourselves, it's better if we do it as a community, because A.I. is a collaborative field and it's important to create this kind of community, especially when we -- so we can (?). And we can do it in several ways, like, I don't know, online, probably developing some kind of hackathons where you have several people with different backgrounds who are trying to understand how a tool is being used, or how you can improve that thing or that tool. So yeah, there are several ways you can do that and benefit, not only, but the aspect in (?), yeah.

>> MODERATOR CHUNG: Thank you so much, Umut. I also want to pose these questions to Victor and Jorn, if you wanted to respond to any specific questions that were asked, regarding either the upskilling of youth or how the Global South and developing countries can benefit from A.I.

We would like to hear from you two, if you would like to intervene.

>> JORN ERBGUTH: Yes, please, if I may?

>> MODERATOR CHUNG: Yes, please go ahead.

>> JORN ERBGUTH: There was a question about data governance. Data governance is not only data protection but there's specific data governance and the E.U. passed a data governance act. And when you look at A.I., it is important that there is no monopoly on the training data. And this has been addressed by the E.U. data governance act. And of course this is very important for the developing nations, for small and medium enterprises, for startups that there is no monopoly on the training data. So big companies are requested to share data that could be used for training purposes.

And also, when you look at copyright: if copyright is extended, this might be a barrier to using freely available data for training purposes. So it is very important for equal access to A.I. technology that there is no monopoly on the data. And this exists alongside data protection. Data protection is about personal data, and training data doesn't have to be personal data; it can be any kind of data that is necessary for training A.I. systems. I think it would be very important for the Global South that they too can access this data, and that there are no barriers to accessing the data to train systems.

>> Victor Lopez Cabrera: May I?

>> MODERATOR CHUNG: Please go ahead, Victor, yeah.

>> Victor Lopez Cabrera: I will address the last one and go upwards.

I think it's wonderful having datasets available and not having a monopoly. But the Global South also has to provide data, so that the algorithms can actually model our specifics in our countries, in our contexts.

I can pinpoint an example. If you have biomarkers for the elderly, the ranges are not the same everywhere: if you are in Brazil or in Panama, they will probably be similar, but if you are in Europe or North America, the nutrition factors and the culture will affect the ranges medical doctors have to use when deciding how to do, for example, automatic monitoring of health for the elderly. We encountered that problem because we were trying to do some diagnostics and the ranges were not exactly appropriate for people who were in the countryside doing agriculture. Because they are not the same, you know; it's not the same body, not the same metabolism.

So the south and developing countries need to contribute more data so that the datasets are enriched and don't carry more biases. That's one thing. I don't know what would be the best way to do that, but I know the need: you need to do that so the probabilistic models, I mean the computations of the models, can be adjusted.

In terms of the digital divide, that will be a really hard one. In my opinion, Panama, with only four million people, is a tiny country, very small; you can go all over the place. And still we have a divide, especially after the pandemic: those who do not have devices, those who do have devices but do not have access to the internet, and those who have none. And it's a really, really nasty problem that governments and the private sector -- it's not just a thing for governments, society as a whole -- have to establish a way to address. Computationally, if I may say so, at some point you can do some work with A.I. on your cell phone, small tasks to do some diagnostics in the countryside, and whenever you get to a place where you can connect, you update and you can do some more work.

So just working at the edge, computing at the edge, trying to do some work over there, and then when you connect, then do that.
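As a rough sketch of the "compute at the edge, sync when you connect" pattern Victor describes, the hypothetical code below runs a simple local screening step on-device and queues the results until connectivity is available. The screening rule, file name and sync endpoint are all assumptions made for illustration, not a description of any real deployment.

```python
# A rough sketch of offline-first edge processing with deferred sync.
# run_local_screening() and SYNC_URL are hypothetical placeholders.
import json
import os
import urllib.request

QUEUE_FILE = "pending_results.jsonl"   # local store used while offline
SYNC_URL = "https://example.org/sync"  # hypothetical clinic endpoint

def run_local_screening(reading: dict) -> dict:
    """Placeholder for an on-device model; here just a simple threshold rule."""
    flagged = reading["systolic_bp"] > 140 or reading["glucose"] > 11.0
    return {"patient_id": reading["patient_id"], "flagged": flagged}

def record(reading: dict) -> None:
    """Screen locally and append the result to the offline queue."""
    with open(QUEUE_FILE, "a") as f:
        f.write(json.dumps(run_local_screening(reading)) + "\n")

def sync_when_connected() -> None:
    """When a connection is available, upload queued results and clear the queue."""
    if not os.path.exists(QUEUE_FILE):
        return
    with open(QUEUE_FILE) as f:
        payload = f.read().encode()
    req = urllib.request.Request(SYNC_URL, data=payload,
                                 headers={"Content-Type": "application/x-ndjson"})
    try:
        urllib.request.urlopen(req, timeout=10)
        os.remove(QUEUE_FILE)   # only clear after a successful upload
    except OSError:
        pass                    # still offline; keep the queue for next time

if __name__ == "__main__":
    record({"patient_id": "p-001", "systolic_bp": 150, "glucose": 6.2})
    sync_when_connected()
```

The design choice is simply that all screening happens locally and the network is only needed opportunistically, which matches the intermittent-connectivity situation described for rural areas.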

You have to work with communities. For example, in Panama, with the Indigenous Peoples, civil society is working on establishing community networks so they can have access to the internet. And then along with that will come telemonitoring, e-learning and the other goodies. But it has to be a collective effort. I've seen work at the community level. I mean, we have to be part of the solution, not part of the problem.

And with A.I. for upskilling, with my own students, who are in first year, I didn't teach them ChatGPT. They learned ChatGPT by themselves. I mean, by the time they came -- I teach freshman students, first time at the university -- they already knew it. But they didn't get that at school. Which means there are fresh minds; you just have to be a mentor. We need fewer professors and more mentors, people who actually share the learning experience and let them grow, of course guiding them. And in terms of upskilling, well, they are learning some -- I put them to work with the elderly, and both were learning ChatGPT. Can you imagine a person 85 years old learning ChatGPT from a 17-year-old? The kind of dynamics you can get out of it. The upskilling and reskilling is not only about technical capabilities. It's actually about learning how to be a better human being, using technology to teach another human being how to use technology. That's my take on that.

>> MODERATOR CHUNG:  Thank you so much, Victor. I would like to see if there are any more questions both from the floor and also online? I will give, oh, Nazar?

>> Sorry, I came in late. I have a question which I think is of interest to the key players in the industry.

What do you think are the considerations for regulation? Not over-regulation, but regulation. With artificial intelligence, ultimately each country will at some point, you know, make regulations. And as professionals in this field, what sort of considerations should policy makers take into account when making the regulations for artificial intelligence? Thank you so much.

>> MODERATOR CHUNG: Thank you, Nazar. I actually see two more questions from the Bangladesh Remote Hub. This is good; this is when you should be asking the questions. I will read them out. Question one is: most of the people in developing countries are far behind in internet connectivity and electronic devices; how can we benefit from or enjoy A.I. services and facilities? I think this question was already answered, and the second has also been answered. So that leaves Nazar's question.

If I could ask Kamesh first, then we could go to the concluding remarks from the speakers. Kamesh?

>> KAMESH: Thank you very much for that question. I will also keep some points on the way forward for my concluding remarks. Directly answering the question: any consideration at the regulatory level has to take into account the innovations and the positive angle the technology is bringing out.

So whatever kind of regulatory lever we are moving towards in terms of interventions has to consider, A, that it is implementable, in terms of understanding the nuances of the emerging technologies, and B, that it should not be destructive, especially in developing countries, where such innovation is now trying to solve a lot of traditional problems in critical sectors. So on one side it is solving problems; but while it solves problems, it should not create problems. How we can make those checks and balances in a balanced way is what has to be worked out.

But I will say more in the concluding remarks.

>> MODERATOR CHUNG: Thank you, Kamesh. And Jorn, your take on the regulatory frameworks?

>> JORN ERBGUTH: The approach the E.U. takes is a risk-based approach, meaning regulate harshly where there's high risk and regulate almost not at all where there's almost no risk. Of course, it is difficult to assess what kind of risk is really involved, in particular when you see systems like LLMs that can be used for fun with no risk, or for serious applications with quite considerable risk. I'm not sure if this approach will be the best approach, but at least I think it's a reasonable approach to start with.

>> MODERATOR CHUNG: All right, Jorn, thank you. It's really important to also note that the E.U. specifically is more advanced in looking at the regulatory framework. So it is good to also see the comparison between the regions as well.

I'm not sure, Victor, Pamela or Umut, if you would also like to intervene on that regulatory framework question. If the answer is no, maybe if we could have a last call for any burning questions from the floor, or online?

I think the answer is no. So maybe if we can ask our speakers to really give us some, you know, what the NRIs can do, the actions we can take forward. Or concluding remarks. What is the main take-away you would like us to remember coming out of this session? If we can start with Victor?

>> Victor Lopez Cabrera: Well, I think NRIs are doing exactly what they should do, with these sessions and with the opportunity to distribute and explain what is happening and see the actors.

One thing is explainability, for example. That was a characteristic of the expert systems of the 70s and 80s: experts at that time knew that if a computer cannot explain its behaviour, people will not trust it. So it's not something new, because human beings do not trust another human being who cannot explain why and how.

But the problem with LLMs is that they are black boxes due to their mathematical, computational nature. It is the responsibility of research and academic institutions to explain and understand what the shortcomings are and what they are good for, and to step away a little from Hollywood, because sometimes people get scared just because they think the world is going to come down to something like The Matrix, and it's not going to happen any time soon.

But at the same time, we have to take some responsibility for what we do with A.I.; it's not only the one who develops it, but the one who uses it, and uses it well. Thank you.

>> MODERATOR CHUNG: Thank you, Victor. Yes, everybody needs to take responsibility for that. Pamela? Your concluding remarks and take-aways?

>> PAMELA CHOGO: Thank you. Just a reminder that A.I. is a global matter, so we should not look at it as a threat but rather work together to ensure we enjoy the benefits. So let's have more discussion forums, let's share our work, and let's build the A.I. we want.

The A.I. code of conduct or A.I. convention is very, very important. Thank you.

>> MODERATOR CHUNG: Thank you, Pamela, yes that's important, building the A.I. we want. Umut? Your take-aways and any commitment to action?

>> Umut Pajaro Velasquez: When we have these discussions, we explain what is going on at the regional and national level.

We should commit ourselves to sharing what we discuss in internet governance. Sometimes we are missing that aspect: the human aspect, not just the use of the technologies. We always talk about needing human-centered A.I.; human-centered A.I. starts by focusing on the users of these technologies.

>> MODERATOR CHUNG: Thank you, Umut. Jorn, if I would turn to you, I know there's a comprehensive call to action. But your take-aways as well?

>> JORN ERBGUTH: I would like to stress that flexibility is key, because we don't know what applications will be there and we don't know what practical risks there will be. So we really need flexibility. And to give you one example, explainability: we said in the discussion that explainability is not really there. So you could either resort to some fake explainability, just some general explanation that does not explain the decision you are presented with.

Or you could look for different mechanisms that solve at least a little bit of the problem. For example, you could give the system to the users and tell them, well, you can play around with it; you can see which parameter you need to change to get to a different decision.

You don't know why the system has reacted the way it did, but you know what would have needed to be different to get a different outcome. And I think this little example shows that we really need to be creative, need to look at flexibility, and need to adjust regulation. The E.U. has been trying to be at the forefront of regulation, trying to regulate technology before it is there.

And this approach has limits, and we need to be flexible. We know we need regulation now, but we also know that we don't really know how the regulation needs to look in ten years.
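The "play around with the parameters" idea Jorn describes is essentially counterfactual probing: without opening the black box, vary one input at a time and report what would have had to change for the outcome to flip. Below is a minimal sketch with an invented scoring rule standing in for the opaque model; it is illustrative only, not any system discussed in the session.

```python
# A minimal sketch of counterfactual probing: the model stays a black box,
# but we search for the smallest change to one input that flips its decision.
# black_box_model() is an invented stand-in, not any real system.

def black_box_model(income: float, debt: float) -> str:
    """Opaque decision rule we are not allowed to inspect."""
    return "approved" if income - 2 * debt > 50 else "rejected"

def counterfactual(inputs: dict, feature: str, step: float, max_steps: int = 100):
    """Increase one feature until the decision flips, and report the needed change."""
    original = black_box_model(**inputs)
    probe = dict(inputs)
    for i in range(1, max_steps + 1):
        probe[feature] = inputs[feature] + i * step
        if black_box_model(**probe) != original:
            return f"Changing {feature} from {inputs[feature]} to {probe[feature]} flips '{original}'."
    return f"No flip found for {feature} within {max_steps} steps."

if __name__ == "__main__":
    applicant = {"income": 60.0, "debt": 10.0}
    print(black_box_model(**applicant))            # rejected (60 - 20 = 40, not above 50)
    print(counterfactual(applicant, "income", 1))  # income of about 71 flips it to approved
```

The user still does not learn why the model decided as it did, only what change would have altered the outcome, which is exactly the partial explainability Jorn is pointing at.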

>> MODERATOR CHUNG: Thank you, Jorn for the reminder to be flexible. That is extremely important. Tanara, your concluding remarks and just main take-away?

>> Tanara: Thank you, Jennifer. I also believe discussions like these are really important for us to move forward to develop a community-driven governance for safe A.I.

In the last edition of the Brazilian IGF we discussed human review of automated decisions, intersections between A.I., privacy and data protection, digital sovereignty, et cetera. Also taking place in Brazil this year was the (?) IGF, where we discussed the implications of the Portuguese language for A.I. models and datasets and vice versa. In that sense, we should commit to fostering more debate and actions, both within the Brazilian IGF community and as a network. I think international cooperation is an essential step to ensure A.I. inclusivity on a global scale. Trust is really important: we need to trust in A.I. systems, but for that it is necessary to know how they work, what they do, what they don't do, and what we don't want them to do. When discussing A.I., we must ensure it is used to enhance our digital landscape responsibly, with guidelines that prioritize human well-being and involve input from all stakeholders. Thank you so much.

>> KAMESH: Thank you so much for that, especially the point on trust. That's the key. What we don't want them to do is also important.

Adding to some of your points, my final remarks would be two important take-aways: we need collaboration and coordination. When I talk about coordination, I think there's a lot of coordination needed at a global level. Various entities who have an interest in taking these conversations forward have to come together. I think the key need at this moment is how they all come together, have a conversation, and where that conversation starts.

I guess that's going to be very revolutionary.

The second thing, when I talk about collaboration: I think, especially given your point and what somebody online also mentioned, collaboration is important in terms of how the public sector, the government and the private sector can come together. Right? We have been talking a lot about regulatory frameworks, et cetera. Legislation, rules and guidelines are one way of looking at regulatory frameworks. But one of the mandates, one of the roles of government, is to ensure the market works, and for that you can use market mechanisms. As we move forward, all of these conversations become fruitful only if the end stakeholders, the developers and deployers, actually use such frameworks. For that, I think there's a need for buy-in from stakeholders. One way of doing it is compliance, but that's a burden, especially for a technology which is still evolving in developing countries. So we need to figure out a way in which such governance frameworks are picked up by the market themselves, right? Where they start seeing a value proposition. As you mentioned, trust could be one of those aspects: they start seeing that as a value proposition, and maybe using a responsible A.I. framework or such principles can bring trust. I think that is the link, how we show the pathways. Sometimes when we talk about regulations and frameworks, it's always negatively connoted that they are going to bring burden and compliance. I think the nuance behind this has to be shifted, such that we see this as something that should be picked up for your own good, in the long (?). I guess these are the other key take-aways. Thank you.

>> MODERATOR CHUNG: Thank you, Kamesh. I think they have concluded better than I can encapsulate. I will conclude: there's a need for trust and flexibility in regulatory frameworks and in developing such things.

The most important part is the multistakeholder participation, both in designing this process and also implementation and actually deploying the use. So thank you all for your time. Thank you to all of the NRI colleagues for giving us all these best practices and good learnings. And thank you.

Do you want a picture for the NRIs or was that not requested? We will end the session and take a quick picture.