IGF 2023 – Day 0 – Event #210 Multistakeholder cooperation to maximize benefits of genAI – RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> MODERATOR: Good evening, everybody. It is my great pleasure to have wonderful panelists here on the stage and also to have all of you participants. My name is Yoichi Iida, Assistant Vice-Minister at the Ministry of Internal Affairs and Communications of the Japanese government. This session is about the opportunities and the challenges, but mainly the opportunities, brought by generative AI and foundation models, which are seen as having great potential for the development of our society and economy.

And we have very prominent speakers and representatives from different communities. And we will have an exchange between these panelists on the potential, the possibilities, and the use of those technologies in different types of societies and economies with different conditions and backgrounds.

So, I would like to start with an introduction by each speaker, from my side to the end, passing the microphone from one to another. Maybe take two or three minutes to introduce yourself.

>> AMRITA: Good evening, everyone. I come from India and represent CCAOI, which is a civil society organization. I am currently a MAG member, and happy to be here. I also chaired the Asia Pacific Regional IGF, among other hats I wear, and I pass it on to the next fellow panelist.

>> MELINDA CLAYBAUGH: Good evening, everyone.  I'm Melinda Claybaugh, a director of privacy policy at Meta, and I look after AI and data regulation globally.

>> HIROSHI MARUYAMA: Hi. I am Hiroshi Maruyama. I work for Preferred Networks, and my background is software. I spent 26 years at IBM Research. Now I work as a director, part time, and I also work for Kao Corporation, a chemical company making daily products like shampoo and soap.

>> NATASHA CRAMPTON: I am Natasha Crampton, Microsoft's Chief Responsible AI Officer. I have two parts to my job. The first part is an internal-facing part, where I help our engineering teams implement our responsible AI principles and commitments by defining the policies and the governance approach that we have across the company. And in my external-facing role, I try to take what we have learned from building AI systems responsibly and move that into the public policy discussion about what the new laws, norms and standards ought to be in this space.

>> BERNARD MUNYAO: Good evening, Ladies and Gentlemen.  My name is Bernard Munyao from the Ministry of Communication and Informatics, Indonesia.

I have two roles in my responsibility related to digital literacy. The first is to encourage people to build much more capability in the IT sector, for all the people. And the second relates to the startup ecosystem. That's my primary responsibility. Thank you.

>> LUCIANO MAZZA: Hi, good evening. I'm Luciano Mazza. I'm with the Brazilian Ministry of Foreign Affairs, in the department for Technology, Innovation and Intellectual Property. And although the title of the job does not say it, that's the department of our ministry responsible for all things digital. So, anything that relates to the digital economy, the information society, internet governance, and also disruptive technologies, that's part of our remit. It's a pleasure to be here, and I'm looking forward to the discussion. Thank you.

>> Hi. My name is (?) from The World Bank. I am a senior digital development specialist, and I am very happy to be here with the excellent panelists, my colleagues here. I work on digital infrastructure, international cooperation, as well as digital skills scaling-up issues. Thank you very much.

>> MODERATOR: Thank you very much, panelists. As you see, we have an excellent set of panelists from different communities and different regions of the world. And before starting the questions and answers between the panelists, let me briefly introduce the efforts our government has been making in promoting AI governance across the world, mainly through the G7 framework.

As many of you are aware, Japan holds the G7 presidency this year. We had the Digital and Tech Ministers' Meeting at the end of April, and through its preparation we had been discussing global AI governance.

In the beginning, the objective of the discussion was to bridge the gaps between the different policy frameworks and regulations across G7 members. Because, as you know, the EU and the European countries are heading for a legally binding framework, while the U.S., Japan and other members were maintaining, at least at the time, a nonbinding, soft-law approach to AI governance.

And my objective was to keep this group sharing the same policy direction. In the beginning, we encouraged our European colleagues to admit the importance of an open, enabling and free environment for innovation through AI technology, based on a soft-law approach.

As you may know, even under the EU AI Act framework, the proportion of regulated AI will be limited, and according to their explanation, most AI technologies and AI systems will be mostly free to provide and free to use. They only regulate the AI systems with high risks, and in some cases they consider the risks unacceptable. But in most cases, AI systems are free in the market.

So, free doesn't mean free of charge, but free from regulation, of course.

So, we wanted to share this direction. But, you know, while they were discussing internally the introduction of a legally binding framework, it was a little bit difficult for us to find a landing point between the different approaches.

So, we changed the direction of the discussion, and the G7 agreed in the end on the importance of interoperability between different policy frameworks. Whether you have a legally binding approach or a soft-law-based approach, we believe interoperability and transparency between different frameworks and different jurisdictions are very important, so that the various players in the AI ecosystem can maintain, or ensure, the predictability and transparency of the different legal and policy frameworks.

So, that was the discussion at the G7 Digital and Tech Ministers' Meeting, and in the middle of the discussion we saw the rapid rise of generative AI in the market and its rapid expansion across society.

So, we decided to discuss how we could improve the governance of this very powerful technology of generative AI, but we didn't have enough time, because it came up all of a sudden, probably in the middle of March or even April, and our ministers met at the end of April. So, our ministers decided to continue the discussion and efforts beyond the ministerial meeting, and also beyond the Leaders' Summit in the middle of May this year.

Leaders agreed to continue the work and directed the relevant ministries to continue it toward the end of the year, and they named this initiative the Hiroshima AI Process. So the Hiroshima AI Process was launched at the end of May, and we have been having dozens of working group meetings online from June up until now.

And we have been discussing what the priority risks and challenges brought by generative AI are, what the opportunities are, how we could address those risks and challenges, and what good approaches would be, in particular where we do not have clear answers for addressing those issues and risks, such as the lack of transparency or the expansion of disinformation and misinformation, which are relatively new to us and brought by generative AI and foundation models.

So, we are continuing our discussion, actually. But in the beginning of September, as some of you may know, our ministers met online to exchange views and confirm the interim outcome of the discussion.

We had the ministers' statement, which included 10 items as priorities. They included countermeasures to the risks and challenges posed by generative AI, which companies and AI actors should consider before they develop and launch their models and systems, and before they put them into the market. And those companies and organizations should also continue their efforts after the launch of AI systems, and so on.

We have 10 key elements, which you can see on the website of our ministry, and these 10 elements are still being discussed at our working group to be elaborated with more concrete content.

And we are now trying to find a set of somewhat high-level Guiding Principles for players such as organizations developing generative AI, foundation models, and even new types of AI systems which may come up in the near future.

And we are also discussing an action-level code of conduct, which will articulate how those AI actors can implement the high-level Guiding Principles.

Our working group is now discussing the high-level principles and the action-level code of conduct with the organizations and players developing AI systems, because the working group believes the development stage of AI systems is the most urgent priority for us.

But at the same time, we believe the different actors in the AI ecosystem, I mean AI service providers, AI deployers, AI users and AI end users, all of those AI actors, should also be responsible in their engagement with generative AI and advanced AI systems.

So, in the second half of our work, we will be working on principles for AI actors other than AI developers. But up until now, we have been more or less focusing on the players developing AI systems, including generative AI and foundation models.

That is what we have been doing since the beginning of the year in the G7 framework. But, at the same time, we recognize the G7 is a small group in the world, and in our discussion everybody recognized the importance of multistakeholder dialogue and of dialogue with players and partners beyond the G7 group.

So, this session is one of the very first steps for us to share our idea and to start our discussion with different players in the ecosystem.

This was just an introduction, and I am sure I am taking a little bit longer than I expected. But having introduced our efforts up until now, in this session we are trying to focus in particular on the positive side of new AI applications and new AI systems, because we often talk about risks and challenges. And when we talk about risks and challenges, the purpose of the discussion is that we want to know how we could make the best use of the benefits of this technology while addressing the potential risks and challenges.

We all know that even if there is enormous benefit and potential, if risks and challenges are waiting for us, people are not comfortable actively using the technology.

So, that is why we discuss risks and challenges. But the ultimate purpose of the argument is how we can bring those new technologies, through innovation, to our society and develop our economy.

And this is true not only for the developed countries but, of course, for all the different communities and societies across the globe.

So, now I would like to invite our excellent panelists to share what kinds of benefits and potentials your companies' services, technologies and systems have brought to society, and also what kinds of benefits or potentials you are thinking of, or planning, to bring to society.

So, first, I would like to invite the three AI companies to share information on the current services or solutions you are providing in the market, and what types of new benefits, developments or advantages you are thinking of bringing through your newly developed services or technologies.

So, from my side to the end, I would first like to invite Melinda, followed by Maruyama-san and then Natasha. So, Melinda, please.

>> MELINDA: Thank you so much. I want to share some of the AI products and developments that Meta has been working on. They fall into a few buckets. The first bucket, probably not surprisingly, is what's core to our business: helping people connect with each other, which is our mission.

So, a couple of weeks ago we released a suite of new generative AI products that you can use in our existing apps and services: WhatsApp, Facebook, Instagram. These are AI agents that you can interact with, have fun with, ask questions and get information from. We also launched generative AI products that allow you to make images you can share with your friends and family in our products. And you can make stickers and fun things that already integrate with our products and allow you to just have fun with your friends and family. That's really core to our business and furthers the experiences that people have in our apps.

But there are also two other types of deep investment that we are making in AI that I want to highlight.

Another area is around investing in open‑source tools and products.  So, this is really about unlocking innovation globally and helping people take advantage of AI tools and democratizing access to AI tools.

I first want to call out something that we released this summer: a large language model called Llama 2, which we made available on an open-source basis. Anyone can download it and use it. You can download it in different sizes, depending on your computing capability, and you can build things on top of it, including generative AI products.

And a really exciting development: a couple of days ago, we launched what's called our Llama Impact Challenge. We are seeking applications from anyone who wants to propose a compelling use of Llama to solve a societal challenge. In particular, we are looking for applications in the areas of education, the environment, and open innovation generally.

So, think about, for example, in the area of education, how you might use our large language model to support teachers or students in a particular learning environment.

In the area of the environment, how might you use our open source model to understand how we can adapt to climate change, to understand how we might prepare ourselves for climate effects, and how we might mitigate or remove greenhouse gases from the environment.  These are all things that can be propelled and powered by large language models.

So, we are very interested to see what people might come up with and the most compelling ideas we will fund and provide grants to.

That's just an example of how we are hoping to open up access to really powerful tools, particularly to solve societal challenges. And this is something that we committed to as part of the White House commitments that we signed in July, along with other companies, including Microsoft. One of the voluntary commitments we agreed to is investing in research to understand and advance solutions to societal challenges, and we think this is a really powerful way to do that.

The other thing, the third bucket I wanted to raise around our approach to AI and investments in AI, is our Data for Good programme. Just a couple of things I want to highlight from there. One is a programme we have called No Language Left Behind. This is a first-of-its-kind project that open-sources models capable of delivering high-quality translations directly between 200 languages, including low-resource languages.

It aims to give people the opportunity to access and share web content in their native language and communicate with anyone anywhere, regardless of their language abilities.

We then use those learnings from that programme and feed that back into our products in order to improve our product experiences for communities around the world.

I also wanted to share one final programme we have, called the Relative Wealth Index. This leverages artificial neural networks to analyze imagery to help identify poverty at a sub-neighborhood level. That information is then used by governments to increase the coverage of social protection programmes and make them available to a wider set of the populations that need the support most.

So, from fun generative AI products on the one hand to really grappling with critical social problems that we face around the world, you can start to see the benefits of generative AI globally.

>> MODERATOR: Thank you very much, Melinda, for the very interesting examples.

Now let me invite Maruyama-san to share your story.

>> HIROSHI MARUYAMA: Thank you. Our company is a little bit late in terms of coming to generative AI. We released our open-source model two weeks ago, and we are going to demonstrate applications of these language models next week at the (?) exhibition. Today I would like to focus on two technological directions that we are investing in. The first one is hardware. ChatGPT was a significant breakthrough, but there are more innovations to come. And one of the new discoveries behind ChatGPT, or large language models, is something called the scaling law, which means that more parameters, more data and more computation power are the key to emergent capabilities, such as command of language in this case.

But this means that if we put in more computation, like 100 times larger computation power, then we may experience the next level of emergent properties, which is (?). That's the reason why we invest heavily in hardware. We started as a software company, but we found that current hardware technology is too expensive and too energy-consuming. So, we developed our own accelerator, which enabled us to build one of the world's most efficient supercomputers in the Green500 supercomputer ranking.

Using our next-generation hardware, we will make the next breakthrough in AI. So, that's the first area of our investment.

The second area is the domains to which we apply generative model thinking. Generative AI today centers around the world of human perception: language, text, image, voice, et cetera.

But there are other domains which are not very familiar to human beings. For example, different scales. Looking at the molecular scale, we have a software-as-a-service for materials informatics, and we use deep learning technologies to accelerate the search for new materials by a thousand or ten thousand times, compared to traditional first-principles-based calculations, simulations I mean.

Another example of a different domain is highly complex systems, like the human body, biological systems. As I said, I also work for the Kao Corporation, and in collaboration between Kao and Preferred Networks we developed a so-called Virtual Human Generative Model. I think you are familiar with image generative models, such as Midjourney. Such a model generates an image of, say, 100 pixels by 100 pixels, where each pixel represents the brightness of that dot. But suppose you replace this image brightness with human body measurements, like age, sex, blood pressure, glucose level and so on.

So, we defined about 2,000 different attributes that are observable from the human body and created a generative model out of this data. And this is a very interesting, general-purpose model which can have many different applications. For example: I am a 65-year-old male; what is the average blood pressure at my age? That kind of question can easily be answered by this generative model.

So, we apply the technology to other domains beyond human perception. Of course, it's fun to watch machines doing what humans can do. But letting machines do what humans cannot do is, I think, another way forward. Thank you.

>> MODERATOR: Okay. Thank you very much for those various types of applications and solutions.

And now I would like to invite Natasha to share your knowledge.

Do you mean the previous speaker?  Okay, okay.

>> This is a collaboration between the Kao Corporation and Preferred Networks.

>> MODERATOR: Thank you.

>> NATASHA: Natasha Crampton from Microsoft. I'm incredibly optimistic about AI's potential to help us have a healthier, more sustainable and more inclusive future, and, in fact, that's what motivates me to do the work that I do within the company, to ensure that the technology is safe, secure and trustworthy.

And I think what's exciting about the current moment is that you don't just have to imagine potential use cases for AI; there are real use cases today that are making a difference. At Microsoft we have been building a suite of co-pilots. They are very intentionally called co-pilots; they are products that incorporate the latest generation of AI, because they are all about combining the best of humans and machines.

So, if you take the Microsoft products that many of us know and use every day, things like Outlook or Teams or Word, we are adding AI-powered assistants to those programmes which allow you to do things like, instead of writing a long, lengthy email, just add three bullet points, and then the co-pilot will help you expand those bullet points into a first draft, which you can look at and decide what to do with.

You can take a Word document and put it into the PowerPoint co-pilot, and it will generate a first draft of a slide deck based on that Word document. Or if you are like me and sometimes run a little bit late to some meetings, and you join a Teams meeting five minutes in, you can get a summary of what has already happened in that meeting using the co-pilot in Teams.

In addition to adding co-pilots to the Microsoft Office products that we all know well, we have also created whole new products, which our customers are very much enjoying right now. An example is a product called GitHub Copilot. This is a product that allows you to type in plain language and generate code. And it's an incredibly democratizing product, in the sense that you no longer need to be a coder in order to code. You simply need to be able to issue instructions describing the outcome you want to achieve, and the code will be generated.

And we are finding that this type of product is welcomed both by people who do not have expertise in coding, and by very experienced coders, at the level of, say, coders who work on Tesla's Autopilot system, so very sophisticated AI operators, who tell us that they, too, find it very, very useful in their work.

So, we have that suite of products, the co-pilot suite. And we think together these products help users be more creative, help them do things that they might not have been able to do before, and help them be more productive. And especially at a time when many countries are grappling with major population shifts, with a shrinking working-age population in many developed countries, these types of productivity-enhancing applications of AI are really meaningful.

In addition to those co‑pilots, we also make available the basic building blocks of this technology.  So, we are working very closely with our partner OpenAI who you may be familiar with as the developers of ChatGPT.

OpenAI has made available a number of different models, which we make available as building blocks, and then our customers and our partners come up with all sorts of exciting applications on top of those.

I want to mention two examples now, which I think give you a flavor of some of the potential that lies ahead with these models. There's a Danish startup called Be My Eyes, established in 2012. They have been providing services to people who are blind or have low vision, and they set up a programme whereby people who are blind or have low vision are partnered with sighted volunteers, so the volunteers can help navigate an airport or help identify a product.

Microsoft was involved early in this programme by making sure that our experts on Microsoft technology products were able to explain how to use the technology to people who are blind or have low vision.

So, this was a very successful programme, but it really got a step change, and was able to be made available much, much more broadly, earlier this year when OpenAI made available a vision model, GPT-4V, which allows an image to be ingested and then described in text. In practice, what you can do with this technology is something like open your refrigerator door and take a photo of what's inside. The model will analyze the image, recognize the items in your fridge, and then suggest recipes for what you might be able to cook that evening for your meal. Of course, this is not just helpful to people who are blind or have low vision; it has everyday applications for many of us.

So, I think that's one example of an exciting application where it's meeting a real community need, serving 250 million people who are blind or have low vision, but it also has broad applications that we all benefit from.

So, if we move from Denmark, where that startup is based, to India, and a town called Biwan about two hours outside of New Delhi: this is an average farming village, and the farmers there face a number of challenges. They face challenges like applying for pensions on behalf of their aging parents, government assistance payments that have stopped, and applying, in some cases, for their children to get scholarships to go to university.

But in reality, in this particular village, there is both a linguistic and a technology divide. English is often the language of public life, of government life, in India, and yet only 11% of the population speaks English.

So, into this situation comes a new offering based on OpenAI's GPT technology and built on Microsoft's cloud, called Jugalbandi. It's a chatbot that is allowing much, much greater access to government services than was previously available.

So, users of this chatbot can ask questions in multiple languages. It turns out that India has 22 constitutionally recognized languages but, in practice, somewhere between 100 and 120 spoken languages. So, this bot is able to operate in a language of the user's choosing.

You can speak into the interface and it will convert your speech into text, or you can type, which, again, overcomes a literacy hurdle.

The bot then retrieves the relevant information, which is usually made available in English, and translates it back into the local language. So, there's one implementation of a bot in India that's helping those farmers meet their needs: to get pension payments, to get their government assistance stipends, and to make sure that university students are able to access that funding.

But you can really imagine how that framework could be used in many other parts of the world as well. And it's those sorts of democratizing applications of AI that I am really excited about.

>> MODERATOR: Okay. Thank you very much for the very interesting examples in various fields and various regions.

So, having listened to the three speakers from the AI industry, we have learned a lot about the current situation of AI-based services and solutions, and the possibilities in the near future.

So, now I would like to invite the speakers from emerging economies and developing countries, who may be expecting potential solutions or future services to address the challenges and problems in their societies or economies. And maybe that will give us some hints for thinking about future collaboration.

So, in the beginning, from my side once again, let me invite Amrita to share your idea.

>> AMRITA: Thank you. So, if I look at the developing country perspective, and I think it's a global phenomenon, correct me if I am wrong: most countries understand the power of technology, and they want to leapfrog their development. They understand that technology, including AI, which is the flavor of the season, I would say, can help them leapfrog, and they want to use it in a better way.

Because we do see a trend: the divide between the countries who are using technology, or even AI, and the countries who are not is increasing. And we don't want that to happen.

So, if I look at countries such as India, as was just mentioned, AI is also being used for good, for example in agriculture. And just a correction: Indian government websites are bilingual or trilingual, so they do have the local, official languages, but the end-to-end process may not be complete in those languages.

If you look at countries such as India, where the population is exploding and land is, you know, decreasing because of urbanization and everything, agriculture is using AI. They are using it for smart farming, for how to use terrain better, for what kinds of crops to plant. It is used even for the climate: we are seeing global warming, with unprecedented weather coming, and it helps even fishermen. So, these are places where it can be used and can maximize benefits.

You can use it in public distribution systems, if you use the datasets correctly. I will add the caveat that it can be used for good, provided the right datasets are being used.

Governments understand this and want to use it, but, obviously, the technology may not be with everyone; that needs more information sharing.

But I think the questions are, you know: is the process transparent? Are the systems, and the way the data is used, accountable? What are the algorithms being used? Because there are concerns which governments are raising about biases in the system: it could be racial biases, it could be systemic biases, it could be any kind of biases coming up.

And just as an example was given that medicine can use these datasets for healthcare, especially when you have limited doctors or physicians. But we need to realize that the constitution or genetic makeup of a person in a particular region, let's take Japan, may be quite different from that of a European or even an Indian. So, the same set of patterns may not work for everyone; it would have to be customized locally for that kind of population. And that happens for everything else.

So, the datasets need to be of that place. For example, many times we have seen that algorithms developed in the Global North don't work in the Global South, or I would say the majority world.

If I take Asia Pacific, for example, we are very diverse. We have countries such as Japan, and we have the Pacific Islands, which are still in the process of development. We have different cultures and races.

So, when you have systems working, they need to respect the culture of the place. That's very important. What works in one place may not work in another. There is no right or wrong in this; this is how the places are. So, we need to respect those differences.

So, I think those are the concerns which come in. But it can be used for good. And if you speak to youngsters, they are using ChatGPT for their answers, using it for many things. But it can be used for much more, and I think those things need to be spoken about: how it can be used, perhaps with companies speaking to regulators, policymakers or even civil society, et cetera, and understanding what the needs are. Everyone comes up with good intentions, but when it comes into reality, it may be used in different ways.

For example, many countries are coming up for elections.  I hope it is not being used, as you were saying, for spreading misinformation or disinformation.  So, how can those harms be avoided so that it can be used in a proper way?  There needs to be more collaboration and, I think, capacity building, especially for decisionmakers, on how these technologies work, what the pluses are, and what the concerns are.

And as you mentioned, it's important to recognize that AI, or generative AI, is growing.  We don't know how it will shape up.  So, I think having guidelines so that it is used properly makes more sense than trying to restrict it outright.  Because if we look at emerging countries, they want small and medium enterprises growing.  They want innovation to happen in those places.  So, sometimes if you try to restrict things, it may work against the aspirations of that country.

So, I think having more frameworks, more dialogues, and sharing best practices is a good way.  And, perhaps, if we have some time, I would share that at the IGF we have the policy network on AI, which is working on three main parameters, and we have a discussion on the 11th.  The first is interoperability, because you have different governance structures coming up ‑‑ the OECD and others coming up with frameworks ‑‑ but each of them needs to have some converging point, so that's what's being looked at.  The second is gender and race biases and how they can be mitigated at least a bit.  And the third is how AI can be used for the environment.

And this has a Global South lens, because it has been argued many times that much of the research comes from the Global North, and the majority countries are not taken into consideration.  That's where it comes in.

I think that if the Hiroshima dialogue is expanding and trying to bring developing nations into the discussion, that's good, because otherwise it remains an exclusive club of seven countries.  Power shifts are happening and other countries are coming up.  So, it would be good to have not only the countries, but also different stakeholders in the same room ‑‑ for example, the private industry that is innovating, the government that regulates, and civil society and academia who work with the data.  That helps.  And I think those dialogues and the capacity building are important.  Because the train has left the station.  It will go further.  You cannot stop it.  But how you regulate the movement of the train in a positive way is something which needs to be looked at, and I think I will end it at that.  Thank you.

>> MODERATOR: Okay.  Thank you very much.

So, we learned that AI is not almighty, but when it is tailored or localized according to the conditions of communities and societies, it can be a powerful instrument to bring innovation or improvement to the community.

And as pointed out, the G7 never tried to be an exclusive club of a small number of countries; we are always looking outward and always looking toward collaboration with various partners.  So, thank you very much for the comment.

And I would like to invite Mr. Mata Luciano, director from the foreign Ministry of Brazil ‑‑ okay, I am sorry.  I skipped ahead in the order.  May I invite first Bonisan from Indonesia, and then Luciano.  Bonisan is from the Ministry of Communication and IT of the Indonesian government.  So, Bonisan, the floor is yours.

>> Thank you.  Yes.  Everybody knows that AI has now become very well known and very useful for our society.  And it is moving very fast, because the technology itself is evolving and leaving a huge impact on all of society.

AI technology has come to be applied in various sectors in Indonesia, starting with improving access to healthcare, because Indonesia has thousands of islands, so we have to have a solution for each individual citizen, even those in remote areas.

Infrastructure, yes, was the beginning of (?), but now, in the end, we have to provide a solution ‑‑ doctors, medical healthcare ‑‑ for patients in remote areas.

Secondly, in education and skills development.  Because the young generation is also scattered across many areas.  So, recently there have been solutions provided by startup companies offering online courses suitable for those who are not living in the major cities.  This is dedicated to those in rural areas, with suitable content.

And we also have AI solutions to alleviate poverty.  And then an interesting area is environmental issues, humanitarian aid and disaster response, including early warning systems.  Because nowadays, due to the heatwave in Indonesia and surrounding Southeast Asia, there are quite a lot of fires in dry areas, and also other environmental problems.

So, the solution itself should be developed not only by the government, but also by the private sector.

In the meantime, startups have delivered good and significant innovations and inventions by utilizing AI.  And they have also shown a significant contribution in solving these problems and increasing the quality of service as well as productivity.

These are some examples of the implementation of AI which is being used widely.

However, some stakeholders also have concerns about utilizing AI.  So, from the regulatory perspective, academics, practitioners, as well as civil society should ensure that the utilization of AI pays attention to and considers individual rights and ethics.

So, fortunately, we established a national artificial intelligence strategy in 2020.  It is now being prepared for formulation into a presidential regulation.

And on the business side, from 2020 up to 2022, we also prepared derivative regulations related to norms, standard procedures, and criteria ‑‑ a kind of code of conduct.

So, from the regulatory point of view, we are formulating a guide of ethical values that can serve as a reference for business actors.  This is very essential, because companies and other institutions should comply with regard to data and internal ethics in the field of artificial intelligence.

So, quite a lot of innovation has been made, but we have to pay attention to the ethical guidelines, including inclusivity, humanity, security, democracy, openness, as well as credibility and accountability.  These are the basic values of the norm in Indonesia.  Thank you.

>> MODERATOR: Thank you very much.  Sorry about my mistake.  But we heard a very interesting report from your country.

And now I invite Luciano to speak about your expectations and efforts.

>> LUCIANO: Sorry.  It was off.  Thank you very much, Yoichi.  I think our colleagues and previous speakers covered some interesting issues that I wanted to touch upon a little bit as well.  Of course, when we think about areas where AI can be most effective in addressing challenges and problems in our country and in different countries, I think it's important to bear in mind that what is a priority in one country is completely different from others.  And in this case, the priorities for developing countries are probably a lot different than the priorities for most developed countries.

So, considering specific areas where we see a lot of potential for deployment in Brazil, I think there are obvious topics, and one that's not normally mentioned much when we think about the concerns that are clearer to developed economies: I think food security is certainly one of them.  And I think this was mentioned, and a lot was brought up on this topic.

And probably one area where we see a lot of potential is leveraging AI solutions to improve the provision of public services.  So, the government in general, and also the very use of AI in the workings of the public service, to increase productivity and efficiency and so on and so forth.

And again, I think something that was referred to before is important to mention: for developing countries, having data capabilities and governance frameworks, both in government and outside government, is crucial.  And I think that comes first.  Because without this, it would be very hard to make sure we can benefit from all the positive prospects that we see.

So, without the proper infrastructure, it will be very hard to take advantage of the benefits that AI can bring.

One aspect that was mentioned: in Brazil, we have an AI strategy that is very much focused on innovation, and we also have a legal framework that is there to boost startup innovation.  And Brazil has a dynamic innovation ecosystem.  And I think that fits in well with some comments that I made before.  Because I think one big challenge is how you can make AI more local, and we need to bring a sense of ownership of those models to countries that are probably not going to develop their own big large language models.  It's very unlikely that every developing country will have its own, or have big firms that will develop their own systems.

So, I think a crucial thing would be how those models are adapted for local needs and for local communities.  And I think it was mentioned that models based on open‑source systems are important, because I think that's the entry point for local innovation.  And that's something that makes sense, and where we see potential to connect with local innovation ecosystems.  I think it is something that is important.

Something to take into account: these models are trained on data that normally does not come from developing countries.  And I think that's a challenge we will have to face ‑‑ how to develop local solutions and local applications.  It's important to find ways to make sure that we can also bring this perspective into the data on which those models are trained.  Because, of course, we are talking about troves of data that do not necessarily reflect the realities of developing countries, so they may contain a lot of biases.  They are based mainly on the English language; normally they are not trained on local languages or different languages.  They may contain biases, like I said, that don't reflect the realities of developing countries.

So, this process of adapting and adjusting those models when applying them to developing countries is something that's very important, and it's crucial to bring a sense of ownership to developing countries when these solutions are presented.

I think that's what I would say at this point.  And we look forward to the discussion.

>> MODERATOR: Thank you.  Yeah.  Thank you very much for the thoughtful comment.  And, yeah, it seems that adaptation to local conditions will be one of the key elements of success in providing good solutions to the community, and in order to widen the possibilities for adaptation, interoperability between different frameworks should be very important.

And at the same time, whether solutions should be local or universal may be a different, complicated question.  But we won't go too much into this element because of the limited time.

But maybe we need to discuss this point later on different occasions.

Having listened to the excellent speakers from the supply side of the AI economy and also the demand side of the AI ecosystem, we learned there is a lot of potential for AI technology to provide benefits to different types of communities and societies.

And now we have one speaker from the World Bank, which has been playing a very important role in international cooperation.  To my knowledge, the World Bank has been very active in development support activities in the digital field, especially through the Digital Development Partnership.

And we have been talking a lot about leapfrogging ‑‑ the potential for leapfrogging provided by digital technology.  I personally believe AI brings the biggest leapfrog potential among the different types of digital technologies.

I would like to invite Dice Kay to share your thoughts and AI experience, and probably your ideas on how to create chances for collaboration among companies, governments and international organizations to facilitate the benefits brought by AI technology in the global economy.  So, Dice Kay, please.

>> Okay.  Thank you, thank you very much.  Of course, in this year's G7 process, we as the World Bank participated for the first time in the framework of the G7, discussing further collaboration with the G7 countries to expand digitalization in developing countries and emerging economies.  And, of course, I recognize the difficulties of getting consensus within the G7 countries.

So, of course, involving countries beyond the seven on AI is even more difficult.  But at the same time, we all recognize the potential of AI.  That's why we are now discussing the potentials and the risks of AI.

And from our perspective at the World Bank, we have been supporting developing economies for a long time to reduce poverty and enhance prosperity globally.  And of course the digital agenda is very new compared to traditional infrastructure like roads, energy, and other areas.  But more and more people are focusing on our activities in digital development.

And we have been working in these areas, firstly, of course, through infrastructure construction support, which is very important to fill the gap between the connected and the unconnected and is the foundation for developing countries through digitalization.

But also ‑‑ and this is most important, as many people have indicated ‑‑ filling the gaps in skills.  And this has been done within our framework of the Digital Development Partnership, and we have run a lot of capacity building projects, involving many developed countries and also the participation of private companies.

Of course, Microsoft and Meta and other companies are actively working with us to expand this kind of skills development in developing countries.  And we believe that the expansion of these digital skills will promote development in developing countries and, of course, be innovative and human centric ‑‑ the things that you are discussing right now ‑‑ for a more livable planet for all.

And finally, I would like to mention one thing ‑‑ just wait a minute.  Sorry.  I don't know if that works.

So, I think, more importantly, the regulatory framework has become more and more important in terms of creating the environment.  And, of course, private companies are trying to promote AI within their own operations.  But at the same time, public sectors are trying to preserve the rights of the nation and its nationals and, of course, human rights, et cetera.

So, we as the World Bank are now coordinating with private companies and the public sector to find the best solutions within the regulatory framework.

These are some examples, but, of course, AI is a very new agenda, and we are trying to find the best solutions for enhancing these AI projects.  So, we are very happy to discuss further with private companies as well as the public sector and, as a whole, the multistakeholder community to improve this AI environment.  This is our approach.  Thank you.

>> MODERATOR: Okay.  Thank you very much for your very proactive comment and some lessons from previous experience.

It is good to know there is a lot of potential for collaboration among the people and stakeholders.

So, having discussed among the different types of AI players in the ecosystem, I hope we see a lot of potential to promote collaboration.  And having listened to the others' presentations, I would like to ask any of you for your thoughts on what would be a good way for us to proceed in promoting collaboration among the different types of players.  Our government will stand close to the World Bank and other international organizations to promote collaboration, making use of our knowledge and experience to go ahead together, and the Hiroshima process will be one of those instruments.

So, I would like to invite any speaker to volunteer a comment and share your thoughts.  So, who can volunteer?  Amrita first.

>> Thank you.  I think collaboration is a must, because if you look at technologies, they are cross‑border, and there is a lot of collaboration required.  And I think what can be done, through people who are experienced in it, with the World Bank, et cetera, is to provide the necessary trainings in developing countries as to what's happening, what needs to be secured, and what rights‑based approaches look like.  Many countries are still arriving at that consensus.  It has to be rights respecting, as was mentioned.  It has to be gender respecting.  Many times we see gender bias in the systems.

I think training, capacity building, and passing on best practices are important.  And you all have been doing this through GPAI and the other initiatives too, because they all overlap with each other.

So, I think more dialogue, more capacity building, and sharing best practices are important ‑‑ and not only about algorithmic biases and transparency, but also security.  Because these systems need to be secured.  We see state actors attacking different countries.  We see different bad actors hacking into systems, and if a system is hacked, it can be misused.  A public good can become a public bad.

So, the security aspect, et cetera, is also important.  And I think if those trainings are given ‑‑ you know, how entrepreneurs can use those best practices, et cetera ‑‑ I am sure most governments would be willing to get into those dialogues and benefit from them.

>> MODERATOR: Okay.  Thank you very much for your comment and proposal.

Actually, we have been providing a capacity building programme in collaboration with the World Bank, which provides study tours to Japan, inviting government officials and other relevant people from Asian and African developing countries to share our knowledge and expertise and also some of the practices at private companies in Japan.

And we have been doing that mostly among relevant Japanese companies and people.  But maybe we can do it multinationally, together with players from different countries, such as Meta or Microsoft, in locations not limited to Tokyo but anywhere else.  We can think about that kind of capacity building programme provided by the World Bank, if possible.

But anyway, that can be one of the ideas.  And I thank you very much for your proposal.  Is there any other?  Luciano, please.

>> Thank you.  Yes, I would go along the same lines.  We commend the leadership that Japan is playing in this field.  But we understand that dialogue with other initiatives and cooperation with different organizations and countries is crucial.

And, again, cooperation is important not only for sharing experiences and best practices, but, crucially, to help build the necessary national capabilities that will be required to make sure countries around the world can benefit from the potential.

And again, engaging with different development banks is an interesting perspective, in the sense that it will be necessary to leverage investments into those countries that need to acquire those capabilities.

Something I would mention from our international institutional perspective, considering this leadership position that Japan is playing right now: I think it is important to strengthen the dialogue with other organizations, also in the sense that it is important to ensure coherence in terms of narratives and policies, to make sure you don't have a fragmentation of the spaces where all these initiatives are being developed.

So, in that sense, I think it would be useful to build some momentum toward achieving an overarching narrative in this field, where all countries are represented and we can make sure we have a basis that is as inclusive as possible in this area.  So, thank you.

>> MODERATOR: Thank you very much.  I believe avoiding fragmentation and promoting interoperability will be a very important agenda for us.  And we have very high expectations for your presidency of the G20 next year.

Probably the last volunteer.  Who wants to be the last one here?

>> Maybe I will repeat a little of the previous suggestions.  The first and foremost is digital literacy ‑‑ building the capacity of society to become more knowledgeable about how to utilize AI, among other aspects.

Secondly, I think we have to boost collaboration between industry ‑‑ Meta, Microsoft and others ‑‑ and the startups in emerging countries.  This is important to be able to leverage the solutions that will be provided by the technology from the startups.

Third and last, we need venture capital to engage with startups and other industry, because without venture capital or the World Bank, I think it is quite difficult during the winter right now.  We have suffered during this tech winter, because some of our startups have disappeared from the ecosystem.  Thank you.

>> MODERATOR: Thank you very much for such a comprehensive and concluding remark.  You took on the role of the Moderator with your comment.

But before ending, let me add one volunteer from industry.  Who wants to?  Yeah.

>> I think my fellow panelists have shared many good ideas here.  I think one thing that works well in the multistakeholder context is when a specific challenge is identified, and that allows you to direct resources into it and to make more than incremental progress.

So, I can point to some other multistakeholder initiatives, not specifically in the AI context, where we have seen really significant progress in a short period of time.  And one that I would call out is the Christchurch Call.  In my home country of New Zealand, there was a terrorist attack that was streamed online, the first of its kind of attack involving terrorist and violent extremist material.  And what was so effective in the response to that tragic incident was that governments and civil society and industry came together to work on a very specific problem, and that problem was: how do we avoid the proliferation of this terrorist and violent extremist content?  It was a specific problem, but the solution was multifaceted.  Industry came up with a protocol to respond quickly and avoid the proliferation of that type of content.

But it wasn't just a point solution like that.  It also involved literacy campaigns and further study by academia as to what the problem space really involved.  So, as we think about what's next for multistakeholder collaboration on AI, I think there are some lessons that we can glean from other past successful multistakeholder initiatives.

Often they work best when there's a specific, targeted problem that everyone is coming together to try to address.  They work best when multistakeholder initiatives build on what exists already, as opposed to reinventing the wheel.

My hope is that we take a holistic approach here, which involves capacity building on the technology front.  We mustn't forget there's a huge digital divide that we still need to close in order to even make access to AI possible in large parts of the world.  But we need to remember that fundamentally this is about people, and there's a lot of skilling work that is needed for us to be able to truly take advantage of this AI moment.

So, I hope we can take those sorts of lessons forward in our multistakeholder collaboration on AI.

>> MODERATOR: Thank you very much.  In the end, we reconfirmed that human centric AI and an AI society are very important, and that is what we should pursue all together, even though we may take different approaches or different frameworks.

So, thank you very much for the very active discussion.  And sorry about the poor management by the Moderator.  We wanted to have a little more time, but I still believe we had a very good discussion.  Thank you very much for your attention, audience.  Unfortunately, our time is up, but we will stay in touch and continue our efforts together.  So, thank you.  The session is closed.  Thank you very much.