IGF 2023 – Day 1 – Networking Session #109 International Cooperation for AI & Digital Governance – RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> Moderator:  Okay.  Thank you very much for coming today.  Welcome to this session, called International Cooperation for AI and Digital Governance.  My name is Kyung Park, a faculty member of the Korea Advanced Institute of Science and Technology (KAIST).  I'm really honored to moderate this session.  We have a wonderful group of distinguished speakers today, with seven very interesting talks.  This is basically a networking session, but at the same time we will try to share our knowledge and information about the current landscape of research and policy in the field of AI and digital governance.  So I'm very excited to introduce my speakers, who actually need very little introduction.

We're going to have the seven talks, and after that we'll have a Q&A session to share all together.

So first of all, I'd like to introduce Professor Matthew Liao from NYU, who is currently in the States.  So Matthew, are you here?

>> Liao:  I'm here.

>> Moderator:  I'd like to hear your thoughts.  We'll hear from Matthew on some very introductory, very fundamental questions for digital governance from the perspective of human rights.

Matthew, the floor is yours.

>> Matthew Liao:  Thank you, Kyung.  Hi, everybody.  Sorry I couldn't be there in person, but I'm very honored and delighted to join you.

So we all know that AI has incredible capabilities.  It's going to be able to help us develop medicine faster.  In public health, it's going to be able to identify those at risk of being unsheltered, and it's going to be able to help us with the environment.  At the same time, these powerful AIs also come with dangers.  Many people are aware that the data on which AI is trained can be biased and discriminatory.  At NYU and other educational institutions, we're grappling with ChatGPT and what that means for writing essays and plagiarism.  Elections are coming up, and people are worried that AI could be used to sow disinformation and distrust in elections.  AI is also already being used in Ukraine and other wars, so there's a question of whether AI is leading us towards a sort of mutually assured destruction.  And so to make sure that AI produces the right kind of benefits for everybody and doesn't just cause harm, governments around the world are working really hard to come up with the right regulatory framework.  Two weeks ago, President Yoon of the Republic of Korea was at NYU, and he talked about a digital bill of rights.  In July, President Biden secured the voluntary commitment of a number of tech companies to three principles when using AI: safety, security, and trust.  The European Union is getting ready to adopt the EU AI Act, which would be one of the world's first comprehensive laws on AI.  This brings me to my lightning remarks today.  Assuming we should try to regulate AI in some way, how should we go about regulating it?  My students in my lab and I have been studying this issue, and we've structured this topic into the 5W1H framework.  So the first question is what should be regulated?  That is the object of regulation.  Many people talk about regulating data, because how we collect it can raise issues such as bias and privacy.  Other people talk about regulating the algorithms, because as impressive as they are, algorithms can also produce bad results.  Take generative AI like ChatGPT: it's known to hallucinate and make stuff up.  There are also people who think we should regulate by sector.  So, for example, we should have one set of regulations for self‑driving cars, another set of regulations for medical devices, and so on and so forth.

And then finally, the EU thinks we should regulate based on risk: whether the risk is going to be acceptable, or too high, or low, and so on and so forth.  And the general issue here is that overregulation could end up stifling innovation, but underregulation could lead to harms and violations of human rights.  So some of the questions that we can talk about: if someone wants to regulate large language models such as ChatGPT, where would they even start?  Would it be the training data, the models themselves?  Would it be the applications?  Another question we can ask is whether the EU's risk‑based approach is the correct way to go.  And we can talk more about that in the Q&A.

So let's turn to the question of why we should regulate.  Well, there are many reasons.  We could regulate to promote national interests, for example, in order to establish a country as a leader in AI.  We could also regulate for legal reasons, to make sure that new AI technologies comport with existing laws.  Or we could regulate for ethical reasons, for instance to make sure that we protect human rights, and, some say, to make sure that AIs don't cause human extinction.  Of course, as an ethicist, I would hope that all regulations would conform to the highest ethical standards, but is this realistic?  For instance, a country that's trying to win the AI race may feel it has no choice but to cut ethical corners.  So how optimistic or pessimistic should we be that governments will pursue AI in an ethical way?  We can make this discussion more concrete: a lot of people already signed a letter in 2015 arguing that we should ban lethal autonomous weapons, but these are already being used.  Is AI use there a good thing, and what do we need to ‑‑

Now let's talk about who should be doing the regulating.  Well, there are a number of parties and stakeholders here.  You've got the companies, the AI researchers themselves, the governments, universities, members of the public.  Now, some people, especially those in the tech industry, are concerned that nonspecialists would not know AI well enough to regulate it.  Is this true?  Should we leave the regulation to people in the know, to the experts?  Other people think that we shouldn't just rely on industries to regulate themselves.  Why is that?  And what's the role of the public in regulating AI?  And what's the best way to engage the public?

We can also talk about when we should begin the regulation process.  That is, when in the life cycle of a technology should we begin to regulate?  We can regulate at the beginning, which would be upstream, right?  Or we can regulate once a product has been produced, which would be more downstream.  We can also regulate the entire life cycle, from start to finish, at every stage of development.  Now, companies will say that they already have a regulatory process in place for their products, so what I have in mind is independent, external regulation.  And in the U.S., at least, the regulations tend to be more downstream, external regulations.  Take, e.g., ChatGPT: it's already out there being used, and now we're just grappling with how we should regulate it, externally speaking.  Downstream regulation is usually seen as being more pro‑innovation and pro‑companies.  How feasible would it be for an external regulatory body to regulate fast‑paced AI research and development?  Is downstream regulation enough?  Or should we be taking a more proactive approach and regulate earlier in the process to ensure more protection for humans?

We can also ask where the regulation should take place.  Here we can regulate at the local level, at the national level, at the international level, or all of the above.  So how important is it for us to be able to coordinate at the international level?  Are we going to be able to do it effectively?  We don't have a very good record with respect to climate change, so can we count on doing it with respect to AI?  What would it mean to regulate at a local level?  And how can universities, for example, contribute to AI governance?

And finally, we can talk about how we should regulate.  By this I mean: what kinds of policies should we try to enact when regulating AI?  Ideally, we're looking for policies that can keep pace with innovation and won't stifle it.  At the same time, hopefully these policies will be enforceable, for example through our legal system.  Many people talk about transparency, accountability, and explainability as important tools in AI regulation.  Are those enough?  If not, what other policies do we need?

So I have been doing a lot of work on something called the Human Rights Framework, where I think we should think about regulating from a human rights perspective.  We should make sure that people's human rights are protected and promoted through AI; that's the purpose of the regulation.

So let's just go back and apply it.  The Human Rights Framework is a kind of ethical framework, right?  It says that ethics should be prior to a lot of these discussions.  I already mentioned there are questions about whether that's realistic or not, but ideally we should make sure ethics is at the forefront.  What should we regulate?  Well, on a human rights framework, you might think we should look into everything, or at least consider everything: the data, the algorithms, the sectors, and the risks.  For anything that could impact human rights, there should be some sort of human rights impact assessment for these technologies.  Who should do the regulating?  Well, the Human Rights Framework says that everybody has a responsibility.  Human rights belong to everybody; everybody has an obligation.  Companies, researchers, governments, universities, the public: we all have to be proactive in engaging in this regulation process.

When should we regulate?  Well, the Human Rights Framework seems to point towards a life cycle approach.  So at every stage we should do some sort of human rights impact assessment, making sure the technology doesn't undermine human rights.  I can say more in the Q&A about how that could be feasible.

And where should we regulate?  Well, the Human Rights Framework is global.  It's all of the above.  We need to do it internationally.  We need to do it nationally.  And we need to do it locally.

And finally, how should we regulate?  Is it going to be enforceable?  I think that's going to be the biggest challenge for a Human Rights Framework, or really any framework.  I don't think this is a problem exclusive to the human rights approach, but it's certainly a big problem: enforceability.  I don't think we have a very good track record, and so one of the challenges for all of us is how we can get something together that is actually binding and that people will actually be willing to comply with.  So thank you very much.

>> Moderator Kyung:  Thank you very much.

[APPLAUSE]

>> Moderator Kyung:  Thank you for also mentioning NYU and the research collaborations.  KAIST and NYU, together with Matthew and Daniel and Claudia and also Professor Kim here, have been leading this collaboration on digital governance and AI policy research.

So let's move on to professor Dasom Lee from KAIST.

>> Dasom Lee:  Thank you so much for the introduction.  Could I have my slides, please?  Oh, clicker.  Oh, thank you.  (Chuckles).

So I know we don't have a lot of time, so I figured I would spend it introducing the kind of work that I'm doing and the lab that I have at KAIST in Korea.  I have a lab called the AI and Cyber-Physical Systems Policy Lab, the AICPS Lab.  We basically study how AI-based infrastructures, or infrastructures that will try to incorporate AI in the future, address and promote environmental sustainability.  More specifically, we look at the energy transition, and the technologies involved with that would be smart meters, smart grids, and renewable energy technologies.  We also look at transportation, such as automated vehicles and automated aerial vehicles, which are drones, and at data centers.  Obviously data centers are not specifically AI-focused, but they store the data that AI collects and ensure the reliability and the validity of AI technologies.

And I actually have been criticized for being way too broad, not having a focus, and studying everything.  Which is fine; I can take constructive criticism.  But I also think that it's really important to look at everything in a very harmonized and holistic way, especially when we are trying to address sustainability.  When we look at infrastructures, and at energy and transportation in particular, they are really interconnected.  For example, right now we're trying to use EVs as batteries, so that each household can have its own battery; by reusing EVs as storage, we can use renewable energy more sustainably.  And so on.  So I'm basically trying to build a harmonious infrastructural system in my mind somehow, and I'm getting there hopefully; I'll hopefully get there in about ten to 20 years, but right now it's still kind of fuzzy.

So the current projects: I don't really want to go into too much detail, but there are five ongoing projects right now.  The first one is regulating data centers.  We don't have a lot of regulations on data centers, especially regarding climate change, globally and internationally; not just Japan or Korea, but everywhere in the world.  The U.S. has the largest number of data centers in the world, and the U.S. is not really known for hardcore federal regulation, which means that it's often left up to the state-level governments, like California or, you know, Tennessee, where I was.  And those governments often do not have the expertise on data centers to propose any type of regulation.  So that's one of the projects.  Another is a media analysis of energy transition obstruction in Korea, and the student who's working on this is sitting there in the back; he's done a wonderful job so far.  His name is Ibam.  We're looking at how different types of media outlets show that the energy transition in Korea has not really gone from oil‑based energy to renewable energy, but instead from oil‑based energy to natural gas.  So there's a transition, and it's slightly better, but it's not great.  And we are trying to see how that kind of obstruction happens in the media outlets.

The third one is a quantitative analysis of the need for social science in automated vehicle (AV) research, that is, self‑driving cars.  A lot of automated vehicle research so far has been focused on technological fixes: we need more sensors, we need lidar, more radars, more cameras, more of this kind of infrastructure around the road, and we need these wires under the road.  We show, using quantitative and statistical methods, that social science needs to be involved in order for us to understand AV technology much better.

The fourth one is a bit of a small ambition, my little personal side project.  I'm a sociologist by training, but I also study science and technology studies.  So I try to merge the multi-level perspective (MLP) of Frank Geels, which comes from science and technology studies, with Pierre Bourdieu's theory of forms of capital; that's a theoretical work that I'm doing.  I'm also doing some work on data donation, to promote data privacy and more sustainable data management and collection.  And I also want to quickly mention the project that Professor Park, Professor Kim, and I are doing together, the NYU-KAIST project, in which we are looking at how privacy is contextualized in different geographical regions based on their culture and their history.  We have tried to do this with lots of technologies, like cars: cars need to have seatbelts, right?  Everywhere in the world.  But with privacy, it's really difficult to have that kind of very concrete regulation that's universally applied, because everyone has a different understanding of what privacy really is.  So we're planning to collect data.  We just passed the institutional review board, the ethical review you have to do before you run a survey, and we're planning to do a survey on how people perceive those privacy issues and how the public would interact with potential privacy issues in the future.

And I think that's it for me.  I really look forward to the Q&A session.

Thank you.

[APPLAUSE]

>> Moderator Kyung:  Thank you very much.

So now, right next to me, the senior advisor from JICA, Mr. Atsushi Yamanaka.  He has extensive experience in the field of development, so I think he will give us a development perspective on how we can address the challenges and opportunities of digital governance.

>> Atsushi Yamanaka:  Thank you so much, and thank you to the panelists here and also to the audience.  It's always very, very hard to be the first session in the morning, so thank you so much for your dedication in being part of this session.

I'm essentially a practitioner.  I have been doing this for more than a quarter century, which is actually quite scary to think about.  So let me talk from the development perspective: how can digital governance contribute, and what are the threats of digital governance or digital technologies for development?  But I'm an optimist, so let me start with the opportunities.  Essentially, new technologies like AI are opening up a lot of windows of opportunity in developing countries.  A lot of developing countries are using AI and other cutting-edge technologies to innovate and come up with different products and services, which is really affecting people's lives and contributing to their socioeconomic development.  This acceleration is also creating opportunities for reverse innovation.  I don't necessarily like the word reverse innovation, because it sounds very pretentious, but we believe, and I believe as well, that a lot of the next generations of innovations, whether IT services or products, will be coming out of the so‑called developing countries or emerging economies.  Because one of the things that they have is needs: they have a lot of socioeconomic challenges, or needs, and that is fueling the innovations.  When you look at mobile money, for example, it came from Kenya.  It would never have come out of a country like Japan, where we essentially still deal with paper money and coins.  It would not have happened without the needs of developing countries.

Another interesting opportunity that we see is digital public goods and digital public infrastructure.  That's a very big topic this year, especially with the discussions in the G20, where India has been pushing hard for digital public infrastructure to bridge the gap of digital inclusion.  So we are going to see a lot of interesting opportunities, and hopefully this time we are not going to see the same kind of fate as we saw with open source, and with the funding for actually developing these things.

Another really encouraging sign from the WSIS process is multistakeholder involvement in digital governance and the policymaking process.  You know, I'm old enough: prior to the WSIS process, the UN really did not have this multistakeholder approach.  Of course, during the Rio summit, civil society got involved, but it was still not really this kind of multistakeholder approach.  IGF exemplifies this multistakeholder approach and how everyone can put their input into it.

Let me go to the next slide.  I tend to speak too much.  Please, Professor Park, if I speak too much, please cut me off.

You know, the challenges.  Well, there are still a lot of challenges.  Despite the fact that we have made huge progress in terms of digital inclusion, 2.7 billion people remained unconnected in 2022.  And there are still a lot of issues around digital services and digital devices: device affordability, and also gender and economic inclusion as well.  So in a way the problem is essentially the same as 20 years ago, but it has become much more complex.  In this respect, the last 2.7 billion people, the last 30 percent, are very difficult to reach.  So that is going to be a huge issue that we need to tackle.

Another thing: three weeks ago, I was part of the SDG Digital summit in New York.  Still, if we cannot utilize digital technologies well, we will not be able to achieve the SDGs.  So that is going to be another very big challenge.  And then on the governance side, yes, there are so many different governance challenges.  Japan is promoting cross-border data flows, and that raises a lot of questions: what will be the best examples, and what will be the framework for doing that?  Privacy, which the professor from NYU was talking about: what are we going to do with personal privacy and also human rights issues?  Cybersecurity is another issue, because we've seen cyber wars now.  AI, the Internet, and fragmentation as well: Internet fragmentation, who actually has the right to cut the Internet, all these things.  And also misinformation and disinformation: with the advent of AI technologies, how can you tell reality from fake?  That's going to be a huge issue.  And developing countries especially: how can you incorporate their voices and their input?  Because they're still not fully involved in the rule-making or framework-making process.  So we need to engage with them and give them the opportunities, because they actually represent probably more than the so‑called G7 or even G20.  How can you do that?  And lastly, data and information flows are still unidirectional.  This is creating very big frustration among developing countries, because they have big concerns about data colonization, especially with the big techs.  And also data sovereignty; I think this is a very big issue.  What if we put critical national information on the cloud, hosted, for example, in the U.S.?  Which laws and regulations are going to govern this data?  If it's national sovereign data, shouldn't the data owner have the right to control it?  But currently the applicable law is that of the United States.  So these are among some of the challenges that we really need to address in order to fully utilize the power of these technologies for development.

Thank you.

>> Moderator Park:  Thank you very much.  Thank you.

[APPLAUSE]

>> Moderator Park:  So we have the human rights perspective from Matthew, an infrastructure perspective on digital governance, and also a development perspective, development cooperation, and international relations from different kinds of stakeholders.  So now we're moving on to Professor Rafik Hadfi from Kyoto University's school of informatics, giving us the perspective of digital inclusion.  So okay, sure.  Rafik Hadfi.

>> Rafik Hadfi:  Thank you, Professor Park, for the invitation, and thank you to everyone for being here at this early time of the day.  My name is Rafik Hadfi.  I'm currently an associate professor at the department of social informatics at Kyoto University.  I mostly do work on AI, but I try to deploy it into society to solve a number of problems, ranging from the SDGs and ELSI to the most recent ethics‑related issues.

So the work we do is multidisciplinary by nature, and one of the topics I have been working on most recently, perhaps for the past two years, is digital inclusion.  And I take digital inclusion here in a very global way, in the sense that inclusion sometimes means equity, sometimes self‑realization, autonomy, et cetera.  I'll explain exactly what it means here.  It's one of the key elements of a society in the sense that it's a way of allowing the disadvantaged individuals of society to have access to ICT technology.  This is more like answering the question of the how: what kind of activities allow us to include these members of society?  And the goal here is to allow more equity.  Equity here is a more, let's say, inclusive and meaningful way to define meaning for an individual in society.  So the question of equity answers the what: what's the goal of an individual in society?  This connection leads us to something more global, which is self‑realization, and this includes all members of society.  Allowing digital equity will allow individuals to, say, meet their autonomy and also fully live their lives.  So the topic I'm working on is how to enable this using the most recent technologies, not just ICT but AI in particular.

I'll take one case study we have been conducting for, let's say, the past two years.  It was a very difficult challenge, because it addresses multiple problems in society.  Digital inclusion here is equated with gender equality and empowering women.  It's a study that was conducted in Afghanistan, and the focus here is women's inclusion.  The main problem was first of all conducting the study itself in Afghanistan.  It came at the same time the Afghan government was collapsing, and apart from the logistics, we had the already established problems there, like gender inequality and insecurity, so it was very difficult to conduct, plus the ICT limitations in Afghanistan.  Fast forward two years later, we managed to conduct the experiment.  Initially it was planned for 500 participants from Afghanistan, and then we narrowed it down to 240.  The main target here is basically how to build an AI that could be deployed in an online setting, where women mostly have the ability to use smartphones to communicate and also deliberate.  The AI was found to actually enhance a number of things.  One of them is the diversity of the contributions that women were providing in these kinds of online debates.  The second one, and most importantly, is the finding that this kind of conversation reduces inhibition.  Middle Eastern societies in particular are known for limiting, let's say, the reach of women in terms of freedom of expression and raising issues or problems related to their livelihood.  The third element found with this kind of technology is increased ideation.  We found that the AI allows women to provide more ideas with regard to local problems, like, say, employment or family-related issues.  So this is one practical case of conversational AI, which builds on large language models, ChatGPT, et cetera.  This is more advanced than, let's say, the problem-solving approach to conversational agents.  So, yeah, this is a particular practical example of using AI for social good, and the deployment was done in Afghanistan.
So I'm looking forward to your questions and, yeah, that's all for me.

[APPLAUSE]

>> Moderator Park:  Thank you very much.  Actually, Rafik has been leading the research group on democracy and AI, which has held conferences on AI, and we'll also have a conference next year in Seoul.  There was also a very interesting discussion in Hong Kong in August.  You know, today is a Japanese national holiday; it's Sports Day, right?  I think we are doing a lot of brain exercise today, with very ambitious and very interesting talks and sessions.  So we're moving on to Professor Liming Zhu, school of computer science and engineering, University of New South Wales.  Sorry about that.  He will talk about democratizing AI from the perspective of ‑‑

>> Liming Zhu:  Thanks very much for having me.  Right.  So I'm a professor at the University of New South Wales, but also a research director at CSIRO, Australia's national science agency, where we have around 6,000 people working in the areas of agriculture, energy, mining, and of course AI, in our AI and digital business unit.  I also work quite internationally with the OECD and on some of the standards on AI.  We also have a national AI centre, established 18 months ago, which is hosted by Data61.  It does not do research but works on AI adoption, especially responsible AI adoption.

Very briefly, on Australia's journey: Australia developed AI ethics principles, commissioned at the time by the Department of Industry and Science, but with industry consultation.  If you look at the principles, they're not really that surprising; a lot of international organizations and countries have developed similar ones.  But I want to draw your attention to the first two or three.  They are really the human-centered values, especially the plurality of values: we recognize the trade‑offs, the different cultures, and inclusiveness in human values, environment, and well‑being.  Australia is a fair country, so fairness is high up in there.  Then we have the traditional attributes, I would say, for any system, but AI poses very unique challenges to them, such as privacy, security, reliability, and safety.  And then there are additional interesting quality attributes like transparency, explainability, contestability, and accountability, which arise uniquely in the AI context.  Since then, since 2019, it's been four years, and Australia has been focusing on operationalizing these principles.  We have done a lot of industry consultation and case studies to get industry feedback.  The picture is our minister for industry and science, Minister Husic, who has launched the Responsible AI Network.  Members have to commit to at least three AI governance mechanisms or principles within their organization to be part of the network and share their knowledge.  And there is a book coming up called Responsible AI: Best Practices for Creating Trustworthy AI Systems, based on the work that we have done, with three industry case studies in that book.

So what is our approach?  I think the key thing we realized is that best practices need context.  People need to know when to apply them, and there are both pros and cons to these best practices.  And best practices need to be connected.  We also see people at the governance level say, let's have a responsible AI ethics committee or some automated mechanism at the governance level, doing great things but not connected to the practitioners.  The practitioners are your AI engineers, software engineers, AI developers.  How do we connect all these best practices so people collaborate?  So we have developed a responsible AI pattern catalog, which you can easily find by searching.  It connects governance patterns, which are probably what the people in this room are mostly interested in; process patterns, meaning the software development and AI engineering process from a development point of view; and product patterns, which are the metrics and measurements on a particular product and how to evaluate it.  The key thing is that they are connected, and you can navigate around them to have whole-system assurance.  At this moment, a lot of AI governance is not about the AI model itself; even in ChatGPT there are many components, AI and non‑AI, outside the model.  Every prompt you put into ChatGPT is not the true prompt going back to the model: additional text is added to the text you have entered, things as simple as "please always answer ethically and positively."  Those kinds of instructions are attached to every single prompt you put into it.  That's a system‑level guardrail.  This is a very simplistic example, but many organizations that leverage large language models can put in their own unique, context‑specific guardrails to get the benefits while managing the risks.  And those kinds of pattern mitigations need to connect with the responsible AI risks that every company can identify as part of their typical risk registry systems, and we have developed many question banks you can use to ask questions about your organization, making responsible AI risk assessment part of that.  You can find more information in some of the papers I listed here; search online.  This has been featured by the Communications of the ACM as one of the most impactful projects in east Asia recently.  So I'm very happy to share my experience with all of you.  Thank you.
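
[To make the system‑level guardrail point concrete, here is a minimal sketch in Python of wrapping user input with a standing instruction and filtering output at the application layer.  This is illustrative only, not the speaker's or any vendor's actual implementation; the `call_model` stub, the guardrail text, and the blocked terms are hypothetical example policies.]

```python
# Minimal sketch of a system-level guardrail around an LLM.
# Assumptions: `call_model` is a hypothetical stand-in for a real LLM
# API; the guardrail text and blocked terms are example policies only.

SYSTEM_GUARDRAIL = (
    "Please always answer ethically and positively. "
    "Refuse requests involving personal data."
)

BLOCKED_TERMS = ("credit card number", "passport number")  # org-specific policy


def call_model(full_prompt: str) -> str:
    # Stand-in for a real model call so the sketch runs end to end.
    return f"[model response to: {full_prompt[:40]}...]"


def guarded_query(user_prompt: str) -> str:
    # Input guardrail: the user's text is wrapped, not sent verbatim,
    # mirroring the "prompt is not the true prompt" point above.
    full_prompt = f"{SYSTEM_GUARDRAIL}\n\nUser: {user_prompt}"
    answer = call_model(full_prompt)
    # Output guardrail: a simple post-hoc filter at the system level.
    if any(term in answer.lower() for term in BLOCKED_TERMS):
        return "Sorry, I can't help with that request."
    return answer


if __name__ == "__main__":
    print(guarded_query("Summarize today's session on AI governance."))
```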

[APPLAUSE]

>> Moderator Park:  I think today's speakers are providing a lot of insights on how we can collaborate together, across different stakeholders, in the global process of shaping AI frameworks.  We have the Australian cases and the Korean cases, and, I would say, a somewhat more market-driven U.S. approach, and we also need engagement from developing countries.  So I think that's why today's session is very timely.

So I think we have Takayuki Ito online.  I'd like to introduce Takayuki Ito from Kyoto University.

Sure, okay.  Sure.

The floor is yours.

You can hear me, right?

>> Takayuki Ito:  Yes.

Okay.  I'll share my slide, okay?

[PAUSE]

>> Takayuki Ito:  All right.  So thank you for introducing me.  I'm Takayuki Ito from Kyoto University.  I'll talk about one of my current projects: towards hyper‑democracy, an AI‑empowered crowd‑scale discussion support system.

So we are working to develop the hyper‑democracy platform, trying to support group decision‑making and consensus building.  Basically, in current social networks there are many social problems, like fake news, gerrymandering, filter bubbles, and echo chambers.  These are very important problems.  So here, by using AI like ChatGPT, we are trying to solve that.

So actually we have been working on this kind of project for ten years.  In 2010 we started to create a system called COLLAGREE, where a human facilitator tried to support consensus among the online participants.  Then, from 2015, we created the D‑agree system, where one AI agent supported group discussion among online participants.  So here we use AI to support human collaboration.  And now we are working on the hyper‑democracy platform, where many AI agents try to support crowd‑scale discussion.  This is an overview of the D‑agree system.  People discuss by using text chat, and then their posts are analyzed by our AI and structured in the database.  Basically, using the structured discussion, the AI interacts with the online participants.
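
[As a rough illustration of the structuring step described above, here is a small Python sketch that classifies discussion posts into IBIS‑style node types (issue, idea, pro, con) and links them into a discussion graph.  The schema and the keyword classifier are toy assumptions for illustration, not the actual D‑agree implementation; a production system would use trained NLP models and its own data model.]

```python
# Toy sketch of structuring free-text posts into an IBIS-style
# discussion graph (issue / idea / pro / con). The keyword rules are
# placeholder heuristics, not the classifiers a real system uses.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    text: str
    node_type: str  # one of: "issue", "idea", "pro", "con"
    replies: List["Node"] = field(default_factory=list)

def classify(post: str) -> str:
    # Placeholder heuristic for detecting the discussion-act type.
    lowered = post.lower()
    if post.strip().endswith("?"):
        return "issue"
    if lowered.startswith(("we should", "how about", "i propose")):
        return "idea"
    if lowered.startswith(("i agree", "good point", "yes,")):
        return "pro"
    return "con"

def add_post(graph: List[Node], parent: Optional[Node], post: str) -> Node:
    # Attach the classified post under its parent, or as a new root node.
    node = Node(text=post, node_type=classify(post))
    (parent.replies if parent is not None else graph).append(node)
    return node

# Example: one issue, a supporting idea, and an agreement.
graph: List[Node] = []
issue = add_post(graph, None, "How can the city improve waste collection?")
idea = add_post(graph, issue, "We should add weekend pickup routes.")
add_post(graph, idea, "I agree, weekends are when bins overflow.")
```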

Here, actually, we started this project in 2015.  We didn't have ChatGPT, but we realized it using our classic technology, and now, by using GPT, we are working on more sophisticated AI.  This is one case study of D‑agree.  We used the D‑agree system in Afghanistan, particularly in August 2021, when the American troops left Kabul city.  There we opened D‑agree to the public, and we gathered many opinions and voices from the civilians in Kabul city.  As you can see, our AI can organize the opinion types and characteristics.  From August 15, when the American troops left, the number of issues and problems increased drastically; it's shown in that box in the center graph.  We are working with the United Nations Habitat and many of these kinds of organizations.  So now we are expanding the AI facilitation to many AI agents, and this is the current multi‑agent architecture; we are testing this agent architecture now.  So this is the conclusion: we are now developing the next‑generation AI‑based group decision system, and it's called hyper‑democracy.  Thank you very much.

>> Moderator Park:  Thank you very much.  Actually, Professor Takayuki Ito has been pioneering interdisciplinary research at the intersection between AI and democracy.  So I think he provides a lot of insights on the importance of a multidisciplinary approach; in this research field we have to work together with policy scholars, development scholars, and other scholars as well.  Right, so we are on time right now; I'm feeling very relieved.  So we have our last speaker, Seung Hyun Kim from KAIST.  Are you ready?

He will be sharing some insights from the development field.

>> Seung Hyun Kim:  Hello.  My name is Seung Hyun Kim.  I'm currently a Ph.D. student studying under Professor Park.  Before the peaceful life of a graduate student, I was a program officer for an institute called the Korea Development Institute (KDI), responsible for policy recommendations for developing countries.  One of the stark realizations I had is that in the years I worked at KDI as a program officer, I had only been to developing countries.  My passport records do not go above an annual GDP per capita of $10,000 per year.  So I haven't been anywhere; this is the most advanced country that I have been to overseas in many years.  And what I realized is that when I came to KAIST, I was exposed to all these discussions about AI, ChatGPT, bioethics, et cetera: extremely advanced, cutting-edge technologies that are going to change everything.  But then my thoughts overlapped: what happens when these cutting-edge technologies overlap with what I saw in the developing countries, which have a very different societal and economic context?  So I'd like to share three snapshots that provide some peek, not insight, I wouldn't say, just a peek into how all these insightful discussions we've had today may develop in the developing world.

So this is a snapshot of a photograph taken on November 16th, 2016 in Medellín, Colombia.  If you've seen Narcos, this is the narco capital of the world.  If you look a little bit below, you'll see that there is (speaking in non-English), which is area one, the ZIP code for the poorest neighborhoods in Colombian cities.  Before the introduction of the cable cars, which were a revolutionary transport mechanism, this area was actually isolated from the entire city of Medellín.  So it was a breeding ground for cartels, drug dealers, smugglers.  Cops would not go in there; they would say, I'm not going in there.  But because of the cable cars, people were able to get jobs, come out into the city, and have equal opportunities for education, jobs, and banking, capital to borrow money, et cetera.  What wasn't publicized was that the drug cartels were also using this new cable car: they would hide cocaine inside the replaceable parts, and this became an automated distribution mechanism that was not publicized for a very long time.  But if cable cars did this for the drug cartels, what are they going to do when they get their hands on ChatGPT and AI?  It's something to think about.  The drug cartels were also able to thrive in these regions because a dollar could bribe anyone in the entire neighborhood; for ten dollars you could basically make anyone do anything.  What can they do with a brand‑new laptop connected to generative AI?  What can the police do, what can the government do, against such matters?  So one snapshot is unequal opportunities and how that exposes existing social and economic problems.

The second snapshot is one of the lovely pictures I've taken in Addis Ababa, Ethiopia.  This is a scene I will probably remember for a very long time: an official explaining to us, the Korean researchers, the current Ethiopian public finance system, the electronic digital finance system.  I will not forget this line: "The tax system runs on Microsoft.  The expenditure system and budget planning system run on Oracle.  And we haven't had a digital system for auditing yet, but we will soon, as soon as we get the funding."  So for one government, you have three different information systems running simultaneously that do not communicate with each other, that cannot communicate with each other.  Fragmentation on an enormous level.  And why is there fragmentation in the core governance structure for the distribution of financial resources, at the ministry of finance?  Why is that?  Because they're financed by different institutions: by the World Bank, by UNDP, et cetera.  They're financed by different institutions, who are connected to different service providers, who in turn cover only a very small part of the government.  And no one agency can provide one comprehensive solution for the entire government.  So what you have is extreme fragmentation of ICT and information systems in government.

The third snapshot: I spent two weeks in Equatorial Guinea.  It was the unhealthiest time of my life; I was basically half unconscious because of the malaria medication that I had to take every two days.  This is President Obiang Nguema Mbasogo speaking at the Equatorial Guinea national economic conference.  All the ministers, all the high-level profiles of the government, were there, but the entire conference was run and executed by Chinese officers from mainland China.  Every staff member, except for the participants, was Chinese, from the mainland.  And after spending about a week there, it wasn't long until I found that the entire ICT infrastructure was basically dependent on one country and one company: China's Huawei.  You couldn't get an Internet connection or Wifi; you couldn't send an e‑mail without having some sort of support from Huawei.  So I thought this snapshot provides a very significant view into technology sovereignty.  In sum, you have, one, unequal opportunities that may be aggravated into a whole variety of societal problems; two, fragmented information systems that create problems for governments; and three, technology sovereignty, which ties developing countries to companies, other governments, and international agencies.  That makes a very complicated picture for the developing world, and for the developing world to become more advanced and actually transform into a developed world, I think these three peeks are something that we should think about.

Thank you.

[APPLAUSE]

>> Moderator Park:  Thanks a lot.  So we have five minutes to go, and I would like to open up the floor, because I think we have more than 30 attendees in this session.  Thank you very much for your participation.

So if you have any comments or questions, please raise your hand.

Professor Kim from KAIST.

>> Audience member:  First time doing this.  If I had known I would be standing here, I would never have asked the question.  But the question, I guess, is very much a comment to all of us, because just as the problem of fragmentation at the single‑country level was pointed out, we are actually witnessing huge fragmentation of AI governance at the global level.  We have something going on at the World Bank, the UN, various agencies of the UN, and even within single countries.  And we are approaching this problem from many different angles, right?  A development perspective; sociological, ethical, philosophical, CS, whatever.  So the question is how we can actually reduce this degree of fragmentation when we talk about AI governance and other mechanisms of regulation.  As you might know, some around the world are talking about the need to create something like the IAEA, a version for AI, but there are many limitations, of course, because of the very different natures of the two sciences: nuclear science is so centralized, and it was born out of a very dire, emergent situation during World War II, but AI is apparently such a democratic technology, because anybody can touch upon some part of AI.  So my question is whether any of you can address this question of the need, and the way we can actually think about how to reduce this problem.  You see what I'm saying, right?  Okay.

>> Moderator Park:  Thank you very much.  If you don't mind why don't you just have questions all together and then we'll just address the questions.

Please, could you briefly introduce yourself and then ‑‑

>> Yes, no problem.  I'm Sophie Cal Haan, a diplomat from the Danish ministry of foreign affairs, posted in Geneva.  Thank you so much for all your really interesting perspectives.  We of course also see a need for global governance of AI, and just like the previous audience member, we also see this fragmentation.  It was really good to hear, especially in the first presentation, the focus on human rights, which we also find very important, and the general multistakeholder engagement; this is a good example of engagement with academia.  We see a need to take a really risk‑based approach, and of course thank you for referencing the EU AI Act; as an EU member state we very much support that approach.  I also want to ask basically the same question: how do we approach this globally, given this fragmentation?  And then I would like to come back to the first speaker's point on enforcement: how do we ensure this regulation is implemented, so that we have oversight and accountability afterwards?  Thank you.

>> Moderator Park:  Thank you very much.  And gentleman here?

>> Good morning.  My name is Fida, I'm from Liberia.  I would like to ask a question from an African perspective.  I realize most speakers are from the Asian region, and I'm speaking from another continent, but we in Africa are also part of the global society.  What do you see as the impact of AI when its use is geographically limited?  In most of the conversation here, I saw only example cases, but looking at Africa as Africa, there are a lot of issues.  So what advice can you give to our policymakers in Africa in terms of how research can be extended, and what can they do to ensure that this similarly becomes a reality?  What can Africa prepare for?  And since we do lack expertise, what is your advice for some of us, the youth, and the larger audience in Africa?  That's my question.

>> Moderator Park:  Thank you very much.  Okay, another question or comments maybe.

>> Hi.  My name is [inaudible], from Indonesia, from the UN Institute for Disarmament Research.  I do have a few questions; please bear with me.  First of all, for the first speaker, on the human rights-based AI framework, just to jump on the previous point made by a fellow audience member on the question of enforceability: I was wondering if you could elaborate on the difference between human rights as a moral framework on the one hand, and on the other hand as a legal framework, with the established international human rights law mechanisms, and what we can learn from the case law that has been established over the past years as international human rights law was developed.

Second, on digital public goods and public infrastructure: previously I used to work at Chatham House, a London-based think tank.  We did research on that, and one of the questions we kept wondering about: of course there is the importance of public stewardship, but at the same time there's the question of limited resources and the need for scalability.  How do you deal with that?  And third, on conversational AI to advance women's inclusion, one of the questions that popped into my mind: it's great, it's got a lot of potential, but how do you deal with the availability of training data, whether in terms of data collection or data hygiene, so that it's available in an equitable way?  Not just in terms of being free from bias, but also taking into account that, for example, some communities might need to be represented in these models, while at the same time others might want to be forgotten because of privacy and oppression risks.  How do you deal with that?  Thank you so much.

>> Moderator Park:  Thank you very much.  These are wonderful comments and questions.  If I may just summarize with key words: fragmentation and global AI governance, and how we can actually collaborate on it; what African countries especially can prepare for in an AI strategy; the potentially conflicting rationales within the human rights perspective; digital public goods and how we can enhance scalability; and how we promote digital inclusion in terms of data collection and data analysis.  So, Matthew, are you there?

>> Matthew Liao:  Yes, I am.

>> Moderator Park:  I'll give you two minutes.

>> Matthew Liao:  Sure.  Great questions.  The fragmentation question is such a difficult problem.

Very quickly, you know, I think this is something that we all need to work together on.  It's multistakeholder; we need everybody involved in the conversation: the public, the government, the researchers, and so on and so forth.  Now, that sounds kind of vague, so here's something a bit more concrete.  I think Professor Kim mentioned something about nuclear energy.  There are two things I want to say.  I think the medical model is actually a pretty interesting thing to think about.  If you think about drug discovery, there's a lot of innovation in the drug arena.  At the same time, there's a lot of regulation protecting people.  There's a lot of human subject research, a lot of stuff with non‑trivial risks, and yet we can do it in a fairly responsible way, and the international community has basically coalesced around norms to make sure this process is safe.  I feel like we can do something similar with respect to AI.  I like the EU's risk-based approach as well: some stuff that's low risk, we can look at it and say, hey, we don't need to worry that much about it, if something is being used for games; but other things, you know, like medical devices, maybe we need to pay more attention, especially if they involve humans.  And I'll just say one other thing, which is that I think there's a lot of regulatory capture right now.  A lot of people think it's too big to be regulated, and I think it's useful to look at the history of regulation.  Take airplanes, for example.  Airplanes used to fall out of the sky every single day, right?  And then at some point people said, you know, we need to come together and regulate the airline industry.  Everything from the engines on down is regulated, and now the airline industry is the safest; it's so safe to fly these days.  I feel like we can do something similar as well, and so maybe those are indirect models we can appeal to, to address things like fragmentation.  So...

>> Moderator Park:  Thank you, Matthew.  Does anyone here want to address the questions or comments?

Okay, sure.

>> I have a few comments on this fragmentation.  Maybe perhaps we need to create an AIGF instead of an IGF: an AI governance forum.  You know, even Internet governance, right, we have been talking about for 20 years, and we still have not come up with a suitable model.  I think we need to come up with workable, best-example models, instead of a complete global regime, which is very, very difficult and not very palatable for many, many stakeholders.  So I think we have to come up with best examples, really workable solutions, rather than concrete regulations on AI.  I think that's the way to go.  In terms of Africa: yes, I actually work mostly in Africa, by the way.  For the last 13 years I have mostly worked in Africa, mostly in Rwanda.  AI is actually being utilized very much in the African context as well.  A lot of startups have been using AI and other data‑based solutions, so there are a lot of solutions coming out.  However, human resources are limited, you're right.  So there are different ways.  There are a lot of shared advanced institutions now established in Africa as well, like Carnegie Mellon University Africa and the African Institute for Mathematical Sciences, so those are also ways to advance.  And also, developed countries like Korea and Japan: I have known many students who study AI and continue to do Ph.D.s here in Korea and Japan, and one actually developed AI models for African languages while he was studying here.  He was a research fellow at one of the top research institutes; unfortunately, he moved to Princeton to continue his research.  But that's the kind of human resource capacity initiative that, for example, JICA is known for, and KOICA and other countries are doing that as well, so you can take advantage of those kinds of frameworks.  About DPI and DPGs: yes, scalability was always the issue with open source initiatives and with ICT for development as well.  But I think we're seeing a lot of interesting DPI, like the Indian model, that is scalable; they're actually serving like one billion people, basically, right?  So that is showing a lot of scalability beyond proof of concept, which was, you know, the hallmark problem for ICT for development.  Lastly, about women's inclusion: I think these technologies actually give quite unique opportunities in terms of pseudonymization, sort of masking the gender but basically giving the opportunity for inclusion, I think much more than an in‑person environment.  I just wanted to point that out.

>> Moderator Park:  Thank you very much.  So before we close, I know we are running out of time, I'd like to just give a couple of minutes to Rafik Hadfi and Professor Zhu.

>> Rafik Hadfi:  On communities, for example, in terms of inclusion in data collection, training, et cetera: one approach we found is to not just deploy a solution in a simple social experiment, but to have a holistic approach where we form local communities, let's say villages, municipalities, schools, and train them on how to use the whole, let's say, AI system.  This has been done for a few studies, and at the same time it allows us to build datasets to train these models for these communities.  Because one of the things we encounter is that when you train these AIs, obviously in English, you're biased towards one particular, let's say, context.  So we've done this in Tunisia, and in Indonesia, on an island in West Nusa Tenggara, with the University of Mataram, where we built these datasets.  Currently we're focusing on the Afghan case, because I think, as we all know, there's a lot to do in Afghanistan, and the case study there mostly focuses on equity and women's empowerment; and, of course, as I said, the data collection and the AI models are trained particularly for this context, although they have been generalized.  This year it's Afghanistan; maybe, I don't know, I would try Iran next year.  Yeah, so that's all for me.

>> Liming Zhu:  I will be brave.  On fragmentation, I'm going to be slightly controversial, because from a science point of view we see there are different stakeholder groups, great institutions, the UN, the OECD, and if they are bringing attention to AI and governance, I think it's valid, because different stakeholder groups have slightly different concerns, and robust discussion between the groups, and making trade‑offs among some of these, is going to be important.  I don't see that as fragmentation at that level, but more as different interests among different stakeholder groups.  When it comes to regulation, of course, there is also the importance of both horizontal regulation, regulating AI as a whole, with its pros and cons, and vertical regulation of particular products.  The interaction between them, removing some overlaps, is important, but there needs to be both, rather than one or the other.  The one thing I think that shouldn't be fragmented is science.  Science is international, science is not value‑based, and the scientific evidence and advice going to these policy and stakeholder groups really needs more collaboration.  There are a lot of scientific research organizations here, and I'm looking forward to collaborating with them.  Thank you.

>> Moderator Park:  So thank you very much for your participation today, and I'd like to continue our discussion, so please keep in touch.  Before we close, I'd like to particularly thank Seung Hyun Kim and Junho Kwon, doctoral students.

[APPLAUSE]

>> Moderator Park:  Thank you very much for your time, and thank you to all the speakers.  If you leave your contact details with Seung Hyun after this session, we'll keep in touch with you.  Thank you very much.  Thank you.

[APPLAUSE]