IGF 2023 – Day 3 – Open Forum #131 AI is here. Are countries ready, or not? – RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> MODERATOR: Good afternoon everyone in Kyoto.  Good morning, good afternoon, good evening for those of you joining online.  It's great to have you all with us.  This session is "AI is here ‑‑ are countries ready, or not?"  This week has been full of AI‑related events and I'm grateful you have the stamina to join us for this one.

This is a discussion we really want to bring forward on how countries in different stages of their digital transformation are taking the opportunity or trying to figure out the challenges around adopting artificial intelligence for the purpose of their national development process.

And so looking forward to a good conversation on this.  To introduce myself, my name is Robert Opp, I'm the Chief Digital Officer of the United Nations Development Programme.

UNDP for those who are not aware is essentially a big development arm of the UN system.  We have presence in 170 countries.  We work across many different thematic areas including governance, climate, energy, resilience, gender, et cetera, all for the purpose of poverty eradication.

And our work in digital really stems from that, because it is about how we embrace the power of digital technology in a responsible and ethical way that puts people and their rights at the centre of technological support for development.  So just to set a few words of context: obviously AI, especially with the advent of Generative AI, has just exploded into the public consciousness around what is potentially available for countries in terms of the power of technology.

And we are at, let's say, a pivotal point in history.  Three weeks ago we held the SDG Summit that marks the halfway point to the Sustainable Development Goals.  We are not on track for the Sustainable Development Goals, unfortunately; only 15% of the targets are actually on track.

In some work that we did together with ITU, in a report called the SDG Digital Acceleration Agenda, we found that 70% of the SDG targets could be positively impacted by the use of technology.  And I have to say, during the high‑level week of the General Assembly a few weeks back, there was a lot of discussion around digital transformation overall and the power of technology, and particularly, like here, the interest ‑‑ the "buzz", I might say ‑‑ around artificial intelligence and what it might do.

It's not so straightforward for countries to know what to do or where to turn ‑‑ countries that don't necessarily have all of the foundations, who are not aware of the models out there.  So the conversation today is really about what situation countries are in now, and what we might do to support countries as they embrace AI.  What can countries also do to reach out and organize themselves with the support of others?

And I think it's important to note that our view on this is really based in the opportunity.  A number of discussions this week have focused on the potential negative impacts of artificial intelligence which is correct because there are lots of concerns, but on the positive side, when we look at this as UNDP, there is tremendous potential opportunity here to embrace AI and really make significant progress against the SDGs.

So the conversation today is about how to do that in a responsible and ethical way, but we are going to focus more on the opportunity than the sort of doom‑and‑gloom, end‑of‑humanity view.  Not that that's not important.

So joining us today, to give some texture to this round table, we have a few fire‑starter speakers.  We are grateful to have a great mix of people who can really speak to this issue.  We have Dr. Romesh Ranawana, who is the Chairman of the National Committee to Formulate AI Policy and Strategy for Sri Lanka, an entity established by the President this year.

We have joining us soon, hopefully in the chair beside me, Dr. Alison Gilward, who is the Executive Director of Research ICT Africa, a regulatory think tank.  We have Denise Wong, who is Chief Executive at Singapore's Infocomm Media Development Authority, IMDA.

We have Galia Daor, a policy analyst at the OECD Directorate for Science, Technology and Innovation.

And we have Alain Ndayishimiye ‑‑ I'm sorry if I haven't gotten the syllables of your last name right ‑‑ who is the project lead for artificial intelligence and machine learning at the Center for the Fourth Industrial Revolution based in Rwanda.

And so my plan here is that we will go through the speakers and then we want to turn this over to you.

I will make a couple of remarks from the UNDP side about the work we are doing in this space before we go to Q&A.  The offer to join us at the table is still open for those who would like to come, because it is a round table.  With that, let's go to our first speaker.  The overall question is: are countries ready for AI?  What are you seeing on the ground?  And what have the experiences been so far in building open, inclusive, trusted digital ecosystems that can support AI?  To speak first, I will turn to Dr. Romesh Ranawana from Sri Lanka.

>> ROMESH RANAWANA: Thank you so much, Robert.  Good morning, good afternoon to all.  As you mentioned, Sri Lanka has just embarked on this journey of trying to improve AI readiness and bring the benefits of AI to the general population.  But for a country with a very low level of AI readiness and AI capacity, it's quite a gargantuan task, mainly because the AI revolution is just starting.

If you look at where other countries are, we are significantly behind, and we need to catch up to make sure that we bring the benefits of AI both to the people and our economy.  And something that we have seen happen around the world over the last few years is that most countries have realized that building AI readiness, building AI capacity cannot be done at the corporate level or the private sector level.

It's now accepted that it has to be a national‑level initiative.  We have seen most Developed Countries formulate national strategies over the last few years, and most middle income countries have formulated policies over the last two years as well.

So in Sri Lanka we have a strange situation where we have lots of engineers who are capable of building AI systems.  We did a study recently and found that just over the last year there have been more than 300 AI projects conducted by students in our universities.  Yet very few of those systems ‑‑ in fact, none of them ‑‑ are going into production.

They stop at the stage of a proof of concept or a research paper, but they don't go out into society and actually deliver benefits.  So our challenge was how to create an ecosystem where not only is the research done, but some of those benefits are brought out into common services: building the economy, making food production efficient, bringing it into education, and things like that.

We are fortunate that the Government took the initiative to set up the task force to look at national AI policy.  Our current trajectory is to launch the policy in November, and then a strategy ‑‑ an execution plan for the policy ‑‑ which will come out in April 2024.

The challenge of AI is the fact that AI is a general purpose technology.  AI can affect every sector, from education to the health sector to the national economy and Government services, and as a country with limited resources our challenge was how to pick the battles we want to address initially.  We can't do everything, because our resources are limited.

And this is quite a task.  As a general guideline for how we want to approach this, we have three main pillars.  First, what are the foundational elements that we need to put in place to build up AI readiness and capacity?

Number two, what are the specific applications and specific areas that we need to focus on that will have immediate impact, and also impact in the medium term?  And third, how do we set up the regulatory environment to protect our citizens from the negative impacts of AI?  Once again, the scope of what we could do is unlimited, and we have been very fortunate that the UNDP stepped in and has started working on an AI readiness assessment of Sri Lanka, which will be the foundation for setting out parameters on what should be the main priorities and focus areas for the AI strategy we are developing.  The assessment is under way at the moment, and it will evaluate our strengths, weaknesses and the opportunities for Sri Lanka in terms of AI.

So as we stride forward, our sights are set on fostering an open and inclusive digital ecosystem that will not only withstand the AI revolution, but also harness its potential for the greater good of our people.  It's not going to be an easy task.  Developing a policy and strategy is one thing, but I think the key element for Sri Lanka is how we will execute on this, and actually do it in a way that is sustainable, where the policy is not put aside when Governments change or the priorities of the Government change.

So that's something we are looking at: how we can approach that.  Really, the focus at the moment is first identifying our boundaries ‑‑ what should Sri Lanka initially focus on ‑‑ and from there onwards, building on where we are going to go.  Thank you.

>> MODERATOR: Thank you.  Fascinating questions to be asked, and I'm sure they are shared with a number of other countries.  We will go to our next speaker, Denise Wong of IMDA in Singapore.  Singapore has done a lot very quickly, I would say, in the AI space, and we are aware of some of the work you have done in policy and governance, and how you have worked to put people at the centre, taking a human‑centric approach.  Could you tell us more about the approach Singapore has taken and some of the things you have done to make this a human‑centric endeavor?

>> DENISE WONG: You are right, our policy has been quite an inclusive one.  As part of the national AI strategy, everything that we are doing today has been built upon foundations of inclusion and a high level of readiness and adoption within communities.  That's been the bedrock for the work we've done since.

Focusing specifically on AI governance, which is the area I work in: in governance and regulation, you are of course always thinking about risks and potential misuse.  I prefer not to see it only in that frame.  A lot of our work has been about what AI means for the public good and the public interest.  It's in that context that we see opportunities for our public at large, but, of course, with the appropriate guardrails, safety nets and implementable guidance.

And some of our approach has been about being practical and having detailed guidance to help shape norms and usage.  In doing so, we started off with a model governance framework fully aligned to the OECD principles ‑‑ it was important to us to have the international alignment ‑‑ and we took a multi‑stakeholder approach in developing it.

We also took a fairly international approach in doing that.  We got feedback from more than 60 companies from different sectors, both domestic and international, as part of the first iteration of the model governance framework.  We also worked on what we call ISAGO, an Implementation and Self‑Assessment Guide for Organisations.  That was done together with the World Economic Forum Center for the Fourth Industrial Revolution.

That helps provide companies with practical alignment with the governance framework.  We put together use cases which contain illustrations of how local and international organisations can align with and implement these practices.  So it was a very practical approach we took with organisations, and that took away the sting of politics or risks or existential concerns, and focused on what companies could do and should do at a practical level.

In the Generative AI space we have been practical and industry focused.  We issued a discussion paper in June focusing on gen AI.  It was framed as a discussion paper rather than a white paper because we wanted to generate discussion.  It was an acknowledgment that we didn't know all of the answers ‑‑ no one does ‑‑ and we wrote it together with a company in Singapore so we had both perspectives.  We also launched the AI Verify Foundation in June.  It's an open source foundation.

To be honest, we are learning how to do open source foundations as we go along, but it has an AI open source toolkit ‑‑ not in the gen AI space but in the discriminative AI space ‑‑ a toolkit we wanted to build that companies can take, adapt and adopt for their own use, so it lowers the cost of compliance for companies.

The AI Verify Foundation now has over 80 companies who have joined us from all over the globe.  We did think it was important to bring different voices to the table at the industry level, but also at the end user level, to understand the fears and concerns that people had on the ground.

So it's been a constant conversation that we have had with our public, with our companies, with international organisations, with other Governments.  All with the aim of, I guess, interoperability and global alignment, but also to encourage an open, industry‑focused lens.  That is generally the way we have approached a lot of these issues around critical emerging technologies and frontier technologies, where we may not know what the answer is.

The last piece I will say is that we have been looking at the questions of standards, benchmarking and evaluations.  A lot of what comes beyond the principles will be about that: what are those technical standards.  We do think it is quite important to have international alignment on that as well, and we do hope that beyond general principles, that is where a lot of the conversation will go.  Thank you.

>> MODERATOR: Thank you.  And I want to turn to our third country‑focused example.  We are going to go to Alain Ndayishimiye ‑‑ I'm sorry, you will have to correct me on the pronunciation of your last name ‑‑ who works at the Center for the Fourth Industrial Revolution based in Rwanda.  We know that the Center for the Fourth Industrial Revolution is by nature a multi‑stakeholder endeavor.

And I guess my question for you is: what's the situation you see on the ground in Rwanda, and how can multi‑stakeholder approaches help with building the capacity of local digital ecosystems to engage in AI?

>> ALAIN NDAYISHIMIYE: Yes, thank you, moderator.

Once again, let me take the opportunity to greet everyone, wherever you are in the world.  Allow me to extend my gratitude to UNDP for inviting me to be part of the dialogue.  As AI continues to shape the world, the need for transparent practices has never been more pressing.  AI has the ability to transform societies, but it brings risks if it is not developed and managed responsibly.  This calls for a multi‑stakeholder approach.

My name is Alain Ndayishimiye, and I'm the project lead for AI and machine learning at the Center for the Fourth Industrial Revolution.  Our work revolves around closing the governance gap: defining governance protocols and policy frameworks that can be developed and adopted by Governments, policymakers and regulators to keep up with the accelerating pace of technology, harnessing the benefits of adopting AI while minimizing the potential risks.

For Rwanda, AI is a leapfrog technology that, through appropriate design and responsible implementation, can help Rwanda's social and economic aspirations of becoming a middle income country by 2035.  AI as a technology holds the power to help achieve the UN SDGs.  In addition, AI has been identified as a driver of innovation and global competitiveness.

This is a result of the Government's dedication to harnessing the power of data as a catalyst for social and economic change and transformation.  So in response to the question, allow me to reference our journey in developing Rwanda's AI policy.  We are referred to as the land of a thousand hills, so it is no surprise that we are becoming a land of AI innovation.  We have set forth on a transformative journey.  This policy isn't just a roadmap; it's a testament to Rwanda's vision and commitment to serve as a beacon for responsible and inclusive AI on the global stage.

However, this ambitious goal requires a strong foundation.  This is where we bring the concept of stakeholder collaboration to the forefront, and why we were established as a centre.  Our experience with the multi‑stakeholder approach has been both enlightening and transformative.  Implementing the national policy wasn't a solitary endeavor.  It was a symphony of collaboration between the Ministry of ICT in Rwanda, the public Secretariat, international partners, academia, the private sector, and civil society, all working towards a common goal.

These stakeholders brought different perspectives, experience and expertise, enriching the policy process.

The process of developing the AI policy was an inclusive and consultative one.  Workshops were held, bringing stakeholders together to share insights, concerns and ideas.  By involving multiple stakeholders, we fostered a sense of accountability and participation, resulting in a more robust policy framework.  One of the key benefits of a multi‑stakeholder approach is the diversity of perspectives it brings to the table.

In the case of Rwanda, involving diverse stakeholders meant holistically understanding the challenges and opportunities, resulting in more nuanced solutions.  The collaboration helped build trust, fostering a sense of ownership of the policy among all stakeholders.

Furthermore, a multi‑stakeholder approach promotes knowledge sharing and capacity building among stakeholders, ultimately strengthening local digital ecosystems.  Stakeholders shared experiences and knowledge, enabling learning and collaboration.  This has resulted in a more comprehensive AI policy, and also a policy that is effectively implemented.

This has grounded the AI strategy on a firm data governance foundation, built by collaborating with stakeholders through consultation.  Rwanda's AI policy encompasses a privacy balance, and this aligns with the principles of the recent data protection and privacy law we helped codesign, which mandates safeguards for the privacy of data processed on residents.

In conclusion, the multi‑stakeholder approach has played a critical role in strengthening local digital ecosystems and building the foundation of our strategy.  It has resulted in a more effective AI policy.  This approach has not only fostered inclusive and responsible development of AI, but also builds trust and confidence among stakeholders, promoting sustainable and inclusive local digital systems.

Collaborative risk assessment by stakeholders enables us to mitigate adverse AI risks.  Moreover, by collaborating with international partners, we have aligned our local AI initiatives with global practices, ensuring that Rwanda is at the forefront of AI both locally and internationally.  Thank you for the opportunity to speak.  Over to you.

>> MODERATOR: Thanks so much.  Some interesting observations there, and the last thing you said was about looking at what's happening globally.  That's actually where I would like to turn the conversation now.  We have a couple of speakers who are going to talk from a more zoomed‑out, overall perspective.  So with that I want to turn to Alison Gilward, who is the Executive Director of Research ICT Africa.  You have been working across the African continent on research to understand where countries are with their AI readiness.

We have heard an example from the Rwanda case, but if you zoom out a bit, what are some of the takeaways you are seeing from the African experience so far with AI?

>> ALISON GILWARD: Thank you very much.

So I think when we speak about digital readiness for AI, we are actually asking the same question as we did about readiness for the digital economy, for broadband or the Internet, because in fact, across the continent many of those foundational requirements are still not met.

So many, many countries now have 95% plus broadband coverage, mobile coverage, high speed broadband coverage, and yet we have less than the sort of 20% critical mass that we know is needed to see the network effects ‑‑ the benefits of being online, of the broadband associated with economic growth and those things.

So there are still existing analog problems, and there are also still enormous digital backlogs.  In our research we do nationally representative access and use surveys.  They are much more comprehensive, looking at financial inclusion and platform work and all sorts of other things.  They really give us a better sense of the maturity and what people are actually doing.

Those studies are done across several countries in Africa, and what we see is that the real challenges are on the demand side.  So, yes, the biggest barrier to the Internet is actually the cost of the device ‑‑ and there are all sorts of associated policy issues and things to be done ‑‑ and then once people are online, you get this minimal use of data, of broadband, because people can't afford it.

The affordability side is the demand side; the pricing is the supply side.  That goes to business models, regulatory models, and the lack of institutional capabilities or endowments to do the effective regulation you would need of these very imperfect markets.

But the real challenge is on the demand side.  All of the disaggregated gender data that you get presents this growing disparity between women and men ‑‑ which is not true across all parts of the continent at all ‑‑ and it really comes down to education.  The thing that is driving access, and whether you can afford it, is education.

That is from the modeling we can do, because these are fully representative demand side studies.  And it's because women are concentrated amongst those who are less educated and less employed; gender, if you control for it, is not necessarily a major factor, and, of course, there are multiple other factors.

A much greater factor than gender is rural location, but a number of intersectional factors really impact people's participation.  So a lot of the demand‑driven new technology frontier strategies are looking at some of the supply side issues, and at the high level skills you need to make sure you have data scientists, data engineers and that sort of thing.  It's this fundamental human development challenge, but also this fundamental ecosystem of economy and society, that really has to be addressed if we are going to be able to address these high level issues.

So there are also questions of absorption.  Even if we are thinking about trying to create public sector data sets that could be used by the public sector ‑‑ for planning and other purposes, so building some public value out of this ‑‑ I think that's an important point we need to come back to, because a lot of the AI models are driven by commercial value creation, which we desperately want on the continent, and the kind of innovation discourse which, of course, we also want on the continent.  But to get there, and to make sure that it is equitable, inclusive and just, requires that some of these other factors actually drive policy.

And basically the kind of absorptive capacity of your citizenry.  We see many countries now planning AI applications for Government services, but historically, if you have less than 20% of your population connected, then digital services become a vanity project; you need people connected so that these services can actually be used effectively.

I think that's why this enabling environment, these foundational requirements, are absolutely essential.  And we speak about it a lot in terms of the infrastructure side and the human development side of things, but the enabling legal environment ‑‑ the enabling human‑centred, as you called it, rights‑based environment ‑‑ is an absolutely essential foundation for building this kind of environment.

So I just briefly want to touch on something that might seem tangential to AI, but that we actually think is critical in creating these conditions: the African Union Data Policy Framework, which has really created this enabling environment.  The first half of the framework deals with these enabling conditions.  We don't call them pre‑conditions, because we don't have the luxury of first getting 50% of people online, or the majority of our countries with a digital ID and a data infrastructure in place.

These things have to happen, but they are very strongly acknowledged.  So there is a very strong component in the Data Policy Framework that creates this enabling environment, and it has really leveraged the African Continental Free Trade Area in getting Member States to understand that unless they have this digital underpinning for continental trade ‑‑ which is a single digital market for Africa ‑‑ they are not going to be the beneficiaries of a common market.

And I think that's allowed some leverage.  But it's also allowed us to return to some of the challenges we've had around a human rights framework.  It's a high level principle document, but there is a commitment to progressive realization of very ambitious and, I think, absolutely good objectives that we now need to get to.

There is an implementation plan so that countries can be supported.  That's been our biggest challenge; I think Sri Lanka was talking about the challenge of an implementation strategy.  And the important part is that we can come back to some of these foundational things we haven't got right.  There is lots of talk about a trusted environment, and a lot of assumptions drawn from best practices elsewhere in the world that assume institutional endowments, regulatory autonomy, competitive markets, skills and ability in these markets that simply aren't there.  I think the document importantly points out that cybersecurity and data protection are important for building trust.  These are necessary conditions, but they are not sufficient conditions.

So the questions around the legitimacy of the environment you are in ‑‑ if you are wanting to build a digital financial system that's going to engage with a common market ‑‑ all become really important.  The framework has an Action Plan for the alignment of various potentially conflicting legacy policies that might be there.  And, of course, the big acknowledgment ‑‑ particularly on issues of data governance, but with implications for AI ‑‑ is that we are setting up a lot of national plans, and that's all we can do at one level, but essentially these are globalized, and we would argue digital public goods, that we now need to govern through global governance frameworks.

A lot of the things we want to do, particularly the safeguarding against harms, run up against this: we are trying to build local companies, but 90% of the data that's extracted from Africa goes out of Africa ‑‑ it goes to big tech and big companies.

So these national strategies have to be located globally.  And the other side of that, from a global governance point of view ‑‑ because we can no longer do this with public interest regulation alone, and a lot of the focus is on the negative things of AI, so you have got to build this compliance regime, a harms protection compliance regime ‑‑ is the lack of attention, which we see in OECD work in this area, to the economic regulation you need of the underlying data economy: access to data, access to quality data, open data regimes, which are in the Data Policy Framework's governance component.  In a lot of discussions we have had this week there has been a lot of emphasis on safeguards, harms and privacy, but not a lot on what you would require to redress the uneven distribution that we see in opportunities ‑‑ not just harms ‑‑ both between countries of the world and within countries.

>> MODERATOR: Speaking of OECD, we just happen to have them here.

Thank you for opening a can of worms on multiple levels of global governance.  We won't be able to get to all of those, but that was interesting insight into the African experience so far.

So I want to turn to the next and last of our initial set of speakers, Galia Daor from the OECD.  The OECD, as Alison was saying, has done a fair bit of work in this space: you have produced a set of AI principles, and I know you are working on toolkits and guidance and things like that.  Maybe tell us more about what you see from the global level: what countries are asking for, what the state of readiness is, what you are seeing in general.

>> GALIA DAOR: Thanks very much.  I admit it's a bit challenging to speak after Alison on that front, but I will try to do justice to the OECD's work, while recognizing that there are challenges and that no one organisation ‑‑ and obviously no one country ‑‑ can address all of them.  So at the OECD we come to this, yes, with a set of assumptions, but I think our work doesn't replace other work that needs to be done.

So maybe just to get a bit into that work: the OECD started working on artificial intelligence in 2016, and then in 2019 we adopted the first intergovernmental standard on artificial intelligence, the OECD AI Principles.

And these are a set of five values‑based principles that apply to AI actors, and a set of five policy recommendations for Governments and policymakers.  The values‑based principles are about what makes AI trustworthy, and they go into some of what other speakers have mentioned on both the benefits and the risks of AI ‑‑ and I think both are important.

Using AI for sustainable development and for wellbeing, having AI be human‑centred, as well as principles such as transparency, security, and the importance of accountability.  Separately, the policy recommendations ‑‑ perhaps linked to what Alison said, without prejudging the situation of any specific country ‑‑ look at what a country would need to put into place in order to be able to achieve these things.

So R&D for AI, but also the digital infrastructure, including data and connectivity, the enabling policy environment, human capacity building, and, of course, international and multi‑stakeholder collaboration, which is a point that others have made already.

The principles have now been adopted by 46 countries, including Singapore, as was mentioned, and other countries like Egypt, and they served as the basis for the G20 AI Principles.  Our work now is focusing on how to support countries in implementing these principles ‑‑ how to translate principles into practice ‑‑ and we are looking at three types of actions, starting with the evidence base.

So one task, as the Secretariat, is to look at what countries are actually doing: the national AI strategies that countries around the world are adopting.  We have an online interactive platform, the OECD AI Policy Observatory, that covers more than 70 countries.

And from that we know 50 countries have adopted national AI strategies, which is an interesting data point.  The observatory also has other data on AI, including investment in AI in countries around the world, and research publications, to see which countries are more active in the space and what they are doing.  Also jobs and skills, and the movement of AI jobs and skills around the world.  So there is a wealth of information there.  We also have a network of experts which is multi‑disciplinary and international with very broad participation, and we are developing practical tools to support countries, and organisations I should say, in implementing the AI principles.

One last point I would mention in terms of what we are seeing with these principles: one thing is that they are impacting national and international AI frameworks around the world, with the definition of AI in the OECD principles, but also our classification framework for AI systems.  The other thing I will say is that we are also supporting countries, if they are interested, in developing or revising their national AI strategies to align with the AI principles.

This is ongoing work; for example, we are now doing it with Egypt.  But I will stop here.

>> MODERATOR: Thanks, and the time is racing by.  I can't believe we have 15 minutes left in the session.  I'll do my best to open up for questions.  And I wonder if you want to make a couple of remarks as well.  So think of your questions now.  Before I turn to those, just to mention a couple of things from the UNDP side, we are doing digital programming or supporting digital programming in about 125 countries, 40 to 50 of which are really looking at national digital transformation processes and some of those foundations that Allison is talking about, because we really see the importance of building an ecosystem.

This doesn't happen with fragmented solutions.  This happens when you build the kind of foundational ecosystem that is comprised of people, the regulatory side, the Government side, the business side, so on and so forth, as well as underlying connectivity and affordability.  And we have also started an additional process, which we are calling the AI readiness process, that can basically complement that.

It looks at, and this is what Dr. Romesh Ranawana was talking about where we have been working to support Sri Lanka, Rwanda and Colombia currently, how Government serves as an enabler and how society is set up in terms of being able to handle AI, in terms of capacity and foundational issues.  This is something we've been doing; it's been piloted under the auspices of an interagency process led by ITU and UNESCO.  And it is something that we hope will be one of the tools available to countries in the toolkit as they seek to address these issues, taking that kind of ecosystem approach.

If any of you here represent national interests and would be interested in that, let us know.  With that, I was pointing to you, because Ms. Huang is the head of the UN University institute in Macau.  If you want to take the floor, I don't want to put you on the spot, but if you have quick observations.

>> JINGBO HUANG:  Thank you, Robert.  I'm here to help.  My name is Jingbo Huang, Director of the UN University research institute in Macau.  We are a research organisation and our work is related to AI governance.  We conduct research, training and education from the angle of biases related to gender in algorithms, and we have done research in collaboration with UN organisations, for example, UNESCO, ITU, UN Women, and soon, hopefully, with UNDP.

So I'm here with an open mind to learn about this topic, and I actually heard a very nice overview and perspectives from Africa, from Asia, from the OECD.  So it's really great learning.

So the one key word that comes to my mind is collective intelligence, and it's not only the collective intelligence between people and people.  We talk about regulatory frameworks and business, and we have all of these entities among humans working together to make this infrastructure ready.  But we are also talking about machine intelligence, if we call it intelligence, and human intelligence working together.  How are we taking that forward?  Like what Robert said at the beginning, it is not only about the dark side.  How do we bring the bright side together?  So collective intelligence is the key word that just emerged in my mind.

So I have two questions, since I'm learning here.  The first question is related to the different tools and frameworks that the OECD developed, Singapore developed, maybe Africa also developed, and UNDP.  How do these tools work together?  For example, I just learned about UNDP's AI readiness assessment tool and I heard about your tools.  How do these tools work together?  Or maybe they don't.

The second question is to all of the panelists: what keeps you awake at night now?  Because it is important for me to learn what challenges you are facing right now in this implementation process, in this conceptualization process.

I have the overview, but I want to know the pain points.  Thank you.

>> MODERATOR: We are going to go to a couple of questions here so we will have time for response from panelists.

>> AUDIENCE: Thank you very much.  My name is Alka, and I work for KPMG in the responsible AI practice.  First of all, I was intrigued by the session title, so you did a good job with the session proposal.  I work for KPMG Netherlands coordinating our efforts globally, and what we see is a large difference in how countries act democratically.

And also being part of the ethics framework gives me a broad view of the entire world, as certain countries have no democratic process in place, while others do.  So with our advisory practice, it's really difficult to advise on ethics in a country that has no notion of what that is about, or to be a little bit proactive about that.

So that's really difficult.  So answering your question, are countries ready?  No, definitely not yet, because coming from the Netherlands, we see issues even in our own country, which is relatively quite democratic.  So, yes, we need to cooperate together, and thanks to the OECD guidelines and principles, which function well, we use them in our daily work on a daily basis.

And I am also happy to contribute to next iterations if possible.  Those are my observations from the outside.

>> MODERATOR: We will take one more question here.  We will go online quickly and then I think we will have a chance for panelists to come back and we will close.

>> AUDIENCE: Hi, I am from the Dominican Republic.  First of all, I would like to thank all of the organisers for doing this amazing session, and all of the people who have made points about AI development.  In this case, my concern is the way that AI is being promoted by the companies and by the international organisations, the idea that AI will transform the world and change everything, which is actually right.  But the thing is that the race to become AI proficient at all levels in most nations, especially the Global South, has been taken, I must say, in not necessarily the right direction, because we are focusing on implementing algorithms and AI‑infused solutions to do a myriad of things, especially in Government, but the main problem is that we don't have the core elements for making a transition to an AI‑based society just yet.

It starts with data.  We have problems with data quality, with data collection, with how we ensure that the data is correct so we can prevent biases.  And, of course, we don't have the infrastructure in place.  On top of that, most of the countries have inadequate data protection and privacy laws and regulations.

So given this situation, and knowing how things are moving and evolving, how do we propose or create a set of rules, a set of frameworks, that helps guide countries in the right direction?  When we talk about AI we are talking about large language models, we are talking about data.

So if the data is not right, how can we properly implement AI solutions that help our countries to develop?  This is the question we in the Global South are asking now.

>> MODERATOR: I'm going to turn to my colleague who is on my team, Alena Klatte, who has been moderating online.  Can you just pick one question for the panel?  I know you have got more than that, but pick one and ask, please.

>> ONLINE MODERATOR: Given the rapid advances in AI capabilities, how can Governments ensure that their technical infrastructure and workforce skills are agile enough to adapt to new AI technologies as they emerge?

>> MODERATOR: How do we make sure the workforce is agile enough, which is related to many of these.  I'm going to go back to our panelists, and I think this, unfortunately, will have to be our closing round as well.  Jingbo has given a good question I would like you to answer, which is what keeps you awake at night, but if you would like to speak to the questions about the tools, I will have a response on that one.

Also the issues of how we get the fundamentals right, how we get the right data, and how we work towards collective intelligence.  Dr. Romesh Ranawana, can I turn to you first for your brief responses, please.

>> ROMESH RANAWANA: Of course.  Essentially the problem is, as has been mentioned so many times, the foundational elements, and for us that is one of the biggest obstacles to taking our AI forward and providing its benefits to the people, especially in areas like Government efficiency and corruption.  Sri Lanka is fortunate that we have good connectivity, and 90% of the population does have connectivity available to them, but the problem of data, as has been highlighted so many times, is probably our biggest problem.

Data is siloed and it's still available in paper format in a lot of situations, so how to first digitize it, standardize it and then make it available to those who need it in a fair and responsible manner is probably our biggest challenge.

That's not only a technical challenge, but also an operational challenge.  It's changing mindsets, awareness, trust in these systems, and that's something that we are really struggling with on how to take forward.

>> MODERATOR: Thanks.  Is that what keeps you awake at night?

>> ROMESH RANAWANA: Absolutely.  That is definitely one of the big ones.  We have so many people doing AI projects, but they are running AI projects on data they download from the Internet.  We don't have projects running on Sri Lankan data because we don't have those data sets available, so a lot of the effort is being wasted because we don't have a consolidated set of data sets to address national problems.

>> MODERATOR: Alain Ndayishimiye, let's go to you next.

>> ALAIN NDAYISHIMIYE: So the ethical development and deployment of AI, and I'm referring to ethical considerations when developing these technologies, is what often keeps me up at night: concerns such as biases in AI models, potential privacy breaches, and broader societal impacts such as job displacement and misuse of AI.  So ensuring that AI is used responsibly in all societies is paramount, and it's a challenge that requires vigilance.

Please allow me to talk on how these instruments need to work together.  So let me speak on harmonization, especially African context.  It provides a unified front when dealing with multinationals that are part of the economic transformation.

Such harmonization efforts foster economic integration, enabling cross‑border trade and investment and reducing regulatory complexity.  Harmonization facilitates shared digital infrastructure and connectivity across regions.  Furthermore, a harmonized approach ensures consistent consumer protection and robust data privacy standards, and boosts African competitiveness in the digital realm.

In essence, coordinated policy is essential for Africa to leverage the benefits of the digital economy and position itself as a significant player within this space.  Thank you once again for this opportunity.  Over to you.

>> MODERATOR: Denise, let's turn to you.

>> DENISE WONG:  Thank you.  On the global level, I worry about fragmentation.  We have been in this space for a long time now in different areas, but global laws are fragmented, and that raises compliance costs for everyone.  We need to have that conversation early.  At a more domestic level, I worry about leaving vulnerable groups behind.  Even in a society that is highly connected, like Singapore, there is the fear that technology will widen divides and create harms we did not anticipate for the groups of people we should be protecting the most.

The third thing I worry about is cultural sensitivities and ethnic sensitivities.  Especially with black box technology, it's hard to predict whether the technology is going to fragment and divide or unify and cohere.  So part of what we do is try to unpack what it means from a culturally specific lens.  That is really about AI for the public good.

>> MODERATOR: Thanks, Denise.  I will turn to Alison Gilward.

>> ALISON GILWARD: What keeps me awake at night is the implication of amplified inequality unless we address some of the underpinning structural inequalities leading to it.  I think that's very likely if we simply take these blueprints from countries with completely different political economies and conditions and implement them in these societies.

And just in that regard, I have to say that although having democratic frameworks for developing AI policies is actually a challenge for many of us, I think we have to appreciate that the ethical challenges we are facing are with some of the biggest tech companies, which come from the biggest democracies in the world.

So I think the ethical issues should be addressed globally and can be addressed globally.  And finally, on the point being made, we can't unbias these big data sets when, as Sri Lanka mentioned, countries are not digitized and people are not online.  We can't unbias the invisibility, the underrepresentation and the discrimination we are seeing in algorithms.

>> GALIA DAOR: Quickly, just to say I can relate to a lot of the things that Denise said about fragmentation, and this is a real concern.  I think what keeps me up is also that we will miss out on the opportunities of AI, which I think ultimately has the potential to make everything better for everyone if we do it right.  I think it's too big to miss.  That means it's something we can't leave to just companies, and we can't leave to a certain set of countries, which leads me to this: because AI itself is global, because it has no borders, it has to be a collaborative effort.

And that needs to be genuinely collaborative.  We have been in this process for a while, but this kind of conversation is important.

>> MODERATOR: Thanks to all of our panelists.  We are over time, and I'm sure we are going ‑‑ yes, I'm getting the nod.  I would say a couple of things to sum up what I have heard, and so add a little bit of my own insomnia to this.

I think we have heard that there are certainly challenges here, and the challenges that have been named are things like fragmentation and the foundations, and it's so important to get the foundations right.