IGF 2023 – Day 1 – WS #317 African AI: Digital Public Goods for Inclusive Development – RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> MODERATOR:  Good evening, everyone.  I want to check also if the folks online have been able to join.  Bobina, Dr. Meena, are you online?

>> This is Meena, yes, I am online, thank you.

   >> MODERATOR:  Bobina?

   >> BOBINA ZULFA:  I'm here as well.

   >> MODERATOR:  I will start with introductions.  My name is Mark Arora, and today we're here to talk about AI and its use, particularly for sustainable development as far as digital public goods are concerned.  It's some work that we have been doing in Africa, and we will do a little bit of a deep dive, looking at some of the things that we have done as a program within GIZ, but also explore some of the risks that are coming out in the discussions that we have.

With us today we have Yumas from the German Federal Ministry for Economic Cooperation and Development; Susan Waweru, from the Office of the Data Protection Commissioner; and Meena, who brings experience having worked with government, academia, and the private sector, and is currently a director at Move Beyond in South Africa.

And Bobina, at Pollicy, a collective of data scientists, creatives, and academics working at the intersection of data, design, and technology to see how government can improve on service delivery.

We'll start with a keynote from Yumas, who will give us a high-level overview of what they are doing before we dive into our conversation.  Over to you.  Thank you.

>> Dear Mark, distinguished ladies and colleagues, IGF friends, it's a great honor and pleasure, on behalf of the German GIZ, to share opening remarks today highlighting the potential of AI, especially African AI, for inclusive development.  What is the potential of AI for inclusive development?  I think we already heard a lot on day zero and today.  In my view, it can be instrumental in achieving the SDGs: it can facilitate medical service delivery, increase efficiency in agriculture, and improve food security.  It can help address the challenges of our time.

Yet only a fraction of the population worldwide has access to AI applications that are tailored to their needs, and we want to change this.  This is why we are here, all the more so since the negative effects of AI disproportionately affect developing countries, especially in the Global South.

However, we also need to be aware of the risks related to AI.  These risks range from the high greenhouse gas emissions of large language models to digital disinformation and risks to civil and democratic rights.  The international community is becoming increasingly aware of these issues, and we see it here at the IGF.  Accordingly, in my view, the promotion of ethical, fair, and trustworthy AI, as well as the regulation of risks, is beginning to be addressed at the global level, as we heard this morning in the G7 context of the Hiroshima AI Process.

AI has been addressed in the UN, G7, and G20, and international organizations such as UNESCO and the OECD have published principles and clear recommendations that aim to protect human rights with AI being on the rise worldwide, and the EU is at the forefront of regulating AI with the EU AI Act.

UN Secretary-General Guterres is convening a high-level body on AI, and I think the contributions from the Global North and Global South are essential so that we can make sure that AI benefits all.  When talking about AI, we mostly hear about models and applications developed in Silicon Valley in the U.S. or in Europe, but there is so much more.  We discussed large language models that represent and benefit only a fraction of the world population.  That is why I'm especially excited to hear about AI use cases today that were developed and deployed in African countries, that truly represent African AI, and that were designed specifically to benefit the public in African countries.

At the German Federal Ministry for Economic Cooperation and Development, we enhance the economic participation of all people in our partner countries, and thus we are very eager to support our global partners in realizing the potential of AI through local innovation in the countries we are talking about in this session.

We are very committed to the idea that digital public goods are an important enabler.  To be more concrete, access to open African language datasets is supporting local governments and the private sector in building AI-empowered services for citizens.  For instance, our initiative FAIR Forward contributes to the development of open AI datasets in different languages, languages spoken by 150 million people collectively.  Some of the examples that we'll get to know in this session are built on these language datasets, and I'm looking forward to this very much.

To give you an outlook, we see open access to AI training data and research, as well as Open Source AI models, as the foundation for local innovation.  Therefore, relevant data, AI models, and methods should be shared openly as digital public goods.  To realize the potential of AI for inclusive and sustainable development, we need to make sure that AI systems are treated as digital public goods: open, transparent, and inclusive at the same time.  In this way, a global exchange on AI innovations can emerge.  This IGF, with AI being mentioned in so many sessions, is one starting point for that global exchange.  Now I'm looking forward to the use cases very much.  Thank you so much for being part of this wonderful session.

   >> MODERATOR:  Thank you so much.  So, before we dive in, and building upon that, we are taking a critical approach to see how we are beginning to define what AI means to us on the African continent.  Today we specifically have this idea that we can actually build solutions and systems, and not just look at it from a policy and framework perspective, so to speak.  I will start with Susan because she's in the room and in the hot seat.  I will start with the frameworks, and what the Office of the Data Protection Commissioner is doing in Kenya as far as thinking about AI, and then also explore, if you have any ideas in context, what is happening in the rest of the continent.

   >> SUSAN WAWERU:  Thank you for your question, and good evening to all.  My name is Susan Waweru, from the Office of the Data Protection Commissioner in Kenya.  As we may be aware, in the AI context privacy is of fundamental importance to ensure that AI works for the benefit of the people and not to their harm.

Discussing the frameworks: in Kenya, at the top of the frameworks is the Constitution of Kenya.  Within that Constitution, we have several provisions that guide AI in Kenya now.  One of them is the values and principles of governance.  From a government perspective, we are bound to be transparent, to be accountable, to be ethical in everything that we do, and this includes the deployment of AI for public service, in service delivery in all forms.  Secondly, we have the values and principles of public service.  These are the values and principles that govern us as public servants in how we carry out our public duties.  So that is what, at a constitutional level, will guide us in the deployment of AI in service delivery.

Of most importance is the Bill of Rights and fundamental freedoms, which is also in our Constitution, and within which we have the right to privacy.  The right to privacy is what anchors data protection law, data protection policy, and data protection frameworks in Kenya.  So having the top norm in Kenya provide the guardrails by which AI, privacy, and data protection will be guided gives a good background and a firm, fundamental foundation from which any other strategy or policy in tech or AI can spring.  It forms the overall guardrail for everything that should be done.  It may not specifically touch on AI, but the values and principles, the Bill of Rights and fundamental freedoms, give you the constitutional guardrail of what you can and cannot do in the AI space.  It is from that, Mark, that all other frameworks, including the AI strategy and the digital master plan, flow.

   >> MODERATOR:  Thank you, Susan.  Building upon that, we have these principles anchored in the Constitution, and then we have digital public goods.  Digital public goods are ways in which government offers shared services.  If you have a registry, for example a civil registry, how can it be leveraged across the government, rather than the social security and tax funds each maintaining their own registers and duplicating that effort?

And so, from that knowledge, we are also beginning to think about adoption, because we have an opportunity, and we have seen how multiple registers affect how government delivers services.  We see the need for adopting this in government because it can be cheaper, cheaper in terms of how government procures these services.  I'm giving this background because I do not know if Darlington is online.  Is he there?

   >> DARLINGTON AKOGO:  Yes, I'm here.

   >> MODERATOR:  All right.  So you can introduce yourself, and then move on to the question and share with us what you are doing in West Africa and then you can also talk a little bit about the lessons you are learning from the work that you are doing.  Are you doing ‑‑ are you using any digital public goods approaches in your work?  Over to you, Darlington, thank you.

   >> DARLINGTON AKOGO:  Thank you for having me.  My name is Darlington Akogo.  I'm going to move in very close, so apologies for any sounds; I'm in a moving car.  What we do at AI Labs is build artificial intelligence solutions for health care, and so, connected to the question, we do have one (sirens in the background), one that is focused on medical image interpretation, and we have worked with the regulator in Ghana to certify and approve this AI system.  We've rolled it out; we have users from all around the world.  In Ghana it is used in some of the top facilities in the capital city, but also in some small towns, and we are expanding to even the rural areas.

The benefit of this AI system to the communities, for example, is that by default, if you go take a medical image, an X-ray for example, it takes several weeks before you get the results because, you know, there are very few radiologists.  In Ghana, for example, there are about 40, and in an African country like Liberia, fewer than 5 radiologists.  What this AI system does is help speed up the process by using AI to interpret that medical image.  We have it on a platform, and the AI system is able to generate results in just a few seconds; in about 5 or 10 seconds you get the results.  What used to take weeks can now take just a few seconds.  That makes all the difference in health care, because you want to know exactly what is wrong with people quickly enough that you can respond to it.

The lessons we've learned are quite a lot.  One key one is that within the space of AI, there is a huge difference between doing AI for research, or some sort of demo or proof of concept, and building AI that is meant to work with real humans.  There is a whole lot of difference.  The key thing is the rigor that you need, and this is super applicable in health care.  Getting the AI system through the certification pipeline is a very, very major step, and what it takes to get certified taught us a lot.  One thing we have learned is to double down on robust evaluation.  The other bit is that you don't want to build the AI system and just hand it over to the users; let them decide what kind of features they want and how they want the AI system to fit into their workflow.  That is very important.  Yeah.

   >> MODERATOR:  Thank you so much, Darlington.  Thank you.  Moving on from what you've just said, I will turn to you, Meena.  Darlington just said it's very different when you're doing a research project versus actually implementing a solution; there are a lot of risks.  I think one of the questions from a digital public goods perspective, or DPI perspective, Digital Public Infrastructure perspective, is: what are the harms, what are the risks, how do we expose harms, and how do we protect citizens so that no harms ultimately reach them?  I invite you to reflect on that.  Thank you.

   >> MEENA LYSKO:  Thank you very much, Mark.  Firstly, thank you for the platform and for giving me the opportunity to e-visit Japan; I wish I could have been there in person, and I apologize that I could not.  Thank you for this opportunity to e-visit the captivating and unique city of Kyoto, the birth city of Nintendo and host to a number of UNESCO sites.  I'm very envious of you.

I had intended to share some of the work we have done or are doing, so I will come back to it.  In terms of your question, looking at AI ethics and governance standards, first in South Africa: we have an ever-evolving digital landscape, and we have the Protection of Personal Information Act, or POPI Act, and also the Cybercrimes Act, which together form a significant legal framework shaping the realm of data privacy, security, and digital crime prevention.

So, the POPI Act, which is endorsed in South Africa, prioritizes the safeguarding of individuals' personal information and encourages responsible data handling by organizations.  The POPI Act's emphasis on individual privacy is reshaping the way organizations collect and manage personal data, prompting them to adopt stringent data protection measures.  Perhaps I can give an example.  I frequently get these sort of annoying calls, and I ask, how did you get my number?  And then I go on to say, you do know I have not shared my number with you willingly, so this is against the POPI Act.  Very often the phone goes down immediately.  So people in South Africa are very aware of the POPI Act, and feel safeguarded through it.  However, challenges do emerge in balancing innovation and compliance, especially in the age of digital transformation.

In parallel to the POPI Act, we have the Cybercrimes Act, which addresses the escalating threat of cybercrime by providing a legal structure to tackle various digital offenses and defend against these threats.  More importantly, it becomes imperative for businesses, individuals, and law enforcement agencies to actually collaborate in the implementation of these acts.  Thank you, Mark.

   >> MODERATOR:  Thank you, Meena.  I turn to you, Bobina.  We've talked about digital public goods, we've talked about how we protect citizens, and Susan gave a very good introduction on the frameworks we have concerning data rights.  SDG 17 talks about partnerships, including with Civil Society.  In this particular field of digital public goods, do you have any collaborations with other stakeholders, whether in the private or public sector?  And, this is a loaded question, do you see alignment, from what you can see in the landscape, with sustainable development as far as AI is concerned right now?  Over to you, Bobina.

   >> BOBINA ZULFA:  Okay.  Thank you, Mark.  Please allow me to go on without video because my Internet is unstable.  Can you hear me now?

   >> MODERATOR:  Yes.  Yes.  We can hear you.

   >> BOBINA ZULFA:  Yes?

   >> MODERATOR:  Yes, go for it.

   >> BOBINA ZULFA:  All right.

   >> MODERATOR:  We lost you now.  We can't hear you.

   >> BOBINA ZULFA:  Okay.  Can you hear me now?

   >> MODERATOR:  Yes.  Go for it.

   >> BOBINA ZULFA:  Okay.  All right.  I'll quickly get to the question.  Good to hear from the panelists (audio breaking up).

I would like to say, from the get-go of the conversation, that I have been thinking about the idea of AI technologies on the continent as digital public goods.  Right now that is very much an ideal we are, in a sense, working towards.  It is not really the reality at the moment, because what you would describe as a digital public good, something more inclusive in a sense, is not really what's happening at the moment.

So, trying to relate that to the SDGs and transforming the world as a whole, as these technologies are being adopted here on the continent, and how that's happening along the lines of intersecting partners working towards the realization of the different SDGs, I do see a number of examples, for example here in Uganda, across academia and private institutions.  I'll give an example.  (audio breaking up).

There are a lot of partnerships with development partners.  The Lacuna Fund for text and speech datasets, for example, I think through Google.  What I see from the academic space over to the private sector is that Civil Society takes the more advocacy-oriented roles around the issues and ethical considerations in the adoption of these technologies.

I think it is something that is springing up.  It's not happening on a grand scale, but it's something that we see coming up.  I guess we can hope it will only grow; especially with the work around advocacy, I see more of it coming in the next months and years.

   >> MODERATOR:  Thank you so much.  Thank you so much, Bobina.  I come back to you, Susan.  I'll act like a journalist and say, I'm sure people in the room are wondering, (Laughing), in Kenya we say that Kenyans are asking, even if it's one person: how do we move from these frameworks to actual implementation, and what are some of the things you're doing in this regard so that they don't remain on paper?

   >> SUSAN WAWERU:  Mark, that's a good question.  It's one I'm passionate about.  I'm known as the get-it-done girl; my reputation is for moving things from paper to actual implementation and execution.  "Death in the drawer" is a concept we learn in policy and business administration: you can have the best policies, the best strategies, frameworks, and legislation, all documented, and that is one thing you see in Kenya.  We have some of the best documentation, even borrowed by the West, but implementation becomes one of the biggest challenges, not only in AI but in the tech space.  How you get it done, from my perspective: one, leadership matters.  If you don't have leadership commitment, getting what is on paper out to be physically seen will be a challenge.  So what we do, as the technocrats in government, is seek to influence leadership.  We have some parliamentarians here with us, and we seek to educate them on the importance of what has been documented, because if the policy is done at the strategy level and just benched, then it becomes a challenge.  As technocrats, influencing the leadership on the importance of the documents that have been prepared is key.

Once you get the leadership buy-in, it trickles down to user and citizen buy-in, because of those using the frameworks.  For example, the Data Protection Act is an act passed by parliament to be implemented; it mainly affects data controllers and data processors, which are largely entities.  So if we don't get entities on board through awareness creation and advocacy, then that document does not get done.  One way to get user buy-in, and we'll talk about this later, is to have a free flow of information, to be transparent in what you do, and to be very simple and clear about what the compliance journey is for data protection and privacy.

So: leadership matters, citizen buy-in matters, and another thing is collaboration.  We build partnerships with organizations and entities who have executed what is in our documentation.  For example, we collaborate with other bodies and government agencies who have implemented their AI applications successfully in the Kenyan government, and learn how to do that.

Currently in Kenya, I can say that this get-it-done attitude is in high gear.  In the tech space, the government has a digital transformation agenda spearheaded by the Presidency, with the President himself overseeing and calling out most projects.  Currently, digital transformation is at the infrastructure development stage, onboarding all government services onto one platform, which we call e-Citizen, and he gives specific timelines on when he wants all of that done and checks them himself.  That's the level at which the Government of Kenya is interested in digitalization: moving toward an intelligent government where we don't just react to public sector needs but preempt them and provide services even before they are needed.  Those are the three ways, Mark, I would say, that we get documentation to the ground.

   >> MODERATOR:  Thanks, Susan.  And of course today we'll wait to see what you've been doing as well with AI itself.  I hope we can get a chance to see that if we don't run out of time.

I come to you, Dr. Meena, and I ask about training and capacity building.  What does that mean to you for different stakeholders, whether they're in policy or among the graduates that we have?  We know a lot of them have to go overseas, outside the continent, to get their training, to be able to come back and be part of an ecosystem.  So what does that look like for you right now, especially now with the risks and potential harms of AI being apparent?  Thank you.

   >> MEENA LYSKO:  Thank you, Mark.  I'm really looking forward to sharing some of the programs we're busy with currently, but I'll hold back and address this particular key question.

So, the emphasis should be on including ethics throughout the AI system's life cycle, from conception all the way through production.  It's a cycle, which means it should continue as a sustained initiative throughout the working stages of a particular system.

Within some of the programs, for example the one I'm currently on, we've incorporated a module on AI ethics and bias.  Now, albeit we are looking at very hands-on development, we also look at the soft skills, if I can call them that.  We need our participants, our trainees, to understand that adopting ethics in AI is more than just knowing ethical frameworks and the AI systems life cycle.  It requires awareness of ethics from the perspective of knowledge, skills, and attitude: knowledge of law, policy, standards, principles, and practices.  We also need to integrate with professional bodies and activists; we have a number within South Africa itself.  For example, we have an overarching AI representative body within South Africa, and we have, I think it's called Dephub, in South Africa, which focuses on AI policies in data and recommendations.  Then we must also look at the application of ethical competence: we need an ethically tuned organizational environment, and in tune with that, we have to look at ethical judgment.

We've been emphasizing that our participants in our training program are fully aware of these aspects: their projects and developments need to be guided by ethical principles and philosophies, and they need to be imbued with that.  In the projects they're on, they need to apply ethics throughout the design and development process.  To ensure that, while we are training people in data and AI science, for example, we've also made a point of inviting industry experts into our sessions to engage with the participants, so that there is an encouragement of healthy knowledge sharing.  And in the opposite direction, youthful perspectives are shared that promote morally sound solutions, so that they are not contaminated by what is going on in the market purely for profit, which is where we've seen it happen.  This has made our training programs a very successful sharing conduit.  Thanks, Mark.

   >> MODERATOR:  Thanks, Meena.  I come to you, Darlington, with a question that builds a little on what Dr. Meena said about working with industry experts.  You have a bucket with the SDGs, AI, and a problem that needs to be solved.  In your case, you talked about radiology and being able to read and interpret what those images mean.  Then we have the complexities of running a business.  So talk to us a little bit about strategies, if any exist, to align all this in the work that you're doing.  If you're still there?  Darlington is not there.

Okay.  Then I move to you, Bobina.  Are you online?

   >> BOBINA ZULFA:  Yes, I am.

   >> MODERATOR:  All right.  I will ask a question that is related to ethical deployment of AI.  What are you doing as Civil Society in this regard to make sure, for example, that people will not be left behind by digitalization and digitalization topics, that children will learn at an earlier age about the risks of these technologies and even as they begin to use them?  Over to you, Bobina.

   >> BOBINA ZULFA:  Thank you, Mark.  I hope you can hear me.  That's a really profound question because it goes to what we're trying to say in this conversation, and to the ethical concerns Dr. Meena was going over and how they're being navigated.  I'll say, for instance, with the work at Pollicy, a lot of what we do is less technical in a sense; rather than developing, we look at the landscape and at how the technology is being adopted in different communities.  So our role, in a sense, is largely, one, knowledge production, and then advocacy.  On the knowledge side, a lot of what the teams are looking at critically right now is addressing the ethical questions and bringing communities to understand the workings of these technologies, because we think it's very early.  We always compare this to the health conversation where, you know, there is a disease outbreak and then government finds ways to communicate this to the general population even when there is all of this scientific information.  We've been trying to think of how we come up with language so that the population, the ordinary person within the country or across the continent, would be able to understand these technologies, how to incorporate them in their day-to-day lives, and what this could mean for them.

And just going over the knowledge production and advocacy, I think we're looking very critically at the issues of automation and the invisible workforce behind these technologies.  (audio breaking up).

Getting government to understand that this work should be regulated in a sense, so that the people doing this work are protected, is one of the things we're looking at.  The other thing is the harmonization of collective and individual rights.  In the frameworks that are being developed, we're getting a sort of blueprint from the GDPR, and a lot of frameworks are driven toward individual rights, which I think is increasingly problematic.  There is a need for us to move towards a place where we harmonize both collective and individual rights, and that would bring in a participatory ‑‑ (lost audio).

   >> MODERATOR:  Hi, Bobina, can you hear me?

   >> BOBINA ZULFA:  Yes, I can hear you.  I don't know if you can hear me.

   >> MODERATOR:  I lost you temporarily.  Just repeat the last sentence.

   >> BOBINA ZULFA:  Uga.

(laughter).

   >> MODERATOR:  (Audible gasp.)  We're here with you, please.  I think we lost Bobina.  Darlington, I see you're settled enough.  Okay.  Perfect.  Cool.  I have a question for you which I think you did not hear, but first I will take two questions from the audience; you can prepare them if anyone has a question, probably one from the room and one online.  My question to you, Darlington, before we lost you in cyberspace, was this: we have the SDGs, the Sustainable Development Goals; we have you as a business; and then we have AI.  These interests are not always aligned.  Sometimes it's the business which has to take precedence.  Sometimes, as in the problem you gave us, you're trying to solve a really impactful problem, and other times you just have to comply with certain regulations.  What are some of the strategies that you have to align all of this, in solving the problems that you want to solve and also aligning with the SDGs?

   >> DARLINGTON AKOGO:  Yeah, I mean, that's a very, very important question.  The initial strategy is to make sure that your business is built around solving a problem in itself, a problem connected to the SDGs.  Then, fundamentally, there is no conflict to begin with.  If you are profiting off of something that is damaging the environment or destroying the health of people, then alignment becomes a really, really big problem.  But if fundamentally you have a social enterprise or a business built around solving a problem, like in our case, where the whole business is built around providing health care and making it accessible to everyone, or, for Meena and everyone, making sure of software security, then the interests align.

Other than that, there are definitely instances where, if you took one route, you would make a lot of profit, but the impact might not be so much; then there is another route where that might not be the case.  I can give you a real-world example.  We work on drug discovery with AI.  There is a scenario we looked at where you take certain conditions and work on new drugs for them, and they will be very expensive.  You know, there are certain medications where a few tablets cost tens of thousands of dollars, hundreds of thousands, even millions.  You could sell them to a few people and make a lot of money, but then the question is: are you actually building any equitable access to health care by doing that?  So when it comes to such a scenario, you have to have guiding principles.  What you can do is have an internal constitution that says, this is our moral backbone, we need to live by it, and we are obliged to make decisions off of it.  Even the CEO, by not following that internal code of conduct in the constitution, could be voted out.  And depending on how serious you are about this, you could solidify this within the company's constitution, and then it will be fully followed.  Some people even go as far as the way you register the business: there is a category in some countries where you can register as a social enterprise, or a for-profit but public good, I think the term is, and when you do that, it means that your primary obligation is not to shareholders and investors but to the public.  Those are legal ways of making it binding, to make sure that you are actually focused on addressing the SDGs and not just maximizing profits.

   >> MODERATOR:  Thank you.  Thank you.  I guess what I'm hearing from you as well is the ability to consider self-regulation, especially in this space, as you innovate and solve these problems, even where there might be a lacuna in the law or the frameworks that exist.

I don't know if there are any questions coming in from the room to begin with or from online?

   >> AUDIENCE MEMBER:  Hi, everyone.  I'm Leah from the Digital Public Goods Alliance, and I think we should also quickly talk about infrastructure.  I mean, apparently we had some troubles here, which is a good prompt to talk about data infrastructure and access to compute.  Obviously, you need both of them in order to democratize the use of, the development of, and also the benefits of AI in an African or Global South context.  So how do you deal with these challenges in your project, in your country context?

Thank you.

   >> MODERATOR:  I will open up that question for anyone to take it up.  Yeah?

   >> SUSAN WAWERU:  Thank you for the question.  Speaking on Kenya, one of the things that I mentioned under the digital transformation agenda is that the first building block is infrastructure.  I know for the next five years, the government is running the last‑mile connectivity project, which seeks to bring fiber connectivity to every market, to every bus station, to every public entity, which will give free WiFi, and that gives people access to digital public goods.  So that's one of the things that was adopted as one of the first things to be done, because you can't develop digital public goods without accessibility.  Accessibility is really important and, I think, one of the bedrocks to make AI successful.  That's what I know is happening from the Kenyan experience.

   >> MODERATOR:  Anyone else?  Dr. Meena, would you like to come in.

   >> MEENA LYSKO:  Sure, Mark.  I was looking for the Zoom raise‑hand button, but thank you for asking me as well.  I think from a South African perspective, let's see if I can turn the camera on.  From South Africa, you may have read in the news or are aware that we have a thing called load shedding.  It's a term coined within South Africa for a structured approach to manage electricity consumption in the country when there are constraints on the national grid.

So this brings about, of course, in addition to the challenges we already have, I guess globally, with infrastructure to ensure connectivity, the question of redundancy.  But with redundancy, I guess we also need to ensure that it is affordable, and it must be affordable at every latitude and longitude, every decimal of latitude and longitude, so it reaches every sphere of life within the world.

And, well, from the South African context, in running our bootcamp, for example, a program that we are currently running, this has been a challenge: running a hybrid program where people cannot stay online for the entire duration of the training because of the matter of connectivity.  Fortunately, we record sessions so they can follow up post‑session, so we have solutions around it.

So the one aspect is infrastructure, but it is also about redundancy.  And then there is the question of the reliance we have in education and training, and also now in our kind of 4IR, where we are relying heavily on infrastructure.  The question becomes: what happens if some day, for whatever reason, and we've seen this through disasters in various parts of the world, infrastructure is affected?  How do we then manage to come back online as expediently as possible?  Because in this 4IR, in this AI‑ and data‑evolved world, our reliance is fully on infrastructure to keep global economies going, so the risk is quite high, and I think that is a call, a call for action, to look into this.

Going to the opposite extreme is the question of the impact infrastructure has on the environment.  The energy consumption, for example, is massive within this context.  So these are the sorts of things that I think we have to be very mindful of and look into responsibly.  We talk about responsibility, so we have got to be responsible about that as well.  Thank you, Mark.

   >> MODERATOR:  Thank you.  Thank you.  I don't know if you can find one question online to read out?

   >> MODERATOR:  Thank you, Mark.  A few questions online.  The first one is: do we have an AI policy in Kenya?  If yes, is there legislation to operationalize the policy?  Question number two: how should human workers perceive and interact with robots working alongside them?  Are these robots supposed to be treated as tools or as colleagues by the humans working with them?  Those are the questions, yeah.

   >> MODERATOR:  Susan, I'll direct the one for Kenya to you.

   >> SUSAN WAWERU:  Kenya has what is called the Digital Master Plan, and in it are some aspects about AI.  Recently, about two weeks ago, the government, led by the President, instructed for AI legislation to be drafted, so that's ongoing work.  Further, there is a central working group that is looking at all tech‑related legislation, policy, and strategy, and one of the things that will be considered is putting an AI policy in place.  So the answer is yes, within the Digital Master Plan we have aspects of AI policy, but there is an effort, I think within this year, to have legislation, policies, and strategies that will guide that.

   >> MODERATOR:  Thank you.  Then the second question is existential, right.  Should humans interact with robots?  We already do.  Right.  (Laughing).  We already do, to some extent.

If there are more questions online and in the room, we will take them and continue to answer them, but I want to move on and ask our panelists and everyone who has shared here to take a minute to wrap up and leave the room with a sense of what is happening.  We wanted you to appreciate that we are not just talking in Africa; we are doing something.  I don't know if I should begin with you, Yomas?  If it's okay?  Just a minute.

>> YOMAS:  Thank you so much.  It's a real privilege to listen to these different examples and use cases, because this was really inspiring for me.  Hearing more about creating African AI and how it works, also the challenges and how you deal with them, was super helpful, and I would like to stay in close touch with you to continue this conversation.  I think it's just a starting point.  I'm happy to see the success stories already.  I can just congratulate the panel, and you, for the amazing moderation, and the different panelists who participated, also remotely.  Let's please continue this conversation.  For now, I can just say I absorbed so much, because I hadn't heard about much of this in advance, and this is why we're also here, to get in touch at the IGF.  I think it's just the beginning.

   >> MODERATOR:  Thank you.  As you put ‑‑ are we going to put it up?  Okay.  Good.  As that is prepared, then, I'll go online and ask Bobina to give her closing remarks.

   >> BOBINA ZULFA:  Sure.  Thank you very much, again, for having me be part of this conversation.  I think, like someone mentioned earlier, there have been a lot of conversations happening in Kyoto around these technologies as a whole.  So I think to be talking about the direction of how we get to realize this being a digital public good and, indeed, a benefit to everyone, is a step forward.  We're coming from the initial conversations around digitalization, and now, as the public as a whole engages with data more and more, how do we let the conversation evolve as the technology is evolving as well?  So for me, I'm very excited to hear about some of the things happening here and there across the continent, very excited to see more of that, and very happy to keep in touch with you all to let this conversation keep going.  Thank you.

   >> MODERATOR:  Thank you so much.  I move to you, Dr. Meena.

   >> MEENA LYSKO:  Thank you, Mark.  In the context of the Sustainable Development Goals, our training has aimed to support quality education, zero hunger, and responsible consumption and production.  I take with me today what you have said, and I think it could be a nice global call: self‑regulate as you innovate.

Our post‑training feedback from previous programs, and feedback from participants in our current program, is giving a glimpse of how a pay‑it‑forward is being achieved.  Then I want to sum up by saying that proprietary AI systems are generally used to make money, enable security, empower technology, and simplify tasks that would otherwise be mundane and rudimentary.  But if AI ecosystems could be designed to take advantage of openly available software systems, publicly accessible datasets, generally openly available AI models and standards, and open content, it would enable digital public goods to make generally free works available for Africa and hence contribute to sustainable, continental, and international digital development.  Thank you, Mark.

   >> MODERATOR:  Thank you.  Then I move to you, Darlington.

   >> DARLINGTON AKOGO:  Yeah, so I think we are in one of the best moments in human history, where we are building technology that finally digitizes what makes us special as a species.  The potential is beyond anything that we can think.  We are seven years away from the deadline of the SDGs, and there is a lot of realization that we're not meeting the targets, and I strongly think that if we can double down on using AI ambitiously, in Africa, Asia, anywhere in the world, if we can seriously double down and invest properly, we can address everything about the SDGs.  There is no limit to how far AI can go, especially in the context of foundation models now and how general they are.  I would say let's double down on it, but let's do it in a very responsible and ethical way, so that as we are solving the SDGs, we don't create a new batch of problems for the next targets we would have to set.  Let's leverage AI and solve the SDGs.

   >> MODERATOR:  Thank you so much.  Before Susan closes for us, they have been working on an interesting project.  Maybe she can talk to us about it and then give closing remarks.  It's being projected before you.  It's a tool that can help citizens learn about their rights, and it can communicate in Sheng, a mixture of Swahili and English.

   >> SUSAN WAWERU:  Thank you.  Just to quickly run through: one of the things we're developing is an AI chatbot to provide the services that the ODPC should provide.  This chatbot uses natural language processing, trained on large datasets of the questions that the citizenry may have for the ODPC.  It speaks both English and Swahili, the two official languages in Kenya, so you may just ask it: what is a data controller?  It is an awareness tool, a tool to enhance compliance, and a tool to bring services closer to people.  It also overcomes challenges such as the 8:00‑to‑5:00 working day: as a data controller or data processor seeking to register or make a complaint, you're not limited to working hours; it can be done at any time.  It gives information, it gives processes, and it's all free of charge, which makes it accessible.
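To illustrate the kind of bilingual question answering described above, here is a minimal, hypothetical sketch in Python.  Everything in it is invented for illustration: the FAQ entries, the Swahili word list, the fallback messages, and the keyword matching are all assumptions, and the ODPC's actual chatbot is described as using NLP models trained on large datasets, which a toy keyword matcher does not capture.

```python
import string

# Hypothetical FAQ entries keyed by language and a topic keyword.
# (Illustrative content only; not the ODPC's real answers.)
FAQ = {
    "en": {
        "data controller": "A data controller determines the purpose and "
                           "means of processing personal data.",
    },
    "sw": {
        "mdhibiti": "Mdhibiti wa data huamua madhumuni na njia za "
                    "kuchakata data binafsi.",
    },
}

# Fallback replies when no topic keyword matches.
FALLBACK = {
    "en": "Sorry, I don't have an answer for that yet.",
    "sw": "Samahani, sina jibu la swali hilo bado.",
}

# A few common Swahili words used as a naive language hint.
SWAHILI_HINTS = {"nini", "je", "ni", "wa", "mdhibiti"}

def detect_language(question: str) -> str:
    """Naive guess: Swahili if the question contains a known Swahili word."""
    words = {w.strip(string.punctuation) for w in question.lower().split()}
    return "sw" if words & SWAHILI_HINTS else "en"

def answer(question: str) -> str:
    """Return the reply whose topic keyword appears in the question."""
    lang = detect_language(question)
    text = question.lower()
    for topic, reply in FAQ[lang].items():
        if topic in text:
            return reply
    return FALLBACK[lang]

print(answer("What is a data controller?"))   # English FAQ reply
print(answer("Mdhibiti wa data ni nini?"))    # Swahili FAQ reply
```

A production system would replace both the language detection and the keyword lookup with trained models (for example, an intent classifier over embeddings), but the overall shape, detect the language, match the question to a known topic, and reply in that language, is the same.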

To end the session, my call is that AI is already inevitable.  We're already using it; it's on our phones, already in public services.  It's inevitable.  The main thing I would say is to be human‑centered: even when we were developing the chatbot, we put ourselves in the shoes of the citizen more than considering the benefit of the organization.  If we can enhance human‑centered AI, and maybe bring up the benefits more than the risks, that would be best.  The way to do this is to demystify AI, and a panel such as this is one of the ways we do it.  You demystify, because currently it's seen as a scary, big monster, which is not what it truly is.  That's not the whole aspect; it's what it could be, but it has many more benefits, especially for public service delivery.  With that, Mark, I just want to say thank you to you and the organizers for this, and largely to the IGF.

   >> MODERATOR:  Thank you so much.  Let's give a round of applause to everyone who has contributed to this conversation.

(Applause).

I hope the session has been valuable to you.  I hope you learned something, and I hope we can connect.  I hope we can talk more about this topic.  Thank you so much.  Thank you online as well for joining us.  Thank you so much.

(Applause).