The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> Good afternoon and welcome to the session on AI for sustainable development: country insights and strategies. Thank you to everyone who is joining us from around the world. My name is Yu-Ping; I'm Head of Digital Partnerships at the UNDP digital, AI and innovation hub, and we're pleased to organize today's session with the Carnegie Endowment for International Peace. Some of you might have been at the session over in the other room about international cooperation and its importance for AI. Here we're proud to complement that with an in-depth look at what it means to advance AI to achieve the Sustainable Development Goals at the country level, really turning global discussions into in-country impact while examining the challenges of capacity and infrastructure.
To kick things off, and perhaps this session will be a little different from what you've experienced before, we want a more interactive flow with members of the audience, and this is where we plan to use Slido to take the pulse of the conversation in the room and what you yourselves are thinking. So I would like to start by inviting everyone, including our online audience, to engage through Slido; we have colleagues moderating online, so I encourage our online audience to participate in the discussion. The first question I want everyone to answer is on screen right now; you can scan the QR code and enter your answer: what topic or theme would you like to hear about in this particular discussion? This will also be a chance for our speakers to reflect on the results and prepare their answers to interact with the audience here. So please scan the QR code and we'll see the responses come up on screen.
As you're doing that, I want to emphasize why this particular issue is important: the question of how we leverage AI at the country level to achieve sustainable development. We are present in over 170 countries and territories around the world, and we work with more than 130 countries to leverage digital and AI to achieve the Sustainable Development Goals. While we're tremendously optimistic about the role AI can play, we are also conscious of the significant AI equity gap that exists between the global south and the global north, and the question of how many people will potentially be left behind in this global AI revolution. We're working to close this equity gap in a number of ways, and a lot of our speakers today are working in this very practical area as well: how do we leverage AI in novel and exciting ways to achieve sustainable development?
So very quickly, a few more minutes to fill in your responses, especially those online, while I introduce the panel. Onsite we have Ms. Ojoma Ochai from Co-Creation Hub. Online we have Aubra Anthony, Armando Guio Español, and Anshul Sonak, who works on Intel's digital readiness programs.
It's really a multistakeholder panel: the idea is that we have government representatives, the technical sector and civil society, and we're really looking at how we collectively, as a community, can come together. We also have a representative from the Indian government,
an Additional Secretary, but I believe he's in another session and hopefully will come over soon. Very quickly, reflecting on this first question: the top three keywords are AI regulation, inclusion, and capacity building, and I believe the last one is AI in the media. I also see a very interesting number, 2947217, which is possibly not a response. But you can see a little bit of the scale of the challenges that are really confronting us today with AI. I'd also like to reflect on the areas highlighted in green, because these are top of mind for our audience online and on screen, and thank you for those reflections: inclusion and AI in the media.
Now to the next question: on a scale of one to ten, how optimistic are you that AI can accelerate inclusive sustainable development within the next five years? One means very pessimistic and ten means very optimistic. Okay, we're trending toward the negative end. I see 3s and a bit of 6s, and overall the score seems to be 5.0. Very evenly split in the room, with a rather strong emphasis on number three, which is quite negative, so I think this is actually particularly interesting. We've done a survey at UNDP, the Human Development Report, that shows overall most people are optimistic; I wonder if it's the IGF that slants to the conservative side. But that's an interesting reflection: overall we are in the middle in terms of optimism about AI and its potential to accelerate inclusive development, and I think that points to the challenges we collectively are trying to address.
Thank you for your responses. I am now going to ask the panel to reflect on what they are seeing from the audience and maybe think about that in their responses to some of the questions, and again we're going to give the audience opportunities to come in as well to keep this interactive. So let's go to our distinguished panelists, and we'll start with, I believe, Armando.
>> ARMANDO GUIO‑ESPAÑOL: Hi.
>> And the question, really, in terms of setting the scene, and I will ask this same question of all panelists: how do you see the current landscape of leveraging AI for sustainable development?
>> ARMANDO GUIO-ESPAÑOL: Yeah, well, thank you very much for this invitation. I really like the exercise of starting with these questions; I had been reflecting on the questions you shared with us for some time in preparation. I am here also representing the Global Network of Internet and Society Centers. We are an academic network of 130 centers around the world, and we have been working on these topics and issues. What we are trying to do, first of all, is bring more evidence about the impact of AI and what AI is really achieving, and help decision makers navigate the immense amount of information they are processing so they can get to the evidence of the work going on right now and the technologies available. We want to help decision makers, policymakers and colleagues around the world look into the kinds of technologies that are out there, what we have, and the real impact of this technology. That's the other thing: we really want access to good evidence of what the impact is and what the main issues related to AI are.
One of the things we have been doing is measuring the impact of AI on the future of work. We have been working with colleagues around the world, especially now with colleagues at MIT, developing a methodology to analyze the real impact of AI in specific areas, and what we are seeing right now is that, instead of a replacement of jobs, for example, we are seeing augmentation: actual improvement in the work that workers around the world are doing, with AI being helpful in that sense. This is just an example of how we need to gather this kind of evidence, these kinds of methodologies and analyses, in order to make good decisions, and that's how I think we can achieve a sustainable use of this technology and, at the same time, sustainable development, because what we are doing is trying to really understand what the technology is doing; in that sense we have to reduce the big information asymmetries that we have right now. We also have to measure the risks of AI, and that's going to be extremely helpful for some of the conversations we're having on AI governance and AI regulation, but we definitely need good evidence for that process to take place in a way that is going to be helpful for many countries.
Perhaps the last point in these first remarks is that we are seeing a lot of efficiencies being gained and a lot of benefits from the use of the technology. That's something we really need to highlight. Of course there are cases in which the technology is not being used for the best purposes, but we also see that there are benefits, and we want countries, especially global south countries, to understand that and have enough elements to determine how to better use and deploy these technologies in their societies. So that's the kind of work we're trying to do: building capacity by building evidence, taking that evidence to decision makers, promoting local research around the world and providing collaborations in that sense. Hopefully that's a good first picture of the kind of work ahead and some of the challenges we are working on right now. Thank you.
>> Thank you, Armando. I think that landscape of knowing what the challenges are, and having the kind of information we need to make informed decisions about the use of AI, is particularly important. Let me go to Aubra and ask her the same question.
>> AUBRA ANTHONY: Thank you, and thank you to my fellow panelists. It is an important discussion and I'm looking forward to diving in with you all and with those in the audience as well.
So Yu-Ping asked about the current landscape. The way I see it, it is both promising and fraught, for a few different reasons.
The first reason I want to point out is that, with AI, the risks we face in the context of the SDGs and inclusion have to do with digital divides that have been long-standing for many years, and with AI I think we see the risk that those divides become more calcified. They're linked to a few different things. In the context of tech broadly, but also specifically with AI, we see that power is becoming incredibly concentrated, right, with just a few multinational players dominating the discourse, dominating the priority setting, and dominating the types of business models that end up getting pushed out. Often these business models aren't serving the populations that are most in question when we consider how we achieve the SDGs. There's this notion that bigger is better.
And I think that the concern is that the broad trend lines are just a continued entrenchment of that concentration.
And it ends up that field-shaping decisions, really consequential decisions, continue to be made in ways that benefit those who are already benefitting the most from AI, both financially but also in terms of the information asymmetries, et cetera. And the resources that are needed to disrupt that are globally very scarce.
So just as an example, Africa currently accounts for only 0.1% of the world's computing capacity, and as a result just 5% of the AI talent in Africa has access to the computing power it needs. Beyond that, on the data front, around 2,000 of the world's 7,000 languages are spoken on the continent, and they're considered under-resourced because there historically hasn't been enough digital data on them to train LLMs. These different issues of inclusion crop up when you think about the way concentration is affecting access globally. But there's also opportunity there. Right? If you flip it on its head, because of those constraints I think we've seen some really amazing innovations emerge around building AI that's more robust, less compute intensive and less energy intensive, with the development of so-called smaller language models and things like this. Innovations that are better suited to the challenges at hand, the constraints at hand. Many firms have managed to do really groundbreaking work in light of those limitations, and in doing that they offer a really fantastic alternative model to this brute force, bigger-is-better ethos that's been dominating the AI playing field.
So firms like Lelapa AI, which has developed a small language model that serves hundreds of millions of speakers of low-resource languages. So there are promising signals as well as the more pessimism-inducing ones; in line with the Slido results, we see both sides of the spectrum coming up. Very quickly, I think there are a couple of other points worth highlighting in terms of the landscape of AI. There's also a sense of perceived urgency and a mentality of catch-up among many countries, that if you don't catch up you're left behind, and this is tied to the divide.
And in the context of Africa, where we've been focusing our research over the last several months, some projections show that GDP growth attributed to AI may be ten times lower in Africa than elsewhere, and so that creates a sense of urgency. It's not just keeping up with your neighbors, keeping up with the Joneses; it comes from the perception that AI can deliver much-needed economic development.
Sometimes that's good, but broadly, and this is an important thing for us to discuss as a community, I think the flip side is that it's tough right now to create the space that's needed to see AI as just another tool in the toolbox, in the arsenal of tools we have available that we can apply to what are often very systemic, politically and socially rooted issues: reduction of poverty, gender inequality, climate change. AI is one tool in the toolbox, and while this sense of urgency can help drive the conversation about how we leverage those tools to suit our needs, I think it also risks forcing us to adopt a solution that may not always match the problem. And as Armando pointed out, we need to ensure we have the evidence that helps us decide whether AI is the right tool for the issue. So I think that's one of the current challenges that we face. I have a third point that I will talk about more later when we have a little more time, and it's around funding. A lot of the issues we see right now have to do with disparities in funding; with the diminishment of U.S. foreign assistance, and with other foreign assistance profiles becoming smaller, I think that creates an additional urgency around how we address some of these problems. But in the interest of time I will leave it there and we can talk more about that later.
>> YU-PING: I think that point on funding will be an interesting one.
That is the moment of urgency, and we couldn't agree more about the importance of bringing in the global south and focusing on Africa. This is why UNDP just launched last week the AI Hub for Sustainable Development, which is part of the Italian presidency's initiative focused on accelerating AI adoption in Africa, really focusing on African countries and empowering local AI ecosystems. On the point of Africa, let me turn to you here in the room: in your work at the Co-Creation Hub, what do you see as the current landscape for leveraging AI?
>> OJOMA OCHAI: Thank you so much, and thank you to the panelists who have spoken before me; I think they raised a lot of important points. From what we do and observe every day, I want to point to four major things. Number one is that when we talk about AI for sustainable development and the excitement that comes with the potential and the opportunities AI brings to society, we usually don't also talk about the balance we need to create for the unintended consequences of artificial intelligence. In the work we do, we have seen how, as the models are getting smarter and bigger and big data is becoming easier to process, there's also heavy consumption of energy, right? In some of our recent work we've been benchmarking what led blockchain to transition from proof of work to proof of stake, and how that might inform how we deal with some of the unintended consequences of AI. The second trend we are seeing is the transition of artificial intelligence from the stage of hype to the stage of hope. I believe every new technology goes through three stages. There's the stage of hype, where there's a fear of missing out, everybody's pouring in investment and talking about it, and there's pressure not to miss out on the AI race; and when we use the word race it's obvious that everyone wants to take part and no one wants to be the last to come to the table.
But we've seen we're transitioning from hype to hope: we can now point to use cases, and that is driving confidence in a lot of ways. I also believe there's a third stage, the stage of truth, and we're going to get there, but before we transition from hope to truth we're going to make a lot of mistakes, we're going to have a lot of losses, and we're also going to see a lot of successes at the end of the day. The stage we're at now requires a lot of intentionality in the way we innovate with the technology, and also a multistakeholder approach to building AI solutions. We've seen that a lot of people are technically excited about AI, but some of the work we do in education requires that we bring more than one type of professional into the room, especially when we're building AI for edtech, and especially solutions that involve children. To build a really useful edtech solution for children in Africa, for example, you need scientists, linguists and assistive tech specialists in the room; you need safeguarding professionals; you need people who can look at the technology stack in terms of digital security as well. We've seen that multistakeholder approach in Nigeria and Rwanda, where it's no longer just about the technical group but a multistakeholder solution. The next thing I want to mention is linguistic equity. For AI we see, across the board, two classifications: the technologists and the skeptics. Some skeptics believe that linguistic equity is just talk and not concrete. But the truth is that for AI to be local and to be impactful from the grassroots, linguistic equity is very important, and Aubra mentioned people building small language models; this is because we need linguistic equity to build the stacks for some of the languages that are missing. We do a lot of benchmarking and testing of some of the large language models, and believe me, we still have a long way to go in linguistic equity for some of the African languages.
So there's a lot of work to do on evidence, and also to ensure that AI is first of all local and is building features that help people benefit from the technological dividends at the lowest level possible.
>> YU-PING: Again, this is where we also see that priority of focusing on the areas you mentioned. For instance, when it comes to linguistic diversity, we've been working with countries like Ghana on digitalizing local languages to create the inclusive models that can serve the engine of AI. On the multistakeholder element, again this is something that unites us all, and since you mentioned capacity, I'm glad we can turn to Anshul. Anshul, a few reflections from you, perhaps, on what the needs are and what the current situation is with regard to AI for sustainable development?
>> ANSHUL SONAK: Thanks, Yu-Ping, and good morning from Silicon Valley; this is an interesting conversation. It is 6 a.m. in the morning for me.
>> Thank you for being there.
>> ANSHUL SONAK: I appreciate it. I'm really hearing all the comments from my fellow panelists. As a professional coming from a rural village in India and now living in Silicon Valley, I have this conversation every day, so I really appreciate all the comments made by fellow panelists. From my reflections, I see two big strands in this space. One is sustainable AI itself: how do we make AI more clean, more green, more safe, more fast, more cheap? This is all about the AI technology itself, and that's one conversation where we need to be paying attention at all levels. The second, and probably more relevant to this audience, is what AI can do for larger systems. That's not a technology conversation; it's truly a developmental conversation: what does sustainability truly mean? Now, there is enough research; I think there was a Nature study two years back showing that more than 79% of the SDGs can be addressed with artificial intelligence,
if used responsibly and appropriately. So this brings a big opportunity and big challenges, and that's a true reflection. On the opportunity side, it can be an equalizer: everything can change once this comes into your home, right? Just from a personal productivity standpoint, we did some research recently which shows that if you use AI appropriately for your own personal productivity, you can save roughly 15 hours every week, and then you can figure out how to use that time responsibly for more value creation for yourself and your life. So that's the opportunity side. On the challenges side, of course, as we heard, AI is not just technology; there are big asymmetries. A gender divide.
There's a color divide, a country divide; all these asymmetries are emerging.
If we want to be a truly responsible society we need to have this conversation on how to bring it together and create some kind of equalization for long-term sustainability. So there are opportunities and challenges in sustainability itself, and there's a separate conversation on how to make AI itself more sustainable. Those are my two big reflections, and I'm happy to carry this conversation on.
>> And now we're joined by the Ministry of Electronics and Information Technology of the Government of India. I think it might be a good time to come back to the slide we had at the start, where we polled both the online participants and those in the room about what was top of mind for them in this conversation. The areas in green were what came up as the themes they would like to hear about when it comes to AI for sustainable development. So I turn to you for a quick response on these, and on what you see as the state of the field when it comes to leveraging AI for sustainable development.
>> ROBERT OPP: The thing I see in the responses is that this is very, very important. When you look at sustainability issues for AI, I would say that the energy use, especially when we build compute systems for AI applications and models, the amount of energy needed to power these systems, is very, very high.
To the extent that, in fact, now we are moving to Blackwells, which are even more energy intensive, but even an H200 consumes the power of one U.S. home. So when we are trying to save time, as was mentioned, when we are trying to push on productivity and on benefits across sectors, we also have to see how we balance these against key objectives on renewable energy and climate, with regard to more efficient computing systems. At some level we'll have to ensure that the costs that come from high energy usage do not outweigh the benefits of AI applications and models. This will require extensive research and building compute systems which consume less energy and involve more renewable energy. That will also involve limiting the use of AI for nonessential functions, things that humans can do better; why do we need to rely on AI there? We find people using it for very simple tasks like writing poems or writing text. So we need to prioritize: which are the tasks AI should do, and which are the tasks AI need not do? How do we limit the energy consumption of AI systems? How do we prioritize the usage of AI? How do we not ignore the needs that are posed? How do we reduce our carbon footprint? These are issues related to AI regulation, inclusion and building capacity.
>> YU-PING: Thank you so much. Now I want to turn back to the panel and ask you to reflect on what you've heard from fellow panelists as well as on the responses from the audience on the screen, and link that to a small tailored question based on your expertise. I will start with Aubra. The question of funding and governance has been picked up by other colleagues as well, and I find it particularly interesting that India will be chairing the next summit. On that question of governance and funding, how do you think funding models are shaping national AI ambitions in the global majority, and how can we as an international community address some of the challenges there?
>> AUBRA ANTHONY: Yeah, thanks Yu-Ping, and it's a very auspicious time. I mentioned some of the issues we're all tracking.
Right, U.S. foreign assistance has been actively shuttered, and many of the largest bilateral donor governments, NGOs and philanthropies have moved to shift away from historical levels of foreign assistance. So right now it's unfortunately a pretty precarious time, not just for AI applications but for the necessary ecosystems globally: the fundamental ecosystem strengthening that needs to be in place for AI to be leveraged in a responsible, sustainable way by locally impacted actors, as was mentioned. The enablers that are key: the compute, the talent to design the systems. A lot of that may fall more into the realm of DPI than AI, but I think it's part of this conversation. Even with those trends, there's a strong, growing recognition that, given the scale and scope of the need for supporting ecosystem development, the historical donor-led, siloed, uncoordinated investments really lead to a sum that's far less than the addition of its parts. Because of that, I think there's been an increase in recent years in more collaborative groups. We have seen this, for example, with Current AI, the public interest AI initiative that was launched earlier this year at the AI Action Summit. You mentioned UNDP's and the Italian government's launch of the AI Hub this month or last month, and we will see what comes from the Indian summit next year. There are a lot of different efforts that I think are trying to meet the moment.
And hopefully moving us in a better direction. But part of the landscape, and part of this broader conversation, needs to recognize that these larger, more multilateral, more multistakeholder funding initiatives, which honestly can better address the scale of the challenge financially, are emerging but also bring new challenges for those having to navigate them, whether that's governments, practitioners, or the people doing the ecosystem strengthening, who are having to navigate a lot of different trends and trendlines. And I'm going to say something that I think was mentioned earlier and that we all hopefully at this point agree on, but if not I would love to get into more discussion here.
For AI to really deliver for the SDGs and for sustainable development broadly, it cannot be something that is helicoptered in from afar. Its development and deployment have to involve the communities impacted by AI, and its governance has to involve them too. The problem is that, critically, the funding paradigm historically has really not aligned with that; it has really been more about AI produced elsewhere reaching foreign shores. The way this shakes out is going to have a fundamental effect on whether we can actually achieve this goal that I think we all share of better leveraging AI for sustainable development. There has been a really solid movement in the community over the last several years, with the principles for digital development and a recognition that the paradigm needs to shift and that we need to better appreciate all of that. At Carnegie we've been doing research on this, trying to understand where funding and needs are matched and where there are divergences. So just very quickly I will share at a high level some of the things we found through the interviews and consultations we've been doing.
>> I just want to give enough time to all the panelists, and hopefully allow some questions from the floor. I think the research sounds very interesting, and perhaps you could share some links in the chat for everyone to consult, if you don't mind.
>> AUBRA ANTHONY: We're going to publish this soon, but I can give you the three key takeaways from the research and the funding discussions we've had; we can get into more detail in the chat. First, funding must be structured to be non-extractive, and I think this is a key thing that has come up in other comments as well.
Second, it must be capable of becoming self-sustaining at some point, even if it isn't at the outset; if donors are coming in to fund, there need to be paths to sustainability, whether through engagement with the commercial sector or otherwise. And lastly, and I'm happy to share more links on this, we need to pay attention to how funding is structured and how risk aversion factors in; I think those are big issues that often go underappreciated, especially when you have a lot of different stakeholders coming together, which is critical here. In the interest of time, I will stop there, but thank you so much.
>> And that is a nice segue: she mentioned the role of entrepreneurship ecosystems, and that's where your work with tech innovators is relevant. How are you supporting that work, and what do you think of the areas of challenge that have been highlighted so far?
>> OJOMA OCHAI: This relates to what I mentioned earlier about AI in the early days of hype. If you look at the amount of money going into supporting AI innovators in Africa, we've not gotten even 2% of the investment, because everyone was just using ChatGPT, and we now see you can no longer do that. One of the tests we use is that you're not an AI company just by name:
what are the core issues that you are using AI to solve?
Recently we've been supporting innovators using artificial intelligence and DPI, and I think we need to start connecting artificial intelligence to core societal issues. There's also the issue of trying to fix what is not broken. If, for example, a state in Nigeria is distributing seed but struggling to identify who they are giving the seed to and what the trend in output is, that's an instance of using artificial intelligence to get real outcomes, rather than building something we are all excited about when you present it or look at it but that is not really solving any issue. For us, we are very intentional about AI solutions that are connected to core societal issues; when you need to traverse that hierarchy of need, you have to invest in AI projects where we want to see use cases. With some of the monetary commitments that come to us as an organisation to support innovators, there is that pressure to quickly demonstrate an ROI, and we can live with that; but there's also the push that you need to commercialize quickly. I think it takes a while to demonstrate good use cases of artificial intelligence, and that is why some of the work we do now is based on the theory first of all, and we do not push the start-ups on the commercialization side first, because once commercialization pressure comes, you begin to compromise the safety of what you're building. So we usually give ourselves one year to prove that what we're doing is safe and equitable in so many ways, and then you can start on your commercialization trail.
We work with patient capital now, because this is innovation that needs patience and must not be put under the pressure of commercialization first, and I think that's where some of the big techs are compromising when it comes to conversations around safety, around equity and around the good use of people's data.
There are so many tools that we've been experimenting with where we know, fundamentally, that the way the data was scraped to feed those models is ethically wrong, and we must not repeat that. That's why, for us, AI that is local must be built by local people. And lastly, I'm very happy to see capacity building come up on the screen. For us, we are running a year-long AI for Business master class where we gather business people from across Africa, representing three countries every month, and have conversations so they can really understand artificial intelligence. Too often we launch a solution on people who don't understand how to use it in the first place; when people understand, they're able to contribute meaningfully and to use it meaningfully as well.
>> Thank you very much; that was a very comprehensive answer that touched on many aspects. I would now like to turn to India, which has been a leader in the use of digital technologies, AI and so forth. Reflecting at a macro level on some of the things mentioned, how has India been able to become a global leader in all these aspects?
>> Whether we look at AI or at digital solutions more broadly, one thing to keep in mind is that all these solutions are sustained, and actually scaled up and used by most people, only if they are actually addressing a problem statement. If they are helping solve a problem, there's a way to make it happen and to make the public funding available, because it results in larger social and economic benefits. For example, we had the problem of inclusion: people did not have access to banking services, financial services, insurance services, and one key reason was that people didn't have an ID to prove who they are so they could open a bank account. Something as basic as that. That was the genesis of the UID, the Aadhaar project we implemented back in 2010, and today the unique IDs given to people have enabled the opening of 510 million bank accounts, and that has met a lot of needs: those accounts have fed into microfinance and credit schemes. When you build a foundational platform like ID, it results in a lot of spin-off benefits: a lot of the leakages that were happening in public service programs were cut, and people could benefit because they could take up livelihoods, and once incomes improve a lot of other benefits follow. It was similar for digital payments: we realized that people were outside the system because they did not have the tools, a credit card or a debit card, and were not able to do digital transactions. From that came the Unified Payments Interface, UPI as we call it; today we do a huge number of transactions every month, accounting for about 50% of digital transactions globally, and that has empowered people; street vendors have become part of the formal economy. And when we look at AI applications, we look at what problem statements we are solving. For example, in health care, one key challenge is that for diagnosing tuberculosis we do have hospitals with X-ray machines and radiological equipment, but very often we don't have qualified experts in the rural hospitals; so if there is an AI tool which can make a diagnosis that is as accurate as a human's, then that tool can replace that human. If we look at education, there's a lot of need for personalized lesson plans, for augmenting science and math teachers where teachers are not available, and for helping children with special needs get access to lesson plans and content that might be more useful for them.
Creating content in all Indian languages can create real public value, and there are people who are willing to pay for such solutions; if you solve the problem statement, it becomes very useful. We are seeing similar things in agriculture, where AI is helping farmers increase their incomes, reduce their input costs, optimize their use of irrigation, make timely interventions and get the right prices to the right farmers. They are willing to pay a cost for that. So if we design solutions across sectors so that they solve social needs or bring socioeconomic benefits, there will be a provision for funding them, there will be a provision for building a commercial model out of that, and those solutions will be sustained.
>> YU-PING: Thank you very much. I really want to give some opportunity for questions from the floor, so I am going to ask you, if you don't mind, to keep your responses to a minute: very quickly, from your perspectives in academia and research as well as the private sector,
your reflections on what you've heard so far and any thoughts that come to mind. Maybe we start with Armando.
>> ARMANDO GUIO-ESPAÑOL: Sure, in a minute. I was just going to say that we really need to focus more on implementation and on what is working or not, and especially we need to assess the efforts being made to implement several of these policies. We need to understand where the accelerators are and what is working and what is not. We also have to be aware of the context in which this process is taking place; that will allow us to be a little more efficient with the resources we have and a little more accurate in the kind of support that we're receiving and the support that we're giving. I think we need to analyze the implementation side a bit more, understand what is working, and start delivering more results on all fronts; I think that's very important right now. That's my minute. Thank you.
>> That was a great minute. Anshul, over to you.
>> ANSHUL SONAK: This requires balanced, responsible public-private partnership and great leadership. You have a great leader sitting on the stage, and his ministry, for example, has really prepared capacity building tools for population-scale impact. Look at their example of engaging the public; look at the example in education of what they've been doing with companies like Intel and others. There are four E's, entrepreneurship, education and economic development among them, using this. Creating the right public-private partnership model is very important, and this is very critical.
>> Great, thank you. Before I open the floor, another Slido question, which Megan is going to put on screen and which you can answer via the QR code: having heard these conversations, and based on your own experiences, what do you think should be the number one priority for supporting or enabling an inclusive AI ecosystem? In one word or phrase, what should be the number one priority? And you can't repeat the same answer as the first question. Capacity building has come up again, so despite my trying to get another answer, this clearly is a priority. One word: what is the priority for ensuring an inclusive AI ecosystem? I'm also going to ask colleagues to start thinking of the questions they'd like to ask our distinguished panelists, and I will ask you to keep your answers short. There is a question here: how can the IGF and other stakeholders support AI-enabled digital health care systems in Africa to achieve sustainable development, especially in this era of digital transformation in health globally? I think we should talk about the Indian experience and how important health is, so I will ask you for a quick response to that, and maybe turn to the other panelists as well on this question of digital health, especially in Africa.
>> In fact, what I would like to say is that what we are looking to do, especially as part of the AI Impact Summit we're hosting in February, follows a lot of the playbook we used when we built the DPI: we built a repository and made it available for the whole world, especially countries in the global south. Similarly, AI applications in health care, whether for X-ray screening, diagnosing breast cancer or tuberculosis, are similar use cases.
Those solutions will be made available as part of an AI use case repository, and for any countries of the global south, especially countries of Africa and the African Union, if they want to use those solutions we will be more than happy to offer them for adoption in those countries, with the necessary fine-tuning on local datasets as may be needed.
>> That really speaks to responsible AI, which is also up there in the Slido responses. Would any participants who have heard the conversation like to ask our distinguished panelists a question or make a comment? I think I saw a gentleman over there. You can come up to the mic here if you want to say something. Yes, please.
>> AUDIENCE: Thank you. A question about how the UPI model was rolled out between the Indian government and the private sector, bringing that kind of agile, quick mindset into government, and using open source and interoperability to roll it out with that mix of governance. Do you see a change in that approach compared with the very deterministic kinds of applications that came out built on top of Aadhaar and on top of UPI? Does that approach change, or do you think about building AI applications on top of your DPI in the same way?
>> It's going to follow a similar playbook. With DPI, as you rightly mentioned, we built the basic building blocks, like the data layer, and the various applications for the sectors were built on top of that. In AI, what we are doing is providing, again, the basic ingredients for building AI applications. That includes providing access to affordable compute to all those who need it, especially developers, start-ups, researchers and academia: we have made available almost 35,000 GPUs at the very low cost of a dollar per GPU per hour for those who need them. Then we are also enabling a dataset platform called AIKosh, where datasets from across domains and sectors, from both the public and the private sector, will be made available. Then there are the skills people need to build AI applications, and additionally we are providing tools for bias mitigation and for deepfakes, so all the tools required to test your applications are also being provided on a common platform. So all the necessary ingredients, which will be common for those who are fine-tuning, doing inferencing or building sector applications, will be provided as a common utility, similar to the DPI model. That's how we are going ahead with our AI development.
>> Thank you. Any other questions from participants here, our in-person audience at the IGF? Yes, and please introduce yourself as well.
>> AUDIENCE: Hi, this is Jasmine from Hong Kong. I heard a concept about AI applications, and I also heard about local people building their own AI systems. My question is, when you say local, do you mean the national level, the regional level, or even a grassroots community? I just want to clarify what you mean, since someone mentioned people building on the AI system. And is there a way to measure or help those local people efficiently build a system, with the knowledge and a way to track the performance? Because when you're helping them, you have to put yourself in their position. So what is the right way to do that? Thank you very much.
>> Maybe I'll ask you first, and then Armando to reflect on measurement, and then turn it over to the other panelists.
>> OJOMA OCHAI: When we say local, it can mean the three examples that you shared. For instance, when you are building agritech, I will use the example of plantain, which is a form of banana: we tested a number of foundation models that do not cater for that classification of plantain.
When we say local, that means people who come from the region where plantain originates, and who understand that classification, need to contribute to it. That's one example of what we mean by local. Also, in some of the work we do in Nigeria, for example when we work with farmers or vulnerable groups, when we want to use their data, for example for social welfare distribution by the government in regions where there's flooding or extreme poverty, we usually go back to them to ask for permission to use their data and to explain what we're using it for, right? That is why we use an SMS system where you get a prompt and you say yes or no before we use your data for the process we're building. That doesn't mean we're not using the data for good; it is for good, but we also need to let people know what their data is being used for. That's also part of contribution and what we mean by local. It does not mean that they're the technical people building it, but that they are aware and contributing to the stack, or to a knowledge session or even a validation session of what you're building for them. So at the end of the day people believe they co-created the AI solution. When you collect people's data you need their buy-in. To build anything in, say, a six-month project, we spend the first two or three months just engaging the people, co-creating the problem statement with them, and they contribute feedback when we have the first iteration of the solution, so at the end of the day the buy-in is almost automatic because they've been part of the journey. We have seen instances where people invested millions of dollars in tech solutions that the people rejected.
A typical example, not even about technology this time: in Sudan, for example, the government built a well for people to use because of water issues. But after building it, the government discovered that the people still went to the local wells to fetch water, and when asked why they were walking that distance to fetch water again, the women said that was the only time they had to catch up, in the evenings when they take a walk. And yet potable water had been provided for them. So it's not only about the technology we're building; it's good for it to be local, and that is the definition of local in the context of what I said.
>> Okay. Thank you so much. Armando, and then we'll go back to the last round of one-liners. Very quickly, for the audience: Armando, then Aubra, and then quick questions.
>> ARMANDO GUIO-ESPAÑOL: I was going to say this is contextual. We can talk about regional, local, or national infrastructure and technologies being provided; it depends. In Latin America we have some cases of LLMs being developed for the whole region as a regional project, and we will see how some elements turn out; of course some may not deliver as expected. So this is still very challenging, and I think the big element for me is the governance: who is taking part in the decisions being made about the functioning, the training and the whole development of this kind of infrastructure, which is critical for many sectors. So there's perhaps some food for thought about the governance of this technology and this infrastructure, which will have sectoral, national and regional impacts for us to consider at the same time.
It's not only the technology but also the governance side.
>> Aubra, over to you.
>> AUBRA ANTHONY: I would plus-one everything that was mentioned around the need for engagement. When we talk about local contributions, it's not just about the tech expertise that needs to be brought in; it's also about engagement with communities and user-centered design, human-centered design specifically. We see a range of examples where that's been done really effectively, and, importantly, of ways that those opportunities for engagement can be turned into opportunities for capacity development. I don't think we have time to go into it, but we can share a number of links to where that's been done. I was thinking of Togo during COVID: they built a model for targeting cash benefits with a university partner far, far away, and then turned that into a real strategic goal for the government of in-country capacity development based on those learnings, moving toward more sovereign approaches to developing AI from that experience. So there's a spectrum of ways that kind of local focus can look across different contexts, but I think there's a rich body of examples to draw from.
>> I think we're out of time, and I am very sorry to those who wanted to ask questions here. I am going to ask for the last, sort of ten-second wrap-ups, but at the same time I want to give the audience a chance to reflect on the question we asked at the start, on a scale of one to ten, to see if your opinions have changed. You'll recall the results ended up at about an even 5.5, where we were equally optimistic and pessimistic. After having heard this conversation, on a scale of one to ten, how optimistic are you that AI can accelerate inclusive sustainable development over the next five years? I will ask Anshul for the last comments.
>> ANSHUL SONAK: Yeah, a ten-second comment: bringing AI skills to everyone has to be a national priority.
>> And then your closing words?
>> What I would say is that I agree on bringing AI skills, and at the same time I would add driving more global partnerships to enable the sharing of applications, the sharing of datasets, the sharing of algorithms and the sharing of expertise. So if we bring this to summits and conferences, more global sharing will really help us.
>> Thank you so much, and again to everybody here, to our distinguished panelists, and to everyone in the room who has contributed to what I thought was a really rich and engaging discussion. Thank you for all being here today, and let's thank our distinguished panelists and our moderators. My thanks to Co-Creation Hub and the Carnegie Endowment for co-organizing this event with us. Have a good day, everybody, and we hope you enjoyed this session as much as we did organizing it. Thank you.
>> Thank you.