IGF 2024 - Day 2 - Workshop Room 1 - WS145 Revitalizing Trust: Harnessing AI for Responsible Governance - RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> BRANDON SOLOSKI: All right. Good afternoon, early afternoon, everyone. I'm not sure if folks in the room are able to hear me? Welcome, everyone. My name is Brandon Soloski. Welcome to our session on Revitalizing Trust: Harnessing AI for Responsible Governance. I am with the Centre for Corporate Diplomacy at Meridian International Center. It's a pleasure to be here today, at the intersection we find ourselves at right now. I am very fortunate to be joined by some distinguished panelists who will talk with me about this pressing issue.

To my left is Sarim Aziz, Director of Public Policy for South and Central Asia at Meta. Across the way from me is Lucia Russo, Economist and Policy Analyst at the OECD, with a focus on economic policy. And also across the way, we have Matis Pellerin, Vice President of Global Government Affairs at Oracle. Before we begin, we'll provide introductions to our work and our companies, just to give you a little bit of a flavor of where we're coming from as we dive into the subject today. I'll turn it over to Sarim.

>> SARIM AZIZ: Thank you, Brandon, and thanks, everybody, for being here for this really important discussion. So, yeah, my name is Sarim. I've been at Meta for over eight years. I actually did not start on the policy side. I've been on the technology side, working on AI and mobile applications for most of my career. I've only worked in tech. But increasingly, you know, we've found that even though Meta has been working on AI for over ten years, actually, this conversation has definitely gone up to the next level. So, I'm excited to be here and add to the discussion.

>> LUCIA RUSSO: Thank you. It is good to be here and join in the conversation. I'm Lucia Russo, as said, at the OECD. At the OECD, we have a division that works on international AI governance. It started years ago with the adoption of the OECD AI Principles, which are basically a guide for policymakers and stakeholders on how to foster trustworthy, innovative AI. And since then, we've been working to advance this work with our Member States and beyond.

We also have work that touches upon different sectors, and today we will talk about the public sector, among other domains, but I'll stop here.

>> MATIS PELLERIN: Hello, everyone. It is a pleasure to be here with you today in Riyadh. Thank you very much for the invitation. I am Matis Pellerin. I am the Global Vice President for Government Affairs. I joined Oracle about 18 years ago, and my job is to manage government affairs for Oracle outside the U.S. So, in that job, I work a lot with government officials to see how technologies can help them be more efficient and support government public services around the world.

Probably you know the brand, you know the logo, but you don't know what we do. It's very common. So, at Oracle, we are a cloud infrastructure and cloud application company; we provide technology for the private sector and for governments to manage their daily operations. It can go from HR and payroll to customer experience. And you will find our technologies in lots of sectors, including health care, e-government, financial services, and much more. So, we have a very large portfolio. And I'm very happy to join this discussion, because AI is something very important now. We are investing a lot in that field; in addition to our cloud infrastructure, AI technology is becoming much more important.

>> BRANDON SOLOSKI: Thank you, again, Sarim, Lucia, and Matis. Really excited to dig into our topic today. But before I go ahead and begin, there are a couple of things I wanted to talk about in terms of trust.

I work at the Centre for Corporate Diplomacy at Meridian International. At the Centre for Corporate Diplomacy, we are trying to provide the private sector with the experience, the tools, and the insights to navigate geopolitical issues, to understand matters related to trade and over-the-horizon policy matters that impact business. We do that by providing insights to our partners almost on a weekly basis, whether that be through a visiting foreign minister or ambassador.

The relationships that the private sector now has with the foreign community, with governments, are now more important than ever. The private sector are the new diplomats. They are part of the diplomatic community. And this is a new age that we are in, one that at Meridian we often refer to as open diplomacy.

And one of the reasons this is so pertinent right now, when it comes to trust, is something I wanted to highlight. I don't know if anyone here follows the Edelman Trust Barometer index. Edelman, a public relations advisory firm, every year puts out a trust barometer index. They survey over 30 countries, thousands of participants all over the world.

And what they found, I found quite pertinent to this conversation as well. The private sector is now the most trusted institution in the world, followed by the nonprofit sector, followed by governments, followed by media. There have been times in my working life when I know that's been completely reversed, when the private sector was not the most trusted institution. We've seen quite an uptick over the past years, and that's starting to ebb a little bit, but trust in the private sector is still front and centre. The private sector is really leading the way with diplomacy, when talking about AI, when talking about governments, when talking about the possibilities that exist within this new infrastructure that we are now building out.

So, one of the things we're going to talk about today goes to that topic of trust. There's so much potential with AI, from reporting potholes to navigating taxes to getting your passport renewed. For some of the most tedious things that we all deal with, the opportunity that AI presents is truly tremendous. But at the same time, I refer back to that Edelman Trust Barometer: 60% of global respondents right now actively believe that governments are purposely trying to mislead them.

When you look at that stat right now, trust in governments is quite low, and there is a lot to be done when it comes to AI and this topic, and the possibilities are truly tremendous.

So, one of the things I wanted to start with is a survey that was conducted just recently by the IBM Institute for Business Value. They found that respondents believe government leaders are often overestimating the public's trust in them. They also found that while the public is still worried about new technologies like AI, most people are in favour of government adoption of generative AI. So, I'd like to open this up a little bit to my panel. How can AI reshape the frustrating processes often linked to distrust of government, and improve these touchpoints to build faith in ethical, fair, and trustworthy AI solutions?

>> LUCIA RUSSO: Okay. Okay, thank you. I can start with that. As I mentioned, at the OECD we have the Public Governance Directorate, which is doing tremendous work in this field. And I believe that, if used correctly, AI can indeed strengthen trust in the public sector.

If you look at the components of government that influence citizens' trust, these include, for instance, responsiveness and reliability. So, where can AI improve those two components? If you look at reliability, as was mentioned, there are a number of tasks that can be done with AI; for instance, enhancing the internal efficiency of processes, so speeding up routine processes and freeing up civil servants' work for tasks that are more useful to citizens, but also improving the effectiveness of policy making, for instance, by understanding better, through large amounts of data, what the user needs are.

And then, when it comes to responsiveness, also being able to anticipate societal trends and user needs. There is a report that was recently issued called "Governing with AI: Are We Ready?", and it has interesting statistics about how OECD countries have been using AI for these three key tasks that I just described. We found that 70% of OECD countries used AI to enhance efficiency in internal operations, 67% to improve the responsiveness of services, but only 30% to enhance the effectiveness of public policy. So, we see that this trend is ongoing, but of course, it's still not fully at scale.

An important consideration here is, of course, that the public sector also has a huge responsibility to implement AI in a way that is accountable, transparent, and ultimately trustworthy for citizens, and especially to minimize harms when it comes to sensitive areas, like immigration or law enforcement, or even welfare benefits or fraud prevention.

Here, I would recall, as I mentioned, the OECD AI Principles, which really define what key values should be embedded in any development or deployment of AI, and I mentioned some of them: transparency, accountability, fairness, respect for privacy.

And I'll just end with a final note on how the public sector should build the enablers, so skills, infrastructure, and data, for trustworthy innovation to actually flourish.

>> BRANDON SOLOSKI: Thank you so much. Matis?

>> MATIS PELLERIN: Yeah, I fully agree that education is very important. If you want to promote trust in technology, especially in AI, you really need to make sure people understand what the technology is, what AI is, how it is built, and how data is managed. That is the first pillar of building trust.

As a tech company, of course, our role is to support that, and we are working a lot with our customers to provide them with digital trainings and specific sessions to help them understand how AI is used in our solutions, how AI is built, how we can fight bias in AI, and how you can manage your data and make sure your data is safe. Because you don't use an AI tool the same way depending on whether it's ChatGPT or a government AI tool. It's not the same; it's not built on the same technology.

Another pillar, of course, is transparency: explaining how our AI solutions are built, which, of course, improves confidence in this technology.

However, I think education is the first layer, but it's not the only one. There is also probably a more technical discussion to have about AI. That's why understanding the technology is important: to be able to go to the second layer and have a more technical discussion, you need to make sure people really understand.

So, that brings me to the topic of sovereign AI. I think sovereign AI is becoming more and more important, especially for the private sector, because it ensures the data is secure and safe. If you're a government or a private company, you're not going to use the same AI or technology that I, or people in the audience here, use when connecting to ChatGPT for personal activities, or when going to X, formerly Twitter, to use a new model that was just released last week.

If you're a private company or a government, you need to make sure that you are able to train AI models on infrastructure that is safe and that your data is not going to be used by someone else, especially if you put in very confidential data. So, how do I define sovereign AI? There are probably two things you need to check. First, what AI models are you using, and are you able to train the models with your own data?

And actually, when you are a government, being able to train an AI solution using government data is super important, but you can only do that if you are able to get access to the models and train them with your own data. You cannot do that with ChatGPT, for instance. And sorry, Microsoft is not here, so I'm just bashing ChatGPT, but I love ChatGPT, by the way. Still, I will not put confidential data from Oracle into ChatGPT, because Microsoft is my competitor, so I cannot use this model for work. I need to have my own. So, that's very important.

And so, being able to get access to LLMs, large language models, and train them with your own data is super important. That's what we try to do at Oracle. We have lots of customers that are involved in very critical operations; if you're a nuclear plant or a health care company, you need to be able to get access to the LLMs and use your own data. So, we work with OpenAI, we work with (?), et cetera, and we give our customers the ability to get these technologies with their own data. So that's the first thing.

The second thing is where your data is hosted and where your data is going. If you are a research institute or a university or an academic institution doing good research on a specific topic, maybe you don't want your AI training data to go to the U.S. or to China. So, that's the other point: where you're going to put your AI data. If you want to build sovereign AI, you need sovereign infrastructure.

So, what is behind AI? It is cloud. It is very simple: cloud technology is the first layer of AI. So, you need a sovereign cloud to host your data, to make sure your data is not going to leave the country and is going to stay in the country where you're based. That's very important, and it's even more important for government.

And just to finish on that, to give an example of what we are doing here in Saudi Arabia: Oracle is building cloud infrastructure in Saudi Arabia, and we already operate a few data centres in Jeddah and Riyadh, and very soon in (?). However, we know that government entities here in Saudi Arabia want to have the benefits of cloud and AI technology, but they don't want to put all their data in the public cloud. They want a sovereign cloud.

So, what we are doing here in Saudi Arabia is that, in addition to a public cloud, we are also building a sovereign cloud with STC, a telecom company here in Saudi Arabia. STC is building with Oracle a sovereign cloud where we are going to be able to train on and host critical data from the Saudi government, and make sure that when they use AI technology, when they embed AI technology into public services in Saudi Arabia, they will be able to use government data and make sure it's safe and it's not going elsewhere: not going back to the U.S., not going back to the UAE. It's based here in Saudi Arabia, so that's very important.

>> BRANDON SOLOSKI: Thank you so much. More questions to follow up on that, related to some of the work here, as well as making sure data remains sovereign and that we have interoperability. So, quite a lot on this subject, but I want to turn it over to Mr. Aziz very quickly.

>> SARIM AZIZ: Thank you, Brandon, and thank you to my fellow panelists. Lucia set the scene on the principles, and Matis talked about some of the considerations for deployment. I think it's important to just emphasize that AI is not a new thing, right? I think sometimes we forget that. It's important to differentiate and reframe the discussion around why the trust deficit that you mentioned is increasing.

You know, what is the difference between the AI that we were using five years ago and the AI of today? AI has been used for a while; any computer system that helps analyse or perform functions on existing data, that's been happening for a while. But what's so exciting about this new age of AI, so to speak, is its ability to not just perform tasks on existing data but to create new data. And it's multimodal. It can take text. It can take images. It can take video, audio. So, that's the exciting part. And I think that speaks to what Matis was saying: this technology is so important. We do believe at Meta that it has transformative potential.

It's so important, in fact, that it shouldn't be in the hands of a few, which is actually exacerbating the trust deficit. You can't have, you know, a few big companies based in the United States    (no audio)    where do we get this technology, right, especially in the developing world. So, I think it's very important to understand that the current model, especially as Matis highlighted, of closed, proprietary systems owned by a few companies is just not going to get us there. We need to fundamentally change that: the path forward needs to be an open source one that has wide acceptance and is accessible to all countries. That's why our CEO at Meta wrote this letter about how open source AI is the way forward, to ensure that nobody gets left behind, to ensure that people in this part of the world, and in other parts of the world, have a part in the conversation. They can test the models. They can understand them. They can look under the hood and see how it's done. They can take it and fine-tune it, as Matis said, to their local cultural context and languages.

So, I just want to be clear: I think it is going to be fundamental for governments to adopt and support the open innovation approach, to ensure they don't get left behind and that they are part of that conversation. I have lots more to say on that, but I wanted to just seed that idea.

>> BRANDON SOLOSKI: That's an absolutely great point. And that brings me a little bit to my next question. We're not quite there yet, but it's not far on the horizon that one would want to ask about an evaluation they received from an AI, or consider the patient who was denied service as a result of AI, or other mix-ups that might happen, and the powerlessness that one might feel as a result. So, I would be very curious to follow up on your point, and I would love for Matis and Lucia to comment as well. But starting with Meta, can you talk to me a little bit about how Meta is working with governments to improve public services and enhance trust in AI?

>> SARIM AZIZ: Thank you. We see amazing adoption of the open source technology with start-ups. Meta is not new to open source. If you are familiar with web technologies, Meta has done plenty of open source work around those, around React and many other technologies. In AI itself, we had open sourced thousands of libraries prior to these LLMs.

So, I think the main consideration with governments is, one, trying to tell them that if you are already doing an open data approach, an open AI, open innovation approach is going to be an extension of that. So, first: are your data sets open, in terms of allowing the public sector, the start-ups, and the private sector to work with those data sets? I mean, data sets are becoming ubiquitous. Yes, you need to control where the data is and have control over it, and a customer might want to customize it, but I think it's more about democratizing access to data sets, and also to models. And it's about telling them that there is a misconception that open source is not safe and secure, and that's absolutely not true. In fact, the cybersecurity industry will tell you, including the DOD, that it's not helpful in the cybersecurity space when signals and data are not shared. You have to share with third parties to ensure that you're able to respond to threats and bad actors.

So, from our perspective, it's about educating governments on the fact that open source AI can accelerate innovation, that it can increase access within the public sector, and that you can control your destiny. It gives you flexibility over where you want to deploy it, whether you want to run some of it in the cloud and some on premises, what amount of data you want to fine-tune on, what you want to use RAG, retrieval augmented generation, for, and it increases accountability.
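(For readers unfamiliar with the RAG technique mentioned above, here is a toy, self-contained sketch of the idea: instead of fine-tuning a model on every document, relevant passages are retrieved at query time and prepended to the prompt so the model answers from them. The corpus, the word-overlap scoring, and all names here are illustrative assumptions, not any vendor's implementation; a real deployment would use embeddings and an actual LLM.)

```python
def score(query, passage):
    """Toy relevance score: count of words shared between query and passage."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query, passages, k=2):
    """Return the k passages that share the most words with the query."""
    return sorted(passages, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query, passages):
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = "\n".join(f"- {p}" for p in retrieve(query, passages))
    return f"Answer using only this context:\n{context}\nQuestion: {query}"

# Illustrative public-service corpus.
corpus = [
    "Passport renewal requires a birth certificate and proof of address.",
    "The tax filing deadline for individuals is 30 April.",
    "Welfare benefits are recalculated when declared income changes.",
]
prompt = build_prompt("What documents does passport renewal require?", corpus)
print(prompt)
```

The point of the sketch is the control it gives: the retrieval corpus stays in the deployer's hands, which is why RAG is often paired with on-premises or sovereign-cloud hosting in the discussion above.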

There's just been this notion that you need to go with a proprietary approach to hold people accountable. Actually, governments can have more control and customization with an open source approach. So, that's been the discussion, and a lot of it has been about being able to prototype. We have plenty of great examples from France where, actually, parliamentarians have used our Llama model to make legal documents and legislation simpler for other agencies to understand. So they use Llama already; it's deployed. And there are plenty of great examples in health care as well, with, you know, the Mayo Clinic, one of the largest medical nonprofits, which is using it for radiation oncology in their diagnostics. Huge potential there.

For education and the public sector, we've seen in places in Africa that Fundamate is using WhatsApp as a study assistant. So, there's amazing opportunity there. For governments, there is even more potential across the public and private sectors. They're already pushing the boundaries. With the support of governments, I think we could do amazing work in the public sector. That's been our focus at Meta.

>> BRANDON SOLOSKI: Thank you. Matis?

>> MATIS PELLERIN: Yeah, AI is a top priority for governments, as you said. But we need to be realistic. Unfortunately, governments are still lagging behind the private sector in terms of AI adoption. Lots has been done in the private sector, but most governments are still running on very old technology. Lots of governments are using technology from the 1990s or the early 2000s, so their systems are very user-unfriendly, very expensive to keep operational, and not even very secure. So, there is lots of work to do, but I think there is now a good understanding among world leaders and government officials that they need to modernize their public services and public administration, to bring the best tools into the country, to support economic growth and better jobs, but also to improve the quality of public services. So, lots of governments right now are making huge investments to bring in these new technologies. Cloud and AI are the first two priorities.

One of the big differences between the private sector and government is that government is sitting on a huge amount of data. I mean, it's a gold mine. The government has plenty of data, and usually they don't really know how to use it, because the ministries usually work in silos. The Health Ministry has its own data; it doesn't connect with the Finance Ministry or with Homeland Affairs. It means they are not talking to each other, and they are not able to really leverage the power of AI.

So, the first thing they need to do is to connect this data, and also to use new technology, like AI, to really analyse the data and make decisions that are fact-based. It gives insight to politicians and to the various heads of administration about what decisions they should take, through this analysis of big data and the analytics they can use.

I'm very convinced that AI technology is really going to improve public services, improve the quality of public services.

As Lucia said just before, there is a change in how AI technology can be used today. AI is not new. For a very long time, we have been using AI to manage non-complex operations with very low value added, but now, with GenAI, there is a switch in how the technology can be used, because GenAI can manage very complex requests, and it can also give you personalized answers, which is very valuable for government. If you are a government, it means that you can use GenAI to automate lots of the tasks which were done by civil servants before, because they were complex. Now, you can make them autonomous, or at least you can reduce the time you need to manage and operate them on a daily basis. So, AI will, for sure, make government more efficient.

For instance, you can use AI to manage the relationship with your citizens, instead of having to send an email to a public administration to ask a question. I don't know if some people in the audience have already tried to send a request to their tax authority, for instance. You want to know if you're subject to a regulation or if you need to submit a review. It may take two months to get an answer from the tax authorities. If you're able to embed an AI chatbot which is connected to your tax regulation, so the data set of your tax regulation, but also connected to the revenue data from the Finance Ministry, as declared by your employer, well, the chatbot can give you the answer to your request in a few seconds. So, you went from two months to a few seconds, with the same exact answer. So, faster service.

The second thing is better optimizing public expenditure. Through AI tools, you can detect tax fraud, and you can also better calculate social benefits. In Europe, for instance, we are working with a lot of governments to use AI to make sure social benefits are correctly calculated. It can save you billions of euros every year, because in lots of countries, social benefits are sometimes not very well calculated. It's not optimized, because the social ministry is not talking with the other ministries, so they don't really know how much revenue you have; they give you some money, but you were not supposed to get the money. So, that's another way.
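(The cross-ministry check described above can be sketched as a simple data join: declared income held by the finance ministry is matched against benefits paid out by the social ministry, and payments to people above the eligibility threshold are flagged for review. The data sets, the citizen IDs, and the single income-ceiling rule are illustrative assumptions, not any real eligibility rule.)

```python
INCOME_CEILING = 30_000  # hypothetical eligibility threshold, in euros

declared_income = {      # finance ministry records: citizen id -> declared income
    "c1": 12_000,
    "c2": 45_000,
    "c3": 28_000,
}
benefits_paid = {        # social ministry records: citizen id -> benefit paid
    "c1": 3_000,
    "c2": 3_000,         # paid despite income above the ceiling
    "c3": 2_500,
}

def flag_overpayments(income, paid, ceiling):
    """Return ids that received a benefit while declared income exceeded the ceiling."""
    return sorted(
        cid for cid, amount in paid.items()
        if amount > 0 and income.get(cid, 0) > ceiling
    )

print(flag_overpayments(declared_income, benefits_paid, INCOME_CEILING))
# -> ['c2']
```

Real systems would add far more rules and human review of every flag, but the core idea is exactly what the speaker describes: the savings come from connecting ministry data sets, not from any sophistication in the rule itself.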

And there are plenty of other use cases. At Oracle, what we try to do is make AI easy to adopt. The way we make that happen is by embedding AI technology directly into our own applications, to make sure it's easy to use and easy to implement when you're a government, and this also applies to the private sector, by the way.

Another important point about AI is that you need to use the right data. If you don't use the right data when you train your models, the answers are probably not going to be very good.

Going back to my first example about ChatGPT: ChatGPT is very good if you ask it to draft some content, a keynote, or a briefing document, because it's based on a lot of public data available on the Internet. However, if I ask ChatGPT to give me a specific answer about a health care situation or about a tax regulation, it will probably not be able to give me a very relevant answer. So, contextualization of data is very important. For government, it means that you need to bring specific data sets from your own domain to train your model and make sure it brings a relevant answer to your citizens.

You mentioned passport (?) before, and I think it's a very good example. How can we use AI for passport renewal? Well, it's quite straightforward. You can have a solution that is put on the website of the government. This AI chatbot is connected with various government databases, so it's going to help you prepare your passport application. Usually, when you do a passport application, you need to gather lots of different documents: your birth certificate, proof of your address, your formal documents. Lots of different things. This AI technology is going to be able to gather all these documents for you, connecting with the various ministries' data sets. It's also going to generate automatically the form you need, and give you the next available meeting in the agenda. And when you arrive for the meeting, the civil official who reviews your application will have a much easier job, because the AI won't have made the usual human errors: the application will be correctly filled in, the documents will be correct, and you won't be missing any document, because the AI will have gathered everything automatically.

In the end, it's also going to improve how the civil official works, because he will not waste time telling you that you need to come back, et cetera. So, that's a very small example of how we can use AI and why it generates very good benefits in terms of productivity, efficiency, and cost savings for the government.

But just to finish on that: I think AI for the public sector is growing, but it's still very new. Governments are still a bit cautious about using AI, but adoption is clearly accelerating. We now see lots of use cases which are already live, with very good benefits for citizens and governments.

>> BRANDON SOLOSKI: Thank you so much. My apologies for the coughing attack I seem to be going through at the moment. I should have brought a little water on the stage.

I think one of the things I wanted to talk about, and you were just mentioning this, was the interoperability aspect of much of AI. There has been a proliferation this past year of new regulations, policies, and laws attempting to regulate AI, to position various countries, even regions, for the future, to position themselves for this new sector.

Now, it's been a full year since the EU announced the world's first major AI regulation, the EU AI Act. I've been following this, and I'm intrigued to hear your thoughts. Specifically, as governments around the world draw on the EU's regulatory approach to AI as they shape their own AI policies, what lessons might they want to start taking into consideration? Any thoughts or observations on any of these new laws or regulations?

>> LUCIA RUSSO: Maybe I'll go first. You are totally right; we are seeing many policies and regulations emerging. And of course, the EU AI Act is the pioneering regulatory approach, in that it establishes comprehensive, overarching legislation across sectors that aims at regulating AI systems that enter the EU market.

Like the EU, we are seeing some regulatory frameworks emerging, for instance in Canada and Brazil, that also follow a similar risk-based or impact-based approach, though these proposals are still being discussed in Parliament.

And then, on the other hand, we also see different approaches, such as those taken by the U.S., which you mentioned, but also the United Kingdom or Israel, where instead of a cross-sectoral approach, principles are defined first and regulations are then defined more at the sector level. This is clearly the approach that the UK and Israel have taken so far. And in the U.S., we have seen an executive order that has some components on risk management, safety, and critical infrastructure, but it still relies mostly on standards and voluntary commitments.

So, I think this space is really evolving quite fast. And what mostly concerns the OECD, being an international organization working on consensus building and facilitating interactions across jurisdictions, is that, of course, this can lead to regulatory fragmentation, which, in turn, leads to higher compliance costs for enterprises operating across borders. So, our mandate is really to establish interoperability across these various regulatory frameworks, and we do that at the very basic level; for instance, with the definition of an AI system, which has been adopted by the EU AI Act, by the convention of the Council of Europe, but also by the (?) framework. Having the same definition allows these frameworks to talk to each other, because they talk about the same thing.

But also, we are mapping risk management frameworks to establish what the commonalities are, and so, through responsible business conduct, allowing companies to see what compliance mechanisms they need to ensure to trade across borders.

I'll just, perhaps, mention three things on what countries should look at when they look at the EU AI Act. It's important to establish frameworks suited to their own ecosystems, their priorities, their societal values. I think the key elements from the EU AI Act would really be the importance of creating regulatory frameworks that are risk based, according to the level of risk of the systems, and so proportionate in terms of the requirements; accountability for deployers and developers; and then also establishing robust testing and certification systems across the life cycle.

And perhaps, just to conclude on the risk-based approach, I think that should also be based on evidence, and that's why at the OECD we also built an incident reporting framework, called the AIM. The purpose is really to see where risks actually materialize. Because we talk a lot about risk in the abstract, but then where is it that systems cause the most harm? On that basis, regulation should be able to adapt alongside technological innovation.

>> SARIM AZIZ: Thank you. Yeah, just to add on to what Lucia said, from an Asia Pacific perspective: I think it was exactly a year ago, at the last IGF in Japan, that the G7 Hiroshima Process was announced, which is actually consistent with a lot of the OECD principles. From what we've seen, most countries in Asia Pacific are not following the EU model. I think they have followed more of the G7/OECD principle-based approach, because they all understand this is a new technology, right? It's evolving so quickly, and by the time you regulate it, it will have already evolved, perhaps.

So, I think there are great examples, including the UK example, where we worked together. There is a need for harmonization, for having AI safety institutes around the world who operate as a network. That's been a great initiative, and I think there's    (no audio)    to assess risks. And with the UK, because of that collaboration, they were able to launch something called Inspect almost a year ago, which is basically an open source software library that assesses for risks like cyber, chemical, biological, and other safety risks. So, I do think there's lots of great work going on. It's still early, but I do see that collaboration is the key here, not necessarily regulation of something that's still evolving.

>> BRANDON SOLOSKI: I can hear you.

>> MATIS PELLERIN: Not working very well. Okay, it's back. I'm back. Okay. Can I have another mic, maybe? No? Okay. (Audio breaking up)

Maybe to comment quickly on two points. One is on harmonization. I think for the private sector, that is very important.

Thank you. Okay, that's fine. Without going into details on the AI Act, I think for the private sector it's very important to have harmonization. We don't want to    at least, we should not    see various different frameworks defined everywhere: one in Europe, one in Asia, one in South America. I know in South America right now there is a lot of work in Brazil and a few other countries on AI, and they are all wondering what they should do on AI.

Well, I think for us, it would be very complicated if we had fragmented regulation around the world on how we use AI. So, that's the first one. And I really think governments and officials working on this should consider trying to harmonize the rules.

The second point is innovation and adoption. We talked about adoption at the beginning of the panel. We should be careful about not reducing trust in these technologies. These regulations are great, and I'm not saying they're a bad thing, but in global opinion there is sometimes some misunderstanding about this technology, and it's not helping adoption, because people think it might be dangerous or think their data are not safe, and sometimes these regulatory impositions generate some mistrust about technology.

And in the EU, it's not only about AI. If you look at cloud and all the debates around data sovereignty, unfortunately, they have drastically slowed down cloud adoption, because companies and governments all worry about the cloud, because maybe there is a risk to their data. Whereas we know, from a technical perspective, it's usually very safe to go to the cloud, because cloud companies are cyber experts and they are putting billions of dollars every year into securing their infrastructure. So, usually, when you're in the cloud, your data is safer.

But there is a misunderstanding about it, and in global opinion the population somewhat worries about data sovereignty. And adoption is very slow because of that. I was in Singapore a few days ago, and I went through customs, and I was super impressed by their ability to use AI in the airport. Now, as a traveler, you don't need to take out your passport at customs; they automatically recognise your face. When you arrive at the boarding gate, you don't need the boarding pass, because they have embedded AI facial recognition into their process, and now people just go through the boarding gate and they recognise you, they know you're in seat 30B, and that's fine, you can go on the plane. You will never see that in Europe because of GDPR, because of all the rules. It's not possible. So, we need to find a compromise between data privacy and innovation, because innovation is important.

It is also through these new technologies and innovations that we can make government more efficient and easier for people.

>> BRANDON SOLOSKI: That's a great point. And ironically, it is very likely a European company that is handling a lot of what you were just talking about. But you're absolutely spot on there with GDPR.

I think one of the other things I wanted to talk about, and we started talking about this already, was partnerships. You mentioned this a little bit, about some of the large companies and the influence that this has. But one of the things I'd love to chat a little bit about, and get your thoughts on, is what role you think partnerships with the private sector are going to play, including start-ups. How is this going to evolve beyond just some of the big companies? I'll kick it over to you, Aziz, as I know you started talking about this already.

>> SARIM AZIZ: Yeah, I want to make sure others can chime in. But just to use Singapore as a good example: even a government like Singapore, that is quite innovative, is so, I think, partly because they realize the value of the private sector and the start-up community. So, that's where governments can really tap into their local talent, entrepreneurs, and start-ups, who have already picked up this technology and are already doing great things with it.

And I think one of the proofs of this is that we ran an APAC AI Accelerator across Asia Pacific, across 13 countries, from Bangladesh and Nepal all the way to Australia and New Zealand. And we were blown away, and this is just the power of open source, by how these start-ups and non-profits were using our technology. This is one of both the blessings and the challenges with open source: you don't know how it's being used, because it can be used in incredible ways. It's only because we ran this competition that we found out that, oh my gosh, New Zealand's Netsafe organization, which takes care of online harms and safety, is using our model to basically streamline the complaints they're getting from the community around content. And they're empowered by the government to send information to digital platforms, not just Meta, but others. So, it was amazing to see that.

In every sector    health care, manufacturing in Japan    there were uses of AI. And what we did was run this regional experiment locally. We ran local competitions in these countries, and we brought in the local governments to say, come and see what your own local start-ups are doing with this technology. And they're doing it in the sectors that you care about: health care, manufacturing.

In Taiwan, there was a company that was able to use AI on blueprints to identify building code violations and whether a design conforms to the local laws and regulations. So, incredible stuff; things we couldn't think of were being done.

And so, we engaged over 23 different government agencies across the Asia Pacific region to show them: here's what happens when you work with the private sector. It can be foreign big tech companies, but it can also be your local talent, who are already using all the tools available to them. That's the power of the cloud. Your local talent can use whatever tools there are    Oracle Cloud or whatever makes sense for them    Amazon, Microsoft. And again, that's the power of open source, because you're not locked in.

You know, with open source, you can take your data wherever you want. You want to put it in Oracle? Great! Tomorrow, if you get a better deal with Microsoft, go there. It should be what makes sense for you and gives you that control and flexibility.

>> LUCIA RUSSO: Maybe I'll bring in a perspective from Egypt. We have been working with Egypt on analysing their AI strategy, and they have a very nice example of public-private partnership, in that they built an Applied Innovation Centre, which works as a tripartite model: you have the Ministry of Innovation, then a domain ministry, which could be health or agriculture or the judicial system, and then the private sector. The idea is that the domain ministry comes in with a need, and the Ministry of Innovation helps in gathering the technological solution, together with private companies that help develop and scale it. This has proved very effective, for instance, in developing solutions for health, like diagnosing retinopathy linked to diabetes, or even speech-to-text recognition for the judicial system.

So, I think there is this benefit of having the private sector as providers and also as a channel for knowledge transfer, in settings where, of course, technological innovation may be lagging because of the ecosystem itself.

>> MATIS PELLERIN: I think governments can really learn from the private sector, because there are lots of technologies and solutions which have already been implemented in the private sector that can easily be replicated in government.

If I take the Oracle example: what we are doing for private companies to run their HR, their payroll, their procurement    lots of these applications can easily be implemented in a Ministry of Finance to run your public procurement system, your public contracts, your payment of civil servants, et cetera. So, there are a lot of already-developed applications that governments can use to really leverage the power of cloud and AI.

If I give you an example about health care: health care is a very important topic for Oracle. We bought Cerner a few years ago, which is an electronic medical records company. And since then, we have made huge investments to modernize the health care sector, because we are convinced there is lots to do.

One of the main challenges in health care right now is that the data is fragmented. You have lots of stakeholders in the health care space, from health agencies to public hospitals to private hospitals to private insurance, et cetera. So, there are a lot of them, and usually their data is not really connected. What we are doing right now is trying to build an ecosystem solution that gives governments the ability to connect all these stakeholders together and have global visibility at the national, population level, using AI to give government officials a better understanding of the national situation. We call this the direct agent platform for health care. It's already implemented in a few countries. This platform, using AI technology, gives you the tools to identify and detect diseases, for instance, or to predict patients' needs in a specific region, a specific country, or a specific city. That is something we did during COVID, and we saw it was working very well, and there is huge demand from governments for this, which will help them reduce health care costs but also improve patient outcomes.

And the second level is a bit lower: it's about how we can modernize hospitals and help health professionals, like doctors, improve their quality of work and make hospitals more efficient.

And so, actually, we just released a few weeks ago a new electronic health record which, to make it simple, is a hospital management system. It's software that manages appointments for the doctors, drug prescriptions, the number of beds you have, everything in a hospital. And now we are embedding AI technology to try to automate all the tasks health professionals currently need to do by hand, like drafting a report, putting the next meeting in the agenda, or writing a drug prescription. It takes time. So, now we are embedding voice recognition in our systems, and doctors can just record the meeting. At the end of the meeting, the AI is going to generate everything for you. No report to draft: it will be generated by the AI. The next meeting will be put in the agenda automatically through the AI. Same for the prescription, et cetera. And we are able to reduce the time that health professionals spend in front of their computer and not talking to the patient. So, that's very important, and that's something which is already live.

Actually, in Saudi Arabia, the UAE, and Qatar, we are already implementing these solutions in a lot of hospitals, and we see drastic improvements in how patients use health care in these countries.

(No audio)

To schedule cases, to predict the potential outcomes of legal cases. So, there are a lot of ways to use it. Agriculture is very important, and we have some good cases in Africa, even in the Philippines, where we use an agriculture solution to help governments monitor crops and the climate, to be able to anticipate climate change or issues with the crops. Or even public safety. Public safety is the one maybe people know best, because when you're a police authority or an emergency authority, you can use AI for emergency response or for video screening, et cetera. So, there are lots of use cases.

>> BRANDON SOLOSKI: Fascinating subject. We could go on for quite some more time, and I have more questions regarding emerging and frontier markets and how AI could be applied there, and I would love for us to continue the conversation. But we are at the bottom of the hour, and I'd love to end on that optimistic note around partnerships as well. So much can get done in that space; if one could have a favorite Sustainable Development Goal, number 17, partnerships, would be mine. So much gets done there.

So, just amazing to be able to get to talk about this with all of you today. Thank you again, Matis, for joining us; Lucia, for joining us from the OECD; and Sarim, thank you again for joining us as well, from Meta. It's really been a pleasure to have this conversation today to understand the role that the private sector plays in this space, and its leadership in terms of building trust with the public sector as well. Truly a fascinating subject. It was a pleasure to join you today.

I'll be around. I know Aziz, Lucia, and Matis will also be around. We would love to take some questions, though I think we might be out of time.

>> AUDIENCE: Yeah, thank you very much. It's my pleasure to be here. My name is Anil Poram, from Nepal. In terms of implementation of AI there are a lot of challenges, but one of the most prevailing challenges is the trust issue, in terms of the sharing of data by governments and public-private partnerships. So, how do we overcome that? And are there any good examples you would like to share with us? Thank you.

>> MATIS PELLERIN: Quickly, about trust in the management of government data. We talked a little bit about building sovereign infrastructure. A few examples close to here: we work with the Government of Oman, for instance. We have built sovereign infrastructure based in Oman, because no one was operating any cloud infrastructure in Oman, and the government wanted to use our technology to modernize their government and their public services, and to use AI. So, what we have done is build a cloud for them, which is a dedicated infrastructure. It's built under the control of the Omani government, with their own security, their own standards, certification, et cetera. So, there are some solutions, as you say.

And for me, the cloud infrastructure layer is probably one of the most important ones to check when you want to really protect your data. After that, we can also go into the protection of the data sets, anonymization, et cetera, but that's another aspect which I would say is much easier.

(Captioning ends in one minute.)

>> SARIM AZIZ: At the risk of contradicting Matis: yes, that's one option, but I think the answer is open source, where you're not locked in. You control your data, if you want. Actually, Llama, which is Meta's model, is available on Oracle's cloud infrastructure. So, yes, if you want to host it there, you can. But if things are too sensitive for the government of Nepal and you'd like to host it on your own infrastructure, you're free to do that. You can also do both! It can be hybrid. You're not locked into one proprietary system. I think open source is the answer to give you maximum control, maximum sovereignty, whether it's cloud or on-prem; you control your data, no one else does. So, open source is a solution for governments to look at. In fact, many governments are using it. They don't have to tell us that they're using it.

These things are not just going to run on clouds and servers and computers. We're seeing edge devices. There are more and more of these in the world, and sensors in some places that may not have good connections, so you need AI to run on those edge devices. Open source models are now getting so small that you can deploy them on your phone or on small edge devices as well. So, lots of interesting use cases could come out of that.