IGF 2023 – Day 0 – Facilitating trustworthy innovation: how governance frameworks can enable the safe development and use of artificial intelligence – RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> PRATEEK SIBAL: Hi, everyone.  So we are going to wait for another two minutes or so before we start.  The lunch, as always, is running into the next sessions.  Just wanted to make sure that in terms of the discussion, we'll keep the panel quite open.  The panelists are very happy to take any kinds of curveballs you want to throw at them.  We will try our best to do that.  And the idea is to engage and not stick too much to any kind of prepared remarks.  So please help me do that.

And we'll start in a moment.

So I think we'll start now.  Hi, everyone, I'm Prateek Sibal.  I'm a Programme Specialist in the Digital Policies and Digital Transformation Section, UNESCO.  It's my pleasure to have here today five amazing speakers representing diverse perspectives, from the private sector to parliament to the judiciary.  I would like to first introduce honorable Cedric Frolick, who is a Member of Parliament and the National Assembly's House Chairperson for Committees, Oversight and ICT, from South Africa.  Welcome, sir.  We have James Hairston, who is the Head of International Policy and Partnership at OpenAI.  We have Nicola Morini Bianzino, EY's Global Chief Technology Officer.  Welcome, Nicola.  And we have Genie Gan, Head of Government Affairs & Public Policy at Kaspersky, and Judge Eliamani Laltaika from Tanzania joining us.

The overall theme of the session is how we can facilitate trustworthy innovation while also getting regulation right.  And we will focus on artificial intelligence, a super topical subject at this IGF, with a global discussion about artificial intelligence under way.  We have seen the Secretary-General announce the setting up of a high-level expert group, which should be unveiled some time later this month.  We are seeing initiatives in the U.K., which will convene a high-level AI summit, and, since we are in Japan, we must recognize the Hiroshima Process that the G7 has set up.  At the same time, we have a very diverse group here, with parliamentarians from all across the world, and we would like to also challenge our speakers on the developments in other parts of the world as well.

So I will open the floor first around this kind of ‑‑ I don't know if we should call it a dilemma; it's a false dilemma where the challenge is always framed as regulation versus innovation, as if the two don't go together, whereas we have often heard positions that regulation can actually encourage innovation.

So let's try to dive deeper into this question first and hear from our panelists.

So I'll move first to James from OpenAI.

James, OpenAI has been talking about regulation, and the CEO of OpenAI also recently called for regulation of artificial intelligence.  What is your position on how to regulate AI, and what is its relation to innovation?  Over to you, James.

>> JAMES HAIRSTON: Thank you.  You know, I think one of the things that has struck our leadership at OpenAI, coming off several months where we went on a listening tour around the world about how our tools were being used and the big questions that governments are debating about the future of regulation and about how to capture the opportunity from these tools, is this.

The concerns are very different all over the world, and the proposed solutions are very different.  So, you know, I think what you find in a lot of the dialogues about this is that there's no one-size-fits-all solution from a regulatory perspective or from a governance perspective.  What we have really focused on is trying to listen, being as transparent as possible about what we are building, contributing to the international conversations about ensuring the safety of these systems long term, but also not disregarding any of the short-term harms or displacement.  So I think if there's a theme that comes out on this spectrum of regulation versus innovation, it's probably more cloudy than that.  There's a lot of technical work, technical transparency and advancing of the conversation on safety that the private sector needs to continue to do, and many conversations around the world that need to continue to converge.  While having those important conversations on mitigating major harms and accounting for different points of view on the future impacts of these technologies and how to regulate them, we also must not preclude any parts of the world or any sectors of the economy from truly participating in the value we generate.  I'm sure we'll explore that as a panel.

That's something top of mind coming off a lot of the listening we have been doing over the recent months.

>> PRATEEK SIBAL: I do want to press on some points here.  When you mentioned tools, taking ChatGPT as an example, what kind of harms and safety risks are you talking about?  It would be helpful if you could give some examples for the participants.  And what kind of specific regulation is OpenAI calling for, and whether it impedes innovation or not, if you can specifically address that in one minute or so.

>> JAMES HAIRSTON: So maybe on the one minute on the harm side, very specifically, I point everyone to the system cards that we release with every major product.

So with GPT-4, we released a many-page system card that outlines the many harms we tested the systems for; we think there are many questions the red teaming community needs to continue to work on.

And so the space for, you know, countries, regions and international institutions to set rules of the road by sector, by area of harm, we think, is pretty broad, up against some of these areas we specifically test.  Rather than outlining each of those, take a look at the system cards, and we can talk more about those specifics going forward.

On the regulatory side, again, whether it's in different sectors of the economy, the future of law and health and financial regulations and services, sector by sector, different countries may come to different places on the tools.  Where we have put a lot of our energy is on understanding the capabilities and the harms of the highest risk systems, the most capable systems.

And so we have invested in putting together the Frontier Model Forum, which advances the global conversation on how the most advanced models and tools work, and we think aligning the global community on the safety of those systems will be important.

>> PRATEEK SIBAL: We'll come back to the Frontier Model Forum.  Genie, I wanted to turn to you regarding specifically the cyber security aspects of the risks associated with AI.  First, if you can share what the cyber security risks of AI are, and then what kind of regulation we are seeing in different parts of the world.

>> GENIE SUGENE GAN: Thanks for that question and the time, Prateek.  Maybe before I answer the questions, I could introduce myself a little bit, because I may be sitting here today on this panel as private sector, but I had a long career before joining Kaspersky, in the public sector and government service, actually.  So what I really hope to bring to this table and this forum today is not just a private sector experience, considering the fact that I look after government affairs work across various continents around the world, but a regulator and government perspective as well.

So, the question that started this entire conversation was about that balance: is it even possible to strike a balance between innovation and regulation?

I think it's a tough balancing act, and I think, you know, parliamentarians in this room would completely agree with that.  The balancing act between too much and too little regulation is really a very, very tough one, especially when it comes to headline-grabbing developments such as ChatGPT.  In terms of regulation, across the globe we are definitely falling behind, catching up with innovations and developments that are evolving very fast.  And therefore, sometimes, as legislators around the world begin to consider these issues regarding the regulation of AI, there is a tendency and a propensity to perhaps overreact and to overregulate.

There are many countries around the world which are starting to look into the regulation of AI in recent times.  We have participated in several of these public consultations; I could flesh that out a little bit more as the conversation goes along.  In these introductory remarks, I would want to say we cannot be reacting to the consequences of every new innovation; we cannot possibly do that.  We cannot retrospectively come up with rules to cure the harms that some of these AI and machine learning tools and generative AI tools will be bringing to the industry, to society and to the community.

So what is really important is to find that constant amongst all of these developments and to really ensure that any regulatory or legislative frameworks or initiatives would be based on that constant.

What is this constant?  I really just want to start by saying that constant has to be values.

So I think we need to be values driven.  For example, Japan has a very human-centered approach towards AI regulation based on seven principles, which include privacy protection, security and innovation, right?  So these are the values that Japanese society prioritizes, and they have crafted their rules based on these values.

They are not looking at a single AI law that solves all the problems; instead, AI is regulated through several laws.  Just as an example, still sticking to Japan, Japan's Digital Platform Transparency Act requires fair and transparent practices from large online stores and digital advertising businesses, including disclosing how they decide search rankings, right?

And what are other examples of such values that I'm speaking about, that countries can consider to achieve the balance between innovation and regulation?  I think Kaspersky, if I may end by saying this, can offer some pointers, because our research team has been constantly dealing with this over many years.  As a company of more than 26 years, we use machine learning and AI in the detection of cyber threats most of the time, you know, and we do have a lot of experience to share in using AI to identify cyber threats.

So we suggest maybe just three, and I want to leave you with those three thoughts.

Firstly, I think regulation should not create artificial obstacles for AI developers, because you must be able to continue to encourage that creativity, right?  It should, however, provide additional incentives for companies engaged in developing and implementing AI.  Secondly, regulations should look at the amount of data involved, and particularly cover multi-purpose AI systems trained on extensive amounts of open source data, and AI systems utilized in decision-making processes involving substantial levels of responsibility and risk.  Just now, when I was speaking with Samuel from Ghana, we were talking about datasets from the regions, from the countries, rather than datasets which currently come very heavily from the West.  Thirdly, regulations should reflect industry-specific requirements for AI systems used in different areas, because the data used for each industry will come with its own intricacies.

So given the speed at which AI advances, it will not be possible for regulation to stay ahead of development.  It will take political courage, I think, for us to reckon with this, and it's tough, you know, because I came from a policy-making background.  We also need to have a strong belief that values are what will lead us to balanced AI regulation.

>> PRATEEK SIBAL: Thanks for the comprehensive response, Genie.

Before I also go to Nicola, I want to turn to Cedric Frolick as a member of parliament.  How are you looking at this challenge between innovation and regulation that Genie and James just described, and at what Genie just mentioned, that it's not possible to regulate everything all the time because the speed of technological change is so fast and it takes a long time to catch up, so regulation should probably be broader and value-based and then applied?

What is the thinking in South Africa and in the other groups of parliamentarians that you are involved with?

>> CEDRIC FROLICK: Thank you, Prateek, and thank you for the opportunity.

I think what's important ‑‑ I want to pick up from the previous discussion that we had in this same venue ‑‑ is that parliamentarians have an incredible amount of power.  They have the power to make laws and to oversee how these laws are implemented.  They even have the power to look at the regulations and to say, no, this is not enough regulation, or you are overregulating.

All of that is determined, in the case of South Africa, by our Constitution.  Our Constitution adheres to basic values and human rights, and for us, as far as the current discussion on AI and data is concerned, the government moved ‑‑ we always say a little bit slowly ‑‑ but we have a data protection law that protects the personal information of all citizens.

At the same time, that's where the balance comes in.  You must allow for a situation where you put legislation in place to also ensure access to information, so that the one doesn't trump the other.

So the thinking revolves around that, and also around the information regulator that's in place, which is basically policing the Protection of Personal Information Act and acting against those who transgress it.

We believe that our citizens have rights.  They have a right to privacy, first and foremost, but people also have the right to access information, and it's around those two systems that we are currently evolving.  There are additional laws being contemplated to serve as secondary laws in support in very specific instances: to ensure, firstly, that nobody is left vulnerable and nobody is left behind, but also to ensure that those who are ahead don't get so far ahead that we deviate from our core challenges in South Africa, and, I believe, probably in Africa and the developing world.  We face challenges, and those challenges relate to poverty, inequality and unemployment.

So any systems that we evaluate must speak to what our national objectives are, and they must serve our interests.  If a system doesn't serve the interests of the people it's supposed to protect or serve, then it has very little value, and the take-up will be very low.

While the government is putting these different systems in place, parliament has a very specific role to play, and I believe in the discussions we should focus on how parliament can utilize these tools to improve its efficiency and effectiveness in conducting its work on behalf of citizens.

We need to constantly be aware that the balance should be there, but it must be guided by our own national interests and also what is ultimately in the best interests of the people.

>> PRATEEK SIBAL: Thank you.  Nicola, I wanted to turn to you.  We have been focusing on the word balance, and the challenge, of course, is where that balance lies, which is also context relevant, as honorable Cedric Frolick mentioned: looking at what is happening in South Africa and what the national objectives are, and seeking the balance there.  Genie just mentioned that the balance hinges on values.

You have been advising governments but also the private sector on whether and how to regulate AI.  Where, according to you, is the balance that gets us to a state where we have a thriving ecosystem, which is what governments, businesses and society want, while at the same time the safety aspects are also taken into account, as James was mentioning?  Over to you, Nicola.

>> NICOLA MORINI BIANZINO: Thank you for having me, Prateek, it's great to be in these discussions.  So I think what I'm worried about, honestly, is that we are making a lot of assumptions around the type of regulations we need to have and what our constituents really want, without truly understanding what the technology does, right?  This is not one technology set.  There are many technologies in AI.  Some technologies definitely are dependent on the data that you put in, more like the tools where you have an input and an output that is pretty much the same every time.

Some are very different from a technology perspective.  Trying to introduce regulations without fully appreciating why these technologies are different and how they behave, I think that's a big risk.

The other big risk is the assumption that we understand what people actually want, and I'm not 100 percent sure about that.  As an example, you can see the level of privacy we are giving away with our mobile phones.  The same datasets can be used in an AI system and we wouldn't even notice it.

I know that there is a lot of political pressure around regulations, but I think the understanding of the technology has to be at the core of the regulation itself.  Just to give an example, with the large language models, I can challenge you to put your data in the system and train the system; you'll never be able to find it.  That's not the way the system works.

It's not the same as when you have biometric data, your face and fingerprint on your phone; that is different.

When you start with that assumption, what do you need to regulate?  You need to regulate the actions the systems can take.  The other thing, and I think we should do a little bit of self-introspection here, is that these systems are actually amazing.  What they can do, and this is in its infancy, is remarkable.  I've been doing AI for 25 years, and I can tell you that what we can do now with these things is mind-boggling.

It's truly another generation ‑‑ effectively the birth of a new intelligence on planet earth.  So it will be really important for us to figure out how to harness it.  But it is not only the negative side of it.  I've listened to three or four sessions today and everybody talks about the negatives; think about the positives.  Think about the digital barriers coming down.  You can be anywhere on the planet with a phone and have access to the world's knowledge, and do it in your language, summarized at the level of education that you have.  So to me, it's a magical tool.  Of course there are lots of downsides; there is a dominance issue, and I'm a U.S. citizen, I live in California, I know what that means for the rest of the world.  At the same time, I think we also need to embrace progress instead of just hampering it by applying regulations that were born to do something different, which is protecting data.  We have seen how that actually worked out with our cell phones.

>> PRATEEK SIBAL: Thanks, Nicola.  Your argument is that we don't necessarily know the technology, and we are regulating it in different directions, focusing too much on the harms or the risks and not so much on the positives.  Would that be right?

>> NICOLA MORINI BIANZINO: I think the regulation cannot be just a blanket statement that we need to regulate.  We need to regulate in a way that is tailored to the specifics of the different technologies that form this broader domain of artificial intelligence.

>> PRATEEK SIBAL: If I can press you a bit on that.  Would you say we are regulating technology or the impact of technology on society?  Because what do you do?  Are we regulating technology?  Well, not so much; we are regulating the impact of technology on society.

>> NICOLA MORINI BIANZINO: This is what is different about generative AI.  It's less about the data and more about the actions.

I know it can sound like a subtle difference.  What we are worried about is that our personal data will end up in a technology tool.  That's a good worry to have, an important one; it needs to be regulated ‑‑ GDPR, all these things.  But there's another part of the story: these tools have the ability of reasoning.  It's in its infancy, but it will grow very fast.  Two or three years from now, we will talk to these tools like we talk to a human being next to us.

So the question is, what do we want them to do, right?  What's the goal, the line of interest between what we do and what the machine does?  That should be the focus, as opposed to, let's try to tighten the screw on this without truly understanding what the implications are.  I'll tell you: try, put your data in, and try to see if you can find it.  It's not possible.

You may have a one in a billion chance that you'll see your Social Security number in it, but it's really mission impossible; it doesn't work that way.

I think this understanding of the technology's nuances, from all areas of society ‑‑ and you are driving that change in the regulation ‑‑ is absolutely critical.  With machine learning and other things, I absolutely agree, there is a lot of data and a much more deterministic link between what you put into the system and what you get out.  That fundamental understanding is important.  Otherwise we regulate the wrong capabilities.  We'll have, like, road regulations for airplanes; that won't work.

>> PRATEEK SIBAL: Thanks.  Privacy is one of the risks, and there are others, of course, related to bias and discrimination that are also often talked about, and there are examples of that.  So we'll come to those also in a moment.

Let's turn to Judge Eliamani Laltaika from Tanzania.  You are on the other side, enforcing regulation, and my question to you is slightly different.  We do have existing human rights standards; we do have international human rights law, which can already be applied.  For instance, on privacy, or on bias and discrimination, you have articles in the Universal Declaration or the covenants on civil and political rights which protect these.  How in the judiciary are you looking at artificial intelligence?  Or are you waiting for national legislation to then start thinking about AI and mitigating its harms, for instance?

>> ELIAMANI LALTAIKA: Thank you very much.  First, it's a great honor to be speaking in front of so many members of parliament.  It's really one of the most august platforms I've had, because in the judiciary, we are supposed to interpret laws, and we are supposed to understand the intention of parliament.  So I am privileged to be surrounded by the intention of parliamentarians from around the globe.

I'll start this way.  Members of parliament, as the honorable member from South Africa said, have this power that belongs only to them.  So you should not wait for anyone to do what you are supposed to do.  You should come up with laws.

It has always been the case that we make laws for posterity, for the future.  You enact a law in 1995 for it to be used 50 years later.

That is no longer the case.  You must legislate for now and for posterity.  You just need a leap of faith to start moving.  You don't need to understand the entire ecosystem of AI to come up with a law.  You can just identify a few problems that are affecting your people, and you come up with a law.

I can guarantee that for us in the judiciary, if we have anything written from the parliament, we give it super, super respect.  If it's not there, the tradition allows us to come up with whatever thinking we see fit from the common law perspective.

So why do you give us that leeway to go all over the world when you can legislate?  I can guarantee that there is no ‑‑ should I use this word? ‑‑ there is no silly act of parliament.  Any act of parliament is a powerful law for that country and for other countries that associate with that country.

If you come up with an AI law that someone around the world thinks is just rubbish, it is rubbish only to that person, because it is their interest which made them go and look at it.

So just come up with a law; if it changes tomorrow, what is the problem?  You are going back to the House; that's why you are there.

So don't worry, that's one.

Number two, AI must be regulated.  Before I was appointed by my president to join the bench, I was teaching at the Nelson Mandela African Institution of Science and Technology, a Pan-African university for graduate students in natural sciences and engineering, drawing from most of the African countries.

So I've been teaching cyber security law and cyber ethics for the past ten years.

I can guarantee you that everything that affects a human being in any country requires some sort of legislation, some sort of regulation.

AI is not only the ChatGPT we hype about.  It is the reminders you get from your Gmail, when you walk into an office and they scan your eyeballs, when you write down something and your data is retained; that entire ecosystem comes down to AI applications.  We cannot say those are huge things so we cannot legislate.  No, we must start where we are and move on.

To finalize that introductory part, we think we need to regulate AI in order to come up with trustworthy AI, for three reasons.  First, it has to be lawful.  Someone asks, is it lawful to use identification in Tanzania?  I say, I don't know, just use it.  When you have a problem, you'll be brought before me.

Something is lawful if the parliament says it is lawful.  If there is no law, it's unlawful.  So that's the shortcut.  That's the meaning.

Secondly, we need it to be ethical.  If you are doing anything that is unethical, it should not be supported.

Lastly, it must be technologically robust.  You cannot deploy an AI system today and tomorrow it doesn't work.  The whole system is down, you cannot pay, because you have automated everything.  You cannot promote people, because decision-making is done by AI: to say this person has reached 40 years, has worked for 10 years, you promote them automatically.  If that system doesn't work, then you are in trouble.

So you must legislate to set the standards for it to be allowed to be deployed as part of government structures.  Thank you very much.

>> PRATEEK SIBAL: Thank you.

So I think this was kind of the first round of introductory remarks from all of you.  If I can try to briefly summarize, and then I would like to open the floor for two or three questions at this point ‑‑ we won't leave it to the end ‑‑ and then we come back to the panel.  First, we need to be aware of what this technology is and what its impacts on society are, and try to regulate those impacts if needed.

There was a mention of a sector-specific focus, because AI, as you mentioned, is also your Gmail; you have this technology everywhere.  What is it we want to regulate?  I think that clarity is needed.

To inform the regulation, we need values, which have actually been articulated globally: 193 countries agreed on the UNESCO Recommendation on the Ethics of Artificial Intelligence, and there are many other initiatives from other parts of the world as well.

So with that, I would like to quickly open the floor for only three questions, and I would request you to be brief and not make it a parliamentary statement, please.

Who would like to ‑‑ okay, so we have ‑‑ okay.  But first, are there no women who would like to take the floor?  Is there someone I have not seen?  Okay.

Okay.  Okay.  Sir, please, first you, and then we go to you and then to Cameroon.

>> AUDIENCE MEMBER: Thank you.  I'm a senator from Mexico.  I think that what Nicola said is very important.  We don't need to regulate the tool, the artificial intelligence or the internet itself; we need to regulate the bad use of it.  It's like a gun: you can regulate certain things, but a hunter can have a gun.  If he uses it to kill someone, or uses the internet for a financial crime, or to discriminate against women, or for any violation of human rights, then there's a consequence; we already have legislation in Mexico that covers the use of the internet for those things.

But not to regulate the tool itself, because then you limit the possibilities and you stifle creativity.  That's what I understand, and I think that's my position.  I'd like you to answer that, whoever wishes.

>> PRATEEK SIBAL: Thanks.  So we'll take all three at once, in the interests of time.

>> AUDIENCE MEMBER: Thank you very much.  This goes to the industry player, OpenAI.  I just wanted to find out how you are using machine learning and African datasets to train these AI tools, because I hear the talk about not regulating AI and all of that, but for the tools that you're building, for example, I want to find out how an AI-enabled e-agriculture tool could benefit a farmer in Ghana when the datasets used to train that AI tool are tailored to the farmer in Texas.  So how are you working with African dataset providers to be able to access our data, so that AI tools don't become foreign-focused at a certain point in time?

The second question: one of the benefits of AI is the ability to process large amounts of data at superhuman speed.  Given all of that, how are we going to deal with the real data privacy concerns that arise from the ability of AI to pull datasets from different locations and draw determinations about an individual that human beings couldn't have drawn previously?  How do we deal with that?

>> PRATEEK SIBAL: We come to the honorable member from Cameroon and then to ma'am and then to you.

>> AUDIENCE MEMBER: Thank you, I'm honorable Oliver, from the parliament of Cameroon.  I was listening to my brother from Tanzania speak.

I was a little bit confused, because in Africa, where we come from, when people are not aware of something, they tend to clamp down on it, just to completely block it.  So I'm asking: do we need to regulate because we are thinking about the benefit of regulation, or do we regulate because of the fear of the unknown, where you're afraid of something and try to cut it down?

Then also, don't you think that collaboration and cooperation can be an instrument, a tool for the sharing of knowledge, rather than thinking, the things I don't know about AI, I have to clamp down on?  One thing I agree with here is the fact that we cannot stop AI.  We cannot stop it.  It has come and will stay with us.  Thank you very much.

>> PRATEEK SIBAL: Thank you, sir.  Would you like to take the floor, ma'am, if you could introduce yourself briefly.

>> AUDIENCE MEMBER: Hello.  I'm Mexican.  Please put your headsets on.

I'm a senator in Mexico.  And I think it's important to share with you something I just heard on the other panel: the EU has taken the decision to start to legislate on some of the matters you've just mentioned, the impact.  And I'd like to mention, because I'm chair of the equality commission in Mexico, that it's not the same to regulate the impact as to regulate the action itself.  But we have already made significant progress, because we are already regulating digital violence and media violence against women.

And I wanted to mention this because we are still asking ourselves what impact all of this has.  The impact is terrible.  Women have been insulted and criticized on the network; some have even committed suicide.  We really need to take responsibility here.  We need to make sure that we use this technological progress in favor of nondiscrimination, nonviolence, et cetera.  We need to make sure the platforms commit to not collaborating with or normalizing events that have an impact on the dignity of people, in this case the dignity of women.  Mexico is very far ahead from this point of view.  I have the impression the EU isn't.  I want to know your opinion.

>> PRATEEK SIBAL: Sir.

>> AUDIENCE MEMBER: Thank you for yielding the floor.  I am Honorable Laciday, a parliamentarian, Chairman of ICT in the Nigerian National Assembly.  I want to make it very clear, I want to toe your line a little bit: technology is a moving target.  There is nothing we can do about it.  If you go back a few generations, even as recently as cloud computing: when cloud computing was going to come out, everybody was freaking out about distributed data centers and all of that.  It didn't take anything away.  All it means is we have to adapt to new ways, the same way we moved from mainframes to distributed computing to cloud computing.

We have to adopt and cling on to AI.  It's not going to break anything.  The way I'm thinking, we just need to have a good security framework in place to embrace it.  And secondly, let's not legislate to kill AI.  Let's not legislate to kill AI.

Let's legislate to make sure that AI is being used within well-defined parameters where innovation continues, but we make sure that it does not cross any ethical lines or break any laws.  So my line of thought around this, after listening to all of you, is just to make sure that as legislators we embrace, we legislate to accommodate, and then fix issues as they come.  Thank you.

>> PRATEEK SIBAL: Thank you, sir.  I think the MPs respect the speaker of the house.  We wanted to have only three interventions, but the floor is yours.

>> AUDIENCE MEMBER: I'm from Guatemala, Central America, and I would ask you to put your headsets on so you can listen to what I have to say.

It's very important what we've just heard from the senator from Mexico.  It's crucial that we also have today someone here representing the courts in this annual event that we hold.

We already legislate on cyber security; at this stage, we started with protecting minors.  47 percent of our population is indigenous, and there have been many violations, abuses and rapes against minors, and so we started with that.  A few years ago, we wanted to be part of the Budapest Convention.  I wanted to mention this today just to consider all the progress that's being achieved in these different fora.  Building on this, we are a day away from approving an act on this front.  There have been many abuses that have taken place involving businesses, hospitals and institutions, both public and private, which are now making use of personal data from individuals.

So we have done comparative law, looking at Latin America, at what Mexico has done, at what the U.S. has done; in particular we have looked at the Mexican legislation and the institution around the protection of people's data.  Thank you.

>> AUDIENCE MEMBER: I'm the last one.  I'll be very quick.  Thank you very much.  I'm from Italy, Martin front knack, and I am in charge of the digital and AI part of regulation.  I would like to speak about one topic that I think is really important.  I used to study technology and the impact of technology, but after a couple of years I really think that the technology is totally stupid; it's our behavior, how we use technology, that makes the difference.  We have to be honest, we are all colleagues, and the truth has to be told: even on human rights, there are countries that respect human rights, and there are countries that don't.

I don't want to point the finger at anyone in particular, but I think that parliamentarians, beyond the technology, must always verify that human rights are respected.  Then use the technology; we can use some other law, but the main route is respect for human rights.

I'm a European, and Europe decided to be one of the first in the world to make legislation about AI, maybe strictly.  A lot of people say Europe is the leader not in innovation, but in regulation.  That could be; I fully agree with you, there are some countries, maybe better than us, that are leading in innovation.

But I think that when we build a framework that respects human rights, we are not talking about limiting innovation; we are talking about preserving human rights.  This is really important.

I think that Europe paved the way; GDPR is a good best practice.  And it's already done.  Why don't we adopt something similar to GDPR around the world?

It's already prepared.  People are already prepared.  It's something we already have.  And we ask colleagues to get together and think about human rights and the common good.  Thank you very much.

>> PRATEEK SIBAL: Thank you for that statement.

How should we go about this now in the panel?  We have, I think, seven interventions.  I will briefly mention some broad areas, and then you can pick the ones that you would like to respond to, in a minute and a half or so, please.

So we have, of course, statements around not having too much regulation and regulating the bad use cases instead of just the technology.  Some points around hate speech and violence manifesting through these technologies, and freedom of the press.  We heard some statements around the fear of the unknown and whether MPs should regulate out of it ‑‑ definitely not just because people are afraid.

We had some specific questions for OpenAI.

I'll perhaps start from the other side, if you would like to take the floor, Nicola.

>> NICOLA MORINI BIANZINO: I mean, in general, I think the comments are pretty good.  Very much aligned with what I'm thinking, especially the one coming from you.

I think at the end of the day, if we don't embrace the technology, we'll be behind; it's as simple as that.  So the question is, can we make it into more of a tool for humanity, as opposed to a tool from one very specific area of the world?  The question about African data, for example, is a good one.  That's what we need to push for.  It's almost like the internet, in a way.  It doesn't discriminate between one country or another, or one language or another; it depends on what you put on it.  So I think we can go in that direction with AI as well and make it into the overall repository of human knowledge.  I believe in that.

If people want to use it in a negative way, they could find instructions on how to build bombs and other nasty things.

We need to truly think about the ends, as opposed to just focusing on the technology.  If we put in rules with no impact at all, things will be exactly the same.  Because, I mean, I'm the same: when I come to Europe, with these pop-up screens on the websites, I click okay, and everything about me is known through this anyway.

I think we need to make sure we are focusing on the right target and not on a false target, that's my comment.

>> PRATEEK SIBAL: A study says that if you had to read all the privacy policies you click on in Europe, it would take you 76 days.  At the same time, it doesn't mean that if something is broken, we get rid of the whole regulation; the question is how to improve it.

>> NICOLA MORINI BIANZINO: I think if we put the screws on the market, what will happen is very clear: the investments and the startups will move to London, which has a very open ecosystem, or to the U.S. or India or somewhere else.  Every country will make rules in order to invite the innovation to come to them, so people are going to cut themselves out.  I have lots of friends who are moving out; there's too much uncertainty, and they know that the regulation is going to impact them negatively.  You can see from the U.K. government, for example, it's a free-for-all over there, in the sense that they can try things out, without a lot of regulation.  Is it the right approach?  I'm not sure; we need to think about it.  But by strictly closing the door, you'll be left behind.  That's like saying, I will restrict access to the internet for my own citizens, right?  You're not going to do that.

>> PRATEEK SIBAL: Thanks for that provocative intervention.  I'll move to James; you had specific questions for OpenAI and datasets.

>> JAMES HAIRSTON: I'll start with the responses there.  On the question about the data OpenAI is trained on and the gaps in our tools, that's exactly right.  Our large language models are trained on the part of the web that is largely English language; that's where the highest quality initial training was done.

That does not reflect the languages of the world to the degree it should.  And there are really important gaps in performance.  The world's largest languages perform better and better, and each model evolution is proving that, but as you go to different dialects, beyond the world's most spoken languages, that performance drops.  That needs to change, full stop.

And it's one of the projects we are really invested in.  We worked on a pilot with the government of Iceland, because the Icelandic language and its representation online is very small relative to the number of speakers there.  Making sure that the language can be used by these tools, and figuring out what research techniques will overcome some of these gaps, is a big research challenge.

So what are the solutions there?  I think building partnerships on how to train on high quality language data is something we continue to explore around the world, something we want to continue to work with governments on, and we think it's a project for international institutions to take on as well.  How can we make sure the tools are as responsive as possible?  Very tactically, having teams inside of governments evaluate tools like ours and submit evaluations about what isn't working, where our tools are deficient in languages; we run tests like that and publish information in the system cards I mentioned.  Understanding where there are gaps ‑‑ and I mentioned the listening tour we went on ‑‑ that's the first question we asked both of developers around the world and of governments: where are we underperforming, and how might we partner on the research to improve these tools and their execution?

I think that's a really important project for the world and for us to engage on.

On the privacy practices and the information practices, I can't speak to every tool in the ecosystem.  We place really high safeguards on protecting personal information that enters and could be output from our systems; we don't want people's personal information to be an output.  That doesn't mean that you can't try to engineer work-arounds to get that to happen.  That's another place where we are constantly safety testing and want reports when we are falling short, and that's going to be another space where, again, we have to continue to do research to make sure you can't get that information back out.  That's not the way every type of tool works, and it doesn't speak to what open source models will do.  But for us and our systems, we work to prevent personal information from being an output of our major tools.

Then, on the really important comments made about the need to respect human rights and for the tools to both reflect and serve communities around the world: we are in full alignment.  We are trying to build general use tools that can aid in all of these important tasks for humanity.

There are a couple of projects we have just announced that get at this work.  One is democratic inputs to AI: how do we understand the unique contexts of communities around the world and let them shape the outputs that AI systems make?

So, again, that tries to account for the fact that we know there's an overrepresentation of the English language in the systems, and it's about making sure that different regions, different countries and different communities can shape the voice and how these tools work.

We just rolled out a red teaming network where we are asking security and domain-specific experts from around the world to stress test our tools.  That's a critical part of how that work proceeds.  Another recommendation flowing from some of those comments: I think the capacity building and some of the testing and use of these tools by governments, but in low-risk ways ‑‑ not deploying them for very high-risk, external facing use cases, but, as one of the comments earlier on the panel suggested, having civil servants and others use these tools, again not for high-risk external use, really growing the facility to understand what the flaws are and the ways they do not work, building the security and safety infrastructure internally ‑‑ I think that will help with the external pieces of regulation and representation.

>> PRATEEK SIBAL: Thanks, James.  Just on the language point, I would also like to draw your attention to a very vibrant community of researchers in Africa called Masakhane, developing datasets in African languages to address precisely this problem that a lot of these tools are not available in low-resource languages.  Perhaps OpenAI can also engage with these communities, because listening is one thing, but there is also empowering communities with the resources to be able to develop these datasets, which is ultimately a common good for humanity.

Moving on to you, Mr. Frolick.

>> CEDRIC FROLICK: Thank you.  The emphasis should be on the responsible use of AI.

How do you achieve that?  You can only achieve that through collaboration.

That collaboration, which is part of the questions, can only be achieved if all the stakeholders are part and parcel of that process.

For instance, I support regulation; it's necessary.  Of course, in my country I wouldn't like to see certain tools being developed and deployed that could be to the detriment of our society and our people.

I also heard somebody say, but who says the people want this regulation?  The people vote for us.  They vote for political parties and individuals who go to them and solicit a mandate.  So we have the right to represent them, but also to articulate views and put the necessary legislative and regulatory frameworks in place for the common good of society.  Of course you will have those who are completely on the other side.  Just imagine if you have a situation, and it happens in the world currently, by the way, where certain products are developed not necessarily for utilization in their own country but elsewhere, and they are not customized to the needs there.  What happens if very advanced military technology, for instance where AI is also involved, falls into the wrong hands?

What happens to your internal security systems when you don't have regulations in place on what to allow, what not to allow or the extent to which it is allowed?

So the responsibility sits with all of government and organized business.  Just last week in Uruguay, there was a discussion around AI.  There was general agreement, even from the big companies promoting AI: yes, we want to be part of the process; of course we cannot stifle development, new technology, new research, but it must happen in such a way that it serves the best interests of humanity and the common interests of each country.

>> PRATEEK SIBAL: Thank you, sir.  Genie, what are your thoughts?

>> GENIE SUGENE GAN: I was busy taking notes, and I hope I'm able to organize my thoughts.

First of all, I think my response would be that it's not really about regulation in and of itself; the challenge here is, what exactly are we regulating?  And second of all, why are we regulating?  If it ain't broken, why fix it?  So that brings me to the next point, and maybe it's not a question, maybe it's a foregone conclusion; I think it's clear to everyone in this room that there is no way we can run away from AI and technology.  Obviously there are a lot of benefits, and if you run away, you'll be left behind by the entire global development.  The second foregone conclusion, which everyone should agree with, is that the benefits of AI and technology come with a lot of costs and impacts, which is why I think our lady senator from Mexico was talking about whether we are regulating the impact or the action, and she spoke about the impact on women, for instance, and I think our speaker from Guatemala was talking about protecting minors.  These are the vulnerable people we are trying to protect.  There is something we are trying to protect, a reason we are trying to legislate.

Now then, that leads to the discourse we are having today about regulations and their role.  However, having done policy work in the government for a long time, I completely understand that there are limitations to regulations, because whatever is legal may not be ethical.  I hope we are going to spend some of the remaining time talking about ethics and ethical principles; that is something our team at Kaspersky cares a lot about.  In fact, later this week, at this very forum, at IGF, we are launching a set of principles we hope can catch some attention and alignment in the global community as well.  For instance, the member of parliament from Ghana was raising the issue of data privacy concerns, and in concrete terms, from a cyber security perspective, there are some measures we can implement, including limiting processing, reducing data collection, pseudonymizing or anonymizing wherever possible, ensuring data integrity, and applying other technical and organizational measures to protect data and systems.

However, these are technical solutions.  We should actually be thinking not just about regulation, but also about ethical principles.
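[Editor's aside: to make the measures Genie lists concrete, here is a minimal sketch of pseudonymization plus data minimization, assuming a simple Python record structure.  The field names, the HMAC-based token scheme and the allow-list are illustrative assumptions, not a description of Kaspersky's tooling or of any specific law's requirements.]

```python
import hashlib
import hmac

# Assumption: in practice the key lives in a vault and is rotated; it is
# hardcoded here only to keep the sketch self-contained.
SECRET_KEY = b"example-key-keep-in-a-vault"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a stable keyed hash (HMAC-SHA256),
    so records can still be linked per person without storing the raw value."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

def minimize(record: dict, allowed: set) -> dict:
    """Data minimization: keep only the fields the stated purpose needs."""
    return {k: v for k, v in record.items() if k in allowed}

record = {
    "email": "user@example.com",   # direct identifier
    "age": 34,
    "region": "Ashanti",
    "favourite_colour": "blue",    # irrelevant to the analysis, will be dropped
}

# Pseudonymize the identifier, then drop everything the analysis does not need.
safe = minimize(
    {**record, "user_id": pseudonymize(record["email"])},
    allowed={"user_id", "age", "region"},
)
print(safe)  # raw email is gone; records still group by the stable token
```

Run as-is, this prints a record whose raw email is replaced by a stable token, so analysis can still group records per user without holding the identifier; as Genie notes, such technical measures complement, rather than replace, regulation and ethical principles.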

>> PRATEEK SIBAL: Thanks so much, Genie.  Judge, your thoughts, please.

>> ELIAMANI LALTAIKA: I'll be quick.  I heard honorable Oliver from Cameroon, and I hope this will address others as well.

Why do we need laws?  All over the world, actually?

There's a subject called jurisprudence that is taught in law schools, and it says there are three functions of law in any society.  The first is the prohibitive role: don't do this, don't do that.  The criminal aspects.

The second is the facilitative role, to facilitate someone: if your parents pass away and you want to have their car in your name, the law facilitates you, through probate, to get that car in your name.

The last is to assist.  So law can facilitate, prohibit and assist.

So do not always think that when people say we want to legislate, we want to prohibit, because this is what many people think: that Africa wants to enact laws to prohibit AI.

No, there are laws meant to facilitate.  You say, we support AI; we want anyone investing in AI in Uganda to make sure they train engineers from the university.

We want to be a part of this journey.  You make a law like that.

You make a law to say, we want contributions for women in STEM: women in science, technology, engineering and mathematics.  There is a saying that 20 years from now, you will regret not the things that you did, but those you didn't do and could have done.  So if someone wants to make a law and you are told that it won't work because science is changing quickly ‑‑ fine, it will go to the national museum.  People will read it.

So I must be very open that I'm an advocate of hearing voices from all over.  No one should be silenced because they are not up to date with science and technology and engineering.  No one should be silenced because someone else is working in that area.  When I go into Google as a judge, I am looking for a law, even if it is just one line mentioning AI, from Cambodia, from Singapore.  We should not always be led by the U.S.  What do you have to offer from Nigeria?  And there is no sin in saying what you think.  Those are the fundamental human rights we are talking about: the right to be heard, to say what you think.  There are many ways of legislating.  I cannot teach the Pope, because this is your job, so I cannot teach the Pope.

But if you may allow me to say so, AI cuts across many areas.  Education: you already have an education act.  You can put an amendment there and say students in our country must be encouraged to learn artificial intelligence and its applications, but to use it ethically.  You don't put all your essays into AI and then pass the national exam.  So you legislate on education to permit it.  There are issues of data protection; you already have some of these laws, and you can just add a section.  There are sections on intellectual property: I am reading that across the world people are complaining that some of their novels were put into this generative AI.  You can legislate and say, this will be the benefit sharing.

There are issues of cyber security and cybercrime.  That is one way.  A second way is to come up with a whole AI act.  I've met some of you; some of you are heading committees.  You can talk to the lawyers and come up with something, some voice from your country that we are waiting for.  Don't worry, no one will laugh at you; we are the ones who interpret acts.

So I am the one saying: what you are doing is great; do it, please.

[ Applause ]

>> PRATEEK SIBAL: Thank you.  Though I would add that there are a lot of complexities in defining what AI is, and that needs to be carefully done in the laws first.

So we have about 15 minutes left and we'll move to the final set of questions.  Okay, I see that ‑‑ there are more interventions.  I wanted to move to my script, but okay.

>> AUDIENCE MEMBER: Thank you.  I'm honorable Hajio from Gambia, a member of the Pan-African Parliament.  This is more of a comment about legislation.  Members of parliament do not legislate what they do not know.  Legislation actually comes as a need of society.

For example, in Gambia, you know we have been talking about cyber crimes all over, but our laws are not yet enacted.  What we did, to be proactive, was to go and change our Communications Act of 2009, where we expanded the definition of computer systems to mean computer systems and information and communication technology tools and services.  You'll see that sometimes, as members of parliament, we are proactive, not reactive.  What we are doing essentially is curing the ills of society based on what our constituents want.  It's not necessarily about stopping innovation, because technology is here to stay.  Whether we want it or not, it's here to stay.  What we are doing is ensuring we support the ecosystem itself while not harming society at the same time.  So what we regulate is what actually harms society; that's what we look at, what it does to society.  So we are here to support innovation, and we are here to support AI as well.

>> PRATEEK SIBAL: Thank you, sir.

>> AUDIENCE MEMBER: I'm from the DRC.  As he says, it is very difficult to make one regulation for AI, because we don't even know yet what will come out of AI.  But I think also that saying we don't make regulation specific to AI, and instead we put some lines in different laws, from education and so on, is not the right thing to do either.  Something should relate specifically to AI.

My concern ‑‑ I don't know if it is because we see a lot of movies ‑‑ is that we are human, okay, and AI at this level is still dealing with human data.  What will happen once AI is dealing with AI data, and applying it to human life?

So we have to have something specific related to AI ‑‑ not a big law, but something to make sure that anything that comes out of AI, we can regulate, and to make sure that it gives us something that came from a human first.

>> PRATEEK SIBAL: Thank you, sir.

>> AUDIENCE MEMBER: Thank you, my name is Sarah Opendi, member of parliament from Uganda.

One thing that I want to acknowledge is that artificial intelligence is a relatively new field, especially for us in Africa.

We are at different stages of understanding technology, and this is also something that makes us think differently.

So definitely we all know that artificial intelligence can be used in health, education, agriculture and different fields, and therefore legislating for a particular sector may be problematic. In my view, certainly laws are not made in a vacuum. Laws are made to solve or to facilitate, as Justice Eliamani Laltaika said. For me, it is important that we actually have specific legislation on artificial intelligence because, as I've said, it cuts across different sectors. So you can't just look at amending the education act and this act and the other one. But overall, at the end of it all, we must know that technology can be disruptive, and in a country like mine, where 75 percent of the population is young, below the age of 35, we must ensure this artificial intelligence does not disrupt, compromise and disorganize our values and cultures.

So legislation is important, and I have already seen the challenges that technology has caused. As we speak now, we have children that have been affected in different ways by technology. So we must ensure that we protect young children, much as we want to advance. But how are we moving forward? People have stopped thinking because of these gadgets. They can no longer think. Are machines taking over roles that human beings are supposed to do? Let us be careful. We want to progress, and artificial intelligence, yes, it can help us be efficient in many fields, but it can be disruptive. Thank you very much.

[ Applause ]

>> PRATEEK SIBAL: The lady behind you was first in order. Please.

>> AUDIENCE MEMBER: Thank you, I will keep this short. My name is Tangi, a member of parliament from Tanzania, one of the youngest members of parliament in my country. I am Vice-Chairperson of the parliamentary caucus for science, technology and innovation, but we don't have a permanent parliamentary committee dealing with science, technology and innovation. This is a point we will add to our arguments when we discuss the importance of having such a permanent committee. Right now we don't have a specific committee in parliament.

I'm so proud of the judge here and what he's saying, and I'm very flexible, as you can see. First, I'm one of those parliamentarians who supports artificial intelligence, because I also use ChatGPT. You understand what I'm talking about. For me, we need technology, and we need regulation. For me, laws and regulations are for assisting and for facilitating, and at some point you need to prohibit, but it's not like we are banning the use of technology. We cannot run away from technology. Yes, for me, we need regulation built on the concept of assisting and facilitating. I'm very glad that in my country our president allowed the amendment for education and we introduced coding. I'm in one of the African countries; we are one of the LDCs, so we are not that very close to technological advancement, and we also don't have a ministry dealing with science and technology. Some parts of technology sit within the education ministry and some within the information ministry, so it's not very clear. Having a judge who is also a lecturer at one of the universities, and a president recognizing that we are not running away from technology, for me, this is a point where we are open-minded. I want to tell you, you have these kinds of members of parliament across the continent. You have these young brains thinking that we have to accept science and technology. I'm here for that. Thank you.

[ Applause ]

>> AUDIENCE MEMBER: Thank you very much. I also want to contribute. I'm from the Republic of Namibia, a member of parliament. Of course, yes, whatever development we experience needs to be regulated based on certain elements, because regulating artificial intelligence depends on what element, what interests, you actually want to regulate. It is not done without limits on what you are regulating; you look at, for instance, the safety measures associated with the use of artificial intelligence, and what type of artificial intelligence you are referring to. So you look at, for instance, electronic transactions, and how you want to protect your people using electronic transactions, maybe through systems, maybe through any type of electronic transaction; those are the laws or the regulations that we can always look at, including communication in general.

How do you regulate, or protect your people, or give them a right to communicate using artificial intelligence, for instance?

So how do you protect your people from the harm associated with the use of artificial intelligence?  So I'm just in general saying we all want and believe and agree that artificial intelligence is there to assist us, to achieve.

To develop and progress.  To communicate.

To align ourselves with the rest of the world.  So that we are completely living in one global unit.

It depends from country to country, of course, because each country has got its own interests, aligned to its norms. But there is one point that I want to make last, and this is what I wanted to say most, actually: we have to strike a balance between what we want to achieve with artificial intelligence and the capacity that we are building to allow our people to cope with the use of artificial intelligence, because in reality it is not balanced. We are not the same when it comes to capacity, the capacity to use artificial intelligence. In some countries in Africa, as much as we appreciate the existence and the coming of artificial intelligence, we have also, unfortunately, experienced loss of jobs, because we have not built our people's capacity enough for them to cope with artificial intelligence, so that artificial intelligence can create, for instance, employment opportunities for them. So thank you very much.

>> PRATEEK SIBAL: Thank you so much, sir. So we are really running out of time now, and I see the IGF secretariat looking at me. What we'll try to do is have closing statements now, but I would request you to really focus on moving beyond legislation. I think we have had a real dialogue here between the parliamentarians and the people on the panel. If you had three key messages, three key points, what is needed for creating an enabling ecosystem for AI? We have heard some points around capacity building.

So we'll start perhaps with Genie. Your quick three-point take on what kind of initiatives are needed for building an enabling ecosystem.

>> GENIE SUGENE GAN: Definitely, three points. AI is here to stay. From a cyber security perspective, which I think is the reason I'm on this panel, AI algorithms have been applied to speed up threat detection, recognize anomalies and enhance the accuracy of malware detection. In addressing the cyber vulnerabilities facing the world today, the ability to analyze a large amount of data is important, and the growing number of cyber threats means it's virtually impossible to protect against them all manually. While policy discussions usually focus on outputs, such as the material ChatGPT produces, its inputs, a conversation we were having earlier in the session, meaning the data it is trained with, may be a bigger threat.

So data-driven tools like ChatGPT are only as good or accurate as the integrity of their training datasets. Bad actors may well influence what users see and read in ways that have global consequences. If we are clear about the root cause of the problem, it can be less daunting than dealing with AI itself. Understanding this, and ensuring any regulatory or legislative frameworks are based on the consistent values I started my introductory remarks with, will definitely ensure that the AI ecosystem has robust rules that can withstand the test of time and of emerging innovation and development in technology.

>> PRATEEK SIBAL: We'll move to James next.

>> JAMES HAIRSTON: Three closing thoughts. I think the first is really just that we continue to invest in global collaboration on the rules to keep these tools safe and to govern them. There's going to be a lot of divergence by country, by region, by sector, but we shouldn't lose sight of collaborating around the world at the higher levels, ensuring the long-term safety of these tools and that they work for the world.

The second is really to build capacity inside and across governments for using these tools, not just for its own sake or for learning what these tools can be applied to, but to give feedback: where are the language capabilities falling behind, where are they harmful for the communities we care about and talked about today. That would be number two.

The last one, to the great points earlier about the different layers of jurisprudence, facilitation and the other pieces, is to really think about issuing challenges using these tools that could solve real-world problems for countries and communities, and spending some time on that; I think capacity building can really help with it. So it's not a set of challenges issued from afar, but ones that are developed and designed by, and really representative of, the people in our communities.

So those would be my three.

>> PRATEEK SIBAL: Thanks, James.  We move to judge Eliamani Laltaika.

>> ELIAMANI LALTAIKA: Just quickly, excuse me for this, I did not mention the three things clearly: permissive, prohibitive and facilitative. Those are what we know all over the world. Permit, prohibit or facilitate.

As my last closing point: never wait for the right time. That time will never come.

There is no aspect of ICT which is not fast moving. You cannot wait for it to slow down so you can start legislating. Don't lose the forest for the trees. We are not just talking about ChatGPT or generative AI or machine learning; we are talking about an entire ecosystem made of AI.

And lastly, there is this African proverb. I am from the Masai. I've seen my brother from Kenya, who is a member of parliament there, but we also have Masai in Tanzania; we spoke in Masai a little bit earlier. There is a saying that says: never go to a community meeting without your own position.

We want a global law, UNESCO, African Union, you know, but let this global law be made of individual perspectives. You cannot take the entire African continent to a U.N. meeting where all they can do is keep silent.

We do not accept that. We want you to do something. Our member of parliament will tell you another saying: if we throw you into the water, you cannot stay like this and wait to die. You will try, try, try again. So just try; come up with a permissive, prohibitive or facilitative voice from your country. Thank you very much.

[ Applause ]

>> PRATEEK SIBAL: One thing: when you were speaking in Swahili, the live translation here didn't work. That's a challenge.

>> ELIAMANI LALTAIKA: That is one of the things we want to fix. We want AI to understand everything, including our languages, including Masai.

>> PRATEEK SIBAL: Nicola, over to you.

>> NICOLA MORINI BIANZINO: I have three words. One is opportunity. So yes, okay, we need to look at the potential harms, but also look at the potential benefits. I think especially for the younger generation, they're going to live with this for the rest of their lives; it's not something we can just deal with at this level. So the opportunity is a big part.

Second is the purpose. As you do that, keep in mind, and I heard many people saying the same thing, why you're doing it, what we are trying to accomplish. That's the purpose. That's the second part.

The last one is balance. So yes, there are things that are going to go wrong and things that are going to go right. It's a very early stage. This is a long game, not something we are going to regulate and keep static for the next six months or the next six years. This is something that will evolve. The ability to adapt quickly to changes will be absolutely critical. You don't want to end up on the wrong side. Accept that we might make mistakes, be too open or too closed at the beginning, and then we need time to continue to evolve and grow it.

>> PRATEEK SIBAL: Thanks, Nicola.  The MP from South Africa.

>> CEDRIC FROLICK: I think what's important is that we as parliamentarians must be proactive. We must not sit and wait for something to happen. We must initiate it ourselves in the best interests of our people. As such, it's important that we create the necessary platforms where the voice of the industry in our respective countries, and on the continent of Africa especially, is heard.

Now, on these platforms, we must promote inclusion, ethics, regulation and standards, and also share best practices and educational resources that can benefit us. Following the good judge, there's an African proverb I want to end with, one that we as public representatives and parliamentarians must always keep in mind: if you want to go fast, go alone; if you want to go far, take the people with you. Thank you very much.

[ Applause ]

>> PRATEEK SIBAL: Thank you, sir. Well, there could not have been a better note to end on. I'll quickly take the moderator's prerogative and say we need to keep an open mind. We need to really focus on why we want to regulate, what we want to regulate and what the impact of that regulation is, without just responding to the hype, and be more nuanced in our discussions and communications. We have discussed issues related to localization and to the alignment that was mentioned with human rights and internationally agreed standards. Finally, in terms of strengthening the ecosystem, capacity building remains a major challenge and an opportunity, because today we are talking about AI, but tomorrow there will be other technologies, and we need to strengthen regulatory capacities in general to have more informed dialogues like this one.

With that, I would like to thank all the panelists for your time, and our honorable MPs for your thoughts and insights, and we will continue to engage with you in these conversations going forward. Thank you.

[ Applause ]