IGF 2024 Day 2 Press Room PT Session 3: Researching at the Frontier, Insights from the Private Sector in Developing Large-Scale AI Systems (RAW)

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> LATIFA AL-ABULKARIM: Good morning, ladies and gentlemen, and welcome to the second day of the parliamentary track. A very good morning from Riyadh; although the weather is colder than usual, I'm sure this session's conversation will warm us up, along with the valuable insights and rich information we will be sharing, especially as we are having this conversation between the parliamentarians, whom I would call core stakeholders, and the private sector.

So, today we are going to discuss researching at the frontier and learn more about how to balance innovation and regulation in practice while developing large-scale AI systems. Please join me in welcoming our esteemed panel. Ivana Bartoletti is Global Chief Privacy Officer at Wipro. She is a privacy and data protection professional and a visiting cybersecurity and privacy executive fellow at Virginia Tech's Pamplin College of Business. She helps global organizations with their privacy-by-design programs and with the privacy and ethical challenges relating to AI and data.

She is also the co-founder of the Women Leading in AI Network, a lobby group of women from different backgrounds that aims to mobilize the tech industry and politics to set clear governance of AI.

Next, we have Basma Ammari, the Director of Public Policy for the MENA region at Meta. She leads a team that focuses on tech regulations and policies, promotes platform integrity, and supports the innovation ecosystem. By practice, Basma is an international development and public policy professional with 20 years of experience, having worked at The World Bank in Washington, D.C. and across Africa and MENA, as well as at social impact organizations and governments in these regions. Basma has worked across several sectors and contexts, from education to health and community development, and across several countries, including in conflict and post-conflict zones in West and East Africa and MENA. She worked at the Prime Minister's Office of the UAE as an advisor on strategy and innovation. She holds a bachelor's degree in finance and an MBA.

Last but not least, we have Fuad Siddiqui, EY's global innovation and emerging technology leader. In that role, Fuad helps unlock new value through economic foresight, challenges established thinking, and advocates for inclusive and sustainable growth models. He brings more than 20 years of experience spanning international markets, advising clients on diversification strategies and on how to win by capitalizing on the next technological evolution. Thanks so much, all, for joining us.

And let me explain that the main goal of this session is to tighten the channel between parliamentarians and the private sector: to hear from our esteemed panelists how these companies are designing AI systems to enhance productivity without compromising standards, and what their views are when it comes to AI regulation. Do they favor soft self-regulation? What exactly do we mean by sandboxes, and how do they relate to the regulations we are working on in parliaments? And are you among those companies that always come to parliaments saying "please regulate the market", or do you now have your own strategy or gradual thinking about how new digital technologies in general should be regulated?

And what are the safety and social impacts of LLMs, and how can we mitigate the different types of risk? As we know, risk is not all categorized as low or high; there are also geopolitical and further risks.

So, I will start with you, Ivana, please. Explain to us: what does privacy by design look like in practice, and how can companies embed it within the AI development lifecycle?

>> IVANA BARTOLETTI: Thank you so much, and it's absolutely great to be here. I just wanted to start by saying that I think this is a really, really important session, because all around the world politicians like yourselves are grappling with what AI is and whether or not it needs ad hoc regulation. As somebody who has grown up within the privacy field, I will start by saying that privacy plays a huge role when it comes to artificial intelligence.

And I want you to understand that a lot of countries around the world are, at the moment, creating privacy and data protection regulation, Saudi Arabia, for example. This is very important because one of the risks related to AI really concerns the rights and freedoms of data subjects, of individuals, and the fact that individuals' data needs to be protected and secured when it comes to artificial intelligence.

So, my first encouragement to parliamentarians is to not jump into this "We have to regulate AI." Okay? This is because it's really important that in your countries, you look at how AI is governed and regulated right now. Okay? Privacy regulation, consumer regulation, anti-discrimination regulation, liability: all of these already apply to AI. So, to parliamentarians, I wanted to say: don't think that AI exists in isolation. It does not. A lot of the existing legislation that we have across different countries already applies to artificial intelligence, and AI is not an excuse to say, well, we don't care about existing regulation, we are going to create new rules.

So, first of all, make sure that we do not jump into regulation like this. Privacy is important because a lot of the harms that we discussed, for example, in the opening session yesterday, will affect individuals. And this matters because when we talk about harms that come from AI, for example, if you use algorithms to make decisions, or if you train large language models on data taken from all around the web, what you are talking about is people. Okay?

And therefore, privacy legislation is important because it will protect a lot of individuals, and it will force organizations to, as much as possible, build privacy, security, and legal protection by design into what they do.

Now, governance comes at many different levels. Governance comes from companies. So, we as organizations have to do all we can to be responsible. And you as parliamentarians, you are in command. You have to say: "Companies, you have to be responsible for what you are doing. Show us the best practice." Right?

Then there is regulation and governance that come from states and governments, and then there is international governance, for example, the kind we are building here at the Internet Governance Forum.

For companies, privacy by design means that you say to organizations: this is why your privacy laws are important; privacy is not an afterthought. If you are using AI to recruit individuals, for example, to say "I am going to hire this person, I am going to promote this person, I am going to give this person housing, I am going to decide whether this person goes to jail or not", whatever you are using AI for, you have to make sure that you know what data it is using, you know that it is accurate, you know that you are not discriminating against certain individuals because you have not done enough due diligence on the data and all the other possible sources of bias, and you are transparent. And you have to say to companies, and I come from a global company, there is no excuse. You have to be transparent in a meaningful way.
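
One piece of the due diligence described here, checking that a hiring model does not select candidates at sharply different rates across groups, can be made concrete with a small example. The sketch below is a minimal, hypothetical illustration of the "four-fifths rule" used in some employment-discrimination audits; the data, group labels, and threshold are invented for illustration and are not any company's actual procedure:

```python
# Minimal sketch of a disparate-impact ("four-fifths rule") check on a
# hiring model's decisions. Illustrative only: the data and groups are
# made up, and real audits involve far more than this single ratio.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(decisions, threshold=0.8):
    """Flag groups selected at under `threshold` times the best group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: (rate, rate >= threshold * best) for g, rate in rates.items()}

decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
for group, (rate, ok) in four_fifths_check(decisions).items():
    print(f"group {group}: rate={rate:.2f} {'OK' if ok else 'POTENTIAL ADVERSE IMPACT'}")
```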

And demand this of companies. For example, if you think about the European AI Act, which a lot of people criticize, it doesn't really add much. It is the legislation in Europe around AI, and it essentially says that before you market a product, you have to demonstrate that you have done your due diligence, including privacy by design and security by design. It's not that it comes up with completely new requirements. It just says: before you hit the market, that's what you need to do.

So, just to conclude, privacy by design is important. There are a lot of challenges for privacy in AI, obviously, because, just to be clear, discriminative AI, the machine learning we have known so far, is different from generative AI and LLMs. We are talking about different things, and doing privacy by design in one area is very different from doing it for LLMs, where, for example, it's difficult even to say what "privacy by design" really means.

So, there's a lot to unpack here. But please leverage your privacy and data protection legislation to ensure that the data of your citizens, and of the people living in your countries, is safeguarded in AI.

>> LATIFA AL-ABULKARIM: Thank you so much. I have a lot of questions, but I'm trying not to ask them for now, to keep them for later. So you are recommending that we focus on privacy laws, personal data protection laws, security guidelines, for example, and that we think about existing laws and how they might need amendment. This is something that we also need to think about.

And we should think about the relationship between, or the oversight of, parliamentarians and the private sector. We want to know what the private companies are doing: are they following a certain governance structure or framework? Do they need to improve that framework somehow? So this is your recommendation. What is happening now: we have the AI Act, and then, for example, I would point to the Chinese approach; they started with different gradual laws, and now they are trying to merge them into one AI act. So, I don't know if the end of the journey is for several laws to be combined into a single law, or whether we still need separate ones; the Canadians, I think, are following your approach of amending some existing laws. So that's very interesting.

Maybe we will come to you, Basma. Ivana has mentioned the why, and we want to hear the what from your side: how should companies developing AI systems address the risks arising from the technology and mitigate misuse? And do you use any human-centric design when it comes to, for example, Llama, Meta's LLM, as one of those elements?

>> BASMA AMMARI: Thank you, good morning. That's a very good question. I think it's a natural continuation of what my friend Ivana was just speaking about. One thing to note, maybe before I start, is that AI has been around for a very long time. AI is what has been underpinning the tools that Meta uses. So, what you see, and what you have been seeing on your Instagram or Facebook accounts since the early days, is powered by AI. The content that you see, and the recommendation of content, is also underpinned by AI.

So, that's sort of the first point. As we move into Gen AI, Meta has adopted an open-source methodology with its large language models. What that means is that these large language models are made available for practically everyone to use and to build on. And why Meta decided to do that is not completely altruistic; it's really to, one, improve access to AI, but also to make these models better. And what I mean by better: the more people, the more experts, the more developers build on these large language models, the more you are helping us restrict biases, societal biases, for example, from being adopted by the AI. By getting more people around the world, and more diverse people from around the world, feeding into these large language models, you are supporting them in becoming fairer, more transparent, and more representative.
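
To make "open weights that anyone can build on" concrete: a developer can load a released Llama model with the Hugging Face transformers library and generate text locally. The sketch below is a minimal illustration; the model ID and generation settings are assumptions chosen for the example, and access to Llama weights requires accepting Meta's license on the Hugging Face Hub:

```python
# Minimal sketch: building on an open-weights Llama model with the
# Hugging Face transformers library. The model ID and settings here are
# illustrative; the weights are gated behind Meta's license acceptance.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # hypothetical choice
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize the risks of deploying AI in public services."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```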

So, when we think about risks, what we are really asking are the very difficult questions around ethics, the ethics of AI, and the responsible development of AI. And we focus on four core areas. One is privacy, which Ivana covered extensively, but privacy is a very important one. AI models are built on datasets, and we need to ensure that these datasets respect privacy and privacy measures.

The second one is a focus on safety. These large language models do not become available the minute they are developed or invented. They go through several iterations, and guardrails are built into them to ensure that they are safe to use and do not contain any dangerous information. Meta has an agreement with the national safety institute of the U.S. government, so nothing comes out before it goes through these safety checks.

And the other one is fairness. Fairness is very, very important because, again, AI is built on datasets. If the datasets are only coming from the Global North, it means that this part of the world, and me as an Arab, my culture and my history, are not being reflected in the AI. If we then ask the AI social questions or political questions, the answers can be one-sided.

So, how do Meta and other companies ensure that these models are as fair and representative as possible? We do that by, one, making them open source, but also, two, by engaging a large group of experts within and outside the company and testing the AI frequently before releasing it, to ensure that in all the languages and all the countries where it becomes available, it is representative.

And the last one is transparency, which is also very important. How do we ensure that these models, and whatever they produce, are transparent? We have techniques such as watermarking, which will help protect against deepfakes, brand impersonation, and so on.
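
The basic idea behind watermarking generated media can be shown with a deliberately simple toy: hide a bit pattern in the least significant bits of an image's pixels, then read it back to verify provenance. This is only a sketch of the embed-and-verify concept; production systems, including whatever Meta actually deploys, use far more robust and tamper-resistant schemes:

```python
# Toy illustration of image watermarking: hide a bit pattern in the
# least significant bit (LSB) of pixel values, then extract and verify
# it. Real provenance watermarks are far more robust than this.
import numpy as np

def embed(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write `bits` into the LSBs of the first len(bits) pixels."""
    flat = pixels.flatten().copy()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract(pixels: np.ndarray, n: int) -> np.ndarray:
    """Read back the first n embedded bits."""
    return pixels.flatten()[:n] & 1

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
mark = rng.integers(0, 2, size=16, dtype=np.uint8)

watermarked = embed(image, mark)
assert np.array_equal(extract(watermarked, mark.size), mark)
print("watermark verified")
```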

I can keep going, but the summary of all of this is that, one, it's in the design that we monitor for risks. It is not necessarily done by going after stringent, inflexible new regulations, because regulations already exist. The question is how we ensure that these regulations cover AI in one way or another without stifling innovation. That's one.

And, two, it is done by following a principles-based approach and a use-case approach. So let's regulate in a risk-based manner rather than regulating the AI itself, because AI is out of the box. And if we decide to go for full-fledged regulation, we might find ourselves, as nations, falling behind in promoting innovation. Thank you.

>> LATIFA AL-ABULKARIM: Thank you so much, Basma. Risk-based regulation, use-case based regulation, and principles, yes. I want to compare principles and risk as bases for regulation: which do you think is the better approach? We have had this discussion for a long time with different regulators. Should we, for example, follow the AI Act approach of risk-based regulation, or principles-based regulation?

>> BASMA AMMARI: I would say principles-based regulation, and through partnerships between the private sector and the public sector. But I also think they go hand in hand. So if you are designing the right principles...

(Overlapping speakers).

>> LATIFA AL-ABULKARIM: Right, risk-based is going to be a very detailed one. This is one of the areas of discussion that is always on the table when it comes to AI regulation. Very interesting.

So, Fuad, your bio inspired a lot of questions related to the relationship between EY and the clients you work with. With the private sector as a core driver of entrepreneurship and innovation, how have the development and implementation of AI addressed social needs in various parts of the world? If you have examples from different sectors and domains, that would be very helpful. Thank you.

>> RAJ SHARMA: Good morning. Delighted to be here, and it's always great to be back in Saudi Arabia. I see the greatest innovations happening here, pushing the limits of AI systems innovation as well, so that's fantastic.

To all my esteemed colleagues and parliamentarians, I would say that I see you as the future technologists. You are almost not politicians anymore; I think as we go into the next decades, you will all be the future technology leaders driving the future of your countries.

Just one thing before I give some examples of which sectors are driving certain innovations. One level-setting point that's really important to understand is that AI by itself is not just a technology. It's a combination, an intersection, of a number of technologies that have to work together hand in hand to deliver business outcomes.

So, just as you have built electricity networks, an electricity grid, you will be building an intelligence grid. That intelligence grid, in my opinion, comprises three Cs. The first is what I call the basic connectivity layer: how do you move data around? You have built 5G networks, you are getting into 6G networks, and you have space technologies, satellites and so on. Moving data securely, at high performance and low latency, will be critical.

Then you have a computing layer, the computing infrastructure where you house and federate your data. And then you have a control layer, the software systems and AI systems embedded within that. That three-layered structure of connectivity, computing, and control is what I call the intelligence grid. And for nations to protect their sovereignty and data, and to innovate and drive new investment, managing that infrastructure will be very important.

Now, what I am seeing is that whether it's manufacturing, agriculture, or healthcare, some flavor or permutation of the three-C model is being implemented. Let me give you a couple of examples. We have been working with a large pharmaceutical and biotechnology company whose name you may have heard, Bayer. They have a unit called Bayer Crop Science. They have a long history of developing insights and feeding agronomic advisors, who in turn go to the farmers to help them understand how they should act and what understanding of the crop they need to drive better yields, et cetera.

Traditionally, developing a synthesis of the very specific needs of a specific crop type took a lot of time. But they built a library of knowledge. Now, with Gen AI coming in and working with Microsoft on the cloud layer, we are trying to democratize this whole piece. We are building agentic systems around it so that the agronomic advisor role becomes much more ubiquitous and much faster, so that the knowledge about what a crop needs for better precision growth, what nutrient levels, what water levels, can be disseminated in a much more ubiquitous and democratic fashion.
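
The pattern described here, putting a conversational layer over an existing knowledge library, is commonly built as retrieval-augmented generation: retrieve the most relevant documents for a question, then pass them to an LLM as context. The sketch below is a toy illustration of that retrieval step using TF-IDF similarity; the documents are invented, and this is not the actual Bayer/Microsoft system:

```python
# Toy sketch of the retrieval step in retrieval-augmented generation
# over an agronomy knowledge library. Entirely illustrative; not the
# actual system described in the talk.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

library = [
    "Wheat in sandy soil needs nitrogen applied in two split doses.",
    "Tomato seedlings require about 25 mm of water per week in hot climates.",
    "Maize yields improve with potassium when leaf edges turn yellow.",
]
question = "How much water do tomato seedlings need?"

# Vectorize the library and the question together, then rank by similarity.
vectors = TfidfVectorizer().fit_transform(library + [question])
scores = cosine_similarity(vectors[-1], vectors[:-1])
best = scores.argmax()

# In a real system, this context plus the question would go to an LLM.
print(f"Context: {library[best]}\nQuestion: {question}")
```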

We are very proud to be driving that, because we are now seeing an effect right down the value chain, not only from a business perspective.

Just one more example. We talked a little about the privacy and consumer side of things, but when you look at national GDP growth and where some of the workforce is employed in Saudi Arabia, the UAE, and other places, the energy sector is very important as well. There is a client I am working with who has now instituted a programme around the digitalization of their oil wells. That's really important because, if you understand the oil and gas sector, it's a very geographically diverse sector; you have oil wells in remote locations, and any fluctuation in the conditions of a well can have a dramatic impact on production capacity.

So what this company is now doing with AI systems, following this intelligence grid model, is digitalizing the wells, putting AI systems on them, and then using AI algorithms in a cloud setting to monitor and control those wells. That has done two things. It has improved production capacity and reduced the disruption caused by any instability in the environment. It has also reduced the need to go out and do remote site visits, which has cut emissions and the sustainability impact.
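
The monitoring side of such a setup can be pictured with a very small example: stream sensor readings from a well and flag values that deviate sharply from the recent baseline. The sketch below uses a rolling z-score; the readings, window, and threshold are invented for illustration, and real well-monitoring systems are far more sophisticated:

```python
# Toy sketch of anomaly detection on well sensor readings using a
# rolling z-score. Data and thresholds are illustrative only.
import statistics

def detect_anomalies(readings, window=5, threshold=3.0):
    """Yield (index, value) for readings far outside the recent window."""
    for i in range(window, len(readings)):
        recent = readings[i - window : i]
        mean = statistics.mean(recent)
        stdev = statistics.stdev(recent) or 1e-9  # avoid divide-by-zero
        if abs(readings[i] - mean) / stdev > threshold:
            yield i, readings[i]

pressure = [101.2, 101.4, 101.1, 101.3, 101.2, 101.4, 119.8, 101.3]
for i, value in detect_anomalies(pressure):
    print(f"anomaly at sample {i}: {value} (possible well instability)")
```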

So, you see that it's a chain of technologies, implemented across critical infrastructure in a way that drives better productivity, safety, and efficiency. That's the whole notion of resilient economies built on the back of this intelligence grid, if you will.

>> LATIFA AL-ABULKARIM: Thank you. Thanks so much. A lot of interesting work here. And I'm sure that you are working on convincing your clients of these benefits from different perspectives, whether economic or social.

I will open the floor now. I don't know how many... yeah. I will just... I don't know if Celina is here. No? Okay.

I will open the floor for questions. We will start from Sahel and then go back here and finish from this side. Please. We have a good number of questions. I don't know how many minutes are left. Can we just... 15? Okay. Good.

>> PARTICIPANT: Thank you very much for this insightful session. My name is Maha Abdel Nasser, I'm from Egypt, a parliamentarian from Egypt. At the same time, I have an engineering background and more than 30 years in the ICT industry. So, I wear two hats: the parliamentarian hat and the expert hat.

And, actually, we now have this debate in Egypt about the legislation: having an AI act or just a framework. I have been talking to the minister himself; he wants an act, and they want a framework. In the industry, they want it to be just a framework or regulation because, of course, making any changes to an act takes a very long time, and the technology is moving extremely fast.

So, we still haven't settled this. But I think what you are saying is exactly right: we need to work on data legislation, because if we could pass good legislation for classifying data, for the free flow of data and all these things, that would help AI. We wouldn't need to do much else, maybe just ethics rules for AI.

My main question is about privacy. I think, and I have been in a roundtable with people from Meta, that we will have to sacrifice our privacy for the sake of AI in the future, in order to leverage all the benefits of AI. So, will this be the case or not?

>> LATIFA AL-ABULKARIM: Thank you so much. Who is next? Please, professor. I will come back to you here.

>> PARTICIPANT: Thank you, Dr. Latifa, and thank you to all the speakers for very informative talks.

Two points. First, on the question of an act: I think all parliaments in the world are now working on this somehow, and there are still debates. The question is: do you think, from your experience and from what you see, that it is enough to have, as she said, some general regulations rather than an act? Especially for AI which, as you mentioned, Mr. Fuad Siddiqui, will intervene in all aspects of our lives, by the day now, by the minute.

I also like what Ms. Ivana said about perhaps requiring companies to say: okay, show me what you have, show me your controls, so that I can follow up with you before you launch your products.

The final point would be this: I think the most important thing in AI, and we are almost all programmers here, from a computer science perspective, is the algorithm. We know that big companies, all companies, have the brain there; the most important thing is not only the technology, it is the algorithm.

So, how can we enforce, or make sure, that the algorithm will support privacy and will not support discrimination based on race, religion, ethnicity, whatever? Again, most of the talk is not about the algorithms. And I think companies will not say, "okay, I will share it"; their gold is the algorithm.

>> LATIFA AL-ABULKARIM: Thanks so much. I will go to this side, take more questions, and come back here. Can you please be concise, one minute, not more. Where is the mic? Please.

>> PARTICIPANT: Thank you very much.

(non English language)

(No English translation) ... a radical change in technology (No English translation) ... the IGF, the Internet Governance Forum, and what we have been talking about from yesterday to today: you are always talking about AI. AI is the game; AI is the name. And, of course, as you mentioned about oil and all that: in 1981, I did my master's degree on controlling the moisture in natural gas coming from the well.

So, we used microprocessors and control systems at that time; AI was being built even then. Now, we are talking here about parliaments, and how parliaments, the Shura, can benefit from this technology. As you know, governments, the executive part of the state, are advanced in adopting technologies, while parliaments are still just legislating and producing laws. But how do you see the laws you pass being executed through government rules and regulations? If you want to monitor the performance of these laws on the ground, then you need to use some sort of AI, advancing toward what is called "what-if" decision-making.

Because you have to see whether your law is doing the right thing or not. I think we need some sort of roadmap, and also a proposed model that parliaments can adopt to know how to (?) with government activities. Thank you so much.

>> LATIFA AL-ABULKARIM: Thank you so much. I am just checking... okay.

>> PARTICIPANT: Thank you. Thank you very much for organizing this discussion. My name is Silvia Dinică, I'm a Romanian senator, but I have a Ph.D. in applied mathematics. One of you said earlier that these models have been around for quite some time. But to be honest, from my experience in parliament, I would say that parliamentarians have very difficult homework ahead in dealing with AI models.

Also because the impact of these models is huge across many layers of day-to-day life. And they have to deal with it. They have to put out a framework that is fair, is inclusive, and doesn't leave anyone behind. And it's not quite like anything they have seen before.

Because most of the know-how is outside of the parliament. We need to bring it inside, and we need to put it in the hands of the legislators.

So, my question for you is: how do you see the involvement of the private sector, taking into account the effects on the job market, on education, all the effects of artificial intelligence? How do you see the involvement of the private sector in such a way that we all do well as a society? Thank you.

>> LATIFA AL-ABULKARIM: We have one here. And this is the last question. I wish we had more time; we will come back to you. We won't forget you.

(non English language).

>> PARTICIPANT: I am the president of the organization that groups all the professionals in the technology sector, and I'm also a university professor. So, a lot of different interests converge.

In Cuba, we haven't built an AI act; we first started working on a strategy so that we can regulate later. We have a lot of legislation to protect privacy and personal information. But how can we make enforcement possible? In technological terms we have a lot of legislation, but how do we enforce it?

And then, how can we use AI for policymaking? I'm sure we could provide lawmakers and parliamentarians with AI, so that when decision-making in law happens, we can use these tools. And I think the EU AI Act is a good starting point. If you have any experience or lessons learned on this topic, I would love to hear about it.

Of course, a divide is impossible to avoid, because AI works with data. Data is captured, and some of us don't have access to all of this information. So, this divide creates a lack of contribution of this information, and then we cannot feed the models so that their answers are more enriched.

And there's also a gap in processing capacity. So, maybe private companies can contribute more for those of us who face this divide, this gap, so that we can use our own source data, national data that we have already collected, normalized, and created laws for, but for which we don't really have the processing capabilities, and which we therefore cannot use for the good of our societies.

I think I have asked around three questions in one. But it's all about this...

>> LATIFA AL-ABULKARIM: I have noted all the questions, and they cover your main core questions. We have the last question here, and then we hope we still have time to answer all of them.

By the way, the next session is about innovation, so you can keep your questions for the next session.

>> PARTICIPANT: (non English language)

(No English translation)

>> LATIFA AL-ABULKARIM: Thank you so much. Thank you. We will start with... I'm trying to cluster those questions somehow.

So, we have the question regarding framework versus act; I think it's almost the same question from Maha and Sally. Then there is regulating the algorithm itself, or how we can know more about the algorithm. And there's another question, from Ghana, related to the same thing: the AI Act is a good start in terms of regulation, but how can we move from drafts to enforcement, legal enforcement? These are, I would say, almost the same type of questions. So, who would like to start? Ivana, maybe. Yeah.

>> IVANA BARTOLETTI: Thank you. Excellent questions. I wanted to start with a provocation. You are the parliamentarians. You know, you make the rules. Okay?

>> LATIFA AL-ABULKARIM: You are really faster than the parliamentarians, in terms of governance...

>> IVANA BARTOLETTI: Let me finish, let me finish. You make the rules, and that's important, because AI is great. We have seen it, right? All the things that we talked about. But there are also risks: risks to privacy, security, disinformation, all of that.

Now, I always say there is a good AI, and we have seen a bad AI. You need to make sure that in your countries you do all you can to stop the bad AI, because otherwise people will say: well, actually, I'm not going to trust this, I'm not going to use it. Okay? First point.

And that relates to the point from the Romanian senator. Of course, it's difficult, because a lot of the know-how is not in the parliament. But hold on a second. Hold on a second.

AI is not just technology. It involves data, and the way that we see the world, and that's your job. That's your job. Where you want to be in 10, 20, 30 years, that's your job. Okay?

And I'm saying this because it's really important for the future that the decision about where AI is going belongs to those who govern countries. Now, what does the private sector do? We can work with you. You can consult us. You can ask. We can simplify things and give you the technical know-how. But ultimately, I think it's fair to say that a lot of private sector organizations will say to governments: the ball is in your court on this. It's important.

>> LATIFA AL-ABULKARIM: This is it all the time, right?

>> IVANA BARTOLETTI: Yeah, but I wanted to say one thing. On privacy, for example: whoever tells you that there is a trade-off between privacy and AI, please do not believe them. Do not. Okay? You can require companies to enforce privacy. Okay? And on a lot of things, we need more research. You can direct where the research needs to go. How we can interrogate algorithms without direct access to them, that is research. You can invest, and you can decide where you want a lot of the research to go.

And it's important that we invest in research on issues such as how we keep monitoring algorithms.

>> LATIFA AL-ABULKARIM: Exactly.

>> IVANA BARTOLETTI: How do we validate them 10 years down the line? How do we make sure we control them? How do we leverage AI itself to do a lot of this work?

So, where you want to go, and where you want the research to go, is important. Now, the European AI Act, to me, and I'm a European and someone who has been involved: it's a good step. It's not perfect, by any means. But what it says is: regulate based on the risk. And how do you define the risk? You define it.

In the European AI Act, the risks are defined in terms of safety, based on the product legislation that we have in Europe, and in terms of AI that may infringe upon the rights that we share as Europeans. Okay, that's Europe.

You define what the risks are. And whether you enact new laws or update existing ones, copyright, for example, privacy, consumer law, it's the mindset that you need to change. The mindset is: these are the risks I see, this is what I want to protect. What is it in your countries that you want to protect as you engage with AI? It's the other way around, and I encourage you, please, to think the other way around.

>> LATIFA AL-ABULKARIM: Thanks so much. There is also a cross-border dimension. This is where parliamentarians may be quite worried: when we import technologies that we then use, there is a line. I can consider the risks that are national risks, but I also have to consider the risks that I didn't choose, yet are there.

So, I know it's a very interesting discussion. Basma, your points touch on two topics, mainly about the same thing: legislation, in its different matters and directions, and innovation. I will leave the innovation side to you: when do we need AI use cases for parliamentarians, and innovation centres to help them?

>> BASMA AMMARI: Yeah, I'm not going to touch upon the same issues that Ivana covered. But I heard, I think, two questions. One was about the algorithm, and what we do with the algorithm to ensure that it's not adopting the existing biases in our society, and there are plenty of them.

One of the godfathers of AI, a professor at NYU who is also Meta's chief AI scientist, Yann LeCun, advocates for, and encourages governments around the world, to digitalize their national archives, stripped of private information: no names, no ages, all of that. And even that information, whatever remains, goes through privacy checks before it's used for any AI to begin with, at least speaking for Meta.

So, one thing is digitalizing national archives, which helps guarantee that local languages, local culture, music, history, and so on are out there. Making that available in digital form creates information that AI can feed on, and in practice this makes the AI more representative, as I said earlier. So, that's one thing.

And I'm trying to think of...

>> LATIFA AL-ABULKARIM: There's a question from Romania about how we can ensure the models, or the algorithms behind them, are fair and inclusive, and about the private sector's role in terms of the market and labour, right?

>> BASMA AMMARI: The market and  

>> LATIFA AL-ABULKARIM: And labour.

>> BASMA AMMARI: In terms of workforce?

>> LATIFA AL-ABULKARIM: Yeah, workforce.

>> BASMA AMMARI: Every time there is a technological revolution, historically, we see the loss of jobs and then the creation of new jobs. Will no jobs be lost? Some jobs are being lost; that's the reality. And this is a technological revolution, and we are in the middle of it. So we have a responsibility, as industry and as governments, to come together to really upskill, and to integrate and innovate around our stagnant education systems.

One example here, actually, from Saudi Arabia: Meta opened an academy in partnership with Tuwaiq Academy to upskill the upcoming generation in tools for AI and tools for the metaverse. We graduated the first cohort last year, and we are graduating about a thousand students in the AI curriculum this year. So, this is, I think, our collective responsibility, and, yes, industry has a big, big role to play here.

>> LATIFA AL-ABULKARIM: Thank you so much, Basma. One minute, please: a request from the parliamentarians to EY, to collaborate and try to find new use cases for parliamentarians, helping them use AI to summarize legislation, identify gaps, and more.

>> RAJ SHARMA: To sum up in a minute: I fully understand the complexity of your jobs and what is at stake here, and I don't think the government alone has to drive it. The private sector is equally responsible.

I have a lot of friends in the industry, and one private sector leader in the U.S. told me: we developed a solution for one particular thing, and we almost treat it as a hammer; everything we see is a nail, and we try to drive that hammer through the problem. The reason I gave the example of the intelligence grid is that you as parliamentarians have to think about who your trusted ecosystem will be, and use them to drive an understanding of how that cross-pollination of knowledge will happen.

I'll give you one concrete example. We are working with a government at the moment, and what they have done very well is to develop a concept called the future technology observatory. What they have asked us, and some other consulting firms, to do is to help them understand what's coming down the pipe, and to develop a model: if something emerges around agentic systems or autonomous AI systems, what will that do to the different government entities and others? We are developing something called a future tech index to understand the inception of that technology along a few dimensions: security, regulation, ecosystem impacts, and so on. We then use that as a basis to test, with the recipient entity, what the maturity level is and how we can work together.

What that does is give you a concrete roadmap, and then you are in a better position to drive the discussion with the private sector or with the specific companies that are giving you the technology itself.

So, the bottom line is that it's almost like creating a future foresight council now, one which drives the mandate: not just "show me how this particular new thing is going to work", but how it works in association with the others. I'll give you an analogy: if you take a medicine and have a side reaction, you don't know how it's interacting with something else, right? This is the same issue. If somebody proves to you that their system works, it's not enough until you see how the ecosystem works, right? And that's really the important thing. I will stop there.

>> LATIFA AL-ABULKARIM: Thank you so much. I remember a quote from a friend. He said: we know how to build it, but we don't know how to use it.

Thank you so much, everyone, for joining us in this very intense, I would say, discussion, and we look forward to more collaboration between the private sector and the different parliaments represented here. Thank you so much. And now I would like to welcome the next panelists and moderator to the stage. Thank you.