IGF 2024-Day 2-Workshop Room 5- WS98 Towards a global, risk-adaptive AI governance framework-- RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> SPEAKER:    Can you hear us?

Paloma, can you hear us?

Can we try your microphone, also?

>> PALOMA VILLA MATEOS:    Hello?    Can you hear me?

>> MODERATOR:    That's Paloma, yes, yes, now we can hear you Paloma.

>> PALOMA VILLA MATEOS:    Great.    Thank you.

>> SPEAKER:    Hello?

>> MODERATOR:    Your microphone is working.    Perfect.

 

>> MODERATOR:    I think we're ready to get started. Welcome, everyone, to this session, organized by the International Chamber of Commerce. In case you're wondering, this is workshop number 92, Towards a global, risk-adaptive AI governance framework.

I am very glad that you decided to spend an hour and a half of your time with us this afternoon.

We have proposed this session not because there are not enough conversations on AI, because there are really quite a few, but because we wanted to find a way to take stock of the various initiatives out there on AI governance and governance frameworks, and to see if we can find some commonalities, or perhaps some ideas through which we can look at AI governance from a truly global perspective and push for a more interoperable outcome, some sort of common approach to how we look at artificial intelligence governance.

I'm not going to spend too much time introducing the landscape of AI because we all have heard a lot about it and I'm sure our speakers will talk a lot about it as well but I will take a moment to just introduce the speakers that are going to be here with us today trying to uncover some of these questions.

In the order in which they will be speaking on the panel, I have Lucia Russo, Mr. Thomas Schneider, Sulafah Jabarty, Alhaknani Noura, Ms. Paloma Villa Mateos, and Ms. Melinda Claybaugh.

To start the roundtable, I'll ask our panellists to share their experience in fostering trusted and responsible AI, and to share a few projects or practices they are working on that incorporate a risk-based approach to AI governance frameworks. Why have we chosen to ask our panellists this? Because when we look at governance frameworks around the world, we hear a lot of them say: yes, our governance framework is risk-based; the approach to AI needs to be risk-based. So there seems to be agreement on that, but little agreement on what that actually means. That's why we'll try to figure it out together in this session.

To first look at this, I'm going to turn to Lucia. I wonder if you could share a little information on how the OECD is looking at this, and what some key challenges and opportunities are to operationalize it?

>> LUCIA RUSSO:    First of all, let me thank you for organizing this very important session and welcome the other speakers and participants here.    So I will talk a bit about the way the OECD is promoting interoperability and international governance and I will mention a few examples of how we are putting this risk based approach into practice.

So just to start off, the cornerstone of the work of the OECD is the OECD Recommendation on Artificial Intelligence, which was adopted in 2019 and recently revised to take stock of technological and policy developments, notably advanced AI systems.

And since then, our work has been really focussing on how to move from these principles into practice. When we talk about a risk-based approach, we mean having a proportional system of duties and obligations that is tailored to the level of risk each and every AI system brings.

Already in 2022, the OECD developed its own AI classification framework in the form of a scoring table that evaluates AI systems along five dimensions: people and planet, economic context, data and input, AI model, and task and output. I don't want to go too much into detail here, but basically under each of these dimensions there is an evaluation: for instance, under data and input there are considerations related to privacy or copyright; under task and output, the autonomy level of a system; and in the economic context, the business function of the system, which in turn tells us about the impact that this system may have on its business environment.
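To make the shape of such a classification concrete, here is a minimal illustrative sketch in Python. It uses the five dimensions named above, but the 0-4 scale, the tier labels, and the worst-dimension aggregation rule are invented for illustration; they are not the OECD's actual scoring methodology.

# Illustrative sketch only: classifying an AI system along the five
# dimensions of the OECD framework described above. The 0-4 scale and
# the aggregation rule are hypothetical, not part of the OECD methodology.
from dataclasses import dataclass, field

DIMENSIONS = [
    "people_and_planet",
    "economic_context",
    "data_and_input",
    "ai_model",
    "task_and_output",
]

@dataclass
class SystemClassification:
    name: str
    scores: dict = field(default_factory=dict)  # dimension -> 0 (minimal) .. 4 (severe)

    def risk_tier(self) -> str:
        # Hypothetical aggregation: the worst single dimension drives the tier.
        worst = max(self.scores.get(d, 0) for d in DIMENSIONS)
        return ["minimal", "limited", "moderate", "high", "unacceptable"][worst]

hiring_tool = SystemClassification(
    name="CV screening assistant",
    scores={
        "people_and_planet": 3,  # affects access to employment
        "economic_context": 2,
        "data_and_input": 3,     # personal data, possible bias
        "ai_model": 1,
        "task_and_output": 2,
    },
)
print(hiring_tool.name, "->", hiring_tool.risk_tier())  # CV screening assistant -> high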

So this risk-based approach is what we see also in regulatory frameworks such as the EU AI Act, which takes a risk-based approach and establishes stricter measures for systems that are deemed to pose the highest risk to safety and fundamental rights in the EU.

And we see this risk-based approach also emerging in other frameworks; for instance, the G7 Hiroshima process, launched under the Japanese presidency in 2023, led to a voluntary code of conduct for developers that also aims at disclosure on AI governance in line with a risk-based approach.

And to build on this code of conduct, what we are currently working on at the OECD, under the G7 Italian presidency, is the development of a monitoring and reporting framework for these commitments, which means moving from the code of conduct, which can be, again, high level in a sense, to what it means in practice for companies to adhere to and respect the commitments embedded in this code.

This is obviously to respond to needs of transparency and accountability, but it is also, I think, a good example of how we go a level up from national borders to international cooperation that really works across jurisdictions, because it is developed by the G7 but of course is not limited to companies in G7 member countries; any company can adhere to this code of conduct.

And lastly, I would just perhaps mention another initiative that we have at the OECD, the AI Incident Monitor. Again, when we talk about risks, what we need to take into account is also the evidence on which we build the frameworks, and the objective of this monitor is to understand where actual harms materialize, so as to have better informed decision making when it comes to establishing what the high-risk categories are and how to regulate them.

So this is already an online tool. And it is also a reporting framework that is harmonized across different countries.

I'll stop here.    And happy to engage in the conversation later.

>> MODERATOR:    Thank you so much, Lucia. Quite a lot going on at the OECD, but it's not the only group doing this work. You mentioned how the OECD's work inspires work at the G7 and elsewhere. I also want to ask Thomas about your previous work, and now your work as vice chair of the CAI, as you were negotiating the convention itself and now the risk and impact assessment mechanism.

>> THOMAS SCHNEIDER:    Thank you very much. Actually, yeah, it's good that one of the sessions actually tries to concentrate on the risk-based approach and what that actually means, because we talk a lot about legal texts and so on, and we forget about the operationalization of all of this. Before going into how the Council of Europe's work fits into all of this, let me begin with an analogy to engines. There are many similarities. We have engines in machines that produce goods that are more or less good or dangerous for people; we have engines in cars, airplanes, tanks, and many other vehicles. It may be the same engines or similar engines. And they all of course offer opportunities to produce something, but they also carry risks. But we do not have one regulation for the engine. We have thousands of legal norms, not for the engine but for the vehicle itself, for the drivers, for the infrastructure; liability rules for parts of a car or parts of an airplane, for the airline, for the one selling the tickets, and so on. And we have thousands of technical norms, and we have socio-cultural norms. From culture to culture, there are different expectations on how to deal with risks. In some cultures, people expect the king or the president or the state to take care of their risks. In other cultures, you have more or less an expectation that people are capable of dealing with risks themselves. And you have everything in between. Basically, the same thing applies to AI as well. Again, the risks are very much context based, in terms of where you apply a certain algorithm or set of algorithms. Normally, it's not the algorithm itself. Algorithms are part of the machines or tools that we buy, like we have an engine as part of a car or part of an airplane. One thing is to look at the legal texts and the convergence of all the legal texts; as you say, they talk about a risk-based approach, they talk about impacts. The Council of Europe convention is built on a graduated and differentiated approach, which I think is more exact, because it's not just vertical, risk high or low, but also horizontal. The same algorithm may be treated differently in different contexts; even within the health sector you may have differences, and so on. The convention of the Council of Europe is an open convention, open to all countries in the world, so it's not an instrument only for Europe; it just requires states to have mechanisms in place.

So it's a very general requirement to have functioning mechanisms in place. And it says what they should be able to deliver, i.e., identify risks with regard to human rights and Rule of Law, and that states have remedies in place in case risks actually turn into impacts, and a mitigation plan, and so on. It doesn't go into further detail. This is where the second instrument comes in, which the Council of Europe is currently working on. This is done in cooperation with the OECD, with standards bodies, and with hundreds of participants from Civil Society, academia, and business. It's a nonbinding instrument accompanying the convention, on several levels. It's a methodology for a human rights, democracy, and Rule of Law risk assessment tool. The level-two document is a document of about 20 pages giving guidance on what you need, which is a context-based initial risk analysis and stakeholder engagement, in order to see whether your initial risk analysis goes in the right direction or whether you're missing something.

Then there's the actual risk analysis, which is a classical checklist questionnaire.

Then there's a mitigation plan. So if you realize that a risk starts to become reality, how will you react and protect people?

And then of course there is some logic about iteration: how you do this with a technology that is evolving.

And it builds on the work of the technical standards institutions that are also participating; it tries to make the link between the legal text, a legal norm, and a technical norm, while also giving the flexibility to take into account socio-cultural norms and expectations about how to deal with risks, which you may not be able to harmonize. I think this is important.
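As a rough illustration of the iterative cycle described here (context analysis, stakeholder engagement, risk analysis, mitigation plan, iteration), the following Python sketch models the loop. All function names and the toy risk checks are hypothetical placeholders, not the Council of Europe's actual methodology.

# Hypothetical sketch of the iterative cycle described above:
# context analysis -> stakeholder engagement -> risk analysis ->
# mitigation plan -> iterate as the technology evolves.

def initial_context_analysis(system: dict) -> list:
    # Identify candidate risks from the deployment context.
    risks = []
    if system.get("affects_rights"):
        risks.append("human rights impact")
    if system.get("public_sector_use"):
        risks.append("rule of law / due process impact")
    return risks

def stakeholder_review(risks: list, feedback: list) -> list:
    # Engagement can surface risks the initial analysis missed.
    return sorted(set(risks) | set(feedback))

def assess_and_mitigate(risks: list) -> dict:
    # Checklist-style assessment paired with a remedy/mitigation plan.
    return {r: f"mitigation and remedy plan for {r}" for r in risks}

def governance_cycle(system: dict, feedback: list, iterations: int = 2) -> dict:
    plan = {}
    for _ in range(iterations):  # re-run as the system and its context evolve
        risks = stakeholder_review(initial_context_analysis(system), feedback)
        plan = assess_and_mitigate(risks)
    return plan

print(governance_cycle(
    {"affects_rights": True, "public_sector_use": True},
    feedback=["accessibility concerns"],
))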

 

And we see how difficult it is: the EU gave a mandate about two years ago to develop technical norms to operationalize and implement the AI Act, and both sides are still struggling to understand each other and to see whether they are actually able to come up with something. So this shows, and it's just one example, I don't blame them, it's a really difficult issue, how important it is that there is cooperation. The OECD is very helpful in bringing people together, the Council of Europe as well, and standards organisations and others. We need to build bridges between technical bodies and legal bodies and cultural bodies, again, so we understand how to make this work as a whole, and not just on paper as a legal text or a questionnaire for programmers.

So this needs to fit together.    That's a huge work ahead of us.    Thank you.

>> MODERATOR:    Thank you, Thomas. That was a great intro into the work of the Council of Europe on this. I want to keep building on this. As we move to the MENA region, I want to go to Sulafah next. What are your insights from working in a technology company in this region? And perhaps beyond Saudi Arabia, in the entire MENA context, what are some of the views you see on how AI technology works here, and how are risk-based approaches on the table here?

and also, what are some of the elements that we can maybe elevate into a more global approach?

>> SULAFAH JABARTY:    Okay. So I guess we all agree that AI has been reshaping the economy and society all over the world. And since we're speaking about a globalised economy, and a globalised area, which is AI, one of the most advanced technologies in the world, the globalization aspect here is much wider than in regular business and regular digital transformation.

And so, speaking about what's unique or specific, if we zoom out of this globalised space, I think the uniqueness of the MENA region, led by countries like Saudi Arabia that are investing heavily in AI, is the heavy investment and leadership in digital transformation, supported by government and supported by the Private Sector. As an example, the Allad company was launched under the PIF recently with capital of more than $1 billion. That is a dedicated company investing in AI, deep technologies, and manufacturing, and localizing all of that out of here, making the best of international minds, international technologies, and the investment environment here.

Also, the investment in the sector, whether it's financial investment or investing in minds, regulations, and the government mindset, has actually given us a result: we reached number one this year in the United Nations digital government indicator, where six years back we stood at number 52. That just shows how much investment is going on, and the speed. And speed cannot be based only on financial investment. It definitely demands collaboration between mindsets, government, Private Sector, and academia, all together, backed of course by a very strong economy.

A second unique aspect, in my opinion, which I guess everyone also agrees upon, is such a young, tech-savvy generation of youth, which makes up the biggest part of our population. That also adds to the speed of these technologies. A lot of technologies are just embedded and live before we even know about them. And I guess this is also part of why regulations are very important. When we speak about risk-based regulations, the advantage is that they are flexible, supposedly, to meet these different levels of maturity of applications and technologies. That's why flexibility is very much needed in these kinds of regulations.

Also, adaptability to the different and evolving kinds of risks, and differentiation between the kinds of applications, versus the kind of blanket regulations that are definitely not needed for these kinds of technologies.

So if we go back to the globalised framework, I guess we all know that the European Union this year activated its landmark AI law, which is considered the leading global law; nothing this mature existed before. Considering the kind of effort put into such a law, we speak today about localization. In technology, we never believe in starting from scratch. You capitalise on what's there: open source and other technologies you can build on.

It needs to be the same kind of mindset in terms of regulations. So what we need to do in MENA is take those frameworks and then just fill the gap, taking into consideration the unique socioeconomic, cultural, and technological differences, which I don't believe are going to be many, speaking about this kind of domain, which is AI.

And then, embedding them. As we speak, a lot has already been done in Saudi Arabia in this area; I speak about Saudi Arabia as leading in the region.

We have an authority for data and AI. They have launched a couple of frameworks in different areas. And I believe we can definitely match and fill the gap between what's been done internationally and locally to move this faster. So summing that up, I guess what we all agree, in MENA and globally, is that this kind of risk-based framework supposedly gives a much wider space of flexibility, adaptation, and inclusivity, for everyone to make the best of what's going on all around the world, and for us to be able to lead that on an ongoing basis with sustainable framework adjustments. Yeah. Thank you.

>> MODERATOR:    Thank you very much, Sulafah. Lots to learn from. I am always amazed when you quote that number, from 58 to one in six years, and what that requires: collaboration with the various expert groups, and of course also the energy and the talent of young people. Which brings me to you. I'm sorry I messed up your title before, but given your work at the university in the Information Technology Department, how do you see the role of universities in building this new generation of developers and tech workers?

>> ALHAKNANI NOURA:    Hello. I'm pleased to be among the distinguished speakers. I want to start with what Ms. Sulafah mentioned: Saudi Arabia and the MENA region are leading. AI actually has a pivotal role at the core of Vision 2030, because the aim is to establish the Kingdom as a global leader in technology and innovation. We're spearheading this effort, and as Ms. Sulafah mentioned, they have published several frameworks: one in September 2023, an adoption framework in September 2024, and, in January 2024, artificial intelligence guidelines. So they are keeping up to date with everything that's coming, in both technology and legislation.

And in the latest publication in terms of AI, the artificial intelligence guidelines, SDAIA ensured responsible use of AI, data privacy, and ethical standards, and tried to balance innovation with societal values, potential risks, and mitigation strategies. They explicitly mentioned certification fraud as a risk. As you all know, AI can now produce human-like content: essays, even detailed research, undermining traditional educational and professional standards.

Therefore, SDAIA also stated mitigation efforts for assessment and training explicitly here in Saudi Arabia.

In terms of AI adoption in higher education institutes: the adoption and management of new technology in higher education institutes can be complex due to their diverse constituents, including faculty, students, and staff, each with distinct needs and priorities.

But there's a paper that was published in September 2024, titled AI Governance in Higher Education: Case Studies at Big Ten Universities. This study examined how prestigious universities in the United States are approaching the governance of artificial intelligence, particularly in response to the growing influence of generative AI in higher education.

They reviewed AI governance policies and strategies at 14 prestigious universities. We can see from the study that universities have started investing generously in AI governance. The Massachusetts Institute of Technology has invested $1 billion in AI initiatives. The University of Utah launched a $100 million responsible AI initiative, promoting the use of AI to tackle societal issues while protecting human rights.

And there's the Centre for AI Governance, focussing on AI ethics, policy development, and international cooperation, and the University of Oxford launched the Oxford Martin AI Governance Initiative to understand and mitigate AI risks through research and collaboration.

And also the University of Birmingham.

There have been online spaces created at another university to discuss gen AI within the university community.

So these universities are developing programs and research initiatives and governance structures to address all these issues.

And to go back to the MENA region, again, I'll return to Saudi Arabia. In Saudi Arabia, universities are focused on AI, obviously in line with Vision 2030. KSU established a centre and an office that are both concerned with AI. The Centre has numerous partnerships with technology companies in the field of AI, while the Office is concerned with AI research and applied programs that serve different economic disciplines.

Again, there's KALS; they also established a centre of excellence for AI, dedicated to placing Saudi Arabia at the forefront of AI research in the region and globally.

>> MODERATOR:    Thank you. There's quite a lot that universities are able to do, and I guess they're able to do it when they're supported to do it.

So, again, I think what you said fits very nicely with what the panel has said already: how we make sure that expert communities, whether based in economic and Private Sector circles, government, or international organisations, manage to come together and build on each other's knowledge to further this work.

And we need the expertise of all of them if we want to get the approach right.

In that vein, I want to turn to Paloma online and ask: where do you see the role of the Private Sector's efforts in driving responsible AI innovation by design? And what is the role of the policies that are necessary around this? Paloma?

>> PALOMA VILLA MATEOS:    Yeah, thank you. Can you hear me? It's okay? Okay. Thank you. I do think the magic word here is AI governance. And this applies to both the private and the public sector. I do think that we need to be humble and have a substantial conversation between us, because otherwise we will not benefit. I think we have done a great job in the last decades in the different international organisations, and also in the companies, and the question for us, in the end, is how to ensure that AI is developed responsibly while fostering innovation.

And I do think that AI governance, from the company perspective, lies in four interconnected pillars, which is really important. The first one is principles and guidelines, which mainly come from international organisations. Regulation is the second pillar, and technical standards the third. Most have already been mentioned, but I think it is important to try to get this interconnected proposal, starting from principles and going to the more sophisticated development of AI. No?

Regarding, for example, the principles and guidelines, I do believe that the OECD, the Council of Europe, UNESCO, the executive order, all these things going around the world, are linked or directly connected to what the companies are doing. I think the developments of the last two decades have run quite in parallel. This is very good news. The principles are there when we talk about transparency, fairness, privacy, human rights, democracy, Rule of Law. Microsoft, Meta, we've all been working with the Council of Europe and the OECD on a daily basis, and with UNESCO. So these principles are there. I do think, and this is my positive insight, that we're on the same road. The problem comes, as I think Thomas said, when we go from the high-level principles to the lower level: how to apply all these principles. Now, for example, at the OECD and many other organisations, we are developing in a more sophisticated way things related to AI, not only high risk. I mean, the high-risk approach is everywhere, there's no discussion on that, but there is discussion on more specific topics. For example, AI and intellectual property. This is, again, the problem: how do we make this interoperability possible between Europe and other regions, where the history of the law is completely different? How can we find this common interplay? No?

So the second pillar is regulation. I think that here, for companies, in the case of Europe, where the AI Act is already in place, basically there's a principle we already discussed. I do think companies are doing a great job, for example, signing the EU AI Pact, which is really relevant: companies trying to voluntarily implement the AI Act before it is formally applicable. No? And many companies are engaging in the core commitments: an AI governance strategy, mapping their AI systems, and developing AI literacy inside and outside the companies.

These three core commitments of companies are relevant for what we are talking about now. I mean, this collaboration between the institutions, the public sector, and the companies is extremely relevant. The problem here in the second pillar, in regulation, is how we will implement regulation. Again, this is the problem.

Maybe the problem is not the regulation itself, but all the standardization, what it implies, and how high-risk a system is. And sometimes that is a problem; there is a grey zone. Sometimes, when we companies talk with EU institutions, the problem is that we're trying to resolve the standardization process very quickly, which is very difficult, and the technical details are really difficult. So when I talk about being humble and having a substantive conversation between the public and Private Sector: sometimes we have a legal instrument from the 20th century, but the technology is from the 21st century. This is a challenge. A challenge for the institutions, but also for the companies, because we have to comply with these regulations even when the legal framework is not fit for purpose.

For the third pillar, which is technical standards, I have to say that companies, Telefónica and many others, I'm talking about Telefónica, are involved in a substantial process, participating in all the conversations, also with the AI Office on the standardization of the code of practice, but we also have international standards with ISO and so on. In the end, we have a complex scenario with many standardization processes going on. So here we have a lot of work ahead.

But I have to say that this conversation is also taking place with the participation of companies.

And the fourth pillar has to do with self regulation. Here, I have to say that companies, in the last decade, especially those using AI internally and operating data services, have put in place AI governance strategies with a very substantial model, scaling the process internally, with responsibility within the companies, and also ways to identify risks internally that are really in line with all you have already said.

I think self regulation is relevant because the technology moves very fast. We saw that during the process of the AI Act: we started talking about AI, and in the end the global focus of AI shifted in the middle of the process, because the technology is faster than the framework.

So I'll stop here because I think we can go in depth later.    Thank you.

>> MODERATOR:    Thank you. Can you hear me? Thank you so much for that, Paloma. It's quite a complex framework, as you said. I think one commonality of all those four pillars is the collaboration between industry and regulators to make sure that we get the balance right: that we balance the innovation and rapid development of technology with some of those commitments and goals that we want to address through risk management.

I want to stay with some of this idea as I turn to Melinda. We've heard a lot about the safety risks of AI, and there have been a number of global summits already on this issue. I'm wondering if you might want to share a few lessons learned there, and see what we can do to get this balancing act right between innovation and risks. But also, what is the Private Sector already doing, and what processes does it have, for that balancing act?

>> MELINDA CLAYBAUGH:    Thank you so much. Just a little bit of context on Meta's, my company's, approach and how we're coming at the AI conversation.

So we have two main buckets of AI products. One is our generative AI products, which are in our apps: Facebook, Instagram, and WhatsApp. You may have seen the Meta AI assistant, basically a chatbot powered by a large language model that you can interact with, asking it to do things and answer questions.

Also, we have image generation tools, things like that, that help you create content online.

The other bucket of our AI products is a large language model called Llama, of which we have released several generations. It's an open source model, which means we make it freely available to anyone to download, so we're essentially giving away, you know, many, many millions of dollars of investment to entrepreneurs and developers who want to build on it for their own applications.

I think that's just important context for how we come at the conversation, as both a model provider and a gen AI system deployer.

So at the model level, or let me start at the gen AI system level, with our Meta AI Assistant: we assess risk the way we would assess privacy risks in general. We built our AI risk management programme on top of our privacy risk management programme. That is to say, any time a new feature or product or assistant is developed or improved in a certain way, it goes through a risk assessment and review process, mitigations are identified and applied, and there's a cycle of improvement.

In the same way as happens on the data privacy side.

With respect to our large language model, risks are assessed and mitigated at different points. At the data collection or pre-training stage, we're going out of our way not to collect personal data, identifying potential personal data and removing it, identifying data that may have copyright connections, going through all of those risks at the pre-training stage.

Then we implement red teaming, other safety testing, and risk assessment and mitigation processes to make sure the model we're releasing is safe, and then we release it and developers can build on it.
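A minimal sketch of the staged pipeline described above (pre-training data checks, then safety testing, then release). All function names and checks are hypothetical placeholders, not Meta's actual tooling.

# Hypothetical sketch of a staged release pipeline: filter risky
# pre-training data, run safety evaluations, then release.

def pretraining_stage(documents: list) -> list:
    # Stage 1: drop records flagged for personal data or copyright risk.
    flagged_markers = ("ssn:", "all rights reserved")
    return [d for d in documents if not any(m in d.lower() for m in flagged_markers)]

def safety_testing(model_name: str) -> bool:
    # Stage 2: red teaming and evaluations before release (stubbed here).
    evaluations = {"red_team_prompts": True, "benchmark_suite": True}
    return all(evaluations.values())

def release(model_name: str, documents: list) -> str:
    corpus = pretraining_stage(documents)
    if not safety_testing(model_name):
        return f"{model_name}: held back pending further mitigations"
    # Stage 3: once released, downstream deployers apply their own
    # use-case-specific mitigations.
    return f"{model_name}: released, trained on {len(corpus)} documents"

print(release("example-llm", ["public article", "ssn: 123-45-6789"]))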

In addition to the product development process, we have also, as mentioned, signed up to multiple international frameworks. Domestically, to start, in the U.S. we were an early adopter of the White House commitments, high-level commitments to the safe deployment of advanced AI. Then we signed on to the Seoul Frontier AI Commitments. I think we are seeing a positive harmonisation around safety frameworks for advanced or frontier AI, and I think that will be furthered by the development of the various AI safety institutes and how they are going to work together to understand the science of risk identification, mitigation, evaluations, benchmarks, all of that.

So I think those are really positive developments.

I think where some of the challenges arise is in the more bread-and-butter AI. Not the kind of frontier AI safety stuff we're talking about, but how AI is being applied in our everyday lives, to maybe make decisions about us or offer us goods or services.

And I think that's where some of the stickiness comes up in terms of reaching consensus: what are the risks that we are trying to identify? What are the mitigations that should be applied?

And is there a global view on that, or should it be, you know, nationally determined?

Because there's going to be differences in how different societies view different risks.

So I think that's a really interesting thing to keep in mind, the difference between kind of the very advanced AI safety concerns, and then kind of the day to day bread and butter concerns.

Just a few general thoughts on risk. I think it's really important to focus on the marginal risk we're talking about. Because I think we tend to come to this and think, oh, my god, AI is new and it's different and it's terrible. When, in fact, we've been dealing with AI, classic AI, for a really long time. I think what people get concerned about is this really advanced stuff, that maybe we'll lose control of, or that maybe is doing things we don't understand, and all of that.

And so, you know, we have many, many legal frameworks that already govern things like data privacy and kids' safety online. So we have a lot of mature frameworks to draw from. I think from the company's perspective, what is going to be really important is how these things are rationalized together. So I think there's a risk of imposing, through the lens of AI, a whole new framework and regime on top of all the ones we already have, and then how do those relate to one another?

We're seeing this to some extent in Europe, in the AI and privacy conversation, and in how data can be used in AI or not.

And how does the legal regime on data privacy intersect with AI? The balance of innovation and privacy protection is really at a tension point. We all recognise data is needed for AI advances, but of course there are limits around it, and I think the unique nature of large language models means that we may not be able to implement data subject rights, or other things that arise in data privacy frameworks, the way we can in other types of data processing.

So there's a real life tension there that I think has to be grappled with.

Then, just two other points I want to make real quick. I think it's really important to focus on the use cases. For us, as a large language model provider, and particularly as an open source LLM provider, we release our model. We do all the mitigations that we can. We release it, and we have no idea how it's used. Anyone can build on it for any purpose. It's up to them to put into place the mitigations that are necessary for their particular use cases. So I think it's important to look at the value chain and really break down the roles and responsibilities of the various actors in the AI value chain, and what is in their control to identify and mitigate.

I think that's really an important conversation. Again, the use case conversation, and then particularly looking at what laws we already have in place. We already have laws about discrimination in employment in most places, and discrimination in housing and services. So what is new here and not already covered, and can we address those new risks in new frameworks as opposed to existing ones?

>> MODERATOR:    Thank you, Melinda for that.

It's been quite a rich conversation around this table. We've heard a number of ideas from the speakers here on what we're facing in terms of a risk-based approach to AI, and what elements we can build on.

So I want to move to our second round of questions. I have the same question for all of you, in addition to reacting to what you heard from one another: share a little bit on how you think forums like the one where we're sitting today, these global conversations at the IGF and other global fora, can help bring what you mentioned in your interventions to fruition, towards an actual global approach to the governance of AI that, as most of you highlighted, balances and allows the rapid growth of technology and innovation while making sure that the harms we fear are actually mitigated.

I don't want to summarize what you all said because it would take too much time, but I hope we can take this one question, do a round robin around the table, react to one another, and bring out those elements that can actually help in global conversations. So Lucia, you can go first. I'll hand the microphone over to you.

>> LUCIA RUSSO:    Thank you, Timea. It's truly fascinating to hear from such a diverse group of speakers. I think what resonates most with me in what we heard is, on one hand, this need for Multistakeholder conversation and collaboration; the need also to have a contextual and cultural approach to this type of regulation; and also the need to think in practical terms about what it means to translate these principles into concrete requirements, along with the factors we have advocated.

So, what I want to get at: we see some sort of regulatory fragmentation. This is no news to anyone. But perhaps we shouldn't seek to eliminate it entirely, which is maybe not achievable or even desirable, because, as we've heard, there are cultural considerations to be made: local values, technological developments, even cultural and institutional histories.

So I think the way we are approaching this issue at the OECD is really to have these Multistakeholder groups coming together. We have expert groups; overall, we have 600 experts who work with us, divided into expert groups that focus on specific topics. For instance, one of them is called Risks and Accountability, a name that speaks for itself. It is taking this approach of looking at the different risk management frameworks that have emerged so far, and trying to see where they share commonalities and where they differ. The idea is to develop responsible business conduct guidance for enterprises, which is not yet another framework to comply with, but more of a framework that would indicate to companies, especially those operating across borders, what complying with a given requirement in the EU means, for instance, in terms of compliance in the U.S. or in another jurisdiction.

So the idea is to really put this interoperability into practice, meaning having a level of alignment, or a level of understanding for operators, of where these different requirements intersect.

And this is the project we are currently working on; we should have the due diligence guide next year. Perhaps the last point I would like to add, which Melinda hinted at, is that it's a risk management framework looking not only at one specific actor but at AI across the value chain. Because it's not only one part of the chain that is responsible; there are upstream and downstream actors that also have due diligence requirements to comply with.

And that would go down to data, to the very first investment, so it's really a more holistic approach.

So yes, I would say that the value of these conversations is really to bring these perspectives together. It's the way to go.

>> TIMEA SUTO:    Thank you, Lucia. Same question to you, Thomas. What's the role of the global community here?

>> THOMAS SCHNEIDER:    Thank you.    It's actually interesting to see to what extent, and I think the value of a forum like this is to hear from each other where we are and to what extent we're on the same page or going in the same direction and to what extent processes are converging, legal processes, standardization processes, and also to what extent they may not be converging or don't have to converge.

A fundamental question not raised here is: who defines what a risk is, and what a too-large risk is? That differs largely from country to country and from AI to AI. In Liverpool, you have a river that nobody would swim in. There's a fence with a sign: danger, danger, beware. The fence dates from the 1930s, and another one was added in the '50s. It's a river with cargo ships. But elsewhere, thousands of people go swimming in the water; in Brazil, with access to the sea, it's a great thing to do.

If the government forbade people to swim there, the people would just say no. The UK and Switzerland are not 5,000 or 10,000 or 20,000 meters apart, but just to say: in the airline business, where the risks exceed people's personal knowledge, people are okay with trusting experts, and they are willing to agree on international risk management, because they want to be sure the airplane lands safely and they can't fly it themselves. But the closer it gets to your own life, the more you want to make the decision yourself. It will be the same with AI. With a heart surgery operation, you may be happy that it's clear what the red lines are and what safety tests the tools need to pass. But when it's about AI-generated content or your freedom of expression, expressing your cultural or political views, you may not want an expert or the government to tell you what is right or wrong. You may want to decide it yourself.

So harmonisation is fine where people don't want to have to care; they want to trust the experts. But there will be areas where they want to be the master, use AI for what they want, and discuss with their neighbours what they think is right or wrong, not with the government or people far away. So I think we will have to live with some kind of diversity in this field.

>> TIMEA SUTO:    Thank you, Thomas.    Sulafah, how do you see this?

>> SULAFAH JABARTY:    Well, capitalising on what was just said, I can see how we're all coming closer to the same area. I really liked what you said in terms of what we need to develop or not develop. Because this area is actually requalifying the whole drive; it's not just, okay, we need to regulate this sector, so let's go and make regulations every day and question everything because, as she said, this is a scary new thing. The idea is that we really need to be very objective but also very connected to the technology itself, and to society itself. I think Paloma, if I'm saying the name right, said something about how the speed of technology sometimes exceeds the speed of regulations. And it's not fair to ask businesses to slow down and just wait for regulations, which does happen sometimes. On the other side, in the business world, take cybersecurity as an example, a very highly regulated area and still part of this whole crowd, as they say. A very small example: some of the applications we provide to very highly regulated entities have to be adjusted every now and then to the cybersecurity regulations, which are updated very frequently in our country.

So with some entities, because they just give us the regulations as they are and want us to adjust the application to them, without an eye for the business owners themselves or their organisation, we end up in a place where only the authorized users can enter the application, and then we have to bring some concepts into it; we actually bring our business culture, our business understanding, to them.

And this brings us back to why we need Multistakeholder-governed frameworks: because we need to bring society in, academics in, business people in, all together.

We need flexibility, coordination, and awareness.    Awareness is a very important part.

To give people the right establishment and the right ground to think with us along the same harmonized approach, we need to enable them first to know what they need to know. That brings us back to being very clever and actually inviting the right entities and the right stakeholders to participate in this.

Some people are very closed in boxes of regulations, law, or academia, apart from the other side, which is the business itself. No one should work on this in a closed box. They need to be very much in touch with live, embedded data and informatics; this is what it's all about.

I'm sure we all sometimes find people working on this who are very isolated from the core and the spirit of this technology, AI, which is based on very live data and information flows. So in the end, I think we all aim to reach a very robust and adaptive framework that everyone can use all over the world. Thank you.

>> TIMEA SUTO:    Thank you very much, Sulafah.    Alhaknani Noura, how about you?

>> ALHAKNANI NOURA:    I see this forum as a very good place to get everyone together. I notice that everyone is afraid of what AI will do, of how AI will develop. I can see why, because when I started studying AI, it was just: I'm building an AI or machine learning algorithm in one specific area, and it will, for example, find a tumor. Now, it's a different thing. It's a generalized model. And what's happened is that even the users of AI really don't know how AI will respond; they train AI and then it responds how it responds. So it's important to regulate it from the beginning, from entering the data, from the early steps. Because once the data is in, it's very difficult. For example, a cake: before mixing the ingredients, you can still take them back out, but after you bake the cake, you can't take out the individual ingredients. It's impossible.

So I do see why it's a great concern, and I see it as a positive thing to have great concern, just to regulate it. But I see it is coming, and it's coming very strongly, because it is very beneficial, and we see the benefits of it day after day, with health care, with every aspect.

You can see it's very beneficial. There's a surgery that happened: a blind girl can now see because of an AI-assisted surgery. So there is huge benefit.

The fear, we can understand. But other than this, I think the governance should be very specific for each sector. It should be very different; we can't have just one framework that governs everything. Every sector is completely different, with its own characteristics that we need to consider, besides the region. So I think we're on the right track. We're working. It's a work in progress. And let's hope for the best.

>> TIMEA SUTO:    Thank you, Noura.    Step by step, and not one size fits all is the takeaway.

>> PALOMA VILLA MATEOS:    So what's been said is really relevant: the definition of "high risk." No? If we think in Europe of the AI Act, in the end we have a regulation mostly on high-risk applications, I mean. Here, we are developing this standardization process, and the problem is how to get from theory to the real world. And this is more difficult than some of the policymakers thought. Last week, for example, we were in Brussels having conversations with the AI Office, and in the next seven months they say they have to come up with this code of practice. They have thousands of people participating in this code of practice. And at the same time, we have to respond to a public consultation, again on the definition of some of the high-risk applications and so on.

So it's more difficult than it seems. In the end, we as companies have to protect people's rights, safety, and so on.

But in Europe we also have to protect innovation, and the ability to compete in the global economy. So this problem is really difficult. And I do think that engaging with companies is really relevant, because a purely theoretical approach sometimes works against what we are trying to do.

And in parallel, I have to say that we companies are also learning how to work with responsible AI. At the GSMA, for example, we are now working on a responsible AI maturity framework, trying to provide a framework for companies to work on an AI governance strategy so that, from beginning to end, we are able to provide ethical AI systems. So this is going hand in hand, and I think it is important, as I said, to combine and balance people's rights and innovation. This will be even more relevant next year, when in Europe, for example, we will see this new code of practice and standardization. It's critical now in Europe to get that balance right, because it could be a regulation that other parts of the world look to. So it is important that we do it right. Thank you.

>> TIMEA SUTO:    Thank you, Paloma.

Melinda?

>> MELINDA CLAYBAUGH:    Can you hear me? Yeah. I mostly echo what the others have said, but just on the point about the EU AI Act, I think it's an interesting reflection of how unsettled things are. With the code of practice in particular, there's still live conversation and no consensus on what even is a prohibited practice or what is a high-risk practice. You would think the prohibited practices would be fairly well understood by now, but they're not.

And so, I guess my recommendation for convenings and global convenings is to take the time to do it right. Because I think what's happening is that the EU AI Act was finalized in a frenzy around gen AI development, advanced gen AI development, and now they're having to figure out: actually, what is prohibited and what is high risk? Meanwhile, the clock is ticking on compliance for all the companies, so it's really a difficult situation to manage.

So I think it's about building more consensus around some of the risks and high risks, what's in bounds and out of bounds, recognizing of course that there will be cultural differences.

But taking some time to set that step right, rather than rushing ahead, as the technology is still advancing as well.

>> TIMEA SUTO:    Thank you so much, Melinda.

So a lot to take away from the panel. We've discussed a Multistakeholder approach and a cross-cultural approach; the importance of bridging fragmentation in regulatory spaces; trying to build towards common principles, but not a one-size-fits-all approach; working together to define what's high risk and low risk; the value of conversations and the acknowledgement that it might not be the same across regions; and making sure that we are connected with the technology when we're trying to regulate quickly. And again, the value of the Multistakeholder approach, so we don't pass regulations that actually restrict the benefit of the technology we're trying to regulate.

To go step by step and make sure that we put the regulatory pieces in place at the right moment, not approaching everything in one go. The role of standards, and balancing innovation and regulation through standards and industry initiatives. And then of course taking the time to get it right, to see where the risks actually are, and to look at it also from the user perspective, the way the technology is being used in the field, as opposed to where we think risks might be coming from.

So a lot coming out from the panel. We have a little less than 20 minutes to turn to the audience.

Those online, and here in the room. I understand Paloma will have to leave, so if there's anything last-second that you want to share before you have to move to your next meeting, please go ahead. Otherwise, we thank you very much for being here.

>> PALOMA VILLA MATEOS:    Thank you.

>> TIMEA SUTO:    So, in the room for the rest of the speakers, or online: if there are questions, we'll get you a microphone, and we'll try to get you an answer as well.

>> AUDIENCE:    I'm Amala, I work at the DGA. Thank you very much. Here in Saudi Arabia, it's a great honour to have you all here. My experience is a total of three years: two in the Private Sector and one in the government sector, at the DGA. I want to say it's really exciting working here, and I have seen how the government sector is working very closely with citizens to be human centric. And I've realized a challenge that we are facing in enhancing the practices of creating new products. The first part is: how do we actually adhere to the best practices that are available while doing what humans really need? Because the more we engage the different stakeholders through workshops, the more we realize that some of the processes we're following are not a very good fit. And on the product level, when it comes to, let's say, creating a feature, going through the prescribed process is sometimes not the very best option for it. So this is one of the things that I have seen. It's a kind of balancing between the frameworks and the reality itself.

>> AUDIENCE:    My name is Jack Peklinger, from Switzerland, with the European IGF and the Swiss IGF process, but also on the business ICC team.

My question is, following on what Thomas was saying about how different perceptions mean different aversion to, or embrace of, risk: wouldn't that call for governments and for business to engage much more in education, and in explaining as much as possible, so that users can make a free choice?

>> TIMEA SUTO:    The question was addressed to you, Thomas, but all of you around the table are welcome to elaborate a little on how we educate around AI.

>> THOMAS SCHNEIDER:    Well, I don't necessarily think it's addressed only to me, but of course, to return to what I said before about people swimming in the river in Switzerland: they don't want the government to forbid swimming in the river. They want the government to make sure that the water quality is okay, so there's no harm. They want the government to make sure that everyone properly learns how to swim at school, and that society also teaches foreigners and immigrants how to deal with water, and they want the drivers of the cargo ships to know: okay, I go on the left and the people are on the right, so I will not kill them. So education is key to, yeah, freedom of choice.

But also to make people adaptive, able to assess risks in situations that may not have been foreseen. You may have set up rules, but reality may not follow the rules, and what do you do then? The more a society is able to deal with risks, and we will probably have them with AI, the easier it will be for people to react.

>> TIMEA SUTO:    Thank you, Thomas.    Anybody else want to react to what we've heard from the audience?    If not, are there any other questions?    In the back there.

>> AUDIENCE:    Hello? Yeah. Okay. Great. Thank you. My name is Malti Kobis, with the Dutch government and standardization at Pfizer. I see that the Internet is founded on standards, really based on standards, and in AI we're now trying to develop new standards. And I can imagine that that difference also has implications for how we govern it. So, what are your opinions about this: how does this difference affect the governance model that we have to choose for AI compared to the Internet?

>> TIMEA SUTO:    A question there about the role of standards: whether they need to come before development, or development needs to come before standards, if I understood the question correctly. Any other questions that we could walk through together? No?

Quite unfortunate that Paloma had to leave because she always has a lot to say on standards but perhaps others?    Melinda?

>> MELINDA CLAYBAUGH:    Actually, I'm not that close to the standards development work. In the U.S., I can say that for quote, unquote, standards, not ISO, NIST is the primary soft-standards body, and they've been focused primarily on risk management frameworks for gen AI. I think there's a place for that, because that's a standardization of a process, of how to mitigate risks, that you want to make standard across anyone developing and deploying AI. As for the technical standards, which I know are so important to the Internet, I actually don't have a view. I defer to you if you're saying it's more challenging in the AI space.

>> THOMAS SCHNEIDER:    Maybe just a quick reaction. The question is, what do you mean by standards on the Internet? The IETF is continuing to develop norms and standards. And basically, it's probably not fundamentally different: somebody proposes a standard, you test it, like running code and so on, and if nobody has a problem with a standard, it may get to be the standard, although you can have competing standards and a variety of standards. You had this with television and earlier technologies. You may have competing standards, and over time maybe one or two of the standards will succeed, just by being the most attractive, not necessarily the best, but the most attractive for businesses or whatever. I don't see a fundamental difference.

Of course, there are also standards like case sensitivity, but I don't see a fundamental difference in logic. You just try and see what happens, and then you standardize as you go, more or less.

>> TIMEA SUTO: Thank you. Yes, one thing if I can add from my role as moderator: as we develop standards, we also need to be mindful of not fragmenting the space further. That applies to regulation and to the use of technology, but also to standards, so they do not add to creating pockets of technology, where this technology works on one standard and the other works on another, and the two don't talk to one another, because then we're fragmenting the opportunity we get out of the technology. That's my two cents.

We have a question there?

>> AUDIENCE: When we talk about standards, we also need to bear in mind that standards are not carved in stone. For me, and also from my experience in business, it's okay to have standards, but they shouldn't be definitive to start with; there should be a serious review process, or at least the expectation that they will be reviewed and that flaws are expected. What's been done at the Council of Europe, being principle-based, is fine. The AI Act perhaps went a little bit too far in this respect and not far enough in others, and may have to be revised soon, something we saw with the GDPR, which was not revised so quickly, so one might learn from that. But I think it's really essential that there is the perspective, and a certain know-how, that there will be revision.

>> TIMEA SUTO:    Thank you for that addition.

We seem to have exhausted the questions from the audience, though I hope not the audience itself. (Laughs.) We have about five minutes left in our session, so I just wanted to turn back to the panellists here on the podium and ask: what is your main takeaway from the session? If you had the character limits that social platforms impose on our opinions, what would be your one-sentence takeaway that we can put in a report about what we discussed today? I'm going to start with Sulafah and just go around the table.

>> SULAFAH JABARTY: I think, mostly, what makes this sustainable is the harmonization of the global framework. We've heard bits and pieces from different backgrounds, and I guess we all agree that the process should be flexible, inclusive, and, as they say, connected to multiple stakeholders as well. Listening to everyone and giving everyone the space to embed their process in it is, I think, the way to actually make it faster, more convenient, and more sustainable, let's say. Because in the end, this is an ongoing process: the more the flow is connected to multiple entities, the more sustainable and objective it is, if we may say, considering all of the aspects together.

>> MELINDA CLAYBAUGH: Yeah, I echo that, and I agree it's about finding the balance between what we agree on and allowing for variability: setting a floor that you can then add to as needed for the use case, for the country, for the context that something is being deployed in. So firming up the foundation, and then looking to sector-specific assessments beyond that, however that differentiation should be implemented.

>> TIMEA SUTO: So set the floor and then allow space to move up. Lucia?

>> LUCIA RUSSO: Yeah, for me as well, I think it's this notion of having an adaptive framework: not something set in stone that you can't review and can't reopen, especially in light of the speed of the technology and the length of the policy-making process. So this notion of future-proofing legislation or regulation, so that it is not set in stone and you have processes to update your requirements. And I also think we need a risk-based approach tailored to the use cases and to the sectors. Melinda expressed very well the notion that we tend to call everything "AI" even as the systems advance, and Noura mentioned the transition from narrow AI to the large foundation models that can do much more.

So I think that is at the core of what we call a risk-based approach: tailoring the requirements that are imposed through really careful consideration of what the impact will be.

>> ALHAKNANI NOURA: Hello? Yes. I agree with Lucia that it should be adaptive, especially since it's global. And as Melinda said, we should have a baseline and then allow for differences. I think all of that can be done through dialogue, and again dialogue, and an iterative process of setting the standards, and it should happen regularly and continuously, because things change. Our beliefs and our points of view change with the changing world. So I will simply emphasize what they've said, and that's all.

>> TIMEA SUTO:    Thanks.    Thomas?

>> THOMAS SCHNEIDER: Yes, thank you. I also think, what a surprise, that adaptive is the key word of this afternoon. It is important that the framework is adaptive, but the goal should always be the same: to make sure that people are free, but use that freedom with responsibility; that there is protection for human rights, for democracy, for the Rule of Law; and that there are clear rules for industry, so they know what they can and cannot do, at least when a certain level of risk is reached.

So the principles should be stable and reliable, but the way they are implemented, the way it is made sure that people continue to be free, yet safe to the extent that they want to be safe, needs to be adaptive. My country is not a member of the EU, but we are grateful to the EU that they dared to do something from which we can all learn. Of course, our colleague from Telefónica is right that it's not easy, but just letting everything go may not be the right thing either. So we watch closely what the EU is doing, what difficulties the member states have implementing this at the local level, and so on. Yes, they're the front runner and have some advantages, but they also pay a price. As long as we stay engaged and can learn from each other, we think it's a mutual benefit. In my small country we'll try to achieve the same goals, but with something more agile and smaller, because we don't have the same resources that the EU or a big group of countries has. As long as we learn from each other, I think we will go in the right direction, if we share the basic principles of freedom, respect, autonomy, human rights and so on. Thank you.

>> TIMEA SUTO: Thank you. So we started from one word, or hyphenated word, risk-based, but we added quite a few to it. I think Thomas is right that the word we seemed to converge around in the end is adaptability: an adaptive framework that moves with the technology, that moves with the changes in our views and perspectives and with the way our culture develops alongside the technology, while making sure that we keep our eyes on the prize, on the right goals that we set for ourselves at the beginning.

To all the words we've said today I will add two more: thank you. Thank you to all of you who came and shared your knowledge and expertise with us for the past hour and a half. Thank you to all who listened and contributed to the conversation. Thank you to all who joined online; I know Paloma had to go, but thanks to the audience that is still there. I hope this was as useful for you as it was edifying for me. I hope to see you next year at the next IGF, as we progress and go from adaptive to who knows what the next word will be. Thank you, everyone.