IGF 2023 – Day 3 – Open Forum #81 Cybersecurity regulation in the age of AI – RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> Well, I think we're ready to begin.  Can we have our speakers on Zoom on the screen, please?

>> DARIA TSAFRIR:  Can everyone turn their cameras on, please.  Good morning, everyone.  Welcome to our session on cybersecurity regulation in the age of AI.  Unfortunately, due to the current situation in Israel, my colleagues and I were unable to attend the session on site, so our colleague, who is already there, offered his help in moderating on site.  So let's start with him and then come back to me.

 

>> My name is Dr. (?), from Israel.  I am the manager of the Israeli chapter of the Internet Society, which promotes digital inclusion, education, and cybersecurity in Israel and works on digital gaps and other initiatives for the public.  I'm not originally part of this panel, so I won't take too much time to present myself, but I invite anyone who wants more details about our Internet Society chapter to approach me after the session, and I'll be happy to introduce myself and our work.  Now let's go back to the original participants of this panel.

>> DARIA TSAFRIR:  Thank you.  Let me ask you to start by introducing yourselves.  Let's start with Dr. Al  Blooshi. 

>> BUSHRA AL BLOOSHI:  Hello.  Good morning, everyone.  It gives me great pleasure to share the stage with such great panelists.  It's 5:00 a.m. here.  I am Bushra Al Blooshi, head of research and innovation at the Dubai Electronic Security Center.  We monitor the cybersecurity posture of the City of Dubai.  I am an official UN member and sit on the Global Future Council on Cybersecurity, and I'm also a member of many advisory boards nationally and internationally.

>> DARIA TSAFRIR:  Thank you.  Mr. Zaruk.  Mr. Zaruk?

OK.  Mr. Honjo, are you there?

 

>> HIROSHI HONJO:  Yes, my name is Hiroshi Honjo.  I think I'm the only one based in Tokyo, Japan, but I've just come back from Germany.  I'm the chief information security officer for a Japanese IT company called NTT DATA, which has about 230,000 employees globally -- Japan is only a small part of that -- and we do business in more than 52 countries outside Japan.  As a private company we run AI projects for our clients, so it's a very hot topic.  It's a pleasure to talk with you.  Thank you.

>> DARIA TSAFRIR:  Thank you.  Ms. Daor?

 

>> GALLIA DAOR:  Good morning, my name is Gallia Daor.  I'm in the OECD Directorate for Science, Technology and Innovation.  Our division covers the breadth of digital issues, including artificial intelligence, digital security, measurement aspects, privacy, data governance, and many other issues, but today we'll be focusing on AI and digital security, so I'll stop here and look forward to the discussion.

>> DARIA TSAFRIR:  Thank you.  Mr. Loevenich.

 

>> DANIEL LOEVENICH:  Good morning, everyone.  I'm in the general policy and strategy office at Germany's Federal Office for Information Security, and I am very much concerned with AI software security standards.  Let me just say it is a pleasure to share the stage with you, and congratulations on a great event so far.  Thank you very much.

 

>> Yes -- I think we can hear you now.  If you could present yourself, please.

>> AVRAHAN ZARUK:  Hello, everyone.  My name is Avrahan Zaruk, head of technology at the INCD.  I manage the technology division, so I am responsible for day-to-day operations, project implementation, IT operations, and providing national defense.  I am also responsible for preparing the INCD for the future by creating our NDFTVs and promoting and establishing national-level solutions.  I have eight kids, and they always ask a lot of questions, so I already know how ChatGPT feels.  Thank you.

>> DARIA TSAFRIR:  Thank you.  Our session has two parts: the first deals with the current state of affairs, and the second with whether there is more to be done at the domestic and international levels, so let's get into it.  We are all familiar with the cybersecurity regulation tool kit -- breach notification, mandatory requirements for critical infrastructure, risk assessments, information sharing, et cetera -- and the question is whether this current tool kit is sufficient to deal with threats to AI systems or to the data used by them.  Our goal for this session is to get some insights into what governments can do better and where they shouldn't intervene at all.  Please note that when we talk about regulation, we mean it broadly -- not only binding regulations but also government guidelines, incentives, and other such measures.  So for everyone's benefit, and so we can be on the same page, let me turn to Mr. Zaruk and ask: can you please map out for us the security risks and vulnerabilities related to AI?

Mr. Zaruk?



>> AVRAHAN ZARUK:  Can you hear me now?



>> DARIA TSAFRIR:  Yes, now we can hear you. 

>> AVRAHAN ZARUK:  Thank you.  The INCD focuses on three domains.  The first domain is protecting AI.  AI-based models are increasingly being put into production in critical systems across many sectors, but those systems are often designed without security in mind and are vulnerable to attack, since the average AI engineer is not a security expert and cybersecurity experts are not the main experts in AI.  We need to find a way to establish and improve AI resiliency, and we approach this issue from several angles.  One is examining weaknesses in AI algorithms, infrastructure, data sets, and more.  This is done as an ongoing task, and we promote R & D projects for protecting models -- similar to vulnerability management in the IT world, but in the AI world an approach is needed for each algorithm, focusing on common models and dedicated attacks.  Another angle is building a robust risk model for AI.  We attempt to define metrics and models to measure risk in AI algorithms, as a means to test and measure the robustness of AI as we do in other IT domains.  A third angle is AI resilience.  The INCD has established an AI lab which develops an online and offline platform for self-assessment of machine-learning models, based on the risk model we developed.  The AI lab is a collaboration between the academic world, the government, and the technology giants, and we collaborate with a university cyber research centre that is a leader in research and brings deep knowledge in cyber protection and AI.
The second significant domain is using AI for defense.  Every vendor today must pull in some form of AI, some more and some less; if you don't have an AI label on your product and you don't say AI three times and mean it, no one will buy.  We understand the power of AI and what it can offer, and making sure our infrastructure and products support the latest AI-powered technology is an ongoing effort.  The INCD, much like many other organisations, is using and adopting AI technology; we don't want to fall behind when it comes to the technology.  Our role as a regulator is not merely to avoid interfering but to see where we can assist the market to promote implementation and use of advanced technology.  We use a variety of tools and capabilities to support our day-to-day operations, including tools that help researchers in their cyber investigations and assist in incident response, which supports our cooperation across cyber projects.  119 is a reversed 911, a cyber hotline that provides better service to citizens: it collects relevant contextual information, provides more accurate responses, and supports additional languages.  Another tool aims to help investigate network traffic in an easier, faster, more human way.  (?)  In a time of war, AI allows us to direct manpower to critical tasks.  We use AI to assist in the mediation between the human and the machine.
The last domain, but not the least, and maybe the most complex subject, is countering AI-enhanced and AI-based attacks.  We see increased use of various AI tools among attackers, and we understand that in the future we will see machines carrying out sophisticated attacks.  We are in the process of designing a way to approach this threat scenario, which will probably be built from several components working together.  In the future, we will see attacks and defense fully managed by AI, and the smarter, stronger, and faster player will win.  Thank you.
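[Illustrative aside: a minimal sketch, not the INCD's actual lab or risk model, of how a self-assessment "robustness score" for a machine-learning model might be computed -- here assuming Python with scikit-learn and a simple random-perturbation metric; the dataset, metric, and numbers are hypothetical.]

# Illustrative sketch only: a toy robustness score for a trained classifier,
# measuring how much accuracy drops under small random input perturbations.
# Assumes scikit-learn and numpy; the metric and epsilon value are hypothetical.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def robustness_score(model, X, y, epsilon=0.3, trials=20, seed=0):
    """Average accuracy on inputs perturbed by uniform noise in [-epsilon, epsilon]."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(trials):
        noise = rng.uniform(-epsilon, epsilon, size=X.shape)
        scores.append(model.score(X + noise, y))
    return float(np.mean(scores))

clean_acc = model.score(X_test, y_test)
robust_acc = robustness_score(model, X_test, y_test)
print(f"clean accuracy:     {clean_acc:.3f}")
print(f"perturbed accuracy: {robust_acc:.3f}")
print(f"robustness gap:     {clean_acc - robust_acc:.3f}")

[A real assessment platform would use many more metrics (adversarial examples, data-quality checks, drift), but the gap between clean and perturbed accuracy illustrates the kind of quantity such a risk model could score.]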

>> DARIA TSAFRIR:  Thank you, Mr. Zaruk.  Dr. Blooshi, based on your prior experience and current work in developing and shaping policy at both domestic and global levels, what do you make of AI risks?

How do you frame it from a cybersecurity regulation perspective?

 

>> BUSHRA AL BLOOSHI:  So I think, in a city that was at the forefront of the technology transformation revolution, our role as cybersecurity regulator is to enable those critical national infrastructures to use the new technologies, but to use them with the right cybersecurity controls, and it won't be perfect from the first day, so we guide and incrementally build cybersecurity regulations, working together with the business developers to make sure that the business objectives are met and security is also considered.  I will divide what I'm going to speak about into three main points.  The first one is the security of the AI models themselves versus the security of the consumers of those AI models.  When it comes to the AI models and the developers of those models, the controls, the standards, and the policies are totally different from those for the consumers of those AI models.  For me, when I'm talking about AI security itself -- the AI model itself -- AI at the end of the day is like any other software that we were using in the past, but what makes it different is the risk that it might generate, the way it has been deployed, how it is being implemented.  So, for example, an AI model deployed in a low-risk environment shouldn't have the same security controls as one deployed in connected vehicles, where any risk or any issue in that AI model might impact human lives.  At the end of the day, it's the way that AI model is being deployed, how it is being used, and why it is being used that makes it different from the other development tools we were used to in the past.  This is how it becomes different from normal software development.  The second point is the security of the AI consumers -- the people, the government entities, the consumers of AI themselves.  I think in our case we are more worried about the consumers than the producers, because we have many players, as we can all see, specifically when it comes to generative AI, that are attracting lots of attention and lots of customers.  So when it comes to the AI consumers themselves, we need to consider many elements: how it will be used, where it will be used, how it can be used in national infrastructure, what about the privacy of the data being used there, and also why I'm using that AI model.  So I can categorise it, as the previous speaker was saying, into three main areas of how we are using AI today.  We might use it to protect, as cybersecurity professionals, in the new defensive methodologies we are using; it can be used by malicious actors to harm; or, the third category, it can be used by normal users or by government entities, and in that case we will be worried about the privacy of the data being processed by the AI model.  So when it comes to policies and regulation -- I talked about AI security itself and the consumers, and the last point is the policies, standards, and regulations that we need to put around the AI models -- I think there have been lots of efforts globally and internationally: the OECD principles, the NIST security standards, and then a great bunch of policies that were issued recently, including by the EU.  I think we are making progress towards having, let's say, standards or specific policies around the security of AI.
But as I said, at the end of the day it's like the previous software models that were being developed in the past.  So if we think about how we should deal with AI from a policy and regulatory point of view, I think we need to develop, first of all, the basic best practices and principles, like any software development life cycle, secure by design -- those principles should always be there -- and then develop one layer on top, and that layer can be specific to the AI itself: how AI should be developed, maintained, trusted, and so on.  And the third layer that can be added, as I said, depends at the end of the day on where I'm going to use it, so it's a sector-specific layer: banking controls, transportation controls, medicine controls.  This is the third layer, where we need to work with the business owners or the business sectors themselves in order to make sure that that layer also contains enough controls to enable them to use AI in a safe manner.  I strongly believe that that approach is the best approach, which we should all consider, because having too many controls will limit the usage of AI, and having too loose controls will take us into other security issues.  In our case, for example, we developed AI security and ethics guidelines back in 2013 that can still be applicable to generative AI.  We are also developing an AI sandboxing mechanism for government entities to test, try, and implement AI solutions that they would like to deploy at the city level, and we also have clear guidelines about data privacy: most of the AI models now are hosted in the cloud, so we have clear guidelines on how data can be hosted in the cloud, and that includes AI models hosted in a cloud environment.  So I don't think we should reinvent the wheel; we should build on the basis of the things that have been there for a long time now.

>> DARIA TSAFRIR:  Thank you, Doctor.  You bring up some really great points.  I'll turn now to Mr. Honjo.  Mr. Honjo, you're representing the private sector so from your organisation's point of view, how are you currently dealing with AI risks and cybersecurity?



>> HIROSHI HONJO:  Yes, it's pretty much close to what Dr. Blooshi said.  As a private company, we have stated AI governance guidelines within the company, and that includes privacy, ethics, technology, everything.  Basically, what we do for generative AI as a company is whatever the client asks for; many clients ask for application development, for instance code generation using generative AI.  That comes with a lot of problems, including intellectual property issues: if the model learned the code from whatever source -- maybe including commercial code or open source code -- there are privacy protections to think about, and intellectual property protection is a very important thing for a company as well.  The frameworks we use include the OECD frameworks, which help in defining the risks for whatever the project is, and that works pretty well for defining the risks within an AI project.  The thing is, although we state the risks within projects, it all comes down to: what is the project?

Is it infrastructure?

Is it banking transactions?

Or is it more like what's on the, you know, displays and transcripts?  The risks really depend on that, so no two projects are the same.  Then there are privacy issues.  A lot of the language models on the market learn data from somewhere, and you have to learn from a lot of big data -- it's not small data, it's huge data -- and the question is, where is that data sourced and who owns the data?

It's basically more like cross-border data transfer issues: you know, what's the data source?

What's the use of the data?

It's basically international transfer, so the question is which laws will be applied to that data.  Those are the same issues we have with the cloud, and there are no easy resolutions for that.  Basically, we have to deal with the data that goes along with generative AI, so there is a lot of privacy protection involved, and cybersecurity also applies to generative AI.  So when you talk about AI security or AI guidelines, whatever you state within a private company, it really depends on -- and includes -- data privacy: when the data source is compromised, or the result of the data is compromised, or any breach happens within the large language model, which has been attacked a couple of times, those are really lessons learned, and security applies to not all but part of the generative AI projects.  So as a private company, it's not a single-company or single-country issue.  We need to deal with multinational, multi-country projects that have to handle the data and privacy issues, and we also need to protect the models and the data wherever that resource is.  It's pretty much risk-based management -- so it's all about money -- but with multinational projects there are no easy resolutions.  With the guidelines and some of the lessons learned we have applied cybersecurity practices to generative AI, and that is resolving some of the issues in generative AI projects.  But as I said, we have to deal with different countries, so that's our challenge right now -- not the technology itself, but cross-border, multinational, different regulations.  That's the real challenge for a private company.  I think I'll stop here.

>> DARIA TSAFRIR:  Thank you.  That was very interesting.  The OECD was the first, if I'm not mistaken, to publish clear principles for dealing with this.  Can you share with us the OECD's policy from today's point of view, with emphasis on the robustness principle, and maybe a word on where we are headed?

 

>> GALLIA DAOR:  Sure, thank you.  So indeed, in 2019 the OECD was the first intergovernmental organisation to adopt principles for artificial intelligence.  These principles seek to describe what trustworthy AI is; they have five values-based principles that apply to all AI actors, and five recommendations for policy makers more specifically.  Within these principles, like you said, we have a principle that focuses on robustness, security, and safety, which provides that AI systems should be robust, secure, and safe throughout their life cycle -- which I think is a particularly meaningful aspect -- and the principles also note that a systematic risk management approach is needed at each phase of the AI life cycle, on a continuous basis.  So I think it gives the beginning of an indication of how we can apply a risk management approach in the context of AI.  These principles have now been adopted by 46 countries and also served as the basis for the G20 principles.  Since their adoption in 2019, we've worked on providing tools and guidance for organisations and countries to implement them, and we took three different types of actions.  One focuses on the evidence base: we developed an online interactive platform, the OECD AI Policy Observatory, which has a database of national AI policies and strategies from over 70 countries, and also data, metrics, and trends on AI -- AI investment, AI jobs and skills, AI research publications, and a lot of other information.  We also work on gathering expertise: we have a very comprehensive network of AI experts, with over 400 experts from a variety of countries and disciplines, who help us take this work forward.  And we develop tools for implementation -- or I should say we don't develop the tools, but we compile them: we have a catalogue of tools for trustworthy AI, where different organisations and countries can submit the tools that they have; we process them, and anybody can access the catalogue and see what is out there that can be used.  It is in that context that we have an increasing focus on risk management and risk assessment in AI.  Already last year we published a framework for the classification of AI systems, and as others have noted, the risk is very context-based: in the abstract we don't know what risk a system may pose; it depends on how we use it, who uses it, with what data.  So this classification framework is really there to help us identify the specific risks in the specific context.  We will also soon publish a mapping of different frameworks for risk assessment of AI, what they have in common, and the top-level guideposts that we see for risk assessment and management in AI.  So that's the main focus on AI, but I want to say something about the OECD work on digital security, which is our term for cybersecurity in the economic and social context.  We have an OECD framework for digital security that looks at four different aspects.  It has a foundational level, which is the principles for digital security risk management -- general principles and operational principles for how to do risk management in the digital security context.  It also has a strategic level: how you take these principles as a country and use them to develop your national digital security strategy.
We have a market level -- how we can work on misaligned incentives in the market and on information gaps to make sure that products and services are secure -- and, in particular, as others have mentioned, AI is now increasingly used in the context of critical infrastructure and critical activities, so we have a recommendation on the digital security of critical activities.  The last level is a technical level, where we focus on vulnerability treatment, including protections for vulnerability researchers and good practices for vulnerability disclosure.  And I think this leads -- and maybe I'll stop here -- to what others have said about the intersection between AI and digital security, which is really the heart of today's conversation.  We see, like Mr. Zaruk said in the first intervention, that we need to focus on the digital security of AI systems -- what do we need to do to make sure that AI systems are secure, in particular looking at vulnerabilities in the area of the data that is used, such as data poisoning and how it can affect the outcomes of an AI system.  But we also need to think about how AI systems may themselves be used to attack; generative AI may be somewhat of a game-changer in this aspect too, since we know, for example, that generative AI can be used to produce very credible content that can then be used at scale in phishing attacks.  And there is also work we have not yet done on how AI systems can be used to enhance digital security.  I'll say just one word on that: at the OECD we have the Global Forum on Digital Security for Prosperity, an annual event where we bring different stakeholders from a very large range of countries to talk about the hot topics in digital security, and the event that we did earlier this year jointly with Japan focused on the link between digital security and emerging technologies, with AI obviously being one of the key focuses, and that was exactly one of the things we discussed there.  So I'll stop here, but thank you.
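[Illustrative aside: a minimal sketch of the data-poisoning risk mentioned above, assuming Python with scikit-learn; flipping the labels of a fraction of the training data degrades the trained model, which is the kind of effect a digital-security risk assessment of an AI system would try to surface.  The dataset, fractions, and code are hypothetical and not an OECD artefact.]

# Illustrative sketch only: how poisoned (label-flipped) training data can
# degrade a model's test accuracy. Assumes scikit-learn; values are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(y, fraction, seed=0):
    """Flip the labels of a random fraction of training samples (binary labels)."""
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned

for fraction in (0.0, 0.1, 0.3):
    model = LogisticRegression(max_iter=1000).fit(X_train, poison_labels(y_train, fraction))
    print(f"poisoned fraction {fraction:.0%}: test accuracy {model.score(X_test, y_test):.3f}")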

>> DARIA TSAFRIR:  Thank you.  I can share with you that Israel has adapted the OECD principles into its guideline papers on AI.  At the moment, the guidelines are not legally binding, and the current approach is for sectoral regulators to examine the need for specific regulation in their field, but I imagine we'll soon be looking into the AI Act as well.  I'll turn to Mr. Loevenich.  Could you share with us Germany's policy regarding cybersecurity and AI?  In your opinion, will the EU AI Act affect Germany's policy on AI regulation, and how will you implement it into your system?



>> DANIEL LOEVENICH:  It's a very difficult question, fantastic.  AI is, as you know, brand new, but indeed we in Germany are very much concerned with the European perspective on AI.  Let me stress the fact that, especially at the Union level, the standardisation bodies are doing a great job on that.  They very much focus on the issues addressed in the AI standardisation request, and we in Germany are very much looking at how to implement structures and procedures based on our conformity assessment and, specifically, certification infrastructures, implementing the technical basics for conformity assessment against these standards.  But first of all, let me stress the fact that when we say AI risks are special risks to cybersecurity, we always have in mind the technical system risks -- for instance, a vehicle -- and especially for embedded AI in such a technical system, we address all these risks based on our experience with engineering and analysis of these technical systems.  Or, in the case of a distributed IT system with a whole supply chain in the background, we have special AI components or modules, for instance cloud-based services, that play a key role for the whole supply chain, so we address the risks in terms of the whole supply chain of the application.  It's very important to be aware that when we in Germany consider AI risks, we have to concentrate on these AI modules within those complex systems.  We do that by mapping these application- or sector-based risks, which may of course be regulated by standards, down to technical requirements for the AI modules that are being built, and, of course, we have a lot of stakeholders who are responsible and competent to address these risks, and they are responsible for implementing the special technical AI countermeasures within their modules during their life cycles, as we have heard from the speakers already.  This is where we concentrate, especially in Germany and in the EU.  The overall issue is to build a uniform AI evaluation and conformity assessment framework, independently of who is responsible for implementing the countermeasures that effectively address the cybersecurity risks.  This is a European approach, and it is the number one key political issue in the German AI standardisation roadmap.  So if you ask me what we do next: yes, on the basis of the existing cybersecurity conformity assessment infrastructure -- attestation, second-party or third-party evaluation, certification, and so on -- we try to address these special AI risks as an extension of the existing frameworks, implementing the EU AI requirements.  Does that answer your question, basically?



>> DARIA TSAFRIR:  Thank you.  Thank you so much.  And that actually brings me to the second round of our session, which is what we can do better.  As some of you mentioned already, one of our major concerns as governments is the protection of the safety and security of critical infrastructures and, as a result, supply chains, and we are also looking into SMEs.  So I have two questions, if you could address them briefly.  One is what governments should be doing in the regulatory space to increase cybersecurity.  When we talk about regulation, I think we need to consider the risks of over-regulation, and we also need to ask whether AI is too dynamic.

Maybe it's too dynamic for regulation.  And the second question is, how much of these challenges should be addressed within international forums, including maybe binding instruments?  So if you could address these questions -- and if you have an idea or a piece of advice for the future, I'll be glad to hear it.  We'll keep the same order: we'll start with Dr. Blooshi and go on.

>> BUSHRA AL BLOOSHI:  I will take it from the global perspective.  It's very difficult for both providers and consumers at the end of the day: if I'm providing those services and AI models in other countries, which requirements should I comply with?  Shouldn't we at least come up with minimum requirements for conformity assessment or compliance, so that we make it much easier for producers to comply and, at the end of the day, also give consumers the confidence that this AI tool is recognised internationally by multiple countries?  That fragmentation, as I say, makes it really difficult for both consumers and providers to comply.  The need for international collaboration and harmonisation of AI standards and compliance requirements adds to those challenges, and actually this was one of the papers that we published last year with the World Economic Forum, calling for a harmonised international certification scheme for different things.  AI was not part of it, but at least it addressed the idea of how it should be done and what the minimum requirements are.  I'm not saying it's the only certification a country should rely on, but at least it's a minimum-requirements certification or a minimum-requirements assessment.  That makes it easier for providers to comply, and it will also make our role as regulators much easier, rather than having different standards, different requirements and different, let's say, acts in different countries.  In a nutshell, I think international minimum requirements are very important to move forward with the different AI applications we have today.

>> DARIA TSAFRIR:  Thank you.  Mr. Honjo. 

>> HIROSHI HONJO:  She said what I wanted to say.  But from a private company's perspective, there are many international organisations working on regulations.  In his keynote speech, the Japanese Prime Minister talked about AI regulations and guidelines among the G7 countries; that's OK, but it's not enough, because there are more countries than that.  So we need these minimum requirements, a minimum organisation, to run business across multinational countries, and I'm looking forward to that.  Look at what happened with data protection and the GDPR: regulations and laws differ from country to country, and that costs a company a lot, so I hope all these things harmonise for AI.  I'll stop here.

>> DARIA TSAFRIR:  Thank you.  Ms. Daor.

 

>> GALLIA DAOR:  Thank you.  So, yeah, I think we've heard a lot about the fragmentation issue, and obviously that's a serious issue.  I think it's difficult to talk in the abstract about whether we should or shouldn't have regulation, because these things are happening, so I think it's also important to talk about what we do with this.  From the perspective of an international organisation, I think we can talk about three roles of international or intergovernmental organisations and what they can do to help countries and organisations in this situation.  One is mapping the different standards, frameworks, and regulations that are out there, trying to identify commonalities -- perhaps minimum standards -- and developing some sort of practical guidance from that.  Another important role is the ability of intergovernmental organisations -- and we see that here today -- to convene the different stakeholders from different countries and different stakeholder groups to flag their issues and have that conversation.  And perhaps a third aspect is to advance the metrics and measurement of some of these issues that are very challenging: in the context of our work on AI, we are developing, and will launch next month, an AI Incidents Monitor that looks at real-time data to see what actual incidents AI systems have caused in the world, and I think that's maybe one step to advance that issue.  Thank you.

>> DARIA TSAFRIR:  Thank you.  Mr. Loevenich. 

>> DANIEL LOEVENICH:  Yeah, we in Germany want to open the market to new technologies.  We want people to be creative with AI technologies, to be on their way to using these technologies, and even to develop new ideas with them.  So we really don't want to prescribe things; we just want to recommend that people and organisations do certain things.  Obviously, the first and overall instrument for this is international standardisation, so that people can decide, based on their own risks and requirements, to use technologies in particular ways and not to use or misuse them in other ways.  But please allow some remarks on that standardisation issue, especially at the ISO level.  My experience is that there are a lot of people involved -- many of them are AI experts -- and I can distinguish three schools of thought: the technical, application-agnostic view; the sectoral view, meaning application-specific, in contrast to it; and the normative and ethical things on top.  It's nothing new; they are three different aspects of AI technology.  Since these systems are data-driven, we have data in them, and it is used as machine-understandable data -- not just readable data but understandable data -- so people are very much responsible in using these technologies for specific purposes.  Now then, if you have appropriate standards -- and speaking of harmonisation -- you can do this; it's very easy.  If you come to application-specific requirements, you can standardise that.  In Europe we have CEN, for instance, or the ITU for the healthcare sector.  Very effective.  You can do that, and you can do it even at the application- and sector-specific levels.  You can do regulation if you want, but let the market do it: let them decide on the use of AI-based systems, and let the market and the customers decide, "I want to use this technology in the way that is regulated."  The third school of thought, or level, is very much about value-based things.  There are civil society and all these kinds of organisations, and digital sovereignty plays a key role in that.  In the EU, for instance, you have 27 nations, if I'm right, with probably 27 different value-based governmental positions on that, so it's very difficult to --

>> DARIA TSAFRIR:  Our time is coming to an end.

>> DANIEL LOEVENICH:  Time to stop here. 

>> DARIA TSAFRIR:  It was very interesting.  Yes, thank you.  I did steal back our five minutes, I have to say, but -- well, anyway, time flies when you're having fun, and our time is unfortunately up.  So I would like to thank you all for participating.  I know some of you had to wake up very, very early in the morning, so I really appreciate your effort.  It was very interesting and very enlightening, and I hope to see you soon, maybe in a follow-up session.  We have an AI week in February in Israel, so we'll be in touch.  Thank you very much, and I would like to thank, again, the staff and Dr. Vinner for helping out.  Thank you very much.

>> Thank you, everyone.  It was a pleasure.

[applause]