IGF 2023 - Day 3 - WS #33 Ethical principles for the use of AI in cybersecurity - RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> MODERATOR GAN: All right. I think it's time for us to call the meeting to order. Let me start by introducing all the speakers on the panel today. On my left we have Professor Dennis Kenji Kipker from the University of Bremen, an expert in cybersecurity law from Germany. On my right I have Professor Amal El Fallah Seghrouchni. On my far left is Ms. Noushin Shabab from the Global Research and Analysis Team at Kaspersky, and Anastasiya Kazakova, who has flown in from Serbia. And myself: I am Genie Gan, head of government affairs and policy for the Asia Pacific, Japan, Middle East, Turkiye and Africa regions at Kaspersky. This session is on ethical principles for the use of AI in cybersecurity. We have been witnessing rapid adoption of A.I. around the world for some time now. It has the potential to bring many benefits to the world, as we see on a day-to-day basis.

A.I. algorithms help with rapid identification of and response to security threats, and automate and enhance the accuracy of threat detection. This is something we experience at Kaspersky because we are a cybersecurity company. Ethical principles for A.I. have already been developed by various stakeholders; for example, in 2021 UNESCO adopted the Recommendation on the Ethics of Artificial Intelligence.

However, the growing use of A.I. and machine learning components in cybersecurity makes the need for ethical principles for A.I. development, distribution and utilization in this domain ever more urgent.

Due to the particular opportunities, but also risks, of A.I. in cybersecurity, there is a need for a broad dialogue on a set of domain-specific ethical principles, and we felt today is a good opportunity for us to discuss that.

Also for this reason, we at Kaspersky have developed a set of aspects that should be taken into account. These will be discussed in today's workshop.

Just to run you through the structure of the workshop and our agenda today: in a moment we are going to run a survey with our audience, including those who have dialed in online, with two poll questions, which I will ask my colleague Jochen to put up.

Then we will take some questions from the floor as well.

We are expecting some very good discussions. So without further ado, let me invite Jochen, who is joining us online. We should be able to see him.

To run the online poll question.

Yes, Jochen received.

We can hear you too, very good.

The first question Jochen will put up is: in your opinion, is the use of A.I. in cybersecurity more likely to strengthen or weaken the level of protection?

Of course we have options for people who are participating in the poll. The first option is that it will strengthen protection; the second is that it will weaken protection; and the third, in the name of democracy, we allow you to say you don't know.

Very good.

I think we have 62% who have said it will strengthen protection. Let me just write this down. 20% say it will not; it will in fact weaken protection. And 20% have exercised their right to say that they don't know.

That's good. I think this is something we will flesh out in a little bit with the presentations from our speakers. I would also like to invite Jochen to put up the second poll question.

We only have two to start off before we get into the panel discussion, so let's put out the second poll question.

The second question is: what should prevail for the regulation of A.I. in cybersecurity? The answers include, number one, it should be regulated as heavily as generative A.I.

Second, there is no need for regulation: voluntary adherence to ethical principles would do just as well. The third option is that existing cybersecurity regulation needs to be updated to account for A.I. technologies.

I'm not sure if the poll is working well for the online audience. Let's hear from Jochen.

>> JOCHEN MICHELS: It is working, yes.

>> MODERATOR GAN: Thank you.

I think we have 38% of our audience saying it should be regulated as heavily as generative A.I.

Nobody selected no need for regulation, so I think we have at least some agreement there. And 63% are saying existing cybersecurity regulation needs to be updated. That's interesting. Let's just park that aside for a while. Thank you, Jochen, we will have you back with us later in today's session. We can close the poll. Thank you, Jochen.

Now, I am going to open up some questions to our panelists later, but I would first call on Noushin, who also has some slides for us.

I will invite Noushin to please deliver a short impulse speech on the opportunities and risks of A.I. in cybersecurity and the ethical principles she feels should be developed to promote the opportunities and mitigate the risks. Noushin, please.

>> NOUSHIN SHABAB: Okay, thanks, Genie. As my colleague perfectly stated and most of the audience agree, A.I. and particularly machine learning has helped to strengthen cybersecurity in a lot of ways. We have been using machine learning techniques in Kaspersky for a long time. So it's not something new for us.

But as we have always had this concern about the ethical principles of using A.I. and machine learning in cybersecurity, we thought to take this opportunity to share a little bit about some of the basic principles that we believe are important in the use of A.I. in cybersecurity; we want to have a discussion today and maybe develop these principles further. Let me start with the first principle: transparency. We believe that it is important, and it is users' right, to know if a cybersecurity solution uses A.I. and machine learning, and the companies, the service providers, need to be transparent about the use of A.I.

We have a Global Transparency Initiative, and as part of this initiative we have transparency centres in different countries in the world, and the number is actually growing; we are opening more centres. In these centres, stakeholders, customers and enterprises can visit and inspect the code of our products, including how A.I. and machine learning are used in our products.

So we commit to being transparent and to making sure that users know and consent to how their data contributes to our network, and that they are aware of the machine learning techniques used in the products.

Number two, safety.

So when it comes to the use of A.I. and machine learning in the real world, there are actually a lot of ways that these systems can be misused by malicious actors to make them make mistakes deliberately. There are various techniques that attackers can use to try to manipulate the outcome of machine learning systems and algorithms. That's why we believe that keeping the safety of A.I. and machine learning systems in mind is very important. Towards this principle we have a lot of security measures in place, like auditing our machine learning systems, reducing the use of third-party datasets for training machine learning systems, and other techniques such as favouring cloud-based machine learning algorithms over ones stored and deployed on user systems.

Number three, human control.

So we all agree that A.I. can help in a lot of areas in cybersecurity, for example in improving detection of malicious behaviour, in anomaly analysis and so on. But when it comes to sophisticated threats, especially advanced persistent threats, it's important to understand that this malware mutates and adopts different techniques, encryption, obfuscation and so on, to bypass machine learning and A.I. systems. Because of this, we always keep human control over our machine learning systems. We believe it's important to have an expert, someone with good knowledge and understanding who is backed by a big dataset of cyber threats, supervise the outcome of machine learning algorithms. That's how human control has always been there for the systems we use machine learning for.
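To make the human-control principle concrete, here is a minimal sketch of one common way to implement it: automated verdicts below a confidence threshold are escalated to a human analyst rather than acted on automatically. The threshold, class names and routing labels are illustrative assumptions, not Kaspersky's actual pipeline.

```python
# Minimal human-in-the-loop sketch: automated ML verdicts are only
# trusted above a confidence threshold; everything else is queued
# for a human analyst. Threshold and labels are illustrative.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.95  # hypothetical operating point

@dataclass
class Verdict:
    sample_id: str
    label: str       # e.g. "malicious" or "benign"
    confidence: float

def route(verdict: Verdict) -> str:
    """Act automatically only when the model is very sure."""
    if verdict.confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-{verdict.label}"   # automated action
    return "escalate-to-analyst"         # a human takes the decision

print(route(Verdict("f1", "malicious", 0.99)))  # auto-malicious
print(route(Verdict("f2", "malicious", 0.60)))  # escalate-to-analyst
```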

Number four, privacy. When we talk about big data and data from cyber threats, it always comes with some information that can be considered personally identifiable data.

So we believe that it is users' right to have privacy over their data. When it comes to machine learning and the data used to train the algorithms, we take measures like anonymization, reducing the data collected from users, and removing personally identifiable information from URLs or other data that comes from user systems.
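As a concrete illustration of that last measure, here is a minimal sketch of stripping likely PII from URLs before they enter a telemetry or training dataset. The deny-list of query parameters is a hypothetical example, not Kaspersky's actual implementation.

```python
# A minimal sketch of one privacy measure mentioned above: dropping
# query parameters whose names suggest PII before a URL is stored.
from urllib.parse import urlparse, urlencode, parse_qsl, urlunparse

# Hypothetical deny-list of query parameters that often carry PII.
PII_PARAMS = {"email", "user", "uid", "token", "session", "phone"}

def scrub_url(url: str) -> str:
    """Drop parameters whose names suggest PII; keep the rest."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query)
            if k.lower() not in PII_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

print(scrub_url("https://example.com/login?uid=42&page=help"))
# -> https://example.com/login?page=help
```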

Number five, developed for cybersecurity. As our mission is to build a safer world, we are committed to only using and providing services that work in defense. In line with this principle, the services we have that use machine learning and A.I. are developed only for defensive purposes, and we encourage other companies to join us in this principle too.

Last but not least, and that's actually why we are here having this discussion: we are open for dialogue. We believe that it's only through collaboration between various parties, between everyone in industry and in the government sector, that we can truly achieve the best results in protecting users and user data against cyber attacks and cyber threats. That was it, thank you.

>> MODERATOR GAN: Thank you very much, Noushin. I hope that sets the stage and the tone for today's discussion, because, for those who have just joined us, we are focusing our workshop today on the ethical principles for the use of A.I. in cybersecurity. I also want to take this time to hear a more technical, scientific perspective from Amal: how can A.I. or machine learning techniques contribute to cybersecurity, which issues can emerge while using A.I. techniques for cybersecurity, and how can we solve these issues? I think you also have some slides, if we can put them up.

Yes, we see them.

>> AMAL EL FALLAH SEGHROUCHNI: Hello, everybody. I am very happy to talk about A.I. in cybersecurity. I think there is a need for regulation, as most people voted earlier.

My presentation will be very short, even if there are a lot of points. Mostly I want to emphasize where A.I. can be used in cybersecurity; the ethical problems come from the way we use A.I. in cybersecurity. So, the context: as you all know, cybersecurity is a huge problem for all software around. In this presentation, as Genie said, I will address some points related to how A.I. is included in cybersecurity systems.

As you know, Kaspersky detects something like 325,000 new malicious files every day. That figure comes from a 2017 report, so I think today there are many more.

The problem with classical cybersecurity methods is slow detection. What we expect from A.I. is to enhance and transform cybersecurity methods by providing predictive intelligence across the long life cycle of the software.

So the role of A.I., more specifically in cybersecurity, is twofold.

The first is that A.I. can automate common cybersecurity tasks. The second is that it can identify patterns in large datasets that have not been analyzed manually.

As you can see, cybersecurity and A.I. together are a national security priority for the NSF and NSTC today. What I want to present is that there are two kinds of A.I.

The boxes on the left represent what we call blue A.I.

And on the right you have red A.I.

Blue A.I. presents opportunities for cybersecurity. For example, A.I. will help to create smart cybersecurity: effective cybersecurity controls, automatic vulnerability discovery, et cetera. And, the fourth point, by using A.I. you can fight cyber criminals: detection and analysis, intelligent encryption, the fight against fake news, et cetera. This is the good news for using A.I. in cybersecurity. But as you know, these techniques, these A.I. systems, are also vulnerable and raise a lot of challenges, like robustness, the vulnerability of algorithms, and also some misuses of A.I.

For example, by creating fake videos, A.I.-powered malware, smarter social engineering attacks, et cetera.

So A.I. for cybersecurity, I will go very fast. Don't worry.

A.I. in the domain of cybersecurity will help in all of these steps, and this is the NIST CSF framework.

Identify, protect, detect, respond and recover: activities aimed at maintaining resilience plans. This is the life cycle of cybersecurity, of defensive cybersecurity, and A.I. can be used at all stages of this life cycle. So I can say that the ethical issues of using A.I. in cybersecurity can be studied through these five steps. For example, when you identify your assets, you should be sure that your resources are resilient, that they are not vulnerable.

The same goes for protect, detect, et cetera. How do we implement all this using A.I. techniques? Take some cybersecurity tasks, like fuzzing, pentesting, et cetera. The A.I. techniques used in practice today are deep learning, reinforcement learning, classification of bugs, and also some NLP and other machine learning methods.
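For a concrete sense of one such task, below is a toy mutation-based fuzzer loop. The target program and crash condition are stand-ins; a real fuzzer is coverage-guided, and, as the speaker notes, reinforcement learning can be used to learn which mutations are most productive instead of choosing them at random.

```python
# Toy mutation-based fuzzer: repeatedly flip one random byte of a
# seed input and run the target until it crashes. Illustrative only.
import random

def target(data: bytes) -> None:
    """Stand-in program under test; crashes on a NUL byte."""
    if 0 in data:
        raise RuntimeError("crash")

seed = bytearray(b"hello world")
for i in range(10_000):
    sample = bytearray(seed)
    # Mutate one byte at random. An RL-based fuzzer would instead
    # learn which positions and values make progress toward new
    # code coverage, rather than sampling uniformly.
    sample[random.randrange(len(sample))] = random.randrange(256)
    try:
        target(bytes(sample))
    except RuntimeError:
        print(f"iteration {i}: crashing input {bytes(sample)!r}")
        break
```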

This means that all the problems that come with A.I. techniques will be found again when dealing with cybersecurity.

This is only one step, identification. I don't have time, which is why I cut this short, but we can do the same for all of these phases of a cybersecurity system.

Now, we can also use techniques from cybersecurity to secure A.I. systems or make them more robust.

This is a challenge of A.I.

The robustness of algorithms. For example, there are well-known adversarial machine learning techniques that can be used to secure, or to attack, A.I. systems and algorithms.

This is why I say that adversarial A.I. attacks A.I. systems. A.I. cannot be made unconditionally safe, like any other technology, so we have to take care that the A.I. systems we use in cybersecurity are not compromised by malicious attacks or anything else.

This is a very famous example in computer vision. If you look at the pictures, they are similar, but the A.I. system will detect different things. It's sometimes just a question of changing one pixel in the picture, and you get a different output. For example, on the left you can see a car; this is correct. But on the right the system will recognise an ostrich. A human being cannot see the difference, but the machine learning algorithm will make a different decision.
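To show mechanically how such adversarial examples arise, here is a minimal sketch of the fast-gradient-sign idea applied to a toy logistic-regression "classifier" in numpy rather than a real vision model; the model, data and epsilon are illustrative assumptions.

```python
# Minimal sketch of an adversarial perturbation: nudge each "pixel"
# slightly in the direction that lowers the classifier's score. The
# change per pixel is tiny, but the prediction can flip completely.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=784)     # weights of a toy linear classifier
x = rng.normal(size=784)     # a toy "image" flattened to a vector

def predict(v: np.ndarray) -> float:
    """Probability the toy classifier assigns to class 'car'."""
    return 1.0 / (1.0 + np.exp(-(w @ v)))

# Fast-gradient-sign step: the gradient of w @ x w.r.t. x is w, so
# stepping against sign(w) reduces the score as fast as possible.
eps = 0.05
x_adv = x - eps * np.sign(w)

print(f"original score:    {predict(x):.3f}")
print(f"adversarial score: {predict(x_adv):.3f}")  # label can flip
```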

Okay, so the last thing is the misuse of A.I., for example by creating fake videos, which are very famous today, A.I.-powered malware, social media attacks and so on. I will end with this.

We know today that A.I. can create new kinds of cyber attacks: phishing, cyber extortion. Using generative A.I. in cyber extortion is something very common today.

So the need for regulation is crucial; it's very important. We inherit all of the problems and issues that come with software, but we also have some problems very specific to the cybersecurity domain.

And A.I. will bring major ethical and regulatory challenges also in cybersecurity.

So my conclusion is that, I agree, we need ethical and regulatory consideration for cybersecurity systems, and for delegation of control: we have to find a consensus between total human control and total autonomy of A.I. systems. Delegation of control should be granted for a well-defined objective, not toward total autonomy of A.I.

And cybersecurity actors are still looking for an adequate legal basis on which to conduct their daily practices, for privacy and data governance in cybersecurity, for example. Thank you for your attention.

>> MODERATOR GAN: Thank you very much. That was wonderful. I am already madly taking notes because I will have to synthesize all of this.

But before I do so and really launch a full panel discussion, with perhaps some questions from the floor, I would like to hand the time over to Anastasiya, who will talk with us about some of the current trends and reflections on A.I. policies, particularly in the field of cybersecurity: an impulse statement by Anastasiya on the challenges, risks and ethical principles.

>> ANASTASIYA KAZAKOVA: Thank you very much. I previously worked on policy in the private sector, at Kaspersky. In my current work we also discuss with multistakeholders how norms could be implemented, and we do not focus solely on A.I.: we largely focus on the norms for responsible behaviour in the context of international security and of overall cybersecurity. A.I. policies are definitely getting more and more attention.

So thanks so much to the previous speakers. I think we have seen that A.I. is indeed already in the world of cybersecurity. For quite many years it has helped detection and helped to collect intelligence for better analysis of cyber threats. Many, if not all, cybersecurity companies these days, especially advanced companies, apply A.I. to some extent in the methods by which they deal with threats and in the intelligence they produce for their customers.

The big question, though, is how it works. What kind of data do the companies use for this? How does the A.I. actually work? It is still, even for many who develop A.I., a mysterious black box: if it makes a decision, how did it make that particular decision? All of these are really important questions, and I think this is one of the key fundamental challenges, not only in the minds of policy makers but also in the minds of users and of those who develop A.I. and A.I.-based solutions. In this regard, the point about retaining human control, which I think all the speakers have made already, is really fundamental. A.I. should not be autonomous. We cannot allow something that we do not exactly or completely understand to have such a big impact on our human life. We see humans are afraid of this, right?

But even though policy makers have already started talking and discussing how to make A.I. more predictable, transparent and ethical, the question remains, if we retain human control, if we give control back to humans, to the developers, to the academics who would like to see which algorithms are used: how will this control be exercised, and what will be on the table? Who would, in the end, retain the biggest control among humans: the developers of A.I., the policy makers, or academia? How do we ensure that the data collected on a massive scale for A.I. is not monopolized by one actor or a few actors on the market? How do we make sure academia and civil society have access to analyze which data, which policies and which processes are used in data protection and cybersecurity solutions? These are really open questions, really difficult questions, and they are contextual questions. We speak about A.I. in terms of its impacts on society, on the economy, on security and international security, and all of these questions will be decided in their particular context. That is really important, and really challenging, therefore.

One of the other challenges in all the emerging policies, and even the regulations meant to make A.I. more transparent and more ethical, is to define A.I.

I think there is no universal definition so far of what A.I. is, and policy makers really struggle to carefully scope future laws and to pin down what A.I. entails in a particular context.

One aspect that is important for policy makers and legislators to make sure of is that the laws focus on outcomes and expectations, not on the technology itself. That will help make the laws more future-proof and focus on what actually concerns people. As ordinary users, we don't want to know how the code for A.I. is written.

We do want to know how this code will impact our lives, our security, our jobs, our community or society on a broader scale.

The other aspect I want to mention concerns the emerging policies and regulations narrowly in the field of A.I. in cybersecurity. Here I would agree with the audience that participated in the poll: most people said that existing regulation needs to be updated rather than new regulation developed, and I agree with this. I think it's really important to see, broadly, on a more horizontal level, that A.I. is in the end one more piece of technology and one more piece of code, even though it's a really complicated, fascinating and difficult piece of code. But still: which impacts does it produce for different stakeholders? In this regard there are already existing and emerging laws to regulate the security of data in particular contexts, the security of critical infrastructure and so on. A.I., I believe, complicates the picture, but it doesn't require a new approach from scratch.

Yes, it complicates the current picture and requires discussion, but we still need to look again at the impacts the technology has on us.

I also wanted to say that we do see emerging discussions on the impacts on international security and peace, luckily within the U.N.

And within the regional fora, but still not as extensive as they should be. The problem is that the international community, and those engaged in those discussions, including diplomats, still lack substantial evidence of how many advanced A.I. tools, if they exist, can be used for offensive and defensive purposes. The knowledge is still very limited; there is a lot of secrecy about this. It is knowledge that is not accessible to the broad public, or even to a limited group of academics, unfortunately. There is growing interest, and there are calls from the international community, to produce a sort of rules of the road for how to regulate A.I. in terms of cybersecurity, especially where A.I. can be used in a military context, on the battlefield. I think that is really important, and probably we will see more of it. But to have these discussions we need to understand the tools already out there, and to increase transparency about the different types of actors that are involved in cyber activity.

I would probably conclude on this question by saying that, overall, speaking of regulations, I think it's already evident that the large markets, such as the E.U., the U.S., China and other countries, will probably pass conflicting regulations concerning A.I. quite soon. I think we heard yesterday from the U.S. diplomat that the U.S. is preparing an executive order on artificial intelligence, and the G7 leaders have also committed to establishing a set of guiding rules for A.I.

So the question is who will have the ultimate power to define the rules and the impacts. Will it be the governments, and which governments? The vendors, the companies, just one or a few companies? If more fragmentation happens in this field, as it happens overall in cybersecurity and in cyberspace, unfortunately, it will, I think, leave fewer opportunities for different communities to truly benefit from learning what A.I. could bring to us as an international community, as a society.

There are still beliefs and hopes that vendors, organisations or companies could take the lead, organize a consortium and take a voluntary, self-regulatory approach to be more transparent. What we just heard from Kaspersky is a good initiative, and we hear more and more initiatives from companies extensively involved in A.I. to be more active in saying what kind of data they use and how they process this data. I think there is still optimistic hope that, if this conversation continues, a bottom-up approach could lead the way; in that case there will be more opportunities to avoid the risk of conflicting laws and of fragmentation in this field, and to make sure that access to this technology, to the research and to the discussion will be much broader than just within the borders of one particular country or a few countries.

But there are still open questions, and to some extent the emerging policies try to address them. Whether they will come to a conclusion is, I think, the open question. Let's see whether humans will be optimistic or pessimistic in solving this. Thank you.

>> MODERATOR GAN: Thank you, Anastasiya. I would like to finish off this preliminary round of remarks by inviting Dennis to speak about whether A.I. can be legally regulated at all, given the current political and technical difficulties with the A.I. Act in Europe, and whether we are destroying innovation through over-regulation. So maybe I will just hand the time over to Dennis.

 

>> DENNIS KENJI KIPKER: Thank you, Genie. I have a legal perspective on the topic. When regulating A.I., we need to draw a clear line, as was already noted by the previous speakers: we aren't talking about A.I. generically, but about specific use cases, specific use case scenarios.

A.I. and cybersecurity are two topics that came together long before use cases like generative A.I. became public in recent months. For example, A.I. is used with regard to cybersecurity in automated anomaly detection in networks, and I already wrote some publications about that six years ago.

And this, of course, begs the question, regarding this very specific use case: do we need a special A.I. regulation for cybersecurity in the future? My answer to that is quite clear: I would say no. This might be interesting, but to justify it, in my opinion, we need to differentiate. There are three use case scenarios we have to talk about and take a closer look at. The first: A.I. is used to improve cybersecurity. The second: A.I. is used to compromise cybersecurity. And the third: A.I. in general is being developed. The first two scenarios are, from a legal perspective, quite easy to answer. When A.I. is used to improve cybersecurity, it is technically one of several possible measures that can improve cyber resilience. Take the European law makers, who in my opinion currently lead the world in cybersecurity legislation, for example with the new Network and Information Security Directive that became effective at the beginning of this year, or with the draft version of the Cyber Resilience Act; we have a lot of upcoming cybersecurity regulation. The point is that, in this cybersecurity-specific legislation, the European law makers have so far avoided exclusively naming specific technologies to realise an appropriate level of cybersecurity, and have instead used the general term "state of the art of technology", which is a general guideline in many legal regulations of technology, such as cybersecurity, as well. It means, for example, that private companies and public institutions that implement cybersecurity have to fulfill the state of the art of technology to be compliant with the legal rules. This, in my opinion as a lawyer, is very fitting: a lawyer will never be able to conclusively map all the technologies that will be developed in the future and that are needed, especially here for cybersecurity, due to the rapid technological development we have, very fast developments currently and in the future. This opinion is widely accepted by the scientific community. The second use case scenario I would like to mention: when cyber attackers use A.I. to compromise I.T. systems, this is also not an A.I.-specific security scenario; attackers may well use other technologies to successfully attack I.T. systems. These are typically criminal offenses; in many countries we have cybercrime law, and the criminal offenses in national cybercrime legislation are interpreted so that they already cover the use of A.I. as a technical means of attack, without need for explicit regulation. Now we come to the third point of this very short statement.

The third aspect, as I said, is not about the use of A.I. but about the development of A.I.; we heard statements about keeping A.I. secure while it is being developed. Of course this is an important question that we also have to address from a legal perspective. But this development issue of A.I. cannot be considered a cybersecurity-specific issue, so it requires its own focus. And of course it must be ensured, for example, as mentioned, that A.I. systems are not themselves compromised at this very important stage. That's something we have talked about in several panels during this conference. This is also what the European A.I. Act, a regulation that has already been mentioned several times, seeks to achieve when, in the draft version made public last year, it explicitly stipulates that A.I. itself must be cyber secure, and that therefore developers of A.I. must provide safeguards to prevent, for example, manipulation of training datasets, or to prevent hostile input that manipulates the A.I.'s response. This has also been mentioned. But this, in my opinion, is just one facet of secure and safe A.I. development, not really a use case for the implementation of A.I. in cybersecurity. So, to come to a conclusion: in my opinion, the regulation of A.I. and cybersecurity must clearly differentiate between scenarios in which A.I. is only one of several possible technical means, and the regulation of A.I.-specific risks. I think this is an important point which has to be taken into the policy debate, and the future legal debate, as well. Thank you.

>> MODERATOR GAN: Thank you very much, Dennis. So far we have heard, beginning with the ethical principles put forward by Noushin, about transparency, safety, human control, privacy, defensive cybersecurity and being open for dialogue, which have been agreed upon in various ways. We heard from Amal about the framework: the five steps of the defensive cybersecurity life cycle, identify, protect, detect, respond, recover, which dovetail with the aspects of safety, human control, privacy and defensive cybersecurity. Then we heard from Anastasiya about multistakeholder cooperation, amongst other things, and Dennis highlighted the limitations of regulation and the need for some overlaying ethical principles. We will talk about all of these in a short while. I will take this time to open the floor to some possible questions.

Otherwise I am going to ask a round of questions. I see there are none yet, so I will just ask Jochen whether there are any questions from the online participants; otherwise I will be quite ready to launch into my round of questions.

Yes there is a question in the room. Can I ask you to take the mic, yeah? You have, okay.

You have to turn on the mic, push the button, thank you.

>> Thank you for the presentations as well. A question from my side: although the ethical question is more of a philosophical approach, for sure, when I look at cybersecurity, the adversary is going to use adversarial A.I., and they don't care about ethics. On the defensive side, detection might be where we can apply the ethical approaches. But when we are talking about response, especially active cyber defense and engaging in responsive actions, applying ethical A.I. to counter unethical adversarial A.I. might put us at a disadvantage. I would like to hear your approaches or thoughts on this as well.

>> MODERATOR GAN: All right. Maybe I will ask Anastasiya to take that question. Thank you for the question.

>> ANASTASIYA KAZAKOVA: That's a good question: whether the organisation has the right to hack back, whether that is legal, lawful. I think in most countries, governments and industry came to the conclusion that organisations shouldn't have this right; law enforcement, which has the mandate under law, should step in. If the organisation asks for this help, law enforcement can investigate and decide what to do, depending on what type of actor the organisation is dealing with. Cyber espionage is a matter of national security, of the relations between two countries. It is really getting more critical if it is something really advanced and complicated, or involving A.I.

As to whether the organisation should have this right: I think it would be really risky to go in this direction.

But overall, as you said, it's really philosophical how we define ethics in this regard, and why we, as good actors, should be ethical when there are a lot of bad actors that behave unethically. I think it's a really risky conversation to take, because we need to define what our goal is. Our goal is to enhance security for all, a sort of optimal collective security. Our goal is to enhance stability.

Is a good actor behaving unethically, even to protect ourselves, part of security and stability? I think not likely. So we still need to abide by international law, domestic law, national law, and overall the rules, to make sure that if there is a bad actor, we stay on the side where we understand the limits of our actions.

But I don't want to conclude on a pessimistic note; rather, a hopeful one.

The challenges that we see in cyberspace of course get more and more sophisticated, and they are not purely technical; that's what's difficult. If a problem is technical, the technical people will solve it. These problems are much more nuanced, sometimes requiring policy solutions and international security solutions. So in this regard, I think we, as humans who try to protect ourselves, have to be creative: focus on what we have already had for centuries, international law and national law, but also be creative about how new types of responses could be developed, how we could enhance cooperation between communities and vendors who could share knowledge and research output, or with governments despite the current geopolitical situation, and how we could increase our chances to create solutions that address threats that are getting more and more complicated for us. Again, that's difficult, but I think there is a lot of hope that this will be developed more and more, because I think in the end we all want security for us all.

>> MODERATOR GAN: Thanks for that. I thought I would also pay some attention to the questions from our online participants. There was one question from Yasuki Saito: what do you think of using LLMs such as ChatGPT to deceive human users and cause their PCs to be infected by malware? Is there any good way to avoid such things? Noushin?

>> NOUSHIN SHABAB: I guess we heard from Amal about this: advanced social engineering enabled by A.I.

This is a perfect example of using an A.I. system to craft a more convincing social engineering conversation, email or message, one that looks very benign and doesn't raise any suspicion. This is just one example of how A.I. can be misused by malicious actors.

But I would say that an advanced security solution, obviously one with machine learning techniques implemented into it, can also help to identify a phishing email or even a social engineering attack. Apart from having an advanced solution to protect users against such attacks, I would say talking and raising awareness matters. I'm sure that with the use of A.I. attackers can try to bypass defenses: it's much easier for them to understand their victim and their target environment, what software and what security measures are in place, and to figure out a way to bypass them. So something to complement an advanced solution would be education, for common users and also for employees in organisations, to understand the risks and to understand how A.I. can help make a more convincing conversation or a more convincing spear-phishing email, and to make sure that users are aware and don't fall victim.
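As a small illustration of the defensive side described here, below is a minimal sketch of a text classifier that flags phishing-style wording. The four training messages and the model choice are illustrative assumptions, not Kaspersky's technology; a real system would train on large labeled corpora and many more signals than text alone.

```python
# Minimal sketch: a bag-of-words classifier that scores how
# phishing-like a message sounds. Training data is made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Your account is locked, verify your password here immediately",
    "Urgent: confirm your bank details to avoid suspension",
    "Meeting notes attached from today's project sync",
    "Lunch on Thursday? Let me know what works",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = benign

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

# Probability that a new message is phishing, per the toy model.
print(model.predict_proba(["Please verify your password now"])[0][1])
```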

>> MODERATOR GAN: Thanks for that. Just taking stock of what we have so far, from the poll results and also from the discussion: first of all, what we are hearing is that A.I. in cybersecurity has produced a lot of benefits, and we can't run away from the use of A.I. in cybersecurity. But second of all, of course, it comes with costs. There are impacts; there are unintended consequences. Amal brought up statistics from Kaspersky about the number of new malicious files detected on a daily basis; thanks for bringing that up. I can also give an update on those statistics: as of today, Kaspersky uncovers more than 400,000 new unique malicious files every day.

And that's astounding.

When I talk about unique malicious files, we are talking about, say, one malware that infects 10,000 computers: that's not counted as 10,000, it's counted as one if it's the same malware. With all of us sitting in this room for an hour and a half for this workshop, we are essentially talking about 27,000, 30,000 new unique malicious files uncovered by a single company like Kaspersky. So that's astounding. So there are costs and there are benefits to the use of A.I. in cybersecurity that we need to be concerned about. And that brings me to the third point, which is of course the reason we are all talking about this: what is the role of laws and regulations? We start thinking not just about regulation, but about what exactly we are regulating and why. Then we also hear discussions about conflicting regulations, which are beginning to surface globally. What that brings us to is that there are limitations to regulations. As a lawyer, I would say that anything that is legal may still not be ethical. Do we then take a step forward and start thinking about ethical principles beyond just legal frameworks? That, I think, is where we are today. I think we have a question from the floor. Sir, can I ask you to take the mic, introduce yourself and give us your question, thank you.

>> Hi there, this is Maarten Botterman. Whenever we talk about ICT, complexity comes up, and when we talk about A.I. and cybersecurity, I agree with what has been said. But a complication is that security will require identity.

I can see that, specifically with A.I., this has a dual impact. One thing is that data, thanks to A.I., will become more personally identifiable than before. The other is that A.I. can also help secure, as has been pointed out, maybe also with the identity factor. So how do you deal with the dichotomy between the need for identity going forward, there's no way around it, and at the same time privacy? This is part of your legal considerations, of course, and ethical ones.

>> MODERATOR GAN: Okay. I will leave Dennis to take this question.

>> DENNIS KENJI KIPKER: Of course, when developing A.I. we have high-impact privacy risks; I think this is quite clear. Speaking from the European Union perspective, we have the General Data Protection Regulation, which addresses the use of personal data, also when A.I. is being trained. But as I mentioned when it comes to the possibilities and problems of A.I. regulation, I think in general we need to move away from trying to regulate every conceivable scenario and risk. We definitely have risks, but this is not something specific to A.I.; it concerns the whole technology sector. On the one hand, regulating everything will never be possible; on the other hand, administrative practice raises the question of who should control and implement all these laws. You will need a lot of resources. We see it with regard to data privacy authorities: not only in the European Union but all over the world, they are struggling with the implementation of laws, with all the companies that are not compliant. This is of course a question that is not A.I.-specific. Legally, it has long been proven that what matters is not the severity of sanctions after a certain kind of violation, but the likelihood that a violation is detected. I think this is where we need to work. What this means for A.I., in the wake of the current hype we have seen since the beginning of this year, is that we should not fall into a, in my opinion, mindless regulatory debate that possibly ends up delaying the really necessary regulations. We need a core regulation, but we have to distinguish between things that are necessary and not necessary at the first stage. In my opinion, the European Union A.I. Act, with its risk classes, is a good first approach for the time being, even if of course it needs to be revised again, because this year we have seen some new risks coming up. Since A.I. is mainly not developed by states but is currently in the hands of big tech players, mostly coming from the U.S., the cooperation between the state and industry actors really needs to improve; this is where we need to work as well. Self-regulation alone is not enough. We need a system of transparency, and we need more cooperation established on a permanent legal basis. When we talk about ethical principles, and this is also part of this session, I think ethical principles can help, of course. But the authorities for the supervision of A.I. must be stronger: they need more financial resources, and they will need more personnel resources in the future, so that we can tackle all these problems.

>> MODERATOR GAN: Thank you. I will ask Professor Amal to add on, and then Anastasiya as well.

>> AMAL EL FALLAH SEGHROUCHNI: Thank you for the question. I'm trying to answer the question about security. When we talk about security, we are naturally interested in the identity of the person we are trying to secure, for example. But there are initiatives around the world whose purpose is to try to distinguish between the identifier and the identity of a person. This is very interesting, because you can rely on a third party to attest that a person is associated with a given identifier, without having access to the whole identity of the person. Another very nice initiative is to avoid having a unique identifier for a person, so that no single actor has access to a 360° view of the person.

So you have a sectoral identifier that is associated with the same identity, which is associated through a third party with the person, and you add all these layers to avoid direct access to a person together with all of that person's data. Because just limiting data is not enough today.
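As a rough sketch of the sectoral-identifier idea, the snippet below derives a different pseudonym per sector from one identity, using a keyed hash held by a trusted third party, so that sectors cannot correlate their records with each other. The key, function names and scheme are illustrative assumptions, not a reference to any specific deployed system.

```python
# Minimal sketch: per-sector pseudonyms derived from one identity.
# Only the third party holding the key can link them back.
import hmac, hashlib

THIRD_PARTY_KEY = b"secret-key-held-by-trusted-third-party"  # hypothetical

def sector_id(identity: str, sector: str) -> str:
    """Stable pseudonym for one person within one sector only."""
    msg = f"{sector}:{identity}".encode()
    return hmac.new(THIRD_PARTY_KEY, msg, hashlib.sha256).hexdigest()[:16]

print(sector_id("alice@example.com", "health"))   # differs per sector
print(sector_id("alice@example.com", "banking"))  # cannot be correlated
```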

 

>> ANASTASIYA KAZAKOVA: I don't know if this answers your question; I'm curious what you think as well. The questions you ask are very specific but really critical, of course. It is perhaps not the most popular opinion, but I think regulations can be slow and fail to address the problems with A.I.; we still don't know how A.I. will affect us even a week from now, it's still developing rapidly. While regulations are important for moving manufacturers of products and tech companies in the right direction, with legal and regulatory action putting the right incentives on the market for them, I still believe the industry has the capacity and the ability to do lots of really important things without policy makers and regulators being in the room. For software there are initiatives on the software bill of materials; the idea is to increase transparency about the composition of the software you are using. If you take a cake, you need to know the ingredients to make sure it will not do you any harm, given dietary specifics.

The same logic applies to software: if you are a bigger company, you need a detailed, automated, machine-readable description to understand which components are inside, where a vulnerability could be and whether it could be exploited. I think the same logic could be applied to A.I.: increase transparency about the components you use, increase the documentation of what types of data sources, collection methods and techniques you apply. Yes, it will probably be most accessible to the largest, most advanced companies, but these companies have their own users, and I think that will bring back security for us all. Hopefully. It takes time, but I think it might be a more viable path than waiting for extensive regulation to be passed.
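To make the software-bill-of-materials idea concrete, here is a minimal sketch of a machine-readable "ingredient list", loosely modeled on the CycloneDX JSON shape, together with a toy automated vulnerability check; the component versions and the vulnerability entry are made up for illustration.

```python
# Minimal SBOM sketch: a machine-readable component list that a
# consumer can scan automatically, as described above.
import json

sbom = {
    "bomFormat": "CycloneDX",   # illustrative, not a validated document
    "specVersion": "1.5",
    "components": [
        {"type": "library", "name": "openssl", "version": "3.0.13"},
        {"type": "library", "name": "zlib", "version": "1.3.1"},
    ],
}

# Toy check against a made-up list of known-vulnerable versions.
VULNERABLE = {("openssl", "3.0.0")}
for c in sbom["components"]:
    flag = "VULNERABLE" if (c["name"], c["version"]) in VULNERABLE else "ok"
    print(f'{c["name"]} {c["version"]}: {flag}')

print(json.dumps(sbom, indent=2))
```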

>> MODERATOR GAN: Thank you for that, Anastasiya; your point about the software bill of materials is something that resonates with me. That's something we practice at Kaspersky for our software. I think it is important to know the ingredients of the cake you are about to eat. Professor Amal?

>> AMAL EL FALLAH SEGHROUCHNI: We have been talking about ethics from the beginning without specifying what we mean by ethics. I think ethics is not limited to data protection. We also have to consider dignity and protecting human rights; for example, when you detect some malicious attack, you should be careful with attributing the origin of that attack.

Fairness, privacy and also informed consent. My point is: what do we mean by informed consent? When people give some data, some information, and interact with a system, for example generative A.I. systems, people are not aware of the consequences of the tool they use. They give consent; they think they are informed, but in fact they are not informed.

Because most people are very far from technology, and most of them have no idea about cyber systems. So what do we mean by informed consent? How do we protect dignity in these situations?

>> Thank you for that. That's not emotion, it's just my throat being dry.

In the discussion this morning, it was said that legal measures very much aren't enough; legal is the last resort, in a way.

Whereas we have been talking a lot about privacy and security by design, it's important to recognise that the A.I. context is an extra challenge. But tying it to the reference to the European Union's A.I. Act: this is also where the Algorithmic Accountability Act is coming up in the U.S.A.

You can see that's a place we may end up, with A.I. not just being magic but something real and concrete one could take responsibility for. I think that's an important element. Thank you for your answers; it's just that we don't know all the answers yet, I very much realise that. The old principles of security by design and privacy by design are important, realising that identity is there to protect you, but it may also make you a victim.

>> MODERATOR GAN: All right. Thank you for that. I am mindful of the time; we have about 11 minutes left, so I'm trying to economize the time we have, not forgetting we also have one more poll for participants. I will go down the row and begin with Noushin. It's the same question for all our speakers; maybe keep your remarks short, one to two minutes max. Which are the two most important principles, in your view, that definitely need to be followed for the ethical use of A.I. in cybersecurity?

>> NOUSHIN SHABAB: That's actually a very good question. For me, it's the two main points that have been discussed more than the other principles today. The first one is transparency: being transparent with the users.

Being transparent about what we do with user data, how we implement detections and how we protect users, be it through machine learning techniques and algorithms or through more traditional ways.

The second one is privacy. We are in the cybersecurity industry, and we deal with targets and victims of cyber attacks. For us, protecting users is one of the most important aspects, and obviously if we don't take care of the privacy of user data ourselves, it doesn't make much sense to try to protect users from cyber attackers, right? So I would say transparency and privacy for me.

>> MODERATOR GAN: Thank you, Noushin. Next to Dennis. I hope you will touch on different principles.

>> DENNIS KENJI KIPKER: That's really a difficult question. To make a long story short, as a scientist I can say that even with A.I., and this is something I mentioned several times in my opening statement, we do not eliminate problems by regulation alone; this is my opinion, though others see it differently. In cybersecurity we need to clearly align ourselves with the three A.I. scenarios I mentioned in my opening statement. In terms of the principles, I find it very difficult to say there are just two principles that are relevant. The use of A.I., not only in cybersecurity but everywhere, has so many facets and different risks that we have not approached yet. I think one of the most important things is that we have human control over decisions. This is something that is also clearly described with regard to the use of personal data, for example: decisions of private companies that might have a negative impact on individuals cannot be made based on A.I. alone.

And in my opinion, the second important criterion: we have to distinguish between security and safety. We have a lot of use cases for the use of A.I.

It means security is strongly connected with safety, and we should take a strong look into safety. In my opinion, these would be the two most important principles, on top of the ones Noushin mentioned. Thank you.

>> MODERATOR GAN: That's great. Amal would you like to give us your two most important principles in your view?

>> AMAL EL FALLAH SEGHROUCHNI: If we are talking about ethics: we talk about ethics as if it were a stamp, and ethics is not that; ethics is a discussion of how things should go ahead. From my point of view, the first thing to take care of is how we preserve dignity and human rights in all these systems. The second is to reach informed consent with the population that uses these systems. This means we have to be very didactic, to explain things; for example, we have to talk about accountability. Data protection and all of these are tools toward the principles of ethics. Thank you.

 

>> ANASTASIYA KAZAKOVA: I was also expected to answer this question. I think the principles do help us, as users or those who live with cybersecurity, to have a sufficient degree of security. Transparency alone, knowing about the code, having the policies: how could that help us be more secure and feel more secure and stable in cyberspace? None of these principles alone actually helps to achieve what we want to achieve, but all of them together, and many more, could increase our actual chances of optimal security. Above all, we should avoid producing harm to others with any type of technology, and A.I. is of course no exception here.

>> MODERATOR GAN: Thank you for that. I think my secret wish sort of came true, and everyone touched on different principles. Now it remains for us to hear from our online audience as well. I will invite my colleague Jochen to put up the final survey question; I think it will be interesting to hear what we get from the online audience. Please rank from 1 to 6, because we have six ethical principles, the significance of each for the use of A.I. in cybersecurity, with six being the most significant. But I do agree with Anastasiya that everything comes together.

Yeah, it depends on how you formulate the principles, as Amal is whispering in my ear.

>> JOCHEN MICHELS: Let us wait a few more seconds for the poll.

>> MODERATOR GAN: Okay. In the meantime, I would like to say what we are going to do with the ethical principles that are currently at the draft proposal stage. Today we heard ideas discussed and new suggestions made, and the proposals will be further developed; it doesn't just stop here. The goal is to develop a basis that could serve as a guideline for industry, research, academia, politics and civil society in developing individual ethical principles. So after this session, we will be publishing an impulse paper. It will reflect the discussion results and will be made available to the IGF community as well. In addition, the paper will be sent to our stakeholders for complementary feedback, and of course Kaspersky will further develop our own principles based on this paper and provide best practices for cybersecurity. Thank you for putting up the results of the poll: please rank from 1 to 6 the significance of each ethical principle for A.I. in cybersecurity. Jochen, would you like to interpret the results, because there are many colours?

>> JOCHEN MICHELS: Yes, there is no clear priority; all of the principles were marked by the different attendees. And that makes clear what you said, Noushin, Dennis, Amal and Anastasiya: it's very important to take into account the different principles and to start a multistakeholder dialogue on that.

>> MODERATOR GAN: Thank you very much for that. I think we can close the poll. I will just take one minute to wrap up.

I think the key take-aways really are that the ethical principles all come together as one and complement one another, and that they need to be further developed beyond today's discussion. Of course, as Amal said, that is something we need to further develop. When it comes to transparency, privacy and being open for dialogue, these are, I think, equally important principles. So, agreed. It remains for me to state the call to action: we need further international multistakeholder discussion on these ethical principles. They are not exactly rocket science; I think it's about collating all of them into a document that makes sense for everyone, because we are all players in the cybersecurity world.

I just want to take this time to thank all of the audience today for furthering this discourse, and to thank all of our speakers. I'm Genie from Kaspersky, signing off. Thank you. I hope you have a successful rest of your time at IGF.