The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> HADIA ELMINIAWI: I can hear Dr. Maha loud and clear.
>> MAHA ABDEL NASSER: Hadia is saying she can hear me loud and clear, so you have a problem with your headphones.
>> HADIA ELMINIAWI: Let us know when you would like to start.
>> We are ready, Hadia.
>> It's working.
>> HADIA ELMINIAWI: Thank you so much.
>> Could the tech team display the Zoom on the screen once again, please?
>> HADIA ELMINIAWI: Okay. Thank you.
>> You are on the screen.
>> HADIA ELMINIAWI: Okay. Noha, let me know when you want me to start. Do I start now?
>> NOHA ASHRAF ABDEL BAKY: Please feel free to start. We are all here with you.
>> HADIA ELMINIAWI: Okay. Thank you so much.
So, okay. So, I am starting?
Welcome, everyone, to the Internet Masr session at the IGF in Saudi Arabia, "Navigating Risks and Innovation in the Digital Realm."
First, I would like to thank the forum and our host for their excellent organization and welcoming atmosphere. My name is Hadia Elminiawi, Chief Expert at the National Telecom Regulatory Authority of Egypt. However, I am here today in my capacity as a member of ISOC Egypt and chair of the African Regional At-Large Organization (AFRALO) at ICANN. I am also a member of ICANN's Security and Stability Advisory Committee. I am an engineer by training and hold a Master of Science in management and leadership.
Today I will be speaking and co-moderating this session with my colleague, Mrs. Noha Ashraf Abdel Baky, who is on site in Saudi Arabia.
Mrs. Noha Ashraf is a support engineer at Dell, a member of ISOC Egypt, and an instructor with the Pan-African Youth Ambassadors for Internet Governance.
In today's session, we will be exploring the risks that accompany digital innovation, including cybersecurity threats, ethical dilemmas and other emerging challenges.
Our discussion will focus on strategies and frameworks for harnessing innovation while effectively managing and mitigating associated risks.
We are honored to have with us today Dr. Maha Abdel Nasser, a distinguished member of the Egyptian parliament. Dr. Maha is Vice President of the Egyptian Social Democratic Party and one of its founding members. She has been an executive of the National Salvation Front and was elected a member of the Supreme Council of the Egyptian Syndicate of Engineers. She holds an engineering degree, an MBA, and a Ph.D. in political marketing. She is also a certified instructor at the American University in Cairo, the Arab Academy for Science and Technology, and the American Chamber of Commerce in Egypt.
We are honored as well to have with us online today Mr. Caleb Ogundele, an accomplished Internet public policy expert. Caleb is a dedicated volunteer with the ISOC Nigeria and Manitoba Chapters and currently serves as a member of the Board of Trustees of the Internet Society. He chairs the Nigerian School on Internet Governance, was formerly management lead of the information technology unit at the University of Ibadan Distance Learning Center, and was Project Director at the African Academic Network on Internet Policy. He is also an instructor at the African Network Operators Group (AfNOG). Caleb holds two Master's degrees, in Computer Systems and in Information Science.
Engineer Noha will be managing the queue both on site and online. We look forward to an engaging and insightful discussion. Thank you for joining us today.
Without further delay, let me start with my first question to Dr. Maha Abdel Nasser. It's an honour to have you on site with us today for this session, and thank you for your time, effort, and shared thoughts. My first question to you is: in your opinion, what are the primary risks accompanying digital innovation?
>> MAHA ABDEL NASSER: Thank you very much to ISOC and to Internet Masr. I'm glad to be here and honored to be with all of you, and thank you to the audience for being here. Actually, when we talk about the threats of digital transformation, or the digital era, the first thing that comes to mind is cybersecurity, which is the most important thing and the aspect most people think about. Then there is data privacy: we are all worried about our data and how visible we are to the world. Our data is now almost everywhere, and we cannot do much about it. But there is another threat that I think people do not think about a lot, which is dependence on digital transformation, this technological dependency, which may cause things to stop completely if something happens, as we have already seen in airports across the world.
We couldn't imagine that a small bug could actually do all this harm to the people who were traveling, delaying them from their work, and it got people to really think about what we are going to do if there is a shutdown of electricity, a shutdown of anything. We are so dependent on technology now.
And I think this is one of the major threats, risks, and challenges at the same time.
>> HADIA ELMINIAWI: Thank you so much, Dr. Maha, for this insight and for pointing out the dependency on technology, and how a small bug, as you mentioned, could bring the world to a halt.
So, I will follow up with a question. So in your opinion, how could we mitigate this?
>> MAHA ABDEL NASSER: It's very difficult to say. Of course, if we are talking about cybersecurity, we all know about firewalls and all the precautions, which still will not stop the cyberattacks that we see every day, everywhere across the world. It's a matter of who is racing whom, who can take the lead and somehow manage to avoid attacks. It's a kind of cat-and-mouse scenario, because all of the governments, corporations, and organizations are trying to avoid cyberattacks, and at the same time the attackers are trying to carry them out. So I don't think there is a single thing that can be done, or a piece of legislation, that can fully help with that.
For data privacy, I guess we all know that we have the GDPR, and most countries are trying to follow it or pass legislation or acts similar to the GDPR, despite the fact that some of them are not really successful in that.
For the technological dependence, I don't think we can stop being dependent on technology anymore, but we have to find ways, some kind of alternative solutions, for when the technology fails us. We lived and worked without technology for centuries, but now we are so dependent that when there is a problem with our phone, we panic. We feel as if the world has stopped; we cannot even remember how life was before. If the Internet goes down, we feel we are off track. So I don't think we can really help it, because it's a feeling, a lifestyle, that we cannot get away from.
But corporations have to find a way; they have to find alternative solutions, so that when the technology fails, they can mitigate it somehow. They have to keep the traditional way as a backup if there is a problem, so that not everything stops. This is how I see it.
>> HADIA ELMINIAWI: Thank you so much, Dr. Maha. And indeed, as you mentioned, maybe diversifying, and maybe also depending on local or community systems or applications; I'm not sure, but maybe there could also be a role here for frameworks developed by governments, or incentives provided by governments.
But I will stop here and move to Caleb. And, Caleb, I would like to also ask you the same question. In your opinion, what are the risks accompanying digital innovation?
>> CALEB OGUNDELE: Thank you very much. Thank you very much, Hadia and Dr. Maha. A very interesting perspective from Dr. Maha; I must say I admire the way she approached the question.
But first, one of the things I picked up from the conversation was the fact that we cannot get along without technology these days, right? Because we can no longer do without technology, we are stuck with it. And if we are stuck with it, we need to really find solutions to some of the innovations and the risks that will follow.
So, one of the things I thought about while preparing for this panel session was, first of all, that we need to start having what we call government-led initiatives. Some of those initiatives could be based on legislation, regulatory frameworks, and sandboxes where companies can test innovation safely. Take, for example: we are now in the age of AI, and people need some regulatory assurance that AI is not going to take over their lives. So, governments need to start having regulatory sandboxes that can help them safely test some of these AI systems.
I'm aware that the Singapore government has a testing framework that allows companies to test AI systems while sharing insights into some risks and solutions as well. We also need to start having what I call cross-border collaboration mechanisms across different spectrums.
Now, basically, the entire idea of having open standards is that we want collaboration from different perspectives of technology and innovation, so it's good that governments, and not just governments but civil society as well as academia, start encouraging what we call cross-border collaboration mechanisms. We need international data-sharing agreements for risk assessment, global standards for risk assessment, and standardized frameworks for sharing some of the cyber threats and intelligence that we have across the board.
I'm aware that there is also work in Egypt, where you take cyber threat intelligence very seriously.
However, we cannot ignore the fact that there are different types of bad cyber actors, especially when we look at the geopolitics of cyber threats, who are interested in sabotaging some of the efforts behind these open standards and cross-border collaborations just for their own benefit.
So, my encouragement at this point is that we should continue to have a lot of public-private partnerships, and we should continue to have joint research initiatives between governments and the private sector, to manage some of these innovation risks that we have.
More importantly, we should also have very good funding models that support even the private organizations doing some of this risk assessment. The reason I say this is, trust me, people will definitely run out of funding; but when they are doing important work that deals with, for example, national security or global security, it's always good to have governments supporting them. We also have cybersecurity framework knowledge bases that try to support some of the things they do as well.
So, I just want to stop here so I don't take too much of our time, and we can allow the other speakers to also contribute their thoughts about innovation and risk assessment. Thank you.
>> HADIA ELMINIAWI: Thank you, Caleb.
And I move to Noha. I know that Noha would also like to share her thoughts about the risks accompanying digital innovation. So, Noha, the floor is yours.
>> NOHA ASHRAF ABDEL BAKY: Thank you, Hadia. So, I believe digital innovation moves far faster than building national strategies, acquiring new digital skills, or drafting policies and good frameworks. So I believe the primary risk is that we will have a bigger digital divide, a bigger digital gap between privileged and less privileged technology users, because half of the world's population is still offline. The more privileged people, who are well educated, equipped with digital skills, and who already have good Internet access and accessibility, will become more privileged. On the contrary, people who are offline, or who lack access and digital skills, are going to be less privileged.
And also, the vulnerable groups will become more vulnerable: it will be easier to attack women, and children will be more exposed to threats. So we need to understand and acknowledge these risks and work on them, so we can try to stay relevant in this digital innovation race. And, yeah, thank you. Back to you, Hadia.
>> HADIA ELMINIAWI: Thank you so much, Noha. And you bring a different perspective here, where you talk about the vulnerabilities of some groups and how those groups could become even more vulnerable now and be exposed to digital attacks.
>> NOHA ASHRAF ABDEL BAKY: Yeah.
>> HADIA ELMINIAWI: And with that, I would like to move back to Dr. Maha and ask her about what Noha just pointed out: that technology is definitely moving faster than any frameworks or policies being drafted, and that as it moves forward, the digital gap widens and unprivileged people become even more vulnerable.
So, what can we actually do to protect those vulnerable groups? Noha mentioned women, and we don't want to single out a specific group. But how could we actually protect those vulnerable groups?
>> MAHA ABDEL NASSER: Thank you, Hadia. I just want to comment on your last question about cybersecurity: I didn't mention that we need not just a national strategy, we also have to have local software in each country, because if you get your support only from other places, you will definitely be more vulnerable as a country.
But back to what Noha said, which is extremely important: we have legislation against cyberattacks, or any attack through the Internet, Internet crimes, anything like that.
But, again, I don't think it is just about legislation, because most of those vulnerable people don't even report when they are attacked; we have cultural issues and awareness issues, and problems even with the people who receive the reports when victims decide to report this violence, and we call it digital violence. We have been marking the 16 days for combating violence against women, and we have been talking about digital violence. Women are one of the most vulnerable groups, as well as some religion-based groups, depending on the country, and children. And it is not just religion: there is digital violence against Black people in other countries.
So, we have an issue. Even with all the legislation already in place, we cannot stop it; people have to stop it themselves. And I think awareness and the role of civil society are extremely important here, because the legislation alone does not help if people don't report what is happening to them, or if the people receiving the reports are not sympathetic to the victim and feel that this is not a real attack.
And in Egypt, we had some women who took their own lives because of cyberbullying or cyber blackmailing and things like that. So we have a real issue. And I don't think it's just in Egypt; it's everywhere.
So, legislation alone will not be enough. We already have legislation in place in Egypt, and I think in a lot of countries, but still the attacks are there, and still we have the increased violence, as Noha said, and it's extremely important. And somehow it comes with the package, exactly like exposing our data: you cannot get into the digital era without having these threats.
>> HADIA ELMINIAWI: Thank you so much, Dr. Maha. And I was wondering here, because you mentioned, and I think this is very important, reporting abuse and reporting online bullying. I think one of the problems lies in the fact that people do not know where or how to report. And, of course, I am aware of ‑‑
(Overlapping speakers)
>> MAHA ABDEL NASSER: Yes, awareness, as I told you.
>> HADIA ELMINIAWI: Yeah, awareness, of course, is crucial. But awareness is mainly crucial for people to understand, and we don't want to get into the cases of the women who actually took their lives. But honestly, when I heard about those incidents, I thought that if those women had known better, they would never have done so. And if the community and society were well aware that cyberbullying exists, and that many stories are simply made up and not real, those women also wouldn't have cared so much about what was posted about them, and it wouldn't have driven them to take their lives.
However, again, the question is: do we have a place where someone who is bullied online can go and report it? How easy is it to report abuse, and through what channels can they report it?
>> MAHA ABDEL NASSER: Well, actually, let me speak first about Egypt, because I don't know about other countries. In Egypt, there is a phone number you can call. Unfortunately, there isn't a portal to report through. This is actually one of the suggestions I made in the parliament, but it didn't go through.
But in each police station, you can do so. And there is a hotline at the National Council for Women; they take reports of violence against women specifically. And the Ministry of Interior has another hotline for reporting all abuses over the Internet, for women, for children, for men, for anyone, because anyone can actually get hacked or blackmailed online. It doesn't have to be a woman; women are more vulnerable, but a lot of people have these issues.
So, there are ways to report, and I guess there are a lot of organizations from civil society now trying to spread the message. Because, as you said, if those women had known better, they wouldn't have committed suicide, and this is extremely sad. We have this burden on our shoulders, that we didn't let them know. But we all have to work together to spread this information and raise awareness in every country, not just Egypt, because it's happening everywhere. Thank you.
>> NOHA ASHRAF ABDEL BAKY: We can also report to the social media platforms themselves, so they can suspend the account or take it down, in parallel with the governmental reports. So, Hadia, I have some questions for you. What are the primary risks and challenges associated with the quality, bias, and security of AI training data?
And second question is, how do these factors impact the ethical deployment and effectiveness of AI systems?
>> HADIA ELMINIAWI: Thank you, Noha. So, again, privacy and data protection are among the key risks accompanying digital innovation. Services and applications using IoT and AI depend mainly on collecting and processing huge amounts of personal data, which raises privacy concerns; in addition, failure to comply with data protection regulations could result in large legal penalties. It works both ways, you know.
But let me speak specifically about the risks associated with the quality of data. Bias in AI applications and systems happens when the outcomes of AI systems favor or disadvantage a certain group, or favor certain outcomes or certain individuals. This bias can result in unfair decisions and unfair treatment, depending on the field in which the AI system is deployed.
Examples could include random security screenings, and a lot of us face this at airports, where a specific ethnic group is always selected for the "random" security screening. It affects employment opportunities, even job search results, and leads to unequal treatment in legal or medical systems. And this is all because of the data used to train the AI systems.
Many AI systems, let's not say all, use historical data that reflects past human decisions and behaviors. So if the data contains some kind of prejudice or bias, or is not diversified, the AI will inherit and replicate those biases in its decision-making process.
And the other thing that comes to my mind here is: after a decision is made, how do you know what that decision was based on and what data was used for it? It's also about transparency and accountability, right?
And this human prejudice can be intentional or unintentional; it doesn't really matter which. But there should be a way to avoid such flawed training data, and there should be some kind of transparency and accountability associated with this. Addressing those issues is, of course, crucial for fairness, accountability, and transparency in AI applications.
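As a minimal illustration of the kind of bias check described above, the Python sketch below computes per-group selection rates over a toy set of model decisions; the group names, records, and the gap metric are invented for illustration only, not drawn from any real system:

```python
# Illustrative only: measure whether positive outcomes are skewed
# across groups in a (hypothetical) set of model decisions.
from collections import defaultdict

# Toy records: (group, decision) pairs; a real audit would use
# actual training data and model outputs.
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, decision in records:
    totals[group] += 1
    positives[group] += decision

rates = {g: positives[g] / totals[g] for g in totals}
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# A large gap between selection rates suggests the data or the
# model favors one group over another (a demographic parity gap).
gap = max(rates.values()) - min(rates.values())
print(f"selection-rate gap: {gap:.2f}")  # 0.50
```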
If we talk about the security risks associated with AI: AI systems usually use vast datasets that may contain sensitive personal information, so improper handling of this data can lead to harmful consequences. Also, from a security point of view, harmful or misleading data could be injected into the training process, corrupting the AI model's performance and causing unintended behaviors.
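A minimal sketch of that data-poisoning risk, using a deliberately tiny, synthetic one-dimensional "model"; all data points and the midpoint rule here are invented for illustration:

```python
# Illustrative only: a few injected, mislabeled points corrupt a
# trivially simple learned decision threshold.
def train_threshold(samples):
    # "Model": midpoint between the class means of a 1-D feature;
    # values above the threshold are predicted as class 1.
    pos = [x for x, y in samples if y == 1]
    neg = [x for x, y in samples if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

clean = [(1, 0), (2, 0), (3, 0), (7, 1), (8, 1), (9, 1)]
print(train_threshold(clean))  # 5.0 -- cleanly separates the classes

# Attacker injects a few large values mislabeled as class 0.
poisoned = clean + [(20, 0), (22, 0), (24, 0)]
print(train_threshold(poisoned))  # 10.0 -- genuine positives at
# 7, 8, 9 now fall below the threshold and are misclassified.
```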
So, those are the risks associated with the bias and quality of data. Noha, the floor is yours.
>> NOHA ASHRAF ABDEL BAKY: Thank you, Hadia. I would like to ask our audience if they have any questions for any of the speakers. I don't see any questions from the online audience. Okay, I guess we can move on and leave the Q&A to the end of the session. Yep.
>> HADIA ELMINIAWI: Okay. Thank you. I wanted to ask Dr. Maha, since she was speaking about reporting cyberbullying: is there a single platform at the national level through which people can report cyberbullying? I know that some countries have such platforms, a single platform to report cyberbullying. That's one of the questions.
And the other is for all the speakers; the same question will also go to Caleb. The other question is related to international frameworks through which people can report online incidents. If we take, for example, DNS abuse and the online security of users, do we have sites through which we could report this? And of course, we can report directly to the services, as Noha mentioned.
So, I will give the floor to Dr. Maha and then Caleb and then Noha to discuss this. Thank you.
>> MAHA ABDEL NASSER: Thank you, Hadia. As I said earlier, there is no single platform. I suggested having something like that, and the reporting should be online, as these are all online crimes, but it didn't go through. So, unfortunately, in Egypt we don't have a single platform to report through, and the reporting process is as I explained earlier.
And internationally, I think there is nothing except what Noha said: reporting to Facebook specifically, or Instagram specifically, mainly to the applications themselves. You can just report what happened to you, report the account, or report a specific person, something like that.
But I'm not aware of anything else, unfortunately.
>> HADIA ELMINIAWI: Caleb?
>> CALEB OGUNDELE: Thank you very much. So, one of the ways I think about this is, first of all, that every human right that applies physically also applies online. Which means that, just for context, not that I'm harassing anyone, if Mr. A harasses Mrs. B when they are not online, the same rules should apply when they are online. Therefore, if we ask which platforms, or how the reporting should be done, the first question is: does the person being harassed know their rights? Do they know that they are being harassed? Do they know how they can be protected if they report some of these issues?
So, one of the issues I have with online reporting is anonymity: most people who report online are not even sure that, if they report, they will get the protection they deserve. Some of them do not know whether action will be taken if they report. So that fear is there, despite the fact that some of them might have the information and the digital literacy training, as Noha and Dr. Maha mentioned.
Now, the question leads us to how to start all of this reporting. I will give you an example from some of the contexts in my own country, Nigeria, and some of the abuses that have happened with regulatory frameworks, even though those frameworks are meant to protect women, the vulnerable, and those who are exploited online.
We have realized that the political class is beginning to exploit the cybercrime act, which encompasses some of the laws that are supposed to protect those vulnerable people. Take, for example, someone who feels he has a lot of power: instead of protecting the vulnerable, he will tell the vulnerable person who is complaining, and not really harassing anyone online, "you are harassing me online," and then use the powers of the police to get the person arrested and thrown in jail.
So, we have seen a lot of abuses, even by some of our political class, and I feel these are issues that we need to bring to the forefront. These are conversations that we should not stop having, even at the political level: the vulnerable need to be protected by the law, not only the political class. And then we find that even our police, with all due respect to law enforcement agencies, try to choose whom they prosecute when some of those issues are reported.
So, do we have justice for those who are vulnerable? Do we have justice for those women we are talking about? These are questions that I would want us to go back and have a second thought about. And I see that someone has raised a hand to ask a question, so I will yield the floor. Over to you, online moderators.
>> HADIA ELMINIAWI: Noha, over to you, Noha.
>> NOHA ASHRAF ABDEL BAKY: Yeah, thank you, Hadia. As mentioned by Dr. Maha and Caleb, there is a lack of trust in the reporting mechanisms, and also a cultural barrier: victims can sometimes feel ashamed of reporting the incidents or the attacks, and instead they go silent about it. They feel ashamed before their communities, of how people will talk about them. But we need to break this cultural barrier and put the blame on the attackers, not the victims.
I guess we saw many incidents in Egypt, as Dr. Maha highlighted, where young women and even teenagers took their own lives. So we need to stand in solidarity with all victims of online attacks and stop attacking them again online. Because sometimes when you report an incident, people start commenting with hate speech or sharing negative comments, so sometimes victims even withdraw their reports to avoid all of this hassle.
So, yeah, we need to look at it from a 360-degree view: putting good legislation in place, trusting the process, awareness, civil society leading the awareness part, and Internet users being more responsible when using technology, because technology is here to help us. We need to use the good part of it and report the bad part of it. And that's it for me. Thank you. Back to you, Hadia.
>> HADIA ELMINIAWI: Thank you. And I see Mariam has her hand up. So, Mariam, do you want to take the floor?
>> MARIAM FAYEZ: Yes, hello, Hadia, hello, everyone. I like how the conversation is going, because it boils down to human rights and to what each and every person needs to feel comfortable, safe, and empowered online and offline. I second what Noha and Maha have been saying, and I really think civil society should be moving this concern forward. In Egypt we have multiple successful initiatives, whether initiated for the betterment or safety of women or vulnerable women, to address their issues, or for different groups and different rights. And we have many, from women's harassment, for example, to first responders in times of crisis, like the eSIM card and all the eSIM activity.
And even on women's harassment, we had very, very successful initiatives. All those initiatives attracted the attention of politics or the government; they looked closely at those initiatives and let them grow, or rather those initiatives had the opportunity to grow because they had the people's support and the people's momentum. Women in Egypt, for example, lacked the opportunity to feel safe in reporting and to speak up when any sort of abuse happened, and the trust was not there. But the trust started building when those campaigns and initiatives took momentum. And they did not take momentum only on the ground: social media was a very strong tool, and chatting tools like WhatsApp were very strong in supporting such mechanisms.
So, when it starts with grassroots organizations or with civil society, it will then move forward; I think this is a good way to start the momentum. So, civil society, I think, comes first at this stage. Thank you.
>> HADIA ELMINIAWI: Thank you so much, Mariam. And indeed, civil society has a big role in leading the way when it comes to online awareness.
And I would like to turn to Dr. Maha now and ask her, first: are there practical strategies and frameworks for effectively managing and mitigating the risks we have been talking about? Is it doable to have practical strategies and frameworks for that? And do we have such frameworks to mitigate online risks, not necessarily rules or regulations?
>> MAHA ABDEL NASSER: Actually, if we are talking about frameworks, I am not the person who should be answering; this should be the government or the executive body. We have a lot of strategies in Egypt. We have a strategy for cybersecurity, a strategy for digital innovation, a strategy for AI. I still don't see the real implementation of these strategies. Strategy is a very nice word; you can write very nice things, but when it comes to implementation, a lot remains to be seen on the ground. We are still far behind in a lot of things, especially if we are talking about cybersecurity. I know that the government is taking it very seriously, but we still don't have the on-the-ground activities needed to deal with cybersecurity.
For AI, we are definitely, definitely far behind. There are no incentives for SMEs, startups, or any innovators who could work on AI, which is extremely important and needed.
You can find, you know, fragmented initiatives, and people working on their own, but there is no structured work on these things. And I think it's extremely needed now.
This is my point of view. Thank you.
>> HADIA ELMINIAWI: Thank you, Dr. Maha. You mentioned AI, so I will go back to AI. Indeed, artificial intelligence has the power to transform businesses and is important for governments to be more effective and to perform more efficiently; and that applies not only to governments but to all forms of business. And that gets me back to the question of frameworks: maybe what's required is for organizations and entities to establish clear principles for their responsible use of AI.
So, any organization or entity that is using AI will need to define guiding principles for using AI and commit and adhere to the principles defined. Those could relate to accuracy, accountability, fairness, safety, and ethical responsibilities, and would be established and published by the organizations or entities using artificial intelligence.
And, again, we go back to what I started with, Dr. Maha: technology is part of our life, and we have this dependency. That's not going away; it will only increase. And humans and machines have always worked together.
And moving forward with AI, this is also what's going to happen, or what is supposed to happen. Since very early human history, people were using carts, then machines in agriculture, and this keeps moving on: then computers, and then mobiles.
So, it has always been humans and machines. But, again, the question is how we do that. And, Noha, yes, I will give you ‑‑
>> NOHA ASHRAF ABDEL BAKY: We have a question from the online audience and another one from the onsite audience. Yeah, please.
>> PARTICIPANT: Thank you so much. It's a very insightful discussion; thank you, Dr. Maha, thank you, Noha, and particularly Hadia.
Actually, I come from the technical community; I am a security researcher. I actually know the problems from the other side, from the ones who create these problems, so I can give some insights on that.
You know, so I run an organization, Secure Purple, and our focus is on end users' safety, particularly women and kids. We have been very active in that; we arrange workshops, trainings, and awareness sessions in different regions of Pakistan. I am from Pakistan, basically. Normally, in our trainings and workshops, we train women and kids, because they are the most vulnerable part of the Internet, particularly on image-based abuse and on cyberbullying and things like that.
We train them how to stay protected from these kinds of threats. And one of the recommendations we used to give was: never share questionable or indecent pictures of yourself, whether you are in an online relationship or anywhere else over the Internet, because that was the main thing that gave rise to image-based abuse.
But now I actually doubt that recommendation, because with AI you can create any sort of content from just a single image. So, now I'm thinking: what's the next step, you know?
And I will give you a statistic. There is an organization, Sensity AI, that has been tracking deepfakes since 2016, and they report that 95% of deepfakes are nonconsensual porn. Imagine: a huge technology coming up, and 95% of its use is nonconsensual porn. What would be the amount of image-based abuse, and what would happen to the morality and the social structure of society, if so much questionable content is produced daily just using AI?
There's a lot to discuss. I could add to every insight the speakers have shared, but due to time limitations, I would just say we need to identify every single stakeholder of the Internet and reach out to them. For example, on reporting: even if I report, the reputation damage has been caused, the video has gone viral, the damage has been done. I know accountability is necessary, but still, particularly for women, the reputation is gone; the damage has been done. They might not be able to get a job. There have been cases we dealt with where people got divorced just because a single image became public, maybe an indecent picture, but still, the impact is too much. As for legislation or rules, coming from the technical side, I can get away with this stuff: with the anonymity the Internet gives me, I can create a fake Facebook account with a fake email and a fake phone number. How are you going to trace me?
There's a lot more still to consider in the Internet space. Even from the technical end, we are still confused about how to deal with it, and it may take time to evolve. So, yeah. Thank you.
>> NOHA ASHRAF ABDEL BAKY: Thank you. Thank you for the very realistic and on-the-ground intervention.
Hadia, I guess Musa Turai has raised a hand, as well as Caleb.
>> MAHA ABDEL NASSER: And we need to conclude; people are waiting outside, by the way.
>> NOHA ASHRAF ABDEL BAKY: We have until 12:45. Yeah, yeah. No, no, no. Yeah.
>> CALEB OGUNDELE: Let me quickly jump in because of time. In reaction to the last speaker, one of the things I observed is that he asked about the conscious efforts of the technology organizations that own AI infrastructure: what conscious efforts are they putting in to make sure there are no abuses of the AI-generated items that come online and could go viral?
One of the things I know Meta does is allow you to flag AI-generated content, and because you are able to flag it, that in a way reduces its virality.
But one of the things I'm not sure of is whether other social media platforms are beginning to follow suit and have some sort of governance board, an accountability board, that also helps review some of the content and the accounts that they host. At least I am aware that Facebook is making a conscious effort on that, but I am not sure about X, I'm not sure about Bluesky, I'm not sure about the other ones. It would be a very interesting thing to see them taking conscious efforts to flag AI-generated content so that it does not go viral.
One of the things I would also like to see for AI-generated content is meta tags embedded within the generated images and content to indicate that they are AI content, and there should be a global standard that allows that kind of meta-tagging of AI-generated content on any platform, yeah. Thanks. That's my quick intervention on the last speaker's point. Thanks.
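A minimal sketch of the kind of meta-tagging Caleb suggests, using Python's Pillow library to write and read back a provenance tag in a PNG; the tag names here are hypothetical, and real standards efforts (such as C2PA's Content Credentials) define richer, cryptographically signed provenance metadata rather than a plain text chunk:

```python
# Illustrative only: embed and read back a provenance tag in a PNG.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Pretend this image came from a generative model.
img = Image.new("RGB", (64, 64), color="gray")

meta = PngInfo()
meta.add_text("ai_generated", "true")           # hypothetical tag name
meta.add_text("generator", "example-model-v1")  # hypothetical field
img.save("tagged.png", pnginfo=meta)

# A platform could check the tag before deciding how to label the
# content or limit its distribution.
reloaded = Image.open("tagged.png")
print(reloaded.text.get("ai_generated"))  # -> "true"
```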
>> NOHA ASHRAF ABDEL BAKY: Thank you, Caleb. We had a raised hand from Musa. Okay. Thank you, Musa.
Hadia, are you still there? You are muted.
>> HADIA ELMINIAWI: Yeah. Sorry. Yeah, I was disconnected for a few moments, but I am back again. And I wanted to ask Dr. Maha, after hearing what Caleb said: how could we require tech platforms to take responsibility for abusive content? Is this possible, and how could we actually put it into action?
>> MAHA ABDEL NASSER: Definitely, yes, we have to hold them responsible for abusive content, and they can actually do that. They have the resources, they have the tools, and we have seen what they have been doing, for instance, during what was happening in Gaza and the conflict: they could take down all the content that they thought was not right from their point of view. They were biased; they were not neutral. So they can do whatever they want, and they should be responsible for taking down any abusive content, hate speech, all these kinds of things.
And, again, I didn't answer your question about the framework for AI. There is an ongoing debate between having an AI framework or legislation. I think we all know that the EU already has an act, and it's not working: most of the European Union countries are not working with this act, and I guess it was mainly France that tried to work with the act, and it didn't work out.
So, we have been thinking that a framework could be more realistic, taking into consideration the huge activity and speed in AI, which is moving and changing almost every day. So legislation could be a poor fit for this.
But we are still waiting, because we have actually proposed legislation in the parliament for AI, and we are waiting to see what will happen with it.
But still, legislation or framework, we have to have something that specifies the ethics, what should be done, and the responsibility, and it's extremely, extremely difficult. If something happens with a self-driving car, with the software, and it kills a person, who is responsible? As a policymaker or a legislator, I cannot say who would be responsible. Am I going to put the man who wrote the program in prison, or the car? You can do nothing about that. You just try to take as many precautions as possible, but it will never, ever be enough, and we will always have these kinds of things. And I am talking about the self-driving car because it's already there; it's happening.
But we are also hearing about avatars, where your avatar could go and commit a murder somewhere and no one can come back on you. And VR, and a lot, a lot more. Actually, thank you very much for what you said, because it's almost impossible.
I think those of you who have seen Black Mirror will agree: we are in the Black Mirror era, and we don't know what will happen after that. Thank you, Hadia.
>> HADIA ELMINIAWI: Thank you so much. And if no one from the floor wants to make an intervention now, or Noha, if you want to make ‑‑
>> NOHA ASHRAF ABDEL BAKY: Yeah, we have a raised hand here.
>> PARTICIPANT: Can you hear me?
>> HADIA ELMINIAWI: We hear you.
>> PARTICIPANT: My name is Lisa Framir from the Netherlands. Thank you so much for this interesting discussion. I wanted to share one thing with you and also ask a question related to the innovation part of this session. The first thing I wanted to share is that I work on the European AI Act in the Netherlands. It's quite a new law, and at the moment all the member states are working to transpose it into their own legislative systems. So this is quite a challenge, but I am still hopeful that it will work.
And maybe related to the discussion about abusive content and about deepfakes and the problems that have arisen there: there are provisions in the European AI Act that try to address this issue, and personally I find it very exciting to see whether it's going to work or not. There are provisions stating that the developer of, for example, deepfakes has to make clear that the content is AI generated or manipulated. It also needs to be machine readable, and for users of the Internet it needs to be recognizable that it's generated or manipulated content. So maybe that can help.
But, on the other hand, of course there are lots of actors who make this content with the intention to do harm. So in that sense, the law is always limited in the extent to which it will help. But at least it gives legislators in the EU some power to enforce and to address things when they go wrong.
And I wanted to ask, maybe to you, Dr. Maha, and to the speaker online, Caleb: to what extent do you think the risks you mentioned in your presentations hamper the entrepreneurial spirit of people, of SMEs, for example, in your countries? I'm very eager to hear whether, for example, female entrepreneurs are really stopped by the abusive practices they experience online, or whether, for example in Nigeria, research is still going on and not being limited too much by this abuse of the law.
So, thank you in advance for shedding some light on the innovation ecosystems in your countries. Thanks so much.
>> HADIA ELMINIAWI: Dr. Maha, and then Caleb, if you could... about the innovation question.
>> MAHA ABDEL NASSER: Shall we take the other question or ‑‑
>> Let's take all the questions, and we can answer everything together.
>> PARTICIPANT: Thank you, Caleb. And thank you for the excellent interventions; I really enjoyed them very much. My name is Amr Hesh, and I am a member of Internet Masr. It was quite enlightening listening to all those interventions. But let me share a story with you, because we have always been thinking of the risks. It is a story that happened maybe five centuries ago, and it has cost this part of the world that we live in, the Arab world, a lot over those 500 years, because they were afraid to adopt an innovation that was coming up at that point in time, which was printing. They were afraid that through printers and printing, the Qur'an, which is the holy book for Muslims worldwide, could be forged and would not be as accurate as the handwritten Qur'an. So the Sultan back then decided to forbid printing from being deployed throughout the Islamic world, which was controlled by the Ottoman Empire, because they were afraid of the challenges and the risks associated with printing.
So, if we reflect today on what is happening with technology and innovation, let us please not weigh the risks more than we weigh the opportunities that AI could be opening up to us. And if we are afraid of deepfakes or something like that, let us not think about forbidding it, because at the end of the day we cannot forbid it; the people who want to do deepfakes will have access to the resources and will do it anyway.
Let us instead think about counter-technologies that would help us manage this reality and actually get the good part without the bad part. Thank you, and I look forward to hearing your comments on that.
>> MAHA ABDEL NASSER: Thank you, Amr, for the positive note.
>> HADIA ELMINIAWI: I also wanted to say something about that. Yeah, this is Hadia. I wanted to point out something about what Amr just said: what really happened is that the Qur'an spread faster with printing, and more people could read it.
And also, now with the Internet and everything being online, you can just go on your mobile and read it. So, it turned out to be even better for the spreading of the holy book.
And just before that, I know you had a question with regard to innovation, and I had one as well, so I will pose the question so you can answer both together. The question is related to the establishment of collaborative platforms: how can policymakers facilitate the establishment of collaborative platforms for the exchange of insights among peers and experts in the field of digital innovation and, of course, risk management?
So, I will stop here and, Noha, do you want to give the floor to Dr. Maha and then maybe Caleb?
>> NOHA ASHRAF ABDEL BAKY: Yeah, sure. Dr. Maha.
>> MAHA ABDEL NASSER: Okay. I will start by answering your question. Well, actually, the female entrepreneurs, no, they are still working. And the innovators, I don't think they can be stopped easily by any cyberbullying or cyberattacks or things like that, because they are entrepreneurs and they are in the innovation business. And there are measures, actually, and there are things they can use, or can get, to help them.
And as Mariam said in her intervention, there are still problems, but there are also a lot of initiatives, which are somehow helping, especially women, in this area.
Commenting on what you said: nothing can be stopped anywhere; we are not talking about stopping anything. Even if we are wary of the risks, we definitely see the opportunities and what AI can help with. It is just that we need to be careful; we need to see the challenges and address them and, as you said, find the counter-technologies to work with that. But it's moving extremely fast, and I think this is what is frightening, not just worrying, even for us who work in the technology field and the digital field. We think it's moving extremely fast. That's all I can say.
As for your question, Hadia: again, it is not the policymakers who can build or run the collaborative platforms; it's the executive bodies. And there is nothing to prevent this from happening, just the usual problem of collaboration, any kind of collaboration. And, unfortunately, although we are talking about digital collaboration, everything goes back to politics. As someone who works on issues of persons with disabilities, I can tell you that it is not easy: to get data flowing in Africa, there has to be political will, the will to have a joint platform for countries to work with between them. It is a political decision, not a digital decision, and it is not up to the policymakers or the legislators; it's up to the executive bodies and the governments.
I guess we can give them our ideas and our thoughts and try to push them to do so, because I think we need to do this, especially in Africa, the Middle East, and the Arab world: we need to collaborate, to cooperate, and to work together to get things done. I think no country can do it on its own; it's now beyond individual countries. We really need to work together. Thank you.
>> HADIA ELMINIAWI: Thank you, Dr. Maha.
And Noha, I see a hand from the floor. I just had a follow-up for Dr. Maha, but maybe I can pose the question, move to Noha, and then Dr. Maha can answer later. The question is: how do those executive bodies work together in order to exchange insights in the field of digital innovation and risk management? How do they coordinate?
And I will close here, Noha, and give you the floor to manage the queue.
>> NOHA ASHRAF ABDEL BAKY: Thank you, Hadia, we have a hand from our online audience. Musa, do you want to take the floor? Or maybe share your questions in the chat.
>> MUSA TURAI ADAM: Yeah.
>> NOHA ASHRAF ABDEL BAKY: Can you raise your voice, please?
>> HADIA ELMINIAWI: But you are cutting out.
>> MUSA TURAI ADAM: Can you hear me now?
>> NOHA ASHRAF ABDEL BAKY: Yeah. Please go ahead.
>> MUSA TURAI ADAM: My name is Musa from Nigeria; I'm a student here in Malaysia at (muffled audio). I would like to make a contribution regarding what the previous speaker said about collaboration between the bodies. I think that is the strongest way to deal with the issue of ‑‑
>> NOHA ASHRAF ABDEL BAKY: Sorry, Musa. The voice quality is not good. Can you try to raise your voice, please, or pose your questions in the chat?
>> MUSA TURAI ADAM: Can you hear me now?
>> NOHA ASHRAF ABDEL BAKY: Yeah, yeah, okay, please proceed.
>> MUSA TURAI ADAM: Yeah, I want to make a suggestion and a contribution about what the previous speaker said regarding the collaboration between the bodies.
I think that is the perfect way to deal with things like insecurity in Africa, because back in my country, there is (muffled audio): whenever you post something bad about, like, a politician in the country, it is very easy for the government to, like, trace you and get you and deal with you regarding the issue.
But for the institutions and society, I don't know what kind of technology they are using to trace the person who posts on social media against a politician, and (muffled audio) use to trace, like, the bandits and the other people (muffled audio). So I think a collaboration between the government and (muffled audio). That's what I want to raise.
>> NOHA ASHRAF ABDEL BAKY: Musa was emphasizing the importance of collaboration between the government and other bodies to facilitate the reporting mechanisms or to find the attackers. Yeah. Dr. Maha, you have a comment?
>> MAHA ABDEL NASSER: No, Noha. I guess he was talking about how anyone who posts anything against the government or against someone, the politicians, can be easily caught.
>> NOHA ASHRAF ABDEL BAKY: Okay.
>> MAHA ABDEL NASSER: And trust me, Musa, it's not just Nigeria. It's in a lot of places, yeah.
>> NOHA ASHRAF ABDEL BAKY: In Africa.
>> MAHA ABDEL NASSER: It's all across Africa and some other countries, too. We don't want to specify.
And I don't think that this is what we need the governments to be collaborating on. We need the governments to collaborate on trying to make the Internet a safer place.
And as for your question, Hadia: I don't know how they can do that; it's their job to cooperate. In a place like this, at the IGF, or in other forums, the officials from different countries can easily sit together and agree on a way of cooperating. They can agree on having a unified platform for reporting abuse, for instance, or help each other in making the Internet a safer place for all vulnerable groups.
So, it's doable. It is just, as I mentioned before, that it needs political will. It needs them to really want it and to feel the importance of this cooperation and collaboration, which I don't think is happening right now across Africa so far. Thank you.
>> NOHA ASHRAF ABDEL BAKY: Thank you, Dr. Maha.
Caleb, I saw your hand raised.
>> CALEB OGUNDELE: Yeah, I just wanted to respond to the lady who asked me a direct question about how we encourage women-owned SMEs when it comes to innovation and risk management.
So, I just wanted to mention, first of all, with respect to my own male gender, that women are the best money managers, right? And we need to give them their flowers for that. So, why am I saying that?
It means that women are actually the ones who power the on-the-ground economy that we have, even globally. Trust me, women are always the ones at the marketplace, which makes them even more exposed and vulnerable. And to take it back to how we can encourage women-led SMEs around innovation and risk: the first thing I see is that the system is very biased against them when it comes to the funding and support that would let them innovate, expand their businesses, and make them more scalable. They are even more exposed to risk than the male gender.
So, I would say that they don't have the same access to funding as their male counterparts, and that limits their ability to scale and innovate on the ideas that they have. I haven't seen many female innovators when it comes to AI; I have mostly seen men. Why is the percentage of women lower than that of men?
So, responding to the question that was directed at me earlier, I would like to see more women actually supported when it comes to networking, funding, mentorship, and even risk management, as well as capacity training to help them get there.
And then governments should be conscious about having inclusive funding or procurement strategies to support women and persons with disabilities, reserving at least a certain percentage for them. I'm aware that this is being done in Kenya, where at least about 10% of government procurement is directed toward women, persons with disabilities, and a couple of other categories. But I feel that more can be done.
So, that's just my little intervention on the question asked by the lady in red; I couldn't catch her name when she asked the question. Yeah. My apologies for that, please.
>> NOHA ASHRAF ABDEL BAKY: Thank you, Caleb for responding and for being a good ally for women empowerment.
We need to wrap up here, but there is one raised hand from the onsite audience. Rosanne, please, in less than one minute.
>> PARTICIPANT: I am Rosanne Zakarea from Egypt. I'm an ambassador and a content creator. You talked about collaboration with governments, but I think the most important thing is to collaborate with the platform owners, like Elon Musk and Mark Zuckerberg, because they already own social media platforms that, you could say, control our minds and sometimes use algorithms to control our point of view. As a content creator, when I want to share my point of view, especially on politics topics, I see that the algorithm forbids me from sharing my point of view and considers it hate speech or something like this, especially anything related to the war in Gaza. And we saw that over the last year.
And we already have some tricks, like symbols, like the watermelon, or dots between the words. But we really don't have the freedom to share our points of view. And these are foreign platforms, and the owners have backgrounds with political views, and they control how we share our point of view. So, this is my issue, actually.
>> NOHA ASHRAF ABDEL BAKY: Thank you, Rosanne. Yeah, everybody has a hidden agenda, and it's good that we are trying to trick the algorithms.
Dr. Maha, do you have any comments? And please add your closing remarks because we need to wrap up.
>> MAHA ABDEL NASSER: Okay, thank you, Rosanne. As I said earlier, you can do nothing about it, because they own these platforms. And whatever we tell them, and actually we have talked to them directly, they claim that they don't do that. But we know very well that they are doing it, and there is nothing we can do about it except tricking them, as you said. And they couldn't do anything about that, so we could manage somehow.
As a closing remark, I will close on a positive note about technology, despite all the threats, the risks, and the challenges we have been talking about: we should look at the opportunities, and we should think about what the world would be without technology. It would be a completely different world, and I don't think we can do anything without technology anymore. So, this is the end of it: we have to live with it, even if we have to sacrifice some of our, let's say, not just our conventional ways, but some kinds of freedom, or our data. We will have to sacrifice our privacy somehow, or at least a percentage of it, to be part of this world. And I think we will have to be okay with it, because this is the new future.
>> NOHA ASHRAF ABDEL BAKY: Thank you, Dr. Maha.
Caleb, please, in 20 seconds, can you please give us a ‑‑
>> CALEB OGUNDELE: Yeah, I will just quickly wrap up with this. I see a lot of issues around the fact that we don't have most of our people, even women, participating much in governance, which means they don't have a voice that advocates for them when it comes to innovation and risk issues.
Here's what I will say. There is a saying attributed to a great philosopher: if you fail to participate in governance, you end up being governed by imbeciles. And when you are governed by imbeciles, they make the wrong rules and regulations.
We are grateful for the fact that we have people like Dr. Maha, who participates in governance activities in her local environment, and we want to encourage more women who will advocate for things that pertain to risk and innovation for women innovators. So, that's just my quick wrap-up at this time. Participate actively in governance so we won't be governed by imbeciles. And if you won't be given a seat at the table, clear the table, bring a seat from your house, and sit at the table. Thank you.
>> NOHA ASHRAF ABDEL BAKY: Thank you, Caleb. Hadia, your turn.
>> HADIA ELMINIAWI: Thank you so much, Noha. And I just wanted to comment on what Rosanne said: the example she mentioned is a clear example of how platforms can actually hinder innovation and discourage creative content, just as they hinder and discourage controversial expression.
And, again, this goes back to the principles we were talking about, and to how entities, organizations, and platforms should actually have their AI principles published and should abide by and adhere to those principles.
And I would just quickly point out some of the things mentioned today, some of the statements made today. Those include: empowering legislators; civil society leading when it comes to raising awareness; agreeing to cooperate, whether with regard to having unified platforms for online reporting or to sharing insights and expertise in the field of digital innovation; and investing in alternative offline systems for critical infrastructure, and in alternative, maybe community-developed, systems and software.
So, those are, like, some of ‑‑
>> NOHA ASHRAF ABDEL BAKY: Sorry, we are being told ‑‑
>> HADIA ELMINIAWI: Yeah, I give you ‑‑ I'm sorry. I know, we are ‑‑
(Overlapping speakers)
>> HADIA ELMINIAWI: The floor is yours. Thank you so much for this ‑‑
>> NOHA ASHRAF ABDEL BAKY: Thank you for organizing this session, and thank you all. I would like to thank the online and onsite audience for attending and for staying for the whole session, for the full 90 minutes. And with this, I close the session. Thank you, everyone.