The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> Hi, everyone. Good afternoon. How are you feeling today? Hello.
Hi. A very good afternoon to all of you, and welcome to the Global Youth Summit 2024 here in Riyadh, the Kingdom of Saudi Arabia. For those who don't know me, my name is Carol Roach. I'm the 2024 chair. Distinguished participants and colleagues, yes, you are distinguished. I welcome you to the IGF 2024 Global Youth Summit. These are exciting times for digital technologies and the internet. Today our focus is on the impact of AI on education, exploring the opportunities and the risks. Looking at the AI readiness of educational systems, their policies, frameworks, and capacities, we will explore how AI in education may affect human rights, the norms and cultures of a society, and the quality of life for individuals, both negatively and positively.
We want to deliver outcomes that are beneficial by using a multistakeholder approach to answer the policy questions. There's no better way to do this than through our panel of experts and, most importantly, you, our participants.
Before I introduce our distinguished panel, I would like to give the floor to Mr. Li Junhua, the United Nations Under Secretary General for Economic and Social Affairs. Let's give him a warm welcome.
>> LI JUNHUA: Thank you. Thank you very much, Moderator, for giving me the floor. It is a great pleasure and honor to join all of you for this Youth Summit. Look at my gray hair. I feel so excited to be able to stand in the room with all the young people, new faces.
Colleagues, let me say your focus on the impact of artificial intelligence on education is very, very timely, vital, and encouraging. It reflects your understanding of the transformative challenges and opportunities that AI presents for our future. So it is very heartening to see decision makers collaborating with all of you and recognizing the critical role of young people in shaping these discussions.
I'm truly inspired by the remarkable leadership and the spirit of cooperation that you have demonstrated through the IGF Youth Track over the past months. It has facilitated regional dialogue and learning, along with the Global Youth Track and the Global Youth Summit. It sends a powerful message to the world. Namely, uniting across the generations is essential.
So only together can we meaningfully address the reshaping of the foundations of our future. As we come together today to discuss the impact of AI on education, let's remember, first and foremost, that education is one of the fundamental human rights enshrined in the Universal Declaration of Human Rights. It is our shared responsibility to ensure that AI supports this fundamental right throughout the world rather than undermining it.
There are a number of good examples around us. For instance, in Morocco, AI is helping to reduce learning disparities in rural areas. In France, AI is helping visually impaired students to read by converting digital information into haptic feedback. In Brazil, AI-powered natural language processing is improving literacy. Also, in India, AI-driven voice-assisted education tools are fostering language inclusivity. In the U.K., AI is being used to convert complex documents into easy-to-read formats in 70 languages. But we have to recognize that there is a digital divide that hampers the potential of others to build on all those good practices, and one very striking figure: one third of the global population is not connected to the internet, not to mention AI accessibility.
So in this connection, the United Nations has been given a strong mandate to right this wrong. The recently adopted Global Digital Compact calls for action to forge international partnerships, to build AI capacity through education, and to expand access to models and training data. Today's summit provides a platform for intergenerational dialogue among stakeholders on AI, especially on AI training and AI education.
I believe your summit, your discussion, will greatly help identify a path towards an ethical digital future where we leverage AI to help guarantee that every individual has open access to quality education regardless of their background.
So I look forward to hearing from you more on your insightful actions. Thank you. Thank you very much.
>> IHITA GANGAVARAPU: Thank you, Mr. Li. I know he is one of the busiest people at this conference, and we appreciate his support for young people in the Internet Governance space. With this, I'll take over from Ms. Carol. I'm the coordinator of India IGF and the co-moderator for onsite participants.
We have Ms. Ines as well as Mr. Keith Andere from Kenya IGF and the Africa Youth IGF. This is to ensure we have seamless participation both virtually and onsite.
With this, we now move on to a very interesting intergenerational panel, and it gives me immense pleasure to introduce you to our panelists for today's session. We have Mr. Henri Verdier. We are also joined by Ms. Margaret Nyambura Ndung'u, Ministry of Information, Communications, and the Digital Economy, Government of Kenya. We have Ms. Phyo Thiri Lwin. We have Mr. Ahmad Khan, research and development engineer, Aramco, Saudi Arabia. We have Ms. Umut Pajaro Velasquez, the coordinator of Youth LACIGF and Youth IGF Colombia.
With this, I move to the first question, which is directed to our host. Mr. Ahmad Khan, let us start with the host country. What is your experience of how AI innovation is impacting education? What are the different countries in the world that are adopting AI in education, and what are the different opportunities that this space brings to all of us?
>> AHMAD KHAN: Actually, the thought came to my mind whether I should use ChatGPT to help me with this. On the one hand, it could dilute my thoughts and the perspectives that I want to share, and I think this is the dilemma we face with AI. I'll get back to this point. I would like to touch on some points to answer this topic, and we can expand more in the Q&A if there is interest.
In terms of innovations in AI, there are generally two categories of AI developments in education. The first is educator-focused or instructor-focused technologies. These are called instructionist approaches, which focus on supporting and automating grading and procedures to give feedback to students, for example.
The other approach is called the constructionist approach. The focus is on how students and learners can use technology to construct knowledge themselves, and that's more of a hands on approach to education. Both really provide value and should be involved in how we go about integrating AI technologies into education.
In terms of concerns, there are concerns with data use by technology companies. There are concerns with the decision-making capabilities of AI tools; by the way, this is a structural limitation inherent in large language models. My main concern, looking at the future of what education could be, is the possibility of a future where society becomes an end user of knowledge and not a creator of knowledge, right?
As a youth advocate, I would like to talk more about this and give a few characteristics of what education of the future could look like and what it should do. First, good education should instill a level of deep thinking and curiosity for knowledge. There are some new tools now built on large language models that use the Socratic method: they ask students questions, get them engaged, and let them reach answers on their own.
The idea here is that generations should be able to use AI to support their thinking and not replace their thinking.
Second, good educators should be a source of inspiration and guidance for students, and we should really focus on enabling and supporting this direction. The idea here is someone who is (lost audio). Someone who is inspired will use AI to help get the answer.
Third, good education would foster self-learning and empower lifelong learners who think collectively and not individually. On this, I think that's something we have to practice as we teach. One example I would think of is developing learning hubs that we can spread around the globe, where learners, policymakers, and technology developers can come together, discuss ideas, and get feedback on what worked and what didn't work. That helps with integration and really pushing it forward.
I'll close with one idea: the technology adoption curve, if you are familiar with this. You have your normal bell curve that starts with your pioneers, your Steve Wozniaks, and then you have your early adopters. It turns out this is the section that really drives the maturity and adoption of technology. Those are the people who will stand in line for hours to wait for the new iPhone. Those are the people who will tell you how good it is and what needs to improve. Then you have your average adopters, the majority adopters. Those are the practical people who want to think about: how will I use the technology? How will it help me without really taking much of my time? Then you have the late adopters, those who were either not able or not interested in adopting it early on.
In developing these learning hubs, we really want to think about how we can facilitate for more people to become early adopters and get them involved in the discussion and engaged early on. Then the effects can go from a local to a global level. Thank you very much.
>> IHITA GANGAVARAPU: Thank you so much. Now I will request the ambassador for digital affairs from the government of France. My question to you is are, France needs to address AI. How do you see AI impacting education? What principles should guide the development and implementation of AI in digital settings?
>> HENRI VERDIER: That's a question. Especially for five minutes. I will try to share some views. When we speak about AI and education, we speak about at least three different things. First, we think about how to use AI for education. Of course, we can dream of a world, for example, with more personalized education. If a model could tell me, don't do this mathematics exercise because of a test you did on this three years ago, let's fix that first and then you can continue, that's, of course, a dream.
Second, we have to think about education about AI. We need skills. A human with absolutely no AI literacy won't be as free as they could be, so we need to empower and to prepare. We need to prepare our children for the world of AI. The world is very complex: if you don't know how to do your job with AI, you will lose your job. We will have companions that always obey and serve us, but is it a good way to become a great human being, to live surrounded by these models?
Those are very different questions. That's important too. The youth are a vulnerable group. We have a duty to let them become citizens, free human beings. They have rights, and we have to pay much more attention to them as users of AI. I say this because everything we are doing in the field of AI regulation and governance matters for education.
Let's start with (lost audio). To train models, you need to respect the UNESCO principles, to avoid bias, and to pay attention. In order to do this, we need to conceive, because we don't have it yet, a way to audit AI models in a democratic way. It's not just a few experts coming to me and saying, oh, I did this, it's great. We need society to be able to have a conversation regarding the models.
For this we need to conceive new strategies. We need to avoid not just bias, but the lazy confirmation of inequalities. Today, as you know, if I ask an AI to show me a CEO, it will propose a 50-year-old white man, so, like me. Today the average CEO is like this, but it will change, and AI has to change too, to prepare.
This is not just about education, but it's a very important question. If we don't fix it, we will have trouble in education. Then there are questions that are more specifically educational. We need to save the spirit of public service. Education is a fundamental right. Companies can help us. We have research. We need to make sure it will remain a public service.
If we end up with a world where one giant company teaches every child in the world, we are lost. That's finished. We need a diversity of solutions respecting cultural diversities and the needs of every country. That's very important. We need to be sure that the principles of equity, equal access, and nondiscrimination will be preserved.
We in France think that for this we need infrastructure at some level. We cannot just rely on self-regulation by a few companies. We have to think about what a public service of educational AI would be. Those are some of the questions.
I will finish quickly because the five minutes are running fast. Probably the international community will have to conceive a framework for knowledge and education. We cannot let a few companies capture all the knowledge, with AI and with all the data. We live in a world that works because there is public science, because there is common knowledge, and you can innovate and create value because there is also a common knowledge.
So what is the common knowledge that we need to share regarding AI? I don't think we have had a strong conversation on this question yet. I conclude with this, and not just because we are at the IGF Youth Summit: we need to engage the youth from the beginning. I believe this deeply and frankly. Not just because I'm the father of two daughters, 18 and 20, but because we need new solutions. We need strong innovation and brave innovation. We need ideas coming out of the box. For this we need to engage the youth very early.
For example, to prepare the Paris AI Summit, we worked with IGF France and other organizations, and we organized some sessions and workshops. We asked young people: which kind of education do you dream about? I was very interested by the answers.
For example, all the answers were along the lines of: an AI in my pocket that I teach myself, where I know the model, and the model doesn't know me so well. So the youth who worked with us didn't want one big central AI model somewhere in the United States. They wanted a personal companion that they teach themselves. That was interesting because it was instinctive. They didn't really think about it, but for them a good future is one where AI works for me, in my pocket, with my prompts, and not a future where someone somewhere decides about my future. It was very, very interesting for me.
Thank you.
>> IHITA GANGAVARAPU: Thank you. You mentioned a lot of concerns that we, as a community, need to address and think about. That brings me to my next question, which is directed to Ms. Margaret. This is about cooperation and collaboration. What can policymakers do for AI to support and push education for everyone? And how can global collaboration help address the challenges and opportunities of AI in education?
>> MARGARET NYAMBURA NDUNG'U: Thank you. It's a great honor to join you at this 2024 IGF. I'm glad that I'm able to join you online, and I extend my gratitude to the host country, Saudi Arabia, for organizing this, and to the U.N. Secretariat.
Going into the question, the intersection of artificial intelligence and education presents both profound opportunities and pressing challenges. As we dive into this discussion, I would like to frame my remarks around the two key areas that you have asked about: what policymakers can do to ensure artificial intelligence supports education for all, and how the global community can support (lost audio)
Policymakers have a critical role in making sure that AI becomes a force multiplier for U.N. Sustainable Development Goal number four. To achieve this, there's a need to focus on accessibility, addressing the digital divide, and safeguarding equity, among others.
Artificial intelligence powered tools must be designed with inclusivity at their core, ensuring access for learners with diverse needs, including those who are differently abled and those in underserved communities. Governments should incentivize the development of open-source artificial intelligence tools and platforms that democratize access to quality educational content. This can be achieved by following universal design principles, and by using artificial intelligence to personalize learning experiences that adapt to individual needs, such as text-to-speech capabilities for visually impaired learners or speech recognition tools for those with hearing disabilities.
We also must focus on multilingual support through systems with language translation capabilities, especially for local and indigenous languages, to bridge the linguistic barriers for learners in underserved communities.
Again, we're talking of collaborative platforms, promoting the creation of open-source educational platforms that share expertise globally, making high-quality content available to all learners regardless of their circumstances. Finally, we are talking about the development of localized, culturally relevant content and learning materials that are in line with local communities.
Distinguished delegates, the second area I would like to focus on is the area of digital divide. Artificial intelligence potentials can only be harnessed if all learners have access to the necessary digital infrastructure. Policymakers must prioritize investments in affordable and reliable internet connectivity and digital devices, particularly in rural and marginalized areas.
Addressing the digital divide through artificial intelligence in education requires comprehensive strategies that ensure all learners have access to the digital infrastructure and tools necessary to benefit from artificial intelligence powered solutions. By focusing on infrastructure, affordability, and inclusivity, and by combining efforts, AI can be a transformative tool to overcome the digital divide and provide equitable educational opportunities for all learners.
As a country, we are doing a lot of infrastructure development, looking at affordability and at capacity building. Distinguished delegates, safeguarding equity is important in leveraging AI to back education for all. We must mitigate the risk of bias in AI algorithms that could exacerbate existing inequalities. Policymakers should ensure transparency in AI in education.
To safeguard equity for all, several strategies must be adopted to ensure AI supports inclusivity and does not in any way perpetuate or exacerbate existing inequalities. I know we all know that across the continent, and more so in Africa, these inequalities exist.
The third area that I would like to focus on is the element of global collaboration. Global collaboration is not merely a choice, but a necessity to harness AI's potential in education responsibly. By working together, governments can make sure AI contributes to education for all. The challenges and opportunities presented by AI in education are inherently global, and so must be our response. Collaborative efforts are essential in shaping an inclusive digital future, and that includes strengthening international partnerships. Governments, education institutions, private sector actors, and civil society must act together to develop shared standards and best practices for AI in education.
Organizations like the U.N. can provide platforms for dialogue and cooperation. With this, we bring in the education institutions, because we are talking of our young people, our youth, to ensure that they are fully integrated.
Governments, education institutions, the private sector, and civil society must come together to develop shared standards and best practices that ensure AI's ethical, equitable, and effective integration into education systems. Sharing knowledge and resources is one key area of global collaboration, and the transformative potential of AI, particularly in education and health, calls for equitable access.
Countries with advanced AI capabilities bear a responsibility to share their knowledge and innovations with those that are still developing their AI ecosystems. Some of the key strategies are promoting digital public goods, global research collaborations, capacity building, and knowledge transfer.
Once we do that and ensure that we are exchanging knowledge, and when we are talking about digital public goods, advanced AI nations can support the development and dissemination of open-source AI tools, data sets, and platforms as digital public goods, ensuring accessibility for all, which can facilitate the adoption of these tools in underserved areas.
As we discuss this, empowering youth participation is one of the core issues to consider. Empowering youth through global collaboration is critical to shaping an ethical and forward-looking AI ecosystem, by giving young people a seat at the table and ensuring their active engagement.
We can safeguard a future where AI serves humanity.
The last area is ethical AI development, which is equally critical in global collaboration. By embedding ethical principles into the design and implementation of AI systems, we can build global trust. This includes respecting cultural contexts and safeguarding data privacy and security, especially for vulnerable populations.
Global collaboration is critical to embedding ethical principles in the design, deployment, and use of AI systems. By leveraging shared values and diverse perspectives, the community can ensure AI development aligns with the principles of fairness, inclusivity, and respect for human rights, fostering global trust.
Thank you, moderator.
>> IHITA GANGAVARAPU: You captured the policymakers' concerns very well, along with certain ways, through collaboration, in which these concerns can be addressed. Now I hand it over to Ms. Carol to take it forward.
>> CAROL ROACH: Thank you very much. We have had a lot to digest just now, basically from a lot of us with gray hair. Now it's time to hear from the youth, especially based on what we've heard and the desire to really engage the youth, not just as figureheads, but to really get you involved and sitting at the table. So here's your question. You come from Myanmar and are very active in the regional youth initiatives. I see her online all the time. Not everyone has the same opportunities. What policies are necessary to prevent the digital divide from widening due to AI implementations in education?
>> PHYO THIRI LWIN: Thank you for introducing me. Speaking as a young person from a developing country, we are trying our best to catch up any way we can. I know that there is an academy trying to train the younger generation. I'm Gen Z anyway, but they are trying to educate and train them, the next generation, let's say, to learn more about AI.
Let's highlight that. Even with everything happening in many developing countries, we are trying our best to catch up and not miss any kind of opportunity. But there are also challenges, like accessing education, because developing AI-related infrastructure is quite expensive, especially for private sector academies, schools, or universities. The investment is quite challenging, for example setting up an AI hub in a developing country. That is one of the challenges that I see.
The next challenge is related to education: internet access has been a challenge for us in developing countries. Maybe it is related to geopolitical matters. Without the internet, I don't think we can continue learning with AI and empower young people to continue their education. So if we are talking about AI in education, internet access is also important for us.
So for preventing the digital divide, that's a question: preventing the digital divide from AI in education. I personally feel that, at the very least, we need access to the internet as a fundamental right. Then we can continue to shape the policy. Even if that is a challenge at the policymaker level, we can shape our society and community at the very ground level, like at a school. We can change our policy and allow students to use AI, but educators also need to be open-minded about using AI.
What I experience is that even though students want to use AI, some of the educators stay narrow-minded. They are concerned about cheating on assignments or something like that. I can see from their perspective why they are concerned about cheating on assignments using AI technology, but from a learning perspective, for someone like me, I'm not a native speaker of English, right? I need an AI assistant to revise my ideas into a better version, something like that. So I feel that at the school or university level, we can shape the ground policy, at least to grant students access to using AI too.
One concern might be the assessment system, because students can also cheat using AI technology, right? Maybe we need to change the exam system, and maybe the assessment system as well. That is what we can do, and even from the educator side, I personally feel it's better to change the assessment system.
I'm mentioning what we can do at the ground level, shaping our society by changing the policy at the school or university, right? But at the higher level of policymaking, there is always a big gap for the developing countries. The thing is, we can share our resources with each other, because we are all human beings. As one of the speakers said, we have to think about diversity and inclusion, and we can also share resources, at least by sharing information and sharing opportunities to learn about AI.
Let's say this speaker mentioned AI in France, right? So maybe we can give young people the opportunity to go and learn what is happening in a developed country by attending the AI Summit. That kind of opportunity also empowers them to give something back and initiate things at the local level. They need a chance to attend something like an AI summit, which might be beneficial for them to share best practices and learn how to be open-minded in using AI technology.
Yeah, that is how we can share resources among us, and also at the global level (silent).
We should not leave anyone behind in this era of AI evolution and revolution. We have to bring everyone along as much as we can, by shaping our educational policy at the ground level and also at the higher levels.
>> CAROL ROACH: Thank you. Very good. I think what we hear repeated here is that we need corporate responsibility in terms of helping countries to develop their AI, because it is an expensive endeavor, and we need them.
I totally agree with regard to the change of mindset, especially of the educators, and probably of parents as well, so that the youth have a say and don't just look at AI as a negative, but embrace the positive part. Yes, we do need that mindset change.
So we're going to hear now from Umut online. Can you hear us? Umut? You can hear? Okay. So while we sort out the technology, let's move on to our next question. Minister, this is a big one for you. After hearing all of what's been said, these important views both from the youth and from others of the older generation, what preoccupies your attention as a decisionmaker on AI's implications for education? It's a big one.
>> MINISTER AMAL: Thank you very much. It's a huge question, but I will start with a dream. My dream is to keep young people as far as possible from computers, because I think they already spend a lot of time connected and very close to machines. I think that AI should be used when it has real added value, and we have to discuss what added value means. For example, if you use AI to simulate a classroom, this is something you cannot do alone as a teacher. You need a tool to simulate this classroom with students, et cetera. In this situation, AI can provide some benefits.
I would like to say something just to set the scene. Back in the '80s there was already education-based AI. We called this computer-assisted education. It started many, many years ago, like 40 years ago. There were a lot of advances in mathematics, in science, et cetera.
Then we scaled up from this basic AI assistance to serious games, for example. Gaming became something very important in many, many situations, not only at school, but also in companies, et cetera, because it puts the person in a situation of learning.
Then we started thinking about more personalized experiences with AI, and we got to generative AI very recently, with ChatGPT in 2022, but generative AI started like five years before. This Gen AI has very interesting features, and I will come back to this, but also some bad features. For example, plagiarism is something the whole education system is disturbed by. Maybe ChatGPT can give me the answer. Let's focus on the positive aspects. For example, the voice. Using the voice in generative AI is very useful for education, and we started developing a lot of apps. For example, translation from one language to another, using approaches based on speech-to-text or text-to-speech. This is very useful, in particular in the Global South, where you have to face literacy challenges and multiple languages.
In one country you may have to deal with 15 or 20 different languages, and generative AI helps us shift very smoothly from one language to another. This, I think, is the real added value of generative AI in education.
Now, listening to you, I think there is a very huge problem. Again, if we focus on the Global South, it's about connection, connectivity. It's very difficult to have it everywhere. The infrastructure is also a huge problem if you deal with large language models. So we have to find new approaches. For example, in France there is a very nice group working on frugal AI, in the sense that we need to certify that the outputs of AI are very accurate while not needing huge amounts of data or very big, very large models.
Also, the access to platforms: if you put the apps on platforms, people should be able to access them. So I think maybe the crucial thing is the ethical aspects of AI and education. We have already some (inaudible)
If some people have access to learning with very sophisticated tools while others don't, you have this problem of accessibility and equity. There is also a need to maintain clarity about how AI systems function and make decisions. We have now moved from assisted systems to autonomous systems with AI. This autonomy allows some systems, educational systems too, for example, to make decisions about orientation, about access to university, et cetera. This is related also to accountability. We need to explain why a person gets this kind of access or not.
Another very important topic is data privacy and cognitive rights in education. Data is something very important to protect, in particular cognitive data. There is the possibility to trace all the cognitive data and to apply some knowledge to this cognitive data to carry out manipulation at large scale. Okay?
Finally, I would like to mention all the problems related to AI and data, like the problem of invisible women in the data. A lot of the time it is related to how we use data, and we rely on that data and these algorithms as well.
Just to summarize, there is the problem of data, there is the problem of infrastructure, and there is also the problem of design: how to design AI, in particular in the case of education.
>> CAROL ROACH: Thank you very much. We hear a lot of different terms for ethical AI, but I think I like (inaudible). We'll have to use that a little more. Umut, are you on? Here's your question. What are the youth in Latin America and the Caribbean thinking when it comes to who should be held accountable for decisions made by AI systems in educational environments? Take it away.
>> UMUT PAJARO VELASQUEZ: Good afternoon, good day or good evening, wherever you are. When it comes to deciding who is going to be held accountable for AI decisions in Latin America, we think this actually raises many problems related to internet governance. It's a problem that should be addressed by several stakeholders at the same time.
Deciding which stakeholders can be held accountable for decisions made by AI systems in educational environments requires careful consideration. First of all, the developers. AI developers have a responsibility to design and develop AI systems that are ethical, unbiased, and transparent.
They should ensure their systems are trained on diverse data sets, especially in Latin America where we have so many different cultures and languages. They should also ensure that AI systems, especially the ones we've been using for education, are designed to protect student privacy.
Educators have a role in this aspect of accountability also, and I'm an educator myself, so we talk about this a lot. A lot of people say that most educators have some resistance to AI. I think it's the opposite. Most educators don't know exactly what their responsibility is in all this process, so they don't know exactly how to use it, and they feel more afraid because they don't know. It is not that they are actually against the use of the technology.
So educators have to play a crucial role in implementing AI systems in the classroom. Most educators need to be trained on how to use AI effectively and ethically, and they should be involved in the decision making process regarding how AI is used in schools.
That means educators should also be involved in the implementation processes. Not only being the ones that receive training to implement those tools, but also the ones that decide how the tools are going to be implemented and regulated when they use them inside the classroom.
The other group that can be considered important here is the policymakers. Policymakers have a responsibility to society to create the regulations and guidelines that govern the use of AI in education. These policies should address privacy, algorithmic bias, and, obviously, accountability to students, because they are also part of the process.
We can't have real accountability without including students in this conversation. Without them it's impossible to fully address the complexity of having AI education tools and making them accountable, because it's not only the developers that are going to be accountable for them. We need to see students in the whole process, not only in the design stage, but through deployment and implementation.
Students themselves should be empowered to understand how AI is being used in their education and should have a voice in the decision making processes. They should be educated about the potential benefits and risks of AI and encouraged to critically evaluate information generated by AI systems.
So students in this case need not only proper education to know how to use the tools, but also some critical thinking about how and when they use the tools, because most of the students are already using them. We have to recognize that accountability is a really complex topic; in the five minutes we have, there is no time to cover everything about accountability, but what we can say is that accountability should be shared among all the stakeholders. It requires a collaborative approach that prioritizes ethical considerations, responsibilities, and the well being of the student, because the student is the main focus of the educational system.
Before I forget, another actor to take into account is academia. Academia needs to understand and investigate how AI is affecting education, not only in the practices inside the classroom but outside it, and how it is changing the dynamics of how students learn and improve their abilities, whether inside the classroom or in their daily lives.
Academia needs to understand which aspects of development are being affected by the use of artificial intelligence tools in the classroom. So that is another stakeholder that should be taken into account. Then we can have AI in education that is more accountable: actually transparent and fair, with a human side, respecting the privacy of students, giving importance to equity, and child friendly.
So that's my approach to it. Thank you.
>> CAROL ROACH: Thank you very much. We can see that the number of critical stakeholders is growing here. We have government, the technology community, and of course, academia. After listening to all the talks, I have a question running around in my head, but I have to leave it running for one more speaker, and then I'll put it out to you. Mr. Khalid. Now I get to put out my running question. I'll be honest, I have not used one single AI tool. I'll tell you why. Can you still hear me? It's because of what was said at the beginning. Am I going to be enhancing what I do or diluting it? Is it going to be me or 1,000 other people? I'm going to put it out to you who have used AI. How do you feel ethically, personally, when you use an AI tool to help you enhance your work? Do you think that you're really enhancing it? What do you do to keep your ethical compass? I'm throwing that question out to you in the audience.
>> HENRI VERDIER: If you receive advertising on the web or at work, that's done with AI. If you receive information in your work, the feed, that's AI. We don't always notice that AI is everywhere, and maybe that's the most important point, because we cannot confront and contest and discuss it democratically: the decisions are made, and we don't even know that there are decisions.
>> CAROL ROACH: I agree that sometimes we don't know, but I'm pointing out that sometimes I do know. When you look at Zoom, you have the AI apps you can use. WhatsApp. Everywhere you turn. Right now, I don't know if they're trying to be ethical, but you have to click a button to say, yes, I want to use it.
So I'm looking at the point where I click and say, Yes, I want to use it. But thank you, you're quite right.
>> My name is Gerard James. I do a lot of work with AI, and a lot of the concerns or comments that have been made, I have heard before, and I have them as well. I think what you are discussing there is consensual data mining and consensual activity with AI. When I use it myself, my hesitation right now, which is something I would love to hear the panel discuss, is that I'm wondering who is responding to me, in the sense that it's a large language model. So whose language? Whose background? Whose perspective is diluting my creativity? Whose background and perspective is diluting my output? Because I really value the fact that I come from East Africa. I am well trained and well educated in all sorts of things in the West, but I am applying that education through this perspective and this lens. When I use AI tools, I often notice that I almost have to give the AI my philosophy first, and I have to write out logical prompts that give "if then" statements: if I believe this, then this is the outcome I would like to see.
I almost have to deprogram the AI from the language model that it currently exists in. So I would love to hear, yeah, Ahmad, I think you are ready to go on this, beyond how it looks regulatory wise: who are the people? Do you see Global South members being the next Steve Jobs of AI, the big innovators in AI, or is it going to follow the same path, where the foreign delegates or the corporations come in and give you these language models, and then you have to decide what's true or not?
>> CAROL ROACH: Thank you.
>> AHMAD KHAN: Maybe first I'll start with the concern that you raised, and this goes to what the ambassador said. Really, I think we blew it with social media. Why is it that someone on a different continent gets to decide what I watch on my phone for 24 hours? I'm not mentioning any names, Mark Zuckerberg.
For AI, the idea of how the technology works: basically, it's a large language model, as in it takes data and it learns what the data says. You take a bunch of information from the internet, and what it learns on average is the average content you see. So without any further training, this is what you get: the average response you would get on the internet.
Then there is the fine tuning that happens afterwards, to make it more supportive, give it more information, this and that, and then it learns some concepts so it can follow the direction you give it, right?
How it happens now is that the different companies control for these things and ensure it's trustworthy, in a sense. They try to make their own best judgment in terms of how to use it, and then we become end users. The question is, again, back to the point: how can we shape AI so that I can tell it what I want and it serves me and not the company? This is really what we want to focus on.
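The two-stage picture the speaker describes, a model that first absorbs the "average" of web-scale data and is then steered by a smaller fine-tuning set, can be sketched with a toy counting model. Everything here is illustrative: the corpus, the prompts, and the weighting scheme are invented for the sketch and do not describe any real system.

```python
from collections import Counter

def pretrain(corpus):
    """'Pretraining': for each prompt, tally every response seen in the
    raw corpus, so the most frequent ('average') one dominates."""
    table = {}
    for prompt, response in corpus:
        table.setdefault(prompt, Counter())[response] += 1
    return table

def fine_tune(table, preferences, weight=10):
    """'Fine-tuning': a small, curated set of examples gets extra weight,
    steering the model away from the raw average."""
    for prompt, response in preferences:
        table.setdefault(prompt, Counter())[response] += weight
    return table

def generate(table, prompt):
    """Answer with the highest-weighted response seen for the prompt."""
    if prompt not in table:
        return "(no answer learned)"
    return table[prompt].most_common(1)[0][0]

# Raw internet data: the most common answer wins, helpful or not.
corpus = [("capital of France?", "Paris"),
          ("capital of France?", "Paris"),
          ("how do I learn AI?", "just google it"),
          ("how do I learn AI?", "just google it"),
          ("how do I learn AI?", "start with linear algebra")]

model = pretrain(corpus)
# Before fine-tuning, the model returns the average internet response.
assert generate(model, "how do I learn AI?") == "just google it"

# A small fine-tuning set overrides the raw frequency.
model = fine_tune(model, [("how do I learn AI?", "start with linear algebra")])
assert generate(model, "how do I learn AI?") == "start with linear algebra"
```

The point of the sketch is the one the panel is debating: whoever chooses the fine-tuning set, here the company, decides which answer the end user sees.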
I think this really is an overlap between the capabilities of the technology itself and then what we do with it. I'll leave the floor if there are any more comments, if anyone wants to add more to it.
>> CAROL ROACH: All right. Next.
>> Can you hear me okay? Brilliant. Thank you so much for that, Ahmad. My experience as a young person utilizing AI: I'm a consultant, my job is quite professional, and I usually struggle to get it to give me an objective stance, in the sense that whatever prompts I give it, it ends up speaking to what I'm feeding it, which is not necessarily what I want, right? I want this thing to challenge me, to give me some sort of objective truth. I guess my question to you is, do you think an objective truth exists in the sense of AI, or is it always going to be manipulated to a certain extent by its users and the community that utilizes AI technologies?
>> AHMAD KHAN: I think, again, this is a bigger question of what's right and what's wrong overall, right? AI will give you the answer that it has learned. In that sense it's always objective. If you tell it "I want you to challenge me," then it will try to challenge you. This is what it's good at.
If you use it for what it's built for, it's great, but if you use it to try to extend it further than what it should do, then it will fail. Then we say the company is responsible for it.
If you use a knife that's supposed to cut things and it cuts your finger, maybe you didn't use it right, or maybe it was too sharp. We have to really know what the limits of AI are before we try to use it for all intents and purposes, right?
In terms of how we can actually use it to get logical and objective answers, there are tools now. Large language models learn intuition from data. This is what they get. If anyone is familiar with system one and system two thinking: system one is the fast thinking process of intuition. The model just learns intuition, but it doesn't have structure for logic.
There are now hybrid models being developed that can actually ensure something makes sense logically, reasonably, objectively. That's something we can incorporate into developing tools, but I think that will take longer. Maybe hold off on asking it what the meaning of life is until we get that answer.
>> HENRI VERDIER: A very brief comment regarding your question. The current models were made by companies to sell something so they try to be (silent)
They didn't always answer. Sometimes they said, are you sure this is your question? Do you know why you are asking this question? (Silent).
There will always be answers. When they don't know, they invent. When they don't invent, they hallucinate. They will always agree to answer. For me, that's my worst concern.
>> IHITA GANGAVARAPU: Do we have any comments or questions online? I see that we have a comment or question from Lily. If you would like to speak, please.
>> Good morning from here. I'm excited to join the conversation. One of the things I wanted to say early on, Madam Carol actually clarified: in using our emails or calendars there is a subtle use of AI. Someone says it enhances productivity; indirectly, we're all using it. I'm somebody coming from the angle and perspective of asking whether it adds any efficiency or is effective for me. But secondly, I'm a Ph.D. researcher in privacy, so my concerns actually go towards the idea of privacy, and I share the sentiment of the speaker who took the microphone first: whose perspective is it maybe spotlighting for me?
In that respect, one of the things that we have started to look at is what these companies are doing. For example, ChatGPT. I have discussed so much about my dissertation with ChatGPT that when I ask a question, it brings elements of my past work into the response it gives me.
One of the things they are doing is that when you start the conversation, you can toggle a button and say, hey, don't train with my information, don't train using my data. That's one step for people to say, hey, I'm looking after my privacy, and maybe I don't want this to be used in training this large model for others.
But aside from what these companies are doing, we are speaking about responsibility. For those of us who want to be private and secure while using AI tools, we should also start thinking about how we know and understand what these tools are, and look first at online security. Are you uploading Social Security numbers or passwords that can end up in the training of the large language models?
Remember what the minister said: this isn't natural learning that they're using. These tools act like the human brain, and that brings in the neural part of AI.
We are all using the tools, but we have to take the time to learn for ourselves and make sure we are taking proactive approaches to protect ourselves while these companies, policies, and everything else also fall into place. From my point of view, AI supports me, but I also look out for privacy because it is huge. If you don't think of it for yourself, even while the companies all play a role, your information will probably be used in training these models.
>> CAROL ROACH: Thank you very much. That's very helpful. The next speaker online. We have quite a queue here, so please, we are asking everybody to stick to the two minutes. In fact, maybe it should be a one minute intervention so that other persons can have a chance to get involved. Thank you.
>> Hi, everyone. Hello.
>> CAROL ROACH: We hear you.
>> Hello?
>> CAROL ROACH: Can you hear us? We can hear you. Okay. We'll go back to
>> Hello. Thank you so much for the insights. I have two questions. The first one: we see a very big gap in the conversation when it comes to AI, between the opportunity that AI will bring to the economy for everyone, including in the Global South, and the protection side of AI and technology. How can we guarantee that women and girls have a seat in that conversation, where the systems are aware of the concerns and there are safety measures, but this does not confine the opportunity space where we can have more girls and women shaping the industry?
Then the second question is related to bias and AI. We know that there are biases in the data itself. AI inherited our history, our civilization. There are biases against women and girls, and this is what we're also receiving in AI output. But there is also the algorithmic bias that comes from those creating AI, mostly men, creating software for other men.
Then the last part is the user bias: those who already have gender biases and who are asking the wrong questions. Who is responsible for fixing AI, and how can we make sure it can work for women and girls? Thank you.
>> CAROL ROACH: (Inaudible). We'll go back to the online speaker after your response to the question. Do you want to all right. We'll go to the online.
>> UMUT PAJARO VELASQUEZ: I would like to talk about the last part of the question, on gender bias in AI, because that's related to my work. So, yeah, one of the things is that we can't actually blame AI for being gender biased when we create the data that feeds the AI systems. The bias exists in the community, so we need to change that across the entire society in order to have less gender bias in artificial intelligence.
We can actually improve the language sometimes when we are talking about language models. One thing we can do, and it's actually what I'm trying to do within my own language, Spanish, is to improve those models so that the representation in the output people receive is more equal when it comes to how men and women are represented.
It's hard when you have languages that encode gender so deeply, and a cultural background that is really, really gendered. It's not going to be easy to tackle the gender bias, but we should try, and there are many people trying to do it at the moment.
So one of the things that I say to people who want to improve the models with respect to gender bias in AI is to actually start feeding the data with more content about women, and other genders too: how they represent themselves and what they do in everyday things. That would help the different AI models give a less gendered response to the prompts they receive.
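The speaker's point, that the balance of the training data, not the model alone, drives gendered output, can be illustrated with a minimal audit of a toy corpus. The word lists and example sentences below are invented for the sketch; a real audit would use far richer term lists and context-aware matching.

```python
from collections import Counter

def gender_mentions(corpus, terms):
    """Count how often each group's terms appear in a text corpus.
    A model trained on this corpus would absorb these proportions."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.lower().split()
        for group, group_terms in terms.items():
            counts[group] += sum(words.count(t) for t in group_terms)
    return counts

# Illustrative term lists, not a serious lexicon.
terms = {"women": ["she", "her", "woman", "women"],
         "men": ["he", "his", "man", "men"]}

corpus = ["he is a doctor", "he runs the company",
          "he wrote the paper", "she is a nurse"]

before = gender_mentions(corpus, terms)
assert before["men"] == 3 and before["women"] == 1  # skewed source data

# 'Feeding the data' with counter-examples rebalances what the
# model would otherwise learn from.
corpus += ["she is a doctor", "she runs the company"]
after = gender_mentions(corpus, terms)
assert after["women"] == 3 and after["men"] == 3
```

Counting surface mentions like this only approximates representation, but it shows why adding data about under-represented groups changes the distribution a model inherits.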
Yeah, that's what I wanted to say.
>> CAROL ROACH: Thank you very much. Very, very helpful. You're quite right, it's almost a circular argument: AI is feeding on what we've fed it, feeding from our biases, so we need to look at how to address that. Thank you very much.
We're going to ask you to take one minute, and we're going to ask at this time that nobody else join the queue. Thank you.
>> (Inaudible). When we are asking the youth to be critical, I'm not sure if we're giving them a good environment for it. When we all have (inaudible)
Are we giving enough information about this? Even at my German university they do not really specify (inaudible) and how to use it. I think we should really think about how we work on this so that we actually inform young people and not just tell them to be early adopters.
>> CAROL ROACH: Thank you very much. That's a very good point. We now have five minutes to wrap up. Or four minutes. Let's make it quick. 30 seconds.
>> So this year we had the African IGF last November, and the topic was emerging technologies and youth participation, amplifying your voices. One recommendation was to establish advisory and participative platforms that involve youth in policy making and governance at regional and national levels. What kind of methodologies? Oftentimes there's an attitude in all these conversations. What kind of methodology or strategy should we put in place so that they are all inclusive? Also, a quick one: who are we benchmarking in terms of all these technologies or policies? Who should we learn from while we benchmark, and who are we learning from?
Thank you very much.
>> CAROL ROACH: Very good point. I would encourage you to join the working group on youth in the IGF. We formed that group to try to help with some of the things that you said, so your voice can be heard actively. Next.
>> Dana Cramer from the youth IGF Canada. I'm curious about how we as students can advocate for AI adoption in our education. For context, I'm a Ph.D. candidate in Toronto, Canada, and my university now has sweeping regulations on AI usage, which really impacts how you can become a first mover with AI programs and then, by being a first mover, have the experience to enter, for example, that seat at the governance table.
The regulations at my university cover not just ChatGPT but also dissemination programs. I'm wondering if the panel could speak to strategies for advocating for youth being able to use AI in our education, so that we can then be partners and stakeholders at governance tables too. Thank you.
>> CAROL ROACH: I just want to flip the switch a bit here. People have been asking the older generation how to change. You said you would like to see a change, but what are your ideas towards change? How can you ensure that I can use ChatGPT to produce my paper without you worrying? What are the guardrails you are suggesting? I'm just throwing the question out. Sorry.
>> The perfect introduction for my question or actually my comment. I am part of the responsible technology hub, a youth led nonprofit that is actually working on this question specifically.
So one thing we do is create spaces that are intergenerational, in the sense that we're not only giving young people the mic; we let them actually develop AI.
So instead of asking them, and serving them, what do you want, we ask them: give us a solution, and then we will go through the problems that you are actually seeing. That way young people are actually taken seriously. They feel respected and on the same level. The kinds of discussions we're having are way deeper, way more solution oriented, and way more inclusive of young people, at least for us in Germany. One aspect that I really wanted to highlight, because I feel like it's missing and the minister actually brought it up in regard to ethical aspects: we do not talk about click workers. AI has to be developed by labeling data, and that data is being labeled by people who are super underprivileged, mostly from the Global South, and not paid well enough.
If we talk about including people, and young people, in this aspect, we need to include those who are exploited in developing it. Maybe that's an open question for later on as well: how can we include these young people? This is the most important part of my work, at least. Thank you.
>> CAROL ROACH: Insightful. 30 seconds.
>> I'm a youth ambassador. Personally, I major in (inaudible). What has been shown is that generative AI can give misunderstandings about STEM topics. How can a youth scientific researcher or STEM student make sure, or judge, that a response is correct, and who should actually be responsible for the false information that's been given by the AI?
>> CAROL ROACH: I'm sorry, my two speakers, we have been given the signal to end. I can't even do a wrap up. I'm very sorry about that. We cannot take any more speakers. However, I think we started a very good conversation. Now the point is to take it past conversation; now we want to take it to action. I think sometimes for youth it's, I'm going to ask the older generation: this is my problem, how are you going to fix it? Now we're going to flip it around and say: I have a problem with you guys, how are we going to fix it? Keep that in mind, please. Thank you very much for your participation.
>> AHMAD KHAN: Also, one good way is: I have a solution, what do you think about it? Instead of, how do you fix my problem? This is a solution; what do you think about it? That's the learning that comes with the solution, and then you see what the policymakers think about it.
>> CAROL ROACH: That's a good way of putting it. Thank you. Thank you, everybody. Give yourselves a good round of applause. Thank you very much. Thank you, online participants. Thank you very much.