IGF 2023 – Day 1 – WS #495 Next-Gen Education: Harnessing Generative AI – RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> MODERATOR:  Hi, everyone.  Good evening and a warm welcome to those online and joining us in the room.  I am the Coordinator of IGF Youth and a Board Member of the International Telecommunication Union, ITU.  I will be the on-site moderator today for the session titled Next-Gen Education: Harnessing Generative AI.

In education, one particular branch of AI called generative AI is getting a lot of attention for its potential to revolutionize how teaching and learning practices are carried out.

Now, this particular session is something I very closely relate to, having recently graduated from my master's and using generative AI tools in academic and professional settings.  I see and definitely admire technology for its applications and potential, but I do have personally some concerns around the use of generative AI in an educational context.  Gen AI is capable of generating original content, promising personalized learning experiences, and aims to improve educational outcomes.  In today's inter‑generational and diverse roundtable discussion, we aim to answer three policy questions.  Of course, in addition to understanding the importance of digital literacy, critical thinking, and ethical decision‑making in the context of generative AI for youth.

What are the questions?  A, what policies should be in place to ensure the responsible and ethical use of AI technologies in educational settings?

B, how can policymakers collaborate with relevant stakeholders to ensure that teaching and learning processes are enhanced while sustaining creativity, critical thinking, and problem-solving?

C, how can policymakers ensure that the use of generative AI technology by youth in education is inclusive, age appropriate, and aligned with their developmental needs and abilities?

Before we begin, I would like to introduce the three speakers for today's discussion.  Dunola, the Digital Inclusion Programme Officer at the United Nations International Telecommunication Union, who is joining us remotely; Osei, who is a representative of the Ghana Youth IGF, representing the technical community; and Connie here, hi, Connie, an ITU Generation Connect Youth Envoy from the Asia-Pacific region.

For today's discussion, we have Gabriell, Karsan and Adisa joining remotely as online moderators from the technical community, and Purnima Tiwari for this session.

We have an interesting format for this particular session.  We have made sure that it is extremely interactive, a roundtable with a little bit of a twist.  There are going to be multiple interventions from the audience, and I ask for your active participation, of course keeping the time in mind, because we expect really crisp responses in your interventions today.

So, I would like to now, you know, bring in Dunola, but actually before that, I would like to ask the audience: if somebody is interested, I would ask you to line up, please.  What comes to your mind when you hear generative AI?  What do you think is its role in education?  Your answers will set some context for today's discussion.

   >> AUDIENCE MEMBER:  Gen AI is a tool that's making everything easy and more accessible.  It is a new way; if I see it from a layman's perspective, the impact that it has is making everything accessible for everyone.  For example, be it making music, or, I am not a photo editor, but using Gen AI I can edit very quickly, and it's very useful.  This is one aspect of Gen AI which is impacting the public at large and is worthy of discussion.

   >> MODERATOR:  Thank you so much.  I also request you to please tell us your name and maybe a stakeholder affiliation?  Yes, please.

   >> AUDIENCE MEMBER:  Hi, Jobisi, ISOC youth ambassador.  One of the concerns related to generative AI that I hear a lot about is plagiarism, and we see knee-jerk reactions of people banning generative AI in educational institutions.  That is not really enforceable to that degree.  I would like for the discussion to go in that direction: how we can address the problem of plagiarism and integrate generative AI into the education system without worrying about that.  Thank you.

   >> MODERATOR:  Thank you so much.  If there are no other interventions, I would now like to turn to Dunola.  I hope you can hear me.  What is your understanding of the responsible and ethical use of generative AI technologies in educational settings, and what do you think are the gaps that need to be prioritized and addressed?

>> DUNOLA OLADAPO:  Thank you so much.  I hope I'm audible.  Thank you for letting me share today.  I'll talk about the ITU's work and Generation Connect and what we're doing about it, and also our collaboration with AI for Good.  After that, I'll have to leave a bit early, but I'll leave you with some links in case any of you are interested in following up.

So the ITU is the UN agency for ICTs.  We're a specialized agency.  With our membership and our partners, we're fully focused on achieving universal meaningful connectivity for all, particularly the 2.6 billion people who are currently offline with no access to the Internet.

So, digital empowerment of young people is a major component of this vision, and our youth strategy is fully aligned with the Youth 2030 Strategy of the UN System.  As we all know, and of course we're talking about AI, digital is no longer something that is elective; it's a necessity.  As we saw from COVID-19, which was a defining moment in our digital history, millions of students' educations were put on hold at that time and the global economy suffered, but young people, especially those in the most vulnerable situations, suffered disproportionately during the lockdowns due to lack of digital access.

So, young people are digital natives and remain the driving force of connectivity: 75% of young people are online compared to 65% of the rest of the world population, but this is not uniform across all regions.  For example, 98% of youth in Europe have access to the Internet, compared to just 55% in Africa, where most of the LDCs are.  This session is about how the next generation is harnessing generative AI, but how can we even have this conversation when we know that so many young people and children are completely left out of the digital ecosystem and don't even have access to the Internet?

So, to be able to influence and benefit from emerging technology such as AI, or to have an impact on education, young people need to be connected in the first place.  Fundamentally, young people, especially those in vulnerable situations, need meaningful connectivity, safe Internet access, sustainable digital infrastructure, affordable devices, and low data costs, and all youth should have access to develop digital skills and engage in the digital development dialogue.

Through Generation Connect, the flagship initiative of the ITU Youth Strategy, the ITU is directly trying to engage young people, encourage their participation as equal partners alongside the leaders of today's digital change, and empower them with the skills and opportunities to advance their vision of a connected future.

So, I'm very proud as well that this session is being led by some of our Generation Connect Youth Envoys and Board Members, and I can see many familiar faces.  It's a true testament to youth leadership in this space and in the digital development dialogue.

So, to kind of round off, I would like to share, with all of that background, some of the concrete work that Generation Connect has done this year in partnership with AI for Good.

In case you haven't heard about it, AI for Good is the United Nations' year-round digital platform where AI innovators and problem owners can learn, build, connect, and identify practical solutions to advance the UN SDGs.

So if you've never heard of AI for Good, I encourage everyone to join the platform, for free of course, and you can access so much information, network, and meet people who are also interested in this space and also linked with the UN.  Our ongoing partnership with AI for Good is really around amplifying youth voices in the global discourse on AI.  I want to share briefly some of the things that we've done in case it might be of interest to some of you.

So, we created this Generation Connect AI for Good youth consultation group, and the group helped co-design the now-live global survey on AI and youth, which is available in all six UN official languages.  I'll drop the link in the chat, and I hope all of you who haven't completed the survey will please do so, because it's helping inform our work and helping us see how young people are interacting with AI.  There are a lot of questions around Chat GPT, education, and generative AI in the education space, so it's very relevant for this session, and I hope as many of you as possible, when you complete it, leave your details at the end of the survey so we can reach out to you and you can, of course, share more detail if you have strong thoughts on the topic.

We also had an AI in Education webinar earlier this year on the same topic; after that we had a podcast episode on Generation Connect.  Building on that effort, the Government of Japan sponsored one of the members of our consultation group to be at the IGF right now, and he gave his speech yesterday on the high-level panel.  These are some of the things that we're trying to do with the group, and we're trying to grow the group as well to ensure that young people have a real say in this overall global AI agenda dialogue.

So to round off, to be able to harness AI for good in educational settings and beyond, all stakeholders, including youth and children, have to have a say in the global AI agenda dialogue.  We can't afford to leave anyone behind.  In the current global context, with emergencies and a worsening climate crisis, the need for inclusive and innovative digital solutions has never been more urgent, so together with diverse youth stakeholders, we must all try to co-design solutions to close the global digital divide, so that connectivity for all is achieved and the power of technology can be properly harnessed for a connected, prosperous, and sustainable digital future for all.

I wish you all a successful session.  I'm honored to have shared this space with you today.  I'll leave the links to the things that I mentioned in the chat, and of course if you want to get in contact with Generation Connect or with me, there are so many Generation Connect people I can see on the call, so you can reach out to any of them.  Or you can just email me directly.  Thank you so much, Ihita, Connie, and everyone, for this session.  I wish you a great session.

   >> IHITA GANGAVARAPU:  Thank you for taking us through the importance of the ITU's work and the importance of young people's voices in how we want to perceive AI and generative AI moving forward, especially in educational settings.  Thank you so much for joining us.  I would now like to request Osei.  From your technical perspective, can you share your ideas on the responsible and ethical use of generative AI in educational settings and the role of algorithms, and what do you think are the gaps that need to be addressed?

   >> OSEI MANU KAGYAH:  Thank you very much.  It's been an insightful conversation so far.  A wonderful presentation from Dunola, and my greetings from Ghana to everyone.  Good morning, good afternoon, good day wherever you are joining us from.  I think it's an all-important conversation, one which needs to be looked at.

I think we all know how generative AI, or AI, has democratized education and made education more accessible, and one thing is that we are all looking at the promises, the goodies, all of those things, but we're not looking at the negative side.

One is how it has become a friend yet a foe.  And also how the deployment and development of these tools is always constructed around White, Western, and wealthy perspectives.  The training datasets don't take into account certain minority groups, and that has become something we need to look at.

One of the major gaps is how industry is always racing ahead of academia.  It seems academia is taking a sprint while industry is on a wild chase.  We need to find a mutual or stable line where it fits, and find common ground in deployment and development.

Issues of safety and accountability all do come to the fore, but in deployment, in these guardrails, and in training these datasets, we need to take a human-centric approach.  That's my view of what most countries need to do: take a human-centered approach.  We've seen other states or other organizations taking an approach which leaves much to be desired, but a human-centric approach takes into account safety, takes into account trust, and it captures everything.

I know the conversation will continue, so I'll let the other speakers come in and we'll start the discussion.  That's a brief remark from me.

   >> IHITA GANGAVARAPU:  Thank you so much for your remarks.  In the interest of time, we'll jump to Connie and take her inputs as a young person and student.  What is your understanding of the same, and what do you see as some of the major benefits and concerns around generative AI?

   >> CONNIE MAN HEI SIU:  Thank you, Ihita.  So, generative AI technologies in educational settings, especially from the perspective of young students like us, are truly exciting, and they hold immense promise.  This field of innovation has the potential to completely reshape how we approach teaching and how we approach learning, but as with any groundbreaking development, there is always a need to take a balanced view, considering both the incredible benefits and also the important concerns that come with it.

So when we talk about generative AI in education, one of the most striking things is the potential for personalization.  I mean, as a 22-year-old biomedical engineering student myself, I have felt the difference in how I grasp certain concepts and the unique ways I prefer to learn, and generative AI can be a real game changer here, because it's like a virtual tutor that tailors everything to your specific needs by generating custom content, quizzes, and exercises, ensuring that your learning journey matches your individual pace and preferences perfectly.

And generative AI can also knock down some of the longstanding barriers in education.  For example, language-wise, it can break language barriers by translating lectures into different languages, making learning more inclusive and accessible globally, and also disability-wise, for those with hearing or visual impairments, this technology can generate real-time transcripts or provide audio descriptions of notes, making learning an enriching experience for everyone.

Now, I'm also painfully aware of the student-life hustle, and I'm sure many of us students find ourselves juggling various roles, including being student assistants and peer mentors, so generative AI can be very helpful here as well, because it can handle the tedious administrative tasks of generating schedules and answering questions and inquiries.  It means more time for us to focus on studies, a better study/work/life balance, maybe more time to procrastinate and connect with our peers, thereby reducing the stress of multitasking.

However, as promising as all of this sounds, it's important to address the concerns as well.  One significant issue is the misuse of this technology.  We've seen that some individuals have used generative AI to create misinformation, leading to fake research papers and misleading information, and this misinformation can harm students who unknowingly rely on it for their studies, which will not be good for anyone's education moving forward.

Another concern is the risk of becoming too dependent on AI.  While it is convenient, a bit too convenient really, to use generative AI for assignments and projects, overreliance on it could hinder creativity and critical thinking, which are crucial skills, so it's essential for students to be aware and strike a balance between using AI for convenience and nurturing their own abilities.

And privacy is, of course, another big issue.  It's not just limited to young people and students.  Generative AI often needs access to lots and lots of data, including personal information, which could lead to breaches and misuse that potentially compromise our privacy and security, so it's important to stay vigilant about how our personal information is handled, and at the same time we should learn how to protect our information as best we can when using generative AI.  And lastly, one thing that I can think of is the issue of bias.  AI systems can inherit biases present in their training data, which perpetuates stereotypes and discrimination, and that is actually a serious ethical concern.  So as students, it is on us to be vigilant to identify and address bias in AI systems to ensure fair and inclusive learning.

So, overall, yes, generative AI is a powerful tool in education, and it shouldn't be regarded as either inherently good or bad.  To make the most of its benefits while minimizing the concerns, we need responsible and ethical usage, because ultimately the aim is to enhance the learning experience, safeguard privacy, nurture critical thinking, and promote fairness and inclusivity in education.  Back to you, Ihita.

   >> IHITA GANGAVARAPU:  Great points, Connie.  Thank you so much for taking us through your experience with the benefits as well as some major concerns around critical thinking, bias, and a lot more.

So now I'd like to open up the floor for audience interventions, where we would like your inputs on the policy question, which is: what policies should be in place to ensure the responsible and ethical use of generative AI technologies in educational settings?  Please, the floor is open for your remarks.  I would also request Adisa, in case there are any online interventions, to let me know.

   >> AUDIENCE MEMBER:  Thank you.  This is Anenus Apisiara, youth ambassador, for the record.  When it comes to AI regulation, everybody tends to think about the policy and regulation that we need to put in place around AI.  I think the right regulation is the one that does not hinder innovation and also allows good data transfer.

When we think about regulating AI, we should also think about the movement of data.  As Connie mentioned, having a good AI also means having a lot of data behind it.  So good regulation would be one that really considers data transfer and also does not hinder innovation.  That's it from me.  Thank you.

   >> IHITA GANGAVARAPU:  Thank you so much.  So good regulation is something that does not hinder innovation.  Thank you.  Are there any more interventions on what policies do you feel are the need around this topic?

   >> AUDIENCE MEMBER:  Hi.  Brenev, I am a student of law.  I've been thinking about the use of generative AI, and before we go into the policy aspects, I've been thinking there is a need for more experimenting on the use cases of generative AI in academic settings.  For example, I found it very useful in doing certain analyses.  So, it's not about just copying your work or plagiarizing, but using it for critical thinking at scale.  You have designed something, you have gotten the right model, and now you are giving your work to a machine to complete it for you.  I think that should be permitted.  We do utilize different machines and software for analysis, especially when we're doing empirical work, and so the fact that most use of Gen AI in academic circles is seen through the lens of plagiarism is, in my view, not very good, and we need more experimenting and open discussions.  This is where PhD guides and guides for master's students should be having more open discussions with the larger teams and ethics committees of the relevant universities, so that more experimenting can be done, flaws can be understood, and these challenges can be resolved.  I hope this helps.

   >> IHITA GANGAVARAPU:  Thank you so much for sharing those points.  Are there any more policy interventions that are required?  Yes, please, please?

   >> AUDIENCE MEMBER:  Patan from Finland, looking at this from a teacher's perspective.  I'm not a teacher now, but I have been.  Basically, there is no point in regulating it out.  Okay, I'm looking at the university level; I'm not saying the lower levels, but maybe there too.  I remember when pocket calculators came about and it was a horror, you know, people will cheat on math exams and so on, but all of these techniques will be used anyway.  Exams should be designed to deal with the reality of what people will do anyway.  I have run exams in the past using computers, and you can do anything with them except talk to each other.  If you can Google it, cool, I'll take that and allow for it when designing the exam.  If it's Googlable, okay, my bad.  Same with AI.  If AI can answer it, okay, I should have allowed for that and tried it myself before setting the question.

Maybe regulation to require teachers to take this into account would be useful.

   >> IHITA GANGAVARAPU:  Thank you so much for your points.

   >> AUDIENCE MEMBER:  Hi.  ISOC youth ambassador.  While we try to make this more accessible, more thought should be given to using generative AI in the Global South, because the quality of teaching that is accessible to children in those areas is quite limited, and I think that generative AI can really help to bridge this gap that we're seeing in the Global South, so that is something that people should consider.  Thank you.

   >> AUDIENCE MEMBER:  Thank you very much.  My name is Valaris, currently also a student, doing a master's degree in informatics and concentrating more on data science.  My question, and maybe I can add a little bit of thinking regarding AI.  Students are still going to use it.  You cannot ban it.  It should be allowed, in my perspective, but we need to think about the way we can govern it, for example, for exams.  You know, I can generate code, I can write it just with Chat GPT right now.  I can build a site in a matter of 5 seconds, and another 5 seconds I'm going to spend just to input the code, and then I'm going to see my website.  Did I do it?  No.  Artificial intelligence, it's text generation; it performed it for me.  And the question is, you can write an essay, you can write whatever you want with it, and the model is going to become more and more complex, it's going to become bigger, and therefore it's going to improve on itself.

So, we need to think about some kind of policy that will prevent people from using AI in exams, not because of cheating, but in order for them to get the skill.  So, I'm thinking we need some kind of universal policy across universities and academia to level the playing field.  For example, in the EU you have certain common levels of education across the European Union, but the educational level is different in different countries, so we just need to think about something that will prevent people from using AI during the examination period.

So it's open and you can use it, but at the end of the exam it's you and your personal skills, and that should be what matters, in my understanding.  So thank you for your attention.

   >> IHITA GANGAVARAPU:  Yes, please.  Thank you so much.

   >> AUDIENCE MEMBER:  Thank you very much.  I would like to just add a few things from the teacher's perspective.  I think the tools certainly are useful for the students because they enhance their creativity and critical thinking, but from the teacher's perspective, students are going to create their submissions in, let's say, 5 seconds, so we need to find a way to evaluate those submissions, and evaluation rubrics need to start thinking about how we evaluate that work.  There are ways to integrate things like digital watermarking, where it is clear to the teacher that some of the content is coming from AI sources, and that helps the teachers to actually see how well the students are doing.  I think that would be something that we can discuss also.  Okay.  Thank you very much.

   >> IHITA GANGAVARAPU:  Thank you very much for all of your interventions.  They're noted.  I think we'll take up a few comments from the speakers on the interventions towards the end of the session.

Moving ahead, I would actually request Connie to talk about how policymakers should and can collaborate with relevant stakeholders to ensure that teaching and learning processes are enhanced while sustaining creativity, critical thinking, and problem solving.  You mentioned these terms during your initial talk, so please.

   >> CONNIE MAN HEI SIU:  Thank you.  So, as a student who has been through the educational landscape of Asia, let's take a moment to delve into what integrating generative AI into education here really means.  Asia, as you probably know, is a vast continent boasting a huge array of different countries, each with their own unique educational systems, cultures, languages, and so on, and given this remarkable diversity, any discussion around introducing generative AI in education needs to begin by recognizing and embracing this fact, especially, as some of the audience members have just mentioned, in assessments and in exams.

So, other than that, policymakers also need to get a solid grip on what generative AI is all about, because without a clear understanding of this technology, any strategies they devise might miss the mark entirely.  It's crucial for policymakers to educate themselves on the intricacies of generative AI, which could be done through collaboration between experts and policymakers in the field.  Once the policymakers have a firm grasp, the next big task is prioritizing its integration.  It's all about finding the sweet spot of leveraging the power of generative AI to enhance education while safeguarding, and if possible enhancing, essential skills like critical thinking, creativity, and problem solving.  Achieving this balance is obviously not easy and requires active engagement with educators and students alike to emphasize these skills.

Inclusivity actually takes a very important role here, because collaborating with various stakeholders in education, teachers, parents, tech companies, is very important, and together they can identify and tackle issues like accessibility, algorithmic biases, and the digital divide, and ultimately make sure that generative AI enhanced education is accessible to everyone regardless of their background.  To make this implementation a success, educators need to be equipped to make the most out of AI tools, and policymakers need to promote partnerships with AI training institutions and education companies to provide ongoing development and support.  By doing so, educators are empowered to harness the power of AI in the classroom.  And ethical concerns such as privacy, bias, and data security should not be overlooked.  Policymakers need to team up with data protection authorities and educators to establish clear guidelines for the ethical use of AI in education, and those guidelines should ensure that student data is safeguarded, AI algorithms remain free from bias, and data security is rock solid.

And also furthermore, academic researchers and institutions can play a vital role in evaluating the impact and ethical implications of AI in education, so by collaborating with them to conduct regular assessments and research initiatives, this could provide valuable insights that policymakers can use to make informed decisions, collaborations with universities and tech startups, ideally incentivized by government funding can also motivate the development of new ethically sound AI learning tools, which could in turn enrich the educational landscape as well.

Lastly, let's not forget about the global perspective as a whole.  Policymakers should collaborate with other countries and regions, sharing best practices and setting common standards for the ethical and effective use of AI in education.  In our interconnected world, international cooperation is the key to addressing common challenges and ensuring that the responsible and ethical use of generative AI in education benefits students all across the world.

So, overall, the responsible and ethical use of generative AI in education is a multi-faceted challenge that demands collaboration among diverse stakeholders.  While navigating this complex terrain may be challenging, I believe the potential benefits for education make it a worthwhile endeavor that can ultimately enhance the learning experience for students all across the world.

   >> IHITA GANGAVARAPU:  Thank you so much, Connie.  That was a very comprehensive response.  I love it.  I would now like to invite Adisa, who is also the online moderator but wants to give his intervention as an attendee.  Adisa, over to you.

   >> BOLUTIFE OLUYINKA ADISA:  Thank you very much, Ihita.  Right now we don't have any questions from the online attendees.  By the way, I'm Adisa, Bolutife, a member of the Generation Connect Board, and glad to be here as online moderator.  I also have a personal input, so I would like to add to what some of the attendees have already pointed out, but I think I would focus more on the potential of generative AI, because while we have a lot of things to worry about, there is also huge potential for growth, and we see the convergence of education and artificial intelligence really giving rise to a lot of innovative possibilities, especially in the educational sector.  We now see situations where there is more immersive learning through things like, you know, physical sorts of training that directly address people's cognitive abilities, which I think is really good, especially for children, because the usual way of teaching probably will change in the coming years.

I think rather than regulate generative AI away, academia should probably learn how to work with it.  I feel instead of banning students from Chat GPT or whatever technology is out there, the curriculum perhaps needs to upgrade or evolve to address real-world issues, because at the end of the day, when students leave the academic environment, they will still use these technologies.  They probably won't have to do those things manually anymore.  So perhaps you test the knowledge in other ways, and we just need to evolve with the current pace at which artificial intelligence is moving.  That would be my recommendation.  I think less regulation is a good thing and also creates more room for innovation.

With this, I would like to give the floor back to Ihita.  If there are any other questions online, I will be sure to bring them to you.  Thank you very much.

   >> IHITA GANGAVARAPU:  Thank you so much, Bolutife.  I think there were a couple more mentions about how we need to do assessment, testing, and research on how it's going to impact, instead of completely, you know, removing generative AI from, let's say, a young person's life in an educational context.

But now I would like to invite Osei to answer the same question: how can policymakers collaborate with relevant stakeholders when it comes to enhancing teaching and learning processes, while making sure to sustain creativity, critical thinking, and problem solving?

   >> OSEI MANU KAGYAH:  I think this question has lingered for years:  how policymakers can converge with industry and the technical community on these issues.  The fact is that day in and day out, there are going to be new tools and new technologies, and we need to embrace them.  There have consistently been guardrails in place, but how are we implementing them?  New technologies do come up, and there are calls for policy, calls for guardrails and all of those things, but what about the previous policies, initiatives, and guardrails concerning these things?

I'm of the belief that we need to regulate AI, because we see other complex, high-stakes systems having certain guardrails, and we can't let the genie out of the bottle, even though it might already be out.  We need instead to be ahead of the curve and have all of these regulations in place, in a way that doesn't hinder or stifle innovation.  I always maintain that collaboration between academia, industry, and policymakers needs to produce policy that is human-centric.  If we are deploying these technologies and we're talking about ethics, a human-centric approach must address all of these issues.  As I already mentioned, there are always going to be new technologies.  We've seen in copious academic literature that tools meant to protect against plagiarism or uphold academic integrity can be biased against non-English speakers.  So we seem to be developing new tools that carry big biases; instead of just transformative artificial intelligence, how about tools designed for humans, which are going to make us more effective?  So I do believe, essentially, we need to take a human-centric approach, and we need effective collaboration from academia and multiple stakeholder groups.  Over to you, Ihita.

   >> IHITA GANGAVARAPU:  Thank you so much, Osei.  Following up on the points that were mentioned by Connie and Adisa, I would now like to open the floor for audience interventions.  The policy question that we are trying to get your inputs on is:  how can policymakers ensure that the use of generative AI by young people in education is inclusive, age appropriate, and aligned with their developmental needs and abilities?  I think it's a really important question for us to think about.  In one of the previous meetings that I had an opportunity to attend, they were talking about how you can have some kind of content moderation or parental controls, so there are lots of aspects to it.  We would like to hear from you:  what is it that we should be doing moving forward?

   >> AUDIENCE MEMBER:  Hello.  Thank you so much for organizing this session.  I'm a high school teacher in Tokyo, and I think AI has good potential to raise global citizens.  I think teachers, international students, and researchers should together try designing good curricula and actually deliver AI-supported class curricula in a human-centric way, and we will see whether the AI is ethical and transparent, and students, teachers, and researchers will judge whether it has the ability to bring out the best in students all over the world.  Thank you.

   >> IHITA GANGAVARAPU:  Great points.  Thank you.  Are there any more interventions?

   >> AUDIENCE MEMBER:  Hi.  My name is Nicholina, and I'm a first-year student here in Kyoto studying information processing.  The main thing that I think is really important in policymaking is that it reflects the fact that AI is going to change academia.  If the policymaking tries to just maintain the status quo, academia as it is today, I think it will fail to integrate AI successfully.  I hope that policymaking will reflect the fact that AI is going to change academia and change education.  Thank you.

   >> IHITA GANGAVARAPU:  Thank you so much.  I like that there is a mix of teachers, professors, and students in the audience.  It's a very important discussion, so if there are any more interventions, not just from speakers, you know, teachers and students, but I can also see and identify a few members of the technical community here, so if you have any interventions, please, share your policy recommendations.

   >> AUDIENCE MEMBER:  All right.  I'm just thinking, if we're going to speak about the usage of generative AI, one concern is that currently everyone is speaking about ChatGPT or other generators that were created already.  So my first point is:  who is going to create something for the kids?  Will a private company be responsible for doing it, or is it going to be a governmental responsibility?  Because we're talking about education.  Nowadays generative AI, with all of this text recognition and visual recognition, is practically useful for education, but right now it has become a huge thing in, for example, corporate life, like automation of tasks.  So who is going to do it, and will companies have incentives to do it, or should we rely on their goodwill?  And what data is going to be used to train those AI algorithms, specifically for children?  Because nowadays companies like OpenAI just scrape a lot of data, put it together, and start learning from it, but inside this data there is already bias that nobody could check, because so much information was fed to the algorithms.  So who is going to do it?  It's going to be costly, so who is going to bear the costs?  Will it be the private sector, the taxpayers?  Who is going to pay for this entire thing?  We also need to think about that if we really want to make something for children and their education.  Thank you.

   >> IHITA GANGAVARAPU:  That's a fair point.  Are there any more interventions?  We're looking at inclusion, for one; age-appropriate generative AI; and alignment with developmental needs and abilities.  I think these are the three verticals we're trying to highlight and think through in these interventions.

   >> AUDIENCE MEMBER:  Thank you.  This is Atanis again.  One thing, when we're thinking about inclusion in AI, I think we should go back again to the data that's being fed to the AI.  When we are using this generative AI, you realize that the responses you are getting tend to be oriented toward certain cultures.  When you use it in some languages, whether our local languages or other regional languages that are not English, it doesn't relate to those regions.  So what I have realized is that the data we are getting tends to relate to the developer of the AI tool, and most of these are developed by Big Tech, whether based in the U.S. or in Europe.  So you see, there is already a built-in bias.  When we are looking at inclusion, we also need to look at that.  If we want inclusive AI, we also need AI developed in other countries and other regions.  So looking at it from academia, as a software engineer and a student, I think we need to develop our own systems that learn from our data and that relate to our cultures and our regions, and then we can talk about inclusion.  Thank you.

   >> IHITA GANGAVARAPU:  Thank you.  Thank you so much.

   >> AUDIENCE MEMBER:  I agree with you.  I'm Isharo from Japan.  I may be too old to talk about education, but I wonder whether education with generative AI, at the current level, might be very English-centric.  In the end, you are saying that everybody can learn English, must have English, with generative AI.  But nobody talks about your own native language, even in Japan.  I wonder if in Japan we cannot make Japanese generative AI software?  It's very English-centric or American-centric.  For all the people in Asia, we have very small language groups, and I wonder if Japanese can have our own generative AI system, especially for education.

   >> IHITA GANGAVARAPU:  Thank you so much.  I think there were some mentions of use cases also, and I think that is something for us to consider.  Thank you for sharing the point that inclusion also involves language.  Are there any more interventions?  I think we will be happy to take one more.  Yes, please.

   >> AUDIENCE MEMBER:  My name is Patrick Polwalsh from the Sustainable Development Solutions Network, and I'm also a professor at University College Dublin.  One thing that I think is important here is we have to realize it's not just about getting access to the Internet and learning how to use these platforms and technologies.  One problem that you have to think about is what's called open education resources:  the freedom to actually have resources created, quality assured, and put on, let's call it, the global knowledge commons, which could be the Internet.  Of course, lots of countries have great teachers, great curricula, great applications to the domestic context, but they're not online.  So if you really are going to apply this tool, knowing there are biases in the data that it's drawing from, you have to be very concerned that that data pool, that great lake of knowledge, is not being populated with education resources from right across the world.  That's why the UNESCO Open Education Recommendation, Open Science Recommendation, and Open Data Recommendations are so important, because it's not just about use; it's about the right to populate the global knowledge commons with all of this resource material.  That needs a lot of work from policy people, tech people, universities, governments, and so on.

   >> IHITA GANGAVARAPU:  Thank you very much, all of you, for your interventions.  They're duly noted.  We would like to move to the next segment, where I would like to invite Osei to take a couple of minutes to talk about the question we just raised and also share your closing remarks, since we're running short on time.

   >> OSEI MANU KAGYAH:  Yes, sure.  Time is our biggest enemy.  One thing we all need to agree on is how generative AI has created personalized learning materials, made learning easy, and all of that.  But there are some myths around these tools that we need to address, and I think some of our participants raised this.  We talk of how these generative AI tools have democratized education, but truly it isn't democratized; it is still centered around certain languages, still more American-centric.  For it to be truly democratized, we all need to benefit:  Japan, Africa, everywhere.

Two, when we talk of policies, initiatives, or national strategies, I truly wish that when we go back to our respective countries, we advocate for national AI strategies and policies.  We need guardrails.  We can't let the genie out of the bottle.  We need to be ahead of the curve.  There are always going to be thousands of detours, but how do we make these tools safe, and how can we trust them?  How can we be open and accountable?

Lastly, when I talk about the deployment and design of these tools, we need to take a human-centric approach.  I can't say this enough:  we can make all the policy we like, but if the design of these tools doesn't take a human-centric approach, we may be failing.  There must be synergy and collaboration with academia, just like we have here, a multistakeholder approach in designing these tools.  It's been such an enriching conversation, and I'm very privileged to be here.  I learned a lot, and I would like to hear from Purnima, who is also on the call, so I'll give my one minute to Purnima to also say something on that.

   >> PURNIMA TIWARI:  Hi, everybody.  I'm joining at sort of the last minute, but thanks for this.  It's been an interesting conversation, and I would certainly want to contribute some ideas on how we can make it more inclusive.  Of course, making those data models richer would mean looking at different sets of concerns around ethnicities and all of the areas that come along with them.  I really appreciate the concerns that Connie has raised about how we can safeguard it, too.  In the current digital world, everybody is using these platforms more and more, so I think it's very important to build a strong infrastructure alongside.  That would be my submission.  Yeah.

   >> IHITA GANGAVARAPU:  Thank you, Purnima.  Thank you, Osei, for your comments.  I would now request Connie to share her closing remarks and address the policy question.

   >> CONNIE MAN HEI SIU:  Okay.  So I guess to sum it up.  First off, collaboration is inevitable.  Policymakers, educators, tech developers, and other relevant stakeholders should form a team, and such collaboration should extend beyond occasional meetings and instead evolve into an ongoing dialogue.  Just as AI technologies are constantly evolving, our policies have to keep up with this pace.

The next is data protection.  Since AI is becoming more and more prominent in our education, we need better and more robust data protection laws, and those laws are essential to ensure that students' personal information is secure while they engage and interact with AI-powered educational tools.

We have to mention digital literacy programs as well.  Before students dive into the world of AI, they need a solid understanding of digital concepts, and such knowledge empowers them to critically assess AI-generated content and navigate the digital realm with confidence.  If policymakers put more focus on digital literacy, young people can be equipped with the skills to think twice before consuming online information and judging it as right or wrong.

This brings us to inclusivity, be it in language or knowledge sources, as some of the audience has mentioned.  Policymakers should recognize the immense potential of AI in addressing educational disparities by personalizing learning experiences and tailoring content to needs, language, learning styles, and pace.  This inclusivity-driven approach ensures AI serves as a bridge, not a barrier, in education.  Policymakers should also adopt a proactive stance in assessing the impact of AI on students and collaborating with researchers.  This should shed light on how AI affects students in the short and long term, including cognitive development, social skills, and so on.

And finally, to ensure that the use of generative AI by youth in education is done right, young people themselves should be actively involved in these policymaking processes, because we have unique insights based on our firsthand experiences and preferences, and policymakers should tap into this knowledge and include it in their policies.

So to conclude, generative AI will inevitably be integrated into education, so policymakers should lead the charge of responsible AI integration by setting clear guidelines, promoting digital literacy, addressing educational disparities, encouraging AI development, and involving young voices in decision-making processes.

And with these comprehensive steps, we can definitely shape an educational landscape where generative AI becomes a powerful ally of ours.

   >> IHITA GANGAVARAPU:  Thank you so much, Connie.  With this, we reach the end of the session.  First of all, I would like to thank all of you for your active interventions, because we are at the IGF to discuss ideas and hopefully come up with effective actions on how to build meaningful policies around the use of generative AI in an educational setting.  With every emerging technology there are concerns surrounding multiple things, and with generative AI especially, we spoke about bias, fairness, types of content, challenges of misinformation, copyright, and more.  There are lots of thoughts and inputs needed, and as Connie mentioned, let us work together to leverage AI and bridge the gaps in education.  Thank you very much and have a great day ahead.

(Applause).