IGF 2023 – Day 1 – Launch / Award Event #27 Education, Inclusion, Literacy: Musts for Positive AI Future

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> Good morning.  Good morning from Kyoto, Japan.  I'm Connie Book, president of Elon University in North Carolina, USA, and chair of the National Association of Independent Colleges and Universities in the United States.  That organisation represents a thousand private and independent colleges.

This is my second time at IGF, and the 12th time that Elon University has sent a delegation to this important global gathering.

Our engagement at IGF since 2006 has been through our Imagining the Internet Centre.  It is Elon's public research initiative, focused on the impact of the digital revolution on individuals and institutions.  We have a booth over in the village, and our team is recording video interviews at IGF, and I encourage you to take a few moments to stop by and share your thoughts with us at some point this week.

Today's launch event highlights the urgent issues related to artificial intelligence and higher education.  We are releasing a substantive position statement titled Higher Education's Essential Role in Preparing Humanity for the Artificial Intelligence Revolution.  If you work at a college or university, you know how timely and important this statement is.

The statement introduces six holistic principles and calls for the higher education community to be included as an integral partner in AI development and AI governance.  The statement provides a framework for leaders at colleges and universities around the world as they develop strategies to meet the challenges of today and tomorrow.  At Elon University, faculty are adapting the statement as they create policies on AI and design new approaches to teaching and learning.  In writing this statement, we worked with higher ed leaders, scholars, and faculty members from around the world to synthesize ideas from authoritative sources on AI.

I want to thank everyone who spent time considering this statement and contributing their thoughts and support.  Today more than 130 distinguished academic leaders and organisations from 42 countries are initial signatories to the document, and we invite you to join them.

Study the document on our website, and sign on if you wish.  There are printed copies available for those in the room today, and our moderator will post a link for remote participants.

Let's briefly look at these six principles.  First, principle number one, people, not technology, must be at the center of our work.  As we adapt to AI, human health, dignity, safety, and privacy must be our first considerations.

Two, digital inclusion is essential in the age of AI.  We must advocate to ensure that people at our universities and colleges and beyond gain access to these technologies and are educated about AI.

Principle three, digital and information literacy is no longer optional for universities.  We must prepare all learners, no matter what their discipline, to learn and act responsibly with AI and other digital tools.  Digital literacy gives us power, and that must be part of every post‑secondary education.

Principle number four, teaching and learning is already undergoing dramatic change because of AI, and we must carefully navigate the use of these tools in education, using them transparently and wisely and protecting the interests of students and faculty members.

Principle number five, we are just at the beginning of the AI revolution, so we must prepare all learners for a lifetime of growth, and help them gain hands‑on skills to adapt to accelerating change.

Principle six, this final principle has to do with AI research and development, research conducted in higher education institutions around the world.  These powerful technologies carry great rewards and great risk, and therefore, great responsibility.  We need strong policies in place to guard against negative consequences of digital tools that could go beyond human control.

These are our core principles, and this sets the stage for a great discussion by our distinguished panelists today.  After their remarks, we will open the floor for all to share their thoughts on higher education's role in advancing the future of humanity in the AI age.

Let's begin with Mr. Lee Rainie, who spent the past 24 years as Director of Internet and Technology Research at the Pew Research Centre in Washington, DC.  We're very excited that Lee has joined Elon University to lead our continuing research on imagining the digital future.

Lee, please get us started today.

>> LEE RAINIE: Thank you so much, President Book.  It's a pleasure to be here and to be associated with this really important initiative.

We believe the six principles for the internet and artificial intelligence and our global petition are essential for maintaining human rights, human autonomy, and human dignity.  The principles bring time‑tested truths to the age of artificial intelligence.  There's evidence aplenty that societies advance as their education systems emphasise how the adoption of new skills can help people become smarter, as people discover new ways to create, connect, and share, as diverse populations are given the wherewithal to control how new technologies are used, and as people adjust their lives to the emerging practices that the new technologies afford, including lifelong learning.

As President Book noted, we think institutions of higher education can be the vanguard of Civil Society forces that enable beneficial changes for humanity.  Since the earliest universities were created centuries ago, they have cultivated the grandest purposes of human kind.  Discovering and advancing knowledge, training leaders, promoting active citizenship, and, yes, critiquing the societies around them and sounding warnings as troubles loom.

Importantly, we know as technology revolutions spread, one of the major jobs of universities is to pass along the best ideas and most effective strategies for learning new literacies.  Especially to other institutions and those involving children in particular.

Clearly, we are at a singular moment now as AI spreads through our lives.  In the past, tools and machines were created to enhance or surpass the physical capacities of humans.  The advent of AI for the first time brings technologies that enhance or surpass our cognitive capacities.  This revolution will cause a big sort that will force us humans to identify and exploit the traits and talents that are unique to us and make us distinctively valuable.

What will be the differentiators between what we can do and what our machines can do?  How can we domesticate these technologies to make sure they serve us and not the other way around?  At Elon, we're planning to be in the forefront of universities studying and disseminating information about how AI is affecting people.  We have an ambitious agenda of fresh research that will build on several decades of exploration of digital trends and future pathways for digital innovation.  In fact, we are gathering data right now in a survey of experts and a separate survey of the general population in the United States to explore how both groups believe the possible benefits and harms of artificial intelligence are going to unfold in the coming years.  We will be releasing those findings in early 2024.

Beyond that research, these are some of the questions that will guide our work in the age of artificial intelligence, metaverses, and smart environments.

What are the new literacies people would be wise to learn?  They might include things like media and information literacy: judging the accuracy and inaccuracy of information and making the right decisions based on it.  Data literacy, privacy literacy, algorithmic literacy, creative and content creation literacy.  In addition, we at Elon intend to explore how well we are doing to hone our singularly valuable human characteristics.  Things like problem solving; hierarchical decision making that makes pattern connections and builds decision trees about how to move forward; critical thinking; sophisticated communication and the ability to persuade, which machines can't yet do; the application of collective intelligence and teamwork, especially in diverse environments; the benefits of grit and a growth mind‑set; flexibility, especially in fluid creative environments; and emotional resilience.

In the end, big issues await exploration.  What are the signposts and measures of human intelligence?  What are the qualities leaders must possess?  How do people live lives of meaning and autonomy?  What is the right relationship between us and our ever more powerful digital tools?  Our past studies have shown there is a wide range of answers to questions like those.  And yet there is a universal purpose driving people's answers.  They want us to think together to devise solutions that yield the greatest possible achievements with the least possible pain.

Thank you so much for your interest.  Please feel free to reach out to me here or find me in our booth in the exhibit hall.  If you're interested in furthering this campaign, signing our petition, and maybe getting involved with us, we're always on the hunt for new partners, new collaborators, and new ideas.  Again, my thanks, President Book.

>> CONNIE LEDOUX BOOK: Thank you, Lee.

We now have two distinguished speakers who are joining us remotely.  First is Professor Divina Frau‑Meigs, who helped with the research and writing of this statement and connected us with thought leaders around the world.  She teaches and researches at Sorbonne Nouvelle University in Paris, and has been quite active for years with UNESCO and at IGF.

Doctor, you're up.

>> DIVINA FRAU-MEIGS: Hello, everybody.  It's 2:00 in the morning in Paris, but it's really worth it to be with you and to return to IGF.  I saw it being born, since I participated in the World Summit on the Information Society in 2005, representing academia for the Civil Society Bureau of the Summit.  I've worked on these topics ever since, following them from the beginning of social media in 2005 to what we could now call the beginning of synthetic media.  And this is maybe one of the tacks I will take.  Before that, I wanted to thank Janna and Daniel Anderson, as well as Lee Rainie and Elon University, for including me in drafting the document and fine‑tuning it.  And I want to stress the importance also of IAMCR, the International Association for Media and Communication Research, an NGO with UNESCO observer status which has fully supported the statement among all its members and added a statement of its own.  I hope one of the impacts of this big statement by us all, our contribution to IGF, will be to encourage other entities to make their own, because we each and all have to appropriate what we feel is going on with the internet and make sure that the cultural diversity of our universities continues, so that we don't fall into problems, one of which would be homogeneity in the types of AI models in the world, thereby creating more digital divides.

The other one, which is something I think we all feel, is that as researchers we have to resist panic.  The current panic is about AI systems and the fact that they could produce a super intelligence that is more intelligent than us.  I think we all agreed, as we discussed and went around the world, that this has to remain human‑centered, and that actually the humanities have a possibility of being back, not just STEM as a field, because more than ever we need to be human‑centered and get down to what it really is to be human.

I also represent a network of researchers at UNESCO called the Media and Information Literacy and Intercultural Dialogue Network of universities, where we also try to think through these items.  We push of course for media and information literacy first, because it permits a kind of familiarity that allows us then to move to AI literacy.  So one of our focuses in how to go about it would be to start with familiarity, so that people don't have the feeling there's a huge gap before gaining all these competences, and so as to prevent the panic and, on the contrary, leave space for understanding and for adoption.  We need to lift fear and anxiety.

For that, we also have to work at the policy level.  That's the nice thing about the six items we've put in there: they can all be unpacked and updated.  So if I were to unpack and update our work in a sort of continuous way, I would say that one of the most important things is proper guardrails for teachers and students.  We know, and research has shown, that the guardrails currently proposed by AI tech companies can be bypassed, so this is a problem.  And we as universities have to come up with our own solutions for teachers and learners worldwide.

Also, we need explainable AI.  It's probably one of the most important elements, because we have to have access to the motivations for creating AI systems, for funding AI, for the validity of the AI, and to the fact that the scraping of the data has to be lawful, unbiased, and safe, because that's how we can make proper decisions.  And we know the data is not really ethically sourced; it is not consensual.  The models are not consensual, especially in certain parts of the world like Europe, where I come from, and where we have a feeling that there is a lot of violation.

And for us at universities, in research and teaching, source reliability and ethically sourced data are crucial.  We can't let go of the problem of fake information and fake news, including that produced by the synthetic media that are coming up, without being scared about what happens with pseudo‑sciences.  This undermines the whole remit of our universities and our research approaches.

So I would call for a lot of reflection on source reliability, because with these AI models we are probably facing a new kind of source, one that is neither a primary source nor a secondary source.

So these are elements that I wanted to put into the discussion.  At the moment it's under embargo because it's not out yet, but during Media and Information Literacy Week at the end of October in Jordan, UNESCO will release its approach to AI and media and information literacy.  And I hope you'll see that it buttresses everything that is being done here.  Clearly, at the IGF level, I think all of us would support the creation of a body on information and AI, with all stakeholders, and especially of course universities and researchers, because it is probably the best place to facilitate the relatively asymmetrical dialogue right now between the tech companies, whose AI technologies are becoming extremely proprietary and extremely commercial, and what we would like to have as independent research spaces, that is, universities and policy‑making spaces.

So definitely, you who are at IGF could push for the creation of a global body of this kind.  It's more or less being delineated at the U.N., but IGF could be a very good space for continuous discussion about the items that I've underlined, like source reliability and AI explainability, and of course all of this within our very human rights.

Thank you very much.

>> CONNIE LEDOUX BOOK: Thank you.  Lots to consider there.  Thank you for those thoughtful remarks.

We are honoured today to be joined by Internet Hall of Fame member Alejandro Pisanty, a legendary leader in global internet governance circles.  He is a professor of internet governance at the National Autonomous University of Mexico.  Doctor, please give us your thoughts on the future role of higher education in the AI age.

>> ALEJANDRO PISANTY: ‑‑ to begin the speech by correcting the previous speaker, but legend?  That's Divina.  That's Elon University.  Legend, that's Janna Anderson and Lee Rainie.  And I don't want to continue with the list because it's very long.  I'm very honoured, and I hope, Professor, that you realize how highly many of us think of the effort that Elon University has made.  You really made a mark with Janna Anderson and Lee Rainie's work with the Internet Governance Forum.  They have done so much, from bringing students over to document things that no one even thought were worth recording, to now adding to that documentation their deep thoughts and understanding.  They're identifying leaders and bringing in young people; I have followed a few of your former students who have become really brilliant media analysts, figures, or communicators.  So they have raised the aura of Elon University to immense heights.  This is really wonderful, so thank you for ‑‑

>> CONNIE LEDOUX BOOK: Thank you.  That's very nice.  Thank you.

>> ALEJANDRO PISANTY: It's very hard for us not to look at things through a lens of size, and Elon is especially remarkable when we see that you have done far more than universities like mine, which has probably 20 times as many students.  We add two zeros to your numbers.

I want to enter now the subject matter of this talk, make it very brief, and try to be concise.  First, I join Divina, one of my most admired figures in this world, from the era that she has mentioned, from the early times of the World Summit on the Information Society, when people like her and IAMCR were championing this alternative to state‑controlled media and to the large private interests.  At the time it was mostly media, carriers, and network operators who needed opposing, and we now have a much broader spectrum; we simultaneously need to oppose and to platform many of the entities that are now considered troubles.

I want to join her statement in particular on resisting the panic.  I think that the first thing that universities have to do, universities and schools all over, is sober up and tell everybody: sober up, calm down, cool down.  Look at this rationally.  What are we if not ‑‑ not of the truth, but of the way of approaching whatever becomes the truth?  And that's, I think, the very first thing.

I have a second question here for the universities.  I want to thank Lee and Janna Anderson, Janna in particular because she did much of the follow‑up, for sharing with me early drafts of the statement, because it has now become the best statement for this.  I was a little bit shocked at the beginning, because I thought it was considering the universities in a very special and small role, in the corner of things, where they should be part of the mainstream and even the leading edge of things.  First‑world universities, let me abbreviate things by calling them advanced‑economy or first‑world universities, are seeing now what we have suffered in developing countries for decades, if not centuries, which is a brain drain.  One of the things that you are so concerned about comes from the fact that AI development pays a lot better in companies than it does in universities.  Even the AI winters were weathered by universities, where this slow research kept going on: algorithms were developed, the mathematics was developed, not only the computational techniques but the basic math.  And we're suddenly losing our best people, because they are working for companies which have not only large funding, but the other thing that bribes researchers: the opportunity to do the work.

When our researchers and PhD students leave for the U.S., for Europe, or for Japan, they're not only looking for a place that will pay a better salary; they're looking for a lab that is equipped for work, where they can actually do the measurements, do the experiments, and get them published.  What is significant is actually doing the thing.

And you're suffering the same thing now; there's just a new echelon of that.  So the question here, and I'll stop with this question, is this: the most expensive thing we have in developing countries, the highest cost we incur, is the cost of not doing.  The cost of not having developed a solid academic system with tenure, with infrastructure, with diversity.  The cost of not developing government that is rationally driven, that creates policies with continuity on an evidence basis, that invokes rights, invokes ‑‑ we would never know where we actually are.  So rights are invoked as a way of pulling the hand brake, instead of finding a way of calling on rights, not for the other guys to go faster, but for us to be able to go as fast or faster.

So that cost of not doing is now being clearly manifested in the shortcomings that universities are trying to overcome with this statement.

Thank you.

>> CONNIE LEDOUX BOOK: Thank you, doctor.  Really interesting.  Calm down, cool down.

So next we have Dr. Francisca Oladipo, vice chancellor and professor of computer science at Thomas Adewumi University in Nigeria.  Doctor?

>> FRANCISCA OLADIPO: ‑‑ Elon University.  Speaking from the perspective of an African university and an African researcher, we are probably just catching up with the rest of the world.  But when you look at an emerging technology, like everyone else who experiences something new for the first time, there is that risk of wrong adoption, or even possibly of abuse.  And so I believe that as universities, most of our roles should be centered around the educational aspect of artificial intelligence.  If you look at not just interdisciplinary education but cooperation, AI is applicable in practically every field.  AI researchers should not think of just collaborating with subject‑level experts; those in the field of AI should be made to study other subjects, like philosophy, finance, healthcare, and the social sciences, to gain some basic kind of domain knowledge.

And universities also need to promote ethical artificial intelligence and do a lot of education around AI, because students need to be guarded against that abuse and misuse.  And then there are lots of questions in society about the role of AI in the educational space.  So it's not just about educating students; society generally needs to be educated, maybe through seminars or handbills, or through a (indiscernible) on artificial intelligence.  The curriculum these days needs to be centered around AI, because whether we like it or not, it's going to be with us for a very long time.  I mean, it's always been here, but the awareness is now higher.  So most of the curriculum, whether in the humanities or in the arts or the sciences and technology, needs to be built around AI literacy for everyone.

As universities, we need to do a lot of advocacy and engage with policymakers.  We could contribute our part to responsible AI governance, but how can we do this if we don't engage with policymakers and do a lot of public outreach?  We must also continue to promote more diversity and inclusion.  In Nigeria we see AI as, oh, it's for you computer people.  But that's no longer the case.  People use ChatGPT now to get answers; they use online AI tools for one reason or another, to listen to research papers and so on.  So there is always that indirect application of AI across every field.  And so we need to be more inclusive, to embrace everyone and not make AI look like it's only for computing people.

When we talk about AI for social good, people are primarily at the center of social good, mainly in the arts and humanities; they're the ones who study behavior, who look into issues and how different factors affect people.  So it is important that these people are also included in the study of AI.

There is a need for every one of us to engage in continuous learning.  AI is advancing at a fast pace now with language models, and before we know it, something new is out there.  We all need to continue to learn to keep abreast and be able to educate others.

Thank you very much for this opportunity again.  I'm sorry it's 3:00 a.m. in Nigeria, and ‑‑ pardon me.

>> CONNIE LEDOUX BOOK: Very late.  I know.  Thank you.

Now, joining us remotely from India is Siva Prasad Rambhatla.  Doctor?

>> SIVA PRASAD RAMBHATLA: Good afternoon.  I must thank Professor Anderson for this opportunity.  Let me say that, because I'm an anthropologist, I look at this quite differently.  Technology is a medium: we as humans feed it, we as humans guide it, and our biases are also put into it.  As I look at it, redefining education is one of the challenges, so as to give access to the large number of people who have been denied it on account of their economic condition.  And if you really look at the state of education in many countries, especially in the Global South, we must remember there is disparity between the Global South and the Global North; those who have no access to education are from these sections.  A very interesting thing was that during COVID‑19, this technology, especially all kinds of online education technologies, played a very interesting role.  In fact, after that, online presentations became a kind of commonality, and AI and other technologies are really useful in this.

So what we find is that this itself poses new challenges for academics.  And when I say new challenges, the major problem lies in the digital divide: access to the equipment, access to the technology.  Many people, especially children, never had broadband connectivity during COVID times.  Some of them were even climbing trees to catch the signals.  It was such a horrible thing.

Apart from that, online courses also need to be designed and articulated in a way that captures the minds of the learners; that is also a big challenge.  And what we find is a lack of skills and of the ability to design courses using multimedia or even the kind of new technologies that are out there, especially things like ChatGPT.  Designing courses in an imaginative way to keep the attention of the learners is important.

And that is where we try to train the teachers, the people who are designing the courses; capacity building is one of the important things we need to undertake.  This means we need many specialists, including experts from the media, to sensitise the online content developers.  And this is where technology is trying to (indiscernible) instructors, because the moment you design it carefully, it can fill a gap.  But it doesn't fill the human gap; it only partly fills the knowledge gap.

So there is a challenge from generative AI, especially ChatGPT: the issues of copyright, plagiarism, and other things.  For plagiarism, some tools have even been developed to detect whether content is taken from other online sources.  And that is where the problems others have mentioned also arise: copyright, and apart from that, the importance of data protection and security.

This is a particular problem among the younger learners, because they like to bypass the process of learning and just copy; they can ask ChatGPT to give them this or that.  But then it doesn't help.

So the challenges are real.  And in fact, the very important thing is that if education has to be inclusive and multicultural, knowledge has to be more local.  We need local AI models of learning, because there can't be a universal kind of content; most things are local.  Therefore, what we need is local knowledge and local content, and that needs to be provided in order to help people learn better.  Because problems are local, and solutions can be local.

Thank you very much.

>> CONNIE LEDOUX BOOK: Thank you, Doctor.

Next to speak is Wei Wang, a member of the IGF Dynamic Coalition on Data and Artificial Intelligence Governance.  He is also a teaching fellow at the FGV Think Tank in Brazil. 

Dr. Wang?

>> WEI "WAYNE" WANG: Thank you so much, everybody, for coming.  I'm so sorry that I cannot be with you physically in Japan, but I'm excited to be here virtually.

I will focus on some legal aspects, but before moving to the legal aspects of AI's implications for higher education, I will make some very general points as well.  As was mentioned, we are convening the Dynamic Coalition on Data and Artificial Intelligence Governance, and we will release a research report on the global AI governance landscape, probably tomorrow.  So if you're interested in this topic, you can probably get a hard copy.

(indiscernible) artificial intelligence, and this sort of supply chain is relevant to at least three parts.  The first is what we call data protection.  As you may know, (indiscernible) has triggered a lot of data protection authorities globally to investigate generative AI services; Italy, I think, was the very first authority to investigate (indiscernible).  The second is content safety: misinformation, disinformation, and most significantly what we call (indiscernible).  That means, for example, if you use some generative AI services, you will find that citation links are fake, and this produces a lot of challenges to, for example, research integrity, where it will affect our scientific discovery.

And the final part is intellectual property.  Currently I'm doing volunteer work with the Creative Commons Hong Kong chapter; I think it's a good mechanism in terms of AI (indiscernible), a sort of licensing and contractual mechanism for copyright.  Typically, there have already been some litigations against generative AI services for using copyrighted materials for training.

So I think this will be a big issue, because generative AI services are also challenging our perception of fair use in copyright law.  Many years ago there was a case about Google Books, and the judges largely thought it was a sort of fair use.  So it was fine.  But what about the generative AI services in the near future?

These are the three areas I wanted to quickly touch upon (indiscernible).

( video frozen ).

Thank you so much for having me.

>> CONNIE LEDOUX BOOK: Thank you.

Now we'll hear from law researcher Eve Gaumond of the University of Montreal.  Her research focuses on the impact of artificial intelligence on higher education, and she's currently working on that research here in Japan.

>> EVE GAUMOND: Good morning.  Thank you very much.  Yes, so I would like first of all to thank Elon for inviting me to comment on the statement that they are launching today.  I would like to build upon the three elements contained in the statement to talk about AI, freedom, and higher education.

The three elements are improving learning and teaching, human agency and autonomy, and the digital and information literacy provided for in principle three.

I'll walk through these three elements in order to make the following point: it is crucial that people who develop and deploy AI in higher education understand it sufficiently well, ask relevant questions, and ensure the (indiscernible) of higher education doesn't prevent students from making important choices about their lives.  So let's start with enhancing learning and teaching.

AI has the potential to improve the quality of education.  It can help create personalized learning experiences: students can learn at their own pace, focusing on their strengths or weaknesses, the stuff they struggle with.  And it can also be used to (indiscernible) the student‑teacher relationship.  There are some educators who report that they use data analytics to reach out to students who are suddenly disengaging from classes.  But these positive impacts are far from guaranteed.  Even though AI promoters say personalized learning increases (indiscernible) of information, there isn't a lot of data that supports that claim.  Oftentimes ed tech looks like modern snake oil.  And modern snake oil can have real negative impacts.  The datafication of students' lives can discourage them from engaging in meaningful experiences, and it's especially worrisome when we know that the data starts being collected as early as ‑‑ at an early level, and continues following them through high school and university.  Some students refrain from writing essays about controversial topics out of fear that it might limit a future opportunity.  So they avoid formative learning experiences ‑‑ engaging with difficult ideas, with challenging ideas.  College students can refuse an invitation to go to the bar on a Monday night because geolocation data can be used to predict their likelihood of success at school ‑‑ to predict if they're at risk of dropping out.  And it can influence their admission to grad school, or their scholarship applications.  Again, we're preventing people from engaging in meaningful formative experiences.  Remember when you were in college: these are things that promote flourishing.

What if an immigration officer can access a student's class attendance data, for instance?  Is this really what we want for higher education?  Is it really promoting the full development of the human personality, as international human rights law says it should?  I don't know.  I don't know.  But these are questions we ought to be asking.

This is why it's so crucial that professors, universities, and administrators understand how AI and data work, so that they can ask relevant questions: what kind of data is being collected?  What is it used for?  Who can access it ‑‑ only professors, or third parties as well?  And if third parties can access it, what for?

So, yeah, this is why I believe this statement is so interesting, and so important ‑‑ in particular principles one, four, and three, because they can contribute to ‑‑ students' freedom.

That's it.

>> CONNIE LEDOUX BOOK: Thank you, Eve.

Our final panelist is Renata de Oliveira Miranda Gomes, an IGF 2023 youth delegate representing Brazil, who recently earned a master's degree in communication at the University of Brasilia.  And she's here with us today.  Welcome, Renata.

>> RENATA de OLIVEIRA MIRANDA GOMES: Thank you.  Thank you so much.  Good morning.  I would like to say thank you for the opportunity to participate in this panel as a youth representative.  I am part of the Brazilian delegation this year, and I have been studying for some time how we use the internet, and specifically digital platforms, to communicate science.  I'll be mindful of my time here and pass to the main point that I wanted to bring to the debate, which is how new digital platforms are extremely present in higher education.  I believe that the pandemic actually showed us this quite significantly.  During a time of social isolation we had to quickly adapt to a new way of learning and exchanging knowledge, and AI was certainly very much part of it.  But the thing is, I believe there is still a gap between students and educators when we think about the acceptance of new platforms and ways of learning.  I'll give an example that resonates with what the professor mentioned.

For example, ChatGPT can be used in multiple ways.  I'm aware of arguments that point out it can facilitate plagiarism, or cutting corners when used in assignments.  However ‑‑ and I was discussing this with some friends in the Brazilian delegation ‑‑ it can also make your life easier.  We were faced with long lists of reading materials, and although it does not substitute comprehensive reading and understanding of a text, it can certainly aid us by producing, perhaps, bullet‑point highlights, and gain us some time, actually.

So it can also be a tool to develop critical thinking and analytical skills.  My argument is that educators and students should work together, and the principles presented here propose to find solutions that can help all parties involved.  Specifically I want to point out principle number five: learning about technologies is an experiential, lifelong process.  A new platform (indiscernible) users than on the software itself, so it is crucial that we educate ourselves and work collaboratively to ensure that it can be the best possible.

So this is why I believe these debates are so important.  In Brazil, the approximation between AI and education is going beyond the scope of higher education as well.  For example, a state recently announced it is working to include AI in the state's high school curriculum, so it will be the first state ‑‑ this is a great way to begin the dialogue of good platform usage from the initial learning processes.

So I think this is pretty much what I had to bring to the debate for now, but I look forward to discussing it further with you.  Thank you for the opportunity.

>> CONNIE LEDOUX BOOK: Thank you, Renata.  We now want to engage the community here with us and broaden our conversation.  So we're going to open it up for questions.  There are microphones at the table, so the floor is yours, please.  Does anyone have any questions?  Yes.  Please say your name and your affiliation.

>> Certainly.  My name is Crystal, and I am a professor of European Union law at two universities.  I would like to react to the point made by the youth delegate just a moment ago ‑‑ (indiscernible) experience, a gap in expectation.  I can see that, for example, at my Dutch university, which (indiscernible) at the moment is trying to formulate a (indiscernible) policy, and has not quite managed it.

But for the time being, they said: actually, we are against using it.  My students, of course, are from a wholly different generation; they're all digital natives, they know how to use these things, and they want to use them.  So I can see the gap that you're talking about.  And personally, in one of my courses where people have to write an essay, I have taken the approach suggested by the department that deals with these matters, which has said one way of doing it is to alert students to the possibilities and the dangers, especially in the legal field.  You may all be well aware of the fact that a lot of (indiscernible) information is provided by these models.  So you alert them to those dangers, but you also tell them that, yes, you can use it.  Because it makes no sense to say no.  It's just not realistic, in my opinion.

So I have followed the approach of telling them, yes, you can use it, but with proper attribution.  So in your papers you have to state whether or not you have used AI, and how you have used it.  I think this is a better approach, because, as I just said a moment ago, it's totally unrealistic to expect that people will not use it.  It's also not clever, because, as you said quite rightly, there are positive elements in these systems, and we should use them in a positive sense.

So thank you for your contribution.  It's entirely (indiscernible).

>> CONNIE LEDOUX BOOK: Thank you.  I think we had another question here.  Yes?

>> Thank you very much.  This is ‑‑ my name is (indiscernible), I come from Bangladesh.  (indiscernible).  So ‑‑

( microphone cutting out ).

In between the ‑‑ in addition to the ‑‑ in the different groups and generations.  (indiscernible) so I was thinking, since ‑‑ still there is a ‑‑ you're talking about AI and universities, so if it is becoming more and more (indiscernible), how will the divide ‑‑ some people will be super tech people, and they will be using AI and that technology and getting more and more opportunities and access.  I imagine public services will be based on AI (indiscernible).  So then people like us, in other countries, in the global south, living in a (indiscernible) ‑‑ how will they have basic rights, (indiscernible)?

When we think of how AI can be ‑‑ a part of our lifelong learning, sometimes you're thinking technology will come (indiscernible), people can ‑‑ gain the knowledge and skills by lifelong learning.  Are any of the institutions taking on that curriculum, developing the curriculum for the ‑‑ (indiscernible) left behind?

They're also taking part ‑‑ becoming part of these new technologies.  So, I don't know ‑‑ this is actually the thing that came to my mind.  Thank you very much.

>> CONNIE LEDOUX BOOK: Would anyone like to react to that?  Yes?

>> SIVA PRASAD RAMBHATLA: The (indiscernible) has many shades to it, because it has something to do with socioeconomic backgrounds, social backgrounds, and also with nearness to the towns or cities, and with infrastructure.  Those who have this infrastructure are the ones who will benefit, and those who do not have it will not benefit.  So the digital divide is real.  It is true that it has come down, with the kind of availability of the internet and (indiscernible).  But still, there are problems.  That is one aspect.  The second aspect is that the algorithms ‑‑ (indiscernible) themselves reflect kinds of discrimination and exclusion, because the moment we ‑‑ perpetuate them, whether it is generative AI or other kinds of forms, we are challenged: how do we counter the biases, how do we counter these exclusions?  This is the challenge.

This is where academics have to think about (indiscernible).  Because one way is using the traditional medium, but (indiscernible).  The technology that we have can lead ‑‑ and even some of the private firms have (indiscernible).  This ‑‑ this is not the way.  Thank you.

>> CONNIE LEDOUX BOOK: Thank you.

Any final question?  Yes.  I think the microphone is right there.  They'll turn it on for you.

>> Hello.  My name is Julia, I am a youth delegate for Brazil.  I'm here with my colleagues, and I am very proud to participate in the group's presentation panel.  (indiscernible).

( microphone cutting out ).

But jumping to my question: I asked myself during this presentation, how do the participants see and act to work (indiscernible) and empathy from the (indiscernible) perspective of using AI?  Is there a connection through using different engines for AI ‑‑ not fitting only to (indiscernible)?  Let's look at different engines, and the different groups and corporations that develop work like the open‑source and the closed‑source engines ‑‑ like diversifying, because there is that sense of using the diversity of engines to help build sensibility, and (indiscernible) problem.  I cite it as a problem, because there are a lot of STEM academics or STEM operators, not necessarily (indiscernible), that are uninterested in empathy, uninterested in developing and working with AI with ethical and moral concerns.

>> CONNIE LEDOUX BOOK: Thank you.

Doctor, that relates directly to one of your observations.  Would you like to respond to that?

>> ALEJANDRO PISANTY: Yes, thank you.  There are around 1,300 ethics codes for AI around the world that have been collected, and there must be ten times as many that have not been collected.  Some of them are very solid; they were built from the ground up, starting from an inventory of ethical systems, by the Institute of Electrical and Electronics Engineers, which is now developing a set of standards for ethical AI that can be used by companies and governments for guiding the development of systems, and for guiding the assessment of systems.

One problem these have is that it's very hard, first, to avoid subjectivity.  You look at the whole big 30 pages of ethically aligned AI ‑‑ what is age‑appropriate for children ‑‑ and in the end, it's a value judgment.  Someone has to make a value judgment on whether something is appropriate for 13‑year‑olds and not for 13½‑year‑olds.  So that's one problem.

The other one is that it's very hard to bring down these codes ‑‑ or the law, by the way, because some people say that ethical codes are a way to avoid the law, to not have straight legal observance.  Either way, it's very hard to bring this down to the person who is actually doing the coding, who is actually selecting data and deciding how you actually develop the system and put data into it.  That has to have a large contribution from the universities, with exercises that challenge our students at all levels ‑‑ from the people who are doing the hard computer science, coding and so forth, all the way to, as was mentioned, students using ChatGPT for their essays.  We have to work on that, and we cannot solve it at the university level alone.  If our students arrive from high school, from pre‑university education, without these ethical foundations and without the mathematical competence, there is a huge challenge for universities to compensate for 18 years of noneducation.  This again goes to the cost of not doing it.

And one other contribution here: as I said, I second Divina in her statement of resisting the panic.  But I won't only say, okay, freeze, calm down.  I think we can develop tools, and I personally am going to bring a plug for a tool I have developed, which is not for AI, but can be extended.  When you look at all the panics around the internet, and all the ways that the internet is seen as a panacea, you can actually see that most of the things we like very much or dislike very much that are happening on the internet have a human, social, pre‑online or offline component, plus the disruptive, sometimes radical, evolutionary change that comes through the internet.  It's like phishing or Wikipedia, the evil and the good: phishing is simple fraud, hugely enabled by the internet, and Wikipedia, as you know, is plain human warm‑hearted cooperation, the will to share knowledge, maybe.  So we have six elements there: scale, identity, transjurisdictional border crossing, barrier lowering, friction reduction, and the management of humankind's memory and forgetfulness.  We can analyse every conduct that we like or dislike online, or every project, divide it into these pieces and reassemble it, and then decide where you want your ethical code, where you want your police.  If you don't change human minds, you will not stop having fraud.  You will not stop people trying to cheat people, and people falling for cheats.

So let's not blame the internet, and let's not blame artificial intelligence, or a very small niche thing called ChatGPT.  We still need some fuzziness and fussiness, but this is the kind of tool we can have.


Final point: universities can contribute to this in an institutional way.  We have been providing our individual academic contributions and technical solutions, but universities have their own role, one that transcends the activism that sometimes comes with situated academic social science and bridges with the technical community that's actually doing all this development.

Thank you.

>> CONNIE LEDOUX BOOK: Thank you, Doctor.  We couldn't agree more.  That's why we think having an articulated set of principles is a way to begin the work for higher education, and I love that we're encouraging each organisation to make it their own, so that we have that diversity of thinking with this set of principles.

So we've reached the end of our time, and I'd like to conclude with an invitation.  Please go to our web page, see the list of signatories, and consider adding your name.  This will give our statement more reach and credibility.  Our site will provide updates as the statement reaches new audiences and begins to influence institutions around the world.

Thank you all for your participation in our event today, and for your support of this important initiative.  Thank you.

(applause)