IGF 2023 – Day 1 – Launch / Award Event #30 Promoting Human Rights through an International Data Agency – RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> EVELYNE TAUCHNITZ:  Good afternoon. Welcome to the session, Promoting Human Rights through an International Data Agency. Welcome both to our participants and speakers here on site and to our online audience. I'm Evelyne Tauchnitz, and I'm going to be moderating the session today. I'm a senior researcher at the Institute of Social Ethics at the University of Lucerne, Switzerland, and also a member of the IGF. Here with me are Peter Kirchschlaeger, Director of the Institute of Social Ethics, also at the University of Lucerne, Switzerland; Kutoma Wakunuma, professor at the university in Zambia/UK; and Frank Kirchner, professor at the German Research Center for Artificial Intelligence in Germany, who will be joining us online.

  We also have here with us on-site Hyeoung Joo Kim. Migle Laukyte, in Barcelona, will be joining us online, and Yuri Lima, from the Federal University of Rio de Janeiro, Brazil, will also be joining online.

  A few words on the flow of the session. We will start with short input presentations, really short, to give you a bit of an overview of what the session is going to be about. Afterwards, there's going to be a question-and-answer round with both online and on-site participants. And then we would really like to open the floor, in the sense of having a lively discussion with all of you, to hear your inputs, your comments, and your contributions, whatever you would like to share with us.

  So let's start with Peter, who's here with us today. Peter, maybe you can explain at the beginning what this international data agency is about, and how it will help to strengthen human rights?

  >> PETER KIRCHSCHLAEGER:  Thank you so much, Evelyne, and thank you to all of you for being here. A warm welcome to the session. The idea of the International Data-Based Systems Agency, IDA, is the result of a multiyear project started at Yale University in the U.S. and finalized at the University of Lucerne, basically addressing the question of how we can make sure that we identify early enough the ethical opportunities and the ethical risks of so-called AI, in order to make sure that all humans can benefit from the ethical opportunities, and that we are able to master the risks in a way that humanity and the planet can flourish.

  Based on that research, I made two concrete proposals. One is to deal with AI in a human rights-based way, so talking about human rights-based AI. This means, though, looking at the entire value chain of so-called AI: looking into how we extract the resources, so that this happens in a human rights-respecting way; how we produce technology products, so that we do that in a human rights-respecting way; and also the use, and maybe the human rights-based non-use, of certain technologies.

  Recognizing that certain technologies we shouldn't use, because they may be human rights violating: that is the first concrete proposal. The second proposal is to think of so-called AI as having a dual nature, with ethical upsides and ethical downsides, and to compare that to nuclear technologies, because there we also have ethically positive potential but also ethically negative potential. Thinking in the model of the atomic energy agency, to simplify: in the field of nuclear technologies, we did research, built the atomic bomb, used the bomb several times, and realized as humanity that we need to do something about it in order to avoid the worst.

  I am fully aware that the International Atomic Energy Agency is not the perfect solution and comes with geopolitical implications, but it still needs to be admitted that it was able to avoid the worst. So I think, in analogy to the model of the International Atomic Energy Agency, we should also establish at the UN an International Data-Based Systems Agency, IDA, aiming to foster peace, promote sustainability, and promote human rights, but also making sure that no AI-based product which violates human rights ends up on the market. I'm very much looking forward to our discussion in this session about this idea of IDA. Thank you so much.

  >> EVELYNE TAUCHNITZ:  Thank you very much, Peter, for providing us with this overview of what you're envisioning for IDA. We go on now to Kutoma, who will also give us a short input on IDA and what possible role she sees for it.

  >> KUTOMA WAKUNUMA: Good afternoon, everyone. Thank you very much for joining us in this session, in which I'm hoping we'll have a very good discussion between us and yourselves. I think it is very important that we do think about establishing an agency such as IDA. And I think one of the things that we ought to be doing, as we advocate for the establishment of IDA, is to look at how we can be responsive when it comes to the identified social and ethical concerns around emerging technologies like artificial intelligence.

  Oftentimes when these technologies are being innovated, developed, or perhaps designed and then implemented, one of the things that is always looked at is the positive aspect of these particular technologies. Very rarely, I think, in the process of design up to the implementation stage do we, or do the developers, think about the consequences or the threats that these technologies present.

  And this then brings us to concerns around privacy and data protection, for example. And also other ethical concerns, such as ownership and control. Because we know that as the technologies are being developed, the concentration of ownership and control is in the hands of a few. Especially as you go down to say, the Global South. We have issues around transparency and accuracy of the technologies. We have concerns around autonomy. We have concerns around power. You know, which then speaks to aspects related to monopoly, to dependency and to a certain extent to digital colonialism as the technologies become mainstream.

  So rather than becoming reactive when the concerns, the unintended consequences, start showing up, we need to be a bit more proactive. And I think this is where IDA might actually come in. So one of the questions that we need to ask is how we become responsive to responsible innovation.

  For me, I think one of the things that we ought to be looking at is being inclusive, particularly when we are looking at how these technologies permeate globally. Yes, of course, they perhaps start from more developed countries and then trickle down to less‑developed countries. But the issues perhaps, may be similar to a certain extent, because obviously, privacy and data protection concerns I think could be universal to a certain extent. Although of course, the way these concerns may be looked at or experienced could be slightly different.

  We also need to be cognizant of the fact that we need to understand how these technologies can have an impact on the different subjects that start using these particular technologies. So how do we go about ensuring that we cocreate, for example, or coproduce these particular technologies?  Because for the most part, we have these technologies as global technologies.

  And when we're talking about global technologies, sometimes we should be concerned about whose voices are representing these global technologies in a particular manner. Do we have everybody at the table when we're talking about ethical concerns that impact people?  For the most part, I think there is a gap in terms of who is at the table, whose voices are being represented, and whose social and ethical concerns we are going to be talking about.

 

  And if we are going to have an agency like IDA, that may actually help in terms of overseeing or supervising or, indeed, monitoring these particular concerns, so that we can actually use these innovations, these emerging technologies, in a more responsible and not irresponsible manner. So this is what I have to contribute for now, and I'm looking forward to an exchange with everyone else here. Thank you.

  >> EVELYNE TAUCHNITZ:  Thank you, Kutoma, for adding this aspect of responsiveness, which is, I think, really a key word that is not often mentioned. But I think, yes, you're right: if we want responsible innovation, it should be responsive, inclusive and proactive, as you mentioned. Thank you very much for adding these points. We will go on now with Frank Kirchner, who is joining us online. Frank, are you there? 

  >> FRANK KIRCHNER:  Yes, I'm there. Can you hear and see me? 

  >> EVELYNE TAUCHNITZ:  Yes, we do.

  >> FRANK KIRCHNER:  OK. Thank you for the opportunity. My name is Frank Kirchner. I'm the Director of the German Research Center for Artificial Intelligence and, at the same time, professor for robotics at the University of Bremen. I would like to take the point of view, of course, of creating these robots, creating these systems that we actually call AI-based robots, because they have to act, and are already acting, in real-world environments, in direct cooperation with people, for example in production facilities, but also already in private households.

  And I think what we're seeing now is just the beginning of it, because in many countries, because of the demographic factor, we will have a very, very high need for more and more of this kind of automation. At the same time, these systems will be required to do ever more "complicated" tasks that usually have been done, or are still done today, by human beings. So what that means is that we have to create robotic systems acting in real-world environments, maybe in direct contact with human beings, that will have to be able to perform really complicated tasks, maybe trivial for human beings, like packing something or cleaning your house, but still very complicated for a robot, for a technical system. And this can only be achieved with massive intervention by artificial intelligence tools. Having said that, as Peter said, this is one avenue we have to go down that is really very useful and can be of high value for humankind. But on the other hand, because we have to use these highly sophisticated AI models, there is always the risk of danger, in whatever way, of these systems being misused as well. So how do we deal with this? 

  The problem is that we cannot say we will avoid it. We cannot say we don't touch it, we don't do it, because it will be done. It's already moving forward. And one thing that has already been mentioned by the previous speaker is, if we look at who is actually able to do these kinds of things today, who can build these robots, who can build the systems that drive the robots, the AI technology inside: it's only very few.

  And it's not even states. It's not even countries. It's actually private companies, you know?  So if you want to create the foundational models that you need in order to enable a robot to do the kinds of tasks that I was just naming, you have to put a lot of money into creating the foundational model. And if you look at who's doing this today, it's the big five, and not even countries. Not even the highly developed and rich countries in Europe or North America are putting that kind of resources on the table.

  So this idea of having IDA, I think I support it a lot, because it gives us the opportunity to create ways of designing these systems that give power to more and more people. So instead of just having a few experts who can design these kinds of systems, we have the possibility of creating standards in the way we design and programme these systems, from the very low mechanical and electronic level of performance all the way up to the high-level behaviour and decision-making in these devices.

  These standards have to be created and somebody has to monitor them. And that's something that can be done, as we have seen with software development tools in general. If you go back to the '70s, for example, there were a few people on the planet who could programme your IBM computer, and these guys were flown back and forth between all parts of the world in order to do this kind of programming.

  In the meantime, we have been able to develop frameworks and model-based development tools that allow basically everybody to programme their own computer. And the same thing, I think, we have to think about for robotics and for artificial intelligence-based systems.

  The effect of this will be that we have more and more people who are able not just to create these systems, but also to understand their workings and their inner functionality. And that is usually an effective way to block the misuse of these kinds of systems, to put a wall against these kinds of abuse. What modern frameworks for design and programming also allow us to do is to use metaknowledge: metaknowledge for all the parts that go into these robots. So we can have cradle-to-grave tracking of all the components that go into these robots.

  Where has this motor been produced, who has produced it, from what material, and what was the carbon footprint for exactly the material that went into your robot?  We can track all of it, but only by having a more standardized way to design, to build, and finally to programme and use the kinds of systems that, without question, we need in the future to address the many challenges that humankind is facing now and, moreover, in the future.
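  To illustrate the cradle-to-grave metaknowledge described above, here is a minimal sketch in Python of what such a component provenance record could look like. All class names, fields, and values are hypothetical assumptions chosen for illustration only; they do not represent any existing standard or the speaker's own implementation.

    # Hypothetical sketch: provenance metadata ("metaknowledge") for robot components.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ComponentRecord:
        """Cradle-to-grave metadata for a single part that goes into a robot."""
        component_id: str           # e.g. serial number of a motor
        component_type: str         # "motor", "sensor", ...
        producer: str               # who produced it
        production_site: str        # where it was produced
        materials: List[str]        # what materials went into it
        carbon_footprint_kg: float  # estimated CO2-equivalent for this part

    @dataclass
    class RobotProvenance:
        """All tracked components of one robot, traceable end to end."""
        robot_id: str
        components: List[ComponentRecord] = field(default_factory=list)

        def total_carbon_footprint(self) -> float:
            # Aggregate the footprint of every tracked part.
            return sum(c.carbon_footprint_kg for c in self.components)

    # Usage example with made-up values:
    robot = RobotProvenance(robot_id="unit-0001")
    robot.components.append(ComponentRecord(
        component_id="M-42", component_type="motor",
        producer="ExampleMotors", production_site="ExampleCity",
        materials=["copper", "neodymium"], carbon_footprint_kg=12.5,
    ))
    print(robot.total_carbon_footprint())  # 12.5

  In such a scheme, a monitoring body could aggregate these records across manufacturers; the data model itself is only one possible design choice.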

  So that will be my comment, and my hope is that something like IDA, an institution or an idea like that, could support this, and maybe even be the institution, as Peter said, to monitor this kind of development worldwide. Thank you.

  >> EVELYNE TAUCHNITZ:  Thank you, Frank, for adding this new aspect of creating standards and also monitoring compliance with those standards, and also the tracking system that you mentioned for the design, development and use of robots and AI. We're going to go on now with our on-site participant on my left side, Hyeoung.

  >> HYEOUNG JOO KIM:  Thank you very much. The pronunciation is very difficult for Europeans.

  >> EVELYNE TAUCHNITZ:  I'm sorry.

  (Laughter)

 

  >> HYEOUNG JOO KIM:  OK. Thank you very much for having me, for this good opportunity and this meaningful meeting, and especially to Evelyne. I have learned a lot from this conference, yesterday and today, from the presenters: how we should live and prepare for our digitalized society, with all of our human rights, with digital technology.

  So let me start with a brief quote: regardless of whether we enthusiastically embrace technology or deny it, we are bound to the question concerning technology.

  Yes, the use of AI is on top of this -- for example, to describe our situation in Korea: three months ago, the Korean Ministry of Education decided to offer AI education to all our children, high school students, starting in 2025. In addition, subjects such as math and English will be taught with AI tools.

  In two ways: coding and knowledge related to AI technology should be present throughout our education, and also the other subjects, mathematics and English, will be taught with AI tools, for everyone in our education system.

  So in this context, I would like to say that it is very evident that we should have an agency such as IDA as a control tower. Because, as we well know, AI technology has not only a positive side but also, as mentioned, a negative side, so we need a control tower in order to minimize the negative side. That is self-evident. Therefore, I think what is more significant is not merely asking whether it is possible, but asking how it should be: more concretely, how we should and will build such an institution, because such a question finally constitutes the object or targets of education.

  And (?) education constitutes the character of that object, meaning to say, (?) entity. So I would like to suggest a discussion regarding the direction of building IDA.

  The first point is this: in the age of artificial intelligence, data are becoming ownerless. Even though yesterday many presenters in the main session stressed data authority, this fact can be considered as evidence that the ownership of our data is becoming weak. The agency that (?) the use of data will eventually collect more data than any agency that should be controlled or regulated. This could lead to calls for the agency itself to be a subject that is also controlled. Yeah? 

  Therefore, it is important to demonstrate the agency's trustworthiness well. At this point, we should come back to the values of transparency and fairness.

  The second problem is that of definitional research on human rights. OK, if it is (?) the concept of human rights in abstract and theoretical dimensions, such as at the political level, as so many speakers today, and yesterday in the main session, have said, it may relate to just philosophical concepts such as very broad human rights, or dignity. However, if you consider the cultural context in Africa, or Asia, or so many other groups, the concept of human rights will be made concrete and realized in accordance with --

  There should be a research group to establish a circular structure between general and particular values, namely (?) and diversity. I think this should ultimately be done through collaborative research among various research groups, for example researchers in ethics and philosophical research groups.

  OK. Those were my two points. Thank you very much.

  >> EVELYNE TAUCHNITZ:  Yes, thank you very much for pointing out, first, the relevance of education and knowledge, which we haven't talked about yet, and also for highlighting the need for transparency, fairness, and embedding human rights in their contexts.

  So we will go on now with our online speaker, Migle. Migle, are you there? 

  >> MIGLE LAUKYTE:  Yes, I am. Can you hear me? 

  >> EVELYNE TAUCHNITZ:  Yes, we hear you perfectly and we see you also.

  >> MIGLE LAUKYTE:  OK. Good. So first of all, it's great to see you again, although it's online: great to see Evelyne, Peter, Kutoma, and Hyeoung. Thank you for this opportunity to explain why I think that IDA is relevant and necessary, especially in this context of artificial intelligence advancements. Basically, I start from the European perspective.

  My point is to argue that we do not have, and therefore we need, a sort of international agency to address the threats that artificial intelligence and related systems might give rise to. So much so that the European Parliament has recently published its suggestions on how to expand and improve the proposal for the Artificial Intelligence Act that the European Commission is promoting, the first artificial intelligence legislative document that we are right now negotiating at the European level.

  And one of the things that the European Parliament has seen as very relevant and very important was the idea that we need not only to classify artificial intelligence on the basis of risk, but also to bear in mind that high-risk artificial intelligence systems might, and surely will, have a huge impact on human rights.

  Therefore, the European Parliament has proposed that high-risk artificial intelligence systems should undergo a fundamental rights impact assessment, which was not foreseen in the original version of this legislative proposal.

  The assessment of this impact would basically include such elements as the (?) of the system; the intended geographic scope and use of the system; the categories of natural persons and groups, not only persons as such but also groups, likely to be affected by the use of the system; and how we are going to verify that the particular artificial intelligence system is compliant with the legislation related to fundamental rights, though of course it applies to human rights more widely.

  It would also cover what kind of reasonably foreseeable impact we can envision through this impact assessment, what specific risks and harms we can think of, and what adverse impact there might be. And should this assessment lead to certain serious negative outcomes -- so the foreseeable misuses or harms are especially relevant --

  then the developer needs to inform both the national authorities and the stakeholders, and in particular the national supervisory authority, which might start an investigation.
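  To make the elements just listed more concrete, here is a minimal sketch in Python of how such a fundamental rights impact assessment record and its reporting step could be represented. All names and fields are hypothetical assumptions for illustration; they do not quote the EU AI Act text or any real reporting interface.

    # Hypothetical sketch: a record for a fundamental rights impact assessment.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class FundamentalRightsImpactAssessment:
        system_name: str
        intended_purpose: str                   # purpose of the high-risk AI system
        geographic_scope: str                   # intended geographic scope of use
        affected_persons_and_groups: List[str]  # categories of natural persons and groups
        compliance_verification: str            # how fundamental-rights compliance is checked
        foreseeable_impacts: List[str]          # reasonably foreseeable impacts
        risks_and_harms: List[str]              # specific risks, harms, adverse impacts
        serious_negative_outcome: bool = False  # does the assessment point to serious harm?

    def recipients_to_notify(a: FundamentalRightsImpactAssessment) -> List[str]:
        """Return who the developer would need to inform if serious harm is foreseen."""
        if not a.serious_negative_outcome:
            return []
        # Per the description above: national authorities, stakeholders, and in
        # particular the national supervisory authority, which may investigate.
        return ["national authorities", "stakeholders", "national supervisory authority"]

  A central body such as the proposed IDA could, in this reading, act as an additional recipient aggregating such records; that extension is an assumption, not part of the European proposal.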

  So having said this, of course we say, OK, that's a great initiative, and we very much hope that all these assessments will be brought into being. Where I do see the role of IDA is in basically being the focal point into which all these assessments might flow, so as to make good use of this enormous amount of information related to artificial intelligence risks and harms to people and groups of individuals, ethnic groups, or any other groups of human beings.

  Because this information is fundamental to prevent these risks and negative impacts, right?  So making this knowledge also available and accessible for international organizations would help us also not only to prevent these harms from taking place in Europe, but also would expand this protection worldwide, because United Nations, and in particular, the International Data‑Based Systems Agency, so IDA, could be the institution that could be in charge of this task. Because otherwise, we discover things in Europe, but then we would say, OK, so many companies might say we can not do this in Europe, but there is the rest of the world, right? 

  Where you can do anything you want. And the way to prevent this from taking place is to build IDA and make it the focal point for this sort of information, to be distributed, accumulated, and put to a use that would prevent any abuse, harm, or other negative effects on people from other continents where, as Kutoma pointed out, there was a historical tendency to colonialize and abuse. So I think this is the way to prevent also the repetition of historical errors that we are still not comfortable with. Thank you very much.

  >> EVELYNE TAUCHNITZ:  Thank you very much, Migle, for your input. I think all the speakers have agreed that technology has lots of advantages, but we also need to handle the negative consequences and the risks, especially when it comes to the high-risk AI that you mentioned, and, at least at the European level, the impact assessments and what to do with the information that these assessments generate, ideally to predict future risks.

  So there we also have a new contribution for IDA that we have not discussed so far. We'll go on now to our last speaker, who is online from Brazil. I'm not going to ask what time zone that is and what hour of the day [Laughing]. But if you're there, can you hear us?  Yuri? 

  >> YURI LIMA:  Thank you. It's 4:00 a.m. in Rio. Good afternoon to the participants of this important session on the International Data-Based Systems Agency. It is a pleasure to be here today. I would like to briefly speak about the challenges of building a fair international division of labour in the digital economy. In the past decade, we have witnessed the rapid and unprecedented revolution of AI and digital platforms that has ushered in a new digital, hyperglobalized economy.

  While the potential of recent technological advances to drive growth and innovation is staggering, there is a significant disconnect between the pace of this evolution and society's capacity to adapt. The speed at which new technologies emerge far surpasses our collective ability to understand, regulate and fairly integrate them into our economic fabric. The result is an unequal distribution of the benefits of this technological progress. The digital economy, as it stands, presents a stark disparity between the international flows of profits and labour. While a handful of multinational tech giants amass incredible wealth, sometimes larger than countries' GDPs, most of the digital labour force finds itself in a challenging position.

  This dichotomy results in an international division of labour that is often invisible, underpaid, and inhumane: a modern dynamic that echoes colonial practices, when resources from many were channelled to benefit a privileged minority. The technologies might have changed, but the underlying logic in their development, operation, and even disposal still relies on exploiting cheap labour from the Global South. From the Kenyan moderators who flag harmful content to train ChatGPT, to the workers in Brazil who drive for Uber while producing data that helps to develop the autonomous cars that will eventually replace them, to the Congolese miners who extract the materials to produce the next iPhone that will later be dumped in electronic waste landfills in Thailand, many people around the world face poor working conditions with low pay and little to no labour rights or protections.

  All to sustain a digital economy that seems very clean in the developed economies' Silicon Valleys. Article 23 of the Universal Declaration of Human Rights articulates everyone's right to just and favorable conditions of work and to just and favorable remuneration, ensuring an existence worthy of human dignity.

  Moreover, Sustainable Development Goal number 8 calls for decent work for all, fostering economic growth while upholding workers' dignity, safety and rights. Sadly, the current digital economy diverges from these noble ideals. In consequence, the time has come for urgent action to promote a more ethical international division of labour in the digital economy. We need greater transparency around the supply chains and labour practices that sustain big tech. We must recognize that the role of underdeveloped countries in the global flow of technology and wealth cannot be diminished in importance, as it is implicated in the more valuable parts of this global value chain, both sustaining it and allowing it to exist in the first place.

  The Global South, where much of this digital sweatshop labour takes place, must have a seat at the table (?) global rules for the digital economy. Enter the potential role of an International Data-Based Systems Agency, IDA: an agency that can serve by monitoring (?) that justice and equality are upheld in the digital sphere, observing the current state but also anticipating future challenges.

  Revealing inequities, identifying best practices, and recommending actionable solutions, IDA at the UN level can bring transparency and provide a platform for governments, workers, businesses, and Civil Society to engage, collaborate, and commit to a fair digital economy. By promoting the right to a fair international division of labour, IDA would ensure that a larger portion of society, not just the privileged few, enjoys the fruits of the digital revolution.

  In conclusion, while technology drives progress, it is our collective responsibility to ensure that this progress does not come at the cost of human rights and sustainability. As we build a more technologically advanced society, we cannot leave human rights and dignity behind. The future we want is one of inclusion, prosperity and equity. Getting there will require bold steps to reform the international division of labour in the digital economy as it stands. An International Data-Based Systems Agency at the UN can be a platform for technical cooperation in the future of digital transformation, promoting a just, equitable future for all. Thank you.

  >> EVELYNE TAUCHNITZ:  Thank you very much, Yuri, for also pointing out what Kutoma already mentioned: who is sitting at the table, and that, absolutely, the Global South also needs to be included if we talk about labour rights. Thank you for that. We have now heard the input of all of our speakers. I would like to give the word to the audience, both on-site and online, first for a round of questions and answers with our panelists. Maybe we can start with the participants here on-site, if you have any questions.

  >> Hi, can you hear me?  OK. Hi, I'm from Seoul, Korea. I'm studying public administration at university, and now I'm a PhD student. So I really, really wanted to ask: is there any model for the governance? I mean, is IDA looking at an IAEA model or an FDA model, that kind of thing?  So if you are thinking that AI could be a hazard like nuclear energy, are you thinking about the IAEA model, and do you think it fits the context of AI? And what should the authority and power for the governance be, exactly?  I was curious about what the governance part of IDA would actually do concretely, and what authority or power it should have. Thank you.

  >> EVELYNE TAUCHNITZ:  Thank you so much for posing this very important question about the concrete functions and powers. Who from the panelists would like to answer that question?  Maybe Peter? 

  >> PETER KIRCHSCHLAEGER:  Well, thank you so much for the question. So you're absolutely right that adaptations need to be made to, let's say, the model of the International Atomic Energy Agency, adapting it to the field of AI. I think you're absolutely right on that. I still would think that the model of the International Atomic Energy Agency can serve to give us orientation as to how many functions, rights and entitlements such an agency should have in order to really make a difference on the ground. Because I think what's important now is that we have gone through a period of beautiful declarations and guidelines and recommendations.

  But we haven't seen yet so much impact of that. You know?  Businesses run as usual. We still face the same risks. You know, we are not that good in identifying the ethical opportunities together. Not everyone is benefitting from AI.

  And so we need something that really has an impact. And there we need to adapt the International Atomic Energy Agency model in order to make it fit for AI. But I think it's possible, looking, for example, at concrete functions IDA should have. For example, what is absolutely usual and not even questioned in the field of the pharmaceutical industry is a certain kind of approval process for access to the market.

  And something similar would need to be done in the field of AI. So IDA would have the right to run such an approval process. Secondly, the proposal would be that it also has the possibility to sanction not only state but also non-state actors that are not fulfilling their duties, not fulfilling their obligations.

  So, in order to really see a difference in the impact of artificial intelligence on the ground: basically, the underlying motive is to protect the weak from the powerful. And of course, who the powerful are has kind of shifted, as we have heard from Frank Kirchner from Germany. The powerful in the field of AI are the multinational tech giants,

  and not so much the states anymore. Of course, that needs to be taken into consideration as well. Thank you.

  >> EVELYNE TAUCHNITZ:  Like to add something? 

  >> KUTOMA WAKUNUMA: Just quickly: for me, this also relates to Migle's contribution when she talked about the EU AI Act, which is currently being debated. It's interesting, because it is an EU AI Act, and my take is that it's going to be slightly different from, say, the U.S., if they're going to be talking about their own act, and it will be different from, say, for example, Brazil. Perhaps Africa might also be looking at a different kind of act or regulations. And within these particular countries or continents, there would be countries looking at different regulatory policies, or acts, if you like, whatever it is they're looking at in terms of AI policy.

  So for me, I think that IDA would be ‑‑ one of the things that IDA could do is to then sift through all these different regulatory policies to help come up with ‑‑ I know it's going to be quite difficult, but at least come up with something akin to one global standard of artificial intelligence.

  Because as Peter rightly said, one of the things that IDA would do, potentially do, is to protect the weak from the strong. So if we have an organization, an agency like IDA, I think it might help to then come up with some standard or some AI act that can be cohesive and cover a global ground so that everyone is protected in that respect.

  >> EVELYNE TAUCHNITZ:  Thank you very much for this ‑‑ is it on?  Hello?  No. Yes?  OK. Thank you.

  Thank you for sharing your insights on that. Are there any more questions here from the on-site participants?  If not, we will go on to see if there are online questions. But please, if you have any more questions, feel free to pose them. If not: Melina is our online moderator. May I ask, are there any questions in the online chat? 

  >> MELINA FAH:  Good afternoon. Yes, actually, there's one question. I would like to invite you to ask your question. Ayalew didn't raise his hand, but he already posted the question in the chat, so I will read it.

  Is it possible to protect international database information by building sophisticated technological advancements, or are there any other means to protect it from hackers? 

  >> EVELYNE TAUCHNITZ:  Who would like to answer that? 

  >> AYALEW: Sorry, thank you very much for giving me this opportunity. I can elaborate on my question; it looks a little bit --


  >> EVELYNE TAUCHNITZ:  Sorry, maybe if you could put your camera on, is that possible?  So we could also see you.

  >> AYALEW: Yeah, no worries. I joined the IGF last year, and I'm researching digital fiat currency and CBDC, and I finished my master's in information and communication technology. My understanding from last year is that there are positive and negative impacts of AI, and also, what we haven't mentioned here is that a lot of technology is behind it.

  We have ART and IoT, and also blockchain technologies, and all these technologies are generating huge amounts of data. We are trying to create an international database. So are we really creating an international database and protecting these databases with sophisticated technology?  Or is there any other mechanism through which we can relate internationally, including the Global South? As a current example of an international database, SWIFT handles international cross-border data transactions.

  And that's 875 different banks of different nations that have signed up and are regulated. We need to find that kind of international rules and regulation. Also, we need to think about how we teach the hackers: if we hack another country, what happens when a hacker from another country is hacking our own country? We need to have ethics.

  So what is this international IGF forum trying to find out and set up: an all-inclusive international (?) which governs both the Internet and Internet-related technologies? And what are the prospects? This is my question. Thank you very much. If it is not clear, I can elaborate more.

  >> EVELYNE TAUCHNITZ:  Thank you very much for your question. If I understood correctly, your question has to do with the regulation of this huge amount of data and how the Global South can be included specifically. Please correct me if something is missing. OK.

  So who wants to address this question from the panelist both on site or online?  Kutoma, please go ahead, yes.

  >> KUTOMA WAKUNUMA:  I don't know if I'm going to address it adequately, but I'll address it in the manner that I understood it. Your question took me to reflect on the discussions that we're having around ChatGPT. It hasn't been very long; a couple of years ago, if you like, we didn't really have a concern around ChatGPT. Now we're starting to look at it and think about all these concerns in education, in different kinds of sectors. And ChatGPT is a classic example of how these unintended consequences can actually affect different organizations of the world differently. It is one technology that is permeating everywhere, and people are struggling to understand what policies we can start to look at. Being an academic and being very much involved in student activities and things like that, we are now thinking: OK, this is a technology that has bolted, so there is no way of bringing it back. So how do we help students, or how do we encourage students, to use it responsibly? 

  And I think this is something that everyone is kind of thinking about across the globe, and there is no right way of looking at it. This is why we probably need agencies like IDA to proactively look at these particular global events or situations and how we can then have global mitigating aspects related to this.

  And one of the things that we ought to be doing also is to be inclusive. I think you did allude to the fact that in the Global South there could be an impact, and things like that. But for the most part, only a few, I suppose especially from more developed countries, are really sitting at the table discussing these particular elements. And we need an agency like IDA to ensure that everyone, including people from the Global South and from the Global North, is sitting at the table, trying to find solutions to concerns about things that are currently emerging, or to have foresight as to what could potentially come as far as these emerging technologies are concerned. We should not sit around and wait until something has happened for us to then start scrambling to find solutions. And this is one example of what ChatGPT has done, and I'm sure a lot more upcoming technologies are doing the same. So I hope I have answered your question, perhaps at least in a little way. Thank you.

  >> EVELYNE TAUCHNITZ:  Yes, of course, Peter, please.

  >> PETER KIRCHSCHLAEGER:  Thank you so much for your question. I want to add three minor points. I think the first is really that IDA should promote technological cooperation, and I think that's very important for tackling cybersecurity.

  And secondly, it shows also that IDA needs to have some kind of force, being legally binding, because a problem like cybersecurity we cannot tackle with recommendations and guidelines.

  And thirdly, I think it creates a certain kind of optimism that it will be possible to find global consensus on IDA, because with the huge and enormous economic damage, cybersecurity is basically threatening all of us, be it state actors or non-state actors. And joining forces in that regard could help us to tackle that huge issue. I would suggest that IDA could play a substantial role in that. Thank you.

  >> EVELYNE TAUCHNITZ:  Thank you very much, Peter. Any more questions from the audience?  Online, is there anyone there who would want to ask another question? 

  >> MELINA FAH:  No I don't see any more questions. Does anyone want to add something? 

  >> FRANK KIRCHNER:  If there are no other questions, I would like to add to what Peter just said on the question of hackers. I think Peter already said we cannot prevent hackers from doing what they want to do.

  We'll always have criminals in the world, and if they have enough criminal energy, they will do it. So this is not the way we can make this data safe. But there are, of course, other ways to do it, and that's what my colleague was talking about: standardization, and the opening of this knowledge to a broader audience, to a broader public. And this is exactly where the agency could play a vital role. Think of Wikipedia: something that is an open database, a database of knowledge, where everybody can read it and everybody can add to it.

  And this is how, I think, you would be able to minimize the possibilities of misuse, or hacking or whatever, to the largest extent. Because if everybody sees and benefits from having this database, everybody will also make sure that this database is not corrupted. That still means there are possibilities for people who want to misuse it; they will misuse it.

  And then we have, as has already been said, regulatory means or laws that can intervene and say: OK, you misused this data, you will be punished by law, because you committed a crime, because you misused the data that we've provided to the general public all over the planet.

  But to my mind, that is the best possibility to make sure that we can use this great technology, which it is, right?  It is a very, very powerful technology. We have to use it, but we have to use it to our best, to our benefit.

  And we have to live with the fact that there will always be people who try, at least, to misuse it. And this is where governments can come in and set regulations, like the USIS, saying you will not be punished for creating artificial intelligence; you will be punished for misusing it, if you come up with an application that is misusing artificial intelligence.

  So that's my perspective. And I think what you said is correct. Looking at the further demands for automation that I have referred to: all these machines, all these robots, and, as you mentioned, the Internet of Things, they will all create data. And it's an enormous challenge and task for humankind, actually, how to manage, how to create, and how to safeguard this data.

  But it cannot just be in the hands of a few big companies; we should not forget that. Peter also mentioned it: it's not the states. It's not the United States, it's not Germany, it's not the European Union that is creating these technologies. It's companies, companies that have enough money to pay the energy bill of a state like New York to create a foundational model. Billions of dollars. Nobody else can put out these billions of dollars.

  And the most stupid thing is that they are all doing it again and again and again. So if Microsoft comes up with ‑‑

  >> EVELYNE TAUCHNITZ:  Sorry to interrupt you, Frank, but I got the sign here from the technical staff that we have to come to a close of the session. I would like again to take the opportunity to thank all participants and all speakers, both on and offline. I think there was a broad consensus that we need to, if possible, proactively prevent the misuse and risks of so-called artificial intelligence and data-based systems. And standard-setting, of course, as Frank pointed out at the end, is also a way to do that for IDA. Thank you very much again for being here, and I'm sure that the discussion is going to continue. Who knows, maybe at next year's IGF, let's see.

  So thank you again for being here. Thank you.

  (Applause)