The following are the outputs of the captioning taken during an IGF virtual intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> MARIANNE FRANKLIN: We will start in ten seconds, everyone.
>> We all live in a digital world. We all need it to be open and safe. We all want to trust.
>> And to be trusted.
>> We all despise control.
>> And desire freedom.
>> We are all united.
>> MARIANNE FRANKLIN: Good afternoon, good morning, and good evening, everyone, from Katowice and around the world. We have participants from Greenwich University. So we have, in fact, every hemisphere covered, not only with our speakers but with our participants. Thank you for joining us at the end of Day Two. We are now going to start the workshop, which we called Syncing AI, Human Rights, and Sustainable Development Goals. And the subtitle is The Impossible Dream.
So without too much further ado I'm going to run through the protocol and who is who and what's what. Then we will get started. We have an online poll that we would love you to take part in. It is a free association poll. If we can have that pasted again into the chat. And we also have a resource list, which is at the end of the description. So there is a wide range of resources there to give you a sense of how important these topics are, together and separately, for every sector.
I would like to introduce my speakers and then introduce the format. We have Renata Avila, the incoming CEO of the Open Knowledge Foundation and cofounder of, among many things, the Progressive International. Renata is from the Latin America and Caribbean group. We have Parminder Jeet Singh from the Asia‑Pacific Group. We have Raashi Saxena. And Raashi is an AI specialist. She will enlighten us on some important initiatives. We have Mr. Thomas Schneider of the Swiss Federal Office of Communication, amongst many other hats that Thomas wears.
We will have shortly Paul Nemitz, principal advisor to the Directorate‑General for justice and consumers of the European Commission. And we have our Michelle Thorne, senior program officer leading the Open Dot program which is a doctoral research program for the Internet of Things. There you are, Michelle. Good to see you.
Lovely. You have all made it. My name is Marianne Franklin. I am on the Steering Committee. Former Chair of Internet Rights and Principles Coalition. I'm joined by Minda Moreira who is our current Chair of the coalition who will be doing the online moderation and helping us out with fielding audience questions. And we have also on the ground Michael Oki on the Internet Rights and Principles Steering Committee. That's who is who.
Now it is a big topic area. So to focus our minds and thoughts we are all going to think about a provocative formulation of the issues, as one would have to debate it in a traditional debate. I shall read it out for the audio and live captioning. We are asking ourselves: what if current artificial intelligence trajectories, now indispensable, are undermining the future of sustainable rights and a sustainable world?
So that's the focus. The speakers will have time to make their opening remarks and to elaborate, because we will have three rounds: at least two rounds of opening remarks, in two groups of three. And I will call the speakers as we have already planned. Then we will have a chance to hear from the audience via the chat, if possible, and maybe a couple of questions from the floor.
And then we will ask for panel responses. Then we will move from analysis to action: what we are doing, what we will be doing, and what we can be committed to doing with money, political will, and all the rest.
So no shoulds in this panel, please. We are, and we are going to, and we will. No shoulds, ifs, or buts. All the shoulds, ifs, or buts are in part one. We would love to hear from the audience. So people, feel free to post your brief questions in the chat because that's part of the official record.
And we will go from there. Last rule, three minute speaking rule. And I have got a little timer. And I will actually have to assert it. I have used up all my speaking rights for this round. I think that covers it. So let's get started. Keep an eye on the chat, everyone.
Our first group of speakers is Renata, Raashi, and Parminder, followed by Thomas, Paul, and Michelle. I will assert the three‑minute rule. Remember the question, one more time for the record: what if AI, the way it is actually going, the way we are developing it at a systemic level, what if all these trajectories are undermining the sustainability of Human Rights and our planet? Not a rhetorical question. I would like to give the floor to Renata Avila. Your three minutes start now.
>> RENATA AVILA: Hello. I'm sad I can't be in Katowice. The first thing I have is that the "what if" is a mirror. It is a reflection of our abandonment and undermining of the multilateral system and the foundational principles of the United Nations. Seventy years ago a miracle happened: that consensus and commitment to Human Rights, to placing the human at the center and placing development, peace, and the fulfillment of all the potential of humanity in the hands of a multilateral system that was equal. It was participatory. It was democratic in essence. It would allow all the differences to converge and work towards achievable goals. And now it seems that all this commitment is diluted. And it is undermined by alternative systems, just fragmented groups of countries that abandon the system as the backbone.
My pledge will be to bring this system back into relevance and to make it ours. And to bring the collective issue of AI back into the system. And to update and upgrade the foundational principles of the United Nations to reflect the technology challenges of our times. And while doing so, also to transfer those principles as the building principles of the technologies of the future. Things like inclusiveness, privacy by design, inclusiveness by default could be reflected in the technologies of tomorrow.
And those technologies of tomorrow, built on a collective vision, could be the keys to unlock the real potential of technology as a public good and as a key to solving the complex problems that we are facing as humanity. And let's make those principles ours. Let's not allow big tech alone to define the principles and rules for other planets as part of our system in the years to come.
That's my first intervention. Let's bring back multilateralism and update it to reflect our vision of a digital future.
>> MARIANNE FRANKLIN: Thank you so much. You were under the three minutes. You have set the bar. Thank you for that opening challenge. Moving to Raashi, you have the floor.
>> RAASHI SAXENA: I'm going to try and attempt if I can actually share my slides. Can you all see my slides?
>> MARIANNE FRANKLIN: Yes, we can. Maybe a little bigger.
>> RAASHI SAXENA: Okay. Now?
>> MARIANNE FRANKLIN: Not yet. I will try sharing, Raashi. If you begin I will get the share screen.
>> RAASHI SAXENA: Yeah, that would be great. I can go ahead then. Yeah. So I do think that Artificial Intelligence is increasingly becoming the de facto decision maker of our times. It is a general purpose technology that a lot of countries are certainly trying to push. It does have considerable potential and plays a role in approximately 134 targets when it comes to the SDGs. With new technology you have new developments, and regulatory obligations and legal and ethical frameworks that lag behind, which is a risk to our society. And AI can be an interfering factor for 59 targets under the SDGs. Some of the risks to Human Rights and freedoms include the disparities between Developed and Developing Countries, AI being very Eurocentric, data colonialism, risks to welfare, and the risk of algorithms making decisions on behalf of you. There are marginalized groups where there is no explicit input, but AI finds relationships that we are unaware of, which can be discriminatory, for example, a correlation between zip codes and socioeconomic status.
And also there is a general lack of transparency, which makes AI hard to understand, explain, and regulate. So what do we do here? My pledge: some of the work that we have been doing with UNESCO at the AI for Policy Foundation is moving towards adopting a multi‑stakeholder approach in AI policy making. As the slides that Marianne has shared show, we think using deliberation is the solution. It ensures that consensus can be reached. It also raises awareness and builds capacity for a lot of other stakeholders who don't have a role in AI, especially citizens. And a policy framework can be flexible if you have sustainable feedback mechanisms. But, of course, translating theory into practice can be difficult. It requires effort, vision, and financial resources. Can you go back to the previous slide?
>> MARIANNE FRANKLIN: Your time is up.
>> RAASHI SAXENA: These are testimonies. That's my presentation.
>> MARIANNE FRANKLIN: All speakers will have a chance to retrace their steps. Moving on to Parminder.
>> PARMINDER JEET SINGH: Thanks. Yes. Renata and Raashi have already brought up the issue of how principles are developed. I think that AI is very dangerous as the trajectories are moving, but I believe a technology is what you make of it. This AI, at least until now, is still AI that we make. We may be overtaken one day, but I'm sure nothing has come up to that yet. Who is making this AI, and who should make the AI, is the most important question. And I will focus on the economic distribution aspects of it.
Kai‑Fu Lee, who is a great AI scientist and a businessman, spoke so beautifully in an article in the New York Times when he said that internationally there would be just two countries who would have all the AI power. And the other countries would have no option other than to become kind of dominions, taking money. And this guy is really somebody who knows his stuff.
Another issue: when Renata says going back to the UN, the biggest thing is that we start listening to public interest actors. In the last 20 years we have said: listen to business and the technical community. Probably now we say: listen to business less. It is not that AI principles are not being developed now. In the next one and a half minutes I will tell you what is happening now, what is called the multi‑stakeholderism of current times. The OECD's Committee on Digital Economy Policy in 2018 developed a set of principles which were adopted by the OECD as soft law, as legal instruments. It is on the legal instruments page of the OECD. And they made those AI principles, or norms. After four months, the G20, and I hate that India was a part of it, said: we adopt them. And the OECD principles are in the annex. That's brutally nonparticipatory. And that not being enough, then they launched this AI partnership, which now wants all countries to take up those principles through a so‑called multi‑stakeholderism committee.
And it is clear why the partnership's Secretariat sits with the OECD: because they want the OECD to keep the leadership. India made a proposal, nothing else, and the whole world shouted that that was multilateralism, but everyone calls the OECD's approach multi‑stakeholderism all the same. I will stop at about three minutes, and I'm ready to come back.
>> MARIANNE FRANKLIN: Thank you. So we have our three opening comments. We have three more speakers to give opening statements on our provocative question. The next group of three begins with Thomas Schneider.
>> THOMAS SCHNEIDER: Thank you. I hope you can all hear me.
>> MARIANNE FRANKLIN: A little louder.
>> THOMAS SCHNEIDER: Okay. So I will try to speak louder. Every technology as history has shown can be used for good and bad purposes. We have all witnessed positive uses and negative uses. In this respect I think that AI is not different from other technologies.
Of course, it is possible that AI may have an even more profound, disruptive character than previous technological innovations. Like Parminder said, what AI is used for depends on us human beings and our political representatives, and on what possible and necessary legal and other guardrails are given to this technology. Much has already been done. We have hundreds of sets of principles and Guidelines, by the Council of Europe and others, and also by NGOs and by companies themselves. In many areas of the world there are ideas about how to enhance regulation on AI. With regard to Europe, which is where I come from, there have been discussions at the EU level. But we actively participate in the Council of Europe, where there is the plan, and the prework has been done, to elaborate a framework Convention on the use of AI, which follows a little bit the example of the Oviedo Convention on biomedicine, and sets out principles based on the Council of Europe principles.
AI is not being developed in a vacuum. We have general legal frameworks that are applicable to AI. This goes for the legal framework to protect Human Rights, the right to self‑determination, and other rights that the people have, and for Consumer Protection issues that are already there in the traditional system. So the main point, in our view: if you want to prevent AI from being used for wrong purposes, it is important that all people in all countries fight for respect for the rule of law and democracy, and fight for a strong legal basis for their rights. That will probably be the most efficient.
And then we can build on that what is necessary particularly for AI. In Switzerland, my country, we have the general approach that we try to keep the legal system as technologically neutral as possible. So we try to have a solid set of clear principles, applied also by courts that are trained to decide on cases based on these principles and are trained to apply the principles of adequacy and proportionality and common sense. We prefer to update the laws with the necessary AI components instead of producing overarching but too detailed and time‑bound laws that cover too many issues, are not possible to implement, and lag behind.
>> MARIANNE FRANKLIN: Thank you so much. I have to cut you off there. I don't know if you heard our little timer.
>> THOMAS SCHNEIDER: I didn't.
>> MARIANNE FRANKLIN: I thought that was a great way to segue into our next speaker. Some very provocative points there as well from Ambassador Thomas Schneider. Moving on to Michelle.
>> MICHELLE THORNE: Thank you so much for making space and putting this topic on the agenda. I would like to thank Kathleen, who helped articulate the positions I'm going to share. So yes, I think right now we find ourselves, of course, in this moment of a green recovery; Governments are trying to find a way to get growth after the pandemic while addressing the climate crisis. For example, we see the European green new deal, and this is an effort to push environmentally friendly technologies and economic growth. So my first question is: is that possible?
And also we see AI presented as one of the solutions to help fuel this green transition. I want to ask: is that true?
So as other panelists have alluded to, AI can be used in all sorts of different ways. We see many human‑centric uses of AI that will help. But at the same time, we are also seeing that any implementation of AI relies on massive and growing volumes of data that have to be stored and processed, which has a significant environmental impact. And furthermore, we are seeing AI systems used to speed up fossil fuel extraction, and the burning of those fuels is causing millions of deaths each year.
Plus, that's all in addition to violating trust online.
So I would posit that, at a minimum, we should be talking more about transparency, about AI's environmental impact, which includes emissions and land and water use, and about these human impacts. And so hopefully later in the panel we can share some mechanisms that might help account for those harms. Thank you.
>> MARIANNE FRANKLIN: Thank you, Michelle. That was two minutes. Thank you so much. Now we have our last opening speaker. Thank you so much Paul Nemitz from the European Commission for getting here from your meeting. Appreciate it.
You have three minutes. I will enforce the three minutes. And then we are going to ask the audience, particularly in the room, if they have any interventions, also subject to time limits, because Michael will have a walking mic and then we will throw it open for a little bit. Mr. Paul Nemitz, welcome, and the floor is yours.
>> PAUL NEMITZ: Thank you very much. As you know, the European Commission has proposed to set horizontal rules for Artificial Intelligence as part of a broader package of legislation that sets, in democratic fashion, a frame for big tech. Not only big tech, but the world in which we live, and in the future even more so: the technology‑dominated world.
So what we see happening here in Brussels is a move away from ethics talk and the old talk about self‑regulation and so on, all these things that have not worked. And the best example is the charger, the mobile phone charger, where industry promised 20 years ago to come up with one charger and they never did. We need regulation and laws. What's happening in Brussels is a renaissance. We have in Parliament the important debates on how to shape technology. And the law is recognized as the most noble instrument through which democracy expresses itself. That is the difference to China, where you have no democracy, and also the difference to Washington, where it seems to be impossible to come up with laws for the public interest. We are doing it in Brussels, and it is also good for citizens.
Let me close with this. Pleading for sectorial laws is the classic approach which industry has always taken, on GDPR and again also on AI and so on. But from the point of view of citizens, we first need, let's say, strong pillars of clear common rules which apply across the board. For example, on AI, people need to know when AI is talking to them; when they get messages sent or they hear a voice, they need to know: is this a human or a machine? And then, of course, on the basis of the horizontal rules there can be differentiations by sector. No doubt, we have to make this technological world intelligible for humans, and that requires simple rules for people which protect them, which they can rely on, and, I would say, which help them to accept that our world becomes ever more dominated by technology.
If we don't go this way, if we also make the rules complicated by splitting them up by sector, I fear we will have in democracy a problem of acceptance of this technology trend which we are faced with.
>> MARIANNE FRANKLIN: Thank you. These are the opening statements. We have a walking mic in the room. There we go. And Michael, if anybody wants to intervene on the audio, this is their chance, from the audience here and also in the Zoom room.
This has been a really wide‑ranging set of opening statements. So I have put up on the chat, for all of us, the question of what each of us means when we speak of Artificial Intelligence. From the UNESCO report to the APC Global Information Society Watch 2019 edition, we have machine learning and phrases like computers acting intelligently. And we have phrases like systemic forms of Artificial Intelligence that are bound up with surveillance and automation. We all know, particularly Mr. Paul Nemitz given his work on the ethics of AI, that definitions are at the core. I would particularly like to hear from our technical community representative, Michelle, how Mozilla understands AI in everyday speech and in specific projects. Are there any questions from the floor before I ask our speakers and audience to answer my simple conceptual definition question?
>> Does anybody in the room have a question or a comment? There does not seem to be anyone in the room who has a question.
>> MARIANNE FRANKLIN: Okay. Anyone in the Zoom room? Because I understand we have some law students from Greenwich who might be itching to ask questions. Michelle, what do you and your team mean when you are talking about this big umbrella term?
>> MICHELLE THORNE: You might find we don't have such a unified approach even within the organization, because there are people who take a much more, I would say, technical definition, and then others are using it to speak more broadly of some of the newer waves of digital development. But I do find that talking about AI has been one of the ways in which we can talk about some of the challenges we see on the Internet today, especially, as other speakers have talked about, the ways in which it is less clear how decisions are being made and who is responsible for those decisions, things about explainability and the ability to audit these systems. That's been one of the major focuses we've had when we think about how we can make AI more trustworthy. So I don't know if that's such a helpful definition, but I'm happy to add in the chat some of the work that Mozilla has done in this space if people would like to learn more.
>> MARIANNE FRANKLIN: Please do. You mentioned the words decision making. This workshop is looking to connect three huge areas: AI and all it might mean, sustainability, and Human Rights. Back to the initial definition question, does anyone from our speakers want to follow up with Michelle on some conceptual points? Parminder? No? Renata?
>> I never say no to ‑‑
>> MARIANNE FRANKLIN: Never say no to the floor.
>> PARMINDER JEET SINGH: I was happy to hear the spirited defense by the EU rep. I'm reminded of the four years that I spent in the UN Working Group on Enhanced Cooperation, just discussing and arguing that globally we need a body for horizontal rules, and the EU rep and all the European country reps, and Ambassador Thomas, I don't want to put him in a corner, was there to battle it out with me on this. They argued that no horizontal rule making is needed in the digital space; sectorial rule making is enough. We have moved on, and at least now agree that AI is too strong a force, with huge amounts of common characteristics, as Europe was saying, which has to be dealt with in a common manner. The education guys can't deal with it alone. There have to be horizontal rules on AI and on data. And I remind you that UNCTAD has called for a global governance framework as AI becomes a most important economic force. And I think let's rise to the occasion and leave behind the differences and agree that all countries should sit together at the same level, not the OECD alone or the EU alone, and talk and make rules, and not already say China is not going to agree, Russia is not going to agree. Post war we did it, as Renata started with. Now we are almost in that bad of a situation with AI around. And if we discuss, we will agree on many norms. Thank you.
>> There is a question here on the floor.
>> MARIANNE FRANKLIN: Good to hear. Before I move to Michael with the mic, thank you, Paul Nemitz, for putting up an official set of conceptualizations around AI. So Michael, please hand over the mic. Could you please introduce yourself for the record? And you will have no more than three minutes once you start.
>> I'm Alamn, the Dutch United Nations youth Delegate ‑‑
>> MARIANNE FRANKLIN: Could you come a little closer to the mic?
>> Yes. Am I more audible right now? I heard you speak, Mr. Paul, about having simple rules that are easy to understand also for young people who aren't a part of the tech world. And I can't help but think: what do you think would be the right role for young people, to help you in creating these simple and understandable rules?
>> MARIANNE FRANKLIN: Yes, Paul. Feel free to respond.
>> PAUL NEMITZ: Yes. The answer which I like is that the texts which we produced when read with innocent eyes, meaning not with the eyes of experts, must be understandable.
But that doesn't mean that these are simple comic texts. I would say they must be understandable after two or three intensive readings of the whole text, and, you know, going back to specific formulations, and real intellectual effort. The world today is complex. And unfortunately, you know, laws are sometimes very, very detailed. But they must be written in such a way that a normal person with serious effort can understand them. And I think, you know, that's what we are trying to do.
Let me say a word about definitions. Definitions for political purposes and also in law, will never be as clear and crisp as scientists and academia or for that matter engineers or business will expect. Why? Because first legal texts are texts of compromise in Democratic processes. You have thousands of amendments in the European Parliament and to get the majority you have to work on the text. And it is not a scientific product. It is a product of democracy.
But second, the function of a definition for a scientist and for an engineer is a very different one than a definition in law. In law, the definition, for example, of Artificial Intelligence means a technology falls within the scope of this law. It doesn't mean that somebody has to do anything. It only means the technology falls within the scope of the law. And then you have to continue reading, and, for example, in the Artificial Intelligence Act you come to the risk levels; it differentiates between four levels of risk. If your AI falls under the definition and falls within the higher risk groups, then you come to quite a number of obligations, and I would say rightly so, because if you put a risk into the world, you know, the risk has to be mitigated in the public interest.
So the fact that an initial definition of a law is rather broad often serves simply the purpose of creating transparency and creating attention among the actors. But it doesn't mean that immediately, you know, you can't do anything, and oh, my God, we are under the AI law. Absolutely not. Relax about the definition. It is better to have broader definitions. The key is to look at what obligations follow from falling within the scope of the definition. And that is a more differentiated discussion.
>> MARIANNE FRANKLIN: Thank you for that very important distinction between legal definitions and everyday social and philosophical definitions. But that begs the question of how we get things to connect. We have been talking about horizontal lines of consultation and inclusion. Here I would like to bring in Renata, as a lawyer and activist. Why is it so difficult; why is it always that we read certain sorts of authors talking about culture and society, and then legal people talk about the legal things, and the technical people talk about the technical things? How is it that we can't connect? How can we? Renata, I think this question is for you.
>> RENATA AVILA: I have been studying this for a long time. There is a lot of talking and talking. And at the end of the line there is a humble and shadowy department that ends up deciding a lot of things and has very little scrutiny. And that's the procurement space.
I think that the states, the public sector, can shift a lot of those decisions. Principled decisions in the public sector can shift things like sustainability, things like how obsolete our technology is, just by shaping the rules of how they acquire things.
And it is not only the state buying things for itself; it is also the whole aid sector, the international aid sector. And I think that it has been neglected. If we look at big budgets, and if we look at the way that Governments all over the world and agencies are acquiring technologies and deploying systems, they are absolutely disconnected from Human Rights principles. And absolutely disconnected from a coherent vision on sustainability and inclusiveness, and just driven by prices and the best offer, or whatever you get as a gift from a richer country.
If we redefine the way that we acquire technologies, it is a very interesting starting point because it will shift things. And it will permeate at the local level, across the entire education sector, health sector, and so on. It is just an idea. As you said, we have to say what we are going to do: we are going to exercise scrutiny over the way that our Governments acquire things. And we are going to shift the priorities by tightening the rules of how our public money is spent. Those are initial thoughts.
>> MARIANNE FRANKLIN: Thank you. We have an audience hand raised. Just let me get back to that hand quickly. Procurement: deciding what things get purchased, deciding what AI technology is going to be deployed in an institution and in the workplace.
I wonder if Thomas Schneider and Michelle would like to respond to this concrete example of what we are dealing with. AI is already out there, from the classroom to the assembly line, including this meeting. The Swiss Government did note the role of private enterprise in its Guidelines. Where do Human Rights and environmental sustainability fit in if we hand it over to the private sector? Just to be provocative. Michelle first and then Thomas.
>> MICHELLE THORNE: I appreciate this point around procurement. Mozilla has this interesting hybrid approach where it makes technologies but has no shareholders. And one of the things that we are trying to prioritize is transparency and accountability, and also Guidelines for technology in the public interest.
So I agree there are a lot of ways that we can see that happen, at local levels and national levels and in different other programs. So I support this idea that focusing on procurement is a big one. And it pushes companies. So that's a good place for leverage, especially for Civil Society and for Government.
>> MARIANNE FRANKLIN: Thank you. We have Ana, and then Alka. But before Ana, I would like to hear from Thomas on this tricky tension between Human Rights law, environmental sustainability, and private enterprise, which is where a lot of AI is happening at the moment. Thomas.
>> THOMAS SCHNEIDER: Yes. And I think it is clear that AI is all over the place. There is enough research by groups like Algorithm Watch and others that shows where AI is already used in the public sector now.
And so it is not just procurement. It is also procurement, but it is basically all over the place. And with regard to this, again, what I'm trying to say is that we as states, at least those that have signed the European Convention on Human Rights, the Government is obliged to protect the rights of people, to protect privacy, to protect the other rights. This is a positive obligation. And we have to, of course, make sure that we have a legal basis to also oblige our companies and industry to respect the rights of the people. And this needs to be implementable. We can have nice papers and compacts and declarations, but if you have no means of enforcing them, the people have no way to tell the Government: I have this right, and you are obliged to protect my rights. Then these are just nice words.
I'm with Paul there, and with all of you: there are cross‑cutting issues like transparency, having a human being in the loop or on the loop. There may be some variations, on having people know who is talking to them, and having redress mechanisms and so on and so forth. There are some elements that you can come up with as cross‑cutting issues. And this is why we support the Council of Europe on the Framework Convention. But it will not be able to solve everything. So you need to have a sectorial and a horizontal approach at the same time. This is nothing new I'm saying. So, Parminder, you need both. If you look at WSIS and Human Rights, it is horizontal. You need to have both. Thank you.
>> MARIANNE FRANKLIN: Thanks very much. We have a queue forming. We have Ana and Alka and then Wolfgang and Parminder.
>> We have another person waiting in the queue here as well.
>> MARIANNE FRANKLIN: Four. Ana, would you like to take the mic? And I will start the clock.
>> Ana: Yes. Thank you very much. We are focusing on the introduction of AI in the public sector. And here Michelle mentioned transparency. And I have a question: does anyone have a solution for how to achieve transparency in the public sector about who is using AI, especially where it is connected with public security? We have many cases when we want to get some information about how AI is used in the public sector, and in most of those cases we hear: we can't give you this information because of public security or the public interest. How do you resolve that problem? Where are the boundaries of public security rules, and where are the boundaries of the human right of access to public information? The public interest is something else. It is just my note for this topic. Thank you very much.
>> MARIANNE FRANKLIN: Thank you. I think that links to the issue around procurement, the public sector, private enterprise and these tricky relationships ‑‑ tricky at an everyday level. Next in the speaking queue we have Alka, and the video, if we can see you. Otherwise we will move on to Parminder and the person in the room.
>> Can somebody in the room who was before Alka, can she speak first?
>> MARIANNE FRANKLIN: Of course. Who is it in the room?
>> Go ahead.
>> Okay. My name is Angie. I'm a global ambassador of peace and a humanwide consultant. I have a question from a group that texted me. They say we have a problem already. We are talking about digital resources. Digital money. Digital Peace Treaty. Digital communication. Digital empowerment. People in third world countries already have challenges understanding this. If we are talking about the dream, how will that dream be possible and develop in those countries with AI implementation? How is that going to help them? Thank you.
>> MARIANNE FRANKLIN: Thank you. Yes. We might need to think about that question, how to respond to that. Could we have Parminder? Would you like to take the floor and ask our speakers to respond to the question ‑‑ yeah.
>> PARMINDER JEET SINGH: Thanks. I wanted to come in on procurement, which was being discussed. Procurement is otherwise a very important instrument, and we have fought for it in many technical areas. But remember, where large digital AI companies are concerned, they are just too powerful and their offerings are too powerful; procurement authorities do not have very much power to resist them. Google was just giving such an attractive offer. Unless it is backed by policy from the top, procurement becomes weak in front of large companies.
While I have the mic, and since there should be a certain political tension in all good political debates: Ambassador Thomas talked about WSIS, and about it being horizontal. The question is why Developing Countries should be confined to a 16‑year‑old framework and should not be making AI norms, data norms and platform governance norms right now in 2021, which the OECD is making. That is the question, not whether WSIS was inclusive or not. I also know that WSIS+20 is going to come in four years, and the drama will unfold in many, many ways. Be ready and watch this space as we go forward to WSIS+20 in four years.
>> MARIANNE FRANKLIN: Thank you.
>> Thank you very much. I will keep it within three minutes. My name is Alka. I'm working for KPMG. I want to give a little bit of a different approach: bottom‑up, from Civil Society influencing the public debate, but also in that respect enabling the private sector to take responsibility themselves, without any policy needed. And I do think that the public debate we are forming is important. If we create an AI system, it should have a good representation of the whole public, for example when training on the data, et cetera. And once that has been formed and implemented, you have to review whether the system that has been created by the company represents the different stakeholders. And in that respect their system should also be able to be audited. That is also really important.
After that you get a feeling for what we are doing and whether it is progressing. And there might be improvement. But when you are able to have responsible AI, so if the AI system that is being created is transparent, then this whole responsible AI discussion can also be created bottom‑up instead of top‑down. Thank you very much.
>> MARIANNE FRANKLIN: Thanks. As always, Alka is brief and to the point. Could hands be taken down? I'm going to assume that's an old hand up. I see Jacques is with us. Would you like to take the mic, followed by Thomas?
>> Jacques: My name is Jacques. I'm co‑secretary of the Swiss IGF. I would follow up on what was said and bring in a different point, which is: there is legislation, there is soft law, but there are also private initiatives. In Switzerland in particular there is some kind of public/private cooperation on setting up a trust seal. And this is something I have been missing so far in the discussion: certified trust in products and services. This could well mitigate the tensions between sectorial laws versus horizontal regulations.
>> MARIANNE FRANKLIN: Thank you very much. Yes, very practical suggestions here in terms of what to do, how to join the dots horizontally and sectorially. No one has yet raised the 360 degree challenge. Thomas, the mic is yours.
>> THOMAS SCHNEIDER: Thank you. One point where I fully agree with Parminder is that no Government and no company per se does the right thing or the good thing. On the contrary, I'm a historian by training. History has proven to me that anybody who has power, even if he or she got it with good intentions, has the tendency at some point in time to abuse that power. We need a framework that creates incentives for companies, Governments, but also individuals to do the right thing and not the wrong thing.
And then we can debate about how to get there. This is where we agree. And there are different ways of getting there. I just wanted to stress that. And there are more detailed single ideas, like creating a trust label, but that is not so easy: it is easier with a hardware product than with software and algorithms that change themselves on a second‑by‑second basis. So some things are easily said but not so easily implemented in a way that works. And, of course, this is a necessary debate, so we try to find out what works. One important point is that what works in a country like Switzerland may not work in another country. So it is also good to hear the voices of people from the Global South, because they may have a different regulatory environment, different Human Rights conditions, different economic conditions. And we should in particular also put the stress on those who are less able to defend themselves through the classical structures: how can we support them to fight for their rights and fight for their visions? Thank you.
>> MARIANNE FRANKLIN: Thank you, Thomas. Thank you. Yes. Very clear. Minda, your hand is up.
>> MINDA MOREIRA: I want to flag up the question that was in the chat earlier by Wolfgang. And I wonder if Wolfgang would like to ask the question directly or if I should read it.
>> MARIANNE FRANKLIN: Wolfgang is on video already. Welcome.
>> Wolfgang: Thank you very much. I don't feel like the big expert here. But what I see is that thanks to Civil Society, awareness has been created. And thanks to a number of actors, solutions are being developed: so‑called Guidelines and principles and so on. But we will have to live with what seems to be a period of fragmented responses from different sides, national and regional. And my question would be: how can we bring this really to the global level, to the multilateral level, in order to be as inclusive as possible, and at the same time guarantee some compliance with general principles which actually everyone can agree to?
>> MARIANNE FRANKLIN: Thank you. Renata, the floor is yours.
>> RENATA AVILA: Yes. When we talk about advanced technologies, as Parminder pointed out, those making them are very few actors. And I think the Pegasus software scandal is going to show how technologies made in the Global North are impacting globally, and how a little bit of political will and a commitment to elevating universal standards for the technologies that we produce, the same way we did with cars and other technologies in the past, is going to have a global impact. But this is a moment for accountability, because we are still dealing with the harms of surveillance technologies, almost ten years since WikiLeaks started exposing the files, and then Snowden came with the other revelations, and now with the Facebook files. If we don't tackle this in time for AI, much can be irreversible. I think it is necessary; we cannot wait. I believe that we cannot wait for a Global Treaty. That would be the next step, but I think countries must immediately start addressing this. And if not, I call upon a courageous country to become, as we did in the past with universal jurisdiction, the country, the forum, where we can start litigating these cases. I know it is a risky idea, but we need a place. If we start to look into these cases with the scrutiny of an impartial judge or tribunal, we can fast‑forward the safeguards that we need.
>> MARIANNE FRANKLIN: Thank you. We have half an hour left. We are pretty much working on action points now, but I want to ask our speakers to think about how we are going to connect these three areas through the threads of our discussions about AI and automation: the idea that we need some litigation tools, we need some accountability protocols, and we need some understanding of the incredibly unequal distributions of power between those who can afford the research and development and those on whom the tools are deployed. What about the schools or universities that your kids or grandkids go to?
That's the end of mine. I'm going to stop. Really concrete examples about what we can do at the everyday level. We have talked a lot globally. So schools, workplaces, kindergartens, zoos, museums. How can we address this issue of enormous electricity consumption, digital footprints, carbon footprints, and yet somehow get the good of AI without having it crush us underneath its extraordinary weight? You can ignore me, but I'm going to go backwards: Paul Nemitz, Michelle and Thomas, that way, for a round of more formal interventions.
Starting with Mr. Paul Nemitz. Paul, off you go.
>> PAUL NEMITZ: Yeah, I think when we talk about concrete action points, for me the most important thing is that democracy can function well in relation to this very complex technology. And what does this mean? I wish that we would get information from industry which is really truthful, which is not lies or stories that are spread because of an interest related to selling the technology or making money with it, or a certain business model or a certain type of AI promoting this or that.
I would say that the big tech companies especially can afford it ‑‑ they are bigger than many countries in this world. They can afford, when they engage in a political process, to be a real amicus democracy, a friend of democracy, and they don't need to do what they do right now, which is lobby for business interests and so on. It is so sad to see this. They would not lose a lot of money if they told us what is happening and how this works, and did not undermine the Democratic process, trying to soft‑wash regulation and make it meaningless, but instead helped democracy deliver. That's the most concrete thing we need. We need to be able to show people that democracy can deliver also on these very complex issues of technology regulation.
>> MARIANNE FRANKLIN: Thank you. So not to start with the technology. Start with the Democratic Human Rights principle. The technology needs to follow. Almost an oxymoron.
>> PAUL NEMITZ: It is called the primacy of democracy over technology.
>> MARIANNE FRANKLIN: Thank you. We will now move to Michelle. Mozilla, please.
>> MICHELLE THORNE: Thank you. I would love to return to the topic of AI's environmental impact and remind us again of AI's intense water usage in drought areas, AI being used to speed up fossil fuel extraction, and the training of AI models emitting the equivalent of many flights. Reporting on this is currently entirely voluntary for tech companies, and tech companies rarely publish information about the greenhouse gases emitted by their digital products. We have attempted to measure this ‑‑ I will say again thank you to Kathleen Gregor, who worked at Mozilla. 98% of our emissions come from the use of Firefox, from the use of digital tools. And in typical greenhouse gas accounting, those emissions aren't really accounted for. If you compare that to a company like Google or Apple, they are going to have these so‑called scope three emissions that are going to dwarf the Firefox usage.
So what I think we need, speaking of action, is to push for more mandatory reporting on these different scopes of emissions. One of the things we are going to be doing is pressuring other tech companies to start to report on those. And furthermore, to expand the conversation around AI to not focus only on emissions but to put people at the heart of this. So I just wanted to share that. And thank you for this incredible conversation.
>> MARIANNE FRANKLIN: Thank you. Do I hear your subtext? Your subtext is: we need a radical redesign from all the big tech giants, from the ground up, for all their services. I see Michelle nodding. Will Mozilla lead the way? You don't have to answer that. I want it on the record because I'm a Firefox user. Okay.
Thank you. So moving on now to Thomas Schneider. Thank you.
>> THOMAS SCHNEIDER: Thank you. With regard to Paul's wish that the tech companies should be nice: I don't think they are better or worse than other companies ‑‑ the car industry in Germany, say. They just try to make a profit. They are not nicer or less nice just because we ask them to be. We need to take our fate into our own hands, again in every country, and this also goes for the sustainability part. If we vote and elect presidents who tell us, I will make you rich no matter what the cost for nature or the cost for the Global South is, then nothing will change. If we vote for politicians who say, I will help us fight climate change and injustice and so on, and we insist that once they are elected they do it, then we have a chance. We have a chance that they are forced to be environmentally sustainable, that the negative effects are integrated into prices and so on. We cannot be agnostic. We have to fight for the common cause. Thank you.
>> MARIANNE FRANKLIN: Thank you. Not every part of the world enjoys the sorts of representative democracies that most of us are very fortunate to be living in. Yes. Moving now to Parminder on this penultimate round: what to do.
>> PARMINDER JEET SINGH: What to do: two concrete points. I can carry on like a broken record saying what I have been saying for 15 years. You need to get a global place. All those distractions are over; we are in a serious space. We need a space where people come and talk. Stop raising the China bogey; it has outlived its use. Get a place, like UNESCO, where people get together and talk about legislation. We need a global place where everyone comes and talks about intelligence, digital intelligence, about data; a place to develop research, like UNESCO is doing, like WHO is doing, and to develop soft norms. It will happen. And if you can't do that, don't talk about democracy. Everyone can see through that thing. You are not fooling anyone by using the word democracy to kill democracy.
The second part, we have worked a lot on. The claim is that AI is so efficient that it will solve all problems. But we should start sacrificing some efficiency, as Biden is talking about; how far he will go I don't know. We have written a paper and talked about the platform being separated from trading, which already happens in India. We say that big tech should be broken up in a certain manner: data collection should be separate from cloud computing, separate from AI, separate from consumer‑facing AI services. We have a full paper on breaking up big tech along these technical layers, like Network Neutrality; each layer has to be a separate component. And I connect this to the first question: no single country will sacrifice efficiency on its own, because they think they will lose out.
Countries have to get together and agree to certain minimums, and that agreeing can only be done at the UN level. Where are you going to end up? The only way is to get together and start trying a global model. Thank you.
>> MARIANNE FRANKLIN: Thank you. Also very clear. Moving to Raashi.
>> RAASHI SAXENA: Yes. I have been thinking about this, but I also think that in the Global South we don't have any policies. We can talk about the Global North, but perhaps what we need to do is get people involved, build capacity on the topic and convince people of the importance of participation. Someone was talking about a bottom‑up approach: incentivize disadvantaged groups, and understand and increase the overall ownership of AI and the impact it has on society. A lot of Civil Society doesn't really understand how AI works and affects their way of life. And perhaps one of the lessons we learned from the consultations we had is that countries cannot work in silos. We need to come together and borrow from each other's experiences. We established a diverse and inclusive task force, a group of experts, learning from local and global examples, ensuring participation and reaching a consensus that leads towards concrete action and policies.
One good example is the recent Chile AI strategy, which had participation from citizens. For the first time they also had a lot of Civil Society and academia getting involved. So it is possible. But it has to come from the top. There has to be incentive and budget and a time horizon for the strategy, where you can also be open to finding the unusual suspects, so that you can make effective use of consultations and co‑creation.
>> MARIANNE FRANKLIN: Thank you. Thank you. So yes, exactly. So Renata.
>> RENATA AVILA: I don't have much left to say. I would just say that I think we also need to focus on countries creating new constitutions, and countries like Chile come to my mind. It is very, very exciting to see new constitutional processes in the 21st Century. And I think that new constitutions written by citizens are a great opportunity to bring up these topics. It is a very interesting bottom‑up way to show that it doesn't matter that the technology is not yet there; it is a segue to unlock the possibilities of the future. Because we need to write general rules for the next hundred years, and not wait for the technology to arrive, when all the decisions will have been made somewhere else.
>> MARIANNE FRANKLIN: Very good point about the timing, Renata. Wolfgang, I see your hand up. Is this a new hand?
>> Wolfgang: Yes, indeed. I wanted to follow up on what I said before on the issue of compliance. I think there is an interest of tech companies ‑‑ which, by the way, I'm missing on this panel ‑‑ to show their transparency, as has been said several times. And therefore there are opportunities to strengthen the dialogue. There is the U.S. Congress and the European Parliament. I think there should be regular questions on their activities. And the next step would be to institutionalize this in the form of oversight bodies, for example, created voluntarily or not so voluntarily, in which this dialogue can take place in an institutionalized way, more regularly and with a number of actors from all over. So I think we need to think about practical steps to go further. But what can be done quickly, at any time, and is partly happening already, is to institutionalize this transparency and this right to information. Thank you.
>> MARIANNE FRANKLIN: Thank you very much. Let's just take a moment. We have 15 minutes. I'm going to open it up to comments from anyone, including our wonderful team and any of the speakers. We have had trouble connecting the three main areas today, I think, because it is almost too big. One of the things coming through in all the comments and input and the amazing information in the chat is that it is about accountability, and about where one goes if something goes wrong. We are working at a high level. What about back at school, when they buy a free tool that's collecting data about five‑year‑olds and automating what those five‑year‑olds see? Who does a concerned parent go to at the school? Connect procurement to responsibility, to accountability, to decision making, to democracy in the most basic sense of everyday life. Do we go to the various big tech brand names and say: we don't want your automated tools, your 3D virtual worlds, when I have no control over how that data is being used, no control over the way my five‑year‑old's intellectual abilities are being shaped by algorithms, which is happening today? Any comments? Those are my final comments. We have an open floor now. Please just raise your hand.
>> PARMINDER JEET SINGH: I am going to connect the three areas. AI we have talked a lot about. But wait a minute: who told you that the SDGs are not part of rights? It is unbelievable that in the digital area we think only political and civil rights count as actual rights. Labor rights are Human Rights. Economic opportunities are human rights. The right to self‑determination is a human right. The right to development is a human right. You see, the world is different. And connecting to the SDGs: SDG 10 is about greater equality among people and countries. And within that, the targets say that every country should have greater participation in, as they call it, financial institutions. Digital institutions are just as important.
So cutting across, all of them are talking about the same thing. One example: India has done a report on community ownership of data. They used the human right to self‑determination, which says that a country's natural resources are owned by that country and community, and applied it to data: data and data intelligence are social resources. Renata started by saying there are existing governance systems, both institution‑wise and norms‑wise, in which to root the current discussion. Thank you.
>> MARIANNE FRANKLIN: Thank you. We have two from the floor. I'm going to reset the timer to two minutes because ‑‑
>> You don't have to do that.
>> MARIANNE FRANKLIN: Off we go.
>> Can we go to Tapani?
>> MARIANNE FRANKLIN: Welcome.
>> Sorry. Okay. I'm Tapani from Electronic Frontier Finland. I'm not going to define AI, but a few observations. First, from a computer programmer's point of view, AI differs from normal programming in the sense that the programmer does not understand how it works. These systems are programmed to learn, and the programmer cannot explain how the trained system works within. This may not be an obvious distinction to people who don't understand how programs work in general, but it is significant.
Another observation: AI systems arguably are systems that make decisions that are not the decisions of any human being. In that sense we have had AIs for a long time, because companies are AI systems. They make decisions by rules that, of course, are implemented by the people within, but they are not the decisions of any individual there. And there is something to think about: what exactly is AI? And, of course, in any organization, including our Dynamic Coalition on AI in this instance, who is making the decisions? It is not any individual, but a set of rules that are being implemented by people and by things that people make. Thank you.
>> MARIANNE FRANKLIN: Thank you. So I have a queue there. It can be corrected if necessary. We have Alka and then three people on the Zoom room.
>> Thank you very much. My name is Alka. Marianne, you gave a good example of something we don't want: a child playing with toys while data is being collected. But maybe the problem is that the public doesn't really know that this kind of data is collected and a model is created. So a possible suggestion from me would be: if you make that transparent, or ask an external company to review the algorithms, or even make them public, you could enable that transparency. And if that were put into some kind of policy, those decisions, those algorithms, those AI systems might not even exist, because it could be that they are marked as controversial and in that case the company would decide differently.
Thank you.
>> MARIANNE FRANKLIN: Thank you very much. Very good suggestion. We now have a couple of people who haven't spoken yet in the Zoom room. It is Bartoszek?
>> MATEUSZ BARTOSZEK: Me. My name is Mateusz. Hello. Good evening. I'm a recent law graduate in Poland. From a more legal point of view, I just wanted to point out that when we are talking about Human Rights, the institutions that help us preserve Human Rights are the courts and the justice system. So what do we do when the courts and the justice system themselves ‑‑ this is the bigger picture ‑‑ use Artificial Intelligence that can somehow make the courts' decisions not really human decisions? They can be influenced by the system, by the people who stand behind the systems, maybe by big tech. Can we somehow fight this? Is it even possible to fight this, if technological development goes in the direction of AI being virtually everywhere?
How is that safe anymore? Yes. Thank you.
>> MARIANNE FRANKLIN: Thank you very much. Important point. Automation of jurisprudence in court rulings. Anetta. The floor is yours.
>> Thank you. I wanted to start with your question, Marianne. I think we have to talk much more about democracy and power: the power of firms to make use of AI, and the need for a public interest infrastructure. We lack infrastructures. We use Zoom here. We don't even have basic communication infrastructures safeguarding the basic rights of informational self‑determination. You gave the example of kids, pupils, students and so on. They are forced to go to school, which is good, but they are partly forced to use the infrastructure of private firms that do not follow the basic laws. So I think we have to talk much more about offers of public infrastructures that really follow the basic laws.
I just want to make this point and think we should discuss it more strongly. And then the second point indeed is on workers' rights. I think we have to talk more about dependencies and power. And we have to safeguard the rights of those people who are in a dependency at work: their rights not only to privacy but also to security, and to a say in decision making, so that they are not just objects but also subjects in this process. Just to make it short and highlight that there is a big issue here, that's all I want to say. Thank you very much.
>> MARIANNE FRANKLIN: We have five minutes left which is just time for our speakers to give single statements. Raashi has had her hand up. I'm going to start with you. Just one sentence take‑away, otherwise we will go over time and get kicked off. Thank you.
>> RAASHI SAXENA: I think we need to develop a more human‑centered and ethical approach to AI, and also perhaps look at the Global South and enhance the diversity of datasets, because there are languages that are spoken more widely than English. And also mobilize more: if we really want to be more inclusive, then we have to mobilize that participatory process and move towards that.
>> MARIANNE FRANKLIN: Thank you so much. Paul Nemitz, your take‑away.
>> PAUL NEMITZ: Yes, I am here. My take‑away is that there is an appetite for rules. We don't live anymore in this world of techno‑absolutism and neoliberalism, in which the unbroken global Internet was the most important good to be defended in discussions like this. I think we see a Renaissance of democracy and the rule of law, also in the digital sphere, on the Internet and also on AI.
And I would say that is the good thing. And indeed we have to maintain and strengthen the primacy of democracy over technology and, let's say, capitalism in terms of business models or big companies. The primacy of democracy is key also in this technical age.
>> MARIANNE FRANKLIN: Thank you. And moving to Parminder.
>> PARMINDER JEET SINGH: Very quickly: if AI doesn't bring us together, nothing will. It is so powerful. If we don't come together now, get ready to organize ourselves around the two poles of the U.S. and China, who will suck up all AI power. And since the audience here is Europe‑dominated, let me warn them that they are deluding themselves: they are old and rich along with the U.S., and they are being left out. There is no point being (cutting out).
>> MARIANNE FRANKLIN: His connection ‑‑
>> PARMINDER JEET SINGH: Making it kind of legally binding through WPO framework. So I think AI is the last chance of us getting together. So let's do it fast. Thank you.
>> MARIANNE FRANKLIN: Last chance. Moving to Thomas.
>> THOMAS SCHNEIDER: I think I have spoken enough. We are aware of where the powers lie. Don't be afraid. In my case, we are a country of 8 million people, so we will probably not be the biggest power when it comes to developing AI tools, although we are quite good at research. But then it is normally others that make the money off the things that we research, because we don't have the capital to do that. But I'm always open to your good ideas on how to create more equal chances for the smaller and poorer countries to compete in this system. Thank you.
>> MARIANNE FRANKLIN: Thank you. Renata.
>> RENATA AVILA: Let's hope for a Renaissance of multilateralism and bring it back, because I see it as the only way for countries in the Global South to exercise Democratic power, where we can have global rules that are there for everyone. And let's make this digital transition and this process feminist and sustainable.
>> MARIANNE FRANKLIN: Thank you. And Michelle.
>> MICHELLE THORNE: Thank you. I agree with this commitment to reduce the harms of AI, environmentally and on humans, and to challenge the dominant narratives that AI will save us, keeping the public interest at heart. Let's build the sustainable, equitable alternatives that we need.
>> MARIANNE FRANKLIN: Thank you. I'm going to end with a question from the floor, from our legal students in Greenwich. We have no time to answer it, but I'd like us to leave with the question in our minds: is a Human Rights approach enough to keep the human in the loop in the rapidly automating environment we are living in? Are humans being decommissioned?
I would like to thank you all for your focus and amazing ideas. We are going to end the session now. To the technical crew, thank you so much. To our live captioner, thank you so much. And to our speakers, thank you so much. Let's hope we can have a conversation with a computer called Helena rather than Hal, and that Helena will listen to us rather than pretend to listen to us. Big clap. Thank you so much. Have a good evening, and see you around. Bye‑bye.