The following are the outputs of the captioning taken during an IGF virtual intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> ROMAN CHUKOV: Hello, dear colleagues, welcome to the main session devoted to Emerging Regulation, and thank you very much to those who are online and offline with us here. I would like to give the floor to Mr. Jovan Kurbalija, our moderator of the session. Thank you.
>> MODERATOR: Thank you, Roman, a big hello. As you can see from the background of the room, I'm connecting from the eternal city, and more specifically from the Vatican. It is interesting that there are quite a few connections: since this morning I have been attending a session on interreligious dialogue, an event which is addressing the question of digital development and interreligious dialogue. One of the echoing questions during this discussion was how different religious communities all over the world can address digitalization, and AI in particular. That was the main concern.
But there was one question which was echoing in the discussion, and I guess we can advance that discussion with such remarkable panelists, whom I will introduce in a minute. The question was simple: they were asking whom they, religious people, in cities and worldwide, should call or contact when they have digital problems, from cybersecurity to privacy, data, and access to the Internet. My cognitive preparation for this discussion is shaped by that dilemma: what can we do, as the IGF community, as people involved in Internet Governance and policy from different communities, to answer this simple question that will be asked more and more often as we depend more and more on the Internet?
And we are fortunate that navigating this question will be a pleasure, at least for me as a moderator, given the remarkable lineup we have with us today. I would like to welcome all of our distinguished guests, both in terms of their experience and expertise, but also in terms of their really important contribution to the current debate we are having on the future of regulation, and the future of governance in general.
The title of our session is regulation and the open, interoperable, interconnected Internet: challenges and approaches. Let me introduce our panelists today: Margaret, Vice President of the European Commission. There is no need to introduce her, given the Vice President's really prominent role in digital policy.
>> ROMAN CHUKOV: We had a change from the European side, so it's Mrs. Miapetra Kumpula‑Natri, a member of European Parliament. Sorry.
>> MODERATOR: Good. We will have the European Union's points addressed, and I was probably misreading. I'm in the Vatican here; everything here happens over centuries. This morning they said they had been discussing ethics for 20 centuries, together with people from other churches and denominations. I'm sorry for this mistake, but now I'm sure that I will get it right.
We have with us today Vint Cerf. Again, there is no need to introduce Vint Cerf, one of the fathers of the Internet, but a person who, in spite of his remarkable contribution to the digital world, still finds the time to share his wisdom, expertise and experience with us. Vint, thank you for joining us from Washington, D.C.
>> VINT CERF: That's correct.
>> MODERATOR: In this really remarkable diversity of speakers, we have with us today Anton Gorelkin, member of the Parliament of the Russian Federation, who will bring us the parliamentarian perspective which has been developing in the IGF; I think there is also a parallel parliamentary track. So welcome, Mr. Anton Gorelkin.
We have with us Mr. Pique from ICANN, we have Maria ‑‑ there were some shifts in the lineup for the panel, therefore I'm trying to organize all of the information that I have. And this is, in a way, the beauty of the IGF: with agility, people like Roman and the others can adapt in this really tough time, when you don't know who is going to be online, who will be joining the session.
>> ROMAN CHUKOV: The latest version is in the chat.
>> MODERATOR: We have Carolina Aguerre from Argentina. Therefore I should open the chat, and I will be making sure that I pronounce all names correctly.
We also have Nighat Dad, Executive Director of the Digital Rights Foundation, coming from India, and you can see this diversity of views and positions. Before we pass to the remarkable panelists, we have Carolina Aguerre; I hope I won't make any confusion in our discussion.
Now, it is important to keep in mind that this session is part of a process, a build-up process. In the discussion about strengthening the IGF, one of the main messages has been to move it beyond just an event. It's great to get together, and I really miss the Katowice meeting, but it is also important to have a process, a build-up, to have points developed gradually and matured for discussion.
We had a few sessions before this event, and one of the underlying messages from these preparatory sessions is that there are many, many mechanisms for regulation and governance: in the Government sector, in the private sector, in local communities; Parliaments are getting involved, and the same goes for international organisations and standardization bodies.
And very often the main challenge is how to navigate it, how to get an answer to the call which I was hearing this morning from the people from religious communities. The robustness of this approach is that it is basically an approach to governance with, we can say, a diverse geometry.
On some issues you have laws, so-called hard laws adopted by Parliaments, and we will be hearing from parliamentarians about that. In some cases you have best practices developed by businesses, which are providing quite remarkable results. That diversity is a great strength of Internet Governance and digital governance in general, and of the Internet Governance Forum in particular.
But as it always goes in life, with the strength you get also weakness. And this was, I would say, an echoing theme, how to get it right, both nurturing the strength of this very diverse space, but also addressing some justified concerns how to navigate that.
That was one of the echoing themes from the preparatory process. Now, with that in mind, let's dive deeper into the three areas we are going to cover today, where we are trying to see what is happening in the field of Internet and digital governance. Those areas are data, content and AI. Let's put them as verticals, and we will make horizontal connecting points about best practices and about how some of the issues are covered in this process.
Now, without further ado, I would like to invite Ms. Miapetra Kumpula‑Natri from the European Parliament to tell us something from her experience and expertise, possibly reflecting also on data, because this is what is happening in the European Union: the question of data governance, and not GDPR but other initiatives; how to nurture this diversity but have easy entry points for small and Developing Countries. Over to you.
>> MIAPETRA KUMPULA-NATRI: Thank you for the invitation. Do you hear me well?
>> MODERATOR: I hear you perfectly.
>> MIAPETRA KUMPULA-NATRI: Good. So greetings from the European Parliament. I got very short notice to come here, but I'm happy, as I participated in the Parliament delegation to the IGF in Guadalajara and Paris, and I was planning to come this time, so you gave me the opportunity. I only now got to know that I was even a replacement, so I'm taking two big steps now.
Let me make my short remarks, because there are good speakers to come, and I have had the opportunity to watch a little bit of what you have been doing, but not everything yet. So, to regulate or not? Grey areas exist, and that's why democratic control cannot be forgotten.
On data: interoperability is the precondition for the reusability of data. Without data interoperability, we cannot make maximum use of the potential of the data economy and data society. Data interoperability does not, however, mean that all data should be created according to the same standards from the beginning. That would be nonsense.
New online services are constantly invented for new usages, and innovators will need to create data models and formats for this purpose. Imposing a common interoperability standard through regulation at the data level should only be considered in situations where there is a true lock‑in or competition problem.
For other cases, market‑driven solutions can work well. There are many service providers out there that can make data interoperable when there is a need. One piece of legislation on the interoperability of data in Europe is the Data Governance Act, where we reached a political agreement last week. I was a member of the Parliament's negotiation team.
The Data Governance Act establishes a framework for the common European data spaces architecture and for data-sharing intermediaries that support data flows between online services and systems. It is important to look at who gets to decide on the interoperability standards. In the DGA, the Data Governance Act, we have a so-called Data Innovation Board that advises and assists the Commission on defining the interoperability principles; like the Internet, it is a multi‑stakeholder initiative.
It includes members from industry, research, academia, civil society, standardization organisations, the relevant European data spaces, and other relevant stakeholders. So one of the tasks of this board is to propose guidelines for the standards for the common European data spaces, meaning purpose- or sector-specific or cross-sectoral interoperability frameworks of common standards and practices.
This concept is key for the European strategy to create a trustworthy market for data. We will also, in the EU budget, support the creation of such data spaces in the future. There are estimates that 80% of the data in Europe is not unleashed to its potential, so unleashing it and at the same time creating an ecosystem of trust is the key.
The second part of the data question is, of course, protection. We want to make data handling trustworthy and transparent. In terms of personal data, the crown jewel is, of course, the GDPR. It has become an example for all over the world, from the U.S.A. to China; there are now different initiatives, and it has actually become one of the competitive factors: companies want to say how trustworthy they are in handling private data.
Individuals using online services should in principle be able to take their data, under the GDPR portability right, and allow another service provider to make use of it. We know this is not the reality today, as it would be if the Internet were truly open and distributed; but today the Internet is not that distributed a place, but more and more centralized around the large platforms.
On that question, the EU has been active in rebalancing the rights and responsibilities, and the power, of the large platforms, with regulation ongoing. I will not dig deeper into those now, as I was asked to talk about data.
I'm from Finland, and there have been a lot of active people from Finland in the My Data movement, which is now actually a global movement with many hubs around the world. This movement supports the idea that individuals should have practical tools to move their data from one service to another. This is also important for companies.
It is important that companies trust that when they share data, it is done in a proper way that respects their rights; that is why the intermediaries are required to provide cybersecurity safeguards and secure access to data. Interoperability is also important in the new phase, as the Internet encompasses the connection of objects, the Internet of Things; the interoperability of those objects will become very important.
Otherwise, only mega platforms will have a structural advantage in offering these objects. So interoperability can play a role in keeping physical, now connected, objects open, so that companies and individuals can buy from one company without having to worry about the interoperability of the object with other devices and data. This is important in industry, manufacturing and farming, but also in household use.
The Data Governance Act is just the first piece of legislation on data. We are eagerly awaiting the proposed Data Act, which has been promised for early next year. Then we will deal with openness and functioning and open data flows, which are very much at the core of the EU.
Beyond the data question, I will conclude with the following. In line with its values, the EU strongly promotes the multi‑stakeholder model of Internet Governance. No single entity, Government, international organisation nor company should take control of the Internet, in practice either.
So the EU should continue to engage in fora to exchange cooperation and ensure the protection of fundamental rights and freedoms; the right to dignity, privacy and freedom of expression and information is at the core. Thank you for the possibility to join you.
>> MODERATOR: Thank you. As you said, you are coming from the country which is a champion of My Data, and in Finland you walk the talk on this balance in principle between the usability of data and the protection of data. A few points from your introduction really resonated, like that we have different types of data and therefore we also need different types of regulation.
One question, before I invite Vint to build on that, one question that was raised this morning is a very simple question, but we may try to answer it in the coming years. The question is: why can we not switch our accounts, let's say social media accounts, like we can switch our mobile operator? With a mobile operator you can move between companies and you carry your number.
It was an interesting question, and probably this will be one of the challenges ahead, resonating with what you discussed: interoperability, data spaces and other issues. Vint, you made this possible, the flow of data in packets across the Internet.
Are you worried these days about the flow of data? And, more particularly, what can you bring us from best practices, complementing what we heard from the parliamentary, Government or more multilateral perspective on these issues? Over to you.
>> VINT CERF: Thank you so much, Mr. Moderator. I appreciate the opportunity to engage.
I have prepared remarks, but I wanted to respond to Ms. Miapetra Kumpula‑Natri's observations. First, they were extraordinarily coherent given that you had such short notice. Second, you brought up something which I hadn't thought about so carefully: your point about interoperability, which is key to everything that happens in the Internet. The idea that data needs to be interoperable, and that we have to have standards for that to enable effective data flows, is really important.
So I'm glad that you brought that up. And the last observation: Jovan, you mentioned the question that was asked, who do we call when we have problems? We need a cyber fire department, and we don't have one yet. So we might ask ourselves what that would look like.
But let me take a moment with prepared remarks to respond to this invitation. In the nearly 40 years that the Internet has been in operation, and 30 years for the World Wide Web, we have learned a great deal about how powerful computer-aided technology can be. It has enabled great strides in information analysis and sharing, electronic commerce, freedom of expression and a host of other benefits. We have also learned that such systems can amplify the harmful effects of misinformation, deliberate disinformation, and exploitation of vulnerable people and systems.
It's timely to explore the need for and scope of regulatory responses to these risks while seeking to preserve the demonstrated value of the open flow of information across the global Internet. Private sector actors benefit from clear and common rules and standards for data protection.
At the same time, we have learned that digital trade thrives on the free flow of information across borders and enables smaller enterprises to grow to serve global markets. It is increasingly evident that data flows contribute more to GDP growth than the flow of goods. Thoughtful regulation provides incentives for privacy-preserving technologies, such as differential privacy or federated learning, and for the provision of extensive privacy-protecting controls for users.
At Google, we believe that privacy should not be a luxury good. Much of online business is driven by advertising, and commitments to e-Privacy, GDPR, DSA and DMA are vital to that interest. Further evolution to ensure transparency and user control is worthy of attention. It seems to me important to provide users with access to powerful enabling technologies so that information discovery on the open web can continue to serve them while protecting their privacy.
In aid of this, Google is moving away from third-party cookies to improve user privacy, for example. The development of privacy-protecting technologies can go hand-in-hand with regulatory practices to reinforce trust and safety for users of the Internet and the World Wide Web. Judicious use of cryptographic methods has increased privacy protections for all users.
Every day we see visits from about 20 million users who use the Google privacy, security and ad settings to manage their data and make the choices that are right for them. By way of example, we have developed auto-delete controls that are available over location history, YouTube and activity data, allowing users to choose how long to save some data in their account, automatically deleting data after 3 or 18 months. Our password manager automatically protects user passwords with one click. Password Checkup tells users whether passwords are weak, whether they have been used on multiple sites, or whether they have been compromised in a third-party data breach.
Finally, we believe strongly in the value of intermediary liability protections in support of the free flow of information, free expression, educational opportunities, culture and creativity, and economic growth.
Online intermediaries have brought freedom of expression and other societal benefits, and these were made possible by liability regimes that provide broad safe harbors for intermediaries and incentivize responsible behavior.
For example, laws like the U.S. Communications Decency Act, Section 230, and the law in Brazil treat platforms differently from the authors and publishers of the content they serve, link or host. I will stop there. I am very much looking forward to a continuing dialogue on this topic, which I consider to be vital to the future utility of the Internet and the World Wide Web. Thank you, Mr. Moderator. I turn it back to you.
>> MODERATOR: Thank you, Vint. There was one probable slip when you said DNA, but I think you sent a clear message to our discussion: the question of data and privacy is basically the DNA. There are many metaphors, blood, oil, but I'm sure DNA will be tweeted as one of the messages.
You really developed this line of discussion, built on the complementary parliamentarian perspective. You mentioned a cyber fire department, which we may discuss, and the question of interoperability and the relevance of standards. One message which was particularly relevant for me was that businesses are looking for a predictable policy space around standards and regulation, which is quite clear but sometimes overlooked in the simple dichotomy that Governments want to regulate and businesses don't want regulation.
What type of regulation? This is what we will discuss today. Now, we have set the scene: we have the question of the dynamics between interoperability, standardization and data spaces, and you mentioned safe harbors. There is also an interesting initiative in Switzerland on digital self-determination, also around the question of data spaces.
This addresses the question of how to galvanize this data while protecting privacy. We move to content, to see what happens when data gets meaning, when it becomes content. Vint, you touched on that, but let's invite our two speakers today who will reflect on content from their specific experience: Mr. Anton Gorelkin, member of the Parliament of the Russian Federation, and Ms. Nighat Dad, I'm sorry if I'm pronouncing it wrongly, Nighat Dad, Executive Director of the Digital Rights Foundation.
Mr. Anton Gorelkin, could you tell us more from your experience in the work of the Russian Parliament on this aspect of the regulation and policies on content? Over to you.
>> ROMAN CHUKOV: Dear colleagues, I am a MAG member from the Russian Federation. Mr. Gorelkin is currently presenting his Bill in the lower chamber of the Russian Parliament, and we hope that he will join us a bit later, so we will keep five minutes for him. I would say let's move on.
>> MODERATOR: Sure, Roman. We will have to wait for technology to enable us to be in different places at the same time, but for the time being we understand the position; we will wait for Mr. Anton Gorelkin later on.
Ms. Nighat Dad, what could you tell us from a civil society perspective, and your, I would say, rich experience in India, on the question of content and content policy?
>> NIGHAT DAD: Thank you. I'm not from India, I'm from Pakistan. I want to make this correction, given the current situation between both countries.
>> MODERATOR: It would be the biggest blunder of the IGF, I think, if there is a blunder list, you should, it should be at the top. I'm really sorry.
>> NIGHAT DAD: It's perfectly fine.
I also wanted to thank you and DiploFoundation, because I started working on digital rights because I studied Internet Governance: my first certification, back in 2009, sitting in my law chamber with a very slow Internet in Lahore, Pakistan, doing this virtual course.
So my work on digital rights basically started from this certification from DiploFoundation, and I'm thankful and grateful for the work that you all started. When it comes to Internet regulation, I think it's very important to look at the Global South and Developing Countries. Internet regulation in the form of content moderation laws and policies, data protection, regulation of tech companies, and the accountability of Governments to users is becoming one of the foremost issues of our times. When I look at our region, we have seen, not just in India but in Pakistan and across the Asia‑Pacific region and South Asia, this race to regulate the Internet.
I would say, rather, controlling the Internet. Lately there seems to have been a shift towards national Governments asserting their regulation over the online space, which has led to concerns and a fragmented approach to regulating the Internet.
On the other hand, we are seeing a convergence of approaches as countries borrow frameworks from one another. One prominent example of this is data protection law: the outsized impact of GDPR on jurisdictions well beyond Europe shows how there is often a cascading effect of laws and regulations related to the Internet.
Increasingly, regulation is on the table, since the impact of online harms such as hate speech, online gender‑based violence, misinformation, and disinformation is increasing every day. But on the other hand, regulation that is badly drafted or only narrowly covers these issues does more harm than good, and we have seen this in our own jurisdictions.
Regulation to control misinformation or cybercrime has often been vague and broad in its scope, leading to the targeting of dissidents, journalists and ordinary citizens. I'm saying this because of our own experience in our own jurisdiction.
This misdirected regulation, passed as a result of panic about one emerging threat but used for another, has become very common in this region and has led advocates to become suspicious of any attempts at regulation. There is also a Global North and Global South divide when it comes to regulation, since many regulations passed in the north end up being replicated in other contexts where the rule of law is not as strong as it is in European countries, the U.K. or the U.S.
The damage these copycat regulations do is immense and needs to be discussed. Lastly, this imbalance between the north and the south really needs to be addressed when it comes to self‑regulatory approaches, as many Developing Countries do not have the bargaining power or economic clout to regulate private actors such as tech companies. This means that there is even less of an incentive for these companies to listen to regulators located in the Global South.
>> MODERATOR: Thank you very much, Nighat Dad. It is great to hear that you were our student. I'm sure that you got excellent marks; whenever I meet former students, I wonder if that person got good marks, but in your case, I'm sure they were excellent. What you brought to our discussion is, let's say, the shift towards the need for regulation and the question of having a public authority stepping in, with some risks, and also the Developing Countries' perspective, from the awareness of users to the awareness of parliamentarians and decision makers.
This is the track where we need to invest a lot of time and energy in capacity development across the board. That was an echoing theme in the session that I attended this morning: people were asking for more understanding of what's going on, and I think your presentation brought a good summary of the content aspect.
We are moving from data to content, and we may have a few reflections on content policy. The floor is open; obviously our panelists can comment on it. Let me see if there is any comment, before we move to AI, from Ms. Miapetra Kumpula‑Natri or Vint Cerf, on what was said on the evolution from data to content and what Nighat Dad outlined as a position from Developing Countries.
>> VINT CERF: It's Vint, and thank you for the opportunity to intervene. First of all, I think what Nighat had to say is important, because regulation can sometimes fail to do what it is intended to do and end up being harmful. At the same time, having common regulation, as I said, from the business point of view is helpful, because if it's uniform, it creates a level playing field for all actors within which they can compete with each other.
I do worry, however, that content regulation is hard to do, especially at scale, and that term hasn't come up in the discussion so far. So I want to emphasize that operating at the scale that we and others currently operate at forces us into automated methods for coping with content recognition.
I can tell you that, of course, we try to use machine learning and AI, but these are, I will say, brittle tools, and I'm sure we will talk about that in the third segment of this session. But I want to emphasize that these are imperfect mechanisms for scaling, and I hope that the regulators and the lawmakers who promote the regulations will appreciate and attend to the difficulty of coping at scale with the content that concerns us.
So, again, this is going to be a process of iterative learning of what works and it's important, I think, for all of us to share our successes and our failures so that we understand better how to approach some of these problems and whether some of them will function at the scale that we need.
>> MODERATOR: At scale is another key phrase, after DNA, which we should focus on. Take, basically, the right to be forgotten regulation, with Google, the biggest search engine in the world, having processed something like 1.2 million cases since it was introduced. This is an extremely valid point: how to cope with that, in this case 1.2 million cases, with limited possibility of using automated solutions.
But we will come back to that later on. Vint, thank you for that intervention bringing in this aspect. We will now make a smooth transition to AI: data, content, AI. We will also bring into the discussion very interesting questions coming from our audience. We will now ask two speakers: Jutta Croll, who volunteered to reflect on AI and children based on the recent UNICEF report. But before we ask Jutta Croll to speak, let's ask Carolina Aguerre, Co-Director of the Centre for Technology and Society, to build on content, data and now AI. Over to you.
>> CAROLINA AGUERRE: Good morning, good evening, everyone. I will build upon this. And Jovan, you let me know, because I was asked to chime in with my role in participating in the ad hoc expert group for the UNESCO recommendation on AI ethics, approved two weeks ago at the UNESCO General Conference.
I can bracket that and leave it at that for now, and maybe I can address the flow of the conversation that Nighat and Vint were raising. For one, I think that with AI, and I have just been through this problem here, we have this framing of AI addressing content and data; and in recent panels, participating with experts, for one, content and data is not a clear division. AI systems are intervening, from a user perspective, or being used by large companies or Governments, et cetera, to filter and monitor content, but that content is data.
So for one, I think, and I don't want to address this as an ontological debate about what is data and what is content, but we really have to refine our way of thinking about these issues, maybe looking, for example, at the Data Act of the European Union and how data is being framed, or looking at other examples. If we want to conflate both, let's be sure what we mean by that; and if we want to distinguish data from content, what are the implications of working along those lines?
That's for one. Then I just want to bring in, not to discuss the AI ethics document in this part of the conversation, but just to say that, interestingly, when they approached me, they were really interested in bringing in the experience I had with participating in the Internet Governance system. I think that with the Internet as a general purpose technology, and with the IGF community still a very focal point for the discussions around the Internet, we are building these conversations about governance instruments and ways of approaching general purpose technologies such as AI. Is it possible to think about the power and the complexity of AI systems if we cannot think about the Internet at the same time?
So I understand some of the discussions that have been taking place in the IGF in the last years regarding where we cut the lines between Internet issues and AI, but I also think that we cannot talk about AI without the Internet and without data. So we have to be aware of what we are thinking about with these terms. I'm sorry, I started hearing myself. Thank you.
>> MODERATOR: Thank you. We will now first hear from Jutta Croll about UNICEF's latest initiative, and then from Vint, who also wanted to comment. I would like to invite online participants and others to comment and pose questions; we have a few questions already. Carolina, all aspects of your presentation were relevant, but what was particularly interesting is the interplay between Internet Governance and AI.
There is sometimes an unnatural and forceful division, and in my experience, too-strong red lines do not make sense in the digital world. Yes, there are specificities of Internet Governance, there are specificities of AI governance, but there are so many interplays. Let us go and hear from Jutta Croll about AI and children, and then we will come to Vint and his comments on ethics.
>> JUTTA CROLL: Thank you for giving me the floor on short notice. I thought it would be helpful to refer you to the policy guidance on AI for children that UNICEF has just published in version 2.0. I have been in contact with several stakeholders and have done research on the effects that AI will have, and already has, on children.
So today we face a situation where many, many people are already using services based on AI without knowing it. They use them in their daily lives, and even children are confronted with AI‑based services without being aware of it.
So far we really have no research on how algorithm‑based services, for example, impact child development, especially early childhood development.
Therefore the policy recommendations that UNICEF has drawn up are very helpful. They have several recommendations; I would only highlight a few. One is to prioritize fairness and non‑discrimination for children, which is very much in line with the UN Convention on the Rights of the Child.
Another is to provide transparency, explainability and accountability for children: to make people, not only children but also the adults responsible for children, aware of where AI is impacting their development. I would also like to stress that they recommend empowering Governments and businesses with knowledge of AI and children's rights, which are closely connected.
I had a meeting yesterday with a Commissioner from DG Home at the European Commission, and she said something which impressed me very much: the Internet should be school, library, and playground for children. And I do think that in all of these areas of everyday life, children are now confronted with AI‑based services, applications and so on. Therefore I would like to emphasize, as you can see from my background, that we should adhere to children's rights in regard to AI. Thank you.
>> MODERATOR: Thank you, Jutta, for bringing in this generational aspect. We should also have a chair in Katowice, virtual or physical, an empty chair for future generations, and we should think about what their interests and concerns would be as we shape a world that may have an impact on their decisions and their choices.
That generational aspect is extremely valid and useful for our discussion. Thank you for the link. Vint, you wanted to comment on the question of ethics which Carolina mentioned, and after your comment we will go to the next iteration of comments from all speakers, to see how we can bring all of these threads and pieces together into a nice puzzle or mosaic of our session.
>> VINT CERF: Thank you.
I just wanted to emphasize that our concerns for ethics should not be confined to AI. Generally speaking, software doesn't always work the way it is intended, and we can make the same argument for AI and machine learning. So we should be conscious of an ethical responsibility in creating software‑based services, not simply confined to AI and machine learning methods.
So my big worry is, for example, that bugs in software are exactly the thing which allows that software to be exploited by hackers for nefarious purposes. Therefore I think there is an ethical responsibility for software creators in general, not just machine learning designers, to have in mind an ethical posture where we not only recognize that software has bugs, but make sure that, A, we test to get rid of them as much as we can before we release, and, B, we prepare mechanisms for correcting mistakes in software which has already propagated, which means having the ability to update the software. That raises a third problem: how do we know where that update came from, and how do we know whether the update itself has integrity?
And here digital signatures can prove to be quite helpful in verifying that the software update is coming from a good source and has not been modified. So I simply want to make the problem harder for the good of all mankind, thank Carolina for drawing attention to it, and suggest that we expand our concerns for ethical behavior to all of software production.
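The verification step described here can be sketched in a few lines of Python. This is an illustration only: it uses an HMAC from the standard library as a stand‑in for a real digital signature, whereas production update systems use asymmetric schemes (such as Ed25519 or RSA) so that clients hold only a public key, never the signing secret. The payload and key names are invented for the example.

```python
import hashlib
import hmac

def sign_update(payload: bytes, key: bytes) -> str:
    # Publisher side: produce a tag binding the update bytes to the key holder.
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_update(payload: bytes, tag: str, key: bytes) -> bool:
    # Client side: recompute the tag and compare in constant time,
    # rejecting updates that were modified or signed with a different key.
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

update = b"patch-v1.0.1: fix buffer overflow in parser"  # hypothetical payload
key = b"publisher-secret"                                # hypothetical key
tag = sign_update(update, key)

assert verify_update(update, tag, key)             # untampered update accepted
assert not verify_update(update + b"!", tag, key)  # modified update rejected
```

The constant‑time comparison (`hmac.compare_digest`) avoids leaking tag information through timing, and the same accept‑or‑reject structure carries over when the HMAC is replaced by a true asymmetric signature check.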
>> MODERATOR: Very valid point. Sometimes I'm concerned that the limited policy energy we have goes only to the issue of the day; it was Blockchain three or four years ago, now it's AI and ethics, it's a fashion. But one should recall that law is basically based on ethics; law is a codification of ethics, and how it's applied is a different question, but it is a codification of ethics.
This is where the European Union framework made a conceptually very interesting breakthrough, by saying let us see what we can apply from existing regulation to AI, and then discuss the small remaining portion, the question of ethics and other issues.
Ms. Miapetra Kumpula‑Natri, please.
>> MIAPETRA KUMPULA-NATRI: I happen to be dealing with the AI Act. In the Parliament there was a long political fight between the Committees about who gets the powers and how, but now it's settled, and my Committee will take part, with me being a Vice Chair of the special committee on AI. All of these discussions are very dear to me, and rightly so. It's difficult to say what kind of AI at what stage. Is it machine learning? But to clarify, for those who may not have the Commission's proposal on the table, it is actually quite simple: we do not regulate AI as such, we regulate the use cases that people, like children or consumers, are facing.
We do have some regulation on vaccinations, on medical things, on services on the market, so the idea is also that when there are AI systems on the market, you should have some trust that they are not too high risk. So this is how it is built, actually, yes, as was said, on the existing regulation on products.
It's not as easy as asking of a toy, is it safe? Of an elevator, is it safe? Compared with something like medical treatment, you know that it is not that easy to regulate. But still, yes, we do have some checks and controls on products on the market, and when they are AI‑driven, some transparency needs to be there for the trustworthiness of the citizens involved in AI systems.
So I look forward to a very intensive, interesting next year or two, setting some groundbreaking, first‑ever regulation in place in the EU market, and I'm quite happy to see other ideas evolving in different places in the world. So it's not only ethics, it's also very practical: what products are on the markets. Thank you for this opportunity.
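The risk‑based structure she outlines can be illustrated as a small lookup. The tier names loosely follow the Commission's 2021 proposal (unacceptable, high, limited, minimal risk); the specific use cases and obligation wording below are simplified illustrations for the sketch, not the legal text.

```python
# Illustrative only: tier names follow the Commission's 2021 AI Act
# proposal; the use-case assignments are simplified examples, not the
# legal classification.
RISK_TIERS = {
    "social scoring by public authorities": "unacceptable",
    "medical device diagnostics": "high",
    "chatbot interacting with consumers": "limited",
    "spam filtering": "minimal",
}

# Obligations attach to the use case's tier, not to the technique
# (machine learning or otherwise) behind it.
OBLIGATIONS = {
    "unacceptable": "prohibited",
    "high": "conformity assessment before market entry",
    "limited": "transparency obligations",
    "minimal": "no new obligations",
}

def obligations_for(use_case: str) -> str:
    # Unlisted use cases fall through to the minimal-risk default here.
    tier = RISK_TIERS.get(use_case, "minimal")
    return OBLIGATIONS[tier]

print(obligations_for("medical device diagnostics"))
# conformity assessment before market entry
```

The point of the sketch is the shape of the rule: the same underlying model can carry heavy or no obligations depending on where it is deployed.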
>> MODERATOR: It is very important, and it links to what Vint said about the ethics of software producers: how to define it, and how to ensure that software is solid before it is released. There is also the question of externalities, because nowadays failures in software can create major problems and costs; the analogy I was faced with recently was with the liability of producers.
It's more complex in the digital field. It is not easy to make analogies; they show something that is common but hide the differences. But we should definitely increase ethics across the board, not only in AI, as you both highlighted. You wanted to respond to the question; for those of you who are not reading the chat, Amir asked whom to call, in that metaphor, if there is a problem of child pornography online, ransomware or other issues. What is the phone number, Vint?
>> VINT CERF: Amir lays out a broader statement, I think, than who to call. He or she, I'm sorry, I'm not sure which ‑‑
>> AMIR MOKABBERI: May I jump in and ask my question here?
>> MODERATOR: We got your question. Thank you.
>> VINT CERF: So I was going to elaborate because Amir's comment is a broad one. When crimes occur in a variety of settings, online and offline, and now on the Internet, they can take place across international borders, which was also true in the past. Before the Internet you could commit crimes using the telephone or the postal service.
The important thing is that you need cooperation across those international boundaries in order to apprehend ‑‑ am I getting an interruption?
>> MODERATOR: We got it, but now it's disabled. Could you please mute, the gentleman interpreting for Arabic.
>> VINT CERF: I think that was the Arabic interpreter. So the Secretary‑General of the UN, António Guterres, has called for a broad Digital Cooperation Initiative, and here I think is exactly getting at the problem. How do we build confidence and trust and cooperation in law enforcement across international boundaries?
How do we arrange for the safe exchange of information? How do we preserve data that might be needed to offer testimony in a court of law, in a form acceptable in such settings? There is a wide range of issues to be resolved in order to cope with crimes that take place using the Internet across international boundaries, let alone in any national setting.
So there is still much work to be done to make this a safer environment.
>> MODERATOR: Thank you. And we will follow what Google is doing so well: presenting the front‑end simplicity of a search while building it on the complexity of AI systems. For citizens we have to create some sort of simplicity while building on what you just outlined, and what the Secretary‑General called for. I will tell you one anecdote: when I try to explain to my friends and neighbors what I'm doing, they got very excited recently by the European Union call for Apple to adopt the standard USB‑C charger, because it affects them when they travel; when they have to charge their mobiles they don't need to look for different chargers.
Therefore when we can find these kinds of entry points, that is the best awareness building, as is what you just explained and what Amir asked about, whom to call in the case of digital attacks. We now have all of the building blocks, from data via content to AI, including, I would say, a rather important discussion of the interplays, how data feeds into content and ultimately into AI.
Before we move to the next round of comments, which will be more about knitting these threads together into something that can come out of our session, I would just like to go quickly to the chat, because this is very important. If we ask colleagues to contribute, nothing should be ignored or go unreported.
We had a question from Madagascar for Vint about data protection and privacy, how to keep data away from hackers. We have a question from Ursula Marcerich on a system being developed that supports GDP and the SDGs and the 2030 Agenda, and she shared a link to a WIPO document. Thank you, Ursula, for contributing.
Then there was also a comment, oh, this was a comment for me personally; there are quite a few of my former students in the room, and I'm very proud of them. You also have a link from Jutta Croll to the new UNICEF guidelines. And then we hear from Thomas, who had a quick update from the room: we are having a technical issue connecting audio to the interpretation rooms, and colleagues are working on troubleshooting.
Should we do something, or should we just continue? Please advise us, or explain to us what is happening and whether we should do something to solve this problem.
>> VINT CERF: I'm sure they are working as hard as they can to solve it. They wanted to acknowledge it publicly because people on the chat were asking what to do. Ramon says carry on. I see that there is another question.
>> Yik Chan Chin has a question. Please go ahead.
>> YIK CHAN CHIN: I think my question is broader. I agree with Vint that we need global standards and global collaboration. But as we see, so many regional treaties have been negotiated and signed at the moment, and there are regional alliances like the EU, the G7 digital trade principles, and then the transatlantic agreement. So there are so many different agreements. My question is, where is the central authority?
So which central organisation or authority should be responsible for coordinating all of these different initiatives? Should the UN be the ultimate central authority or central institution? Can it be? Because there are so many problems with the UN. Those are big questions.
Thank you.
>> MODERATOR: Thank you. I can see Vint raise his hand. We will go ahead and then have a question from remote hub in Bangladesh.
>> VINT CERF: I think that Yik Chan Chin has raised a hard question to answer. I would like to offer a suggestion from my engineering point of view: sometimes trying things out to see if they work is a really good idea, as opposed to trying to solve the entire problem ahead of time and then instituting a practice. We learned this lesson in the development of the Internet, because we iterated four times on the protocol design before we were satisfied that we thought it would work.
So there are different proposals on the table. I wonder if there is a way to try some out, whether bilateral or multilateral, in order to see if they can be made to work, and also to expose reasons why they may not.
Before we try to come to some common practice, perhaps we need to test these ideas first.
>> MODERATOR: Thank you, Vint. We are seeing new dynamics in cybercrime, where in four or five years' time we may have two Conventions, and we forget also that our telecommunication infrastructure is regulated by two ITRs, the International Telecommunication Regulations, one adopted in Dubai in 2012 and the other from 1988.
But fortunately the differences are rather minor, therefore we communicate normally via Zoom. The problem this is going to raise, especially when we start having regulation of specific areas like health data and E‑commerce, which is accelerating, and we are noticing in Geneva that it is accelerating, is that it can increase potential overlap and confusion. This is probably one of the most difficult and demanding questions ahead of us.
We now have a quick question from our colleagues at the hub in Dhaka, Bangladesh, and then we will move to a few questions from the chat. Over to you.
>> REMOTE HUB DHAKA: Our question: how can AI work for rural developing countries?
>> MODERATOR: Could you repeat?
>> REMOTE HUB DHAKA: Our question is how AI can work for rural developing countries where Internet is...
>> MODERATOR: I think it's an extremely important question, and we will ask our panelists, whoever can pick it up; maybe Nighat, from your background in the region and the wider region, it would be interesting to hear. We now have to sort out one big problem, a problem of hybrid meetings: it seems that we online have managed to overtake the conversation, and the people who made the effort to come to Katowice are complaining that they cannot ask their questions.
Now, Vittorio, it's great to see you. Let's solve this challenge of hybrid meetings; this is one session about how to make participation truly equal. Usually online participation is less equal, but in this case it seems to be the other way around. Over to you, Vittorio, in whatever way works, and then we will ask for an answer to the question from our colleagues in Bangladesh.
>> VITTORIO: I can speak into the laptop in the room. It's interesting, but thank you for sorting it out. My question relates to the point of interoperability, which was possibly the main point of the session. I'm very happy to see regulation around the world, especially in Europe, since I'm European, trying to uphold the principle of interoperability, which is one of the original key principles of the architecture of the Internet. The problem we have now is that sometimes Governments are going against it, and sometimes the big tech companies are going around it.
So I would like to get a comment, maybe from whoever wants to on the panel, on how we can get out of the current situation, where many dominant companies are building services in ways that are not interoperable, integrating services one with the other so that there is no way for third parties to provide alternatives, which clearly would be the point of ensuring interoperable and open standards for the Internet.
So if anyone has comments on that. Thank you.
>> MODERATOR: Definitely, this is a crucial question, and I'm sure that Vint will have something to say on it. There is this question of AI, the big latest technology, and then what our colleague from Bangladesh said: well, we have a problem with access to the Internet. How would you reconcile the two dynamics?
>> NIGHAT DAD: So I also wanted to comment when colleagues were talking about ethics around AI. I feel that developing ethical principles around AI is not sufficient on its own. We need to see who is actually training these AIs, when we talk about the big tech giants, for instance.
Do we really see diversity? Do we see who these people are? Is it a white man sitting in Silicon Valley or somewhere in Europe developing this AI for a woman sitting in Pakistan or Bangladesh? We need to see the structural and basic problem when it comes to developing AI. Developing principles is one thing, but I think we need to look at the design and structure. We discuss this every single time, but we are really not addressing the problem of diversity, of who the people designing these AIs are, and of whether we are really looking into how these AIs are trained.
And I'm coming from the perspective of online gender‑based violence, for instance. In Pakistan or in India or Bangladesh, when women or marginalized communities face online violence in different languages, the social media platforms' AIs do not recognize the kind of hate speech or harassment or rape threats that women and marginalized communities receive in local languages.
So I think there is a problem of not just looking into the structural issues around AI, but also of how many resources these big tech giants are putting into developing and training these AIs. So that's one thing.
But also, I'm not sure what the situation is in Bangladesh, but speaking from a Pakistani perspective, I feel we are not even there yet to discuss AI at this level of advancement of the conversation. I'm discussing this at the IGF, but I don't think I have had a chance to discuss it nationally or locally. The conversations are not even there. We are still discussing the gender digital divide, who has access to the Internet and who doesn't, Internet shutdowns, Internet censorship, and regulations which are basically not there to address the issues we have discussed here, but to regulate content or shut things down. So I think we really need to ask, are we bringing all of these countries along in this conversation?
I don't think so. We are behind in these conversations, and if the conversations or narratives are not there, then I don't know how we are going to arrive at a universal solution, or come to a point where all of the countries, from Global North and Global South, are following those principles.
>> MODERATOR: Great point. That's exactly at the core of some debates like AI and ethics, which is, for example, a very important debate in the Vatican; the Vatican and other churches have been discussing ethics for centuries, and there are people from other denominations. But when it comes to the nitty‑gritty questions of access, or of gender‑based violence, that is probably where people in the South would invest more policy energy and time, into fixing those issues.
I think we should be very, very aware of that. Thank you for bringing that element.
One solution that can bridge your question and Vittorio's is the question of standards: to what extent standards, instead of a general discussion of ethics, can help us deal with interoperability or with AI. That is probably the next step, because we have regulation and laws that we can accept, but there is some uneasiness, especially about regulating heavily developments that are still unknown.
Although Vint hinted at one interesting approach: pilot one policy, see if it works, get feedback and move on. Standards are emerging as an interesting area for policy making. Could I have a comment on Vittorio's question from anyone on our panel: how can we move from the words, from the rhetoric, into real interoperability? What is the position of Governments? What is the position of technology companies? And can we walk the talk?
It's open for all of our panelists. Carolina, you were relatively quiet after your great initial intervention. Would you like to take on this point bringing also standards in AI in discussion, and then we will ask other panelists to join?
>> CAROLINA AGUERRE: So Vittorio raised a super relevant issue. In terms of the interoperability debate at the core of Internet standards, there are a lot of best practices we can take from there, but we also have to acknowledge that there are other initiatives working against what is for me one of the most relevant and important successes of the Internet, this open, end‑to‑end, interoperable Internet that we have managed to build through people like ‑‑
>> VINT CERF: And the TCP/IP protocols and the organisations that followed. There are exemplary cases of how this has been put forward.
And yesterday I was participating in the presentation of this report regarding the Internet's technical success factors. There is evidence that this end‑to‑end, interoperable, layered Internet is very much alive and kicking, but there is the elephant in the room which Vittorio raised: how are we going to address the CDNs and the concentration of large players at the Internet infrastructure layer as well? And I'm bringing in another topic; it's the data that is driving those models, and how data feeds into these AI models.
But I think there is a very delicate governance challenge here around interoperability and security. If we want interoperable data systems at the platform layer, working with users' data, we really have to tread carefully. It's a very valid concern, but we have to think about how we are going to work with that, and I think we need to work not just top down but from the communities, and not just the communities in the Global North that are developing these technologies, but also more peripheral communities.
>> MODERATOR: Thank you, Carolina. Let's go back quickly to the online chat and see what's going on there, and then we will return to the panelists to develop Vittorio's and the other questions further and create some closure in our discussion. There was a question from Chris and Yik on the role of the UN; as Chris mentioned, the UN can cover a wide range of venues and modalities these days.
And this is an interesting development: as digital becomes less and less a specific domain, we now have digital health or E‑commerce, but soon it will simply become health and commerce and human rights, and I'm not sure how the IGF community will adjust to that transition.
Suddenly what was the exclusive domain of our discussions becomes a general topic. Then Yik Chan mentioned that IGF plus is the key way to stumble forward: take a good, solid forum and make it better. And Jutta Croll couldn't agree more with what Nighat said about vulnerable and marginalized groups; there is support for Nighat all over the place.
Then we also heard a comment from Chris on the evolution of the IGF and the work going on in other UN spaces, cybersecurity and cybercrime; IGF evolution needs to strengthen the IGF's role in relation to those efforts.
I think it's a key issue, Chris, definitely; it's a kind of soul searching for the IGF, a new role for the IGF in this fast‑changing environment. Digital is also on the table in other developments, in trade and the Human Rights Council, a very rich scene. You had one comment; you can type another question. But let's now, since we have about five minutes, wrap up. We did well online; I don't know what's going on in C2, and I hope the people in the room didn't give up on our online dynamics.
Here is a specific question from Amir for Vint: my question to Mr. Vint Cerf is whether some problems, like Internet crime and cyber‑attacks, could be solved at the architectural and technical level by initiatives like security by design and safety by design, along with law enforcement, with the aim of preventing and combating cybercrime. It is a crucial issue. I will share with you in a minute a link to the paper I prepared for this interreligious dialogue discussion on the digital future, trying to explain to non‑IG people what is going on.
One of the issues in that discussion was that we had a very strong Blockchain community, almost religious, I would say, trying to argue that Blockchain is a solution for all problems, and it made for an interesting discussion of how far you can go with by‑design solutions.
One example is Tor, which was developed for legitimate purposes, to protect people from being punished for what they are saying, but it then also became a space for the dark web, and we know the misuses. One has to be careful with complete by‑design solutions. But it would be great, in the final wrap‑up statements, to comment on this question or any other from our great panelists. You are really fantastic.
We start with the European Parliament, and then you can pick up on any of these points, maybe close a few threads, so that we have a nice set‑up towards the end of the session. Please, go ahead.
>> MIAPETRA KUMPULA-NATRI: Thank you. I love the discussion, really the best of the IGF, even though I am not physically there.
Our friend from Bangladesh, I'm sorry, I didn't get all of the names, asked about AI for rural and developing regions. It is so important. Yik Chan mentioned the UN system, and there really is the goal, in Our Common Agenda and now the Digital Compact from the Secretary‑General, to go for connectivity, connectivity, and also inclusive AI development; and once we have every corner and every person connected and online, there are more tools to build on.
And AI solutions need to be linked with countries facing agricultural crises, more of which will come with climate change, and so on. The ITU and UNESCO are working to get every school connected, and once schools are connected, there is more connectivity for the neighboring region. So it's very important. Thank you for taking this up.
For Vittorio, I tried to elaborate in my speech on how to avoid walled gardens; accepting those kinds of developments is not the way, interoperability being the key. And thank you for the Pakistani perspective on diversity in design structures. It is something we should try to address through soft regulation; it's a little bit difficult to set a quota for every single company's group of engineers, and I happen to be a woman engineer myself, so setting it in a hard legislative way is difficult, but the soft work we do need to do.
And the European Parliament will next Monday have on its plenary the discussion on online gender‑based violence again. This concerns humankind together, and we cannot accept raising our hands and saying we cannot act. We have to and we will. But as long as the criterion for the social media platforms is the time you spend online, capturing every second, it's really hard to avoid any of this. So we will need to work with the companies and try to set rules there as well.
So let me conclude my part. I believe that the new corporate responsibility is no longer climate alone; it should also be the responsible use of data. Data is so small we don't see it, but we need to know how to govern it. We have the climate agreements, where we talk about something intangible, CO2, and I think it's the same with data: we don't see it, but we see how it structures the world, so we need good governance for it. Thank you for being able to be a part of your session today.
>> MODERATOR: Thank you. It was a great comment, and one of our colleagues in the chat may answer this question: at what point will we have a situation where we can shift from Telegram to WhatsApp, with not only our data being portable, but our network coming with us, as we do with mobile phone numbers? What will that point be? It would be, if I can use the term, the Holy Grail of interoperability to have the real possibility to switch and move between platforms.
Thank you for your great work.
>> MIAPETRA KUMPULA-NATRI: The short answer is the intermediaries in the DGA; they have the label of trustworthy intermediary insofar as they can also provide interoperability, and the innovation work is now helping to create them as they go for the sectoral data spaces, so that lock‑in would be avoided already in the planning phase. That's the dream.
>> Let's see if we can meet around the bones; the bones are the legislation, now the soft infrastructure is to be built, and companies are to get on board. That's the European trial. You may copy it, you may not.
>> MODERATOR: Maybe to supplement that with standards; that would also be interesting.
Vint Cerf, we haven't heard from you for the last 20 minutes. What would be your echoing point from this discussion? How would you bring it together to some sort of closure?
>> VINT CERF: That's very difficult given the scope of the discussion, but the thing which is most apparent, to me anyway, is that we should bring to this discussion a certain degree of humility. This is an extremely complex environment. Just to give you an example, consider the idea of security by design: you may be able to achieve this objective for a particular piece of software that you are developing, but think for a moment about the Internet environment, billions of devices with who knows what software in them, interacting across the network, some for the first time.
Who knows what the consequences of that interaction are going to be. I think we should be a little humble about imagining that we know how to solve these problems. This is not an excuse to avoid working on them; I absolutely want to work on them. One other point: I mentioned scale earlier, the amount of content that online providers have to cope with.
There is another scale problem associated with machine learning, and that's the amount of information needed to successfully train a machine learning model. If you say, I want to automatically detect harassment and other harmful speech, the techniques we currently have require huge amounts of that bad language in order to train the system. There is something kind of unnerving about that observation.
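The data-hunger point can be made concrete with a toy sketch. The code below is a deliberately naive word‑count "model", not a real moderation system, and its training phrases are invented for the example: it only shows that a model can say nothing at all about language it has never seen in training, which is one reason covering harassment across many languages requires so many labeled examples.

```python
from collections import Counter

def train(examples):
    # examples: list of (text, label) pairs; count the words seen per label
    counts = {"harmful": Counter(), "benign": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def coverage(counts, text):
    # Fraction of the text's words ever seen during training, for any label.
    # Words with zero training occurrences carry no signal at all.
    words = text.lower().split()
    seen = sum(1 for w in words if any(w in c for c in counts.values()))
    return seen / len(words)

# A tiny, hypothetical training set in one language only
model = train([
    ("you are worthless", "harmful"),
    ("have a nice day", "benign"),
])

print(coverage(model, "you are worthless"))  # 1.0: fully covered
# A threat phrased in an unseen language is entirely out of vocabulary,
# so the model is blind to it:
print(coverage(model, "tum bekaar ho"))      # 0.0
```

Real moderation models generalize far better than word counts, but the underlying constraint is the same: languages and phrasings absent from the training data get little or no protection.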
So let me back up a little bit and try to summarize. We clearly need rules and regulations that work. We want them to be as uniform and as globally applicable as possible. I would urge us to remember that our first attempts and our second attempts and even our third attempts may not work, so let us learn from our attempts. Let us try things out and iterate and not attempt to solve the entire problem with our very first piece of law.
So I'm looking forward to the continued effort of many to create a safer and more secure Internet environment for everyone.
>> MODERATOR: Thank you. I think that's the best possible summary of our discussion. If the other panelists, especially Carolina and Nighat, agree, we can close on this note and message of humility, an agile approach to testing, and not shying away from the problems, but approaching them with the utmost humility, wisdom and innovation.
And for those of you who are physically in Katowice, I'm sending a message from this place, and you may visit a place near Katowice called Nowa Huta, where Karol Wojtyła was a priest before he became Pope; there is one physical link between where I am and Katowice where the meeting is happening. This is the view from my hotel room. Thank you very much for a discussion that was, for me, truly inspiring, and for your tolerance of all the mistakes I made, especially, Nighat, in introducing you.
I had fun, and this is the most important thing. It kicked off quite a few synapses in the IG part of my brain, making me think of things I hadn't thought of before. I hope the panelists enjoyed it equally, and especially our audience, both online and in situ. Thank you very much. All the best.