The following are the outputs of the real-time captioning taken during the virtual Fifteenth Annual Meeting of the Internet Governance Forum (IGF), from 2 to 17 November 2020. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record.
***
>> The chat feature is for social chat only and only the Q&A feature is used to ask questions.
>> Thank you. Quick question. The webinar is being recorded. Where will it be possible to get the recording of the webinar afterward?
>> I think we will have to wait a day or so for that.
>> KIAN CASSEHGARI: That would be on the IGF website?
>> Yes.
>> KIAN CASSEHGARI: Same for the presentations of the speakers?
>> Yes.
>> KIAN CASSEHGARI: Okay. Thanks. Well, let's start, I think, so that we're on time. So let me introduce the webinar. Welcome to the speakers and attendees. I'll start with the topic of the webinar and the speakers.
I'm Kian Cassehgari and I work at the World Trade Organization, together with my colleagues Agata and Lee, who helped me organize this webinar.
In terms of the topic at hand, international trade is facilitated where the internet functions at its best and data can flow freely and efficiently.
By the same token, the global reach and effectiveness of the internet depends on international trade to a large extent. That's why in this session we would like to show that international trade governance contributes to the smooth and effective functioning of the internet, and therefore should be part of the internet governance discussions at the IGF.
In addition to that, we would like to share that internet governance is key to international trade growth as well, and that the trade policy community can also learn from the different policy communities that regulate data.
So for this webinar we have four speakers with us today with different backgrounds. I'm sharing my screen to present the speakers. Please let me know if you can see the PowerPoint. I suppose not.
>> SEAN MANION: Yes, we can see.
>> KIAN CASSEHGARI: Wonderful. First we have Carl Gahnberg, a senior policy advisor at the Internet Society. We also have Jade Nester, director of the GSMA consumer policy unit. Richard Syers is a principal policy officer at the Information Commissioner's Office in the United Kingdom, but he is also representing the Global Privacy Assembly today, which is an association of data protection authorities.
Finally, Sean Manion is a neuroscientist and chief scientific officer at ConsenSys Health, which is a small and medium-sized enterprise.
Okay. So we will open the floor to you, the audience after the presentation of the four speakers so that you can make comments or ask questions. You'll be able to turn on your mic thanks to the help of Alexandra who is helping with questions today.
You can also write questions in the Q&A or the chat box where my colleagues will be monitoring your questions. We'll take questions at the end of the presentations. If there are follow up questions we may take some during the presentations.
Without further ado we start with Richard. You are going to present the privacy dimension of the discussion today. The questions for you that I have will be the following: How do data protection authorities cooperate to ensure trust on the internet globally? And how do privacy governance principles and rules facilitate data flows and contribute to international trade growth? Over to you, Richard. I have to share my screen for you.
>> RICHARD SYERS: Thank you.
>> KIAN CASSEHGARI: Wonderful. I'm in your hands.
>> RICHARD SYERS: Thanks, Kian.
Thank you very much. It's a pleasure to speak to everyone on the webinar today and take this opportunity to engage with you all. My name is Rick Syers, I work at the Information Commissioner's Office, which is the UK data protection authority. Today I am also speaking on behalf of the Global Privacy Assembly, which the ICO currently chairs. So next slide, please. Thank you.
So the Global Privacy ‑‑ I thought I would start by explaining who the Global Privacy Assembly is, for those who don't know. The Global Privacy Assembly is the premier global forum for data protection and privacy authorities to interact and cooperate with each other.
As I said it is currently chaired by the UK information commissioner Elizabeth Denham. We provide the secretariat function for the global privacy assembly. We will be the chair for the next year until the next conference.
So the GPA has been around since 1979. It's been around for a while. Until fairly recently it was called the International Conference of Data Protection and Privacy Commissioners, which was a mouthful. So one thing we have overseen as part of our chairing of the GPA is the change of name to the Global Privacy Assembly. We held our 42nd annual assembly recently, which was scheduled to be in Mexico. Unfortunately, because of the current situation, we were not able to travel, so we held it as a virtual conference. We think it went well. We are currently in the process of digesting all the output from that. That was in the middle of October, so it was only about a month ago that we held it.
So the purpose or the goal of the GPA is to be an outstanding global forum for data protection and privacy authorities. The idea is to provide practical assistance to help different authorities more effectively perform their mandates and to provide leadership at an international level on data protection and privacy issues. By connecting and supporting efforts at both domestic and regional level, we want to connect with other international forums such as the World Trade Organization and the Internet Governance Forum so we can bring that data protection expertise to the questions we are talking about today.
Next slide, please. Briefly that's the GPA's vision. An environment in which privacy and data protection authorities around the world are able to effectively act and fulfill their mandates both individually and through diffusion of knowledge and supportive connections. Next slide, please. Thank you.
There are three strategic priorities for the next year; they have been in place since 2019. Obviously we want to advance global privacy in the digital age, and we want to do that by working toward a global regulatory environment with clear and consistently high standards of data protection. We want to maximize the GPA's voice and influence, so we want to enhance its role in wider digital policy and strengthen our relationships with international bodies and networks in order to advance data protection and privacy issues, including observer arrangements, for example, with other forums.
We have other organizations that can be GPA observers as well. Finally, we want to help build capacity amongst the assembly members. There is quite a wide spectrum of capability, resourcing and experience within the GPA. At one end we have very experienced privacy regulators like the ICO and other European regulators that have been around for a long time.
We also have new data protection authorities who are regulating newly established regimes. So the idea is that by working together we can help build that capacity and knowledge by sharing the experience we have. In some ways, the newer authorities are in an interesting position because they can create a data protection regime purpose built for the digital age we are in now, as opposed to having to adapt to it as some of the older authorities and regimes have had to. So I think there is an interesting future there in terms of how new data protection regimes are built with the digital world in mind. Next slide, please.
I just wanted to touch on a few things ‑‑ some workstreams the GPA has been doing over the last year. These are relatively new things. We split into a number of different policy workstreams, and three of them are particularly relevant to the conversation today. The first is work we did over the last year to analyze different global frameworks and standards, to look at where the points of difference are and also why they are similar. That's a list of the ten we looked at: the GDPR, well-established instruments like the OECD Privacy Guidelines and Convention 108, which is the only legally binding data protection standard globally, and then newer things like the Madrid Resolution, which laid out a set of common standards for GPA members in order to help them build a regime. Next slide, please.
>> KIAN CASSEHGARI: Richard?
>> RICHARD SYERS: Can I get the next slide, please?
>> KIAN CASSEHGARI: I'm facing some internet connection issues.
>> RICHARD SYERS: No problem. The first thing we'd like to stress is that we found there were strong commonalities between the different frameworks. The basic principles are fairly well established. However, there were differences, particularly in the mechanisms for facilitating cross‑border data flows. Although the principles that most of those frameworks apply to cross‑border transfers were similar, in terms of allowing data to continue to enjoy the protections of its country of origin, the mechanisms for ensuring that were different and there were different ways of doing that.
Over the next year we intend to carry out further analysis, more specifically on the cross‑border transfer mechanisms, to see where they differ and are similar and to look for areas of commonality so we can work towards more interoperable systems. The next workstream is the digital economy. A quick look at what the mandate of the workstream is: basically to create a narrative and a more coherent approach to issues around the data protection aspects of regulation, and also to look at closer engagement with multilateral and international bodies. This webinar is part of that work, reaching out to different forums where data protection issues are important.
Obviously we think we can contribute to that and learn from practical issues. Next slide, please.
As part of that we did a background paper called "towards a trustworthy digital economy" which lays out our view of what the digital economy is, what are the important developments and main things we need to look at in this context. Next slide again, please.
So one of the things we wanted to point out in the paper, and the point of today, and that we'd like to make in the GPA, is that there are systemic benefits to systemic data protection: data that can flow across borders, a sustainable digital economy and a level playing field. We believe trust is something that's hard to gain but easily lost.
While certain uses of data may yield economic growth in the short term, if those uses are detrimental to individuals they eventually become detrimental to the digital economy as a whole. We think appropriate laws and regulations, combined with appropriate oversight by supervisory authorities and courts, ultimately help to ensure data is handled in ways that allow people to enjoy data protection and realize the benefits of the digital economy.
It enhances the choice of individuals and prevents harms, so they are able to make informed choices about the services they use and consumers are aware of the risks that misuse of their data could lead to. There is also an opportunity to offer a competitive advantage for organizations that can sell their products on the basis of being privacy friendly. We think that's a selling point now. Underlying all of this is trust, which we think is important.
We believe data protection is a driver for innovation. At the ICO we have a sandbox which allows organizations to come in, test out ideas in a closely supervised environment so they can use data in new ways but in a way which still respects the basics of data protection law. Next slide, please. I'm sorry, I'm slightly overrunning. I will wrap up.
Just kind of the key takeaway is that trust is key to a vibrant digital economy. You need trust from users in the systems they are using and between different countries. Data protection law is a vital building block of that trust. So we don't believe data protection law should be a barrier to economic activity. We don't believe this is a question of data protection or economic benefit. We believe they go hand in hand and that with a modern, well-regulated, sensible data protection regime you enable data flows. In terms of data flows across the globe, the more harmonized and interoperable the systems can be, the easier it will be for data to flow between borders and the less friction there will be between different data flows.
Next slide, please. Last one. The last workstream, which is a good example of working together and how it helps, is the international enforcement cooperation workstream. This is something, again, that's been newly established. It provides a forum in which members can discuss live issues, leading to joint working on matters of mutual concern. Particularly with multinational organizations operating in different countries, it is useful to talk to each other in that forum and to coordinate our actions. That's led to concrete enforcement cooperation initiatives: a joint investigation between ourselves, the ICO, and the Australian Information Commissioner into Clearview AI, and a joint statement to video conferencing companies signed by six member authorities. That's become more and more important in the current situation where we are much more reliant on video conferencing services at the moment.
Thank you very much. My final slide just has links to the GPA website where you can get information on those things. The GPA website has a lot of information there. Thank you very much. I look forward to discussing it with you further.
>> KIAN CASSEHGARI: Thanks, Rick.
>> RICHARD SYERS: Thank you.
>> KIAN CASSEHGARI: All right. Now to the next speakers, Carl and Jade. We have heard from Richard that privacy governance is key to generate trust and can contribute to digital economic growth and international trade growth. That's what matters for us.
Of course there are other policy objectives for data policies such as fostering market competition, innovation or protecting national security interests.
So taking that into account, I would like to ask you, Carl, first if you can tell us more about the relationship between those different data policies, data rules just to name one, and the global architecture of the internet. Carl, over to you.
>> CARL GAHNBERG: Thank you very much, Kian. I'll share my screen here for the presentation. Hope everyone can see it.
>> KIAN CASSEHGARI: Yes.
>> CARL GAHNBERG: Thank you very much for having me and thank you very much for inviting me to this great panel. I think we are all contributing different complementary angles to this really important topic. The angle that I will be presenting is stemming from some of the work that we have been doing in 2020 at the Internet Society about an Impact Assessment Toolkit for the internet trying to understand how policies or other developments might impact the global architecture we call the internet.
Looking specifically at the case of forced data localization in this context.
So I usually start these presentations with this picture to give people a sense of what we are trying to do. If you look at this image you'll see there is a forest and then there is a highway surrounding it. Ideally, someone will have considered the impact on the ecosystem in that forest, and on the forest itself, when deciding to build the highway. This has become the norm in many countries around the world, that we do these types of impact assessments. We assign a high value to the environment. It's become almost a reflex in many communities to do that type of impact assessment.
We are hoping to do something similar for the internet, and that's the work we have been doing this year: to contextualize and describe the impact of different choices and how they relate to the internet. So what we did in order to achieve this and start building this work, and this is a work in progress, I should say, was first to try to articulate what's really important about the internet, what's the critical thing that needs to be strengthened and preserved going forward.
Step two here is to try to provide a lens or other tools for people to assess the impact on the internet itself.
That resulted in this Internet Impact Assessment Toolkit. Starting with that first question, if we are thinking about trying to describe what's important to preserve, it is useful to try to articulate the internet in some way, shape or form. What you see on the screen is one of the ways that people usually describe the internet. It's a network of networks. There is also a bit of a so what question to that. It seems to be a necessary but not a sufficient description to talk about the internet.
If you were to sort of conceptually visualize the internet it would look like this. You would have networks of internet hosts interconnecting. Together they create a global platform we call the internet in which you have these amazing innovations and creations that is sort of fueling the global economy today.
Now what we wanted to do is add a little bit more detail to that. We're talking about a network of networks, an internet, but can we add more detail as to what defines and characterizes that capital-I global Internet we have today?
So we started looking at the fundamentals together with our community and other experts trying to identify what are those critical properties that come to define the internet and what are their ideal states? What would be the ideal way to do the networking model? The purpose is to establish that baseline that you then assess different policy developments towards.
Hopefully what you see on the screen here should resonate with many people. There shouldn't be any novelties here. When we are talking about the internet we should be striving to have a common protocol, for example. We should have a common global identifier system in terms of IP addresses and the domain name system, for example.
There are also ideals. We know, for example, that in the current internet we have two protocols, IPv4 and IPv6. In the ideal networking model you are striving to have one protocol, IPv6 for example. Each of the properties tries to describe things that are critical for us to talk about the internet, that capital-I Internet, and to preserve that networking model.
The idea here is that you are able to add more detail to this conceptualization of what the internet is. It is not only a network of networks. It is a network of networks that has a common protocol and a common identifier system, that is general purpose, and so forth. By doing so you can also start thinking about when we move away from the internet, when we start harming the internet, degrading the internet. You end up with question marks.
If you move away from a common protocol, move away from having a general purpose network, move away from having a high degree of autonomy for the participating networks, then you move away from the internet as a networking model and you get something different. You do not have a capital-I Internet anymore. That's what we are trying to convey in this toolkit. We are describing these critical properties as being associated with important benefits that we have achieved today.
For example, the fact that you have an accessible infrastructure with a common protocol means there are very low barriers to entry to participate in the internet. Get an IP address and you can deliver your service online, simplified. Basically it provides low barriers to entry.
Similarly, if you look at decentralized management and distributed routing, you have networks that are able to optimize their interconnections and routing to serve local needs while still maintaining worldwide connectivity. So the autonomy of the networks participating in the networking model is critically important to having the internet we have today.
Similarly, that general purpose network, which allows the network to just move bits and bytes with innovation at the edges, is something to preserve in the internet going forward.
I'm going to move to the second step in this toolkit, where we have a lens for different ‑‑ what we call use cases, trying to articulate how they affect the fundamentals of the internet. It's important to think about them as fundamentals because they don't capture everything that's important about the internet. We are only looking at those critical fundamental pieces and how they are affected by different policies, what you might lose through some of those policies, or how the policies might strengthen the properties as well.
We created a number of use cases in this work, and we have more coming in 2021. I'll talk briefly about the case of forced data localization, which is one of the angles of today's conversation. If you think about forced data localization, you could describe it as you see on the screen: it refers to government requirements that the storage and flow of data be kept within a particular jurisdiction.
That can come roughly in two different forms. Either data at rest, which means that the data is physically located within a country, or other types of data localization requirements that are about data traffic not traversing networks across borders, for example.
They can come in different forms. Today we'll talk about the data at rest case. But the idea behind these use cases is that we are trying to illustrate, with such requirements, what the impact is on these critical properties. I have listed them here on the screen. I won't go into them in detail now. I think we will discuss to a great extent the impact on the first critical property: data localization requirements raise barriers to the participation of services on the internet because, in effect, if these requirements are indiscriminate, you would need multiple duplications of your infrastructure in order to comply with the law. You wouldn't just need one web server in a country; you would need to duplicate that across jurisdictions and so forth.
Similarly the general purpose network, if you start having requirements about how traffic can flow across the networks and across borders then you are adding requirements in terms of the networks becoming content‑aware and doing traffic management in a way that's not in line with this general purpose idea.
We also have here critical property three about the decentralized management and distributed routing. I was thinking I should try to illustrate this a little bit better visually. I usually find it helpful to get a visual description of some of these issues. But the idea here is that the fact that you have these data localization requirements ‑‑ and we are talking about data at rest ‑‑ is that it doesn't really correspond to how the internet itself looks at the world, if you will.
If you think about how we as humans look at the world, we would look at it in this fashion kind of. We would see it in countries and in country borders. We look at relationships between those countries in terms of those borders. So the relationship between country A and country D, for example, is that in between them you have country B and country C. That's how we think about the world as humans.
But this is quite different from how the participants, the networks and services on the internet, look at the world, if you will.
So rather when you are on the internet you think about the internet from a network topology point of view. In this example this is a miniature version of the internet describing a network topology. If you were a cloud provider and had the option of allocating resources in network A or B you are faced with a choice. Where am I likely to more efficiently deliver this content, for example, to clients across the world?
By looking at the map stylistically, it looks like network A would be more beneficial than network B for a few reasons. For example, if you are in network A you basically have three hops to any other network in this map.
In network B you will have five hops to some of these networks. Also for example you have a higher degree of interconnection in this part of the graph than here. Those are the types of rationales that you can benefit from when looking at the internet because the internet doesn't have the idea of borders. It looks at interconnection patterns and where it is most efficient to allocate resources.
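To make the hop-count reasoning concrete, here is a minimal sketch in Python (not from the presentation): it uses a made-up topology and simply counts hops from a candidate location to every other network, which is the kind of calculation a provider's placement logic relies on.

    from collections import deque

    # Hypothetical topology: network -> directly interconnected networks
    # (illustrative only; not the map from Carl's slides).
    topology = {
        "A": ["C", "D", "E"],
        "B": ["F"],
        "C": ["A", "D"],
        "D": ["A", "C", "E", "F"],
        "E": ["A", "D"],
        "F": ["B", "D"],
    }

    def hops_from(source):
        # Breadth-first search: hop count from source to every reachable network.
        dist = {source: 0}
        queue = deque([source])
        while queue:
            node = queue.popleft()
            for neighbour in topology[node]:
                if neighbour not in dist:
                    dist[neighbour] = dist[node] + 1
                    queue.append(neighbour)
        return dist

    print(hops_from("A"))  # {'A': 0, 'C': 1, 'D': 1, 'E': 1, 'F': 2, 'B': 3}
    print(hops_from("B"))  # B reaches most networks only via F, so its hop counts are higher

A provider comparing the two outputs would pick the better-connected network regardless of which country's border it sits behind, which is the mismatch Carl describes next.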
The problem is when you're trying to impose those geographical constraints on the networks, because it doesn't map well. You have networks that cross borders, and just because you have a resource within one country doesn't mean that it would actually improve the service delivery for users in that country; they might have better service delivery if your resource is stored in a network in a neighboring country.
So you're sort of losing these benefits of the interconnectedness of the internet through these data localization requirements.
So those are the type of issues we tried to bring forward in this toolkit as considerations when you look at different policies, including trade policies like this, that you start losing some of the benefits of the internet, the global reach, the interconnectedness and the efficiencies that an interconnected network can give you in terms of promoting your economy and your services. They can be put at risk from stringent and broad data localization requirements, for example.
So I'll stop there. Thank you very much.
>> KIAN CASSEHGARI: Thank you, Carl. That was interesting. One follow-up question is whether privacy rules or governance will affect at least one of the five properties of the internet, for example decentralized management, and whether privacy governance will influence the decisions of cloud service providers.
My gut feeling is no because we are talking about data transmission and not the content of data. But I would be happy to hear your views on that.
Before I go to the next speaker I would like to see with Agata if there are follow‑up questions from the Q&A.
>> AGATA FERREIRA: Yes. There is actually one question. But the question was addressed to a previous speaker, to Richard. So we can ask it now if you want or leave it for later.
>> KIAN CASSEHGARI: Sure, if it's a follow up for Richard, go ahead.
>> AGATA FERREIRA: So the question for Richard notes that there's been discussion of the DNS over HTTPS (DoH) implementation in the context of cross‑border data and privacy. Is that something the GPA would consider?
>> KIAN CASSEHGARI: Before that ‑‑
>> RICHARD SYERS: Hello?
>> KIAN CASSEHGARI: You can unmute your microphone, please. Before I give you the floor, Richard, Lee in the chat box and in YouTube are there any follow up questions?
>> We don't have any questions from the YouTube audience as yet. But you've got three thumbs‑up on YouTube.
>> KIAN CASSEHGARI: Thank you. Richard, go ahead and maybe Carl can address a question as well.
>> RICHARD SYERS: I tried to respond in the Q&A. Apologies. I should have waited to answer.
Basically I don't think it is something that's been discussed at GPA. I'll admit I'm not an expert on internet protocols. I'm not sure it's something I could answer at this point. It's certainly something we can take away and raise with colleagues.
The only thing I would say is that data protection law generally tries to be technology agnostic. It wouldn't usually specify or endorse a particular standard or a particular protocol or a particular technology. All these things have pros and cons for privacy.
I don't think there is one that's perfect. Generally, what data protection law and privacy standards try to do is leave space for data controllers to choose the best solution for them, if you like, in those circumstances.
Clearly from a security, cybersecurity point of view, the more secure something is the better. Of course it has to be balanced with whether it allows you to fulfill what you're trying to do. I'm sorry if that doesn't answer the question. I'm not an expert on internet protocols. I don't know if that quite gets to the question. It's certainly something that I can say has been raised and we can consider whether it fits into any of the discussions we are having at the moment.
It's something that I think if there are ‑‑ if you have a more in‑depth question on how you think that relates we would be happy to take it away and think about it.
>> KIAN CASSEHGARI: Carl?
>> CARL GAHNBERG: In response to the DNS question or the other question?
>> KIAN CASSEHGARI: The other question except if you want to address DNS as well.
>> CARL GAHNBERG: I'm not sure I would have a response. It's an interesting question and probably relevant for Richard's work, given that the question by the participant was perhaps thinking about one of the rationales behind DoH: you get an encrypted connection to the DNS resolver, and the big players may not be located within the geographical jurisdiction. You might access a resolver outside of your borders, which is a move away from how it's traditionally been done within the access network.
It's an interesting question for Richard for sure.
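As background for readers unfamiliar with DoH, the sketch below (not something shown in the session) performs a lookup against Cloudflare's public DoH JSON endpoint using the third-party requests library; any DoH resolver could be substituted, which is exactly why the resolver may sit outside the user's access network or jurisdiction.

    import requests  # third-party: pip install requests

    # The DNS query and answer travel inside an encrypted HTTPS connection,
    # so the local access network cannot inspect or redirect the lookup.
    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": "example.com", "type": "A"},
        headers={"Accept": "application/dns-json"},
        timeout=10,
    )
    resp.raise_for_status()
    for record in resp.json().get("Answer", []):
        print(record["name"], record["type"], record["data"])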
With regard to the privacy rules, the key is coherence. Some of the things we have seen that, I don't want to say undermine, but come into conflict with some of the benefits of the internet can be illustrated with the GDPR, for example. It was an extremely needed data protection regulation and it has certainly improved a lot for internet users, not only in Europe but globally.
The crux is that although it moves the needle forward in a very positive fashion, it also comes with, for example, an extraterritorial application, where some providers outside of Europe have decided not to make themselves available to users in Europe. If you try to visit, for example, The Atlantic, that U.S. website, it usually says it won't resolve in Europe because of GDPR.
Of course that isn't in any way, shape or form breaking the interoperability of the internet, but you are adding legal conflicts, and suddenly you don't gain the benefits of that global connectivity because you have regulations that prohibit people from making themselves available globally. The point is that coherence is important because it can affect the utility and usefulness of having a global network.
>> KIAN CASSEHGARI: ‑‑ the data transfer models that Richard mentioned become really key for allowing international trade, or the flow of services across borders.
Richard, do you want to say something?
>> RICHARD SYERS: Would you like me to say something on that or wait?
>> KIAN CASSEHGARI: The last question we received, from Jack?
>> RICHARD SYERS: I'm sorry, let me bring it up. Yeah. I think, first of all, I would say the purpose of data protection law is to allow data to flow. So we certainly would hope it is not being used as an intentional barrier to data flows and to economic activity and trade.
Of course I think the problem is, like you stated, when you have differences between different systems. That can create a barrier, not intentionally, but a barrier, because you are transferring data between two places that often have different views on how data protection works. They are coming at it from different views, different optics.
That's where we hope that, in forums like the GPA, we can try to break down those barriers and come up with, you know, common standards, or common principles at least, even though the legal systems may be different. I think we have to respect that different countries have different views on these things. But that doesn't mean it is impossible to build interoperability. I think the basic principles of data protection are well established. Not many countries have a completely different view; it's how you achieve those things that differs. So from our perspective at the ICO at least, and I think I speak for most GPA members in this sense: no, we support trade. We support the ability of data to flow as long as it is in a way that maintains the protections that the data enjoys.
I think the answer ‑‑ well, that's a simplistic way to put it, but it would help to find commonality to allow data to flow more smoothly. I would reinforce the point that the point of data protection, certainly in Europe, is to create a common set of standards so the data can flow unimpeded, without the friction you get with different standards in different countries.
>> KIAN CASSEHGARI: Thank you so much. Jade, what about you? What can you tell us about how mobile operators benefit from cross‑border data flows and how are they affected by data policies? Over to you.
>> JADE NESTER: Can everyone see my slides? I think so. Hi, everyone. My name is Jade Nester, director of consumer policy at the GSMA. It's a pleasure to join you today.
For those of you who don't know what the GSMA is, we are a trade association. We represent the views of nearly 800 mobile network operators worldwide. We also represent about 300 companies from the broader mobile ecosystem, including handset and device makers, software companies, and other companies in the mobile ecosystem. It's a broad array of companies that depend on cross-border data flows, which is why I'm here today talking about it.
Much like Richard, we really want to start with privacy as the basis for discussions about cross-border data flows, and with accountability as a core privacy principle.
Back in 2011, the GSMA developed mobile privacy principles which are based on the commonly accepted fair information practice principles that Richard also mentioned. These were a mobile industry‑facing set of principles, and we also developed privacy design guidelines for app developers based on those principles.
But more recently, we developed this report on smart data privacy laws in 2019. The idea was to have a resource for our members to figure out what the best type of privacy laws are worldwide as they encounter revised and new privacy frameworks in their jurisdictions.
This is also intended as a tool for policymakers. In part because we would get asked in consultations or directly what our views were as a global trade association, so we wanted to have some sort of benchmark tool to talk about what we think a smart privacy law entails.
I think a lot of what's in that report actually mirrors elements that Richard mentioned earlier in terms of embedding things like privacy by design and privacy impact assessments, and having a law that's sector‑neutral, technology‑neutral and horizontal like the GDPR, focused on the risk of harm to individuals, so that it forces organizations to really think through the privacy risk process and mitigate those risks.
But really, what underscores our report and just our view on privacy and data flows in general is the importance of accountability. We think that's the bedrock the other elements are built on. We think accountability can facilitate data flows globally. Data flows require trust between governments and individuals, governments and companies. Privacy is a critical part of trust. Being able to demonstrate as an organization that you are doing the right thing, abiding by the law, that's what is also going to facilitate trust. This has been hugely important for us.
Why are cross‑border data flows important to the mobile industry? They are important for the digital economy in general: you can build a business in your local area and have global customers, both for goods and services. And our operators, through connectivity, are fueling that growing global data economy.
But our members also need to be able to send data across borders, sometimes for reasons like having a global footprint themselves and needing to send data from one jurisdiction to another for HR purposes. Increasingly, some elements of networks that were physical hardware are becoming virtualized or cloud‑based. So we need cross‑border data flows to make it worthwhile to scale up to those more efficient cloud‑based systems, which creates benefits for consumers if those are managed effectively. These are some of the benefits.
Some of the issues that apply to mobile operators are that telecom license obligations sometimes include restrictions that prevent mobile network operators from sending data across borders. Sometimes that's public, like in India, where there is a universal license that includes data localization requirements. Sometimes it is a confidential requirement that the mobile operator knows about because it is a license obligation, but the broader public may not know about it. Also, the telecom operator has to deal with the license obligation whereas other players in the ecosystem may not. It's not an issue of wanting to lower the bar; it's about bringing the bar up for everyone in a way that's workable and facilitates data flows. We have issued a number of reports in this area. One of these reports, on realizing benefits and removing barriers for cross-border data flows, made a number of recommendations, including committing to cross‑border data flows and removing unnecessary localization requirements. We had recommendations similar to those in the smart data privacy laws report, including removing legacy sector-specific privacy rules, particularly if they include localization requirements, encouraging regional data privacy initiatives ‑‑ basically the regional version of some of the frameworks that Richard spoke about earlier ‑‑ making privacy frameworks more fit for the digital age, and addressing government concerns.
We see them as different issues with different solutions. We know these are ongoing debates, particularly in the European context. The European Data Protection Supervisor ‑‑ the Board put out guidelines on data flows post-Schrems II, where the Privacy Shield was invalidated. A number of recommendations in those guidelines are similar to the ones we have in our report. In addition, we have been looking at specific regional frameworks more intensely, taking this and doing a deep dive. In ASEAN in particular, starting in 2016, we began engaging with the ASEAN countries and also looking at the ASEAN privacy framework and figuring out how it can be useful for increasing privacy protections across the region and also for facilitating data flows.
We took that and conducted a detailed survey of members and governments in 2017 about privacy practices and about what would be useful for the member states and for our members in terms of facilitating data flows and best practices. We used some of that information for a report on regional privacy frameworks and cross‑border data flows. This report is quite detailed and substantive, looking at the commonality between the ASEAN privacy work and the other global frameworks Richard mentioned, such as Convention 108 and the OECD guidelines, mapping the privacy principles and trying to figure out how certain elements of the APEC Cross-Border Privacy Rules system could be brought in to cover the ASEAN area. One thing we landed on was the development of something called a regulatory pilot space, which is a sandbox specifically focused on discrete data flow issues. This is a safe space to test policy ideas, specifically looking at cross-border data flows between a jurisdiction that has a localization provision, for example, and one that doesn't, or between one with a more developed privacy framework and one with a less developed privacy framework: basically, how to bridge the gaps between frameworks and build trust between a specific set of ASEAN member states. If we look at the next slide, this is a rough diagram of how this works. There is a paper that explains the space more, but basically, for those of you familiar with the APEC Cross-Border Privacy Rules, it is a lighter-touch arrangement specifically between two member states to facilitate data flows between them.
So there are governance controls around it and there is an accountability and enforcement mechanism as well. This should help with trust between countries.
But we are still working on this. We still have support from ASEAN. We are looking at the best countries and companies to get it off the ground. We are excited to have support from the region, but it is still a work in progress.
I'll end on this slide, talking about some of the work we have been doing across the Middle East and North Africa. We did a report, which is very long, but there is also a summary of it as well. We looked at the privacy frameworks, where they exist, and at telecom and other types of laws with privacy and data governance elements across the MENA region, and we mapped those against GDPR principles.
The idea here is to figure out, like Richard said earlier, where the principles are in common, so that later, if you want to facilitate data flows across the region or across two or three countries specifically, you can figure out where the gaps are between countries and develop some sort of accountability mechanism to facilitate data flows between the countries and make sure there is trust. The interesting thing about the region is that even from when we put this out in 2019 to now, there are new privacy laws in the region. The Egyptian privacy law was passed after we put the report out. We know there are a few draft pieces of legislation in the region as well. This is an interesting area where things are changing and you're getting GDPR‑style rules in these different countries and different independent regulators as well.
I'll close on that and look forward to any questions. Thank you.
>> KIAN CASSEHGARI: Thanks, Jade. I'll come back to one of your slides on this sandbox. It would be interesting to dive deeper into that.
I'll give the floor to you, Sean, and then we'll put up questions to Jade's presentation.
So, Sean, we have discussed data privacy and privacy governance. As I said in the introduction, data policies can be introduced to achieve different types of objectives. One of them is to promote and foster competition and innovation. You work in the health sector, so it would be great if you could share your experience with us on how data can be used to foster competition in that sector.
>> SEAN MANION: Certainly. Thank you. Thank you to Lee and Agata and Kian from the World Trade Organization, along with the IGF, for having me here. I enjoy moving outside of the health sector sometimes. I think there are greater challenges, and greater opportunities for growth, in the health sector that we can learn from.
My name is Sean Manion. I'm a neuroscientist by training and I spent time overseeing research, where I got to know the pain points and opportunities around data flows and data stoppages.
I moved into the health tech sector in 2017; blockchain technology was for me the inroad to looking at what could be done with emerging technology in health care. What I would like to do is introduce a new paradigm of what could be done with health data, in a way that probably has implications not only on the regulatory side, but also for cooperation and competition and new opportunities in trade. Blockchain technology, and I'm not going into the details, has arisen rapidly over the last decade. Health lags behind something like finance or supply chain, but there are a large number of companies that have begun setting up consortiums to explore the use of the technology.
The value proposition for using this technology in health care is greater from a consortium standpoint: sharing data in order to compete on other aspects of the business.
In this, it so far resembles a lot of other industries: use in the supply chain, the health care supply chain, use in payments and remittances, and use in sharing provider directory information has really been the baby steps, the initial look at what this technology can do.
But health data itself is sort of the gold. It's the most valuable thing, but also the most heavily regulated, which sets up some of the challenges for how that data flows, not only for individuals' health but also in a public health crisis like the one we are in now with COVID‑19, along with the opportunities for different types of businesses to thrive in this new environment.
When thinking about health data it is good to think about not only regulatory compliance but also the cybersecurity aspects, the privacy aspects above and beyond those written into regulation, and of course bioethics and the identity of the individuals and organizations involved in health care and health research. So it is this wider framework we keep in mind as we look at what some of the new technologies can do for sharing the information that's necessary while not sharing the data that's not necessary.
This is the new paradigm shift, and if there is one thing you can take away from this, it is that moving copies of data is no longer the only way to create value from that data. We are looking at how you can share knowledge and not necessarily have to move the data, so that data flows, and the regulations relating to those flows, no longer become a barrier to more rapid advances in health care and health applications at both the population level and the personal level.
In this, I will use this diagram to visualize what we are talking about. There is a tremendous amount of source data in any industry, but in health care especially. Oftentimes people move from one locality to another, or need treatment for some sort of problem in one place and need to get the data from another place. There are regulations, you know, the GDPR, HIPAA in the U.S., and in many localities a variety of different privacy-oriented health regulations, that can be a constraint on moving the data around, on even getting treatment with the doctor having the best data available. Of course in a public health crisis like the current pandemic, that becomes a very big barrier to advancing not only public health treatment but the innovation and some of the new creation that can come out of that.
By looking more closely at this data and focusing on the derived data, we have a smaller set of data that, if we can share it, may offer ways to advance treatment, as well as the business associated with innovative treatments, without violating any of the regulatory barriers already in place.
Data is becoming a very big challenge and issue just in its volume. In health care it is certainly the case that there is a huge, huge amount of health data coming into play. As things get more advanced, take imaging for example: you used to get an x‑ray, and that's a small amount of data; an MRI is a much larger amount of data. As more and more technology advances how we treat and how we assess health, sharing that data becomes almost impossible to do. It's not only a regulatory issue. It becomes a real world challenge in terms of where that data is stored and how quickly you can transfer large amounts of data.
So this new paradigm and approach is really looking at not moving data around, but instead leaving data where it is and having local data repositories instead of large data silos. Those can be tapped into in a unique way with a new array of technologies.
Here I won't get into heavy technical depth but I want to introduce three areas of technology: blockchain, privacy in depth, and decentralized AI. Built on the current Web 2.0, this is the foundation for what in health is being called Web 3.0.
Blockchain provides a variety of functionality: the deployment of smart contracts, which are just programs that automate some analysis or quality assurance, things that would be done already, in a reliable, automated, verifiable and auditable way. Cryptocurrency, as with Bitcoin, becomes a much more advanced application in health care; here it is optional, but it is something people are exploring. Blockchain alone is just a rail of security and trust that different parties can tap into. The real power is in deploying decentralized AI, an algorithm sent to the data so you don't need to move the data around and just have the derived data come back to you; that is where a lot of this new functionality comes from.
Right now machine learning is very powerful, but there are a lot of challenges with the ethics of it as well as the viability of it when it comes to who touched the data and what data was used for training the algorithm. All of these things can be open, transparent and auditable with the use of blockchain. That's why partnering these is powerful.
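As a rough, single-party illustration of that audit-trail idea (a real blockchain replicates the ledger across parties and adds consensus; the actors and dataset names below are made up):

    import hashlib
    import json
    import time

    ledger = []  # append-only log: who touched which dataset, and for what

    def record_event(actor, dataset_id, action):
        previous_hash = ledger[-1]["hash"] if ledger else "genesis"
        entry = {
            "actor": actor,
            "dataset": dataset_id,
            "action": action,
            "timestamp": time.time(),
            "previous_hash": previous_hash,  # chaining makes silent edits detectable
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        ledger.append(entry)

    record_event("research_team_a", "imaging_set_17", "trained_model_v1")
    record_event("auditor", "imaging_set_17", "verified_consent_records")
    print(json.dumps(ledger, indent=2))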
Finally, in health care especially, you need to preserve privacy. There are trusted execution environments which, when layered over the two other applications, allow you to essentially have your data lake, or an individual's data lake, exist across the planet and never move the data outside of the local jurisdictions, deploying algorithms and smart contracts that can be set for the particular privacy regulations of each place where the data may sit. You can pool and derive this combined data for a population health question I may have, or an individual question I may have. I think that changes the paradigm entirely of what can be done with data, how data flows can and will match the regulatory barriers that are necessarily in place, and how we can do things to advance health from a population level to a personal level in a new and different way.
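To make the "move the algorithm, not the data" idea concrete, here is a minimal federated-averaging sketch in Python; the site names, data and model are all hypothetical, and a real deployment would add the consent, audit and trusted-execution layers Sean describes rather than this bare loop.

    import numpy as np

    rng = np.random.default_rng(0)

    # Each site keeps its records locally; only model updates ever leave a site.
    local_datasets = {
        "hospital_a": (rng.normal(size=(100, 3)), rng.integers(0, 2, 100)),
        "hospital_b": (rng.normal(size=(80, 3)), rng.integers(0, 2, 80)),
        "clinic_c": (rng.normal(size=(40, 3)), rng.integers(0, 2, 40)),
    }

    def local_update(weights, features, labels, lr=0.1):
        # One logistic-regression gradient step, computed where the data lives.
        preds = 1.0 / (1.0 + np.exp(-features @ weights))
        gradient = features.T @ (preds - labels) / len(labels)
        return weights - lr * gradient

    weights = np.zeros(3)
    for _ in range(20):
        updates = [local_update(weights, X, y) for X, y in local_datasets.values()]
        sizes = [len(y) for _, y in local_datasets.values()]
        # Only derived data (the updated weights) is aggregated centrally.
        weights = np.average(updates, axis=0, weights=sizes)

    print("aggregated model weights:", weights)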
I'll stop there and you can jump to questions for me and the others as well.
>> KIAN CASSEHGARI: Brilliant. Thank you so much, Sean and Jade. Agata, do you have follow up questions in the Q&A?
>> AGATA FERREIRA: Yes. Kian, there are follow up questions. There is a follow up question to Richard and there is a follow up question for Jade as well.
Maybe I'll start with the one for Jade, because Jade was talking about frameworks. The question is: what suggestions do you have, for the framework you mentioned, on how to convince countries that data localization is not the answer to concerns about foreign surveillance or law enforcement access outside the country?
>> JADE NESTER: In terms of the foreign surveillance question, we pointed to encryption in particular in that report. I think that's something that's also been echoed in the guidance in the EU on data flows with the U.S. You know, that data needs to be ‑‑ or one of the safeguards you can apply is that it is encrypted and the key is kept in the EU.
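A minimal sketch of that safeguard, assuming a made-up record and the widely used Python cryptography package (pip install cryptography); the key is generated and retained by the exporter, so whoever stores the ciphertext abroad cannot read it:

    from cryptography.fernet import Fernet

    # Key generated and held by the data exporter (e.g. inside the EU).
    key = Fernet.generate_key()
    exporter = Fernet(key)

    record = b'{"subscriber_id": "12345", "usage": "example"}'
    ciphertext = exporter.encrypt(record)

    # Only the ciphertext is transferred or stored in the other jurisdiction;
    # without the key it is unreadable to the importer or a foreign authority.
    print(ciphertext[:32], b"...")

    # Back under the exporter's control, the key recovers the record.
    print(exporter.decrypt(ciphertext))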
I think in practice, as technology becomes more sophisticated, a number of the techniques that Sean just mentioned will actually be used more: things where you can learn from the data remotely, things like secure multi-party computation, where not every party involved has access to a full data set but you can still glean insights from the data. Things like that will be more critical, and more people will be implementing them. So in practice, you are going to have to use some sort of cryptographic approach if there is no legal basis for the access, basically.
Or if, for example, in the use case it was deemed disproportionate. In terms of the law enforcement element, in the report we mentioned things like the effort in the U.S. to advance the CLOUD Act and CLOUD Act agreements with various countries, basically some sort of improved version of the mutual legal assistance treaty framework.
In the EU there's been an effort to look at the E‑evidence process, basically even within the EU the process to get information from one country to another and from service providers in different EU states and there are still issues in terms of streamlining the process and making it effective. And to prevent service providers from being basically deputized by law enforcement to assess the legality of requests. That's not a position that we want to be in where service providers are having to evaluate every request. So there does need to be some sort of better centralized way of dealing with it. The short answer is it is up to, in that case stakeholders to work with governments to try to push them in the right direction and to figure out something that works in the governmental level.
>> KIAN CASSEHGARI: Thanks, Jade. Let's take a couple of questions. Agata, do you have more follow up questions in the Q&A?
>> AGATA FERREIRA: There are quite a few follow-up questions. I think one that is quite interesting relates to unfair competition. Victoria is asking whether data protection laws are also a way for privacy‑friendly countries to prevent dumping and unfair competition from companies established in less privacy‑friendly countries, which could offer services for free or at lower prices because of increased opportunities to monetize data without the consent of the user. So that is a question about the unfair competition that some countries may be facing from those bigger companies from other jurisdictions.
I guess that's a question to all panelists, but particularly to Richard, I think.
>> KIAN CASSEHGARI: Richard, if I may, we'll take more questions and then open the floor.
>> RICHARD SYERS: Sure, sorry.
>> KIAN CASSEHGARI: I'm sorry to interrupt you, Richard. I see a few questions for Sean.
>> AGATA FERREIRA: Yes, there have been a couple of questions. One that relates to Sean is about opening access to data. The question talks about data flow concerns and the social, commercial and political value of data. The tension remains because of how various geographies access and share valuable data. While various actors share data liberally in matters concerning health or the environment, in technology and commerce there are various barriers to sharing.
So the question is what work is the internet community doing on opening up access to all data that must be global?
>> KIAN CASSEHGARI: Okay.
>> AGATA FERREIRA: There is another one which I can follow up straight away to Sean. Sean, are you suggesting that companies will adjust to data localization in the health sector by localizing AI processing? And can you provide concrete examples of companies or coalitions that implemented decentralized AI?
>> SEAN MANION: Certainly. I'm sure there are more outside of health care, but to the last question first: Owkin is one of the leaders in deploying federated learning, or decentralized AI, in health care and health research in both Europe and the U.S. The MELLODDY consortium mentioned on my first slide, which is a host of different pharma companies in the U.S. and in Europe, is also looking at how to deploy decentralized AI, and in this case they are using the more advanced combination with blockchain for the auditability.
As for what we are doing: we are precommercial and unfortunately aren't able to disclose our partners at this point, but in spring of 2021 we should be launching our product and can have a little bit more public discussion. What we are doing is bringing this extra layer of privacy in.
In all those instances you are starting to see the exploration, especially in research, because health research allows for consenting. Moving to the other question, whether or not you are getting the deployment of these algorithms locally so the data doesn't need to move: yes, it is happening, at a very early stage. I think companies are looking at the reality, both because of regulatory challenges and because of the volume of data, and looking to do this more and more.
One of the interesting questions, and I didn't have a chance to go into this in this short presentation, but the question of, you know, access to the data and especially the imbalance in regulations across countries.
One of the things that we are looking to do, and we spoke at a U.N. General Assembly‑related event recently and unveiled what we call the Health Utility Grid, is to deploy this type of data access but also patient data control: individuals controlling their own data as a new type of utility and a new way to empower individuals, particularly in instances where there has been historic discrimination or health disparities, to see if we cannot empower those individuals to be the ones who control access.
I don't envision that everyone on the planet will immediately want access to and control of their own health data. Many don't want to be bothered. It may create a system where, just as you have a bank to control your money, you may have health data banks to control your data. And you can apply certain settings, in this futuristic scenario I'm laying out, about who gets access to your data and also when you get recompensated for it.
You know, I may only allow pharma companies who have reached a certain threshold of ethical responsibility to access my data. I may only allow those companies who are giving me some share of profits, if it is a successful drug discovery, to access my data. Putting control of data access in the hands of the individual, rather than it being controlled by each company or each jurisdiction, will level the playing field. But it will also open up new types of opportunity for competition, because if you've got that sort of framework and structure of fairness, if you will, but also wide open access, you can make the pitch: hey, I have a new company, I want to look at people with this behavioral profile and bring a new treatment to market. Groups may be more interested in sharing data with me, a small company, than with some big company that has questionable uses of data, reselling of data and problems in the past. So I think we are able to flip the dynamic, and it is starting to happen in real world practice. It's going to take a little while.
>> KIAN CASSEHGARI: Wonderful. Richard, I will give you the floor, but I want to stress to the audience that if you would like to make a comment or ask a question orally we can turn on your microphone. So please let us know in the chat box whether you would like to intervene and share your perspective and experience with us.
Richard, you had a question, I think.
>> RICHARD SYERS: Thank you. Just to confirm, the question addressed to me was the one about a way to prevent ‑‑
>> AGATA FERREIRA: Competition.
>> RICHARD SYERS: Yes. So I'll presume that's coming from the angle of preventing unfair competition from other countries that don't have privacy law. I'm sorry, I wasn't sure whether it was coming from the angle of whether it creates a barrier to competition or whether it is good for competition.
Very much we would say, of course, the starting point, if you like, in Europe at least, is that individuals have a fundamental right to privacy and a fundamental right to data protection. That's a right that must be respected by anyone offering services to citizens of those countries. So if you are a company operating in a different country that doesn't have those privacy standards but you are targeting your services at EU citizens, then the GDPR says you must comply. Those are the standards to comply with if you want to offer services to EU citizens.
So I don't think it would be a good thing to allow countries without those privacy standards to offer services that benefit, if you like, from using data in a way that doesn't comply with the standards. I think the standards are important.
I would go back to the point, because I know one of the questions in the chat was about this, that not every country has the same cultural values when it comes to privacy. There are countries where perhaps people are more willing to share their data. What's important is that the individual has the choice: they can make an informed choice about the services they use and which data is used as part of those services. Of course, there has to be a recognition on the privacy side of things that with some services there is a value exchange. You can't offer a service for free, effectively; if you're not paying for it, you are paying for it somehow, and that's through the use of your data. The important thing is that if that's what a company is doing, they are clear and transparent with individuals about what data is being used and how it is used, and the individual has the choice and can make that choice.
As I mentioned in my presentation, poor privacy practices won't give you a competitive advantage in the long run. If anything, they are going to give your competitors who offer better privacy protections an advantage.
Ultimately, reputation is important, and I think people are more aware of how their data is being used. So I think people will stop choosing services if they realize those services are not treating their data in a way that's appropriate.
But of course that relies on transparency and the law is there to ensure those organizations provide transparency in how the data is used. I hope that answers the question.
>> KIAN CASSEHGARI: Just to back up, Victoria's question is interesting. I'll give you an example. Yesterday we had a discussion with an E‑commerce platform in Latin America which is the equivalent of Amazon. They are adopting what they call binding corporate rules, a term used in the GDPR as well. Those are corporate privacy guidelines. Any supplier using the E‑commerce platform would have to abide by these corporate privacy guidelines regardless of the jurisdiction in which they operate.
So to Victoria's question, to give you an example, and it is not the real case: if Chile had low privacy regulation but a Chilean company wanted to sell a product through the E‑commerce platform, it would have to comply with MercadoLibre's guidelines. So higher standards are promoted among sellers even in a jurisdiction where you have low privacy standards.
Please don't record that I have ‑‑ I'm not judging Chilean privacy standards.
I'm sure they are really strong. It was just an example.
So we'll take more questions from the audience. Does no one want to intervene, make a comment, or ask a question orally? We can turn on your mic. Let us know in the chat box.
In the meantime, I will ask the speakers whether you have questions you would like to ask each other based on the presentations you have heard from the others.
>> AGATA FERREIRA: There is a follow-up question in the Q&A that refers to what you were saying. It's about balancing different views and concepts. It's a question from Imran: there are different views of the concepts of trust and accountability, and different approaches, in that some societies don't mind data being shared because that's their culture.
By focusing so much on data privacy in the context of the globalization of data privacy, how do you balance those different approaches?
>> KIAN CASSEHGARI: Interesting question. Do we have more questions from the speakers themselves?
>> AGATA FERREIRA: We have a question for Jade from Jack, who asked: would you argue that encrypted data should be considered similar to goods, so that the content type of the data being transferred should then be irrelevant?
>> JADE NESTER: So I think what kind of data it is matters in this pre-assessment process, or pre-transfer assessment process. Because if it is personal data that you're transferring, then at this point, I think regardless of the framework, you should be thinking through the risks associated with the data transfer. So certain protections would be applied to the data, which you probably should have already thought about before it is encrypted.
If you're encrypting it, that's not necessarily going to mean you're doing a better job if it is high-risk data. You should be doing a good job regardless and making sure that's implemented correctly. If it is high-risk data there are other governance controls, in terms of access controls and controls on access to the keys, where there is a heightened requirement depending on the type of data.
If you are talking about it from a trade angle it's interesting, and I don't know if this is a solid answer, but I was interested in a point made earlier in the Q&A about data localization applied to nonpersonal data, because there was a leaked version of a potential European Commission recommendation, or rather a new proposal on data governance. It mentioned localization of data, including potentially nonpersonal data.
A lot of people said that potentially violated WTO rules. So on your question about what kind of data it is, and whether the content type should be irrelevant: I wonder in that case how you apply a privacy exception under WTO rules if the data is encrypted and you don't actually know, at the technical level, whether you are dealing with personal or nonpersonal data.
So that's sort of I think a slightly wonky trade point but something I thought of in the context of that question.
In short, I don't know for sure, but it does seem like the content type once it is in transit isn't as important and maybe could be considered goods. I don't know.
>> KIAN CASSEHGARI: Thanks, Jade. Alexandra? Can you turn on the microphone?
>> Okay. You can do this also.
>> KIAN CASSEHGARI: That's true. Okay. In the meantime, Richard and Sean, I think, want to address Imran's questions on the cultural dimensions or differences in terms of privacy.
>> RICHARD SYERS: Thanks, Kian. I would echo the point I made earlier that choice is the important point. So I think it is a good point, and one we recognize, that different cultures in different countries will have different approaches to privacy. They are coming at things from a different historical background, a different optic they see these issues through.
Europe's data protection law comes very much from a post‑war human rights background. So there is much more sensitivity to those issues than in other countries.
The point is, if you have a service operating in different countries, the more control you can give to the individual over how much information they share, the better it is. One individual may be happy for their information to be shared, happy to make the trade‑off, if you like, between the service they receive and a more personalized service, for example, because they are happy to share more activity data, as opposed to somebody who would rather have a less personalized service but keep their privacy. Having the controls built in, with privacy dashboards and things like that, is a good way of addressing that. By giving individuals the choice of how much they share and how the data is used, and clearly explaining what it means so they can make an informed choice, that's a good way of tackling differences between different cultures.
>> KIAN CASSEHGARI: Sean?
>> SEAN MANION: You know, you have said 90% of what I wanted to say. That was perfect. Just adding, from the individual choice perspective, that I think the array of technologies at our disposal and already being deployed will allow for that level of personalized privacy. Of course, even within countries: in the United States, California has a different level of privacy settings than other states based on laws just passed. We'll bring that down to the individual level, being able to set privacy settings and change them. I might be going through a health problem where I want more privacy than I wanted yesterday. I should be able to readjust those privacy settings as I choose, so each individual is empowered to do that, and to do it as they choose, when they choose. It becomes more of a personal choice than a governmental choice.
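A minimal sketch of the kind of per-individual, adjustable privacy settings Richard and Sean describe might look like the following; the categories, defaults and audit trail are hypothetical illustrations rather than any particular service's design.

```python
# A minimal sketch of an adjustable, per-user "privacy dashboard".
# Categories and defaults are hypothetical; a real service would persist
# these settings and enforce them wherever data is collected or shared.
from datetime import datetime

class PrivacyDashboard:
    CATEGORIES = ("activity_data", "location_data", "health_data")

    def __init__(self):
        # Default to the most restrictive setting; the user opts in per category.
        self.sharing = {c: False for c in self.CATEGORIES}
        self.history = []                      # audit trail of changes

    def set_sharing(self, category: str, allowed: bool):
        """Let the individual change a setting at any time, and record when."""
        if category not in self.sharing:
            raise ValueError(f"unknown category: {category}")
        self.sharing[category] = allowed
        self.history.append((datetime.now(), category, allowed))

    def may_share(self, category: str) -> bool:
        return self.sharing.get(category, False)

# The same person can loosen or tighten settings as circumstances change.
dash = PrivacyDashboard()
dash.set_sharing("activity_data", True)   # happy to trade data for personalization
dash.set_sharing("health_data", False)    # going through a health issue: keep private
print(dash.may_share("activity_data"), dash.may_share("health_data"))
```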
>> KIAN CASSEHGARI: Over to you, Bertrand.
>> Thank you very much for the good panel. I want to thank Jade for having picked my question. There is a lot of debate around the notion of digital sovereignty, with data sovereignty becoming a buzzword in the international environment. As she mentioned, the European Union, including the Commissioner, is using language like "European data must be stored and processed in Europe." There doesn't seem to be a particular distinction between personal data, or privacy protection, and other types of data. I think this is a very important element to take into account in the discussion, because we address most of it in terms of personal data, where it might make sense in a certain number of cases under certain conditions, but when we are talking about industrial data it is not necessarily the case. There is a notion of a paradigm shift in where value is located: the location of the data is now less and less relevant. What matters is control, access and value creation. The fact that you don't have to share the data as such, that there are APIs and ways to access especially nonpersonal data, changes the economics of the data environment. I think one of the challenges is that a lot of the thinking about data sovereignty is still anchored, when it is dealt with in the context of trade, in moving big chunks of data as if it were traditional goods that are rival, instead of having the opportunity to query across borders. It's a completely different type of question.
So I just wanted to broaden the scope a little bit. We have a mental framework that is sometimes anchored in very traditional notions of sovereignty, territoriality and moving data as big blocks from one country to another, whereas in most cases that is not what is at stake. Thank you very much, it's a very interesting panel.
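Bertrand's point that data can be queried in place rather than moved as big blocks can be illustrated with a small sketch like the one below, in which raw records stay with a local holder and only aggregate answers cross the border; the dataset, field names and small-count safeguard are hypothetical.

```python
# A minimal sketch of "querying across borders": raw records never leave
# the local holder; only aggregate results are returned to the requester.
from statistics import mean

class LocalDataHolder:
    """Holds raw records locally; only aggregates ever cross the border."""
    def __init__(self, records):
        self._records = records                 # raw data stays in this object

    def aggregate_query(self, field: str, min_group_size: int = 10):
        values = [r[field] for r in self._records if field in r]
        if len(values) < min_group_size:        # crude small-count safeguard
            return {"error": "group too small to report"}
        return {"count": len(values), "mean": mean(values)}

# A remote analyst queries across the border instead of importing the data.
holder = LocalDataHolder([{"usage_gb": float(i % 7)} for i in range(50)])
print(holder.aggregate_query("usage_gb"))       # e.g. {'count': 50, 'mean': ...}
```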
>> KIAN CASSEHGARI: Thank you very much, Bertrand, for these insights. Who wants to reply to that? Sean?
>> SEAN MANION: To the extent we are talking about health, I think the shift in mental framework is important. It takes a while to get that shift into legislation and interpretation. The internet's past is relatively brief in legislative terms, but it is still the case that we are arming legislators with only sledgehammers and buzzsaws. They don't have the fine surgical precision or understanding. When they say all European data must be housed in Europe and all processing must be housed there, that needs to be dissected into the types of data and the types of processing, and how you are defining processing. It takes a lot of communication and education, both informing those legislators of what tools are at their disposal and what is available, as well as the public, so they know and understand that you don't need to use the tools you use to build a house in order to do the surgery that's necessary in the current data environment.
I think that mental framework shift has to happen on a lot of levels and education is key.
>> KIAN CASSEHGARI: Do we have more questions in YouTube or in the chat box?
>> Nothing on YouTube.
>> KIAN CASSEHGARI: Agata?
>> AGATA FERREIRA: There is a question very much related to what's been discussed: in the long‑term perspective, looking at data flows in general, will there be a distinction between personal data protected by privacy law and other data subject to all types of other regulation and national intrusion?
>> KIAN CASSEHGARI: I think we'll stop there so the speakers can add a last point. We'll provide the slides that will be uploaded on the IGF website.
So I turn now to the speakers to see if you would like to make a last point before we close the meeting.
>> JADE NESTER: I'll make one point on the nonpersonal data point. In the context of COVID‑19, when mobile operators' aggregated and anonymized, so nonpersonal, data was used for research to see the spread of COVID and plan for its possible spread, we emphasized that even though we are talking about nonpersonal data, accountability is still important. I think that's a way to think about it: it's not necessarily about running to figure out which framework has to apply. In general, the parties involved in handling the data need to be accountable, and there needs to be governance between the parties.
>> RICHARD SYERS: I think one of the interesting points in the conversation about localization is looking at the incentives for localization. Why do people want to start making data more localized and require it to be kept in a specific territory? It's a good thing to think about: what are the ways of removing the incentives for people to localize data? If it is a worry over law enforcement access to data, would more common principles and more standards, in terms of agreed approaches to how law enforcement and national security can access data, be a way of removing an incentive that comes from a fear, from not trusting what will happen with the data? The more trust there is in the standards that everyone is complying with, the less incentive there is for localization. There will still be countries that take the approach that they feel they should get the benefit.
(End 11am CT)