IGF 2018 - Day 3 - Salle X - DC Core Internet Values: Link tax and upload filtering, Friction & Core Values

The following are the outputs of the real-time captioning taken during the Thirteenth Annual Meeting of the Internet Governance Forum (IGF) in Paris, France, from 12 to 14 November 2018. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 



>> OLIVIER CREPIN‑LEBLOND: So, good morning.  Welcome, everyone, to this meeting of the Dynamic Coalition on Core Internet Values.  It's 9:00 in the morning.  I see a few tired souls around the table today and I thank you for coming in early.  We've got an interesting session today.  First a quick rundown of what the core internet values are, and then we'll dig into the meat, or the vegetables, depending on whether you are a vegetarian or not, but we'll dig into the food very quickly, speaking first about the GDPR, the General Data Protection Regulation, and how that breaks the core internet values, then Article 11 of the proposed EU Copyright Directive, and after that we will have a discussion on Article 13.  And then we'll have comments from the community.  The agenda that's currently online is the agenda we'll be following.  There is a reference document linked to the agenda, the position paper that was written for this session, which explains the concerns that will be expressed during this session.

We are supposed to have possibly a remote participant, but I'm looking at the speaking queue and it's empty and I don't find them online, so I might have to cover for that person.  But, I think that we can just get started and see if that person turns up.  We might have to shuffle the agenda a little bit.  My name is Olivier Crepin‑Leblond.  I am the chair of this coalition, and have been for a number of years.  It's just one of my hats.  I'm also chair of the European At‑Large Organisation at ICANN, which provides input into ICANN policy, and also the Chair of the UK chapter of the Internet Society, so, several things.

We have distinguished guests with us today, and I'm just going to get them to raise their hands, I guess.  So, Alejandro Pisanty is supposed to join us remotely from Mexico.  It is the middle of the night there, so we'll see.  We have Desiree Miloshevic, who is sitting next to me.  Desiree, in a few words, what are your responsibilities?  What do you do?

>> DESIREE MILOSHEVIC: Thank you, Olivier.  Thank you for your invitation to be here.  I'm listed here as team lead of the Internet Society UK chapter, but I have other hats I wear: sometimes as the chair of a foundation in Serbia, and also as a Board trustee of the Internet Society.  But I'll be speaking in my capacity with the ISOC UK chapter.

>> OLIVIER CREPIN‑LEBLOND: Thank you, Desiree.  Next is Diego Naranjo.

>> DIEGO NARANJO: Yes, I'm a senior policy advisor at European Digital Rights (EDRi) and I work on data privacy and also copyright.

>> OLIVIER CREPIN‑LEBLOND: Thank you, Diego.  On the other side of the table, we have Andrew Sullivan.

>> ANDREW SULLIVAN: Hi, I'm Andrew Sullivan.  I'm the president and CEO of the Internet Society.  I'm sorry I'm on this side but that's because I may have to duck out at some point so I thought I'd be closer to the door and less disruptive.

>> OLIVIER CREPIN‑LEBLOND: Thank you so much, Andrew.  These are our three main speakers in the room.  We'll start immediately with a brief background on the Core Internet Values.  I'm turning over to remote participation.  Do we have any remote participants?

>> Amelia and She‑Chang.

>> OLIVIER CREPIN‑LEBLOND: Okay. Welcome to our remote participants.  We don't have Alejandro Pisanty, so I will cover for him.  The first part is just a rundown of what the Core Internet Values are.  These are technical values, not societal values as such, but technical values on which the internet was built: values such as the internet being based on packet switching, having a layered architecture, and having interoperability for all of the different devices on there and for the different networks to be able to work with each other.  It is an internet, after all.  It also has a number of important values such as openness; the end‑to‑end principle is also very important.  It's very decentralized and it has to be robust.  And it's, of course, scalable, as it started out as a very small network and grew very fast to what it is today.

The values themselves have been defined in papers we have published in prior years.  They are available on the IGF website, where they are given in more detail.  I'm not sure it's a great use of our time, given the only one hour that we have here, to dig deep into them, but really one of the concerns that we have as a coalition is that there are a number of developments out there every year that break Core Internet Values and might make the network very different from what it was originally intended to be.  We're looking here at some of the points which may stifle innovation: if, for example, the openness of the network is not there anymore, then you might start having barriers to new entrants, especially when it comes down to all of the services that you have.  So, the internet is a very particular animal, as one would say, in that it does allow pretty much any start‑up to come up with an idea and put it out there.  There's no requirement for licensing and so on.  In the speech we heard a couple of days ago, there were certainly several instances of regulation that were mentioned, and so there certainly would be concerns, and we might be looking at this next year, as to what type of regulation might infringe on Core Internet Values.

Today, we're looking at the General Data Protection Regulation as one of the recent things that have happened that has certainly changed some of the values a little bit.  Then Article 11 and Article 13, which we'll dig into in more detail afterwards.  When it comes to the General Data Protection Regulation, you certainly have regulation that protects people, but it forces the treatment of traffic based on geographical boundaries, which are not well defined on the internet itself.  Geography is not well represented by IP addresses: there are no borders between the networks, and no concept of national IP addressing.  IP addresses are the numbers, basically, that every single computer on the internet requires to be able to operate, and the concept of having national IP addresses is something which completely changes the way the internet is defined.

The regulation also creates obligations that break the layered architecture: the transit of packets between certain pairs of points has to be interrupted in order to inspect them, with a decision made on whether to let the packets arrive at their destination or else to start a notification process to parties such as the sender, the destination, the operator, the intermediaries and the authorities.

And some countries are establishing and putting into operation surveillance systems that violate the end‑to‑end principle as well, and force layer crossings that affect the openness and decentralization of the network.  These types of systems also affect the scalability of the network: the moment you introduce more equipment between the start and the end, you certainly have a problem when it comes down to the volume of information passing through.

So, we're generally concerned that the layered architecture is not understood globally, and in many instances, when regulation is spoken of, one very quickly gets concerned, listening to these calls for regulation, that they do not really define which layer is to be regulated.  The internet has a layered architecture, starting with the basic layer being the physical layer and the telecommunications side of things, and of course, that is already regulated.

But most of the time, in fact 99 percent of the time, one is really looking at one of the highest layers, which is the content layer, and that is not very well defined or even understood outside the technical community that put the internet together.

In the past, we looked at freedom from harm.  That was last year's discussion, and I invite you to have a look at the video from that session.  Since then, better definitions of harm have been forthcoming, and actions have been started in different fora, such as standards and architectures for IoT devices, the Internet of Things.  More recently, the British government has come up with a paper that provides some guidelines, some best practices, on how to achieve security by design in IoT.

Again, here, we're looking at many different layers, from the actual device itself to the actual services on the device.  I'm not quite sure I've covered GDPR well enough, but I could turn it over to Desiree Miloshevic.  If you're okay with that, Desiree.

>> DESIREE MILOSHEVIC: Thank you.  I'll briefly respond to that as to what I also think about GDPR and whether or not it infringes on any of the Core Internet Values.  On the issues that I've seen, I think there are three main things.  The first is that the GDPR reinforces user‑centric rights.  That means that internet users should be in control of their data, and therefore it actually doesn't break a core value; it reinforces the ability of an end user to control the information that is held about them.  I also think that with GDPR, regulation is finally on the side of internet users.  I think this is great news.  We don't have many regulations where we actually think about how internet users are affected, and that is, I think, a core value of this regulation as well.

So, congratulations to the European Commission and the Parliament for adopting this regulation in that sense and reinforcing internet users' rights.  It is also the highest standard of protection, and that is arguably what causes the GDPR frictions we have seen, because it obliges other countries to follow the regulation in protecting EU citizens.  So, whilst it is a good thing that our internet rights are protected, that we have control of information, and that we can delete information, one of the novel things in GDPR is actually that this right also applies in a machine‑readable form.  That means that you can take your information from, let's say, a social platform that you use, not to mention names, and take it to another social platform in this machine‑readable format, so that gives it this additional user‑centric core.  But, to go back to the expectations, implications, and difficulties of the extraterritoriality that it imposes: many citizens would like this data portability, but if many other countries were to come up with regulations that they would impose on an extraterritorial basis, it would really be a nightmare to comply with all of them.

So, that is one of the things that is proving difficult, so no credit for that, but credit for a high standard of protection and a high standard of regulation.  We're hopeful that countries who do not have privacy laws would therefore try to synchronize and adopt similar privacy laws, and this is definitely happening in Serbia, because it's one of the countries wishing to join the EU, so they have already drafted a similar law that would protect internet users' control over what information platforms and companies hold about them and have on record.

But, lastly, to finish the comment on the GDPR so we can all get into the nitty‑gritty discussion about other things, I think it's fair to say something about the false expectations around how this regulation is being implemented.  European users, for example, when they go to some of the non‑EU websites, are asked for additional information: you have to give consent again, and you have to give more permissions in terms of terms of service, for example.  Therefore, it's questionable how much extra permission you're now giving to those websites that, as a user, you visit, because you currently have to opt in to many other sets of terms of service from those players.

And it has definitely been a costly exercise, but it's also good to say that it is proportionate: if you are an industry player that holds a lot of data about internet users, you proportionally have to take measures to safeguard and protect it and not sell it or pass it on, but if you only hold a small amount of user data on your platform, obviously you would then do less work, a proportionate amount of work.  So, for example, the Chicago Tribune newspaper has decided not to serve European internet users with its content, because maybe they have done a calculation that, for them, it's either uneconomical or they would rather avoid the risk of fines in case they breach any of the GDPR rules.  So, we're seeing this chilling effect, if you like, of internet users in Europe not being able to access all the content from outside the EU.  I can invite others to also comment on GDPR if we want to continue on that topic, and then go back to the link tax.

>> OLIVIER CREPIN‑LEBLOND: Yeah, thank you for this, Desiree.  I'm a little concerned about the time, so what I was going to suggest is that we have the whole discussion on all the topics at the end, by opening the floor.  Otherwise, we might end up running out of time for the presentations that we have today.  But, thank you for your contribution on this, and for showing that it's not only negative, I guess.  We've got a bit of everything, and that's one of the points, actually, with regard to these developments: some of them actually do reinforce some core values, and we are often too intent on seeing what doesn't work, or the threats, while not recognizing enough the things that actually are going in the right direction.

So, I guess I can give you the floor again, now, then to move on to Article 11.  And with a brief background of what this is.

>> DESIREE MILOSHEVIC: Thank you, yes.  I'll continue with Article 11, which is part of the EU copyright reform.  This specific article is also known as the link tax.  On that article, the EU Parliament will enter final negotiations with the Council, and the text is expected to be drafted and finalized in January of 2019.  However, each of the European countries may have a different implementation of this new EU copyright reform.  It's part of the 2016 copyright directive proposal.

So, what is actually particular and interesting about this Article 11?  First of all, what it's about is that news aggregators like Google News will have to pay a small publisher's fee for including on their platform a link to an article that a publisher has published.

But, as we know, the devil is in the details.  The fact is that although it only refers to companies who may wish to reuse the articles that are published by some publishers, it's not quite precise about how many additional words, beyond the URL itself, may be used.  And you know that a URL is something like a long string: http, www, something dot html and so on.  So, it's really interesting that you cannot copyright the URL itself, but then the link tax says that you can only use a few additional words.

And if we look at the regulation in Germany specifically, its version of this link tax, there has been very little evidence that it has been possible to actually get any money from any news aggregators, based on the current state of the law in Germany.  It's been very difficult to answer questions, especially from the Green Party, since 2013, about what really constitutes these few additional words that you can include to describe the link that you're copying.  So, I think that's probably one of the main issues: these rights are assigned on a geographical basis, so the links that could be copyrighted with a few additional words only refer to the publishers in the European Union.

And that has to be respected by the news aggregators across the world.

The other point probably worth mentioning when it comes to Article 11 is that individual users are exempted from these regulations, so individuals would be able to copy those links; it only refers to companies.  However, it will also negatively affect the content that is generated by internet users.  We know that we are all participants and user content generators at the same time.

And I think this link tax also clashes with the openness of the internet, one of the core internet values.  Because we know that linking, in order to get further information and get from one page to another, is how we learned to use the internet: to share information, to reference other resources and so on.  So, the fact that internet users, bloggers or journalists who are writing would not get any similar treatment is a problem.  And there's also a problem with decentralization, because what we're going to see is a concentration toward the big content providers who can actually afford the costs brought by the regulation.

And that will break this decentralized value that we are working towards: to have as many different resources as possible, and many different, not concentrated and not centralized, ways of controlling and regulating the network.  So, I'll stop here and be happy to answer any questions when we go deeper into our discussion.  Thank you.

>> OLIVIER CREPIN‑LEBLOND: Thank you very much for this, Desiree.  Now we turn over to Diego Naranjo for Article 13 and I believe Diego has some slides so if we could have the slides up, please.  The slides?  The slide deck, please?  Sorry.

>> DIEGO NARANJO: Yeah, we could go to the second one, thanks.  To the second slide, please?  Can you go to the second slide?  Yeah.  Don't worry about the slides.  I'm always afraid when I see someone with slides, but there will not be many words.  On Monday, you heard a fable about the protection of ‑‑ and how important it is that we protect them, and blah blah blah.  I agree with the idea.  I don't agree that the solution is to break the decentralized internet by filtering everything.  The directive was proposed in 2016 and, among many other things besides Article 11, this Article 13 proposes a change in the liability of platforms in a way that will impact the internet as we know it to a great extent.

The idea behind Article 13 is quite simple.  Authors, or rather the big multinationals and the collecting societies, say they don't get enough money from some big platforms such as YouTube and others, because they don't have leverage to negotiate with them.  They do get some money from them; they don't get enough, apparently.  That's probably true.  I may agree with them.

But what they propose, what the Commission proposed, and what these stakeholders are supporting, is sort of a preemptive censorship.  They propose that these platforms are now going to be directly liable.  So instead of the current system under the E‑Commerce Directive, where these companies need to be notified of some illegal action being exercised through their servers and then take the content down, monetize the content, or take any other similar action based on notice‑and‑action procedures...

They will need to preemptively prevent the availability of content.  That's what the text from the Commission says, and that's what the Council's text is proposing as well.  And this has been launched with a lot of cheerful comments: we're going to protect authors, they're starving, we need to do something, which is true.  But if we do this and change the liability of platforms, what's going to happen is that they're going to try to avoid being sued.  They're going to try to avoid any problems with the rightsholders, and by default, they're going to overblock more of the content.  If we can go to the next slide, please.  One of the problems that I see with algorithms regulating free speech on the internet is that they don't work.  There's rich literature, as the impact assessment of the recently launched terrorist content regulation acknowledges, saying that these filters won't work.  One example I can mention is that James Rhodes, the famous pianist, was trying to upload a piece of himself playing Bach in his living room.  He uploaded it to YouTube.

Somehow the algorithm decided that that was a piece owned by Sony Music.  Bach died a few hundred years ago, and of course his music is outside of copyright.  Anyway, that content was taken down, and then he had to fight to get his own video of himself playing Bach in his living room back online.  That and many other examples make us feel that this could be the next phase of using the internet to regulate free speech, because although we currently have a good system, there are threats in some states, namely Hungary and Poland, but now also Italy and others, where we feel they could start making some sort of DMCA‑style requests saying, okay, this content is against copyright, so please take it down.

The other problem is that the algorithms do not recognize things like this one.  This is actually a real image; I just put in the speech bubble.  But, it's a real image, and are they going to recognize parodies?  Are they going to be able to allow memes or any other sort of free speech?  How is that going to work in practice?  How are we going to have a proper redress mechanism?  Next slide, please.

What's going to happen in the end, as we see with Content ID on YouTube and on many other platforms that use similar mechanisms, is that when you go to any of these platforms and say, hey, I have this legal content that's been taken down wrongly, I have a right to parody because I live in the UK and that's an exception, blah blah blah, that's very nice, they won't go talk to their lawyers to ask if that's true or not.  They're going to say, yeah, that's very nice.  Sorry, this is against our terms of service.  Then, good night and go away.

That's what's going to happen.  That's why the redress mechanism is not going to work.  The other reason is that, although it's nice for data protection geeks like myself, they say in the copyright directive that platforms will not collect personal data about the users.  So, if I go to YouTube and ask them to put some content back because it's mine and I think it's legal, what are they going to say?  They're going to say, well, I don't know if you uploaded the content or not.  I don't have any data about you.  Why should I respond to that request?  Again, terms of service.  Bye‑bye, have a good day.  So, that's probably what's going to happen.  Next slide, please.  I'm almost reaching the end.  So, those are the three main elements: the copyright directive's Article 13, in all of its versions from the Commission, the Council, and implicitly the one proposed by the European Parliament, brings upload filters that will not work, that will block legal content, and then brings useless redress mechanisms that will not help users.  The good news is that while we are here in Paris, although we had the speech I was mentioning before on Monday, there is also a series of ‑‑ so I think there's sort of a revolt, and we have until around February, during which we will be active on saveyourinternet.eu, trying to bring this measure down.  So, thanks a lot.

>> OLIVIER CREPIN‑LEBLOND: Thank you very much for this, Diego.  And thank you for this call to arms.  Paris is known for this sort of thing, so great.  Great to hear this.  Now, let's move over to Andrew Sullivan for his comments.

>> ANDREW SULLIVAN: Yes, so I am Andrew Sullivan and, as usual, I try not to speak for other people.  But I think there's an important theme running through the discussion that we're hearing here, and it's really this.  Remember, these core values are not political values, right?  What we're not saying is, oh, this is the way the world should be.  What the core values say is: if you want an internet, this is what you're buying into.  You don't have any choice about this.  Interoperation is a necessary condition for having an internet.  If you want an internet, then you have to have more than one network.  You can't internet on your own.  So, the key thing is that you have to have this interoperation.  And in order to have interoperation, it seems to me, you have to have operation at all.

The fundamental problem with most of these mechanisms is that they're not actually intended to allow the network to work.  They're an attempt to fix a social problem by messing with the underlying technology.  And tech fixes are frequently ineffective for social problems.  Take, for example, the link tax.  The link tax, as described, well, to the extent it's ever been described, is actually impossible.  It's technically impossible because nobody will ever deploy it.  The only way that it will ever get deployed is if everybody agreed that I really want to be taxed.

Well, that seems unlikely.  And since it's unlikely that people are going to sign up to be taxed, they're never going to click on a link that causes them to be taxed.  All of the clients will simply not implement the necessary technical requirements to, you know, cause the tax to take effect.  And there are only two possibilities at that point.  Either the government authority can say, well, you're not allowed to use a backward compatible mechanism, in which case the internet is over.  Or you are allowed to use a backward compatible linking mechanism, in which case only the backward compatible mechanism is going to be deployed by clients.  Those are the only two possibilities, and I've yet to have anybody explain to me how we get out of that.  On the internet, if you actually want people to deploy things, you have to give them a reason to deploy them.  There is no self‑interest on the part of the client here to deploy this thing, so it's not going to happen.  And we don't have a network such that any government, well, perhaps with the exception of one government in the world, but that's really not on the internet, is in a position to say, okay, tomorrow we're going to upgrade all of the computers on the internet in order to have this effect.  Every time I hear one of those stories, I want to remind people: the last time we forced everybody on the internet to upgrade on the same day was 1983.

We had a copy on paper of the names and addresses of every single person who was connected to the internet in those days, and it didn't work anyway.  We missed the deadline.  That's the reality of deploying things on the internet.  So, what we have to do is stop imagining that that's the world we're living in and instead accept that what we have to do is create the incentives for people to deploy things that are good for them.  That's the reason that the internet works.  That's the way that we're going to solve any of these kinds of social goods that we want.

None of this is to suggest that copyright is unimportant, or that authors getting paid is unimportant, although whether giant media companies getting paid is important is maybe a different question we could ask.  None of this is to suggest that the viability of journalism and so on is unimportant.  Similarly, of course, protection of individuals' data and so on, all of that is really important.  But the way in which proposals to do this are proceeding is as though this were a centralized system that can be directed from above.  That's the fundamental mistake, and it's a technical mistake.  It's a deep, technical mistake.  What I have as a question for this Dynamic Coalition, then, is how we can convince governments that they're making a category error.  They're proposing legislation that is not actually going to deliver the benefits that they want.  It appears, for instance, that the GDPR is almost exclusively beneficial to the very large organizations that are the target of it, that are the worst abusers of the data, that have the most data and are most likely to lose control of it.  That seems like a problem, but they're the only ones who can afford to comply.  Everybody else is in a position where they can't, and it's going to choke out new entrants who might have innovations that could help with this problem.  The same thing with these two articles that we talked about, right?  These things are really chilling for people who might come along with proposals for how actually to fix these things in a distributed and internet‑like way.  And I think that's the critical thing: we need somehow to bring governments to understand that the legislation they're proposing, and the mechanisms they're using for this, are actually harmful to the end goal that they're trying to achieve.  Thanks very much.

>> OLIVIER CREPIN‑LEBLOND: Thank you very much for this, Andrew.  And I might add that a couple of days ago, on Monday, Susan Wojcicki, I hope I say the name right, the CEO of YouTube, published a post on the YouTube Creator Blog, originally appearing as an op‑ed in the New York Times, titled along the lines of the potential unintended consequences of Article 13, where she certainly points out the effects.  It takes an example, the song Despacito having, I think she mentioned, hundreds of rights sources, not just one or two or three.  We're dealing with very large numbers here, so that then becomes an absolute forest of problems.  I think we can now open the floor for comments or questions.  First, I was going to ask Desiree if she has any comments, and in fact, I can also ask Diego if you've got any comments on anything else that's been said, and then start gathering questions and comments.  Desiree?

>> DESIREE MILOSHEVIC: Thank you, Olivier.  I think I am mostly in agreement with what the other speakers have said, and definitely on the last point: maybe some big social platforms would be able to afford both the link tax and the implementation of filters, and that will strengthen them even more than today.  With regards to GDPR, I'm not so sure that I agree, because the penalties that they are being threatened with could definitely be an incentive to correct some of their behavior.

>> OLIVIER CREPIN‑LEBLOND: Thank you, Desiree.  Diego?

>> DIEGO NARANJO: Yeah, I was going to say the same thing.  You say the GDPR is beneficial to the biggest organizations; I agree that it is beneficial, but to society as such, not to the big tech companies.  If this was beneficial for these big tech companies, they wouldn't have been lobbying so hard against it for four and a half years, in what has been described as the biggest lobbying storm ever in the European Union.  But I agree with something else: there's a trend in European policy, and I think worldwide as well, that when we have any problem in society, we need to solve it with tech, with technology.  So the idea behind this is: technology is the solution, now what is the problem?  We have child abuse, we can use technology to solve it.  We have terrorist content, we have technology to solve it.  We have copyright infringements, we have technology to solve it.  And I think that's one of the ways we can really break the way our technology works, and the way we want our society to be.

Finally, I'm a bit scared by those hundred or more rights sources on Despacito.  I think one is too many, but I'll leave it there.

>> OLIVIER CREPIN‑LEBLOND: Thank you, Diego.  Okay.  This is a big room, so we'll go clockwise around the room.  I see several hands, and we'll start with Sheva.  If I could just ask all speakers to please introduce themselves when they take the floor.  Thank you.

>> I'm Sheva Subramanian.  I'm part of this coalition.  The president of France, Macron, inaugurated the IGF, and it is a great step forward that France hosted the IGF, but parts of the French president's speech actually threaten to take the internet backward a little, which brings me to the point of the categorical error that Andrew was talking about.  And part of this problem of the GDPR making a categorical error, or a government taking a categorically wrong position, comes from governments not receiving proper inputs and advice.  Andrew, as the president of the Internet Society, looking at the situation, have you done enough to reach out to governments at the highest level?  To give them inputs and an understanding of how the internet works?  Have you done enough?  Thank you.

>> OLIVIER CREPIN‑LEBLOND: Thank you, Sheva.  Let's take a few questions and then answer them together.  Sébastien Bachollet.

>> Thank you.  In the same vein, but I will not comment on my president's speech.  Yet somebody in a tweet said we want to be an undemocratic company, and I don't want to finish on that here.  But one point is how we will solve this.  Okay, we can ask Andrew to go around the world and meet all the queens and kings and prime ministers.  Thank you, Andrew, for taking on this duty.  As an end user, I appreciate your willingness to do that, but frankly, you are not God.  Not yet, Andrew.  And I think we need to ask a question: how do we do something?  And the second point is why this Dynamic Coalition, as it's supposed to be within the IGF, doesn't have representatives from the governments here.

Because if we can't find a place, and the IGF was supposed to be the place to have this kind of discussion, if we have one side making a speech on a stage and the other talking here, that means the IGF is not doing the task it was supposed to do.  Then we are happy to meet together, and I am happy to see you, Andrew, but that's not the way we need to do it.  I'm sorry, Olivier, I know I'm running a little bit long, but I think it's important.  Did the president have some inputs?  Yes, definitely.  For example, Andrew knows very well that there's an ISOC French chapter, and they give inputs to people who have some link with the current government.  Therefore, it's not a question of them not knowing; they know.  But how do we have a discussion with them all together?  That's the point, and I think the Dynamic Coalition must be the place.  Thank you.

>> OLIVIER CREPIN‑LEBLOND: Thank you, Sébastien.  I know that Andrew has to run off to another session, so I'll give him the floor.

>> ANDREW SULLIVAN: Okay, thank you.  The short answer is: have we done enough?  Apparently not, because the wrong answer came out.  Obviously, if I just tried harder, more people would come around to this.  I'm perhaps a little more jaded than that.  I'm not totally convinced that all of the people who are participating in this really care if they break the internet.  I think part of the difficulty is that some of them don't care, and what we have to do is accept that we need populations, societies, to step up and say, hey, wait a minute, you're taking away a benefit that I'm getting.  So one of the things we need to do is not talk to individual presidents or queens or kings or whoever, but rather convince the population that what's being done on their behalf is, in fact, not delivering the goods.  To show them, for instance, that the worst effects of censorship are what's going to happen, and people need to understand that, because that's the only way people are going to be able to resist this.  I don't believe that the political authorities are going to do this out of maliciousness, but I do believe they would like better a world in which they can call up the five people who can do a thing for them and cause the effect they want.  What's happening is that the rest of the human population is losing an advantage, and we need to make that clear to people, and I don't think we've done a very good job of it.  So that's actually where I would rather spend that time on airplanes: trying to convince people that the right thing for them is to have the real internet, the internet that we know enables people to do what they want.  Thank you, and I do apologize, I have to run.

>> OLIVIER CREPIN‑LEBLOND: Thank you, Andrew, for answering before you have to run off.  So we'll continue at this table.  Did I see a gentleman on the right-hand side who put his flag up?  Currently looking at his mobile, perhaps.  Okay, no, let's go to this gentleman.

>> Hi, I'm Eduardo from Copyfighters.  If you speak with most of the Members of the European Parliament, they say that this is a directive made against the interests of the big American tech companies, like Google.  However, Google said this week, or the week before, that they spent 100 million on the Content ID system alone, and the Content ID system right now doesn't quite fulfill the requirements; it would still have to be modified.  But my question is: how can European start-ups, for example, or any other small company in the world, even try to compete with a system so expensive?  Is this regulation really against the interest of, let's say, Google?  Thank you.

>> OLIVIER CREPIN‑LEBLOND: Thanks for this.  Diego?

>> DIEGO NARANJO: Yeah, I don't think that's really a question, but I agree.  What I think is going to happen is that small companies will end up buying a license from Google, which has the best and worst system so far, which is Content ID.  That's the way it will work in practice, because they have the biggest database, they have had it working for many years, and they have invested that money.  But I also think the aim was, in a way, to make it more fair for collecting societies and authors to be remunerated.  We've spent two years trying to explain alternative regulation, but that hasn't worked out.  What has worked out, going back to what the speaker who just left said, is mobilizing people.  We need to make people understand why this is a problem and make them active, and I think groups like yours in Portugal and others around the European Union are doing a great job at it, so keep it up.

>> OLIVIER CREPIN‑LEBLOND: Thank you.  Desiree.

>> DESIREE MILOSHEVIC: I think you have pointed out a fact: that companies like Google have this system built already, so it would be easier for them to comply.  But it's also interesting to hear the follow-up point that they might be licensing it out, and therefore it will definitely play in favor of big players and affect small players that wish to provide an upload service.

>> OLIVIER CREPIN‑LEBLOND: Thank you, Desiree.  Any other questions?  Let's continue around the table clockwise.  I'm interested in finding out, and I'll come to you in a second, Roberto, whether there's actually anyone in the room who has an opposite view to what's been said so far.  It's great to talk with each other, but if everyone agrees with each other then we can just go home and say, great, we know it all.  I'd be really interested in hearing a counter-argument.  Please don't be scared; nobody is going to come out feet first from here, and it's really valuable to have a good dialogue in this place.  I did invite some people from the content industries who are supportive of this, but I'm not seeing them around the room.  Let's go to Roberto.

>> I don't know if I agree or disagree.  I have a point of view.  I basically agree with what Andy was saying, that some of the people who are moving in this direction really don't care about the internet.  Just before the session, I expressed the opinion that the internet is, in this case, a pawn in the great political game, and I think we, as a Dynamic Coalition on Core Internet Values, need to protect the Core Internet Values, being aware that, how can I say, we have to be able to weigh those things.  On the other hand, as a European, I have serious difficulties taking complete part in this and going against some of the legislative processes going on in the European Union.  I think there is some part of the legislation, the GDPR, that is in fact protecting, if not the Core Internet Values, some other values that I hold, which are privacy and human rights on the internet.

So, I wouldn't shoot down the GDPR because it will affect the Core Internet Values.  I think we need to come to a solution where we have an agreement that we can protect the Core Internet Values but also our values that are rooted in the internet.

A second thing I want to say is that, in my opinion, the GDPR came about because of our inaction, as the internet community, as the ICANN community, in solving the problem from within.  The issues the GDPR addresses are issues that have been on the floor since the creation of ICANN, and we at ICANN have been unable to solve them.  So, at that point in time, we gave some political actors the opportunity to play on our inability to solve a problem of balance between privacy and other rights, and to step in with legislation, which is the wrong way.

But I think we are not innocent in this, because the root cause is also our inability to solve these problems over 20 years of existence.

>> OLIVIER CREPIN‑LEBLOND: Thank you, Roberto.  The lady in the back, please.  I think you need to turn your microphone on.  Is it on?

>> Okay, now it's on.  My name is Beatriz Busaniche.  I'm from Argentina, from the Vía Libre Foundation.  I strongly agree with our fight against the link tax, Article 11, but I think we have a problem, and I will take the chance, while we are all together in the same position, to try to address our problems from our side.  I think there's no such thing as the internet values.  What are we talking about when we talk about the internet values?  It's something as confusing as Macron talking about French values.  What are we talking about when we talk about values?

Is it freedom of speech?  Is it privacy?  What are we talking about?  Is it a technical thing?  I think our problem with this narrative is that for people in Europe we can talk about internet freedom and things like that, but in most of our countries in the developing world, in Africa, in South America, the internet, for the people, is Facebook.  They cannot tell the difference between the internet and Facebook, because they just use Facebook, and they use zero-rating services.  So if we go there with our narrative of internet values, they think it's Facebook values or Google values.

And I want to propose this debate because in Argentina we are facing strong lobbying from the big media to get something like the link tax.

And they say, what's wrong with asking Google to pay us?  The whole debate is between the big media and Google.  And that is hard to address from Civil Society, because if you go against the link tax, they say, oh, you work for Google.  What's wrong?  Google has a lot of money, they tell you.  We have to move on from this narrative of internet values, because there is no such thing as internet values.  We have to re-engage with human rights values.

And from that standpoint, I'd like to say that the GDPR is a model for us, for example.  We are debating a new data protection law now, and I think there is no problem with protecting our rights over our data, regaining control of our data; that will not harm the internet.

But these kinds of things will do harm, and they will harm human rights, not internet values.  So, just that.  Thank you.

>> OLIVIER CREPIN‑LEBLOND: Thank you very much for the contribution.  The problem with a one-hour session is that we always run out of time.  I'm going to let our panelists say just a few closing words, and then, I'm afraid, we are going to have to vacate the room.

>> DIEGO NARANJO: Very quickly, go to saveyourinternet.eu soon.  Try to be active, get informed, support the GDPR, but don't support Article 13.  Thanks.

>> DESIREE MILOSHEVIC: Yes, thank you for your comments.  I think it's a good suggestion that this coalition should take into consideration.  However, the values we discuss here are those of the underlying layers of the internet architecture, so maybe that wasn't clear.  And I also have to say I agree with what Roberto Gaetano said in terms of support of the GDPR: it has been called the revenge of the lawyers against IT, that's how the GDPR is known in code words, trying to solve issues that are societal issues with technical solutions.

>> OLIVIER CREPIN‑LEBLOND: Thank you for this, Desiree.  And thanks, of course, to Andrew Sullivan for having joined us, and to all of you who have spoken and who are attending the session today, and to the people following us remotely.  Just one last thing: we do have a sign-up sheet that went around the room.  I'm not quite sure whether it was made clear that what we will do is send an invite to the e-mail addresses on it.  It's not obligatory as such; we'll send an invite if you wish to join the Core Internet Values mailing list, and we can then continue the debate and discussion on the mailing list and hopefully clarify any points and take this to the next tier.  So, thanks, everyone; this has been a good session.  Bye-bye.