IGF 2017 - Day 3 - Room XII - WS164 Terrorism: Freedom versus Security

 

The following are the outputs of the real-time captioning taken during the Twelfth Annual Meeting of the Internet Governance Forum (IGF) in Geneva, Switzerland, from 17 to 21 December 2017. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

We have started with some introductory remarks from the panelists, and then we will have time for discussion, so if you want to think about which questions or comments you want to raise, please start doing so.  I am just introducing our distinguished panelists.  Let's start with our speaker from the UK government, then the national police intelligence unit on cyberjihad.  Very extensive title ‑‑ 

Let's start in order with remarks from the UK government.

So, terrorism and security: in this session, how can we shape proposals that achieve security while preserving privacy and allowing everyone to express themselves freely? 

This is really challenging and apologies to those of you in the encryption session earlier.  I will go over some of the same ground as this morning. 

This is a really, really challenging issue for us.  The internet has exposed new vulnerabilities and introduced new challenges.  But the priority for the state remains the same, which is keeping people safe.  The threat is real.  We've had a number of terrorist attacks this year.  The internet has been involved in some of those, although to different extents. 

And how we deal with those threats, anticipate them, and are able to disrupt them is a really big priority.  I spend an awful lot of time in meetings with the Home Office, intelligence agencies and the police, where people are really clear that what they are trying to do is protect citizens around the world. 

We also have an interest in keeping everyone else safe as well.  I think it's worth stating from the outset that the UK is firmly committed to the right to privacy, freedom of expression, and freedom to access information.  You know, in general rights should be protected online as they are offline, and that is absolutely critical for us.  Any interference with those rights has to be consistent with the principles of legality, necessity and proportionality.  No questions about that. 

We are also acutely conscious of how our domestic policy on terrorism and extremism may play out internationally, and states may look to us to sort of model their own approaches on.  So we don't want to be giving credence to approaches to combating terrorism online which other states may misuse for political purposes or as a sort of means of control, I guess. 

So I guess I'll just talk about two ways that we're looking to address terrorism online.  One of those is our legislation.  We think that governments can protect both security and fundamental rights through well designed legislation produced in a transparent and heavily scrutinized way. 

So we have the Investigatory Powers Act 2016, which I'm sure many of you will be familiar with, which incorporates robust independent oversight with clear avenues for legal remedy.  A lot of people hate the Investigatory Powers Act.  Let's be very clear about that.  I've read all the criticism I think you could read. 

And fair enough.  You know, a lot of what is in that law is not to people's liking.  But the process that it went through to get to where it is today and come into force included three independent reviews, and I would highly recommend, if you have an academic interest, reading A Question of Trust by David Anderson, who is a highly respected barrister and reviewer of terrorism legislation in the UK. 

It went through three parliamentary committees, including science and technology.  We had calls for evidence.  Anyone could submit evidence on the proposals and the bill, and it was then voted through two houses of parliament.  And even the former director of Liberty in the UK did not vote against the legislation.  And another thing to say is that it has been challenged, and in some cases we are amending it.  It's not necessarily that we've got it perfect, and that's worth acknowledging.  But it is open to challenge, which is really important for us, to make sure that it is as robust as possible and that we can use it in a sort of constructive way. 

And I would just say that it also contains an overarching privacy clause to make sure that warrants or other authorizations should not be granted where information can be reasonably obtained by less intrusive means.  And that act is essentially overarching, so it also covers issues relating to encryption, which I talked about in detail this morning.  But that is also a fundamental standard at the moment. 

We separately have an issue around terrorist use of the internet for propaganda, for recruitment, and in some cases for radicalization.  And this is a real political priority, and again, if you pay attention to British politics, you will see a lot about this.  For our ministers, it is a real issue. 

I would say there are clearly a lot of factors that go into committing an act of violence, and radicalization online may be one of those.  Not always.  But we do substantial research around how the internet, or how propaganda online, may play a part in recruitment and radicalization, alongside other factors including community interventions, et cetera.

But what we want to do ‑‑ and from my experience as well, I worked in northern Iraq where I spoke to people who have gone out to fight against ISIS, and often it's people who are bored and lonely at home, and the internet offers them a community and a way to essentially shift their thinking in some way.  That's not quite the same thing as terrorism online, but it makes you think about the access which the internet provides to people who need a cause and who want a cause to sign up to. 

So we want companies to use their platforms responsibly.  I was challenged on responsibility earlier this morning.  It's: do you want terrorists to be recruiting on Facebook?  Probably not.  Facebook and Twitter are aware of that.  They've got extensive terms and conditions that cover these issues.  And we work with them to reduce that content as best we can where it's illegal.  We are not creating terms and conditions for the companies. 

And I think it's worth saying we're very aware that the technical challenges are massive.  And there are real issues around context and context dependence, and in some cases detection isn't quite there yet, but we have a continued dialogue with the companies, particularly through their Global ‑‑ I'm going to forget the acronym ‑‑ Internet Forum to Counter Terrorism.  And we're confident that's going to be a helpful measure in the future. 

I'll conclude.  One thing I would say: don't underestimate the challenges that people face and the challenges that we face in being able to protect people.  It's not the case that we have access to everything that we want and can just sort of override rights.  That's simply not it.  There are a lot of frustrations, a lot of difficulties in government. 

And if you are interested to see how intelligence works in practice and how those surveillance powers may have been used, I really recommend looking at our Intelligence and Security Committee, because they do a lot of reviews after terrorist attacks to look at how information was collected and possibly used in a wrong or right way, and the extent to which human error also plays a significant factor. 

But that's really, really interesting to read in terms of the difficulty we face with terrorism online nowadays, and I would also recommend an article called The Unravelling Web by Paul.  He's a serving GCHQ officer.  He's really, really excellent, and he published an article in November that outlined the challenges that we face as more and more information comes online, while we're still trying to do our jobs in a responsible, rights protecting way to make sure we can combat terrorism in an effective manner.  So those are my intro thoughts.  Sorry for rambling.

Those are elements that we have to consider.  Obviously the measures implemented have to be necessary and proportionate.  So I would ask Steven Turner to explain the measures that you are taking and how this is compatible with what we have just heard.

As many of you know, Twitter is a global platform.  We receive over a billion tweets every 48 hours.  Many are positive: discussions, stories, personalities, humor.  As we've just heard and as we'll see in this panel, there are a lot of challenges that we face, and alongside the communities that we see on the positive side, there's also a threat on the negative side as well. 

The increasing political and societal will to combat violent extremism, radicalization and terrorism has been pushed forward in the last couple of years, both online and offline.  And terrorism is very much a societal issue; therefore, it requires a societal approach.  Tech has a serious role to play in this and we understand that very well, as do many of our sister companies, as do governments, educators, NGOs, law enforcement and academia, to better understand the ecosystem that we're dealing with. 

I want to run through four things Twitter is doing, what has taken place over the past year, and how we're looking to advance in the coming years. 

First, and what you've seen in the press, is how we're leveraging our own technology to address the issue.  We're able to leverage internal tools and repurpose them to surface content for review and to identify terrorist content and accounts based on behavioral signals. 

Research has found that ISIS accounts have faced substantial and aggressive disruption due to the actions and progress made by Twitter. 
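To illustrate the kind of behavioral-signal approach being described, here is a minimal sketch.  The signal names, weights and threshold are hypothetical assumptions for illustration only, not Twitter's actual tooling.

from dataclasses import dataclass

@dataclass
class AccountSignals:
    # Hypothetical behavioral signals; real systems use far richer features.
    account_age_days: int
    follows_suspended_accounts: int   # previously suspended accounts this account follows
    reposts_of_flagged_media: int     # reposts of content already identified as terrorist material
    creation_burst_peers: int         # accounts created from the same device/IP in a short window

def risk_score(s: AccountSignals) -> float:
    """Combine simple behavioral signals into a score; accounts above a
    threshold are queued for human review, not automatically suspended."""
    score = 0.0
    if s.account_age_days < 7:
        score += 1.0
    score += 0.5 * min(s.follows_suspended_accounts, 10)
    score += 1.0 * min(s.reposts_of_flagged_media, 5)
    score += 0.5 * min(s.creation_burst_peers, 5)
    return score

REVIEW_THRESHOLD = 3.0  # hypothetical cut-off

def needs_review(s: AccountSignals) -> bool:
    return risk_score(s) >= REVIEW_THRESHOLD

if __name__ == "__main__":
    example = AccountSignals(account_age_days=2, follows_suspended_accounts=8,
                             reposts_of_flagged_media=3, creation_burst_peers=4)
    print(needs_review(example), risk_score(example))

The point of the sketch is that such scoring surfaces accounts for review based on behavior rather than on the content of individual posts.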

The second, and I'll expand on these more in discussion, but I just wanted to give you an overview and thoughts for discussion.  The second big area is industry cooperation with our sister companies as well as the wider online ecosystem.  This is both regional and global, so expanding on the work that took place at the European level and moving that to the global internet forum across the world. 

We're fully conscious of the challenges facing smaller companies, as we are an awkward semi-smallish company compared to some of our partners, but we will continue to provide guidance, information and best practices, as well as collaborate and provide comprehensive approaches to some of these solutions, to try to better address and counter violent extremism online. 

Some of this includes organizations such as ICT4Peace and Tech Against Terrorism, which provide great tools for smaller companies, as well as a great overview of how they should plan and things to consider when they're trying to address terrorism or radicalization on their platform, even if they only have one person running the company. 

The third point is strengthening our engagement with civil society and NGOs around positive ‑‑

We recognize that this is a problem that requires a multipronged approach.  And we're looking at why a certain audience is ripe for recruitment.  So trying to think beyond the takedown or beyond the content basis and looking, behind the scenes, at why certain communities and certain individuals are particularly susceptible in some of these areas, and trying to address some of these main concerns, whether through education, digital literacy, or working beyond the online space. 

And we're trying to consider how we focus on the audience as much as we focus on those distributing propaganda.  So again, trying to find a two-pronged approach. 

So some of these initiatives have included expanding our network of organizations working on countering violent extremism as well as related issues including digital literacy, promoting positive digital footprints, empowering women online, as well as breaking down digital divides between communities in order to build up more empathy and understanding online. 

The final point, and this has been stressed by a number of companies, is transparency.  For Twitter, we're a very public platform and we have been very public in trying to show the progress we make in some of the measures we're taking.  We have been very open about the challenges we face as well as some of the pitfalls and mistakes that we've made in the past. 

I think one of the ways we do this is obviously through our biannual transparency report, and we're trying to provide more input on statistics, insights and the requests that we receive.  We already include government terms of service requests, and we're looking to expand to see where the next step can be and how further information can be useful for civil society groups as well as for organizations overall. 

So I think overall our efforts will continue to evolve as we try to continue to invest in and adapt to changes in cultural and societal understanding of these issues.  Thank you for your time. 

There is a presentation.  It's not on. 

We have an internet referral unit as many other countries have.  Ours only came into existence in September this year, so we've been in operation for three months.  We have three main tasks.  The first is detection of content; I'll go into the specific kind of content a little later.  We also do content analysis, and the other task is notice and take action, which is of course the much discussed task, and it consists of voluntary requests by us to internet service providers to block or remove specific content. 
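As a minimal sketch of how such a notice-and-take-action workflow can be structured: the field names, criteria labels and provider interaction below are hypothetical illustrations, not the unit's actual system.

from dataclasses import dataclass
from enum import Enum

class Criterion(Enum):
    INCITEMENT_TO_TERRORISM = "incitement to terrorism"
    INCITEMENT_TO_VIOLENCE = "incitement to violence"
    RECRUITMENT = "recruitment to terrorism"

@dataclass
class Referral:
    url: str
    criterion: Criterion
    context_note: str   # why the context makes this referable (e.g. not news reporting)
    analyst: str

def refer(referral: Referral) -> None:
    """Send a voluntary removal request to the hosting provider.
    The provider decides; the unit has no enforcement power."""
    print(f"Referral to provider for {referral.url}: "
          f"{referral.criterion.value} (reviewed by {referral.analyst})")
    # In practice this would also be logged for transparency reporting and review.

if __name__ == "__main__":
    refer(Referral(url="https://example.org/post/123",
                   criterion=Criterion.RECRUITMENT,
                   context_note="official media outlet stamp, not reporting or commentary",
                   analyst="analyst-01"))

The design point is that a referral records the legal criterion and the contextual reasoning, and the actual removal decision stays with the provider.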

I'd like to make two main points before we continue.  The first is that we are not convinced that we can clean the internet.  There will of course always be loose ends.  We do think we can contribute to preventing the distribution of this specific content, and that's what we try to do. 

And this is of course an exploration.  We are still trying to find out how to do this and also to do this while ‑‑ 

I would first like to go to the kind of content that we look at.  Of course, there are personal posts, but most of the content that we're looking at is professional content produced by media outlets that are affiliated to or recognized by terrorist groups like IS.  These are just a few examples. 

These are the kinds of stamps that they use to show that the production and its distribution come from them.  The kind of content that we've been looking at is very diverse.  There are graphics, claims, infographics, news updates, chants, statements, and also, as in this picture, content stamped by professional media outlets. 

Sometimes the message is clear and sometimes subtle.  We'll go a bit deeper into that later. 

We used to look mostly at public accounts before 2015.  Then people realized that governments were also following what was happening online and that it was used in court.  That's when the messages became, at least on public sites, more moderate, and/or moved to private and encrypted channels.  And it's also when a more resilient infrastructure against intervention emerged. 

And they started to use apps and add-ons to create a network for the distribution of the propaganda. 

I think one of the most important things is of course the criteria.  What do we refer?  There are four main categories.  First there are copyright, criminal law, and public order.  At least in the Netherlands, there's something like community law that says that offensive behavior or unnecessary exposure of others is prohibited. 

This is why, for example, you can quite easily prohibit a public screening of a beheading in a square.  But there are blurred lines between offline behavior and online distribution.  We don't have a single set of rules when it comes to all this community law. 

And then of course there's the undesirable, which according to the European Court of Human Rights falls within freedom of expression.  So shocking, offending and disturbing content is also a category. 

And then you have the contract breach, and that's where you have the terms of service.  You will see that these are overlapping categories.  There's not just one; some of the content that you find will fit in more than one category.  If you go a little further: we focus on only looking at content that is against our law, and even within the law, we make a distinction between the severe crimes and the less severe crimes. 

It means that we only refer content of which we are convinced (we as police, we are not judging of course) that it is incitement to terrorism, incitement to violence, or recruitment to terrorism.  And it is blind to the theme.  We are focused on terrorist propaganda, but also left or right wing propaganda. 

So we do not refer reporting or commentary.  We don't refer content that only shocks, offends or disturbs. 

One other important thing is that context is key.  Of course, if you look at a production like Flames of the War, one of the latest productions, I think it was by Al-Hayat, we can find it on scientific blogs, on news sites, but you can also find it on private blogs.  And it's quite difficult sometimes to make the distinction. 

We really try to look at the context of the sites and the forums where it is posted.  The content can be neutralized by its context, and then we do not refer it for removal. 

I think the general picture is that there are a lot of differences now.  There are now more European internet referral units; we are just one of them.  And what you see at this point, it's not a complete picture, it's what I found in open sources: you see there are differences in legal basis.  For example, we work on the basis of the general police task, which is not explicit.  That's the basis of what we do.  Other units, like Europol or France, have a more explicit basis for their internet referral work. 

Also the criteria differ, as you see.  On transparency and accountability there are differences: some units produce general reports, others don't.  We will do this.  And of course there is also the question of review.  France has an independent review; we don't.  And this is something that we also ask questions about ourselves: how can we organize this?  It's not up to the police, of course, to do it ourselves, but at least we can ask questions about who reviews our work. 

Yeah, then to the last slide.  It's about responsibility.  There are vast amounts of data online.  We as law enforcement cannot be free riders.  We do need to act and we do need to do something about what we see.  The question is how we do this in this new world, this new digital era, and with such huge volumes of content.  Criteria, of course, we talked about that.  This also relates to who is first, who makes the decision, and who is responsible, and the judicial system of course, which is also something we talked about already. 

So this is it for now: a quick overview of our work. 

I'm part of the Brazilian network information center team, but today I'm speaking in my personal capacity as an independent researcher, and I will give a little bit of context on what's happening in Brazil and some human rights concerns. 

The first thing I would like to mention is that terrorism is not a major concern for Brazilians.  I don't remember any recent attacks associated with terrorists in the country.  The most serious episode happened in May 2006, when we registered several attacks against security forces that spread to other Brazilian states, but it was quickly associated with a local criminal organization. 

Despite the fact that no terrorist action was registered, or recognized as such, in the country in the past years, we have an anti-terrorism law that stemmed from international pressure, including the threat of being placed on a blacklist regarding financial transactions for not having penalties for the financing of terrorist organizations. 

The debate around the adoption of the anti-terrorism law lasted several years, and it was finally approved in 2016, some months before the Olympic Games. 

The law attracted criticism from academics and social movements for its vagueness in defining terrorist acts. 

The law introduced a new series of crimes into the penal code, some of which can get you 30 years of prison.  The attempt to buy illegal arms, for example, could indicate preparatory acts of terrorism.  This accusation was used against ten people some weeks before the beginning of the Olympic Games.  The investigation that led to their imprisonment involved interception of private communications which allegedly indicated the planning of the acquisition of guns to perpetrate crimes in and outside Brazil. 

Apparently the accusation included several communications from Facebook that would strongly indicate a connection to terrorism. 

I believe this raises concerns for freedom of thought and expression, including access to information, considering that certain activities and ideas may be taken as an indicator of a preparatory act of terrorism.  Examples I would mention would be participating in an event or discussion online or offline, accessing certain web pages, running specific queries in search engines, et cetera.

It also raises a concern about privacy, and of course both are completely related, since there is the question of how to investigate a preparatory act of terrorism.  In the drafting of the Brazilian constitution, great emphasis was given to the protection of privacy.  Regarding the confidentiality of communications, it determined that the secrecy of correspondence and of telegraphic, data and telephone communications is inviolable, except in the latter case by court order, in the cases and manner prescribed by law. 

So the Brazilian constitution offers high protection to privacy, and the telephone interception act follows best practices in regulating wiretapping.  This has not prevented abuses.  For instance, Brazil was condemned in 2009 by the Inter-American Court of Human Rights for intercepting communications of human rights activists in 1999. 

Still more concerning is that, while the telephone interception law includes several safeguards, they were not incorporated into subsequent legislation involving access to data. 

Some relevant examples are the laws on money laundering and on criminal organizations, which require telecommunication companies to store and give law enforcement authorities access to specific data about clients.  The internet framework law goes in the same direction when it establishes that internet connection and application logs should be stored for certain periods. 

The anti-terrorism law does not include specific surveillance measures.  It states that the law against criminal organizations applies to terrorist organizations. 

So, to summarize and give space to our discussion, in which we can continue this conversation: this newly adopted legislation brings several human rights concerns and comes into an environment that can be characterized by an increase in the powers of Brazilian law enforcement agencies and a lack of safeguards when it comes to monitoring and the use of surveillance technologies.  Thank you.

And I think we may hear something more about this from access.  Please go ahead. 

I would like to start by saying that a lot of the measures put forward after terrorist attacks involve data collection or profiling of users online, and therefore the storage and further collection of data, which means there is a potential interference with these rights.  In order for these measures to be justified, they need to be provided for by law, necessary and proportionate.  The point is to be sure that the measures meet these standards.  I will focus on three specific measures: data retention, profiling, in particular in the context of passenger name records, and CVE programs.  

I will be focusing mostly on the EU context, but some of these considerations might apply in the global context as well. 

In particular in Europe, we had a discussion on having a European-wide law on data retention around the years 2004, 2005, but the discussion stalled until the unfortunate terrorist attacks in Madrid and London.  At that point, the Council pushed for the quick adoption of what became the Data Retention Directive, which was invalidated by the EU Court of Justice in 2014 for being disproportionate.  The measure required the mass retention of data of any citizens, no matter whether they were under suspicion. 

These measures have been found disproportionate a second time more recently, when the Court provided further guidance. 

A similar path was followed with the negotiation at EU level of the Passenger Name Record directive, which is about the collection by airlines of information about air passengers, which then needs to be pushed to authorities in order to fight against serious crime and terrorism.  This legislation was first put on the table in 2011 and was set aside by the EU Parliament in order to better assess the impact it could have on freedom of expression and privacy.  The Parliament was not confident the measures put on the table were effective.  It was further delayed until 2015, and then came the 2016 Brussels attacks. 

Delaying the negotiations was no longer possible.  It was only a matter of adopting it as quickly as possible, without considering the full impact of these measures on fundamental rights and the right to privacy. 

These measures include profiling: they authorize authorities to check the data collected at the border for PNR purposes against databases, for the purposes set out in the directive.  This is quite broad in the sense that a large number of databases exist, as you can imagine, and it's not clear exactly how the data collected will be merged with potentially other data in other databases, and which profile will be obtained by the authority accessing those data. 
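As a rough sketch of the kind of database matching being described: the record fields, database names and matching rule here are hypothetical illustrations, not the directive's actual implementation.

from dataclasses import dataclass

@dataclass
class PassengerRecord:
    name: str
    date_of_birth: str
    itinerary: list[str]
    payment_reference: str

# Hypothetical watchlists; in reality many national and EU databases exist,
# which is exactly the breadth the speaker is concerned about.
WATCHLISTS = {
    "watchlist_a": {("DOE, JOHN", "1990-01-01")},
    "watchlist_b": set(),
}

def check_against_databases(record: PassengerRecord) -> list[str]:
    """Return the names of databases in which the passenger appears.
    Merging hits across databases is where the profiling concerns arise."""
    key = (record.name.upper(), record.date_of_birth)
    return [db for db, entries in WATCHLISTS.items() if key in entries]

if __name__ == "__main__":
    rec = PassengerRecord("Doe, John", "1990-01-01", ["GVA", "BRU"], "PNR123")
    print(check_against_databases(rec))

The concern raised in the session is that such matching is only as transparent as the list of databases consulted and the rules for combining hits, which the directive leaves broad.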

This summer, another framework was assessed: the provisional EU-Canada PNR agreement was evaluated by the Court of Justice and was found not to be in line with our human rights framework, in particular with the right to privacy in its application, mainly also because of issues regarding the profiling of passengers that might happen. 

The Court also found that these measures often lead to interference with the right to non-discrimination, not always in a direct manner but also in indirect ways.  There are measures to try to prevent discrimination and the specific targeting of certain nationalities or of passengers based on origin or religion.  However, non-discrimination on an individual basis is never fully ensured, and the Court has set out further criteria to mitigate those risks. 

Coming to the issue of CVE programs: these rely on the collection and screening of data on platforms, which may or may not involve human intervention in determining which content needs to be taken down, removed or pushed to the authorities. 

Those can also have an impact on the right to privacy, which is not usually fully considered, and that is one of the main issues we see with the programs being put forward.  At the moment there is also often an issue with the legality element, the requirement to be provided for by law, either because the definitions are unclear or vague, which makes compliance difficult for many actors, or because it creates cross-border questions: if an authority from a certain country orders something about content that is shared broadly, how do you technically apply this?  So there is definitely some further effort to be made in terms of clarity and in terms of compliance with the provided-for-by-law requirement. 

To conclude, I want to highlight that we agree on states' responsibility and duty to protect citizens and to protect the right to liberty and security.  But this should be done in a rights-respecting manner.  The measures that I've mentioned before have not been deemed completely impossible to realize by the Court, but specific criteria have been set out.  However, despite the rulings and the criteria set out, we have yet to see implementation of these measures that allows for the full protection of the right to privacy, in a way that can also help in advancing security.  Thank you.

We have had serious terrorist attacks in several countries, not only on the European continent but elsewhere as well.  This is a tragedy, and I would like to show my support and solidarity to all victims of terrorist attacks today. 

But today I also want to talk about other types of victims of terrorism: victims of restrictions on privacy, freedom of expression, freedom of thought and other freedoms, due to terrorism legislation and other trends that we have been seeing for a while now. 

So what are the trends that we're seeing that affect this environment?  You've heard about the tendency to increase surveillance powers, and the erosion of privacy and data protection rights under the heading of fighting against terrorism.  You've also heard about data retention and passenger name records legislation. 

For example, in the case of data retention, despite the concerns that NGOs raised with policy makers, legislators decided not to follow the recommendations, and in the end the Court of Justice of the European Union declared the directive invalid because it was illegal according to EU law. 

That case was brought by one of our members.  So we see one more situation where an NGO had to take action to ensure our rights are respected.

Another trend is around encryption, and you've heard about this in other sessions.  Encryption is sometimes hyped as a major problem that has to be restricted, but such restrictions have counterproductive effects, and we should instead see encryption as part of the solution. 

While the gains for law enforcement would be very limited, as you heard today, weaknesses introduced in cryptographic tools can create vulnerabilities that can be exploited by terrorists and criminals, the same people we're trying to protect the population from.

Another trend is that we see pressure from countries, a push for direct cooperation with companies, bypassing mutual legal assistance treaties.  And this is not okay in our view, because there are safeguards that the MLAT procedure provides that may not be respected through direct cooperation. 

Another trend we see is the increase in capabilities and funding for agencies and so-called security related projects, with no money, however, for the organizations in this room that actually defend rights and freedoms in our society.  So we see this as a very concerning trend. 

Another trend I would like to highlight today is the tendency to criminalize speech and opinions more and more.  Not only a tendency to criminalize, but also a tendency to discriminate against minorities and certain fractions of the population. 

For example, in Spain, we have seen that several uncomfortable forms of expression have led to people being imprisoned. 

For example, a rapper in Spain posted six tweets that led the courts to punish him with a year of imprisonment.  These tweets, some of which are not actually related to terrorism, are now published in several newspapers.  And we saw that a previous high court in Spain had ruled that this rapper was innocent; however, the Supreme Court ruled the other way. 

So we see that legislation is leading to different outcomes, and sometimes to decisions that are having a chilling effect, in this case on freedom of expression.  This is very concerning because of the effects, not only on this person but on all of us. 

And I would also like to highlight a tendency to criminalize curiosity, for instance the criminalization in France of visiting terrorism-related websites.  While the legislation had some criteria to follow, the French constitutional court ruled it to be unconstitutional. 

Again, this was thanks to the work of NGOs.  But the legislators went on, and actually a few days later had this criminal provision reintroduced in updated form.  And yet again, on Friday, the 15th of December, the constitutional court declared it unconstitutional once more.  So we see that there's a struggle between powers that have legitimate aims but that are going beyond protecting us and protecting security.  So do we need to wait for NGOs? 

NGOs really struggle with funding to protect our rights and freedoms, and I think this is a very concerning trend, because as you can see in all of the trends I've highlighted, it seems that NGOs are one of the key defenders of rights and freedoms.  Thankfully, in this area, I want to praise the work of the Dutch unit, because, while we're not saying that the system is perfect, as he explained, his is one of the few units that actually refers content on the basis of the law. 

We're seeing more and more pressure on companies to remove content not on the basis of the law, which, even if we may disagree with it, was at least democratically elaborated, but on the basis of the terms of service that the companies themselves have established.  Are we comfortable with Twitter or any other platform deciding what is and what is not real terrorist content, as with some of the content that is put forward? 

So what is the action being taken from the public sector side?  Are there investigations?  We have made some inquiries at the EU level to ask whether there are statistics on the referrals of the units that were mentioned.  And the response is that there are no statistics being kept about whether the URLs that were referred for removal actually lead to prosecutions or investigations. 

So we still do not know whether this approach helps to ensure our security, which again, I stress, is very important. 

And this is really a problem that we see, because we are becoming suspects by default.  And this needs to stop.  Counter-terrorism policies can undermine the defense of human rights and make us insecure.  And the answer to community problems cannot be the creation of security risks or fundamental rights restrictions that are not necessary and proportionate to the aim pursued. 

And how can we ensure that security and freedom work together?  We think that the first step is to ensure we have leaders that will not take emotional, reactionary, premature actions.  For instance, at the European level, the terrorism directive was drafted within two weeks after the Paris attacks.  There was no assessment of whether it would actually prevent attacks, and there was no public consultation prior to its drafting. 

And there's an article 8 that also criminalizes visits to terrorist websites.  So what are countries going to do?  We see that there will be a conflict here, and we hope that leaders in our political sphere and from all stakeholders will strongly come together and defend what we are here for, which is the respect of rights.  Thank you very much.

Now I would like to have your thoughts and questions.  I see there is also one of the authors of our publication, who has authored a chapter on countering terrorism online and may have some comments.  Christina, if you have anything, please go ahead. 

So my question is, we have to find a middle ground.  In that panel, we were not discussing the blocking of an account, but we were discussing a terrorist. 

What can we do, like a real concrete thing we can do, that does not compromise the users and does not compromise the security of the people?  So that's my question, for whoever wants to answer it.  Thank you. 

Some types of information can be given to authorities without a court order, the so-called subscriber information.  We also have a great task ahead in reviewing our telephone interception act, I believe, and we also have to be very careful when we talk about whether, in cases in Brazil, a lack of information has actually prevented investigations and the fight against crime.  I believe we have little information about that; most cases were kept secret. 

We have a great task in requiring more transparency from authorities in terms of which types of cases and what number of cases are solved, similar to what has been said here on the panel.  In our case, we have little information.  We have been trying to make access to information requests to authorities and have received little information. 

We have very limited information.  For the Olympics and the World Cup, several surveillance technologies were acquired by law enforcement, and we still don't know what use has been made of these and what safeguards and regulations are in place to govern their use. 

So I believe that in Brazil, at least, we have a lot to do, although this is a great challenge that we are all trying to figure out in several sessions here. 

So maybe you have thoughts on that as a representative of a U.S. company who maybe faces some of the tensions around that.  And then also, as someone representing law enforcement: the frustration that governments have with the ability to get access to data for criminal evidence has been cited in certain cases as justification for measures like data localization and/or accessing, or purchasing, spyware and other tools to be able to get access directly through devices and through software vulnerabilities. 

I'm curious as to what extent you think that is indeed a legitimate issue, or if you feel like you have enough cooperation under existing authorities to be able to do your job in terms of preventing and prosecuting terrorism.  Thank you. 

So what we oppose is any mechanism that would actually bypass those safeguards when trying to create direct cooperation between companies and authorities.

We totally understand that MLATs need reform, but that will not come fast enough to enable us to tackle serious crime and terrorism.  As you well know, we've worked hard with the U.S. government to try to put in place sufficient safeguards in that proposed agreement, so it can only be used for people outside the U.S. 

And I guess the nature of the question you asked is: if you have two individuals who are plotting a terrorist attack which is going to take place in the UK, and the only reason we don't know is because they were using, say, Facebook's Messenger, just as an example, the fact that we, even with a law, are not able to get at that content except in a case of threat to life is really problematic.

And we went through an exercise where a group of leading experts on internet safety couldn't often get it right in terms of understanding how a Twitter moderator might respond to a particular scenario.  And I was really amazed at how nuanced some of these decisions are in terms of what's considered a threat, when a threat is something personal or general, and when it's taken down and when it isn't. 

I wondered if you could talk about some of those same issues when it comes to terrorism: content that might be interpreted as threatening or extremely uncomfortable but may or may not cross the line into something that's actionable. 

It's tough for moderators and tough to explain how that process goes.  Through some of those examples that you see, it is tough within the context to better understand what is directly targeted and what is not.  I think when it comes to terrorism, it's even more complicated. 

As we see it at Twitter, our bread and butter and some of our best users are journalists and activists throughout the world, and we want to make sure we're providing them the right protection and tools to better disseminate their information, to get information out of tough situations, and to make sure that there's a voice. 

And when it comes to terrorist content, propaganda and these issues overall, it is about making sure we're finding the right balance, providing protection for those reporting on very sensitive topics.  They might be engaging with, responding to, or presenting content that we would otherwise take down if it was coming from official accounts or from a sympathizer of a certain terrorist organization. 

So it is something we've been trying to address over the past year: how do we put policies and safeguards in place for when something happens, so we can reinstate content if we accidentally take it down.  As these are still new issues and processes for us, we're trying to better assess how those impacts are playing out and where we have to fill some of the gaps where we're having challenges. 

We're looking at it more as a societal issue and at the context of the accounts overall.  So addressing it from the individual level: is this an account that's retweeting or supporting one account, one issue, but doesn't have a history of supporting terrorism overall, versus somebody that's more active in that debate?  It is very much about the context, and trying to understand those behavioral signals is a big challenge for us right now. 

It was mentioned here that referral units operate on the basis of terms and conditions.  In our case, it is the case that we will only refer content which is within our mandate.  So when I make reference to the terms and conditions, what is meant here is basically that we have no enforcement power, so we will inform the content providers, but that doesn't mean that we can enforce our opinion on them.  

So I guess, and that is my question, that this is the same in the Dutch context.  And the other observation I had was that there was that slide which seemed to indicate that there is no judicial supervision or no independent review.  And here, I would like to put this into perspective.  So with your referral, would a referral unit ‑‑

On top of that, any processing operation upon personal data following your referral can ultimately be brought before a court of justice.  So that is something that may not apply immediately to the referral unit as such, but, and this is my question to you on the Dutch context, I assume it is the same setting in that sense: you would be subject to judicial scrutiny as well.  Thank you very much. 

Do we really need more surveillance that is actually harming our privacy just to find these needles in a haystack?  Basically, do we want to increase the amount of hay in the haystack just to find the needle?  Do the police have enough resources to act on the things they already have right now?