IGF 2021 – Day 2 – WS #197 Protecting human rights in the State-business nexus

The following are the outputs of the captioning taken during an IGF virtual intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> We all live in a digital world.  We all need it to be open and safe.  We all want to trust. 

>> And to be trusted. 

>> We all despise control. 

>> And desire freedom. 

>> We are all united. 
(silence)

>> HENRY PECK: Good morning, everyone, good afternoon, good evening depending on where in the world you are.  It looks like we have (?) in Katowice.  But I'm glad to see many of you on screen.  We're just waiting for one more panelist to join, but I think we should probably get started and hope that his link is generated in the meantime.  So welcome to this IGF roundtable entitled "Protecting Human Rights in the State Business Nexus."  This event has been co‑organized by the U.N. OHCHR's B‑Tech project, by the Business and Human Rights Resource Center, and by Privacy International.  My name is Henry Peck, and I am a Human Rights and Technology Researcher at the Resource Center, and I'm one of the two moderators for today's roundtable along with Isabel Ebert, who is an adviser to U.N. B‑Tech. 

We have about an hour and a half for this conversation, and we're lucky to have several experienced speakers from Civil Society and the private sector with us.  A couple of housekeeping notes: remember to speak slowly and clearly so that our transcriber can keep up.  We'll begin with a presentation, then switch to interventions from our panelists, and then have a more free‑flowing conversation with time for questions and answers at the end. 

I'll now hand over to Isabel Ebert for a bit more context. 

>> ISABEL EBERT: Thank you very much, Henry, and welcome to the session.  I'm sorry that we started a bit late.  We're still waiting for one panelist who has actually followed the registration procedure excellently but, you know, I think it's a sort of running gag to have these sorts of tech issues these days.  So, yeah.  Thanks also to the Business and Human Rights Resource Center for co‑organizing this session and also to our fellow panelists. 

We are a project called B‑Tech, hosted at the Office of the U.N. High Commissioner for Human Rights.  And our focus is on corporate responsibility in the tech sector.  And for those of you who are not familiar with the business and human rights terminology, we focus our work around the U.N. guiding principles on business and human rights, which were adopted by the Human Rights Council in 2011 and have become a sort of emerging soft‑law norm in the corporate sector; they were endorsed by corporates when they were adopted, as well as by Civil Society, international organizations, and individual member states. 

The guiding principles consist of three pillars.  Today's session focuses in particular on pillar 1, which is the state duty to protect human rights, and also on pillar 2, which is the corporate responsibility to respect human rights.  The third pillar focuses on access to remedy and the different mechanisms through which potential victims of corporate human rights abuses can seek remedy for their harm. 

The guiding principles as such are not sector‑specific, and that is exactly why the B‑Tech project was launched, to break down the specific requirements of the guiding principles to the technology sector and articulate what are the specific obligations of states and what are the responsibilities of technology companies in that area. 

Today's focus is on this interlinkage, when states and technology companies cooperate for various purposes.  That can be, as we will hear later on, real public/private partnerships, but there are also other situations where, for example, governments procure technology from companies, which requires additional checks and balances.  So we will explore in today's session a spectrum of scenarios of how states and technology companies collaborate, and which respective obligations and responsibilities should derive from that sort of constellation. 

Just to close, you will probably also hear the term human rights due diligence quite frequently in this panel.  It describes a process in which companies should identify, assess, and mitigate adverse human rights impacts that stem from or are linked to their business activities, building on an assessment of human rights impacts.  So just to set the ground, I will put in the chat the link to our home page, where you can find more information about the project as such, and I'm happy to answer additional questions. 

We will have to adapt, I think, the chronology of panelists.  Let's see; one panelist, sadly, is still having trouble connecting.  I will then pass back to Henry Peck, who is volunteering as our kind moderator today, to start the session.  Thank you, Henry. 

>> HENRY PECK: Thanks, Isabel.  So to begin with, we're going to hear from Privacy International about their recent work on the state business nexus on one aspect in particular.  And then we'll turn to our speakers from the NHRIs of Germany and Chile and from Ericsson and Vodafone for their perspectives and experience. 

So one of the aspects of the state business nexus is, of course, public procurement and, within that, the dynamics of public/private partnerships.  Privacy International has conducted investigations that have identified a number of issues common to public/private partnerships that involve surveillance technology and the mass processing of data.  And they have developed, and this week published, a set of safeguards in response to these issues and trends, which they are now going to introduce to us. 

Here to do so is Ilia Siatitsa, who is Privacy International's Program Director and Acting Legal Director, and Lucie Audibert, who is a Legal Officer at Privacy International.  So I'll pass it over to you two now, and I'll share a copy of the safeguards in the chat as well. 

>> LUCIE AUDIBERT: Thanks very much, Henry.  So it will be mostly me who will be introducing the safeguards.  Ilia is here for support, if needed.  I'm going to share my screen.  As always, these things can go wrong, so please let me know, can you see my screen?  Great.  Perfect. 

>> HENRY PECK: Yes. 

>> LUCIE AUDIBERT: You can see just the slide, right?  Not the rest? 

>> HENRY PECK: Yes. 

>> LUCIE AUDIBERT: Perfect.  All right.  So good morning, everyone.  Anyone who's in the room in Katowice, I don't know if anyone's there.  I can't see any faces.  And good morning to everyone online for attending.  I'm a lawyer at Privacy International, or PI, as some of us call it.  We're a London‑based NGO that researches, litigates, and advocates globally against government and corporate abuses of data and technology. 

So I'll start with a quick introduction to our work on technology and surveillance partnerships and how we came to design our safeguards.  And then I'll delve into the safeguards in some but limited detail with a few examples.  So we at PI have been, for some years, investigating relationships between states and tech or surveillance companies around the world, in particular where states contract with companies to develop their surveillance or their data processing infrastructure in order to deliver their public functions or to monitor their populations. 

What we've observed and what our partners around the world have observed in their jurisdictions is that these partnerships don't always resemble a sort of traditional one‑off commercial relationship that can result from a public tender, for example, but are taking on a new form where the parties are much more co‑dependent and where states build entire new systems and processes that can be completely reliant on the services of one company, while providing companies with access to a lot of very valuable data that they can then use in developing their own services. 

So we've analyzed a number of examples of such partnerships going wrong around the world.  And from these we've identified a number of common specific issues that stem from this co‑dependence and from the ease with which the technologies involved can enable surveillance and sometimes lead to human rights abuses.  And then for each issue, we've designed a corresponding safeguard. 

So the safeguards are classified between broader principles of transparency, adequate procurement, accountability, legality, necessity, and proportionality, oversight, and redress.  These are, as many here will know, long‑standing principles in international human rights frameworks, formulated in different ways, and the safeguards seek to put them into practice in the context of technology or surveillance partnerships.  Many of our safeguards are actually framed around the U.N. guiding principles on business and human rights, which Isabel has just introduced.  And where appropriate, we explain how a particular safeguard furthers the application of a guiding principle. 

I should say these safeguards are intended to be jurisdiction‑blind so that they can apply as widely as possible across the globe and can be used for advocacy in various jurisdictions.  And one thing to note is that we intend them to be a living document that we hope to update regularly with new examples of abuse from across the world and with examples of successful advocacy, which can refine or define new safeguards in the future. 

So I'll start with the first broad principle that our safeguards fall under, which is the transparency principle, which we see as a preliminary requirement for any challenge to state authority.  What we found is that a very common feature of technology partnerships is a lack of transparency that often stems from an overprotection of companies' commercial interests and a desire from governments to shield the rationale for their decision‑making or the extent of their surveillance systems. 

So an example would be that, in the past few years, the American data analytics company Palantir has developed multiple collaborations with the UK government, for example, to build the NHS health system vaccination database or to build the post‑Brexit border flow tool.  And very few details of these partnerships were initially published, so we, Civil Society, had no idea how the procurement process was conducted.  All we had access to was a couple of redacted framework agreements that were available on the government's contract website.  But these gave no details as to what kind of data would be processed, what Palantir's access to data would be, or what role their software and their algorithms would play in the government's decision‑making. 

So after months of pushing freedom of information requests around government departments and writing letters to Palantir and the government, we obtained a handful of disclosures of data protection impact assessments or data sharing agreements, again heavily redacted, often on grounds of Palantir's commercial interests and protection of intellectual property rights. 

So the five transparency safeguards seek to address some of these issues.  The very first one is a straightforward one that requires that all documentation governing a partnership is made publicly available, with legitimate redactions only.  The second one, which I'll focus on a bit more, seeks to address the limit on transparency that commercial interests or intellectual property rights impose.  And it requires companies to make any technology or software they sell to governments fully auditable at every stage in the partnership and, importantly, prior to contracting. 

If legitimate confidentiality concerns exist, we think audits can be performed by independent oversight bodies or independent researchers or experts, always bound by duties of confidentiality.  So hopefully this can address the commercial interests argument that is too often used to shield the logic of public decision‑making.  We think there are many ways of being able to provide transparency without entirely giving up intellectual property rights, and this goes through engagement with Civil Society and oversight bodies in providing sufficient amounts of information to understand the substance of a partnership and the technology that it deploys. 

The next broad principle is the adequate procurement principle, which at a basic level requires states to comply with or to put in place, if they don't exist already, formal approval processes for public procurement and to conduct appropriate assessments when deciding to contract with a company.  And the reverse is true as well.  We would require companies to conduct appropriate assessments such as human rights due diligence when they contract with states. 

So one issue we seek to address here is one that we've seen in many places around the world and is quite emblematic of technology partnerships: technologies deployed initially for private, commercial, or personal purposes are co‑opted by public authorities for policing or surveillance purposes without recourse to the required public procurement processes. 

An example that PI revealed and criticized last year is the deployment of Facewatch in the UK, a company that sells cameras equipped with facial recognition to retail shops like supermarkets.  And we discovered last year that they offered the police access to the retail surveillance network that they had built across supermarkets, whereby the police could upload pictures of crime suspects to identify them if they visited stores equipped with Facewatch cameras.  And in return, people identified as safety concerns by stores could be denounced to the police through the Facewatch network. 

So the issue here for us is an extension of the realm of state surveillance to places where no one had ever intended it to reach.  And all of this, of course, without adhering to a public procurement process that would have questioned the very purpose and acceptability and result of this extension and these systems. 

So an example of a safeguard we've come up with to address this is Safeguard 10, requiring that, as a principle, public authorities should not have general access to surveillance and mass data processing systems deployed in private spaces, nor to any data derived from these systems.  So any use of such systems or any access to data should be on an ad hoc basis, where the state can request specific information when strictly necessary, following the appropriate legal framework and a prescribed procedure for access to these systems or this data. 

Our next broad principle is accountability, which is a core principle that requires defining responsibilities and identifying obligations, duties, and standards to be imposed on each actor of the partnership and, of course, having documents and policies in place that hold parties to account for these obligations.  An example of an issue we address here arose in France during the COVID‑19 crisis: at the start of the crisis, the government attempted to turn the existing network of CCTV cameras in streets and public transport into intelligent CCTV cameras to monitor mask wearing and social distancing in public spaces. 

This was struck down by the French Data Protection Authority, the CNIL, because it lacked a legal framework and a clear policy defining the purpose of the monitoring and any red lines.  So, for example, the CNIL required that it shouldn't be used to identify and prosecute people for not wearing masks, and rather that it should only be used to collect non‑personal data for producing statistics about mask wearing, for example, in order to inform the design of incentive policies. 

So a safeguard that seeks to address that is Safeguard 13 which requires that whenever a technology is approved for use and for a specific purpose, of course it should be authorized by law, and we'll come to that later, but it should also be accompanied by a technology use policy that defines clear boundaries for the purpose and the use of the technology with an exhaustive list of authorized uses and a nonexhaustive list of prohibited uses.  So any use of the technology that doesn't comply with this policy should undergo a new approval process determining whether the new use would be lawful and compliant with other safeguards, and the technology use policy should be amended to reflect this new agreed use.  And, of course, any use that is wholly incompatible with the original technology deployment's purpose should be rejected. 

So this is really to address what is often called function creep, where it's quite easy for technologies originally deployed for one specific and acceptable purpose to evolve and steadily expand the realm of surveillance without going through the approval processes that are required when you install new technologies. 

Our next broad principle is ‑‑ or actually three very fundamental principles, which are legality, necessity, and proportionality.  So, yeah, fundamental principles in international human rights law.  And in the context of a technology or surveillance partnership, this means, for legality, that any use of technology to address a public need or to fulfill a public function has to be authorized by an appropriate legal framework.  So, for example, in the UK, the police had been using mobile phone extraction technology for a number of years without an appropriate legal framework until the UK data protection regulator declared this unlawful.  And this is only now being rectified in draft legislation.  But we went for years with this practice going on without any legal framework. 

For necessity, we require that a necessity assessment be conducted as part of human rights impact assessments to clearly demonstrate that recourse to a particular technology is necessary to achieve defined goals rather than a mere hypothetical potential advantage.  And for proportionality, similarly, a proportionality assessment has to be conducted to measure the adverse impact on citizens' rights and freedoms and demonstrate that this impact is justified by a corresponding improvement in citizens' welfare, always taking into account any chilling effects on the exercise of fundamental rights. 

So an example of where these necessity and proportionality assessments would have been required is ‑‑ I think it was two years ago ‑‑ when the municipality of Como in Italy deployed a facial recognition system provided by the company Huawei.  When Civil Society obtained the data protection impact assessments that had been performed, the deployment of facial recognition had only been justified by an isolated incident that had occurred years earlier, the municipality couldn't explain the need for facial recognition systems rather than more traditional, straightforward video surveillance, and the impact assessment didn't contain any recognition or assessment of the impact on citizens' rights and freedoms of being subjected to facial recognition in this way. 

So we think these assessments are really essential to demonstrate that states have applied proper thought to the real necessity of a particular technology and to its impacts.  The next principle is the oversight principle.  In the safeguards we require continuing oversight rather than these one‑off checkbox exercises to ensure that a technology is constrained to its stated purpose and to detect abuses all along the partnership. 

So we want independent oversight bodies to be given a clear mandate to monitor partnerships, to undertake regular audits and consultations, and, importantly, we want Civil Society and affected communities fully involved in this oversight.  So one of the safeguards we recommend is the institution of a civilian control board; that's Safeguard 20.  This board would be tasked with monitoring the impact of surveillance technologies on populations, so it should be consulted prior to deployment of any technology.  It should seek the opinion and consent of the affected population, and it should also be tasked with receiving and voicing grievances from any affected individuals. 

So an example of where this would have been necessary is when police forces in the U.S. struck a deal with Amazon Ring to get general access to their doorbell camera security footage.  This is something that had a profound impact on the lives of communities and neighborhoods, creating a sense of distrust and constant surveillance, especially on individuals who were at heightened risk of discrimination.  So that's why consultation of affected communities is absolutely essential. 

And the final principle is redress, which is equivalent to the UNGPs' third pillar, access to remedy.  And in this context, the availability of redress relies, of course, on the other principles having been upheld, such as seeing through the opacity of an algorithm in order to understand its impact and to seek redress.  So the main safeguard we recommend in this section is that the policies governing a partnership should include redress provisions that point to existing mechanisms, or that establish new mechanisms, for handling complaints and enforcing sanctions for violations of policies. 

So an example of where this went wrong is in 2017, when the Mexican government used NSO's Pegasus hacking software against two lawyers who had been critical of the government and who were then assassinated after having been located thanks to the hacking software.  So far NSO has never been held responsible for these atrocities and human rights abuses and has always refused to cooperate with efforts to obtain accountability and redress from the Mexican government.  So we think that's where redress mechanisms could have required NSO to disclose whether they aided the government's surveillance efforts by providing their hacking software. 

Just a word of warning: any redress mechanism should never bar access to courts or other established judicial mechanisms.  This can be an issue where, for example, a regulator or an oversight body is designated to handle complaints; it might take them years to work through complaints, or they might be purposefully underresourced by a government, meaning that people aren't able to obtain redress. 

So in these circumstances courts should always still be able to accept complaints if a regulator hasn't yet decided them.  And we think this is essential to balance access to justice and quality of justice.  So that's it on the safeguards.  I hope this has been a useful overview of what they are and what they seek to address.  We're conscious that they are quite high‑level principles that will need to be refined in practice and that they may not apply in all situations.  But hopefully they provide a framework for advocacy to uphold human rights principles and the UNGPs in the face of privatization of public functions and surveillance. 

So the safeguards are available on our website at privacyinternational.org, and I think Henry might have shared a link.  If you have any questions on the safeguards or any feedback for us, you can reach me on Twitter at the handle over there on the slide.  So, yeah.  I look forward to the broader discussion with panelists.  I'm not expecting any specific feedback on the safeguards, as I know it's quite a lot to digest.  But if you have any, that would be most welcome.  And, yeah, I'm keen to hear about others' experience in applying the guiding principles in the tech sector.  Thanks very much. 

>> HENRY PECK: Thank you, Lucie, for these thoughtful and practical findings and guidance.  I think, in particular for thinking through the UNGPs, this provides a really useful framing and framework for implementation in concrete settings and circumstances.  And it is particularly useful given the trend you're seeing of more co‑dependent relationships between companies and the state, the need for due and equivalent protections when private actors are providing public services, and, similarly, the importance you stressed of regular oversight, not just one‑off processes. 

In the broader session today in our roundtable, I think it's important just to set the safeguards in context, given the much wider scope of the state business nexus that we are addressing.  Our panelists today are engaged in a wide range of public/private interactions, not just public/private partnerships, and will speak to different aspects of state business engagement, including license agreements, which are often more one‑dimensional or less co‑dependent than the PPPs that Lucie described. 

So with that, I'm going to hand over to our first panelist, Deniz Utlu, who is a Senior Policy Adviser at the German Institute For Human Rights, which is Germany's NHRI.  Deniz, over to you. 

>> DENIZ UTLU: Thank you very much, Henry.  Thank you for having me and giving me the opportunity to intervene in this session.  As you said, I'm working for a national human rights institution.  Currently I'm also chairing the business and human rights working group of the Global Alliance of National Human Rights Institutions.  So let me start with outlining the perspective of NHRIs on our topic.  National human rights institutions are neither governmental nor nongovernmental organizations.  They are entities in between, financed by the government but independent of it.  And they have the mandate to promote and protect human rights.  Being financed by the government, they still ought to be independent, and this is checked by a U.N.-level accreditation system.  So accredited NHRIs such as the German Institute for Human Rights, for which I am working, may participate in the sessions of the U.N. Human Rights Council. 

The UNGPs, which were introduced already, mention NHRIs in all three pillars.  And the first pillar of the UNGPs summarizes states' obligations.  Just to be clear, these are obligations, not guidelines: the first pillar summarizes international human rights law, which is binding for governments, unlike the second pillar, which is the pillar on the corporate respect for human rights. 

So the commentary to guiding principle 3 states that NHRIs, national human rights institutions, should help states identify whether relevant laws are aligned with their human rights obligations and whether they are effectively enforced.  This means that NHRIs can serve as an intermediary between different state agencies: first, to check with them whether they sufficiently follow a human rights‑based approach, including in establishing PPPs and public procurement procedures.  And second, to monitor whether they align with states' human rights obligations when using data technology and interacting with tech companies, including in long‑term strategic relations with them. 

And third, NHRIs can also, if they have the capacity, undertake human rights impact assessments and artificial intelligence impact assessments.  Human rights risk or impact assessments are a necessary governmental task before a procurement decision is taken or a collaboration with a private actor is established.  Collaboration between state agencies and tech companies is already an area of human rights concern, for instance in state missions such as building critical digital infrastructure, the use of surveillance technologies to advance public safety, applications for the criminal justice system or border control, national defense, and national security. 

That human rights impact assessments are necessary follows from the requirement of adequate oversight as stated in guiding principle 5, in the first pillar of the UNGPs, in the section on the state business nexus.  It states, quote, states should exercise adequate oversight in order to meet their international human rights obligations when they contract with, or legislate for, business enterprises to provide services that may impact upon the enjoyment of human rights.  And the commentary to guiding principle 5 concretizes that states should ensure that they can effectively oversee the enterprises' activities, including through the provision of adequate independent monitoring and accountability mechanisms. 

In fact, the Privacy International safeguards do reflect those requirements in safeguards 7 and 8.  Safeguard 7 states that DPIAs should be performed for the deployment of any technology involving the processing of personal data, whether or not the processing is considered likely to result in a high risk to individuals.  Where algorithms will be used to make automated decisions, AIAs ought to be performed as well.  So these obligations are reflected in the safeguards and made more concrete here, which I find helpful. 

Governments should also involve independent experts in executing those impact assessments.  And NHRIs can clearly contribute here.  For instance, to give you a good practice example, the Australian Human Rights Commission has been undertaking a human rights and technology project led by Human Rights Commissioner Ed Santow.  They produced a discussion paper back in 2019 which recommended the establishment of an AI safety commissioner as an independent statutory officer within the NHRI. 

Independently, the Australian government already has an eSafety Commissioner who can cooperate with the NHRI where necessary.  And the eSafety Commissioner is a cross‑sectional governmental agency that monitors the human rights impacts of the government's use of artificial intelligence.  Such a body within the government, and other entities such as NHRIs, should systematically be involved in public procurement procedures and in the establishment of any strategic relationship with private actors. 

This said, it should be clear that the field of human rights impacts by technology companies is still a fairly new area for the human rights community.  Therefore, governments should allocate resources for generating knowledge and expertise here, for instance within NHRIs but not only there, and for other independent human rights expert groups, especially in the area of the data economy and artificial intelligence, where the technical expertise usually lies outside of governmental and human rights bodies.  Collective action can also be a way to safeguard human rights due diligence in public procurement procedures. 

In general, meaningful involvement of Civil Society organizations and affected groups is needed to reinforce accountability for states to prioritize human rights protection in whatever action they undertake including public/private partnerships and public procurement. 

UNGP 6, again in the first pillar, sets out that, quote, states should promote respect for human rights by business enterprises with which they conduct commercial transactions, end of quote.  So the UNGPs envision an implementation of the state's duty to protect here by using public procurement as a tool to move business enterprises towards a stronger application of human rights due diligence, as described in pillar 2 of the UNGPs.  So you know that there's the human rights due diligence cycle: having a risk assessment, taking measures to address the risks, checking whether the measures were effective, and then delivering remedy and reporting on that.  This is basically, in a nutshell, the human rights due diligence cycle, which is described in detail in pillar 2 of the UNGPs.  And pillar 1, where the state obligations are described, says that public procurement should actually be used for this.  Well, that is an interpretation, to be fair, but I just read out the quote where you see the source for this interpretation. 

For now, however, procurement law often regards human rights as a secondary restriction, not equally important as the financial aspects of the procurement decision.  German procurement law, for instance, allows public officers to take aspects of sustainability and human rights into account.  But neither is this obligatory, nor is there any mechanism that incentivizes public procurement towards decision‑making in favor of human rights.  Approaches like this may even incentivize public agents to find a way around human rights when they are cooperating with the private sector, since the primary goal and mission is something else, namely reducing costs rather than respecting human rights. 

But respect for and protection of human rights is a cross‑disciplinary state obligation that should be the basis of really any governmental undertaking.  Thank you very much. 

>> HENRY PECK: Thank you very much, Deniz.  That was a really useful overview of both NHRI responsibilities and the overlap with the different guiding principles and the different legally binding requirements and responsibilities.  I'm going to turn over now to Theo Jaekel, who is the Corporate Responsibility Expert with Ericsson. 

>> THEO JAEKEL: Thank you, Henry.  I hope you can hear me. 

>> HENRY PECK: Yes. 

>> THEO JAEKEL: Good.  Please let me know if there are any issues ‑‑ I've had some issues with Zoom lately, so please let me know if you can't hear me.  But anyway, thank you very much for the invitation to speak here today, and thank you for the very interesting introductory presentation as well from Privacy International. 

Maybe just to start by explaining the position of Ericsson within the ICT ecosystem when we are talking about these issues.  So Ericsson, we are a communication network provider.  We provide the infrastructure, and we do this through our customers, who are usually communication service providers or mobile operators.  So we rarely deal directly with government entities, but we do so indirectly, of course, as they are the end users of some of our technology, through business partners and our customers such as the mobile operators, for example. 

Many of the questions that have been raised here today are relevant to us even though we don't deal with them through public/private partnerships; as I said, our end users are sometimes government entities.  So many of the issues of conducting human rights impact assessments and due diligence in our business engagements with our customers are, of course, crucial from our perspective as well. 

And maybe just to reflect on some of the points that were raised before: when talking about these issues of how to protect human rights when dealing with surveillance technologies, one issue, going back to the point of transparency, which is, I think, very important, is to clarify the roles of the different actors in the ICT ecosystem, to understand what responsibilities they have and what is reasonably expected, from a U.N. guiding principles perspective, of the different actors, in order to make sure that each player in this ecosystem, both companies and states, takes its responsibility seriously.  Because if we simply apply the same kind of expectations or the same practical requirements to all types of actors and players in this ecosystem, that might not be really relevant or adapted to what's feasible for each actor. 

So we recently published a report on 5G and human rights, for example, where we explored this concept of different players in the ecosystem, whether vendors, mobile operators, platforms, or government entities, and made sure that we also raise awareness about these issues across the value chain, across this ecosystem.  So that's my first point, on transparency.  I think that we need to think about these issues beyond single transactions: what role can we, as companies, play in dialogue with government entities, our business partners, but also Civil Society, in raising awareness about how the technology is used, what the purpose of the technology is, what the risk of misuse is, and how we can mitigate it as different players of this ecosystem. 

To the point also of the state business nexus, which is very, of course, important from a U.N. guiding principles perspective as well, I think it's crucial for us not to forget that one of the kind of important contributions of the U.N. guiding principles, when they were adopted ten years ago, is defining that distinction between the state duty to protect and the corporate responsibility to respect human rights.  So, of course, the corporate responsibility to respect exists regardless of a state's ability or willingness to comply with their duty to protect human rights.  So we need to take a responsibility regardless. 

And, of course, part of the state duty is potentially regulating corporate behavior.  But I think we should be cautious not to blur those lines too much when we are dealing with these types of issues from a human rights perspective, where, for example, states' duties would be applied to companies.  As I said, our responsibility exists regardless of a state's ability, but I think we still need to have that distinction.  That clear distinction is really one of the strongest contributions of the U.N. guiding principles, and it is why we apply that thinking in our way of working; it is an important part of how we design our due diligence frameworks and what we can do in practice. 

And then maybe just the last point before I hand back to you, Henry, is on remedy and redress.  That's also a point that has been discussed to quite some extent within our industry, not least within the B‑Tech project: how to design and enable remedy, again, in this ecosystem approach, because in some cases certain companies such as Ericsson might be a few steps removed from the actual potentially impacted stakeholders.  So applying the thinking of operational‑level grievance mechanisms in our industry might be quite difficult or not really effective.  But what can we learn from other industries, such as the financial sector, in designing and enabling remedy in an ecosystem approach?  Again, maybe not just for a specific transaction, but partnering with our business partners, customers, and Civil Society organizations in raising transparency and building in those systems.  And also thinking about what rights‑respecting remedy for privacy violations, for example, looks like; that maybe needs a specific approach as well. 

So I think that is something that we really need to focus more on in our industry and, of course, in collaborations with governments as well.  So I think I'll stop there for now and hand it back to you, Henry.  Thanks. 

>> HENRY PECK: Thank you, Theo.  Thank you for spelling out also a bit more about the differences in actors among communication network providers as well as vendors and other elements within this ecosystem and the challenges of applying some of these principles wholesale given the distinctions you've shared. 

I'm now going to turn over to Moira Oliver, who is the Human Rights Lead for Vodafone.  So Moira, over to you if you're ready. 

>> MOIRA OLIVER: Sure.  Thank you.  And good morning, good evening, everyone, wherever you are.  I think it was really helpful, actually, to follow on from Theo because, as you're saying, Henry, it's really important to understand the different relationships within the ecosystem.  I'll expand a little bit on how Vodafone operates and then come on to talk about our interactions with government through the licensing arrangements, which Henry mentioned at the top of the call.  And, again, just to make it clear that we don't, as Vodafone, operate public/private partnerships in the way that Lucie has described in her really excellent paper with great insights.  And I think, you know, the theme of transparency in it is also one that I'd want to pick up in a moment. 

So Vodafone is the largest fixed and mobile operator in Europe, and we also operate in other countries.  And we have mobile and fixed networks in 21 countries, and then we partner with others commercially in about 49 more.  And overall just to give you a sense of scale, we have 300 million mobile customers.  So, obviously, we operate at scale, as you see.  But what we do is basically give people connections.  We don't operate platforms for people to provide, you know, online content, and we contract with individuals and businesses around the world to provide them with that connectivity. 

As part of that, when we operate in different countries and have actual licenses to operate for our subsidiaries, we are required, as you said, to have a license from the government of that country.  I think, at least, it is a contractual arrangement, so there will be contractual obligations as part of it.  It's not a contract in the sense of something that you can freely negotiate; often it's very much standard terms.  And as part of those licensing requirements, we have to comply with local laws. 

And I think this is, you know, where we see the sort of interesting nexus between state obligations under pillar 1 and companies' obligations under pillar 2.  And I completely agree with Theo about the importance of understanding the separation of responsibilities.  Often the challenge can be if you are operating in a country, or indeed, you know, we haven't talked about suppliers, but sourcing from a country, where the rule of law or respect for human rights or the state obligation that Deniz was talking about is not in place to the international standard.  And I think that, you know, presents some real challenges, because, as Theo was saying, as a company you still have your obligation to respect human rights, but if you are operating in a framework where there aren't those sorts of statutory safeguards, then that can be a real challenge. 

And I guess, you know, where this particularly comes into play is in the area of law enforcement demands, for example.  As an operator you may be required to comply with a local direction either to pass certain customer data to the local government or to block or throttle the network in some respect.  And I think that, you know, that can be a real challenge.  You know, Vodafone joined the Global Network Initiative, as have a number of other operators, to really highlight, you know, the challenges that there can be in this area around the world. 

And I think, you know, going back to this point about transparency that Lucie was making, you know, as a key part of the U.N. guiding principles, I think this is a really critical and important way of shining light on some of those challenges.  Vodafone has been issuing transparency reports since 2014, which talk about some of those challenges and give data where we can.  And going to Theo's point about remedy, and I'd be very interested to hear others' thoughts on this: you know, when people's data is part of a law enforcement demand, for example, often people won't even know, because there is no obligation to inform them in some jurisdictions.  And, in fact, in some jurisdictions it's, you know, against the law to actually disclose this.  Some countries allow for disclosure after the event.  But, you know, it's very difficult, I think, to talk about remedy if you don't know that your rights have been impacted.  So I think that's a particular conundrum. 

And, yeah, I think I'll just pause there.  I think in the chat there's a link to our materials and our law enforcement demand reporting is available from that link as well.  Thank you. 

>> HENRY PECK: Thank you, Moira.  Thanks so much.  I think I'll move straight on to our final panelist, Sebastian Smart, who is a Senior Adviser at the National Human Rights Institution of Chile.  And then we'll turn over to questions from the audience and can have a more free‑flowing discussion.  Sebastian, over to you. 

>> SEBASTIAN SMART: Many thanks, Henry.  And, well, first of all, I'd like to thank you for the invitation to be part of this panel, and special thanks to the Business and Human Rights Resource Center, the B‑Tech project, and Privacy International as well for organizing this conversation, and to all the panelists, from whom I have already learned a lot. 

There are a few questions that guide our conversation, and the first question talks about the value of the UNGPs and our experience working with the UNGPs in practice.  I have to start by saying that I have worked with the UNGPs from different perspectives: first from an academic point of view, then applying them to extractive industries in Chile particularly, and then working with different companies and Civil Society organizations on issues related to technology and human rights. 

I'm currently working with the Chilean National Human Rights Institution.  And I think that such a personal progression has to do also with the progressive interest of different sectors, I will say, in the UNGPs as well.  So we have to remember that when the UNGPs were first being drafted, the main sectors that grabbed the attention were companies that operate through extended supply chains and extractive companies, and tech companies at this point were usually considered only, I will say, narrowly in the context of censorship and surveillance.  And, in fact, if you remember the period, and especially the Occupy movement and the Arab Spring, tech companies and the platforms they provided were, of course, seen as enablers of democratic activism.  So tech companies were mostly seen as enablers of human rights rather than risks to them. 

But that narrative, I will say, not only affected the drafting of the UNGPs but also created an environment where government regulation was seen, I will say, as negative and counterproductive.  In recent years, however, we have seen how corporate and governmental activities have generated an increasingly adverse impact on human rights, from issues that have been clearly well expressed here, freedom of expression and surveillance, to decision‑making, I will say, for social programs that lacks transparency and may end up discriminating, which is of particular importance in the context of increasing privatization of public responsibilities. 

I think, then, that the smart mix of tools given by the UNGPs, including regulation, public procurement, and national action plans in pillar 1, but also the set of tools provided in pillar 2 for companies, can be a starting point to avoid such consequences.  In other words, the UNGPs should be, like, a basic model, or if you want to put it that way, a floor from which we should start building current mechanisms to protect and promote human rights in the digital environment. 

And in that context, I will say that the PI safeguards for public/private partnerships in tech are an excellent complement, or, if you want to put it that way, clear guidance on how to develop the UNGPs in a specific context when it comes to tech.  I still think that one of the biggest challenges that we have, at this point of growing interest in applying the UNGPs and different guidances to specific national or even transnational contexts, is policy coherence. 

And let me give you a couple of examples from Chile, not of this smart mix, I will say, of regulatory frameworks that currently use the UNGPs, examples that keep us, at least in the Chilean National Human Rights Institution, especially alert.  On the one hand, we are in the midst of developing a second national action plan on business and human rights, and it should be remembered that the first national action plan on business and human rights does not refer to technology at all, no?  So we expect that the second national action plan will address some issues that relate to tech and business and human rights, especially because it has better stakeholder consultation mechanisms. 

Also, there are a series of bills and regulatory frameworks of special importance to the digital environment that are currently being discussed in Congress.  And they have been criticized for different reasons.  Among them, the recent strategy on artificial intelligence, for example, which makes few or almost no references to human rights or to business and human rights.  Also, the existence of a bill that creates a regulatory body, but one which lacks sufficient independence from the executive branch, and a bill to regulate digital platforms that has been strongly criticized by the Global Network Initiative.  The biggest criticism here has been that, in the way it's formulated, the law could excessively limit freedom of expression. 

Finally, and I think more importantly, Chile is going through a process of profound constitutional reform, which means a potential change in the institutions for the promotion and protection of human rights but also in the human rights catalog, and here, I will say, particularly important is the social rights catalog.  And despite these issues, the question of the relationship between social rights and a digital welfare state, as (?) puts it, where we see an increasing privatization of decision‑making processes, seems to be absent in the current constitutional process in Chile.  And here is where I would like to put some attention, no?  Particularly because this situation in the Latin American context can be, I will say, catastrophic not just for human rights but also for democratic stability in Latin American countries. 

And in this specific context, the PI safeguards for public/private partnerships in tech give us some, I will say, guidance to avoid part of those problems, no?  Firstly, what is particularly worrying from a public/private partnership perspective when it comes to social rights are the systems of surveillance, inspection, and punishment of people who do not meet the criteria for access to social rights, processes that have increasingly relied on the use of digital technologies and automated decision‑making. 

A second issue to take into account is the assurance or guarantee of equal and effective access to the enjoyment of social rights, so that digital or artificial intelligence technologies used in the provision of social services do not create obstacles that affect the holders of these rights.  And a third issue is to ensure that decisions about people's welfare do not depend exclusively or decisively on automated decision‑making systems, whether or not they are based on artificial intelligence techniques. 

So wrapping up, I will say that we are experiencing a growing context where states engage in privatization or contracting out of tech services that may impact human rights, and that in these processes governments must exercise adequate oversight.  And here I'm using the words from the UNGPs: including by ensuring that contracts or enabling legislation communicate the state's expectation that service providers will respect the human rights of service users.  And yet that's not enough, no?  As the PI safeguards clearly put it, states should promote awareness of and respect for human rights by businesses, including through the terms of procurement contracts. 

In such a complex scenario, my biggest worry, I will say, is how to ensure policy coherence, including when the state acts as an economic actor.  How do we ensure that government expectations towards technology companies, for example, do not cause confusion among companies and stakeholders?  For example, how do we give tools to governments, through public procurement, for example, to distinguish between companies that have or do not have robust human rights policies and due diligence processes?  And on the other side, how do we ensure that companies do not ignore, as we have already said here, regulatory requirements or guidance from different states? 

And in my opinion, and here I'll finish, national human rights institutions are key actors for achieving such policy coherence in public procurement and in any policy that shapes the relation between governments and tech companies, and there are good examples of how that can be achieved in the digital sector.  Deniz has already given good examples from Germany.  Also, the Danish Institute for Human Rights joined, for example, Global Partners Digital to create a guide for including tech issues in national action plans.  I had a great opportunity to participate in the initial steps of that guide.  And in Chile, from our side, we have published extensively on business and human rights, and we are starting, I will say, and here I say starting, a process in the tech sector, so we are happy to partner with different organizations or to take up good practices to continue that work.  So many thanks. 

>> HENRY PECK: Thank you, Sebastian.  Thank you for that very useful intervention and introduction to some of the work you are doing in Chile, the circumstances you are encountering, and broader developments among NHRIs. 

That is the end of our set presentation and panelist intervention stage, so we're going to turn over to questions and answers.  And while we're collecting these, I might kick off just by returning to Lucie and Ilia and asking a little bit about what your next steps for the safeguards are, and perhaps you could also speak about ideas for engaging actors with them, and about how to deal with a sort of pandemic climate in which PPPs are increasingly ‑‑ well, maybe rushed through under the guise of emergency, and how to effectively navigate such circumstances or emergency needs with the safeguards that you've proposed. 

>> LUCIE AUDIBERT: Yes, for sure.  Thanks, Henry, and thanks, everyone, for the really interesting sharing of experiences and the positive comments on the safeguards as well.  So in terms of next steps: we've just published the safeguards today for the first time, but we've already started using them in various aspects of our work.  And because our work often revolves around understanding the role of companies and states, often at the same time, in enabling surveillance, we have found them already quite useful in advocating for stronger protections.  So, for example, in the context of platform workers' rights, we found some of them quite useful to recommend protections against the sharing of employees' data from platforms to governments and things like that. 

Otherwise, we have partner CSOs around the world who have started to use them in their own advocacy.  And this advocacy can be within specific partnerships: when partners investigate a partnership and advocate for protections, the safeguards can be useful there.  But they can also be useful, and that's where we're hoping to do a bit more next year, in legislative advocacy.  So in the context of the European Union AI Act, for example, we're hoping that some of the protections that we recommend can make their way in there.  Because we find that there is a lack of consideration in the AI Act of these relationships between states and companies that can enable abuses of AI.  And so if we want to have protections that work, then we think this relationship should be tackled head on. 

So, yes, that's what our aim is: to really encourage Civil Society to use them when denouncing specific partnerships but also in legal advocacy.  And then your other question was about the pandemic and how to face partnerships when they're rushed through.  Well, we think that's really where the safeguards can be most useful.  Because if you have a set of established protections, maybe in legislation or in companies' or governments' policies, that have been agreed upon and passed in advance, then when you're faced with these rushed circumstances, that's when they become really useful: you don't have to scramble to think about what kind of process you need to go through or what kinds of assessments you need to make.  You've got this checklist that you can go through and ensure that you're not, you know, pouring massive amounts of money, as we've seen in the pandemic, into solutions that will end up being counterproductive or that will end up harming individuals and communities.  So, yes, that's really when we would love for them to be used, as a checklist for these kinds of rushed situations.  And we've actually seen that the most problematic partnerships in the past couple of years have often arisen from the pandemic and from these, yeah, rushed situations.  So I hope that answers the question. 

>> HENRY PECK: Great.  Thank you.  That's definitely more food for thought.  I'm going to pass to Isabel, who may say a few words about how B‑Tech has approached these different challenges within the state business nexus umbrella. 

>> ISABEL EBERT: Yeah.  Thanks, Henry.  We had really fantastic interventions from our panelists describing the challenges on the ground.  I have three points of reflection on the discussion.  First, you will see that we put out one foundational paper as part of the B‑Tech Project, which has a focus on pillar 1 ‑‑ I just put a link in the chat ‑‑ and in particular headline 4 articulates some of the actions states can take to uphold their duty to protect.  One of the examples we mention is that when states work with export credit agencies, support them, and finance them, they have really strong leverage to incentivize companies to demonstrate that they're carrying out proper human rights due diligence and that they are reviewing their relationships with state agencies properly, and we've included a few of those examples. 

Another area is development finance.  For example, the European Investment Bank is going to publish a report, probably this week, on the responsibilities of telecommunication companies towards human rights.  That also entails a responsibility for development finance institutions to review which types of companies they are investing in, and to differentiate between the companies that are really leaders in the human rights space ‑‑ those becoming more accountable and transparent and putting a lot of effort into their processes ‑‑ and, on the other hand, the laggards that are failing to demonstrate these crucial activities on respecting human rights. 

The second point I wanted to make is obviously why we are also here today: we are happy, as B‑Tech, to act as a platform for exchange and to convene stakeholders if they want to work together on operationalizing what good practice could look like for state business nexus relationships.  That also entails that we are currently, in parallel, convening conversations about what a rights-respecting approach to technology regulation should look like ‑‑ where the cutoff points are at which a regulation might become too broad and overregulatory, and what the elements would be of a regulation that really focuses on impact rather than mere compliance. 

And the third point I wanted to make is that obviously we are not operating in a vacuum.  B‑Tech is convening a company community of practice where we work with the respective representatives in technology companies to talk through some of these challenging situations, which sometimes include situations where states are the business counterpart.  After I speak, I will put some links in the chat where you can see some of the reflections coming out of these conversations.  So just to summarize: on one hand, we've put out language on what the UNGPs say about state business nexus relationships; we are happy to provide a platform; and we are really happy to listen to all types of stakeholders about this work ‑‑ not only business, of course, but also Civil Society and NHRIs, where we will be intensifying the work.  You can also reach out to us; I've put our email address in the chat as well.  Thank you. 

>> HENRY PECK: Thank you very much, Isabel.  I want to open it up for anyone to respond, if they'd like ‑‑ please just raise your hand on the platform.  I think we've lost Theo, unfortunately, due to a connection issue, but we're trying to get him back in.  Please feel free to flag if you'd like to add to anything that's been said.  And in the meantime, Moira, if you're willing, could you talk a little about something you mentioned in terms of Vodafone joining the GNI and the effect that's had on your practices?  Perhaps you could also talk about operating globally, in the context of the state business nexus we've been discussing, on a playing field that's not entirely level, where some companies give more attention to these issues than others, and how that may be a challenge you encounter in dealing with different contexts. 

>> MOIRA OLIVER: Yeah, sure.  Maybe Isabel could put a link in the chat to the GNI's website for people who aren't familiar with it.  GNI is a multistakeholder organization with investors, large platform companies and telcos, academics, NGOs, and others.  It was very much set up to advocate for privacy and free expression globally, particularly in the context of the state business nexus that we've been talking about in terms of law enforcement demands. 

How does it help?  Well, I think GNI is a great organization.  It has a set of GNI principles that all company members must adhere to, and an assessment framework that goes with that.  Where I think it has particularly helped is in bridging the gap that I think the B‑Tech Project has also been addressing ‑‑ and it's been great working with B‑Tech over the past year.  We have had this challenge: the UNGPs are a great foundational document, but what do they mean in practice?  What does it mean to implement the UNGPs throughout an organization, or in particular scenarios?  The tech sector has some particular challenges ‑‑ AI has been mentioned a few times ‑‑ and this is exactly the sort of area where tech is moving ahead and regulation is maybe struggling to catch up.  So this is where organizations like GNI or B‑Tech have really helped to work through what the UNGPs and respecting human rights mean in certain specific scenarios. 

So that has been really helpful to Vodafone and a number of other companies, just to really understand the policies and processes that you need to have in place, as well as the shared learning and, of course, the outreach.  On GNI's website there is, for example, the country legal frameworks resource, which gives a summary of the laws around the world that relate to the issues GNI deals with.  That sort of transparency is also part of its advocacy. 

I've forgotten your second question now.  Certainly it's helpful in terms of that policy and implementation piece.  Do you want to remind me of your second question, Henry? 

>> HENRY PECK: Yes, sure.  Thank you.  That does lead into it, which is really about operating in a field where not all of your competitors are members of the GNI or adhering to the same principles. 

>> MOIRA OLIVER: Yeah.  Got it, yeah. 

>> HENRY PECK: How do you anticipate throttling situations, for example, and how does that affect corporate behavior? 

>> MOIRA OLIVER: When it comes to those specific demands, we have to be honest: there's very little you can do, because you have to comply with the law.  It's about how you evaluate the law.  So when it comes to the UNGPs and the GNI principles, it's about the policies and processes you have in place to evaluate the legal position and whether it applies ‑‑ is the demand in writing, et cetera?  Whereas maybe some other companies that operate without such adherence might interpret a demand differently.  At the end of the day, if you're a licensed operator, then the obligations apply to everyone.  As I said, it's about how you go about implementing a demand, or making a decision about implementation. 

I think this point around a level playing field is a really important one, Henry.  A number of companies have been very advanced in implementing the UNGPs throughout their business operations for a number of years, making good progress and good efforts, while other companies haven't seen the need to do that.  With the proposed EU mandatory human rights due diligence legislation ‑‑ obviously we have yet to see the exact detail ‑‑ I think it's exactly that kind of framework which is needed to create a level playing field and to make sure that all companies live up to their responsibilities in this respect. 

>> HENRY PECK: Thank you, Moira.  Thanks a lot.  I might call on Sebastian, if I may and if you're up for it.  You spoke a little about your initial responses to the safeguards; perhaps you could also talk about any challenges or gaps you foresee, or, more locally to Chile, whether you think the developments in mandatory human rights due diligence in the EU will have any impact within Chile. 

>> SEBASTIAN SMART: Yeah.  Thank you, Henry.  In terms of the challenges of the safeguards in Chile, as I was saying before ‑‑ and this is not about the safeguards, it's about Chile itself ‑‑ it's how to start thinking about the digital welfare state that we may have.  We are starting to have it now, and we will clearly have it in the future, and social rights, specifically in a Latin American context, are key issues that may end up breaking democracies ‑‑ I mean fragile democracies, as we have in some countries in Latin America.  We have just seen that at the end of 2019 with massive mobilizations in Chile.  My biggest worry nowadays is how we are going to think about issues that are not currently on the agenda in a Latin American context, and in Chile particularly: how do we start thinking about companies that are taking the place of governments in some areas, specifically in deciding, for example, the social programs that are developed in a national context? 

And what are the responsibilities of those companies in the first place?  And secondly, what are the responsibilities of governments in having these partnerships with private companies?  There are specific programs very well described in the safeguards in different contexts.  But in Chile as well, we have health programs that are led by private companies, and decisions on, for example, children's social programs that are made through algorithms we really don't understand ‑‑ we cannot open that black box.  So issues of transparency in automated decision‑making are especially difficult.  My biggest worry is that this is not part of the agenda, and I'm sure it will be a problem in the near future, especially when massive mobilizations in the Latin American context arise from the inequalities created by social programs in our countries.  These issues should raise the importance of social policies, yet they are missing not just from the Latin American agenda but mostly from the digital environment, where we talk a lot about privacy and freedom of expression but not much about social issues and social rights.  I think that's a huge gap in the digital environment itself, and from the perspective of the UNGPs, I think we have to start thinking about those issues. 

>> HENRY PECK: Thanks.  And just to hold you for a second, there's a question from the audience which links well to that: the underlying issue in the problematic areas ‑‑ AI, privacy, freedom of expression, and, to continue your chain, social rights ‑‑ is the use and abuse of personal data.  Do we have, or do we need, principles and action to clarify and act on matters of personal and non-personal data?  And just to let you know, we've only got a couple of minutes left. 

>> SEBASTIAN SMART: Well, definitely, it's an issue as well.  We still have to make progress in order to have good regulation of the usage of personal data.  For example, in Chile, as in most Latin American countries, we don't have a data protection authority yet.  So we have to start looking at good experiences, but also at what has gone wrong in other countries ‑‑ not just to replicate it in the national context but to adapt it to the national context ‑‑ and at how the usage of personal data may end up generating human rights abuses.  And when we talk about human rights abuses in the Latin American context, we talk about gross human rights abuses.  When personal data is misused, as in the Mexican example Lucie gave before, it may end up enabling extrajudicial executions in Latin American countries.  So that's the issue we are dealing with here. 

>> HENRY PECK: Thank you.  Thanks very much.  We've just got a minute or two left, so if there are any final responses or interventions, now is the time; otherwise I don't think we really have time for a new question thread.  Speak now or forever hold your peace.  Okay, I'm being told we need to close.  Thank you, Lucie, Moira, Sebastian, Theo, and Deniz, our terrific speakers today, and thank you for the discussion.  Thank you, Isabel and B‑Tech, for cohosting, and to the IGF and the few attendees who did manage to make it to our room in person.  Really glad to have had this conversation and looking forward to continuing it in due course.  Thanks very much. 

>> LUCIE AUDIBERT: Thank you, Henry.  Thanks, everyone. 

>> SEBASTIAN SMART: Bye.