The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> MODERATOR: Welcome to this panel, Ensuring the online coexistence of human rights & child safety. My name is Mia McAllister. I'm a Program Manager at the FBI.
Today's session aims to provide meaningful insights into the complex interplay between technology, privacy rights, and efforts to protect children in the digital space.
As digital technologies continue to evolve, they offer both opportunities and challenges. Today's panel brings together experts from diverse fields to explore how we can foster an online environment that respects human rights, while prioritizing child safety.
I'll go ahead and introduce our panel today. Online, we have our moderator, Stewart Baker. I don't think he's joined just yet, but Stewart Baker is a Washington, D.C.-based attorney specializing in homeland security, cybersecurity, and data protection. He's held notable government roles, including serving as the first Assistant Secretary for Policy at the Department of Homeland Security and General Counsel of the National Security Agency.
Stewart is also an author and host of the weekly cyberlaw podcast.
His vast experience in cybersecurity law and policy adds to this discussion on human rights and child safety in the digital age.
Next we have Dan Suter, a principal advisor in New Zealand. With a background as a criminal defense lawyer and a prosecutor specializing in serious organized crime, Dan has also served in international roles, including as a liaison prosecutor to the United States.
Next we have Mallory Knodel, she's a technology and human rights expert, specializing in Internet governance and digital policy.
Mallory is active in Internet and emerging technical standards at the IETF, IEEE, and the United Nations. Her background brings a unique perspective to the intersection of technology, policy, and human rights.
Next in the room we have Katie Noyes. She's the Section Chief for the FBI Science and Technology Branch's Next Generation Technology and Lawful Access Section.
She serves as the organisation's lead on 5G, Internet governance, and technology standards development.
Katie is a senior strategic advisor for technology policy at the FBI with over 20 years of experience in the intelligence community, including service as an Army military intelligence officer and various roles with the Defense Intelligence Agency. Katie brings extensive expertise in security and policy development.
Lastly but not least, we have Dr. Gabriel Kaptchuk. Gabe is an Assistant Professor in the Computer Science Department at the University of Maryland, College Park. Gabe's work focuses on cryptographic systems. His experience spans academia and policy, including work at Intel Labs and in the United States Senate. Gabe's insights bridge technical and policy realms, contributing to secure online environments.
So as you all can see, today we have a wide range of experts, and I'm really excited for today's discussion. We'll have about 60 to 65 minutes of moderated discussion, and that will leave room for questions from both the audience and online.
So without further ado, since Stewart is online now, I'll turn it over to you, Stewart.
>> STEWART BAKER: Okay. That's great. Hopefully I can turn on my camera as well. Yes. There we go. All right.
Thanks, Mia. That was terrific and a great way to get started. I thought it would be useful to try to begin this by talking a little bit about where we are today, what's happened over the last year or so, that would give us a feel for the environment in which this discussion is occurring.
and particularly, because there's been a lot of movement in Western democracies on this question. I thought it would be useful to ask Dan to start by giving us a feel for what some of the British commonwealth countries have been debating with respect to protection of children and the worries about undermining strong encryption. Dan, do you want to kick us off?
>> DAN SUTER: Thanks, Stewart. Thanks to everybody over in Riyadh. Great to be part of such an esteemed panel today, all the way from New Zealand in the small hours; it's only about 1:00 in the morning over here. Look, it's really important at this point to highlight that I'm going to speak about Australia and the UK, but I'm not a representative from those jurisdictions. But you are right, Stewart, the legislation in both countries really has been a point of discussion on how it can be used.
But look, I really want to say there are practical implications on what can be achieved by regulation in this space, and a more meaningful strategy would be to consider how governments consistently engage with tech firms on the issue of child safety and lawful access. It's really not enough simply to recognise the risk, as probably we have done as five countries. I'm looking there at the UK, Australia, Canada, New Zealand, and the U.S. We need to really raise our ambition and develop a collective approach to engaging with each other and towards a safety by design ethos, including designing in lawful access that does not undercut cybersecurity or privacy. And certainly, that's exactly where those five countries are moving towards, in relation to (and I may speak to this a bit later, as may others on the panel) the 2023 and 2024 Five Country Ministerial communiqués. One of the primary duties of government is to keep citizens safe from serious harm, and here we're talking obviously about child safety as well, and carefully governed and exceptional lawful access should be a crucial part of those efforts to protect the public from harm.
So when I speak about the legislation that follows, this primary duty is reflected there. It's very much an incremental approach, either through consultation or voluntariness.
So in relation to Australia, here we're going to get a little bit more technical, but Australia has the Telecommunications and Other Legislation Amendment (Assistance and Access) Act 2018, shortened to TOLA. That introduced a framework for Australian agencies to make both voluntary and mandatory requests for industry assistance to gain access to encrypted data.
Part 15 is the really important aspect of TOLA. To emphasize this, it establishes a graduated approach for agencies in Australia to receive assistance by establishing three main powers. One is a technical assistance request, or TAR, where agencies can request voluntary help from designated communications providers, so industry, where they're willing and able to give assistance. Secondly, technical assistance notices, or TANs, where agencies can compel designated providers to give assistance where they already have the technical ability to do so. And thirdly, technical capability notices, or TCNs, which can require providers to build a new capability to help authorities.
These can be used covertly, and therefore customers of those platforms may not know if their data has been accessed or even requested under TOLA.
There is independent oversight. That's already there in Australia in relation to actions conducted by intelligence agencies, by the Inspector-General of Intelligence and Security, and there is equivalent oversight for law enforcement agencies as well.
The operation of the act is subject to ongoing review by Australia's Parliamentary Joint Committee on Intelligence and Security, which actually reports on how many orders have been issued.
I can tell you that in 2019-2020 there were 11 TARs issued, and in 2021-2022, 30 technical assistance requests.
So let's move on to the UK. The UK passed its Investigatory Powers Act in 2016, including an obligation on service providers to remove encryption or electronic protections from communications for investigatory purposes upon proper notice. Notice again this is an incremental approach. There's a consultation phase with those service providers before a technical capability notice is issued.
So again, really robust safeguards. There's a double lock mechanism, for example, with independent oversight by the Investigatory Powers Commissioner.
In terms of my own jurisdiction as well, quickly, New Zealand: obviously most of the major communication service providers are based offshore. The main issue in relation to New Zealand therefore is extraterritoriality and enforcement. There are a couple of important provisions within our legislation, being the Telecommunications (Interception Capability and Security) Act 2013. Commonwealth jurisdictions often have really convoluted act names, but there's a duty for service providers to assist with, and we're talking about encryption here, decrypting telecommunications when there's an execution of an interception warrant. So under the legislation, the shorter name being TICSA, the overriding objective is to ensure that telecommunications service providers are ready to be able to intercept communications, and there is a provision for a minister to direct those communications service providers and say, look, you need to be in a position to be intercept ready and accessible, and part of that duty will be in relation to decrypting communications.
I'm not aware of that ever having been done, but the simple fact is it's really difficult, for those over the top providers such as Meta with WhatsApp and Facebook Messenger, for example, to enforce any of those provisions through a ministerial direction.
Again, look, just to complete this phase: regulation is one aspect, and there have been those points of discussion with the use of these particular pieces of legislation and the provisions that they provide, but the real emphasis should be on what we can all agree, and moving the debate on to ensure that we reach a position where we understand that safety by design ethos and progress towards where we have commonalities in the debate. Passing back to you, Stewart.
>> STEWART BAKER: Yeah, Dan, just one follow up question. When the Investigatory Powers Act and the Australian bill were moving through parliament, there was considerable anxiety expressed by industry and Civil Society that these acts enabled the government to insist that encryption on a particular service be redesigned so as to allow for lawful access, in a way that might impact the security of communications. Do you think that's a reasonable reading of those bills? And as far as we know, has there ever been an order, a capability notice, that required that kind of modification?
>> DAN SUTER: Look, of course, in terms of the debate, there's always going to be focus on what the most extreme point can be in relation to legislation. But I think it's really important to again reemphasize that there is that incremental approach, working towards a point where ultimately it is for governments to determine the protection and the safety of their citizens.
but built in within that legislation, of course, when we might have a debate about this, we're not going to focus on the intricacies because we haven't seen this work through in terms of how it practically applies but there are robust safeguards that have been there well established for a long time. They've not just been plucked out for the benefit of government. We know they're there and they work. The double lock mechanism, the oversight in relation to intelligence agencies. It has to be, in terms of any legislation, to ensure that there is that social licence, that these safeguards are built in. But there also has to be balance that where the public do need to be protected the power is available.
But I can tell you, from a New Zealand perspective, in terms of the legislation I've referred to, there has to be a balance with cybersecurity and also privacy, preventing any collateral intrusion on individuals; these powers have to be specific and targeted to ensure there isn't that collateral intrusion. I think it's really important when we talk about this debate that we understand we are talking about protection of citizens. We are talking about that being a very last stage. But there has to be the power and capability there if needed, with those safeguards built in. Back to you, Stewart.
>> STEWART BAKER: That's great. Mallory, do you see this the same way? That the English speaking countries, other than the United States, have given themselves the authority to have a pretty dramatic impact on the encryption services that are provided, but have, for a variety of reasons, not gone to the full extent of their authority? And have built in a number of protections for privacy and security?
>> MALLORY KNODEL: Right. So because we are in such a late stage of these debates, and not a lot has changed on the regulatory side for a while, I'll have to say no. I think that's not a surprise. I think we've obviously had a similar debate now for a very long time. I do actually think a lot of other externalities have changed besides government positions on this. I'll only mention, because we're of course really short on time by now, what's relevant to what Dan was just saying: in Australia, because of TOLA, you now have one less privacy startup. There's an application folks were using called Session, which is an end to end encrypted app. It's interesting because it doesn't use phone numbers or user names in a persistent way, so it provides a little bit more pseudonymity when using the application. That's what kind of differentiates Session from maybe other apps. They announced very recently that they have to leave. They're going to Switzerland, because they've been visited by the authorities and they're quite worried they'll be asked to back door it or provide user information to the police. That's exactly what privacy companies have said about the UK Online Safety Act. It's unfortunate that Ofcom, the regulator, has been somewhat silent on how they would handle orders to back door, whether they would do it under a gag order or be transparent about that, but we've heard from Signal at least, and certainly WhatsApp has not been shy about expressing Meta's position on this, that they would leave the UK before back dooring the software, for sure. And already, right, this gets into more of the solutions space, already there is data that can be obtained and provided, and that is provided, based on leaks from a few years ago; I don't know, it was like a slide deck that the law enforcement community was using to explain which of these encrypted services have which metadata and how you can get it. This sort of already exists. Right? So once an application decides to completely leave a jurisdiction or completely not comply with requests like a back door, then you also lose access to that metadata as well. You also lose access to the helpful service data that you could have potentially used.
So it's not a great move for anyone. Right? When this happens. But it will continue to happen because what is being provisioned in these laws amounts to mandated back doors that change the software for everyone all over the world, not just in that jurisdiction, and it changes it in a persistent way so that back door or capability is always there and it changes it for everyone who uses the application whether or not they're suspected in a crime. It's just a much too overbroad piece of legislation.
And yeah. So what you're talking about, Dan, where we would rather complement regulation with the ability to work together and find solutions, you take that off the table when applications start leaving jurisdictions over risky laws.
>> STEWART BAKER: One question, Mallory. Session left Australia as its corporate headquarters. Maybe they also plan never to sell in Australia. I'm not sure we know that.
>> MALLORY KNODEL: Yeah. That's potentially
>> STEWART BAKER: Quite significantly, nobody else who provides end to end encryption has said we're leaving. That suggests that maybe Session's concern is over a capability notice that might have affected their efforts to make pseudonymous accounts.
>> MALLORY KNODEL: No. Just to interrupt you, Stewart, because I know where you're going with this question. It's because the law applies to companies in Australia, which have to comply. As Session leaves the jurisdiction, they're no longer subject to this regulation. Also, I'll note that as far as I can tell, staff members have had to relocate physically because they're worried about the government.
>> STEWART BAKER: Obviously because they're worried about the jurisdiction. Okay.
So this is the question of the Australians having limited their authority to people who are located in their jurisdiction as opposed to people who sell services in their jurisdiction. Because it wouldn't be hard to extend jurisdiction to people who are offering those services.
>> MALLORY KNODEL: I think it's hard. I think it's definitely hard. I think that's what the UK wound up doing eventually, but TOLA was some years ago. I wanted to also mention that I think it's interesting we're basically just talking about the Five Eyes countries, because there's obviously a concerted and coordinated effort to work on legislation as a bloc. So you had Australia sort of doing the impossible, getting any kind of back door law on the books first, taking that hit, but with kind of a measured approach, so it wasn't like every end to end encryption app on the planet, but just in their jurisdiction. Now the UK is coming in later, putting back door legislation on the books, with its limited powers. So you see, all these countries; Canada has also mentioned plans to do something, and it follows from there. This is certainly a coordinated effort. I think Australia doing something more measured was a tactic to get something that people could live with. They probably would have rejected something a little bit stronger.
>> STEWART BAKER: Yeah. You're absolutely right. It feels as though people, the attackers are circling, and taking occasional nips from their target without actually launching an attack. Why don't we move just to focus on what's happening in Europe as well.
So we have a complete picture.
Katie, can you give us a sense of how or where the debate is in Brussels?
>> KATIE NOYES: Yeah. First of all, let me just, you know, extend my gratitude. I wish you all were here in the room. We have an awesome audience of folks here. You can't see half of them. But you're all missed here. We wish you were here. But let me just kind of hit one point before I get there, if you're okay with it, Stewart, which is that the whole goal of bringing this to the Internet Governance Forum was because we're Multistakeholder, we're representative of that on this panel, and I'm really grateful for that. I will sort of now bring this home, which is: that's what's going to solve this problem. Candidly, I don't think it's going to be governments, certainly not alone. It's not going to be the Private Sector and the companies alone. It's not going to be just Civil Society. It's also going to be people at their kitchen tables. I absolutely want to bring this home to people in the room who are very interested in policy, but I think we all want to know what this means in tangible terms. Going back to Brussels for a minute and how this even affects the FBI: these are global companies with global capabilities. We have global public safety challenges. There are global terror organisations and global fentanyl traffickers, and global trafficking and child sexual abuse networks that work across the globe. I want to highlight that first because it's not a European problem or Asian problem or African problem; it's an all of us problem. We're all trying really hard to learn from each other. I think the idea of trying to harness best practices is key.
On this, the European Commission actually just put out a report, so it's very timely, in November. They had commissioned a High Level Group, and the group was specifically to look at, I want to make sure I get the title right because it's key, access to data for effective law enforcement.
If you get a chance to read the report, I highly recommend it, because I think it goes to some of the things we've been talking about. I guess I will take a slightly different approach and say I think things are very different, and I think they're very different around this conversation, because I was sitting in Berlin at the Internet Governance Forum, I think a year before COVID, and the conversation was very different. It was, I'd say, very strict on the privacy side. There seemed to be, and please don't take this as a pejorative comment, a lot of trust in technology companies, in that they were solving Civil Society's problems, and that sort of idea that public safety might come in and sort of mess that up or be a chilling effect. I have found over the last two days, as I've been sitting in on multiple panels, it is a wildly different conversation. And the conversation is coming down to responsibility. What roles and responsibilities do each of us have? Again, I want to go right into the face of this narrative that somehow safety, security, and privacy are diametrically opposed. I think it's a false narrative. If you go back to the UN's rights, there's a right to safety, a right to security, a right to justice, a right to privacy. There is an expectation that this all coexists, thus the name of the panel. I think when you read what they're doing in the European Commission, it really does look to that, and it's something we're also trying to emulate with a partnership we newly have with UC Berkeley, where we had a summit to have the same conversations, with the major themes around responsibility.
So it talks to, what are the expectations of government in this realm? Is there an idea around incentivization? It's putting a more active role and a more active responsibility on governments as well to meet industry, to meet Civil Society, to meet the needs.
Because again, we do need to achieve that.
And then take it one step further. Again, it is not up to government, and we all understand that, to prescribe a technical solution. That's not what we're trying to do. But we do recognise it probably does take some government incentivization, some communication of priorities and needs, and I think there's a lot of space there to achieve that.
Again, going back to that report, it actually details some of these approaches, fresh off the presses from November.
>> STEWART BAKER: Katie, I understand all of this. And there's no doubt that the European Commission has proposed legislation that would clearly incentivize better access and more law enforcement insight into what's going on, on some of these services. But that proposal has really been stuck for a few years now due to strong opposition from some members of the European Union. Do you think that's changing?
>> KATIE NOYES: Yeah, you know, I can't speak to how the process works there or take any bets on that, Stew. Let me get to some of what we're hearing. We heard it out of the G7, by the way. I don't know if folks are aware, but the G7, a group there, actually commissioned a Working Group, and it ran last year, and they voted to renew it for the upcoming year as well with the Canadian presidency. I think it's key because I think the landscape actually has changed. I'll give you two areas where I think it's the combination of these two intersecting issues. One is the threat landscape. We have solid data, and it's solid data not coming from law enforcement this time. It's coming from outside nongovernment organisations. Many are familiar with the National Center for Missing & Exploited Children, NCMEC; my colleagues are around the room. It's a U.S. based nonprofit that really takes tips and leads from the electronic service providers.
Last year saw the highest number of tips ever received from the electronic service providers, like Meta for Facebook and Instagram, if you're wondering what an ESP is: it was 36 million reports.
Then, and NCMEC is very public about this, they take the tips and leads and provide them to law enforcement all over the globe. We in the FBI get a number of those and start an investigation, looking into an assessment of whether there's a threat or not. So the threat environment is booming. Why? Because the technology industry is booming. Sitting around the table years ago as a teenager, we weren't talking about social media and gaming platforms where you are connecting to others. But that tech boom sort of comes with a little bit of a cost or a tax, which is that the tech industry is moving at such a fast clip. This is where I think some of the difference is. I think the multistakeholder environment, particularly Civil Society as I've heard here, but I also heard from a few company representatives, they're taking a slight pause to say, okay, this is a good one to talk about. It goes to sort of what I think Mallory was getting at as well, which is the focus when something is deployed. We know Meta, Apple, all these companies are going back now and instituting the ability for reporting, so somebody can report if something has been harmful to them or potential criminal activity. They've all now gone back and created reporting mechanisms. That's very new. A lot of it was announced at the Senate Judiciary Committee in January. So I think this landscape is changing, where more questions are being asked by legislators, and again I'm using a U.S. example, because I apologise I haven't followed the process in Europe as closely, although we're seeing a lot more reporting and I think a real push for some of these changes to bring industry and governments together to solve these challenges. Again, just a quick summary: I think the threat environment has changed. We see digital evidence in almost every single one of our cases. If you asked me that question even five or six years ago I would have given you a very different figure. And then we're seeing the ubiquitousness of tech deployments, and now we're seeing that ubiquitousness of adding end to end encryption that can't be pierced. And by default, by the way. So a user doesn't get to decide for themselves anymore. Now the company is deciding.
Again, let me just make one last point, and I'll turn it back over to you: I think that's the key point here. Maybe what we're seeing is that this issue is really finally going to a Multistakeholder conversation. I think with very prominent cases, like sexual extortion actually ending up with 17 year olds and teenagers in the U.S. taking their own lives, people want to have this conversation because they're seeing it in their neighborhoods and at their kitchen tables. Back to you.
>> STEWART BAKER: Mallory, do you see this the same way? That despite, or maybe because of, the fact that legislation has not really gone to the ultimate point of mandating lawful access, there is a better opportunity for more voluntary cooperation?
>> MALLORY KNODEL: Yeah. So I think from my perspective, again, we've been having the same public debate for a while. It's been a couple of years now that I've been on a stage with NCMEC and the FBI talking about the same thing. It was IGF, but the U.S. IGF. The conversation is the same. The externalities have changed. My employer, the Center for Democracy & Technology, put out a report around that time suggesting that these features in end to end encrypted apps, and metadata, are a good way forward. Civil Society suggests it. Companies do it. Companies have now expanded very significantly trust and safety as a whole area of work that all of them are concerned about, because as we know, this problem of child safety exists far beyond the boundaries of end to end encryption. It is all over social media in the clear and it's still a problem.
So working to clean that up has been a huge effort and probably there's a lot of explanations for why those numbers have been changing. We don't know what those numbers mean. It doesn't necessarily mean that there's more threat or risk. It may mean there's a lot more reporting and there's a lot more awareness of it. We don't even know how much of that is new versus old content, et cetera.
So I think that, yeah, there are a lot of really interesting solutions that are cropping up. I think the tragedy is that a lot of us are still stuck in this back door conversation that's not really going anywhere, and has not for a long time, and it would be great to truly engage in solutions. But I think that requires, which is what Civil Society and industry have done, a sort of acceptance of end to end encryption as a feature that users all over the world have required and wanted and requested and begged for, because they want that protection.
We didn't see such a demand for end to end encryption until it was revealed that the Five Eyes countries were spying on everyone in 2013, so there's that part of the story.
So encryption protects kids and businesses, et cetera, et cetera, so we can really build some cool stuff on top of it and try to fix this issue, so I'd love to see us get into that space.
Then I'll add one more thing: we've also seen externally that back doors don't work, too. An example of that happened very recently. Some communications have built in lawful access back doors, I'm talking mostly about the network layer, so this is where telecommunications services have encryption ostensibly, but it's been back doored by law, the law in the U.S. is called CALEA, and that was exploited just like Civil Society and security professionals were saying it would be, in the Salt Typhoon attack. And so we've seen major successes in how to do child safety on top of end to end encryption, and we've seen major fails where we've had insecure communications and how that's negatively affected businesses and the security of all people using those.
>> STEWART BAKER: Mallory, let me ask Katie if she wants to address that, because I'm not sure everybody agrees that that's what happened with the Salt Typhoon hacks.
>> KATIE NOYES: Yeah, we certainly don't agree with that. Actually, the media quite frankly got that one a little bit wrong. Can you all hear me? Okay. I can't hear it on my own end. But we have gone out publicly, by the way, to try to dispel this myth and correct the record. What we're actually finding, because we are investigating, so Mallory, if you have direct access to the information I certainly would like to talk to you more, but from the investigation what we're actually learning, again, not to say that when we get through all of the investigations, because there are multiple here, we won't find there was some vector or something, but I can tell you right now the investigation has not yielded that. The CALEA lawful intercept capability was not, it appears, the target. Actually, what we've seen in two specific targets is that the perpetrator of Salt Typhoon, the Chinese Salt Typhoon group, actually had access to the network well before they actually accessed the CALEA capability. So that tells us it wasn't the vector and it wasn't the main target. We have put this out very publicly, so if anyone is interested we do have published awareness on our website, FBI.gov, you can find it, but we certainly do not want that to be used or leveraged in this debate when it is erroneous. Again, that does not mean there shouldn't be strong security. It doesn't mean there shouldn't even be encryption. We're very supportive of encryption technologies. We just want them to be managed in a way, much like in telecommunications. Again, I'm with everyone who says there should be stronger security, and stronger security even around CALEA. Absolutely. We join those calls.
But I certainly want to make sure the record reflects accuracy here: that does not appear to be the target or the vector, but we did see access. So that is the actual truth of it.
>> MALLORY KNODEL: Yeah, target versus vector versus leveraged, the fact that widespread communications have this capability I think are maybe three different things but also significant.
>> KATIE NOYES: I think it also matters for the general population, I would say as a citizen myself: what else did they have? Most people, law abiding citizens, you don't want any of the security to change for that. Well, those law abiding citizens wouldn't have been in that CALEA data anyway. This is where we have some sort of predication or authorized access. Again, I'm not arguing it's not a terrible security problem. Don't misunderstand me. It's a terrible security problem. And the security should be enhanced. Again, encryption is one of those measures, but also multifactor authentication, strong passwords. All of that was a factor in what we're seeing here. I don't think isolating this to this one issue makes very much sense.
>> MALLORY KNODEL: So, I was going to say, I think there might be a lot of elements to it, but we are talking about encryption right now. So of course we're going to talk about the part that affects encryption. I think that's totally fair game.
>> STEWART BAKER: Let me ask you about encryption. Katie, one thing the FBI suggested people do, if they're concerned about the Salt Typhoon hacks, which are certainly a major security threat, is that they use strong encryption, and I assume end to end encryption. And a lot of people in Civil Society have said, well, there you go; even the FBI thinks you ought to have strong encryption. And isn't there some inconsistency between wanting to have lawful access and wanting people to use strong encryption to protect against very real threats?
>> KATIE NOYES: Um, so absolutely not. We, again, are back to: we think that we can achieve all of these things. Will there be trade offs to some degree? Certainly. Will there maybe be differences between the way we approach the entirety of a population, of a user base, against perhaps looking at, you know, a scaled solution only for individuals where we actually have authorization and our authorities warrant some type of access to content data? We're very open to the conversation. But yes, please let me say for the record, the FBI supports encryption. This is the part of the debate that I think is also not new, and I'm very surprised we continue to have to answer this question, but happy to do it again. We're very supportive of that, particularly from a cybersecurity perspective, and the FBI is a user of encryption, but we don't willfully blind ourselves to all the activities, because there is a responsibility. Again, we are all responsible. I think this is where the debate, I do feel, has changed. Again, I go back to, I understand, I feel like we're here today to talk more of an action plan. At least that's what I'm here to do. I think, from the FBI's point of view in this debate today, I'm hoping we'll get to that conversation of something that could be achievable. Because I agree with the UN: we've got to achieve all four of those. I think the discussion now needs to stop being should we, and needs to be, not that we accept we can't and we just stop trying, but that we're the best innovators in the world. We all represent countries and institutions that are the best innovators in the world. We didn't say, oh, cancer is a hard problem so don't do anything. No.
>> STEWART BAKER: Gabe is a technical expert, a cryptographer, and there have been some interesting suggestions on how to square, or at least accommodate, both security and lawful access, including the idea of scanning for objectionable material on the phone of the sender before it gets sent, so none of the private communications are compromised unless there's a very, very good reason to believe a particular communication has objectionable material in it. Gabriel, if you could talk a little bit about both that proposal, which came up in the EU debate, and any other technical approaches that you think are promising to get us out of what's a pretty old debate?
>> GABRIEL KAPTCHUK: Yeah. Thanks. It's an interesting place to be, in that some really core parts of the technical puzzle here have not meaningfully changed, while at the same time we have capabilities that are different than before. This allows processing to happen at end points. To pick up on something Mallory said earlier, we've seen a lot of changes happening on what's available to users on their end points. This is not shifting whether there is a back door or not in the actual encryption layer, but rather saying, can we put processing on a client's device that locally processes and gives them more information? One thing that came out a couple of years ago, proposed by Apple, is that they would blur certain images, essentially on youth accounts, and if the youth wanted to look at the actual image it would notify an adult. Two things happened there. One showcased the ability to do powerful stuff on the end point, and one showed the kind of brittleness of this approach. Right? On one hand we now have the ability to actually process images on somebody's phone and say, maybe we should blur this thing and maybe not just show it to people no matter what.
I think there's a fair amount of consensus that this is not a radical idea. If I blurred every image I got, or the ones locally determined to be not something great, that would not be that problematic. Where there was a lot of pushback from the community was the fact that there was then an automatic trigger that pushed something to another device, in other words, breaking out of that model of encryption, and it went somewhere else. That part of the proposal was found to be most objectionable. So now we have ways of thinking about this. If we can identify on the device itself that this is content we're concerned about, we can give users, you could say, push-button or maybe usable types of ways to control the information that they see or report the information they see. That's something we really know how to do. When it comes to active scanning that then pushes information off the device itself, this is where things start to get a lot more complicated and a lot more controversial and difficult to do.
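To make the distinction concrete, here is a rough illustrative sketch in Python of the kind of purely local flow Gabriel describes: a hypothetical on-device classifier scores an incoming image and the client blurs it before display, with nothing leaving the device. The sensitivity_score function below is a stand-in assumption, not any vendor's actual API or model.

```python
from PIL import Image, ImageFilter

def sensitivity_score(image: Image.Image) -> float:
    """Stand-in for an on-device classifier (e.g. a small model bundled with
    the messaging app). Hypothetical: returns 0.0 here; a real client would
    run a local model and return a risk score between 0 and 1."""
    return 0.0

def render_incoming_image(path: str, threshold: float = 0.8) -> Image.Image:
    """Blur a likely-sensitive image before it is shown. The user can still
    choose to reveal the original; no classification result leaves the device."""
    image = Image.open(path)
    if sensitivity_score(image) >= threshold:
        return image.filter(ImageFilter.GaussianBlur(radius=16))
    return image
```

The contested design question Gabriel raises is what happens next: keeping the decision on the device preserves the encryption model, while automatically forwarding it to a parent, server, or authority does not.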
In particular, you kind of brought up that in the EU we have seen a push, a concerted push, to move away from kind of an old paradigm, particularly around child abuse material, of flagging the known instances of child abuse material: this is an image that matches another image that NCMEC has, and with high confidence we can say this image is a problem image, and kind of with confidence, I'm going to return to that in a moment, but with some degree of confidence, that there's a match there. And there's been a push to shift away from that paradigm and towards detecting new images or new content, or the solicitation of images or solicitation of content. This is a much trickier problem. As a technologist, I don't know how to write down a programme on the client's side that with 100% certainty can actually differentiate between this is a problem conversation and this is not a problem conversation. The ramification of getting that wrong is that people's information will get pushed to a server and get kind of opened up. That's a really high risk environment to write that kind of programme in. That's not a low risk kind of choice, and it's not the kind of thing you want to get wrong.
and this is kind of where it's important to start making technical differentiations between the types of access that are being requested. If it's detecting new content, that's really, really difficult. And I don't think we have the technical capabilities to actually meaningfully
>> STEWART BAKER: What about detecting old content that's been tweaked in order to evade this?
>> GABRIEL KAPTCHUK: This is an older paradigm. There are more things to pull apart here. We have seen some work in what's called perceptual hashing. You take two images and run them through an algorithmic function and determine whether they're a semantic match. In one way it seems a promising way forward: to match two images with minor edits made to them but that are still fundamentally the same. Unfortunately, the reality is that our modern perceptual matching does not live up to this. Apple also released NeuralHash, a particular hash function that was supposed to do this, and it took people about a week and a half to reverse engineer it and start to find ridiculous "collisions": two images that match according to the function but are actually wildly different from one another. It is a really hard computer vision problem to determine whether two images are the same. You can think of this going out of the context of child abuse material and thinking just back to the way the U.S. thinks about pornography. Right? I can't define it, but I know it when I see it. That kind of says that people are the ones who are able to determine if content is a match, and there are edge cases where they won't agree. To get a computer doing that when humans actually have a hard time doing it, that's a problem. That means you will inevitably build functions that do scanning of some variety and they'll be overbroad, with obvious fail cases or really obvious ways to abuse them. And I can kind of manufacture images that look, according to this hash function, like they are child abuse, and send them to somebody else, when in fact they're not child abuse, because I've just exploited relatively easy ways of modifying images so that they look the same according to the algorithm but not to our eyes.
So that's kind of where we are today.
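For readers unfamiliar with perceptual hashing, the sketch below shows one of the simplest variants, an "average hash", in Python using Pillow. It is illustrative only, not NeuralHash or any production matching system: each image is reduced to a 64-bit fingerprint, and two images are treated as a match when the fingerprints differ in only a few bits.

```python
from PIL import Image

def average_hash(path: str, hash_size: int = 8) -> int:
    """Reduce an image to a 64-bit perceptual fingerprint (an "average hash")."""
    # Downscale to an 8x8 grayscale thumbnail; fine detail is deliberately lost.
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        # Each bit records whether this pixel is brighter than the mean.
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def is_match(hash_a: int, hash_b: int, max_distance: int = 5) -> bool:
    """Declare a match when the fingerprints differ in at most a few bits."""
    return bin(hash_a ^ hash_b).count("1") <= max_distance
```

Mild re-encoding or resizing usually leaves the fingerprint intact, which is the appeal; the failure mode Gabriel describes is that an adversary can craft visually unrelated images whose fingerprints collide, or perturb a flagged image until it no longer matches.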
And there is kind of a push for scanning on end points. In my opinion there are ways in which this could potentially empower users to have an easier time moderating the content they see or making better decisions for themselves. At the point that data then gets pushed off the device, that starts to open up a different type of rights impact assessment that needs to happen. And we have to have a different kind of confidence level in the technology than we have today.
>> STEWART BAKER: Let me ask you, from a technical point of view, we've heard a lot of talk about how valuable it would be to have more conversations and to find common ground, but I wonder, with Signal having long offered end to end encryption by default, Apple and WhatsApp having done the same, and now Facebook adopting the technology for its other services, isn't this debate really over as a practical matter? The big companies that offer these services have all moved to default end to end encryption and they're showing no signs of saying, well, maybe we should look for common ground here. They've done it. And maybe I'm misunderstanding the impact in the market. But what's the incentive to look for some mechanism to satisfy child safety and law enforcement, given what has happened in the market?
>> GABRIEL KAPTCHUK: Yeah, I guess if the conversation is over we can all go home and go on with our day. I don't think it's quite that simple. I think what we're seeing, with the deployment of end to end encryption technologies on many, many communication platforms, is a very clear signal that this is what users want. If nothing else, it's like we're trying to fill a market need, or market want, or something like that. And importantly, I want to pick up on a thread that I think popped up a couple of times in what Mallory and Dan and Katie all said, this question of by-defaultness and what is the value or the risks around by-defaultness. From a technical perspective, I like to think that by-defaultness is the only reasonable way forward, because you want end to end encryption to protect people who are not going out of their way to evade surveillance of any kind. Those are the people you want to protect. If you don't, the system is not getting you very much. The ability to build encryption platforms is something we've seen criminals do for quite some time, and there's a lot of conversation around the ways that international law enforcement have tried to kind of approach those systems and whatever, but putting those aside, we know people are trying to evade surveillance, and they will build these services and use encryption. You want end to end encryption by default to make sure it's you, your spouse, your kids who are protected, from somebody within a tech company stalking them. We've seen people do this before, where people elevate the powers they have and abuse them within a tech company, or a company is breached by a foreign actor who wasn't supposed to have access, whatever it is. So we want encryption by default in order to protect the people you're trying to protect. That's an important part of the puzzle here.
In terms of whether we're done with the conversation simply because it's being deployed everywhere, that's like giving up on trust and safety. That doesn't make sense. Trust and safety is obviously going to be part of tech platforms' responsibilities going forward. The question is, what tools will they use and what capabilities will they build into their systems to ensure that users have the ability to protect themselves? We get into tricky waters in terms of what is the correct thing to do there. What I've advocated for, what I've been saying, is that this user ability to control the information they're seeing, and the ability to report and stuff like that, is an important mechanism, as we've seen over the past couple of years.
One more piece of the puzzle that maybe moves us to a different point in the conversation: one thing that I think is new is trying to understand whether there is any way beyond kind of an all or nothing capability. This is something I'm technically interested in and I think is an important part of the conversation. In particular, right, lawful access or back doors as a paradigm is fundamentally all or nothing as a technological trade off. Either there's a key somewhere that lets everybody into the communications, and there's a bunch of protections, maybe those are social protections, about who gets that key; or there is no key. That key doesn't exist and therefore could not be materialized. I want to offer that this pushes us, from a regulatory perspective, towards a worst case scenario. If we mandate there must be a back door, that means there's a key now, and that key is a high value target, and somebody is going to go out and get it. Whether Salt Typhoon is evidence of one thing or another, it's evidence of a paradigm in which there's a willingness by international governments to put a lot of resources into going after these capabilities. The minute there's a key, that key will be a high value target.
One thing I think that's interesting in this conversation is wondering if there's a way to get a key that only works for certain types of content. That's something in the cryptographic world that may or may not exist. There's kind of ongoing research. But as a paradigm, I think it is a different part of the conversation which starts to shift us away from, you know, we have to accept that there's never going to be any back door, or we have to accept there is going to be a back door, and to say, what is this back door for? If we want it, and we want to just talk about kids, can we talk about a specific limited back door that doesn't then just make everybody else vulnerable at the same time, because the mere key's existence is a vulnerability? It's a difficult paradigm to work with. It's a hard design space. We don't know much about it. But I think it's one potential way we can start thinking about avoiding this worst case scenario of keys actually created and software actually made that's really, really vulnerable.
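As a toy illustration of the all or nothing point, the sketch below (Python, using the cryptography library, with symmetric keys for simplicity) wraps each per-message key both for the recipient and for a single escrow key. It is not a description of any proposed or deployed system; it only shows why the mere existence of that one key lets whoever holds it, legitimately or not, open every message regardless of content.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# One long-lived escrow key for the whole service: the "key that exists".
ESCROW_KEY = AESGCM.generate_key(bit_length=256)

def send_message(plaintext: bytes, recipient_key: bytes) -> dict:
    """Encrypt a message for the recipient, but also wrap the per-message key
    under the escrow key so the escrow holder can open it later."""
    msg_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    r_nonce, e_nonce = os.urandom(12), os.urandom(12)
    return {
        "nonce": nonce,
        "ciphertext": AESGCM(msg_key).encrypt(nonce, plaintext, None),
        "wrapped_for_recipient": (r_nonce, AESGCM(recipient_key).encrypt(r_nonce, msg_key, None)),
        "wrapped_for_escrow": (e_nonce, AESGCM(ESCROW_KEY).encrypt(e_nonce, msg_key, None)),
    }

def escrow_decrypt(msg: dict) -> bytes:
    """Whoever holds ESCROW_KEY can read any message, on any topic, from any
    user: the all or nothing property discussed above."""
    e_nonce, wrapped = msg["wrapped_for_escrow"]
    msg_key = AESGCM(ESCROW_KEY).decrypt(e_nonce, wrapped, None)
    return AESGCM(msg_key).decrypt(msg["nonce"], msg["ciphertext"], None)
```

A key that only opened certain categories of content, the possibility Gabriel raises, would have to bind decryption to some verifiable property of the plaintext, which is exactly the part that remains open research.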
>> STEWART BAKER: Okay. That's the first suggestion I've heard that there might be a way out of the all or nothing aspect of this debate.
But let me ask Katie and Mallory to weigh in on whether a content based lawful access mechanism is available. I suspect Katie will say yes, and that it's a warrant. So, having previewed what I suspect Katie's argument is, let me start with Mallory.
>> MALLORY KNODEL: Thanks. No, it's okay, I'll be really quick. I also wanted to connect what Gabriel was just describing to what Katie said earlier, because I think this idea that what I'm putting forward, where we sort of accept the constraints of end to end encryption, is sort of giving up, suggests that the goal is the back door. Right? And I think that for technologists like Gabriel and myself and other public interest technologists in Civil Society and academia and industry, the problem space, the requirements, are: we need to keep people safe, and that includes kids, and we need to make sure our communications are secure. And that is a wider frame. You list the requirements and then you build the thing that meets the requirements. Maybe that's a back door, but maybe it's a whole lot of other things. So when we say we're giving up on back doors, I suspect that that is true, that that's been the goal all along. The UK Safety Tech Challenge a few years ago was the same. They said it was about finding solutions on child safety. They created a brief for it that said it needs to be about scanning images in end to end encryption. It was a presupposed goal, and that narrows the field in terms of what innovation you get, so you get varying degrees of success, and the final result was not very good because perceptual hashing is really hard.
So these are really interesting ideas that Gabriel is raising. I have more of a technical background in Internet networking and encryption. I have less of a technical background in AI, but I've had to learn it in the context of this work, because it's similar to, yeah, a paper that's coming out very soon that I'm working on, because there's a lot of imagination around what you can do with this data. I think some of it could be very interesting and fun. Right? Like, let's think about how these secure online platforms are being used a lot more like social media platforms, et cetera. That's great. That's what people want. That's where they feel safe expressing themselves, increasingly, in a world that seems kind of scary. And yet, these platforms will still have some of these features. Can you do cool things with content that allow users to protect themselves and allow platforms to make sure the experience is enjoyable? That's another incentive. Nobody wants to use a platform that has all kinds of unwanted or gross content on it. Then, yes, we get to more of a solution space. So let's continue to live in that innovation space. I think that's sort of a good idea.
>> STEWART BAKER: Katie.
>> KATIE NOYES: I couldn't agree more. That's what we've been trying to do more of: get to the table and discuss. I do think, I like it that this panel has gone this way. I think we've all moved off of the absolutist points of view, which is, look, there is going to be compromise, and a lot of innovation needed, on how this all can actually be achieved and coexist, and for our part we're very willing to come to the table. This sort of giving up thing, I'm keen on it, because I think of the idea, too, of thinking it through. I'll just give a case example, because we haven't talked through a case example and I think it's worthy for these types of conversations. I'm going to talk about a quick success, but this is what we're afraid of. Take sexual extortion. I think many people are suffering this challenge, which is why I picked this case, because it's universal. It's in many different countries. And by the way, the actual subjects were Nigerian. So if you haven't followed this case, we have a case out of our Detroit field office, a young gentleman named Jordan DeMay. Again, criminals will pierce through all these preventative measures; they're wonderful, and please let me give a plus one to all companies doing this great work, but here is the challenge. Right? The hacking thing, you're right: if things are available and a criminal thinks they can benefit from it, they're going to target it. They targeted dormant accounts sold on the dark web, dormant Instagram accounts, hijacked one of them, and enticed an individual who thought he was talking to a 16 year old girl, and pretty soon images were shared and that's when the extortion starts. Young Jordan paid the first ransom but couldn't pay the second.
We on the panel understand this, but I'm not sure everyone follows this here at IGF the way we are. Here is where content matters. If the only information we had was metadata showing that Jordan's Instagram was talking to this fake Instagram account, there's no real prosecution there. We can see it, and unfortunately Jordan took his own life. His mother has gone out very publicly, and I only use this because she's gone out publicly, and she told law enforcement she never would have known why her son committed suicide if the FBI was not able to show the content; it showed the communication, and it showed this subject goading Jordan to take his own life. That added to the sentencing and the prosecution. This is what is at stake. We talk about this very academically, and I do it, too, so I'm castigating my own self. I think people need to understand the design choices and the way they're affecting them. Right? I think it's key.
I also resist this idea of a back door. I can't stand the definition. As I try to look, what's the universal definition of a back door? Five or six years ago it was the FBI and law enforcement having direct back door access to communications. That is not, and I don't want anyone to think that is, what law enforcement is asking for. We're asking for the technical assistance side.
The other thing I kind of resist a little bit on that is there's this idea that you're somehow in your own home, and it's a back door to your own home, but you're not in your own home. You're in a provider's home. I'm wise to the fact that all these are not created equal, but there are all these access points, and all those access points could be vulnerable. And again, vulnerable for different reasons. I see Gabe laughing because we had this conversation. I'm not saying all these accesses are equal, but they're there, and they're there for a reason, and not a bad reason. They need to make sure they're patching any vulnerability they actually find, or one, by the way, that we might find from seeing other victims, sharing that vulnerability that was a tactic or procedure used by a criminal here for further criminal activity.
But my answer here is, yes, there are absolutely solutions. We are willing. We know there has to be a negotiation. We know it's not going to be absolute access. By the way, there have been really interesting discussions around this, and I'll throw them out there because we had great conversations with UC Berkeley, with an academic institution, thinking about homomorphic encryption and identifying additional sorts of categorizations of the data and what that offers, but also this idea that someone raised with us about a prospective data in motion solution, where you're not affecting all the users but perhaps we're affecting a specific subject's design or architecture. I raise it because it's been raised publicly. I think, Gabe, it was in your article, and I think you even said abuse proof lawful access, where a prospective solution, meaning today forward, and orienting that way, would also offer additional oversight. And we agree to that as well. So anyway, a resounding yes from us, Stew. We stand at the ready to start working on an action plan, to get together and start talking, taking our law enforcement operational needs and what we're seeing from our cases and bringing it into the conversation with folks like this. Again, one quick hit for the Multistakeholder approach: this is the best way we solve these problems. Thanks.
>> STEWART BAKER: Gabriel, do you think there is abuse resistant encryption?
>> GABRIEL KAPTCHUK: It's hard to say, when you've written a paper called abuse resistant law enforcement access systems, that they don't exist. It's difficult to quite put that back in the bag. I think the work we did in that paper was to try to understand this design space more, and to try to think about, if we are in a world where the folks using TOLA start issuing technical capability notices left and right and suddenly there are keys everywhere, that's the worst case scenario: is it even possible to build a system that meets the technical requirements without being a total disaster? That's what we were trying to ask, and that's what we call abuse resistance. That's not a global notion of abuse resistance. We were careful to say we need definitions on the ground so the Technical Community can go back and answer a technical question instead of saying, aha! it's abuse resistant. So in that paper we were trying to say, if there is some way where you have warrants activating back doors in some way, okay, is there a way, if that key gets stolen, that we would at least know or be able to tell? Would we be able to say something terrible has happened? A foreign government has taken this key and is rampantly using it to decrypt this stuff? If we're in that world, can we detect it and say, we need to rekey the system right now, something very bad is happening? These are notions that haven't been part of the conversation, and we risk really, really bad solutions if we don't explore this space.
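One way to picture the "detect and rekey" idea is a hash-chained audit log in which every escrowed decryption must append a record committing to the one before it. The toy Python sketch below is a loose simplification for intuition, not the construction from the paper Gabriel mentions: its only point is that unlogged or rewritten uses of the capability become detectable to anyone who audits the chain, which could then trigger a rekey.

```python
import hashlib
import json
import time

class DecryptionAuditLog:
    """Toy append-only, hash-chained log of lawful-access decryptions."""

    def __init__(self):
        self.entries = []
        self.head = "0" * 64  # genesis value

    def record(self, warrant_id: str, target: str) -> None:
        """Append a record of one use of the access capability."""
        entry = {
            "prev": self.head,  # commit to the previous entry
            "warrant_id": warrant_id,
            "target": target,
            "time": time.time(),
        }
        self.head = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)

    def verify(self) -> bool:
        """An auditor recomputes the chain; any dropped or altered entry breaks it."""
        head = "0" * 64
        for entry in self.entries:
            if entry["prev"] != head:
                return False
            head = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        return True
```

In a real design the chain head would be published externally (for example to a transparency log) so the key holder cannot simply keep a private, edited copy.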
>> STEWART BAKER: Let me push on a point that's always bothered me about the argument that these keys will be everywhere, they'll be compromised, all of the communications are going to be exposed, and that that's a risk we can't take. It does seem to me that everybody who has software on our phones, or on our computers, has the ability to gain access to that computer or that phone and to compromise the security of my communications on my phone. I am trusting every single provider of every single app that is on my phone. And obviously, that's a worry. But we expect the manufacturer to undertake the security measures to prevent that from becoming the kind of disaster we've been talking about here.
Why doesn't that same approach work here: saying to the company that provides the communication service, you also have to have a mechanism for providing access, and we expect you to maintain it every bit as securely as you maintain your patch and update system? Why is that not the beginning of an approach?
>> GABRIEL KAPTCHUK: Yeah, so let's talk about this. Let's start to split these into technical categories, because there are multiple things happening here. The first is whether or not I need to trust, say, Duolingo, whether I need to trust every provider on my phone. It turns out Apple has done a great job sandboxing these things.
>> STEWART BAKER: They have made sure we have to trust Apple and nobody else.
>> GABRIEL KAPTCHUK: Great. Let's talk about Apple for a moment. There are software signing keys that are part of the ecosystem today exactly for the reason you mentioned. One important part of the puzzle is thinking about the hotness of these keys: how much access does this key need? How live is it? For a software signing key, that thing isn't living on a computer somebody has access to. It's living inside of a TPM, offline, sitting somewhere. If you want to sign an update you have to get up and walk over and do the thing. That reduces the exposure of that key, and you're not doing it every day. I don't know how many updates there are, or how often I should be updating my phone, but it's not that frequent from Apple. It's a very slow and methodical capability, audited by a lot of people, with a lot of eyes on it. This is a very different world from getting access to people's messages. Do you think this key will only be asked for once in a while? No. It will be hundreds of thousands of requests from countries around the globe, with very short turnaround times: we need this content in the next five minutes because there's a kid and we need to find them. We have seen that kind of capability exploited in practice. Verizon handed over data to somebody who impersonated a member of law enforcement, because they said, hey, we need this data right now, and then handed over the data first and did the due process later, and the person was using a compromised ("owned") account of some kind. That happened in 2023. The hotness of these keys makes a tremendous amount of difference, because the number of times you have to access a key shifts the dynamics around it. That's one piece of the conversation, and there's more to unpack, but I'll stop there.
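To illustrate the "cold key" workflow being contrasted here, a minimal sketch, assuming the third-party Python cryptography package, of a signing key that stays offline while only the public key and signatures ever touch the online update path. The values are placeholders, not Apple's actual process.

    # Sketch: an update-signing key can stay offline and be used rarely,
    # because only the signature and the public key need to be online.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # Offline ceremony (rare, audited, many eyes on it).
    signing_key = Ed25519PrivateKey.generate()     # stand-in for a key in an HSM/TPM
    update_blob = b"firmware-update-v42"
    signature = signing_key.sign(update_blob)      # done a handful of times a year
    public_key = signing_key.public_key()          # only this goes to the online side

    # Online, on every user's device: no secret key involved at all.
    try:
        public_key.verify(signature, update_blob)
        print("update signature valid, safe to install")
    except InvalidSignature:
        print("reject update: bad signature")

A lawful-access decryption key cannot follow this pattern: it would have to answer a constant stream of time-sensitive requests, so it can never go offline in the same way.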
>> STEWART BAKER: I want to make sure I've left enough time, and I'll ask Mia to keep me honest here. Should we be moving to questions from the audience? If we should, Mia, I'll ask you to begin the process of assembling them.
>> MIA McALLISTER: Yeah. We have about 15 minutes left in the session, so let's take questions from the audience. Let's start in the room and then we'll pivot online; I know there are already some in the chat.
>> AUDIENCE: Is this working? Hi, Andrew speaking. I'm a trustee of the Internet Watch Foundation. I should firstly say there is not agreement in Civil Society on this issue. There are lots of different points of view, as is true of all the different parts of the Multistakeholder community. There's a lot of frustration for some of us that the weaponization of privacy is being used to override the rights of children and other vulnerable groups, completely forgetting that privacy is a qualified right and that all of the human rights of children are being transgressed, up to and including their right to life, as we heard just now. So I think we just need a reality check on that.
Also, we shouldn't use encryption interchangeably with security. They're not the same; they're quite different. When we start to encrypt indicators of compromise and other metadata, we weaken security, we trash privacy anyway, and it's generally bad practice. And we haven't talked about the scale of the problem. Just to give some non-abstract sense of this, we're looking at about 150 million victims of child sexual abuse per annum, that's three every second, and this is something which has been greatly magnified by the Internet. This is a tech sector problem, not a societal problem; it's on us to fix it. End-to-end encrypted messaging apps are widely used to find and share CSAM, and there's an enormously large sample of data to back that up. We don't need to back-door them. Client-side scanning would immediately stop the sharing of known CSAM; it has no impact on privacy if it's matching known CSAM images, and it certainly doesn't break encryption either. And simple things like age estimation or verification would at least keep adults out of child spaces and vice versa. There are some easy steps we could take here with known technology which would immediately affect this problem.
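For readers unfamiliar with the mechanism, here is a minimal, hypothetical Python sketch of client-side matching against a list of known-image fingerprints before encryption. Real systems use perceptual hashes such as PhotoDNA so that re-encoded copies still match; the exact SHA-256 used here is only a stand-in, and every name in the sketch is invented.

    # Sketch: scan on the client before the message is encrypted, so the
    # encryption itself is untouched and only known material is blocked.
    import hashlib

    KNOWN_BAD_HASHES = {
        # Fingerprints of already-identified illegal images, shipped to the client.
        "3f79bb7b435b05321651daefd374cdc681dc06faa65e374e38337b88ca046dea",
    }

    def fingerprint(image_bytes: bytes) -> str:
        return hashlib.sha256(image_bytes).hexdigest()

    def send_image(image_bytes: bytes, encrypt_and_send) -> bool:
        """Block only matches against the known list; everything else is
        end-to-end encrypted and sent as usual."""
        if fingerprint(image_bytes) in KNOWN_BAD_HASHES:
            print("blocked: matches a known illegal image")
            return False
        encrypt_and_send(image_bytes)
        return True

    # Usage, with a stand-in for the app's normal E2EE pipeline.
    send_image(b"holiday photo",
               encrypt_and_send=lambda b: print("sent", len(b), "encrypted bytes"))

Whether such scanning is acceptable is exactly what the panel is debating; the sketch only shows where in the pipeline it would sit.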
And then finally, let's not forget the sector is hugely hypocritical here. A lot of these problems apply in democracies and don't apply in other types of states. As a trivial example, Apple's Private Relay is not available in China, because it's illegal in China. Companies care a lot about the negative impacts of some of these technologies in democracies, but they concede to the autocratic states and trade it for market access. So we have a sector here that is very hypocritical.
And in a session earlier we said that sometimes we do need to pierce the veil of anonymity for law enforcement, and I think that's absolutely the right approach. We can't treat privacy as an absolute right when that's wrong in law and has serious consequences. I'm not sure there's a question in there, but, adding to the conversation so far, let's talk about some of the victims. There are fixes here, and some groups are stopping us from making progress when progress could be made tomorrow if there were a willingness to do the easy things. Thank you.
>> STEWART BAKER: That is sort of a question, in the sense of a long set of propositions followed by the words "do you agree." So let me ask Mallory if she does agree. There were a lot of ideas there: that anonymity needs to be limited, though I'm not sure that is raised by the encryption debate, because you can have encrypted communications that are fully attributable; that client-side scanning would be a straightforward approach to this; that age limits on access to communications services would be worth doing; and that we're a bit too high on our horse when we say encryption is about privacy, because it certainly also becomes a vector for transmission of malware that wrecks people's security, so it's a double-edged sword.
So, Mallory, with those thoughts uppermost, what do you find in that that you can agree with?
>> MALLORY KNODEL: Well, that's an interesting way of phrasing the question, Stewart, thank you; it will challenge me. But first I wanted to say I'm particularly frustrated by the fact that the EU child protection regulation has been stalled for years because of the encryption mandate. If that were removed, that whole piece of legislation, which has all kinds of child safety measures in it, could have moved forward ages ago. The fact that this is the one thing holding it back should infuriate everyone who cares about child safety. So again, maybe it's not worth saying these folks have held the issue back or those folks have held the issue back, because what we're trying to do is come up with a list of requirements and constraints, and that's going to differ per jurisdiction, per culture, et cetera. We're in different places of the world. I think we can all agree that's the promise of the connected Internet: we all come with our own version of that and interconnect, and that's the whole idea.
One-size-fits-all platforms are not, I don't think, the way forward, and I would certainly agree with that. I think some of the things that have been said here do accommodate these kinds of other design ideas. But the issue is that back doors, or whatever you're calling these measures, have been mandated for everyone, at scale. If we can start to chip away at that idea, then I think you get all kinds of different messaging apps that can thrive, that do varying degrees of encryption and varying degrees of scanning. It's mandating that everyone do it the same way, for everyone, that is the problem. If you look at, for example, the statement from the Internet Architecture Board about the EU-mandated back doors and chat control, they get to the heart of that. It's something Gabriel said before: encryption exists, and people are going to use it. Even if you were able to sweep up, say, the largest providers, like WhatsApp, then you'd just get WhatsApp, wouldn't you? You'd sweep up WhatsApp and everyone else could do what they wanted, and you would have just disenfranchised all the WhatsApp users. That would be a fundamental change to the software, and, to game this out, you might get migration to providers that don't offer back-door access. So I'm all for a very plural world in which we have lots and lots of different communications providers. What I don't think is fair, or what we actually want, is then requiring them all to work in exactly the same way and requiring them all to have struck the same balance between user privacy and content moderation, because different users and different jurisdictions do want a different answer to that.
>> STEWART BAKER: Mia, do you have more questions or do you want to go back to other panellists?
>> MIA McALLISTER: Yeah. Dan, I want to bring you in. Are there any questions you want to address from the chat? There's a lot there, and we have someone online from Germany. Dan?
>> DAN SUTER: Mia, I can see there are a lot of comments there from Ben and Andrew, and we've also heard from Andrew, and equally in relation to Leah. The thing that's really coming over, and obviously we know this is a really difficult space, is that we often hear from industry, well, we need to be regulated, but equally we hear that then we'll have companies that will leave and go offshore. There's so much that can be done, and we need to move to the place where we are actually doing it. Look, these are thorny issues, wicked problems, as a former Prime Minister in New Zealand used to say. That requires people to come into the room, discuss, and understand our commonality. Often there are points here where we do have a common approach. I hear absolutely everything that Andrew just said in this question-and-answer session. Believe you me, as a former defense lawyer and prosecutor, I absolutely agree we should be speaking more about the victims' voice. We really should be. Equally, from a New Zealand legislation point of view, we need the content. We have a High Court ruling that says we cannot prosecute without the evidence of the content in relation to child sexual abuse material. We have no choice here. Do we want to change to a place where we can convict people on metadata? Is that really where we want to go? I don't think so. So that's where we talk about regulation pushing into this space in a way that makes children safer online. And do you know what? We're being pushed into a place where it's self-reporting, and that equally isn't a good space to be in.
I can't see my 15-year-old child, and we talked about the sextortion case, is he going to be self-reporting in relation to that kind of case? Should we be pushing the responsibility onto my 15-year-old or other children? Again, I don't think we want to be in that space, and I'm sure we would also hear that there's agreement on that. So we can absolutely come together. But who will lead that? That's a big question left hanging here, because I can really see the positivity coming out of this panel, but who is going to take the lead? Is it wherever most of the service providers are located? Is that what's required? Or is it a multilateral institution? Having taken part in the UN cybercrime meetings, it's not easy. But we need somebody to come to the fore and say, we need the right people in the room, we need technologists and academia and NGOs and Civil Society and governments, and we need to do this, because we do have victims, and we do have people who are dying, and we need to move this point on sooner rather than later, for all the reasons we talked about today. Back to you, Mia.
>> MIA McALLISTER: Thank you, Dan. We have time for one more question in the room. I'll look to this side. Oh, thank you. Is your hand up? Okay. It looks like no more questions in the room.
Going online, any more questions? You can just come off mute. All right, not seeing any more.
>> STEWART BAKER: Okay. Yes, well, then we can give the audience back three minutes of their life. (Laughs.) They can go to break early. I do think our panellists have done a great job of enlightening us about the nature of the considerations driving this debate and why it has been so prolonged and so difficult, so I hope the audience will join me in thanking Mallory and Dan and Katie for their contributions. And Gabriel. Pardon. Thank you.