IGF 2017 - Day 0 - Salle 2 - The DNS and Emerging Identifiers Including DOA


The following are the outputs of the real-time captioning taken during the Twelfth Annual Meeting of the Internet Governance Forum (IGF) in Geneva, Switzerland, from 17 to 21 December 2017. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 



>> ADIEL AKPLOGAN:  Welcome, everybody, to this session on the DNS and Emerging Identifiers.  If you can take your seats, we will kick this off.  Sorry for the delay.  We are missing one of our panelists, but we will start and shuffle a little bit.  So this panel will address a specific aspect of the evolution of the Internet and its applications.  As we are all witnessing, the Internet is evolving, and its applications as well, and over the past few years the Internet of Things has been in the middle of that evolution, expanding the landscape of the Internet as we have known it until now.

Because of such evolution, there is work being done by several groups to look at different types of identifiers, and this session will look into those emerging identifiers: what are they, how do they work, how do they interact with already existing systems like the DNS.  It will be an interactive panel with experts from different organisations that have spent time working on this.  The first part will be a discussion between the panelists moderated by Ron da Silva, and in the second part we will present the research we have done and some of our observations, and that will kick off the discussion with the panelists and also the audience.  I will then pass this over to Ron da Silva.  He is an ICANN board member, but also a very well-known technology leader in the telecommunication and broadband area.  He is the CEO of Network Technology Global.  He will moderate the panel.

>> MODERATOR:  To add to the format, each of our panelists will take ten or fifteen minutes to stage our conversation, and then we will come back together for a number of questions.  I invite the audience, as we get through the materials, if there are specific questions that come up, please take note of them and I will collect them at the end and engage the panel in a conversation on the materials.  First up is Christophe Blanchi, the Executive Director of the DONA Foundation, which, in collaboration with the foundation's multi-primary administrators, the MPAs, operates the Global Handle Registry.  He facilitates the adoption around the world of the DOA and related standards.  Christophe.

>> CHRISTOPHE BLANCHI:  I think a lot of you have heard parts of this before.  I will talk about the handle system, and I want to mention the digital object architecture because the handle system is an intrinsic part of the digital object architecture.  It's a general-purpose information management architecture for networks.  What does that mean?  It means it identifies a set of services and functionalities we believe are absolutely necessary for this to work.

So it includes uniform resolution and interoperable access to heterogeneous information systems, resources or other entities.  What that means is that you have to have an identifier system to point to digital objects, or digital information, and what I call digital surrogates, which are digital stand-ins for things that may not have a system to talk on their behalf.  It requires a data model and protocols to facilitate search of these digital objects.

And you have to have an extensible typing framework to accommodate new types of data, new interfaces and new services without having to change anything in the underlying architecture.  The system also has to be defined in such a way that you are completely separate from the particular technologies that you are using for searching, for storing, or what the IoT devices are.  None of this matters at this level.  This independence from the hardware, so to speak, is what has served the Internet and TCP/IP so well over the years, and you have to be able to integrate this with existing systems because you don't want to force people to create entirely new systems to accommodate their data.

The system has integrated security and is highly scalable.  What does this look like from a more graphical standpoint?  You have the handle system, which allows you to resolve identifiers.  What do those identifiers resolve to?  They resolve to what we call state, or digital object state metadata.  This could be metadata, location information, keys, signatures, certificates, whatever you need to verify that the entity you are going to talk to is what you expect it to be.

Then you interact with digital objects.  And it's simple: you issue a digital object request over the digital object interface protocol and you get responses.  All digital objects are the same no matter what they are.  You could have simple DOs, things like a simple sensor or a phone that needs to give its ID.  It could be people, which is very controversial, but a lot of members are looking into these things.

It could be more complicated systems, systems that respond about the aggregation of information that they have, but they are still in the physical realm.  And then you can have much more complicated digital objects, digital objects repositories that hold digital objects, those could be medical images, they could be documents, and to help all of this work, you need type registries to allow you to search for the existing interfaces to these objects as well as the data types that you will encounter when you talk to these objects to figure out what they are about.

And you have digital object registries that index the objects themselves so that you can do proper retrieval.  And you can have complicated things that aggregate this in supply chain management, where all the way to consumption you can aggregate this using digital objects.  Now, I will talk more about the handle system itself.  We are talking about identifiers here.

As I mentioned before, the handle system is a component of the digital object architecture with a defined protocol model.  It's a basic identifier resolution system for the Internet, and it can be used in other computational environments as well.  It's simple: you take a handle, which is what we call this identifier, and you resolve it into state metadata.  What the state metadata is, is completely in the realm of the entity that is creating those handles.  The identifiers persist because the syntax of the identifier is independent of the services that run them, so if a company gets bought, you don't have to worry about the names sticking around and the identifiers breaking.

It's a logical single system, but it's physically distributed, highly scalable into the trillions.  That was the idea: you are going to have trillions of things, and you need to be able to identify and resolve all of those.  Typical use is you have a handle and it points to IP addresses, public keys, URLs, metadata and such things.  One of its features is that the resolution mechanism has built-in security using an integrated PKI system, and it's optimized for resolution speed and reliability.  Something about DNS: I just wanted to say that both systems are identifier resolution systems.  The handle system is compatible with DNS; a local handle service can resolve DNS requests as well.

One of the applications is that since there is security built into the handle system, you can modify in effect the handle records in a secure way and have them resolve through the DNS interface if you don't have DNSSEC available.  Some of the security features of the handle system: authentication, with built-in PKI capability; the handles have access control at the level of the record; authorization; confidentiality is an option, if you like, for systems that can't afford the overhead of encryption, and you can specify all requests and responses to be encrypted.  There is non-repudiation and integrity.  All record responses can be signed by the server from which they came, and records can also be signed by an authorized administrator such as the Global Handle Registry.

The signatures are defined so you can use them anywhere you like, and they are auditable.  So what is a handle?  You have a prefix, a set of numbers with a dot or multiple dots, it doesn't matter, and a suffix.  Handles are globally unique and resolvable.  You don't have a handle if it's not resolving.  I mean, you could say it's a handle in the making, but the act of creating the handle and resolving it verifies its uniqueness and the fact that it resolves to data.  The prefixes are allotted to local handle service providers, and some of them are stored in the Global Handle Registries and some can be stored at a local handle service.

The handle prefix is typically resolvable by the Global Handle Registry to one or more IP addresses for local handle resolution services.  The last step is to resolve that handle at that particular service.  Handles are Unicode 2.0 and encoded in UTF-8.  The prefixes tend to be numeric in value, but as I will discuss later, the MPAs have some say in what they are going to be using.  As for the suffix, there are no restrictions, and the length of the suffix becomes a policy issue.

So it's up to each GHR service provider to decide how long that suffix has to be.  So if you resolve a handle, what do you get?  This is typically what you have.  You have a handle and you have a set of defined type-value pairs, and you can put anything you want in there.  The types of these values can themselves be handles.  And actually our recommendation is that they be handles, so that if you see a value you don't understand, you resolve the type and it will tell you potentially what that type is, maybe with some utilities for parsing it, and this is how you can in effect have latent interoperability.
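The type-value record described here can be sketched in a few lines of Python.  This is a minimal illustration, not the actual handle protocol: the handle, indexes and values below are invented for the example, shaped loosely like the JSON the public HTTP proxy returns.

```python
# Hypothetical handle record (illustrative values, not a real registered handle).
record = {
    "handle": "20.500.12345/example-object",
    "values": [
        {"index": 1, "type": "URL",        "data": "https://example.org/object"},
        {"index": 2, "type": "EMAIL",      "data": "admin@example.org"},
        # A type can itself be a handle, so a client that sees an unknown
        # type could resolve it to learn what the value means.
        {"index": 3, "type": "20.1000/42", "data": "opaque-typed-payload"},
    ],
}

def values_of_type(record, wanted):
    """Return the data of every value whose type matches `wanted`."""
    return [v["data"] for v in record["values"] if v["type"] == wanted]

def handle_typed_values(record):
    """Types containing '/' look like handles and could themselves be resolved."""
    return [v["type"] for v in record["values"] if "/" in v["type"]]

print(values_of_type(record, "URL"))
print(handle_typed_values(record))
```

The point of the sketch is the last function: because a type is just another handle, interoperability can be achieved lazily, by resolving types only when a client encounters one it does not recognize.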

A quick review of how the handle system works: you have the circle at the top, which is the GHR, and the set of all handle services.  You resolve a handle by talking to the Global Handle Registry.  You get a service information pointer back.  It points to a handle service.  A handle service can be made out of mirrored services, and a primary or mirror service could have as many servers as it likes, and this is the server that the client talks to directly to resolve the handle.

A word about the Global Handle Registry: the Global Handle Registry is a multistakeholder system.  All of these GHRs that you see in there contain GHR records that mirror each other.  Each of these services is managed by a Multi-Primary Administrator, MPA for short.  They get allotted a prefix, and they are the ones that can derive prefixes from it and allot them to any organisation that wants to create handles.  How does this work?  This in effect is trying to maintain a consistent notion of what all of these top level prefixes are.

So when CNRI, for instance, who is now an MPA of this GHR, gets a request for a prefix, they create it, they push it to their system, and it gets pushed to all of the other systems and cryptographically validated, and each system makes sure that the information received is indeed the one that validates the certificate chain, and so on and so forth.  DONA here has the responsibility and can only, in effect, credential new MPA prefixes; we cannot actually create handles within this system.

All MPAs replicate all of the other MPAs' prefixes.  They can allot an unlimited number of derived prefixes, but an MPA can only derive from its own prefix, like 20 or 21.  The IDF, for instance, could never create a prefix off of 20.  So I described a little bit what the MPA does.  It's a process that DONA oversees along with the existing MPAs: who can become a new MPA and get a prefix to manage with their own policies.

What are the applications of this?  I'm jumping back a little bit.  We talked about the MPAs, and now I will talk about what the MPAs do with these things.  I think everyone is familiar with the DOI, used by scientific journal publishers and document management systems in general, but data set identification through DataCite, for instance, is one of these applications of the last 10 or 15 years where you can start referencing large data sets, and subparts of these data sets, using these handles, which makes them persistent over time.

One large application is combating counterfeiting.  This has been used widely in China for the milk supply chain, making sure that each can of milk is traceable back to when it was made, who transported it and where it should be.  ITU uses it also.  Heavy manufacturers use it; there are financial applications such as securities tracking; and in IoT, again going back to China, they are pushing smart building projects for controlling everything in the building, and smart manufacturing: accessing, monitoring and controlling all of the manufacturing equipment, the resulting parts and how they get integrated.  They even have a way to talk to something like 5.5 million cell phone towers.

Big data is another application of this.  I talked about data set identification, but there is this issue of description typing.  How do we capture workflow, issues of provenance, authentication, distribution?  Who do you pay to get access to the data sets?  And blockchain is one of these things where, depending on who you talk to, it is a digital object architecture.  I wouldn't go there, but what I would say is there are a lot of synergies possible, where you could use the handle to resolve into the data.  The identifiers within the blockchain could be handles that point to the data that the blockchain is certifying.

A little bit about DONA.  We are based in Geneva, and our goal basically is to provide coordination, software, and other services for facilitating the adoption of the DOA.  We promote the ITU recommendation which is based on the digital object architecture.  We operate the GHR, as Ron mentioned, and we are involved directly and are the only ones who can credential new MPA candidates.  The idea is to get even global representation: an even geographical distribution, different sorts of industries.  Variety is what we are looking for, as well as reliability.  And I think I will stop there.  Thank you very much.

>> MODERATOR:  Thank you, Christophe.  Next, we are going to have Benoit Ampeau.  He is the head of Afnic Labs, an internal research team within Afnic, the operator of the dot FR top level domain.  He is working in IoT and is also involved in several industry standardisation organisations, such as GS1 and also the LoRa Alliance, which works on long-range radio networking.

>> BENOIT AMPEAU:  Thank you for the introduction.  Good afternoon, ladies and gentlemen.  First of all, for those of you that don't know us, Afnic is the French ccTLD, but we also act as a technical DNS back-end provider for other TLDs.  We have been involved in IoT for a few years now; we are the principal contributor to the ONS, Object Name Service, standard, and we are as of today a member of the LoRa Alliance, an institutional member participating in technical committees and workshops.

First of all, I would like to share with you some terminology before we get deeper into the subject.  So let's define what identity is.  Identity is something definable and recognizable, like my watch: a name does not need to be unique and does not follow any particular naming convention.  An identifier, our subject for today, identifies an object.  It could be either physical or virtual.  It needs to be unique and follows a particular naming convention.  Then we also have addressing.  With addressing, we need to address an object uniquely in the scope of its communication.  It needs to be unique and needs to follow a particular naming convention.

We know IPv4 and IPv6, and in this case address and identifier can be the same or different.  Lastly we have service discovery.  It maps a unique identifier to the appropriate unique service information.  So any naming convention is a convention for naming things in an agreed scheme.  A very, very short review of the Internet: what you can see here are the related principal mechanisms, the resolution part, how you resolve identifiers through a naming service to an application, and the hierarchical model with its naming authority and delegation model.  In the real world, you have got exactly the same model for postal addressing.  Postal addressing is managed at the country level, and afterward at the city level, for instance for the streets, and so forth and so on.

Back to IoT concerns.  One major concern is making things identifiable.  In IoT we distinguish between the entity of interest and the device itself.  In IoT, the truth is that the identifier is carried by the carrier device and not the thing itself.  On the technical side, we used to say also that a thing is considered dumb as soon as it's not connected.  Once it's connected, we say it's smart.  With this slide, I wish to show you the reality; maybe, like the cows here, you can assume there are cows in a naming convention too.  So in case of the need for a global naming convention, this would imply migrating into and creating a large naming space.  We already have IPv6, for instance, but as of today the question is how to communicate and get some interoperability between identifiers and naming conventions.

It leads to the point that for an Internet of Things we need something which is persistent.  We need a non-proprietary root, like the Internet's.  For identifiers, we have got two main categories, hierarchical ones and flat ones.  So let's dive into a concrete example in the supply chain industry.  EPC is the acronym for Electronic Product Code, known also through bar codes, QR codes and representations in RFID tags.  The naming authority is GS1, and this is the naming convention for the bar code: you have a prefix, a company code, a product ID and a serial number.  Using DNS and the bar code, you can connect things from the real world to the Internet, with the bar code connected to the DNS, and get access to applications and extended usages such as extended packaging, supply chain track-and-trace applications, and so forth and so on.

How is it done?  I won't be very technical, but basically you have got an ID, you convert it, and at the end you get a domain name.  So from a bar code you can have a domain name, which makes things resolvable on the Internet using the DNS.  These codes become fully qualified domain names.  So far we have talked about hierarchical identifiers, but we could do the same with flat identifiers.  Here is a fictitious approach.  Let's take the example of a string: you could easily, on the last line, put this identifier into a DNS record so that you can access some services and applications on the Internet.
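The bar-code-to-domain-name conversion can be sketched roughly as follows.  This is a simplified illustration of the kind of mapping ONS defines, not the exact rules of the GS1 specification: the EPC value is a standard example, but the field handling and root domain here are assumptions for the sketch.

```python
def epc_to_ons_domain(epc_urn: str, root: str = "onsepc.com") -> str:
    """Convert an EPC URN into a DNS domain name, ONS-style (simplified)."""
    # e.g. "urn:epc:id:sgtin:0614141.112345.400"
    parts = epc_urn.split(":")
    scheme = parts[3]                 # "sgtin"
    fields = parts[4].split(".")      # ["0614141", "112345", "400"]
    # Drop the serial number and reverse the remaining fields so the
    # company prefix becomes the delegation point, as in DNS hierarchy.
    labels = list(reversed(fields[:-1]))
    return ".".join(labels + [scheme, "id", root])

print(epc_to_ons_domain("urn:epc:id:sgtin:0614141.112345.400"))
```

Once the identifier is a fully qualified domain name, ordinary DNS resolution (for example NAPTR lookups) can point it at services and applications.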

Back to standardization and naming services.  Here are some examples.  What do they have in common?  They are all using DNS at some point: domain names, the Electronic Product Code as I have mentioned, OIDs, and also the DOA which, as has been presented previously, can use DNS at a certain stage.

So what is our approach?  Basically we say there are two approaches regarding innovation, emerging technologies and identifiers: a disruptive one or an evolutionary one.  In our case, we trust an adaptive approach, leveraging the existing, stable Internet infrastructure to make it complementary to new usages and new identifiers.  For instance, in LoRaWAN, the latest specification uses DNS to allow identification of roaming objects and their identifiers on a network, to resolve and route to the correct network of the object.

So it's a very good example of what can be achieved with DNS and other technologies.  As a wrap-up, what are the requirements for naming services?  It must work for both legacy and new naming conventions, for any new identifier, and should work for both hierarchical and flat identifiers.  So our vision: we do believe the DNS can be a support element for emerging identifiers and technologies.  DNS could be the universal discovery service for any naming convention.  Thanks a lot for your attention.

>> MODERATOR:  Thank you very much, Benoit.  We go from digital objects to leveraging DNS.  Now we will jump into a new space.  Nick Johnson is with the Ethereum Foundation, leading the development of the Ethereum Name Service with a worldwide team of passionate developers.

>> NICK JOHNSON:  Hello, my name is Nick Johnson.  I'm a developer at the Ethereum Foundation.  The Ethereum Name Service is a system that maps human-readable names, for example Inigo Montoya, to resources such as Ethereum accounts, public keys and legacy DNS records.  It provides a distributed and decentralized resolution service, distributed along with the Ethereum blockchain.  It's transparent and auditable because it's on the blockchain, and easily upgradable.

ENS, like DNS, is hierarchical.  There is a root, which is presently owned by a multisig controlled by a selection of keyholders, then Top Level Domains and Second Level Domains and so forth, and the authorization to make changes is likewise distributed hierarchically.  The ENS architecture is broken up into two main components.  On the left here, we have the ENS registry.  This is a piece of code that runs on the Ethereum blockchain, and it maintains a simple mapping from every name in the hierarchy to the owner of the name and the resolver.  The owner has the right to change the resolver, while the resolver is a reference to the address of another smart contract that is responsible for answering queries.

The second component, of course, is the resolvers themselves.  Their job, given a query such as "please resolve the Ethereum address of wallet.eth", is to respond with the appropriate record type.  Resolving a name on Ethereum is a two-step process.  The client code calls the registry asking for the resolver address of the name it is interested in, here foo.eth, and the registry responds with the address of the resolver.  Then the user code goes off and asks that resolver contract what address is associated with foo.eth, and gets back an identifier.  This system works similarly to DNS in that we define different record types, defined by different record profiles: Internet resources, blockchain resources and so forth.  We have several predefined, such as addr for Ethereum addresses, and we plan to expand this over time.
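The two-step resolution described here can be sketched in Python.  One caveat up front: ENS names are first reduced to a fixed-length node via the recursive EIP-137 "namehash" over Keccak-256; the standard library only ships NIST SHA3-256 (different padding), so it stands in for Keccak here, and the registry, resolver address and records below are invented stand-ins for on-chain contracts.

```python
import hashlib

def h(data: bytes) -> bytes:
    # Stand-in hash: real ENS uses Keccak-256, not NIST SHA3-256.
    return hashlib.sha3_256(data).digest()

def namehash(name: str) -> bytes:
    """EIP-137-style recursive namehash: fold labels right to left."""
    node = b"\x00" * 32
    if name:
        for label in reversed(name.split(".")):
            node = h(node + h(label.encode()))
    return node

# Step 1: the registry (modeled as a dict) maps a node to its resolver address.
registry = {namehash("foo.eth"): "0xResolverContract"}        # illustrative
# Step 2: that resolver answers typed queries, here the "addr" record.
resolver_records = {(namehash("foo.eth"), "addr"): "0x1234...abcd"}

node = namehash("foo.eth")
resolver = registry[node]
address = resolver_records[(node, "addr")]
print(resolver, address)
```

Splitting registry and resolver like this is the design choice that makes ENS upgradable: new record types only require new resolver contracts, never a change to the registry.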

ENS launched on the Ethereum main network on May the 4th, 2017.  We had a soft launch period during which names under the .eth pseudo-TLD were given out, and during that period ENS was responsible for 24% of Ethereum usage.  During that period 180,000 names were auctioned using a Vickrey auction system that distributes names to people based on bid prices.  As you can see here, the level of the bids, after an initial surge, was fairly steady throughout the period, and the funds for those names accounted for 170,000 Ether; presently Ether is about $700 each, though it was substantially lower at the time.  Funds for auctioned names, rather than being spent, are locked up in a deposit contract that provides an economic incentive for people to relinquish names when they no longer require them, so these funds are locked on the blockchain and are accessible to users if they decide to relinquish ownership of the name.

We have had wide client adoption inside the Ethereum ecosystem.  Most of the major clients for interacting with Ethereum now support ENS, along with a couple of exchanges and other infrastructure systems like Etherscan, which is used for investigating the Ethereum blockchain.  In November at DevCon 3, the Ethereum conference, we announced support for integration with the DNS system.

The way this works is that a recent upgrade to the Ethereum system makes it possible to verify RSA signatures, which means we can construct a proof which contains the record we want to prove and the signature in one package.  We can submit it to an oracle on the Ethereum blockchain.  That oracle is capable of processing the proof and verifying the signature, at which point it adds the record to its internal registry.  Then the user can submit a claim to a registrar on the chain, which queries the oracle, establishes that they have a valid claim to the record, and if the claim is valid, calls ENS and sets the ownership of the record.

The result of this is that a user can take their DNS domain, add a text record to a subdomain of it specifying that they own this domain and wish it to be associated with a particular Ethereum address, and then go through this process on Ethereum, thus importing the domain into ENS, which means you can now associate blockchain resources with it.  You can use it in any context where you could use a dot ETH name, and we hope this opens a lane of collaboration which will mean that DNS names can be used on the blockchain and ENS can be used to host DNS names for DoS resistance and better distribution of zones.

Next, we are working on making it easier for users to register domains and subdomains, since the process is still highly technical; on voluntary dispute resolution, since the current system is entirely automated and has no built-in dispute resolution, perhaps following ICANN's process with the UDRP; on better support in a wider variety of clients; and on designing a permanent registrar that will learn from experience with the current registrar.  Thank you very much.

>> MODERATOR:  Thank you, Nick.  Our last presenter is Alain Durand from the CTO office at ICANN.  The CTO office directs research in a number of different areas, and emerging identifiers is work Alain has been doing; he will share some of what is going on there in a minute.

We have gone quickly into a lot of technical details.  If you are not familiar with some of these emerging identifiers, it can be challenging to follow along, so don't be afraid if you have basic questions you want to throw out to the panelists here; I will try to bring it up a level when we get to the end and invite you to participate.

>> ALAIN DURAND:  Good evening.  My name is Alain Durand.  I work in the office of the CTO at ICANN.  I will talk about work we have done on emerging identifiers, and give some feedback on technologies we have been looking at.

Some of the activities: we had a workshop at the ICANN 58 meeting in March this year in Copenhagen.  It was the first of a series of workshops on emerging identifiers, and we had invited participants from different new technologies.  So we had a presentation about DONA that Christophe actually made, a presentation on other programs, and a presentation on Namecoin, which is essentially a technology modeled on Bitcoin to provide a name resolution system.

At ICANN 60, just a couple of weeks ago in October in Abu Dhabi, we had another workshop.  There was a panel on blockchains and a short presentation on some risk analysis the community has asked us to do on the DOA and the handle system, and we had a demo of a system using the DNS infrastructure to help an IoT device check its firmware and auto-update if necessary.  I will talk about this a bit later.

So I wanted to share some observations that we have made on different technologies.  First, on the DOA and the handle system: Christophe talked about the DONA Foundation.  The DONA Foundation performs a number of roles for these technologies, but traditionally such Internet technology roles have been somewhat separated.  For example, the IETF develops technologies and protocols, ICANN develops policies, and operators like Top Level Domain operators do the operations.  Some of those roles are mixed in the DONA Foundation, and one of the reasons is that the DONA Foundation is still a new entity and is still working out how to operate.

But that raises some questions and risks: what will happen in case of failure of the foundation, and what will be the exact recovery mechanism?  Another observation is about documentation.  Documentation of the current protocol is unfortunately not publicly available.  We have been promised that some of this will be updated and made available soon, but it's not available as of today; I just checked a few minutes ago.

We have documents that date back to 2003 that describe a version of the protocol that is not the one being used now.  The risk is that it's getting difficult to write a new, independent, interoperable implementation from scratch.  We are aware of one implementation that is openly available, which is the one from CNRI.  We are aware of one implementation that has been done by another party, but that's not publicly available, and we are not aware of any others.

So there is a difference between having an open source implementation of something and having an open standard where people can come and participate and propose changes, with a mechanism to integrate all of that.  Another observation is in terms of deployability and operations.  There are very few clients actually available; for example, it's not integrated into any browser or operating system.  As a result, proxies are used: a proxy is used from the Web to go and resolve handle system names.

Depending on who is operating the proxy and what code runs on it, there might be some confidentiality issues, because a proxy essentially is a man in the middle and sees all of the traffic.  The proxy will see what the users are trying to resolve and also what it resolves to.  The current code is very careful not to log too much, but it could be operated by an unscrupulous party that may do a little bit more logging than we would desire, and that would raise some privacy issues.

Now, shifting gears, let's talk about blockchain, because this is a fairly hot topic, if only we look at the valuation of all of these systems in the world.  When you talk about blockchain, the first thing to realize is that essentially a blockchain is a chain of blocks that is replicated in many different places, and every time there is a new transaction we add a block to the blockchain, so the blockchain grows without bound.  So there is a scaling issue right there.  Not only do you need to keep the current state, but you need to keep all of the history, past and potentially future too.

So it may be difficult to scale those things to the size of some of the things we know on the Internet, because machines will have to hold the entire chain in order to participate, so there is a concern that only very large nodes will be able to participate in this.  Blockchain relies on the concept of proof of work in order to essentially limit who can participate.  You need to provide computing power.  Computing power means electricity.  And if you are in a place where electricity is expensive, participating in a blockchain may be prohibitive.  If you are in a place where electricity is not expensive, then you have an advantage, so that may create a bias.

But in the end, it's a lot of electricity, a lot of computing power that is used to look at random numbers to try to find one that satisfies the requirements.  The rate of adding blocks is fixed in most blockchains, not all of them.  That means transactions are not recorded immediately, so there could be a race: oh, you are paying me more, so I will process your transaction first.  So maybe you can talk about neutrality of transactions and things like that.  But more importantly, transactions cannot be deleted.  There is no delete.  There is no way to correct a mistake.  That could be a problem in some cases.
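The proof-of-work search described here, trying random numbers until one satisfies a requirement, can be shown in a toy sketch.  This is a deliberately tiny difficulty and a made-up transaction, nothing like real mining parameters, but the mechanics are the same.

```python
import hashlib

def mine(block_data: bytes, difficulty: int = 2) -> int:
    """Find a nonce whose SHA-256 hash has `difficulty` leading zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Each block commits to the previous block's hash, which is why history
# cannot be edited or deleted without redoing all subsequent work.
prev_hash = hashlib.sha256(b"genesis").hexdigest()
block = prev_hash.encode() + b"|alice->bob:10"
nonce = mine(block)
print(hashlib.sha256(block + str(nonce).encode()).hexdigest())
```

Raising `difficulty` by one multiplies the expected number of attempts by sixteen, which is exactly why real proof of work consumes so much electricity.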

Also, all of the history is visible, so there is no privacy there.  There is no right to be forgotten.  Now, the transactions are not tied to the identity of a person; they are tied to a cryptographic hash, which is essentially an account number.  But if I look at all of the transactions, I can see this account number has bought this thing, that thing, that other thing, and I can reconstruct the puzzle from that.  Not having a right to be forgotten may be a little bit problematic too.

Control of this is done through public key, private key, which is great.  It raises the security level, but at the same time it creates a new risk.  For example, I keep forgetting my password.  Usually what I do is go to the website and click on the button that says "I forgot my password," and I get an email.  Okay.  That's how I recover my password.  You can't do that with public key, private key.  If you lose your private key, you are done; there is absolutely no way to recover it.
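The two points above, pseudonymous account numbers and unrecoverable private keys, come from the same design: the on-chain "account" is derived one-way from a key pair, and nothing stores a reset path. The sketch below illustrates this with plain hashing; real chains use elliptic-curve keys, so the derivation here is a stand-in for illustration only.

```python
import hashlib
import secrets

private_key = secrets.token_bytes(32)                  # known only to the owner
public_key = hashlib.sha256(private_key).digest()      # stand-in derivation
account = hashlib.sha256(public_key).hexdigest()[:40]  # visible to everyone

# Anyone can read `account` in the public ledger and link all of
# its transactions together, even without knowing the owner's name.
# But spending requires a signature only `private_key` can produce,
# and the derivation only runs one way: lose the private key and no
# "forgot my password" email can regenerate it.
```
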

Now, all of this is to talk to you about this new project that we have started, which we call project OX.  It's like our Project X at ICANN, and it started with the question: can the DNS provide persistent identifiers?  Sometimes we will call them non‑semantic identifiers.  Essentially the DNS is used mostly to map names to IP addresses, but can we do something else with it?  The short answer seems to be yes.  We need three things in order to achieve that.  We need a branch in the DNS name space to put those things in.  We call those things persistence anchors.

Maybe more than one, just to have competition.  We need a naming convention.  Christophe talked about the naming convention in the handle system, which is to not use mnemonic names but maybe use numbers, to not try to map the organisation's structure, and to use as flat a space as possible.  All of those naming conventions can be used in the DNS as well.  And we need a new record type in the DNS to put structured data in there.

We call this OX, for object exchange.  So we did a demo at the ICANN 60 meeting a month ago, in collaboration with the capacity organisation and the University of La Plata in Argentina, and the demo was about a small IoT device.  So I will skip this quickly.  Yes, that's this one, yes.  This is a one‑dollar IoT device.  Nothing fancy, nothing expensive.  This thing has a Wi‑Fi interface and USB to provide power.  That's all it does.  And we integrated it into an existing infrastructure, the DNS, and what we did was create an identifier for this IoT device, essentially a structure under this persistence anchor with a manufacturer number and a device model number.

Not a serial number, a model number.  So the way to access that: there is some information put in the DNS before we start this thing, and when the device boots it's going to ask for an OX record in the DNS, and that OX record will return structured information that will include the firmware version that is supposed to run and the location where to grab the firmware if needed.  So the little device will check whether it's actually running the correct firmware, because now it knows which one is supposed to be running, and if it doesn't have it, it will fetch the new firmware with an HTTP GET, install it and reboot.
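The naming scheme behind the demo, a model number under a manufacturer number under a persistence anchor, can be sketched as a small helper. The `persistency.lat` suffix and the flat numeric labels follow the talk; the specific numbers and the function itself are illustrative, not the demo's actual code.

```python
# Build the DNS name a device would query for its OX record.
# "persistency.lat" follows the demo; "123" (manufacturer) and
# "1234" (model) are illustrative placeholders.
PERSISTENCE_ANCHOR = "persistency.lat"

def device_record_name(manufacturer: str, model: str) -> str:
    # Flat, non-mnemonic numeric labels, most specific first,
    # e.g. 1234.123.persistency.lat
    return f"{model}.{manufacturer}.{PERSISTENCE_ANCHOR}"

name = device_record_name("123", "1234")
# At boot the device would query `name` for its OX record, read the
# expected firmware version and download URL from the structured
# data, and fetch the new image over HTTP if its version differs.
```
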

We did all of this in three weeks.  Three weeks from the time we started a conversation on how to implement this to the time we made the demo at ICANN 60.  For me that's proof, to use the language that Benoit was using, of evolution.  What you create is evolutionary: you rely on infrastructure that already exists and make relatively small changes to it.  If we can deploy new technology very rapidly and align the costs and benefits of doing so, then it sounds fairly promising.  So that's what we have done.  And thank you for your attention.

>> MODERATOR:  Thank you, Alain.  Before I jump into some questions on the panel, let me first turn to you, the audience.  Anybody want to raise an issue, concern?  Yes, the one in the back.

>> AUDIENCE MEMBER:  My name is Sails.  I'm the officer on the experiment.  The whole scheme sounded like shifting the process of putting an IoT device in the Cloud over to the identifier, so can you clarify that?

>> MODERATOR:  Do you want to direct this one at Alain?

>> ALAIN DURAND:  So what we did: we took this little IoT device, right, and we used this top‑level domain, and we created a domain underneath, persistency.lat.  And under that we created another domain that was a number associated with the manufacturer of that device.  We had 123.persistency.lat.  And for this particular model of device, model number 1234, we created a new zone, 1234.123.persistency.lat, and in that DNS zone we created this record, which is a structured record that we have defined and drafted to describe this thing, and in this record we can put structured data.

In the structured data we have a pointer to a web page, you can have an email address for contact information, there is a field that contains the version number of the firmware you need to have, and there is a URL to download the firmware.  So if the version number is 1.7 and the device reboots and sees it has 1.6, it knows: okay, I don't have the correct one, I will fetch it from the URL that was given to me, and I can reboot.  So that's how this was done.  That's the structure that we put inside of a DNS record to allow this auto update.
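The structured data just described can be modeled as a small record with the four fields Alain lists. The field names and this Python representation are assumptions for illustration; the actual OX wire format is defined in the draft he mentions, not here.

```python
from dataclasses import dataclass

# The four fields listed above; the names are illustrative, not
# the draft's actual field names.
@dataclass
class OXRecordData:
    web_page: str          # pointer to a web page about the device
    contact_email: str     # contact information
    firmware_version: str  # version the device is supposed to run
    firmware_url: str      # where to download the firmware

def firmware_to_fetch(record: OXRecordData, running_version: str):
    """Return the download URL if the device is out of date,
    otherwise None (the device keeps running as-is)."""
    if record.firmware_version != running_version:
        return record.firmware_url
    return None

rec = OXRecordData("https://vendor.example/m1234", "ops@vendor.example",
                   "1.7", "https://vendor.example/fw/1.7.bin")
url = firmware_to_fetch(rec, "1.6")   # a device on 1.6 must update
```

A device on 1.7 would get `None` back and skip the download, which is the whole auto-update decision in one comparison.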

>> MODERATOR:  Yes, here, please.

>> AUDIENCE MEMBER:  Hello.  This is Wally Becker.  My question is about the architecture of the IoT device just presented.  Can you please explain what kind of authentication and synchronization mechanism is integrated into that device?  For instance, to identify every IoT device, the microprocessor, the microcode you have in it, so auto authentication and (?) in terms of which device is accessing a particular ‑‑ thank you.

>> PANELIST:  So this is a demo, a proof of concept.  I mentioned we did that in three weeks.  This is not a complete product.  This is not something you can go buy tomorrow.  The way we did it is, in the code at boot time, we put a file onto the device that contains the model number.  In a real production environment, we would read that from maybe an EEPROM or something like that that is burned into the device.  In terms of security, what we did was version 1 of the demo.  The version 2 we are thinking about is to use DNSSEC.  That will allow us to validate information we get from the DNS to make sure it has not been tampered with, that it is the actual, real information that we need to get.

So we had questions about DNSSEC, because it requires a lot of processing.  That's actually true; however, there is only one record to verify, and even if it takes, like, five seconds to go and verify that one record, it's not going to take too much processing power or too much battery life on the device.  That's a cost we think is worth bearing, but that will be the next step of this project: to use DNSSEC for validation.

>> AUDIENCE MEMBER:  Thank you for that.  Another question: it's about the operating system of the device.  It's going to be operating over time, and is there any way the device could update the firmware and carry out that kind of update periodically, to identify the devices that are accessing the Internet?

>> PANELIST:  I'm not sure I understand your question. Would you repeat?

>> AUDIENCE MEMBER:  My next question is just about the firmware update.  How often will this be done?

>> PANELIST:  So this was done at boot time, so whenever the device reboots, we do the check.  It's quite possible to program this so that every day, every week, every month, at whatever frequency you want, you can do the same check.  So for the demo, again, it's a proof of concept, not a product, and we did the check at boot time, but it's very easy to do it on a regular basis.

>> MODERATOR:  Let me ask someone else on the panel here.  Benoit, you have been looking at putting additional information in the DNS as well.  Do you want to comment on these questions, your observations, some of the things you have looked at on your team with IoT specifically?

>> BENOIT AMPEAU:  So just to give some additional explanation, for instance, regarding the protocol: there are certain use cases where DNS is useful.  Basically there is a device ID.  The device is emitting a radio signal to a gateway.  The gateway belongs to an ISP, and this ISP's network does not know the device.  So there is a process in which that ISP will query its trust anchor, which is the DNS, and will forward the request to the home network.

So in the use case you mention, there is a very good example of what can be achieved using DNS: it does not replace a security mechanism in this case, but it acts as a trust anchor for the device, and it allows interoperability between networks that do not otherwise know each other.

>> MODERATOR:  Good, thank you.  I know we are very deep in the weeds.  Let me attempt to pull us back up a level, and then I have another question in the back.  It's this: we are talking about new identifiers, and if you look at them on paper it's a mash‑up of characters separated by dots or colons, and maybe it looks like IP addresses and domain names mashed into some new string.  So my question for the panelists is: to what extent are end users going to have to interact with some of these new identifiers?  Consumers are generally used to putting a domain name into their browser to resolve it and get access to some content on the Internet.  What's the engagement of the end user with these new identifiers?

 We will start with Christophe.

>> CHRISTOPHE BLANCHI:  I would like to take a crack at that one.  I'm on a panel tomorrow about interoperable identifiers as well, and one of the things we seem to be experiencing is the magic disappearing DNS.  When you go to your browser, you search for stuff, you get a link and you click on that.  I mean, I'm trying to think how many times I type a URL in a year, and if it fits on both hands, I don't know what I'm doing.  So I think what happens is, when you have resolvable identifiers, you put the metadata about what the identifier resolves to within the identifier.

So I don't need to know what the URL or the reference to where this thing is, I just need to know what is the metadata, and that's where I'm going to get the description from.  So I think going forward, interactions with identifiers will be done by the machines, and it will be interacting with the metadata of it.  So I think in the case of the handle system, this is what is being done.

We have metadata within the handle itself; the identifier is never typed manually.  There is such a service as, you know, short DOIs, but that's really just for a small community.  What we anticipate is that people are going to deal with metadata and use that to interact with the identifier without ever seeing it, but they will be able to verify cryptographically what is on the other side, who said what about the content, and what the state of the content should be.  So I think overall it will be a more powerful way to interact with your information than through straight links.

>> MODERATOR:  And, Nick, on Ethereum?  No?  The end users will never see any of these identifiers or be confused by them?

>> NICK JOHNSON:  So our main goal has been identifiers for end users, so most of the identifiers we are inventing are aimed at being entered directly by users, but I think there is certainly a lot of room for identifiers and resolution services resolving names that aren't entered by humans.  I mean, the trivial long‑standing example here is reverse DNS.  And we have the same with Ethereum; for instance, you can resolve an address to get a name.  So I do think it would be unreasonable if there were use cases that expected users to enter these long identifiers, but I don't think that's the target use of these.

>> MODERATOR:  I had a question in the back here.

>> WALID AL‑SAQAF:  I'm with the Internet Society Blockchain Special Interest Group, and I have been following discussions on this at the latest ICANN meeting.  We presented a paper describing what has been done in this space, and we came to the conclusion that, while not perfect, disruptive technologies often need to be looked into and examined.  Has ICANN taken a bold step, or is it considering looking into these, not necessarily the existing models but, for example, the idea of distributed ledgers in general?  For example, there is the emergence of hashgraphs, which are much faster in terms of speed and scalability.

There is IOTA, a cryptocurrency, and how it works, and also, if we consider Ethereum transitioning to proof of stake, that has the potential of solving the scalability problem.  So this is an open question.  We believe disruptive technologies will remain somewhat problematic at times, but that doesn't mean they should be ignored.  So what do you think ICANN should or can do in this regard?

>> MODERATOR:  I think I would like to pass that to my right.

>> PANELIST:  Yes, thank you for the question.  In fact, as you have seen from Alain's presentation, this is not the first instance of this discussion we are having at ICANN.  We have already had two panels within the ICANN meeting framework, and this is the third one.  So we have an interest in looking into this, but at this time these are emerging technologies, so we are researching, we are looking at what exactly they put on the table, which problem they are trying to solve, and, you know, going deeper with the help and the input of the community: how can this apply to ICANN's mission?  What exactly do we do with it?  And naturally, we are trying to put in place a wider consultation with the community for input on these emerging identifiers.  What do they think?

So, yes, we are going to look into that, but we don't have a precise answer on how we are going to use blockchain or Ethereum or any other technology.  What is important is that we are not endorsing any of them.  The research department is looking at each of them as they come up and putting together white papers.  We are going to publish a white paper soon, for instance, on one of these identifiers, and probably another one to address the blockchain aspect as well.  So, yes, it's a work in progress.

>> MODERATOR:  Next, I had a question from the lady in the back in white.

>> MADELINE CARR:  Thanks.  My name is Madeline Carr.  I'm from a university in London, UCL.  We are part of a large 30 million‑pound consortium that is looking into the cybersecurity of the Internet of Things from a socio‑technical perspective, and I'm really interested in this issue.  I can see all kinds of applications for it, but I can also see some privacy concerns that would come out of this, and I was wondering if any of the panel could tell us: are there discussions going on in the technical community about those concerns, or is that something that needs to be taken up elsewhere?  And then I just had a very quick question for Benoit, if I may, about something you said: you said that identifiers are not for a device, but for a carrier, and I didn't understand what you meant by that.  I was hoping you could explain it.  Thank you.

>> BENOIT AMPEAU:  So, regarding the identifier and the carrier question: basically, when you create a thing, you put an identifier in it.  The identifier is written on it, so the object is carrying an identifier from the manufacturer.  And one problem we have is, for instance, with devices that last for decades: you have got one identifier, but the object can go through several owners.

So you need the identifier to remain the same in most cases, even if the owner changes.  And it's an issue in the industry to be able to change identifiers.  That's why you need to put mechanisms in place to answer these questions.

>> MODERATOR:  On that second topic of privacy and securing the information: Alain, I think you raised this a little bit in your slides, specifically on DOA and proxies and confidentiality there, and also on blockchains having a permanent record with no way to erase it.  Before I turn to you, because I know you want to speak to this, I would first come back to Christophe.  Do you want to talk a little bit about the criticism regarding proxies and confidentiality in DOA?  And I will ask Nick to likewise talk about the blockchain side.

>> CHRISTOPHE BLANCHI:  The weakness comes from DNS.  That's the problem with proxies.  This is more a reflection on the state of what we call the browser technology industry: you could have a URI spec, and they are not going to implement it.  So you write a URI spec for the handle system and you think Google is going to pick it up?  No.  They are going to pick it up if there is a user community behind it pushing for it.  So standards versus user community is a little bit of a catch‑22.  So in the case of the handle system, the proxy was sort of the next best solution, I guess, which is that people use these publications on their websites so they can build websites, and then we will use the resolution.  I think we are starting to see the end of this because of JavaScript, not that it's my favorite solution, but it's a solution.

Not only that, but the JavaScript can scrape your pages, figure out handles in their native form, as opposed to some weird non‑standard URI form, and then provide added services like resolving, verifying the hashes, who made the statement, and what the status of this reference is before you even click on it.  So I think the proxy is there, and it is probably going to remain for a while, but that's correct, it is a weakness.

But I would say, for instance, we have a series of proxies.  You have hdl.handle.net.  You have doi.org, and then you have other ones, and they are sort of playing this role of branding, and it's up to the organisations doing the branding to maintain the integrity.  The proxy itself, there is nothing particularly weird about it.  It's doing handle resolution with the protocols, and if you don't trust it, you can implement your own.  I would like to point out that, for some odd reason, call it the environment of the technology, there has been a lot of resistance in the States and other places, but in China they have no resistance.  They are trying to figure out what works, and they have actually done a clean‑room implementation of the handle system based on the RFCs.

So they actually used it to solve their DNS problems.  But going back to the security aspect, I think it's a critical item.  The handle system takes security very seriously, and everything is certified, cryptographically verified from the top.  You could say, we don't trust DONA.  Well, don't take our word for it; take the word of the MPAs.  They are the ones certifying their own content.  So you could say, we are not going to trust DONA, but we will trust the IDF, because they are operating their own MPA service, and whatever they say we will trust.  So it's a dynamic environment.  There is consensus at the GHR level, but from there, there is a series of either trust from the root or trust from the community, and it's up to the IDF to define how many co‑signatures they need on their root key to determine that the material they sign later on is actually acceptable to their community.

So that's a dynamic thing, and our MPAs are very active in doing things that are extremely sensitive, like, you know, population databases, and you can be certain that security and privacy are things that have to be taken with the utmost seriousness.  So at the DONA Foundation we take security extremely seriously, and this is what keeps us awake at night, so to speak.  Thank you.

>> NICK JOHNSON:  One of the goals of ENS is to minimize the amount of personal information that has to be gathered and stored.  So it doesn't have identity integrated into the system, for instance; registrations are pseudonymous.  The counterpoint is that, because it's a blockchain, of course, everything is recorded publicly.  So it's up to people to exercise their own, you know, sort of information hygiene when it comes to associating that handle with other activities on the Ethereum blockchain.  And future technologies such as zk‑SNARKs, which provide zero‑knowledge proofs of just about any mathematical statement, can be used to enhance that further.  But for now, we are focusing on gathering only the information necessary to operate the system, and leaving the rest for a later stage, since we are still very early.

>> BENOIT AMPEAU:  So we are adding privacy as a consideration.  I would only like to mention the work done at the IETF regarding DNS privacy.  There is a lot of work on DNS privacy so that the DNS, in a very schematic way, only provides the necessary information at each level of the request.  So, yes, this is also something to consider.

>> CHRISTOPHE BLANCHI:  I would like to make one final comment on this, which is that, with the MPAs, the reason why somebody, an organisation or a multistakeholder group, becomes an MPA is because they want to set their own policy on their own name space.  DONA's perspective is: you become an MPA, we credential you, and you operate according to your own policies.  We can make recommendations, but ultimately it's up to them to decide what they want to put in these records and what is legal in their countries.  This is sort of the model we are providing, because one size does not fit all; not all solutions will work everywhere.

And so you have to let the organisations decide for themselves, and warn them if they are going somewhere where you know they are going to head into trouble, but, you know, they are free to do their own thing.  That's why they became MPAs.

>> MODERATOR:  I have several questions in the queue now.  Before I jump to them, David, do you want to speak on this topic or something else?

>> DAVID CONRAD:  CTO of ICANN.  One of the issues that comes into play whenever considering the security of these identifier systems is that in order to establish trust in an identifier system, you have to have a level of openness and transparency.  In the context of the DNS, in order to ensure a level of trust for securing the DNS, the community, the ICANN community, came up with a fairly elaborate set of ceremonies and mechanisms by which anyone on the Internet, at any time, can verify the trust of the system.

This obviously plays into, you know, privacy considerations, because in order to have privacy you have to constrain the openness to some extent, so there is a natural tradeoff that occurs.  But in my view, just personally speaking, I think at a fundamental level, when you are talking about the underlying infrastructure upon which identifiers are being created and manipulated, it needs to be as open and transparent as possible, so that everyone can have the same level of trust and is able to verify that the information has not been modified and that the keys have been generated in a way that allows you to ensure there is no malfeasance.

So when one is considering an identifier system, one does need to look at how the trust is being generated and propagated through the system, because, if you are in the security field, you know that trust is not transitive.  You have to be able to verify the trust yourself, and the only way that I know to do that is to have access to the mechanism by which the trust is defined.

>> MODERATOR:  Thanks, David.  I will switch to another question from the audience.  Gentleman here in the light gray jacket.

>> AUDIENCE MEMBER:  Thank you very much.  My name is Im Ame.  I used to go to the ICANN meetings until seven or eight years ago, and I sort of feel I came back.  I was involved in the very early process of WSIS in this room, where the Internet governance debate amongst the Governments and the multistakeholders very much started, and I was directly involved.  So, hearing all of these interesting, fascinating presentations, I was asking myself: what kind of governance issues are we really identifying, how, and when?

It feels like, for some of the new systems, like the handle system, there is a mapping with the DNS, and with Alain's presentation I felt that DNS is maybe becoming a hub for other identifier systems.  Perhaps it's better not to be too distributed, with all of these different central servers for different identifier systems, rather than to have some more unified one, and DNS, with IP addressing, can play a key role in that.  That's one interpretation I made.

But if so, and I'm not too sure whether that happens or not, but if so, the DNS and the body that governs the DNS will have more powers or responsibilities than they have today.  And if these systems really propagate, we don't know when, and with the catch‑22, by the time we become aware it might be a little bit late or too late, and if we act too early, we interfere too much.

So these are the kinds of issues I didn't hear addressed, and I'm not too sure, being a non‑expert in all of these technologies.  But sometimes the technologists have to decide the specifications, like IPv6, which didn't have full compatibility with IPv4.  That's one challenge we are seeing today, after 20 years, perhaps, of that decision.

So we need to sort of take those lessons going forward.  These are my sort of gut feelings.  I'm not too sure how the esteemed panel members feel.  Thank you.

>> MODERATOR:  Would one of you like to comment on governance in that space?  Yes, Nick.

>> NICK JOHNSON:  It's a thorny issue in the blockchain space, because there is a general sentiment towards extreme decentralization and a distrust of human institutions to run a lot of governance processes.  So in the case of ENS, we are trying to tread the fine line of minimizing the amount of human interference without removing the ability to get out of bad situations.  The canonical example here is dispute resolution.  People want the ENS system to be largely decentralized and free of censorship, but a name system is also only useful to the degree that the names are sort of unsurprising, I guess I would say, that they resolve to what people generally expect them to.  And if you have a system with no human oversight, then you can end up with a system where all of the names are owned by squatters and by people who are interested in engaging in deceptive practices.

So at least from Ethereum's point of view, this is an evolving conversation about how we limit the opportunity for censorship and for interference with the system while still maximizing the usefulness of the system.  So I guess I don't have an easy answer for you, but it's something we are taking seriously and would like a lot of input from the community on.

>> MODERATOR:  I have time for one more question.  The gentleman in the front in the dark gray, if you would.

>> AUDIENCE MEMBER:  Yes, thank you very much.  Pea from Afnic.  I had a question for all of the panelists at the same time, because I felt that some proposals were aiming at finding solutions that do not rely on DNS, and others at finding solutions and identifiers that do rely on DNS.  So I would like to know, maybe from each of you, what in your view are the two main advantages or qualities of DNS when it comes to IoT identifiers, or the two main problems or shortcomings of DNS when it comes to IoT identifiers.  Because from the room, and especially when it comes to DONA, but maybe Ethereum also, I'm not sure, I'm not a specialist in that, it seems there are people who think we can build a new thing on the old pot that is DNS, and people who think we have to build something separate from it, though interoperable with it.  So I would like to have more clarity on that.

>> MODERATOR:  This is a great question.  What I would like to do is give each of you a chance.  We will start with Alain and go this way.  You have 45 seconds, so keep it brief.

>> ALAIN DURAND:  My answer is three weeks.  It took us three weeks to actually start from a conversation about trying to do something to having a live demo at an ICANN meeting.  If you know what a live demo means in front of a board, it's very stressful, three weeks.  That's why I think it's a very interesting avenue to explore further.  It doesn't mean there is not room for anything else, but it's a very interesting avenue to explore because it's easy to deploy.

>> PANELIST:  My answer is: just create any identifier you want, and for sure there is a use case for DNS working on top of that identifier, and you will find DNS services working with it.

>> PANELIST:  I guess I would point out that for any system you might conceive of building as an alternative to DNS, the first step is almost certainly going to be to resolve its DNS name.  So in that event, and absent compelling reasons to avoid doing so, it often makes a lot of sense to simply host the information in the DNS, if it's sufficiently lightweight, particularly now that we have the level of security we do with systems such as DNSSEC.

>> PANELIST:  The handle system was created from scratch for some reasons, which is that DNS didn't have the functionality we were looking for in security and in extensibility of the types of records we could put in, and we wanted it to be non‑semantic; that is a key aspect.  As with all resolution systems, I think we can all interoperate.  There is no reason that one will kill the other or vice versa.  You take the one that suits your needs better.  In the end, I would say it boils down to policy and governance: which system provides you the policies and governance you are looking for.

As for the functionality, you could evolve DNS and do things that are handle‑like, and handle can do DNS things today if we want to.  So I would say: pick your policies, your specifications and your regulations, and see which one suits your requirements best.  In our case, we believe that the MPAs have to make this decision, and they have to be able to evolve their identification and resolution schemes according to what is in their interest.  Thank you.

>> MODERATOR:  Thank you for the question.  By the way, the purpose of the panel was to elevate awareness of these emerging identifiers and the technologies behind them.  I hope we have achieved some of that today for those participating in the audience.  I want to thank Adiel for pulling us together and putting together the panel, and I will turn it back to him for a close.

>> ADIEL AKPLOGAN:  Thank you very much, Ron, for moderating and helping everybody, you know, bring out the information that is useful for the community.  I will say that two things we can take out of the discussion are, one, how do we build trust within the different emerging identifiers, and two, when is the time to start investing more deeply in the policy aspect, the governance aspect.  That is one of the reasons why we are spending some time at ICANN looking at this: building bridges, letting the community know about what is coming, and also judging when it is not too early and not too late to start looking at the evolution of the DNS to include this or anything else, and knowing where to go to influence that evolution.  So thank you very much for the questions and your participation.

As I mentioned, we will continue looking at the emerging identifier aspect from the ICANN perspective, and we welcome all input and contributions from the community on the direction in which we should take this.  Thank you.


(Concluded at 6:00 p.m.)