IGF 2017 - Day 4 - Room XXIII - OF17 Building Blocks of Trust for a Sustainable Evolving Internet

 

The following are the outputs of the real-time captioning taken during the Twelfth Annual Meeting of the Internet Governance Forum (IGF) in Geneva, Switzerland, from 17 to 21 December 2017. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> GREG SHANNON:  We'll start in a few minutes here.

Good morning and welcome to the panel on "Building Blocks of Trust for a Sustainable, Evolving Internet."

It is Thursday morning here in Geneva, and we have diligent attendees who want to get the most out of the Internet Governance Forum this year and we appreciate that.

We have a panel this morning to talk about trust and the evolving Internet, particularly as it relates to artificial intelligence, autonomous systems, and this confluence of connectivity and technology that continues to evolve.  I'm Greg Shannon, chief scientist for the CERT Division, where we do cybersecurity work, and I'm delighted to work with IEEE as part of the Internet Initiative that they sponsor.

My co‑moderator here is Ishrak Mars; she will take over, as I have to leave a little early to catch a flight.  Ishrak.

>> MODERATOR: So here today, we have Marina Ruggieri, IEEE Vice President for Technical Activities.  We have as well Arisa Ema, assistant professor at the University of Tokyo and visiting professor at the RIKEN Center for Advanced Intelligence Project, and we have Danit Gal, who is a scholar at Peking University and an international strategic advisor in China.

>> GREG SHANNON:  We will each take about five minutes to make comments on the topic and then we'll have 25 minutes for questions from the audience and ‑‑ (No audio).

>> Thank you.  I took a close look at the importance of trust in my daily routine.  My phone went off at 5 a.m.  It did not fail me; it did not betray my trust.  I chose my outfit based on my trust in the forecast, and my trust in the forecast paid off.

I would like to remark that at every stage of our daily lives, we place a tremendous amount of trust in technology, people, and institutions.

So I think it's important to have trust, mainly in the online sphere ‑‑ in what we have on the Internet and in using the Internet itself.  Online trust is in part a mechanical one, because we are placing trust in machines, just as we trust, for example, our cars and are sure they are not going to break down at any time.

So what I want to raise is a perspective and a concern about our data and our trust when we use the Internet.

If people were truly concerned about their data, why are they still using the Internet and digital media?  So we have data privacy and security on one hand, and on the other hand people are asking how applications and Internet service providers are taking our data and using it.

Many Internet users are using services while unaware of the full extent of the personal data that is collected and stored.

So this is mainly what I want to say concerning the ‑‑

>> GREG SHANNON:  This is her first time moderating.  We are training new talent, and to have your first time moderating be at the UN, I think that's a wonderful treat for her.  So let's introduce ‑‑ or defer to ‑‑ the next panelist.

>> MODERATOR: Okay.  So we move now to Danit.

>> DANIT GAL: So I will take this opportunity to talk about trust in technology through the eyes of what I have been seeing in China.  I think that the topic of trust in China is a very fundamental one, because there is a very strong incentive for the government to cultivate trust.  In a lot of the policies it has put out, the vision it has for trust is to have social interaction and, through that, develop a sense of mutual trust.  And there's a very interesting concept in that: they believe that robust, meaningful, purposeful technology that provides services and convenience, and helps alleviate a lot of the difficulties that China, as a developing country, still has, is a really sustainable path to creating trust.

So in that sense, even when we talk about the social credit system, which is explicitly mentioned in China's Next Generation Artificial Intelligence Development Plan, people in the West think about it in a negative way.  But from the Chinese perspective, this is actually a very meaningful and useful tool to cultivate trust between the government and the citizens, because it keeps everyone in check, in the sense that the government keeps the citizens in check and the citizens keep the government in check.

And there's this idea of having a mutual interaction through AI and through blockchain technology to really help cultivate that kind of sense of mutual benefit and interaction.

Now, I think there is an interesting aspect of this from the industrial perspective, because China is cultivating a very robust AI industry.  We have been hearing a lot in the news about Chinese companies getting to the forefront with a lot of innovation and development ‑‑ the example would be the big three, but we also have really robust companies like iFlytek and ByteDance and Face++ and others that create these kinds of technologies that help improve the situation for society and for the public.

And I think that through that instrument of having technology as a connector between the government and the citizens, we could really hope to achieve a better interplay, a better future, a government more attentive to the needs of the people.  Because if you think about it, China is a pretty big country; it has almost 1.4 billion people.  How do you interact with all these people?  How do you listen to them?  How do you understand what their needs and wants are?

So in that sense, there is a lot of optimism about the ability of technology to really create that kind of connection and foster the kind of meaningful policies and government planning that could deliver better services and trust to the citizens.

So in that sense, I think that the role of AI, and of technology in general, in China is something that I'm very, very optimistic about, if done right.

And that means that we still have time.  The technology is at a very early stage, but that is my perception from the policies that have come out ‑‑ and those of you who follow Chinese policies know that they come out about every couple of weeks or so.  So there's a lot to read!

So there's a very strong, optimistic push towards the incorporation of technology to make people's lives better ‑‑ not just in the very developed cities, like Shanghai, but also in the villages and provinces that have less developed infrastructure, to help them leapfrog and reach a better quality of life.

>> MODERATOR: Okay.  Thank you, Danit.  So we move to Arisa.

>> ARISA EMA: Hello.  I would like to discuss, or introduce, the trust between human beings and robots or AIs.

When I was talking with a European colleague, she asked me what the main ideas are that have been discussed in Japan concerning AIs and robots.  She said that in Europe, freedom, privacy, and autonomy are really important.

In reply to her comment, I said that in Japan, maybe harmony ‑‑ or more like coexistence with robots ‑‑ would be the very important thing.

And this coexistence is not meant in a dystopian way, like people working for the robots.  It's more like a positive way of living together.  So robots and AI contribute not only to efficiency or economic growth; they are considered partners of human beings.

So, for example, I would like to show two videos.  One is the robot hotel.

Yes, just show that.  So you see, this hotel is a robot hotel, and porter and concierge robots have been introduced.  This is the reception robot.  It's introduced not only for efficiency ‑‑ although they did in fact reduce the number of employees to a third by introducing these robots ‑‑ it also, in a way, substitutes for the emotional labor of human employees.  So it challenges the concept of hospitality, but it also seems that guests welcome this kind of robot hotel as a form of entertainment.

So thank you.

And also, could you show the other one?  Yes.  Some researchers have proposed a concept called the minimally designed accompanying robot.  You can see this one.  This is a robot that requires human help to accomplish tasks.  For example, there's a garbage robot with wheels.  It does not have hands to pick up rubbish; it just exists.  It finds rubbish and moves closer to it, or just moves around, and it waits until some generous or kind person finds the rubbish and puts it inside the robot.

Yeah.  So you can see that.

So the children are really interested ‑‑ what is this?  They actually want to find rubbish and hand it over, and that actually contributes to cleaning the room.  So it's not forcing them to pick up the rubbish; it works through collaboration between the robot and human beings.  The researchers call this minimally designed robot a "weak robot."

Thank you.

So it's far from the image of the very frightening robot.  It really needs human beings' help and requires collaboration; it's not just a cleaning robot.

This garbage robot accomplishes its task through collaboration with human beings.  It's more about considering the affordances of things, and many Japanese think that creating trustworthy robots is essential for them to be accepted in society.

However, this adorable appearance and behavior somehow disguises the invasiveness.  For example, Pepper robots are being introduced, and since they are really friendly looking and adorable, people tend to share their personal information ‑‑ and the robot collects that personal information.

So behind this kind of collaboration between human beings and robots, there also exist certain responsibility, privacy, and autonomy issues.  This leads to the creation of user guidelines for AI.  Japan's Ministry of Internal Affairs and Communications is currently considering creating user guidelines to address user literacy and user responsibility in using this kind of technology.  So it's not only the researchers but also the users who create and educate robots and AI.  In that sense, the users also have to consider issues such as transparency.

So, yeah.  That's what I would say.  Thank you.

>> MODERATOR: Thank you very much.  Now we move to Marina, if you can give us your thoughts on this subject.

>> PANELIST:  Thank you for having me.  Perhaps I would like to start by saying that to me, trust is not a theoretical concept.  I have sensed that this year by chairing 39 societies, seven councils, and almost 400,000 people ‑‑ the conferences and all the publications come from there.

So you see that trust emerges automatically when you have content.  When you have no content, then you start asking whether what you are doing is trustable or not.  We launched three new communities this year, on three topics new for IEEE ‑‑ areas where we had never been collectively before.  In food engineering and smart agriculture, a big, trusted community was created almost automatically, of people who had never met and had never thought to cooperate on these topics.

The first solicitation ‑‑ and then I hope you will ask questions ‑‑ is that I would like you to connect two topics: trust and content.  Naturally, trust and the Internet are very related.  If the Internet is a means of delivering content, there is probably not a very serious problem of trust.  In the trust approach, there are things that have to be worked for and others that come for granted if you do the first ones in a good way.  Responsibility and human centricity are the two areas that I think are essential in building trust in the net.

And then, if you do this, transparency is almost automatic, again, because you are providing content; in this way, trust increases and you can also arrive at transparency.

Naturally, how can we deal with this trust if we don't also deal with ethics and privacy?  In particular, in Europe, we have a very hard deadline, May 2018; this will be the day on which we will understand if we are able to comply with a very complex approach to privacy.  But, again, I see the Internet as an opportunity.  I'm not scared of the Internet.  I like the technology.  I like ICT.  I'm an ICT scientist.  And I think that through the Internet, through ICT, we can show social communities how we can really work with ethics, with trust, and in a way that respects the laws about privacy in the different countries, according to the different regulations.

So this is the second thing that I would like you to think about and ask questions about.

Third thing: we finally found a way to be forever young.  We have the Internet now.  And while I talk, the first people from Generation Z ‑‑ the young people born between '95 and 2012, who see millennials as very old people ‑‑

(Chuckles).

They are taking their first degrees.  So they are there.

So I have no fear, because this will be the generation who will lead the Internet, who will lead the content, and they will probably also have a leading role within our society.  So we are forever young if we understand this concept.

So, three solicitations: trust and content; trust, ethics, and privacy together; and we all have to become Generation Z ‑‑ something we cannot do by birth date, unfortunately, but we can do as a state of mind.

>> MODERATOR: Thank you very much, Marina.

So if you want to ‑‑

>> GREG SHANNON:  Thank you, Ishrak.  When you talk about trust and you look up the definition, it says that vulnerability is an important aspect of trust.  It's about willingly making yourself vulnerable to another and trusting that they will not exploit that ‑‑ that they will nevertheless maintain a sense of security, privacy, and resilience in that interaction.  The opportunity of technology is to facilitate that.  There are mechanisms that facilitate security, mechanisms that facilitate privacy, mechanisms that facilitate resilience.

Accountability ‑‑ we have some challenges there, but there's ongoing research to facilitate that.  Through the interactions with IEEE, I think the opportunity is to provide technical capabilities that enhance trust.  And I have seen trust experienced: as chief scientist for CERT, cert.org, you know, CERTs embody a big part of trust.  It's about building relationships in situations where organizations are in a compromised or vulnerable position and you need to exchange technical information.  You need to be transparent about what you are doing.  We developed the responsible disclosure principle that is used by many governments and many companies for how to disclose vulnerabilities and address them in a timely manner to mitigate errors and problems in technology.

So as we look at artificial intelligence, robotics, and autonomous systems, I think there are a couple of things to consider.  I like the point about literacy for users ‑‑ helping users understand the choices they are making when they choose to make themselves vulnerable.  One of the challenges is when we want to give users the opportunity to say: I don't want to be vulnerable, I don't want to use that technology.  That's a hard social problem.  I agree with you, Danit, that it's a conversation between society and government, society and industry, about how to balance that.

Something that's going on at CERT and Carnegie Mellon is the notion of explainable artificial intelligence.  We want to understand why an autonomous system, why an agent, is making a decision.  So if it's a system that's ranking loan applications, you want to be able to explain why the choice to accept or deny a loan application is being made.

If you have a robot that's moving in an environment, you want the robot to be able to explain to the user what it is doing.  Imagine being in an autonomous car that's driving: if the car slows down and you don't understand why, is there a threat?  Is it just because it's making a turn?  Is it because it senses you like to look at this scenery and would like to drive a little slower to enjoy the view?

You want a system that will actually explain it.  I think explainability is a key part of this.  As Marina said, it's about the content, and the explanation is part of the content that we are trying to deliver.
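To make the explainability idea concrete, here is a minimal sketch ‑‑ not CERT's or anyone's actual system ‑‑ of a loan‑scoring model that reports each feature's signed contribution alongside its decision, so the "why" travels with the "what."  All feature names, weights, and the threshold are hypothetical.

```python
# Minimal illustrative sketch of decision explainability.
# Feature names, weights, and threshold are invented for illustration.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.5, "years_employed": 0.2}
BIAS = -0.1
THRESHOLD = 0.0

def score(applicant: dict) -> float:
    """Linear score: higher means more likely to approve."""
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant: dict) -> str:
    """Return the decision together with each feature's contribution."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    decision = "approve" if score(applicant) >= THRESHOLD else "deny"
    lines = [f"decision: {decision} (score={score(applicant):+.2f})"]
    # Largest-magnitude contributions first, so the key drivers lead.
    for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        lines.append(f"  {feature}: {c:+.2f}")
    return "\n".join(lines)

# Example: a hypothetical applicant (features scaled to 0..1).
print(explain({"income": 0.8, "debt_ratio": 0.9, "years_employed": 0.5}))
```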

One final thought here, thinking about the future.  People who research trust highlight an interesting dilemma that we all face: every day, each of us is trusting our future self to do something or not do something, which is an interesting concept.  What am I going to do in ten years?  Will I spend responsibly the money that I'm saving and deferring today?

So there's this notion that you are trusting the future in many ways, at least in yourself.  And I think, with the various challenges of data breaches and of technologies being used in unintended ways, we are all actually trusting the future ‑‑ trusting those who control the technologies, who develop them, repair them, and maintain them ‑‑ with our trust that they will be used well, ethically, and properly.

I think that is one of the most challenging things we have.  We can't control the future, but we are all vulnerable in how we move forward in time.

So, again, coming back to the technical capabilities that engineers such as those in IEEE can provide: it's about providing security, privacy, resilience, and accountability, so that society can make choices about how to put these together.

So with that, I conclude my comments, and we would now like to go to the audience participation session.  First, though, I would like to ask the panelists: are there any particular questions or thoughts you have, based on what has been said so far, that you would like to put to the other panelists?  Danit?

>> DANIT GAL: I really, really enjoyed what you said about the idea of explainability in addition to transparency, because I think we all know that just being able to see what happened doesn't necessarily mean that we understand what is happening.  I just wanted to add on top of that that I would also like to highlight explainability and accountability on the side of the people who design and regulate the technology, in addition to the users that Arisa mentioned.  Oftentimes, even if we understand what the technology is doing because it explains it to us, we don't understand why it was designed in the specific manner that allows us to enjoy the view or gives us this extra security measure.

Let's say, for example, we have extra safety measures for children.  We need to understand why the people who designed the technology designed it in such a way, because at the end of the day, technology is not separate from human beings.  We design it, we consume it, and through our consumption it shapes the way we behave ‑‑ a reinforcing cycle.  I think there's another interesting level of trust between the people who use the technology, the people who design it, and the interaction between them.

So, in that sense, that's just another point of highlight that came to my mind while you gave your comments.

>> PANELIST:  I would like to say, Arisa, that human/robot cooperation is exactly the way to grow trust, because it's true that we consume technology, but not all people can understand the details of the design.  So I don't see as possible what you were saying, that we have to understand how it's been designed.  You have to trust it ‑‑ you have to trust it because, for instance, there is a good IEEE standard behind it, just to talk about something I know very well, or because the human‑robot cooperation has been grown ethically, and humans and robots respect each other because they are both weak and both strong.  That's why I like your example of the smart garbage robot so much.

Because this is exactly the environment in which we have to grow trust.  The net and ICT will be all AI‑based, and after Generation Z there may be another generation, but we have finished the alphabet, so I'm not sure what the name of that generation will be ‑‑ perhaps the AI generation.

This means they may see a robot, and not a nurse, when they first open their eyes.  And for them it will be natural to trust.  So the future is much better than we might think.  Human‑robot cooperation is where we can invest energy.  We have to be wise enough to start doing the right things.  So I appreciate your example very much.

>> ARISA EMA: Yes, I like this discussion.  I think it's really important.  You mentioned explainable AI and, like Danit said, explanation ‑‑ but you can't predict the unintended uses by users, and you can't explain everything about all the AI systems.  So what I think is needed is to design the ecosystem: to look at how the technology is used and, as Danit said, what the designer's intention is.  First we have to describe what the intention is and what the expectations of the users are.

So I think it's important to see how the technology is used and to do some kind of case studies.  It depends on the domain what kind of AI would be used; it might be very different for automobiles, for medicine, and so on.

So doing user studies is also important, to see the trust among human beings and robots.  I totally agree that you must have some type of trust between human beings and robots, or in the system itself ‑‑ including, for example, the insurance system.  Not only the robot, but also the system.  Trust, yes.

>> GREG SHANNON:  Ishrak, do you want to make any comments before we go to the audience?

>> MODERATOR: No.  No.  So, do we have any questions from the audience?  Yes?

>> PARTICIPANT: I'm Deepak.  So, Marina, you talked about trust and content.  What happens when you have fabricated, false content, like fake news?  That's also content, even if the intent is malicious.  How do we take care of that?  That's what I wanted to ask.  And I also just want to mention one thing.

At an event a couple of months back, we observed one very interesting thing: during the break, we displayed short movies.  In fact, it so happens that many of those movies were from Japan ‑‑ there's Robosfilms.com or something like that ‑‑ and it's interesting to look at those perspectives there.

>> MODERATOR: Thank you.

>> PANELIST:  Thank you very much.  You brought up another key topic.  Fake news belongs to this mass of data that we have a very hard time managing.  You know, big data could be good, but why is it so big?  Because we have completely lost track of two main things within the data.

First of all, the information.  Claude Shannon would not be very happy with us, because we have spread the information entropy across an amount of material that we have a hard time storing and a much harder time retrieving.

And even that is not enough, because information is not all we need.  We need knowledge.  So you touched on a very important thing: the difference between data, information, and knowledge.

And I think the ICT community has a very important duty: it is called to go back to information theory and to adopt the entropy‑based theoretical approach.  ICT scientists are needed in order to cope with an environment like the Internet, where users and sources interact but also interfere ‑‑ and that interference is not productive.  So when we talk about content, in my mind ‑‑ and I'm an ICT scientist, and I have a group working on this topic ‑‑ content is already the knowledge‑based result.
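For reference, the information‑theoretic quantity being invoked here is Shannon's entropy: for a source $X$ that emits symbols $x$ with probabilities $p(x)$,

$$H(X) = -\sum_{x} p(x)\,\log_2 p(x) \quad \text{bits per symbol.}$$

A data set can grow in volume without growing in entropy ‑‑ duplicated and noisy material adds bytes but no information ‑‑ which is one way to read the point that big data is big partly because we lost track of the information within it.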

That means you have taken out the noise.  So we now have to face a new concept of noise.  It's not the channel noise anymore that should worry us, but self‑interference, because ‑‑ thanks to the net ‑‑ we can be both source and user of the same news.  And this is very scary.

So if there is something I'm scared about, it's about this confusion, and I thank you very much for bringing up this important topic.

>> MODERATOR: Very interesting, Marina.

So, do we have any other questions from the audience?

Okay.  Any remote questions?  No?

>> PANELIST:  If not, maybe I could ‑‑

>> PARTICIPANT: Thank you so much.  Actually, we were talking about trust in AIs, and Arisa gave us an example of how in China they use garbage‑collecting robots ‑‑ Japan.  Japan, I'm sorry.

But on the other hand, we also hear about AI robots that are racist or sexist.  How do we solve this equation between the positive side of AI and, you know, managing between trust and ethics, you see?

>> MODERATOR: Thank you for your question.  So Arisa?  Do you want to ‑‑ who wants to take the question?

>> DANIT GAL: I think this closely relates to the idea of fake news.  When you say robots are racist and sexist ‑‑ take a chatbot like Tay: it was let loose, in a way, it started accumulating a lot of data, and it didn't have any mechanism that allowed it to distinguish between good and bad data.

I think the important thing to remember, even with fake news, is that data is flawed.  Why is data flawed?  Because people believe different things, and that's what they express.  It doesn't mean it's not viable data; it's a reflection of who people are and what they think.  So if you say fake news ‑‑ how do we counter that?  There's an example that could be useful for that.

Platforms feed their systems fake news; they give them more data to teach them the difference.  And I think this is an evolution that we are seeing with fake news and that we will start seeing with chatbots like Tay: this data is classified as racist; this data is classified as impossible, or ambivalent, or something like that.

I think there's a learning curve that we are going to start seeing with algorithms that are fed data and learn to distinguish between what is acceptable and wanted and what could be discriminatory or offensive towards people.

There are actually certain standards in the IEEE P7000 series, which I'm closely associated with, that help define methodologies for teaching those systems to distinguish discriminatory or offensive data.  It's a learning process.  Why is data so big?  Because we have so many people producing data, and now we actually have the tools to collect and analyze it.

So in that sense, there's a very long learning curve that we as humans ‑‑ and also the developers and the algorithms ‑‑ will have to go through to get to the point where we can actually take all of these large data sets and really try to shape them into something that is usable and ethical for everyone.
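To make that learning curve concrete, here is a minimal sketch ‑‑ a toy word‑count classifier with a tiny invented training set, not any platform's pipeline or the P7000 series' methodology ‑‑ of how labeled examples teach a system to separate acceptable from flagged data.

```python
# Toy sketch of the moderation learning curve: label examples, count
# word frequencies per label, classify new text by best vocabulary overlap.
# The tiny dataset and its labels are invented for illustration only.
from collections import Counter

TRAINING = [
    ("acceptable", "thanks for the helpful and friendly reply"),
    ("acceptable", "great discussion see you tomorrow"),
    ("flagged", "you people are worthless and stupid"),
    ("flagged", "get out you are stupid and unwanted"),
]

# Count how often each word appears under each label.
counts = {"acceptable": Counter(), "flagged": Counter()}
for label, text in TRAINING:
    counts[label].update(text.split())

def classify(text: str) -> str:
    """Pick the label whose training vocabulary overlaps the text most."""
    scores = {label: sum(c[w] for w in text.split())
              for label, c in counts.items()}
    return max(scores, key=scores.get)

print(classify("that reply was friendly and helpful"))  # -> acceptable
print(classify("you people are stupid"))                # -> flagged
```

Feeding the system more labeled examples, as Danit describes, is what moves it along the curve: each correction enlarges the counts and sharpens the boundary.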

>> ARISA EMA: Sometimes what is good depends on the culture and the context.  For example, in Asia ‑‑ well, in Japan ‑‑ sometimes people think it would be better to be protected, and somehow they give up privacy.  So it's kind of a trade‑off between security and privacy.

A similar thing occurs with, for example, Japanese animation characters, which I think you might have seen.  In Japan they are considered cute or handsome, but seen from the Western perspective they are sometimes too extreme or too girlish.

So it depends on the culture ‑‑ but I'm not a relativist.  There should be some kind of lines drawn between good and bad.  But sometimes there's a gray zone, and you cannot only say that something is not good from the Western perspective; somehow you have to respect each culture's viewpoint, its histories, and what they think is good.

So I think we need to discuss what is good and what is bad, and in what kinds of contexts.  So I really thank you for this question.

Marina, do you have any comment on this?

>> PANELIST:  Yes, I would like to comment on Arisa's last words.

Still, I think that we are seeing things from our own viewpoint.  Even if you are much younger than me, you are not young enough for what is coming.

Think about it: I personally was a child and a teenager in a country where the culture was historically rooted.

Now, if I look at my two children, son and daughter, they are three years apart.  One is a millennial and one is Generation Z ‑‑ completely apart.  Generation Z means a smartphone from the beginning of your life; you have never known an ordinary phone.  So what I think is that we will witness a process where the cultural differences become smaller and smaller.  Why?  Because we are included in something very global: the net is the global tool, and society is influenced by the net.

While at the beginning, society was influencing the net.  Fake news is a product of how we are in our society.  But if we live on the Internet, and the Internet is the place, then, first of all, the cultural differences are better understood, because we become more knowledgeable about everything through the net.  Also, the global approach will make differences smaller and smaller, and, as you were saying, we could find a very good basic set of principles that is good for everybody.

Then the differences that remain will be nice to have, but they will not undermine the overall good.  It's a process, and we are just in the middle of it.

That's my viewpoint.

>> MODERATOR: Thank you all.

Actually, I have a question: if we talk about the future of the Internet, and about human centricity and transparency in using the Internet, how do we build that in?  What would be the role of engineers, of institutions, of the government?

Who wants to start?  Each of you, from your own community.

>> PANELIST:  I'm also an educator, so this is a question I have to reply to.  Not only the engineers ‑‑ technical people as a group have the responsibility of growing the technology in a way that is ethically aligned.  That's why we are now developing these 11 groups that are working to create a consensus‑based approach, the way to create possibly a standard in the future.  But what is especially valuable is the consensus‑building approach, which means that people coming from government, academia, industry, and private citizens participate in the discussions, sit together, and think about it.

So the main role for technical people in the future is to take the ethically aligned approach from day zero of their design, of everything.  For a professor, this includes that when she goes to class and teaches, the content of the teaching is ethically aligned.

If we take this approach across the whole technical dimension, we will create a technology that is naturally for the benefit of humanity ‑‑ by design ‑‑ and this is the main role that technologists have to have from now on.

>> MODERATOR: Danit or Arisa.

>> DANIT GAL: I will sit here in the United Nations and say something that will probably be counterintuitive to the United Nations.  There's a noble ambition in trying to create an ethical baseline for technologists around the world, and I think this is a very positive mission; however, having been in different countries and having spoken to technologists in really different cultures, I understand very clearly that what seems ethical to one may not seem ethical to another.

And I think this is a very serious contradiction that we are starting to see in the ethical discourse.  When you bring different countries and cultures to the table, one person thinks something is ethical, and another says this is not really an issue ‑‑ something else is much more pressing.  And I think the role that governments and industries and even society have, in that sense, is to really engage in that kind of conversation within their countries, within their cultures, and try to figure out what is good and what is bad, what is desirable and what is not.  And this has to be a system of checks and balances between the groups, to really make sure that there is an inclusive benefit in that sense.

A key point for me in joining IEEE is that they have that kind of technical baseline.  So even if you say that privacy matters more or less, I think we can all agree on the very basic concept of safety, and on the basic concepts that revolve around design ‑‑ because even when we talk about trust, how do you define trust?

When you talk about safety, there are actual procedures and actual technical definitions that allow us to achieve it, and I think we can all agree that not compromising anyone's safety is something that is usable and tangible across cultures.

So in that sense, I would call more for the establishment of a technical baseline, on top of which these kinds of ethical differentiations could really sit.

>> MODERATOR: Thank you, Danit.  It's really difficult.

>> ARISA EMA: Yes, I guess it's not only from the technical side; we also need to think from the social sciences and humanities side.  It sometimes also challenges basic concepts like security, fairness, responsibility, and accountability.  So I guess building trust between the stakeholders is really important, but we need time, and in that sense we need money, and we need institutions that will support that kind of conversation and discussion.  I think IEEE is doing a very good job of creating that kind of basis where you can go and discuss.

And I think many companies, many researchers, and many social scientists want to discuss those kinds of issues among themselves or with other stakeholders, but first of all we need to create the places where they can go and discuss freely, without restriction and without being blamed by their colleagues.  Sometimes it's really hard to discuss ethical issues.

So, first of all, we need to convince the stakeholders that this is important; we also have to create the places and assure them that this kind of discussion is needed, and we need to create this kind of global network hub.

I think what we are doing in Japan is creating that kind of place, the safe place to discuss.

It's sometimes really difficult because you have your context ‑‑ you are from the government, from industry, or from the research community.

So I think it's really important to be able to discuss those kinds of issues from your individual point of view and with your background, and not be blamed for doing or saying such things.

>> MODERATOR: So I think we have one final question over there.

Okay.

>> PARTICIPANT: Hi, I'm from Telefonica.

I want to offer a different view, and maybe the video of the kids and the garbage robot gave me a hint for it.

I think that the approach to technology is rather neutral.  We saw with the kids that they were not afraid of it.  So maybe the question is not whether we should trust technology, but whether we should trust whoever is providing us that technology.  I mean, whether a service is provided by the government or by company X or Y ‑‑ it's not a question of the technology, but of the use of that technology.  And maybe the issue of trust is related to those providing the technology, not just the technology itself.

>> MODERATOR: Thank you for your question.  Who wants to start?  Danit, or Arisa, do you have any comments on this question?

>> ARISA EMA: Yes, I agree.  It's not only the technology; you also have to consider who actually creates the technology and what kind of intention and design are incorporated in it.

The same thing could be said not only of the technology itself, but of the platforms ‑‑ whether Google or Amazon ‑‑ and whoever actually creates them: who is creating the technology and who is obtaining the data, that kind of issue.

It's really a pervasive question, and we have to focus not only on the technology but also on the background and the culture of whoever actually creates it.  So thanks for raising those comments.

>> DANIT GAL: I certainly agree with you, and I think it connects closely to the point I made before.  We should place a strong emphasis on the people who give us the technology and how they design it, but I think we should also place a strong emphasis on how people use it.  Because in the video, we saw that people were really excited ‑‑ the kids were really excited.  They wanted to find rubbish to put in the smart rubbish bin.  But at some point they pushed it, dropped it, and blocked its way to see if it could progress.  I think it's interesting how the kids experiment with that kind of technology: not just making cleaning the streets something that is fun, but also testing its limits to see what happens ‑‑ what happens if we block its way.

I think, in that sense, another interesting thing to think about is not just the intent behind the development and distribution of the technology, but also our intent in using it, and how we use it to perceive the world around us and to test what we can and cannot do.  And in that sense, I think it's an interesting educational experience, because if you were to do that to another kid ‑‑ block his way or push him ‑‑ he would probably start crying and you would probably get into a fight.

So having technology that allows us to test what happens, without getting significant pushback, I think shapes our interactions in the future ‑‑ in line with what Marina said about how we will interact with technology ‑‑ and shapes the way that we as human beings develop and evolve.

>> MODERATOR: Marina?

>> PANELIST:  I see another trend: due to softwarization, the technologists and the users are getting closer and closer.

So some day, you will be the one designing your device ‑‑ assuming your device is still external to you; it might be partly internal to your body.  The point is that the user and the technologist may collapse into one some day.  Not immediately ‑‑ this is a bit more futuristic ‑‑ but this is the trend.  So, again, it plays into what Arisa was saying about the human and the robot.

When we think of implantable things, we can imagine becoming kind of robots inside.  This means that education and human‑robot cooperation, including self human‑robot cooperation, should be the place where we work this out.  It's an educational matter, and we have to learn how to run education in this very special class: a class where you have human beings, robots, and human beings that are partly robotized.  This is a place we do not have in our imagination yet, but we have to start building it now, because the AI generation is the next one after Generation Z.  So be ready for that, and thank you for bringing up the point.

>> MODERATOR: Okay.  So I think it's already time ‑‑ yes, our session has come to an end.  So I'm going to ask if you have any final thoughts or insights ‑‑ the most interesting insights you want to share.

No one?

>> PANELIST:  The Internet is our opportunity.  If we miss this opportunity, we will miss the opportunity to have a very good society in the future.  It's not the opposite; the paradigm is completely reversed.  The Internet will be needed to teach society how to work.  So the Internet is our opportunity.

>> MODERATOR: Thank you.  We want to thank you for being here, and we also thank our audience.  Thank you, everyone, for your time.

(End of session).