IGF 2017 - Day 0 - Salle 5 - Good Governance is a Professional Standard, Which Builds Trust and Cybersecurity in the Entire Digital Ecosystem

 

The following are the outputs of the real-time captioning taken during the Twelfth Annual Meeting of the Internet Governance Forum (IGF) in Geneva, Switzerland, from 17 to 21 December 2017. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

 

>> MOIRA DE ROCHE:  Hopefully more delegates can join us as we go along.  If I can open up the presentation.

It's on the slide show.  I just wish it would disappear on the bottom.

Okay.  What I'm saying is, the captioning is covering the top of the slide.

Anyway, today we're going to cover Good Governance is a Professional Standard, which builds Trust and Cybersecurity in the entire digital ecosystem.

This is why I only use my own laptop.  I can trust my own laptop.  Come on.  Man!  Please, a technical person?

>> It's because there is captioning.

>> MOIRA DE ROCHE:  Can't we just take the captioning away?

>> It was asked to be set up this way.

>> MOIRA DE ROCHE:  Now, if you don't know, there is captioning, so the whole top of the slide is covered and the bottom of the slide is covered.  You know, so all my headings are covered.

Okay, so our agenda today: I will quickly introduce the speakers to you, and we will have a quick look at who IP3 are; what makes a professional; governance as the glue that holds it all together; what professional standards are, in our opinion; professionalism in cybersecurity; the challenges and issues; what if we don't do the right thing; what is happening around the world; and a framework for implementation.

And then Anthony Wong will talk to you about trust.  We have a little campaign called ‑‑ we look at trust as a multidisciplinary concept, the concept of the Fourth Industrial Revolution, and the ethics of new technologies, and then we will open the panel discussion.

Okay.  Just to introduce the speakers, I'll start from the bottom.  Dr. Jill Slay, on my left, is a Professor of Cybersecurity at UNSW, the University of New South Wales, Australia, and Director of Cyber Resilience Initiatives for the Australian Computer Society.  I'm truly delighted to have Jill along with us today because she is genuinely an expert on the subject of cybersecurity.

Anthony Wong, who is outside but will be back sitting with me, is the President of the Australian Computer Society, a computer professional and fellow of ‑‑ he also heads a legal firm, where he works on legal issues as well as intellectual property, and he's an internationally renowned speaker on artificial intelligence.

Myself, I'm a professional member of the IT Professionals of South Africa, I'm a Learning Specialist, and I'm a self‑appointed ethics evangelist.

So, to tell you a little bit about IFIP: we are all about partnering for trust in digital.  When we first formed, we were very much about IT professional practice, but we realized we have a real role to play, if you like, as guardians of IT professional standards ‑‑ to promote trust and to partner with other organizations to develop trust.  We are independent and not for profit, and we are totally product and certification agnostic.

We define and maintain global standards for ICT, and we recognize and certify professionalism.  IFIP, the International Federation for Information Processing, is the leading multinational body in the field, founded under the auspices of UNESCO in 1960.  It is recognized by the United Nations and others, and represents about 56 countries.  It's very difficult to give an exact number because societies join and leave, but the number usually stays constant at around 56.  We cover five continents, our reach is over half-a-million people, and we have links with more than 3,500 scientists from academia and industry, over 100 working groups, and 13 technical committees.  Jill, for instance, is on one of the IFIP working groups.

We also have, as part of IP3, a Global Industry Council, comprised of industry leaders from both academia and global companies that you'd recognize, the likes of Cisco, Microsoft, and Google.  Senior people sit on the Global Industry Council, which acts as an advisory board to IFIP, and they provide us with advice and insights.  One of the outcomes we've had in the last few years is a Skills 2020 Guide, where we looked at what the skills requirements for IT will be in 2020.

In 2018, we'll be updating the guide with an addendum that will focus on two areas: first, cybersecurity and the employment roles that come with it; and second, digital skills.  It is my firm belief, and IFIP's firm belief, that there is a need for digital skills among users but also among IT people, so it's not enough to say, well, the users must understand privacy and security and document management.  It actually applies to everybody.

So what, in our opinion, makes a professional?  Well, a professional should have the skills and knowledge and, along with that, the competence to perform those skills.  To maintain those skills, they also need to undertake continuous professional development and life-long learning to keep them up to date.  That's important in any profession, and possibly even more so in IT because it changes so quickly.  There is something new all the time.

Professionals should have an attitude of service to the internal community and the external community, and indeed, I believe that part of that service attitude should be a genuine willingness to serve.

So here is my problem, if I knew the top part of the screen would be taken up by captioning, I would have put the header on the slides lower down.  Now every single slide here is covered.

>> I apologize, you can click and drag the captioning bar, but we ask that you keep it up there for people who may need it.

>> MOIRA DE ROCHE:  I can click and drag to the bottom?

>> Right.  This one is okay because it's high enough.  You want the font a little smaller?

>> MOIRA DE ROCHE:  Yeah, then it will make the total bar a little smaller.

>> I apologize for the interruption.

>> MOIRA DE ROCHE:  We accept and appreciate the interruption, actually.  Okay.  Thank you very much.  Appreciate the help.

So, service should also be service to communities, so we encourage professionals to undertake unpaid work in developing community NGOs.

Okay.  Thanks.

Trust and trustworthiness, both in the work they do and in any products a professional creates, are paramount, and very much the theme that will follow through this whole presentation today.  Accountability and responsibility: professionals are accountable to themselves, to their employers, and to their countries for the work they do.

Ethics: people should subscribe to a code of ethics and good behavior, and really live and breathe ethics.  Apart from it being a professional hallmark, I sincerely believe that if every single one of you undertook to be like me, an ethics evangelist, and we spoke ethics, we would then influence our own communities, who would perhaps become more ethical and influence more people, so eventually we would maybe end up with a more ethical world.

And professionals should be proud of the profession they work in.  I am usually proud of my profession, but I really like it when the technology works.

Okay.  Thank you.  I tried the arrows which should work, but okay.

So IFIP has some certifications, and we don't administer them directly, but rather we leave member societies to administer certifications, and then we accredit the certification scheme.

So the IFIP IP3 certification is one that we believe every professional should aspire to.  As I mentioned, it's vendor neutral, vendor agnostic.

We tie it to the Skills Framework for the Information Age, SFIA, as a benchmark, but we don't say everybody has to use it.  People can use their own framework; we see this in the EU, where countries have their own frameworks, and that's fine.  All we do is map the framework to SFIA and make sure it meets the standard.

Everybody should have a common body of knowledge ‑‑ some people call it a core body of knowledge ‑‑ and around the core body of knowledge there should be various specialisms.  One of the specialisms we will focus on today is the cybersecurity specialism, which has been created and adopted by IP3.  People would still need the common body of knowledge, but with very much a focus around the specialism, and we can accredit that specialism.

I must say, while we're talking about certification, ACS is accredited for both the IP3 Professional as well as the IP3 Technologist.  My own society is accredited for Professional and, in the near future, will also be accredited for Technologist.

So, we see the certification as really being the gold standard around quality assurance, and one of the things we do when we accredit a member society is to make sure that its quality management system is rigorous and in place, and that there is accountability of the members to the society.  That word, trust, you'll see more than once.  We believe it helps create a world that works, and it also creates equivalency.

We would like to believe that a professional at ACS, a professional in Japan or Tanzania, and a professional in South America will all be at the same level, so that there can be recognition of equivalency.

It's very important because it promotes mobility of people, and also because a global organization can say: well, if I employ people in South Africa, I know that they're at the same level as IP3 professionals somewhere else.

We see governance as being the glue, and governance is needed when any group or community gets together to achieve a goal or purpose.  Even if you're just a community‑based organization in a housing estate, you need some level of governance, and in practice that's all about authority, decision making and accountability.

Governance can be good or bad.  Some people think that because there is governance it has to be good, but not necessarily.  In a country like my own, a lot of governance at government level is not good.  It can be effective or ineffective, and it shouldn't only be a box-ticking exercise; it should be about really living the whole concept of good governance.

This is just a definition that I quite liked: governance is derived from independent oversight by knowledgeable persons with authority, to ensure implementation of corrective action and to guide behavior and competence into a habit of compliance.  It's all about oversight and developing that habit of compliance.

I think once that habit of compliance is ingrained, it becomes less of an issue because it's done almost automatically, but it's all about behavior patterns.  I don't know if any of you have picked up the news about an organization called Steinhoff, which has done some bad things and lost lots of money for vulnerable people, from pension funds and so on.  A commentator said: we have King 3 and King 4, and we've put together our governance and all our company laws based on them, but we could have King 10; if people aren't interested and people do the wrong thing, then they'll do the wrong thing.  You can make as many laws and rules, and look at the whole thing as much as you like, but ultimately, it comes down to the people who have that oversight role.

I think that's an important concept, is that trust is about people.  It's not so much about methodologies or processes, but about people.

Professional standards: an ethical and legal duty, a level of care, diligence, and skill prescribed in the code ‑‑ the same as other professionals in the same discipline would exercise in similar circumstances.

The thing about ICT is that in most countries, professionalism is not legally mandated, so we can't rely on the law.  We really have to rely on organizations like IP3 and IFIP's member societies to ensure the professionalism of people who work in ICT.

And if you look at the anatomy of a professional ‑‑ I won't go through all of it ‑‑ you'll see that the governance aspect is what the member society that accredits the individuals has to have: the capacity to undertake a mission and objectives with strategy and compliance.  The influences ensuring professionalism are the code of ethics, the standards of practice, the body of knowledge, and a competency framework.

By the way, if anybody wants this presentation afterwards, I'm happy to give it to you from the stick; it will also go up on our website, or I'll even email it to anybody.

So, in a digital world ‑‑ and I think you'll find that certainly Anthony, and maybe even Jill, will mention this, so we apologize, but we always think that if you say something more than once, people will sit up and take notice ‑‑ there are various aspects that need to be considered around governance and trust: algorithm design, machine learning, autonomous transport devices, the impact on the environment of everything we do, cryptocurrencies, cybercrime and the vulnerability of networks in terms of rights and responsibilities, and, I suppose, access to the Internet, which is what this whole conference is about.

I have to say that I haven't become rich with bitcoin; I hope some of you have, because you're more trusting than I am.  My problem is ‑‑ I suppose maybe because I'm old ‑‑ that if I'm not able to go and get that money in some kind of physical form, in my head it goes to: well, if it's in cyberspace, can't it be hacked?  Maybe Jill will answer that in her presentation, because she knows the answer and I don't.  But I think it's very important that the people who look after cryptocurrencies have very strong governance.  I don't know that they do, but I also don't know that they don't.

So, we've mentioned that for professionalism, we need expertise and competence, we need life-long learning, trustworthy computing, and people who subscribe to and live by codes of conduct and ethics, because it's never just signing a piece of paper.  You really have to live the code of ethics, and it needs to apply to practitioners in all roles, so even those who haven't reached the aspirational IP3P level should really live by the same standards as ICT practitioners.  Even people lower down, who are still developing their careers, should subscribe to the same code of practice.

In Canada, university students studying IT actually take a little vow to say that they will honor the code of ethics.  So, professional standards are a compliance habit and, as I mentioned earlier, it's very important that it becomes a habit.  Active mitigation of risk is very important in ICT.

Evidence-based practice, irrespective of what it costs; and whistleblowing ‑‑ when people see that things are wrong, they should take action and highlight it.  Ethics, again; I say ethics more and more.  And leadership must ensure compliance, but must also lead by example around governance and ethics.

Now it's over to Jill.  I think I will have to sit there.

>> JILL SLAY:  Okay, so thank you for listening to me.  I'm basically going to talk to you about the experiences that we've had in Australia in developing professional standards in cybersecurity.

>> MOIRA DE ROCHE:  Just click the left mouse button.

>> JILL SLAY:  Okay, so the context in which we're working ‑‑ and I think you might understand this with me ‑‑ is that we use the term cybersecurity, but the definitions are not clear.  It was only about, I reckon, six years ago that we started using cybersecurity.  I started my current job four years ago, when I was told I was to be called Professor of Cybersecurity.  I'm actually a professor of digital forensics; I was happy with that and worked hard to be a professor of digital forensics, but all of a sudden, I'm a professor of cybersecurity.

I can talk about the Australian context, but I think it's generalizable.  There is a strong demand for cybersecurity practitioners, but the understanding of the professionalism, and actually what it is that they can do, is absolutely not explicit, and it's not clear to employers.

And I want to talk about the way that pseudo-professional standards have developed, the ways in which they're convenient, and also what we've done in Australia in their development.

So we, at my center, use this graphic to help people understand what it is they might be securing, in what ways systems could be attacked, and in which ways they could respond.  Perhaps we're quite dramatic in our language, but we use this to help technical experts see that cybersecurity is not just network security; it's about what we've talked about and mentioned before ‑‑ for us, we would just call it the whole industry of development.

In Australia it's about economic development, about a startup economy ‑‑ copying the Israelis, copying Singapore, or whoever in our government is asking us to look at at that time.  There are certainly issues with software security, payload power and power supply; but definitely the technical practitioners have to understand that there are legal and regulatory implications, and also government domestic and international policy implications.

So, that's what we use to try to describe the context in which this discipline sits ‑‑ and I'm thinking like an academic, but I mean this profession ‑‑ which is actually both multidisciplinary and cross-disciplinary, so it's in that context, even as ICT professionals, that we have to understand professionalism.

So, if we were to try and look at what the Australian Government thinks about cybersecurity ‑‑ I'm sorry about the fonts, they're a little weird, and I put all my headings at the bottom.  Sorry.  They're just disappearing.

So, the current definitions of cybersecurity come from our Department of the Prime Minister and Cabinet, which is where the current focus is ‑‑ the Prime Minister decided to be in charge of cybersecurity, and he has a Minister who assists him.  Cybersecurity is measures relating to the confidentiality, availability, and integrity of information that is processed, stored, and communicated by electronic means.  If you're a cybersecurity professional, that's a mantra, the CIA of security.

If you were to live where I do, in Canberra in Australia, and listen to the conversations of government and be part of working groups, cybersecurity is about the law, it's about domestic cyber policy, it's about international cyber policy.  We've had a lot of focus on how we communicate cybersecurity issues ever since we had our census done electronically, and we had a possible denial of service attack on the census while we were all completing the thing online.  We had the Prime Minister, his advisor, and the head of the Bureau of Statistics on camera at the same time, all using contradictory terminology on whether this was an attack or not an attack.  One part of the government banned the use of the word attack; you can say the word event.  You may identify with me that communication of these issues is difficult.

Many governments are talking about offensive cyberattacks or warfare, cyber economies and the development of national economies, and cyber forensics.

>> MOIRA DE ROCHE:  Is that a question?

>> AUDIENCE MEMBER:  Yes, if I may.  My name is Jacquelyn (?) for the record.  I was just wondering how the gap is being bridged between the rather technical and, let's say, systems-oriented definition put forward in, probably, the national cybersecurity strategy, versus the implications that you're actually listing there ‑‑ the, let's say, more socioeconomic implications of it?

>> JILL SLAY:  So, the people who actually developed the cybersecurity policy are the ones who are actually dealing with all of these issues as part of the ongoing work.  And so what you find ‑‑ if you know our policy, and we now have an international policy; do I want to say this motherhood statement?  I guess I do ‑‑ is that it's the translation from the formal policy into what's really happening that I'm pointing out.  There are certain gaps, but it's the use of the concept of cybersecurity, and for me as a professor, trying to educate people and develop new programs.

Before I got involved in this national standards project, I was trying to cope with the complexity of what cybersecurity means, so I guess that's what I'm talking about.

So the other part of the problem, in Australia in particular, is that there has been lots of prediction about needs ‑‑ because remember, we're a large country with a small population.  We need 8,000 more practitioners to meet the status quo by 2025, and 10,000 or 11,000 more to deal with growth.  The problem is that we have not, until recently, defined what cybersecurity practitioners are, what they're going to do, or what disciplinary background they come from, and that is very, very confusing for industry when they want to employ people.

I had an example of a very large company that came to me ‑‑ and you see, we've got the equity and diversity issue as well ‑‑ because I have developed five master's programs, and they said: we have vacancies for people with cybersecurity degrees and we need 18 females in cybersecurity.  What jobs are they going to do?  Can you tell me what kind of cybersecurity professional you're actually looking for?  And they couldn't really define it; they just wanted females.  I could find two.  I know where all the girls are, among the hundreds, but I think there is a naivety, and it's coming from both ways: targets to meet government requirements, and a lack of females in the pipeline anyway.

>> MOIRA DE ROCHE:  Jill, if I could just interrupt.  I hope you'll stay until Anthony is finished speaking because then we want to have the panel discussion and really interrogate topics like the question you just asked, because it's difficult to answer it straight away, but I think once we've covered all the presentation, we would love to engage more fully.

>> JILL SLAY:  What I'm trying to do is give you the background to what we've done, and then I'd love you to pull it to pieces.  I'll go quickly so there is more time.

And so the Australian Computer Society put together a task force.  We're in a fairly privileged position, I would say, because we are the body which, through legislation, sort of owns, as it were, the professional standards.  That means we're the people who accredit the curriculum, we're the people who determine the professional standards, and we also deal with immigration, the skilled migration assessment, so we're the people who actually have ownership of that common body of knowledge in ICT in a general way in Australia.

So, it was actually the government who asked the Australian Computer Society to develop national professional standards in cybersecurity.  It was a very simple request to Anthony and to me: now that we've launched and we've got a policy statement, we need to know who is a cybersecurity professional.  They're all coming out of the woodwork saying employ me, employ me, but we would like to know who is one and who is not.

So, what we decided to do, using the concepts out of IP3, was to establish the minimum professional standards of competence for a cybersecurity practitioner.  In engineering, Australian and British organizations are very clear on what an engineer is and very clear on engineering education, and therefore we can see a parallel in engineering, or perhaps accounting.  But the standard that we need has to provide a complete requirement for the full profession of cybersecurity practitioner, for the employer's sake.  We have to have some kind of disciplinary code for complaints.  Without professional standards we can't ensure the things that Moira has talked about, like governance and duty of care; in a technical sense we can't be sure we can depend on our results, we can't be effective or efficient, and we can't actually develop a future workforce.

On the concepts of security or cybersecurity in Australia: remember, the Australian Computer Society is the body which is now, sort of, telling academics what the right curriculum is, even though we understand that they're going to look to the ACM anyway.  So we're still stuck with an older model which looks at computer system security ‑‑ we all studied hardware and architecture kinds of subjects ‑‑ plus physical security, operational security, procedural security, and other security, but this does not get a very big part of most curricula.  Also, modern concepts of security, particularly the concept of adversarial thinking, which is now involved in that curriculum, are lacking in ours.  So those are all perspectives on what this work should be.

So, the work of our task force, which was requested by the Minister assisting the Prime Minister on cybersecurity, was to develop the professional standards, identify the job roles and occupations, to establish a baseline of knowledge and skills which are the minimum expectations of cybersecurity for the technologist and for the professional as with IP3, to find techniques to assess applicants who want to become certified, and to align that with best practice.

So, what have we done?  We've looked internationally to see what we could, as it were, Australianize, so we've asked ourselves: what could this cybersecurity framework look like, what does international best practice look like, and how do we Australianize it?

We've also looked at the practice of a range of countries.  We have looked at the U.S., who have certainly been my biggest influence during my career, at the UK, and particularly at Singapore and the cybersecurity agency there ‑‑ we've been developing our ideas in parallel with Singapore.  We set out to determine the kinds of tasks which are carried out; to look at the current competencies which are assessed by common certifications, and I'll talk about that in a minute; and to take on board existing criteria that we have in Australia for those who are deemed qualified, say, to assess government secure networks ‑‑ what has the government already said it thinks are the right criteria for the cybersecurity professional?

So, if we look back at what I call pseudo-professional standards, my memory is that it started with the DoD directive in the U.S. in 2005, which was a U.S. response after 9/11, out of the Patriot Act.  It said that anybody who was going to work in information assurance ‑‑ as it was called in those days ‑‑ needed a particular set of certifications.

I was on the board of (ISC)², who developed the certification which they say is the gold standard, and what I remember about that defense directive was that it was the first time that all those different certifying bodies, like SANS and ISACA ‑‑ your country may have a different one ‑‑ got together in America to work towards what you could call a common set of criteria, so I see these as a place to start.  And the next part of this is that, if we work in this area, we all understand the NICE cybersecurity framework from the U.S., based on this, and what the Americans have done to have a standard set of definitions, a standard set of tasks, and to develop career paths for Americans in an American context.

The CEO of the ACS asked me why we weren't doing this.  The problem is that Australia is actually not America, and we just do not have that many people developing software, except maybe for American projects.  We do not have a workforce that looks like the American workforce; at least 60% of our companies are small to medium enterprises, so if we try to follow what I'm calling an American pseudo-standard, we're to some extent wasting our time.  And I would argue, without being at all anti-American or anti any other country, that we can look at the principles of how this has been done, but we cannot just transfer the model completely ‑‑ and I don't think the people I know very well in (ISC)² or ISACA would agree with me ‑‑ we have to have an appropriate model.

This is how the Americans used that, in DoD 8570 in 2005.  It has been updated, but I wanted to show that, basically, there was a technical level, and under each technical level there were sets of certifications which were acceptable for a person working in that field.

Then there was a managerial level, with a set of certifications which were acceptable, and then there was a specialist level ‑‑ and ironically, specialist level three was not available to Australians anyway; it was only available to Americans.

But the pattern that's in there says that, across the board, what was done in America in terms of mapping was to say: let's take the basic certifications which exist, let's get the certifying bodies to work together, to have MOUs, to share data, and to come up with a model which says that this person has the skills, the knowledge, and the ability to work in this field.

The good thing about this is, for example, you can't get the CISSP unless you have four years of relevant experience, and each of these certifications has some kind of work experience built in.  So, using that model ‑‑ I'm sorry about that.

We used a similar method with the Skills Framework for the Information Age, SFIA.  I'm not sure if you know it, but it describes at a fine-grained level the skills required by professionals in roles involving ICT, and it happens to have included all the outcomes that you might need within cybersecurity.  It has them at a high level.  For instance, it defines what pen testing is, and what pen testing is at different levels; we would be looking at level three up to level seven.

So, we mapped this from an Australian point of view, and we also looked at our market and at what's available to us ‑‑ our Australian pseudo-standards, if you want to call them that.  We have something like 1,800 people with a CISSP, and we have 5,000 people who at one time have belonged to ISACA, but we don't know if they're all still there.  We use SANS for boot camps, and there are universities like mine that have done a lot of training, and we tried to map them, but we never knew what to map them to because we never had any standard.

We identified that we would just use the certifications from (ISC)² and ISACA, so the very complex table for the U.S. turns into this for Australia.  MACS means Member of the Australian Computer Society, and you can be a Certified Technologist or a Certified Professional.  So we're saying, if you hold one of those certifications, then we will assess you, along with the assessment to join the organization, and we will accept that.

And so we've matched those, and we've tried to look at what kinds of outcomes are inside the (ISC)² certifications, what are in the ISACA ones, and what the overlaps are, because we're still challenged by what a cybersecurity professional is.

When we did that, we worked out ‑‑ we've recommended ‑‑ that to be a Certified Technologist, you need three skills, and to be a Certified Professional, you need four, at a different level.

So, what we've come up with for the Certified Professional is that the industry-based skills which we could identify within the certifications range from IT governance, information management, and information security through much more technical ones like pen testing and secure systems software writing and testing.  A strange combination, but this was the result of the mapping.  And we had a different set of outcomes from the Certified Technologist mapping.

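To make that concrete, here is a minimal sketch in Python of the kind of logic the mapping implies.  The skill codes, levels, certification contents, and thresholds below are illustrative assumptions for this transcript, not the published ACS or SFIA mapping.

    # Sketch: map certifications to SFIA-style (skill, level) pairs, then apply
    # a threshold rule like the one described above.  All data here is invented.
    CERT_SKILLS = {
        "CERT_A": {"SCTY": 5, "GOVN": 5, "IRMG": 5, "PENT": 4},  # management-track example
        "CERT_B": {"PENT": 3, "SCTY": 3, "TEST": 3},             # hands-on technical example
    }

    def merged_skills(held_certs):
        """Combine a candidate's certifications, keeping the highest level per skill."""
        skills = {}
        for cert in held_certs:
            for skill, level in CERT_SKILLS.get(cert, {}).items():
                skills[skill] = max(skills.get(skill, 0), level)
        return skills

    def eligible(held_certs, required_skills, min_level):
        """e.g. Technologist: 3 skills at level 3+; Professional: 4 skills at level 5+."""
        skills = merged_skills(held_certs)
        return sum(1 for level in skills.values() if level >= min_level) >= required_skills

    print(eligible(["CERT_A", "CERT_B"], required_skills=3, min_level=3))  # True
    print(eligible(["CERT_B"], required_skills=4, min_level=5))            # False

The point of the sketch is only that, once the outcomes are defined, a mapping plus a threshold rule is mechanically simple; in practice assessors also weigh work experience, as described below.
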
So, we've started work with those.  This was launched on September the 1st, and we've now got in the order of 25 people who have already gone through the process.  I've overseen some of the original applications, and we've got people with PhDs, people with 20 years' experience, people who are pen testers.  What I see from most of their careers, which is encouraging to my government, is that they're actually coming from backgrounds where they would never have called themselves cybersecurity professionals; they're ICT people who have in some way moved into cybersecurity.  But if you can assess on the basis of work experience and knowledge ‑‑ it could be micro-credentials or just certification ‑‑ you can actually map to the outcomes, which I have in much greater detail.  If anybody ever wants what we've done, I'll send it to you or even help you to do it.

Also, we've got people from government departments, from large industry, and also people from overseas who want to do it.

We're working with government departments, defense contractors, and vendors to map the learning outcomes of other people's training, so that we don't say you have to do a particular kind of training; you can do any kind of training, and the ACS will map your credentials to these professional standards.  The aim is to have a repository of resources for self-education and then some micro-credentialing.

And so, what I would say from that is that we're trying to help individuals to be certified for the sake of their individual careers, and employers to look at future work needs.  There is more to be done, because we need to profile cyber positions, and we need industry to say what exactly these people are meant to do at work, because that is lacking, but I think the professional standards framework we're developing is a beginning and a vehicle for doing this work.  I would recommend that other countries have their own professional standards frameworks, and I would recommend this as one way of achieving a set of professional standards in what is, I think, a very messy, foggy world.  Thank you.

>> MOIRA DE ROCHE:  Thank you, Jill.  Okay.  Anthony?

>> ANTHONY WONG:  Thank you, Moira.  Before I start the presentation, I would just like to get to know who the audience are and what interest you have in our topic, so we can tailor, perhaps, my presentation and the workshop later towards where your interests lie.  So maybe starting with yourself?  What's your interest in this topic?

>> AUDIENCE MEMBER:  I'm not ‑‑ I'm sorry.  My name is Yoko and I'm from the UK, though actually I'm Japanese, and because there is now so much infrastructure, (?) damage from the opposite direction could put society itself in danger, and that's why I'm interested in cybersecurity.

>> ANTHONY WONG:  Where are you from?  Myself, (?), we are looking at it from the individual ‑‑ so you're from civil society?

>> AUDIENCE MEMBER:  Yes.

>> ANTHONY WONG:  Thank you.  What about yourself, sir?

>> AUDIENCE MEMBER:  I'm from the technical community and work on routing and other areas of the Internet, but I'd mention that all the IT professionals sometimes focus (?), and so we actually need to look at security in steps.  So it's an interesting discussion here, how we can standardize a thing.

>> ANTHONY WONG:  Thank you.  So, you're an academic?

>> AUDIENCE MEMBER:  Techie.

>> ANTHONY WONG:  A techie.  Okay.  What about yourself?

>> AUDIENCE MEMBER:  Hi, I'm originally from India but live and study in Berlin.  I finished a public policy degree, and I'm generally interested in the topic of trust, how it enables Internet usage, and how this industry is getting more standardized and professionalized going forward.  Yeah.  Thank you.

>> ANTHONY WONG:  So you are an academic?  You teach in India?

>> AUDIENCE MEMBER:  No.  Now I'm in Berlin doing my masters.

>> ANTHONY WONG:  Germany?

>> AUDIENCE MEMBER:  Yeah.

>> ANTHONY WONG:  Thank you.

>> AUDIENCE MEMBER:  My name is Jacquelyn.  I consider myself a member of the academic community.  I'm looking at cybersecurity norms, but I'm interested broadly in the complexity relating to cybersecurity, and especially what a cybersecurity professional actually includes or doesn't, which I'm uncertain of as well.  (Laughing).

>> ANTHONY WONG:  Thank you, Jacquelyn.  What about yourself, sir?  What is your interest?

>> AUDIENCE MEMBER:  Yes, I am from Armenia, working in technology.  My interest is in education, and I'm from the university as well.

>> ANTHONY WONG:  Cybersecurity?

>> AUDIENCE MEMBER:  My main interest is IoT but of course, I'm interested in cybersecurity.

>> ANTHONY WONG:  Thank you, and what about yourself?

>> AUDIENCE MEMBER:  Good morning, everyone.  My name is Dennis.  I'm from the United Nations Department of Economic and Social Affairs.  Excuse me.  I just wanted to find out what governments can do to increase the security and resilience of their IT systems, and that's why I'm here.

Unfortunately, I don't have much time.  I need to leave in 15 minutes, but I found the presentation very interesting.

And for the upcoming publication ‑‑ we have an e-government survey that looks at all national portals ‑‑ one of the chapters will be on what governments can do to increase their resilience, and that's what I'm trying to understand.

>> ANTHONY WONG:  Okay.  So, I can tailor my presentation to help you with that objective.  And the last attendee?

>> AUDIENCE MEMBER:  Hi, I'm Paul Wilson, the head of APNIC, the Asia Pacific Network Information Centre, the IP address registry for the Asia Pacific.  We're a service organization for members that are primarily network operators, and we provide a lot of training, with increasing demand for security training.  We're also an ICT-based organization, so we have our own needs for professional development, training, certifications and so forth as well, so that's my interest.  Thanks.

>> ANTHONY WONG:  Paul, I think I've met you before.  Thank you.  Just a bit of background: what I'm going to talk about in my presentation, before our workshop, touches briefly on topics we've already covered, but particularly, my talk will be about the emerging ethics issues from new technologies, and also about duty of care and why professionalism is now so important because of that.

We heard earlier, in the two presentations from Jill and Moira, about trust in the digital economy.  We looked at why we're digital: there is so much now online ‑‑ collaboration, communication, connectedness, information, infrastructure, including critical infrastructure that is critical for human survival, information sharing, Big Data, products and services, and new emerging transformations ‑‑ and we realize that we cannot live without the digital economy.  It's happening as we speak.

So, looking at some of these converging technologies, which you are probably all across, I'm going to touch briefly on some of the emerging ethics coming out of those technologies: artificial intelligence, automation, robotics, the Internet of Things, and blockchain.

So why is trust important?  Trust is a multidisciplinary concept: it relates to security, technology, privacy, safety, access, and a whole range of things.  When we look at the introduction of new Internet of Things devices, perhaps the most crucial things we can look at are security and trust.  How do we encourage consumers to trust the things we're putting into the market?

So recently ‑‑ I've spoken about the cases across the world ‑‑ even in the United States we have lawsuits again: Volkswagen and Audi have settlements of millions of dollars over covering up diesel emissions, and we're facing the same in Australia on a big scale, both by the government and also by private actions against those two manufacturers, Volkswagen and Audi.

And then we have issues like WannaCry; those are common things today, and that's just one example of things happening in cyberspace.

So, some of the ethical considerations arising in our new digital ecosystem.  We saw in recent times the ethical issues around algorithms: what happens if you have a driverless car which has to decide, or be programmed to decide, who to kill and who to save in a particular instance?  So then the question is: is it the designer, the developer of the algorithm, or the owner of the car who could be charged if something goes wrong and someone gets killed in a driverless situation?

So that poses new questions that we don't have answers to at the moment, including: what are the ethical values when we talk about building algorithms to decide on an outcome?  Whose values do we put in those algorithms?  Whose ethical and whose social values?  Because they are quite different across different cultures, quite different across national geographies, and quite different in terms of religion, morality, and ethical values.

So these are some of the important things to know with machine learning and algorithms: those automated algorithms are going to learn from our data.  Shortly, I'll be showing some situations of algorithms learning from Big Data and actually reacting in different ways.

So recently, we also heard about killer robots ‑‑ whether we allow automation and robots to be used in global warfare, even anticipating nuclear warfare, and how they can be applied.

There have been, I believe, a number of deliberations at the United Nations about the use of those things in the global arena.

And also, we're looking at where it's going with this topic of singularity: whether, at some stage in the future of the human race, automation and AI will actually reach the human potential to think independently and be able to develop themselves independently.

So, these are some of the questions.  This is a recent tweet about whether AI actually poses a bigger risk than North Korea and nuclear war.  The potential is there, and I'm happy to debate and discuss it at the workshop after my presentation.

But there are some issues that we've already seen with AI, even with driverless cars, where the driverless car mistakes a kangaroo for something else; or the case of the Microsoft chatbot, which was taken offline less than 24 hours after launch because users abused it with comments; or even Google analyzing photographs and mistaking two black people for gorillas.  These are some of the big dimensions of ethical questions arising from the emerging technologies.

Even with IoT ‑‑ now, this is one of Jill's slides ‑‑ by 2020 we're looking at 20.4 billion IoT devices that will be collecting a lot of data from many places across the globe.

The question then with data: data is like droplets of water.  Once it's out there, it's hard to control.  Once it's out there, it merges with a massive amount of other data, and it's not just from IoT.  We're looking at data collected from all the different devices we use: public data, public research data, transaction data from telecommunications, the use of mobile phones, and IoT devices.

The big questions then arising from these are: who owns the data?  Who actually controls its use?  Because when we look at the traditional concept of ownership, it talks about the right to possess the physical thing, the right to control the use of those things, the right to remain in control, and the right to use those things to the exclusion of other people.

So, the traditional concepts, in relation to data and IoT, are indeed very much challenged.  The question then is: do we have consensus about what security we ought to have for IoT devices, and how are we going to limit the expansion of the attack surface of our IoT devices ‑‑ the things you wear, the drones you have, the home security, the Smart TV that you have?  The scale has grown over the time that we've deployed those IoT devices.

So, we're living in a very complex ecosystem in cyberspace, with many devices, operating systems, and hardware; and then the question is, how do we go forward knowing those challenges?

So, one solution, from the United States: they're looking at legislating to make sure that the devices they buy are properly certified and contain the right technology to cover any vulnerabilities, either in the software or the hardware.  So that could be a question for our workshop: would that actually help with IoT and security?

We've seen, with AI, robots that can work 24/7.  They don't get paid.  They can do new things; they can do things much faster than human beings in some instances.  So where would that end?

We can also look at the impact of automation on jobs.  For one particular example, look at the finance industry on Wall Street: at the turn of the 21st century we had nearly 150,000 financial workers employed in New York City, and by 2013 the number was barely more than 100,000, even while transactions across the globe have increased tremendously.

The question is: what happened to all those workers in the finance industry?  This gives a good example of what's happening: Goldman Sachs employed 600 traders in 2000, and today only two, with the rest replaced by computer engineers.  So it's good news for computer and ICT professionals: every four traders can now be replaced by one computer engineer.

So, moving to the third area of my presentation: what drives duty of care?  When does it arise?  As a lawyer and technologist, these pose interesting questions that you may like to raise during the workshop as well.

Primarily, as a lawyer, I see those driven by legislation, by policy, by the things we create in legislation.  There is legislation in Australia about putting security into our critical infrastructure ‑‑ energy, electricity, gas, even the shipping ports that we have ‑‑ actually legislating to protect against cybercrime and cyber hacking, because all of those things are now connected through the Internet.

If you're from the European Union, we're also looking at the European General Data Protection Regulation, GDPR for short, coming next year.  All entities established in the EU, or that offer goods or services to, or monitor the behavior of, EU citizens will be covered, so this is one of the most pervasive pieces of legislation of its time, and it covers a lot of people around the globe.

So the key considerations are: this legislation asks whether you have created appropriate technical and organizational measures in your business to take care and comply with this particular piece of legislation, and there are new rights ‑‑ new rights which I equate to duties of care ‑‑ that will be imposed on those people who are covered by it.  Of the many rights that are now in the legislation, which goes live in May next year, what I find most interesting is the last item, on automated decision-making and profiling.  How do you actually describe what an algorithm does?  How do you describe how it comes to a decision, especially when it learns ‑‑ machine learning, learning from data?  How do you go about explaining what an algorithm has learned over time from data?

So, the fines and penalties for breaches of this particular European regulation are pretty steep: up to 20 million euros or 4% of your global business turnover ‑‑ not just your local turnover in your country, but across the world.

So if you look at this legislation, the question then is, what is your duty of care in regard to the EU citizens?

So, a recent survey done by Trend Micro predicted a number of statistics about compliance across the globe, noting that 57% of C-level executives shun the responsibility of complying with the GDPR.  But with massive fines of 10 million euros or 2% of global turnover, will that change behavior and compliance with the duty of care?  That is yet to be seen.

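As rough arithmetic, each tier's cap is the greater of the fixed amount and the percentage of worldwide annual turnover.  Here is a small sketch in Python; the tier values follow the regulation's published caps, and the turnover figure is purely hypothetical.

    # GDPR fine ceilings: the greater of a fixed cap and a share of worldwide
    # annual turnover (EUR 10M / 2% lower tier, EUR 20M / 4% upper tier).
    def max_gdpr_fine(annual_global_turnover_eur, severe=False):
        fixed_cap, pct = (20e6, 0.04) if severe else (10e6, 0.02)
        return max(fixed_cap, pct * annual_global_turnover_eur)

    # A hypothetical firm turning over EUR 2 billion a year:
    print(max_gdpr_fine(2e9))               # 40,000,000.0 (lower tier)
    print(max_gdpr_fine(2e9, severe=True))  # 80,000,000.0 (upper tier)
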
Another example is cybercrime and cyber laws.  I won't go through them in any detail ‑‑ they will be available to you on the slides ‑‑ but you can see what some of those are as we move forward.

So, the cybercrime legislation that we have talks about unauthorized access to computers, unauthorized access to data, or impairing computers; those are the standards, and they create duties of care across the world.  Many jurisdictions, including Australia, have specific legislation on that, so depending on where you come from, there is particular legislation that creates a duty of care covering your area of work.

And the last example that I'm going to list is the privacy framework, which most countries now have ‑‑ I mentioned the EU's ‑‑ and this fourth example relates to the industry perspective.

So, with the IFIP Duty of Care for Everything Digital, what are we trying to achieve?  Because of the poor ethical behaviors that we've seen, and the low security quality of devices like IoT devices, there is a need to acknowledge that there is a duty to consumers and users with the new things that we put into the marketplace, and to remind providers and consumers that they owe a duty of care and must act responsibly in relation to this new digital world that we are now living in.

So, the duties apply to government, to individuals, to communities, and to businesses.  In terms of businesses, we're looking at boards of directors and officers who act for companies having a duty of care.  In most jurisdictions that will be in your companies legislation and, if you're from the English sphere of common law, in the law of negligence and the duty you owe to your consumers and to the other people you deal with.

So, in terms of businesses, privacy and security are paramount.  Just because you outsource ‑‑ the ability to put it in the cloud ‑‑ doesn't mean you outsource your duty as well.

It says clearly, even in the EU regulation, that if you outsource those things, your processors have to be responsible and you actually have to remain accountable for that as well.

As for the duty of care for individuals, we believe the individual has to take responsibility.  In the case of the diesel issue with Audi and Volkswagen, the people who let themselves be used as tools to tamper with those systems are the experts, and they owe a duty of care.  If you act as a programmer, as an IT professional, that programmer also has a duty of care to the automobile consumer, to the community, and to the people we work around ‑‑ across the life cycle of the software that we create and the algorithms that we write to perform some of the functions that we used to do physically.

Likewise for communities, there are many duties of care, and there is a duty for government in terms of providing the right standards and regulation.  That's where I think an arena like this forum would be of immense use: to look for agreement on standards for IoT, standards for what's ethical, what's the right behavior with AI and automation, what regulation we should establish as a benchmark worldwide, and how we get agreement across countries to have some standardization, because we're now living in a very globalized world.

So, on that, I'd like to hand it back to Moira to actually start the discussion and the workshop.

>> MOIRA DE ROCHE:  Thanks, Anthony.  While people are thinking of questions, I just have a couple.  Jill, to go back to what I referred to before: how safe are cryptocurrencies, in your opinion?  Or either of you can answer that.

>> JILL SLAY:  I have no idea.

>> MOIRA DE ROCHE:  Okay.  (Laughing).  That doesn't reassure me at all.  Does anybody here have any idea?  Do you own cryptocurrencies?  No?  No rich people?

>> AUDIENCE MEMBER:  One of my staff invested a large amount of money in cryptocurrencies when his (?) was bought, and is very rich now.  I can't talk about it from a technical hacking point of view, but I can talk from an economic market point of view.  For us in Australia, if something goes wrong with the economy, it affects everything, every part of us.

So, I do believe that we have already had documented hacks on some kinds of cryptocurrencies, haven't we?  Every kind of system can be vulnerable if somebody is determined to hack it.  So I'm very dubious about it as well.

>> MOIRA DE ROCHE:  Okay, so one other thing I must say ‑‑ and it's a little bit frivolous, but still ‑‑ is that we talk about cybersecurity and we talk about standardization, but at the same time, we haven't yet decided whether cybersecurity is one word, two words, or whether it comes with a hyphen.  I look at our presentations, and it probably varies depending on who wrote it, and if you search on the Internet, you see it referred to in all three ways.  You seem to have it as one word, Jill, so seeing as in my mind you're the person of all things cyber, it should just be one word, but that's just a frivolous thing.

>> JILL SLAY:  Maybe we could discuss that.  (Laughing).

>> ANTHONY WONG:  Jill and I were involved in a panel with the Australian Government about creating a glossary of cybersecurity words, and we debated whether cyber should be joined to security.

>> JILL SLAY:  I don't remember the outcome though.

>> ANTHONY WONG:  You'll find it's a hard thing to do, because those things are still developing as we speak; it's constantly changing, and you can't really control it, as you know.  It comes down to which will be the most-used form moving forward.

>> JILL SLAY:  And it also depends on which part of the community you come from.  I've been in contexts where people who come from international security ‑‑ from the policy direction ‑‑ want to separate them out, because they say it's security work, and we put them all together.  But as a professor of cybersecurity, I think I put it in two words, and just for me it's (?) actually.

>> MOIRA DE ROCHE:  Thank you.  And Anthony, a quick question for you: what can consumers do to increase trust?  We say that consumers have a duty of care, and we should hope that the people who provide services and products have a duty of care, but what can we do in terms of our duty of care?  What should consumers do ‑‑ because all of us are consumers?

>> ANTHONY WONG:  I think consumers should push government to take more interest in the security of IoT devices, and you do actually have a choice in buying products.  You need to be careful and select the right products, to make sure there is security built into those things, because otherwise you could have potential breaches of your data and the things you do.  So I think consumers, as a matter of choice, have the privilege of driving change in producers' behavior all the time.

>> JILL SLAY:  I think I totally disagree with you, actually.  We talked about the American legislation ‑‑ I don't know if it's become law, but there is a bill which would force, basically, those who wish to sell IoT devices to the American Government to have tested them and proven their security before they're allowed to do it.  But we know from our market, and from the American market, that there are imports of commodity devices from China and Southeast Asia, and we don't have the money to test each device, as would be required.

We have the same problem in Australia.  The Prime Minister decided there is going to be a kangaroo tick-of-approval mark ‑‑ everything is a kangaroo in Australia ‑‑ and essentially, we're going to have the same kind of mechanism as we have on imported vehicles: to put numbers on every IoT device that comes into the country, and then if the IoT device is discovered to be insecure, they're going to recall it.

Now, given the complexity, and the fact that many people don't even know they've got an IoT device when they're using it ‑‑ all those lightbulbs, you know, all over the place ‑‑ I think it's impossible to do.

>> ANTHONY WONG:  What does the audience think?  What do you think?

>> AUDIENCE MEMBER:  I think it's worth looking at the way bitcoin was put together: the principle behind it is that it is designed for a trustless environment, and the Internet was also designed as a trustless environment.  The Internet itself didn't build in security that had to be trusted.  It's not that you never trust anyone, but you choose who you trust.

In terms of bitcoin, for instance, the software is Open Source.  You really don't have to trust anyone if you've got the personal capability to examine the Open Source and understand how it works.  Deployed in a trustless environment, what that means is that you still have to trust someone: you either trust yourself or you trust whoever produced software that you trust.  But you've got a choice of this implementation or that implementation, so the trust you extend is your choice.  You're not being asked to trust a bank or some institution somewhere else.
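
As a minimal concrete sketch of that idea, trusting yourself or a publisher you have chosen rather than an institution, here is what verifying a downloaded client against a published digest can look like.  The file name and the expected digest below are hypothetical.

```python
# Minimal sketch of "choosing who you trust": instead of trusting a
# download blindly, verify it against a digest published by a party you
# have chosen to trust. File name and expected digest are hypothetical.
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Digest copied from release notes signed by a publisher you chose to
# trust (hypothetical value).
EXPECTED = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

if sha256_of("wallet-client.tar.gz") == EXPECTED:
    print("Digest matches the published value; the trust decision was yours.")
else:
    print("Digest mismatch: do not run this download.")
```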

So, I find it interesting that people want to trust the Internet more and to promote more trust in the Internet, whereas it can actually be more empowering to promote less trust in the Internet, and more awareness and personal responsibility on the part of users as to who they trust to represent their interests.

So, in terms of product certification, if there are different competing product certifiers, with kangaroos or wombats or other national marks, then maybe that's a feature: instead of presenting one monolithic trustable institution, you work towards allowing users to choose who they trust, so that trust is a framework and not an institution.  I don't know if that makes much sense, but I think bearing in mind where bitcoin actually came from is quite instructive.

>> JILL SLAY:  Yeah.  I'm a professor of digital forensics, and when I started, about 2004, I was really (?).  The Australian Federal Police came to me and said, could we please test every mobile phone as it came onto the market, to see how we could extract the evidence using standard operating procedures?  I'm so glad I said no, because of the scale; we're at the stage where so many mobile phones are being developed a day.  I did attempt to do the forensics of many mobile devices for them, but it moved too fast, and I think that's what's happening with the Internet of Things, because there are so many people who want to put sensors on so many things that move.  I don't know how to keep up with it.  And if we try to regulate, that's what we might end up having to do: test everything.

>> ANTHONY WONG:  That's what happened in the old days of approvals, where no one could connect anything to the phone line unless they paid for approval.  As a manufacturer or importer, you had to fork out tens of thousands of dollars and wait a long time for them to come back and certify your plug or whatever it was, and that's completely unsustainable these days.

So, I think the attitude these days is more that the phone network, such as it is, should be designed so that it is robust and can't be destroyed by someone plugging in a device.  That, again, is another trustless environment: you're not investing some authority with responsibility, you're taking responsibility for your own participation in the network.  It's very much the way the Internet has been run from the start.

>> MOIRA DE ROCHE:  I think what you're saying is that it shouldn't be institutionalized, and people have to take care themselves, because we can't have everything checked before we get it.  But I recall at a forum earlier this year, where we spoke about a similar topic, somebody in the audience said she had got a baby monitor, an Internet of Things device, but it came with a preset password that she couldn't change, and that made her very uncomfortable, as it should, because ultimately it was connected to other devices.

It's also about what you then do.  Before you buy the device, do you try to check whether you can change the password?
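
As a hedged sketch of what that first-boot check might look like on a device you own, the snippet below probes whether a device's web interface still accepts a well-known factory credential.  The address and credential list are hypothetical, and it should only ever be pointed at your own equipment.

```python
# Sketch of a hygiene check on a device you own: does its web interface
# still accept a well-known factory credential? Address and credential
# list below are hypothetical examples.
import requests  # third-party: pip install requests

DEVICE_URL = "http://192.168.1.50/"  # hypothetical address of your own monitor
FACTORY_CREDENTIALS = [("admin", "admin"), ("admin", "1234"), ("root", "root")]

def accepts_factory_login(url: str) -> bool:
    """Return True if the device accepts any default credential."""
    for user, password in FACTORY_CREDENTIALS:
        try:
            resp = requests.get(url, auth=(user, password), timeout=5)
        except requests.RequestException:
            return False  # device unreachable; nothing to conclude
        if resp.status_code == 200:
            print(f"Device accepted factory credential {user}/{password}")
            return True
    return False

if accepts_factory_login(DEVICE_URL):
    print("Change the password now, or return the device if you cannot.")
```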

So as consumers we've got a duty of care, but then we've got to press suppliers and governments so that there is at least legislation around making sure that people can keep themselves safe, and around the duties suppliers have, for example reporting: companies should have to report data breaches, which at the moment they don't.

>> ANTHONY WONG:  I would like to add to that.  In Australia particularly, when you supply a product you actually have a duty of care, whether you're the manufacturer or the distributor of that product.  Most countries now have consumer legislation, and because products can harm or kill people, or create data breaches, or impact them in some other way, there is a duty of care for devices in that regard.  It's commonplace now around the world.  Maybe an attendee would like to add to that debate?

>> MOIRA DE ROCHE:  Or ask your own question, if you have one, please?

>> AUDIENCE MEMBER:  Yes, I would just like to say, related to the mechanism you mentioned of creating a kind of ledger for the registration of all IoT devices: taking into account the scalability of IoT, it won't work.

But on the other side, I think that in terms of regulation, the only way is to enforce standards, and some standards are already in place.  For example, in hardware development for self-driving cars there is ISO 26262, the functional safety standard, and if you are producing a chip you need to comply with it.

Similar standards should be enforced for device manufacturing, so that if you are importing some device that is unknown, coming, let's say, from China or somewhere else, and is not proven silicon, it does not earn that trust.  The same kind of compliance can exist in the networking area, in the software development area, and in the Cloud area.  The only way of having trust is to follow these standards, and there are a lot of task forces and working groups in this area.  This is just a comment.  Thanks.

>> MOIRA DE ROCHE:  Thank you.

>> ANTHONY WONG:  Thank you.

>> AUDIENCE MEMBER:  Hi, I'm Krishna.  Going back to the conversation on bitcoin: I understand that bitcoin goes with this trustless economy, but as a normal user who doesn't understand the technology, I can't do mining, so if I want to invest in bitcoin I rely on exchanges, websites where I can buy coin through multiple levels of verification.  But my fear is that because there is so much euphoria about bitcoin, everyone is logging in at the same time, which crashes the servers, and it's really difficult to get in and buy bitcoin.

My other fear is that if bitcoin is going to go down on a particular day, there is no way I can log in and sell my investment, right?

So this prevents me from investing.  I'm trying to understand how, even though we want to create something with no trust at all, over a period of time it gets institutionalized, because as a normal user I have to rely on something that I can trust, and even there it gets more complicated because it's not legitimate.

>> ANTHONY WONG:  Would you like to comment?

>> AUDIENCE MEMBER:  Yes.  Hello.  My name is Christian, from Berlin.  I was a little late due to my flight.

>> MOIRA DE ROCHE:  Thanks for coming.  You're welcome.

>> AUDIENCE MEMBER:  The discussion I followed on IoT was very interesting.  We conducted research projects on the issue of trust in the IoT, and at the beginning I really hoped that this consumer-side solution would be a way to solve it, but after conducting some empirical research I'm not so sure it works, because we saw that many hypotheses in this field, like the privacy paradox, are indeed true.  As long as systems work and are designed well, consumers trust them without looking into them.  So I think you really need to do an economic analysis and find ways to motivate those who can actually increase security to do so.  And we see in Europe that the General Data Protection Regulation is, in fact, such a way.

So, data protection questions now have relevance.  If you talk to data protection officers in companies, they now have access to the higher levels of management, and the sanctions, as you mentioned, can be harsh, with quite severe payments, but it really is a game changer and you can already see the effect.  That is my first comment.
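
For scale, the severe payments mentioned here refer to the GDPR's fine ceiling: under Article 83(5), the top tier of administrative fines is capped at the greater of EUR 20 million or 4% of total worldwide annual turnover.  A small worked example, with a hypothetical turnover figure:

```python
# Worked example of the GDPR Article 83(5) fine ceiling: the greater of
# EUR 20 million or 4% of total worldwide annual turnover. The turnover
# figure below is hypothetical.
def gdpr_max_fine_eur(annual_worldwide_turnover_eur: float) -> float:
    """Return the Article 83(5) ceiling for a given annual turnover."""
    return max(20_000_000.0, 0.04 * annual_worldwide_turnover_eur)

turnover = 2_500_000_000  # hypothetical: EUR 2.5 billion group turnover
print(f"Fine ceiling: EUR {gdpr_max_fine_eur(turnover):,.0f}")
# -> Fine ceiling: EUR 100,000,000 (4% of turnover exceeds the EUR 20m floor)
```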

And a second brief comment: I would like to think about what the problem is, apart from the economic incentive to produce fast and cheap.  One thing, and I don't know whether it has been mentioned before, is the knowledge problem.  To produce state-of-the-art solutions in the IoT economy, it's very important to see that many of the companies coming in are not traditional IT companies, so they don't have the knowledge and don't know what the state of the art is.  We see this is a huge problem in Germany, where a lot of industry is now digitizing.  So how do we solve this knowledge problem?

>> ANTHONY WONG:  So, what have you found out in your research?  Are you able to share with us, at some stage, your research outcomes in relation to this IoT question?

>> MOIRA DE ROCHE:  Is your research published?

>> AUDIENCE MEMBER:  Yes.

>> MOIRA DE ROCHE:  Perhaps, give us a link?

>> ANTHONY WONG:  Perhaps you can give us a link to the website, yeah?

>> MOIRA DE ROCHE:  That would be good.  There is a gentleman over there who hasn't said anything yet.

>> AUDIENCE MEMBER:  Thank you.  At the beginning of the session you were talking about standardizing the skillset, so that different people with different skills can work to a common standard.  One thing not discussed in this room is standardization of the law.  Every country has different laws, and these laws are not the same.  For the same crime, say stealing money from a bank by hacking versus stealing money physically, there are different laws for the same kind of criminal activity.  That is one challenge we need to address: a crime is a crime.  Beyond that, we should talk about how we can collect evidence.

Evidence has to map to other countries' laws, and different countries handle it differently.  We need that kind of standard to move forward; otherwise it's very difficult, and that's a big challenge.

And another thing I didn't get an answer to here: you mentioned the automated car and the ethical question of who to kill, how to program that, whether it should kill the passenger or the (?) or a kangaroo or a child or, say, a human.  How can we actually address that area?  I'm very interested to hear; if you can shed some light on that, I would be happy.

>> MOIRA DE ROCHE:  Perhaps I can answer your questions first.  First of all, in terms of professionalism and having a global profession: a lot of global professions exist, doctors, engineers, but even with a medical degree you can't necessarily go and work in a different country, because you have to at least do some cross-training to learn what their laws are.  It's usually the legislation that changes, so the job itself is no different, but the legislation, the requirements, and even the norms are different.  In some cases, even the language is different.  We often have this discussion with our colleagues in Japan, who are just about to be accredited: you couldn't say, fine, you're an ICT professional in South Africa, come and work in Japan, because the language would be a barrier.

So yes, there is commonality in what all software engineers do, but part of the profession must be about what applies in your own country, and in terms of ICT that means data legislation, et cetera.

And then your other question, about the driverless car.  There was a big accident where a driverless car left in autonomous mode crashed into a truck and the driver was killed, and there was a whole big investigation.

What they discovered was that the driver had trusted too much in autonomous driving; it was like a pilot saying, I'm just going to put this on auto-pilot and then not bother anymore.  So he missed his duty of care to know when and where to use it, and perhaps the manufacturer also failed in a duty of care by not informing the driver better, by not saying that you shouldn't use autonomous driving when, for instance, there is a lot of fog.  So the duty of care has to come from both sides, but it will remain a question, and I'm sure Anthony has more to add about that.
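
One way to picture that shared duty of care, offered as a toy sketch with invented thresholds, is a gate in which the vehicle itself enforces the conditions under which autonomous mode may engage, rather than leaving the judgment entirely to the driver.

```python
# Toy sketch of an operating-conditions gate: the vehicle, not the driver
# alone, decides whether autonomous mode may engage. All thresholds are
# invented for illustration.
def may_engage_autopilot(visibility_m: float, speed_kmh: float) -> bool:
    """Allow autonomous mode only inside its stated operating conditions."""
    MIN_VISIBILITY_M = 150.0  # invented fog threshold
    MAX_SPEED_KMH = 130.0     # invented design-domain speed limit
    return visibility_m >= MIN_VISIBILITY_M and speed_kmh <= MAX_SPEED_KMH

print(may_engage_autopilot(visibility_m=60.0, speed_kmh=100.0))   # False: too foggy
print(may_engage_autopilot(visibility_m=500.0, speed_kmh=100.0))  # True
```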

>> ANTHONY WONG:  Our supporter has turned up.  Going back to your first question, about why there are different laws for humans and perhaps different laws for robot advisors: I personally came across that, and I wrote a column in the Australian newspaper about it when our Minister decided to legislate on financial advisors, sending them back to university to learn about ethics and about how to properly advise their clients.

We're doing that for humans, but then my question is: if we automate some of the things financial advisors provide in the marketplace, which is happening a lot, and you get programmers to create these algorithms, do the programmers need to go to university too?  They're creating the rules we now use to trade without even knowing it, because with a click of a button we get advice from robot advisors and then we make an investment.

So here, without the humans, we get the algorithm programmed by people who may not even be certified as ICT professionals, and the question I have for the Minister is: why stop at regulating the humans?  If there are people who put pen to paper to write the algorithm, shouldn't they be regulated as well?  Aren't they actually advisors too, who need to be regulated?

Those are interesting questions, because they're changing the world.  If you look at most countries around the world, our legislation is all very piecemeal.  Nobody actually sits back and asks, should we revamp the whole thing?  All these new emerging technologies are challenging the fabric of our human existence; they are challenging the regulatory framework that we live in, which grew over thousands of years, and so sometimes that is being disrupted as well.  Who is looking at the broader questions, rather than adding piecemeal changes which may not address some of the fundamental questions these emerging technologies are asking?

On the last question you asked, the ethical question: that's a very philosophical question; it's a test for humanity.  If you have to make a split-second decision to save or kill certain people or animals, whose life has more value?  Those are deep human philosophical questions that people like Socrates and Aristotle pondered, and that people will ponder for years to come.  There are no simple answers to those questions, so even if we put them into an algorithm, how do you actually stipulate that?  That's very interesting.

If you make that call as a human being, you make a split-second decision to save someone's life.  How do you come to that decision?  If you analyze it logically and put it in an algorithm, can you actually do it?  Those are the big fundamental questions, and I'm sure Jill can add to some of that.
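
To make the difficulty concrete, here is a deliberately naive sketch of what stipulating that choice in code would require: an explicit, contestable ranking of lives.  Every constant in it is an ethical judgment rather than an engineering fact, which is exactly the panel's point.

```python
# Deliberately naive sketch of hard-coding the "who to save" decision.
# Every weight below is an ethical judgment, not an engineering fact.
from dataclasses import dataclass

@dataclass
class Party:
    kind: str   # e.g. "passenger", "pedestrian", "animal"
    count: int

# Whoever fills in this table is legislating ethics by assignment statement.
VALUE_WEIGHTS = {"passenger": 1.0, "pedestrian": 1.0, "animal": 0.1}

def spare(option_a, option_b):
    """Naive utilitarian rule: spare the group whose loss would 'cost' more."""
    def cost(parties):
        return sum(VALUE_WEIGHTS[p.kind] * p.count for p in parties)
    return "A" if cost(option_a) >= cost(option_b) else "B"

# A fog-bound split second, reduced to arithmetic:
print(spare([Party("passenger", 1)], [Party("pedestrian", 2)]))
# -> "B": this rule sacrifices the passenger to spare two pedestrians.
```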

>> JILL SLAY:  Yeah.  How do I turn this on?  All right.  So, I'm a very technical person; I told you, I'm a professor of digital forensics, but I sometimes stop doing my research to do things like the framework I showed you for deciding who is a professional.  I have taught cybersecurity or information security at university since about 2001, and I counted up: in Australia and Asia I've taught about 2,000 undergraduates in information security, plus the masters and the PhDs.  But I'm one of these people, and we usually teach just technical content.

I know that under the ACS requirements approximately 12 hours of a three-year degree is on ethics, but if you, at one of the universities, chime in and say, we've got to be talking about ethics, we've got to have philosophy for engineers, which I actually did study in the UK, you're just laughed at, because we need all of that time for the technical content.  You've got to teach them technology, but then you have a whole generation of introverts in technology; that's where my PhD students usually still are.

If we haven't challenged them when they're younger, I don't see it being reflected in their decision-making.  I'm doing machine learning, I'm supervising machine learning, and I've never thought to ask my guys to start thinking about the ethics of the algorithms.

I understand at one level that we have to do it, but I have to ask: when should I have started teaching them about this?  When they were 5, I think, or when they were 11, or whatever year you go to high school.  It's just not part of our system.  It should be.  It should be.
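
As one small illustration of the kind of question a machine-learning supervisor could set, offered as a sketch on made-up data, here is a one-function check of whether a model's positive-outcome rate differs across groups (demographic parity):

```python
# One cheap ethics question to ask of any classifier: do positive
# outcomes fall at similar rates across groups? (Demographic parity.)
# The decisions and group labels below are made-up illustration data.
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Fraction of positive (1) decisions the model makes for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

preds  = [1, 0, 1, 1, 0, 0, 1, 0]          # model decisions (1 = approve)
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(positive_rate_by_group(preds, groups))
# -> {'a': 0.75, 'b': 0.25}: a gap worth asking the student to explain
```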

>> AUDIENCE MEMBER:  There is a new initiative in the Internet Engineering Task Force you might be aware of, which is about human rights considerations in technical standards, human rights considerations being privacy and consumer protection and all those issues.  I haven't been involved directly, but I've observed the same sort of challenge of trying to bring a completely new way of thinking into a pretty well-established set of minds and persons.  It's actually moving along pretty well, and it hasn't taken that long to become a regular feature of IETF meetings and something people talk about here as well.

>> JILL SLAY:  In the job I'm actually just leaving, I wrote a new cybersecurity degree around the new ACM curriculum, which covers what there was common agreement on: societal security, organizational security, risk frameworks and adversarial thinking, and I have great support for that.  But I couldn't get my university to move away from requiring four courses in the first year which are all roughly mathematics, and I'm thinking, can I not just move one of these out and put some of this material in?  No, you can't, because we just need to give them technical content.  So it's going to take a long time.

>> MOIRA DE ROCHE:  Before I get to the man at the back, to add to that: our recommendation is that ethics is taught in every course, not as a separate subject, so that any time you're talking about making decisions, how do you decide this or that, you bring the ethical conversation into it.  The gentleman at the back?

>> AUDIENCE MEMBER:  I think I completely agree, but I also see that it's not easy.  The question I would ask is: why do we talk so much about ethics for engineers and not about technology for ethicists?  Interdisciplinary exchange requires the other disciplines to have a better understanding of technology too, and to really be part of design teams, in order to help design together, because otherwise we leave the decisions up to a few people.  I completely agree with you: we cannot burden one part of science with building the technology and also deciding about all its social implications.  So one thing I'm thinking about is how to broaden who does technology, and to really have a conversation.

And just a very brief remark, along the lines of what Anthony Wong said: in Germany there was a commission set up by the Ministry of Transport on the ethical issues of automation.  It issued its first report, which is very interesting but has not been translated into English yet.

One interesting aspect is that they didn't solve, or didn't even try to solve, the trolley question, because they said this is something that maybe even the state cannot do at this point.  They regard it as a completely open question.

On the other hand, I think they made some interesting observations, and if you use an AI translation system like DeepL, you will find some useful hints in the report.

>> MOIRA DE ROCHE:  Mr. Morel, have you got something to add to the discussion?

>> AUDIENCE MEMBER:  I think we have been working with the wrong hypothesis, not just since the Middle Ages but through the past two centuries, and this wrong hypothesis is to take a person, more or less well educated by the previous generation, and have them educate the next generation.  If you have a two-generation delay in a world which is very dynamic, with static information and (?) as structure, but where everything is digital and networked, you can't succeed.  You have to (?).  It's not possible, because those who are in school now will need things in '27 or '33 or '42; usually they will have to work to 60 or 65, out to 2065, but you can't prepare these people with the current curriculum and the current way of teaching.

I think the word teacher should be retired, not the teacher themselves, but there should be a complete re-engineering of the system: a smooth curve, slowly, but in a direction where you push things a different way.

I have in my bag here, maybe you know it, a nice article from the Institute for the Future, Future Work Skills 2020.  You can extend it: it gives six trends for the next ten years and ten skill competencies, and they have nothing to do with what is currently taught at any level of education in the system.  So I think we should be aware of that, be more modest, and begin to work in another way, and not keep the two-generation delay between the educators and the future.

Look, in 1989 I had already been 25 years in the educational system as a teacher.  I never learned the Internet.  In 1989 there was no Internet, just the very beginnings, no social networks, no Big Data, no IoT, no Cloud, and so on.  So people inside the system have to be able to understand the meaning of that, to understand what it is, and then to understand how to handle it, how to behave.

And I think it's around these ideas that you have to push competencies for the future, to be able to produce people able to ask questions, to look for information about things you can't imagine today.  Otherwise, you will just format people to go against the world.

I chose '89 not only because of the web; in '89 there was a very big congress in Paris organized by UNESCO on ICT and education.  I was there, and it was very funny to see that we produced four pages with 15 recommendations.  Recently I put some order in my 400 boxes and found those four pages.  Quite nice.

I was one of the writers of those four pages, but I didn't remember that.  You could use them today, yet we have done nothing in 28 years, just been distracted by the evolution of technology; no pedagogical content has changed.  That's a bit of an exaggeration, but I think we are still at the starting point.  We are formatted to operate in a linear way while the technology is exponential, and we are crossing what some people call disruption, but we don't understand what that means.

And I think it's urgent to take that into account, and for IT people to consider what they are, with a code of conduct or ethical rules, and, like the medical doctor, to keep their level of competence current regularly, not in an artificial way.

>> MOIRA DE ROCHE:  Thank you.  We have to start wrapping up now.  I just want to say, the first thing we should do is lose the word pedagogy.  Pedagogy is about the teacher being central, and learning in today's world is about learners being central, so that's just my little thought for the day.

I never win on that one.  Okay, so we would like to wrap up now.  One of the things we would like as an outcome is to identify anything that IP3 can do, either to partner with any of your organizations or, more importantly, to take this conversation forward.  Is there something we can do, even if it's just one thing we should be following up on, or driving into our agenda for our next engagement, which will be at the World Summit on the Information Society in March or the World Computer Congress in September?  We don't want to just leave it here and find ourselves presenting the same material and having the same conversations in three months' time.

Can anybody think of what we can and should be doing to really move this conversation forward?

>> AUDIENCE MEMBER:  Don't forget, in May, the AI for Good Conference.

>> MOIRA DE ROCHE:  In May, the AI for Good Conference that the ITU will be running.

No?  You're thinking.  Good.  Keep thinking.

>> JILL SLAY:  I personally would like us to have a global understanding of what a cybersecurity professional is.  I'm not suggesting that we should adopt the Australian thinking; I'm suggesting that I've come up with a methodology to develop a framework which could be useful.

And I agree totally with what you were just saying.  When I employ people with PhDs to be lecturers, to be academics, I won't employ them unless they've actually got an industry-based certification.  I don't specify which one, but I need them to have one that has continuous professional development built in, something which forces them to keep thinking.

I don't care about their academic papers; my university does.  If they can publish in IEEE venues, that's great, but I'm more interested in whether they know what the future of their students in the industry will be for however many years to come.

So I would like us to have that, particularly for what we're calling cybersecurity, because those of us who have worked in this area as academics know that nobody had really specified what the curriculum is until very recently, with the joint IEEE/ACM 2017 cybersecurity curriculum.

(captioner will end in a few minutes).

I would like us, as local computer societies, to adopt something like that, because then at least we all teach the same thing, with its broad societal implications; and if we then know what is in the curriculum and what the professional bodies accredit, we've actually begun to define what cybersecurity is, because at the moment it has huge holes in it from an employability point of view.

>> ANTHONY WONG:  Thank you, Moira.  Any more ideas?

>> MOIRA DE ROCHE:  I'll give everybody my business card, and I invite you to please ‑‑ (Speaking off mic).

>> AUDIENCE MEMBER:  Please carry on.  I heard your discussion about ethics, and how it could work when people will not have the same level of understanding, especially of science, technology, engineering and mathematics.  It's a great question when you know that, statistically, the distribution of people who are able to grow in this field is about one in a thousand.  How can it be fair for everyone to have access to this teaching?  I'm also (?), and I see the (?) industrial revolution is coming; it's right here and has been here for a long time, and now we see some developments, but it was on the starting blocks a long time ago.

(captioning completed at 6:01 a.m. CST)