IGF 2024-Day 0-Workshop Room 9-Event 173 Building Ethical AI: Policy Tool for Human-Centric and Responsible AI

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> MODERATOR: Good afternoon, everybody.  How are you doing?  Check, check.  Is that better?  Cool.  Again, hello.  Welcome.  My name is Chris Martin.  I'm head of policy innovation at Access Partnership.  We're a global tech policy and regulatory consulting firm.  So pleased to be here with all of you, and with our partners at the Digital Cooperation Organization.

Perhaps we can get started with an acknowledgment that artificial intelligence is no longer just a technical challenge.  It's a societal one.

Every decision AI systems make, and everything they power, will impact and shape our lives, how we work, and how we interact.  The stakes are monumental; they demand that we get this right.  And at the same time, key questions remain.  Most especially: how do we ensure that AI is not only a powerful tool, but also ethical, responsible, and human-centric?

Today, we stand at a pivotal moment.  Policy makers, technologists, and civil society, are coming together to navigate the complex intersection of innovation and ethics, and together, we need to develop frameworks that both anticipate the risks inherent in these systems, but also seize the transformative potential of AI for global good.

Now, this session isn't just about policies; it's about principles in action: defining who we are, what we as a global community value, and how we protect those values, especially in the face of rapid change.

I invite you to take this opportunity to explore these possibilities with us, to ask some hard questions, and build pathways to ensure that AI serves humanity and not the other way around.

With that, please let me introduce my colleague, Mr. Binder from the Digital Cooperation Organization.

>> Hello.  Good afternoon, everybody.

I see a lot of faces from all around the world, and it is really fortunate for us to be able to gather you all here together, showcase some of our work, tell you who we are as the Digital Cooperation Organization, discuss some of the work we are doing, and seek your input.

So, this is really meant to be a very interactive discussion, so let's see how we can turn it into a round-table discussion as we go.

So, my name is Ahmed Binder, and I represent the Digital Cooperation Organization.  We are an intergovernmental organization.  If we can go to the next slide, please.

I'll continue.

So, we are represented by the ministers of digital economy and ICT of our 16 Member States, which come, as you will see, from the Middle East, Europe, and Africa to South Asia, and we are expanding very rapidly.  We have a whole network of private sector partners that we call observers, as you will see in other intergovernmental organizations, and over 40 observers are already with us now.

We are quite young as an organization: we came into being at the end of 2020, so we are in our fourth year.

So, that is our organization -- what DCO is and how it works.

This year, our work looks at the ethical governance of AI.  While a lot of work is being done on ethical and responsible AI governance, we wanted to look at it from a human rights perspective.  So we identified which human rights are most impacted by artificial intelligence, and then we reviewed, across our membership and across the globe, how AI policy, regulation, and governance intersect with those human rights, and what needs to be done to ensure a human rights-protective, ethical AI governance approach.

There are a couple of reports that we are going to publish on that, and we are developing a policy tool, which will be the crux of our discussion today.  We have developed a framework on the human rights risks associated with AI and the ethical principles that need to be taken care of.  The tool is then going to provide our Member States and beyond with a mechanism -- can you hear me all right?  Okay.  So the tool will provide AI system developers or deployers with a way to assess a system's compliance with, or closeness to, those human rights and ethical principles, and then it will recommend improvements to the system.
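To make that assess-then-recommend mechanism concrete, here is a minimal sketch in Python.  The six principle names are the ones presented later in this session; the 0-to-1 compliance scores, the 0.7 threshold, and the canned recommendations are illustrative assumptions, not the DCO tool's actual design.

# Minimal sketch of the assess-then-recommend flow described above.
# Assumptions: per-principle compliance scores in [0, 1], a 0.7 threshold,
# and one canned recommendation per principle.  The real DCO tool's
# questions, scoring, and recommendations are not specified in this session.

PRINCIPLES = {
    "Accountability and oversight":
        "Define clear responsibility, audit trails, and incident response.",
    "Transparency and explainability":
        "Document and disclose how the system makes its decisions.",
    "Fairness and non-discrimination":
        "Test for discriminatory proxies and impacts on vulnerable groups.",
    "Privacy":
        "Put safeguards in place that respect privacy rights.",
    "Sustainability and environmental impact":
        "Measure and reduce the system's energy and resource footprint.",
    "Human-centred social benefit":
        "Validate that the system meets real societal needs.",
}

def recommend(scores: dict[str, float], threshold: float = 0.7) -> dict[str, str]:
    """Return an improvement recommendation for every principle whose
    compliance score falls below the threshold."""
    return {
        principle: advice
        for principle, advice in PRINCIPLES.items()
        if scores.get(principle, 0.0) < threshold
    }

# Example: a system strong on privacy but weak on transparency.
print(recommend({"Privacy": 0.9, "Transparency and explainability": 0.4}))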

So again, I don't want to give it all away in my opening remarks.  We have our colleagues here with whom we are developing this tool.  So I will hand it back to Chris to take us through it, and I look forward to all your input in the discussion today.

Thank you so much.

>> Thanks.  Well, everyone, I'll walk through a little bit of this presentation on DCO's view of ethical AI governance, and then my colleague Matt Sharpe will walk us through the tool itself.  As Ahmed previewed, we'll then break out to work through a few scenarios, and you'll get a chance to try the tool yourselves.

I think the first question is: why is this important for DCO?  Well, it's a big deal everywhere, and DCO, working with Member States and stakeholders, wants to be an active participant at the forefront of this conversation.  They felt the tool is a great way to get that moving.

These are two of the objectives to get us started.  The tool can be seen as one way to instil alignment and interoperability between regulatory frameworks.  We recognize there is a wide divergence right now in AI readiness and regulatory approach.

And then once you start to see that, actually proposing impactful, actionable initiatives is critical.

DCO feels that's important.  And lastly, facilitating interactive dialogues like the one we're here to have today.  So, to go a bit deeper: what does a human rights approach to AI governance look like for DCO?  It starts with four things.

First, prioritizing the protection and promotion of human rights, the name of the session; second, designing to uphold human dignity, privacy, equity, and freedom from discrimination; third, creating systems that are transparent, accountable, and inclusive, and that don't exacerbate inequalities; and lastly, ensuring advancement that contributes to the common good while mitigating all the potential harms we're starting to see evolve with AI.

The toolkit that we're developing will take a human rights-centred approach across four different areas: first, looking at inclusive design and ensuring participation from diverse communities, especially marginalized ones.

It will look to integrate human rights principles like dignity, equality, non-discrimination, and privacy at each stage of the AI life cycle.

It will recommend the use of human rights impact assessments as a way to get ahead of AI deployments and mitigate potential problems early.  And lastly, it will promote transparency, looking at disclosure of how AI makes its decisions.

Taking a step back, and to illustrate the moment we're in: AI diffusion is quite uneven across the world.  This looks at the market for AI, which is concentrated in Asia Pacific, North America, and Europe, but there is still a lot of opportunity for growth in the Middle East and North Africa, where a lot of DCO Member States currently reside.

So, this is an important moment to get involved at an early stage.  On the governance side, DCO sees seven different areas where global best practice can be leveraged to advance AI governance.  The first looks at institutional mechanisms: ultimately, how do nation states govern artificial intelligence within their jurisdictions?

Do they develop an AI regulator?  Do they do it sector by sector?  These are live questions across countries right now.  How are they going to plan for that at a government level?  Is there an AI strategy or an AI policy that helps dictate different stages?

And then beyond AI specifically, where are they in policy readiness?  Cybersecurity frameworks, privacy frameworks, intellectual property; a whole range of different areas that impact AI and are important to consider.

And then, shifting beyond the government-specific pieces: how do you build an innovation ecosystem?  On the government side, can you foster investment and entrepreneurship in AI?  And how do you build a culture around that?

And how do you do that in a way that also brings in that diversity of participants and voices?  That's really critical to getting it right.

The sixth area is future-proofing the population, and by this we mean getting a population ready for AI.  There are going to be displacements in the workforce and new educational requirements, and countries have to address those as they build AI into their societies.

And then lastly, international cooperation is fundamental; that's why we're here at IGF today.  There are a lot of processes under way to allow international collaboration to happen, and being a part of that is important.

I think some of the findings across DCO Member States are interesting in the sense that it's a unique grouping of different types of nation states, and we see widely varying levels of AI governance across it.  That's not unexpected when you have both regionally diverse and economically diverse countries within a single group.

And that's, I think, reflective of the situation we face globally.  It feeds into the diverse definitions of and approaches to AI, and it also feeds into the potential for further engagement and international cooperation within the DCO's membership itself, as well as in events and engagements like this one.

We are building a view around the generic ethical considerations of AI, but our conversation today is to help us think about whether we are getting it right.

And right now, there are very limited recommendations and practical measures to address human rights among DCO Member States.  So this tool and this exercise are part of creating that, for DCO and potentially beyond.

I'm going to walk through these ethical principles very quickly and then pass it to my colleague, Matt, to pick up the tool itself.  The ethical principles that govern this tool are sixfold.

The first deals with accountability and oversight: we want to ensure there's clear responsibility for AI decision-making, addressing gaps in verification, audit trails, and incident response.  The second is transparency and explainability, as already discussed: clarity in how these systems make decisions is important, and you don't want complexity to undermine the user's understanding.

We have fairness and non-discrimination as our third principle -- (Audio fading out) Privacy is also a concern: as our uses of different technologies now feed the AI ecosystem, we have to make sure safeguards are in place that respect privacy rights.

The fifth is sustainability and environmental impact.  I was on a panel right before this one where they talked about how AI is going to require the equivalent of another Japan in terms of energy use.  That's going to put a strain on resources, and we've got to address that; the development of AI has to align with environmental goals.

And then lastly, it's got to be human-centred.  It's got to look at social benefit, ensuring that AI meets and aligns with societal needs.  I'm going to pass it to Matt; he can walk you through the tool itself in a little more detail, and then we'll pick up the exercise.

>> The six principles are based on expansive research, which we have tried to distil into these six areas of focus.  This is a brief description of the tool that we've developed.

(Audio faint and unclear) (Audio difficulties) A human rights approach which maps AI systems to look at universal human rights -- (Audio technical difficulties)

>> So yeah, if you don't mind, just use the QR code to answer a couple of quick questions.  Once you've answered those two questions, we have a breakout activity, which is designed to help you understand the logic of the AI ethics tool that we've developed.  Kevin, I think, will be handing out worksheets that you can fill in.  There will be different AI risk scenarios, and the idea here is to review the framework that we presented for the AI ethics evaluator tool and then identify two ethical risks related to the scenario that you're given.

Then do a scoring exercise where you score both the severity and the likelihood of the risks you've identified.  You can pick two of the principles that are relevant to your particular scenario.  You score the severity and likelihood -- there are definitions on the worksheet -- and calculate a combined impact score for each risk, which lets you rank the risks from most to least critical.  And then you develop actionable recommendations.

Try to come up with two recommendations for the two risks for developers.  And this whole exercise should take 15 minutes.
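To make the worksheet arithmetic concrete, here is a minimal sketch in Python.  It assumes severity and likelihood are each rated 1 (low) to 3 (high) and that the combined impact score is their product, which matches the scores the groups report below (medium x high = 6, high x high = 9); the exact worksheet definitions are not reproduced here.

# Minimal sketch of the worksheet scoring described above.  Assumption:
# severity and likelihood are each rated 1 (low) to 3 (high), and the
# combined impact score is their product -- consistent with the scores
# the groups report below.

SCALE = {"low": 1, "medium": 2, "high": 3}

def impact(severity: str, likelihood: str) -> int:
    """Combined impact score for one identified risk."""
    return SCALE[severity] * SCALE[likelihood]

def rank(risks: dict[str, tuple[str, str]]) -> list[tuple[str, int]]:
    """Rank risks from most to least critical by impact score."""
    scored = {name: impact(sev, lik) for name, (sev, lik) in risks.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# Example using the first group's content-moderation scenario:
example = {
    "Transparency and user rights": ("medium", "high"),
    "Inadequate human verification": ("high", "high"),
}
for name, score in rank(example):
    print(f"{score}: {name}")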

>> Sorry to put you through this.  We intended to make this an interactive discussion, and selfishly we wanted your input brainstorming some of these scenarios.  I do apologize in advance to the organizers for the chairs, but I think we should convert how we are sitting into three breakout groups and move the chairs around.

And let's go through this exercise so we can have a more interactive discussion.

So, we are well within the time for the session.  We have half an hour to go, so let's take 15 minutes to go through this exercise, and then we would love to hear your thoughts.  Thank you.

>> And guys, I know this seems daunting.  It is not, I promise.  I did it myself last week.  It's actually kind of fun and gives you a real sense of how to put yourself in the mindset of assessing AI risk.  So, we are thinking maybe this side of the room could be one group, and then we'll split this side of the room in two: those of you in the back one group, those in front another.

We've got these sheets that my colleague is going to start passing out.  We'll hand out one set on this side and one set there and one set here.

Happy to go around and check in with you guys as we take this forward and see how we can actually pull this together.

>> Feel free to rotate your chairs.

[Breakout groups]

>> I think we actually need to wrap up.

So, if you could just nominate one person to present the main results, that would be great.

>> If we could spend one minute for each group presenting their results and we can provide some feedback.

>> Hi, everybody.  Our group had social media scenarios -- (Speaker off microphone)

>> Hi, our scenario is a social media platform using AI to identify and remove harmful or inappropriate content.

We had transparency and user rights as a key risk area.  We saw the severity of that as medium but the likelihood as high, so a rating of 6 on impact.  We saw negative impact on wellbeing as another risk category, also medium severity and high likelihood like the first one, so another 6.

And then we thought the last couple of risks were more problematic.  First, inadequate human verification: if content is getting taken down across the platform, we think the severity of that is going to be high, and the likelihood is going to be high.  That's going to be a very high-risk category.

And across a whole range of fairness categories, I think one of the key questions is: how do you determine what is inappropriate content on a platform?

The use of discriminatory proxies is going to be an issue.  That's high severity and high likelihood, so high risk.  The discriminatory impact on vulnerable groups: same thing.

Working backwards, then, the recommendations we had: you're going to need validation testing of these thresholds to understand what is correct for your platform.  So validation and testing are one remediation measure, along with continuous evolution to improve them.

For the inadequate human verification, we saw that you have to have humans in the loop.

And then for the last one, we really -- that's what we have.

>> Thank you very much.  We have 30 seconds, and I'll give the mic to you.

>> So, our case is the use of AI for diagnosis of critical emergencies.  The first risk was related to explainability: since we are talking about critical priorities, incorrect answers can cause issues here.

Also, discriminatory issues, more specifically we talked about gender-based discrimination.

And privacy risks, like data leaks; this is very highly sensitive data.  For scoring, we gave the first one a 6 in the end, discrimination a 4, and privacy, which we think is the most sensitive here, a 9, because from there many other issues may follow.

>> Let's jump to the last group.

>> And following this, our recommendations: for explainability, we suggest documentation and reporting; for the discriminatory impact, monitoring plus validation and testing.

And for the privacy, we suggest data management.

>> Let's have 30 seconds here.

>> So, our scenario is a multinational corporation that is deploying an AI system for screening job applications, using historical data to rank the candidates based on predictions.  For us, we thought the risk is on fairness and non-discrimination, because we are looking at it from the perspective that, historically, people in the engineering field are mostly white males, and now you're using that historical data to make an assessment of applicants who may look like me.

We said fairness and non-discrimination is a risk, especially discriminatory impact on vulnerable groups.  And the scoring was quite high: likelihood 3, severity 3.  Everything quite high.

>> Thank you so much.  Before we are kicked out, I will pass the mic to our chief of digital economic foresight at the DCO for some closing remarks.  Sorry to rush through the whole thing.

>> Hello, everyone.  I was honoured to join this session, and I have seen a lot of amazing conversations.  At DCO, as our name says, we are the Digital Cooperation Organization: we believe in a multistakeholder approach, and we believe this is the only approach that will help accelerate digital transformation.

And the ethical use of AI is an important topic, because AI is now one of the main emerging technologies offering advancements and efficiencies in the digital transformation of government and other sectors.

This is why it was very important for us as the Digital Cooperation Organization to provide actionable solutions that help countries, and even developers, make sure that whatever systems are being deployed have had the right risk assessment from a human rights perspective.

And we want that tool to be available for everyone.  This is why we wanted to have this session: to get your feedback and really understand whether what we are developing is on the right track.  Thank you so much for being here, for allocating the time and effort to join this discussion and provide your valuable input.  We are looking forward to sharing the final deliverable, the AI ethics tool, with you pretty soon.

And hopefully, together, we are building a future to enable digital prosperity for all.

Thank you very much for your time and for being here.

>> Thanks, everybody.  We also just put this up: if you want to provide feedback on this session, we certainly welcome it.  Take a picture; it shouldn't take long.  Thank you all, and we really appreciate your participation.