IGF 2024-Day 2-Workshop Room 5- WS236 Ensuring Human Rights and Inclusion- An Algorithmic Strategy-- RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> MODERATOR:    Welcome, everybody.    We're just about to start our session on Ensuring Human Rights and Inclusion: An Algorithmic Strategy.

We have two of the speakers on site and two of them online. I would like to welcome you all. We will begin with opening remarks and then move on to questions. So I'd like to request Monica to start with her opening remarks along with her introduction, and then we will likely go for a second round of questions.

>> MONICA LOPEZ: Okay. Yes. Can you hear me okay? Yes? All right. Well, first of all, thank you to the forum organizers for continuing to put together this summit on really critical issues related to digital governance. I'm excited to be here online and I want to thank Paola Galvez for bringing us together in person and virtually. I'm Monica Lopez and I have been working in the intersecting fields of human intelligence, machine intelligence, human factors, and systems safety for 20 years now.

I'm an entrepreneur and the CEO and cofounder of Cognitive Insights for Artificial Intelligence, and I essentially work with product developers and organisational leadership at large to develop robust risk management frameworks from a human centered perspective. I'm also an artificial intelligence expert working with the Global Partnership on AI. So I certainly do recognise many, many individuals.

As for my contribution I really do hope to complement the group here.    I'm coming from the Private Sector perspective.

As we know, in today's rapidly evolving digital landscape, algorithms have essentially become the invisible architects, as perhaps we could call them, of our social, economic, and political experiences.

And so what we have are very complex mathematical models designed to process information and make decisions, many times fully automated, that now essentially underpin every aspect of our lives, as we all well know at this point, from job recruitment to financial services to communal services and social media interactions.

So this promise of neutrality masks a reality, one where algorithms are not objective but are instead reflections of the biases, inequities, and prejudices across our societies, essentially embedded into their design and training data.

This, as we all know as well, has direct human rights implications: the effects of algorithmic bias are profound at this point and really far reaching. These systems essentially perpetuate and amplify existing inequalities and are creating digital mechanisms of exclusion that systematically disadvantage marginalized communities.

Very quickly, before we get into why we need a human centered perspective on this, some very clear examples you may be familiar with already: facial recognition technology, or FRT, has demonstrated significantly higher error rates for women and people of colour, and we continue to see that problem; and AI driven hiring algorithms have been shown to discriminate against candidates based on gender, race, and other protected characteristics.

I'm based in the United States, where we have seen algorithmic tools in the criminal justice system continue to perpetuate racial biases, leading to more severe sentencing recommendations for Black defendants compared to white defendants with similar backgrounds.

So essentially, why do we have this?    The root of these challenges really lies in the fundamental nature of algorithmic development.

And we know that machine learning models are trained on historical data that inherently reflect, as I mentioned earlier, these societal biases, power structures, and systemic inequalities. I want you to take a moment right now to consider what a data point even means, and how a single data point has limits. Those of you who work closely with data on a daily basis, and by that I mean whether you're collecting it, cleaning it, analysing it, or drawing conclusions from it, know that the basic methodology of data is such that it systematically leaves out all kinds of information.

And why? Because data collection techniques have to be repeatable across vast scales and they require standardized categories. And while repeatability and standardization make data based methods powerful, we have to acknowledge that this power comes at a price: it limits the kinds of information we can collect.

So when these models are then deployed without any sort of critical examination, they don't just reproduce existing inequities, they actually normalize and scale them.

So here is where I would argue, and I know the rest of the panel will continue to discuss this, that a human centered approach to algorithmic development offers a critical pathway at this point to addressing these systemic challenges.

And essentially, what this means is that we need to reimagine technology as a tool for empowerment and well being instead of a tool for exclusion.

So in this regard, prioritizing human rights, equity, and meaningful inclusion in every single step of technological design and implementation across the entire AI life cycle is essential. I work with a lot of clients, as I mentioned earlier I am in the Private Sector, and there are key strategies that are very clear right now that we know can advance this human centered approach. We need comprehensive diversity across algorithmic development. I'm sure you've been hearing that a lot, but the problem is that the transformative change has not really begun. We know that if we diversify teams we do get more responsible development of algorithmic systems. We do get new perspectives at the table. I would say that's absolutely essential no matter what moving forward.

The second element is rigorous auditing and transparency. That's another element we have seen. It is now, in fact, in part a requirement related to the European Union's AI Act. But what we need is to see this across the three perspectives of equality, equity, and justice.

This is not just for big tech companies to be engaging in.    This is truly for everyone.

And we know that, irrespective of emerging legal requirements in some jurisdictions, and in some where there isn't much work happening on the legal side, all organisations must implement mandatory impact assessments for potential discriminatory outcomes before deployment and then continue to monitor those as models drift.

I have noticed when companies do that, no matter the size, we do see better outcomes.

The third is proactive bias mitigation techniques. There are all sorts of technical strategies for that. Some of them are based essentially on what I was mentioning earlier: you really need to think about what the data means. Careful curation of the training data. We need to make sure it truly is representative and balanced across the datasets. It does matter and it does change outcomes.

Implementation of fairness constraints. Also the development of testing protocols that specifically examine the potential for discriminatory outcomes. We know that when you identify that beforehand and actually look for it, you will see it, and you can actually mitigate it and improve on the issue.
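A minimal sketch of the kind of testing protocol being described, assuming a simple binary decision log with a single protected attribute; the record fields and the 0.8 ratio threshold (a common rule of thumb) are assumptions for illustration, not a standard named in the session:

```python
# Illustrative pre-deployment check: compare selection rates across protected groups.
# The "group"/"selected" fields and the 0.8 threshold are assumptions of this sketch.
from collections import defaultdict

def selection_rates(records):
    """Share of positive decisions per protected group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        positives[r["group"]] += int(r["selected"])
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_flags(records, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the highest rate."""
    rates = selection_rates(records)
    best = max(rates.values())
    if best == 0:
        return {g: False for g in rates}  # no positive decisions at all
    return {g: (rate / best) < threshold for g, rate in rates.items()}

# Hypothetical decision log from a model under test.
decisions = [
    {"group": "A", "selected": True}, {"group": "A", "selected": True},
    {"group": "A", "selected": False}, {"group": "B", "selected": True},
    {"group": "B", "selected": False}, {"group": "B", "selected": False},
]
print(selection_rates(decisions))        # group A ~0.67, group B ~0.33
print(disparate_impact_flags(decisions)) # group B flagged for review
```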

The fourth element is of course the classic need for legal and regulatory frameworks, and here I can't stress enough at this point that governments and international bodies have to truly come up with comprehensive regulatory frameworks that treat algorithmic discrimination as a fundamental human rights issue. From a business perspective, what this means is that there need to be clear legal standards for algorithmic accountability.

There also need to be very clear mechanisms for individuals to be able to challenge algorithmic decisions.    There certainly are not enough.    And even in some cases where we have the requirement for companies to actually put on their website their auditing results, that is still not enough.

And then of course we need significant penalties for those systems.

And then the last issue, which is the fifth, is that we need ongoing community engagement. I cannot stress enough that inclusion does matter, and it requires continuous dialogue with the communities most likely to be impacted by algorithmic systems.

And it is not an easy task. It's a lot to ask for. But we know, and I've seen it with companies, that it works when they make concerted efforts to create participatory design processes across the AI life cycle. That essentially means you're establishing relevant feedback and communication mechanisms as you create and design these systems, as you pilot them, and as you work with those individuals.

And then you essentially end up empowering marginalized communities to actively want to provide their input, because it is of value.

So what I'm calling for here, essentially, to conclude, is a fundamental reimagining of technology. We know these algorithms either perpetuate or challenge these power structures. So if every one of us looks at this when we design for today, I think we will shape the future of human rights in the digital era in a positive way. I know we'll discuss this in more detail and I look forward to your questions. Thank you for listening.

>> ANANDA GAUTAM: Thank you, Monica, for all your thoughts. I'll now go to Paola to give us her introduction; let's wrap up within five minutes and then we will likely go for our second round of questions. Monica, I think we will come back to you in the second round. So Paola, over to you.

>> PAOLA GALVEZ:    Thanks, Ananda.

Thank you so much for joining us, everyone, for this very, very critical conversation. I would like to pose a question: what does it take to make society more inclusive? I was inspired by my grandfather, who was a judge in Peru and who spoke about the social disparities he witnessed. I went to law school talking about inequalities and social disparities, but my first year lacked inspiration. I think I was disconnected from real world problems, but my perspective really changed in 2013 when I began an internship at Microsoft. I was looking at the...(microphone volume low)...that helped the visually impaired perceive their surroundings, and it really showed the profound impact this technology can have for social good. So I really felt, as a lawyer, a calling to help with inclusion, and I think public policy can help drive human centric and evidence based policy. That's when my commitment to a more inclusive society started to take shape, and I think that's what led to the path I'm on now and beyond.

So I worked in the Private Sector for a long time. I was in a similar position to Dr. Monica Lopez, who was mentioning how the Private Sector is. Then I received a position from the government to work there and help them with the national AI strategy and (?) Most of my friends told me, you're going to be so frustrated, you're used to Microsoft, to big tech. But I said, no, I can actually bring in and shed light on disruptive voices in government. So I decided to do it. I'm a firm believer in participatory processes, so the first thing I did was form a committee to develop this policy. And we're here at IGF, at the Internet Governance Forum, talking about AI and data at a global level. I have seen first hand the experience of bringing Civil Society, academia, and the Private Sector together to find solutions to challenges, and I think one of the most challenging areas is AI policy.

I do believe that protecting democracy, human rights, and the Rule of Law, and establishing clear frameworks on AI, is a responsibility that a government alone cannot carry, nor a Private Sector company, nor academia. It is an endeavour that must be taken on in a Multistakeholder approach. I do think that the Multistakeholder approach is crucial in this, and so is Civil Society. The youth must be included, and youth engagement is a critical area that we need to protect now. That is what I believe, and in these remarks I wanted to mention it, because I do see generative AI producing fake and biased content, large language models bringing polarization, and poorly designed AI powered technologies that discriminate against youth with disabilities, and we have the expert here, so I will just mention that. But apart from that, I sincerely believe AI holds immense potential as a technology if we use it wisely. AI systems break down language barriers. I mean, if IGF is as powerful as it is, and the Youth IGF and youth Internet Society have more than 2,000 youth connected, we sometimes use technologies powered by AI, and that is powerful.

Sadly, AI has yet to live up to its potential. Dr. Monica Lopez, with whom I absolutely agree, mentioned most of its challenges. AI is reproducing society's bias. It is deepening inequalities. I heard someone saying, but that's just the way the world is. The world is biased, Paola, what do you think? That's what AI is going to do.

And yes, that's true, but I only agree up to a point, because it depends on how we want to develop this technology and what this technology will provide as output, because data is the oxygen of AI and transparency should be at its core. So it's up to us to shape the future of AI now, to talk about data that should be more representative. And the focus of the IGF on bringing youth into the discussion is something I really want to congratulate, because we have a big youth community at this IGF. So I'm really looking forward to this discussion. Back to you, Ananda.

>> ANANDA GAUTAM: Thank you so much, Paola, for touching on how powerful AI can be; your work with the government in bringing a Multistakeholder community together was a great endeavour.

I would now like to go to Yonah Welker. I'll give you five minutes to introduce yourself and touch on what Paola and Monica have said.

>> YONAH WELKER: Yes. Thank you so much. It's a pleasure to be back in Riyadh. Three years ago I had an opportunity to curate the Global AI Summit on the good of humanity, and we have continued this movement. I'm a visitor from the Massachusetts Institute of Technology, but I'm an ambassador from the EU region, and my goal is to bring all these voices and ideas into actual policies, let's say the EU AI Act or its code of practice.

And today I would specifically love to address how it may affect multiple vulnerable groups, as Paola mentioned, those with disabilities. And that's why I would love to quickly share my screen.

Hopefully you can see it.

So 28 countries signed the agreement on AI safety, including not only Western countries but also countries of the Global South: Nigeria, Kenya, Saudi Arabia, and the UAE. Currently, 1 billion people, 15% of the world, are living with disabilities, according to the World Health Organization. It's important to understand that sometimes these disabilities are invisible, let's say neurodisabilities. One in six people is living with neurological conditions. It's actually a very complex task to bring all these things into the frameworks. For the EU, we have a whole combination of laws and frameworks. We address taxonomies in the disability acts. We're trying to address manipulation and addictive design at the level of the AI Act, the Digital Services Act, and the GDPR, and we're trying to understand and identify high risks for systems related to certain critical infrastructure, transparency risks, and prohibited uses of affective computing. Still, it's not enough, because we need to understand how many systems we actually have, how many cases we have.

For instance, for assistive technologies, we have over 120 technologies according to the OECD report.

We use AI for smart wheelchairs, walking sticks, and other tools; for hearing impairment, computer vision that turns sign language into text; and for cognitive disabilities like ADHD, autism, and others.

But we also need to understand all the challenges common to AI, including recognition errors, for instance individuals with facial differences not being properly identified by facial recognition systems, as was mentioned by my colleague.

Or cue identification errors, where individuals can't hear or see the signal. Or when they deal with excluding patterns or errors of inclusion from generative AI or language based models. We also have all the complexity driven by different machine learning techniques.

Supervised learning, which is connected to errors introduced by humans.

Unsupervised learning, which carries over all the errors and social disparities from history.

Or reinforcement learning, which is limited by its training environments, including in robotics and assistive technologies.

And finally, we should understand that AI is not limited to software. It's also about hardware and the human centricity of physical devices. It's about safety, motion and sensory integration, the dynamic nature of real life practices,

and the production and training cycle.

So overall, working on disability centric AI is not just about words. It's extremely complex: building environments with a multisensory and multimodal approach. We have to identify areas of use and misuse and all the different types of design.

So that's why oversight should include all these parameters. We talk not only about a risk based approach but also about understanding different scenarios, workplaces...working with the UN,

UNESCO, and the OECD, and finally we try to understand the intersectionality of disabilities, thinking about children and minors, women and girls, and all the complexity of the history behind these systems and their contexts.

Thank you.

>> ANANDA GAUTAM: Thank you, Yonah, for your wonderful presentation on how AI can be used in assistive technologies. There are challenges where even a very minor issue cannot be accepted, not even at a minimal level, as in the health care system. We will come back to you on this question.

I'll ask Abeer to talk about herself.

>> ABEER ALSUMAIT: Thank you. Hello, everyone. It's a privilege to be part of this discussion, and I'd also like to thank Paola for initiating this and kick starting it, and to thank the rest of the panel and the moderators as well as the event organizers. I'm a policy expert with over a decade of experience in cyber security. I work in the Saudi government and hold a master's degree and a bachelor of science in computer information sciences. My interest lies in shaping inclusive and sustainable digital policies that drive innovation and advance the digital economy.

I would like to briefly start the conversation by mentioning examples that show how algorithms and AI, while promising efficiency and innovation, have the power to replicate and amplify inequalities when not governed responsibly.

The first example I would like to mention is from France, where the welfare agency used an algorithm to detect fraud and errors in welfare payments. And this algorithm, while in intent a wonderful idea, in practice ended up impacting specific segments of the population, marginalized groups specifically, single parents, and individuals with disabilities, far more than any others.

And it flagged them as high risk far more frequently than other beneficiaries of the system.

The impact on those individuals was profound, leading to investigations, stress, and in some cases even the stopping of benefits.

So this year, a coalition of human rights organisations launched legal action against the French government over this algorithm used by the welfare agencies, arguing that it actually violates privacy laws and anti discrimination legislation.

So this case reminds us of how risks can be inherent in some opaque systems and maybe poorly governed AI tools.

Another thing to highlight is in the health care sector: a 2019 study from Pennsylvania University highlighted an AI driven health care system that was used to allocate medical resources for a little over 200 million patients. That system relied on historical health care expenditure as a proxy for health care need. The algorithm did not consider the systematic disparity in health care access and spending in society at that time, and it ended up resulting in Black patients being up to 50% less likely to be flagged as needing enhanced care than their white counterparts.

So this was supposed to streamline things, but it ended up deepening mistrust and disparities in AI and health care overall.

So these examples show how algorithms can amplify existing injustices and exclusion, often impacting the most vulnerable populations.

These challenges and issues have led to actions by governments at the international level, like the EU AI Act mentioned earlier, from this year. It classifies AI systems based on risk, with welfare and health care as areas of high risk where very high standards of transparency, equality, and human intervention are required.

A lot of nations and governments have followed suit, I believe. One example is here in my country, Saudi Arabia, where the Saudi Data and Artificial Intelligence Authority, established a few years ago, recently adopted AI Ethics Principles that emphasize transparency, fairness, and accountability. Therefore, I believe governments play a very important role: while every actor and every player is really important in this discussion and conversation, governments have critical roles in regulating, establishing responsibility, and advancing the way forward for AI adoption in an equitable and fair way.

Thank you.

>> ANANDA GAUTAM: Thank you. I'll come back to Dr. Monica. What could be the role of the Private Sector? What are the major steps the Private Sector could take, along with other stakeholders, to overcome these biases? And are there any best practices that could be shared? I'll ask you to wrap up fairly soon. Thank you.

>> MONICA LOPEZ: Yes. Absolutely. Thank you. Thank you for that question. I know I briefly mentioned some of them, but I'll highlight a few now. The first one, one that is starting to happen, but not to the extent that I believe it should, is the whole question of diversity in teams. Again, we hear this a lot. We hear that we need to bring different perspectives to the table. But at the end of the day, unfortunately, I have seen even startups and small and medium sized enterprises make the argument, we don't have enough resources, we can't. And they actually do. Sometimes it's as simple as bringing the very customers or clients that they intend their product or service to serve into the discussion. So I would say that's one very key element, and we just need to make that a requirement at this point. It needs to essentially become a best practice, frankly, at this point.

The other one is bias audits. We are seeing across legislation the requirement that one must now comply with providing audits for these systems, particularly on the topic of bias, to ensure they're nondiscriminatory and non biased. So that's a good thing. But the problem ends up being that we haven't yet standardized the type of documentation, the type of metrics, and the type of benchmarks. So that's the conversation, not just in the Private Sector but in academia. I am in communication and work with individuals from IEEE and ISO who set the industry standards, and this is a very big topic of debate right now: how do we standardize what these audits should look like?

And how do we make sure that we not only standardize that but also have the right committees in place, experts, who can then review this documentation?

So I would say that, while extremely important, this sometimes does become a barrier of sorts, precisely because individuals, or rather organisations and companies, don't know what needs to be put into these audits.

So that's the second element.

The third and final point here is the whole issue of transparency and explainability of these systems. We've heard many, many times about the black box nature of these systems, but to be quite honest we know much more about these systems than that. Developers do know the data that is involved. We do make mathematical assumptions. So there's a lot of information at the very earliest stages of data collection and system creation that we have access to.

And we're not necessarily being very transparent about that in the first place.

So I would say that in and of itself is extremely important, but it is also becoming a type of best practice, because if you can establish that from the beginning, it has downstream effects across the entire AI life cycle, which then becomes extremely important when you start integrating a system. Let's say you have a problem, a negative outcome, someone ends up being harmed; then you can essentially reverse engineer back, again, if you have that very clear transparency established at the beginning.

We are starting to see some good practices around that, particularly around model cards and nutrition style labels, especially in health care. An example was given in health care, and I do a lot of work in the health care industry. There's a very big push right now to standardize and normalize nutrition style labels for AI model transparency, which I think should then be utilized across all systems, frankly, at this point, across all contexts and domains.
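A minimal sketch of what such a nutrition style model card might contain, with field names assumed purely for illustration rather than drawn from any formal standard:

```python
# Illustrative "nutrition label" style model card; field names are assumptions of this sketch.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    out_of_scope_uses: list[str]
    training_data_summary: str
    evaluation_metrics: dict[str, float]            # overall metrics
    subgroup_metrics: dict[str, dict[str, float]]   # metrics broken out by group
    known_limitations: list[str] = field(default_factory=list)

    def to_json(self) -> str:
        """Serialize the card so it can be published alongside the model."""
        return json.dumps(asdict(self), indent=2)

# Hypothetical card for a health care triage model.
card = ModelCard(
    model_name="triage-risk-v1",
    intended_use="Prioritize follow-up outreach; not a diagnostic tool.",
    out_of_scope_uses=["Fully automated denial of care"],
    training_data_summary="Claims data, 2015-2019, single health system.",
    evaluation_metrics={"auc": 0.81},
    subgroup_metrics={"group_a": {"auc": 0.83}, "group_b": {"auc": 0.74}},
    known_limitations=["Expenditure used as a proxy for need may understate actual need."],
)
print(card.to_json())
```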

Thank you.

>> ANANDA GAUTAM: Thank you, Dr. Monica. So I'll go to Paola. Paola, you have already worked on the AI readiness assessment for your country, and countries and regions are making declarations. How can these be transitioned into action, based on your experience, please?

>> PAOLA GALVEZ: Sure. We've seen so many commitments already, so it's a great call. Thank you for the question. I'd say, first of all, we need to start by adhering to the international frameworks on AI. Countries that have not adopted them will be left out. That ensures alignment and best practices and also allows local businesses to connect with this. Second, when formulating national AI policies, we, governments, need to develop stricter, meaningful public participation processes. This means receiving comments from all stakeholders, but it's not only that, because that already happens a lot in my country, I can tell you. By law they need to publish within 30 days, and the second act of the AI law was published, but for meaningful participation we need governments to say how they took these comments into account and, if they are not considering them, why. I believe that citizens, Civil Society organisations, and the Private Sector need to know what happened after they commented.

Third, accessibility: any AI policy material must be readily accessible, complete, and accurate for the public.

Then, independent oversight, I think, is a must, Ananda: creating and designating an independent agency. The Saudi Data and AI Authority, I think, is a very good example. Sometimes governments have a challenge with this because they say, oh, it's a lot of effort, people, resources. Right?

But if it's not possible to have a new one, then let's think: maybe the Data Protection Authority can take on AI capacities, right?

Also, and I think this cannot be left behind, AI skills development. That's a must. We can have the best AI law, but if we don't help our people understand what AI is, how to read its outputs and know that AI can hallucinate, we will be lost. So AI education for the people is a must.

Just to finish, as always, everything I've said should be seen through a gender lens. Gender equity and diversity in AI is a must; it's not yet looked at the way it should be. You mentioned that I conducted the AI readiness assessment with...UNESCO, and I'm proud to say the UNESCO Recommendation on the Ethics of AI is the only document at the moment that has a chapter on gender, and it must be reviewed because it's very comprehensive and has practical policies that should be taken into consideration and put into practice.

And of course, environmental sustainability in AI policy should be considered. It's often overlooked.

What is the impact on energy? Should we promote energy efficient AI solutions? Definitely. Minimise the carbon footprint? Of course. And foster sustainable practices, because there is data showing that when you send a question to a large language model, as we all know them, ChatGPT, Gemini, et cetera, it's the same consumption as an airplane flying in a year from Tokyo to New York. So we should be thoughtful about what we are sending to AI, or maybe Google can do it for us, too.

Thank you.

>> ANANDA GAUTAM:    Thank you, Paola, for your strong thoughts.

I'll come back to Yonah. You have mentioned AI in assistive technologies, so now I'll come to how legal frameworks can complement assistive technologies while protecting the vulnerable populations that use those technologies. You have briefly underlined that in the case of assistive technologies. Over to you, Yonah.

>> YONAH WELKER: Yes. First of all, we have a few main elements of these frameworks. The first one is related to taxonomies and repositories of cases. Here, I would love to echo my colleagues Dr. Monica and Paola: we actually need to involve all of the stakeholders, for instance by cooperating with the OECD and other organisations to understand the existing barriers to access to these technologies, their affordability, accessibility, energy consumption, safety,

and adoption techniques.

That's the first thing.

Second is accuracy in digital solutions. One of the lessons we learned in both the EU and the MENA region is that we can't just localize OpenAI or Microsoft solutions, but we can build our own solutions. Sometimes not large language models but small language models, not with 400 billion parameters but maybe five or ten or 15 billion parameters.

Or models for more specific language purposes. For instance, for Hungarian and many other non English languages, it just doesn't work well, from a practical perspective and also from a scientific research perspective.

Another thing is dedicated safety models: sometimes we can't fix all the issues within a model, but we can build solutions which track or improve existing systems. For instance, currently, for the Commission, I evaluate a few companies and technologies which address privacy concerns, compliance with the GDPR, data leakages and breaches, and also online harassment, hate speech, and other parameters.

This is also complemented with safety environments and oversight. It's the job of the government to create so called regulatory sandboxes, a kind of specialized centre where startups can come to test their AI models to make sure that, on one hand, they are complying, and also that they have actually built safe systems.

This specifically relates to areas of so called critical infrastructure: health, education, smart cities, and, for instance, Saudi Arabia is known for its so called cognitive cities. All of this is part of our work when we try to build efficient, resilient, and sustainable solutions.

And finally, cooperation with intergovernmental organisations. For instance, we work with UNICEF on frameworks for digital solutions for girls with disabilities. We work with UNESCO on AI for children. So we're trying to reflect more specific scenarios, cases, and adoption techniques related to specific ages, let's say eight to 12 year olds, or specific regions, or specific genders, including both the specifics of adoption and safety considerations, and even unique conditions or illnesses which are very specific to particular regions.

For instance, we see very different statistics related to cognitive and sensory disabilities when we compare the MENA region and the EU. So it's a very complex process.

As I mentioned, our policies have now become overlapping. Even for privacy, and even for manipulation and addictive design, we have an overlap not only in the AI Act but also in other frameworks: the Digital Services Act, the data regulations. So some essential pieces of our vision exist in different frameworks.

So not everyone, not even all governmental employees, is aware of it.

And the final thing is AI literacy and adoption. So we're working to improve the literacy of the governmental workers and governors who will implement these policies and bring them to life.

>> ANANDA GAUTAM: Thank you so much, Yonah. I'll come back to Abeer. We have been talking about the complexity of making AI responsible. Making AI responsible demands accountability and transparency, yet we are seeing automated AI systems cause harm, as when an automated car kills a man in the street. This has very serious consequences, and there are other consequences as well. So how can governments ensure responsible AI while ensuring accountability and transparency? Kindly go ahead. Thank you.

>> ABEER ALSUMAIT: Thank you. So I think this question actually relates to what Dr. Lopez mentioned. The key words here are transparency and explainability. Of course, regulation and law establish responsibilities and make sure every actor involved in any event knows their role and when they are responsible, but the fact that they can explain and be transparent about how these systems work, and about how they affect other individuals, specifically vulnerable populations, is also really key.

And also, the Private Sector maybe knows more than we understand, but we're not very clear on how we want the transparency and accountability to work.

My thought is that governments should work hand in hand with the Private Sector to make this happen as soon as possible, establish responsibilities, and be clear on what it means to have transparency for AI and algorithms.

One thing I think governments should also focus on is establishing a right, a way, for individuals to challenge such systems and the algorithms impacting their lives. My view is that there should be continuous evaluation and risk assessment of how this is actually working in real life, in case any instance of bias or discrimination happens. There should be a clear way, a clear procedure, for governments and for individuals to start auditing and reviewing any systems that are impacting lives.

>> ANANDA GAUTAM: Thank you, Abeer. Maybe we'll come back to you after the questions. We'll take a question from the audience; I'll ask her to go ahead. Then we'll bring in discussions from the chat, if there are any questions online. We'll go to the questions. Over to you.

>> AUDIENCE: My name is Ze ma. I'm representing the Saudi Green Building Forum, which is a nongovernmental and nonprofit organisation that promotes green practices and decreasing energy emissions and energy consumption. Of course, it contributes to the digital transformation that the world is now witnessing, and for that I would like to participate and offer an idea from a critical perspective. As algorithms operate, they hold immense potential for our everyday lives, yet we face challenges of bias and exclusion.

So, as Dr. Monica said, they lack transparency, which of course perpetuates social disparities and exacerbates discrimination against marginalized communities. In the absence of proper scrutiny, they sometimes contribute to human rights harms instead of addressing them. So what should we do about that? We need to take action and call for greater transparency and accountability to ensure algorithms are open to scrutiny and include clear mechanisms for identifying and addressing biases.

Of course we need to integrate human rights into design, which means we need to develop human centered algorithms that prioritize the needs of marginalized groups.

Of course, we need to foster multilateral collaboration to engage all stakeholders, as you all mentioned, to ensure algorithms are fair and inclusive, considering diverse cultural and social dimensions.

Now, we recommend the following. First, we need to launch a global algorithmic transparency initiative, establishing an international platform to set standards for evaluating the impact of algorithms on human rights and transparency.

Second, design inclusive algorithms that prioritize accessibility, improve service delivery for People with Disabilities, and ensure greater transparency.

And finally, building the capacity of designers and makers to understand biases and address them directly.

>> ANANDA GAUTAM: Thank you so much. Do we have any questions on site? There are no online questions, I believe. While asking questions, please mention who you are asking so it's easier to answer. If the question is for everyone, please let that be known as well.

>> AUDIENCE:    Okay.    My name is Aaron Promise Mbah.

So I am very excited about this and I have a question. I would like Dr. Monica to help me address it. I understand you talked about algorithms helping businesses, and the kind of divide or risk that comes with that with respect to rights, and then Persons with Disabilities using social media and all of that.

And then there's the critical case, where I think Abeer also mentioned oppression and societal rights. So now you have someone clicking on Spotify to listen to music, maybe he's feeling down, and then after that you see Spotify recommending that kind of music. How do we address this?

And Paola also mentioned something about, sorry, let me get it: standardization, having a policy. Countries are making declarations, but how do we take action on this? She talked about ownership, right? Public participation. Now, when you talk about a particular policy: I'm from Nigeria. Nigeria has a lot of policies, even an AI policy. Right? We're always at the forefront of adopting; we look at other countries doing a lot of things, then we start doing our own, and then we have a lot of documents, and then there's no implementation and enforcement. Right? How do we ensure it's not just paperwork, that we don't just do all this, but that it's actually enforced and followed through with implementation? If you can share some of your insight about that, thank you very much.

>> MONICA LOPEZ: Thank you for that question. Very complex. You really touched upon many, many aspects, but something that really stands out, and perhaps Paola also mentioned this at one point, let me backtrack a second. Yes, everybody is talking about regulation. Everybody is talking about standards, and everybody is talking about how we need implementation and how we do enforcement. But I think part of the problem lies in the fact that we simply do not have enough public awareness and understanding. Because if we actually did have more of that, there would be more of a demand. And I see this in terms of, I mean, yes, we hear some very tragic examples. You mentioned someone who has depression and uses Spotify and then gets recommended different new types of music to apparently, quote, unquote, improve or fix, and one has to be careful with the words used here, to deal with that situation. And we've seen recent suicides from chatbot use because of the anthropomorphism of these systems. I think it goes back to the fact that maybe most users do not even understand these systems fundamentally. That's an education issue and an education system issue. If you know and understand, then you can critically evaluate these systems. You can be more proactive because you know what's wrong or you see the gap. You see what needs to be improved. I didn't mention this, but I'm also in academia. I teach at the school of engineering at Johns Hopkins University in the Washington, D.C. and Maryland region in the United States, and I teach courses on AI ethics, policy, and governance to computer scientists and engineers.

I love it when they come in at the beginning of the class with no awareness, and at the end they are absolutely more engaged and they all say, "we want to go and be those engineers who can talk to policymakers."

So to me that's very clear evidence: whether high schoolers, undergraduate students, or working professionals going back to school, whatever it is, I see this change. And it changes because of the power of knowledge. My main call here is really that we need far more incentivization to make everyone, at all ages, much more educated users. Then we're going to see that demand toward companies, and I really think there will be that demand, that we want to ensure our data is private, we want to ensure we're not being harmed, and we want to ensure we actually benefit from these technologies. I'll stop there. Yeah. Others can add to it, I'm sure.

>> ANANDA GAUTAM: Paola?

>> ANANDA GAUTAM:    Thank you, Monica, for your wonderful response.    We have only five minutes left.    So Matilda, is there any online discussion or question or contribution?    No?

If there is any question, please feel free.

And contributions are also welcome. We have five minutes. Please keep to the time so the speakers can respond.

>> AUDIENCE: Thank you, I'll be quick. It's been a great discussion. We do get the point that education is very much needed. In our work, I work in India, we realize that with specialized populations like judges and lawyers, it takes a lot of conversation and a lot of detail to get to a point where, even with something like bias, which judges work with daily, they start to understand what bias in an AI system might look like. So my question, I guess what I'm trying to ask, is: when something requires such specialized detail and understanding, then maybe the problem isn't with people being able to understand; maybe it's with the technology not being at a stage where it's readily or easily explainable for societal use. Frequently we keep having these discussions about whether there's a need to pause, especially with technologies like deep fakes, which everyone who does research in these areas knows are going to be, not primarily, but massively used for harmful ends. So is there any credence or currency to pushing for a pause at certain levels, or are we way past that point already and we just have to mitigate now? That's a small question. Sorry if it's a little depressing.

>> ANANDA GAUTAM: Thank you so much for the question. If there are any other questions, let's take them, and then I'll give each speaker one minute, and then we'll wrap up. Any questions or contributions from the floor? No? None from online?

So each speaker can have one minute to respond. If not, they can pass. Yeah. Okay. Just a one liner that you want to give for the wrap up. Thank you. Yeah, we can start with Abeer, maybe.

>> ABEER ALSUMAIT: I don't think we should be disappointed about it; I don't think it's really a depressing question. But are we way past that point? I don't think so. And should we pause? I also don't think so, to be honest. I think we can put more effort into making things more explainable and just bridging the gap. I think that's what every player should work towards. Those are my thoughts.

>> PAOLA GALVEZ: I also agree. Absolutely, we cannot pause, because if some group decides to do it, then others will keep competing. It's just putting a blanket over your eyes, so we cannot do it, but we can use what we have. If we don't have a data protection law or a national AI strategy, we need to push for it to happen, because if a country does not have that, an idea of how it wants this technology to develop, what is the future of our citizens? I pose this question to us all. Let's reflect on how we can contribute to the future of AI.

>> ANANDA GAUTAM:    Thank you, Paola.    Now Monica and Yonah, please.

>> MONICA LOPEZ: I would agree absolutely with both comments. We can't pause. We can't ban. That won't work. Absolutely. We're moving far too fast anyway at this point. But I would say where there's a will there's a way. So if we all come to agreement and acknowledgement, and I mean all of us, not just those of us here right now and our colleagues, but everyone, that we need to do this, then I think it's possible. And we need to act.

>> YONAH WELKER: Yes. I'm always on the positive side, because finally we have all the stakeholders together, and that includes the European Commission. I would love to quickly respond to the question of errors and the key word of "suicide." It's actually about awareness. Yes, if you know that recommendation agents use so called "stop words," and if you know how the history of these agents works, you can fix it, for instance through regulatory sandboxes. For emerging companies and startups that come into these centres, you can provide the oversight to fix these issues. The same with bias. Once you know that bias is not an abstract category but just a problem of under or overrepresentation, just a bigger error for smaller groups, a purely data driven and mathematical thing coming through society, you can clearly identify the issue. It can be a technical issue or a social issue. Once you see it, you can fix it.

That's why we now have these tools, regulatory sandboxes, policy frameworks, and all the stakeholders working together to come up with real life terms and understanding, and, finally, we can fix it together. Thank you.

>> ANANDA GAUTAM: Thank you, Yonah. Thank you to all of our panellists. Thank you, Paola, for organizing this, and thanks to all of our on site audience and audiences online. This is not the end of the conversation; we are just beginning. You can connect with our speakers on LinkedIn or wherever you are. Thank you so much, everyone. Have a good rest of the day.