IGF 2024-Day 3 - Workshop Room 3 - DC-DAIG & DC-DT Data and AI Governance from the Global Majority-- RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> All right.  I think we can get started.  Do we have our panelists already online?  Yes?  Fantastic.

So good morning, good afternoon actually, to everyone.  I think we can get started.  We have a very intense and long list of panelists today; these are only part of them, as we also have online panelists joining us, because we have a lot of coauthors for the book that we are launching today.  This session on AI and data governance from the global majority is organised by a multistakeholder group of the IGF, the Data and AI Governance (DAIG) coalition, together with the DC-DT coalition; we have merged our efforts.  The report is the annual report of the Data and AI Governance coalition, which I have the pleasure to chair.  And pardon my lack of politeness, I forgot to introduce myself: my name is Luca Belli.

I am going to briefly introduce the topic of today and what we are doing here, and then I will ask each panelist to introduce him or herself.  Because we have an enormous list of panelists, I might spend five minutes just reading their resumes.  So in the interest of time management, it is better if everyone introduces themselves; I will of course call on each of them.

Are you hearing well?  I see people nodding... yes?  Perfect.  All right.

So the reason for the creation of this group that is leading this effort on data and AI governance is to try to bring into data and AI governance debates the perspectives, ideas, problems and challenges, but also the solutions, of the global south, the global majority.  This is why this report is dedicated precisely to AI from the global majority.

As you may see, we have a pretty diverse panel here, and even more diverse if we consider also the online speakers.  Our goal is precisely to assess and gather evidence and engage stakeholders, to understand to what extent AI and data technologies can have an impact on individuals' lives; on the full enjoyment of human rights; on the protection of democracy and the rule of law; but also on very essential things like the fight against inequalities, the fight against discrimination and biases, the fight against disinformation, and the need to protect cybersecurity and safety.  All these things are explored to some extent in this book.

We also launched another book last year, on AI sovereignty, transparency and accountability.  Some of the authors of last year's book, at least, are also here in the room.  And all the publications are freely available on the IGF website.

Let me also state that the books we launch here are preliminary versions.  Although they have a very nice design and are printed, they are preliminary versions; the official publication with a publisher takes more time.  So the AI sovereignty book will be released in two months with Springer.  This one will be constantly updated, so if you have comments, we are here to receive your feedback and comments.

I had the pleasure to author a chapter on AI meets cybersecurity, exploring the Brazilian perspective on information security with regard to AI.  And this is actually a very interesting case study: an example of a country that, even though it has climbed cybersecurity rankings like the ITU cybersecurity index, being now the third most cyber-secure in the Americas according to that index, is at the same time in the top three of the most cyber-attacked countries in the world.

And this is a very interesting case study, because it means that even though it has formally climbed the cybersecurity index, having adopted a lot of cybersecurity regulations, in data protection, in the telecoms sector, in the banking sector, in the energy sector and so on, the implementation is very patchy and not very sophisticated in some cases.  So one of the main takeaways of the study, and I will not enter into details because I hope you will read it together with the others, is that adopting a multistakeholder approach is needed not to pay lip service to the idea that all stakeholders should join hands and find solutions, but because it is necessary to understand to what extent AI can be used for offensive and defensive purposes.

And to what extent geeks can cooperate with policymakers to identify the best possible tools, but also what kind of standardisation can be implemented to specify the very vague elements that we typically find in laws, like what is a reasonable or adequate security measure.

'Reasonable' and 'adequate' are favourite words of lawyers; they can mean pretty much everything.  If you don't have a regulator or a standard telling you what a reasonable or adequate security measure is, it is pretty much impossible to implement.

Now, I'm not going to enter too much into this; I hope you will check it out.  Let me try to give the floor to our first speaker, hoping everyone will respect the five minutes each, save those with a presentation, who will have three minutes per person.  Let's start with Ahmad Bhinder.

>> Thank you very much, Dr. Luca.  And I really feel honoured all around to be surrounded by such knowledgeable people.

My name is Ahmad Bhinder.  I represent the Digital Cooperation Organization, an intergovernmental organisation headquartered in Riyadh.  We have 16 member states, mainly from the Middle East and Africa, a couple of European countries, and countries from South Asia.

And we are in active discussions with new members from Latin America, from Asia, et cetera.  So we are a global organisation, and although we are a global organisation, the countries that we represent come from the global majority.  We are focussed horizontally on the digital economy and all the relevant digital economy topics, including data governance and AI governance.

These are very important to us.  So I will quickly introduce some of the work at a preliminary level, and then how we are actioning some of that work.  So on...

>> LUCA BELLI: Keep it like this.

>> Yeah.  Okay.

So yeah, we have developed two agendas.  One is the data agenda, and since data governance is the bedrock of AI governance, we have something on AI as well.

So very quickly: we are developing a tool for the assessment of AI readiness for our member states, a self-assessment tool.  We will make it available to the member states in a month's time.  It works across different dimensions of AI readiness, including governance.

But it goes beyond governance to a lot of other dimensions, for example capacity building and the adoption of AI.  That assessment is going to help the member states assess themselves, and it will recommend what needs to be done for the adoption of AI across their societies.

Another tool that we are working on is quite interesting, and I'm actually working actively on it.  I think a lot of what has been covered in the AI domain so far is coming up with ethical principles.

So there is a kind of harmonisation from a lot of multilateral organisations on what the ethical principles should be: for example, explainability, accountability, et cetera, et cetera.

We've taken those principles as a basis, and we have done an assessment for the member states of how AI, under those principles, interacts with basic human rights.  We've created a framework that I presented in a couple of sessions earlier, so I will not go into the details.  But we are looking, for example, at data privacy, which is an ethical AI principle, and seeing what risks to it arise from AI systems.

And then mapping those risks against human rights: the basic human right to privacy, or whichever other basic right.

Once we take that tool through the framework, we will make the tool available to AI system deployers and developers in the member states, and beyond as well, to answer a whole lot of detailed questions and assess their systems against those ethical principles and considerations.

So basically we are trying to put the principles that have been adopted into practice, and the tool will also give recommendations on how AI systems can improve.  So this is on AI.  Very quickly, I think I have a minute left.
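To make the framework Ahmad describes concrete, here is a minimal, purely illustrative sketch of a principle-to-risk-to-rights mapping of the kind such a tool could implement.  All principle, risk and right names are assumptions for illustration, not the DCO's actual taxonomy.

```python
# Hypothetical sketch: link ethical AI principles to risks, and risks to the
# human rights they may affect, so flagged risks can be turned into
# rights-based recommendations. Names are illustrative, not the DCO taxonomy.
PRINCIPLE_RISK_RIGHTS = {
    "data_privacy": {
        "re_identification": ["privacy", "non_discrimination"],
        "unlawful_secondary_use": ["privacy"],
    },
    "explainability": {
        "opaque_decision_logic": ["due_process", "effective_remedy"],
    },
    "accountability": {
        "unclear_liability_chain": ["effective_remedy"],
    },
}

def affected_rights(flagged_risks):
    """Return, for each affected right, the (principle, risk) pairs behind it."""
    rights = {}
    for principle, risks in PRINCIPLE_RISK_RIGHTS.items():
        for risk, linked in risks.items():
            if risk in flagged_risks:
                for right in linked:
                    rights.setdefault(right, []).append((principle, risk))
    return rights

# Example: an assessment that flagged two risks in a deployed system.
print(affected_rights({"re_identification", "opaque_decision_logic"}))
```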

So we are focussed on data privacy, and we are developing data privacy principles.  (Audio dropped)...

>> LUCA BELLI: Thank you very much, Ahmad.  Ansgar, you have been leading work on AI ethics globally; I would like to give you the floor to tell us what the challenges and possibilities are for dealing with this.

>> Certainly.  Thank you very much, Luca.  It is a pleasure and an honour to be able to join the panel today.  So yes, my name is Ansgar Koene, EY global AI ethics and regulatory leader.

We try to help organisations, be it in the public or private sector, in most countries around the world, with setting up their governance frameworks around the use of AI.

And one of the big challenges for organisations is to clearly identify what particular impacts the systems are going to have on people: both those who are directly using the system, and those who are going to be indirectly impacted.

And one example, for instance, that is probably of particular concern for the global majority is the question of how these systems are going to impact young people, the global majority of course being a space where there are a lot of young people.

And if you look at a lot of organisations, they do not fully understand how young people are interacting with their systems, be it systems provided through online platforms or systems that are integrated into other kinds of tools.

They do not know who, and from what ages, is engaging with the platforms, or what kind of particular concerns they need to be taking into account.

A different dimension of concern is how to make sure, as we operate in an AI space where a system is produced by a technology-leading company but then deployed by a different organisation, that the obligations, be they regulatory or otherwise, fall onto the party that has the actual means to address these considerations.

Often, the deploying party does not fully know what kind of data went into creating the system, or the extent to which the system has been tested and whether it is going to be biased against one group or another, and does not have the means to find out.

It must rely on a supplier.

So do we have the right kind of review processes, as part of procurement, to make sure that as these systems are being taken on board, they do benefit the users?

>> LUCA BELLI: That was excellent.  And also fast.  Which is even more excellent.

So we can now pass directly to Melody Musoni, policy officer at ECDPM.

Melody, the floor is yours.

>> Thank you, Luca.  When I was preparing for this session, I was looking at my previous interventions at the IGF last year.  Quite a lot has happened in terms of what Africa has been doing.

(audio fading in and out)

 


So I will try to speak about the developments on AI governance in Africa, trying to answer as well one of the policy questions we have: how can AI governance frameworks ensure equitable access to, and promote the development of, AI technologies for the global majority?

So this year has been an important and very busy year for policymakers in Africa.  Earlier in the year we saw the African Union Development Agency developing a white paper on AI, which kind of gave us the lay of the land in terms of what the expectations are at a continental level.

And the priorities that the continent has as far as the development of AI on the continent is concerned.  Later, in June this year, we saw the African Union adopting a continental strategy on AI.  It was a response to conversations at platforms like this one: that we should at least have a continental strategy which directs and guides us on the future of AI development in Africa.

Apart from those two frameworks, we also have a data policy framework in place from 2022, which is there to support member states in utilising and unlocking the value of data.  It is not only looking at personal data; it also covers non-personal data, and issues of data sharing are quite central in the policy framework.

Issues of cross-border data flows are also quite central.  And again, we are moving towards the finalisation of the African Continental Free Trade Agreement and the (audio dropped)...

We have more and more people with AI skills.  We have more and more people working in the STEM field, for example, and a lot of initiatives are actually directed towards our own human capital.

And I guess with people who are already late in their careers, there is also the question of how we can best reskill them.  I think that is where we need support, mostly from the private sector, for the many people who are advanced in their careers, on how to reskill and gain new skills that are relevant to the age of AI.

And an important area again, an important pillar for Africa, is (audio dropped)...

...digital infrastructure.  And that is still a big challenge for Africa.  So it is not just about AI; it comes back to the foundational steps that we need.  We need to start having access to the internet.  We need to have access to basic infrastructure.

And building on that, of course, with AI there are discussions around computing power, and how we can best have more and more data centres in Africa to support AI innovation.

And I'm not going to talk about the enabling environment, because that is more about regulatory issues, and I'm sure we have been discussing how best to regulate.  But just to emphasise again: apart from regulating AI and personal data, there are discussions around how we can best have laws, be it intellectual property laws, taxation laws, and different incentives to attract more and more innovation on the continent.  And then, I guess, the most important thing for the continent is the building of the AI economy: how do we go about it in a way that is going to bring actual value to African actors and African citizens?

And there again, there are promises.  I see I'm running out of time, so can I just go to... yes.  Another important issue is the importance of strategic partnerships.  We cannot do this by ourselves; we are aware of that.  And there is a need again to see how best we can collaborate with international partners to help us develop our own AI ecosystem.

>> LUCA BELLI: Fantastic.  These are exactly points that apply across the full spectrum of global south countries, and it is very, very important to raise them.

Let's now move to another part of the world, close to you: Professor Bianca Kremer.  She is a member of the board of C...  The floor is yours.

>> Thank you, Luca.  I will take off my headphones because they are not working very well, and I didn't want to disturb the conference for now.

So thank you so much for inviting me; it is a pleasure to be here.  This is my first IGF, despite my having worked with AI and tech for the last 10 years.  I have been a professor, an activist in Brazil, and also a researcher on the topics of AI and algorithmic racism and their impact in our country, Brazil; understanding also other perspectives to improve, develop and use the technology from our perspectives, on our own terms.

This is something we have to consider when we talk about the impacts of AI and other new technologies.  Because we don't have only AI; AI is the hype for now, but we have other sorts of technologies that impact us socially and also economically.

So I have been concerned with this specific topic of algorithmic bias for the last 10 years.

And from 2022 to '23, I was thinking about how to raise awareness of the problem in our country, developing research and also understanding the impacts of this topic on our society.  But this year I have been changing my perspective a little bit.

Because I had been focussed on raising awareness on the topic over the last year, I thought that maybe it was important to take the research a step further.  So I have been developing research that is also funded.  In one part of my research, I have been working on data and AI at the university with Professor Luca, on the impact of our Brazilian data protection law, and on economic platforms as well.

But personally, I have been working on the topic of the economic impact of algorithmic racism on digital platforms.

This is very complex to do.  We have to develop indicators to understand the economic impact; when we can see and observe the specificities of these impacts, we can maybe bring about some changes in our environment and in our legislation, and also in our public policies.

So this is something I have been up to.  And just to address a little bit why this is a concern for us: until last year, I was working specifically on one type of technology, facial recognition, for example.  Just to clarify a little how algorithmic racism works in Brazil.

We have been seeing a huge number of acquisitions of facial recognition technologies in the public sector, specifically for public security purposes.  And we have found that 90.5% of those who are arrested in Brazil today with the use of facial recognition technologies are black and brown (brown people in Brazil are called 'pardos').  So we have more than 90% of this population being affected by the bias of the technology.

And this is not trivial.  Because Brazil today has the third-largest incarcerated population in the world; we are in third place, behind only China and the United States, for example.

So this is an important topic for us.  And what are the economic impacts of these technologies?  What do we lose when we incarcerate this amount of people?  What are the losses, the economic losses, for the person, for the ethnic group that is arrested, and also for society?

What are the heritages that we carry?  With the use of these technologies, they come back from our colonial heritage.  So this is something I have been working on: trying not only to raise awareness but also to understand the actual economic impacts, with the use of economic metrics, for example.

It is ongoing, but it is something that I am coming to understand a little bit.  So thank you very much, Luca, for the space and the opportunity.  I'm looking forward to hearing a little more from my colleagues on their topics.  Thank you.

>> LUCA BELLI: Fantastic.  Thank you very much for being on time.  And indeed, the human rights arguments are things we have been repeating for some years; the economic ones might probably be more persuasive, maybe with policymakers.

Now let's go directly to the next panelist.  We have Liu Zijing.  Sorry, pardon me for destroying your name with my horrible pronunciation.  From Guanghua Law School of Zhejiang University.

>> Thank you so much, Dr. Luca.  And hello everyone.  I'm Liu Zijing from China.  This is my coauthor, Lin Ying.  We would love to share the Chinese experience of artificial intelligence utilisation.  Our report is about building legal large language models, drawing on the experience from China.  China's smart court reform started in 2016.  But even before that, in the 1980s, China's leadership was considering how to utilise computers to modernise management and also modernise legal work.  And in 2016 the Chinese government officially launched a programme named the smart court reform to digitalise court management.

And now, in these years, it has entered into the third phase, which is the AI phase, and Chinese courts have launched unique models, which is very impressive.  We'd like to share some experience from China.

Between 2016 and 2022, the Supreme Court launched a system which is driven by large legal language models.  It helps the judges do their legal research, as well as their legal reasoning.

Also at the local court level, such as in Zhejiang province, the high court launched their own model, named Phoenix, and they also have an AI copilot.  It is being used in the courts, especially for pre-litigation mediation, which is also a feature of the province.

And also in Shanghai: the high court there launched a system named the 206 system, especially for criminal cases.  So you can see there are many features in China's utilisation of large language models, especially in the judicial sector.

We also concluded several features of China's success.  The first is that we have a very strong extent of (?).  The second is that there is weaker resistance within the local judicial sectors.  And one of the most important features is that in China there is close cooperation between the private sector and the public sector to develop the large language models themselves.

Because this year lots of judges and others also use AI chatbots such as ChatGPT, but in China they built their own language models.  So it is quite unique.

And I will share my time with my cowriter.

>> Hello everyone.  I'm Lin Ying.  I would like to provide some initial suggestions.  There are many concerns for us.  One is about the development environment: as we know, advanced AI requires substantial financial resources, and only a few regions can afford it, as we mentioned before, like Shanghai.  So this calls for a special fund to assist other regions.

There are also issues about public-private partnership.  The big problem is public input with private output: what if private companies use that data and the resulting products for their own benefit?  What if those private companies dominate the relationship and exert great influence on judicial decisions?

So rules are needed to prevent undue influence and ensure transparency.

And second, AI assistance breeds concerns about transparency and due process.  Can a judge really know how the algorithm works, and whether a decision is really made by AI or by a human being?

Delegating authority to the AI system blurs responsibility and potentially undermines judicial accountability.  And due to this automated process, there is also the issue of whether all parties in the cases can represent themselves fully.  This emphasises the importance of transparency and (?).

On the other hand, there are substantive issues: AI is biased and sometimes makes things up.  We need a human in the loop.  So the integration of a single framework and guidelines into AI systems would be helpful, and ongoing dialogue between legal experts and AI developers will also work.

And the last one is security.  Making judicial decisions involves massive processing of sensitive personal data.  This will need strict data security protocols and recommendations on how the systems are used by private companies and government.

And when smart courts reach the international level, there are issues like national security risks.  So (?) may be needed before authorising the systems; (?) is essential to ensure the integrity and security of the smart court system in China.  Thank you, that's all.

>> LUCA BELLI: Thank you very much, also for being perfectly on time and for raising at least two very important issues.  Even if we build AI, it then has to run on something: not only the model, but also the compute is relevant.

And second, the fact that it needs to be transparent.  Because probabilistic systems are frequently very opaque, and it is not really acceptable from a due process and rule of law perspective to say we don't know how it works; it needs to be explainable.

Fantastic.  Let's get to the last couple of speakers in person: Rodrigo Rosa Gameiro and Catherine Bielick from MIT.  Please, the floor is yours.

>> Hello.  Can you hear me?  My name is Rodrigo.  I'm a physician, and also a lawyer by training.  I grew up in Brazil but currently live in the U.S.  I work at MIT with Dr. Bielick here.  We do research in AI development, alignment and fairness.

One question I had in mind while I was thinking about this panel is "how do we make sense of where we stand with AI globally today?"  And I often find myself turning to literature for perspective.

There is one line from "A Tale of Two Cities" that feels especially fitting: "It was the best of times, it was the worst of times."

Because for some, this is indeed the best of times.  AI can work, and does work, in many cases.  In health care, AI has enabled us to make diagnoses that were simply not possible before.  AI is enabling us to accelerate drug development and our understanding of medicine in ways we never imagined.

The problem is that this is also the worst of times.  The benefits of AI remain largely confined to a handful of nations with robust infrastructure.  Meanwhile, the global majority is pushed to the sidelines.  Even within countries that lead AI development, these technologies often serve only the privileged few.

We have documented, for instance, AI systems recommending different levels of care based on race, and vast regions of the world where these technologies don't reach communities at all.

The divide isn't just about access.  It is about who gets to shape the technologies and their benefits.  (microphone...)

 

(No audio).

...because there can be no AI for the global majority if it is not from the global majority.  And this brings me to our chapter in the book, which is called "From AI Bias to AI By Us."

At our lab at MIT, led by Dr. Leo Celi, we've developed concrete ways to measure progress and drive change.  What we've learned is powerful: when you give everyone a seat at the table, innovation flourishes.

Let me share a story that illustrates this.  Through our work, we connected with researchers in Uganda.  We didn't come as saviours or teachers; we came as collaborators.  Today, as a result of our collaboration, the team there has built their own dataset and developed their own algorithms to solve their own local challenges.

They also secured international funding.  In fact, they taught us much more than we taught them.

And this is not an isolated story.  Through PhysioNet, our platform for sharing health care data, we have reached across 20 countries, and researchers worldwide collaborate on solving local problems: more than 2,000 publications with 9,000 citations.  But most importantly, AI solutions that actually work for the communities they serve.

But here is what we have learned above all else: our approach is not the only answer.  Effective AI governance needs more than individual initiatives; it requires all stakeholders working together towards shared goals.  My colleague Dr. Bielick will explain this further.  Thank you.

>> Thank you, Dr. Gameiro.  My name is Dr. Catherine Bielick.  I'm an infectious disease physician, an instructor at Harvard Medical School, and a scientist at MIT studying AI outcomes and improvement.

I work here at MIT Critical Data.  We are publishing here as a case study.  But I think...

(audio fading in and out)

 

...

And I think one way that I would like to think about international governance of AI for the global majority is from historical precedent and context.  Because we don't want to reinvent the wheel, and we don't think everyone around the world should be doing the same thing; individual countries have individual needs.  I think there is already a precedent that we would contend is a good framework we can emulate going forward for AI from the global majority.

I'm talking about the Paris Agreement, the climate accords, where nearly 200 countries came together on one common goal with individual needs per country, based on their own unique populations.  And I think there are five core features I want to take away from the Paris Agreement, and ways that we can draw parallels from them to AI for the global majority.

The main thing is that this is a global response to a crisis of what I will call inequitable access to responsible AI.  All those words carry a lot of different meaning and weight.

(audio fading in and out)

 

...internationally with differentiated responsibilities, where I think the wealthier nations carry more of the burden to provide open leadership and knowledge sharing.  The second, which I think is maybe the most important, is localised flexibility: there are nationally determined contributions in the Paris Agreement that I think carry over to AI from the global majority.

Each country defines priorities for its own people, and we come together, put them together, and agree on a global standard.

Because implementation domains differ in so many areas: health care, agriculture, disaster response, education, law enforcement, job displacement, you can go on; economic sustainability and environmental energy needs.

There is just no one-size-fits-all.

And what comes with that is a core feature of transparency and accountability.  That is accounted for in the Paris Agreement, and I think it also carries over to us today.

There are regular reviews from every country, and there are domain-specific non-negotiables, like reducing carbon emissions by a quantifiable amount per country.  And in this case there can be a federated auditing system, which would be similar to federated learning, in a way that protects privacy.
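As a rough illustration of that federated-auditing idea, here is a minimal sketch in which each country computes its audit statistics locally and shares only aggregates, mirroring how federated learning shares model updates rather than raw data.  The countries, metrics and figures are invented placeholders.

```python
# Hypothetical sketch: each jurisdiction audits its own AI systems locally
# and reports only aggregate counts; the underlying records never leave
# the country, which is the privacy-protecting property referred to above.
local_audits = {
    "country_a": {"systems_reviewed": 40, "noncompliant": 6},
    "country_b": {"systems_reviewed": 25, "noncompliant": 2},
    "country_c": {"systems_reviewed": 10, "noncompliant": 3},
}

def global_stocktake(audits):
    """Aggregate locally computed audit counts into one global figure."""
    reviewed = sum(a["systems_reviewed"] for a in audits.values())
    noncompliant = sum(a["noncompliant"] for a in audits.values())
    return {
        "systems_reviewed": reviewed,
        "noncompliance_rate": round(noncompliant / reviewed, 3),
    }

print(global_stocktake(local_audits))  # only aggregates are ever shared
```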

The last two include, I think, financial support channeling, where developing nations must have resources channeled over, so that people can not only use those resources and technology sharing to develop and implement their focussed AI tools, but also have the infrastructure to evaluate those outcomes as well, which is just as important, if not more so.

And lastly, the "global stocktake," a term which was used a lot for the Paris Agreement.  The key here is that there are specific outcomes determined by specific groups, by specific countries, and we can aggregate those towards a single tracking of progress.  I think with this unified vision for the future, it takes us out of the picture.

Because I don't think we can or should be prescribing what the global majority wants or needs from...

(audio fading in and out)

 

I think every stakeholder needs an equal voice in this, and this is the pathway.  And why can't a meeting like this do it?  Why aren't we talking about the equivalent of an international agreement, where we can all have the same, equal voice in participating towards the same common goal?

We're all here.  There is no shortage of beneficence from all of you; non-maleficence, equity: these are medical ethics pillars.  And there is no shortage of resources, I think, when we can come together for a unified partnership.

Thanks.

>> LUCA BELLI: Fantastic.  So we already have a lot of things to think about.  I would like to ask the people in the room to start thinking about their comments or questions, because the reason we are doing all this is to have a debate with you.  Let's pass to the online panelists, also a bunch of them; I really hope they will strictly respect their three minutes each.

We should already have a lot of them online, so our remote moderator friends should be supportive.  First, Professor Sizwe Snail.

Are you here with us?  Yes, we can see you.

Good afternoon Sizwe.  You can go ahead.

>> Thank you very much, Dr. Belli.  Thank you, delegates and everyone in the room.  Indeed, IGF time is always a good time, and it is always a good time to collaborate.

I've had the pleasure of working with two lovely ladies this year: Ms. ..., one of the attorneys at the firm, and Ms. ..., a paralegal, on looking at the evolving landscape of artificial intelligence policy in South Africa on the one hand, as well as the possible drafting of artificial intelligence legislation.  I'm mindful of the 3 minutes that have been allocated to us, so I want to fast forward and say that in South Africa, the topic of artificial intelligence has been discussed over the last 2-3 years on various levels.

On one level, there was a presidential commission, in terms of which the President of South Africa had made certain recommendations, via a panel he had constituted, on how the fourth industrial revolution should be approached and what interventions should be made with regard to areas such as artificial intelligence.

Then it was a bit quiet; Covid came and went, and data protection was the big, big, big issue.  However, artificial intelligence is back.  It is the elephant in the room, and South Africa has been trying to keep up with what is happening internationally.  On the one hand, South Africa drafted what it called the South African Draft AI Strategy, which was published earlier this year.  The strategy received both very warm comments and very cold comments.

Some of the authors and some of the jurists in South Africa were very happy, saying it is a way forward, a good way forward.  And other jurists were of the view to say: but this is just a document, it is 53 pages, why are we having this?

South Africa then responded in early August, after all the critique and everything that was said, with a national artificial intelligence policy framework.  This document has been reworked; it looks much better, it has objectives, and it has been trimmed down from the 53-page document.

Having a look at what is happening in Africa as well, I think it is in line with some of the goals that people want to achieve in Africa with regard to artificial intelligence and the regulation thereof.

Thank you.

>> LUCA BELLI: All right.  Thank you very much, Sizwe, for having respected the time.  And again, we are mindful that every short presentation provides only a teaser of a broader picture, but we encourage you to read the deeper commentaries in the book.  Actually, we also have copies freely available here; some of you have already taken them.  Otherwise, the full book is available on the IGF website, which is a little bit Byzantine to explore.

So we have created a mini URL, which is bit.ly/DAIG24.

The next speaker is Stefanie Efstathiou, from the EURid Youth Committee, and also an expert in AI and arbitration at the University of Munich.  Stefanie, are you with us?

>> Yes.  Thank you very much, Professor Luca.  I'm happy to be here.  I am based in Germany, an in-house counsel and Ph.D. candidate researching AI.  However, I'm here in my capacity as a member of the EURid Youth Committee, EURid being of course the ccTLD registry for .eu, and trying to bring in more of a youth perspective.

I would like to draw attention to the discourse on regional approaches to AI governance, as highlighted in the recent report "AI from the Global Majority."  This report underscores that while artificial intelligence promises to reshape our societies, it must do so inclusively and equitably.

So from Latin America to Africa and Asia, regional efforts, as we see in the report, demonstrate resilience and innovation.  Latin American nations are forging frameworks inspired by global standards yet rooted in local realities, emphasising regulatory collaboration.  And in Africa, the rise of governance frameworks exemplifies a vision for integrated data governance, emphasising cooperation, accountability and enforcement.

These efforts reflect not only unique socio-political contexts but also a shared aspiration to ensure AI serves as a tool for empowerment and not exploitation.

A key dimension is the role of youth in shaping AI's trajectory.  The younger generations across, but not limited to, the global majority should not only adapt to regional frameworks but should actively participate and lead the change.

Youth should be more in focus and participate as stakeholders, since youth has a unique inherent advantage: they are the ones who will have to adapt, more than any other generation, to the change, and who will effectively live in a different world than the generations before.

This involvement can take various forms; however, starting from data-protection-driven policies ensuring student data privacy in Africa, to youth-led innovation hubs in Latin America, is a good way to go.

Nonetheless, it is our duty to amplify these voices and incorporate their ideas into policymaking processes, as well as the duty of the youth to actively participate and immerse itself in this sphere of responsible AI innovation and policymaking.

The energy and creativity of the younger generation signal a brighter future for AI governance.  However, challenges persist, and we have seen this: digital colonialism, data inequities, and systemic biases threaten to widen the divides.  As the report highlights, it is imperative to address these disparities by adopting inclusive frameworks, fostering regional cooperation, and prioritising capacity-building initiatives tailored to each region's needs.

All this, however, with a minimum common global understanding, similar to what Dr. Bielick described earlier.

As we move forward, let us reaffirm, and I want to close with this, our commitment to an AI future that embodies fairness, sustainability and human-centred innovation.

Grounded in regional diversity, but without causing fragmentation, and inspired by the vision and the drive of youth.

Thank you very much.

>> LUCA BELLI: Thank you very much, Stefanie.  It is a very good introduction that you and Sizwe have provided to our first slot, dedicated to regional approaches to AI: what kind of approaches are emerging at the regional level in various regions of the world.

Our next speaker is Dr. Yonah Welker, MIT, a former tech envoy, also leading multiple EU-sponsored projects, who has worked quite a lot on this and also has a short presentation for us.

So we have it.  Our technical support can confirm he can share his presentation.

>> Yes.  It is my pleasure to be here, and my pleasure to be back in Riyadh.  I would love to be mindful of the time and address the issues of disabilities.  Educational and medical technologies are an extremely complex area, and it is almost one year since 28 countries signed the declaration, yet unfortunately this area is still underrepresented.  And it is not the only complexity: currently there are over 120 countries working on assistive technologies, with models and use cases related to supervised and unsupervised learning, reinforcement learning, recognition, and exclusion.  I would love to quickly share the outcomes, what we can do to actually fix this.  First of all, I believe we should work on regional solutions rather than just regionalizing ChatGPT, because for most regional languages you have 1,000 times less data.  We need to build our own regional solutions, and not only LLMs but also SLMs, with maybe fewer parameters but with more specific objectives and efficiency.

Second, we should work together to create open repositories, cases and taxonomies, not only of use cases but also of what we call "accidents," which is work we do with the OECD.  This helps models improve accuracy, fairness and privacy, together with dedicated safety environments and oversight: specific simulation environments for complex and high-risk models.

Also, we are actively working on more specific intersectional frameworks and guidelines with UNESCO or UNICEF, for instance: digital solutions for girls with disabilities in emerging regions, WHO work on AI and health, or the OECD's work on disability in the AI accidents repository.

And finally, we should understand that all the biases we have today in technology are actually a reflection of historical and social issues.  For instance, even beyond AI, only 10% of the world population have access to assistive technology, and 50% of children with disabilities in emerging countries are still not enrolled in schools.

So we cannot fix this through one policy, but through a combination of AI, digital, social and accessibility frameworks.  Thank you so much.

>> LUCA BELLI: Thank you very much, Yonah, for respecting the time.  Now let's move to another region.  We have our friend Ekaterina Martynova from the Higher School of Economics.  Very nice to see you again, if only online.  Please, the floor is yours.

>> Thank you so much, Professor Luca.  I will be very brief, just to give an overview of the current stage of AI development here in Russia.  The first process to note is the increase in spending from the budget, an unprecedented level of spending; the development of AI is actually one of the key priorities of the state.

Though the approach in terms of regulation is still quite cautious: it seems that the priority is to develop the technology, not to somehow hinder its development.  So we still don't have a comprehensive legal act, such as a federal law on AI.

We have some national strategies as pieces of subordinate legislation, and also some self-regulation in the market, driven by the market players.

In terms of practical application, AI is being used quite intensively in the provision of public services, and we have some sandboxes, especially here in Moscow: first of all in the public health care system, and of course in the field of public security and investigations.

So here I come to the main concerns with using AI in these fields.  The first one is of course the obvious human rights concern, which has already been raised.  It is very acute for Russia, and it was also a question considered by the European Court of Human Rights in terms of the procedural safeguards provided to people detained through the use of facial recognition systems.

And we still need to develop our legislation very much here to provide more safeguards.  Here we look very closely at the Council of Europe Framework Convention on AI and human rights, democracy and the rule of law.

Russia is not currently a member of the Council of Europe.  Still, we consider that these provisions on the standards of transparency, accountability and remedies can be useful for us, for our national development, and maybe for the development of some common bases within the biggest countries or with our partners in the Shanghai (?)

The second problem is data security and the (?) problem.  Here we have a special centre created under the auspices of the Ministry of Digital Development, meant to be the central hub of this data (?) process and the (?) of data, especially biometric data used in the service digitisation process.

And finally, as Luca mentioned at the beginning in the opening speech, there is the problem of AI and cybersecurity.  This is precisely the topic which I research: the problem of AI-powered cyber attacks, which Russia has been targeted by in these years.  We are considering which legal mechanisms can be developed to hinder the use of AI for malicious activities in cyberspace by state actors and non-state actors.  Here, of course, we need joint efforts at the international level to develop a framework for the responsible use of AI by states, with rules of responsibility and rules of attribution of these types of attacks to the states which may be sponsoring such operations.

So I will stop here.  And thank you very much.

>> LUCA BELLI: Thank you very much, Ekaterina, for these very good points.

Let's conclude the first section with Dr. Rocco Savarino, Vrije Universiteit Brussel.

>> Thank you.  I'm not yet a doctor, only a Ph.D. candidate, but thank you.  And yes, of course, I am one of the authors of the paper we submitted with my colleagues here at the Vrije Universiteit Brussel.

To respect the time, I'm going to wrap up the key points of our paper.  We look at the global question of how the incorporation of AI rules into data protection frameworks is influenced by global trends, particularly the new digital regulations.

This has also led to the emergence of AI regulations in Latin America, and because of this we analysed in particular the cases of Brazil and Chile, which are establishing specialised AI regulatory bodies, reflecting the region's awareness of the complex issues of AI technologies.

We look at the Brazilian approach with Bill 2338 of 2023.  But here we should make a disclaimer because, as many of you know, on the 28th of November another proposal was presented, which we could not cover in our paper as it had already been submitted.  We analysed the previous one, where the role of the data protection authority was very important.

And we looked also at the Chilean approach, because Chile is advancing its AI governance model, proposing an AI technical advisory council and a data protection agency to enforce AI laws.

Of course, when we talk about AI, we also talk about data governance, and data governance is a key factor in shaping AI oversight, with a focus on transparency, explainability and data protection rights.

This leads to challenges and opportunities.  Latin American countries face challenges such as the need for coordination among regulatory bodies, developing specialised expertise, and allocating sufficient resources.

But also opportunities, because the region has the opportunity to shape AI governance by adopting a risk-based approach and integrating AI governance into existing data protection frameworks.

We believe Latin American countries can contribute to the global AI governance discussion by developing their own regulatory models, models that reflect the region's unique socioeconomic and cultural context.

Thank you for having me.

>> LUCA BELLI: Excellent, fantastic.  Now we have concluded our regional perspectives, and we can enter into the social and economic perspectives.

Actually, the first presenter is Rachel Leach, coauthor of one of the papers on AI's environmental, economic and social impacts.  So, Dr. Leach, please, the floor is yours.

>> Thank you.  Also not a doctor yet.  But thank you.

>> LUCA BELLI: Your papers are so good, you will both soon become doctors; I'm sure about it.

>> Thank you.  Our project is an exploratory analysis of AI regulatory frameworks in Brazil and the United States, focusing on how the environment, particularly issues of environmental justice, is considered in the regulations of these countries.

And broadly, we found that regulations in both countries are furthering the development of AI without properly interrogating the role AI itself and other big data systems play in causing harms to the environment, particularly in exacerbating environmental disparities within and across countries.

For example, in July 2024 the Brazilian Federal government launched the "Brazil Plan for Artificial Intelligence," investing four billion and hoping to lead regulation in the global majority.

The plan centred the benefits of AI with the slogan "AI for the good of everyone" and invested in mitigating extreme weather, including a supercomputer system to predict such events.

Additionally, in the U.S., President Biden's executive order on the safe, secure and trustworthy development of artificial intelligence operates under the assumption that AI is a tool with the potential to enable the provision of clean electric power, again without examining the environmental issues raised by the technology itself.

These are just a snapshot of the trend we identified: both countries are largely tech-solutionist.  What this means is that the regulations tend to operate under the assumption that there is a technological solution to any problem.  This approach leads to regulations that vastly underconsider the externalities or harms of technology, and that centre technology as the solution even in instances where it may not be the best approach.

So, turning now to the solutions we want to highlight.  First, when considering the environmental and social costs of AI, it is crucial to consider embodied carbon, meaning the environmental impact of all of the stages of the product's life.

As many people have discussed, developing and using AI involves various energy-intensive processes, from the extraction of raw materials and water, to the energy and infrastructure needed to train and retrain these models, to the disposal and recycling of materials.  Often these environmental costs fall much harder on the global majority, particularly when U.S.-based companies are siting a lot of data centres in Latin America, for instance, exacerbating issues such as droughts in that region.
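To make the embodied-carbon point concrete, here is a back-of-the-envelope sketch that sums a system's footprint over its whole lifecycle rather than counting training alone.  Every stage name and figure below is an invented placeholder, not a measurement from the paper.

```python
# Hypothetical sketch: embodied carbon is the sum over all lifecycle stages,
# not just model training. Figures are illustrative placeholders in kg CO2e.
lifecycle_kg_co2e = {
    "raw_material_extraction": 1.0e5,
    "hardware_manufacturing": 3.0e5,
    "training_and_retraining": 5.0e5,
    "inference_over_service_life": 8.0e5,
    "disposal_and_recycling": 0.5e5,
}

total = sum(lifecycle_kg_co2e.values())
for stage, kg in lifecycle_kg_co2e.items():
    print(f"{stage:28s} {kg / total:6.1%}")  # each stage's share of the total
print(f"total embodied carbon: {total:,.0f} kg CO2e")
```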

The second action we want to highlight is the importance of centering environmental justice concerns comprehensively across all discussions about AI, from curriculum to research to policy.

We think it is really important to interrogate the assumption that AI technology can necessarily solve social and environmental problems.  So yeah, thank you again for having us.

>> LUCA BELLI: Excellent also; very good that you are almost all on time.  The next speaker is a Ph.D. candidate at the Centre for Politics and Political Theory, at a university in New Delhi.  Do we have Avantika Tewari?

>> Great to be here with you.  I'm going to start without much ado.  Just to give you a little bit of context about this paper: in India we have something called the Data Empowerment and Protection Architecture, and essentially all the debates around AI governance are also hinged on the control, regulation and distribution of data.  So there has been an emphasis on consent-based data-sharing models.

That is devised basically to create a data-empowered citizenry.  So it is in this context that I have written this paper.  And I want to foreground that while technologies such as ChatGPT and generative AI appear to be autonomous, their functionality depends on vast networks of human labour, such as data annotators, content moderators and data labellers.

Platforms like Amazon Mechanical Turk outsource (?) to the global majority, reducing work to fragmented tasks that remain unacknowledged and (?); these sustain AI systems that disproportionately benefit corporations in the global north, transforming (?) through the cheap appropriation of land, labour and resources for computer technologies and digital infrastructure.

Similarly, digital platform users are framed as empowered participants, with their likes, shares and posts generating immense profits for tech giants, all without compensation.  This represents the double bind of digital capitalism, where the unpaid participation of users is reframed as agency and the labour (?), with the global majority bearing the brunt of both.

The platform economy, built on the twin pillars of fragmented (?), rebrands user exploitation as agency and convenience.  By embedding itself in digital enclosures, it transforms participatory cultures into systems of unpaid labour, commodifying interactions previously not commodified, such as the social relations of interaction and communication.

What emerges is what I term an un(?) dimension of social enjoyment, which is the relentless pursuit of meaning, success and community, inherently mediated by algorithms.

Yet the promise of satisfaction remains elusive, ensnaring individuals in a loop of alienation and exploitation, while making their engagement complicit in the production of data, analytics and AI.

Data is thus (?) a commodity, retroactively imbued with meaning as valuable information fuelling market expansion, diversification and stratification, which is paradoxically framed as a governance model, where data is framed as a resource that can be reclaimed as an extension of the self or as social knowledge.  Yet this transformation conceals a deeper reality: the labour upon which platforms depend is increasingly fragmented into piecemeal tasks, piecework.

This labour sustains the development of AI technologies that paradoxically aim to automate the low-skilled tasks on which they rely.

The shift towards low-skilled, task-based, on-demand work is not merely a strategic adaptation by platforms but an ideological reconfiguration of labour relations, what I call the ideology of prosumerism.

Increasing fragmentation is an attempt by capital to overcome its own dependence on labour.  What I want to foreground in the paper is that the real paradox is not whether technology can empower, but how monopoly capital's drive to overcome its dependence on labour leads to a fragmentation of the global division of labour, which disproportionately impacts the global majority.

And this results in the partialisation of work, the automation of tasks, which is actually produced by the severance of labour's embeddedness within the production process, by the kind of, you know, fragmentation of work processes.  I'll stop here, and thank you again.

>> LUCA BELLI: Thank you very much, Avantika, for bringing in these considerations about labour and the difference between the consumer and the prosumer.

Now, staying in India, the next speaker is Amrita Sengupta, research and programme lead at the Centre for Internet and Society.  Amrita, the floor is yours.

>> Thank you so much.  I'm also joined by my coauthor, who is also online.  Our essay on the impact of AI on the work of medical practitioners is actually part of a larger mixed-methods empirical study that we did, trying to understand the AI data supply chain for health care in India.

So in this particular essay, through primary research with medical professionals, a survey of 150 practitioners and in-depth interviews, we tried to look at (Too fast) and also at some of the new challenges and perceived benefits.  Through this we also tried to read certain concerns and issues about current views, and the sort of costs and benefits of the work that doctors and medical professionals now have to put into AI systems as they start developing them.

So there are four issues we want to raise.

First, in the short term, doctors have to put in additional time and effort in preparing data for labelling and annotation, but also in learning these technologies and providing feedback on AI models.

These are real costs that need to be considered before we burden an already overburdened health care system.  For example, in our survey nearly 60% of practitioners expressed the lack of AI-related training and education as a barrier to adoption of AI systems.

They also raised concerns about the effort and infrastructure required on their side to digitise (?), because of the way digital health data exists in the current health care system in India today.

The second issue is about the current use of AI in private health care, and less so in public health care, which is where there is a much larger need for meaningful interventions and, you know, for providing more efficiency, time savings and meaningful help.  Which actually raises the question (audio froze) ...serve, and who it is privileging through the ways in which it is currently operated.

The third and critical issue is one of liability.  Academics and medical professionals in our study (?) liability: who would be liable for a diagnosis made by an AI application that aids medical professionals?  A common concern we heard from doctors and academics was that AI is meant to assist doctors, but often enough doctors felt the pressure that AI could take their place, or was threatening to.

The last issue we want to raise is the longer-term impact of AI.  In our survey, 41% of medical professionals suggested AI could be beneficial and time-saving, and could also help improve clinical decisions.

We ask what kinds of risks this raises, with overreliance on AI leading to a loss of clinical skills, or of course the representational biases that the AI models may present because of where the data is coming from, the promise of reliance on (?), and so on.

Lastly, if we need to prioritise AI, we should prioritise the areas where it could bring the most benefit, in the larger public interest, and with the least disruption to existing workflows.

And be considerate of whether the costs actually outweigh the benefits.

>> LUCA BELLI: Excellent.  Now we are going to start to see how the global majority is reacting to AI, and what kind of innovative thinking and solutions are being put forward, in our last section.  Then we will hopefully open the floor for debate.  As we started some minutes late, I hope our colleagues will indulge us and give us five extra minutes.

We now have Elise Racine from the University of Oxford.  Please go ahead.

>> Hi everyone.  So I shared a presentation PDF in the chat.  I'm Elise Racine, a doctoral candidate at the University of Oxford.

I study artificial intelligence, including reparative practices.

AI really does promise transformative societal benefits but also presents significant challenges in ensuring equitable access and value for the global majority.

Today I'll introduce reparative algorithmic impact assessments: a novel framework combining robust accountability mechanisms with reparative practices to form a more culturally sensitive, justice-oriented methodology.

The problem is multifaceted.  The global majority remains critically underrepresented in AI design, development, research and governance.  This leads to systems, as we've discussed, that not only inadequately serve but also harm large portions of the world's population.

For example, AI technologies developed primarily in Western contexts often fail to account for diverse cultural norms, values and social structures, and while traditional algorithmic impact assessments (?), they often fall short in ameliorating injustices and omit marginalized and minoritized voices.

Reparative algorithmic impact assessments address this through five steps; a short illustrative sketch follows the five steps below.

First, socio-historical research delving into the context and power dynamics that shape AI systems.  Second, participant engagement and impact/harm co-construction that goes beyond tokenism and (?) power.

Third, (?) reparative practices that ensure communities retain control over their information.  Fourth is ongoing monitoring and adaptation, focused on sustainable development and adjusted based on real-world impact.

And the fifth and last step is redress: moving beyond identifying issues to implementing concrete action plans that address inequities.
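
To make the sequence concrete, here is a minimal sketch in Python, purely illustrative and not tooling from the paper; the stage names follow the five steps above, while the field names and guiding questions are assumptions added for illustration only.

    from dataclasses import dataclass, field

    @dataclass
    class Stage:
        name: str
        guiding_question: str
        completed: bool = False
        findings: list = field(default_factory=list)  # evidence gathered per stage

    # Stage names follow the five steps described in the talk.
    RAIA_STAGES = [
        Stage("socio-historical research",
              "What context and power dynamics shape this AI system?"),
        Stage("participant engagement and harm co-construction",
              "Do affected communities share real decision-making power?"),
        Stage("reparative practices",
              "Do communities retain control over their information?"),
        Stage("ongoing monitoring and adaptation",
              "Is the system adjusted based on real-world impact?"),
        Stage("redress",
              "Is there a concrete action plan addressing identified inequities?"),
    ]

    def outstanding(stages):
        """Names of stages the assessment has not yet completed, in order."""
        return [s.name for s in stages if not s.completed]

    print(outstanding(RAIA_STAGES))  # initially, all five stages remain open

The ordering matters: redress comes last because it depends on what the research, engagement and monitoring stages have surfaced.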

To illustrate these steps in practice, consider a U.S.-based company deploying an AI-powered mental health chatbot in rural India.  A reparative approach may, for instance, employ information specialists with (?) to ground research in actual reality, and implement flexible participation options with fair compensation and mental health support to drive meaningful community engagement.

Establish community-controlled data trusts.  Develop new evaluation metrics that incorporate diverse cultural values and priorities.  And partner with local AI hubs and research institutes that empower communities to develop their own AI capabilities.

These are just a few examples.  There are a few more in the PDF as well as in the report.

But through this comprehensive approach I want to emphasise how reparative algorithmic impact assessments move beyond merely avoiding harm to actively redressing historical, structural and systemic inequities.  That was a large focus of the paper.

By doing so, we can foster justice and equity, ultimately ensuring AI truly serves all of humanity, not just a privileged few.  Thank you very much.

>> LUCA BELLI: Thank you very much.  We are almost done with our speakers.  We have now Hellina Hailu Nigatu from UC Berkeley.  Please, the floor is yours.

>> Thank you.  I am going to share my screen real quick.

Hello everyone.  My name is Hellina.  I'll briefly present my work with my collaborators.  Social media platforms such as YouTube, TikTok and Instagram are used by millions of people across the globe.  And while these platforms certainly have their benefits, they are also a playground for online harm and abuse.

Research showed that in 2019 the majority of the content posted on YouTube was created in languages other than English.

However, non-English speakers are hit the hardest with content they, quote, regret watching.

Social media platforms have also resulted in physical harm.  Facebook faced backlash in 2021 for its role in fuelling violence in Ethiopia and Myanmar.

When we look at how platforms protect users, platforms rely on automated systems or human moderators.

For instance, Google reported that 81% of the content flagged for moderation is self-detected; most of the content is detected by automated systems and then redirected to human reviewers.  Additionally, Google uses machine translation tools in the moderation pipeline.
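
As a rough illustration of the pipeline just described, the Python sketch below (the function names and toy classifier are assumptions, not Google's actual tooling or APIs) shows why the machine translation step matters: for non-English content, the automated classifier only ever sees the translated text, so translation errors propagate into what gets flagged for human review.

    HARMFUL_TERMS = {"slur_a", "slur_b"}  # stand-in lexicon; real systems use trained models

    def classify(text: str) -> float:
        """Toy harm score in [0, 1] based on lexicon hits."""
        hits = sum(token in HARMFUL_TERMS for token in text.lower().split())
        return min(1.0, hits / 2)

    def translate_to_english(text: str) -> str:
        """Placeholder MT step; a real pipeline would call a translation model."""
        return text

    def moderate(post: str, language: str, threshold: float = 0.5) -> str:
        # Non-English posts pass through MT before classification.
        text = post if language == "en" else translate_to_english(post)
        if classify(text) >= threshold:
            return "queue_for_human_review"  # the automated, self-detected path
        return "no_action"

    print(moderate("an example post containing slur_a", language="am"))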

However, with automated systems, research shows that the intersection of social, political and technological constraints has resulted in disparate performance for the languages spoken by most of the world's population.

Transparency reports state that about 81% of the human moderators operate in English, and of the non-English moderators, only 2% operate in languages other than the highly resourced European ones.

"Majority world" is a term coined by ... to refer to what are mostly called third world, developing nations, global south communities, et cetera.  And the term "global majority" emphasises that collectively these communities comprise the majority of the world's population.  And as with their size, the communities are very diverse in terms of race, ethnicity, economic status, culture and languages.

Within NLP, the majority world is excluded from state-of-the-art models and research.  They are hired for pennies on the dollar as moderators with little to no mental health or legal support.  They are exposed to harmful content when conducting their jobs as moderators and harmed by the (?) existing applications.

Given this cycle of harm, we see there are two major lines of (?) including or not including these languages in AI.  Either you are included in the current technology and as a result are surveilled, or you are left in the trenches with no protection or support.  We argue in our paper that this is a false dichotomy, and ask: if we remove the guise of capitalism that currently dominates the content landscape, is there a way to have (?) with power residing in the users?

Thank you so much.

>> LUCA BELLI: So now we have only two speakers to go.  Isha Suri is a research lead at the Centre for Internet and Society.  Please, the floor is yours.

>> Thank you, Professor Luca.  I'll just quickly share my screen.

I'm joined by my coauthor, and we looked at countering false information: policy responses for the global majority in the age of AI.  I'll quickly give you a teaser, and I'll be happy to take any questions when I'm done.

One of the things that we... something is wrong with my screen here.

Background and context: the World Economic Forum recognises false information, including misinformation and disinformation, as the most severe global risk.

Multiple studies have demonstrated that social media is designed to (?) hate speech and disinformation.  For instance, an internal Facebook study revealed (?), and if left unchecked would (?) gain user attention and increase time on the platform.

Among the factors that emerge, integrated structures and profit-maximizing incentives ensure that platforms continue to employ algorithms recommending divisive content.

For instance, a team at YouTube tried to change its recommender systems to suggest more diverse content, but they realised their engagement numbers were going down, which was ultimately impacting advertising revenue, and they had to roll back some changes.

And this, as we found, leads to a lot of harmful, divisive content being promoted on these systems.
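
The trade-off in the YouTube anecdote can be shown with a toy Python example; this is a schematic sketch under assumed scores, not any platform's actual recommender.  Ranking purely by predicted engagement keeps divisive content on top, while a diversity re-ranking surfaces other topics at the cost of expected engagement, which is the revenue pressure that led to the rollback.

    from dataclasses import dataclass

    @dataclass
    class Video:
        title: str
        topic: str
        predicted_engagement: float  # e.g. an expected watch-time score

    def rank_by_engagement(videos):
        """Pure engagement ranking: high-scoring divisive topics dominate."""
        return sorted(videos, key=lambda v: v.predicted_engagement, reverse=True)

    def rank_with_diversity(videos, penalty=0.3):
        """Greedy re-ranking: topics already shown take a score penalty."""
        ranked, seen, pool = [], set(), list(videos)
        while pool:
            best = max(pool, key=lambda v: v.predicted_engagement
                       - (penalty if v.topic in seen else 0.0))
            pool.remove(best)
            seen.add(best.topic)
            ranked.append(best)
        return ranked

    feed = [Video("outrage clip", "divisive", 0.9),
            Video("more outrage", "divisive", 0.8),
            Video("local news", "news", 0.6)]
    print([v.title for v in rank_by_engagement(feed)])   # outrage fills the top
    print([v.title for v in rank_with_diversity(feed)])  # news surfaces earlier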

Then we looked at the (?) emerging from countries, and realised the responses could be bucketed into one of three large categories.  One was amendments to existing laws, including penal codes, civil law, electoral law and cybersecurity law, and (?) where false information is defined broadly, which we found carries a significant risk of censorship.  We also go into India.

In this specific case study, researchers demonstrated that platforms overcomply, and that leads to a chilling effect on freedom of speech and expression.

Another aspect that is emerging is that legislative proposals are transferring the obligation to internet (?).  Legislation is being tied to platform size.  I think the German example comes to mind, where a for-profit platform with more than 2 million users (?) obligations: illegal content has to be taken down.

There are also obligations (?) digital services in the EU.  The Digital Services Act is one piece of legislation that clearly transferred the obligation onto platform providers to have more transparency in how their algorithms are working.

In addition to regulatory responses, I think fact-checking initiatives have also emerged as a response to counter false information.  Meta's (?) shows promise.

But again, it raises questions of inherent conflict, and there are also concerns about the payment methods: how is Meta paying or reimbursing fact checkers?  And there is a lack of clarity over whether there is sufficient independence within the organisation as such.

We also see a trend within global majority countries to mimic EU or global north regulations, also known as the Brussels effect.

And let me also segue into the conclusions and tie together what we discussed in the past two minutes.

This is a broad table we have in the (?), but just to give you an overview of how we've categorized some of these countries: we looked at what the instrument and response is, what the criminal sanctions are, whether it is an intermediary liability framework they introduced, and whether there is a transparency and accountability obligation they have introduced.

The European Union and Germany have been given as examples because we felt they have additional transparency and accountability requirements, as opposed to some of the other countries that you see on your screen.

>> LUCA BELLI: Thank you very much.  If we can wrap up, because we still have one last speaker.

>> I'm on the last slide.  I'll just quickly walk you through the broad recommendations.  One was the unbundling of platforms; adopting a (?) approach; and developing inclusive (?), as discussed by previous presenters.  So I'll stop here, and thank you so much.

>> LUCA BELLI: Fantastic.  And last but of course not least, Dr. Guangyu Qiao-Franco.  The floor is yours.

>> Thank you.  And thank you for staying around for my presentation.  My contribution is coauthored by ... of Vrije Universiteit Brussel, who is also present online today.

Our research is on military AI governance.  We highlight the concerning and widening gap between north and south in military AI governance.

One striking observation is the limited and decreasing participation of global south countries in U.N. deliberations on military AI.  Between 2014 and 2023, fewer than 20 developing countries contributed on a regular basis to U.N. CCW meetings on autonomous weapons systems.

Our interviews indicate different priorities in AI governance: while the global north emphasises security governance and ethical frameworks, the global south prioritises economic development and capacity building.