IGF 2023 - Day 0 - Event #51 Shaping AI to ensure Respect for Human Rights and Democracy

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> MODERATOR: (microphone muted) -- from different time zones.  Happy to be here in Kyoto.  I'm Thomas Schneider.  I happen to chair the negotiations on the first binding AI treaty at the Council of Europe.  That is a treaty not just for European countries; it is open to all countries that respect and share the same values of human rights, democracy and the rule of law.  We have countries like Canada, the United States and Japan participating in the negotiations, but also a number of countries from Latin America and other continents.  But I'm not here to talk about the convention right now.

               You will hear a lot about the convention, maybe in this session but also in others.  I'm here to help you listen to experts from all over the world who will talk about AI and how to ensure, while fostering innovation, that human rights and democratic values are respected when AI is developed and used.

     So as we all know, AI systems are wonderful tools if they are used for the benefit of all people and if they are not used for hurting people or creating harm.  And so this is about how to try and make sure that one thing happens and the other doesn't.

     But before we go to the panelists, I have the honour to present to you a very special guest, Bjorn Berge, the Deputy Secretary-General of the Council of Europe, who will give you a few remarks from his side.  Thank you.  Please go ahead.

     >> BJORN BERGE: Thank you very much, Ambassador Schneider.  And good afternoon to all of you.  It is really great to be here in Japan on this very important occasion.  It is 17 years now, and the 18th time, that the IGF is meeting.  And it has really proven to be both an historic and highly relevant decision to start this process.

     And technology, as we know, is developing in a way and at a pace that the world has never seen before.  It affects all of us, every country, every community around this globe.  It therefore makes perfect sense to keep up the work and do all we can to ensure enhanced digital cooperation and the development of a global information society.

     Basically, this is about working together to identify and mitigate common risks so that we can make sure that the benefits that the new technology can bring to our economies and societies are, indeed, helpful and respect fundamental rights.

     Today, it is good to see the Internet Governance Forum making substantial progress towards a Global Digital Compact, with human rights established as one of the principles in which digital technology should be rooted, along with the regulation of Artificial Intelligence, all to the benefit of people throughout the world.  The regulation of AI is also something on which the Council of Europe is making good progress, in line with our mandate to protect and promote common legal standards in human rights, democracy and the rule of law.  And the work we do is not relevant for Europe alone but often has a global reach.

     So, dear friends, I believe all of us are fully aware of the positive changes that AI can bring: increased efficiency and productivity, with mundane and repetitive tasks moving from humans to machines; better decisions made on the basis of big data, eliminating the possibility of human error; and improved services based on deep analysis of vast information, leading to scientific and medical breakthroughs that seemed impossible until very recent times.

     But with all of this come significant rights-based concerns.  And as a matter of fact, just a few days ago, the Council of Europe published a study on tackling bias in AI systems to promote equality.  And I'm very happy and pleased that the co-author of this excellent study, Ms Ivana Bartoletti, is here with us today online, and she will speak after me, I think.

     There are also other questions related to the availability and use of personal data; to responsibility for the technical failure of AI applications; to criminal misuse, for example in attacking election systems; and to access to information, the growth of hate speech, fake news and disinformation, and how these are managed.

     The bottom line is that we must find a way to harness the benefits of AI without sacrificing our values.  So how can we do that?

     Our starting point should be the range of Internet Governance tools that we have already agreed upon, some of which have a direct bearing on AI.  If I focus on Europe for a moment, this includes the European Convention on Human Rights, which has been ratified by 46 European countries, and the European Court of Human Rights with its important case law.  And let me just mention one concrete example from such a court judgment.

     A case that clarified that online news portals can be held liable for user-generated comments if they fail to remove clearly unlawful content promptly.  This is a good example of the evolution of law in line with the times.  Drawing from the European Convention and the court's case law, which of course again builds on the Universal Declaration of Human Rights, we have also developed specific legal instruments designed to help Member States, but also countries outside Europe, apply our standards and principles in regard to Internet Governance, such as the Budapest Convention on Cybercrime.

     Its Second Additional Protocol is designed to improve cross-border access to electronic evidence, thereby extending the arm of justice further into cyberspace.  Our Convention 108 on data protection is similarly a treaty that countries both inside and outside of Europe find highly relevant.

     And this convention on data protection has also been updated with an amending protocol, widely referred to as Convention 108+, which helps ensure that national privacy laws converge.  Added to this, we have within the Council of Europe adopted a range of recommendations to all of our 46 Member States covering everything from combating hate speech, especially online, to tackling disinformation.  And right now, we are also working on a set of new guidelines on countering the spread of online mis- and disinformation.  In addition, we are looking at the impact of the digital transformation of the media, and this year we will finalise work on a set of new guidelines for the use of AI in journalism.

     So all in all, we are indeed involved in a number of areas, trying to help and contribute.  But we need to go further still on AI specifically.  And here we are currently negotiating a far-reaching and first-of-its-kind international treaty, a framework convention.  The work is led by Ambassador Schneider, sitting next to me, and AI experts from all over Europe, as well as Civil Society and representatives from the private sector, are contributing to it.  Such a treaty will set out common principles and rules to ensure that the design, development and use of AI systems respect common legal standards, safeguard human rights and democratic values through law, and that these systems are rights-compliant throughout their life cycle.

     Like the Internet Governance Forum, this process has not been limited to the involvement of governments alone, and this is crucially important, because we need to draw upon the unique expertise provided by Civil Society participants, academics and industry representatives.  In other words, we must always seek a multistakeholder approach, also to ensure that what is proposed is relevant, balanced and effective.

     Such a new treaty, a framework convention, would be followed by a standalone non-binding methodology for the risk and impact assessment of AI systems, to help national authorities adopt the most effective approach to both the regulation and the implementation of AI systems.  But it is also important to say here today that all of this work is not limited to the Council of Europe or our Member States.  The European Union is also engaged in the negotiations, as are non-European countries such as Canada, the United States, Mexico and Israel.  This week Argentina, Costa Rica, Peru and Uruguay joined.  And, of course, Japan, a country that has been a Council of Europe observer for more than 25 years.

     And that is actively participating in a range of our activities.  There is no doubt that Japan's outstanding expertise and track record of technological development make it a much-valued participant in our work.  And its key role globally when it comes to AI and Internet Governance is only confirmed by its hosting of this important conference here in Kyoto this week.

     So, dear friends, there is still time for other like-minded countries to join this process of negotiating a new international treaty on AI, either by taking part in the negotiations or as observers, a role that a number of non-Member States have actually requested in order to have a say.  The negotiations are progressing, and I must say they are progressing well.  A consolidated working draft of the framework convention was published this summer, and it will now serve as the basis for further negotiations.  And yes, our aim is that we should be able to conclude these negotiations by next year.  I hope you agree.

     Let me also underline that the framework convention will be open for signature by countries around the world.  It will have the potential for a truly global reach, creating a legal framework that brings European and non-European states together, opening the door, so to say, to a new era of rights-based AI around the world.

     So let me here just make an appeal to the government representatives present today to consider whether this is a process that they might join and a treaty that they most likely will go on to sign.  I also encourage those who have not yet done so to join the Budapest Convention and Conventions 108 and 108+, which I just mentioned.  I believe it makes sense to work closely together on these issues and make progress on the biggest scale possible.

     Let me lastly, and more broadly, just say on this point that on the regulation of Artificial Intelligence we can learn from each other, benefit from various experiences and tap into a large pool of knowledge and expertise globally.

     For us, the Council of Europe, seeking multilateral solutions to multilateral problems is really part of our DNA.  This spirit of cooperation makes it natural for us to work with others with an interest in these issues as well.  I also want to highlight today that we are now working very closely with the Institute of Electrical and Electronics Engineers to elaborate a report on the impact of the metaverse and immersive realities, and we are looking carefully into whether the current tools are adequate for ensuring human rights, democracy and rule of law standards in this field.  We are also coordinating closely with UNESCO, with the OECD, with the OSCE, the Organization for Security and Co-operation in Europe, and, of course, with the European Union.

     And I believe that is also why we are here today at the Internet Governance Forum: we share its spirit and its ambition of international cooperation.  And this is really the only approach for us.  I'm sure its success is a must, both for the development of Artificial Intelligence and for helping to shape the safe, open and outward-looking societies that uphold and protect fundamental rights and are true to our values.  So with this, I thank you very much for your attention.

     >> THOMAS SCHNEIDER: Thank you very much.  You said the key to us being together here is to learn from each other, which means listening and trying to understand each other's situation.  And I am very happy to have quite a range of experts with different expertise here on the panel, but of course also in the room.  So I'm looking forward to an interesting exchange.

     And I will immediately go -- you already named her -- to Ivana Bartoletti.  She is connected online, so we have this advantage after COVID that we can connect with people physically here but also remotely.  Ivana Bartoletti works in a major private company specialized, among other things, in IT consulting.  She is also a researcher, teaches cybersecurity, privacy and bias at the Pamplin Business School at Virginia Tech, and is the founder of the Women Leading in AI Network, working on digital rights and AI.

     Tell us about your work.  What are the main challenges when it comes to bias, and in particular gender bias, in AI?  And what do you think we need to do to develop and foster the appropriate solutions to these problems?  So, Ivana, I hope you will appear on the screen soon.  Yes, we can already hear you.

     >> IVANA BARTOLETTI: Wonderful.  Thank you so much, and thank you for having me here.  It was great to hear from yourself in the introduction and from Mr. Bjorn Berge, the Deputy Secretary-General of the Council of Europe.  I want to say thank you for the trust placed in me in putting together this report, which is now available online.

     I wanted to start by saying that Artificial Intelligence is bringing, and will bring, enormous innovation and progress if we do it in the right way.  And I do firmly believe, as many do, that we are at a watershed moment in the relationship between humanity and technology.  This is the time.  And the Deputy Secretary-General of the Council of Europe articulated it well.

     We are at a watershed moment in this relationship between humanity and technology.  We have seen some of the amazing benefits that Artificial Intelligence and automated decision making can bring to humanity.  On the other hand, we have also seen some quite disturbing sides and effects of these technologies.  And the bias, the coding and the automation of existing inequality has been one of them.

     I want to make one point as we start.  Over the last few years and -- sorry, over the last few weeks and months, we have seen a lot of people coming out with quite alarmist and dramatic appeals on Artificial Intelligence.  And I want to say loud and clear, here in this room, that this alarmist approach to Artificial Intelligence has been quite distracting.  The reason is that it helps create a mystique around Artificial Intelligence.  We know very well right now what the risks are.  We have been advocating for measures to tackle them over the last decade, and I have to say it has especially been women and human rights activists doing so.

     So I want us and everybody to remain focused on Artificial Intelligence risks and harms, down to the nitty gritty, as the Council of Europe mentioned.  A lot of work is going, for example, into the European AI Act and the development of legislation and guidance all across the world, into the convention of the Council of Europe, and into what the United Nations is putting forward with the Global Digital Compact, to really focus on the harms that we know of: the harms of bias, disinformation, the coding of existing inequalities into automated decisions, making choices about individuals now but also making predictions about decisions tomorrow.

     So in this study, Rafael and I focused on bias in automated decision making, looking at what this bias looks like.  There has been a lot of work going into this, and a lot of expertise all around the globe focusing on bias.  And we have seen that this bias has a very real effect.  We have seen less credit given to women because women traditionally make less money than men.  We have seen countries and governments grappling with terrible mistakes, for example families wrongfully identified as potential fraudsters in the benefit system and therefore pushed, parents and children, into poverty.

     We have seen what it means when job adverts for roles that pay less are served to women because traditionally women have earned less than men.  And we have seen the harms of automated decision making, for example, in portraying images of women that replicate stereotypes we have seen for decades in our society.

     So the harms of automated decision making and of bias are all too real for people and affect everyday life.  And some people argue, yes, but humans are biased.  And I say yes, they are biased, obviously they are.  But the difference is when that bias gets coded into software.  It becomes more difficult to identify, more difficult to challenge, and then it becomes engrained even more in our society.  And this is particularly complex, in my view, when it comes to predictive technologies: if we code this bias and these stereotypes into predictive technologies, what could happen is that we end up with self-fulfilling prophecies; we are led into replicating the patterns of yesterday in decisions that shape the world of tomorrow.  And this is not something that we want.

     So what can we do?  First of all, we must recognise that bias cannot be addressed from a technical standpoint alone.  Bias is much deeper than that.  It is much more than a technical issue.  It is rooted in society, because the data, the parameters and all of the humans that go into creating the code are much more than technology.  Ultimately, I like to say that AI is a bundle of code, of parameters, of people, of data, and none of that is neutral.

     Therefore, we must understand that these tools are much more socio-technical tools than purely technical ones.  So it's important to bear in mind that the origin and cause of bias, which can emerge at any point in the life cycle of AI, is a social and political issue that requires social answers, not purely technical ones.  So let's never lose this from our conversation.

     The second thing that is important to realize, and that we found in the study with Rafael, is that there is often a gap between the discrimination that people experience, which traditionally, especially in non-discrimination law, is based on protected characteristics, and the new sources of discrimination, which are often algorithmic.  Algorithmic discrimination, which is created by big datasets, by clustering individuals in a computational or algorithmic way, and, on the other hand, the more traditional categories of discrimination, do not overlap.

     Therefore, we must look into existing non-discrimination law and try to understand whether the non-discrimination law that we have in place in our countries is fit for purpose to deal with this new form of algorithmic discrimination.  Because what happens in algorithmic discrimination is that individuals may be discriminated against not because of a traditional protected characteristic, but because they have been put in a particular cluster, in a particular group.  And this happens in a computational and algorithmic way.  So the updating of existing legislation is very important.

     We encourage Member States to expand the use of positive action measures to tackle algorithmic discrimination and to use the concept of positive obligations, which exists, for example, in the European Convention on Human Rights case law, to create an obligation for providers and users to reasonably prevent algorithmic discrimination.  This is really, really important.  We are suggesting mandatory discrimination risk and equality impact assessments throughout the life cycle of algorithmic systems, according to their specific uses.  We really want to ensure that equality by design is introduced into these systems.  And we are suggesting that Member States consider how certification mechanisms could be used to ensure that bias has been mitigated.

     Looking, for example, at how Member States could introduce some form of licensing certifying that due diligence has gone into these systems to eliminate bias as far as possible for well-defined uses.  We also encourage Member States to investigate the relationship between accountability, transparency and trade secrets.

     And finally, my last point: we encourage Member States to consider establishing legal obligations for users of AI systems to publish statistical data that can allow third parties and researchers to really look at the discriminatory effect that a given system can have in a given context.

     So I want to close on this.  It is a vast report that I would encourage everyone to read.  And the bottom line of this report is that discrimination through AI systems is something that brings together social and technical dimensions.  It absolutely needs to be at the heart of how we deploy these systems.  We must also investigate the ways we can use these systems to actually tackle discrimination, for example by identifying sources of discrimination that are not visible to human eyes in the first place.  So there is a positive side to all this, which we must harness.  But to do so, we encourage everyone to really understand how we can get together, bring in the greatest expertise in the world and in this room, and try to understand how we can not just further our knowledge but also enshrine in legislation the importance of tackling bias in algorithmic systems.

     >> THOMAS SCHNEIDER: Thank you very much, Ivana.  As you say, new technologies create new problems sometimes but they can also be part of new solutions.  And it is good to highlight both. 

     With this, let me move on immediately, as we are slightly running behind schedule, to Ms Merve Hickok.  She is also connected online; we do, as you see, also have physically present speakers and experts.  Merve is a globally renowned expert on AI ethics and governance.  Her research, training and consulting work focuses on the impact of AI systems on individuals, society, and public and private organizations, with a particular focus on fundamental rights, democratic values and social justice.  She is the president and research director at the Center for AI and Digital Policy.  And with this hat she is also very active as one of the leading Civil Society voices in the negotiations on the convention.

     So, Merve, what are some of the main challenges of finding proper regulatory solutions to the challenges posed by AI to human rights and democracy?  And what kind of solutions to these challenges do you see?  Thank you.

     >> MERVE HICKOK: First of all, thank you so much for the invitation, Chair Schneider.  Good to see you, even if virtually.  I appreciate the invitation and the expansion of this conversation into this global forum as well.  Also, I'm in great company here today and very much looking forward to the conversation.

     I actually want to answer the question by quoting from a recommendation of the Committee of Ministers of the Council of Europe dating back to 2020, where the Ministers recommend that human and socially beneficial innovation and economic development goals must be rooted in the shared values of democratic societies, such as full democratic participation and oversight, and that the rule of law standards that govern public and private relations, such as legality, transparency, predictability, accountability and oversight, must also be maintained in the context of algorithmic systems.

     So this sentence alone provides us with a great summary of the challenges, as well as an opportunity and a direction for solutions.  First, in terms of challenges, we currently see a tendency to treat innovation and protection as an either/or situation, as a zero-sum game.  I cannot tell you the number of times I am asked: but would regulating AI stifle innovation?  I'm sure those in the room and on the panel have probably lost count of this question.

     However, they should coexist.  They must coexist.  Regulation creates clarity and safeguards, making innovations better, safer, more accessible.  Genuine innovation promotes human rights.  It promotes engagement.  It promotes transparency.

     The second challenge we are seeing in this field is that the rule of law standards which govern public actors' use of AI systems must apply to private actors as well.  It feels like every day we see another privately owned AI product undermining rights or access to resources.  Yes, of course, there are differences in the nature of duties between private and public actors.  However, businesses have an obligation to respect human rights and the rule of law, too.

     This is reflected in the United Nations Guiding Principles on Business and Human Rights, and it is reflected in the processes for AI.  We cannot overlook how private sector use of AI impacts individuals and communities just because we want our domestic companies to be more competitive.  Market competition alone will not solve this problem for human rights and democratic values.  Unregulated competition might encourage a race to the bottom.

     And the third challenge, the final challenge, is the CEO and industry dominance in the regulatory conversations we are seeing around the globe today.  As the Ministers note, innovation must be subject to full democratic participation and oversight.  We cannot create regulatory solutions behind closed doors, with industry actors deciding how they should be regulated.  Of course, industry must be part of this conversation; however, democracy requires public engagement.  Whether it is in the US, the UK or beyond, we are seeing the dominance of industry in the policymaking process undermining democratic values.  And it is likely to accelerate existing concerns about the replication of bias, the displacement of labour, the concentration of wealth, and power imbalances.

     As I mentioned, the Ministers' recommendation actually includes the solutions within it: we need to base our solutions on democratic values.  In other words, put civic engagement at the heart of policymaking, of elections, of governance, of transparency and accountability.  I would like to finish very quickly with some recommendations, because democratic values and human rights are core to the mission of my organization.

     We saw this challenge several years ago and set ourselves up for a major project to objectively assess AI policies and practices across countries.  Our annual flagship report is called the AI and Democratic Values Index.  We published a third edition this year, in which we assess 75 countries against 12 objective metrics.  These metrics allow us to assess whether and how these countries see the importance of human rights and democracy, and whether they hold themselves accountable for their commitments.  In other words, do they walk the talk.  You would be surprised to see how many commitments in national strategies do not actually translate into practice.

     So let me finish my response by offering recommendations from our annual report over the past three years that I hope will be applicable to this conversation.  First, establish national policies for AI that implement democratic values.  Second, ensure public participation in AI policymaking and create robust mechanisms for independent oversight of AI systems.  Third, guarantee fairness, accountability and transparency of all AI systems, public and private.  Fourth, commit to these principles in the development, procurement and implementation of AI for public services, where a lot of the time the middle one, procurement, falls through the cracks.

     The next recommendation is to implement the UNESCO Recommendation on the Ethics of AI.  And the final one, in terms of implementation, is to establish a comprehensive, legally binding convention for AI.  And I do appreciate being part of the Council of Europe's work and am looking forward to this convention for AI.

     And then we have two recommendations on specific technologies, because they undermine both human rights and democratic values and civic engagement.  One is facial recognition for mass surveillance.  The second one is the deployment of lethal autonomous weapons, both items that have also been repeatedly discussed in UN negotiations and conversations.  With that, I would like to say thank you again.  I'm looking forward to the rest of the conversation.

     >> THOMAS SCHNEIDER: That was very interesting.  In particular the fight against the notion that you can have either innovation or protection of rights; both need to go together.

     With this, let me turn to Francesca Rossi.  She's also present online.  She is a computer -- by the way, have you noticed we have quite a number of women here on this panel?  So for those who complain that you can't find any women specialists: actually, sometimes you do.

     Francesca Rossi is a computer scientist currently working at the IBM Watson Research Lab in New York, where she is an IBM fellow and the IBM AI ethics global leader.  She's actively engaged in the AI-related work of bodies like the IEEE, the European Commission's High-Level Expert Group and the Global Partnership on AI.  And she will give us a unique perspective, both as a computer scientist and researcher and as someone who knows the industry perspective, on the challenges and opportunities created by AI, and especially by generative AI.  You have the floor, Francesca.  Thank you.

     >> FRANCESCA ROSSI: Thank you.  Thank you very much for this invitation and for the opportunity to participate in this panel.

     So, many of the things that have been said by the previous speakers resonate with me.  Of course, everything that Ivana said about the socio-technical aspects of AI: I have been saying for several years now that AI is not a science or a technology only, but really a socio-technical field of study.  And that is a very important point to make.

     I really support all of the efforts that the Council of Europe and the European Commission are making in terms of regulating AI.  Both my company and I really feel that regulation is important to have, and it does not stifle innovation, as was also said by the previous speaker.  But regulation should focus, in my view, on the uses of the technology rather than the technology itself.

     The same technology can be used in many different ways, in many different application scenarios.  Some of them are very, very low risk or no risk at all, and some others instead are very, very high risk.  So we should make sure that we focus where the risk is when we put in place obligations, compliance, scrutiny and so on.

     I would like to share with you what has happened over the last years in a company like IBM, which is a global company that deploys its technology to many different sectors of our society.  What we did inside the company, even though there was, and in some regions of the world still is, no AI regulation to comply with, stems from the fact that we really feel regulation is needed but cannot be the only solution.  Also because technology moves much faster than the legislative process.  So companies have to play their role and their part in making sure that the technology they build and deploy to their clients respects human rights, freedom and human dignity, avoids bias, and so on.

     So the lessons that we have learned in these years are few, but I think very important.  First of all, a company should not have just an AI ethics team.  This is something that may be natural to have at first, but it is not effective in my view, because having a single team means that the team usually has to struggle to connect with all of the business units of the company.  What a company must have instead is a company-wide approach and framework for AI ethics, and centralized governance for that company-wide framework; for example, in our case, in the form of a board with representatives from all of the business units.

     Second, this board should not be an advisory board.  It should be an entity that can make decisions for the company, even when the decisions are not well received by some of the teams, because, for example, it says: no, you cannot sign that contract with a client; you have to do some more testing; you have to pass the threshold for bias; you have to put certain conditions in the contractual agreements; and so on.

     The third thing that we learned is that we started, like everybody, with very high-level principles around AI ethics, but then we realized very soon that we needed to go much deeper, into concrete actions.  Otherwise, the principles had no impact on what the developers and the consultants were doing.

     The next one is really the socio-technical path.  For a technical company it is very natural to think that an issue with the technology can be solved with some more technology.  And, of course, technical tools are very important, but they are the easy path.  The most important and complementary path to the technical tools is education, risk assessment processes, and developers' guidelines; really changing the culture and the frame of mind of everybody in the company around the technology.

     The next point is the importance of research.  AI research can augment the capabilities of AI, but it can also help in addressing some of the issues related to the current limitations of AI, and that is very important, so we really focus on supporting research efforts.  We also have to remember that the technology evolves.  Over the years, our framework has evolved because of the new and expanded challenges that came with the evolution of the technology.  We have gone from a technology that was just rule-based, to one based on machine learning, and now to generative AI, which expands old issues, like those related to fairness, explainability and robustness, but creates new ones, right?  Misinformation, fake news, copyright infringement and so on were mentioned.

     And then, finally, the value of partnerships: partnerships that are multistakeholder, that are global.  As the Deputy Secretary-General mentioned, this is a really very important and necessary approach.  It has to be inclusive, multistakeholder and global.  So I have been working with the OECD, with the Partnership on AI, with the Global Partnership on AI.  The space is very crowded now.

     And we have to make an effort, because of this crowded space, to find complementarity and ways to work together.  Each initiative tries to solve the whole thing, but I think each initiative has its own angle that is very important and complementary to the other ones.

     So I will stop here by saying that I really welcome what the Council of Europe is doing, also under the leadership of our moderator, but I also welcome what the UN is doing, or at least trying to do, with the new advisory body that has been created, because the UN can also play an important role in making sure that AI is driven in the right direction, guided by the UN Sustainable Development Goals.  Thank you.

     >> THOMAS SCHNEIDER: Thank you, Francesca, for sharing with us the lessons learned in a company like IBM from an industry perspective, but also, I think, for the very important guidance, an appeal to intergovernmental institutions and other processes: not to all try to solve all problems at once, but for each of the processes and institutions to focus on its particular strength and to solve the problems jointly.  Thank you very much.

     With this, let us move on.  This is our last online speaker, and then we have the physically present speakers here: Professor Daniel Castano.  He comes from the academic world.  He is a professor of law at the Universidad Externado de Colombia, but has a strong background in working with government.  He is a former legal advisor to different ministries in Colombia and is actively engaged in AI-related research and work in Colombia and Latin America in general.  He is also an independent consultant on AI and new technologies.

     So, Daniel, what kind of specific challenges do AI technologies pose for regulators and developers of these technologies regionally?  In your case, in Latin America in particular.  Thank you very much, Daniel.

     >> DANIEL CASTANO PARRA: Well, ladies and gentlemen, Deputy Secretary-General Bjorn Berge, and Thomas Schneider, thank you very much for this invitation. 

     I think that the best way to address this question is to discuss the profound importance of AI regulation.  But first I must make clear that I'm speaking in my own voice and that my remarks reflect my personal views on this topic.

     So AI as we know it is no longer just a buzzword or a distant concept.  From enhancing healthcare diagnosis to making financial markets more efficient, AI is deeply embedded in our societal fabric.  Yet, like any transformative technology, its immense power brings forth both promises and challenges.  Why, you may ask, are regulations paramount not only to Europe but to the world and to our region, Latin America?

     At its core, it is about upholding the values we hold dear in our societies.  Transparency: in an age where algorithms shape many of our daily decisions, understanding their mechanics is not just a technical necessity but a democratic imperative.  Accountability: our societies thrive on the principle of responsibility.  If an AI errs or discriminates, there must be a framework to address the consequences.

     Ethics and bias: we are duty-bound to ensure that AI doesn't perpetuate systemic biases but instead aids in creating a fairer society.  And as we stand on the brink of a new economic era, we must ponder how to distribute AI's benefits equitably and protect against its potential misuse.

     Now, casting our gaze towards Latin America, a region of vibrant cultures and emerging economies, the AI landscape is both promising and challenging, in sectors ranging from agriculture to smart cities.  While some nations are taking proactive measures, others are still finding their footing.  However, the road to a unified framework faces certain stumbling blocks in our region.

     Like, for example, fragmentation due to inconsistent inter-country coordination; I mean, we lack the coordination and integration that Europe has nowadays.  We have deep technological gaps, attributable to varied adoption rates and expertise levels.  And we have infrastructure challenges that sometimes hamper consistent and widespread AI application.  But let's not just dwell on the challenges.  Let's try to architect solutions together.

     So first, I would suggest that we require some sort of regional coordination.  For that purpose, we could establish a dedicated entity to harmonize AI across Latin America, fostering unity and diversity.  I would also suggest promoting the creation of technology-sharing platforms: collaborative platforms where countries can share AI tools, solutions and expertise, bridging the technological gap.  And I would suggest investments in shared infrastructure for our region; consider pooling resources to build regional digital infrastructure, ensuring that even nations with limited resources have access to conventional AI tech.

     Unique challenges also present themselves: regulatory discrepancies, variances in technology access, and differing data privacy norms necessitate a nuanced approach in our region.  But herein also lies the opportunity.  AI has the potential to address regional challenges, whether it's delivering healthcare to remote Amazonian villages or predicting and mitigating the impact of natural disasters.

     So what do I envision for Latin America, and indeed the global community?  First, we know synergies are key.  Latin American countries, by sharing best practices and even setting regional standards, can craft a harmonised AI narrative.  Second, I highly encourage stakeholder involvement.  A diverse chorus of voices, from technologists to industry to Civil Society, must actively shape the AI dialogue in our region.

     We also need capacity building.  I mean, we have a huge technological gap in our region, and I think that investment in education and research is non-negotiable.  Preparing our citizenry for an AI-augmented future is a responsibility we share with the world.

     Finally, I also encourage strengthening data privacy and protection, and trying to harmonize the fragmented regulatory scheme that we have now in LATAM, because I think the current fragmentation would lead to a balkanisation of technology, which would only hamper innovation and put us many years back.

     So in conclusion, as we stand at this confluence of technology, policy and ethics, I urge all stakeholders to approach AI with a balance of enthusiasm and caution.  Together, we can harness the potential of AI to advance the Latin American agenda.  Thank you all for your attention, and I really look forward to our collective collaboration around this pivotal issue.  Thank you very much.

     >> THOMAS SCHNEIDER: Thank you very much, Daniel, for these interesting insights, in particular for people coming from Europe like me -- although my country is not a formal member of the European Union.  I think we have a well-developed cooperation and also some harmonisation of standards, not just through the Council of Europe when it comes to human rights, democracy and the rule of law, but also economic standards.  And it is important to know that this is not necessarily the case on other continents, where you have a much greater diversity of rules and standards in different ways, which is, of course, also a challenge.  And I think your ideas, your solutions for overcoming these challenges, are very valuable.

     Let me now turn to Professor Liming Zhu.  I hope I pronounced it correctly.  He is a research director at Australia's National Science Agency and a full professor at the University of New South Wales.  So let us continue with the same topic but move to another region, the Asia-Pacific actually.

     And Liming Zhu has been closely involved in developing best practices for AI governance and has worked on the problems of operationalising responsible AI.  So the floor is yours.

     >> LIMING ZHU: Thanks very much for this opportunity.  It is a great honor to join this panel.  I'm from CSIRO, which is Australia's National Science Agency, and we have a part called Data61.  If you're wondering why Data61: 61 is Australia's country code when you call Australia.  It is a business unit doing research on AI, digital and data.

     So, just to go back a little bit on the Australian journey on AI governance and responsible AI.  Australia is one of the very few countries that, back in 2019, late 2018 actually, started developing an AI ethics framework.  Data61 actually led the industry consultation and came up, in mid-2019, with the Australian AI Ethics Framework, which is a set of high-level principles.  And we observe that these principles are similar to many of the principles elsewhere in the world and globally.

     But interestingly, it has three elements in it.  One is that it does not stop at high-level ethical principles: it recognises human values, in the plural, because different parts of the community have different types of values, with important tradeoffs and robust discussion.

     The second part is that it includes many of the traditionally challenging quality attributes, like reliability, safety, security and privacy, recognising that AI will make those kinds of challenges even more challenging.

     And the third part of the AI ethics framework included some things quite unique to AI, such as accountability, transparency, explainability and contestability.  Although these are very important in any digital software, AI makes them more difficult.  Since then, Australia has been focused on operationalising responsible AI.  In the meantime, other agencies have been engaged, such as the Human Rights Commission; when the Human Rights Commissioner heard about this particular topic at this forum, she was very excited, and she forwarded our recent UN submission on AI governance.

     You may also bump into the eSafety Commissioner from Australia at this forum; she is looking at the eSafety aspects of AI challenges as well.  The government actually launched, about two years ago, the Australian National AI Centre.  The National AI Centre, hosted by Data61, is not a research centre; it is an AI adoption centre.  Interestingly, its central theme is responsible AI at scale.  It has created a number of think tanks, including on AI inclusion and diversity, responsible AI, and AI at scale, to help Australian industry navigate the challenge of adopting AI responsibly in everything they do.

     In the meantime, at the science agency, you know, I'm a scientist in AI, we have been working on best practices, bridging this gap that Francesca mentioned: how do we get high-level principles into something on the ground that organisations, developers and AI experts can use?  We have developed a pattern-based approach for this.  A pattern is just a reusable solution, a reusable best practice.  But interestingly, a pattern captures not only the best practice but also its context and its pros and cons.  Best practices do not come without downsides, and many best practices need to be connected.  There are so many guidelines, sometimes for governance, sometimes for AI engineers, and there is a lot of disconnection between them.  But when you connect those best practices together, you see how society, technology companies and governing bodies can implement responsible AI more effectively.

     Another key focus of our approach from Australia is on the system level.  So much of the AI discussion has been about the AI model: you have this AI model, you need to give it more data to train it, to align it, to make it better.  But remember, every single system we use, including ChatGPT and others, is an overall system, and the system uses the AI model.  There are a lot of system-level guardrails we need to build in, and those guardrails actually capture the context of use.  Without context, many risk and responsible AI practices are not going to be very effective.  So a system-level approach, going beyond the machine learning models, is another key element of our work.

     The next key element, as I mentioned earlier, is realising the tradeoffs we have to make in many of these discussions.  For people familiar with data governance, we know there is a tradeoff between data utility and privacy; you can't fully get both.  How much data utility do you need to sacrifice, sacrificing the value of the data, for privacy?  And vice versa, how much privacy can you afford to sacrifice to maximize utility?  This is a question for scientists to help answer.

     However, science plays two very important roles.  One is to push the boundary of the utility-versus-privacy curve, meaning that for the same amount of privacy, new science can make sure more utility is extracted.  In the high-level panel this morning you heard of federated machine learning, and many other technologies have been advanced to enable this better tradeoff, getting the best of both worlds.  But importantly, it is not only utility and privacy; it is also fairness.

     You may have heard the story that when we try to preserve privacy by not collecting certain data, it can also harm fairness in some cases.  So now you have three quality attributes to trade off: utility, privacy, fairness.  And there are more.  So how science can enable decisionmakers to make that informed decision is key to our work.

     The next characteristic of our work from Australia is to look at the supply chain.  No one is building AI from the ground up.  You always rely on other vendors' and companies' AI models; you may be using a pretrained model.  How can you be sure what AI is in your organisation?

     So, similar to some of the work on software bills of materials, we have been developing AI bills of materials, so you can be sure what sort of AI is in your system and have that accountability held and shared among the different players in the supply chain.

     And the final thing we have just been working on is to look at responsible AI and AI governance through the lens of ESG.  ESG, of course, stands for Environmental, Social and Governance, and is very much aligned with the UN's Sustainable Development Goals.  The environmental element is your AI footprint, the environmental footprint.  In the social element, AI plays a very important role.  And the governance of AI is often too much about internal company governance, but the societal impact of AI needs to be governed as well.  So looking at responsible AI through the lens of ESG will also make sure investors can use their leverage to drive better responsible AI.

     I will conclude by saying that Australia's approach is really about connecting those practices and enabling the stakeholders to make the right choices and tradeoffs.  And those tradeoffs are not for us to make.  Thank you very much.

     >> THOMAS SCHNEIDER: Thank you, Liming.  It's interesting to hear you talk about tradeoffs and how we can maybe change them from perceived tradeoffs into perceived opportunities if we get the right combinations of these goals.

     So let me turn to our last, but not least, expert, somebody who comes from the country hosting this IGF this year, Japan.  Professor Ema is an Associate Professor at the Institute for Future Initiatives at the University of Tokyo.  Her primary interest is to investigate the benefits and risks of AI in interdisciplinary research groups.  She is very active in Japan's initiatives on AI governance.  And I would like to give her the floor to talk about how Japanese actors, industry, Civil Society and regulators see the issue of regulation and governance of AI.  Thank you, Professor.

     >> ARISA EMA: Thank you very much, Chair Thomas.  I'm honoured to give part of the presentation and to share what is being discussed here in Japan, together with my colleagues.

     As Thomas nicely introduced me, I'm right now in academia, at the University of Tokyo, but I am also a board member of the Japan Deep Learning Association, which is more of a community of startups and companies.  And I am also a member of the AI Strategy Council of the Japanese government.

     However, today I would like to wear my academic hat and talk about what is being discussed here in Japan.  So far, I have seen many of the insights shared by the panelists echo our discussions.  Let me introduce the current status and the kind of discussion that is ongoing here in Japan.

     Back in 2016, at the G7 summit, so before the 2023 summit, the Japanese government released guidelines for AI development.  And I believe that was actually the turning point at which the global discussion about AI guidelines, and collaboration on them, started.

     And this year, 2023, at the G7 summit in Hiroshima, we see a very big debate ongoing on generative AI, and currently the G7 and other countries are discussing how to create rules to govern generative AI and other AI in general.  That is called the Hiroshima AI Process.  And I believe there will be a discussion about it tomorrow morning.

     Alongside that, the Ministry of Internal Affairs and Communications and the Ministry of Economy, Trade and Industry are creating guidelines to support the development of AI and to mitigate its risks.  So that kind of work is right now ongoing here in Japan.

     But before discussing responsible AI use further, I would like to talk a little bit about the AI convention that is under negotiation this year.  The reason I am here is that I am actually very interested in the AI convention, and my colleagues and I are organizing an event here in Japan to discuss its impact on us in Japan and on the world.  We are also creating policy recommendations for the Japanese government: if we are to sign this convention, what kinds of things should we investigate?

     And I think this is a really important convention for when we discuss responsible AI.  To raise some points that should be discussed within this panel, I would like to highlight three points from the recommendations my institution published last month, in September.

     The title is "Toward Responsible AI Deployment: Policy Recommendations for the Hiroshima AI Process."  If you are interested, just search for my institution's name and the policy recommendations and you can find it.  We created these policy recommendations through multistakeholder discussion, including not only academics but also people from industry, and we also had discussions with government officials.  One thing we think is really important is the interoperability of frameworks.  Framework interoperability for AI is one of the keywords that has been discussed at the G7 Summit this year.  But I guess many of us ask: what does interoperability mean?  Our understanding is that we need some transparency about each of the regulations or frameworks that disciplines AI development and usage.

     In this sense, I think this AI convention is really important because, as explained here, it is a framework convention: each country will take its own measures for AI innovation and risk mitigation.  And it is really important to respect other countries' cultures and ways of regulating Artificial Intelligence, and to see how frameworks can connect to each other.

     It is also really important that each country has clear explainability and accountability: what role each stakeholder has, what the responsibilities are, and how to supervise whether those measures are actually working.  In that sense, I think that kind of interoperability, and this AI convention, have really important views to share.

     The next point we raised as a policy recommendation is how we consider responsibility.  The other panelists also discussed this, but I think the important thing is to discuss the responsibility of the developers, the deployers and also the users.  However, regulation is especially important on the user side, because there is a power imbalance between users and developers.  So what we have to do is not only create rules and regulation, but also discuss how to empower citizens and raise their literacy, so that they can judge how the AI they use has actually been developed.

     And I'm really happy to hear that the professor raised ESG, and how investors are very important stakeholders.  Beyond rules and regulation through the legal framework, there are many forms of discipline we can use: for example, investment, or maybe reputation, and also literacy, and actually the technology itself as well.

     So there are many measures we can take.  With all of these considered together, I think we can create better, more responsible AI systems and an AI-implemented society as a whole.

     And last but not least, what we also emphasize is the importance of multistakeholder discussion.  I believe the IGF is a very good moment for this discussion, because the Hiroshima AI Process is ongoing and many countries are now developing their own regulatory frameworks.  And, as I said, the Japanese government is also creating, or perhaps updating, its guidelines.

     And this is the place where we share what has been discussed and the values we hold in common.  The important values that the Council of Europe raises, democracy, human rights and the rule of law, are values we share; starting from them, we can have transparency and framework interoperability, and discuss how to put principles and policy into practice.  With that, I will stop here.  I really appreciate being on this panel.

     >> THOMAS SCHNEIDER: Thank you very much, Arisa.  Before I react, I would like to encourage those who want to interact to stand up and go to the microphones.  We have a little less time than planned for the interactive discussion, but we do have a little bit of time. 

     I think what this panel has shown is that although we share the same goals, we have different systems, different traditions and cultures, not just legally but also socially.  And, of course, that is a great challenge.  As Arisa has also said, if we want to develop this common framework with the Council of Europe, not just for European countries but for all countries around the world, there are two big challenges, and we heard a few hints at how to approach them.  One is how to have governments commit themselves to follow some rules.  This is the easy part, let's say; in the convention, governments can commit to sticking to some rules.

     But since we have such differing systems for making the private sector respect human rights and contribute positively, and not negatively, to democracy, how do we deal with these differences?  How do we make private sector actors responsible in a way that lets them be innovative while contributing positively, and not negatively, to our values?  This is one of the key challenges for us working on this convention, because we cannot just rely on one continent's or one country's framework.  We have to find the common ground between different frameworks.  Having said this, I'm happy to see a few people take the floor.  Please introduce yourself briefly with your name, then make your comment or ask your question.  Thank you.

     >> AUDIENCE: Thank you, sir.  My name is Krisfof (?).  I'm the founder of AI Association.  My question continues the topic we just covered regarding responsibility and tradeoffs.

     Continuing on this question, I would like to raise what I call a right of humans, in contrast to human rights.  Isn't it the right of humans to endure the test of time?  In order to endure the test of time, is it a right or a duty of collective sacrifice?  Is it a right or a duty to redefine some of our most fundamental beliefs and values?  Suppose there were a solution which could deliver a way to that right, but which required us to relinquish temporarily, or risk relinquishing permanently, some of our human rights as defined by the Declaration, because of speed or because of the need for consultation.  For that right to endure the test of time: speed versus right, consultation versus sovereignty, right versus rights.

     If there existed a binary choice at some point, ladies and gentlemen, which right ought we to choose?  And my question extends also to our colleague at IBM and the broader world community.  If we are not to try to solve all problems at the same time, but instead jointly solve specific questions and tackle the overall question together, are we accepting a sacrifice in the speed of decision-making?  Or would we accept that, at some critical point in time, for the endurance of humans as a species, some decisions which require speed beyond what is reasonably possible in a fully democratic process be made slightly less democratically, for those to whom democracy is dearest? 

     And that some decisions which require global consultation be made slightly more democratically, for those to whom democracy poses a challenge to overcome?  Thank you.

     >> THOMAS SCHNEIDER: Thank you very much.  You pose interesting questions; let me try to summarize them. 

     Will we need the right to stay human beings and not turn into machines, because we may have to compete with machines?  That is at least one aspect I think I have heard. 

     Let's take another comment and then see what the reactions are.  Yes, please go ahead.

     >> AUDIENCE: Thank you for recognizing me.  I'm Kent Katerma.  I'm in the manufacturing industry, but I also wear an academic hat at Keio University.  My topic relates to some of the American speakers and the concern about bias. 

     Within the Japanese academic world that I live in, our concern is about bias in the sense that the platforms -- Google, Apple, Facebook and Amazon -- are basically, quote/unquote, American companies.  So who decides these algorithms?  When we look at the United States, we see extremely wide divides; in the case of abortion or politics, there are many issues on which the United States is deeply split.  So if the platforms are deciding these issues in the end, even if it is technically possible to try to avoid bias, who in the end actually decides which answer to go with?  Thank you. 

     >> THOMAS SCHNEIDER: Thank you very much.  Let me turn to the panel on these two issues or questions.  One is how we stay human, given the growing competition with machines.  The other, I think, is about bias: it is not just the AI systems, it is also the data, of course, that shapes the bias.  And data is not evenly spread across the world; there is more data from some regions and some people than from others.  Whoever wants to respond -- maybe we give precedence to those who are physically present.

     >> LIMING ZHU: Thanks very much. I think we have a lot of experts online and Professor Ema is here. 

     Just very briefly, taking the questions in reverse order: who makes those decisions?  As I alluded to earlier, as a scientist, and for the developers at AI providers, these are not their decisions to make.  Their role is to expose these tradeoffs so that the democratic process can have the debate, with data, and reach informed decisions.  There is a dial, between privacy, utility and fairness for example, and society decides where to set it.  The technology then assures that the implementation is properly monitored throughout the system and can be improved by pushing the scientific boundary. 
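     To make the image of such a "dial" concrete, here is a minimal sketch in Python, assuming a simple weighted-score formulation; the TradeoffDial class, its weight names and all numbers are illustrative assumptions, not anything specified in this session:

        # Hypothetical sketch: an AI provider exposes a privacy/utility/fairness
        # trade-off as an explicit "dial" that policymakers, not developers, set.
        # All names and numbers here are illustrative assumptions.
        from dataclasses import dataclass

        @dataclass
        class TradeoffDial:
            privacy_weight: float   # relative importance of privacy protection
            fairness_weight: float  # relative importance of fairness
            utility_weight: float   # relative importance of raw accuracy

            def score(self, privacy: float, fairness: float, utility: float) -> float:
                """Combine measured system properties (each in [0, 1]) into one
                score under the currently chosen policy weights."""
                total = self.privacy_weight + self.fairness_weight + self.utility_weight
                return (self.privacy_weight * privacy
                        + self.fairness_weight * fairness
                        + self.utility_weight * utility) / total

        # The democratic process sets the dial; engineers report the measurements.
        dial = TradeoffDial(privacy_weight=0.5, fairness_weight=0.3, utility_weight=0.2)
        print(round(dial.score(privacy=0.9, fairness=0.7, utility=0.8), 2))  # 0.82

     The point of such a design is that the weights, which express the policy choice, live outside the code that measures privacy, fairness and utility, so moving the dial never requires the developer to re-decide the values question.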

     In terms of AI and humans competing, certainly that is a concern.  When AI beat humans in Go, people in this space, you may know, used to worry that people would stop playing chess and Go.  But at this moment in history, the number of people interested in playing Go and chess is historically high, and the number of chess grandmasters is at a historic high.  The reason is that humans find meaning in that work and in those games.  They will continue even as AI surpasses them; they learn from AI, they work with AI, they make a better society.  But that speed of change might sometimes be too fast for us.

     >> ARISA EMA: I guess the important thing we should discuss, or be aware of, is that although we talk about Artificial Intelligence as a technology, it is, as the professor said, a system.  It is not only AI algorithms and AI models; it is AI systems and AI services.  And within those systems, human beings are also included.  So we have to discuss human-machine interaction and human-machine collaboration. 

     In that way, to partially respond to both questions: we do not have a clear answer, but we have to discuss human-machine interaction, and human biases are already embedded in these human-machine systems.

     So with my academic hat on, I would like to say that we need to focus more on cultural and interdisciplinary discussion of human-machine interaction with Artificial Intelligence.

     >> THOMAS SCHNEIDER: Thank you very much.  We are approaching the end of this session.  I would just like to close with one remark that may show I'm not a lawyer; I'm an economist and a historian. 

     Whenever we talk about being at a crucial moment where history is about to become completely different than before, we should remember that every generation at every point in history has thought the same. 

     And if we look back around 150 or 200 years, to when the combustion engine was spreading across continents, that also had a huge effect.  It did not replace cognitive labour, as AI is about to do, but it replaced physical labour with machines, and there you can find a lot of comparisons. 

     If you take engines and compare what they did: they were used in all kinds of machines, either to produce something or to move something or somebody from A to B.  And we learned to deal with engines.  We developed not just one piece of legislation but hundreds of norms -- technical, legal and social -- for engines used in different kinds of contexts.  For instance, with traffic legislation we have been able to reduce the number of people killed in car accidents significantly. 

     At the same time, the biggest challenge, how to reduce the CO2 emissions of engines, remains: we are still struggling to solve that problem after 200 years of using engines.  And there are many more analogies between AI systems and engines.  I am very delighted by this discussion, and I hope it will continue.  There are a number of AI-related sessions at the IGF this week; I am part of a few of them, and I hope to see you again. 

     Also, I am really interested in finding, together with you, a way for this Council of Europe convention to be a global instrument that will not solve all of the problems, but will help us come closer together and use AI successfully for good and not for bad.  Thanks a lot for this session, and see you soon.  Thank you.

     [APPLAUSE]