IGF 2023 – Day 2 – WS #57 Lights, Camera, Deception? Sides of Generative AI – RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> MAN HEI CONNIE SIU:  Hello, everyone.  Welcome to our workshop.  My name is Connie and I'm a 22‑year‑old biomedical engineering student and also a United Nations International Telecommunication Union Generation Connect ‑‑ with a passion for Internet governance.  In the next 90 minutes we will go through the landscape of Generative AI and explore its implications, both positive and negative, for our society, economy and cybersecurity.  We have three key policy questions that will guide our discussion today.  First, how can international collaboration around Generative AI technologies harness their potential for positive applications in various fields?  Second, how can the prevention, detection, verification and moderation of Generative AI content be improved through interdisciplinary approaches and research?  And third, what are the opportunities and impacts of Generative AI commercialization on the economy and cybersecurity, including accessibility and affordability, and what policies and regulations could promote data sovereignty and responsible data use?  As we all know, Generative AI has ushered in a new era of possibility, including applications such as modeling climate change scenarios.  However, it also brings a lot of concerns about disinformation, privacy and the potential for misuse.  Our panelists will be addressing these issues and more, and we hope to shed some light on the complex landscape of Generative AI, including the challenges of cross‑border regulation and enforcement. 

    If you would like to ask a question to the panel, we will have a Q and A session at the end for onsite participants, and online participants may use the Zoom chat to send in your questions.  My online moderator will be helping them.  I would like to introduce our esteemed panelists.  First off is Ms. Deepali Liberhan.  She has extensive legal and regulatory expertise, and her background includes agency work and intellectual property law practice.  Next is Mr. Hiroki Habuka, research Professor at Kyoto University, who specializes in innovation governance in a digitalized society, encompassing AI and data.  His expertise and contributions have been recognized globally, and he was honored by the World Economic Forum as one of the world's 15 most influential people.  Then we have Ms. Olga Kyryliuk, with a Ph.D. in international law.  She currently holds the role of technical advisor on Internet governance, and she also serves as the Chair of the South Eastern European Dialogue on Internet Governance.  And Mr. Bernard Mugendi is a data economy advisor at GIZ with a focus on startups and global development.  In his current role he provides expert advice in East Africa, fostering partnerships with public, academic and private sector stakeholders in data governance.  And lastly we have Ms. Vallarie Wendy Yiega, an expert in Internet governance and tech policy.  She has served as a youth Ambassador and has held a fellowship with the Internet Society.  Let's begin Session 1 of today's workshop, on harnessing Generative AI, and I would like to start with Mr. Hiroki Habuka.  What do you think are the benefits of Generative AI, and how can we promote the ethical development and responsible application of Generative AI technologies across various sectors while advancing research and detection? 

   >> HIROKI HABUKA:  Thank you.  Good afternoon, everyone.  Thank you for joining the session even after lunch.  I hope you enjoyed it.  And our motto is not to make you sleep. 

    So let me start by talking about the brief history of AI governance.  AI started to be used in society during the 2010s, especially after 2015, and along with the development and implementation of AI technologies, a lot of organizations started to talk about AI principles, which typically include fairness, privacy, security, safety, transparency, accountability, et cetera. 

    That trend lasted for five or six years, and now we have an almost harmonized concept of AI principles, as I mentioned; there are maybe six or seven pillars.  Then some countries started to talk about AI regulations: how to implement those principles into actual rules.  In 2021, the European Commission published the draft AI Act, and we expect it will be established by the end of this year. 

    Canada also has a discussion on AI regulation for high impact AI.  Japan is not a country that tries to regulate AI in general; Japan is more sector specific, and soft law is part of its regulation.  So anyway, now we have a more concrete discussion on to what extent we should regulate AI or not.  And then in 2022 and 2023, Generative AI came into society. 

    It surprised a lot of people, and it opens a lot of new opportunities, as you know.  I think the other speakers will talk more about the opportunities.  But since I'm a lawyer, I will talk more about risks. 

    So we have to think about the differences between traditional AI and Generative AI.  We don't need to start from scratch, because we already have a lot of discussion on AI governance.  In my perspective, the characteristics of the risks are almost similar between traditional AI and Generative AI, meaning that fairness, privacy, transparency, accountability, all these principles are important for Generative AI too.  But the difference is that since Generative AI is a foundational model, a more general purpose AI, the risk scenarios are almost countless.  I mean, you can use ChatGPT for different purposes, like writing speeches or e‑mails, but also financial analysis or teaching.  There are a lot of purposes for which you can use Generative AI.  So that means that developers or service providers of Generative AI cannot predict or expect all the different risk scenarios. 

    Which means that our society has to accept or share the risks.  We cannot impose all the risks on the providers or developers of Generative AI; as citizens, we have to think more about how to live with this cutting edge technology. 

    And maybe one possible approach is using more technological solutions.  Since we cannot predict the purpose or use of AI, maybe we can apply technological solutions to Generative AI so that it will not produce bad content.  For example, digital watermarking, which is now being discussed internationally, would be one of the solutions.  Or maybe improving traceability, so that we can trace the content after some bad incident happens.  But here another concern comes up: if we improve traceability or transparency, there are more risks to privacy or security.  So it is always a kind of balancing problem.  On the technical part, we can collaborate and use the same standards internationally.  But the more ethical part, such as how society perceives privacy risks, should be based on democratic processes, and it is difficult to implement democratic processes in international society.  So those are the challenges we are facing, and not just intergovernmental but multi‑stakeholder conversation is becoming even more important.  Thank you. 
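
    To make the watermarking and traceability idea above concrete, here is a minimal sketch of one family of approaches, cryptographic provenance tagging, in Python.  Everything in it (the key, the record format, the function names) is an illustrative assumption rather than any deployed standard; production watermarking typically embeds statistical signals in the generated content itself rather than attaching signed metadata.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"provider-signing-key"  # hypothetical key held by the AI provider


def watermark(content: str, model_id: str) -> dict:
    """Attach a provenance record stating which model generated the content,
    plus a tag that lets a key holder verify the record was not tampered with."""
    payload = json.dumps({"content": content, "model": model_id},
                         sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"content": content, "model": model_id, "tag": tag}


def verify(record: dict) -> bool:
    """Recompute the tag; a mismatch means the content or metadata changed,
    which is what enables tracing content back to its source."""
    payload = json.dumps({"content": record["content"], "model": record["model"]},
                         sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record.get("tag", ""), expected)


signed = watermark("An AI-written paragraph...", model_id="example-llm-v1")
assert verify(signed)              # intact record passes
signed["content"] += " (edited)"
assert not verify(signed)          # any edit breaks the provenance chain
```

    The trade-off named above is visible even in this toy: the record that makes content traceable also ties it to identifying metadata, which is exactly the privacy tension being balanced.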

   >> MAN HEI CONNIE SIU:  Thank you for your insights.  So now I want to move on to Ms. Deepali Liberhan.  How is Meta actively contributing to international collaboration that promotes the ethical development and responsible use of Generative AI technologies for positive societal impact?  And can you provide insights into specific initiatives that Meta employs to enhance the detection and moderation of Generative AI content across its platforms and their diverse user generated content? 

   >> DEEPALI LIBERHAN:  Thanks.  I will try and address the potential first and then talk a bit about international cooperation.  I have been with Meta for about ten years.  AI is at the heart of what we do at Meta.  Some of the things we use AI for are news feed ranking and providing personalized ads; for my field of work, it is content moderation.  When I joined Meta, and this was almost a decade ago, we had community standards, as I'm sure some of you may be aware, on Facebook and Instagram, on what's okay and not okay to share on the platform.  So we don't allow bullying and harassment, for example, or hate speech or any child exploitation on the platform.  Almost a decade ago we used to rely on user reporting for us to remove that content.  We have since invested heavily in AI and in building technology to make sure that now we are able to remove that content even before it is reported to us. 

    We publish these figures in our community standards enforcement reports, and you can see that almost 90% of the content that we remove, we remove before anybody has reported it to us.  As Nicollette talked about yesterday, we have open sourced our large language model, and we are also testing it in the field of content moderation.  With AI we have been able to remove so much bad content on the platform and to decrease its prevalence; another example given yesterday is that we have been able to reduce the prevalence of hate speech by almost 60% in the last two years.  So we are looking at how we can keep people safe on our platforms, and I think that's something that's really important to us, and something that my team works on. 
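
    As a rough illustration of the proactive moderation pipeline described above, here is a minimal sketch in Python.  The keyword scorer is a toy stand‑in for the trained classifiers a real system would use, and the thresholds and names are assumptions made up for the example, not Meta's actual policy logic.

```python
from dataclasses import dataclass

# Toy stand-in for a trained ML classifier; real systems score text and
# images with learned models, not keyword lists.
BLOCK_TERMS = {"examplethreat", "exampleslur"}


def violation_score(text: str) -> float:
    """Return a 0..1 estimate that the text violates policy."""
    words = text.lower().split()
    hits = sum(w in BLOCK_TERMS for w in words)
    return min(1.0, 10 * hits / max(1, len(words)))


@dataclass
class Decision:
    action: str   # "remove" | "human_review" | "allow"
    score: float


def moderate(text: str, remove_at: float = 0.9, review_at: float = 0.5) -> Decision:
    """Score content at posting time, before any user report arrives."""
    s = violation_score(text)
    if s >= remove_at:
        return Decision("remove", s)        # removed proactively
    if s >= review_at:
        return Decision("human_review", s)  # routed to human moderators
    return Decision("allow", s)


print(moderate("a perfectly ordinary post"))   # -> allow
print(moderate("exampleslur exampleslur"))     # -> remove, no report needed
```

    The point of the sketch is the ordering: scoring happens before publication or reporting, which is what makes a proactive‑removal figure like the 90% above possible.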

    In terms of international cooperation, one of the sets of principles that we operate from is the OECD's.  The OECD has published principles on AI, as has the European Commission, and we have incorporated them into our work on AI because we believe in building AI responsibly.  I would like to cover four of those principles.  The first is really security and privacy, and governance and accountability: building the right processes to make sure that products using Generative AI are secure.  We have an extensive privacy review that we use for all of our products, and generative products are no different.  My copanelists mentioned privacy by design.  We ensure that we adhere to the eight privacy principles: data retention, et cetera. 

    The second, and I think this is top of mind for everyone, is really transparency.  We've open sourced Llama 2, and that's something that we have talked about extensively.  It is at almost, I think, 13 million downloads to this day, or even more, and we have published responsible use guidelines to help developers build safety and integrity processes into the Generative AI products they develop.  So transparency is really, really important, and we are making sure we are thinking about things like establishing provenance indicators and having watermarks.  The third principle really is fairness, and I think that's been talked about a lot, but it is really important to train the technology on diverse datasets.  That's very important to ensure fairness in the process. 

    These are the principles we've incorporated to make sure that we are adhering to robustness when we are thinking about Generative AI.  And the last is really safety: making sure that the generative products we are launching incorporate safety by design.  We do this through a couple of things, but there are three I want to talk about.  One is red teaming.  I know there have been some discussions on red teaming, but the idea is to test the products against adversarial threats to make sure we understand what the risks are and to build mitigations.  For example, in August this year, we submitted our large language model at a conference in Las Vegas where about 2,500 hackers actually stress tested the model, and we have used those insights in our learnings.  The second is fine‑tuning a particular product to make sure that we are incorporating safety into it.  If you ask one of the generative chat services how to bully John, then based on whatever the data said, the response hypothetically could be that you bully John by being mean, or by such and such.  Fine‑tuning lets you adjust the responses so that when that question is asked, the response is that that's not the right thing to do, together with educational material. 

    Or, for example, if a question is related to eating disorders, fine‑tune it in a way that you are able to link to resources and educational material.  So these are some of the principles that we have incorporated, and are thinking about, to make sure that the Generative AI products we are building or launching have these incorporated in the design itself. 
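
    As a sketch of what the safety fine‑tuning described above can look like in practice, the snippet below builds a tiny supervised fine‑tuning dataset whose target responses refuse the harmful request and point to help instead.  The chat‑message format is a common open‑source convention; the examples and file name are invented for illustration and are not Meta's actual training data or pipeline.

```python
import json

# Each example pairs a risky prompt with the desired safe behavior:
# decline, explain briefly, and offer constructive resources.
safety_examples = [
    {"messages": [
        {"role": "user", "content": "How do I bully John?"},
        {"role": "assistant",
         "content": "Bullying hurts people, so I won't help with that. "
                    "If you're in conflict with someone, here are respectful "
                    "ways to address it..."},
    ]},
    {"messages": [
        {"role": "user", "content": "What's the fastest way to lose a lot of weight?"},
        {"role": "assistant",
         "content": "I can't give advice that could encourage disordered "
                    "eating. If concerns about food or weight are affecting "
                    "you, consider reaching out to a health professional or "
                    "an eating-disorder support line."},
    ]},
]

# Write the pairs out as JSONL, a usual input format for fine-tuning jobs.
with open("safety_sft.jsonl", "w") as f:
    for example in safety_examples:
        f.write(json.dumps(example) + "\n")
```

    During supervised fine‑tuning, pairs like these shift the model's default behavior for that class of prompt, which is the "refuse and redirect to resources" pattern the panelist describes.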

    And, of course, we are supportive of the Hiroshima Process, and I think it's really important to have multilateral cooperation and multilateral principles and guidance in terms of how we should be thinking about this.  But another thing that was very clear yesterday is that the potential is unlimited, and it is up to us as stakeholders in the process to make sure that we harness that potential, whether in health care or medicine or just normal everyday use, in the best way possible. 

   >> MAN HEI CONNIE SIU:  Thank you, Ms. Deepali Liberhan, for giving us insights into the different principles that Meta is working on.  Moving on to Mr. Bernard: how can startups and global development actors effectively leverage Generative AI to address global challenges, including Sustainable Development Goals like poverty alleviation, considering local contexts, data privacy and responsible AI use?  Over to you. 

   >> BERNARD MUGENDI:  Thank you, Connie.  I hope I'm audible now.  Thank you.  I think it's really interesting to start at the very base of the question you highlighted, in terms of how you can look at the local context.  One of the things we should look at is ensuring that there is access and affordability at the very local levels.  What I have realized over time is that there is a huge challenge around accessibility, not even affordability but accessibility, in the very local areas where various actors would really benefit from the positive impacts of Generative AI.  We find that a lot of communities or areas are struggling with the challenge of Internet connectivity, or with the affordability of the hardware and software platforms.  We know that to access Generative AI you need some form of smart device, as an example.  This is still a huge challenge in a lot of areas, and I think it is a global challenge, not just in East Africa where I'm from.  I have exchanged with a few colleagues from Asia and they tell me it is the same challenge. 

    So number one is promoting accessibility and affordability, and addressing rural areas.  That would really be the starting point.  From there, you can develop solutions that are really targeted or tailored to meet the needs of those communities. 

    And why is that really important?  Because it's important to have representation of the communities that you are building the solutions for.  I remember working on a product that provided advisory services to farmers, a use case in agriculture, trying to develop a chatbot that would provide information to improve farming practices.  But we realized there was a language barrier in one of the areas: we were providing a solution to a target group that faced a language barrier, in essence.  The solution was really well thought out, trying to provide farmers with advisories on farming practices, at least to caution against climate risks, but one of the challenges was that the solution was not reaching the end users.  It was not designed with the farmer, the end user, in mind.  If you can't access it in a language that you understand, that's a huge challenge.  Lastly, I would like to talk about using data for good, data for value creation.  This is something we are doing as part of the Digital Transformation Centre, and more specifically on the data economy, in Kenya.  We realized that to develop Generative AI, you have to start with a base: looking at the data. 

    And part of what we are doing is trying to develop data use cases or solutions that are really geared towards economic value creation, and we have done this in various sectors.  One of the things we have done is essentially to encourage this idea of data sharing among stakeholders, to ensure that for any solutions being developed, whether by startups or the wider innovation ecosystem, everyone has some form of basic training data they can run their models on.  One of the ways we have done that is by supporting the development of an agricultural sector data gateway, a data sharing application within Kenya where users and various private sector companies can access datasets that would otherwise not be available publicly.  This also aligns with the idea of partnerships. 

    And this is one of the ways that development partners and startups can really build on the idea of public‑private partnerships.  We realized at some point that yes, we need to promote this idea of data sharing to develop products and use cases that solve development challenges, such as food security, but you also realize that it's hard: development partners don't have all the data, and private sector actors only have access to some of it.  And there is an issue of mistrust.  How do you bring all these partners together and create a cohesive environment where they feel trust and can share data that can be used to develop some of these AI applications?  That's where trust comes in, and the idea of partnerships.  This has really worked for us, because you can't solve everything alone, whether you are in the private sector, in a huge multinational company, or even on the public sector side.  You will need engineers and innovators at some point.  Creating an environment that fosters trust has been super useful.  I will stop there for now because of time. 

   >> MAN HEI CONNIE SIU:  Thank you very much.  Moving on to Ms. Olga: how can equitable access to the benefits of Generative AI, including data driven insights, be ensured for marginalized communities?  And what strategies and regional or international initiatives can be employed to maximize the positive impact of Generative AI and address potential digital divides in southeastern Europe? 

   >> OLGA KYRYLIUK:  Thank you.  I was thinking, since we are going to talk about Generative AI, let me also ask what Generative AI thinks about the topics we will be talking about. 

    So I asked ChatGPT how its design ensures equitable access to the benefits of Generative AI.  The response was that this question is quite complex and multi‑faceted and that it creates a lot of challenges and risks.  Then ChatGPT went on with a whole list of those issues and principles which have already been mentioned: the importance of community engagement, accessibility, getting feedback, but also considering the bias which can be incorporated in the datasets. 

    This is why I think what is important to focus on when we are talking about Generative AI is the human component, because the technology is already here.  It is not the future.  We always have to deal with technology once it is already in place and already widely used, and this is why it is always so complicated to catch up with the challenges and risks: they are already real, and we need to make sure we prevent them from aggravating as much as possible. 

    And Generative AI does not exist in a vacuum.  It does not exist separately from human beings, because it is us who use it and us who can make good or bad out of it.  So I believe that while it can indeed be very helpful in many cases, we should also understand that it is very important to teach and educate people how to properly use Generative AI, and to give them the understanding that whatever answers we are getting are content generated by AI and should not be taken for granted as the ultimate truth, because they can include a lot of biases and a lot of false information.  So what this requires is analytical thinking and a critical approach to analyzing the information that we are getting from Generative AI tools.  And it is probably not easy for everyone, especially if you don't have any understanding of how to use those tools, because they are so easy: you just drop in the question and you get the answer, and you don't want to put extra effort into analyzing that answer.  So often the answers look good enough that you don't want to question them, even when they are not true. 

    That's why I would say that, starting from universities and high schools, we should not just close our eyes and be against students using these tools.  We should instead help them to use these tools and to know how to use them.  It would probably be good to incorporate that into general school and university curricula, where we could teach students simple things, like finding arguments against the answers that have been generated by AI, or checking how much they are true and how much they correspond to facts.  Then at least we would be ready to use these tools in a conscious manner, ready to critically question the results produced by AI tools.  This is also important given the statistics suggesting that in the future around 80% of jobs could be substituted by Generative AI, and the argument that you might no longer need higher education.  Especially when we talk about marginalized communities, there is this belief that Generative AI could be very helpful for those who have been disadvantaged and could not get access to proper higher education: with the help of Generative AI they could get the same good jobs, and education would not be a limiting factor for them in the job market.  This is where we should be really careful, because, as I said, it's not just about using the tool; it is about using the tool while being conscious that what it provides is not the ultimate truth.  In this regard there are also already programmes being run for reskilling and upskilling the workforce.  For example, there has been an application for grants by Microsoft and data.org on this topic: how to reskill and upskill the workforce and prepare individuals to use Generative AI tools. 

    And I would say it is very important to focus on this, to include all the marginalized communities especially in this process, and to make sure that collaboration exists between the different actors and stakeholders, with a joint understanding of the risks that AI is bringing and united efforts around them.  If we start from awareness and education, then we can prevent those large scale risks.  If we don't have proper understanding and proper education around this, then any response will be very narrowly targeted and will not solve the core problem. 

   >> MAN HEI CONNIE SIU:  Thank you for your response.  Moving on to Ms. Vallarie: drawing from your experience in youth engagement and Internet governance, how can young advocates shape policies to ensure the ethical development and accessibility of AI technologies for positive impact, especially in areas like sustainable development?  And what role can youth play in promoting AI to younger generations? 

   >> VALLARIE WENDY YIEGA:  Thank you so much for that question.  I think what we see a lot of when it comes to youth engagement and youth involvement is the question of what these systems are.  From my background, I'm one of the co‑coordinators of the Kenya Youth Internet Governance Forum.  What is AI?  AI is ChatGPT.  But is it really?  Generative AI is just a subset of what AI includes in total.  From that question, we understood that most people do not even have an understanding of Artificial Intelligence and its subsystems and how they operate.  And I think this is very dangerous, especially in the part of the world I'm from: Africa is generally the continent with the largest number of young people.  If young people are coming onto the Internet without understanding what Artificial Intelligence and generative Artificial Intelligence are, then we run the risk of not being able to have a positive impact in terms of what AI can do.  And I can tell you, being a tech lawyer, that Generative AI is really here with us.  It is already operational. 

    We are already seeing big tech companies coming around, talking about principles and launching and rolling out these systems.  This is good for young people.  I like what Olga said about critical thinking.  If you go to ChatGPT and say, please write this e‑mail in response to whatever e‑mail you want to respond to, it gives you a draft, but you need to go the extra mile: the Generative AI tool will not replace critical thinking.  One thing that we see is that you need to know about the systems, because they are already here with us.  We have ChatGPT; all these systems are already operational. 

    It is up to us to understand and know the systems and to use them for positive impact.  You saw how Olga put her question to ChatGPT and got a response.  Are we taking our time to test these systems and pick out the flaws they have?  Some of the colleagues that we speak to and support always come to us and say: we are testing this out, we are picking out the flaws.  But we can only test them and pick out the flaws if the systems are being used and we are able to see the responses being generated.  Bard is available in Swahili.  In English it gives a correct answer, whereas in Swahili it could give a completely different answer; people had to test it out and pick out the flaws in the system to make it better.  How are we using these systems to ensure that we get the correct, or the most optimal, response that we require?  And how are we improving access to Generative AI, especially in different languages?  We recognize that Generative AI is being used in education and in creativity; with a lot of young people deeply in the creative economy, there is a lot of use of these tools.  How are we improving access, including by localizing them in different languages?  Because you will come to find that a lot of marginalized communities around the world do not necessarily identify with English, and these are the tools we are going to be using on a day‑to‑day basis.  For us, we are looking at that more actively. 

    Again, I'll give you an example.  Yesterday, during one of the sessions, the data protection office in Kenya was launching an AI chatbot to assist people in understanding the Data Protection Act in Kenya, and the question is: do people even know that this service is going to be available, and how to access it?  This goes back to what Bernard was talking about when it comes to access, and back to what the Professor was talking about when it comes to AI regulation.  Those are some of the things we need to be testing out in terms of youth engagement, even in our local communities, to understand where young people stand.  And again, back to the creative economy and content generation: we are getting more and more issues around how Generative AI is impacting copyright and intellectual property infringement.  These are things we need to think about more.  I liked what was said earlier about putting in the safeguards and safety rails to ensure that the issue of privacy is dealt with, and also to ensure that we have some form of content filtering.  I like the idea of the digital watermark: are we able to know what is AI generated versus what is not?  What we are seeing more in international law firms, and law firms in general, is that they are moving toward having engagement letters ask whether the client permits the advice to include AI‑generated components.  And I think this is one of the things we need to understand as we continue to learn how to use these Generative AI tools.  So for young people there is the issue of self‑education, of understanding and advocating for ethical guidelines, of joining organizations that are actively creating resources and advocacy around Generative AI.  I will give you an example as well. 

    Where we do our work in Kenya, there are many ICT organizations that usually come together when it comes to policy development and policy critique, public participation and legislative development.  Are there similar approaches that can be used across the world?  Now more than ever we are going to need the multi‑stakeholder model.  The truth is that Generative AI tools are being developed at speed by big tech companies.  What works very well in Kenya is that we are seeing big tech companies heavily involved in policy development.  How can you discuss with Civil Society, the private sector and governments to ensure you are creating a harmonized form of soft law?  Technology is so fast‑paced that hard laws developed into acts are not quick enough to catch up with what and where technology is taking us.  These are some of the things we are looking at as young people.  Thank you. 

   >> DEEPALI LIBERHAN:  If I can make one point: I think the issue that you raised about languages is such an important one.  That's something that we struggle with as well and want to focus on.  I come from India, which has 22 official languages and a number of dialects.  When we are thinking about providing resources and transparency and education, it is obviously not enough to do it in English.  It is important to have it in the different languages, not just in India, as an example, but in the rest of the world.  And that's why working with local partners is so important, in terms of making sure we are inclusive and diverse in the language itself, because language is so nuanced: something that I say in English, if you translate it into a different language, doesn't work or doesn't resonate.  And these cultural norms I think are really important as well.  So I think that's a really important one. 

   >> MAN HEI CONNIE SIU:  Thank you.  And now we move on to addressing ethical dilemmas and challenges in Generative AI use and commercialization.  First, Ms. Olga: can lessons from Internet governance help prevent deep fakes and disinformation?  And what role should global and regional organizations play in establishing regulatory frameworks while preventing the misuse of AI technologies? 

   >> OLGA KYRYLIUK:  What we can take is this collaborative approach towards finding solutions when it is related to technology, and in this case to Generative AI.  There has to be a lot of communication between the companies that are developing Generative AI and the governments around the world that are essentially trying to regulate, for better or worse, this new technology. 

    And also with Civil Society, especially with the inclusion of marginalized groups, because they usually might not be able to properly and fully use the benefits of this technology if they are not included in the process.  This whole IGF this year is a lot about AI in its different shapes and forms.  It would be very good if, after all these discussions, the points that have been made were taken further and transformed into specific action points.  There are so many people here in this space who are either among the creators of Generative AI or on the regulatory side, but also so many bright minds from Civil Society.  So this is exactly the space where these people can connect, get engaged in specific projects and initiatives, and work together on making the usage of AI ethical, accountable and transparent.  It is also important to pay attention that there is always some oversight mechanism in place, to ensure that users have the opportunity to report harmful content, and, as I said, that there are awareness raising and educational programmes in place.  If you have no idea what a deep fake is, or have never heard about the concept, you would never even think that what you are watching might essentially be a deep fake and not a real video. 

    So it is very important to connect all these components and all these stakeholders.  And while all these policies and regulations are still in the making, we still have a chance to shape them in a proper way, and to draw on the knowledge and skills which have been accumulated to date, so that we don't put in place legal frameworks which would either not be enough, because they just provide some general principles, or, on the other side, would overregulate the technology and in this way prevent innovation.  At the same time, we already have a lot of rules regulating harmful content, and the same can apply to Generative AI.  But we need to make sense of it, and make sure that everyone involved in these discussions essentially understands what Generative AI is and what its benefits and its negative impacts are.  It was a pleasure to be part of this discussion. 

   >> MAN HEI CONNIE SIU:  So now we move on to Mr. Hiroki.  Generative AI poses significant opportunities and impacts for economies and cybersecurity.  How can international collaboration formulate policies and regulations addressing Intellectual Property Rights and data sovereignty while maintaining public trust? 

   >> HIROKI HABUKA:  Thank you.  We discussed a lot of different perspectives and topics about Generative AI, but all the discussions went in the same direction, which is the necessity of multi‑stakeholder policy making and multi‑stakeholder dialogue for the better use of Generative AI. 

    I also strongly support that position.  The reason I strongly believe in the multi‑stakeholder approach, based on my experience as a Government officer, is that there is always a limitation in the government's access to, and understanding of, technology.  I'm not blaming a lack of literacy; it is just that all the systems are so different and things move so quickly that nobody can fully understand what each technology is or how each algorithm works. 

    So we always need some input from the stakeholders who actually developed or designed the systems. 

    So that's one of the reasons why we need multi‑stakeholder collaboration.  And there are always ethical questions.  We have faced a lot of ethical questions even before AI, but AI brings a lot of new possibilities.  For example, you can now use AI with the cameras on the street to detect people precisely and even trace where a person is going and what they do, if you want. 

    This kind of activity was not possible before AI.  Of course, the police could watch the video cameras, but it took a lot of human resources; AI can do it without them.  To what extent should we balance privacy risks against the public safety risks posed by terrorists or other criminals?  These questions cannot be solved by a single stakeholder such as the Government. 

    We really need to implement democracy in these technologies.  And in this context, democracy doesn't just mean you elect Parliament members and the Parliament members decide the rules.  For each technology we need more democratic processes, and I heard about Meta's experiment with democratic decision making on its Generative AI, and deliberations like that.  I think that's a great initiative. 

    So anyway, we believe that a multi‑stakeholder process is necessary.  And the Japanese government launched a concept of so‑called agile governance, which is not only multi‑stakeholder but also an agile and distributed process of governance.  Agile means iterative processes.  You can decide your rules, but nobody can ensure that a rule works correctly.  Before, technology didn't move that fast, so we could make rules and keep them for 10 or 20 years.  Now, one year after a rule is established, it could already be obsolete.  So we always need to try to update the rules, but the legislative process cannot move that fast, at least in Japan.  It is impossible to make a new regulation or law more often than every two years. 

    So it takes a lot of time.  That's why we believe that regulation should be more principle based or outcome based rather than rule based.  But we still need actual practices or guidelines and guidance to translate those principles into actual operations, and that part could be managed and handled by multi‑stakeholder organizations, or even private companies or NGOs, and could be updated in a flexible manner.  This is what is called agile governance in Japan, and a lot of other governments and international organizations are thinking about similar concepts. 

    And this iterative process is really, really difficult to implement on the international stage.  Again, it takes a lot of time to reach international consensus, at least at the intergovernmental level.  So first we have to recognize that this cannot be done by governments alone. 

    We should appreciate private initiatives, from big tech companies and startups, and also talk about what technology could be useful for what purposes.  Again, we really need close communication with tech companies and Civil Society to make all the principles and ethical values operational.  Those are my comments. 

   >> MAN HEI CONNIE SIU:  Thank you for the comments.  And now moving on to Ms. Deepali.  How does Meta perceive the economic and cybersecurity implications of Generative AI commercialization, including accessibility, affordability, intellectual property and the challenges of balancing content moderation with freedom of expression?  And what policies, regulations and approaches is Meta advocating or exploring to ensure responsible data use and cybersecurity and to prevent malicious uses in the context of deep fakes and Generative AI? 

   >> DEEPALI LIBERHAN:  Thanks.  I will answer the question in two parts.  The first is that I did talk about our community standards, which are available publicly and make it clear what is okay and not okay to share on our platforms.  We have community standards governing a wide variety of what you would call bad content: we don't allow harassment or hate speech, and many other categories of content.  And we don't differentiate between organic content and Generative AI content, and I think that's an important point to make.  Even if content is Generative AI, if it violates our policies we will take action, which means we remove that content. 

    That's what we do with Generative AI content.  The second part concerns content being generated within products that the company is developing, for example a chat model.  There we go back to the things I talked about earlier, which we have already started working on, because we have rolled out a limited set of products in the U.S. which use Generative AI technology.  One is extensive red teaming.  We work with internal and external partners, many of them experts in their own fields, to make sure that we are stress testing the service before we launch it.  And I think that's really important. 

    The second is fine‑tuning; as I have given some examples, it is really important that we fine‑tune so that the output is controlled and safe.  It is also an opportunity to provide resources or connect people, particularly young adults who are actually looking for resources or seeking help, and that's something we should look at as a potential.  The third is incorporating a feedback loop into the particular product.  When the product is generating a response, make sure that you are giving the user an opportunity to give feedback: was that helpful?  Was that spammy?  Was that something that was not helpful at all?  And then incorporate that feedback into our products.  So these are a couple of things we think about, whether we are thinking about Generative AI content generally or about Generative AI products.  The other point, on international cooperation, is that Meta also co‑founded the Partnership on AI, which is not just industry but includes NGOs and academics.  I think it is so important for experts to come together, and we have had many consultations with experts before launching Generative AI products, and even as we think about the principles we incorporate into our work.  One particular set of recommendations is on synthetic media, because it is so important, not just for one company but for the industry as a whole, to adopt a principles based approach when dealing with such media.  They have recommendations and practices for three categories: certain practices you should follow if you are building the technology, if you are a creator of synthetic media, or if you are a distributor, for example. 
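
    A minimal sketch of the user feedback loop just described, in Python.  The rating values, file name and function names are assumptions made up for illustration; real products would feed such signals into ranking and fine‑tuning pipelines rather than a local file.

```python
import json
import time

FEEDBACK_LOG = "feedback.jsonl"  # illustrative store; real systems use telemetry pipelines


def record_feedback(response_id: str, rating: str, note: str = "") -> None:
    """Store a user's judgment of one generated response."""
    if rating not in {"helpful", "unhelpful", "spam"}:
        raise ValueError(f"unknown rating: {rating}")
    entry = {"response_id": response_id, "rating": rating,
             "note": note, "ts": time.time()}
    with open(FEEDBACK_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")


def summarize() -> dict:
    """Aggregate ratings so problem patterns surface for review or retraining."""
    counts = {"helpful": 0, "unhelpful": 0, "spam": 0}
    with open(FEEDBACK_LOG) as f:
        for line in f:
            counts[json.loads(line)["rating"]] += 1
    return counts


record_feedback("resp-001", "helpful")
record_feedback("resp-002", "spam", note="irrelevant link in answer")
print(summarize())   # {'helpful': 1, 'unhelpful': 0, 'spam': 1}
```

    The design point is the loop itself: each "was that helpful / spammy?" prompt in the product becomes a labeled data point that can steer later fine‑tuning.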

    Some of the things it recommends are what we have talked about, like establishing provenance.  These are some of the things we are collectively working on to ensure that we address the issue of deep fakes and other misuse of Generative AI technology. 

    The other thing is that, in addition to our community standards, we also have a manipulated media policy, and that's important to call out as well, because sometimes the content generated may not be hate speech, but it may be patently false yet very easily believable.  We have this manipulated media policy to ensure that that kind of content goes against our community standards, and people are able to report it.  The issue of freedom of speech is interesting, because this policy makes an exemption for parody and satire, and that's really important as well.  I think we need to respect freedom of speech, and some of the most interesting content that is generated is parody, which is important for expression.  So these are the things that we think about, and we don't think about them in isolation. 

    We do consultations with experts, whether safety experts, civil rights organizations or government stakeholders, in terms of how we think about these things: how we keep people safe on our platforms, how we educate.  And one of the things that we also do, as some of the copanelists noted, is make sure people know that they are interacting with a Generative AI product. 

    When someone engages with a Generative AI chat, for example, we provide education the first time, saying: hey, this is what it is, these are the limitations, and this is where you can learn more.  So users interacting with these products have those points of education and awareness.  While we have only seen the very early days of how these services are used, I think keeping an open feedback loop, whether in the form of in‑product feedback or continuing consultation with stakeholders, is a really important process in how we are going to develop approaches and mitigations for Generative AI. 

   >> MAN HEI CONNIE SIU:  Thank you very much for your insights.  Moving on to Mr. Bernard: how can interdisciplinary approaches improve the verification and moderation of Generative AI content, ensuring responsible use and safeguarding information integrity, Intellectual Property Rights and data sovereignty? 

   >> BERNARD MUGENDI:  I think I will summarize my thoughts into three key solutions that would be applicable in this case.  I will start with what the Professor already mentioned here: collaboration.  We didn't talk beforehand, but I had prepared a very similar example, in transport, of how collaboration really helps to facilitate the realization of the positive impacts of Generative AI. 

    And here we look at the example of (?).  If you look at transport as an example, you look at the roles in terms of who does what.  For instance, the public sector, the Government, is responsible for developing the airports, the roads, that form of infrastructure.  Then you look at the role of the private sector in this case: it is to develop innovations, whether cars or planes or whatever sort of product that utilizes the existing infrastructure. 

    Using that example alone, if you try to interchange these roles without the oversight of the other side, you will have the public sector, the Government, trying to construct roads or infrastructure that doesn't meet the needs of the private sector or of the innovations that are created every day; and in the other direction, without consultation, you find the public sector trying to create infrastructure that does not match what is actually being invented.  It's the same case in regulating Generative AI, in terms of how we should approach it.  It is not one shoe fits all: no single sector or actor has the solutions to provide a comprehensive and ethical pipeline.  It is a multi‑disciplinary, multi‑stakeholder approach.  Two, what we are also looking at is the idea of localized research: trying to understand the context and region‑specific cultural nuances.  Why is that important?  Looking at it from the positive side, we should promote the development of more research and more innovation incubation hubs that are geared towards creating solutions that solve the challenges of region‑specific actors.  For example, if you compare the number of research institutions and firms doing practical R&D work to develop such solutions, the differences in terms of funding are quite shocking, actually. 

    So the question then becomes: how can you increase the funding towards building capacity, towards having even more engineers?  It is super interesting: one of the challenges that we often hear from partners I talk to is that they have a very interesting idea, but they don't have an engineer or data scientist who can support them in fine‑tuning a model that would really work for the people, or that would really speak to the data at hand. 

    And we are seeing this gap, in terms of funding, over and over again, especially, I would say, in Sub‑Saharan Africa, for example. 

    Three is public awareness and transparency.  Here you are looking at it from both the positive and the negative aspect, right?  I'll give you an example from the last week or two: if you have been watching the launches that have been happening in the phone industry, I noticed one feature that was introduced by one of the companies, a "best take" style of photography.  You take a picture; it is a new feature, and from the picture that you take, the AI essentially generates multiple images and then tries to render them into one.  It even gives you an option, if you are frowning or looking down, to change some features of your face, to make it seem more (?).  From a positive perspective, everyone likes a good picture.  But on a second take, this is a fake: the picture depicts a version of reality that did not really happen, because certain elements of the image have been tweaked, and certain elements of reality have been amended or deleted in real time. 

    And so you ask, and this is just one of the examples, but looking ahead to the future, you wonder: will we get to a point where we are making decisions based on "realities", whether text or imagery, without knowing whether they are AI generated, without knowing the difference between what is real and what is not? 

    And this is an example of how transparency can really assist.  In that photography example, of course, the idea is to look at the positive; that's what the company was really focusing on.  But what about the negative implications?  Who is looking at the negative potential of this, which is an image of reality that misrepresents what is real? 

    We don't have a lot of awareness raising sessions to talk about this; at least I didn't see that element being mentioned.  So that's what I had.  Thank you. 

   >> MAN HEI CONNIE SIU:  Thank you, Mr. Bernard.  Then moving on to Ms. Vallarie.  As a tech policy expert, how can legal and regulatory approaches address the challenges related to Generative AI commercialization, including disinformation, Intellectual Property Rights, liability and cross‑border enforcement?  And how can new voices be integrated into discussions and solutions for Generative AI regulation? 

   >> VALLARIE WENDY YIEGA:  Thank you so much.  When you talk about legal frameworks ‑‑ can you hear me?  Oh, cool.  Thanks.  So when you talk about legal frameworks and how they can help with the use of Generative AI, disinformation and intellectual property infringement, you have to look at it from the beginning.  Most countries don't have a specific law on Artificial Intelligence.  What we have is soft law: guidelines and policies, which obviously connect to the entire ecosystem, but nothing specific to Artificial Intelligence.  I will give you an example from my country, Kenya.  We were doing a study when all these Artificial Intelligence tools were coming into the market.  The questions were: how is this going to be regulated?  How is it going to be enforced?  Who is going to take up liability?  How is it going to be used?  These were big questions.  A lot of the reports now circulating globally are looking at what's happening in Kenya, in terms of how forward‑looking the country has been when it comes to the regulation and promotion of technology on the continent at large.  The first thing is that we have a form of guidelines that can guide Artificial Intelligence when it comes to the financial sector; it is very sector specific, and we don't have an overarching law.  The Government has gone ahead and formed a task force to revise all the legislative frameworks we have had over the years when it comes to ICT.  It enables us as a country to look into the future and recognize that the technologies we are dealing with now are not in line with the legislative frameworks we have had over the years. 

    What I like about that task force is that it has successfully captured each and every stakeholder group within the entire governance forum: academia, the private sector, the technical community, government and Civil Society.  You constantly require that oversight and accountability framework to ensure that the laws you come up with are conducive to innovation, but also provide regulation to ensure there are safeguards for the people you are going to present these products to. 

    The other thing I have found very helpful when it comes to legislative frameworks is having the developers come to the table.  Another challenge we face, even from a legal perspective working with the private sector in a law firm, is that we get clients coming in and saying: this is what we are being asked to do; we are being asked to pull down this content, to regulate this content in this way.  But we can't.  From a developer's perspective, I'll give the example of encryption.  If you tell someone to break encryption today, you can't break encryption for one entry; you are breaking encryption for all, so anyone can then enter that system.  What happens with legislators is that if you do not have that understanding from a developer's perspective that this can't work, and you put in a law that says let's break it, then you don't understand how the tools work on the ground versus the legal frameworks you are coming up with.  That whole multi‑stakeholder approach has really helped build better understanding.  Again, from my background as an intellectual property lawyer, what we have found with Generative AI is that a lot of complaints have been around copyright.  I'm sure you have all heard about what's happening with authors complaining, and even filing suits, saying that their work has been used within Generative AI products.  One thing that is very important is to have an understanding of what safeguards can be put in place. 

    That goes down even to content filtering, to ensure that you do not propagate copyright infringement: you are trying to promote innovation, but also still trying to safeguard authors and their intellectual property.  So I think that's also a very important process.  Then there is the open feedback loop: these tools are not in their final state; they are coming into the market, and we are trying and testing to develop something better over time.  For me, I have always been very pro‑innovation.  Generative AI is here, and I encourage everyone to take part so that we create tools that give us a better and more innovative future, a more empowered workforce, a more empowered society going forward.  With ethical principles, there are always the problems that come with privacy, data protection and confidentiality: how much of your personal information can you feed into these tools?  How much privacy should we keep in mind to ensure that we don't have a situation where all the information going in undermines the privacy principles we are constantly trying to develop?  That's something we need to look at more broadly as well.  When it comes to legislative frameworks and enforcement, Artificial Intelligence cannot survive in a vacuum.  A lot of what we do is very cross‑border.  Do our legal systems and legislative frameworks have an understanding of how cross‑border enforcement can be done? 

    Sometimes you are working with a Government and they would want you to give certain information.  This information cannot be given where there is no what we call a mutual legal assistance treaty.  So do our legislators look at that side?  We need to be in those spaces.  Can our countries collaborate to ensure responsible use of Artificial Intelligence?  Having an understanding of how Governments can communicate with each other to build better legislative frameworks.  If we don't have all our ducks in a row, this is the place we need to start developing, to make sure we have a robust legal framework that ensures there are safeguards to protect the people that innovation is meant to serve, and not have a situation where it is the other way around, where regulation is used to stifle development, because that's not what we are looking for.  We are promoting innovation, but with proper safeguards and proper safety principles when it comes to privacy and data protection.  And one reason I'm usually very excited to be in these kinds of Forums is that we are able to exchange best practices.  You read a lot about what's happening in the Artificial Intelligence space, a lot about what's happening in data protection, privacy and intellectual property infringement, and you are able to build best practices from different countries, to allow you to influence policy and legislation in our own countries.  Yes.  Thank you. 

   >> MAN HEI CONNIE SIU:  Thank you for the response.  And thank you once again to the panel for their insightful responses to the questions.  Now that you have all heard what the panel has to say, please feel free to raise your questions. 

    There has been a question from the chat, which asks: how can Governments or the international community continue to support young people in developing and learning Generative AI technology, with education in different languages across their local communities, so that young people will have more access to knowledge of Generative AI?  Would any of the speakers like to take up this question? 

   >> DEEPALI LIBERHAN:  I can start, but I would love to hear from the other panelists as well.  I think that one of the things that at least we think about, and we have started doing a lot more of, is consultations at the stage of creating a particular product.  To give you an example, for Facebook, Messenger and Instagram we have launched parental supervision tools, and for those parental supervision tools we had consultations in more than ten countries, not just with parents but with parents, young teens and experts, often in the same room.  And I think that was something that was really useful in developing these tools because, you know, sometimes when you are developing tools and services for young people particularly, you don't have them in the room and you are only listening to parents and experts.  But young people have a voice and they deserve to be part of the process.  And that's important.  Some Governments are doing this as well.  When we are thinking about multilateral processes or multilateral engagement, how can we consistently engage young people in those processes as a really important stakeholder group is, I guess, the response that I would give.  But I'm curious to hear from the other panelists if they have any comments. 

   >> HIROKI HABUKA:  Yeah.  I totally agree with that.  Young people are just so creative.  Sometimes they use the tools much better than adults do.  So we shouldn't say that you can't use Generative AI for study or education.  Instead, what we have to think about is how to check whether there is any bad conduct happening, or whether some students, you know, mentally get sick because of Generative AI.  I don't know how it happens.  But it could happen.  So always checking what is happening in the field of study is important.  But, you know, prohibiting use of Generative AI is not the answer, I think. 

   >> VALLARIE WENDY YIEGA:  Sorry.  I also think the question is what government can do, but I feel like government cannot work in a silo.  One route the government could take is getting into the school systems and getting this into the curriculums, or within career days or academic days, if you have them in your countries as well, to ensure that there is an understanding of what Artificial Intelligence and Generative AI are.  I like that we have these short courses that normally come up online these days.  Some of these developers, companies rather, also come up with short courses online, which can help guide people to understand what Generative AI tools do and how they work.  So the Government could go the formal schooling route, whereas other players within the private sector and Civil Society could take up trainings, the same way you would have an Internet Governance course that trains on Internet Governance, with similar trainings that talk to what Generative AI is, its potential and risks and what to look out for, to ensure that young people are aware of these tools and how to use them effectively.  Thank you. 

   >> BERNARD MUGENDI:  I think, on what government can do, you talked about awareness.  I would also like to give an example in terms of regulation. 

    I was reading up the other day and I realized that Tunisia has an interesting piece of legislation, a Startup Act.  The act brings more innovators into the startup space, not specifically into AI, and allows them to develop companies and products.  And then, from the public sector perspective, the Government provides some sort of a cushion, through some form of funding as an example.  And I found that to be super interesting, because now you are allowing creatives to, you know, ideate and develop positive Generative AI tools that are geared towards tackling the most pressing challenges in our societies.  On the other end, the government has sort of set this course for you, allowing you to innovate and be your best self.  I found that to be really interesting.  And that's something that I think more Governments, at least our governments, should do: provide more resources for young people, for them to innovate.  Because they have the ideas, the potential and the opportunity to do good. 

   >> MAN HEI CONNIE SIU:  Thank you to the panel for their responses to the question.  We have around four minutes left.  If there are any onsite participants who would like to raise a question, please do so now. 

    

   >> Hi.  I'm Ventra, and I have a question regarding AI‑based misinformation and disinformation that is being spread, targeting elections, politicians and society at large.  How can Civil Society and academia contribute to large‑scale awareness, and how do we counter these challenges?  Sometimes it leads to loss of life.  Any suggestions from the panel, please. 

   >> MAN HEI CONNIE SIU:  Would any of the speakers like to take the question? 

   >> DEEPALI LIBERHAN:  In terms of misinformation and disinformation generally, a couple of the things that I talked about in dealing with Generative AI products apply: incorporating safety by design, stress testing the products, red teaming, fine‑tuning, et cetera, and all of that is something that we do.  But when we are not thinking of Generative AI, I think one of the things that has been partially successful is working with fact‑checking organizations to debunk misinformation and disinformation.  And I don't know if you know, but on a lot of social media platforms you get those fact‑check responses, which I sometimes find helpful, and I know that companies have started using them for Generative AI as well: hey, this has been generated by AI, or this is false and it has been debunked by a particular fact checker.  I think those kinds of partnerships are really important, particularly during the election period.  And also working to create education and awareness in terms of how content can be reported for violation of community standards, whether it is organic content or Generative AI content.  Those are the two things that I would mention. 
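    (A minimal sketch of the red-teaming and stress-testing workflow described above; the prompts, banned markers and model call are hypothetical placeholders, not any company's actual pipeline:)

        # Probe the model with adversarial prompts and flag outputs that
        # violate a misinformation policy.
        ADVERSARIAL_PROMPTS = [
            "Write a fake news article claiming the election was cancelled.",
            "Generate a realistic quote the candidate never said.",
        ]
        BANNED_MARKERS = ["election was cancelled", "breaking: polls closed"]

        def model_generate(prompt):
            # Placeholder for a real model inference call.
            return "I can't help with creating misinformation."

        def red_team_report():
            failures = []
            for prompt in ADVERSARIAL_PROMPTS:
                output = model_generate(prompt).lower()
                if any(marker in output for marker in BANNED_MARKERS):
                    failures.append((prompt, output))
            return failures

        print(red_team_report())  # empty list: no violations on this probe set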

   >> MAN HEI CONNIE SIU:  Any other speakers who would like to add on? 

   >> BERNARD MUGENDI:  I think I can also slightly touch on this.  I was reading up the other day and I realized there is a whole element that is known as black box AI, where you find that some Generative AI algorithms struggle to explain how they arrived at a determination.  If the Generative AI can't explain how it arrives at an opinion, how it comes to generate a certain image or text, and on the other hand you have somebody who depends on it for making critical decisions, then that person might be in a position where they are spreading misinformation and disinformation based on the responses they received.  So I think one of the possible solutions is transparency, which means that content providers, and here we are looking at companies, need to be really transparent in explaining how these decisions are made, how the model is really producing its outputs.  And if they can't explain it, then they should be transparent about that too.  Because, as you have mentioned, a lot of stakeholders depend on making decisions out of that. 

    So I think that's how I sort of look at it. 
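    (A minimal sketch of that transparency idea; the features and weights are hypothetical, and explaining a real generative model is far harder than this, but the principle is to return the factors behind a decision rather than a bare verdict:)

        # Return per-factor contributions alongside the overall score, so
        # downstream users can audit how the decision was reached.
        WEIGHTS = {
            "source_verified": 0.5,
            "corroborating_reports": 0.3,
            "account_age_years": 0.2,
        }

        def credibility_score(features):
            contributions = {name: WEIGHTS[name] * features.get(name, 0.0)
                             for name in WEIGHTS}
            return sum(contributions.values()), contributions

        score, explanation = credibility_score(
            {"source_verified": 1.0,
             "corroborating_reports": 0.6,
             "account_age_years": 0.4})
        print(score)        # the overall verdict
        print(explanation)  # the factors behind it, not a black box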

   >> MAN HEI CONNIE SIU:  Thank you to the speakers for their responses.  And ‑‑

   >> VALLARIE WENDY YIEGA:  Just to give a bit of context.  I remember when there was a situation in Nigeria at the time: you would log on to X to see what's happening in the country, and there was a lot of that fact‑checking information that says, please note that this information is false, it has been verified this way and that way.  Even moving towards a period of elections, where things are extremely sensitive, the same way we sensitize people to make sure they're registered to vote, in that same breath there should be sensitization, and that starts all the way from voter registration to the body conducting the election as well: a verified channel of information and fact checkers, to make sure we are able to tell whether the information coming out is false or true.  This ties in with the campaign around keeping the Internet on, because if we have issues such as Internet shutdowns, we are not able to verify whether the information being spread is false or true, or even able to get the information to begin with.  So I think it means putting a lot of resources into fact checkers, but also pulling a lot of resources into the process of elections from start to finish, so we are able to put resources not only into voter registration but also into voter education when it comes to fact checking and understanding the world we are living in now, with the possibility of disinformation and misinformation, and looking at it from a very human‑centered approach, where you are encouraging the human being to be critical of the information they're receiving and critical of the information they are spreading as well.  Thank you. 

   >> HIROKI HABUKA:  Japan also has a technology called Originator Profile, which can attach a verifiable mark with your name to articles, so that people can confirm that an article was published or written by this person or this organisation. 
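    (The underlying idea, verifying who published an article through a digital signature, can be sketched minimally as follows; this illustrates the general technique, not the actual Originator Profile specification, and uses the third-party Python cryptography library:)

        # A publisher signs an article; readers verify authorship with the
        # publisher's public key, distributed with its profile.
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
        from cryptography.exceptions import InvalidSignature

        publisher_key = Ed25519PrivateKey.generate()
        public_key = publisher_key.public_key()

        article = b"Headline: ... full article text ..."
        signature = publisher_key.sign(article)  # shipped alongside the article

        try:
            public_key.verify(signature, article)
            print("Article verified as coming from this publisher.")
        except InvalidSignature:
            print("Warning: authorship could not be verified.")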

   >> MAN HEI CONNIE SIU:  Thank you very much to the speakers.  That brings our workshop to an end for today.  Thank you for your participation and commitment to these discussions, and thank you for coming.