IGF 2023 – Day 0 – Event #194 Bottom-up AI and the right to be humanly imperfect

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> JOVAN KURBALIJA:  We can start.

     Good morning and welcome to our session.  My name is Jovan Kurbalija.  I'm the Director of DiploFoundation and head of the Geneva Internet Platform.  Together with me is Sorina Teleanu, who is Director of Knowledge at DiploFoundation and a person who is involved extensively in AI development.

     Now, while we were preparing for today's session, we talked about two ways to approach it, and we will be guided by your questions and comments.  We want to develop this session genuinely as a dialogue.  We have a lot to offer in terms of ideas, concepts, and Diplo's overall approach to artificial intelligence, but I'm sure there is a lot of expertise in the room, and this is basically the key.  Therefore, let me suggest a few practicalities.

     We will talk, but whenever you have a question or comment, just raise your hand, and don't feel intimidated.  The only stupid question is the question which is not asked ‑‑ there are a few exceptions to this rule.  I always think, when we gather for a meeting or a course ‑‑ because we teach a lot ‑‑ how can we really maximize this hour?  This is valuable time for all of us; we sometimes underestimate the importance of the moment, the importance of being there.  And I think in Kyoto, with Zen Buddhism and other religious traditions, we can learn more about being there, being in the moment, trying to grasp this unique energy ‑‑ because this is the moment, this very second of our life, our existence, and our interaction.  Let's maximize that.

     Now, Sorina, shall I monopolize the microphone, or ‑‑ you're so gently nice.

     I started with philosophy, and possibly this is one of the entry points.  Because artificial intelligence, for the first time, pushes us to think about the question of why we do things ‑‑ the why of our existence, the question of our dignity, the question of purpose, the question of efficiency ‑‑ many core questions that civilization has to face.  Therefore, if you see our leaflet about the humAInism project, you can see that we approach it through technology, through diplomacy, through governance, and through linguistics and art.  You can take any entry point.  I suggested the philosophy entry point, and you will see why it is important.

     Now, I'm sure you will be using a lot of cameras.  Unfortunately, these days, we don't use this kind.  We just brought my wife's Nikon from Europe, and she asked me: why do we need to carry this heavy Nikon with the lenses ‑‑ zoom out, zoom in ‑‑ when the iPhone camera is basically good and does a lot?  Now, we won't get into that discussion ‑‑ I'm sure the Nikon and Canon tribes are really passionate: no, no, you still do it with a Nikon ‑‑ but the idea is to zoom in and zoom out.  We zoom in on technology and zoom out on philosophy.  We will try to use that optic over the next hour.

     What is unique about Diplo is that, whatever we do in digital governance, since the very beginning of our organization, we have needed to touch technology.  We did TCP/IP programming, we did DNS ‑‑ we did everything in order to know how it functions.  We wanted to see what is under the bonnet.

     One problem ‑‑ and I'm noticing this; I was at the first IGF and at the Working Group on Internet Governance, which is ancient history, a long time ago ‑‑ is that we discuss things without understanding them.  We don't need to be techies, mind you; these issues are sometimes philosophical, but you have to have a basic understanding of what's going on and how it functions.  Again, we need to strike the balance: to understand technology, but not to become techies, because if you are only a techie, you basically won't see the forest for the trees.  Everything will be just neural networks these days ‑‑ or yesterday, crypto or blockchain; the day before, TCP/IP ‑‑ and that's basically a problem.  Therefore, it's a tricky exercise.

     We have all of these entry points, and what I suggest ‑‑ which is also in the title of the session ‑‑ is that there is another aspect we should keep in mind: our walk-the-talk approach works in such a way that the whole IGF will be reported by our hybrid system combining artificial intelligence and human intelligence.

     If you go to IGF 2023 on dig.watch, or download the iPhone or Android app, you will have the reports from the sessions, produced by a mix of artificial intelligence and human intelligence.

     Now, how does it work?  We've been reporting from the IGF for decades, summarizing long sessions ‑‑ basically humanly.  Now we said: let's codify our reports and create an AI system; therefore we can have something which could be called IGF GPT, or IGF AI.  Basically, we trained the AI on our reporting and our sessions.  It is now deployed by our AI team ‑‑ poor guys who have to start early in the morning; they are based in Belgrade ‑‑ doing AI reporting and doing everything automatically, from transcription ‑‑ including special transcription for AI terminology ‑‑ to summarizing, and then making it into the report, which you can visit here for each session.
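     The hybrid reporting workflow just described ‑‑ AI transcription and summarization with human review on top ‑‑ can be sketched roughly as follows.  This is a minimal illustrative sketch: the function names, the placeholder logic, and the staging are assumptions, not Diplo's actual implementation.

```python
def transcribe(audio_segment: str) -> str:
    """AI step: speech-to-text. A real system would call an ASR model
    with a glossary of AI and Internet-governance terminology; here we
    assume the text is already recognized (placeholder)."""
    return audio_segment

def summarize(transcript: str, max_sentences: int = 3) -> str:
    """AI step: condense a long transcript (placeholder: keep the first
    few sentences; a real system would use an abstractive model)."""
    sentences = [s.strip() for s in transcript.split(".") if s.strip()]
    return ". ".join(sentences[:max_sentences]) + "."

def human_review(summary: str, corrections: dict[str, str]) -> str:
    """Human step: experts fix names, terms, and nuances the AI missed."""
    for wrong, right in corrections.items():
        summary = summary.replace(wrong, right)
    return summary

def session_report(session_audio: str, corrections: dict[str, str]) -> str:
    """Full hybrid pipeline: transcribe -> summarize -> human review."""
    return human_review(summarize(transcribe(session_audio)), corrections)
```

     The point of the sketch is the division of labor: the machine does the bulk transcription and condensation, and the human pass at the end carries the editorial responsibility.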

     Now, as you will see from the reports and from our work ‑‑ I think for this session we have just put GMT time, because I was confused this morning; I said, what, 2 o'clock in the morning? ‑‑ you will have, after the session, I don't know, about 20 minutes or half an hour, I don't know the exact timing, a report from this discussion.  Therefore, again, we think we have to walk the talk.  It's not enough to talk about AI, how important it is, how it's changing the world, ethics, whether AI will eat us for breakfast or we may survive ‑‑ that's another discussion, which I'm very critical and skeptical about ‑‑ but let's use AI: only by using it can we see how it works and how dangerous it is.  We are not naive about dangers.  There are risks.  But many risks are here and now.

     If you just push the risks into the future, it can be a bit tricky, because whenever the future was invoked in discussions, it was often around certain ideologies.  The message was: forget today, forget now, we discuss the future, and at last, when we arrive at the bright future, we will be happy.

     What happens in our lives ‑‑ I won't make references to the historical experiences, but it's a very tricky argument, the future.  Therefore, there is something that you can use now.  But let me again zoom out and go to ‑‑ basically, if I manage to close this ‑‑ oh, I managed to find it.  Great.  We call it the winter of excitement: ChatGPT came onto the scene, it can write a master's thesis instead of you, blog posts, you know the whole story ‑‑ December, January, February ‑‑ although AI is much, much older, as all of you know.  Then there is the spring of metaphors.  People suddenly realize: wow, it's coming, let's do something with this.  Either dangerous ones ‑‑ (?), a risk to society ‑‑ or nice ones: it will help us.

     Then you have the summer of reflections.  And we call the current phase the autumn of clarity.  Think about the four seasons ‑‑ not the hotel, but the four seasons of AI: winter of excitement, spring of metaphors, summer of reflections, autumn of clarity.

     Now, during the summer of reflections, what did I do?  I said, okay, let's see what happened.  Two things we did, Sorina and myself ‑‑ she will explain what she did in the course.  We said, okay, let's recycle ideas.  What were the ideas of the ancient Greeks, of the axial age?  What can Socrates teach us about AI and prompting?  What about the journey of zero from Indian civilization via (?) to Leonardo Fibonacci?  What about the ancient Greeks?  What about the great Chinese philosophers and AI?  What would these people tell us about knowledge, about ethics, about individuals and communities?  What about the Enlightenment, with Voltaire and Rousseau, the great thinkers of that period, when you really think about today's era?  And under this there is a text.  You can see five thinkers who lived in Vienna between the two world wars and basically set the stage for AI ‑‑ the Vienna and Geneva thinkers: Hayek; Freud on human psychology; and possibly the person who most inspired thinking about AI, (?), who basically moved probability theory and language to the center of philosophy.

     Then we said, okay, those are the Vienna thinkers; you then have the Ubuntu thinkers in Africa.  Again, another civilization, with thinking not written down in texts but codified in practices.

     And in parallel, during this summer of reflections, Sorina delivered a course with the College of Europe for a group of students from Germany, and she wrote a blog post about it.  Whatever we do, we codify, because we believe in Creative Commons and in enriching the discussion on, in this case, AI.

     Sorina, would you tell us a few words ‑‑ you can scroll ‑‑ what did you do during the course?  What was the purpose of using AI, and how did we use it?

     >> SORINA TELEANU:  Thank you, everyone.  We won't spend too much time talking ourselves, but the whole topic is bottom-up AI, and we hope to hear from you what you understand by that.

     What we did at the summer school is bottom‑up AI.

     Briefly explaining what happened there: we had a group of 25 students, and for about ten days we simulated the Global Digital Compact negotiations.  You're at the IGF; I'm sure you know the discussions around it, so I won't go into that.

     We split the group into a few teams, basically representing some of the biggest countries and groups.  We had China, the U.S., Brazil, and a few others, and also civil society and the technical community.  The task was to prepare and negotiate what they would like a Global Digital Compact to look like.

     To help them, because many of them were newcomers to the whole idea of digital governance, our team in Belgrade, Serbia, prepared this AI advisor.

     How did it work?  We fed it a lot of documents on Internet governance and digital policy, along with the contributions that stakeholders made to the Global Digital Compact process.  Then each of the five teams had their own advisor.  What you see on the screen is the advisor of Brazil.
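     A per-team advisor fed with documents, as described above, can be sketched as a simple retrieval system: each team gets its own document store, and a question is answered from the most relevant passages.  The class, the crude word-overlap scoring, and the sample passages are illustrative assumptions, not the actual system, which would use proper language-model retrieval.

```python
def score(passage: str, question: str) -> int:
    """Crude relevance measure: count shared lowercase words."""
    return len(set(question.lower().split()) & set(passage.lower().split()))

class Advisor:
    """One advisor per negotiating team, holding that team's documents."""

    def __init__(self, team: str, passages: list[str]):
        self.team = team
        self.passages = passages  # e.g. GDC submissions, digital-policy docs

    def advise(self, question: str, top_k: int = 1) -> list[str]:
        """Return the top_k passages most relevant to the question."""
        ranked = sorted(self.passages,
                        key=lambda p: score(p, question), reverse=True)
        return ranked[:top_k]

# Hypothetical content, invented for illustration only:
brazil = Advisor("Brazil", [
    "Brazil supports capacity development in digital governance.",
    "Data flows should respect national regulation.",
])
advice = brazil.advise("What is the position on capacity development")
```

     In use, a student would query their team's advisor while preparing arguments, then critically assess the returned passages rather than copy them.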

     The idea was for students to engage with the AI, to see how it works, to use it in the process of preparing their arguments for the negotiations, but also to discover the downsides of the technology, the challenges.  And that I found the most beautiful part of it all.  At the end we sat down and talked about how they used the advisor and what they found useful and challenging, and the discussion was really good.  They were able to say: okay, we used it to fine-tune our language, to negotiate our position better, to find things we might not know about our own country or our own stakeholder group; but we also understood that we cannot just rely on what the AI is telling us ‑‑ we can critically assess it and actually use our minds.

     Another reason why we did this is that, as you probably know, some of the schools around the world have had a very knee-jerk reaction, saying: okay, we're going to ban the use of AI in schools, which we think is not a good approach to take.  So, the idea of the summer school was to expose students to the use of AI, for them to develop critical thinking about how you can use it, why it is good, and why you shouldn't fully rely on it, because again, it's technology, and sometimes it does hallucinate.

     This was just an example of bottom‑up AI and how we're trying to build this from the bottom.  And I think we can turn to the audience, Jovan, and ask what everybody understands by bottom‑up AI before we go into more of what we're doing.

     So, I'm going to move around ‑‑ do we have a roving mic?  There is a mic there.

     A question to you all in the room, because we promised we're going to have more of a discussion and not the two of us speaking for 90 minutes, which kind of defeats the whole purpose.  What do you understand by bottom‑up AI, or if that doesn't sound like an interesting question, why did you join this session?  What did you expect from it, please? 

     >> JOVAN KURBALIJA:  Is it before or after coffee?

     >> AUDIENCE:  The reason I'm here is to help you think about it.  Hopefully you will wrestle the idea to the ground, and we'll probably help you wrestle it back.

     >> JOVAN KURBALIJA:  AI won't reply in this way.  That was very smart.  Thank you.

     As we move to the next step: Sorina explained the practical use on a critical issue, at a time when universities worldwide are banning the use of AI, of ChatGPT.  They tried anti-plagiarism detection software, which doesn't work ‑‑ OpenAI stopped offering its own AI-detection software ‑‑ so this is not an option.  Therefore our message ‑‑ and it was successfully accepted; there are some anecdotes about how some professors reacted, but we won't mention names ‑‑ when the academic community reacted, no, we are in charge, forget AI, we said: no, AI can be an interlocutor, it can sharpen your thinking.  As Sorina has proven practically, students loved that, because it can sharpen your thinking.

     Then we comment on the questions the AI asked and the answers it provided, and say: this is good, this is stupid, this works well.  That element is critical.

     Now, it is going to change the educational system profoundly.  We are of a similar generation, let's put it this way, depending on our education traditions, but there was a lot of learning by heart, a lot of listening to the (?) professors in my educational process.  And the few professors who basically acted like ChatGPT ‑‑ who took my questions, answered them, and told me when my answers or questions were stupid ‑‑ are still the people whom I remember.  Therefore, AI can help us with that element of conversation.

     And now, our argument: we said, don't kill the messenger, don't put your head in the sand.  Let's see how AI can help us achieve the aims of an educational system ‑‑ improving critical thinking, improving creativity ‑‑ and it can.  Therefore, our argument ‑‑ and whatever I've been mentioning can be substantiated practically ‑‑ is that AI can be a great help for real education.  I'm sorry, not for the Bologna-style education of assignments and numbers of credits ‑‑ that's another story we can discuss ‑‑ but for, I would say, ancient Greek or Roman education: inquiry, creativity, questioning, and considering yourself a dignified thinker who can engage in the thinking process.

     Now, let me ‑‑ this is, for example, about (?) philosophical issues, where you can find really powerful thinking from Africa that can enhance artificial intelligence and that should basically be codified, especially if companies, or hopefully African actors, deploy AI in their context.  And this is the first building block of bottom‑up AI, which was the title of this session.

     We have to codify local traditions, practices, and ideas that deal with the questions of family, of the universal and the individual, connectivity, knowledge, happiness ‑‑ whatever we ask ChatGPT today, or even more advanced systems in the future.  It cannot be designed only from the European philosophical and thinking tradition.  This is the first point on genuine bottom‑up AI.

     The second important aspect, which we've been doing at Diplo ‑‑ and I have so many windows open, I'm sorry.  We argued that there are a few points for the relevance of bottom‑up AI.  First, it is ethically desirable because it lets us preserve our knowledge.  It's not just about data; it's our knowledge.  This is what defines us as humans, as a civilization, a culture, a family.  We're speaking about an ultimately critical discussion for the future of our society, and for each of us individually.

     And what we did, we basically said: okay, what can we do?  First we went for open source.  As you can see, there is a very critical discussion about big systems bringing fear and danger as a risk for society, driven mainly by a few big companies ‑‑ OpenAI, Google, a few companies, you know, the usual: Sam Altman and these people who are touring Congress and places all over the world ‑‑ which is a bit of a paradoxical situation.  They created something and are telling us: hey, guys, it's very dangerous.  I said: okay, but stop investing in it if it is too dangerous.  Of course, there is a competition argument, but there is something strange in these things.

     And most of them are very nervous about open-source AI ‑‑ except, if somebody had told me that he would become one of my heroes, I would have been very surprised: Mark Zuckerberg.  Meta created Llama, for their own reasons ‑‑ competition with Microsoft, Google, and other actors ‑‑ but Llama is doing quite well.  There is Falcon from the United Arab Emirates; there are new, quite large (?).  We can discuss whether there is that much original innovation, but they were introduced.

     Now you basically need a lot of hardware; you need to be friends with NVIDIA, and you need a lot of GPUs for processing.  If you can invest in that, you can train big models.  That's another issue which makes me personally nervous: forget the garage, forget bottom-up in that scenario ‑‑ except that, for the time being, there are pushbacks, and there will be dynamics in this space.

     Therefore, the first element is the open-source approach.  The second is: you need high-quality data.

     And that will be an interesting story, because most of these companies have more or less processed trillions of, I don't know, whatever ‑‑ books.  I got a bit lost when it comes to these numbers ‑‑ billions, trillions of something.

     And now they have come to the point where they cannot get any more high-quality data; therefore, they are turning to so-called annotators, or data labeling.  You know the Kenya case with OpenAI ‑‑ the strike of the people who were working on OpenAI's data.  Basically, they sit next to each other and annotate, saying: this is a bird, this is a cat, this text is useful, this text is bad, and so on.  I will show you how we do our sort of annotation at Diplo.

     But this is, I would say, the key diagram, because quantity of data is limited by definition.  You know, there is this idea of AI creating data itself, but I'm not sure that it will go too far.

     And you have quantity versus quality of data.  Quality of data will be critical.  Even with small data, if you have high quality, you can create AI.  That's basically what is going to happen in the coming years, and this is the reason why companies are very nervous: they are rushing to capture quality data in order to win that future competition.

     Now, what do we do?  You can read the blog post, but here is what we do.  We have a system which basically annotates any text.  When Sorina and I read a text, we annotate it.  Our teaching system is based on annotations; therefore, by teaching and by doing research, we are creating high-quality data.  It's integrated in the work.  And I will show you practically how it works.
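     The annotation workflow described here ‑‑ a highlighted passage plus a comment, with replies layered on top, later usable as high-quality training data ‑‑ could be represented with records roughly like these.  The field names and structure are illustrative assumptions, not the actual system.

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """One highlight-plus-comment on a text."""
    author: str
    span: tuple[int, int]       # character offsets of the highlighted passage
    comment: str                # e.g. "Sorina, what do you think about this?"
    replies: list[str] = field(default_factory=list)

@dataclass
class AnnotatedText:
    """A source text together with the layer of annotations built on it."""
    source: str
    annotations: list[Annotation] = field(default_factory=list)

    def highlight(self, author: str, start: int, end: int,
                  comment: str) -> Annotation:
        a = Annotation(author, (start, end), comment)
        self.annotations.append(a)
        return a

doc = AnnotatedText("Quality of data will be critical for future AI systems.")
note = doc.highlight("Jovan", 0, 15, "Sorina, do you agree?")
note.replies.append("Sorina: yes, small high-quality data can train well.")
```

     Each record ties a human judgment to an exact passage, which is what makes the resulting data higher quality than raw text.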

     Sorina, if you don't mind.

     Okay.  For example ‑‑ for example, you're obviously following developments in the Middle East.  You are (?), and you are reading the text.  And you will say ‑‑ I'm just inventing the argument now ‑‑ you will basically use the highlighter.  Let me see if I'm in Google ‑‑ you use highlight.  You know how it works; it often does not work exactly when it's needed, when you try to show it.  Okay.  You annotate, and I write in the annotation: Sorina, what do you think about this argument?

     Sorina will answer this ‑‑ in this case it's public.  She will receive the annotation, and the two of us are adding a new layer of thinking on the text from Al Jazeera.

     We have been using this as a teaching method for the last 20 years ‑‑ those of you who are from Diplo (?) know it.  I designed this method based on my habit of highlighting a text and writing something in annotations on the side.

     I have a sticker.  We developed this system 20 years ago.  This is now the critical system of adding layers of quality to a text.  Now, when AI comes and sees this text, ChatGPT will just process it; but in our case, if there is a discussion, we say: aha, this paragraph is important.  Jovan asks Sorina, Sorina answers, and we are developing basically our very local, bilateral AI built around knowledge graphs.  Therefore, we can then share it with the rest of humanity, or keep it for ourselves, or share it with Diplo, with you, with others.  Our idea is that we can bring AI back to individuals, and then ultimately develop big systems.  Why should I send it to the big system when we can keep it for ourselves?  And then it is our human right, our right as citizens and civil society, to decide what to share with the rest of society.

     This is the key concept behind this AI.  Now, has it triggered some ideas, questions, or comments ‑‑ how it works, practicalities, anything else?  It's a big ask to have you stand and walk to the mic, but you can also shout.  I am fine with any question or comment so far.  So far, no?

     Therefore, this is the basic idea: let us preserve our knowledge.  Take the knowledge that Sorina and I create around a discussion ‑‑ she can comment on what's going on today in Israel and Palestine.  Why should we share it with somebody else?  Why not preserve it, and then decide what to share?  It is our knowledge.  It can become much more complex when you annotate complex texts, philosophical books, other texts.  This belongs to us.  Then we, at Diplo, share it ‑‑ you see, it's public; we share it because we think everything should be Creative Commons ‑‑ but we are very nervous about being pushed by technical facilities to contribute it to OpenAI or to Google or to whoever is providing the system.

     Therefore, what happened with Google ten years ago, or Facebook and others, when they basically commodified our data and our use of the Internet, is now starting at a much higher level ‑‑ with knowledge.  And that's basically the idea of bottom‑up.

     One thing is that we talk, and I explain, and maybe some people get interested in this.  The other question is whether you can prove it in practice, and that is different ‑‑ whether you have a system that can prove in practice that it works.  And that's basically what we have been doing with bottom‑up AI: returning AI back to people, with all their strengths and weaknesses.

     Sorina.

     >> SORINA TELEANU:  Maybe we give one more example.  We are having these discussions in Geneva.  A part of our work is to support small and developing countries in digital diplomacy in Geneva and beyond.  We hear a lot, especially from the smaller ones, about how they cannot follow everything ‑‑ well, there's a lot ‑‑ and also how sometimes they don't have enough time to research what they have done before, to actually come up with a position to present at some organization or negotiation.

     In discussions around this whole idea of bottom‑up AI and how we can use the technology, this question also came up: can a ministry of foreign affairs develop its own AI system, to use for its own purposes, instead of putting data into ChatGPT or whatever else, and actually rely on the wealth of knowledge it has developed over the years?  The simple answer is yes.  And should they do it?  Again, the simple answer would be yes, because you don't give your data to a bigger system out there, and you don't rely on all the other information that may be coming from different sources; you rely on what your ministry of foreign affairs has developed over the years ‑‑ policy papers, documents, and whatever else.  And the obvious question comes here as well: can you rely completely on AI to come up with the position that your diplomat will negotiate in an intergovernmental process?  No.  But you can use it as a starting point to save time, because you don't have that much time to actually come up with something.  If you have a starting point and bring your own expertise and abilities, that helps.  This would be one example of how we see bottom‑up AI happening and helping smaller countries.

     >> JOVAN KURBALIJA:  Here is the conclusion from last week’s discussion at the UN General Assembly.  We processed all the statements delivered ‑‑ you know, President Biden, heads of state ‑‑ basically asking: what do they want to do?  What are their views on different issues, from climate change to the Ukraine war to digital?

     We asked the question: what did they say about digital?  We processed that and got a report ‑‑ a very interesting report.  You see, on artificial intelligence, line by line, by relevance, what Barbados said, Ethiopia, Somalia, in bullet points.  And then you also have an in-depth report with the statements: what each country said, the transcript of the session, and the summary.  Let's see ‑‑ on Albania, you can see how many words, the speech length, the knowledge graph.

     I mentioned that knowledge graphs are critical.  You can do a knowledge graph on anything; we will be having knowledge graphs for all sessions at the IGF.  This is proximity of thinking.  We could have a knowledge graph of Sorina and myself for today's session.  What is Albania arguing?  What are the arguments?  What is the speech itself?  What is the summary of the session hosted by Albania?
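     The "proximity of thinking" idea ‑‑ linking speakers who address the same topics ‑‑ can be sketched as a tiny graph-building step.  The statements and the naive keyword matching below are purely illustrative assumptions; a real system would use proper topic extraction.

```python
from itertools import combinations

# Invented one-line stand-ins for each delegation's statement:
statements = {
    "Albania": "climate change and digital transformation",
    "Bangladesh": "climate change and digital commerce",
    "Slovakia": "digital commerce and climate change",
}

# Node -> set of topic words (naive: split on whitespace, drop "and"):
topics = {who: set(text.split()) - {"and"} for who, text in statements.items()}

# Edge between two speakers whenever their topic sets overlap:
edges = {}
for a, b in combinations(topics, 2):
    shared = topics[a] & topics[b]
    if shared:
        edges[(a, b)] = shared
```

     Walking the resulting edges is how one would "discover" that, say, Bangladesh and Slovakia are close on climate change and digital commerce, as mentioned below.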

     And then, what was interesting: we also asked the AI ‑‑ based on all the statements, if you put in all the knowledge from the General Assembly, or from the IGF, where we will do similar things ‑‑ the question: what should we do to combine action on climate change and gender?  I hope they're not testing the system, because now it's shifting to ‑‑ let's see.  I hope it will work.

     The system answers the question based on all the speeches delivered.  Now, we won't read it, but that's basically what it delivers.

     When there was a session at the Security Council, we did the same thing.  For each session ‑‑ you know how it is with the Multistakeholder Advisory Group ‑‑ you have at the beginning the key question, and then the answers, but also the parts of the speeches from which the AI generated the text.  Unlike ChatGPT, which will just give you the answer, we said: no, we want to ask the AI to tell us which parts.  For example, this answer was generated from parts of the speech of the professor from King’s College ‑‑ mainly his speech ‑‑ while some other answer was generated from Malta's speech.  You can go through the 360 questions from the transcript, built around the idea: what is, let's say, Slovakia's answer on climate change?
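     Answer generation with source attribution, as described ‑‑ tracing each answer back to the speeches it was built from ‑‑ might look schematically like this.  The scoring rule and the speech snippets are invented for illustration; a real system would attribute at the level of retrieved passages.

```python
def answer_with_sources(question: str, speeches: dict[str, str]) -> dict:
    """Rank speeches by word overlap with the question and keep the
    provenance, so the output can say 'mainly from speaker X'."""
    q_words = set(question.lower().split())
    scored = {
        speaker: len(q_words & set(text.lower().split()))
        for speaker, text in speeches.items()
    }
    best = max(scored, key=scored.get)
    return {
        "answer_basis": speeches[best],  # material the answer draws on
        "mainly_from": best,             # provenance, unlike a bare answer
        "scores": scored,
    }

# Hypothetical snippets, invented for illustration only:
speeches = {
    "King's College professor": "education and climate change need joint policy action",
    "Malta": "maritime security and digital commerce",
}
result = answer_with_sources("what about climate change policy", speeches)
```

     The design point is that provenance travels with the answer, instead of being discarded the way a plain chatbot reply discards it.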

     You suddenly realize that Bangladesh and Slovakia have something in common when it comes to the discussion about climate change and digital commerce.

     Therefore, you basically discover a completely different event, and this will happen with the IGF.  Maybe we'll find: oh, at that session on AI, there was somebody else discussing bottom‑up AI whom I'm not aware of ‑‑ maybe not calling it bottom‑up AI, maybe calling it organic AI or something like this.  And you suddenly say: aha, here is a knowledge graph connecting Jovan and Sorina with John and Petro and Mohammed in other sessions.  Okay, I didn't know that we were doing the same thing.  I'm just giving you very concrete examples.

     As Sorina said, small states got really excited about it.  Take Djibouti, with three diplomats in Geneva.  They don't have a chance to follow all the sessions in Geneva on health, on migration, on human rights; but if they have this system, they will receive an alert: hey, by the way, at the working group at the ITU or the WHO there was a discussion of relevance for your maritime security ‑‑ you may want to follow that discussion.
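     The alerting idea for small missions ‑‑ notify a state whenever a session summary touches its registered interests ‑‑ can be sketched as follows.  The state, the interest topics, the session titles, and the keyword matching rule are all invented for illustration.

```python
# Each state registers the topics its small mission cares about:
interests = {
    "Djibouti": {"maritime", "security", "health"},
}

def alerts_for(state: str, sessions: list[tuple[str, str]]) -> list[str]:
    """Return titles of sessions whose summaries touch the state's interests."""
    wanted = interests[state]
    return [
        title for title, summary in sessions
        if wanted & set(summary.lower().split())
    ]

# Hypothetical feed of session summaries across Geneva organizations:
sessions = [
    ("ITU working group", "discussion on maritime security standards"),
    ("WHO briefing", "update on health regulations"),
    ("WTO panel", "tariffs on agricultural goods"),
]
hits = alerts_for("Djibouti", sessions)  # ITU and WHO sessions match
```

     This is the equalizing mechanism in miniature: a three-person mission filters the whole Geneva agenda down to what matters to it.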

     Therefore, you suddenly have the equalizing aspect of AI: it enables small states to take care of their specific interests.

     Therefore, we just highlighted a few options, and probably we'll close with this.  We started with philosophy ‑‑ this is ultimately a philosophical issue ‑‑ but we gave you a few concrete applications: in education, in diplomacy, in the IGF itself.  You can follow the IGF itself, and it will be interesting to hear your reflections on the quality of the reports and the ideas around them, and then on the practicalities: how it can improve, let's say, inclusion in global governance ‑‑ for small countries and small organizations to follow what's going on in their areas of interest.

     The ultimate message is: let's return AI to citizens.  Let's make it bottom‑up.  Let's build around it, and let's find practical uses.  It's enough of the big talk about ethics and AI.  Here are practical uses.

     And the last point, which is important ‑‑ it was part of the title of the session.  Let's preserve human imperfection, because we cannot compete with the machine.  Sometimes people are critical about the title of my session ‑‑ that we should be allowed to hallucinate, as we sometimes do.  And if you think through the breakthroughs in human history, they sometimes came when people could afford to be lazy: in the time of the British Empire, all sports were invented, from soccer to tennis to all the major sports, because these people had lots of time.

     Others were working for them ‑‑ I won't go into that ‑‑ but, if we can, let's leave a bit of space for imperfection.  There is one blog post, which I cannot find, about the need for human imperfection.  We should facilitate that.  We won't win the battle of machine optimization ‑‑ that is not possible ‑‑ but we should preserve spaces for imperfection: for being lazy, for having time to reflect, for developing arts, for making mistakes.  This is the reason why I went to the flea market in Belgrade, to search for the new Turing test.  Flea market traders are basically masters of human psychology.  I said, they're completely imperfect, always on the edge of the criminal milieu and other things.  I was going through the market and asked one of the traders ‑‑ a legitimate one ‑‑ okay, tell me, what do you think?

     [captioned video]

     >> JOVAN KURBALIJA:  In my search for human imperfections, I go to the flea markets and see what is going to be our niche, because we cannot compete with machines.  They will always be more optimized than us, but we have a right, and, I would say, a duty to preserve the core humanity which has been passed to us from previous generations in all cultures ‑‑ from Ubuntu to Zen, to Shintoism, to Christianity, to ancient Greece ‑‑ and the underlying element is that humans are in charge.  And that is one thought that I would like to leave you with: in this battle, we will have a tough time, but we can do it, and we showed practically how it can be done with bottom-up AI.

     I'm ‑‑ getting some sign, but my human imperfection is ‑‑

     >> SORINA TELEANU:  I'm looking at the room.  I'm hoping we can have comments.  Thoughts, comments.  Your thoughts on how we implement AI, how we build bottom‑up AI, and how we rely on it in whatever your work is.

     Yes, please. 

     >> AUDIENCE:  Hello, everyone.  My name is Manuela.  I'm from Brazil and I represent (?) which is an organization that is focused on defending children's rights on the Internet, on the environment, and focused on social justice, as well.

     I have a few questions for you.  One thing that I thought was really interesting about the diplomatic view and the advocacy view you presented is that this approach could be really good for advocacy organizations, because you have a knowledge management system approach that I think could be very helpful and contextual.  But my question is very practical.  Like, how do we incorporate this, considering that, especially in Brazil, I see a lot of NGOs and organizations that are not very tech‑savvy?  I want to know the practical side: how can we benefit from this technology?

     I have a second question.  Another issue that we face is how do we increase voices and participation ‑‑ such a big word ‑‑ but how can we increase participation in these matters about tech?  And do you think this bottom‑up approach could be used to organize different participation approaches from different places and categorize knowledge in a way that is sensitive to local perspectives, but with more, you know, data analysis?  So, this is my second question.

     The last question, sorry, that's just to fill up the divot. 

     >> JOVAN KURBALIJA:  (?)

     >> AUDIENCE:  One thing that worries me a lot is a sector that employs a lot of people in Brazil, where we see the increasing usage of chatbots and automation.  I was wondering, with the bottom‑up AI that you're presenting, how can we preserve economic opportunities for people that are rewarding, that, you know, signify dignity?  Because we see a lot of unemployment, and we don't see a lot of ‑‑ anyway.  I think you guys understood, like, the basic approach.

     Thank you a lot.

     >> JOVAN KURBALIJA:  Thank you for excellent questions.  Inspiring, let's probably start with the third one.

     This is exactly what I mentioned when I said that instead of discussing what may happen with AI, artificial intelligence, basically killing us, which you can hear from some (?) and his gurus, there are things that are happening now.  People are losing jobs.  And there is a risk that a whole generation, if I can use the slang, could basically be thrown under the bus.

     It is no longer only blue‑collar jobs, but white‑collar jobs: lawyers, accountants, I would say many of us in this room.  That's a big, big problem.  And how to deal with it, now and here.  I hope that at the IGF we can report, with AI, basically what the IGF said about that, but it's a huge problem.

     Our argument, and a strong argument, is that a job is not only about universal income.  It's a question of dignity.  It's a question of realizing your potential.  It cannot be reduced to, oh, you will get the money at the end of the day and go fishing, or do whatever you want to do, whatever makes you happy.  No.  Throughout civilization, a job has been the way of realizing our potential and appreciating our core human dignity.

     Now, it's a big issue.  This is why this social contract discussion is of utmost relevance.

     For example, (?) civilization, African traditions are interesting.  You are, because I am.  And there are different ways of saying it, not just optimization, optimization, optimization.  I don't have answers, but I would say that should be at the top of the agenda for whoever discusses policy and other issues.  Do we need to always optimize?  In some cases, we may step back.  It will be counterintuitive.  It will be difficult to promote.  But we should introduce this right, the human right to be imperfect.  We have that right, because it defines us as humans.  Therefore, that's the ‑‑ Sorina, if you want to add anything.

     >> SORINA TELEANU:  No.  No.  No.  Should we take the other two questions?  We have the one on how bottom‑up AI might help better representation of underserved communities, I guess.

     I guess there are multiple ways, as Jovan was saying earlier.  Making sure we do use knowledge from these communities when developing these AI systems, and, given the examples of small missions or smaller entities, that would be a way to help them be better represented in the discussion.

     But what I didn't understand from your question was whether you're talking about representation in governance discussions or representation in the development of AI.

     Then there is the example we were giving of following the reporting, for instance from the UNGA, which would then be able to alert the smaller countries: okay, this is something that might be of interest for you; this is a country you might want to build your alliance with.  So, in this way it can help foster more meaningful engagement where these countries cannot otherwise follow everything, and it is an example of how it can help build the positions needed to get to that meaningful engagement.

     And then, as we usually say, if you're not at the table, you're on the menu.  AI, in this example, can help avoid that very unpleasant situation, especially for the smaller countries that cannot afford to follow everything because of limited resources.

     So, we do see these issues, and again it's not only us; it's countries saying it themselves.  We had quite a few discussions in Geneva with small organizations.

     >> JOVAN KURBALIJA:  You don't have the human resources.  Diplo's delegation in this place is the three of us in the room, and Anastasia will come; compared to other delegations, it's basically a statistical error.  But we will contribute to the public good through this reporting.  And now, practically, what can be done, and this is most important: we are starting a project, supported by the European Union, where we'll try to push some agencies and Civil Society on engagement and inclusion of Civil Society.  Basically, how would it work?  Your organization deals with jobs or child rights.  All right.  You will make your map, let's say a knowledge graph, based on your documents, Zoom meetings, whatever you want to put in.  It will be your knowledge.  A knowledge graph.

     You will just apply it to the whole analysis of the IGF.  And you will say, aha, here is a similar problem that people face in Uganda or in Romania or in whatever place.  Therefore, suddenly, out of the transcript you will get hints on how to do it, or how to frame the discussion next time, for the next IGF, to be more persuasive.  Because you realize that this argument on child protection didn't fly at this IGF; people just brushed it aside and said, that's not ‑‑ next question.  You know how it works.  But somebody else's rhetorical approach may work wonders.  You get really deep insights into this and, what is beautiful, through the process you develop AI, because by commenting on what worked or didn't work, you have reinforcement learning, and your system is stronger and stronger at every stage.  Therefore, in two or three IGFs, even with a delegation of two people, you can sometimes have the impact of an organization of 200.  Because you know what your focus is, you know what your strengths are, what sessions you will follow, and what you will do practically.

     That's powerful.  Now, how to do it.  The best way is for my colleague to brief you later on, or you can exchange details about this project that is starting in January, which will have as one of its elements how to use AI to enhance, basically, the participation of local communities and other actors.  And, as Sorina said, by developing your knowledge graph, you will capture the specificities of Brazil, and it will be an element which won't be generic child safety or child rights developed by a big system.  No, it will be specific to Brazil or even to local communities.  I don't know Brazil very well, but specific problems of existing communities.  Therefore, from the problem of the future of work and jobs, which is a big issue, to what Sorina explained about developing the system: contact Paulina and you can join some activities.  We have a partner from Brazil as well, and that can be joined practically.

     And it's very important that we are practical on AI; otherwise the discussion will be total (?).

     Let's see if we inspired some questions or comments.  Critical ones or challenges.  We need to ‑‑ or are you just playing with your hair?

     >> AUDIENCE:  Sorry.

     >> JOVAN KURBALIJA:  No questions?

     >> AUDIENCE:  I'm wondering what you're learning about the bigger systems.  Are there ways in which you are giving them feedback, or ways in which you are noticing sort of systemic problems that really ought to be addressed in the models themselves?

     >> JOVAN KURBALIJA:  Bigger systems are big, and they're big not only in the amount of data they process and the money they attract, but also, they basically don't listen to small guys like us.  They have important things to finish, to go to the U.S. Congress or a new Parliament or Chinese, whatever place they discuss this issue, and therefore there is a bit of arrogance and an element of (?), I would say, which could be dangerous, because it's not only their business; it's also our business, about the future of knowledge.

     We found it a bit ‑‑ you know, in any technology you have magic.  I still remember when I first used a mobile phone; it was magic.  Technology is a bit magical, the Internet and other things.  For us it is now routine, but when you think about it, there is an element of magic.

     Now, AI brings magic on steroids, and some (?) can go ‑‑ I mention him very often, because I'm very critical about this use ‑‑ and say, oh, guys, AI will eat us for breakfast.  I'm quoting this.

     I say, okay, but why, how, when?  Give us something; we cannot trust you just on these words.  I mean, you have to ‑‑ and first, as discussed, jobs today.  Let's discuss disinformation; let's discuss the destruction of public spaces, online spaces, with AI (?), not only AI.

     We found that a problematic discussion, and especially the non‑explainability or partial explainability of neural networks adds to the magic.  We put something in, AI does something, and you get something out.  This is why we always insist on having the source of the answer to the question.  Yes, here is the source.  And this is the first step.  We don't know how the AI got this answer, but we know, and GPT can know that, and (?) and the others, they can know, what the sources for that answer were.  This is already a first step.

     Therefore, we see a lot of lack of transparency and confusion, and I'm afraid to say it will be fertile ground for conspiracy theories.  Because when you are just saying, well, trust us, you want to regulate us, but don't ask questions, just trust what we are telling you ‑‑ then, for me personally, I have a problem with that.  I don't think that things cannot be explained, at least the source of your conclusion.  I know a neural network is not easy to explain technically.  I have a colleague who is into AI and he said, listen, be careful when you go to this IGF of the UN; if you introduce explainability of neural networks, half of us will be in jail.  And I said, okay, that is a realistic concern, but there are things that can be done.

     That is my sort of criticism of big systems.

     >> SORINA TELEANU:  To add on that, to reinforce one of your points.  In all these discussions about AI governance, you have probably followed Sam Altman and a few of the other guys.  They say, yes, it's a huge mess; AI is coming with all these challenges and is going to break the world and destroy us, and we need to regulate.

     What they're saying is we need to regulate future AI.  Not the AI that we as companies have developed, but our future AI.  Let us do our thing, we'll continue doing our best, and you should worry about the future.  I think this is problematic, and I think we should hold them accountable right now.  We have problems with AI right now that we should be solving before looking at the future.  I'm not saying we shouldn't worry about the future and what might happen, but maybe put more resources into what is happening right now and how we address today's challenges.  And that would be it.

     >> JOVAN KURBALIJA:  I'm looking for one presentation which we may share later on.  Basically, here it is.  I was recently in Brussels; obviously they're preparing the new regulation, and we said, okay, let's see, what does it mean to regulate AI?  You regulate hardware, you regulate data, you regulate algorithms ‑‑ and you are the first to see it publicly; we didn't show it, because there was some problem with the PowerPoint during that session ‑‑ and you regulate apps.

     What does it mean practically?  What do you regulate?  For example, as Sorina said, you won't hear Sam Altman say regulate apps, or even data.  Why are they not showing sources?  Obviously, if you find a copyrighted book as a source, there will be a problem.  As you know, there are all the court cases in the United States against OpenAI.  Or hardware, computing power, where things are happening with Nvidia and the GPUs.  What do you regulate?  Listen carefully next time when you hear Sam Altman: oh, regulate AI capabilities?  What does that mean, basically?  We created these capabilities; let us stop the other developments and, I'm now being a bit cynical, let's have a monopoly on this.  I say no, that's against the competitive market, against creativity.  But there are problems that we have to deal with: how apps can be misused, how people can be thrown out of jobs, how disinformation can be generated.  You know the whole story; it's part of the public discussion.  But what do we regulate?  You won't hear companies talking about data; that's nonexistent.  They are concentrated on this blue one, algorithms, which is basically vague.  They avoid apps, the red one, because this is very concrete.  And hardware is more of a geopolitical discussion these days between the US, China, and the big players: who is going to have the hardware capability to process data.

We will soon be publishing an article on this to bring clarity.

     As I said when I started: a winter of excitement, a spring of metaphors, a summer of reflections, an autumn of clarity.  There could be disagreements, but let us not misuse the magic of technology, of AI.  Magic is important.  It can inspire.  But let's not misuse it.  Let's basically keep the magic of technology while discussing governance issues where they are.

     That's it.

     >> SORINA TELEANU:  Looking again at the room.

     >> JOVAN KURBALIJA:  Looking at the room.  We have questions.  We have ten minutes more.

     >> SORINA TELEANU:  We have 25 minutes.

     >> JOVAN KURBALIJA:  Listen, let's chat in the corridor if there are no other questions or comments.  We have two questions on this side.  I hope it is not a forced question because we are asking for questions.  No, no, go ahead.

     >> AUDIENCE:  I was thinking ‑‑ sorry, I'll introduce myself.  I'm Julia.  I'm a youth delegate from Brazil, here with my delegation.  I was thinking, when you were talking about (?) and other societal aspects of the philosophy behind AI, or what could be the philosophy behind AI, it got me wondering if there is any initiative to use AI as a means to preserve and develop small communities' history and culture, and have them not be lost in the transition that we are experiencing, of losing practical and physical ways of sharing knowledge.  Like families being estranged by the recent modern changes: they are moving too much; they are being displaced by technology and opportunities, job opportunities and so on.

     Is there an initiative or a group or an entity working towards preserving small cultures, or at least striving to bring access to small cities or small communities, to try to update and upload their knowledge?  I will stop there on knowledge, but we can also imagine that the knowledge of villages and small cities can comprise physical practices, agricultural practices, stories and mythology and so on.  That's also a personal question for me, because I think about how much we are losing.  I'm from Brazil.  How much are we losing from being away from the countryside, with the cities expanding and the countryside shrinking, although the countryside is the majority of our land mass?

     >> JOVAN KURBALIJA:  Great.  I think it's an excellent question.  The short answer is yes.  And ‑‑

     >> AUDIENCE:  My example ‑‑

     >> JOVAN KURBALIJA:  I always try to start with a concrete example.  Imagine a small community somewhere in Amazonia, where people basically live on the river, have their culture, and have to deal with the questions that every human has to deal with: questions of family, love, the purpose of life, what happens after you die, what you do with your kids, these things.  This is knowledge.  This is very valuable knowledge.  Maybe not codified in the books of big philosophers, but this is still knowledge.

     Can it be saved?  Yes.  Should it be saved?  Yes.  Are there initiatives to save it?  No.  Why is that the case?  I can't tell you, but it's very sad, because we are losing this diversity of humanity, and I don't think there is a hierarchy of knowledge and experience.  Maybe money and power are not equally distributed, but the human capability to innovate is distributed, and that's basically how it can be done.

     Now, is there an initiative?  No.  Can it be done with open‑source tools?  Yes.  Is it easy to do technically?  Yes.  Organizationally?  No.  Because you have to change habits and you have to change quite a few things, but it is not undoable.

     Is there interest to support it?  No.  Well, you will hear about inclusion and cultural diversity, but when it comes to concrete things, there is no action.  And I think countries like Brazil should push, especially the new Government, which I think is keen on diversity; they should push organizations like UNESCO to do something.  To preserve this knowledge by using AI.  And that ‑‑ what is your name?

     >> AUDIENCE:  Julia. 

     >> JOVAN KURBALIJA:  It could be Julia's initiative.

     We have a question from the colleague here.  The procedure is that you have to stand next to the mic, please.

     >> AUDIENCE:  Yes.  Thank you.  My name is Nicolamis, I'm from Kenya.  I've come under the Dynamic Coalition on Accessibility and Disability.

     I work in Internet, and digital accessibility.  More specifically for persons with disabilities.

     So, there is something that has been disturbing my mind, and I really need to understand it when it comes to AI.  AI has not deviated so much from the normal approach to machines and computers; it is based on input and output models.  So, we (?) that is mostly trained on perfect data.  I call it perfect because it is predetermined; it is data that is considered to be normal.  But we want that AI to work with the imperfect human, a human who makes errors.

     So, also, like our Governments, the good thing is we make mistakes, but as humans we also go back and correct the mistakes.

     So, my question is: what approach should we take so that AI is as human as us, so that it can work with persons with disabilities and ensure that they also contribute, and that it provides for the basic life needs of persons with disabilities, so that it does not create more marginalization?  Because it will come with an interface that, say, defines another form of perfect, which not all of us are.

     Thank you.

     >> JOVAN KURBALIJA:  Sorina?

     Let me unpack the few issues.  One is about people with disabilities.  AI offers possibilities, serious possibilities.  We are seeing it; we are doing transcription, among other things, for people with disabilities.

     Again, people with disabilities are not prominent yet in the AI debates.  And here again, small communities could ask actors like the UN to check their accessibility, how people with disabilities can access their services.

     We recently did a study, and we are going to check diplomatic websites for how disability‑friendly they are.

     And that push, I would say, has to be strong, from bottom‑up communities and other actors.

     That was the first question.  The second: should we make AI look like us?  That's a philosophical issue, and I'm not sure.  I would preserve AI as a tool in our mindset.  It is a really powerful tool, but always a tool, which Sorina used during the course this summer to enhance learning.

     To have it always as a good tool.  Not to have it as a master, but to have it as our servant.  That's very important mentally.  It will be a powerful servant, which may revolt and say, okay, I want to have some power.  But that is basically how I would keep it.  Obviously, we will try to mimic ourselves.  It excites us.

     If you read Frankenstein, this is the best example.  Basically, Dr. Frankenstein wanted to create the perfect creature, and as I recall, in the book the creature was created to be good.  Then it went out of the lab and people were afraid.  Then people became aggressive.  And then the creature reacted and started getting nasty, which is basically how we now perceive Frankenstein's creature.

     This is why I'm very uneasy with anthropomorphising AI, presenting it as human.

     Because it is exciting.  You can have a nice event; people are excited: oh, Sophia, or whatever the names of all these robots are.  Fortunately, I don't see any Sophia at the IGF.  Sophia can answer your question.

     I said no.  What do we do?  We have a coffee machine as AI.  For those of you who were at IGF Berlin 2019, it was a participant in one session.  You can search for the IGF coffee machine.  That's the element where we have to be very careful; otherwise we will finish with a creature like Dr. Frankenstein's, because we will think the creature is creating problems.

     I have maybe one suggestion.  If the IGF gives you a chance to be (?) perfect, and I share it here on the screen, you can go to the Philosopher's Path here in Kyoto.  I hear it is a nice walk.  Don't be at all the sessions, though thank you for coming to our session.  Here is the leading Japanese philosopher who used to take a walk along this Philosopher's Path, and you can see that he was reflecting on society, on purpose, on happiness.

     I don't know if you are going to have somebody from the philosophy department of Kyoto University, which was one of the best in Japan, but that would be an interesting discussion, back to tradition.  Coming from Kenya, okay, it's more towards the South, but there is all that tradition of us belonging to a collectivity and being empowered by family, by our surroundings.

     That's a practical answer, again, if the weather is nice.  We don't have the cherry blossoms.  I will criticize the IGF organizers for not holding it in April, but we can come back to Kyoto for this.

     But the Philosopher's Path is an interesting place where these thinkers walked, like Kant used to walk in Königsberg, now Kaliningrad; it was at that time a Prussian city.  The famous Immanuel Kant walked the same route every day.  He was late only one day, and why he was late is a mystery, a pet subject of philosophical discussion.

     But I forgot the name of this Japanese philosopher.  Oh, Nishida Kitaro, basically the best‑known Japanese philosopher.  I plan to read more and see what I can learn about AI, and basically develop this discussion further.

     And my call for imperfection: try to discover this lovely city.  You will in any case have Diplo's reporting for you; you can read what was happening.  But ‑‑ should I be official on this point?  No.  I will get in trouble with the Secretariat.  Thank you for coming.

     Let's walk the talk and enjoy the corridors and chat and basically continue this interesting debate about bottom‑up AI and our right to be humanly imperfect.

     Thank you.

     [applause]