IGF 2017 - Day 0 - Salle 21+22 - Data Donation: Auditing Socially Relevant Algorithms


The following are the outputs of the real-time captioning taken during the Twelfth Annual Meeting of the Internet Governance Forum (IGF) in Geneva, Switzerland, from 17 to 21 December 2017. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 




>> MATTHIAS SPIELKAMP:  Thanks for waiting.  We have now managed to get our website on the screen here.  We didn't bring any slides, and we expected a smaller room because we intended this to be a workshop or conversation, but we will manage.  You all have microphones, so I suppose we can get into a conversation if we want to.  My name is Matthias Spielkamp.  This is Lorena Jaume‑Palasi.  We are founders of AlgorithmWatch, which is an advocacy organisation that is focused on doing research on automated decision making.  We are called AlgorithmWatch because we didn't want to call ourselves automated decision making watch, but we do know that algorithms are only part of the systems that we are looking at; that's why we refer to this as automated decision making.

But we don't want to tell you too much about the organisation itself.  I will say a couple of words about it, but then we will present to you a project that we did last summer in the run-up to the German general elections, which we hope you will find interesting to discuss, because it's one example of how you can examine automated systems.  We would like to present this to you, to the community, and we would like to discuss whether you have similar ideas, whether you have seen stuff like this being done in your respective countries or in your communities, and so on and so forth.  These questions are now being debated at conferences worldwide, and have been for quite a while.  It's not that we came early to that game or to the discussion; it's been going on for quite some time.

Well, how do you do that, or let's rather start with the question that we had in mind.  There is a lot of debate about personalization of news results.  There is this hypothesis that everyone who does a Google search sees different things, you know, whether you are a woman sitting at a computer that is logged into Google in London or a guy who is not logged in to Google and sitting in Bangladesh.  You will see completely different search results and that means that you get a very different set of information and, therefore, this influences your world view.

Now, we wanted to test this a little better, to see some evidence on this, because, at least we think, there is too little evidence about these questions.  So how do you go about that?  How do you come up with a model or with an idea to test this?  And what we came up with was a crowd‑sourced approach.  So what we did is, Logans and his company programmed a browser extension that could be downloaded to users' computers and installed on Chrome or Firefox, and this browser extension would then, whenever the browser was open and the PC was running, send the same 16 search queries to Google six times a day for as long as it was installed, until a week after the German elections ended.
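The collection schedule described above can be sketched in a few lines.  This is an illustrative sketch only: the query subset, the even four-hour spacing, and the job structure are assumptions, not the extension's actual code (which was a Chrome/Firefox plug‑in).

```python
from datetime import datetime, timedelta

# Illustrative subset of the 16 fixed queries; the full list is on the project website.
QUERIES = ["Angela Merkel", "Martin Schulz", "CDU", "FDP"]
RUNS_PER_DAY = 6

def collection_jobs(day_start, queries=QUERIES, runs_per_day=RUNS_PER_DAY):
    """Yield (timestamp, query) pairs: every query, runs_per_day times a day."""
    interval = timedelta(hours=24 / runs_per_day)
    for run in range(runs_per_day):
        ts = day_start + run * interval
        for query in queries:
            yield ts, query

jobs = list(collection_jobs(datetime(2017, 9, 1)))
# 4 queries x 6 runs = 24 result snapshots per participant per day
```

With the full 16 queries this comes to 96 snapshots per participant per day, which is how roughly 4,000 participants could accumulate millions of search results over the campaign period.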

Then Google would send back the search results to these different users, and their computers would send the search results to us.  So we were in the position to do this not in, let's say, an artificial kind of way by using proxies or something like that, but with real users' computers, and they would send data sets to us and we were able to analyze these data sets.

From the beginning, we said everyone is invited to do research on that.  We published the data sets as soon as we had them, on a continuous basis as we collected them, and the browser plug‑in as well; it's on GitHub, so people could make use of the data or the browser plug‑in.  We don't know of anyone who used the browser plug‑in for testing in their own countries, or for that matter in Germany for a different purpose, but we do know of people who downloaded the data and did their own research on it.  So that is part of the, we think, success of that project.

Now, we partnered with SpiegelOnline, which is one of the largest news sites in Germany because we needed that kind of attention, you know, the more people who would download that browser extension, the more data we would have available for analysis.  And more than 4,000 people participated.  There is basically, I mean, there is not really any kind of baseline here, how to measure that.  Is that a lot of people?  Is that a few people?  You can't really say because it was never done before, but we were very satisfied with that number.  4,000 people was a high number of users who were sending us their search results.

And not all of them kept the extension running, but enough of them for us to be able to collect more than 5 million search results in the end.  So that is quite a data set that you can do analysis on, and, yes, we were pretty happy about this.  Now, what happened then?  Two different things.  First of all, we as an organisation did an analysis of the results, and it is still ongoing because, you know, you need to find ways to test this.  You need to build a hypothesis of what you are testing for, and we are publishing this in, let's say, installments.  There was a small preliminary report in July, then the first full report came out in September, and there will be another one at the beginning of next year.  And another researcher from Germany also looked at the data and did his own analysis.

So this was all done in the past, and the next report will be coming out a couple of weeks from now.  Now, I suppose many of you are very interested in hearing what the results are, and they are a little complicated.  The first thing is that in general ‑‑ let me start this way.  Google says that they are not doing a lot of personalization.  Of course, we talked to them, and we looked at the information they provided, and they say that they provide a lot of transparency about how the search engine works, as much as they can without giving an advantage to, you know, shady search engine optimizers and stuff like that.  So they have a blog that gives a lot of information about how they tweak their mechanisms to give users the most relevant results.

And as part of that, they say that they are not doing a lot of personalization.  They say about 2% of search results are personalized.  Now, what does that mean?  That's complicated, but we will come to that.  What we then found ourselves was actually very close to what Google themselves say.  We only discovered very little difference between the search results on different users' computers, meaning that, for example, out of the nine organic search results ‑‑ organic is what Google calls search results that have nothing to do with advertisement ‑‑ eight did not differ from one user's computer to another.  That is one of the general results that underscores what Google says about how they go about this personalization.
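The comparison described here ‑‑ eight of nine organic results identical between users ‑‑ can be made concrete with a simple overlap count.  The result URLs below are made up for illustration; they are not from the project's data.

```python
def shared_results(results_a, results_b):
    """Count how many organic results two users have in common, ignoring rank."""
    return len(set(results_a) & set(results_b))

# Hypothetical result lists: user_b gets one localized link in place of one shared hit.
user_a = [f"https://example.org/page{i}" for i in range(9)]
user_b = user_a[:8] + ["https://example.org/local-chapter"]

shared_results(user_a, user_b)  # 8 of the 9 organic results are identical
```

Counting shared links while ignoring rank is the simplest possible measure; a finer analysis would also compare the ordering of the shared results.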

When they do personalization, they say it's a lot of regionalization, meaning that if you are a user in the United States and you are looking for ‑‑ what was the example that Cornelius used?  CSU ‑‑ you will probably be looking for Colorado State University.  If you are a user in Germany looking for CSU, you are probably looking for the Christian Social Union, the party that is the ruling party in Bavaria, and the leader of the Christian (?).  It is clear that this regionalization is being done, and it cannot only be done on a language basis, it can also be done on a geolocation basis.

How does this translate into a smaller part of the world, meaning Germany in that case?  We also saw regionalization there, especially when people were looking for ‑‑ oh, I forgot to say, sorry about that, what the search terms actually are.  We used very, very simple search terms for this first attempt.  It was like a pilot project we were doing.  So we used 16 different search terms, and they were like Angela Merkel and Martin Schulz, the candidates of the major parties, and CDU and FDP, the parties' names.  So those were the search terms we used and tested for.

And if you looked at the politicians' names, you would not see a lot of difference between the search results, but if you, for example, look for the parties' names, you could see more difference because if you are looking for CDU in one part of the country, Google would try to display to you, for example, the local chapter of that political party.  And if you looked in a different part of the country, then it would show you the local chapter there respectively.

But that was also in line with what Google claimed they were doing.  And then we had very, very little difference also ‑‑ which was probably a more surprising result ‑‑ between people who were logged in and not logged in to their accounts.  There is this assumption that if you are logged into your account, there is a lot of information that Google has stored for you: your search history, probably information that they are taking from your email account, your calendar applications, Google Drive and what have you.

But apparently ‑‑ or not just apparently ‑‑ we were not able to see much of a difference there either.  Then it becomes a little more detailed when we start looking at, for example, what kinds of news outlets are shown to people.  There you do see a difference between people who are logged in and who are not logged in, and there are a couple of assumptions as to why that is the case.  But I would like to stop here for a moment, give Lorena an opportunity to also comment on that, and then probably ask for first questions to clarify what I'm talking about, because, you know, it's not complicated, but I could imagine that there are questions about the setup of this whole thing.

Oh, let's go straight‑away.  Are there already questions?  Clarification on what we did, why we did it, and so on?  Is there?

>> LORENA JAUME‑PALASI:  Please say your name. 

>> AUDIENCE MEMBER:  My name is Emily Avrihulu, European Parliament.  So my question was: you mentioned that there were small differences, but were these differences concrete enough ‑‑ or rather, what are the small differences?  You say that there were differences, but what were these differences?

>> For instance, out of nine organic search results, if you were searching for a politician, you would have eight results that were everywhere the same, and then you would have one link left that would be some sort of personalization, or we assume must be some sort of personalization.  So basically you could say it really seems that there is this 2% of personalization Google is claiming they are doing; that assumption could not be proved as such, but it seemed plausible that the 2% would lead to the one link left.  With parties the results were slightly different because, as Matthias said, it would be five to six organic results that were pretty much the same, and then we would have one to three remaining links, and two of them would be regional parties or things like that. 

So we could see that depending on where the person was located, that person would have different, so to say, regional party suggestions, and then there would sometimes be one link left, sometimes not even that, where we must assume that this is some kind of personalization, based perhaps on gender or age or something of that kind.

Yes, please.  I see two hands raised.  The lady in front of me and then the gentleman.

>> AUDIENCE MEMBER:  Thank you.  Sonia from Third World Network.  I was wondering whether you looked at the ad results as well, because apparently there was a study that found that when women and men were searching on Google, men were shown more ads for chief executive positions than women.  So I wondered if you looked at the ad results as well and whether there was any gender difference?

>> No, we explicitly didn't want to look at that; for us, it was pretty much about the organic results during the German elections, because we wanted to understand what people were being shown.  And there is not much advertising in this respect.  We also wanted to concentrate on the Google News results ‑‑ this was also part of what we were collecting, and we are still working on the analysis of that part.

So we hope to come up with results at the very beginning of next year, so we can give you more insight into the Google News search, since it's slightly different and we are sure that we are going to see slightly different patterns there.

>> AUDIENCE MEMBER:  A question on the crowd project.  When you looked at the differences, you never said anything about time, because you collected the results every day, and I would expect that, given it's about news, news changes every day.  And even on the same day, for a different user, it's still a different time of day when the query might be run.  So it might be that the results change because of the time of day as well.  How did you factor that in when you said the results were different between users?

>> I was only talking about the non‑Google News results, for exactly that reason.  The Google News results, of course, change on an hourly basis or even faster than that, so we will have to do a very different analysis on the Google News results, but we don't have that yet.
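One way to handle the time-of-day confound raised in the question is to compare snapshots only within the same collection window, so that results fetched at different times of day are never treated as evidence of personalization.  This is a sketch with assumed field names, not the project's actual pipeline.

```python
from collections import defaultdict
from datetime import datetime

def group_by_window(snapshots, window_hours=4):
    """Bucket result snapshots by (date, window index) so that only results
    fetched in the same collection window are compared across users."""
    buckets = defaultdict(list)
    for snap in snapshots:
        ts = snap["timestamp"]
        buckets[(ts.date(), ts.hour // window_hours)].append(snap)
    return buckets

snaps = [
    {"user": "a", "timestamp": datetime(2017, 9, 1, 8, 2), "results": ["x", "y"]},
    {"user": "b", "timestamp": datetime(2017, 9, 1, 8, 5), "results": ["x", "y"]},
    {"user": "a", "timestamp": datetime(2017, 9, 1, 20, 1), "results": ["z"]},
]
buckets = group_by_window(snaps)
# the two 08:0x snapshots share a bucket; the 20:01 snapshot sits alone
```

Because the extension queried on a fixed schedule, bucketing by window lines up snapshots from different users that were taken at (roughly) the same moment.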

>> AUDIENCE MEMBER:  The rest is more stable.

>> Yes, the rest is very stable ‑‑ except, if I remember correctly, for the AfD, which is the new right‑wing party in Germany; their search results are more in flux.  And, of course, the hypothesis behind that is that because there is not so much, let's say, stable information that has been in the index for the last couple of years, there is more dynamic in the search results that are shown there.  And I would like to highlight that because, again, you know, we do see differences.  We are not saying that we don't see differences, but we see differences and it's very hard to develop hypotheses as to why we see some of them.

For example, you see a lot more variety in the news sites that are displayed when users are searching for Angela Merkel than when they are searching for Martin Schulz.  You have a couple of hypotheses.  Angela Merkel has been the German chancellor for a very long time.  There is a lot of international reporting on her.  There is more national reporting on her in general than on Martin Schulz, although he is a prominent politician and he was the President of the E.U. Parliament and so on and so forth.  So that results in a wider variety of news sources being shown to people who search for Angela Merkel than to those who search for Martin Schulz.  But you can imagine that, first of all, we see these results, and then we have to develop methods to test for this: is that a correct hypothesis or is it not, and what other factors could be in there?

Just before the next question, I would like to point something out to you that I have not commented on very much.  This is a very simple website.  It is not the AlgorithmWatch website; it's a project website where we just compiled English‑language information because we thought this would be interesting for an international audience as well.  Our reports so far have only been published in German because the funding comes from German media regulatory agencies, and their audience is, of course, Germany ‑‑ German policy makers and the German public.  So we will work on this, and we will also publish English‑language results in the future, but so far the English‑language website basically only explains what the project is about and how it works, with a list of results displayed at the end of it.

You can also see, if you are interested, what the search terms are.  We list the search terms, and as I said, they are really straightforward.  These are the names of the politicians who were running for chancellor in Germany, and then the main parties ‑‑ not all of the, I don't know, 26 or 30 parties that were on the ballot in Germany.  We didn't include all of them because that wouldn't have made sense for the kind of research that we were doing.

But if you only read English ‑‑ or if you don't read German, let's put it that way ‑‑ you can look at the results that Cornelius Puschmann presented.  He is working with the Hans Bredow Institute and he published his results in English.  He did some of the research I was referring to, for example, finding that there are more news outlets shown in the search results for Angela Merkel than for Martin Schulz.  That was a long answer, but there you go.

>> Yes, please.

>> AUDIENCE MEMBER:  I'm a researcher in Turkey.  I was wondering what is the most striking outcome to you so far from this ongoing research?  Thank you.

>> Striking, what do you mean, surprising?

>> LORENA JAUME‑PALASI:  I wouldn't say surprising.  We found some glitches we cannot explain, and this is one of the things that is now a question mark.  We need to understand what those patterns are.  They don't seem to be based on either language or regionalization, and so we are in talks with Google to understand what that is and what could be a possible interpretation for those types of patterns that we are seeing.  But overall, we sort of saw confirmed what they were telling us.

So there was nothing striking on that.  To some extent, I think the lesson learned concerning Google's search results is that, in Google's understanding, region plays the big role: people are very much influenced by the surroundings they live in, and so Google apparently decided to go for more regionalization rather than to be more fine‑granular.  And that's interesting because ‑‑ I think it's 52; they told us there are 52 different criteria they apply.

And out of those, regionalization seems to play a major role.  But, again, we are still analyzing the Google News algorithm, and I think there is a lot still missing there, and we might find a few things that will be surprising.  But I cannot tell yet.

But, just a second, I would also like to highlight here ‑‑ I mean, this is exactly what I referred to in the beginning when I said we are a non‑governmental organisation.  We are a Civil Society organisation, but we are research focused.  That means that we would like to find out how these things work and probably, you know, find out a little more about how they influence our daily lives.  But this was not about scandalizing, for example, how Google treats search results or anything like that.

On the contrary, we ourselves had the assumption that what Google is saying, in a general way, is mostly correct.  They are not trying to fool us in the sense that they say, oh, no, we don't do any personalization, only 2%, and then we would find out that, gosh, basically everything is totally personalized and, you know, my neighbor sees completely different search results than I see myself.  We didn't have that hypothesis.  We said that it is important, as societies, to be able to develop methods to, you know, externally audit these things.  And these are not usually low‑hanging fruits.  One of the reasons why we chose Google here was because we were able to do it with their setup: we can control the input and we can see the output, with a very sophisticated method, right?  I mean, this crowd sourcing is quite some work, and we were really happy that it worked out in the way that many people participated; it would be very difficult to do this in, let's say, a laboratory setup.

But, for example, if we discussed whether it would not be a more interesting target to look at what's going on at Facebook, we would probably agree with you right away, but we could tell you a lot about the problems that you would be facing if you wanted to do something like that with Facebook.

>> LORENA JAUME‑PALASI:  Even with Google, we had to consider a lot of data protection issues: what can we collect, what can we not collect, how do we deal with that?  This is something that was dear and important to us, and this is one of the issues that, I think, you must be aware of ‑‑ especially with social media ‑‑ if you start creating plug‑ins that are going to collect data and try to find some sort of intelligence about what is happening on social media, because this implies collecting a lot of personal data.  We decided for this project that we wanted to have no personal data. 

So everything we did was without knowing who the persons behind the download of the plug‑in were, which gives you limitations, because we cannot tell, of course, how many women we had in the project, how many men, or how many people with a migration background.  None of that can we say.

From the IP address, we only collected the first part, so we could understand what region users were coming from.  Yes, please.  
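A sketch of that kind of coarse, privacy-preserving regionalization: truncate the IP address before any lookup so the host-identifying part is never stored.  The prefix length and the prefix-to-region table below are illustrative assumptions, not what the project actually stored or used.

```python
def truncate_ip(ip, keep_octets=2):
    """Drop the host-identifying tail of an IPv4 address before any lookup."""
    parts = ip.split(".")
    return ".".join(parts[:keep_octets] + ["0"] * (4 - keep_octets))

# Hypothetical prefix-to-region table, for illustration only.
REGIONS = {"84.56.0.0": "Bavaria", "77.20.0.0": "Berlin"}

def region_of(ip):
    """Coarse region lookup that never sees the full address."""
    return REGIONS.get(truncate_ip(ip), "unknown")
```

The trade-off is exactly the one discussed here: the coarser the prefix, the weaker the geolocation, but the less personal data ever enters the data set.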

>> AUDIENCE MEMBER:  I understand you are trying to stay clear of what the GDPR restricts, but on the other side, do you think knowing that data would change the direction of this research?  If we added another level ‑‑ like, these people from these groups got these results ‑‑ would it bring another outcome?

>> LORENA JAUME‑PALASI:  What we did was pretty much prove that we can, together as a society, crowd source and gather data together so that we can have some insight into what these types of processes are doing without seeing the algorithm.  I think this is a lot of added value, because you can use this method for many other things.  So, yes, I do think that this is going to have an impact if we try using these types of ideas, and develop similar ideas more and more, to understand how platforms or solutions that use machine learning or some other automated decision making mechanism can be audited by society, or by parts of society, in a joint effort.  However, of course, it cannot completely reverse engineer the algorithm and tell you how exactly it is working.  It's only a partial audit.

>> I think the question, if I understood you correctly, was getting at whether we would have had better results if we had asked for more of that personal data.  And, you know, we had to make a choice.  On the one hand, yes, more information about the users would have been better.  If we had known, you know, whether they were male or female, what age group they are in, where they live exactly instead of just doing a very, very unreliable geolocation of the users, that could have been valuable ‑‑ in a sense, to test how representative the sample is.  We never claimed that it is representative at all.  It was distributed via SpiegelOnline. 

So there is certainly a bias toward the user group that reads digital news on SpiegelOnline.  But we had to balance that against the question of how many users we could convince to participate, and if you show them a data usage agreement that tells them, yes, we would like to collect your exact location, please give us your age and gender and where you live ‑‑ postal code, district and things like that ‑‑ then we thought that we would turn many people away, because they exactly did not want to volunteer that information.  So that was the balance we had to strike.  And we decided, for many reasons, but that was part of it, not to collect that information.  And you also had another question, right?

>> AUDIENCE MEMBER:  Yes, I have a question, and I might be confused, but as I understand it, you have these queries that were sent from the computer six times a day when the computer was on.  So I assume that the results are personalized at the end of the day, since these queries were sent six times a day ‑‑ meaning regardless of whether I made these queries myself, they were sent to Google either way, so the results for these queries were actually personalized.  Do you understand what I mean?

>> LORENA JAUME‑PALASI:  No, no, no ‑‑ they come from different computers, and different computers search six times a day for the same terms.

>> AUDIENCE MEMBER:  So my computer sends these queries six times a day, plus I do some searches on my own, so in the results you collect at the end ‑‑ you said that there were small differences, like 1% was something that was about me, maybe like sports or something ‑‑ but the other results were on these queries, no?

>> MATTHIAS SPIELKAMP:  No, I think that is a misunderstanding.  The only results we collected were the results to the search queries our extension sent.  We did not collect the results of your personal other searches, none of that.  Only, you know, your browser extension would search for these 16 search terms that are here on the, you know, that are displayed on the website.

>> LORENA JAUME‑PALASI:  Perhaps, just to make it a bit clearer: when you download the plug‑in, what happens is that six times a day a small window suddenly opens and starts opening tabs for all of the 16 queries, and all of the results gathered within those 16 tabs are sent to us.  Then the window closes, and nothing else is collected.  That is how it looked.

>> So we would only collect the search results for these queries, none of your other queries.

>> AUDIENCE MEMBER:  So you saw that the results for these queries were the same for most of the people, and that's how you concluded that they were not personalized results? 

>> LORENA JAUME‑PALASI:  The basic assumption is if there's personalization, then we should have like very different patterns, very different types of results.  It should be very messy.  So all of the people in Berlin should have very different results, all of the people in Bavaria should have very different results, and overall all of the people in Germany should have very different results and that was not the case.
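The reasoning just stated can be phrased as a similarity check: within one region and one query, compute the pairwise overlap between users' result sets.  Strong personalization would show up as low average similarity ("very messy" results); the data below is illustrative, not from the project.

```python
from itertools import combinations

def jaccard(a, b):
    """Overlap of two result sets: 1.0 means identical, 0.0 means disjoint."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def mean_pairwise_similarity(result_sets):
    """Average Jaccard similarity over all pairs of users in one bucket."""
    pairs = list(combinations(result_sets, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Hypothetical bucket: three Berlin users searching for the same party name.
berlin = [["a", "b", "c"], ["a", "b", "c"], ["a", "b", "d"]]
mean_pairwise_similarity(berlin)  # high values suggest little personalization
```

Applied per region, this also separates the two effects the speakers describe: high similarity within Bavaria and within Berlin, but lower similarity between them, would indicate regionalization rather than individual personalization.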

>> AUDIENCE MEMBER:  Thank you.

>> AUDIENCE MEMBER:  Hi, my name is Barbara Rosen Jenkinsons from DiploFoundation and the Geneva Internet Platform.  I have a question about the search queries, because I was wondering whether you considered looking at policy topics as well ‑‑ such as refugees or migration or healthcare ‑‑ because they might be more divisive than the politicians or the parties themselves, and less general in that way.

And my second question is do you have any plans for future research or on the basis of this research?  What is it that you would like to continue researching?

>> LORENA JAUME‑PALASI:  This is a fantastic question, and it touches on one of the best practices we would now give to someone who wants to apply our approach to their own project.  We wouldn't recommend using only these search keywords; we would suggest either adding more to them or concentrating on topics like the ones you are proposing.  It would have been really interesting to collect that.  Now we would do it slightly differently and would try to collect precisely those things you are suggesting.  Migration was one of the keywords where we thought, oh, man, we should have looked for that, because that was one of the controversial topics within the German election where we think we could have added something.  But we wanted to keep things simple on one side, and on the other side it was a pilot project, just to test the process, to test the idea, to test the tools.  It was more about that.  But absolutely, you are very right.

And yes, we were thinking that we want to do other things.  Actually, I'm in talks with a colleague of yours, because we think that, for instance, this would be something interesting to test in Eastern Europe.  I can imagine there are a lot of people using laptops or computers in general, and there are some countries that are going to have elections within the next months to a year.  So I think that this could be really interesting to apply there.  Yes, please.

>> AUDIENCE MEMBER:  Sorry to ask another question.  Some of the other studies on algorithmic discrimination and so on found, for example, price differences: if I go online with a fancy Apple Mac and I search for an umbrella, I get shown more expensive umbrellas, or the results depend on the distance from a physical shop and so on.  My first question is whether you think the kinds of differences you found would necessarily hold for other kinds of searches.  And my second question is, more broadly, about AlgorithmWatch's work: do you think there are areas where this kind of methodology is limited for the kinds of examinations we need to do? 

And I'm thinking here about the fatal Toyota car crashes in the U.S. when the brakes were failing, and the way that they found the problem in the source code: the U.S. Government regulators looked at it, they got the NASA scientists to look at it, and then the plaintiffs in the court cases, the victims' families, got their IT experts to examine the source code, and they found the problem that caused the fatal car crashes.

So my question is whether there are times when this kind of testing at the algorithm level is not enough and you have to get down into the source code.  The reason I ask is because a number of Governments, including the EU, are prohibiting regulators from looking at source code in a number of trade agreements, including at the WTO, which would apply to 164 countries with no exceptions.

>> LORENA JAUME‑PALASI:  Of course, you cannot use this tool to test everything, because this tool cannot be applied to things that are not of commercial use and are not used by a wider audience.  And, of course, to test different algorithmic processes from different contexts, you are going to need very different methods.  So, no, this is not the golden apple that is going to change all types of auditing overall.  Of course not.

With regards to looking at the source code: yes, this is also one of the things that are relevant, especially when it's about really important or sensitive types of algorithms, like DNA testing when used in courts.  Of course, you should be able to look at the code ‑‑ but not only the code.  You should be able to look at many other things, because one of the main issues when we talk about this type of process is that it is not only about the algorithm; it is about a whole complex process where you need to look at the data banks, and at the data selection that was made. 

You need to look at the context: is this the right algorithm for this type of context, or is this a process that should not be applied to this kind of context because it was conceived for a different setting or a different data set?  So there are a lot of things that you need to look at.

And with that, I would say this is one of our main claims: we don't think that there is a one‑size‑fits‑all solution.  We don't think that there is one entity that will ever be able to analyze all types of algorithms.  It's a very contextual thing.  You need expertise not only from the technical side, but also from the side of the context where it is being used.  So, let's say, if you are looking at a system used within border control, you need people that work at border control and understand the context of border control: how visas are being granted or denied, how the whole databank is being processed, how the data is input and who is putting the data into the databank, whether it's done manually.  So there are many things where you usually need a team of people helping you to get an insight into this.

>> I think, I guess, most of the people here in the room are aware that we are at a very early stage in this discussion.  There have been discussions about automated decision making for a very long time, as I already said, and about auditing processes, looking at, for example, how airlines allocate seats on planes and things like that.  That's been going on for quite some time.  But still, we feel that there is a new dynamic in this entire field for different reasons.  One is that there is a much wider deployment of these systems now because of technical developments, but also there is a higher sensitivity to this.

So there are initiatives worldwide working on these questions of how, in different contexts, you can audit these systems.  And even in our team, our so far small team at AlgorithmWatch, we sometimes disagree on what actually needs auditing and how you could go about that.  I think the Dieselgate case is an interesting one, because the first time it was discovered that there was manipulative software at work was by testing for the emissions.  And then much later on, a German software engineer actually looked at the source code and found how the manipulation was actually being done.  So there can be very, very different means of auditing these systems.

>> AUDIENCE MEMBER:  My last question: do you have any other plans for the future, some other project?  Maybe I can give an example, like something similar to Content ID by YouTube, since we have the copyright reform under discussion in the Parliament at the moment, where Article 13 actually wants to impose an obligation on platforms to have this tool by default, something similar to Content ID at least.  So do you have any future plans on that?

>> LORENA JAUME‑PALASI:  Yes, we were able to secure two major grants, and from next year on we are going to look at ADM processes, automated decision making processes, used in the workplace by human resources departments to evaluate and manage workers.  So it's not going to be about recruiting, but about how workers, once in work, are being managed, how those tools are being applied to evaluate them.  We are agnostic when we research, so we are fact‑based.  We see that there can be added value in the deployment of these types of systems, but we also see that there are risks.

So we will analyze it together with a professor of data protection and labour rights, who will do the legal analysis part.  There will be an ethical analysis of it, and we will have, of course, a technical analysis of all of the tools being deployed in the German market, because it's a German grant that we got there, even though we are aware that many of the solutions that we will be analyzing in Germany are for sure being deployed in many other countries within Europe, because there are a few big players whose software is the common software being used by many companies, not only German companies.  This is one of the projects that we are going to do.

For the second project we got funding from the Bertelsmann Stiftung, and we are trying to shed more light on where those systems are being deployed and in which contexts.  So we are going to have a mapping of all types of algorithmic decision making processes and the different life contexts where they are being applied.

And we will also map regulation, because there is a lot of regulation going on that has an ADM dimension to it, and it's still not very well known.  So that will also be part of our work, because we also work theoretically.  We want not only to have facts, but also a theory of the facts that we are collecting.  So that is part of our work as well.  We are trying to make a first sense of, or put a structure on, what we are researching.  And, of course, there are a few other small projects about auditing.  I don't know if we can talk about that.  Should I talk about that?

>> No, no.

>> LORENA JAUME‑PALASI:  There are other small projects in the pipeline that we want to do next year, so stay tuned and visit our website, but that's pretty much it.

>> MATTHIAS SPIELKAMP:  I suppose we need to wrap up.  It doesn't seem that there are people in this room after us, but there are other sessions going on, so we want to end on time.  We have three more minutes.  So if someone has a burning question, please raise your hand, but I would like to add to what Lorena just said: you can go to AlgorithmWatch.org, and then either slash DE slash newsletter or slash EN slash newsletter.  If you want to subscribe to the newsletter, you will be informed about the projects going on, what we are planning to do and what we are actually doing.  So if you are interested in that, you can keep in touch via that, and I will tweet those links right now if you want to stay informed about what we are doing.  Okay.  So is there a last question?

Thank you very much, everyone, and I guess we will see most of you around over the next couple of days, so if there is anything that you would like to talk to us about directly, please approach us.  Thanks a lot.

(Concluded at 1:55 p.m. local time).