IGF 2023 – Day 2 – Open Forum #82 AI Technology-a source of empowerment in consumer protection – RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> Hello, everyone, we can start the last session of Tuesday, and we would like to concentrate on artificial intelligence as a toolkit that we can also use for consumer protection.  And now I would like to give the floor to the online moderator, Martyna.  The floor is yours.

Martyna, are you with us?  We cannot hear you.

Maybe you are muted.

>> MARTYNA DERSZNIAK-NOIRJEAN: Hi, everybody, can you hear me now?

>> Yes, we can hear you.

>> MARTYNA DERSZNIAK-NOIRJEAN: Can you see me?

>> Not yet.

>> MARTYNA DERSZNIAK-NOIRJEAN: Okay, not yet.  So let me give it a try.  Before I start, it would make a little bit of sense that you could see me as well.

So let me see.  Otherwise, technical assistance, if you could try to help me with this, that would be wonderful.

Either way, I will not take more time with my technical issues.  Welcome, everybody.  It's great to be here for the third time at the Internet Governance Forum.  We are happy that this year we can alert the forum to consumer protection issues.  This year we have wonderful panelists.  Welcome, everybody, and thank you for giving us this opportunity.  I will start by saying one of the things you have heard most often recently, which is that AI has been changing our lives.  And I'm pretty sure you are tired of hearing this.  But even though we have heard it so many times, it doesn't make it any less important.  So we need to discuss and converge around this issue, and this is why we have organized this panel.

And now the question is: why is it important to discuss AI in the context of consumer protection?  For us, consumer protection authorities and enforcers, the issue is basically that firms have advantages over consumers.  They can use AI to have greater possibilities of engaging in unfair practices against consumers.  This is one option, of course; AI can also be used for good purposes.

And our task as consumer protection enforcers, and all stakeholders active in the area of consumer protection, is to understand to what extent we should curb AI use and to what extent we should allow it to flourish to actually assist consumers, for example, by giving them a better choice of products.

So this is a big challenge for us.  We need discussions.  We need to speak, we need to engage with this topic.  That's why we think it's very important to continue discussing it, even though we are already discussing it a lot.

And as an emerging topic, we really need to have a wider conversation about it and IGF is a great forum for that.  We have Internet stakeholders around here, people who are concerned with consumer protection and other needs and people who are more knowledgeable about different technologies and how they are being used online.  It's great and we hope we will have a wider discussion.  And I'm pretty sure that Piotr will be able to follow up on this.

And one final thing of introduction: apart from trying to understand the impact on consumers and the scope of intervention by authorities in the context of AI and consumer protection, there's one more thing that we have been exploring as a consumer protection agency, which is the use of AI for our own purposes in investigating unfair practices.  It's a great tool to look at our own actions and activities.  So we are also doing this.  We are conducting two projects where we develop AI tools, and we are also aware there are many other such projects all over the globe.  Our panelists will tell you more about that.

So Piotr, that will be all from my side and I wish you all a great panel.  I'm pretty sure you will be able now to present the panelists.

Thanks very much.

>> PIOTR ADAMCZEWSKI: Thank you, Martyna.  We have to discuss the problem of using AI.  I have to also admit that last week we had a panel among the consumer protection authorities, when we were gathering together with the institutions that have the same aim, namely the protection of consumers in each jurisdiction, and we looked at the risks of using AI.

And this panel at the IGF is the better place to discuss the possibilities and how to develop further.  I strongly believe that artificial intelligence ‑‑ it's already in operation at many agencies, but it will be developing pretty fast, and definitely it's needed for the detection of the traditional violations but also for the infringements which are new, which are connected to the newer digital services.  So for today, for that reason and to that aim, we invited our permanent guest, representatives of international organisations, which is the OECD, which looks after consumers; Melanie MacNeil is here with us.  And Angelo Grieco, and other colleagues from enforcement, Sally Foskett as well.  And last but not least, we have Kevin from the Tony Blair Institute for Global Change to talk with us from the perspective of the consultancy world.

So the structure of the panel will be two rounds.  First we will present the tools we already have, and then in the second, we will ask our guests about the future and the possible developments.

So first, I would like to turn to Christine and ask her about the outcome of her survey.  Christine, the floor is yours.

>> CHRISTINE RIEFA: Thank you very much.  I'm trying to quickly share my slides to help with what I will describe.  I think you should all see them now.

So thank you very much for having me, and it's a pleasure to join you, only virtually, but still be part of this very amazing conference.  I will give you a tiny little bit of background first, because I'm aware that perhaps some people joining this panel are not consumer specialists.  So consumer protection is a world with several ways of ensuring that the rights of consumers are actually respected and enforced.  It's a fairly fast‑developing area of law, but it has a very unequal spread and level of maturity across the world, and that does cause some problems in the enforcement of consumer rights.

We also rely, in most countries of the world that have consumer law, on a combination of private and public enforcement.

And AI, as the subject of today, can actually assist on both sides of the enforcement conundrum.  We also have a number of consumer associations and other representative organisations that can assist consumers with their rights, but can also assist public enforcement agencies.  In the UK, a good example is Which?, the consumer association, which is actually able to ask the regulator and the enforcers to take some action.  So that's variable across the world, but they are normally a very, very important element of the equation as well.  We have seen in previous years pretty much a shrinking of court access for consumers, with ADR and ODR on the rise, so public enforcement through agencies is really an important aspect of the mix of how to protect consumers, and hence this session is extremely important to ensuring the rights of consumers and developing our markets in a healthy way.

So the project I have been involved with is EnfTech, and it looks at the tools that enforcement agencies were using in their daily work.  It also reflected a little bit about the future; I will keep those comments for the second round.

What we found is that EnfTech is actually a broader use of technology than just AI, so it would include anything that is perhaps lower tech, if you will, than artificial intelligence might be, but can be just as effective.

And we wanted to look at ways agencies could ensure markets work optimally; not using technology in the enforcement mix might lead to the obsolescence of consumer protection agencies, and there was an essential need to respond to technological changes.  We surveyed about 40 different practices that we came across, not simply in consumer protection but in supervisory agencies more broadly, and we ended up selecting 23 examples of EnfTech that are specific to consumer protection, spanning a range of authorities, 14 in all: seven were general consumer protection agencies, and the others had broader or different remits.  It's only a snapshot.  It's obviously extremely difficult at this stage to work on public information about the use of technology in agencies; there's also a lot of development underway, and there are also reasons why agencies may not want to publicly announce that they are using particular tools.

The survey has some interesting findings.  In the report, we explain how a technical approach will be essential and how to start rolling one out.  We give a picture of how the agencies that are doing it are doing it, and how they have structured themselves in order to roll out EnfTech tools.  We also mapped out the generations of technologies, because actually not all agencies start from the same point.  Some agencies might be very new and have absolutely no data to feed into AI.  Others might be more established but don't have data structured in a way that might be useful.

We also found that with very little technology, you can do a lot in consumer enforcement, and our report recognizes that.  We provide a list of use cases, so for anyone interested in what's happening on the ground, that's a very good starting point to find out pretty much all the examples of things that are currently working.  We also reflected on some practices that we found slightly outside the realm of consumer protection but that could easily be rolled into consumer protection and, of course, we discussed challenges.

So, our key findings, which I think are useful.  AI is a misnomer; we are talking about a variety of technologies, and AI is not a panacea: we do not think it will solve all the problems in the future.  It has, however, got huge potential, and we found that about 40 to 45% of the consumer authorities we surveyed are using AI tools.  Now, that still means that around 60% of the tools in use are EnfTech tools that are not AI, and that is quite a significant finding, because just in 2020, at the start of the discussions about technology in consumer enforcement, very few reports or projects considered AI as being viable; they were looking at other technical solutions.  We found as well that the agencies that have a dual remit, so they are not just dealing with consumer protection, fare a little bit better in their rollout of tools, and that might be because they are able to capitalize on experience in competition law, for example, but they also have a bigger structure, and that facilitates a lot of the rollout of technology.

If we compare it to other disciplines, consumer protection agencies are behind, but as Piotr mentioned, they are catching up very quickly ‑‑ sorry, I will move on from all of this.  And then the final thing for me to point out before we hear from the examples is that AI in consumer enforcement needs to be built with a framework strategy that takes into account all the potential problems that might come with it.  One of the big dangers that we have identified is that a lot of staffing resources and money could go into developing AI for consumer protection enforcement, and it would be a real shame if the one big hurdle in the way of the enforcement agency turned out to be a legal challenge from the companies being investigated.

And we found loads of potential issues to strategize about.  So on that general overview, I leave you and pass the floor to our next panelist.

>> PIOTR ADAMCZEWSKI: Thank you, Christine.  It's a lot of work, but it looks promising.  And now I will give the floor to Melanie to see how the OECD is looking at consumer protection regarding the usage of AI.

>> MELANIE MacNEIL: Hi, everyone, good morning, good afternoon, depending on where you are, if you just bear with me for one moment, I will share my screen very quickly.

Sorry for the delay with that.

All right, I'm assuming everyone can see that.  Very excited to be here today.  And the previous presentation was very helpful as well in setting this up.  So I'm speaking to you today from the Organization for Economic Cooperation and Development, or the OECD, where I work in the consumer policy team.

So the OECD has 38 member countries, and we aim to create better policies for better lives through best practice work, seeing what our members are doing to address particular issues.

Today, I'm excited to talk to you about artificial intelligence, how it can help to empower consumers, and how it can be of greatest assistance to consumer law regulators.  So I will share some information with you about the OECD's work in the AI space more generally.

So we just touched on it, but the first thing I will talk to you about is using artificial intelligence to detect and deter consumer problems online.  As a previous consumer law investigator, this is a topic very close to my heart.  We are seeing a lot of AI being used by consumer law regulators as a tool to find and address potential breaches of consumer law.  It's particularly useful in investigations, where work that was previously manual and quite slow, like document review, can be completed more quickly.

There is a significant and essential role for investigators but AI tools can support the preliminary assessments of investigations and highlight conduct that might be a breach of consumer law.

Robust principles are needed with any investigation, and the addition of AI to our toolkits doesn't change that.

I think it would be helpful to give you some examples of the great tools that we have seen our members using.

So members use web crawling technology with AI to analyze consumer contracts, looking for unfair contract terms.  The technology searches over the fine print of terms and conditions of things like subscription contracts to check that there are no unfair clauses, such as an inability to cancel a contract.  In many countries, this work was previously undertaken by investigators reading hundreds of clauses in hundreds of contracts, but the AI tool adds efficiency to this, and regulators can have the unfair terms removed from the contract, preventing consumers from being caught in subscription traps.
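To make the flagging step concrete, here is a minimal sketch of the kind of clause classifier described above, assuming a labelled corpus of contract clauses is available.  It is not any agency's actual system; the clauses and labels below are invented for illustration.

```python
# Minimal sketch of unfair-clause flagging (invented training data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: 1 = previously ruled unfair, 0 = acceptable.
clauses = [
    "The provider may change the price at any time without notice.",
    "The consumer may not cancel the subscription for any reason.",
    "Either party may terminate the contract with 30 days' written notice.",
    "Refunds are issued within 14 days of a valid cancellation.",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(clauses, labels)

# Score new fine print collected by a crawler; high scores only queue the
# clause for human review, they are not findings in themselves.
new_clause = ["Cancellation requests are not accepted once the service begins."]
print(model.predict_proba(new_clause)[0][1])  # probability the clause is unfair
```

In practice such a model would be trained on thousands of clauses, and its scores used only to prioritize clauses for investigators.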

That frees up a lot of investigator hours for other things and enables investigators to really focus on the key parts of investigations that do need human decision making and strategic thinking.

Another issue is fake reviews.  You have probably all seen one.  Reviews can play a huge part in our purchasing decisions, but to give you an example, last year Amazon reported 23,000 different social media groups, with millions of followers, that existed purely to facilitate fake reviews.  This is too much for individual consumers to deal with, and for regulators, but machine learning models can analyze data points and flag reviews that are misleading under consumer law.  While regulators are using AI to detect fake reviews, private companies are investing in the space as well.  So this is a good example of how businesses and regulators are working together to enable consumers to make better choices.  At the OECD, we are excited about work that we are doing in the near future with member countries looking at artificial intelligence to detect consumer problems online, which was referred to earlier.

There are some great efficiencies to be found.  The increased efficiency can deter businesses from engaging in this conduct; similarly, if people know they are more likely to be caught, they are less likely to engage in the conduct.  So we are very excited about the future work, and other regulators can benefit as well.

Another space where we are seeing great work from our members is the impact of AI on consumer product safety.  So AI has been used to detect and address product safety issues by regulators.

For example, Korea's consumer injury surveillance system searches for products online that have been the subject of a product safety recall.  Where something has been deemed unsafe and withdrawn from sale, there are cases where, nevertheless, businesses continue to sell those items.  So Korea's consumer injury surveillance system uses AI to search online for text and images to detect cases where those products might still be being sold.  Using AI means that the products can be found more quickly, and consumer injuries are ultimately reduced.
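As an illustration of the text side of such monitoring, here is a minimal sketch of matching scraped listing titles against a recall list.  The products and listings are invented, and a real system like Korea's also matches images, which this sketch does not attempt.

```python
# Minimal sketch of matching marketplace listings against a recall list,
# assuming listing titles have already been scraped (sample data invented).
from difflib import SequenceMatcher

recalled = ["ACME FoldFlat High Chair FF-200", "GloBright USB Heater GB-9"]
listings = [
    "Acme foldflat high chair ff-200 - barely used!",
    "Desk lamp, warm white, new in box",
]

def similarity(a: str, b: str) -> float:
    """Case-insensitive string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Flag listings that closely resemble a recalled product for human follow-up.
for listing in listings:
    for product in recalled:
        if similarity(listing, product) > 0.6:
            print(f"possible recalled item: {listing!r} ~ {product!r}")
```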

So as well as detecting issues like that, Korea is also using AI to assist consumers who might be looking for information or wanting to report an unsafe product.  Korea has an excellent chatbot on their website that consumers can use to report injuries from products, so that if they are harmed by a product, they can report it to the authorities.  The chatbot makes it simple for them to lodge the information, rather than asking them to fill out a detailed form.  It's more efficient, and the coding of the information then enables a more efficient analysis of the reporting.

When it's easy to report an issue, the consumers are more likely to do it and better data enables regulators to better understand the issues and address them as well.

Similarly, AI technology can get problems fixed earlier.  Some more advanced appliances that you might be able to control from your phone are very useful in terms of alerting consumers to product safety issues.  They can be notified that a device might need servicing, that repairs are needed, or that a remote software update might be required.  There have already been instances of smart devices, such as smoke alarms, being remotely repaired and addressed through a software update.

This type of technology in that circumstance can potentially be life saving.

So the increasing prevalence of AI can bring benefits, and the gaming industry has always been pretty quick on the uptake with technology.  They are investing a lot in AI to change the way that people experience games, but as the use of digital tech intensifies, the way people engage online is also changing.  So this is an issue where there are new emerging risks, and they are not particularly well understood in all spaces, particularly in the context of mental health.  And so one of the issues we will look at at the OECD is the impact on consumer health and safety.

It will be focusing on AI‑connected products and immersive reality and the impact on consumers' health, including mental health.  It is imperative to look at current gaps and to identify future actions to better equip authorities to deal with some of the new risks that are posed by AI and the new technology relating to consumer products.  We are aiming to provide practical guidance for industry and regulatory authorities to better understand and address product safety risks, and we will have a real focus on consideration of those risks in safety by design.

So that's a new project to keep an eye out for.

Another space where we have seen AI provide some great benefits is in the digital and green transition.  Many consumers want to make greener choices, but sometimes they don't because of information overload or a lack of trust in labeling, issues that can affect all of us.

Nudges can encourage consumers to make greener choices, steering people in a specific direction and overcoming some of those behavioral issues that might otherwise prevent them from making a green choice.  So AI provides an excellent opportunity to nudge consumers towards greener choices.

In Germany, like many countries, heating bills are often not prepared in an understandable way, and it's difficult to know which company to choose.  Consumers find it hard to pick up errors in their bills.  They end up paying more for energy and services, and incentives to save energy are difficult to identify, so this costs consumers a lot of money.  But it also causes a lot of unnecessary emissions, because it's so difficult for people to make a greener choice that they give up.  I think it's something that we are all guilty of when you look at various contracts for consumption.

The German government has funded a digital tool that uses AI.  Consumers can upload their energy bill, and it's evaluated to see how they can save on heating bills.  The tool is an example of a nudge that can help a consumer make a better energy choice and overcome the barrier of it being too complicated to make that choice.

Similarly, consumers experience information overload with the green badges and schemes on the items you might see in the supermarket, and the other issue is that it can be difficult to compare these, and consumers have no way to verify what is actually happening in a company that puts a green marking on its packaging.

Last year in Australia, they did an online sweep and found that 50% of the claims made in a sample were misleading when it came to green credentials.  So there are some parts of the world that are using regulation to really strictly control the way that such markings and accreditation schemes can occur.  Where that's not occurring, or as a substitute for it, AI can be used to assist consumers to make the green choice by helping to break through the unmanageable amount of information out there.

So we're seeing new apps being developed that scan the barcode of an item at the supermarket and show its ethical rating compared to other products, and where a product scores poorly, suggest an alternative.  This is limited at the moment, but we think that AI will be used to cover more products.

So the OECD is currently undertaking a project on the green transition, looking at the opportunities that digital technologies offer to promote greener consumption patterns.  These projects are also going to include work to understand consumer behaviors and attitudes towards green consumption.

Just taking you through a couple of tools developed by the OECD that can be quite relevant.  One of the things that we are working on at the moment is the OECD AI Incident Monitor.  There's been a big increase in reporting of AI risks and incidents; the rise is astronomical.  So the OECD AI expert group is looking at this, and they are using natural language processing to develop the AI Incident Monitor.  It aims for a common framework that could be compatible with future regulation.

One of the issues is consistency of terminology and understanding.  Part of this project is looking at a global common framework to understand those things, and then the AI Incident Monitor tracks incidents; it's designed to build and inform incident definition and reporting, and particularly to assist regulators with developing AI risk assessments, doing foresight work, and making regulatory choices.

So the Incident Monitor team collected hundreds of news articles manually, which were then used to illustrate trends.  You can see on the slide where the project is up to.  They are using natural language processing with that model, and now they are categorizing the incidents.  It's going to be quite useful for the product safety project that we're doing, looking at potential health and mental health risks from AI and new technology.  We will link the consumer product safety issues to the monitoring tool as well.

I realize that's been fairly quick, but those are the projects we are doing at the moment and the work that our members are doing to assist regulators.  There is also the OECD AI Policy Observatory, which aims to provide policies, data and analysis for trustworthy artificial intelligence.  The Policy Observatory facilitates dialogue and provides multidisciplinary, evidence‑based data and analysis on AI's areas of impact.  The OECD consumer policy website is very large; we have articles from stakeholders and reports from the OECD.  So chances are, if you are working in the AI space, you will find useful information there.  I have included a link to the consumer policy page and the OECD AI Principles, which promote AI that is innovative and trustworthy and respects human rights and democratic values.

So that is a snippet of the information, but we are setting up policies that assist more generally, as well as in specific spaces like protecting consumers.  That's all for me, and thanks for the opportunity to have a chat with you all about our work.

>> PIOTR ADAMCZEWSKI: Thank you.  I share this idea.  It's about enhancing us, but yet at the first stage of the investigation, we are working more on detection of the violations; later on, definitely, we need to prevent them.  So it's helping us a lot, especially in the first phase of our work.

I would like to turn to Angelo and check what are the newest tools in the possession of the European Commission, with the eLab established.  Angelo, the floor is yours.

>> ANGELO GRIECO: Thank you very much I'm just trying to ‑‑ I don't know if you see my screen.  I will try ‑‑ can you see it?

Perfect.

Good afternoon to all of you.  I would like to thank you, Piotr, and all of the colleagues for organizing this and inviting us as the European Commission to join.  I have to do this remotely.  I'm the deputy head of the unit in the Commission which is responsible for the enforcement of consumer legislation, and in this team, we do two main things.  We coordinate enforcement activities of the Member States in cases of union‑wide relevance.  And we build tools to cooperate and investigate, including and especially, I would say, on digital markets.

Now, in this presentation, I will get a little bit more into the specifics of those tools, although there's little time allowed, so I will try to go through them quite rapidly.

As you can see from the slide, I will focus on three main strands of work that we are following.  The first concerns the use of AI‑powered tools to investigate breaches of consumer protection law.  The second is behavioral experiments to test the impact of market practices on consumers.  And third, I will talk about enforcement challenges related to marketplaces and platforms which offer AI services.

So if we look at the eLab: it's an IT service powered by artificial intelligence, for the EU national authorities of the consumer protection network that we coordinate at the Commission.  The need for such a tool, obviously, comes from the inability to monitor digital markets with human intervention alone: there is too much to monitor with little resources, and an increased need for rapid investigations that cover a larger portion of the market sectors.

So this tool is a virtual environment which we launched in 2022, and it can be accessed remotely from anywhere in the EU, which literally means investigators can use these tools sitting in their offices.  And it can be used to do large‑scale reviews of companies and practices.  There's a mix of web crawlers, AI‑powered tools, and analytics that run to conduct those investigations, so they can analyze vast amounts of information to find indicators of specific breaches; the parameters are set per investigation, and the AI can look at different types of elements and different indicators of breaches.  I will give a quick example of that later.

The eLab offers various tools and functionalities.  Let me change the slides.  We have a VPN, so investigators can have a hidden identity, and you can collect and secure evidence as you go and transfer it to your own network, including time certification.  And then there are comprehensive analytic tools to find out about internet domains and companies.  These are open‑source tools, so you can search and combine different types of sources of information across different databases and geographical areas.  They are very useful, for example, to find out who is behind a website or a web shop, but also to flag cybersecurity threats and risks.

Now, if we look at two examples of how we use these tools.  For Black Friday, we have a price reduction tool, which we used in the Black Friday sweep that we did last year: we used the tool to verify whether the discounts presented by online retailers were genuine.  The presentation was misleading for almost 2,000 products and on 43% of the websites we followed.  And to understand whether genuine discounts were offered, we had to monitor 16,000 products for at least a month preceding the Black Friday sales.
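To illustrate the kind of check such a tool performs, here is a minimal sketch of discount verification over crawled price history.  The prices below are invented and this is not the Commission's tool; the sketch only assumes the general EU rule that an announced discount should be measured against the lowest price charged in the 30 days before the reduction.

```python
# Minimal sketch of a discount-verification check, assuming daily price
# observations were collected by a crawler (the data below is invented).
from dataclasses import dataclass

@dataclass
class Observation:
    day: int      # days before the sale started
    price: float  # price observed on that day

# Invented history: the price dipped to 80.0 shortly before the sale.
history = [Observation(day=d, price=80.0 if d <= 4 else 100.0) for d in range(1, 31)]
sale_price = 90.0
claimed_reference = 100.0  # the crossed-out "was" price shown with the discount

lowest_prior = min(o.price for o in history)  # 80.0 here
if sale_price >= lowest_prior or claimed_reference > lowest_prior:
    print(f"flag for review: lowest prior price {lowest_prior}, "
          f"sale price {sale_price}, claimed reference {claimed_reference}")
```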

Then the other one is FreD, the fake review detector.  The machine scrapes and analyzes review text to see whether it's human or computer generated.  And beyond that, in the case of computer‑generated reviews, based on the terminology used, it indicates a likelihood score of whether the review is genuine or fake.
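A toy illustration of this kind of review scoring follows; it is not FreD, and the reviews and labels are invented.  Template‑like, repetitive phrasing is one signal such models can pick up, which character n‑grams capture crudely.

```python
# Toy sketch of fake-review scoring (invented data, not FreD).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

reviews = [
    "Amazing product best product ever five stars buy now!!!",
    "Incredible amazing perfect product best purchase ever!!!",
    "Arrived late and the lid was cracked, but support replaced it quickly.",
    "Decent kettle; a bit loud, but it boils fast and feels sturdy.",
]
labels = [1, 1, 0, 0]  # 1 = suspected fake, 0 = genuine (invented labels)

# Character n-grams crudely capture the repetitive, template-like phrasing
# that generated or incentivised reviews often share.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    MultinomialNB(),
)
model.fit(reviews, labels)

candidate = ["Best product ever amazing quality buy now!!!"]
print(model.predict_proba(candidate)[0][1])  # score that the review is fake
```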

In our tests, FreD showed 85 to 93% accuracy.  So this is just to give you two examples.  Then the other activity that we are running at the moment is the use of behavioral experiments to test the impact of commercial practices on consumers.

And we do this in coordinated enforcement actions of the CPC network against major business players, to test whether the commitments proposed by these companies to remedy specific problems are actually going to solve the problem.  And we also use these behavioral studies to test, more in general, what is the impact of specific commercial practices, at least to prepare the ground for investigations or other types of measures.  The first strand of work we used to test the labeling of commercial content in the videos broadcast by a well‑known platform.

So whether the indication, you know, the sort of qualification of commercial content, is good enough or prominent enough for consumers to understand it, and that's very important, I would say, given the type of platforms we are confronted with every day on the Internet.  And in the second one, we tested, for example, the impact of cookies and choices related to targeted advertising.

What is interesting in these experiments is that they are calibrated based on the needs of each specific case, and we use large sample groups to produce credible, reliable, scientific results, so there is a higher chance to identify statistically significant differences.  And we use AI tools to do this, including analytics and eye‑tracking technology; we did that to test the impact of advertising on children and minors, and we tested them in the lab.  Now, the last thing I wanted to address here rapidly is an area which is drawing a lot of attention, as mentioned by previous speakers, at the enforcement level, not only in the EU but in other jurisdictions: the offering of AI‑based services to consumers, such as AI‑powered language models, which have recently become popular.  These models can generate ‑‑ we know them by now ‑‑ human‑like text in response to a given prompt, and such responses continue to improve based on massive amounts of data from the Internet and what is called reinforcement learning.

They are offered not only standalone but also integrated into other services, like search engines and marketplaces.

While these practices are being investigated in the EU and other jurisdictions, I can't say much about the ongoing investigations.  However, I can flag a few elements where the focus is.  We see that one main area of concern is transparency of the business model.  What is really offered?  What is really the service?  How is it remunerated and how is it financed?  What are the differences between the so‑called free version and the paid‑for version?  And how does this relate to the use of consumers' data for commercial purposes, for example, to send targeted advertising?  So there's that part, and then, of course, we are very focused at the moment on the risks of those models.  We see that there's manipulative content, and one big concern is whether there can be an adequate mitigation of risks.  Think about minors, but not only.  And associated with that are mental health and the possible addiction which has been experienced already.

So the difficulty here is that, on the one hand, we need new entry points to apply consumer legislation to these business models, where the technological part is really still a little bit obscure, you know?  So there's a technological and scientific gap between enforcement and those companies who run these platforms.  And then there's the fact that these elements are often integrated into other business models, and then the crossroads here between protection of the economic interests of consumers, data protection and privacy, and the protection of health and safety.  So this adds quite a bit of complexity to the work of the enforcers, who are, nevertheless, looking into the matter.  Enforcement may not be enough and, as we know, it may need to be complemented by regulatory intervention, and we will see about that.

That's all for me at this stage.

>> PIOTR ADAMCZEWSKI: Thank you, Angelo.  It's a fascinating idea that there will be this possibility to share with the European Commission the software they are preparing.  Not every single consumer protection agency has the possibility to create its own department with a lot of people to manage this closely.  We can also work on projects like we did in the past, and we are engaged in that kind of software development, but of course approaching the Commission and using the already prepared software is great.

So now it's my turn to give some insights about what we actually did in the past and what we are working on right now.  I will talk about ARBUZ, the system which we made for the detection of unfair clauses.

But I will focus on the main aspects, not to take too much time; we need to speed up a little bit.  And then I will share with you some ideas about the ongoing project on dark patterns, and finally about the white paper we are preparing for enforcers.  So going back to 2020, we learned that we can use artificial intelligence in enforcement agencies.

It was the time before ChatGPT, and it was not so clear that natural language processing could do such amazing things.  But we thought we had to try this possibility.  We focused mostly on our efficiency, and we checked three factors to decide in which direction we should go.  So first of all, we considered the databases in our possession; then we also defined strictly our needs, so what is necessary for us to get more efficiency and in which field; and finally, we also kept in view the public interest perspective, what is actually necessary for public opinion, to speed up our work.  And the answer was unfair clauses, because we had a huge database for that: almost 10,000 entries of already established unfair clauses, which we could use for the preparation of a proper dataset to teach the machine how to detect them properly.

It met our need because it's very time consuming: it's quite an easy task for employees, but it's hugely time consuming to read all the standard contract terms and indicate which provisions could be treated as unfair.  And there is a huge public interest, because we have to take care of all the standard contracts, and especially with the fast‑growing eCommerce market, it means that we have to adjust our enforcement actions and work closely with the sector.  There's no other option than automation of these actions.

We had huge material for it, but still we had to use a lot of human work to structure it.  It's not so easy: you need to put it in a special format, and you need to choose one and prepare it in a special way to make the computer understand it.  And then the second problem we faced at that time was the choice of the vendor.  We were not able to hire 50 experts in data science, so we decided to work with outsourcing, and choosing an appropriate vendor was challenging for us.  We used a special type of public tender, first letting the information out to the market, showing how the problem could be solved, and at the same time asking the market to prepare proofs of concept which we could compare in a very objective manner.  And only as a result of this contest did we decide on the purchase of the tool.
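As a small illustration of that structuring step, here is a sketch of normalising one register entry into a machine‑readable training record.  The field names and the sample entry are invented for illustration, not ARBUZ's actual format.

```python
# Minimal sketch of structuring a register of unfair clauses into training
# data (field names and the sample entry are invented).
import json

register_entry = {
    "register_id": "XVII AmC 1234/20",  # invented case reference
    "clause_text": "The seller may  cancel the order at any time without refund.",
    "ruling": "unfair",
}

def to_training_record(entry: dict) -> dict:
    """Normalise one register entry into a (text, label) training record."""
    return {
        "text": " ".join(entry["clause_text"].split()),  # collapse stray whitespace
        "label": 1 if entry["ruling"] == "unfair" else 0,
    }

with open("clauses.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(to_training_record(register_entry), ensure_ascii=False) + "\n")
```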

And finally, the implementation of the software in our institution.  It's difficult for larger organisations to empower themselves with new tools and to get people who have already established a way of working on a specific problem to do it differently, to do it more efficiently in the future; at some point, people need to find a good reason for accepting the change.  So taking into consideration all the challenges, I have to say that we are already fully operating the system, and we have the first good results.  But still, it's detection, so it's flagging.  So definitely, it's helping us with the first phase, and then after flagging, we have to do a proper investigation.  That's what we cannot change right now.

A few words about our current project.  This is, again, the problem of detection of violations which are quite widespread right now.  There are some studies which show that a lot of websites use dark patterns, and we are trying to prepare a tool which will allow us to work much faster.  So not going from one website to another looking for violations, but being more proactive: not just waiting for signals from harmed consumers, but being able to proactively discover the violations.  We have to create the database; we don't have an already existing database like in the first project.

And so now we are working on ideas for how we can do that, having in mind the possible constructions of websites; the database could also be constituted from the outcomes of the market research which we are going to carry out.  All of that shall allow us to build a specific group of factors which can help figure out what is deceiving and what is not, and fuel the proper action in that manner.

Last but not least, we are working on the preparation of a white paper for agencies with the same status as ours.  This is our second project of this kind; we have already faced some problems and we were able to solve them.  And we have some ideas about transparency and about the way we can safely introduce and deploy software for the work of enforcers.  We would like to share those ideas with colleagues from other jurisdictions, and we would like to make it public next year.

So going further, we also know that the Australian Competition and Consumer Commission is working right now on different projects.  Sally, if you can hear us, could you share with us some more insights about what is going on right now at the ACCC?

>> SALLY FOSKETT: Thank you.  I will just share my slides.

I'm not used to using Zoom, I'm afraid.  Is someone able to talk me through how to share my screen?  Sorry.

>> PIOTR ADAMCZEWSKI: I think there is a share button at the bottom.

>> SALLY FOSKETT: Okay.  Thank you.  I do see it.  Thank you.

I will present like this.  I think that's readable to everyone.

Great.  Thank you so much for having me attend today.  Thanks to the IGF for hosting this meeting and for arranging this.  I'm sorry that I'm not able to attend in Kyoto.  I'm Sally, and I'm with the ACCC.  I will look at AI from three different angles: first, using AI to detect consumer protection issues; second, understanding AI in consumer protection cases; and third, perhaps a little more tenuously, enabling the development of consumer‑centered AI.  So first, using AI to detect consumer protection issues.  At the ACCC, like many other regulators, we have several projects on foot that are looking at methods of proactive detection, and these fall into two categories.  The first is streamlined web form processing.  Every year we receive hundreds of thousands of complaints from consumers about issues they have experienced when they buy products and services.  Many of these are submitted through the ACCC's website in a form that has a large free‑text field where the user types out the narrative.  The issue with this approach is that our analysis of the form can be quite manual.

One technique we are using is natural language processing to identify parts of speech which likely refer to a particular product, like an app on your phone or a water bottle, and to companies as well.  Another technique that we have experimented with is classification, that is, to classify complaints by industry, like agriculture or health, or by the type of issue that they relate to, so the type of consumer protection issue.  And then we have more recently been experimenting with predictive analysis to determine how relevant a complaint is likely to be to our enforcement and compliance priorities.
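To give a flavour of that kind of triage, here is a toy sketch that scores a free‑text complaint against priority areas using keyword rules.  The categories, keywords and the sample complaint are invented, and a production system would rely on trained models rather than hand‑written lists.

```python
# Toy sketch of complaint triage against enforcement priorities
# (all categories, keywords and the complaint text are invented).
import re

PRIORITY_KEYWORDS = {
    "greenwashing": ["eco-friendly", "carbon neutral", "sustainable"],
    "product_safety": ["injury", "burn", "overheat", "recall"],
    "subscription_trap": ["cancel", "auto-renew", "free trial"],
}

def triage(complaint: str) -> dict:
    """Score a free-text complaint against enforcement priority areas."""
    text = complaint.lower()
    scores = {
        area: sum(bool(re.search(rf"\b{re.escape(kw)}", text)) for kw in keywords)
        for area, keywords in PRIORITY_KEYWORDS.items()
    }
    return {"scores": scores, "top_area": max(scores, key=scores.get)}

print(triage("The kettle was sold as carbon neutral but it overheats and "
             "I could not cancel my accessories subscription."))
```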

I have listed some examples of our priorities from this year, which include environmental and sustainability claims that might be inaccurate, consumer issues in global and domestic supply chains, and product safety issues.  Now, these models are not at the level of reliability that we would be comfortable with before deploying them into production, but it is something that we are actively working on, and it shows a lot of promise.

The second category is not analyzing the data that we already have, but collecting and analyzing new sources of information.  We have heard a lot of examples of this, such as scraping retail sites to identify particular dark patterns.  As others have pointed out, dark patterns are manipulative design practices that steer consumers into purchasing decisions that they might not otherwise have made.  And sometimes these practices are such that we consider them to be a breach of the consumer law.

Examples include 'was/now' pricing and scarcity claims that are untrue.  We have also looked at subscription traps and, to a lesser extent, other consumer issues.

So if a claim like 'only one left in stock' is hard‑coded into the HTML behind the page, we know we have a problem.

So a lot of this analysis is actually based on regular expressions, so basically looking for strings of text, but we do have an AI component that we use to navigate retail sites as part of the scrapes and to identify which pages are likely to be relevant.
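As a small illustration, here is a minimal sketch of a regular‑expression check for a hard‑coded scarcity claim.  The HTML snippet is invented, and a real scrape would fetch and render live pages.

```python
# Minimal sketch of spotting a hard-coded scarcity claim in page HTML
# (the snippet below is invented for illustration).
import re

html = """
<div class="stock-banner">Only 1 left in stock!</div>
<script>renderCountdown("Sale ends in 10:00");</script>
"""

SCARCITY = re.compile(r"only\s+\d+\s+left\s+in\s+stock", re.IGNORECASE)

# A scarcity claim sitting in static HTML (rather than being rendered from
# live inventory data) shows the same text to every visitor, whatever the
# true stock level, so it is a candidate for closer review.
for match in SCARCITY.finditer(html):
    print("possible hard-coded scarcity claim:", match.group(0))
```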

Turning to the second lens for looking at this question of empowering consumers with AI, I thought it might be useful to touch on some of our cases where we have obtained and examined algorithms used by suppliers in their interactions with consumers.  This is a really important thing to be able to do from an enforcement perspective, as algorithms are increasingly used ‑‑ and here I'm using 'algorithms' instead of 'AI'; as Christine mentioned, AI is a bit of a misnomer.  We must be able to understand and explain what they are doing.

So we have had a few cases and market inquiries where we needed to do this, and I thought I would explain a little bit more about what our approach is.  And I'll speed up.  When we need to look at how an algorithm operates, we look at three types of information, using our statutory information‑gathering powers.  The first is the source code.  We've had a few cases where we have obtained source code from firms and worked through it line by line to determine how it operates.  It's a very labor‑intensive process, but it's proven critical.  The second type of information that we obtain is input/output data, which is useful because it tells us how the algorithm operated in practice in relation to actual consumers.  It helps us determine not just whether conduct occurred, but how much and how many.  And the third is business documentation, emails and reports, et cetera.  It tells us what the firm was trying to achieve.

Often when firms create their algorithms, they will run experiments on their consumers and customer base, and obtaining documentation about those experiments can shed light on what was intended to be achieved.

The last point that I will make is that we use predictive coding for document review.  So we use machine learning to help look at what we obtain in our investigations.

And very briefly, I will touch on the topic that is a little more future‑focused, which is the possible emergence of consumer‑centric AI.  This is about empowering consumers in the marketplace, as opposed to consumer protection regulators.  The ACCC has a role in implementing the Consumer Data Right, which is an economy‑wide reform in Australia that gives consumers more control over their data.  It enables them to access and share their data with accredited third parties to identify offers that might suit their needs.

Currently, we are looking at including action initiation, which will enable accredited parties to act on consumers' behalf, and with that in place, in the future we might see the emergence of more consumer‑centric AI.  It can help consumers to navigate information asymmetries.  I will stop right there.

>> PIOTR ADAMCZEWSKI: So it looks like a lot is happening in this sphere, but still there is a report by the Tony Blair Institute which indicates there should be some organisation and some new funding for the technological change, especially in the UK.  Kevin, can you give us some information?

>> KEVIN LUCA ZANDERMANN:  We have two parts: one is our work on AI for practical public services.  We do believe that AI has an enormous potential to transform the way we deliver public services, whether personalized healthcare or personalized education, and in many ways to create a new paradigm that is tech enabled and institutional, to provide a new way to think about and then actually offer public services.  So that's the first component.  And we have carried out work in consumer protection: we published last year an important report with consumer protection experts, one that Christine knows very well.  So these are the two main components for this panel that I have tried to join.  I thought it would be useful to offer an overview of the baseline scenario, considering I'm not a regulator.

It's useful to assess where we are at now.  It seems clear that the main challenge is that we don't know where we are.  And that translates to low enforcement capacity, very low cross‑border coordination and, finally, the fact that action is reactive and slow.  And on the disruptive and incumbent side, there is the fact that, you know, incumbents can become so dominant that they offer a very selective interpretation, prioritizing customer service excellence, for instance, over other forms of safeguards.

Martyna, if you could move to the next slide.  No?  Okay.

So I can continue.  What we looked at at the Institute is the very important review that Stanford's center for legal informatics has carried out.  It approaches the level of coverage that the OECD would have in its very comprehensive global surveys, and this comprehensive review deals with the adoption of computational antitrust; 26 countries responded to the survey.

And out of this survey, I selected two examples which I think are quite telling about how consumer protection authorities are embracing AI.  The first one is Finland.  The Finnish competition and consumer authority have carried out an interesting exercise using AI as part of the screening process.  Instead of looking at past data to build tools for the future, they started with ex‑post backtesting of AI.  They looked at previous cases, in particular some that dealt with two substantial Nordic portals, and they compared the baseline scenario, which was the real one where they didn't have any AI, against the scenario where they actually could have used AI, and assessed the two different performances.  What appeared quite clearly is that, had they utilized a mix of supervised machine learning and several distribution regression tests, they could have found out about those categories in a much quicker way.

Therefore, this has enabled them to basically build new investigative tools.  This could be important: you have an authority that has quite an effective ex‑officio tool to detect these patterns.  And then the second example is a little bit less sophisticated; Christine will know it.  In the UK, there is no need to notify the regulator about an M&A transaction, so it used to be that the CMA had to manually review news sources to identify emerging mergers, and a tremendous amount of resources went into that.  So the unit has recently developed a tool that actually tracks M&A activities in an automatic way, using a series of techniques that are very similar to what the other panelists described.  It is looking at the low‑hanging fruit of AI as used by authorities, particularly in jurisdictions such as the UK, where the notification requirements are less onerous than in other legislation, like, for example, in the EU.

I thought it would be nice to close ‑‑ Martyna, if you could move to the next slide ‑‑ with a series of policy questions that Angelo touched on briefly.  If you look at the Finnish model, we know from its application in practice that AI is not necessarily as good at detecting causality.  So it can be dangerous to start from an AI‑detected pattern and draw conclusions without human oversight.

In the Finnish case, because the Finnish authority were very much aware of it, as part of their assessment they have a second stage where if, let's say, the AI tool tells them that there are three operators on a platform, they would basically have to try to find any other possible explanation, alternative to that.  And this is closely related, in the EU, to Article 14 of the AI Act, which is one of the most important articles and deals with human oversight.  So one of the most important challenges is where we draw this line: where does the AI‑empowered step begin and end, when does the human oversight begin, and in what modes?  Finally, the last question is the role that large language models can actually play.  I found it interesting that in the survey published by Stanford, out of 26 authorities only one explicitly mentioned an LLM‑powered tool that they are using.  Now, I would imagine that this is not the case; I'm sure plenty of other consumer authorities have been using LLMs.  It seems that regulators by default are risk‑averse, and these large language models do pose quite important risks, particularly in terms of privacy.  One of the competition authorities ran a trial dealing with whistleblowing, and when you are building a tool like that, the privacy concerns are very important.

Does the generative capacity of these models have anything to offer to consumer regulation, or are the low‑hanging fruit more suited to regulatory review?

>> PIOTR ADAMCZEWSKI: Thank you very much.  We are working to set the line properly, where AI is working and where we are providing the oversight.

We are coming close to the end of the session, so very shortly, I would like to ask each of the panelists a question about the future.  One minute each.  Christine, can we start with you?

>> CHRISTINE RIEFA: So, one minute.  I will use three keywords.  The first is a lot of homework on classification and normative work: are we all talking about the same thing?  What really is AI?  And try to get the consumer lawyers and the users to actually understand what the technologists are really talking about.

Collaboration is the next.  I think there's real urgency, and I really welcome what we heard about sharing tools, and galvanizing resources, because projects in common would be a better use of money and able to yield better results.  And my last keyword would be to be proactive and completely transform the way consumer law is enforced: if we can move from the stage we're at, where we use AI to simply detect, to a place where we can actually prevent the harm being done to consumers, then that would be a fantastic advancement for the protection of consumers around the world.

>> PIOTR ADAMCZEWSKI: Thank you.  Melanie?

>> MELANIE MacNEIL: Thanks, Christine.  Yes, I think businesses are always going to move quickly.  Where there's a chance for money to be made, they will do it, and they are unrestricted in many ways compared to regulators, who can be too slow.  We need to share our learnings so we can all move quickly to address the issues, and we need a good future focus, really recognizing that we can't make regulations at anywhere near the pace that technology is advancing.  And I think honesty and collaboration are key.  We need to not be afraid to share the things that didn't work and explain why they didn't work, so that other people can learn from our mistakes, as well as our successes.

>> PIOTR ADAMCZEWSKI: Thank you, Melanie.  Angelo, do you want to add something?

>> ANGELO GRIECO: Yes.  For us, it's basically our priority for the next year to increase the use of AI in investigations.  So we would like to do, first of all, more monitoring activities like sweeps.  We would like to make this tool able to sweep and monitor images, videos and sounds, so basically to really be fit for what we need to monitor in the digital reality, and then to cover more types of infringement.  And as we mentioned, we would like to use it for a number of breaches, for example, the lack of disclosure of material connections by influencers and traders.

And what we would like to do also is to improve, and that's what you also mentioned, Piotr, the case handling tool, to make it even easier for investigators to use the evidence at the national level; the rules concerning the gathering of evidence are very jurisdiction‑specific, so maybe it's enough in one country but not in another.  We would like the tool to help gather as much of the evidence as possible in the format which is required.  On behavioral experiments, we are planning to do seven more studies until the end of next year, and then one every ten weeks.  Thank you.

>> PIOTR ADAMCZEWSKI: And Sally?

>> SALLY FOSKETT: Yes, thanks.  So a priority for us in the near future is going back to basics and thinking about the sources of data that we have available.  I have been giving thought to trying to make better use of data collected by other governments and data from other third parties, hospitals even, for instance, and also data that we can collect from consumers themselves, for example, making better use of social media.

>> PIOTR ADAMCZEWSKI: Thank you, Sally.  Last word from Kevin.

>> KEVIN ZANDERMANN:  I would recommend that regulators actually switch perspective on their interactivity with AI.  So answer and address the questions about human oversight: where does the automation start and end, and where does the human oversight start?  Basically, look at past cases that they know very well, and utilize tools such as LLMs, but test the limitations of these models.  And I think the best way to do it is a continuous process of engaging with content, with instances that they already know very well, and you perhaps may find that AI detected patterns that you did not notice, or perhaps you find that some patterns that AI detected were not necessarily particularly consequential for the enforcement outcomes.

So I know the regulators are always understaffed and have to deal with limited resources, but I think dedicating some time to these types of retrospective activities to develop ex‑officio tools can be extremely useful, especially in areas like the EU, where we have to deal with a very significant piece of legislation on AI, whose certain details are not fully clear.  Inevitably, this process will have to happen to understand what is the right mode to operate.

>> PIOTR ADAMCZEWSKI: Thank you very much.  And yes, definitely, I have made my notes, and we will have a lot of work to do in the near future: a lot of things to classify, a lot of meetings.  And I strongly believe in the work which we are doing.

And now, I would like to close the panel, thank all the panelists for a great discussion and, of course, thank the organizers for enabling us to have this discussion, and for letting us run a little bit late with the last session.  Thank you very much.