IGF 2023 – Day 1 – Launch / Award Event #169 Design Beyond Deception: A Manual for Design Practitioners

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> TITIKSHA VASHIST: Good morning, everyone, and welcome to the launch of Design Beyond Deception by the Pranava Institute. I know it's early in the morning, not just for people in this room, but also for people joining us online from different places. But nonetheless, in the interest of time, we'd like to go ahead and begin.

So, first of all, I extend a very warm welcome on behalf of the Pranava Institute; the three of us are present here on the stage. And we are very glad to launch this project at the UN Internet Governance Forum 2023 in Kyoto. Next slide.

So, a big hello from the Pranava Institute to all of you. The Pranava Institute, based in India, works at the intersection of emerging technology, policy, and their impact on society. Our research essentially focuses on issues such as trust and safety, deceptive design, and youth and media, and we've done projects, academic as well as multimedia, in this space.

So, getting right into it, what is deceptive design and why does it matter, right? I'll pick up a very simple definition, which was actually put forward by the Norwegian Consumer Council. Plainly put, dark patterns are often carefully designed to alter users' decision‑making or trick users into actions they did not intend to take. Now, deceptive design is something we've all encountered on the web, right? These patterns have found their way into a plethora of online experiences, from eCommerce apps to social media, from fintech services to education and so forth.

Now, these design choices, which may seem very innocent and innocuous on the outside, have multi‑sided harms baked into them. By tricking, manipulating, misdirecting, or hiding information from users, these patterns harm not just the single end user of the Internet, but also digital ecosystems at large, and those are findings which resulted from the work that we did on this issue.

This project, called Design Beyond Deception, sought to understand the harmful impacts of deceptive design, specifically in understudied contexts, because a lot of the academic work so far on deceptive design was limited to the United States and the European Union, and we wanted to look at what it looks like in other countries, where the nature of digitalization itself is different.

We also wanted to see how we can replace such design practices with design practices that embody values, right? And these are values that consumers, companies, civil society, and governments want reflected online. That's precisely why our project also had a very strong practice or application component and not just a theoretical one.

Now, moving on to the harms caused by these deceptive design patterns, right? There are two ways in which we categorize these harms. One is personal consumer detriment, which is focused on harms which you and I as individuals can identify we have undergone. These include privacy harms, financial loss ‑‑ a lot of financial loss has been documented in countries such as India ‑‑ psychological detriment, and resource loss. But if we look deeply into the problem of deceptive design, we realize there are also structural consumer detriments as well as harms to the larger digital economy, including loss of trust.

So, a lot of research showed that when websites and apps used forced registration or price comparison prevention and so on, it weakens or distorts competition in the digital market. What that essentially means is that the use of these deceptive patterns amounts to unfair trade practice in the digital economy. And this currently does not find any anchoring in our laws, but that's precisely why this topic has to be discussed at a platform such as this.

Next, I want to talk about why we are discussing deceptive design, which seems like more of a designer‑centered issue, at the UN IGF. And the simple reason is, we're increasingly seeing regulators worldwide investigating deceptive practices in their specific contexts. These include the Federal Trade Commission in the United States and the European Commission, which has been looking at this issue for a while, trying to understand how it can create stronger European consumer protection law; deceptive design is also addressed in the DSA. And the consumer councils in countries such as the Netherlands, Norway, Australia, and very recently India, have also issued guidelines and working papers and have been trying to push policy on deceptive design.

Finally, data protection authorities in several jurisdictions have been at the forefront of talking about the privacy and data harms which result from deceptive practices. Now, regulators are investigating the consumer harms, privacy and data harms, and competition harms which result from these patterns, and this is precisely where I want to move on to a little bit about what our project was about.

So, the Design Beyond Deception project was an 18‑month‑long project which sought to bridge the gap between theory and practice. We held more than four large‑group focused consultations, engaged with over 50 global experts in various domains, and held 20‑plus in‑depth interviews on this issue. We also produced a Research Series, which is also being launched today, with authors from across the world who focused on understudied areas. And this research was very generously supported by the Notre Dame-IBM Tech Ethics Lab in the United States.

Now, quickly going over the project's process. We started out with a review of the academic literature, given the multidisciplinary and cross‑sectoral nature of the issue itself. Second, to tap into in‑depth expertise from multiple stakeholders across fields of theory and practice, we did scoping interviews with experts, which helped give shape to the rest of the project.

Third, we thought that creating a new body of work which contextualizes deceptive design specifically would help deepen the conversation on the issue significantly. That led to focus groups and workshops with stakeholders, which led us to our final goal: the creation of a manual for design practitioners, who otherwise would not have, as part of their curriculum or training as designers, an understanding of deceptive practices and how they may harm their end users.

So, the stakeholders we engaged with for this particular project were academics and researchers, design practitioners, start‑ups, civil society and policy folk, and of course, industry, which included a whole range of people from top to bottom who are involved in the different decision‑making processes that very much impact design decisions in a company.

Our manual's themes span what deceptive design means for a designer, not just for a researcher; we also look at rethinking the user, designing with values, and design for privacy; we touch upon culturally responsible design; and finally, we look at how regulation meets design, wherein we also prompt the design practitioner to look at designing our collective future from a different standpoint. And since this manual has been made for practitioners, it is full of frameworks, activities, and teamwork exercises, things that a product team can sit down together and do on their own, right?

Very quickly, talking about the Research Series, which we are also launching today. It focused essentially on understudied areas and understudied harms, including how, for example, crafting a definition for deceptive design is harder than it may seem. And those of you who are lawyers in this room will completely understand why this is a huge challenge.

We also talk about how identifying anticompetitive harms in deceptive design discourse is crucial, and how deceptive design plays out in voice interfaces, among further research pieces contributed by people from across the world.

So, without further ado, I would request you to explore this project online or pick up a copy of the manual and Research Series from the table in the first row to peruse. And without taking much more time, I would now like to quickly invite the speakers who have graciously joined us online.

We have two speakers, Chandni Gupta and Maitreya Shah, who have joined us online, and I hope they can hear me. We also have videos from two speakers who, because of time zone issues, could not join us online but have been very generous.

So, to quickly introduce the speakers, Chandni is currently the Deputy CEO and Digital Policy Director at the Consumer Policy Research Centre, which is Australia's only dedicated consumer policy think tank. She has previously worked at the Australian Competition and Consumer Commission, the OECD, and the United Nations. She has over 15 years of experience in consumer policy, domestically as well as internationally, and her research focuses on exploring the consumer shift from the analog towards the digital economy. Her work was extremely crucial in the sense that it was the first study in Australia which essentially led to policy change and consumer action on deceptive design.

Maitreya Shah is a blind researcher and lawyer working on the ethics and governance of emerging technologies and disability rights. He was most recently at Regulatory Genome, a spin‑out of the University of Cambridge, and was previously a LAMP (Legislative Assistants to Members of Parliament) Fellow in India. He has worked extensively in the areas of digital accessibility, AI governance, regulatory technologies, and disability law. Currently, he is a Fellow at the Berkman Klein Center for Internet and Society at Harvard University, where he will be examining AI fairness frameworks from the standpoint of disability justice.

We also have two recordings, from Caroline Sinders and Professor Cristiana Santos. Caroline Sinders is an award‑winning critical designer, researcher, and artist, the founder of a human rights and design lab called Convocation Research and Design, and is currently at the Information Commissioner's Office, which is the UK's data protection and privacy regulator.

Finally, Professor Cristiana Santos is an Assistant Professor in Privacy and Data Protection Law at Utrecht University in the Netherlands. She is also an expert with the Data Protection Unit of the Council of Europe and serves in its pool of experts, among her many varied accomplishments.

Without further ado, I would request Dhanyashri to play the video by Caroline Sinders, who will touch upon deceptive design from a design practitioner's standpoint. 

>> CAROLINE SINDERS: I'm a researcher and postdoctoral fellow with the Information Commissioner's Office in the United Kingdom, the data protection and privacy regulator. I run a human rights lab called Convocation Research and Design. I really wish I could be there in person. I'm so sorry I can't be, so I've made this recording instead. Thank you so much to the Pranava Institute for inviting me to be on this panel.

I'm one of the contributors to the recent toolkit that's out on deceptive design patterns, and I'm excited to present to you today and talk a little bit about why design and interdisciplinary thinking are so important when it comes to creating regulation, investigations, and other ways to help curb and mitigate the harms of deceptive design patterns. I've also created a very small presentation that I'm excited to show to all of you.

Harmful design patterns are everywhere. They're very prolific in the modern web and they're universally found. I have not, in all of my extensive research, ever come across a country or region that does not have harmful design patterns. They are, in fact, a global phenomenon, and a global menace is the way to think about it. My article for the Pranava Institute's toolkit focuses on what we do with emergent spaces, like the metaverse or IoT or voice activation, when design patterns are not yet standardized for users, meaning users have not engaged with, say, voice activation enough to understand where all of the design patterns are within that space.

Or in the case of something like the metaverse, where there are not a lot of people using it and it's a really emergent space, what are the healthy design patterns within it? We haven't really come to that yet. A lot of current design patterns exist because we've lived in this kind of flattened, modern web for quite a few years, so there have been many years of research to figure out what healthy or trustworthy or pro‑user design could look like. And it's in the subversion of that where harmful design patterns exist. This research is important because it will impact how users create safety; it will impact forms of regulation. And this kind of work really does require an interdisciplinary lens.

And so, what does policy need to help combat harmful design patterns? Again, it's this understanding that design is an expertise and, as I was saying earlier, an integral part of the web. What we need is to broaden our idea of what, let's say, a researcher looks like or what knowledge looks like. One of the things that's been exciting in the many years I've been researching harmful design patterns is the ability to work with all different kinds of legal experts who recognise that design is an expertise. What this means, when we're investigating things like harmful design patterns, is actually having a knowledge of what design patterns are, what different kinds of standardized design patterns exist, and how to run different kinds of evaluations, like a usability evaluation or an accessibility evaluation. There are many different ways to do them, but there are agreed‑upon tests, or a series of different kinds of tests people can conduct, and these are the ways in which you can look at, let's say, the health of a product, or how well or not well that product is designed.

Often, when investigating harmful design patterns, what you need to find or help surface is where the confusion or manipulation or exploitation lies. So, where is the harmful design pattern actually subverting the expected design pattern, the one the user thinks they're engaging with? Because that's what's being subverted, unintentionally or intentionally. This is why having a background in UX design is really, really important, to be able to recognise that.

A study by the European Data Protection Board, testing with a few thousand users, found that those who were less susceptible to harmful design patterns were the ones who had heard of UX design or knew what UX design was. And this is really important to highlight. It means we're creating an unequal and inequitable web if the only way for people to avoid harmful design patterns is to have a design background.

So, conversely, I think that to help investigate more, this kind of interdisciplinary knowledge is needed: understanding how products are made, how they're tested, and being able to do different kinds of analyses, let's say on the interface itself.

Inconsistent design ‑‑ and we see this a lot in harmful design patterns ‑‑ can confuse users. It can overwhelm them if there are too many features or too many choices, let's say. Misunderstanding a core audience can also lead to poor or unhelpful design decisions. We'll see this in an example I'm going to show.

So, inconsistent design can be a product changing its name, or choices not being illustrated the same way, or a name that doesn't match up with what the user thinks they're doing. All of these things can confuse users. It also means that if you call something by a term that's too technical, a user might not understand what it is.

Thank you so much for having me here. I'm so sorry that this is a short talk. But one thing I wanted to really emphasize, again, is that design can be an equalizing action that distills code and policy into understandable interfaces. What we need is more research ‑‑ more collaborative and interdisciplinary research between policymakers, regulators, policy analysts, and designers.

>> TITIKSHA VASHIST: Thanks, Caroline. Now moving on to Chandni, who's joined us online. I'll ask you to put up the slides. Thank you for being here.

>> CHANDNI GUPTA: Thank you so much. I just want to confirm that you can hear me and you can see my slides?

>> TITIKSHA VASHIST: Yes. 

>> CHANDNI GUPTA: Excellent. So, thank you so much for the introduction earlier, and thank you so much for having me. Before I begin, I have to say congratulations to the Pranava Institute, who have created such a practical tool, which I'm sure, and I hope, will become a valuable resource for the UX community from here on.

I'm delighted to share with you today some of the insights from our research. One of the things that we at the Consumer Policy Research Centre do is look at evidence‑based research that can bring about systemic change, and this was one of the projects we have been working on for a number of months now.

So, it was about 18 months ago that we started our journey of looking at deceptive and manipulative designs. And as part of our research, what we really wanted to understand were two things: What are the common deceptive patterns that Australians come across most frequently, and what's the impact on consumers? I have to say how important it is to be able to understand that impact, and what we really wanted to do is quantify that harm.

Dark patterns today are so prominent across the websites and apps we use every day. They're used to influence our decisions, our choices, our experiences. And is it in our best interests? Often not. Is it illegal? Largely not. So, in case you're wondering where dark patterns exist, as Caroline said as well, they are so prominent. They are everywhere. As part of our research, we asked a nationally representative sample of 2,000 Australians in our survey to list the names of businesses they could recall using deceptive designs. Businesses from almost 50 different sectors were identified.

I mentioned before that many of the dark patterns that exist today aren't illegal. Currently in Australia, we can look through the lens of misleading and deceptive conduct, or, in privacy terms, the Privacy Act, but the law currently offers a very narrow lens for how regulators can act. But are consumers experiencing harm? Well, the short answer is yes. Our research revealed that 83% of Australians had experienced one or more negative consequences as a result of dark patterns being used on websites and apps. Yet eight out of the ten dark patterns we looked at could be implemented here in Australia without any consequence to businesses.

Consumers in our survey reported their emotional well‑being being compromised, experiencing financial loss, and feeling a real loss of control over their personal information ‑‑ anything from feeling pressured into sharing more data than they needed to, to accidentally making a purchase. In fact, in the qualitative part of our research, the frustration really came through, and it came down to three elements.

One, there's a lack of meaningful choice. Sometimes accepting the business's preferred choice is the only way to access a product or service. For example, in our study, we saw a fitness centre that didn't let you see its timetable until you created a profile on its app.

Two, it's the pervasive amount of pressure that's put on consumers, especially once their personal details have been shared and suddenly they're subjected to hyper‑personalized content or continuous direct mail.

And three, and finally, there's a sense of frustration that businesses aren't being held accountable for any of these practices. 

When it comes to younger consumers, the impact is only compounded. Consumers aged between 18 and 28 were more likely to experience both financial and data harms. For example, one in three spent more than they intended, 65% more than the national average. This demographic in Australia often has less disposable income, so the impact of harms is likely to be felt more as well.

On the flip side, there's also a cost for businesses. Almost one in three of the consumers we surveyed stopped using the website altogether. Almost one in six felt their trust in the organization had been undermined. And more than one in four thought negatively about the organization. So, while in the short term dark patterns may lead to financial and data gains, in the long run they will deteriorate consumer trust and loyalty.

So, our research has highlighted that everyone in the digital ecosystem has a role to play, and this was mentioned earlier as well. There's definitely a role for governments and regulators, and we've been really pleased to see some of the changes that are coming about, such as the government here currently considering introducing an unfair trading prohibition, with dark patterns included as part of that legislation, and the Privacy Act, which dates from the 1980s, finally being reviewed. It not only predates dark patterns; it predates the Internet.

However, it's actually businesses who are in the best position right now to make changes today and lead by example, whether it's auditing their online presence or testing with consumers' best interests in mind. Even small businesses can be really mindful about the off‑the‑shelf eCommerce products they're choosing and which features they're turning on and off.

Now, what I've heard from UX designers who have reached out to me during conferences and events is that it's often not in their hands, and much of this is a business decision that happens in another part of the company. But one of the things they can do is share this type of research ‑‑ resources such as the handbook and other things happening in this space ‑‑ with their colleagues, to show the effect these patterns can have, not only on consumers, but also on their business. I'll end by saying we've all actually got a role to play in ensuring a fair, safe, and inclusive digital economy for consumers. Thank you so much.

>> TITIKSHA VASHIST: Thank you so much, Chandni, for that presentation. And I would very much like to point out that Chandni's research, and the research done at her institute, in fact very recently helped push the case for making unsubscribing easier on eCommerce platforms like Amazon, and that's a big move, right, coming from regulators. So, more power to you, and thank you so much for joining us today.

I would now like to request Dhanyashri to play a recorded video we have from Professor Cristiana Santos, who will talk about deceptive design from a legal standpoint and share some of her work. 

(Speaker muted)

(No audio in Zoom)

>> CRISTIANA SANTOS: -- the first time in decisions. We suggest that, along with this DPA, other authorities name dark patterns as they appear in their decisions. This way, we believe that organizations can factor the risk of sanctions into their business calculations, and policymakers can be aware of the true extent of these practices.

And naming dark patterns is now more important than ever, especially since the DSA and the DMA codified dark patterns explicitly, so it's now a legal term. We also found that dark patterns are used both by big tech and by small and public organizations. Most decisions refer to the user interface, to the user experience or user journey, and to information‑based practices.

Finally, we understood that harms caused by dark patterns are not yet named in decisions. Let's look at the patterns we found in these decisions.

So, in this table, you can see the data protection cases categorized according to the dark pattern types the practices relate to. The majority of dark patterns referred to are obstruction practices, related to the difficulty of refusing and withdrawing consent ‑‑ more than 30 decisions. These are followed by forced practices, where trackers are loaded or stored before consent is asked ‑‑ more than 25 decisions.

Finally ‑‑

(No audio in Zoom)

‑‑ policy to use the service at the same time, for example. So, we understand that enforcement cases are the way toward general deterrence of dark patterns, and we showcase these dark pattern decisions on this website, deceptive.design/cases. And this website is being updated daily with new decisions. So, let's talk about the harms caused by dark patterns.

There is a growing body of evidence from human‑computer interaction and computer science studies that dark patterns might elicit or lead to potential or actual harm, and there are indeed harms related to dark patterns in privacy. Several studies focused on these interactions and showed several harms caused by dark patterns: labor and cognitive harms; loss of control, privacy concerns, and fatigue; negative emotional responses; regret over privacy choices. And all of these provide evidence of the severity of these harms.

For example, scholarly works found that preselected purposes and options for processing data, or even an "accept all purposes" option at the first layer of a consent banner, can use a user's sensitive data, depending on the website in question, and can share this personal data by default with hundreds of third‑party advertisers. And this might provide evidence of the potential severity and impact of dark patterns harms.
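To make the mechanism Professor Santos describes concrete, here is a minimal hypothetical sketch of a first‑layer consent banner with preselected purposes. The purpose labels, field names, and structure are invented for illustration; they are not taken from any real banner or from the decisions she discusses.

```typescript
// Hypothetical sketch of the "preselected purposes" dark pattern at the
// first layer of a consent banner. One tap on "Accept all" grants every
// purpose, including sharing with third-party advertisers, by default.
interface ConsentPurpose {
  id: string;
  label: string;
  preselected: boolean; // the deceptive default lives here
}

const darkPatternBanner: ConsentPurpose[] = [
  { id: "analytics", label: "Measure site usage", preselected: true },
  { id: "ad-personalisation", label: "Personalised advertising", preselected: true },
  { id: "vendor-sharing", label: "Share data with 500+ partners", preselected: true },
];

// A less deceptive configuration of the same banner: nothing is selected
// until the user acts, so silence does not become consent.
const neutralBanner: ConsentPurpose[] = darkPatternBanner.map((p) => ({
  ...p,
  preselected: false,
}));

console.log(neutralBanner.every((p) => !p.preselected)); // true
```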

However, consent claims ‑‑ at least the ones we scoped ‑‑ for non‑material damages are not being used within the redress system, even though there are so many decisions related to dark patterns and to violations of consent interactions.

Finally, we know that dark patterns are found in different domains, not only in privacy. And there are several data protection regulators and policymakers that have shown interest in contributing to this space. We found at least five reports from EU, UK, and U.S. bodies published in 2022 alone. But these sources often lack citation trails for their typologies and definitions, making it difficult to trace where new, specific types of dark patterns emerge and under which conditions.

On the other hand, the academic literature has grown rapidly since the original typology in 2010. In the years since, published works have added new dark patterns. These typologies have some overlaps and also some misalignments. We analysed those academic and regulatory taxonomies and counted 245 dark patterns. Yes, 245. Many of these dark patterns either overlap or misalign with other types of dark patterns coming from all these different sources.

And so, we constructed an ontology of dark patterns, identified their prominence through direct citations and inferences, and clustered similar patterns, creating high‑level, meso‑level ‑‑ middle‑level ‑‑ and low‑level patterns. This typology of dark patterns enables a shared vocabulary for regulators and dark patterns scholars, enabling more alignment in user studies, in mapping to decisions, and in discussions of harms, and helping scholars trace the presence and types of dark patterns over time.
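To make the three‑level structure she describes concrete, here is a small illustrative sketch of how such a taxonomy might be represented in code. The level names follow the talk, but the node names, fields, and the cited decision are hypothetical, not the authors' actual ontology.

```typescript
// Illustrative high/meso/low-level dark pattern taxonomy, loosely
// modelled on the ontology structure described in the talk.
interface DarkPatternNode {
  name: string;
  level: "high" | "meso" | "low";
  sources?: string[]; // e.g., papers or enforcement decisions citing it
  children?: DarkPatternNode[];
}

const obstruction: DarkPatternNode = {
  name: "Obstruction",
  level: "high",
  children: [
    {
      name: "Hard to refuse or withdraw consent",
      level: "meso",
      children: [
        {
          name: "Withdrawal option buried several menus deep",
          level: "low",
          sources: ["hypothetical DPA decision 2022/123"],
        },
      ],
    },
  ],
};

// Walk the tree to list every low-level pattern under a high-level class,
// the kind of lookup a shared vocabulary makes possible.
function lowLevelPatterns(node: DarkPatternNode): string[] {
  if (node.level === "low") return [node.name];
  return (node.children ?? []).flatMap(lowLevelPatterns);
}

console.log(lowLevelPatterns(obstruction)); // ["Withdrawal option buried several menus deep"]
```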

Regulators could anticipate the presence of existing patterns in new contexts or domains and guide their detection. Thank you for your time. And if you have any question or any suggestion, please consider sending me an email. Thank you so much.

>> TITIKSHA VASHIST: Thank you to Professor Santos for that presentation and for showing us very clearly how deceptive design is now increasingly part of the legal discourse, as different countries across the world look at it more closely and make it a part of their case law.

I would now, finally, like to invite Maitreya Shah to share his comments with us. And thank you so much, Maitreya, for your patience, and thank you so much for being with us. 

>> MAITREYA SHAH: Hi, Titiksha. Thank you so much for having me here.  I hope you can see my presentation. 

>> TITIKSHA VASHIST: Yes, Maitreya, you're all set.

>> MAITREYA SHAH: Thank you, and thank you for launching this at one of the biggest platforms in the world to talk about this. So, yeah, hello, everyone.  I'm Maitreya, and thank you so much for taking time and for that kind introduction.

So, my fellow speakers have already touched upon many forms of deceptive design: how they interact with consumers, how they pose harm to people, and what dark patterns exist on the Internet today. You know, dark patterns and deceptive designs have become quite multidisciplinary with the rise of AI and modern technologies. I intend to talk about two things very briefly. The first is the piece I wrote for the Research Series that is launching today, which deals with accessibility overlays and their harms to people with disabilities. The other relates briefly to my broader work, because a lot of my work is on AI bias, fairness, and ethics, and I intend to briefly touch on the deceptive design dark patterns that are emerging through AI, emerging technologies, and the new models that we see in the world today.

So, to start with, deceptive design practices overlap with accessibility. I wrote a piece for the Research Series on accessibility overlay tools. Before I delve into what those tools are and what the deceptive design practices are, I'll give you a brief on accessibility. Accessibility is the idea of making websites and applications usable for people with disabilities. It is a legal right and a legal obligation under various instruments, international and domestic; I've given a few examples here. And these accessibility overlay tools are essentially designed to subvert the legal obligation to make websites accessible. I have tried to analyse these tools through a deceptive design lens, to identify the dark patterns and how they end up harming people with disabilities on the Internet.

So, an overlay, as people who come from the design side of things know, usually sits on the UI or UX side of websites or web applications, in the form of JavaScript boxes that pop up and tend to divert or obstruct users' attention, shifting their focus to something different, like sign‑up boxes or advertisements and so on. An accessibility overlay tool is exactly like this.

However, what it claims to do is make the website accessible for people with disabilities. Now, in line with accessibility regulation, the World Wide Web Consortium has come out with web accessibility guidelines and standards that guide developers in making websites accessible, and these standards require a lot of manual labor and a lot of manual design input, right from the source code. These accessibility overlay tools do not end up changing anything in the source code. They only make changes on the user interface side of things. They will basically change the font, color, contrast, or size, or maybe add some image descriptions on the website ‑‑ all things that are already built into assistive technology. So, accessibility overlay tools are not doing anything new. Assistive technologies, like screen readers for people who are blind, already have a lot of these features built in.
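To make the contrast Maitreya draws concrete, here is a minimal hypothetical sketch of the difference between an overlay‑style runtime tweak and source‑level accessibility of the kind the W3C guidelines expect. The function name and selectors are invented for illustration and are not taken from any real overlay product.

```typescript
// Hypothetical sketch of what an overlay-style script typically does:
// it runs after page load and only touches presentation in the DOM.
// The underlying markup (the "source code") is never actually fixed.
function applyOverlayTweaks(): void {
  document.querySelectorAll<HTMLElement>("p, a, span").forEach((el) => {
    el.style.fontSize = "1.2em"; // larger font
    el.style.color = "#000000";  // higher contrast
  });
  document.querySelectorAll<HTMLImageElement>("img:not([alt])").forEach((img) => {
    img.alt = "image"; // guessed description, often inaccurate
  });
}

// Source-level accessibility, by contrast, lives in the markup itself,
// where a screen reader can actually find the semantics, for example:
//
//   <img src="sale-banner.png" alt="50% off all shoes until Friday">
//   <button aria-label="Close dialog">X</button>
//
// A post-hoc style tweak cannot supply semantics the markup never encoded,
// which is why overlays add little beyond what assistive tech already does.
```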

So, what are the harms? The companies that sell these accessibility overlay tools claim that they are making the website accessible. And what ends up happening is, whenever there is an accessibility overlay tool on a website, there is a toolbar and an announcement at the top of the website, on its landing page, saying that the website is accessible and that visitors can use this feature to get an accessible experience and interaction on the website.

So, people with disabilities tend to use the website with the anticipation that it will be accessible, and what ends up happening is that they are deceived and manipulated into choices that they do not intend to make, which is inherently the idea of deceptive design. This is done, as I said earlier, to subvert the legal obligation to make websites accessible. Companies employ designers who don't incorporate accessibility features from the very inception of the website‑building process; then, afraid of lawsuits and paying hefty compensation, they resort to these sorts of contrivances to make their websites appear accessible.

So, there are many issues ‑‑ before I come to strategies for countering these tools ‑‑ that people with disabilities face when these overlay tools are deployed on a website or a web interface. Firstly, many screen readers, which blind people especially use, get obstructed by these overlay tools. These overlay tools also tend to invade the privacy of people with disabilities, because they detect the user's assistive technology. And there are many other issues, like false and inaccurate image descriptions that might manipulate people into purchasing things they do not want.

You know, in line with the idea of today's discussion, I have given here a few points around strategies for moving from theory to practice. How do we counter these accessibility overlay tools? How do we ensure that companies don't use these tools and that they don't harm people with disabilities? These are a few examples that I have personally researched and gathered from across the globe, which are somewhat effective strategies to counter the deceptive practices of these tools, including regulatory action, community advocacy, tools that can counter these accessibility overlays, and educating designers and website developers, to start with.

So, this was possible through the consultations I had with the Pranava Institute, thinking about how these accessibility issues could be articulated in deceptive design language and how they harm people with disabilities ‑‑ an area that is marginalized and very little talked about.

I'll quickly move to artificial intelligence technologies. There is a lot of hype and a lot of discussion around ChatGPT and generative AI tools today. We interact with chatbots and with these new forms of large language model technologies. In my presentation, I have two broad issues that I wanted to focus on, two examples that I wanted to share with you that have come up in my research so far. And I'll be very brief, because I'm mindful of the lack of time.

So, a lot of regulators are talking about and making people aware of deceptive design practices based on anthropomorphism ‑‑ basically, human characteristics carried by non‑human entities. For example, chatbots and generative AI models that take on human characteristics blur the boundaries between humans and technology, and they tend to manipulate users and subvert users' autonomy and privacy. In the previous slide, I gave an example where a person, back in 2021, was influenced by a chatbot and attempted to assassinate the Queen of the United Kingdom. So, these are the kinds of issues one could face because of chatbots and large language models.

>> TITIKSHA VASHIST: Maitreya, I'm sorry to interrupt you, but could you quickly wrap up? We're one minute over time.

>> MAITREYA SHAH: I'll do that.

>> TITIKSHA VASHIST: Thank you.

>> MAITREYA SHAH: Sure, thank you. This is, briefly, again, an example of data mining practices and how they tend to violate the privacy of users. I'll quickly move through. These are a few examples, again, of moving from theory to practice: of regulators trying to shape the discussion around deceptive design practices in AI and tech, and of how you or I, as lawyers, designers, or community advocates, can influence the work on this. Yep, that's it. Thank you so much. And sorry for running over time.

>> TITIKSHA VASHIST: Thank you so much for joining us, Maitreya, and for sharing your specific research at the intersection of deceptive design and disability, and I wish you all the best for a lot of your forthcoming work on AI and deceptive design. That being said, in the interest of time, let me thank everyone for joining us for this particular launch event. You see the QR code to our project right up here on the screen. And if you'd like to grab a physical copy of the Manual or the Research Series, they're right here on the front desk, right up here. 

Again, I would like to extend my gratitude to both Chandni and Maitreya, who are joining us at very, very odd times, but thank you for making it to this event. And thank you to everyone for attending this particular session.  We are definitely available offline, if you are interested in this issue and want to talk more about it. Thank you.