
AI and design: What designers need to know to take AI from very engineery to usable, discoverable and trustworthy

Published in Design

Written by

Annika Madejska
Senior Designer

Annika Madejska is a Senior Designer at Nitor who has a thing for tackling complex design problems and who loves to push herself outside of the comfort zone. She carries a torch for ethical technology, especially AI driven services. In her spare time, she kicks back with some gardening, painting or textile arts.

Article

November 9, 2023 · 7 min read time

As designers, we will eventually need to design services that contain at least some aspects of interaction between a user and AI. Anyone who has used the novel natural language AI services knows that their usability leaves a lot of room for improvement. In this article, we'll give you a few pointers on where to get started to build the toolkit you need to design usable and trustworthy AI services.

All designers work with a material that speaks back to them and that affects the designs. All design materials have their own boundaries, individual constraints, and unique properties. To be able to design with a material – you need to know it well. You have to know what limitations it has and what possibilities it carries.

Machine learning, neural nets, foundation models, generative AI, and agentive AI are all examples of a long technology journey that has given us a design material that is less known to many designers. Building the capabilities of this technology was the work of data scientists and engineers, who showed that it can be done. Now that we know it can be done – it is time to improve the usability.

You need to understand it to make it usable

This means that designers need to understand the basics of what hides behind the term AI. And we have to marry our understanding of the fuzziness of AI with all the fuzziness that is intrinsic to human behaviour and human communication. No small task.

If you have ever tried to write a prompt for a generative AI that produces images, you know that it is not easy to get an image that resembles what you had imagined. Much of this has to do with the way we naturally express ourselves, relying on our conversation partner to pick up on cues and body language to fully understand us.
To get the image you want out of an AI – you need to be very specific, use many parameters, and learn a lot about special terms and the use of special characters to convey your cues to the machine. Very engineery – but not really great in terms of usability, discoverability, and user experience.
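To make that gap concrete, here is a small, hypothetical illustration. The parameter flags mimic conventions popular in image-generation communities; they are not the syntax of any specific product.

```python
# The same request, expressed the way a person would naturally say it
# versus the parameter-dense prompt many image generators reward.
# Flags like "--ar" (aspect ratio) are illustrative community-style
# conventions, not a real product's documented syntax.

natural_request = "A cosy cabin in a snowy forest at dusk."

engineered_prompt = (
    "log cabin, snow-covered pine forest, dusk, warm window light, "
    "volumetric fog, 35 mm photo, shallow depth of field "
    "--ar 16:9 --no people --stylize 250"
)
```

The second version is far more likely to land near the intended image – but expecting users to learn this dialect is exactly the usability problem described above.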

Some AI services can also act on behalf of a user – for example, we can tell it what trip we want to make, when, what our price range is, and how many layovers we’re willing to deal with. We don’t have to search for the trip ourselves, because the service will ping us if and when it finds what we’re looking for. This means that designers have to be able to design for a handover of agency from human to machine and from machine back to a human.
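As a rough sketch of what that handover can look like, consider the structure below – all names and fields are illustrative assumptions, not a real booking API:

```python
from dataclasses import dataclass


@dataclass
class TripBrief:
    """What the user delegates to the agent. Fields are illustrative."""
    origin: str
    destination: str
    max_price_eur: float
    max_layovers: int


def agent_scan(brief: TripBrief, offers: list[dict]) -> dict | None:
    """The machine holds agency: it filters offers against the brief."""
    for offer in offers:
        if (offer["price_eur"] <= brief.max_price_eur
                and offer["layovers"] <= brief.max_layovers):
            return offer
    return None


def notify_user(offer: dict) -> None:
    """The handover back: the service pings, but the human approves."""
    print(f"Found a match for {offer['route']} at {offer['price_eur']} EUR. "
          "Approve booking? [y/n]")
```

The design questions live at the two seams: what the user must specify when handing agency over, and how the service asks for it back.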

Current AI technologies hold a plethora of new potential for designing interfaces – potential that demands a flexible mind, creative thinking, and an eye for catching unexpected possibilities in discovery and usability research. It is a design material that genuinely expands our means of interacting with technology and having technology act on our behalf, which makes it crucial for designers to understand it well enough to translate it into the right interface for a particular context and its users.

All these new opportunities have also brought new challenges to the industry – or rather, made old challenges even more urgent to solve. These challenges are, however, something that designers are particularly equipped to tackle, as our profession balances the intersection of tech, behavioural science, business, and aesthetics. We have worked with usability, learnability, emotional impact, flexibility, and experiences for a long time. Designers have usually also had to grasp the legal aspects of the services they are working on, as well as ethical considerations, suitability for a given context, and even sustainability. All these things become more important as AI outcomes become the focus of more and more regulation and legislation.

What designers need to do right now is to learn more about neural nets, machine learning, and data, so that we can build some common ground with data scientists and AI developers. A shared language and the capability of asking the right questions will equip us with the ability to design usable and trustworthy AI services.

On top of this, now more than ever, there is a need for designers to be the link between business and technology. We need to know enough to be able to explain AI in order to help decision-makers understand the properties, constraints, possibilities, costs, and risks of AI. We need to assist in establishing strategies for AI use within organisations – both for internal development projects but also for buying AI-based software to adapt to business contexts and needs.

Learn how to build trust

This also requires us to understand how to build trust for AI services. In the end, users will not continue using a service they do not trust.
There will always be someone who comes up with a more understandable, transparent, and easy-to-use solution, so even if the users might currently be forced to use a particular service – it doesn’t mean they won’t jump ship if something better comes along.

For designers, this means we have to use our knowledge and understanding of AI to make it explainable and transparent to the user. One example is to help users understand why a service produces the outcomes they get. This can partially be done by displaying the personal data used to produce the outcome – which also lets users spot when that data is old or incorrect. Another example is to design ways for users to easily give feedback on how the AI service works – and by feedback, I mean things beyond collecting analytics data from the UI and NPS scores. This feedback is crucial for improving the usability of an AI service.
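A minimal sketch of what such transparency could look like in an outcome payload – the structure and field names are assumptions for illustration, not a standard:

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class DataPointUsed:
    """One piece of personal data that shaped the outcome, with its age,
    so the user can spot stale or incorrect inputs."""
    name: str          # e.g. "home address"
    value: str
    last_updated: date


@dataclass
class AIOutcome:
    result: str
    based_on: list[DataPointUsed] = field(default_factory=list)


def record_feedback(outcome: AIOutcome, correction: str) -> dict:
    """In-context, structured feedback – richer than UI analytics or NPS."""
    return {
        "result": outcome.result,
        "inputs": [d.name for d in outcome.based_on],
        "user_correction": correction,
    }
```

Showing the user what the outcome was based on, and letting them correct it on the spot, addresses both examples at once.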

Businesses utilising AI also need to be open and transparent about how the AI outcomes are monitored and how they work to prevent harmful outcomes. What has this to do with usability, you might ask? Well, just about everything.

Trust is really hard to regain once it is lost, and without trust in a service, your amazing user journey and smooth interactions will not matter. This puts new demands on designers to prototype not only for user experience and usability but also to discover and predict unfavourable outcomes in an AI service.

Learn how to prototype fuzziness

A fair number of the new AI services still rely on screen UIs, but even these become different when you are dealing with conversational interaction rather than traditional interaction patterns. New things need to be designed, such as the personality and characteristics of a conversational assistant – which is not something most designers have spent their careers doing. And there are many easy-to-fall-into traps in creating personalities. The most obvious one is giving a virtual assistant a female gender by default. The less obvious one is how the assistant will learn and adapt to the interaction with a particular user, and how the training data will influence how this relationship develops over time.
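One way to avoid accidental defaults is to write the persona down as explicit, reviewable configuration that the whole team can question. Everything below is a made-up sketch, not a product's actual settings:

```python
# A deliberately chosen persona, recorded where designers, developers
# and stakeholders can all challenge it – instead of quietly inheriting
# a female-gendered default.
assistant_persona = {
    "name": "Vasa",                       # neutral, invented name
    "gender": None,                       # explicitly none, not "female by default"
    "tone": "warm but matter-of-fact",
    "discloses_machine_identity": True,   # never pretends to be human
    "adapts_to_user": "formality only",   # bounds how far it mirrors the user
}
```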

The term relationship is used very deliberately here, as we humans will inevitably form a relationship with a machine if it displays human properties. It is the way we are wired to respond – which puts a lot of responsibility on the designer. Understanding the training data becomes crucial. Is the data set used to train the AI suitable for your target users? Has it been curated properly? Has the data been balanced for bias? Has the model been given enough learning material to perform the task? Automated translation of product names and descriptions has been an endless source of entertainment in memes, and it might seem like a harmless side effect. But it bites a chunk out of the trust capital that users have in a service.
If they can’t get the product names right – what else is going wrong?
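Designers will not run these checks themselves, but knowing what a first-pass audit looks like helps in asking the right questions. A crude sketch of a class-balance check – a conversation starter with the data team, not a bias test:

```python
from collections import Counter


def balance_report(labels: list[str]) -> dict[str, float]:
    """Share of each class relative to a uniform split. A large
    deviation is a prompt to ask: was this imbalance a deliberate,
    documented choice, or an accident of data collection?"""
    counts = Counter(labels)
    expected = 1 / len(counts)
    return {
        label: round(n / len(labels) - expected, 3)
        for label, n in counts.items()
    }


# e.g. balance_report(["fi", "fi", "fi", "sv"]) -> {"fi": 0.25, "sv": -0.25}
```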

It is of course impossible to test all possible outcomes and discover all the ways that things might go wrong. But our prototyping needs to become more explorative. We should be open to design methods we know but rarely use – body storming, techniques brought in from acting and theatre, Wizard of Oz – and we need to develop methods to actively investigate how an AI service will affect the world (other businesses, different communities in society, individuals, minorities), as well as methods to explore how someone might cause deliberate harm using the service we are creating. Designers also have to pay attention to uncovering new requirements, as these new services might raise new, context-dependent user needs.
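Of the methods above, Wizard of Oz is perhaps the easiest to start with: a hidden facilitator plays the AI, so the conversation design can be tested before any model exists. A minimal console sketch of the idea – a real study would put the participant and the wizard on separate screens:

```python
def wizard_of_oz_session() -> list[tuple[str, str]]:
    """The participant believes they are chatting with an AI; the hidden
    'wizard' types the replies. The transcript becomes research data for
    refining the assistant's tone, flows and failure handling."""
    transcript = []
    while True:
        user_msg = input("Participant: ")
        if user_msg.lower() in {"quit", "exit"}:
            return transcript
        wizard_reply = input("[wizard only] Reply as the assistant: ")
        print(f"Assistant: {wizard_reply}")
        transcript.append((user_msg, wizard_reply))
```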

As you can see, designers already have a great knowledge base for venturing into the world of creating usable AI services. But we also have to learn a great deal about technology, data, data curation, and all the new user needs that come with AI as a design material. The users need to feel they understand and can trust the AI service, and businesses need help to understand AI and the importance of building trust as part of their business strategy for their AI products. All of this becomes the foundation of improving usability in AI services.

Explore more about our design services.
