by Miikka Paakkinen
This post belongs to a two-part blog series on design topics related to artificial intelligence (AI) and robotics. You can read part two on ethics by clicking here.
Note: I will not go deeper into explaining the concepts of AI and robotics in this post. For a summary of the technologies and the differences between them, check out this excellent article on Medium.com: https://medium.com/@thersa/what-is-the-difference-between-ai-robotics-d93715b4ba7f
Will artificial intelligence take our jobs and make us useless? Can we trust the robots? The public discussion around these emerging technologies often seems to paint a negative, even dystopian picture of the future. When it comes to disruptive technological change, though, this is nothing new. A lack of information or transparency usually leads to fear of a technology instead of trust in it. But can we tackle this issue of trust with design?
Last week I attended a Helsinki Design Week seminar called “Future Talks”. It was organized by Future Specialists Helsinki and featured four keynote speeches loosely related to designing for trust in future services. Inspired by the event, I decided to write this blog post and dig a little deeper into the theme of trust in AI and robotics.
Why is trust important?
If users don’t trust a service, they will not use it unless it’s absolutely necessary. This is obvious, but all the more important to acknowledge in the age of extreme competition and easy availability of information and alternatives. As futures researcher Ilkka Halava put it in his keynote at “Future Talks”, digitalization is a massive power shift from systems to humans. Bad and untrustworthy services will quickly become obsolete because they can easily be bypassed.
When creating services based on new technologies that users might not fully comprehend, such as AI or robotics, it’s especially important to gain trust for the service to succeed and provide value.
The question then seems to be – how can we design trust?
7 things to consider
To answer that question, we need to understand the core elements that foster trust towards such technologies.
At “Future Talks”, Olli Ohls (Robotics Lead at Futurice) presented key research findings on what creates trust in the field of social robotics.
Similar findings appear in innovation management professor Ellen Enkel’s 2017 Harvard Business Review article on trust in AI-based technologies (which you can read here: https://hbr.org/2017/04/to-get-consumers-to-trust-ai-show-them-its-benefits).
Based on Ohls’s speech and Enkel’s article, I compiled a summary of seven things to consider when designing for trust in AI and robotics:
- Transparency – when the purpose and intention of the AI or robot is clear, and the underlying logic is understood by the user, it is much more likely to be trusted. A major positive impact was noticed in robotics when a robot was able to verbally explain its purpose to a user, as pointed out by Ohls. The development process behind the technology should also be transparent.
- Compatibility – the technology obviously needs to match the problem it’s trying to solve. It’s also important to consider whether users feel it aligns with their values and guides them toward their goals.
- Usability – the more intuitive and easier the innovation is to use, the better the chance of creating trust. Additionally, users should be able to get a basic understanding of how the technology in question works, what its limitations are, and how one should work with it. As a crude comparison: it’s hard to start driving a truck if you don’t understand the basics of what automobiles do.
- Trialability – when users can test the solution before actual implementation, perceived risk is reduced. A trial can be conducted, for example, via a prototype.
- Performance – seeing an AI or a robot make a small mistake here or there is unlikely to undermine our trust in it, but constant underperformance will. Expectation management is important here – users need to know what the technology is supposed to achieve and how it should do it.
- Security – the technology should be perceived to be safe to use from both a physical and a data security viewpoint.
- Control vs. autonomy – it’s important to understand the context and purpose of the technology and find the suitable level of automation. Ask the question: should we lean towards the technology making decisions on its own, or towards the technology assisting a human in making decisions?
Takeaways and thoughts
AI and robotics are still very new to most people, and the concepts might seem intimidating. To use these technologies to create real value, we need to design services around them that are trustworthy for their users and for society at large. Keeping the points above in mind during your service design project could be a good start in working towards that trust.
The author Miikka Paakkinen is an MBA student in Service Innovation and Design with a background in business management and information technology.
What do you think of the list? Could your experiences regarding trust in services be translated to AI or robotics? Please share your thoughts below!