by Miikka Paakkinen
This post is the second of a two-part blog series on design topics related to artificial intelligence (AI) and robotics. Click here to read part one on trust.
Note: I will not go deeper into explaining the concepts of AI and robotics in this post. For a summary of the technologies and the differences between them, check out this excellent article on Medium.com: https://medium.com/@thersa/what-is-the-difference-between-ai-robotics-d93715b4ba7f
New artificial intelligence solutions are popping up everywhere, including the public sector. The amount of available data and constantly increasing computing power make it possible for algorithms to take on more and more complex tasks.
As historian and best-selling author Yuval Noah Harari notes in his latest book 21 Lessons for the 21st Century, rapid advancements in biotechnology in conjunction with AI might soon create a situation where we have both a deep understanding of how the human brain works and the technological capability for algorithms to mimic what we previously thought would require “human intuition”.
Panelists at the Work Up! X HDW: Artificial Intelligence and Ethics seminar shared similar views. AI has the potential to have such deep and disruptive effects on society that we need a larger public discussion around the technology.
In this blog post I will point out some of the ethical questions that were asked in the seminar, which was part of the Employment 2020 initiative of the Finnish Ministry of Economic Affairs and Employment. I will also briefly mention some tools to help answer those questions and provide a short overview of how Finland is preparing to get the best out of AI. Since the subject is broad and complex, I’ve embedded links to many of the references for further reading.
The questions we need to ask
With AI likely having a significant impact on our lives sooner rather than later, it’s clear that we need to think about the ethical implications. Below are some of the key questions posed in the seminar, broken down into three categories:
Transparency and responsibility:
- Who stands behind the decisions made by an AI?
- What kind of safety nets are needed for decisions made by an AI?
- How can we make the logic behind the decision-making of an AI transparent?
- How can we prevent the prejudices of people or organizations from ending up in algorithms?
- Who decides the core values an AI is based on?
Use of data:
- In what context is it acceptable to make decisions based solely on available historical data?
- What if the available data is skewed?
Changing roles and skill requirements:
- How will AI affect power structures?
- Should we trust the decisions suggested by AI?
- What sort of skills will we need to work with an AI?
Obviously, we’re just scratching the surface here. For an example of what these ethical questions might look like in real life, I recommend watching this 10-minute video by the Wall Street Journal on how AI is utilized in hiring in the US.
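The skewed-data question above can be made concrete with a tiny sketch. The data and the toy “model” below are entirely invented for illustration: a naive system trained only on historical hiring outcomes simply memorizes past hire rates per group, so any prejudice baked into the historical record carries straight through to its recommendations.

```python
# Hypothetical illustration: a naive model trained on skewed historical
# hiring data reproduces the bias present in that data.
# All records below are invented for the example; group A was
# favored by past hiring decisions.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 30 + [("B", False)] * 70

def train(records):
    """'Train' by memorizing the historical hire rate per group."""
    counts, hires = {}, {}
    for group, hired in records:
        counts[group] = counts.get(group, 0) + 1
        hires[group] = hires.get(group, 0) + (1 if hired else 0)
    return {g: hires[g] / counts[g] for g in counts}

def recommend(model, group, threshold=0.5):
    """Recommend hiring when the historical hire rate exceeds the threshold."""
    return model[group] > threshold

model = train(history)
print(model)                    # {'A': 0.8, 'B': 0.3}
print(recommend(model, "A"))    # True  -- the past preference persists
print(recommend(model, "B"))    # False -- rejected regardless of merit
```

The point is not that real systems are this crude, but that no amount of optimization on biased historical data will correct the bias by itself; it has to be identified and addressed deliberately.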
One interesting tool for identifying and managing data ethics considerations is the Data Ethics Canvas by the Open Data Institute. It’s a free tool that provides 15 steps for assessing a project, with concrete questions related to the ethical use of data.
Some big consultancy companies are also offering their own tools to assess the ethical use of AI.
A local view
In Finland, the Ministry of Economic Affairs and Employment has been active recently in promoting Finland’s Artificial Intelligence program, which is expected to be completed in April 2019.
They’re currently challenging companies to commit to the ethical use of AI and providing tools and tips to help them do so.
This summer one of the program’s working groups gave 28 policy recommendations based on its findings on work in the age of artificial intelligence.
Reforming education to meet future needs is one key element in the program. The free online course “Elements of AI” can be seen as an early success.
It’s important that there is political discussion regarding the subject, so it’s good to see that we’re at least on the path to defining what the best use of the technology is for our society.
Philosopher Maija-Riitta Ollila pointed out that only agents can change the world. Phenomena can’t – actions are needed, and there’s always an agent behind an action. Thus, we should not think of AI as some independent uncontrollable phenomenon, but instead understand that we can be agents in shaping how AI will be utilized.
The ethical questions related to the current technological revolution will only get more complex in the future. Do you want to passively predict what organizations and society will look like in the future, or be a part of co-designing our future realities?
The author Miikka Paakkinen is an MBA student in Service Innovation and Design with a background in business management and information technology.