In her keynote speech at the Interaction16 conference, Dr. Kate Darling from the MIT Media Lab opened up the topic of social robotics. She started by reviewing a common fear that humans currently have: that robots will take over the world and eliminate the human race. She quickly dismissed this concern by showing state-of-the-art robots autonomously playing football. They cutely and clumsily stumble and fall before even reaching the ball to kick it. So much for taking over the world, for now.

Humans empathize easily with cute, physical robots
Social robotics has its background in the fundamental human ability to empathize with nearly anything. Physical objects are easier to empathize with than virtual ones. If the extra effort is made to make a robot move, act cute, and have big eyes, hardly any human can resist it. This is why people behave as if robots were alive, even though they know this is not true. This is also what makes it possible to use social robotics for clearly valuable services like cheering up sick children, helping autistic children, keeping elderly people company, and easing the consequences of dementia.

Empathizing with a robot.
However, Dr. Darling raised several ethical concerns and inconsistencies around these use cases. Is it ethical to leave the elderly in the company of social robots, and can that ever replace human-to-human interaction? And how is it different from leaving them in the company of a living animal? Will people feel uncomfortable sharing personal data or undressing in front of a robot, and will they feel they no longer have any privacy? And how is that different from satellites that can already take high-definition pictures of nearly anything on Earth? Is it really just a matter of design, i.e. hiding it from people? Is it ethical to let people empathize with a robot and thereby leave room for emotional manipulation of human behavior?
Should human-robot interaction therefore be regulated, or should the market be left to self-regulate? There is obviously still a large number of questions to be answered.

AI should have clear opinions but should not offend humans on sensitive topics
The team behind Microsoft’s smart personal assistant, Cortana, is in any case already committed to the empathy approach. Mr. Jonathan Foster, from the team in charge of Cortana’s answers, has realized that ambiguous personalities are generally disliked. To make Cortana an attractive and trustworthy conversation partner, she needs to have a personality and state clear opinions. At the same time, it is important not to offend people in sensitive discussions, e.g. about gun control or sexual orientation. The Cortana team therefore consists of individuals who are extremely skilled at writing stories for people. Every answer built into Cortana is preceded by a thorough discussion of the question in order to find the most appropriate response. The question remains for the team, however, how to distance themselves from the dangerous field of emotional manipulation. Additionally, can Cortana’s answers be perceived as truth if they are defined by a relatively small group of people?
Truth as humans perceive it is a topic that Dr. Tricia Wang also touched upon in her keynote speech. As Dr. Wang puts it, “we conflate what we see with what is the truth”. And by doing so, we often project complex reality onto a single dimension that we then perceive to be the truth.

Hypnotizing chickens by drawing them a single straight line to look at: the danger of looking at only one dimension
Our eyes used to be the ultimate truth machine, until Venetian masters invented the mirror. It quickly became the next ultimate truth machine, until people realized that “objects in the mirror may appear smaller than they really are”. The next ultimate truth machine revealed itself in the form of big data. It came along just perfectly, because humans tend to suffer from a quantitative bias and value the measurable over the immeasurable. It came along even more perfectly because it promises objectivity based on raw data. But as Dr. Wang points out, data is never raw; it is always designed. It is a design choice what and who will actually be represented in and with the data. It is therefore rather strange that the sign in the picture below appears only now. But hey, at least it did not take hundreds of years, as in the case of the mirror.

The danger of looking only at quantitative data, even if it is big
I could not agree more with Dr. Wang on the need to bring together thin (quantitative, big) and thick (qualitative) data for a broader perspective. This only reinforces similar earlier conclusions, as pointed out in my previous post.
The newest truth machine on the block is virtual and augmented reality. Headlines are overwhelmed by the topic, and promises are being made that VR and AR will make us better human beings, capable of feeling empathy. What is easily forgotten is the very basic definition of empathy as the ability to understand how other people see the world from their perspective. We should not forget that VR is about reality, about making things look real to us even if they are happening thousands of kilometers away. It is still only about making us see something that we otherwise would not be able to see. It is therefore about projecting complex reality onto only one of our truth machines, the eyes. It is about us experiencing some reality as content, without necessarily knowing the context of that reality from another person’s perspective. It is only when the context complements the content that we can talk about feeling empathy. Only then can we design VR and AR based services that are meaningful from the perspective of someone else.
References
Darling K. 2016, Robot Ethics and the future of human robot interaction, http://interaction16.sched.org/event/5ze3/keynote-robot-ethics-and-the-future-of-human-robot-interaction, Accessed on 5.Mar.2016
Foster J. 2016, Writing for Personality in Tech: A call to artists, http://interaction16.sched.org/event/5yPV/writing-for-personality-in-tech-a-call-to-artists, Accessed on 5.Mar.2016
Wang T. 2016, Design in a wiggly world of mirrors, virtual reality and big data, http://interaction16.sched.org/event/5zg9/keynote-design-in-a-wiggly-world-of-mirrors-virtual-reality-and-big-data, Accessed on 5.Mar.2016