It’s time to close the empathy gap between humans and AI, guiding bots and virtual entities from the role of servant into more of a peer.
Since our early relationships with machine-learning-based interfaces (in other words, bots) were often marked by unpleasant, frustrating conversations, technology has left us with emotional baggage, comparable to that of a human relationship. Just think of unsuccessful calls to service centers where the bot simply didn't understand that you had "questions about the bill"; who hasn't cursed into the phone once in a while? Today's systems are smarter: an Amazon Echo plays jazz once you've asked for that style of music, whether you phrase your request as "Alexa, play jazz" or "Alexa, I'd like to hear jazz".
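The underlying idea, mapping differently phrased requests to one and the same intent, can be illustrated with a minimal sketch. This is not Amazon's actual implementation; the intent names and keyword lists below are illustrative assumptions:

```python
# Minimal intent-matching sketch: different phrasings resolve to the same
# intent. Intent names and keyword lists are illustrative assumptions.
INTENT_KEYWORDS = {
    "PLAY_MUSIC": {"play", "hear", "listen"},
    "BILLING_QUESTION": {"bill", "invoice", "charge"},
}

def detect_intent(utterance):
    """Return the first intent whose keywords appear in the utterance."""
    words = set(utterance.lower().replace(",", "").split())
    for intent, keywords in INTENT_KEYWORDS.items():
        if words & keywords:
            return intent
    return None  # no matching intent found

print(detect_intent("Alexa, play jazz"))             # PLAY_MUSIC
print(detect_intent("Alexa, I'd like to hear jazz"))  # PLAY_MUSIC
```

Real assistants replace the keyword sets with trained language models, but the contract is the same: many surface forms, one intent.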
Furthermore, frustrating engagements with tech are bad for our mental health: losing our temper or becoming upset with an interface, particularly one that engages in a human way, can lead us down a road of anger, bitterness and disappointment. At the same time, we know that we're far from reaching the upper end of what's technically possible with bots and voice-assisted services. What steps are we missing on this path? What needs tweaking in order to overcome users' persistent reservations about bots and language assistants?
It’s time to close the empathy gap between humans and artificial intelligence, and it’s our responsibility to empower the next logical step in their evolution: guiding bots from the role of servant into more of a peer. It is a key challenge for developers to make technology more human. “Humanizing technology” is the order of the day.
Only then will chatbots and voice assistants get the chance to become more than just servants: companions that understand people, not only their intonation and syntax, but also their mood, character and lifestyle. Let’s return to the Amazon Echo example, because jazz is not always jazz. Maybe today is more of a day for experimental jazz?
In order to respond to such subtleties in a user’s mood, it is essential to give artificial intelligence a real pinch of “humanness”. The empathy it then triggers in us is key to the development of technologies that people want to integrate into their everyday lives smoothly and unconditionally.
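What mood-aware refinement could look like can be sketched in a few lines. This is a hypothetical example, not a real assistant's API; the mood labels and sub-genre mappings are illustrative assumptions:

```python
# Hypothetical sketch: the same broad "jazz" request resolves to different
# sub-genres depending on an inferred mood signal. Mood labels and
# sub-genre names are illustrative assumptions.
MOOD_TO_SUBGENRE = {
    "adventurous": "experimental jazz",
    "relaxed": "smooth jazz",
    "nostalgic": "classic bebop",
}

def pick_playlist(genre, mood=None):
    """Refine a broad genre request using an optional mood signal."""
    if genre == "jazz" and mood in MOOD_TO_SUBGENRE:
        return MOOD_TO_SUBGENRE[mood]
    return genre  # fall back to the literal request

print(pick_playlist("jazz", mood="adventurous"))  # experimental jazz
print(pick_playlist("jazz"))                      # jazz
```

In a real system the mood signal would come from context such as tone of voice, time of day or listening history rather than an explicit label, but the principle is the same: the request stays constant while the response adapts to the person.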
Whereas in the past it was mostly the human being who turned to artificial intelligence with their concerns, we are currently in a period in which the interaction between human and machine is increasingly characterized by reciprocity. Virtual entities turn to us with their concerns, which is humanizing technology the other way around: a bot that is meant to appeal to us must be sufficiently human to be accepted by its audience.
Evidence of this is to be found in “virtual influencers” who are active on social media platforms such as Instagram: the first virtual supermodel, Shudu Gram (177,000 followers), and the influencer Lil' Miquela (1.5 million followers), together with her – of course also virtual – friend Blawko (135,000 followers). Technically, they are 3D-rendered models, but like traditional influencers they visit hotspots and upload selfies and slice-of-life stories in real time. Yet that is just one aspect of their perfect Instagram lives. The much more interesting point: they wear Prada and sell KFC. In other words, they are social media users on a mission to carry out influencer marketing.
The potential behind this development quickly becomes clear when you look at the market in which virtual influencers operate: influencer marketing grew from a $1.7 billion industry in 2016 to a $6.5 billion market in 2019. It is a market that virtual influencers are entering with a distinct advantage: they are optimized personifications of brands.
After all, they have one great advantage on their side: the essence of a virtual influencer is based on a broad database. Data sources such as Google, Yelp and Facebook reviews, together with consumer and trend research, can make them more comprehensively informed than their human counterparts thanks to machine learning and abundant data. They can also break the “incoherency” that plagues influencer marketing, where unrealistic pay-for-play placements have become the sad industry standard. Distrust of marketing messaging has opened the door to acceptance of an odd new norm: open inauthenticity. There is much to be said for a brand ambassador that offers unbroken coherence in their messaging and a precise fit for their target group. Such coherence is rewarded: according to a survey by Mindshare, 54 percent of all British consumers find virtual entities appealing; among tech-savvy consumers it’s as high as 69 percent.
Leveraging these trends and social tech advancements by incorporating data insights could lead us to more honest and empathetic relationships with machines. As openly inauthentic personalities, such as virtual influencers and bots, become more common, we can develop a new form of honesty, namely coherence, through their upfront presentation of the façade. By empowering them with insights grounded in truthful data, we maintain coherency of message, and the façade increasingly ceases to matter. Furthermore, we must use creativity to humanize technology and equip machines with the empathy they need to become more relevant to us all. This is crucial advice for a future that we will increasingly spend with artificial intelligence, a new age of creative tech and social media that brings machines and people closer together than ever before.
Elbkind Reply is the company of the Reply Group specialized in digital communication. Elbkind Reply’s work revolves around everything digital to spark conversations, trigger recommendations and raise awareness. The company leads clients through the digital jungle with competence and a down-to-earth attitude, supporting them every step of the way. Elbkind Reply offers all the best ingredients for first-class digital communication, from holistic brand consultation and strategy to one-stop realization.