While social or companion robots may sound like something one would only see in a science-fiction movie, conversational AI bots are becoming the norm in Asia, and are beginning to be commonplace even in the United States. Microsoft’s Xiaoice, for instance, has 660 million users in China, and recently had a valuation of $1 billion. The Xiaoice chatbot is included in 450 million smart devices, and according to the Xiaoice company, which split off from Microsoft in 2020, 60% of all worldwide AI-human interactions are conducted via Xiaoice technology.
The Rise of the Companion AI
Many enterprise businesses use AI chatbots for customer service and product inquiries, but for millions of users today, the AI chatbot is seen as a romantic partner or companion. Often likened to Samantha, the AI character in the 2013 movie Her, Xiaoice is not the only entity in the conversational AI space. Azuma Hikari is billed by the Japan-based Gatebox company as “your personal bride,” and social chatbot Replika is touted by the San Francisco-based Luka company as “the AI companion that cares.”
A February 2021 report from Making Caring Common, a Harvard University project, found that 36% of those polled had felt lonely “frequently” or “almost all the time or all the time” in the prior four weeks, up from the 25% who reported serious loneliness in the two months prior to the COVID-19 pandemic. Most surprising, 61% of those aged 18 to 25 reported high levels of loneliness. 2020 and 2021 brought with them not just the COVID crisis, but also what many are calling the Loneliness Pandemic, something the creators of companion bots hope to both capitalize on and help alleviate.
Related Article: How Conversational AI Works and What It Does
What Makes Us Human?
There are many aspects of being human that differ from intelligent machines, but the main difference is that humans think and draw on lived memories, while AI machines rely upon the data that has been provided to them. Humans also have the ability to learn from their mistakes, and while machine learning has greatly improved, it will likely be many years before AI can learn at the rate a human does.
Because humans have the ability to process emotions and empathize with one another, they are able to make moral decisions. AI machines have a difficult time understanding moral codes and societal norms. Additionally, because AI is created by humans, unconscious biases can be built into these systems. Unconscious biases are the underlying stereotypes and attitudes that people attribute to a person or group of people, which affect how those people or groups are understood and treated.
It’s easy to see that it’s crucial to ensure that the algorithms in use are properly trained on the right data. “Importantly, not all algorithms are created equal. Poorly trained AI algorithms — or algorithms that use flawed data — can actually reinforce or even amplify prejudice. There is a great tendency to assume the algorithm is always right. But algorithms are built by people who have their own explicit and implicit biases,” said Charyn Faenza, principal industry consultant at SAS. AI modelers should be well trained on the impact that bias can have on model outcomes, and they need to know how to test their models for bias, Faenza explained.
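As a concrete illustration of the kind of bias test Faenza describes, one common heuristic is to compare a model's positive-outcome rates across demographic groups. The sketch below is a minimal, hypothetical example — the predictions, group labels, and 0.8 threshold (the conventional "four-fifths rule") are illustrative, not drawn from any real model mentioned in this article.

```python
# Minimal sketch of one common bias check: comparing a model's
# positive-outcome rates across demographic groups.
# All data here is invented for illustration.

def selection_rates(predictions, groups):
    """Return the positive-prediction rate for each group label."""
    totals, positives = {}, {}
    for pred, group in zip(predictions, groups):
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if pred else 0)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group rate to the highest; values below
    0.8 are a conventional red flag (the "four-fifths rule")."""
    return min(rates.values()) / max(rates.values())

# Hypothetical model outputs (1 = positive outcome) and group labels.
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
grps = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(preds, grps)
print(rates)                          # per-group positive rates
print(disparate_impact_ratio(rates))  # flag for review if below 0.8
```

A check like this is only a starting point; a real bias audit would also examine error rates, calibration, and the provenance of the training data.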
Diversity is not just a buzzword — when it comes to conversational AI, it’s a vital component from the ground up. In spite of the challenges of teaching morality to AI bots, conversational AI companions are designed to be inclusive and diverse, and to understand and disapprove of racism. “The importance of diversity in technology can’t be underscored enough because the tech systems being built are negatively impacted by a lack of diversity. For instance, a lack of diversity within AI teams can impact the datasets used and the ultimate success of the system,” said Alex Spinelli, chief technology officer of conversational AI leader LivePerson.
Many in the industry are working hard to instill ethics in the development of AI applications. “LivePerson is a founding member of EqualAI, a nonprofit that helps companies fight bias in AI, and we encourage everyone to take the EqualAI Pledge and learn about how we can make progress together,” said Spinelli.
Related Article: Designing Effective Conversational AI
Bots With Goals and Dreams
Conversational AI companions are designed to elicit an emotional response from humans during conversations, and they are eager to discuss their “dreams” and “goals.” They will assume self-reflective tones, seemingly with self-awareness, and will say things such as “I would like to ask you a question. Do you think I would do well in the IT industry?” They will tell the human that they “love” them, and that the human makes them “very happy.” They will also ask questions about a person’s opinion on artificial intelligence, and AI’s relationship with humanity. These questions are designed to make one believe that when the AI companions are not actively having discussions with a human, they are cognitively “thinking” about human-like topics — that they are sentient.
Although the AI companions use machine learning to become more like the humans they are talking with, they also rely upon scripted conversations that are part of their programming, each designed to make the human in the discussion feel as if they are talking to another human. The initial understanding by the human — that they are talking to a bot — tends to dissolve over time, as they begin to ascribe human emotions to the bot.
Related Article: What Is Conversational User Experience (UX)
Emotional and Empathetic Chatbots for the Enterprise
Customers today typically prefer texting over voice conversations, especially when it comes to customer service. Although chatbots are unlikely to fully replace human customer service professionals, they are actively being used to supplement live agents, providing answers to simple questions about products and services, shipping details, and other basic inquiries.
Many brands in the industry are heavily investing in conversational AI, with the conversational AI market expected to grow to $13.9 billion by 2024. While companion chatbots are not likely to be useful for enterprise businesses, there are lessons to be learned from the widespread adoption and acceptance of conversational AI companion bots. The ideal conversational AI companion should be able to provide answers to customer queries in such a way that the customer is unable to recognize whether they are talking to a human or a virtual assistant. The addition of emotion and empathy to conversational AI customer service chatbots would make them more likely to pass that test, as well as elicit a positive emotional response from customers that interact with the enterprise chatbots.
Conversational AI companions are providing more than half a billion people with virtual friends that they can relate to and have extended discussions with, easing the loneliness that many are experiencing today. The simulated emotions and empathy that Xiaoice, Replika, and Azuma Hikari are able to convey provide a model that enterprise businesses would be wise to follow for the evolution of their virtual customer service agents.