Conversational AI applications have evolved to the point where they can pass the Turing Test and convince a human they are speaking with another human. But although AI has continually evolved, ethics and morality have not always been a prime design focus.
How do these AI apps respond to diversity? To questions or statements about gender? Do they recognize the difference between a child and an adult? How do they respond to tone, such as happiness or hostility? Can they take body language into consideration?
This article will look at the reasons why ethical conversational design is vital for enterprise brands considering the development and use of conversational AI applications.
AI Still Inherits Unconscious Biases
Although humanity is making great strides in the eradication of prejudice, we are still inherently flawed creatures, prone to holding grudges, having unconscious biases and subconsciously finding reasons to choose one person over another.
It’s not that humans cannot be kind, loving, caring and accepting. We can. But these negative traits often slip through, making their way into real-life scenarios. Intelligence has nothing to do with it, nor does level of education. Smart people are subject to the same unconscious biases.
This brings us to programmers, data scientists, engineers and designers, specifically those who work on AI. It’s reasonable to assume that, when they work on the latest AI application, these unconscious traits get built in. Even when brands try to solve problems using AI, as Amazon did when it used AI to prescreen job applicants, unconscious bias can slip in and unfairly muddy the playing field.
Recent data also shows accent bias in intelligent assistants like Alexa and Google Home. When looking at thousands of voice commands dictated by more than 100 people across nearly 20 cities, studies show notable discrepancies in how people from different parts of the US are understood.
Misunderstood accents included Southern, Midwestern, nonnative and Spanish, with some showing inaccuracy rates of up to 30%. These accent biases were not part of the intended design. Instead, they’re present because early adopters of voice assistants were primarily white, upper-middle-class Americans. To train AI accurately, we must use a multitude of diverse voices.
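The kind of discrepancy those studies describe can be measured by comparing transcription accuracy across accent groups. Below is a minimal sketch of such an audit: it computes a per-group word error rate (a standard speech-recognition metric) over a labeled test set. All sample data is illustrative, not taken from the cited research.

```python
# Sketch: audit a voice assistant's transcriptions for accent bias
# by comparing word error rates (WER) across accent groups.
# The sample transcriptions below are illustrative only.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

def error_rate_by_group(samples):
    """samples: list of (accent_group, reference, transcription)."""
    totals = {}
    for group, ref, hyp in samples:
        errs, n = totals.get(group, (0.0, 0))
        totals[group] = (errs + word_error_rate(ref, hyp), n + 1)
    return {g: errs / n for g, (errs, n) in totals.items()}

# Illustrative data: one mistranscription for the "southern" group.
samples = [
    ("midwest", "turn on the lights", "turn on the lights"),
    ("southern", "turn on the lights", "turn on the light"),
    ("southern", "play some music", "play some music"),
]
print(error_rate_by_group(samples))  # per-group average WER
```

A gap between groups in a report like this is exactly the signal that would prompt retraining on a more diverse set of voices.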
Ouriel Lemmel, CEO and founder of WinIt, a LegalTech solution provider, told CMSWire that biases in AI are often the cause of prejudicial outcomes. “Bias can creep into algorithms in many ways and can often be skewed to achieve a particular outcome. An example of this might be towards greater caution in offering loans to a certain group of people based on ‘social credit’ scores. The personal bias of developers, conscious or unconscious, can creep in when writing the algorithm as well,” said Lemmel.
An emphasis on diversity, equity, inclusion and belonging should be a part of any AI initiative. “You need to be mindful at every stage of building your conversational AI to avoid bias by design, and assess your training data for pre-existing bias or any bias that might emerge,” said Lemmel. “This means diversity and inclusion training for all your developers so they understand their own unconscious bias before they even start work, and that they create new algorithms that remove that bias.”
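Assessing training data for pre-existing bias, as Lemmel recommends, can start with something as simple as comparing outcome rates between groups. The sketch below applies one common heuristic, the “four-fifths rule” (a group is flagged if its positive-outcome rate falls below 80% of the best-performing group’s rate). The group labels and records are hypothetical.

```python
# Sketch: flag groups in labeled training data whose positive-outcome
# rate falls below 80% of the highest group's rate (four-fifths rule).
# Group names and records are hypothetical.

def selection_rates(records):
    """records: list of (group, approved: bool) pairs."""
    counts = {}
    for group, approved in records:
        pos, n = counts.get(group, (0, 0))
        counts[group] = (pos + int(approved), n + 1)
    return {g: pos / n for g, (pos, n) in counts.items()}

def disparate_impact_flags(records, threshold=0.8):
    rates = selection_rates(records)
    best = max(rates.values())
    # True means the group's rate is suspiciously low relative to best.
    return {g: rate / best < threshold for g, rate in rates.items()}

records = [("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False)]
print(disparate_impact_flags(records))  # {'A': False, 'B': True}
```

A flagged group doesn’t prove discrimination by itself, but it tells the team where to look before the data ever trains a model.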
Related Article: 4 Tips for Taming the Bias in Artificial Intelligence
AI Must Be Trustworthy, Fair and Accessible
Mark Rolston, Chief Creative Officer and Founder of argodesign, a product design consultancy, and advisor to the Responsible Artificial Intelligence Institute (RAII), spoke with CMSWire about the reasons why ethical conversational design is necessary for AI to continue to safely evolve and be trusted.
Rolston said that artificial intelligence is changing the game again, providing new insights about everything we thought we knew about the way the world works. “Today most software consists of content originating from other humans, with computers aiding in the gathering and filtering of that information. The shift that AI brings about is twofold: it is making opaque decisions about which content pieces we see, and in many new situations, it is creating the very content itself,” said Rolston.
Given the multitude of ways that AI informs, creates and impacts our lives, it behooves designers to ensure that it’s doing so in a just manner. “Our job as designers is no longer simply an opportunity to make the world more beautiful, but also as a central actor in making sure it is trustworthy, fair and accessible to everybody,” said Rolston. “The role of design is driving a whole-systems view of AI in order to thoroughly understand its impact.”
Explainable AI enables humans to better understand the reasons why AI makes certain decisions. It’s also more transparent about its methodologies, something that experts such as Rolston believe is vital. “At a superficial level, we want to create transparency about how these decisions are made. How do we expose its reasoning? The more sophisticated consideration is the role design has in helping make the authoring and auditing of AI more easy, accessible and transparent,” he explained.
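One concrete way to expose a model’s reasoning, in the spirit Rolston describes, is to report each input’s contribution to a decision. The sketch below does this for a simple linear scoring model, where contributions are just weight times value; the weights, feature names and threshold are hypothetical. Real systems use richer techniques, but the goal is the same: make the “why” visible.

```python
# Sketch: explain a linear model's decision by listing each feature's
# contribution (weight * value), largest drivers first.
# Weights, features and threshold are hypothetical.

WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0  # scores at or above this mean "approve"

def score_with_explanation(features):
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    total = sum(contributions.values())
    decision = "approve" if total >= THRESHOLD else "deny"
    # Sort by absolute contribution so the biggest drivers come first.
    ranked = sorted(contributions.items(),
                    key=lambda kv: abs(kv[1]), reverse=True)
    return decision, total, ranked

decision, total, ranked = score_with_explanation(
    {"income": 4.0, "debt": 2.0, "years_employed": 1.0})
print(decision, round(total, 2), ranked)
```

An auditor reading this output can see not just that an application was denied, but that debt was the factor pulling the score down, which is precisely the transparency explainable AI aims for.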
AI is typically very complicated, and as such, it’s reasonable for humans to distrust it. Those who play a role in developing and designing AI applications have a duty to ensure that what they create is built within an ethical framework that inspires trust. “As practitioners and stakeholders, our choice is to build a future that either will — or will not — be trusted by everyone. It is our collective responsibility to advocate and engineer for the positive. This is one of the most valuable things a creative organization can be invited to do,” said Rolston.
Guidelines and Ethical Frameworks
There has been much discussion in North America and Europe about AI guidelines. The RAII, where Rolston advises, “works to define responsible AI with practical tools and expert guidance on data rights, privacy, security, explainability and fairness.”
Similarly, the European Commission created a high-level expert group on artificial intelligence (AI HLEG) which released its Ethics Guidelines for Trustworthy Artificial Intelligence. These guidelines list seven key requirements that AI systems should meet to be trustworthy:
- Human agency and oversight
- Technical robustness and safety
- Privacy and data governance
- Transparency
- Diversity, non-discrimination and fairness
- Societal and environmental well-being
- Accountability
As AI continues to play a larger role in everyone’s lives, these ethical guidelines should become standardized, accepted and practiced by AI designers and developers worldwide. While these rules are unlikely to be legislated, they will likely serve as voluntary guidelines that brands can add to their corporate social responsibility practices.
Related Article: What Is Ethical AI and Why Is It Vitally Important?
Final Thoughts
Artificial intelligence has inundated our lives. It connects us with people, groups and content. It makes decisions that affect our credit, where we live, where we work, how we drive. It communicates with us regularly through intelligent assistants, chatbots and even our cars.
As the influence of AI continues to grow, ethical conversational design standards must become a framework that all developers and designers follow.