What Is Explainable AI (XAI) and How Will It Improve Digital Marketing?





Can your brand explain how its artificial intelligence (AI) applications work, and why they make the decisions they do? Brand trust is hard to win and easy to lose, and transparent, easily explainable AI applications are a strong first step toward earning customers’ trust while making AI apps more efficient and effective.

This article looks at Explainable AI (XAI) and why it should be part of your brand’s AI strategy.

What Is Explainable Artificial Intelligence (XAI)?

Typical AI apps are often called “black box” AI because what happens inside the application is largely opaque to everyone except the data scientists, programmers and designers who built it. Even those people, individually, may be unable to explain anything outside their primary domain. Without discernible insight into how an AI reaches its decisions, an AI app cannot be fully optimized.

Rather than hiding its workings in a figurative black box, XAI is designed to offer transparency, making an application’s behavior visible and far easier to explain. Typically, the hardest-to-understand aspects of AI revolve around the decisions the app makes, which are based on the actionable insights it gains from real-time and past interactions. XAI makes it easier to understand why AI algorithms decide to perform that “next best action.” And because XAI apps are transparent and easier to debug, brands can eliminate unconscious biases and explain the ethical basis of their decisions.
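To make that concrete, here is a minimal sketch, assuming a simple scikit-learn logistic regression as the decision engine, of how a single “next best action” decision can be broken down feature by feature. The model, feature names and data are hypothetical illustrations, not any particular vendor’s API:

```python
# A minimal sketch of per-feature attribution for one "next best action"
# decision. The model, feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["recency_days", "cart_value", "email_opens", "support_tickets"]
X = rng.normal(size=(500, len(features)))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# For a linear model, each feature's contribution to the log-odds of
# "send the offer" is simply coefficient * feature value.
customer = X[0]
contributions = model.coef_[0] * customer
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>15}: {c:+.3f}")
```

Ranking the contributions this way turns an opaque score into a statement a marketer can act on, such as “the offer was triggered mainly by cart value and email opens.”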

Why Is Explainability Important?

Relatively complex AI applications, such as online retail recommender systems, intelligent assistants or conversational chatbots, run on fairly benign decision engines whose inner workings are of no interest to the majority of users. Most brands are likely unconcerned with providing transparency or explainability for these types of AI applications.

Other decision processes carry more risk, though, such as medical diagnoses, investment decisions and safety-critical systems in autonomous automobiles. For these applications, AI needs to be explainable, understandable and transparent; reliability and consistency are paramount if trust is to be built.

Explainable AI allows brands to detect flaws in data models more easily, as well as unconscious biases in the data itself. XAI is also effective in improving data models, verifying predictions and gaining additional insights into what is working and what is not.
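As one illustration, a permutation-importance check is a common way to verify which features a model actually relies on and to catch flawed or leaky ones. The sketch below uses scikit-learn on fabricated data; the “leaky” feature and all names are invented for the example:

```python
# A minimal sketch of one common XAI-style check: permutation importance,
# which reveals whether a model leans on a flawed or leaky feature.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1000
signal = rng.normal(size=n)
noise = rng.normal(size=n)
y = (signal > 0).astype(int)
# Simulate label leakage: a third column that is almost the label itself.
leaky = y + rng.normal(scale=0.01, size=n)
X = np.column_stack([signal, noise, leaky])

model = GradientBoostingClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, imp in zip(["signal", "noise", "leaky"], result.importances_mean):
    print(f"{name:>6}: {imp:.3f}")  # an outsized 'leaky' score flags a flaw
```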

Sascha Poggemann, COO and co-founder of Cognigy, an enterprise conversational AI platform, spoke with CMSWire about the reasons why it’s crucial for AI applications to be trustworthy.

To begin with, brands must determine where AI will make the most impact. “Knowing where AI fits into the customer service strategy is a common challenge for business leaders,” said Poggemann. “It’s highly complex but, when done right, it seems to run seamlessly in the background. And it’s a key component of the trustworthy AI that is mission-critical for AI acceptance and adoption.

“However, there’s the black box issue. AI applications are based on highly complex models that are hard to understand for those without technical expertise, and the methods used for AI-based systems aren’t always clearly explained: for instance, how a machine learning model reaches a particular decision.”

In most relationships, trust begins with a deep level of understanding, and AI applications are no exception. “Explainable AI makes it easier to understand the ‘how it works’ and ‘why it matters’ behind deeply technical AI methods like neural networks and deep learning,” added Poggemann. “The more AI applications are understood, the more they are trusted — and building this trust from the beginning sets a stronger foundation for successful AI.”

Related Article: Reasons Why Ethical Conversational Design Is Vital for Enterprise AI

XAI Allows Brands To Eliminate Unconscious Bias

One notorious example of unconscious bias occurred at retail giant Amazon, in its failed attempt to use AI to vet job applications. Although Amazon did not deliberately use prejudiced algorithms, its data set reflected hiring trends from the previous decade, and the model suggested hiring similar applicants for open positions.

The data indicated that the majority of those hired were white males, which unfortunately reflects biases within the IT industry itself. Amazon eventually abandoned AI for its prescreening hiring practices and went back to relying on human decisions.

Biases often sneak into AI applications, including:

  • Racial bias
  • Name bias
  • Beauty bias
  • Age bias
  • Accent bias
  • Affinity bias

Thankfully, XAI can help brands detect and root out unconscious biases within AI data sets. Several AI organizations, such as OpenAI and the Future of Life Institute, are working to ensure that AI applications are ethical and equitable for everyone.

Kimberly Nevala, AI strategic advisor at SAS, an analytics software and solutions provider, spoke with CMSWire about the ways XAI can eliminate unconscious biases. While it’s well understood that humans are biased, Nevala said, “it has not always been clear how those biases (both unintentional and purposeful) materialize in our business practices and systems. XAI, in conjunction with existing data analysis methods, allows teams to quickly analyze and test the influence of sensitive variables on model predictions.

“This has been incredibly helpful in health care applications where — for instance — the rate of diagnosis varies widely by gender, ethnicity, socioeconomic status and so forth. Or in assessing credit applications where elements such as individual vs. family incomes, location or number of parking tickets can adversely and often unfairly skew results.”
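A bare-bones version of the kind of test Nevala describes might compare a model’s mean predicted approval rate across a sensitive attribute. The sketch below is a hypothetical construction in scikit-learn, with fabricated data and a deliberately biased label, not a SAS workflow:

```python
# A hedged sketch of testing a sensitive variable's influence on model
# predictions. All columns, names and data here are fabricated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 2000
income = rng.normal(50, 15, n)
group = rng.integers(0, 2, n)              # sensitive attribute, e.g. gender
# A proxy feature that leaks the sensitive attribute into the model.
proxy = group + rng.normal(scale=0.3, size=n)
X = np.column_stack([income, proxy])
y = (income + 10 * group + rng.normal(scale=5, size=n) > 55).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
approval = model.predict_proba(X)[:, 1]

# Demographic parity gap: difference in mean predicted approval rate.
gap = approval[group == 1].mean() - approval[group == 0].mean()
print(f"approval-rate gap between groups: {gap:.2%}")
```

A large gap does not prove the model is unfair on its own, but it tells the team exactly where to start digging.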

Along with exposing unconscious biases, XAI undercuts the debatable defense that a brand or person can’t be held accountable for “unforeseen” outcomes because they could not have been reasonably predicted. “For example, avoiding situations such as pronouns on a resume being used to filter out women — the argument being: that wasn’t our intent, gender isn’t even specified on a person’s resume. Or the severity of a patient’s disease or health needs being based on their billing history (i.e., how many times they’ve been seen by a doctor),” explained Nevala.
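The resume-pronoun failure mode is easy to reproduce in miniature: even when gender is never an explicit field, a text model can learn pronouns as a proxy for it. The tiny corpus and labels below are fabricated purely for demonstration:

```python
# A small illustration of pronouns acting as a gender proxy in a text
# model trained on biased historical labels. The corpus is fabricated.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "she led her team shipping the data platform",
    "he led his team shipping the data platform",
    "she managed her budget and her reports",
    "he managed his budget and his reports",
]
hired = [0, 1, 0, 1]  # biased historical outcomes

vec = CountVectorizer()
X = vec.fit_transform(resumes)
clf = LogisticRegression().fit(X, hired)

# Inspecting weights on pronoun tokens exposes the proxy directly.
for token in ["she", "her", "he", "his"]:
    idx = vec.vocabulary_[token]
    print(f"{token:>4}: weight {clf.coef_[0][idx]:+.2f}")
```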

Related Article: Dealing With AI Biases, Part 1: Acknowledging the Bias

XAI Is Not Just for Consumers

One aspect of XAI that is rarely discussed is that explainability matters to more than consumers. In fact, consider all the end-user AI applications deployed thus far: how often was an accessible, meaningful explanation available? Most often, it wasn’t. Currently, most available explanations serve the various domains involved in creating the AI application: data scientists, engineers, designers, programmers, legal teams and so on.

Information on how and why an AI app functions and makes decisions is not one-size-fits-all. Each domain has different expectations, requirements and needs based on its goals and where it fits into the AI application development process.

The Brookings Institution published a paper titled “Explainability Won’t Save AI” that discussed the varying requirements of each domain and why a single explanation is not sufficient. The paper reasoned that three AI domains, engineering, deployment and governance, each have completely different objectives for explainability. The engineering domain incorporates other engineers’ input, the deployment domain incorporates users’ input, and the governance domain focuses on the AI app’s impact on broader communities and the world. XAI can play a large role in helping each domain improve the effectiveness, efficiency and acceptance of AI, but only if the explanations provided are designed for that domain’s specific goals.

The Future of XAI

Recently, many tools have become available to assist with the creation of XAI applications. Platforms like xAI Workbench, Arize AI and EthicalML/ai enable brands to better understand and explain the inner workings of their AI apps.

“The next frontier of AI is the growth and improvements that will happen in Explainable AI technologies. They will become more agile, flexible and intelligent when deployed across a variety of new industries. XAI is becoming more human-centric in its coding and design,” reflected AJ Abdallat, founder and CEO of Beyond Limits, an enterprise AI software solutions provider.

Surprisingly, the human element is a big part of what’s allowing AI to improve and evolve. “We’ve moved beyond deep learning techniques to embed human knowledge and experiences into the AI algorithms, allowing for more complex decision-making to solve never-seen-before problems — those problems without historical data or references,” said Abdallat.

“Machine learning techniques equipped with encoded human knowledge allow for AI that lets users edit its knowledge base even after it’s been deployed. As they learn by interacting with more problems, data and domain experts, these systems will become significantly more flexible and intelligent. With XAI, the possibilities are truly endless.”

Final Thoughts

While XAI is not a solution to all of the challenges of artificial intelligence, the greater transparency and understanding it provides enables brands to create AI applications that are highly optimized, effective and efficient. Additionally, XAI helps companies eliminate unconscious biases, remain legally compliant and build consumer trust.


