Driving product success with explainable AI

September 20, 2021

Until recently, I did not trust the auto-park feature on my electric vehicle because it seldom worked. I assumed that because it was driven by complex AI, the engineers had thought of everything and I should be able to trust it to “just work.” My distrust only started to change as I heard simpler explanations of the feature from other vehicle owners. Once I understood that the car detects objects (using sensors, not cameras), I changed my behavior and started choosing open parking spots between two parked vehicles. The manufacturer took this feedback from thousands of vehicle owners and decided to completely rewrite the auto-parking feature around cameras and vision AI.

To a product manager, this example raises several product questions while highlighting what works in an agile product lifecycle. The product should have “just worked.” Was my trust in AI misplaced? Why did the manufacturer not create more awareness about the feature so customers could use it more successfully? While one part of me feels like the “airplane is being built while it’s flown,” I also appreciate not having to wait another decade for the product to be perfected. Getting a product into the hands of users provides valuable feedback that leads to constant improvement. Through a cycle of open dialogue between creators and users, and transparent improvements in product performance, customer confidence in the product grows over time.

Building trust in AI is a journey we have embarked on unwittingly. Despite some setbacks, we have seen an overall increase in the adoption of AI-driven products across many industries, and financial services is no exception. In some scenarios, such as credit card fraud detection, AI “just works” behind the scenes, screening transactions and constantly improving itself. In other cases, AI is front and center, personalizing the customer experience. Either way, AI is helping drive the user experience, whether the user is an account holder or a back-office employee. And users keep asking the same kinds of questions: Why was my card declined? Why was our cash-on-hand forecast so far off the mark this month? How did this marketing campaign achieve such a high conversion rate?

Being able to explain decisions and recommendations driven by AI is critical to building trust, which in turn propels adoption of AI-driven products.

The National Institute of Standards and Technology (NIST) recently published four key principles of Explainable AI (XAI) that can guide the design of AI-driven products. Future articles will discuss each of these principles in detail; here, let us introduce them with sample scenarios from the financial services industry.

Explanation: The product presents the reasoning behind the decision or recommendation.

  • An online loan application by a customer is declined. The online user experience offers a simple-to-understand explanation, with the top five decision factors presented in order of importance. Concurrently, the decision and the detailed explanation are stored on the backend for auditability and to answer any future customer queries.
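As a rough illustration of how such an explanation might be assembled, the sketch below ranks decision factors by their contribution to a simple linear scoring model. The feature names, weights, threshold, and applicant values are all hypothetical, not an actual loan-decisioning model.

```python
# Minimal sketch: surface the top decision factors behind a declined loan
# application, assuming a simple linear scoring model. Feature names,
# weights, and the approval threshold are hypothetical.

APPROVAL_THRESHOLD = 0.0

# Hypothetical model weights (positive values push toward approval).
WEIGHTS = {
    "credit_score": 0.6,
    "debt_to_income_ratio": -0.8,
    "years_employed": 0.3,
    "recent_delinquencies": -0.9,
    "loan_to_value_ratio": -0.4,
}

def explain_decision(applicant: dict, top_n: int = 5) -> dict:
    """Score an application and rank the factors by their contribution."""
    contributions = {
        feature: WEIGHTS[feature] * applicant[feature] for feature in WEIGHTS
    }
    score = sum(contributions.values())
    # Rank factors by absolute contribution, most influential first.
    top_factors = sorted(
        contributions.items(), key=lambda kv: abs(kv[1]), reverse=True
    )[:top_n]
    return {
        "approved": score >= APPROVAL_THRESHOLD,
        "score": round(score, 3),
        "top_factors": top_factors,  # persisted for audit and future queries
    }

# Example: normalized applicant inputs (hypothetical values).
decision = explain_decision({
    "credit_score": 0.4,
    "debt_to_income_ratio": 0.9,
    "years_employed": 0.2,
    "recent_delinquencies": 1.0,
    "loan_to_value_ratio": 0.7,
})
print(decision)  # declined, with the top factors ordered by importance
```

The same ranked factors that drive the customer-facing explanation can be written to the audit store, so the explanation shown to the user and the record kept for compliance never diverge.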

Relevance: The product provides explanations that are understandable to individual users.

  • The AI model suggests that a 401(k) participant rebalance their investments based on their stated risk tolerance and projected retirement age. The recommendation is presented in simple, easy-to-understand language that has been tested for usability with other 401(k) participants.

Explanation accuracy: The explanation correctly reflects the underlying process used by the AI. This is different from the accuracy of the AI model. Explaining the underlying process becomes harder as the AI model becomes more complex and accurate.

  • In the 401(k) example above, the explanation can summarize how the underlying data is gathered, what subset is used to train the AI model, and how the model is monitored to ensure it remains accurate over time. Including specifics of the underlying algorithm would make the process harder to explain and could also divulge the “secret sauce” behind the model.
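As a rough sketch, the kind of process summary described above can be captured as structured metadata that travels with the model; all field names and values below are hypothetical.

```python
# Minimal sketch: record how the model was built and how it is monitored,
# so the explanation of the process stays accurate without exposing the
# underlying algorithm. All field names and values are hypothetical.

model_process_summary = {
    "model_name": "retirement_rebalancing_recommender",
    "data_sources": [
        "participant risk-tolerance questionnaire",
        "current 401(k) allocations",
        "projected retirement age",
    ],
    "training_subset": "participants who opted in to data use, 2018-2020",
    "monitoring": {
        "drift_check": "monthly comparison of input distributions",
        "accuracy_review": "quarterly back-test against realized outcomes",
    },
    # Deliberately omitted: algorithm internals and tuned parameters.
}
```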

Knowledge limits: The product declares the limits of the AI model, which operates only under conditions for which it was designed or when it reaches sufficient confidence in its output.

  • A smartphone application for detecting counterfeit currency is powered by an AI model. When the application cannot determine with sufficient confidence whether a note is counterfeit, the following message appears: “We detected a currency note but are unable to determine whether it is fake. This application is trained to detect US dollar banknotes only.”
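A minimal sketch of how such a knowledge-limit check might look in application code is shown below. The classifier output format, confidence threshold, and message wording are assumptions based on the example above, not an actual product implementation.

```python
# Minimal sketch: enforce a model's knowledge limits before surfacing a
# result to the user. The prediction format and threshold are hypothetical.

CONFIDENCE_THRESHOLD = 0.90

def respond(prediction: dict) -> str:
    """Turn a raw model prediction into a user-facing message, refusing to
    answer when the model is outside its knowledge limits."""
    if prediction["currency"] != "USD":
        # Outside the conditions the model was designed for.
        return ("We detected a currency note but are unable to determine "
                "whether it is fake. This application is trained to detect "
                "US dollar banknotes only.")
    if prediction["confidence"] < CONFIDENCE_THRESHOLD:
        # Not enough confidence: decline to answer rather than guess.
        return ("We detected a US dollar banknote but cannot determine "
                "with confidence whether it is genuine.")
    return "Likely counterfeit" if prediction["is_counterfeit"] else "Likely genuine"

# Example: a low-confidence prediction on a non-USD note (hypothetical values).
print(respond({"currency": "EUR", "confidence": 0.55, "is_counterfeit": False}))
```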

To succeed with products powered by AI, practitioners need to provide clear, accurate, and reasonably transparent explanations of what is happening behind the scenes. Explaining AI becomes more challenging as models grow more complex in pursuit of higher accuracy, creating a tradeoff between explainability and model accuracy. Meaningful and accurate explanations empower end users to adapt their behavior or appeal decisions.

For data scientists, explanations make it easier to improve, maintain, and deploy products that have a better chance of succeeding in the market. Explainable AI shouldn’t be an afterthought, but a design consideration at every step of the product lifecycle.

About the Author
Rajiv Kaushik, Data Solutions Group, FIS
