Explainable Artificial Intelligence (XAI): An Introduction
Technology - January 29, 2021
Alan Turing, widely considered the father of modern computer science, once said, "A
computer would deserve to be called intelligent if it could deceive another human
being into believing that it was human". He referred to the machine's ability to
imitate a human, to have a dialogue that is intelligible and precise.
Seventy years later, Artificial Intelligence (AI) has advanced remarkably.
AI is humanity’s continuing attempt to make the machine more intelligent and
intuitive. Today, AI is working behind the scenes to assist us in our day-to-day
activities. It is powering self-driving and parking vehicles, digital assistants on
our smartphones, and other applications that we use every day such as emails and
social media. AI is also helping researchers and authorities in the fight against
COVID-19. In late December 2019, for example, the AI platform BlueDot reported a
surge in "unusual pneumonia" cases around the Wuhan region in China, alerting
authorities to investigate the matter and take corrective action. Today, AI is
helping researchers with COVID-19 testing and vaccine development.
AI applications empower autonomous systems that perceive, learn, and then decide on
their own. These systems use technologies like Machine Learning and Natural Language
Processing (NLP) to work on large data sets and then give us an output. However,
there is a problem: these systems do not reveal the reasoning behind the
decisions they make. They give us the output, but not the why behind it.
Consider a doctor who is using an AI system. The system recommends a particular
course of action, but the doctor will not be comfortable acting on the
recommendation without knowing the reasoning behind it.
This is a big gap, and one that needs to be filled. AI systems will never earn
people's trust if they fail to reveal their reasoning. Often, even AI designers
cannot explain why their system came to a specific decision. This presents a
"black box" scenario where no one knows what is inside the box or how it works.
This "black box" has the potential to limit the scope and reach of AI itself. To
rephrase Alan Turing: machines will never become intelligent if they cannot hold
a transparent conversation with humans. Today, there is efficiency but not much
communication, and this can be a deal-breaker for many.
Enter Explainable AI!
What is Explainable AI?
In simple terms, Explainable AI (XAI) is an artificial intelligence application that
provides understandable reasoning for how it arrived at a given conclusion.
Explainable AI adds transparency to the "black box" and allows it to be examined and
understood by human practitioners. It is a giant leap forward in making AI
transparent and trustworthy.
Let us look at it in detail.
A usual Machine Learning workflow looks something like this:
First, we use data to train a model through a specific learning process.
The learning process produces a learned function.
Inputs are fed into the learned function.
The machine predicts the output, which the user sees.
Once the function is learned, new inputs can be fed into the model, and the
machine will return a prediction.
Take an example. Sam wants to purchase a used car and is interested in knowing
its price. We take the model and train it with a learning process. Next, we feed
in the inputs (the details of the used car), and the Machine Learning algorithm
returns a prediction of the car's price. Sam can look at the price and decide
whether it would be a good purchase.
The critical thing to note here is that there is a prediction but no
justification. This can confuse Sam, as he must place blind trust in the machine
to make the right decision.
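The workflow above can be sketched in a few lines of code. This is a minimal illustration using scikit-learn and made-up used-car data (age in years and mileage in thousands of kilometres as features); the numbers and feature choice are assumptions for the sake of the example, not a real valuation model.

```python
# Train a toy price model, then feed a new input into the learned function.
# Data and features are illustrative assumptions, not a real valuation model.
from sklearn.linear_model import LinearRegression

# Steps 1-2: data plus a learning process yields a learned function (the model).
X_train = [[2, 30], [5, 60], [8, 40], [10, 120]]   # [age, mileage]
y_train = [17600, 12200, 10800, 2400]              # observed sale prices

model = LinearRegression().fit(X_train, y_train)

# Step 3: feed a new input -- Sam's candidate car -- into the learned function.
sams_car = [4, 60]
predicted_price = model.predict([sams_car])[0]

# Step 4: the user sees only a number, with no justification attached.
print(f"Predicted price: ${predicted_price:,.0f}")
```

Sam gets a single number back. Nothing in the output tells him which details of the car drove the estimate up or down.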
Explainable AI, on the other hand, follows this framework:
As you can see, there is a new learning process here. This learning process not only
gives us the prediction but also explains why it made such a prediction. In the new
output, the user gets additional information on why the prediction was made. The
model for the above example will look something like this:
An explainability layer is added to the ML design to create an Explainable AI model.
The additional layer is crucial because:
Explainability helps to ensure impartiality in decision-making. It helps to
detect and correct any biases in the datasets.
Explainability highlights potential factors that could change the prediction.
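For a simple linear model, one way to sketch such an explanation layer is to report each feature's contribution (its coefficient times its value) alongside the prediction. The example below applies this to the same kind of toy used-car data; the numbers and feature names are made-up assumptions, and production XAI systems typically rely on richer attribution methods such as SHAP or LIME.

```python
# An explanation layer for a linear model: alongside the predicted used-car
# price, report how much each feature pushed the estimate up or down
# (coefficient x feature value). Data and features are illustrative only.
from sklearn.linear_model import LinearRegression

FEATURES = ["age (years)", "mileage (1000 km)"]
X_train = [[2, 30], [5, 60], [8, 40], [10, 120]]
y_train = [17600, 12200, 10800, 2400]

model = LinearRegression().fit(X_train, y_train)

def predict_with_explanation(x):
    """Return the prediction plus a per-feature contribution breakdown."""
    prediction = model.predict([x])[0]
    contributions = {
        name: coef * value
        for name, coef, value in zip(FEATURES, model.coef_, x)
    }
    return prediction, contributions

price, reasons = predict_with_explanation([4, 60])
print(f"Predicted price: ${price:,.0f}")
for name, effect in reasons.items():
    # Negative effects lowered the estimate; positive ones raised it.
    print(f"  {name}: {effect:+,.0f}")
```

Now Sam sees not just a price but how each attribute of the car moved the estimate, which is exactly the kind of justification the plain workflow lacked. A skewed contribution here can also flag a biased feature in the training data.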
XAI: Companies leading the way
Below, we look at a few companies that are pioneering XAI technology and
bringing it to the world.
IBM: IBM completed a comprehensive in-house survey and found that over 60% of
its executives were not comfortable with traditional AI's "black box" approach.
IBM has developed an advanced cloud-based AI tool that uses XAI technology to
provide the reasoning behind its recommendations.
Google: Google has created an XAI-enabled AI platform that provides
explanations for several of Google's features, such as image recognition.
Darwin AI: Founded in 2017, DarwinAI is active in a process known as
"generative synthesis". In this method, DarwinAI uses AI to understand how a
deep learning neural network functions. It offers an explainability toolkit, a
simple-to-use feature that performs network diagnostics.
Flowcast: The company uses its proprietary AI technology to build explainable
credit-assessment models. This has the potential to transform credit regulation
and help financial institutions adhere to regulatory compliance.
Imandra: Imandra is working towards making algorithms fair, explainable, and
safe. It offers a "Reasoning as a service" solution that brings XAI technology
to key domains. It started in the financial sector but has gradually expanded
into other industries such as transportation and robotics.
Kyndi: Kyndi offers a dedicated natural language processing (NLP) platform
aimed at creating auditable AI systems.
Factmata: Factmata was one of the earliest companies to tackle the issue of
online fake news. It uses AI techniques to separate credible news from fake
news, and XAI techniques to justify the classification.
Innovation must go hand in hand with trust; no business or technology can
succeed without it. According to a PwC report, AI will be responsible for GDP
gains of up to $15 trillion by 2030. In a 2017 survey by the same firm, over 70
percent of executives said they believed AI would affect every aspect of an
organization's business. As AI becomes more mainstream, it is important that it
also becomes ethical and accountable, and XAI is a great step forward in that
respect.