AI Glossary
On this page you'll find important terminology related to the field of XAI.
Algorithmic transparency

Algorithmic transparency deals with the ability of the user to understand the process followed by the model to produce any given output from its input data. Put differently, a linear model is deemed transparent because its error surface can be understood and reasoned about, allowing the user to understand how the model will act in every situation it may face.
Bias detection

Bias can be detected when the user has a high level of appropriate trust and, in that way, can identify the general patterns of errors in the system.
Bounded Rationality

Humans act as “satisficers” who strive for satisfying and sufficient solutions (instead of optimal ones) due to cognitive limitations.
Comprehensibility

Comprehensibility refers to the ability of a learning algorithm to represent its learned knowledge in a human understandable fashion.

Comprehensible systems provide the user with symbols that enable them to draw conclusions on how properties of the input influence the output.
Conversational Agent

Conversational agents (CAs) are technological artifacts with which users interact through natural language, in both written and spoken form.
Decomposability

Decomposability stands for the ability to explain each of the parts of a model (input, parameters, and calculations). It can be considered a form of intelligibility.
Descriptive Accuracy

Degree to which an explanation-generation method accurately describes the behavior of the learned ML model.
Embedding

An embedding is a relatively low-dimensional space into which you can translate high-dimensional vectors. Embeddings make it easier to do machine learning on large inputs like sparse vectors representing words.
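As a minimal sketch (assuming NumPy and a hypothetical three-word toy vocabulary), an embedding can be pictured as a lookup table that maps each word index to a small dense vector instead of a sparse one-hot vector:

```python
import numpy as np

# Toy vocabulary and a randomly initialized embedding table.
# In practice the table would be learned, not random.
vocab = {"cat": 0, "dog": 1, "car": 2}
rng = np.random.default_rng(0)
embedding_matrix = rng.normal(size=(len(vocab), 4))  # (vocab_size, embedding_dim)

def embed(word: str) -> np.ndarray:
    """Return the dense embedding vector for a word in the toy vocabulary."""
    return embedding_matrix[vocab[word]]

print(embed("cat"))        # a 4-dimensional dense vector
print(embed("cat").shape)  # (4,) -- far smaller than a one-hot of vocabulary size
```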
Explainability

Explainability can be viewed as an active characteristic of a model, denoting any action or procedure taken by a model with the intent of clarifying or detailing its internal functions.

Given a certain audience, explainability refers to the details and reasons a model gives to make its functioning clear or easy to understand.

Explainability is associated with the notion of explanation as an interface between humans and a decision maker that is, at the same time, both an accurate proxy of the decision maker and comprehensible to humans.

Explainability is defined as the ability a model has to make its functioning clearer to an audience.

Explainability is related to the notion of explanation as an interface between humans and an AI system. It comprises AI systems that are accurate and comprehensible to humans.

Explainability refers to the active nature of an AI model: any ability or procedure that the model employs in order to clarify or reveal its internal functions.

Explainability provides insight into the DNN’s decision to the end user in order to build trust that the AI is making correct and unbiased decisions based on facts.

Explainability provides insights to a targeted audience to fulfill a need.
Explanation

To explain an event is to provide some information about its causal history.

The goals of explanation involve answering questions such as, "How does it work?" and "What mistakes can it make?" and “Why did it just do that?”

A statement or account that makes something clear; a reason or justification given for an action or belief.

An explanation is additional meta information, generated by an external algorithm or by the machine learning model itself, to describe the feature importance or relevance of an input instance towards a particular output classification.
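To illustrate the last definition, the sketch below generates such meta information for a single instance by perturbing one feature at a time and recording the drop in the model's predicted probability. It assumes scikit-learn and its bundled iris dataset; the perturbation scheme is a simple illustrative choice, not a specific published method.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Train a model, then describe the relevance of each input feature
# for one particular prediction.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

instance = X[0]
baseline = X.mean(axis=0)
predicted_class = model.predict([instance])[0]
original_prob = model.predict_proba([instance])[0][predicted_class]

relevance = []
for i in range(len(instance)):
    perturbed = instance.copy()
    perturbed[i] = baseline[i]                       # "remove" feature i
    prob = model.predict_proba([perturbed])[0][predicted_class]
    relevance.append(original_prob - prob)           # drop in confidence

print(dict(zip(load_iris().feature_names, np.round(relevance, 3))))
```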
Explanation Satisfaction

The extent to which an explanation, or an explanation user interface, is suitable for its intended purpose.
Fidelity

Reflects how accurately the explanation method mirrors the underlying model.
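One common way to quantify fidelity, sketched below under the assumption that scikit-learn is available, is to train an interpretable surrogate on the black-box model's predictions and measure how often the two agree:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Black-box model to be explained.
X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Train the surrogate on the black box's predictions, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: fraction of instances on which the surrogate agrees with the black box.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Fidelity of surrogate to black-box model: {fidelity:.2f}")
```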
Flexibility
Identity

Identical instances should have identical explanations.
Intelligibility

Context-aware systems that seek to act upon what they infer about the context must be able to represent to their users what they know, how they know it, and what they are doing about it.

Intelligibility can help expose the inner workings and inputs of context-aware applications that tend to be opaque to users due to their implicit sensing and actions.
Interpretability

Interpretability is the degree to which a human can understand the cause of a decision.

Interpretability is the degree to which a human can consistently predict the model’s result.

Interpretability refers to a passive characteristic of a model referring to the level at which a given model makes sense for a human observer.

Interpretability is defined as the ability to explain or to provide the meaning in understandable terms to a human.

A system is interpretable if the input-output relationship (its decision or choice) can be formally determined to be optimal or correct, in either a logical or a statistical sense.

Interpretability is defined as the capacity to provide interpretations in terms that are understandable to a human.

Interpretability indicates the degree that an AI model becomes clear to humans in a passive way.

Interpretability enables developers to delve into the model’s decision-making process, boosting their confidence in understanding where the model gets its results. Instead of a simple prediction, the interpretation technique provides an interface that gives additional information or explanations that are essential for interpreting an AI system’s underlying functioning. It aids in opening a door into the black-box model for users with the required knowledge and skills, e.g., developers. The intrinsic properties of a DL model are disclosed through interpretability. This has to do with being able to comprehend how AI models make their decisions. Techniques that explain the internals of an AI model in a manner that humans can comprehend are known as model-intrinsic techniques.

Interpretability is the degree to which the provided insights can make sense for the targeted audience’s domain knowledge.

Interpretable refers to the ability to understand how inputs are processed on a systems level.

Interpretability is a desirable quality or feature of an algorithm which provides enough expressive data to understand how the algorithm works.
Interpretation

Interpretation is a simplified representation of a complex domain, such as outputs generated by a machine learning model, to meaningful concepts which are human-understandable and reasonable.
Justifiability

Justifiability offers a simple way for non-technical users to perceive the inner learning processes of a learning model and allows them to justify the model.
Justification

A justification explains why a decision is good, but does not necessarily aim to give an explanation of the actual decision-making process.

An argument about why a particular decision was made.
Machine Learning

A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.
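A hedged sketch of this T/P/E framing, assuming scikit-learn and its bundled digits dataset: the task T is digit classification, the performance measure P is accuracy on held-out data, and the experience E is the number of labeled training examples.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# T = classifying handwritten digits
# P = accuracy on a held-out test set
# E = labeled training examples
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (50, 200, len(X_train)):               # increasing experience E
    model = LogisticRegression(max_iter=5000).fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"experience E = {n:4d} examples -> performance P = {acc:.2f}")
```

Performance P should improve as E grows, which is exactly the sense in which the program "learns" under this definition.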
Mental Model

Term for a constellation of well-developed knowledge about a system.

Representations or expressions of how a person understands some sort of event, process, or system
Naturalness

Explanations are given in natural language, following a dialogue.
Novelty

The instance should not come from a region in instance space far from the training data.
Overfitting

An undesirable machine learning behavior that occurs when the machine learning model gives accurate predictions for training data but not for new data.
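A minimal sketch of this behavior, assuming scikit-learn: an unconstrained decision tree scores near-perfectly on its training data but noticeably worse on held-out data, and that gap is the signature of overfitting.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An unconstrained tree can memorize the training set.
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("training accuracy:", tree.score(X_train, y_train))   # ~1.00
print("test accuracy:    ", tree.score(X_test, y_test))     # noticeably lower
```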
Predictive Accuracy

Degree to which the learned ML model correctly extracts the underlying data relationships.
Random Forests

Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest.
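As an illustrative sketch (assuming scikit-learn's RandomForestClassifier and the iris dataset), each tree in the ensemble is fit on a bootstrap sample with randomized feature choices, and predictions are aggregated across the trees:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# 100 randomized trees whose votes are combined into one prediction.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(forest, X, y, cv=5).mean())

forest.fit(X, y)
print("number of trees:", len(forest.estimators_))
print("feature importances:", forest.feature_importances_.round(2))
```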
Reinforcement Learning (RL)

The family of algorithms that learns an optimal policy, the goal of which is to maximize reward when interacting with the environment. For example, the ultimate reward in most games is victory. Reinforcement learning systems can become experts at playing complex games by evaluating sequences of previous game moves that ultimately led to winning and sequences that ultimately led to losing.
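A minimal tabular Q-learning sketch of this idea, using only NumPy and a hypothetical five-state corridor environment in which the agent is rewarded for reaching the rightmost state:

```python
import numpy as np

# Toy environment: states 0..4 in a row; the agent starts at 0 and
# earns +1 only when it reaches state 4. Actions: 0 = left, 1 = right.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

for episode in range(500):
    state = 0
    while state != n_states - 1:
        # Epsilon-greedy action selection.
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(np.argmax(Q[state]))
        next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Q-learning update rule.
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

# The learned policy prefers moving right toward the reward.
print("learned policy (0=left, 1=right):", Q.argmax(axis=1))
```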
Reinforcement Learning from Human Feedback (RLHF)

Using feedback from human reviewers to improve the quality of a model's responses. For example, an RLHF mechanism could ask users to rate the quality of the model's response with a 👍 or 👎 emoji. The system can then adjust future responses based on this feedback.
Relevancy

Describes whether the outputs are communicated in a way that provides a particular audience with insights into a chosen domain problem.
Representativeness

Measures how many instances are covered by the explanation.
Responsiveness (XUI)
Self-Explanation

Self-explanation is the “effort after meaning” motivated by an intrinsic need to understand.
Sensitivity

Provided explanations should be informed by the user's knowledge, goal, context and previous interaction.
Separability

Non-identical instances should not have identical explanations.
Simulatability

Simulatability denotes the ability of a model to be simulated or thought about strictly by a human.
Socio-technical system

Relies on the interplay of three key elements: the human who wants to achieve a specific goal, the task that the user must accomplish to achieve that goal, and the technology.
Succinctness

Succinctness indicates how concise and compact the generated explanations are, so that they remain understandable to humans.
Temporal Data

Data recorded at different points in time. For example, winter coat sales recorded for each day of the year would be temporal data.
TensorFlow

A large-scale, distributed, machine learning platform. The term also refers to the base API layer in the TensorFlow stack, which supports general computation on dataflow graphs.

Although TensorFlow is primarily used for machine learning, you may also use TensorFlow for non-ML tasks that require numerical computation using dataflow graphs.
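A small sketch of such a non-ML numerical computation, assuming TensorFlow 2 is installed: tensors are built and combined by operations that TensorFlow executes as a dataflow graph.

```python
import tensorflow as tf

# Plain numerical computation with TensorFlow (no machine learning involved).
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])

product = tf.matmul(a, b)        # matrix multiplication
total = tf.reduce_sum(product)   # sum of all entries

print(product.numpy())
print(float(total))
```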
Theory of mind

The human ability to attribute beliefs, desires, capabilities, goals and mental states to others.
Transparency

A model is considered to be transparent if by itself it is understandable.

A model is considered to be transparent if, by itself, it has the potential to be understandable. In other words, transparency is the opposite of “black-box".

Transparency clearly describes the model structure, equations, parameter values, and assumptions to enable interested parties to understand the model.

Transparency is a “level to which a system provides information about its internal workings or structure”.
Trust

An attitude formed by information about the system and previous experiences.

The user’s willingness to act based on the recommendation of the system and their confidence in the correctness of the prediction.
Underfitting

The counterpart of overfitting, happens when a machine learning model is not complex enough to accurately capture relationships between a dataset's features and a target variable.
Understandability

Understandability (or equivalently, intelligibility) denotes the characteristic of a model to make a human understand its function – how the model works – without any need for explaining its internal structure or the algorithmic means by which the model processes data internally.
User Interface

A computer-mediated means to facilitate communication between human beings or between a human being and an artifact. The user interface embodies both physical and communicative aspects of input and output, or interactive activity. The user interface includes both physical objects and computer systems (hardware and software, which includes applications, operating systems, and networks). A user interface may be said to consist of user-interface components. Reasonable synonyms for user interface include human-computer interface and human-human interface. This last term seems appropriate for an era in which computers themselves disappear, leaving only “smart” ritual objects and displays, such as “smart eyeglasses,” “smart clothes” and “smart rooms.”
XAI (Explainable Artificial Intelligence)

A suite of machine learning techniques that enables human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.

Given an audience, an explainable Artificial Intelligence is one that produces details or reasons to make its functioning clear or easy to understand.

Explainable Artificial Intelligence (XAI) systems aim to increase the interpretability and comprehensibility of AI systems, where interpretable refers to the ability to understand how inputs are processed on a systems level, whereas comprehensible systems provide the user with symbols that enable them to draw conclusions on how properties of the input influence the output.

XAI focuses on developing explainable techniques that empower end-users in comprehending, trusting, and efficiently managing the new age of AI systems.
XUI (Explainable User Interfaces)

Sum of outputs of an XAI system that the user can directly interact with.

Explanatory XUI: conveys a single explanation (static explanation).

Exploratory XUI: lets users freely explore the ML model's behavior (interactive explanation).
