Learning Resources
Last Updated: 12/01/23
On this page you'll find important learning resources related to the field of explainable AI (XAI). Please note that we don't guarantee the reliability or relevance of the content. The field of explainable AI is constantly evolving, and new research may lead to updates and revisions of the provided sources. Users are encouraged to verify the information independently.

Resources on this page by type:
  • Course (5)
  • Research paper (5)
  • Video (3)
  • Book (2)
YouTube | DeepFindr: Understanding Convolutional Neural Networks
Florian Rottach

This is a three-part video series covering both the theory of convolutional neural networks (CNNs) and practical programming with them in PyTorch. Suitable for ML beginners.
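
To give a flavor of what the series builds, here is a minimal sketch of a small PyTorch CNN; the architecture is our own illustration (assuming 28x28 grayscale inputs), not taken from the videos.

    # Minimal CNN sketch: two conv/pool stages and a linear classifier head.
    # Assumes 28x28 grayscale inputs (e.g. MNIST); not taken from the series.
    import torch
    import torch.nn as nn

    class SmallCNN(nn.Module):
        def __init__(self, num_classes: int = 10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 28x28 -> 28x28
                nn.ReLU(),
                nn.MaxPool2d(2),                             # 28x28 -> 14x14
                nn.Conv2d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),                             # 14x14 -> 7x7
            )
            self.classifier = nn.Linear(32 * 7 * 7, num_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x).flatten(start_dim=1))

    model = SmallCNN()
    logits = model(torch.randn(8, 1, 28, 28))  # a batch of 8 grayscale images
    print(logits.shape)                        # torch.Size([8, 10])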

Kaggle | Intro to Machine Learning
Dan Becker

Learn the core ideas in machine learning and build your first models. The course covers seven lessons and takes about three hours to complete.

Applied Machine Learning: The power of algorithms
Derek Jedamski

In the first installment of the Applied Machine Learning series, instructor Derek Jedamski covered foundational concepts, providing a general recipe for tackling any machine learning problem in a pragmatic, thorough manner. In this 2.5-hour course, the second and final installment in the series, Derek builds on that foundation by exploring a variety of algorithms, from logistic regression to gradient boosting, and shows how to set up a structure that guides you through picking the best one for the problem at hand.
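
As a taste of that workflow, here is a small sketch (our illustration, not course material) that evaluates two candidate algorithms under the same cross-validation protocol and lets the scores guide the choice:

    # Compare candidate algorithms under one cross-validation protocol.
    # Dataset and model choices are illustrative, not taken from the course.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = load_breast_cancer(return_X_y=True)
    candidates = {
        "logistic regression": LogisticRegression(max_iter=5000),
        "gradient boosting": GradientBoostingClassifier(random_state=0),
    }
    for name, clf in candidates.items():
        scores = cross_val_score(clf, X, y, cv=5)  # 5-fold accuracy
        print(f"{name}: {scores.mean():.3f} (+/- {scores.std():.3f})")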

Coursera | Supervised Machine Learning
Andrew Ng, DeepLearning.AI

This course covers three modules and is part of the Machine Learning Specialization, a foundational online program developed in collaboration between DeepLearning.AI and Stanford Online. In this beginner-friendly program, you will learn the fundamentals of machine learning and how to use these techniques to develop real-world AI applications. The specialization is taught by Andrew Ng, an AI visionary who conducted crucial research at Stanford University.

YouTube | Tübingen Machine Learning
University of Tübingen

The YouTube channel of the machine learning groups at the University of Tübingen covers a wide range of topics within machine learning, from deep learning to probabilistic machine learning and the numerics of machine learning. We highly recommend the Introduction to Machine Learning lecture.

Kaggle | Learn Machine Learning Explainability
Dan Becker

Extract human-understandable insights from any model.
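
For a taste of the techniques taught there, here is a hedged sketch of permutation importance with scikit-learn (synthetic data for illustration; the course works with real Kaggle datasets):

    # Permutation importance: score each feature by how much shuffling it
    # hurts validation accuracy. Synthetic data, for illustration only.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                               random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    result = permutation_importance(model, X_val, y_val, n_repeats=10,
                                    random_state=0)
    for i in result.importances_mean.argsort()[::-1]:
        print(f"feature {i}: {result.importances_mean[i]:.3f}"
              f" +/- {result.importances_std[i]:.3f}")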

MIT | Introduction to Deep Learning
Alexander Amini, Ava Amini, Sadhana Lolla

MIT’s introductory program on deep learning methods, with applications to computer vision, natural language processing, biology, and more! Students will gain foundational knowledge of deep learning algorithms and get practical experience building neural networks in TensorFlow. The program concludes with a project proposal competition, with feedback from staff and a panel of industry sponsors. Prerequisites assume calculus (i.e., taking derivatives) and linear algebra (i.e., matrix multiplication); we’ll try to explain everything else along the way! Experience in Python is helpful but not necessary. Listeners are welcome!

YouTube | DeepFindr: Explainable AI Explained
Florian Rottach

An introduction to what explainable AI is, which problems it aims to solve, and which methods exist.

Human Factors (journal)
Trust in Automation: Designing for Appropriate Reliance
John D. Lee, Katrina A. See

Automation is often problematic because people fail to rely upon it appropriately. Because people respond to technology socially, trust influences reliance on automation. In particular, trust guides reliance when complexity and unanticipated situations make a complete understanding of the automation impractical. This review considers trust from the organizational, sociological, interpersonal, psychological, and neurological perspectives. It considers how the context, automation characteristics, and cognitive processes affect the appropriateness of trust. The context in which the automation is used influences automation performance and provides a goal-oriented perspective to assess automation characteristics along a dimension of attributional abstraction. These characteristics can influence trust through analytic, analogical, and affective processes. The challenges of extrapolating the concept of trust in people to trust in automation are discussed. A conceptual model integrates research regarding trust in automation and describes the dynamics of trust, the role of context, and the influence of display characteristics. Actual or potential applications of this research include improved designs of systems that require people to manage imperfect automation.

ISBN / ISSN
0018-7208
Artificial Intelligence (journal)
Explanation in artificial intelligence: Insights from the social sciences
Tim Miller

There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to provide more transparency to their algorithms. Much of this research is focused on explicitly explaining decisions or actions to a human observer, and it should not be controversial to say that looking at how humans explain to each other can serve as a useful starting point for explanation in artificial intelligence. However, it is fair to say that most work in explainable artificial intelligence uses only the researchers’ intuition of what constitutes a ‘good’ explanation. There exist vast and valuable bodies of research in philosophy, psychology, and cognitive science of how people define, generate, select, evaluate, and present explanations, which argues that people employ certain cognitive biases and social expectations to the explanation process. This paper argues that the field of explainable artificial intelligence can build on this existing research, and reviews relevant papers from philosophy, cognitive psychology/science, and social psychology, which study these topics. It draws out some important findings, and discusses ways that these can be infused with work on explainable artificial intelligence.

ISBN / ISSN
0004-3702
Human-Computer Interaction (INTERACT 2021)
Human-XAI Interaction: A Review and Design Principles for Explanation User Interfaces
Michael Chromik, Andreas Butz

The interdisciplinary field of explainable artificial intelligence (XAI) aims to foster human understanding of black-box machine learning models through explanation-generating methods. Although the social sciences suggest that explanation is a social and iterative process between an explainer and an explainee, explanation user interfaces and their user interactions have not been systematically explored in XAI research yet. Therefore, we review prior XAI research containing explanation user interfaces for ML-based intelligent systems and describe different concepts of interaction. Further, we present observed design principles for interactive explanation user interfaces. With our work, we inform designers of XAI systems about human-centric ways to tailor their explanation user interfaces to different target audiences and use cases.

ISBN / ISSN
978-3-030-85616-8
Information Fusion (journal)
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
Barredo Arrieta et al.

In the last few years, Artificial Intelligence (AI) has achieved a notable momentum that, if harnessed appropriately, may deliver the best of expectations over many application sectors across the field. For this to occur shortly in Machine Learning, the entire community stands in front of the barrier of explainability, an inherent problem of the latest techniques brought by sub-symbolism (e.g. ensembles or Deep Neural Networks) that were not present in the last hype of AI (namely, expert systems and rule-based models). Paradigms underlying this problem fall within the so-called eXplainable AI (XAI) field, which is widely acknowledged as a crucial feature for the practical deployment of AI models. The overview presented in this article examines the existing literature and contributions already done in the field of XAI, including a prospect toward what is yet to be reached. For this purpose we summarize previous efforts made to define explainability in Machine Learning, establishing a novel definition of explainable Machine Learning that covers such prior conceptual propositions with a major focus on the audience for which the explainability is sought. Departing from this definition, we propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including those aimed at explaining Deep Learning methods, for which a second dedicated taxonomy is built and examined in detail. This critical literature analysis serves as the motivating background for a series of challenges faced by XAI, such as the interesting crossroads of data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability and accountability at its core. Our ultimate goal is to provide newcomers to the field of XAI with a thorough taxonomy that can serve as reference material in order to stimulate future research advances, but also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors, without any prior bias for its lack of interpretability.

ISBN / ISSN
1566-2535
AI Magazine
DARPA’s Explainable Artificial Intelligence Program
David Gunning, David W. Aha

Dramatic success in machine learning has led to a new wave of AI applications (for example, transportation, security, medicine, finance, defense) that offer tremendous benefits but cannot explain their decisions and actions to human users. DARPA’s explainable artificial intelligence (XAI) program endeavors to create AI systems whose learned models and decisions can be understood and appropriately trusted by end users. Realizing this goal requires methods for learning more explainable models, designing effective explanation interfaces, and understanding the psychological requirements for effective explanations. The XAI developer teams are addressing the first two challenges by creating ML techniques and developing principles, strategies, and human-computer interaction techniques for generating effective explanations. Another XAI team is addressing the third challenge by summarizing, extending, and applying psychological theories of explanation to help the XAI evaluator define a suitable evaluation framework, which the developer teams will use to test their systems. The XAI teams completed the first phase of this 4-year program in May 2018. In a series of ongoing evaluations, the developer teams are assessing how well their XAI systems’ explanations improve user understanding, user trust, and user task performance.

ISBN / ISSN
0738-4602
Explainable Artificial Intelligence
Uday Kamath, John Liu

A book about the rapid integration of Artificial Intelligence (AI) into various aspects of life, emphasizing the transformative impact and disruptions it brings to human experiences. It highlights both significant benefits and risks, with a primary focus on technical implementations and associated risks such as understandability, explainability, and transparency. Uday Kamath and John Liu’s book on explainable AI (XAI) is a comprehensive guide for the AI/ML community, covering various dimensions of AI technical risk and providing in-depth coverage of XAI techniques, from traditional white-box models to advanced black-box models, with a focus on applications in natural language processing, computer vision, and time series.

ISBN / ISSN
978-3-030-83355-8
Interpretable Machine Learning
Christoph Molnar

Machine learning has great potential for improving products, processes and research. But computers usually do not explain their predictions, which is a barrier to the adoption of machine learning. This book is about making machine learning models and their decisions interpretable.

ISBN / ISSN
9798411463330
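
As one example of the model-agnostic methods the book covers, here is a short scikit-learn sketch of a partial dependence plot; the dataset and model are our own choices for illustration, not taken from the book.

    # Partial dependence: how the model's prediction changes, on average,
    # as one feature varies. Dataset and model chosen for illustration only.
    import matplotlib.pyplot as plt
    from sklearn.datasets import fetch_california_housing
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.inspection import PartialDependenceDisplay

    X, y = fetch_california_housing(return_X_y=True, as_frame=True)
    model = GradientBoostingRegressor(random_state=0).fit(X, y)

    # Average predicted house value as median income (MedInc) varies.
    PartialDependenceDisplay.from_estimator(model, X, features=["MedInc"])
    plt.show()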