Explainable AI with Python
By Leonida Gianfagna and Antonio Di Cecco
Explainable AI with Python provides a comprehensive overview of the concepts and techniques available today to make machine learning systems more understandable. The approaches presented can be applied to almost all current machine learning models, such as linear and logistic regression, deep neural networks, and models for natural language processing and image recognition.
Advances in machine learning are helping to increase the use of artificial agents capable of performing critical tasks previously handled by humans (such as in healthcare or legal and financial activities). Although the principles guiding the design of these agents are clear, most of the deep-learning models used remain “opaque” to human understanding. The book fills the current gap in the literature dealing with this topic, adopting both a theoretical and practical perspective, and making the reader quickly able to work with the tools and code used for Explainable AI systems.
Starting with examples of what Explainable AI (XAI) is and why it is so necessary, the book details different approaches to XAI depending on the context and specific needs. Practical examples of models that can be interpreted with the use of Python are then presented, showing how intrinsically interpretable models can be interpreted and how “human-understandable” explanations can be produced. The book then shows how model-agnostic methods for XAI produce explanations without relying on the inner workings of “opaque” ML models. Using examples from Computer Vision, the authors then examine explainable models for Deep Learning and potential methods for the future. With a practical perspective, the authors also demonstrate how to effectively use ML and XAI in science. The final chapter explains Adversarial Machine Learning and how adversarial examples can be used in XAI.
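To give a flavour of what a model-agnostic explanation looks like in practice, here is a minimal sketch of permutation importance, one of the simplest such techniques: the model is treated as a black box that is only queried through its predictions, and a feature's importance is measured by how much the error grows when that feature's values are shuffled. This is an illustrative example written for this post, not code from the book; the toy data, the stand-in linear "black box", and the function names are all assumptions.

```python
import numpy as np

# Toy data: y depends strongly on feature 0, weakly on feature 1,
# and not at all on feature 2.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# A stand-in black-box model: any fitted regressor with a predict(X) API
# would do; the explanation below never looks at its internals.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda X: X @ coef

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Importance of feature j = average increase in MSE after shuffling column j."""
    rng = np.random.default_rng(seed)
    base_mse = np.mean((predict(X) - y) ** 2)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the link between feature j and y
            importances[j] += np.mean((predict(Xp) - y) ** 2) - base_mse
    return importances / n_repeats

imp = permutation_importance(predict, X, y)
print(imp)  # feature 0 dominates; feature 2 is near zero
```

Because the method only needs `predict`, swapping the linear model for a random forest or a neural network changes nothing in the explanation code, which is precisely the appeal of model-agnostic approaches.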
If you are interested in the book Explainable AI with Python, click here.
The Authors
Leonida Gianfagna
Leonida Gianfagna (PhD, MBA) is a theoretical physicist currently working in Cybersecurity as R&D Director for Cyber Guru. Prior to joining Cyber Guru, he worked at IBM for 15 years in leading roles in IT Service Management (ITSM) software development. He is the author of several publications in the fields of theoretical physics and computer science and is accredited as an IBM Master Inventor.
Antonio Di Cecco
Antonio Di Cecco is a theoretical physicist with a solid mathematical background who is actively engaged in AIML training. The strength of his approach lies in deepening the mathematical foundations of AIML models, which opens up new ways to present AIML knowledge and room for improvement over the existing state of the art.