live - Local Interpretable (Model-agnostic) Visual Explanations - model visualization for regression problems and tabular data based on the LIME method

  •    R

See the latest changes: an improved Shiny application (see the live_shiny function in the development version).

https://mi2datalab.github.io/live/
https://github.com/MI2DataLab/live

Related Projects

DALEX - Descriptive mAchine Learning EXplanations

  •    R

Machine Learning models are widely used and have various applications in classification and regression tasks. Due to increasing computational power and the availability of new data sources and methods, ML models are more and more complex. Models created with techniques like boosting, bagging, or neural networks are true black boxes. It is hard to trace the link between input variables and model outcomes. They are used because of their high performance, but lack of interpretability is one of their weakest sides. In many applications we need to know, understand, or prove how input variables are used in the model and what impact they have on the final model prediction. DALEX is a set of tools that help to understand how complex models work.

AIX360 - Interpretability and explainability of data and machine learning models

  •    Python

The AI Explainability 360 toolkit is an open-source library that supports interpretability and explainability of datasets and machine learning models. The AI Explainability 360 Python package includes a comprehensive set of algorithms that cover different dimensions of explanations along with proxy explainability metrics. The AI Explainability 360 interactive experience provides a gentle introduction to the concepts and capabilities by walking through an example use case for different consumer personas. The tutorials and example notebooks offer a deeper, data scientist-oriented introduction. The complete API is also available.

Skater - Python Library for Model Interpretation/Explanations

  •    Python

Skater is a unified framework to enable model interpretation for all forms of models, helping one build an interpretable machine learning system often needed for real-world use cases (the team is actively working toward enabling faithful interpretability for all forms of models). It is an open-source Python library designed to demystify the learned structures of a black-box model both globally (inference on the basis of a complete data set) and locally (inference about an individual prediction). The project was started as a research idea to find ways to enable better interpretability (preferably human interpretability) of predictive "black boxes", both for researchers and practitioners. The project is still in beta phase.
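
As a rough illustration of that global/local split, here is a minimal sketch of Skater's documented Interpretation/InMemoryModel pattern; the scikit-learn model and dataset are assumptions, and module paths may differ across Skater versions.

```python
# Sketch only: global feature importance with Skater on an assumed
# scikit-learn classifier; module paths follow the Skater docs.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from skater.core.explanations import Interpretation
from skater.model import InMemoryModel

data = load_iris()
clf = RandomForestClassifier(n_estimators=100).fit(data.data, data.target)

interpreter = Interpretation(data.data, feature_names=data.feature_names)
model = InMemoryModel(clf.predict_proba, examples=data.data)

# Global view: permutation-based feature importance.
print(interpreter.feature_importance.feature_importance(model))
```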

interpret - Fit interpretable models. Explain blackbox machine learning.

  •    C++

Historically, the most intelligible models were not very accurate, and the most accurate models were not intelligible. Microsoft Research has developed an algorithm called the Explainable Boosting Machine (EBM) which has both high accuracy and intelligibility. EBM uses modern machine learning techniques like bagging and boosting to breathe new life into traditional GAMs (Generalized Additive Models). This makes them as accurate as random forests and gradient boosted trees, and also enhances their intelligibility and editability. In addition to EBM, InterpretML also supports methods like LIME, SHAP, linear models, partial dependence, decision trees and rule lists. The package makes it easy to compare and contrast models to find the best one for your needs.
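
A quick sketch of how that looks in practice, following InterpretML's documented scikit-learn-style Python API; the dataset and train/test split here are assumptions:

```python
# Sketch: train an EBM and inspect it globally and locally.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Global explanation: per-feature shape functions and importances.
show(ebm.explain_global())
# Local explanation for individual test predictions.
show(ebm.explain_local(X_test[:5], y_test[:5]))
```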


lime - Local Interpretable Model-Agnostic Explanations (R port of original Python package)

  •    R

lime explains the predictions of black-box models one observation at a time. This is an R port of the Python lime package (https://github.com/marcotcr/lime) developed by the authors of the lime (Local Interpretable Model-agnostic Explanations) approach for black-box model explanations. All credit for the invention of the approach goes to the original developers.

tensorwatch - Debugging, monitoring and visualization for Deep Learning and Reinforcement Learning

  •    Jupyter

TensorWatch is a debugging and visualization tool designed for deep learning and reinforcement learning. It fully leverages Jupyter Notebook to show real-time visualizations and offers unique capabilities to query the live training process without having to sprinkle logging statements all over. You can also use TensorWatch to build your own UIs and dashboards. In addition, TensorWatch leverages several excellent libraries for visualizing model graphs, reviewing model statistics, explaining predictions, and so on. TensorWatch is under heavy development with a goal of providing a research platform for debugging machine learning in one easy-to-use, extensible, and hackable package.
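
A rough sketch of the logging side of that workflow, based on the project's README; the Watcher/stream names may differ across versions, and train_one_epoch is a hypothetical stand-in for your training step:

```python
# Sketch only: stream a metric to TensorWatch so a Jupyter notebook
# can attach to it live; API names may vary by version.
import tensorwatch as tw

w = tw.Watcher(filename="train.log")       # log file a notebook can attach to
loss_stream = w.create_stream(name="loss")

for epoch in range(100):
    loss = train_one_epoch()               # hypothetical training step
    loss_stream.write((epoch, loss))       # queryable live from Jupyter
```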

lucid - A collection of infrastructure and tools for research in neural network interpretability.

  •    Jupyter

Lucid is a collection of infrastructure and tools for research in neural network interpretability. In particular, it provides state of the art implementations of feature visualization techniques, and flexible abstractions that make it very easy to explore new research directions.
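
For instance, the project's introductory notebooks open with a one-call feature visualization along these lines (TF1-era API; the channel name is the README's own example):

```python
# Minimal feature visualization with Lucid, per its tutorial notebook:
# optimize an input image that maximally activates one channel.
import lucid.modelzoo.vision_models as models
from lucid.optvis import render

model = models.InceptionV1()
model.load_graphdef()

_ = render.render_vis(model, "mixed4a_pre_relu:476")
```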

shap - A unified approach to explain the output of any machine learning model.

  •    Jupyter

SHAP (SHapley Additive exPlanations) is a unified approach to explain the output of any machine learning model. SHAP connects game theory with local explanations, uniting several previous methods [1-7] and representing the only possible consistent and locally accurate additive feature attribution method based on expectations (see the SHAP NIPS paper for details). A force-plot explanation shows each feature contributing to push the model output from the base value (the average model output over the training dataset passed in) to the model output. Features pushing the prediction higher are shown in red, those pushing the prediction lower are in blue.
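
A minimal sketch of producing such a force plot, following the README's XGBoost example; the dataset and hyperparameters are illustrative:

```python
# Sketch: explain one XGBoost prediction with SHAP and render a
# force plot (red pushes above the base value, blue below).
import shap
import xgboost

X, y = shap.datasets.boston()
model = xgboost.train({"learning_rate": 0.01}, xgboost.DMatrix(X, label=y), 100)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

shap.initjs()  # load the JS needed for notebook rendering
shap.force_plot(explainer.expected_value, shap_values[0, :], X.iloc[0, :])
```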

awesome-interpretable-machine-learning

  •    Python

Opinionated list of resources facilitating model interpretability (introspection, simplification, visualization, explanation). Software related to papers is mentioned along with each publication. Here only standalone software is included.

mindsdb - Machine Learning in one line of code

  •    Python

MindsDB is an explainable AutoML framework for developers. MindsDB is an automated machine learning platform that allows anyone to gain powerful insights from their data. With MindsDB, users can get fast, accurate, and interpretable answers to any of their data questions within minutes.
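
The "one line of code" refers to the native Predictor interface from MindsDB's early releases; a rough sketch, with the CSV path and column names as assumptions (current versions expose a different API):

```python
# Sketch only: MindsDB's early Predictor API; file and columns assumed.
from mindsdb import Predictor

mdb = Predictor(name="home_rentals")
mdb.learn(from_data="home_rentals.csv", to_predict="rental_price")  # the one line

result = mdb.predict(when={"number_of_rooms": 2, "sqft": 700})
print(result[0])
```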

tensorflow-image-detection - A generic image detection program that uses Google's Machine Learning library, Tensorflow and a pre-trained Deep Learning Convolutional Neural Network model called Inception

  •    Python

A generic image detection program that uses Google's machine learning library TensorFlow and a pre-trained deep learning convolutional neural network model called Inception. This model has been pre-trained for the ImageNet Large Scale Visual Recognition Challenge using data from 2012, and it can differentiate between 1,000 different classes, like Dalmatian, dishwasher, etc. The program applies transfer learning to this existing model and re-trains it to classify a new set of images.
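
This repo drives the retraining through a TensorFlow retrain script; as a separate illustration of the same transfer-learning idea, here is a generic Keras sketch (not this project's code; the class count and dataset are assumptions):

```python
# Sketch: freeze a pretrained Inception base and retrain a small
# classification head on a new set of images.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(include_top=False,
                                         weights="imagenet",
                                         pooling="avg")
base.trainable = False  # keep the ImageNet features frozen

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(5, activation="softmax"),  # 5 new classes (example)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# `train_ds` would be a tf.data.Dataset of (image, label) pairs resized
# to 299x299; omitted here.
# model.fit(train_ds, epochs=5)
```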

Facets - Visualizations for machine learning datasets

  •    Typescript

The Facets project contains two visualizations for understanding and analyzing machine learning datasets: Facets Overview and Facets Dive. The visualizations are implemented as Polymer web components, backed by TypeScript code, and can be easily embedded into Jupyter notebooks or web pages.
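
A sketch of the notebook embedding, adapted from the project's README; it assumes the Jupyter nbextension is installed and uses a hypothetical train.csv:

```python
# Sketch: build dataset statistics and embed Facets Overview in Jupyter.
import base64
import pandas as pd
from facets_overview.generic_feature_statistics_generator import (
    GenericFeatureStatisticsGenerator,
)
from IPython.display import display, HTML

train_df = pd.read_csv("train.csv")  # assumed dataset

proto = GenericFeatureStatisticsGenerator().ProtoFromDataFrames(
    [{"name": "train", "table": train_df}]
)
protostr = base64.b64encode(proto.SerializeToString()).decode("utf-8")

HTML_TEMPLATE = """
<link rel="import" href="/nbextensions/facets-dist/facets-jupyter.html">
<facets-overview id="fo"></facets-overview>
<script>
  document.querySelector("#fo").protoInput = "{protostr}";
</script>"""
display(HTML(HTML_TEMPLATE.format(protostr=protostr)))
```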

Awesome-CoreML-Models - Largest list of models for Core ML (for iOS 11+)

  •    HTML

We've put up the largest collection of machine learning models in Core ML format, to help iOS, macOS, tvOS, and watchOS developers experiment with machine learning techniques. We've created a site with better visualization of the models, CoreML.Store, and are working on more advanced features. If you've converted a Core ML model, feel free to submit an issue.

lime - Lime: Explaining the predictions of any machine learning classifier

  •    Javascript

Our plan is to add more packages that help users understand and interact meaningfully with machine learning. Lime is able to explain any black box classifier, with two or more classes. All we require is that the classifier implements a function that takes in raw text or a numpy array and outputs a probability for each class. Support for scikit-learn classifiers is built-in.
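
A minimal sketch of that contract for tabular data with the Python lime package: any classifier exposing a predict_proba-style function works (the scikit-learn model here is an assumption):

```python
# Sketch: explain one prediction of an arbitrary classifier with lime.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
clf = RandomForestClassifier(n_estimators=100).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
)
# Explain a single observation via its probability function.
exp = explainer.explain_instance(data.data[0], clf.predict_proba, num_features=4)
print(exp.as_list())
```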

Zipline - A Pythonic Algorithmic Trading Library

  •    Python

Zipline is a Pythonic algorithmic trading library. It is an event-driven system that supports both backtesting and live-trading. Zipline is currently used in production as the backtesting and live-trading engine powering Quantopian -- a free, community-centered, hosted platform for building and executing trading strategies. Note: Installing Zipline via pip is slightly more involved than the average Python package. Simply running pip install zipline will likely fail if you've never installed any scientific Python packages before.
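
The event-driven style boils down to two callbacks; a minimal sketch following the README's example (run via the zipline CLI or run_algorithm; the ticker is illustrative):

```python
# Sketch: the two callbacks of a minimal Zipline algorithm.
from zipline.api import order, record, symbol


def initialize(context):
    # One-time setup before the backtest starts.
    pass


def handle_data(context, data):
    # Called on every bar: buy 10 shares and record the price.
    order(symbol("AAPL"), 10)
    record(AAPL=data.current(symbol("AAPL"), "price"))
```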

deep-neuroevolution - Deep Neuroevolution

  •    Python

Our code is based on code from OpenAI, whom we thank. The original code and related paper from OpenAI can be found here. The repo has been modified to run both ES and our algorithms, including our Deep Genetic Algorithm (DeepGA), locally and on AWS. The folder ./visual_inspector contains implementations of VINE, i.e., Visual Inspector for NeuroEvolution, an interactive data-visualization tool for neuroevolution. Refer to README.md in that folder for further instructions on running and customizing your visualization. An article describing this visualization tool can be found here.

neuralconvo - Neural conversational model in Torch

  •    Lua

This is an attempt at implementing Sequence to Sequence Learning with Neural Networks (seq2seq) and reproducing the results in A Neural Conversational Model (aka the Google chatbot). Human: What is the purpose of living? Machine: To live forever.

sockeye - Sequence-to-sequence framework with a focus on Neural Machine Translation based on Apache MXNet

  •    Python

Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar, Artem Sokolov, Ann Clifton and Matt Post (2017): Sockeye: A Toolkit for Neural Machine Translation. In eprint arXiv:cs-CL/1712.05690. If you are interested in collaborating or have any questions, please submit a pull request or issue. You can also send questions to sockeye-dev-at-amazon-dot-com.