Alibi is an open source Python library aimed at machine learning model inspection and interpretation. The focus of the library is to provide high-quality implementations of black-box, white-box, local and global explanation methods for classification and regression models. If you're interested in outlier detection, concept drift or adversarial instance detection, check out our sister project alibi-detect.
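To make the "black-box explanation" idea concrete, here is a minimal, self-contained sketch of a local surrogate explanation (the LIME-style approach): perturb an instance, query the opaque model, and fit a proximity-weighted linear model whose coefficients serve as local feature attributions. The `black_box` function and all names below are hypothetical stand-ins; this is not Alibi's API, which provides tuned implementations of methods such as anchors and counterfactuals.

```python
import numpy as np

def black_box(X):
    # Hypothetical opaque model: a nonlinear function of two features.
    return np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

def local_surrogate(f, x, n_samples=5000, scale=0.1, seed=0):
    """Fit a weighted linear model around x; return per-feature slopes."""
    rng = np.random.default_rng(seed)
    # Sample perturbations in a small neighbourhood of x.
    X = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    y = f(X)
    # Weight samples by proximity to x (Gaussian kernel).
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * scale ** 2))
    # Weighted least squares for [intercept, slope_1, ..., slope_d].
    A = np.hstack([np.ones((n_samples, 1)), X - x])
    Aw = A * w[:, None]
    coef, *_ = np.linalg.lstsq(Aw.T @ A, Aw.T @ y, rcond=None)
    return coef[1:]  # local attribution per feature

x0 = np.array([0.0, 1.0])
attr = local_surrogate(black_box, x0)
# Near x0 the true partial derivatives are cos(0) = 1 and x1 = 1,
# so both attributions should come out close to 1.
print(attr)
```

The surrogate only needs prediction access, which is what makes this family of methods model-agnostic.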
machine-learning interpretability counterfactual explanations xai

Heads Up! It's all about the V1 Spec. This document is more yours than it is mine. It makes me happy that it has been able to help people. To make it better, I moved this document from its original gist to this repo so that multiple people can work on it and improve it together.
shadow-dom web-components guide doubts questions explanations examples custom-elements shadow-roots slot document-fragment light-dom-elements shadow-trees css

InterpretDL, short for interpretations of deep learning models, is a model interpretation toolkit for PaddlePaddle models. The toolkit contains implementations of many interpretation algorithms, including LIME, Grad-CAM and Integrated Gradients, along with several SOTA and newly published algorithms. As deep learning models grow increasingly complex, understanding their internal workings becomes ever harder, and the interpretability of black-box models has become a major research focus. InterpretDL provides a collection of both classical and new algorithms for interpreting these models.
grad-cam convolutional-neural-networks visualizations lime paddlepaddle model-interpretation nlp-models smoothgrad explanations vision-transformer interpretation-algorithms
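As an illustration of one algorithm the toolkit lists, here is a hand-rolled sketch of Integrated Gradients. To keep it self-contained, the "model" is a toy quadratic with a closed-form gradient, so no autodiff framework is needed; InterpretDL computes gradients through the actual PaddlePaddle model for you, and the function names here are illustrative, not the library's API.

```python
import numpy as np

def f_grad(x):
    # Gradient of the toy model f(x) = sum(x_i ** 2), i.e. 2x.
    return 2.0 * x

def integrated_gradients(grad_fn, x, baseline=None, steps=100):
    """Riemann (midpoint) approximation of the path integral
    IG_i = (x_i - x'_i) * integral_0^1 df/dx_i(x' + a (x - x')) da."""
    if baseline is None:
        baseline = np.zeros_like(x)
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.stack([grad_fn(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

x = np.array([1.0, 2.0, -3.0])
attr = integrated_gradients(f_grad, x)
# For f = sum(x^2) with a zero baseline, IG_i = x_i^2, and the attributions
# sum to f(x) - f(baseline): the "completeness" axiom of the method.
print(attr)        # [1. 4. 9.]
print(attr.sum())  # 14.0 = f(x) - f(0)
```

Completeness is a useful sanity check in practice: if the attributions do not sum to the difference in model output, the step count is too low or the baseline is wrong.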