
shapash - 🔅 Shapash makes Machine Learning models transparent and understandable by everyone

  •    Jupyter

Shapash is a Python library that aims to make machine learning interpretable and understandable by everyone. It provides several types of visualization with explicit labels that everyone can understand. Data scientists can understand their models easily and share their results, and end users can understand the decision proposed by a model through a summary of the most influential criteria.
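
As a rough illustration of the intended workflow, the hedged sketch below wraps a scikit-learn model in Shapash's SmartExplainer and computes contributions; the exact constructor and method signatures may vary between Shapash versions.

    # Hedged sketch of a typical Shapash workflow; signatures may differ by version.
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor
    from shapash import SmartExplainer

    data = load_diabetes(as_frame=True)
    X, y = data.data, data.target
    model = RandomForestRegressor(n_estimators=50).fit(X, y)

    xpl = SmartExplainer(model=model)
    xpl.compile(x=X)                  # compute local contributions for each row
    xpl.plot.features_importance()    # global importance plot with explicit labels
    # xpl.run_app() would launch the interactive exploration web app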

lime-flutter - Lime client built using flutter

  •    Dart

Lime is a social media app that allows you to post images and text messages that are visible within a certain area. Lime was originally built in Java as a native Android app. To provide an iOS version as well, the app is being rebuilt using Flutter. We will release the iOS Flutter version soon; all basic features are working, and the plan is to replace the native app once the Flutter version becomes powerful enough, providing a unified experience from one codebase. We will also build a web version, which will also use Dart as its primary language. Lime is built using Dart and the amazing Flutter ❤️ For help getting started with Flutter, view our online documentation.

lime-vscode-extension - Visual Studio Code extension for Lime support

  •    Haxe

The Lime extension for Visual Studio Code adds code completion and inline documentation, populates the Haxe dependency tree, and provides build, clean, test, and other tasks automatically. It depends on the Haxe extension and requires Haxe 3.4.2 or greater. You should also have Lime installed and properly set up.

interpretable_machine_learning_with_python - Practical techniques for interpreting machine learning models

  •    Jupyter

Monotonicity constraints can turn opaque, complex models into transparent and potentially regulator-approved models by ensuring predictions only increase or only decrease for any change in a given input variable. In this notebook, I will demonstrate how to use monotonicity constraints in the popular open source gradient boosting package XGBoost to train a simple, accurate, nonlinear classifier on the UCI credit card default data. Once we have trained a monotonic XGBoost model, we will use partial dependence plots and individual conditional expectation (ICE) plots to investigate the internal mechanisms of the model and to verify its monotonic behavior. Partial dependence plots show us the way machine-learned response functions change based on the values of one or two input variables of interest, while averaging out the effects of all other input variables. ICE plots can be used to create more localized descriptions of model predictions, and they pair nicely with partial dependence plots. An example of generating regulator-mandated reason codes from high-fidelity Shapley explanations for any model prediction is also presented. The combination of monotonic XGBoost, partial dependence, ICE, and Shapley explanations is likely the most direct way to create an interpretable machine learning model today.
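
To make the monotonicity technique concrete, here is a hedged sketch of training a constrained XGBoost classifier on toy data; the feature names and data are illustrative stand-ins, not the UCI credit card default dataset used in the notebook.

    # Hedged sketch: monotonic gradient boosting with XGBoost on synthetic data.
    import numpy as np
    import xgboost as xgb

    rng = np.random.default_rng(0)
    X = rng.random((1000, 3))   # illustrative features, e.g. delay, limit, age
    y = (X[:, 0] - X[:, 1] + 0.1 * rng.standard_normal(1000) > 0).astype(int)

    params = {
        "objective": "binary:logistic",
        # One constraint per feature:
        #  +1 -> predictions may only increase as the feature increases
        #  -1 -> predictions may only decrease
        #   0 -> unconstrained
        "monotone_constraints": "(1,-1,0)",
    }
    model = xgb.train(params, xgb.DMatrix(X, label=y), num_boost_round=100)

Partial dependence and ICE plots over each constrained feature can then confirm that the learned response moves only in the permitted direction.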

eoslime - Complete EOSIO framework for development, testing, and deployment in JavaScript

  •    Javascript

EOS development and deployment framework based on eosjs. The framework's main purpose is to make unit testing, deployment, and compilation simpler and easier. For example, Account.create(name, privateKey, ?creator) covers the case where you have already generated a private key and chosen a name for your account, and only need to create the account on chain.

InterpretDL - InterpretDL: Interpretation of Deep Learning Models, a model interpretability algorithm library built on PaddlePaddle (飞桨)

  •    Jupyter

InterpretDL, short for interpretations of deep learning models, is a model interpretation toolkit for PaddlePaddle models. The toolkit contains implementations of many interpretation algorithms, including LIME, Grad-CAM, Integrated Gradients, and more; some state-of-the-art and newly proposed algorithms are also implemented. Increasingly complicated deep learning models make it very difficult for people to understand their internal workings, and interpretability of black-box models has become a major research focus. InterpretDL provides a collection of both classical and new algorithms for interpreting models.
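
As a hedged sketch of how one of these interpreters is typically invoked (class and argument names follow InterpretDL's examples but may differ across versions, and the image path is illustrative):

    # Hedged sketch: LIME explanation of a PaddlePaddle image classifier.
    import interpretdl as it
    from paddle.vision.models import resnet50

    paddle_model = resnet50(pretrained=True)
    lime = it.LIMECVInterpreter(paddle_model)
    lime_weights = lime.interpret(
        'assets/catdog.png',    # illustrative input image path
        num_samples=1000,       # perturbed samples the local surrogate is fit on
        batch_size=50,
        save_path='lime_result.png',
    )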
