A curated, but probably biased and incomplete, list of awesome machine learning interpretability resources. If you want to contribute to this list (and please do!) read over the contribution guidelines, send a pull request, or contact me @jpatrickhall.
Machine learning algorithms can create more accurate models than linear models, but any increase in accuracy over more traditional, better-understood, and more easily explainable techniques is of little practical value to those who must explain their models to regulators or customers unless the more complex models can also be explained. For many decades, the models created by machine learning algorithms were generally taken to be black boxes. However, a recent flurry of research has introduced credible techniques for interpreting complex, machine-learned models. Materials presented here illustrate applications or adaptations of these techniques for practicing data scientists. Want to contribute your own examples? Just make a pull request.
Monotonicity constraints can turn opaque, complex models into transparent, and potentially regulator-approved, models by ensuring predictions only increase or only decrease for any change in a given input variable. In this notebook, I will demonstrate how to use monotonicity constraints in the popular open source gradient boosting package XGBoost to train a simple, accurate, nonlinear classifier on the UCI credit card default data. Once we have trained a monotonic XGBoost model, we will use partial dependence plots and individual conditional expectation (ICE) plots to investigate the internal mechanisms of the model and to verify its monotonic behavior. Partial dependence plots show us the way machine-learned response functions change based on the values of one or two input variables of interest, while averaging out the effects of all other input variables. ICE plots can be used to create more localized descriptions of model predictions, and ICE plots pair nicely with partial dependence plots. An example of generating regulator-mandated reason codes from high-fidelity Shapley explanations for any model prediction is also presented. The combination of monotonic XGBoost, partial dependence, ICE, and Shapley explanations is likely the most direct way to create an interpretable machine learning model today.
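The sketch below is not the notebook's code, just a minimal illustration of the first two steps: declaring monotonicity constraints when training XGBoost, then computing ICE and partial dependence curves by brute force to verify the constrained behavior. The synthetic `X` and `y` are stand-ins for the UCI credit card default data, and the constraint vector is illustrative.

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the UCI credit card default data (assumption).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# monotone_constraints: +1 = predictions may only increase with the feature,
# -1 = only decrease, 0 = unconstrained; one entry per input column.
params = {
    "objective": "binary:logistic",
    "monotone_constraints": "(1,0,-1)",
    "eta": 0.1,
    "max_depth": 3,
}
model = xgb.train(params, xgb.DMatrix(X_train, label=y_train),
                  num_boost_round=200)

# ICE: vary one feature over a grid while holding each row's other inputs
# fixed; partial dependence is the average of the ICE curves.
grid = np.linspace(X_test[:, 0].min(), X_test[:, 0].max(), 20)
ice = np.empty((len(X_test), len(grid)))
for j, v in enumerate(grid):
    X_mod = X_test.copy()
    X_mod[:, 0] = v
    ice[:, j] = model.predict(xgb.DMatrix(X_mod))
pdp = ice.mean(axis=0)

# With a +1 constraint on the first feature, every ICE curve (and therefore
# the partial dependence curve) must be non-decreasing along the grid.
assert np.all(np.diff(ice, axis=1) >= -1e-9)
```

Because partial dependence is the mean of the ICE curves, plotting both together shows the average effect of a feature and whether individual rows deviate from it.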
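For the reason-code step, here is a similarly hedged sketch of one possible approach: XGBoost exposes per-feature Shapley contributions through `pred_contribs=True`, and the features with the largest positive contributions toward a prediction of default can be reported as that customer's reason codes. The feature names are hypothetical labels for the synthetic columns above.

```python
# Illustrative feature names for the three synthetic columns (assumption).
feature_names = ["PAY_0", "BILL_AMT1", "LIMIT_BAL"]

# pred_contribs=True returns, for each row, one SHAP contribution per
# feature plus a final bias column.
contribs = model.predict(xgb.DMatrix(X_test), pred_contribs=True)

for row in contribs[:3]:
    shap_vals = row[:-1]                 # drop the trailing bias term
    order = np.argsort(shap_vals)[::-1]  # largest contribution first
    # Reason codes: features pushing this prediction toward default,
    # ranked by the size of their positive Shapley contribution.
    codes = [feature_names[i] for i in order if shap_vals[i] > 0][:2]
    print("top reason codes:", codes)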