django-public-project - Custom Python/Django CMS - Transparency for Public Projects (used for BERwatch/BLBwatch)

  •    JavaScript

Django Public Project (DPP) is a custom CMS for making large public projects, political processes and enquiry commissions more transparent.

mli-resources - Machine Learning Interpretability Resources

  •    Jupyter

Machine learning algorithms create potentially more accurate models than linear models, but any gain in accuracy over more traditional, better-understood, and more easily explainable techniques is of little practical use to those who must explain their models to regulators or customers. For many decades, the models created by machine learning algorithms were generally taken to be black boxes. However, a recent flurry of research has introduced credible techniques for interpreting complex, machine-learned models. Materials presented here illustrate applications or adaptations of these techniques for practicing data scientists. Want to contribute your own examples? Just make a pull request.

interpretable_machine_learning_with_python - Practical techniques for interpreting machine learning models

  •    Jupyter

Monotonicity constraints can turn opaque, complex models into transparent and potentially regulator-approved models by ensuring predictions only increase or only decrease for any change in a given input variable. In this notebook, I will demonstrate how to use monotonicity constraints in the popular open source gradient boosting package XGBoost to train a simple, accurate, nonlinear classifier on the UCI credit card default data.

Once we have trained a monotonic XGBoost model, we will use partial dependence plots and individual conditional expectation (ICE) plots to investigate the internal mechanisms of the model and to verify its monotonic behavior. Partial dependence plots show the way a machine-learned response function changes with the values of one or two input variables of interest, while averaging out the effects of all other input variables. ICE plots create more localized descriptions of model predictions and pair nicely with partial dependence plots.

An example of generating regulator-mandated reason codes from high-fidelity Shapley explanations for any model prediction is also presented. The combination of monotonic XGBoost, partial dependence, ICE, and Shapley explanations is likely the most direct way to create an interpretable machine learning model today.
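A minimal sketch of that workflow, not the notebook's own code: synthetic data stands in for the UCI credit card default set, and the feature names (pay_delay, credit_limit) and constraint directions are illustrative assumptions.

```python
# Sketch of the monotonic-XGBoost workflow described above.
# Synthetic data stands in for the UCI credit card default data;
# feature names and constraint directions are illustrative only.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import shap
import xgboost as xgb
from sklearn.inspection import PartialDependenceDisplay
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = pd.DataFrame({
    "pay_delay": rng.integers(0, 10, n).astype(float),  # months past due
    "credit_limit": rng.normal(50.0, 15.0, n),          # limit, thousands
})
# Simulated default risk: rises with delay, falls with credit limit.
logit = 0.6 * X["pay_delay"] - 0.05 * X["credit_limit"] + rng.normal(0, 1, n)
y = (logit > logit.median()).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Monotone constraints are given per column, in column order:
# +1 forces predictions to be non-decreasing in pay_delay,
# -1 non-increasing in credit_limit.
model = xgb.XGBClassifier(
    n_estimators=200,
    max_depth=3,
    monotone_constraints="(1,-1)",
)
model.fit(X_train, y_train)

# kind="both" overlays the averaged partial dependence curve on the
# per-row ICE curves, so monotonic behavior is visible both globally
# and for individual observations.
PartialDependenceDisplay.from_estimator(
    model, X_test, features=["pay_delay", "credit_limit"], kind="both"
)
plt.show()

# Shapley values give per-prediction contributions; ranking them for a
# single row is one way to produce reason codes for that decision.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)  # log-odds contributions
reason_codes = pd.Series(shap_values[0], index=X.columns)
print(reason_codes.sort_values(ascending=False))
```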

code-mentors - A public directory of mentors willing to mentor you for 4 months — for free.


The Code Mentors Network was started by Calvin Koepke as a way to propagate a mentorship mentality. The rules of the network are a unique blend of turnover, commitment, and accountability. See the master list of available mentors.