
tpot - A Python Automated Machine Learning tool that optimizes machine learning pipelines using genetic programming

  •    Python

Consider TPOT your Data Science Assistant. TPOT is a Python Automated Machine Learning tool that optimizes machine learning pipelines using genetic programming. TPOT automates the most tedious part of machine learning by intelligently exploring thousands of possible pipelines to find the best one for your data.
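A minimal sketch of a typical run, assuming the classic TPOT estimator API and using scikit-learn's digits dataset purely for illustration:

```python
# Sketch of a TPOT classification run (classic TPOT API assumed).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Evolve pipelines with genetic programming; small budgets keep the demo fast.
tpot = TPOTClassifier(generations=5, population_size=20,
                      verbosity=2, random_state=42)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))

# Export the best pipeline found as a standalone scikit-learn script.
tpot.export('best_pipeline.py')
```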

xgboost - Scalable, Portable and Distributed Gradient Boosting (GBDT, GBRT or GBM) Library, for Python, R, Java, Scala, C++ and more

  •    C++

XGBoost is an optimized distributed gradient boosting library designed to be highly efficient, flexible, and portable. It implements machine learning algorithms under the gradient boosting framework. XGBoost provides parallel tree boosting (also known as GBDT or GBM) that solves many data science problems in a fast and accurate way. The same code runs on major distributed environments (Hadoop, SGE, MPI) and can solve problems beyond billions of examples. XGBoost has been developed and used by a group of active community members, and your help is very valuable to make the package better for everyone.
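A short sketch of XGBoost's native Python API on a synthetic binary task (the data and parameter values are illustrative, not a recommended configuration):

```python
# Train a small gradient boosted tree model with XGBoost's native API.
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

dtrain = xgb.DMatrix(X[:800], label=y[:800])
dtest = xgb.DMatrix(X[800:], label=y[800:])

params = {"objective": "binary:logistic", "max_depth": 4,
          "eta": 0.1, "eval_metric": "logloss"}
bst = xgb.train(params, dtrain, num_boost_round=100,
                evals=[(dtest, "test")], early_stopping_rounds=10)
preds = bst.predict(dtest)  # predicted probabilities for the held-out rows
```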

interpret - Fit interpretable models. Explain blackbox machine learning.

  •    C++

Historically, the most intelligible models were not very accurate, and the most accurate models were not intelligible. Microsoft Research has developed an algorithm called the Explainable Boosting Machine (EBM) which has both high accuracy and intelligibility. EBM uses modern machine learning techniques like bagging and boosting to breathe new life into traditional GAMs (Generalized Additive Models). This makes them as accurate as random forests and gradient boosted trees, while also enhancing their intelligibility and editability. In addition to EBM, InterpretML also supports methods like LIME, SHAP, linear models, partial dependence, decision trees, and rule lists. The package makes it easy to compare and contrast models to find the best one for your needs.
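A brief sketch of fitting an EBM with InterpretML and inspecting its explanations, using scikit-learn's breast cancer dataset only as an example:

```python
# Fit an Explainable Boosting Machine and view global/local explanations.
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier()  # a boosted/bagged GAM
ebm.fit(X_train, y_train)

# Global explanation: per-feature shape functions and importances.
show(ebm.explain_global())

# Local explanations for a few individual predictions.
show(ebm.explain_local(X_test[:5], y_test[:5]))
```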

auto_ml - Automated machine learning for analytics & production

  •    Python

auto_ml is designed for production. The typical workflow includes serializing and loading the trained model, then getting predictions on single dictionaries, which is roughly the process you'd follow to deploy the trained model (see the sketch below). The trained pipelines are ready for production: prediction time is in the 1 millisecond range for a single prediction, and models can be serialized to disk and loaded into a new environment after training.
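A rough sketch of that train / serialize / reload flow, assuming auto_ml's Predictor interface; the column names and synthetic data are illustrative placeholders, not part of the project:

```python
# Train, save, reload, and predict on a single dictionary with auto_ml.
import numpy as np
import pandas as pd
from auto_ml import Predictor
from auto_ml.utils_models import load_ml_model

rng = np.random.RandomState(0)
df_train = pd.DataFrame({
    'sqft': rng.randint(400, 3000, size=500),
    'beds': rng.randint(1, 5, size=500),
})
df_train['price'] = 100 * df_train['sqft'] + 20000 * df_train['beds']

# Mark which column is the target; the rest are treated as features.
column_descriptions = {'price': 'output'}
ml_predictor = Predictor(type_of_estimator='regressor',
                         column_descriptions=column_descriptions)
ml_predictor.train(df_train)

# Serialize the trained pipeline, then load it as you would in a new environment.
file_name = ml_predictor.save()
trained_model = load_ml_model(file_name)

# Single-row prediction on a plain dictionary (the fast deployment path).
print(trained_model.predict({'sqft': 1100, 'beds': 2}))
```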

hyperband - Tuning hyperparams fast with Hyperband

  •    Python

Code for tuning hyperparameters with Hyperband, adapted from "Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization". Use defs.meta/defs_regression.meta to try many models in one Hyperband run; this is an automatic alternative to constructing search spaces with multiple models (like defs.rf_xt or defs.polylearn_fm_pn) by hand.
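For orientation, here is a generic sketch of the Hyperband loop itself, not this repository's code: get_random_config() and run_config(config, n_iterations) are hypothetical stand-ins for the sampling and training functions that the defs.* modules provide.

```python
# Generic Hyperband: successive halving at several aggressiveness levels.
import math

max_iter = 81   # maximum resource (e.g. boosting rounds) per configuration
eta = 3         # downsampling rate between rungs

def hyperband(get_random_config, run_config):
    s_max = int(math.log(max_iter) / math.log(eta))
    B = (s_max + 1) * max_iter          # total budget per bracket
    best = None
    for s in reversed(range(s_max + 1)):
        n = int(math.ceil(B / max_iter / (s + 1) * eta ** s))  # initial configs
        r = max_iter * eta ** (-s)                             # initial resource
        configs = [get_random_config() for _ in range(n)]
        for i in range(s + 1):                                 # successive halving
            n_i = int(n * eta ** (-i))
            r_i = int(r * eta ** i)
            losses = [run_config(c, r_i) for c in configs]
            ranked = sorted(zip(losses, configs), key=lambda t: t[0])
            if best is None or ranked[0][0] < best[0]:
                best = ranked[0]
            configs = [c for _, c in ranked[:max(1, n_i // eta)]]
    return best   # (loss, config) of the best configuration seen
```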

Apartment-Interest-Prediction - Predict people's interest in renting specific NYC apartments

  •    Jupyter

Predict people's interest in renting specific NYC apartments. The challenge combines structured data, geolocation, time data, free text, and images. This solution features gradient boosted trees (XGBoost and LightGBM) and does not use stacking due to lack of time.
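To illustrate the kind of model used here (not the repository's actual code), a hedged LightGBM sketch for a three-class interest target; the features are random placeholders standing in for the engineered structured/geo/text/image features:

```python
# Multiclass gradient boosted trees with LightGBM, evaluated on log loss.
import lightgbm as lgb
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 20))        # placeholder engineered features
y = rng.integers(0, 3, size=5000)      # interest level: low / medium / high

train = lgb.Dataset(X[:4000], label=y[:4000])
valid = lgb.Dataset(X[4000:], label=y[4000:], reference=train)

params = {
    "objective": "multiclass",
    "num_class": 3,
    "metric": "multi_logloss",
    "learning_rate": 0.05,
}
booster = lgb.train(params, train, num_boost_round=500,
                    valid_sets=[valid],
                    callbacks=[lgb.early_stopping(25)])
proba = booster.predict(X[4000:])      # (n_samples, 3) class probabilities
```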

infiniteboost - InfiniteBoost: building infinite ensembles with gradient descent

  •    Jupyter

InfiniteBoost is an approach to building ensembles that combines the best sides of random forests and gradient boosting. Trees in the ensemble account for mistakes made by previous trees (as in gradient boosting), but due to a modified scheme for weighting their contributions, the ensemble converges to a limit, thus avoiding overfitting (just as a random forest does).
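A heavily simplified sketch of that idea as read from the summary above, for squared loss: each tree fits the current residuals as in gradient boosting, but the ensemble output is a fixed capacity times the average of the tree outputs, so the prediction converges as trees are added. The fixed capacity is a simplification; the actual method adapts it.

```python
# Toy InfiniteBoost-style regressor: capacity * average of boosted trees.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def infiniteboost_sketch(X, y, n_trees=100, capacity=10.0, max_depth=3):
    trees = []
    sum_tree_preds = np.zeros(len(y))
    for t in range(1, n_trees + 1):
        # Current ensemble output: capacity times the average of trees so far.
        F = capacity * sum_tree_preds / max(t - 1, 1)
        residuals = y - F                       # negative gradient of squared loss
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residuals)
        trees.append(tree)
        sum_tree_preds += tree.predict(X)

    def predict(X_new):
        avg = np.mean([tree.predict(X_new) for tree in trees], axis=0)
        return capacity * avg

    return predict
```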

drishti - Real time eye tracking for embedded and mobile devices.

  •    C++

Native iOS, Android, and "desktop" variants of the real-time facefilter application have been added here: src/examples/facefilter. These applications link against the installed public drishti::drishti package interface, which is designed without external types in the API definition. The facefilter demos are enabled by the DRISHTI_BUILD_EXAMPLES CMake option, and the entire src/examples tree is designed to be relocatable: you can cp -r src/examples ${HOME}/drishti_examples, customize, and build by simply updating the drishti package details. The iOS facefilter target requires Xcode 9 (beta 4) or above (Swift language requirements) and is generated directly as a standard CMake add_executable() target as part of the usual top-level project build, provided you use an appropriate CMake iOS toolchain for cross-compiling from your macOS + Xcode host to your iOS device. Please see Polly Based Build and iOS Build below for more details.