
tpot - A Python Automated Machine Learning tool that optimizes machine learning pipelines using genetic programming

  •    Python

Consider TPOT your Data Science Assistant. TPOT is a Python Automated Machine Learning tool that optimizes machine learning pipelines using genetic programming. TPOT automates the most tedious part of machine learning by intelligently exploring thousands of possible pipelines to find the best one for your data.
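A minimal sketch of a typical TPOT run, using its scikit-learn-style interface; the dataset and the generation/population settings here are illustrative choices, not defaults TPOT prescribes.

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from tpot import TPOTClassifier

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    # Evolve pipelines with genetic programming; more generations and a
    # larger population search more thoroughly but take longer.
    tpot = TPOTClassifier(generations=5, population_size=20,
                          verbosity=2, random_state=42)
    tpot.fit(X_train, y_train)
    print(tpot.score(X_test, y_test))

    # Export the best pipeline found as a standalone Python script.
    tpot.export('best_pipeline.py')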

hyperas - Keras + Hyperopt: A very simple wrapper for convenient hyperparameter optimization

  •    Python

A very simple convenience wrapper around hyperopt for fast prototyping with Keras models. Hyperas lets you use the power of hyperopt without having to learn its syntax. Instead, define your Keras model as you normally would, but use a simple template notation for the hyperparameter ranges you want to tune: wrap the values to optimize in double curly brackets and choose a distribution over which to run the algorithm, as in the sketch below.
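A hedged sketch of that template notation, following the pattern in the hyperas README (hyperas rewrites the source before execution, so the double-curly templates never run as plain Python); the network itself is an illustrative example.

    from hyperas import optim
    from hyperas.distributions import choice, uniform
    from hyperopt import Trials, STATUS_OK, tpe

    def data():
        from keras.datasets import mnist
        from keras.utils import to_categorical
        (x_train, y_train), (x_test, y_test) = mnist.load_data()
        x_train = x_train.reshape(-1, 784).astype('float32') / 255
        x_test = x_test.reshape(-1, 784).astype('float32') / 255
        return x_train, to_categorical(y_train), x_test, to_categorical(y_test)

    def model(x_train, y_train, x_test, y_test):
        from keras.models import Sequential
        from keras.layers import Dense, Dropout
        net = Sequential()
        net.add(Dense(512, activation='relu', input_shape=(784,)))
        # Double curly brackets mark the values to tune and their distributions.
        net.add(Dropout({{uniform(0, 1)}}))
        net.add(Dense(10, activation='softmax'))
        net.compile(loss='categorical_crossentropy',
                    optimizer={{choice(['adam', 'rmsprop'])}},
                    metrics=['accuracy'])
        net.fit(x_train, y_train, batch_size={{choice([64, 128])}},
                epochs=1, verbose=0)
        _, acc = net.evaluate(x_test, y_test, verbose=0)
        return {'loss': -acc, 'status': STATUS_OK, 'model': net}

    best_run, best_model = optim.minimize(model=model, data=data,
                                          algo=tpe.suggest, max_evals=10,
                                          trials=Trials())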

xcessiv - A web-based application for quick, scalable, and automated hyperparameter tuning and stacked ensembling in Python

  •    Python

Stacked ensembles are simple in theory. You combine the predictions of smaller models and feed those into another model. However, in practice, implementing them can be a major headache. Xcessiv holds your hand through all the implementation details of creating and optimizing stacked ensembles so you're free to fully define only the things you care about.
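Xcessiv itself is driven through its web UI, so the following is not its API; it is a plain scikit-learn sketch of the stacking idea the description refers to, with an illustrative dataset and base models.

    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_predict

    X, y = load_breast_cancer(return_X_y=True)
    base_models = [RandomForestClassifier(random_state=0),
                   LogisticRegression(max_iter=5000)]

    # Out-of-fold predictions keep each base model's training labels from
    # leaking into the meta-learner's features.
    meta_features = np.column_stack([
        cross_val_predict(m, X, y, cv=5, method='predict_proba')[:, 1]
        for m in base_models
    ])
    meta_learner = LogisticRegression().fit(meta_features, y)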

auto_ml - Automated machine learning for analytics & production

  •    Python

auto_ml is designed for production. The sketch below includes serializing and loading the trained model, then getting predictions on single dictionaries, roughly the process you'd likely follow to deploy the trained model. These pipelines are production-ready: prediction time is in the 1-millisecond range for a single prediction, and trained models can be serialized to disk and loaded into a new environment after training.
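A sketch following the train/save/load/predict flow that auto_ml's README documents; the CSV path, column names, and file name are illustrative assumptions.

    import pandas as pd
    from auto_ml import Predictor
    from auto_ml.utils_models import load_ml_model

    df_train = pd.read_csv('train.csv')   # assumed to hold a 'price' target
    column_descriptions = {'price': 'output'}

    ml_predictor = Predictor(type_of_estimator='regressor',
                             column_descriptions=column_descriptions)
    ml_predictor.train(df_train)

    # Serialize to disk, then load into a fresh environment.
    file_name = ml_predictor.save('trained_pipeline.dill')
    trained_model = load_ml_model(file_name)

    # Predict on a single dictionary -- roughly the production path.
    print(trained_model.predict({'sqft': 1200, 'bedrooms': 2}))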

nni - An open source AutoML toolkit for neural architecture search and hyper-parameter tuning.

  •    TypeScript

NNI (Neural Network Intelligence) is a toolkit to help users run automated machine learning experiments. The tool dispatches and runs trial jobs generated by tuning algorithms to search for the best neural architecture and/or hyperparameters in different environments (e.g. local machine, remote servers, and cloud). Launching an experiment also starts a WebUI; the endpoint is shown in the launch command's output (for example, http://localhost:8080). Open this URL in your browser to analyze the experiment or browse each trial's TensorBoard.
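A minimal trial-code sketch using NNI's Python API (nni.get_next_parameter and nni.report_final_result are real calls; train_and_evaluate is a hypothetical user function, and the search space is defined separately in the experiment configuration):

    import nni

    # The tuner supplies the next hyperparameter set to try,
    # e.g. {'lr': 0.01, 'batch_size': 64}.
    params = nni.get_next_parameter()

    accuracy = train_and_evaluate(**params)   # hypothetical user function

    # Report the trial's result back so the tuner can learn from it.
    nni.report_final_result(accuracy)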

hyperband - Tuning hyperparams fast with Hyperband

  •    Python

Code for tuning hyperparams with Hyperband, adapted from "Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization". Use defs.meta/defs_regression.meta to try many models in one Hyperband run. This is an automatic alternative to constructing search spaces with multiple models (like defs.rf_xt or defs.polylearn_fm_pn) by hand.
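A compact sketch of the Hyperband loop this repo implements; following the repo's interface, get_params() is assumed to sample a random configuration and try_params(n_iterations, params) to train it and return a dict containing a 'loss'.

    from math import ceil, log

    max_iter = 81   # maximum resource (e.g. iterations) per configuration
    eta = 3         # downsampling rate between successive-halving rounds
    s_max = int(log(max_iter) / log(eta))

    for s in reversed(range(s_max + 1)):
        n = int(ceil((s_max + 1) / (s + 1) * eta ** s))  # initial configs
        r = max_iter * eta ** (-s)                       # initial resource each
        configs = [get_params() for _ in range(n)]
        for i in range(s + 1):                           # successive halving
            n_i = int(n * eta ** (-i))
            r_i = int(r * eta ** i)
            losses = [try_params(r_i, c)['loss'] for c in configs]
            # keep the best 1/eta of the configurations for the next round
            ranked = sorted(zip(losses, configs), key=lambda lc: lc[0])
            configs = [c for _, c in ranked[:max(1, n_i // eta)]]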

test-tube - Python library to easily log and track machine learning code and experiments, and parallelize hyperparameter search

  •    HTML

Test tube is a Python library to track and parallelize hyperparameter search for deep learning and ML experiments. It's framework agnostic and built on top of the Python argparse API for ease of use. If you're a researcher, test-tube is highly encouraged as a way to post your paper's training logs, to help add transparency and show others what you've tried that didn't work.
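A sketch using test-tube's HyperOptArgumentParser, which extends argparse with tunable flags; the flag names and ranges here are illustrative.

    from test_tube import HyperOptArgumentParser

    parser = HyperOptArgumentParser(strategy='random_search')
    parser.add_argument('--epochs', default=10, type=int)   # fixed flag
    parser.opt_list('--batch_size', default=64, type=int,
                    options=[32, 64, 128], tunable=True)
    parser.opt_range('--learning_rate', default=0.001, type=float,
                     start=1e-4, end=1e-1, nb_samples=10, tunable=True)
    hparams = parser.parse_args()

    # Draw a handful of trials from the search space and run them.
    for trial in hparams.trials(5):
        print(trial.batch_size, trial.learning_rate)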

SMAC3 - Sequential Model-based Algorithm Configuration

  •    Python

Attention: This package is under heavy development and subject to change. A stable release of SMAC (v2) in Java can be found here. The documentation can be found here.

randopt - Streamlined machine learning experiment management.

  •    HTML

Once you have obtained some results, run roviz.py path/to/experiment/folder to visualize them in your web browser. For more info on visualization and roviz.py, refer to the Visualizing Results tutorial.
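A sketch of producing results to visualize, following the Experiment API in randopt's README; the experiment name, parameter, and loss function are illustrative.

    import randopt as ro

    def loss(x):
        return x ** 2

    e = ro.Experiment('myexp', {
        'alpha': ro.Gaussian(mean=0.0, std=1.0),
    })

    # Sample a value for each hyperparameter, evaluate, and record the
    # result; afterwards, roviz.py can visualize the experiment folder.
    for _ in range(100):
        e.sample('alpha')
        e.add_result(loss(e.alpha))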

mgo - Purely functional genetic algorithms for multi-objective optimisation

  •    Scala

MGO implements NSGA-II, CP (Calibration Profile), and PSE (Pattern Search Experiment). All algorithms in MGO have a version that can work with noisy fitness functions: MGO handles noise by resampling only the most promising individuals, using an aggregation function to combine the multiple samples when needed.

mlrMBO - Toolbox for Bayesian Optimization and Model-Based Optimization in R

  •    R

Model-based optimization with mlr: mlrMBO is a highly configurable R toolbox for model-based / Bayesian optimization of black-box functions.
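mlrMBO's own interface is R, so the following is not its API; it is a generic Python sketch of the model-based optimization loop it implements: fit a surrogate to past evaluations, pick the next point with an acquisition criterion, evaluate, repeat.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def f(x):                                    # toy black-box objective
        return np.sin(3 * x) + x ** 2

    X = np.random.uniform(-2, 2, size=(5, 1))    # initial design
    y = f(X).ravel()

    for _ in range(20):
        surrogate = GaussianProcessRegressor().fit(X, y)
        cand = np.random.uniform(-2, 2, size=(1000, 1))
        mu, sigma = surrogate.predict(cand, return_std=True)
        x_next = cand[np.argmin(mu - 1.96 * sigma)]  # lower confidence bound
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next))

    print('best:', X[np.argmin(y)].item(), y.min())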

hyperparameters - Automatically tuning hyperparameters for deep learning

  •    Python

Why: hyperparameters control how the optimization algorithm learns, so they directly affect both convergence behavior and model accuracy. Given well-tuned hyperparameters, even a simple model can be robust; see the publication "Bayesian Optimization of Text Representations". In practice, the optimization algorithm is especially sensitive to the learning rate and regularization parameters.

Idea: start from two separate spaces, the parameter space and the hyperparameter space (HPS). Learning can be viewed as picking one point from the HPS and then obtaining a training result in the parameter space. How can the two spaces be mapped, so that an optimized point in the HPS is picked based on the performance in the parameter space? Researchers have found that reverse-mode differentiation (RMD), proposed by Bengio (2000) in "Gradient-Based Optimization of Hyperparameters", can resolve this issue. But RMD has a big problem: it consumes thousands of times more memory in order to store the reverse path. To address this, "Gradient-based Hyperparameter Optimization through Reversible Learning", which relies on momentum, reduces memory use by hundreds of times compared with the original RMD. Jie Fu's paper "DrMAD: Distilling Reverse-Mode Automatic Differentiation for Optimizing Hyperparameters of Deep Neural Networks" goes further, discarding the training trajectories entirely for essentially zero extra memory consumption.
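To make the hypergradient idea concrete, here is a toy, self-contained illustration (not this repo's code): differentiate a validation loss through a single SGD step to obtain a gradient with respect to the learning rate, and check it against finite differences.

    import numpy as np

    a, b = np.array([1.0, -2.0]), np.array([0.5, 0.0])
    train_grad = lambda w: w - a       # gradient of 0.5 * ||w - a||^2
    val_grad = lambda w: w - b         # gradient of 0.5 * ||w - b||^2

    w0, lr = np.zeros(2), 0.3
    w1 = w0 - lr * train_grad(w0)      # one training step

    # Chain rule through the update: dL_val/dlr = val_grad(w1) . dw1/dlr,
    # with dw1/dlr = -train_grad(w0). Reverse-mode methods apply this back
    # through thousands of steps, which is why memory becomes the concern.
    hypergrad = val_grad(w1) @ (-train_grad(w0))

    # Finite-difference check.
    eps = 1e-6
    w1_eps = w0 - (lr + eps) * train_grad(w0)
    fd = (0.5 * np.sum((w1_eps - b) ** 2)
          - 0.5 * np.sum((w1 - b) ** 2)) / eps
    print(hypergrad, fd)               # the two should match closely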

hypersearch - Hyperparameter Optimization for PyTorch

  •    Python

Tune the hyperparameters of your PyTorch models with HyperSearch. Keys are of the form {layer_num}_{hyperparameter} where layer_num can be a layer from your nn.Sequential model or all to signify all layers. Values are of the form [distribution, x] where distribution can be one of uniform, quniform, choice, etc.
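An illustrative search-space dictionary in the key/value format the description gives; the specific keys and ranges are assumptions, not taken from the repo.

    params = {
        '0_hidden':  ['quniform', 32, 512, 1],      # layer 0: hidden units
        '2_dropout': ['uniform', 0.1, 0.5],         # layer 2: dropout rate
        'all_act':   ['choice', ['relu', 'tanh']],  # activation, every layer
    }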

GBM-tune - Tuning GBMs (hyperparameter tuning) and impact on out-of-sample predictions

  •    HTML

The goal of this repo is to study how having just one dataset/sample ("the dataset") when training and tuning machine learning models in practice (or in competitions) impacts prediction accuracy on new data (which usually comes from a slightly different distribution due to non-stationarity). To keep things simple, we focus on binary classification, use only one source dataset with a mix of numeric and categorical features and no missing values, perform no feature engineering, tune only GBMs with lightgbm and random hyperparameter search (we might also ensemble the best models later), and use only AUC as the measure of accuracy.
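Not this repo's code: a minimal sketch of the setup it studies, random hyperparameter search over a lightgbm GBM scored by AUC, using scikit-learn's RandomizedSearchCV.

    from scipy.stats import randint, uniform
    from lightgbm import LGBMClassifier
    from sklearn.model_selection import RandomizedSearchCV

    param_distributions = {
        'num_leaves': randint(16, 256),
        'learning_rate': uniform(0.01, 0.2),
        'min_child_samples': randint(5, 100),
    }
    search = RandomizedSearchCV(LGBMClassifier(n_estimators=300),
                                param_distributions, n_iter=30,
                                scoring='roc_auc', cv=5, random_state=42)
    # search.fit(X, y)   # X, y: the single training dataset/sample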

mlrHyperopt - Easy Hyper Parameter Optimization with mlr and mlrMBO.

  •    HTML

Easy hyperparameter optimization with mlr and mlrMBO. It mainly uses the learners implemented in mlr, along with the tuning methods also available in mlr. Unfortunately, mlr lacks well-defined search spaces for each learner that would make hyperparameter tuning easy; mlrHyperopt provides them.

Hyperopt-Keras-CNN-CIFAR-100 - Auto-optimizing a neural net (and its architecture) on the CIFAR-100 dataset

  •    Python

This project acts as both a tutorial and a demo for using Hyperopt with Keras, TensorFlow, and TensorBoard. Not only do we try to find the best hyperparameters for the given hyperspace, but we also represent the neural network architecture itself as hyperparameters that can be tuned. This automates the process of searching for the best neural architecture configuration and hyperparameters. Here, we meta-optimize a neural net and its architecture on the CIFAR-100 dataset (100 fine labels), a computer vision task. This code could easily be transferred to another vision dataset, or even to another machine learning task.
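A hedged sketch of the approach: encode architecture choices as Hyperopt hyperparameters alongside the usual training ones. The space below is illustrative, not the project's exact hyperspace, and build_and_train_cnn is a hypothetical function returning a validation loss.

    from hyperopt import Trials, fmin, hp, tpe

    space = {
        'n_conv_blocks': hp.choice('n_conv_blocks', [2, 3, 4]),    # architecture
        'conv_filters': hp.quniform('conv_filters', 32, 128, 16),  # architecture
        'use_batchnorm': hp.choice('use_batchnorm', [False, True]),
        'lr': hp.loguniform('lr', -9, -3),                         # ~1e-4 to 5e-2
    }

    def objective(params):
        # Hypothetical: assemble a Keras CNN from the sampled architecture,
        # train it, and return the validation loss to minimize.
        return build_and_train_cnn(**params)

    best = fmin(objective, space, algo=tpe.suggest,
                max_evals=50, trials=Trials())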