
tpot - A Python Automated Machine Learning tool that optimizes machine learning pipelines using genetic programming

  •    Python

Consider TPOT your Data Science Assistant. TPOT is a Python Automated Machine Learning tool that optimizes machine learning pipelines using genetic programming. TPOT will automate the most tedious part of machine learning by intelligently exploring thousands of possible pipelines to find the best one for your data.
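
A minimal run might look like the following sketch, based on TPOT's documented scikit-learn-style API; the digits dataset and the generation/population settings are illustrative:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

tpot = TPOTClassifier(generations=5, population_size=20, verbosity=2, random_state=42)
tpot.fit(X_train, y_train)                # evolves candidate pipelines with genetic programming
print(tpot.score(X_test, y_test))
tpot.export('tpot_digits_pipeline.py')    # exports the best found pipeline as standalone Python code
```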

xcessiv - A web-based application for quick, scalable, and automated hyperparameter tuning and stacked ensembling in Python

  •    Python

Stacked ensembles are simple in theory. You combine the predictions of smaller models and feed those into another model. However, in practice, implementing them can be a major headache. Xcessiv holds your hand through all the implementation details of creating and optimizing stacked ensembles so you're free to fully define only the things you care about.
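
For the concept itself (this is not Xcessiv's own API, which is driven through its web UI), a stacked ensemble can be sketched in a few lines with scikit-learn's StackingClassifier: base learners' predictions are fed into a second-level meta-model.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

stack = StackingClassifier(
    estimators=[('rf', RandomForestClassifier(n_estimators=100)),
                ('svc', SVC(probability=True))],
    final_estimator=LogisticRegression(),  # meta-learner trained on the base models' predictions
)
print(cross_val_score(stack, X, y, cv=3).mean())
```

Xcessiv automates exactly this kind of workflow, including hyperparameter tuning of the base learners, at larger scale.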

hyperas - Keras + Hyperopt: A very simple wrapper for convenient hyperparameter optimization

  •    Python

A very simple convenience wrapper around hyperopt for fast prototyping with keras models. Hyperas lets you use the power of hyperopt without having to learn the syntax of it. Instead, just define your keras model as you are used to, but use a simple template notation to define hyper-parameter ranges to tune. To do hyper-parameter optimization on this model, just wrap the parameters you want to optimize into double curly brackets and choose a distribution over which to run the algorithm.
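
A rough sketch of that template notation follows, assuming Keras and hyperopt are installed; the MNIST setup and layer sizes are illustrative, while the double-curly-bracket ranges are what hyperopt samples over:

```python
from hyperopt import Trials, STATUS_OK, tpe
from hyperas import optim
from hyperas.distributions import choice, uniform
from keras.models import Sequential
from keras.layers import Dense, Dropout

def data():
    from keras.datasets import mnist
    from keras.utils import to_categorical
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train = x_train.reshape(60000, 784).astype('float32') / 255
    x_test = x_test.reshape(10000, 784).astype('float32') / 255
    return x_train, to_categorical(y_train), x_test, to_categorical(y_test)

def create_model(x_train, y_train, x_test, y_test):
    model = Sequential()
    model.add(Dense({{choice([256, 512, 1024])}}, activation='relu', input_shape=(784,)))
    model.add(Dropout({{uniform(0, 1)}}))          # dropout rate tuned by hyperopt
    model.add(Dense(10, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.fit(x_train, y_train, batch_size=128, epochs=1, verbose=0)
    _, acc = model.evaluate(x_test, y_test, verbose=0)
    return {'loss': -acc, 'status': STATUS_OK, 'model': model}

best_run, best_model = optim.minimize(model=create_model, data=data,
                                      algo=tpe.suggest, max_evals=5, trials=Trials())
print(best_run)
```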




auto_ml - Automated machine learning for analytics & production

  •    Python

auto_ml is designed for production. A typical example includes serializing and loading the trained model, then getting predictions on single dictionaries, which is roughly the process you'd likely follow to deploy the trained model. All of these projects are ready for production: they have prediction times in the 1 millisecond range for a single prediction, and they can be serialized to disk and loaded into a new environment after training.
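
A rough sketch of that workflow, based on auto_ml's documented Predictor API; the dataset, column names, and file paths here are hypothetical:

```python
import pandas as pd
from auto_ml import Predictor
from auto_ml.utils_models import load_ml_model  # model-loading helper, import path per auto_ml's docs

df_train = pd.read_csv('titanic_train.csv')     # hypothetical training data
column_descriptions = {'survived': 'output', 'sex': 'categorical', 'embarked': 'categorical'}

ml_predictor = Predictor(type_of_estimator='classifier', column_descriptions=column_descriptions)
ml_predictor.train(df_train)

file_name = ml_predictor.save()                 # serialize the trained pipeline to disk
trained_model = load_ml_model(file_name)        # load it in a fresh environment

# prediction on a single dictionary, as in a production request handler
print(trained_model.predict({'sex': 'female', 'age': 29, 'embarked': 'C'}))
```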

determined - Determined: Deep Learning Training Platform

  •    Python

Determined integrates these features into an easy-to-use, high-performance deep learning environment — which means you can spend your time building models instead of managing infrastructure. To use Determined, you can continue using popular DL frameworks such as TensorFlow and PyTorch; you just need to update your model code to integrate with the Determined API.

optuna - A hyperparameter optimization framework

  •    Python

Optuna is an automatic hyperparameter optimization software framework, particularly designed for machine learning. It features an imperative, define-by-run style user API: code written with Optuna enjoys high modularity, and the user can dynamically construct the search spaces for the hyperparameters. Please refer to the sample code below. The goal of a study is to find the optimal set of hyperparameter values (e.g., classifier and svm_c) through multiple trials (e.g., n_trials=100). Optuna is a framework designed for the automation and acceleration of such optimization studies.
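
The sketch below is close to Optuna's documented example and shows the define-by-run search space with a classifier choice and svm_c; the iris dataset and search ranges are illustrative:

```python
import optuna
import sklearn.datasets
import sklearn.ensemble
import sklearn.model_selection
import sklearn.svm

def objective(trial):
    iris = sklearn.datasets.load_iris()
    x, y = iris.data, iris.target

    # the search space is built dynamically inside the objective (define-by-run)
    classifier_name = trial.suggest_categorical('classifier', ['SVC', 'RandomForest'])
    if classifier_name == 'SVC':
        svm_c = trial.suggest_float('svm_c', 1e-10, 1e10, log=True)
        classifier_obj = sklearn.svm.SVC(C=svm_c, gamma='auto')
    else:
        max_depth = trial.suggest_int('rf_max_depth', 2, 32, log=True)
        classifier_obj = sklearn.ensemble.RandomForestClassifier(max_depth=max_depth, n_estimators=10)

    return sklearn.model_selection.cross_val_score(classifier_obj, x, y, cv=3).mean()

study = optuna.create_study(direction='maximize')
study.optimize(objective, n_trials=100)
print(study.best_trial.params)
```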

autogluon - AutoGluon: AutoML for Text, Image, and Tabular Data

  •    Python

AutoGluon automates machine learning tasks, enabling you to easily achieve strong predictive performance in your applications. With just a few lines of code, you can train and deploy high-accuracy machine learning and deep learning models on text, image, and tabular data. Announcement for previous users: the AutoGluon codebase has been modularized into namespace packages, which means you now only need the dependencies relevant to your prediction task of interest. For example, you can now work with tabular data without having to install the dependencies required for AutoGluon's computer vision tasks (and vice versa). Unfortunately, this improvement required a minor API change for all versions newer than v0.0.15 (e.g., instead of from autogluon import TabularPrediction, you should now do from autogluon.tabular import TabularPredictor). Documentation and tutorials under the old API may still be viewed for version 0.0.15, the last release under the old API.
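
Under the new namespace packages, a minimal tabular workflow looks roughly like this; the file paths and the label column name are hypothetical:

```python
from autogluon.tabular import TabularDataset, TabularPredictor

train_data = TabularDataset('train.csv')   # hypothetical CSV with a 'class' target column
test_data = TabularDataset('test.csv')

predictor = TabularPredictor(label='class').fit(train_data)   # trains and ensembles many models
predictions = predictor.predict(test_data)
print(predictor.leaderboard(test_data))                       # per-model scores on the test set
```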


awesome-automl-papers - A curated list of automated machine learning papers, articles, tutorials, slides and projects

  •    

Awesome-AutoML-Papers is a curated list of automated machine learning papers, articles, tutorials, slides and projects. Star this repository and you can keep abreast of the latest developments in this booming research field. Thanks to all the people who have contributed to this project; you are welcome to join as a contributor. Automated Machine Learning (AutoML) provides methods and processes to make Machine Learning available to non-Machine Learning experts, to improve the efficiency of Machine Learning, and to accelerate research on Machine Learning.

Hyperparameter-Optimization-of-Machine-Learning-Algorithms - Implementation of hyperparameter optimization/tuning methods for machine learning & deep learning models (easy&clear)

  •    Jupyter

This code provides a hyper-parameter optimization implementation for machine learning algorithms, as described in the paper: L. Yang and A. Shami, “On hyperparameter optimization of machine learning algorithms: Theory and practice,” Neurocomputing, vol. 415, pp. 295–316, 2020, doi: https://doi.org/10.1016/j.neucom.2020.07.061. To fit a machine learning model into different problems, its hyper-parameters must be tuned. Selecting the best hyper-parameter configuration for machine learning models has a direct impact on the model's performance. In this paper, optimizing the hyper-parameters of common machine learning models is studied. We introduce several state-of-the-art optimization techniques and discuss how to apply them to machine learning algorithms. Many available libraries and frameworks developed for hyper-parameter optimization problems are provided, and some open challenges of hyper-parameter optimization research are also discussed in this paper. Moreover, experiments are conducted on benchmark datasets to compare the performance of different optimization methods and provide practical examples of hyper-parameter optimization.
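
As a purely illustrative baseline of the kind compared in the paper (not the repository's own notebooks), a random search over a random forest can be run with scikit-learn in a few lines:

```python
from scipy.stats import randint
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_digits(return_X_y=True)

# random search: sample 20 hyper-parameter configurations and cross-validate each
param_distributions = {
    'n_estimators': randint(10, 200),
    'max_depth': randint(2, 20),
}
search = RandomizedSearchCV(RandomForestClassifier(), param_distributions,
                            n_iter=20, cv=3, random_state=0)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```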

nni - An open source AutoML toolkit for neural architecture search and hyper-parameter tuning.

  •    TypeScript

NNI (Neural Network Intelligence) is a toolkit to help users run automated machine learning experiments. The tool dispatches and runs trial jobs generated by tuning algorithms to search for the best neural architecture and/or hyper-parameters in different environments (e.g. local machine, remote servers and cloud). The command that launches an experiment also starts a WebUI; its endpoint is shown in the command's output (for example, http://localhost:8080). Open this URL in your browser to analyze your experiment through the WebUI or browse each trial's TensorBoard.

hyperband - Tuning hyperparams fast with Hyperband

  •    Python

Code for tuning hyperparams with Hyperband, adapted from Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization. Use defs.meta/defs_regression.meta to try many models in one Hyperband run. This is an automatic alternative to constructing search spaces with multiple models (like defs.rf_xt, or defs.polylearn_fm_pn) by hand.
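
For orientation, here is a conceptual sketch of the Hyperband loop itself, following Li et al.; get_random_config and run_config stand in for user-supplied functions, and this is not the repository's exact code:

```python
import math

def hyperband(get_random_config, run_config, max_iter=81, eta=3):
    """Conceptual Hyperband sketch.

    get_random_config() -> dict of hyperparameters (user-supplied)
    run_config(n_iterations, config) -> validation loss, lower is better (user-supplied)
    """
    s_max = int(math.log(max_iter) / math.log(eta))   # number of brackets minus one
    B = (s_max + 1) * max_iter                        # total budget per bracket
    best = (float('inf'), None)

    for s in reversed(range(s_max + 1)):
        n = int(math.ceil(B / max_iter / (s + 1) * eta ** s))  # initial number of configs
        r = max_iter * eta ** (-s)                             # initial resource per config
        configs = [get_random_config() for _ in range(n)]

        for i in range(s + 1):                                 # successive halving rounds
            n_i = n * eta ** (-i)
            r_i = r * eta ** i
            losses = [run_config(r_i, c) for c in configs]
            ranked = sorted(zip(losses, configs), key=lambda pair: pair[0])
            if ranked and ranked[0][0] < best[0]:
                best = ranked[0]
            configs = [c for _, c in ranked[:int(n_i / eta)]]  # keep the top 1/eta configs

    return best  # (best loss, best config)
```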

test-tube - Python library to easily log, track machine learning code, experiments and parallelize hyperparameter search

  •    HTML

Test tube is a Python library to track and parallelize hyperparameter search for Deep Learning and ML experiments. It's framework agnostic and built on top of the Python argparse API for ease of use. If you're a researcher, test-tube is highly encouraged as a way to post your paper's training logs to help add transparency and show others what you've tried that didn't work.

SMAC3 - Sequential Model-based Algorithm Configuration

  •    Python

Attention: This package is under heavy development and subject to change. A stable release of SMAC (v2) in Java can be found here. The documentation can be found here.

randopt - Streamlined machine learning experiment management.

  •    HTML

Once you have obtained some results, run roviz.py path/to/experiment/folder to visualize them in your web browser. For more info on visualization and roviz.py, refer to the Visualizing Results tutorial.

mgo - Purely functional genetic algorithms for multi-objective optimisation

  •    Scala

MGO implements NSGA-II, CP (Calibration Profile) and PSE (Pattern Search Experiment). All algorithms in MGO have a version that can operate on noisy fitness functions. MGO handles noisy fitness functions by resampling only the most promising individuals, using an aggregation function to combine the multiple samples when needed.

mlrMBO - Toolbox for Bayesian Optimization and Model-Based Optimization in R

  •    R

Model-based optimization with mlr. mlrMBO is a highly configurable R toolbox for model-based / Bayesian optimization of black-box functions.

hyperparameters - Automatically tuning hyperparameters for deep learning

  •    Python

Why: Hyperparameters control how the optimization algorithm learns, so they directly affect both convergence behaviour and model accuracy. With well-tuned hyperparameters, even a simple model can be robust enough; see the publication "Bayesian Optimization of Text Representations". In practice, the optimization algorithm is especially sensitive to the learning rate and regularization parameters. Idea: The problem is treated as two separate spaces, the parameter space and the hyper-parameter space (HPS). Learning can be viewed as picking a point from the HPS and then obtaining a training result in the parameter space. How can the two spaces be mapped so that an optimized point in the HPS is chosen based on performance in the parameter space? Researchers have found that reverse-mode differentiation (RMD), proposed by Bengio (2000) in "Gradient-based optimization of hyperparameters", can resolve this issue. However, RMD has a major drawback: it consumes thousands of times more memory in order to store the reverse path. To address this, the paper "Gradient-based Hyperparameter Optimization through Reversible Learning", which relies on momentum, reduces memory consumption by hundreds of times compared with the original RMD, and Jie Fu's paper "DrMAD: Distilling Reverse-Mode Automatic Differentiation for Optimizing Hyperparameters of Deep Neural Networks" discards the training trajectories entirely, with essentially zero memory consumption.





