PyTorch implementation of Efficient Neural Architecture Search via Parameter Sharing. ENAS reduces the computational requirement (GPU-hours) of Neural Architecture Search (NAS) by 1000x via parameter sharing between models that are subgraphs within a large computational graph. It achieves state-of-the-art results on Penn Treebank language modeling.

pytorch neural-architecture-search google-brain
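A minimal PyTorch sketch of the weight-sharing idea behind ENAS (illustrative only, not this repository's API): each candidate operation is instantiated once, and every sampled child model is a subgraph that indexes into the shared pool, so no child trains its own weights from scratch.

```python
import torch
import torch.nn as nn

class SharedLayer(nn.Module):
    """One shared instance per candidate op; child models index into the pool."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.MaxPool2d(3, stride=1, padding=1),
        ])

    def forward(self, x, op_index):
        return self.ops[op_index](x)

layer = SharedLayer(16)
x = torch.randn(1, 16, 8, 8)
# Two sampled "child models" reuse exactly the same parameters:
y_a = layer(x, op_index=0)  # child A picks the 3x3 conv
y_b = layer(x, op_index=1)  # child B picks the 5x5 conv
```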
DARTS: Differentiable Architecture Search. Hanxiao Liu, Karen Simonyan, Yiming Yang. arXiv:1806.09055. NOTE: PyTorch 0.4 is not supported at the moment and leads to OOM.

deep-learning automl image-classification language-modeling pytorch convolutional-networks recurrent-networks neural-architecture-search
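A sketch of the continuous relaxation at the heart of DARTS (illustrative, not the repository's API): each edge computes a softmax-weighted mixture of candidate operations, the mixture weights are ordinary learnable parameters, and after search the edge is discretized to its strongest operation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """An edge that mixes candidate ops with learnable architecture weights."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.Identity(),
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))  # architecture params

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

mixed = MixedOp(16)
out = mixed(torch.randn(1, 16, 8, 8))           # differentiable w.r.t. alpha
best_op = mixed.ops[int(mixed.alpha.argmax())]  # discretize after search
```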
FiGS is a probabilistic approach to channel regularization that we introduced in Fine-Grained Stochastic Architecture Search. It outperforms our previous regularizers and can be used as either a pruning algorithm or a full-fledged Differentiable Architecture Search method; this is the recommended way to apply MorphNet. In the documentation below it is referred to as the LogisticSigmoid regularizer. MorphNet is a method for learning deep network structure during training. The key principle is continuous relaxation of the network-structure learning problem. In short, the MorphNet regularizer pushes the influence of filters down, and once they are small enough, the corresponding output channels are marked for removal from the network.

machine-learning deep-learning tensorflow automl neural-architecture-search
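MorphNet itself is a TensorFlow library; the sketch below only illustrates the underlying principle in PyTorch-style code (a sparsity-inducing penalty on per-channel scales), not MorphNet's actual API. The penalty drives per-channel BatchNorm scales toward zero, and channels whose scales fall below a threshold are marked for removal.

```python
import torch
import torch.nn as nn

def channel_regularizer(model, strength=1e-4):
    """L1 penalty on per-channel BatchNorm scales (gamma)."""
    penalty = sum(m.weight.abs().sum()
                  for m in model.modules() if isinstance(m, nn.BatchNorm2d))
    return strength * penalty

def channels_to_prune(model, threshold=1e-2):
    """Channels whose learned scale has effectively vanished."""
    return {name: (m.weight.abs() < threshold).nonzero().flatten().tolist()
            for name, m in model.named_modules()
            if isinstance(m, nn.BatchNorm2d)}

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU())
out = model(torch.randn(2, 3, 8, 8))
loss = out.mean() + channel_regularizer(model)  # add penalty to the task loss
```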
PaddleSlim is an open-source library for deep model compression and architecture search.

pruning quantization nas knowledge-distillation model-compression neural-architecture-search hyperparameter-search autodl
Slimmable neural networks: the same model can run at different widths (numbers of active channels), permitting instant and adaptive accuracy-efficiency trade-offs. Universally slimmable networks: the same model can run at arbitrary widths.

efficient on-demand adaptive automated neural-architecture-search edge-devices slimmable-networks
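A hypothetical PyTorch sketch of the core mechanism (not the authors' code): one convolution slices its own weight tensor so that the same parameters serve several widths, switchable at run time. The papers additionally use tricks such as switchable batch normalization per width, omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlimmableConv2d(nn.Conv2d):
    """A conv layer that can run at any number of active output channels."""
    def forward(self, x, out_channels):
        weight = self.weight[:out_channels, :x.size(1)]
        bias = self.bias[:out_channels] if self.bias is not None else None
        return F.conv2d(x, weight, bias, self.stride, self.padding)

conv = SlimmableConv2d(3, 64, kernel_size=3, padding=1)
x = torch.randn(1, 3, 32, 32)
full = conv(x, out_channels=64)  # full-width model
half = conv(x, out_channels=32)  # same layer, half the channels, less compute
```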
Awesome-AutoML-Papers is a curated list of automated machine learning papers, articles, tutorials, slides and projects. Star this repository to keep abreast of the latest developments in this booming research field. Thanks to everyone who has contributed to this project; you are welcome to join as a contributor. Automated Machine Learning (AutoML) provides methods and processes to make machine learning available to non-experts, to improve the efficiency of machine learning, and to accelerate research on machine learning.

hyperparameter-optimization automl neural-architecture-search automated-feature-engineering
DEvol (DeepEvolution) is a basic proof of concept for genetic architecture search in Keras. The current setup is designed for classification problems, though it could be extended to other output types as well. See example/demo.ipynb for a simple example.

machine-learning computer-vision deep-learning genetic-algorithm keras automl neural-architecture-search
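A toy sketch of genetic architecture search in the spirit of DEvol (for the real API, see example/demo.ipynb; all names below are illustrative): a genome encodes dense-layer widths, fitness is validation accuracy after brief training, and each generation keeps the best genomes and mutates them.

```python
import random
from tensorflow import keras

def build(genome, n_classes=10):
    """Build a classifier from a genome (a list of dense-layer widths)."""
    model = keras.Sequential()
    for width in genome:  # assumes flat feature vectors as input
        model.add(keras.layers.Dense(width, activation="relu"))
    model.add(keras.layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def mutate(genome):
    g = list(genome)
    g[random.randrange(len(g))] = random.choice([32, 64, 128, 256])
    return g

def evolve(x, y, x_val, y_val, pop_size=6, generations=3):
    pop = [[random.choice([32, 64, 128])] * 2 for _ in range(pop_size)]
    for _ in range(generations):
        scored = []
        for genome in pop:
            model = build(genome)
            model.fit(x, y, epochs=1, verbose=0)
            acc = model.evaluate(x_val, y_val, verbose=0)[1]
            scored.append((acc, genome))
        scored.sort(reverse=True)            # fittest genomes first
        parents = [g for _, g in scored[: pop_size // 2]]
        pop = parents + [mutate(random.choice(parents)) for _ in parents]
    return scored[0]                         # (accuracy, best genome)
```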
AutoKeras: an AutoML system based on Keras, developed by the DATA Lab at Texas A&M University. The goal of AutoKeras is to make machine learning accessible to everyone. Tutorials are available on the official website.

machine-learning deep-learning tensorflow keras automl automated-machine-learning neural-architecture-search autodl
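Typical usage, abridged from the AutoKeras documentation (the trial count and epochs here are illustrative): the classifier searches over model architectures by itself and exports the best one as a plain Keras model.

```python
import autokeras as ak
from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

clf = ak.ImageClassifier(max_trials=3)  # evaluate up to 3 candidate models
clf.fit(x_train, y_train, epochs=2)
predictions = clf.predict(x_test)
best_model = clf.export_model()         # the best architecture, as Keras
```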
AdaNet is a lightweight TensorFlow-based framework for automatically learning high-quality models with minimal expert intervention. AdaNet builds on recent AutoML efforts to be fast and flexible while providing learning guarantees. Importantly, AdaNet provides a general framework for not only learning a neural network architecture, but also for learning to ensemble to obtain even better models.

machine-learning deep-learning tensorflow gpu ensemble automl learning-theory neural-architecture-search distributed-training tpu
NNI (Neural Network Intelligence) is a toolkit to help users run automated machine learning experiments. The tool dispatches and runs trial jobs generated by tuning algorithms to search for the best neural architecture and/or hyper-parameters in different environments (e.g. local machine, remote servers, and cloud). Starting an experiment also launches a WebUI; the endpoint is shown in the command output (for example, http://localhost:8080). Open this URL in your browser to analyze your experiment or browse the trials' TensorBoard.

automl deep-learning neural-architecture-search hyperparameter-optimization optimizer
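A minimal sketch of an NNI trial script (the hyper-parameter name and training function below are placeholders): the tuner proposes parameters and the trial reports its metric back. The experiment itself is launched separately, e.g. with `nnictl create --config config.yml`, which prints the WebUI endpoint.

```python
import nni

def train_and_evaluate(lr):
    # Placeholder for the user's real training loop (hypothetical).
    return 1.0 - abs(lr - 0.1)

params = nni.get_next_parameter()        # hyper-parameters proposed by the tuner
lr = params.get("learning_rate", 0.01)

accuracy = train_and_evaluate(lr)
nni.report_final_result(accuracy)        # metric fed back to the tuner/WebUI
```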
Archai is a platform for Neural Architecture Search (NAS) that allows you to generate efficient deep networks for your applications. Archai aspires to accelerate NAS research by enabling easy mix-and-match between different techniques while ensuring reproducibility, self-documented hyper-parameters and fair comparison. To achieve this, Archai uses a common code base that unifies several algorithms.

deep-learning darts nas network-architecture neural-architecture-search petridish
PyTorch implementation of Neural Optimizer Search's Optimizer_1.

neural-optimizer-search pytorch deep-learning neural-architecture-search optimizer
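The Neural Optimizer Search paper (Bello et al., 2017) is best known for the PowerSign and AddSign update rules; whether Optimizer_1 corresponds exactly to PowerSign is an assumption here. A sketch of PowerSign, whose update scales the gradient by e^(sign(g)·sign(m)):

```python
import torch

class PowerSign(torch.optim.Optimizer):
    """Sketch of the PowerSign rule: step = lr * e^(sign(g)*sign(m)) * g."""
    def __init__(self, params, lr=1e-3, beta=0.9):
        super().__init__(params, dict(lr=lr, beta=beta))

    @torch.no_grad()
    def step(self):
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is None:
                    continue
                m = self.state[p].setdefault("m", torch.zeros_like(p))
                m.mul_(group["beta"]).add_(p.grad, alpha=1 - group["beta"])
                update = torch.exp(p.grad.sign() * m.sign()) * p.grad
                p.add_(update, alpha=-group["lr"])

model = torch.nn.Linear(4, 1)
opt = PowerSign(model.parameters())
model(torch.randn(8, 4)).pow(2).mean().backward()
opt.step()  # gradient/momentum agreement => larger step, disagreement => smaller
```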
Basic implementation of the Controller RNN from Neural Architecture Search with Reinforcement Learning and Learning Transferable Architectures for Scalable Image Recognition. For full training details, please see train.py.

neural-architecture-search keras tensorflow
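A minimal PyTorch sketch of the controller idea (the repository itself uses Keras/TensorFlow; all names here are illustrative): an LSTM emits a distribution over architectural choices at each step, an architecture is sampled, and REINFORCE raises the probability of samples whose child network validated well.

```python
import torch
import torch.nn as nn

class Controller(nn.Module):
    def __init__(self, num_choices=4, steps=6, hidden=64):
        super().__init__()
        self.steps = steps
        self.cell = nn.LSTMCell(num_choices, hidden)
        self.embed = nn.Embedding(num_choices, num_choices)
        self.head = nn.Linear(hidden, num_choices)

    def sample(self):
        h = torch.zeros(1, self.cell.hidden_size)
        c = torch.zeros(1, self.cell.hidden_size)
        inp = torch.zeros(1, self.embed.embedding_dim)
        actions, log_probs = [], []
        for _ in range(self.steps):
            h, c = self.cell(inp, (h, c))
            dist = torch.distributions.Categorical(logits=self.head(h))
            a = dist.sample()                  # one architectural decision
            actions.append(int(a))
            log_probs.append(dist.log_prob(a))
            inp = self.embed(a)                # feed the choice back in
        return actions, torch.stack(log_probs).sum()

controller = Controller()
opt = torch.optim.Adam(controller.parameters(), lr=3e-4)
arch, log_prob = controller.sample()
reward = 0.9                   # stand-in for the child model's val accuracy
loss = -log_prob * reward      # REINFORCE policy gradient
opt.zero_grad(); loss.backward(); opt.step()
```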
NASLib is a modular and flexible framework created with the aim of providing a common codebase to the community to facilitate research on Neural Architecture Search (NAS). It offers high-level abstractions for designing and reusing search spaces, along with interfaces to benchmarks and evaluation pipelines, enabling state-of-the-art NAS methods to be implemented and extended in a few lines of code. Its modularized nature allows researchers to easily innovate on individual components (e.g., define a new search space while reusing an optimizer and evaluation pipeline, or propose a new optimizer with existing search spaces). NASLib was developed by the AutoML Freiburg group; with the help of the NAS community, we are constantly adding new search spaces, optimizers and benchmarks to the library. Please reach out to zelaa@cs.uni-freiburg.de for any questions or potential collaborations.

nas automl neural-architecture-search
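A usage sketch paraphrased from memory of the NASLib README; exact module paths and class names may differ between versions, so treat every name below as an assumption. It shows the mix-and-match design: a search space and an optimizer are composed independently, then driven by a common trainer.

```python
from naslib.optimizers import DARTSOptimizer
from naslib.search_spaces import SimpleCellSearchSpace
from naslib.defaults.trainer import Trainer
from naslib.utils import get_config_from_args

config = get_config_from_args()          # NASLib's standard config handling
search_space = SimpleCellSearchSpace()   # swap in any registered search space
optimizer = DARTSOptimizer(config)       # ...or any other NAS optimizer
optimizer.adapt_search_space(search_space)

trainer = Trainer(optimizer, config)
trainer.search()                         # run the architecture search
trainer.evaluate()                       # retrain/evaluate the found arch
```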
This repository is maintained by the AutoML Group Freiburg. Please feel free to submit pull requests or open an issue to add papers.

transformer neural-architecture-search