Also included is a Jupyter notebook which shows how the Gumbel-Max trick for sampling discrete variables relates to Concrete distributions. Note: the current Dockerfile is for TensorFlow 1.5 CPU training.
https://github.com/vithursant/VAE-Gumbel-Softmax
Tags | tensorflow deeplearning variational-autoencoder gumbel-softmax vae mnist |
Implementation | Python |
License | Apache |
Platform | Windows Linux |
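To make the relation in the notebook above concrete: the Gumbel-Max trick turns sampling from a categorical distribution into an argmax over noise-perturbed logits, and the Concrete (Gumbel-softmax) distribution replaces that argmax with a temperature-controlled softmax so gradients can flow. A minimal NumPy sketch of both, not code from the repository:

```python
import numpy as np

def sample_gumbel(shape, eps=1e-20):
    """Draw standard Gumbel noise via inverse transform sampling."""
    u = np.random.uniform(0.0, 1.0, shape)
    return -np.log(-np.log(u + eps) + eps)

def gumbel_max_sample(log_probs):
    """Exact categorical sample: argmax of logits plus Gumbel noise."""
    return int(np.argmax(log_probs + sample_gumbel(log_probs.shape)))

def gumbel_softmax_sample(log_probs, temperature=0.5):
    """Concrete relaxation: a softmax replaces the non-differentiable argmax."""
    y = (log_probs + sample_gumbel(log_probs.shape)) / temperature
    e = np.exp(y - y.max())
    return e / e.sum()

probs = np.array([0.1, 0.6, 0.3])
draws = [gumbel_max_sample(np.log(probs)) for _ in range(10000)]
print(np.bincount(draws) / 10000)            # approximately [0.1, 0.6, 0.3]
print(gumbel_softmax_sample(np.log(probs)))  # near a one-hot vertex at low temperature
```

As the temperature approaches zero the relaxed sample concentrates on a single vertex of the simplex, recovering the exact Gumbel-Max draw.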
A PyTorch-based package containing useful models for modern deep semi-supervised learning and deep generative models. Want to jump right into it? Take a look at the notebooks. 2018.04.17 - The Gumbel-softmax notebook was added to show how you can use discrete latent variables in VAEs. 2018.02.28 - The β-VAE notebook was added to show how VAEs can learn disentangled representations.
semi-supervised-learning pytorch generative-models
TensorFlow implementation of Deep Convolutional Generative Adversarial Networks, Variational Autoencoder (also Deep and Convolutional) and DRAW: A Recurrent Neural Network For Image Generation. Deep Convolutional Generative Adversarial Networks produce decent results after 10 epochs using default parameters.
tensorflow draw recurrent-neural-networks gan vae
Train a Compositional Pattern Producing Network as a generative model, using Generative Adversarial Network and Variational Autoencoder techniques to produce high-resolution images. Run python train.py from the command line to train from scratch and experiment with different settings.
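For readers unfamiliar with CPPNs: the network maps pixel coordinates (rather than a fixed pixel grid) to intensities, which is what makes arbitrary output resolutions possible. Below is a minimal NumPy sketch of a randomly initialized CPPN; the layer sizes, tanh activations, and latent handling are illustrative assumptions, not this repository's architecture:

```python
import numpy as np

def cppn_image(width, height, hidden=32, depth=3, latent_dim=8, seed=0):
    rng = np.random.default_rng(seed)
    # Coordinate inputs: x, y in [-1, 1] plus radial distance r.
    xs, ys = np.meshgrid(np.linspace(-1, 1, width), np.linspace(-1, 1, height))
    r = np.sqrt(xs**2 + ys**2)
    z = rng.standard_normal(latent_dim)  # latent vector, tiled across pixels
    inp = np.stack([xs.ravel(), ys.ravel(), r.ravel()], axis=1)
    inp = np.concatenate([inp, np.tile(z, (inp.shape[0], 1))], axis=1)
    h = inp
    for _ in range(depth):
        w = rng.standard_normal((h.shape[1], hidden))
        h = np.tanh(h @ w)               # smooth activations yield smooth images
    out = 1.0 / (1.0 + np.exp(-h @ rng.standard_normal((hidden, 1))))
    return out.reshape(height, width)    # grayscale image in [0, 1]

img = cppn_image(256, 256)  # any resolution works: the net maps coordinates, not pixels
```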
We introduce a large-margin softmax (L-Softmax) loss for convolutional neural networks. The L-Softmax loss can greatly improve the generalization ability of CNNs, so it is well suited to general classification, feature embedding and biometric (e.g. face) verification. We provide a 2D feature visualization on MNIST to illustrate the L-Softmax loss. The paper was published at ICML 2016 and is also available on arXiv.
l-softmax icml-2016 lsoftmax-loss caffe face-recognition image-recognition deep-learning
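To make the margin idea concrete: L-Softmax rewrites the target logit W_y^T x = ||W_y|| ||x|| cos(θ) as ||W_y|| ||x|| ψ(θ), with ψ(θ) = (-1)^k cos(mθ) - 2k for θ ∈ [kπ/m, (k+1)π/m], which demands a larger angular margin on the correct class. The sketch below is an illustrative NumPy rendering of that substitution, not the paper's Caffe implementation:

```python
import numpy as np

def psi(theta, m):
    """Monotonically decreasing margin function from the L-Softmax paper."""
    k = np.floor(theta * m / np.pi)
    return ((-1.0) ** k) * np.cos(m * theta) - 2.0 * k

def l_softmax_logits(x, W, y, m=4):
    """x: (d,) feature; W: (d, C) class weights; y: target class index."""
    logits = x @ W  # ordinary inner products for all classes
    cos_y = logits[y] / (np.linalg.norm(x) * np.linalg.norm(W[:, y]) + 1e-12)
    theta = np.arccos(np.clip(cos_y, -1.0, 1.0))
    # Replace only the target-class logit with its margin-penalized version.
    logits = logits.copy()
    logits[y] = np.linalg.norm(x) * np.linalg.norm(W[:, y]) * psi(theta, m)
    return logits  # feed into the usual cross-entropy softmax
```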
Collection of generative models, e.g. GAN and VAE, in PyTorch and TensorFlow. Also present here are RBM and Helmholtz Machine. Generated samples will be stored in the GAN/{gan_model}/out (or VAE/{vae_model}/out, etc.) directory during training.
vae gan pytorch tensorflow generative-model machine-learning rbm restricted-boltzmann-machine
Implementation of the method described in our arXiv paper. We present an autoencoder that leverages learned representations to better measure similarities in data space. By combining a variational autoencoder with a generative adversarial network, we can use learned feature representations in the GAN discriminator as the basis for the VAE reconstruction objective. Thereby, we replace element-wise errors with feature-wise errors to better capture the data distribution while offering invariance towards e.g. translation. We apply our method to images of faces and show that it outperforms VAEs with element-wise similarity measures in terms of visual fidelity. Moreover, we show that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic.
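A hedged PyTorch-style sketch of that core substitution, where encoder, decoder, and disc_features (an intermediate-layer activation of the discriminator) are placeholder modules rather than names from the paper's code: the reconstruction term compares discriminator features of x and its reconstruction instead of raw pixels.

```python
import torch
import torch.nn.functional as F

def vae_gan_recon_loss(x, encoder, decoder, disc_features):
    mu, logvar = encoder(x)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
    x_hat = decoder(z)
    # Feature-wise (learned-similarity) error replaces per-pixel error.
    recon = F.mse_loss(disc_features(x_hat), disc_features(x))
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```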
This package is part of the Kadenze Academy program Creative Applications of Deep Learning w/ TensorFlow. Type from cadl import and then press Tab to see the list of available modules.
deep-learning neural-network tutorial mooc gan vae vae-gan pixelcnn wavenet magenta nsynth tensorflow celeba cyclegan dcgan word2vec glove autoregressive conditional course
The adaptive-softmax project is a Torch implementation of the efficient softmax approximation for graphics processing units (GPUs), described in the paper "Efficient softmax approximation for GPUs" (http://arxiv.org/abs/1609.04309). This method is useful for training language models with large vocabularies. We provide a script to train large recurrent neural network language models in order to reproduce the results of the paper.
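The essence of the approximation is a two-level softmax arranged by word frequency: frequent words live in a small head softmax together with one logit per tail cluster, so most training examples never touch the full vocabulary. The NumPy sketch below illustrates the probability computation under the simplifying assumption of equal-size tail clusters; W_head and tail_Ws are illustrative names, not the Torch code:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def adaptive_log_prob(h, W_head, tail_Ws, word, head_size):
    """h: (d,) hidden state; W_head: (d, head_size + n_clusters);
    tail_Ws: list of (d, cluster_size) matrices, one per tail cluster."""
    head = softmax(h @ W_head)
    if word < head_size:                 # frequent word: one small softmax
        return np.log(head[word])
    # Rare word: the head assigns mass to its cluster,
    # then the cluster softmax splits that mass among its words.
    cluster, offset = divmod(word - head_size, tail_Ws[0].shape[1])
    tail = softmax(h @ tail_Ws[cluster])
    return np.log(head[head_size + cluster]) + np.log(tail[offset])
```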
All pull requests are welcome; make sure to follow the contribution guidelines when you submit a pull request.
tensorflow tensorflow-tutorials mnist-classification mnist machine-learning android tensorflow-models machine-learning-android tensorflow-android tensorflow-model mnist-model deep-learning deep-neural-networks deeplearning deep-learning-tutorial
NVAE is a deep hierarchical variational autoencoder that enables training SOTA likelihood-based generative models on several image datasets. These datasets are downloaded automatically when you run the main NVAE training with train.py for the first time. You can use --data=$DATA_DIR/mnist or --data=$DATA_DIR/cifar10 so that the datasets are downloaded to the corresponding directories.
Some examples require the MNIST dataset for training and testing. Don't worry: this dataset is downloaded automatically when running the examples (via input_data.py). MNIST is a database of handwritten digits; for a quick description of the dataset, you can check this notebook.
recurrent-neural-networks convolutional-neural-networks deep-learning-tutorial tensorflow tensorlayer keras deep-reinforcement-learning tensorflow-tutorials deep-learning machine-learning notebook autoencoder multi-layer-perceptron reinforcement-learning tflearn neural-networks neural-network neural-machine-translation nlp cnn
In a nutshell, the goal of this project is to create an rnnlm implementation that can be trained on huge datasets (several billion words) and very large vocabularies (several hundred thousand words) and used in real-world ASR and MT problems. In addition, to achieve better results this implementation supports such praised setups as ReLU+DiagonalInitialization [1], GRU [2], NCE [3], and RMSProp [4]. How fast is it? On the One Billion Word Benchmark [8] and a 3.3 GHz CPU, the program with standard parameters (sigmoid hidden layer of size 256 and hierarchical softmax) processes more than 250k words per second in 8 threads, i.e. 15 million words per minute; as a result, an epoch takes less than one hour. Check the Experiments section for more numbers and figures.
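Of the listed setups, NCE [3] is what sidesteps the full-vocabulary softmax during training: the model learns to distinguish the observed word from k samples drawn from a noise distribution. A hedged NumPy sketch of the per-word NCE objective follows; score_fn is a placeholder for the network's unnormalized log-score and is assumed to accept arrays of word indices:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nce_loss(score_fn, target, noise_probs, k, rng=np.random.default_rng()):
    """score_fn(w) -> unnormalized log-score s(w); noise_probs[w] = q(w)."""
    noise = rng.choice(len(noise_probs), size=k, p=noise_probs)
    # P(data | w) = sigmoid(s(w) - log(k * q(w))): a binary classification
    # between one observed word and k noise words, no normalization needed.
    pos = np.log(sigmoid(score_fn(target) - np.log(k * noise_probs[target])))
    neg = np.log(1.0 - sigmoid(score_fn(noise) - np.log(k * noise_probs[noise])))
    return -(pos + neg.sum())
```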
SphereFace is released under the MIT License (refer to the LICENSE file for details). 2018.8.14: We recommend an interesting ECCV 2018 paper that comprehensively evaluates SphereFace (A-Softmax) on the currently widely used face datasets and on their proposed noise-controlled IMDb-Face dataset. Interested users can try to train SphereFace on the IMDb-Face dataset. Take a look here.
face-recognition caffe sphereface cvpr-2017 face-detection angular-softmax deep-learning
The Gesture Recognition Toolkit (GRT) is a cross-platform, open-source, C++ machine learning library designed for real-time gesture recognition. Classification: AdaBoost, Decision Tree, Dynamic Time Warping, Gaussian Mixture Models, Hidden Markov Models, k-nearest neighbor, Naive Bayes, Random Forests, Support Vector Machine, Softmax, and more...
gesture-recognition grt machine-learning gesture-recognition-toolkit support-vector-machine random-forest kmeans dynamic-time-warping softmax linear-regression
This is the official PyTorch package for the discrete VAE used for DALL·E. The transformer used to generate the images from the text is not part of this code release.
Provides source code for practicing TensorFlow step by step, from the basics through applications. It covers most of the content of the guides on the official TensorFlow site, and because the code is written far more concisely than the source code provided on the official site, you should be able to pick up the concepts easily. Also, all comments are in Korean (!).
neural-network tensorflow mnist autoencoder rnn deep-learning tutorial chatbot seq2seq dqn word2vec cnn gan inception
TensorFlow implementation of Neural Variational Inference for Text Processing, with training details for NVDM. The best result can be achieved by one-shot updates, not alternating updates.
Simple TensorFlow implementation of "DenseNet" using CIFAR-10 and MNIST.
densenet tensorflow densenet-tensorflowMNIST 예제를 CNN 모델로 학습하는 코드를 조금 보강하고 정리해서 TensorFlow-MNIST 저장소에 올려두었습니다. summary 를 저장해서 TensorBoard 를 사용할 수 있게 하였으며, 모델을 생성하는 부분과 Trainer, Tester 를 분리하여 학습한 모델을 저장 후 따로 사용할 수 있도록 해 두었으니 참고 해 주세요.