VAE-Gumbel-Softmax - An implementation of a Variational Autoencoder using the Gumbel-Softmax reparametrization trick in TensorFlow (tested on r1.5)


Also included is a Jupyter notebook which shows how the Gumbel-Max trick for sampling discrete variables relates to Concrete distributions. Note: the current Dockerfile is for TensorFlow 1.5 CPU training.

https://github.com/vithursant/VAE-Gumbel-Softmax
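
As a rough illustration of the trick this repository implements, here is a minimal NumPy sketch (not the repository's own code) of Gumbel-Max sampling and its Gumbel-Softmax relaxation; the temperature tau and the example logits are made up for illustration:

    import numpy as np

    def sample_gumbel(shape, eps=1e-20):
        # Standard Gumbel noise via inverse transform: -log(-log(U)).
        u = np.random.uniform(0.0, 1.0, shape)
        return -np.log(-np.log(u + eps) + eps)

    def gumbel_max(logits):
        # Gumbel-Max trick: argmax of (logits + Gumbel noise) is an exact
        # sample from the categorical distribution softmax(logits).
        return np.argmax(logits + sample_gumbel(logits.shape))

    def gumbel_softmax(logits, tau=1.0):
        # Gumbel-Softmax / Concrete relaxation: replace the argmax with a
        # temperature-controlled softmax; as tau -> 0 samples become one-hot.
        y = (logits + sample_gumbel(logits.shape)) / tau
        e = np.exp(y - y.max())
        return e / e.sum()

    logits = np.log(np.array([0.1, 0.6, 0.3]))
    print(gumbel_max(logits))           # hard, non-differentiable sample
    print(gumbel_softmax(logits, 0.5))  # soft, differentiable relaxation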

Related Projects

semi-supervised-pytorch - Implementations of different VAE-based semi-supervised and generative models in PyTorch

  •    Python

A PyTorch-based package containing useful models for modern deep semi-supervised learning and deep generative models. Want to jump right into it? Look into the notebooks. 2018.04.17 - The Gumbel softmax notebook has been added to show how you can use discrete latent variables in VAEs. 2018.02.28 - The β-VAE notebook was added to show how VAEs can learn disentangled representations.
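
For orientation, the β-VAE objective illustrated by the notebook mentioned above has the standard form (the usual notation from the β-VAE paper, not this package's code):

    \mathcal{L}(\theta, \phi; x) =
        \mathbb{E}_{q_\phi(z|x)}\left[\log p_\theta(x|z)\right]
        - \beta \, D_{\mathrm{KL}}\left(q_\phi(z|x) \,\|\, p(z)\right)

Setting β = 1 recovers the ordinary VAE, while β > 1 pressures the latent code toward disentangled factors.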

cppn-gan-vae-tensorflow - Train CPPNs as a Generative Model, using Generative Adversarial Networks and Variational Autoencoder techniques to produce high resolution images

  •    Python

Train a Compositional Pattern Producing Network as a generative model, using Generative Adversarial Network and Variational Autoencoder techniques to produce high-resolution images. Run python train.py from the command line to train from scratch and experiment with different settings.
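
As a loose sketch of the CPPN idea itself (independent of this repository's actual architecture and training code), a CPPN maps each pixel's coordinates through a small network to an intensity, so one trained network can render an image at any resolution; all sizes below are arbitrary:

    import numpy as np

    def cppn_image(width, height, hidden=32, seed=0):
        # Coordinate features (x, y, r) for every pixel in the output grid.
        rng = np.random.default_rng(seed)
        xs, ys = np.meshgrid(np.linspace(-1, 1, width),
                             np.linspace(-1, 1, height))
        feats = np.stack([xs, ys, np.sqrt(xs**2 + ys**2)],
                         axis=-1).reshape(-1, 3)
        # A random two-layer tanh network; the real project learns these
        # weights with GAN/VAE training and also conditions on a latent z.
        w1 = rng.normal(size=(3, hidden))
        w2 = rng.normal(size=(hidden, 1))
        img = np.tanh(np.tanh(feats @ w1) @ w2)
        return img.reshape(height, width)  # grayscale values in [-1, 1]

    print(cppn_image(64, 64).shape)  # (64, 64); raise width/height freely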

LargeMargin_Softmax_Loss - Implementation for <Large-Margin Softmax Loss for Convolutional Neural Networks> in ICML'16

  •    C++

We introduce a large-margin softmax (L-Softmax) loss for convolutional neural networks. The L-Softmax loss can greatly improve the generalization ability of CNNs, so it is very suitable for general classification, feature embedding, and biometric (e.g. face) verification. We give a 2D feature visualization on MNIST to illustrate our L-Softmax loss. The paper was published in ICML 2016 and is also available on arXiv.
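
For reference, the margin construction from the paper (standard notation as published, not lifted from this repository's C++ code) replaces the target-class score cos(θ) with a scaled version ψ(θ):

    L_i = -\log \frac{e^{\|W_{y_i}\| \|x_i\| \, \psi(\theta_{y_i})}}
                     {e^{\|W_{y_i}\| \|x_i\| \, \psi(\theta_{y_i})}
                      + \sum_{j \neq y_i} e^{\|W_j\| \|x_i\| \cos\theta_j}},
    \qquad
    \psi(\theta) = (-1)^k \cos(m\theta) - 2k,
    \quad \theta \in \left[\tfrac{k\pi}{m}, \tfrac{(k+1)\pi}{m}\right]

The integer margin m controls how decisively the target class must win the angular comparison; m = 1 recovers the ordinary softmax loss.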

generative-models - Collection of generative models, e.g. GAN, VAE in Pytorch and Tensorflow.

  •    Python

Collection of generative models, e.g. GAN and VAE, in PyTorch and TensorFlow. Also present here are an RBM and a Helmholtz Machine. Generated samples will be stored in the GAN/{gan_model}/out (or VAE/{vae_model}/out, etc.) directory during training.

autoencoding_beyond_pixels - Generative image model with learned similarity measures

  •    Python

Implementation of the method described in our arXiv paper. We present an autoencoder that leverages learned representations to better measure similarities in data space. By combining a variational autoencoder with a generative adversarial network, we can use the learned feature representations in the GAN discriminator as the basis for the VAE reconstruction objective. Thereby, we replace element-wise errors with feature-wise errors to better capture the data distribution while offering invariance towards, e.g., translation. We apply our method to images of faces and show that it outperforms VAEs with element-wise similarity measures in terms of visual fidelity. Moreover, we show that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic.
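
A minimal sketch of the feature-wise reconstruction idea in PyTorch (the disc_features module and the choice of layer are assumptions for illustration, not this repository's API):

    import torch.nn.functional as F

    def vaegan_recon_loss(disc_features, x, x_recon):
        # Learned similarity: instead of the pixel-wise error
        # ||x - x_recon||^2, compare the activations of an intermediate
        # GAN-discriminator layer for the real image and the reconstruction.
        f_real = disc_features(x)        # features of the real image
        f_fake = disc_features(x_recon)  # features of the VAE reconstruction
        return F.mse_loss(f_fake, f_real)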


adaptive-softmax - Implements an efficient softmax approximation as described in the paper "Efficient softmax approximation for GPUs" (http://arxiv.org/abs/1609.04309)

  •    Lua

The adaptive-softmax project is a Torch implementation of the efficient softmax approximation for graphics processing units (GPUs) described in the paper "Efficient softmax approximation for GPUs" (http://arxiv.org/abs/1609.04309). This method is useful for training language models with large vocabularies. We provide a script to train large recurrent neural network language models, in order to reproduce the results of the paper.
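
Although this project is written in Lua Torch, PyTorch ships the same technique as nn.AdaptiveLogSoftmaxWithLoss; a minimal usage sketch with a made-up vocabulary size and cutoffs:

    import torch
    import torch.nn as nn

    vocab, hidden = 100000, 512
    # The frequency-sorted vocabulary is split into a small, cheap "head"
    # cluster and progressively larger "tail" clusters at these cutoffs.
    crit = nn.AdaptiveLogSoftmaxWithLoss(hidden, vocab,
                                         cutoffs=[2000, 10000, 50000])

    h = torch.randn(32, hidden)               # e.g. RNN hidden states
    targets = torch.randint(0, vocab, (32,))  # next-word indices
    out = crit(h, targets)                    # (target log-probs, mean loss)
    print(out.loss)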

faster-rnnlm - Faster Recurrent Neural Network Language Modeling Toolkit with Noise Contrastive Estimation and Hierarchical Softmax

  •    C++

In a nutshell, the goal of this project is to create an RNNLM implementation that can be trained on huge datasets (several billion words) and very large vocabularies (several hundred thousand words) and used in real-world ASR and MT problems. In addition, to achieve better results this implementation supports acclaimed setups such as ReLU+DiagonalInitialization [1], GRU [2], NCE [3], and RMSProp [4]. How fast is it? On the One Billion Word Benchmark [8] and a 3.3 GHz CPU, the program with standard parameters (a sigmoid hidden layer of size 256 and hierarchical softmax) processes more than 250k words per second using 8 threads, i.e. 15 million words per minute. As a result, an epoch takes less than one hour. Check the Experiments section for more numbers and figures.
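
As a toy sketch of why hierarchical softmax makes such vocabularies tractable (a hand-built binary tree in NumPy, not this toolkit's C++ implementation): the probability of a word becomes a product of O(log V) binary decisions instead of one V-way normalization.

    import numpy as np

    def hsm_log_prob(path_nodes, path_signs, hidden, node_vecs):
        # path_nodes: internal tree nodes from the root to the word's leaf.
        # path_signs: +1/-1 for the branch taken at each node.
        # Cost is O(tree depth) = O(log V) instead of O(V) per word.
        logp = 0.0
        for node, sign in zip(path_nodes, path_signs):
            z = sign * (node_vecs[node] @ hidden)
            logp += -np.log1p(np.exp(-z))  # log sigmoid(z)
        return logp

    rng = np.random.default_rng(0)
    node_vecs = rng.normal(size=(7, 16))  # internal nodes of a small tree
    print(hsm_log_prob([0, 1, 4], [+1, -1, +1],
                       rng.normal(size=16), node_vecs))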

sphereface - Implementation for <SphereFace: Deep Hypersphere Embedding for Face Recognition> in CVPR'17

  •    Jupyter

SphereFace is released under the MIT License (refer to the LICENSE file for details). 2018.8.14: We recommend an interesting ECCV 2018 paper that comprehensively evaluates SphereFace (A-Softmax) on currently widely used face datasets and on their proposed noise-controlled IMDb-Face dataset. Interested users can try training SphereFace on the IMDb-Face dataset. Take a look here.
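
For context, A-Softmax is the L-Softmax margin shown earlier applied on a hypersphere: the paper constrains ||W_j|| = 1 and b_j = 0, so only the angle between the feature and each class weight carries the score (notation from the paper, not this repository's code):

    L_i = -\log \frac{e^{\|x_i\| \, \psi(\theta_{y_i})}}
                     {e^{\|x_i\| \, \psi(\theta_{y_i})}
                      + \sum_{j \neq y_i} e^{\|x_i\| \cos\theta_j}}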

grt - gesture recognition toolkit

  •    C++

The Gesture Recognition Toolkit (GRT) is a cross-platform, open-source, C++ machine learning library designed for real-time gesture recognition. Classification: Adaboost, Decision Tree, Dynamic Time Warping, Gaussian Mixture Models, Hidden Markov Models, k-nearest neighbor, Naive Bayes, Random Forests, Support Vector Machine, Softmax, and more...

TensorFlow-Tutorials - Provides source code for practicing TensorFlow step by step, from the basics to applications

  •    Python

Provides source code for practicing TensorFlow step by step, from the basics to applications. It covers most of the content of the guides on the official TensorFlow site, but the code is written much more concisely than the official examples, so the concepts should be easy to pick up. Also, all comments are in Korean(!).

variational-text-tensorflow - TensorFlow implementation of Neural Variational Inference for Text Processing

  •    Python

TensorFlow implementation of Neural Variational Inference for Text Processing. Training details of NVDM: the best result is achieved with one-shot updates, not alternating updates.
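
For context, NVDM encodes a document's bag-of-words X = {w_1, ..., w_N} into a Gaussian latent z and decodes each word with a softmax over the vocabulary; the variational lower bound it optimizes has the familiar form (notation from the NVDM paper, not this repository's code):

    \mathcal{L} = \mathbb{E}_{q_\phi(z|X)}\left[\sum_{i=1}^{N} \log p_\theta(w_i|z)\right]
                  - D_{\mathrm{KL}}\left(q_\phi(z|X) \,\|\, p(z)\right)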

TensorFlow-ML-Exercises - Learning Machine Learning with TensorFlow

  •    Python

The code that trains the MNIST example with a CNN model has been slightly expanded and cleaned up and uploaded to the TensorFlow-MNIST repository. Summaries are saved so that TensorBoard can be used, and the model-building part is separated from the Trainer and Tester so that a trained model can be saved and then used separately. Please take a look.
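
A minimal TensorFlow 1.x sketch of the pattern described above: write summaries during training for TensorBoard and persist the model with a Saver so a separate Tester can restore it later. The toy dense model, random data, and paths are placeholders, not the repository's code:

    import numpy as np
    import tensorflow as tf  # TensorFlow 1.x API

    # Toy stand-in for the MNIST CNN: a single dense classification layer.
    x = tf.placeholder(tf.float32, [None, 784], name='x')
    y = tf.placeholder(tf.int64, [None], name='y')
    logits = tf.layers.dense(x, 10)
    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y,
                                                       logits=logits))
    train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

    tf.summary.scalar('loss', loss)  # scalar curve shown in TensorBoard
    merged = tf.summary.merge_all()
    saver = tf.train.Saver()         # lets a separate Tester restore weights

    with tf.Session() as sess:
        writer = tf.summary.FileWriter('./logs', sess.graph)
        sess.run(tf.global_variables_initializer())
        for step in range(100):
            xb = np.random.rand(32, 784).astype(np.float32)
            yb = np.random.randint(0, 10, size=32)
            summary, _ = sess.run([merged, train_op], {x: xb, y: yb})
            writer.add_summary(summary, step)
        saver.save(sess, './model/ckpt')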

node-tensorflow - Node.js + TensorFlow

  •    JavaScript

TensorFlow is Google's machine learning runtime. It is implemented as a C++ runtime, along with a Python framework to support building a variety of models, especially neural networks for deep learning. It is interesting to be able to use TensorFlow in a Node.js application using just JavaScript (or TypeScript if that's your preference). However, the Python functionality is vast (several ops, estimator implementations, etc.) and continually expanding. Instead, it is more practical to build graphs and train models in Python, and then consume those for runtime use cases (like prediction or inference) in a pure Node.js, Python-free deployment. This is what this node module enables.
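
A sketch of the Python half of that workflow under TensorFlow 1.x: build (and normally train) the graph in Python, then export a SavedModel that a Python-free runtime can load for inference. The tiny graph and paths are placeholders; tf.saved_model.simple_save is available in later 1.x releases, and the Node.js loading side is handled by this module:

    import tensorflow as tf  # TensorFlow 1.x API

    x = tf.placeholder(tf.float32, [None, 4], name='input')
    w = tf.Variable(tf.zeros([4, 2]))
    scores = tf.identity(tf.matmul(x, w), name='scores')

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # ...training would happen here...
        # Export graph structure + weights for a non-Python consumer.
        tf.saved_model.simple_save(sess, './export/my_model',
                                   inputs={'input': x},
                                   outputs={'scores': scores})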