
generative-models - Collection of generative models, e.g. GAN and VAE, in PyTorch and TensorFlow.

  •    Python

Collection of generative models, e.g. GAN and VAE, in PyTorch and TensorFlow. Also present here are an RBM and a Helmholtz Machine. Generated samples are stored in the GAN/{gan_model}/out (or VAE/{vae_model}/out, etc.) directory during training.

DiscoGAN-pytorch - PyTorch implementation of "Learning to Discover Cross-Domain Relations with Generative Adversarial Networks"

  •    Jupyter

PyTorch implementation of Learning to Discover Cross-Domain Relations with Generative Adversarial Networks. All samples in README.md are generated by the neural network except the first image in each row. The network structure differs slightly (here) from the author's code.

DCGAN-tensorflow - A TensorFlow implementation of "Deep Convolutional Generative Adversarial Networks"

  •    Javascript

TensorFlow implementation of Deep Convolutional Generative Adversarial Networks (DCGAN), a stabilized variant of Generative Adversarial Networks. The referenced Torch code can be found here.

jukebox - Code for the paper "Jukebox: A Generative Model for Music"

  •    Python

The samples decoded from each level are stored in {name}/level_{level}. You can also view the samples as an HTML page with the aligned lyrics under {name}/level_{level}/index.html. Run python -m http.server and open the HTML page through the server to see the lyrics animate as the song plays. A summary of all sampling data, including zs, x, labels, and sampling_kwargs, is stored in {name}/level_{level}/data.pth.tar.

The hps are for a V100 GPU with 16 GB of GPU memory. The 1b_lyrics, 5b, and 5b_lyrics top-level priors take up 3.8 GB, 10.3 GB, and 11.5 GB, respectively. The peak memory usage to store the transformer key/value cache is about 400 MB for 1b_lyrics and 1 GB for 5b_lyrics per sample. If you run into CUDA OOM issues, try 1b_lyrics, or decrease max_batch_size in sample.py and --n_samples in the script call.
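To inspect that summary without re-running sampling, you can load it with torch.load; a minimal sketch, assuming a run named my_run (a placeholder for your {name}) and the file layout described above:

import torch

# "my_run" and level 0 are hypothetical placeholders for your
# {name} and {level} values from the sampling run.
data = torch.load("my_run/level_0/data.pth.tar", map_location="cpu")

# Per the description above, the summary should contain
# zs, x, labels and sampling_kwargs.
print(list(data.keys()))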

simulated-unsupervised-tensorflow - TensorFlow implementation of "Learning from Simulated and Unsupervised Images through Adversarial Training"

  •    Python

TensorFlow implementation of Learning from Simulated and Unsupervised Images through Adversarial Training. The result shown uses lambda=1.0 with optimizer=sgd after 8,000 steps.

pixel-rnn-tensorflow - in progress

  •    Python

Samples generated with pixel_cnn after 50 epochs.

NeuralDialog-CVAE - TensorFlow implementation of Knowledge-Guided CVAE for dialog generation

  •    OpenEdge

We provide a TensorFlow implementation of the CVAE-based dialog model described in Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders, published as a long paper at ACL 2017. See the paper for more details. Outputs are printed to stdout, and generated responses are saved to test.txt in the test_path.

voxel-flow - Video Frame Synthesis using Deep Voxel Flow

  •    Python

We address the problem of synthesizing new video frames in an existing video, either in-between existing frames (interpolation), or subsequent to them (extrapolation). Our method requires no human supervision, and any video can be used as training data by dropping, and then learning to predict, existing frames. Deep Voxel Flow (DVF) is efficient, and can be applied at any video resolution. We demonstrate that our method produces results that both quantitatively and qualitatively improve upon the state-of-the-art. Note: we encourage you to check out the newly released pytorch-voxel-flow.
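The self-supervised setup described above (drop a frame, then learn to predict it from its neighbours) is easy to sketch. The function name and the (T, H, W, C) frame layout below are illustrative assumptions, not the repo's actual API:

import numpy as np

def make_interpolation_triplets(frames):
    # frames: array of shape (T, H, W, C). For each interior frame t,
    # the model input is (frame t-1, frame t+1) and the target is the
    # dropped frame t, mirroring the training scheme described above.
    inputs, targets = [], []
    for t in range(1, len(frames) - 1):
        inputs.append(np.stack([frames[t - 1], frames[t + 1]]))
        targets.append(frames[t])
    return np.stack(inputs), np.stack(targets)

# Example: 10 frames of 64x64 RGB video.
video = np.random.rand(10, 64, 64, 3).astype(np.float32)
x, y = make_interpolation_triplets(video)
print(x.shape, y.shape)  # (8, 2, 64, 64, 3) (8, 64, 64, 3)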

attend_infer_repeat - A TensorFlow implementation of Attend, Infer, Repeat

  •    Python

This is an unofficial TensorFlow implementation of Attend, Infer, Repeat (AIR), as presented in the following paper: S. M. Ali Eslami et al., Attend, Infer, Repeat: Fast Scene Understanding with Generative Models. I describe the implementation and the issues I ran into while working on it in this blog post.

seqGAN - A simplified PyTorch implementation of "SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient"

  •    Python

A PyTorch implementation of "SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient" (Yu, Lantao, et al.). The code is highly simplified, commented, and (hopefully) straightforward to understand. The policy gradients implemented are also much simpler than in the original work (https://github.com/LantaoYu/SeqGAN/) and do not involve rollouts; a single reward is used for the entire sentence (inspired by the examples in http://karpathy.github.io/2016/05/31/rl/). The architectures used differ from those in the original work. Specifically, a recurrent bidirectional GRU network is used as the discriminator.
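A minimal sketch of that simplification: a single discriminator reward per sentence weights the summed log-probabilities of the sampled tokens. Names and shapes below are illustrative, not the repo's actual code:

import torch

def policy_gradient_loss(log_probs, reward):
    # log_probs: (batch, seq_len) log-probabilities of the tokens the
    # generator actually sampled. reward: (batch,) one discriminator
    # reward per sentence -- no per-step rollouts, so the same reward
    # weights every step of the sequence.
    return -(log_probs.sum(dim=1) * reward).mean()

# Toy example with random values (4 sentences of 12 tokens each).
log_probs = torch.rand(4, 12).log()
reward = torch.rand(4)
print(policy_gradient_loss(log_probs, reward))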

MSG-Net - Multi-style Generative Network for Real-time Transfer

  •    Lua

We also provide a PyTorch implementation and an MXNet implementation. Please install Torch7 with CUDA and cuDNN support. The code has been tested on Ubuntu 16.04 with Titan X Pascal and Maxwell GPUs. Please follow this tutorial to train a new model.

tf-vqvae - TensorFlow implementation of the paper "Neural Discrete Representation Learning"

  •    Jupyter

This repository implements the paper Neural Discrete Representation Learning (VQ-VAE) in TensorFlow. ⚠️ This is not an official implementation, and it may have glitches (or even a major defect).
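At the heart of VQ-VAE is a nearest-neighbour lookup into a learned codebook. Below is a NumPy sketch of just that step; the repo's actual TensorFlow code, including its straight-through gradient handling, will differ:

import numpy as np

def vector_quantize(z_e, codebook):
    # z_e: (N, D) encoder outputs; codebook: (K, D) learned embeddings.
    # Each encoder output is replaced by its nearest codebook entry.
    dists = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = dists.argmin(axis=1)
    return codebook[idx], idx

codebook = np.random.randn(512, 64)   # K=512 codes of dimension 64
z_e = np.random.randn(8, 64)          # a batch of encoder outputs
z_q, idx = vector_quantize(z_e, codebook)
print(z_q.shape, idx[:4])             # (8, 64) and the chosen code indices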

BeatGAN2.0 - AI Drums:

  •    Jupyter

GANs have been used extensively for image synthesis, image-to-image translation, and many other image tasks. More recently, they have been applied to raw audio synthesis by Donahue et al. with their WaveGAN architecture. Previous audio generation techniques relied on HMMs, autoregressive models, or applying image-based techniques to spectrograms (time-frequency image representations of audio). Donahue et al. demonstrated that by applying a 1D version of DCGAN directly to regular (normalized) audio files, one could generate high-quality samples of human speech superior to these older techniques. The paper's goal was to generate speech, but the authors also applied their GAN to a small dataset of drum hits and were able to produce high-quality samples.

This project explores the ability of the same model architecture to generate significantly more complex audio patterns, namely drum beats. In particular, using the architecture presented in the paper, a GAN is trained to generate the first bar of a 4-bar drum pattern. The model produces high-quality samples close to par with those published in the original paper. Furthermore, the majority of the GAN's outputs are new beats rather than rehashes of the training data, as measured by a quantitative similarity metric and user testing.

WaveGAN's generator outputs vectors of shape (16384, c), where c is the number of audio channels. This keeps the number of parameters in WaveGAN the same as in DCGAN. A larger model would require more parameters and thus significantly more data.
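A rough PyTorch sketch of such a 1D DCGAN-style generator, upsampling a latent vector to the 16384-sample output length mentioned above. The exact layer widths here are assumptions, though WaveGAN is described as using length-25 filters with stride-4 transposed convolutions:

import torch
import torch.nn as nn

class WaveGANStyleGenerator(nn.Module):
    # Illustrative sketch only; layer sizes are assumptions, not the
    # published WaveGAN/BeatGAN weights. Output is (batch, c, 16384),
    # i.e. the (16384, c) vectors described above, channels-first.
    def __init__(self, latent_dim=100, channels=1):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 1024 * 16)  # start from length 16
        self.net = nn.Sequential(
            # Each stride-4 transposed conv multiplies the length by 4:
            # 16 -> 64 -> 256 -> 1024 -> 4096 -> 16384.
            nn.ConvTranspose1d(1024, 512, 25, stride=4, padding=11, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(512, 256, 25, stride=4, padding=11, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(256, 128, 25, stride=4, padding=11, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(128, 64, 25, stride=4, padding=11, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(64, channels, 25, stride=4, padding=11, output_padding=1),
            nn.Tanh(),  # normalized audio in [-1, 1]
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 1024, 16)
        return self.net(x)

g = WaveGANStyleGenerator()
print(g(torch.randn(2, 100)).shape)  # torch.Size([2, 1, 16384])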

MNIST_Inception_Score - Training an MNIST classifier, and using it to compute the inception score (ICP)

  •    Python

Note that different pre-trained models may lead to slightly different inception scores. The generated images are saved in a .mat file, with a tensor named 'images' of size [10000, 784], where 10000 is the number of images and 784 is the dimension of a flattened MNIST image.
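Loading and un-flattening those images takes a few lines with SciPy; the file name generated.mat below is a placeholder, since the description does not give one:

import numpy as np
from scipy.io import loadmat

# "generated.mat" is a hypothetical file name; the description only
# says the images are saved in a .mat file with an 'images' tensor.
mat = loadmat("generated.mat")
images = mat["images"]               # shape [10000, 784] per the description
images = images.reshape(-1, 28, 28)  # un-flatten to 28x28 MNIST images
print(images.shape)                  # (10000, 28, 28)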

markov-chain-gan - Code for "Generative Adversarial Training for Markov Chains" (ICLR 2017 Workshop)

  •    Python

TensorFlow code for Generative Adversarial Training for Markov Chains (ICLR 2017 Workshop Track). Work by Jiaming Song, Shengjia Zhao and Stefano Ermon.

Sequential-Variational-Autoencoder - Implementation of Sequential Variational Autoencoder

  •    Python

This is the implementation of the Sequential VAE in Towards a Deeper Understanding of Variational Autoencoding Models. The paper identifies a link between the expressive power of the latent code and the sharpness of generated samples. Fairly sharp samples can be generated by gradually increasing the power of the latent code.