
A collection of generative models, e.g. GAN and VAE, in PyTorch and TensorFlow; RBM and Helmholtz Machine implementations are also included. Generated samples are stored in the GAN/{gan_model}/out (or VAE/{vae_model}/out, etc.) directory during training.

vae gan pytorch tensorflow generative-model machine-learning rbm restricted-boltzmann-machine

This package is part of the Kadenze Academy program Creative Applications of Deep Learning w/ TensorFlow. Type `from cadl import` and then press Tab to see the list of available modules.

deep-learning neural-network tutorial mooc gan vae vae-gan pixelcnn wavenet magenta nsynth tensorflow celeba cyclegan dcgan word2vec glove autoregressive conditional course

The project was created as the final assignment of the Creative Applications of Deep Learning with TensorFlow (CADL) Kadenze course. It is an experimental attempt to transfer artistic style learned from a series of paintings "live" onto a video sequence by fitting a variational autoencoder with 512 latent codes to both the paintings and the video frames, isolating the mean feature-space embeddings, and shifting the video's embeddings toward those of the paintings. Because the visual quality of the VAE's decoded output is relatively low, a convolutional post-processing network based on residual convolutions was trained to make the resulting image less similar to the VAE's generated output and more similar to the original input images. The basic idea was to use an upsampling network here, but that quickly turned out to be naive at this stage of development. Instead, the network now downsizes the input, learns filters in a residual network, and then samples back up to the input frame size; performing convolutions directly on the input would have been preferable, but memory limitations prevented the use of a useful number of feature maps.
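The core trick described above — shifting video-frame embeddings toward the mean embedding of the paintings — can be sketched in a few lines of numpy. This is only an illustration of the idea, not the project's code; the function name and the 0-to-1 `strength` knob are assumptions for the example.

```python
import numpy as np

def shift_toward_style(frame_codes, painting_codes, strength=0.5):
    """Move video-frame latent codes toward the mean painting code.

    frame_codes:    (n_frames, n_latent) VAE embeddings for video frames
    painting_codes: (n_paintings, n_latent) VAE embeddings for paintings
    strength:       0 leaves the frames unchanged, 1 replaces their mean
                    entirely with the paintings' mean
    """
    frame_mean = frame_codes.mean(axis=0)
    painting_mean = painting_codes.mean(axis=0)
    # Translate every frame code along the direction between the two means.
    return frame_codes + strength * (painting_mean - frame_mean)

# Toy usage with 512-dimensional codes, matching the project description.
rng = np.random.default_rng(0)
frames = rng.normal(0.0, 1.0, size=(10, 512))
paintings = rng.normal(2.0, 1.0, size=(5, 512))
shifted = shift_toward_style(frames, paintings, strength=1.0)
```

Note that only the mean is moved; the per-frame variation around it is preserved, which is what keeps the video content recognizable after the shift.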

tensorflow deep-learning neural-network variational-inference autoencoder vae vaegan generative-art generative-adversarial-network experiment cadl kadenze online-course

VAE-Seq is a library for modeling sequences of observations. One tool commonly used to model sequential data is the Recurrent Neural Network (RNN), or gated variants of it such as the Long Short-Term Memory (LSTM) cell or the Gated Recurrent Unit (GRU) cell.

vae sequential-models rnn generative-models machine-learning

Vedantam, Ramakrishna, Ian Fischer, Jonathan Huang, and Kevin Murphy. 2017. Generative Models of Visually Grounded Imagination. arXiv [cs.LG]. http://arxiv.org/abs/1705.10762. NOTE: All scripts should be run from the root directory of the project.

ml vae joint-vae

Variational Autoencoders (VAEs) in Theano for Images and Text

vae

TensorFlow implementation for stochastic adversarial video prediction. Given a sequence of initial frames, our model is able to predict future frames of various possible futures. For example, in the next two sequences, we show the ground truth sequence on the left and random predictions of our model on the right. Predicted frames are indicated by the yellow bar at the bottom. For more examples, visit the project page. Stochastic Adversarial Video Prediction, Alex X. Lee, Richard Zhang, Frederik Ebert, Pieter Abbeel, Chelsea Finn, Sergey Levine. arXiv preprint arXiv:1804.01523, 2018.

video-prediction stochastic adversarial vae variational-autoencoder generative-adversarial-network gan vae-gan video-generation

This repository contains the dSprites dataset, used to assess the disentanglement properties of unsupervised learning methods. dSprites is a dataset of 2D shapes procedurally generated from 6 ground-truth independent latent factors: color, shape, scale, rotation, and the x and y positions of a sprite.
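Because dSprites contains every combination of its six factors, the flat image array can be indexed directly from per-factor class indices with a mixed-radix scheme. The sketch below assumes the factor sizes commonly stated for the dataset (1 color, 3 shapes, 6 scales, 40 rotations, 32 x-positions, 32 y-positions); check the repository's metadata for the authoritative values.

```python
import numpy as np

# Per-factor class counts: color, shape, scale, rotation, x, y (assumed).
latent_sizes = np.array([1, 3, 6, 40, 32, 32])

def latent_to_index(latents, sizes=latent_sizes):
    """Map a vector of per-factor class indices to a row in the flat dataset.

    Each factor's stride is the product of the sizes of all factors to
    its right, exactly like digits in a mixed-radix number.
    """
    strides = np.concatenate([sizes[1:][::-1].cumprod()[::-1], [1]])
    return int(latents @ strides)

n_images = int(latent_sizes.prod())                 # total combinations
first = latent_to_index(np.zeros(6, dtype=int))     # all factors at class 0
last = latent_to_index(latent_sizes - 1)            # all factors at max class
```

The actual images and factor labels ship in the repository's `.npz` file and can be loaded with `np.load`; the indexing above then selects the image for any chosen factor combination.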

vae beta-vae disentanglement dataset dsprites

This repository contains implementations of VAE and beta-VAE. The following are samples generated after 82,000 iterations of training on the CelebA dataset.

vae beta-vae celeba image-generation deep-learning

This is an unofficial TensorFlow implementation of Attend, Infer, Repeat (AIR), as presented in the following paper: S. M. Ali Eslami et al., Attend, Infer, Repeat: Fast Scene Understanding with Generative Models. I describe the implementation and the issues I ran into while working on it in this blog post.

tensorflow vae neural-networks attention-mechanism generative-model computer-vision computer-graphics rnn attention attend-infer-repeat

This is an official TensorFlow implementation of Sequential Attend, Infer, Repeat (SQAIR), as presented in the following paper: A. R. Kosiorek, H. Kim, I. Posner, Y. W. Teh, Sequential Attend, Infer, Repeat: Generative Modelling of Moving Objects. SQAIR learns to reconstruct a sequence of images by detecting objects in every frame and then propagating them to the following frames. This results in unsupervised object detection and tracking, which we can see in the figure below. The figure was generated from a model trained for 1M iterations. The maximum number of objects in a frame (and therefore the number of detected and propagated objects) is set to four, but there are never more than two objects. The first row shows inputs to the model (time flows from left to right), while the second row shows reconstructions with marked glimpse locations. Colors of the bounding boxes correspond to object IDs; here, each color stays the same across frames, which means that objects are properly tracked.

sqair vae generative representations-learning detection tracking motion approximate-inference variational-inference vimco iwae

This repository contains Jupyter notebooks implementing several deep learning models using TensorFlow. Each notebook contains detailed explanations of its model, hopefully making all steps easy to understand.

machine-learning deep-learning tensorflow rnn-tensorflow rnn cnn cnn-tensorflow vae variational-autoencoder recurrent-neural-networks recurrent-neural-network convolutional-neural-networks convolutional-neural-network notebook ipynb

Also included is a Jupyter notebook showing how the Gumbel-Max trick for sampling discrete variables relates to Concrete distributions. Note: the current Dockerfile targets TensorFlow 1.5 CPU training.
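The relationship that notebook explores can be summarized compactly: the Gumbel-Max trick draws an exact categorical sample as `argmax(logits + Gumbel noise)`, and the Concrete (Gumbel-softmax) distribution replaces the hard argmax with a temperature-controlled softmax, which is differentiable. A minimal numpy sketch, independent of the repository's notebook:

```python
import numpy as np

rng = np.random.default_rng(42)

def gumbel_noise(shape, rng):
    """Standard Gumbel(0, 1) noise via the inverse-CDF transform."""
    return -np.log(-np.log(rng.uniform(size=shape)))

def gumbel_max_sample(logits, rng):
    """Exact categorical sample: argmax of logits plus Gumbel noise."""
    return int(np.argmax(logits + gumbel_noise(logits.shape, rng)))

def gumbel_softmax_sample(logits, temperature, rng):
    """Concrete relaxation: softmax replaces the argmax, so the sample
    is a point on the simplex and gradients can flow through it."""
    y = (logits + gumbel_noise(logits.shape, rng)) / temperature
    y = y - y.max()            # numerical stability before exponentiating
    expy = np.exp(y)
    return expy / expy.sum()

logits = np.log(np.array([0.1, 0.2, 0.7]))

# Empirically, Gumbel-Max samples follow softmax(logits) = (0.1, 0.2, 0.7).
counts = np.bincount(
    [gumbel_max_sample(logits, rng) for _ in range(20000)], minlength=3
) / 20000

# A relaxed sample lies on the probability simplex (entries sum to 1);
# lower temperatures push it toward a one-hot vector.
relaxed = gumbel_softmax_sample(logits, temperature=0.5, rng=rng)
```

As the temperature goes to zero the softmax approaches the argmax, recovering the exact Gumbel-Max sampler in the limit.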

tensorflow deeplearning variational-autoencoder gumbel-softmax vae mnist

This repository implements the paper Neural Discrete Representation Learning (VQ-VAE) in TensorFlow. ⚠️ This is not an official implementation and may contain glitches, or even a major defect.
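The central operation of VQ-VAE is vector quantization: each encoder output vector is snapped to its nearest entry in a learned codebook, and the decoder sees only those discrete codes. A minimal numpy sketch of the lookup step (the codebook and inputs here are toy values, not the repository's):

```python
import numpy as np

def vector_quantize(z, codebook):
    """Replace each encoder output vector with its nearest codebook entry.

    z:        (n, d) encoder output vectors
    codebook: (k, d) learned embedding vectors
    Returns the quantized vectors and the chosen codebook indices.
    """
    # Pairwise squared Euclidean distances between z and every codebook row.
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = dists.argmin(axis=1)
    return codebook[indices], indices

# Toy codebook with k=3 entries of dimension d=2.
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 2.0]])
z = np.array([[0.9, 1.2], [0.1, -0.2]])
quantized, idx = vector_quantize(z, codebook)  # idx -> [1, 0]
```

In the full model the argmax is non-differentiable, so training uses a straight-through estimator (copying decoder gradients past the quantization step) plus codebook and commitment losses, as described in the paper.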

tensorflow vae cifar10 mnist generative-model

(beta-)VAE Tensorflow

vae variational-autoencoders tensorflow

A CNN Variational Autoencoder (CNN-VAE) implemented in PyTorch

variational-autoencoder vae convolutional-neural-networks

PyTorch implementation of SUM-GAN from "Unsupervised Video Summarization with Adversarial LSTM Networks" (CVPR 2017)

unsupervised-learning video summarization gan vae lstm pytorch sum-gan vae-gan adversarial-lstm-networks

Built using deeplearn.js and MusicVAE. Beat Blender requires the GCloud SDK for running the server, and Node + npm for JavaScript development.

deeplearnjs vae music machine-learning tensorflow

PyTorch implementation of VQ-VAE by Aäron van den Oord et al.

pytorch vq-vae vae deep-learning