
StarGAN - Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation

  •    Python

PyTorch implementation of StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation. StarGAN can flexibly translate an input image to any desired target domain using only a single generator and a discriminator.
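
As a rough illustration of that single-generator design (a minimal sketch in PyTorch, not the repository's actual modules), the generator can be conditioned by tiling a one-hot target-domain label over the image and concatenating it as extra input channels:

```python
import torch
import torch.nn as nn

class TinyDomainGenerator(nn.Module):
    """Illustrative generator conditioned on a target-domain label (not the official StarGAN code)."""
    def __init__(self, num_domains, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + num_domains, 64, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, channels, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, image, domain_label):
        # Tile the one-hot domain label to the image's spatial size and concatenate as channels.
        b, _, h, w = image.shape
        label_map = domain_label.view(b, -1, 1, 1).expand(-1, -1, h, w)
        return self.net(torch.cat([image, label_map], dim=1))

# One generator translates the same image into different target domains (labels are illustrative).
gen = TinyDomainGenerator(num_domains=5)
x = torch.randn(1, 3, 128, 128)
domain_a, domain_b = torch.eye(5)[[1]], torch.eye(5)[[3]]
y1, y2 = gen(x, domain_a), gen(x, domain_b)
```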

pix2pixHD - Synthesizing and manipulating 2048x1024 images with conditional GANs

  •    Python

PyTorch implementation of our method for high-resolution (e.g. 2048x1024) photorealistic image-to-image translation. It can be used for turning semantic label maps into photorealistic images or synthesizing portraits from face label maps. High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs. Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, Bryan Catanzaro (NVIDIA Corporation; UC Berkeley). arXiv, 2017.
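
On the input side, a conditional generator of this kind typically consumes the semantic label map expanded into one-hot channels. A minimal PyTorch sketch of that preparation step (class count and sizes are illustrative, not the repository's actual preprocessing):

```python
import torch
import torch.nn.functional as F

def label_map_to_onehot(label_map, num_classes):
    """Convert an integer label map (B, H, W) into one-hot channels (B, C, H, W)."""
    onehot = F.one_hot(label_map.long(), num_classes)   # (B, H, W, C)
    return onehot.permute(0, 3, 1, 2).float()            # (B, C, H, W)

# A crop of a Cityscapes-style label map with, say, 35 classes (both values are assumptions).
labels = torch.randint(0, 35, (1, 512, 1024))
generator_input = label_map_to_onehot(labels, num_classes=35)
print(generator_input.shape)  # torch.Size([1, 35, 512, 1024])
```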

iGAN - Interactive Image Generation via Generative Adversarial Networks

  •    Python

[Project] [Youtube] [Paper] A research prototype developed by UC Berkeley and Adobe CTL. Latest development: [pix2pix]: Torch implementation for learning a mapping from input images to output images. [CycleGAN]: Torch implementation for learning an image-to-image translation (i.e. pix2pix) without input-output pairs. [pytorch-CycleGAN-and-pix2pix]: PyTorch implementation for both unpaired and paired image-to-image translation.




pytorch-CycleGAN-and-pix2pix - Image-to-image translation in PyTorch

  •    Python

This is our PyTorch implementation for both unpaired and paired image-to-image translation. It is still under active development. The code was written by Jun-Yan Zhu and Taesung Park, and supported by Tongzhou Wang.
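
For the unpaired case, the central ingredient is a cycle-consistency penalty between the two translation directions. A minimal sketch, assuming two generators G: X→Y and F: Y→X defined elsewhere (identifiers here are illustrative, not the repository's API):

```python
import torch
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(G, F, real_x, real_y, lam=10.0):
    """Unpaired translation penalty: x -> G(x) -> F(G(x)) should reconstruct x, and vice versa."""
    rec_x = F(G(real_x))   # X -> Y -> X
    rec_y = G(F(real_y))   # Y -> X -> Y
    return lam * (l1(rec_x, real_x) + l1(rec_y, real_y))

# Toy usage with stand-in "generators" (identity maps) and random batches.
identity = lambda t: t
x = torch.randn(4, 3, 64, 64)
y = torch.randn(4, 3, 64, 64)
loss = cycle_consistency_loss(identity, identity, x, y)
```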

the-gan-zoo - A list of all named GANs!

  •    Python

You can also check out the same data in a tabular format, with functionality to filter by year or do a quick search by title, here. Contributions are welcome: add links through pull requests to the gans.tsv file in the same format, or create an issue to point out something missing or to start a discussion.
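
A small sketch of how that tabular data could be queried locally with pandas; the column names ("Title", "Year") are assumptions, so check the header of gans.tsv in the repository:

```python
import pandas as pd

# Column names below are assumed; adjust to the actual header of gans.tsv.
zoo = pd.read_csv("gans.tsv", sep="\t")

recent = zoo[zoo["Year"] >= 2017]                                        # filter by year
cycle = zoo[zoo["Title"].str.contains("cycle", case=False, na=False)]   # quick search by title
print(recent.head(), cycle.head(), sep="\n\n")
```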

Tensorflow-Tutorial - TensorFlow tutorials from basic to advanced

  •    Python

In these tutorials, we build our first neural network and then move on to some of the more advanced neural network architectures developed in recent years. All of the methods covered have video and text tutorials in Chinese; visit 莫烦 Python for more.
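
For orientation, a minimal "first neural network" in the spirit these tutorials start from, written with the tf.keras API (a generic sketch, not code taken from the tutorial series):

```python
import numpy as np
import tensorflow as tf

# Toy regression data: learn y = 2x + 1 from noisy samples.
x = np.linspace(-1, 1, 200).reshape(-1, 1).astype("float32")
y = (2 * x + 1 + 0.05 * np.random.randn(*x.shape)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=50, verbose=0)
print(model.predict(np.array([[0.5]], dtype="float32")))  # should be close to 2.0
```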

BicycleGAN - [NIPS 2017] Toward Multimodal Image-to-Image Translation

  •    Python

PyTorch implementation for multimodal image-to-image translation. For example, given the same night image, the model can synthesize possible day images with different types of lighting, sky and clouds. Training requires paired data. Note: the current software works well with PyTorch 0.4; check out the older branch for PyTorch 0.1-0.3.
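
The multimodal behaviour comes from sampling a new latent code per output for the same input image. A hedged sketch of that sampling loop, assuming a trained generator that takes an image plus a latent code (the interface below is illustrative, not the repository's actual one):

```python
import torch

def sample_translations(generator, image, z_dim=8, num_samples=5):
    """Draw several latent codes for the same input to get diverse outputs (illustrative interface)."""
    outputs = []
    with torch.no_grad():
        for _ in range(num_samples):
            z = torch.randn(image.size(0), z_dim)   # a fresh random style code each time
            outputs.append(generator(image, z))
    return outputs

# Toy usage with a stand-in generator that just shifts the image by the code's mean.
toy_gen = lambda img, z: img + z.mean()
night = torch.randn(1, 3, 64, 64)
day_variants = sample_translations(toy_gen, night)
```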


All-About-the-GAN - All About the GANs (Generative Adversarial Networks) - Summarized lists for GANs

  •    Python

The purpose of this repository is to provide a curated list of state-of-the-art works in the field of Generative Adversarial Networks since their introduction in 2014. You can also check out the same data in a tabular format, with functionality to filter by year or do a quick search by title, here.

chainer-gan-lib - Chainer implementation of recent GAN variants

  •    Python

This repository collects Chainer implementations of state-of-the-art GAN algorithms. The implementations are evaluated with the Inception Score on the CIFAR-10 dataset. Note that these are not faithful re-implementations of the original papers. The code has been tested with the following versions.
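
For reference, a small NumPy sketch of the Inception Score used in that evaluation, computed from a matrix of class probabilities p(y|x) produced by an Inception classifier (obtaining those probabilities is left out here):

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """probs: (N, num_classes) softmax outputs p(y|x) from an Inception network.
    IS = exp( E_x [ KL( p(y|x) || p(y) ) ] )."""
    p_y = probs.mean(axis=0, keepdims=True)  # marginal p(y)
    kl = np.sum(probs * (np.log(probs + eps) - np.log(p_y + eps)), axis=1)
    return float(np.exp(kl.mean()))

# Toy check: confident, diverse predictions give a high score.
fake_probs = np.eye(10)[np.random.randint(0, 10, size=500)] * 0.99 + 0.001
print(inception_score(fake_probs))
```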

generative-compression - TensorFlow Implementation of Generative Adversarial Networks for Extreme Learned Image Compression

  •    Python

TensorFlow implementation for learned compression of images using Generative Adversarial Networks. The method was developed by Agustsson et al. in Generative Adversarial Networks for Extreme Learned Image Compression. The proposed idea is very interesting and their approach is well described. Training is conducted with batch size 1; reconstructed samples and TensorBoard summaries are written periodically every fixed number of steps (default: 128), and checkpoints are saved every 10 epochs.
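
The schedule described above boils down to a loop like the following sketch; the train_step, write_summaries and save_checkpoint functions are stand-ins for the repository's actual logic, kept here only to make the intervals concrete:

```python
# Illustrative schedule only: batch size 1, summaries every 128 steps, checkpoints every 10 epochs.
def train_step(image): return 0.0                                  # placeholder for one GAN update
def write_summaries(step, loss): print(f"step {step}: loss {loss}")  # placeholder for TensorBoard output
def save_checkpoint(epoch): print(f"saved checkpoint at epoch {epoch}")

dataset = range(256)            # stand-in for an image dataset iterated one sample at a time
SUMMARY_EVERY_STEPS = 128
CHECKPOINT_EVERY_EPOCHS = 10

step = 0
for epoch in range(20):
    for image in dataset:
        loss = train_step(image)
        step += 1
        if step % SUMMARY_EVERY_STEPS == 0:
            write_summaries(step, loss)
    if (epoch + 1) % CHECKPOINT_EVERY_EPOCHS == 0:
        save_checkpoint(epoch)
```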

AdaptSegNet - Learning to Adapt Structured Output Space for Semantic Segmentation, CVPR 2018 (spotlight)

  •    Python

PyTorch implementation of our method for adapting semantic segmentation from a synthetic dataset (source domain) to a real dataset (target domain). Based on this implementation, our result is ranked 3rd in the VisDA Challenge. Learning to Adapt Structured Output Space for Semantic Segmentation. Yi-Hsuan Tsai*, Wei-Chih Hung*, Samuel Schulter, Kihyuk Sohn, Ming-Hsuan Yang and Manmohan Chandraker. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018 (spotlight). (* indicates equal contribution)
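
A compact PyTorch sketch of the output-space adversarial idea (illustrative modules, not the paper's exact architecture): a discriminator sees softmax segmentation maps, and the segmentation network is pushed to make target-domain outputs indistinguishable from source-domain ones.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

num_classes = 19  # Cityscapes-style class count (an assumption for illustration)

# Fully convolutional discriminator over softmax segmentation maps (illustrative).
discriminator = nn.Sequential(
    nn.Conv2d(num_classes, 64, 4, stride=2, padding=1),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(64, 1, 4, stride=2, padding=1),
)
bce = nn.BCEWithLogitsLoss()

def adversarial_alignment_loss(target_logits):
    """Encourage target-domain predictions to be classified as 'source' by the discriminator."""
    probs = F.softmax(target_logits, dim=1)
    d_out = discriminator(probs)
    source_label = torch.ones_like(d_out)   # 1 = "looks like a source-domain prediction"
    return bce(d_out, source_label)

# Toy usage with random segmentation logits on the target domain.
target_logits = torch.randn(2, num_classes, 64, 128)
loss = adversarial_alignment_loss(target_logits)
```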

T2F - T2F: text to face generation using Deep Learning

  •    Python

Text-to-face generation using deep learning. This project combines two recent architectures, StackGAN and ProGAN, to synthesize faces from textual descriptions. It uses the Face2Text dataset, which contains 400 facial images with textual captions for each; the data can be obtained by contacting either the RIVAL group or the authors of the aforementioned paper. The code lives in the implementation/ subdirectory and is written in PyTorch, so install PyTorch 0.4.0 before running it.
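
Schematically, the pipeline conditions an image generator on a caption embedding concatenated with noise. A hedged PyTorch sketch (the modules below are small stand-ins, not the project's StackGAN/ProGAN components):

```python
import torch
import torch.nn as nn

class ToyTextToFace(nn.Module):
    """Stand-in for a text-conditioned generator: caption embedding + noise -> image."""
    def __init__(self, embed_dim=128, z_dim=64):
        super().__init__()
        self.fc = nn.Linear(embed_dim + z_dim, 16 * 8 * 8)
        self.up = nn.Sequential(
            nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(8, 3, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, caption_embedding, z):
        h = self.fc(torch.cat([caption_embedding, z], dim=1)).view(-1, 16, 8, 8)
        return self.up(h)   # (B, 3, 32, 32) toy face

gen = ToyTextToFace()
caption_embedding = torch.randn(1, 128)  # in the real project this comes from a text encoder
z = torch.randn(1, 64)
face = gen(caption_embedding, z)
```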

vae-style-transfer - An experiment in VAE-based artistic style transfer by embedding fiddling.

  •    Python

The project was created as part of the Creative Applications of Deep Learning with TensorFlow (CADL) Kadenze course's final assignment. It is an experimental attempt to transfer artistic style learned from a series of paintings "live" onto a video sequence. This is done by fitting a variational autoencoder with 512 latent codes to both the paintings and the video frames, isolating the mean feature-space embeddings, and shifting the video's embeddings closer to those of the paintings. Because the visual quality of the VAE's decoded output is relatively low, a convolutional post-processing network based on residual convolutions was trained to make the result less similar to the VAE's raw output and more similar to the original input images. The basic idea was to use an upsampling network here, but that quickly turned out to be naive at this point of development. Instead, the network now downsizes the input, learns filters in a residual network, and then samples back up to the input frame size; I would have liked to perform convolutions directly on the input, but memory limitations prevented the use of a useful number of feature maps.
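
The "embedding fiddling" step essentially shifts each video frame's latent code toward the mean painting code. A NumPy sketch with illustrative names (in the project, the 512-dimensional codes come from the trained VAE encoder):

```python
import numpy as np

def shift_embeddings(frame_codes, painting_codes, strength=0.5):
    """Move each frame's latent code toward the mean painting embedding.
    frame_codes: (num_frames, 512), painting_codes: (num_paintings, 512)."""
    painting_mean = painting_codes.mean(axis=0)
    frame_mean = frame_codes.mean(axis=0)
    # Pull the video's embeddings toward the paintings' mean embedding.
    return frame_codes + strength * (painting_mean - frame_mean)

# Toy usage with random codes standing in for VAE encodings.
frames = np.random.randn(100, 512).astype("float32")
paintings = np.random.randn(20, 512).astype("float32")
styled = shift_embeddings(frames, paintings)
```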

opt-mmd - Learning kernels to maximize the power of MMD tests

  •    Python

Code for the paper "Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy" (arXiv:1611.04488; published at ICLR 2017), by Dougal J. Sutherland (@dougalsutherland), Hsiao-Yu Tung, Heiko Strathmann (@karlnapf), Soumyajit De (@lambday), Aaditya Ramdas, Alex Smola, and Arthur Gretton. This code is under a BSD license, but if you use it, please cite the paper.
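
For context, an unbiased estimate of the squared MMD with a fixed-bandwidth RBF kernel looks like the NumPy sketch below; the paper's contribution is learning the kernel to maximize test power, so the hard-coded kernel here is only for illustration:

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2_unbiased(x, y, sigma=1.0):
    """Unbiased estimate of MMD^2 between samples x (m, d) and y (n, d)."""
    m, n = len(x), len(y)
    k_xx = rbf_kernel(x, x, sigma)
    k_yy = rbf_kernel(y, y, sigma)
    k_xy = rbf_kernel(x, y, sigma)
    term_x = (k_xx.sum() - np.trace(k_xx)) / (m * (m - 1))   # drop the diagonal (same-sample) terms
    term_y = (k_yy.sum() - np.trace(k_yy)) / (n * (n - 1))
    return term_x + term_y - 2 * k_xy.mean()

x = np.random.randn(200, 2)
y = np.random.randn(200, 2) + 1.0   # shifted distribution, so MMD^2 should be clearly positive
print(mmd2_unbiased(x, y))
```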

tensorflow-infogan - InfoGAN: Interpretable Representation Learning

  •    Python

This repository contains a straightforward implementation of Generative Adversarial Networks trained to fool a discriminator that sees real MNIST images, along with Mutual Information Generative Adversarial Networks (InfoGAN). Note: the generator architecture was changed with respect to the publication because the original produced 32x32 images rather than the desired 64x64, so results may differ.
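
A sketch of InfoGAN's auxiliary objective, assuming a Q-head that predicts the categorical latent code from features of generated images (the module and dimensions are illustrative, not this repository's exact architecture); maximizing the mutual-information lower bound reduces to a cross-entropy between the sampled code and Q's prediction:

```python
import torch
import torch.nn as nn

num_categories = 10

# Q-head: maps discriminator features to logits over the categorical latent code (illustrative).
q_head = nn.Linear(256, num_categories)
ce = nn.CrossEntropyLoss()

def info_loss(disc_features, sampled_code):
    """Lower bound on mutual information between the latent code and the generated image."""
    logits = q_head(disc_features)
    return ce(logits, sampled_code)

# Toy usage: random features for a batch of generated images and their sampled codes.
features = torch.randn(16, 256)
codes = torch.randint(0, num_categories, (16,))
loss = info_loss(features, codes)
```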

adversarial-document-model - Code needed to reproduce "Modeling documents with Generative Adversarial Networks"

  •    Python

Code needed to reproduce the results from Modeling documents with Generative Adversarial Networks, presented at the NIPS Workshop on Adversarial Training, December 2016. In the usage instructions, <path to input directory> points to a directory containing an input dataset (described below), <path to output directory> gives the path to a newly created output dataset directory containing the preprocessed data, and <path to vocab file> gives the path to a vocabulary file (described below).

video_prediction - Stochastic Adversarial Video Prediction

  •    Python

TensorFlow implementation for stochastic adversarial video prediction. Given a sequence of initial frames, the model predicts future frames for a variety of possible futures. For example, in the sequences shown on the project page, the ground truth sequence appears on the left and random predictions of the model on the right, with predicted frames indicated by a yellow bar at the bottom. For more examples, visit the project page. Stochastic Adversarial Video Prediction. Alex X. Lee, Richard Zhang, Frederik Ebert, Pieter Abbeel, Chelsea Finn, Sergey Levine. arXiv preprint arXiv:1804.01523, 2018.
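
An illustrative sketch of the stochastic prediction step (the model interface and function names are placeholders, not the repository's API): given a few context frames, a fresh latent is sampled per rollout to obtain several distinct predicted futures.

```python
import torch

def sample_futures(model, context_frames, num_futures=3, horizon=10, z_dim=8):
    """context_frames: (B, T_ctx, C, H, W). Returns a list of (B, horizon, C, H, W) rollouts."""
    futures = []
    with torch.no_grad():
        for _ in range(num_futures):
            z = torch.randn(context_frames.size(0), z_dim)   # one latent per sampled future
            futures.append(model(context_frames, z, horizon))
    return futures

# Toy usage: a stand-in "model" that just repeats the last context frame for the whole horizon.
toy_model = lambda ctx, z, horizon: ctx[:, -1:].repeat(1, horizon, 1, 1, 1)
context = torch.randn(1, 4, 3, 64, 64)
rollouts = sample_futures(toy_model, context)
```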