
pix2pixHD - Synthesizing and manipulating 2048x1024 images with conditional GANs

  •    Python

PyTorch implementation of our method for high-resolution (e.g. 2048x1024) photorealistic image-to-image translation. It can be used for turning semantic label maps into photo-realistic images or synthesizing portraits from face label maps. Paper: "High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs", Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, Bryan Catanzaro (NVIDIA Corporation; UC Berkeley). On arXiv, 2017.

MUNIT - Multimodal Unsupervised Image-to-Image Translation

  •    Python

Copyright (C) 2018 NVIDIA Corporation. All rights reserved. Licensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode). Please check out the user manual page.

pix2pix - Image-to-image translation with conditional adversarial nets

  •    Lua

Image-to-Image Translation with Conditional Adversarial Networks. Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, Alexei A. Efros. CVPR, 2017. On some tasks, decent results can be obtained fairly quickly and on small datasets. For example, to learn to generate facades (example shown above), we trained on just 400 images for about 2 hours (on a single Pascal Titan X GPU). However, for harder problems it may be important to train on far larger datasets, and for many hours or even days.

CycleGAN - Software that can generate photos from paintings, turn horses into zebras, perform style transfer, and more

  •    Lua

This package includes CycleGAN and pix2pix, as well as other methods such as BiGAN/ALI and Apple's S+U learning. The code was written by Jun-Yan Zhu and Taesung Park. Note: please check out the PyTorch implementation for CycleGAN and pix2pix. The PyTorch version is under active development and can produce results comparable to or better than this Torch version.

iGAN - Interactive Image Generation via Generative Adversarial Networks

  •    Python

[Project] [YouTube] [Paper] A research prototype developed by UC Berkeley and Adobe CTL. Latest development: [pix2pix]: Torch implementation for learning a mapping from input images to output images. [CycleGAN]: Torch implementation for learning an image-to-image translation (i.e. pix2pix) without input-output pairs. [pytorch-CycleGAN-and-pix2pix]: PyTorch implementation for both unpaired and paired image-to-image translation.

pytorch-CycleGAN-and-pix2pix - Image-to-image translation in PyTorch

  •    Python

This is our PyTorch implementation for both unpaired and paired image-to-image translation. It is still under active development. The code was written by Jun-Yan Zhu and Taesung Park, and supported by Tongzhou Wang.

UNIT - Unsupervised Image-to-Image Translation

  •    Python

Copyright (C) 2018 NVIDIA Corporation. All rights reserved. Licensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode). Please check out our tutorial.


BicycleGAN - [NIPS 2017] Toward Multimodal Image-to-Image Translation

  •    Python

PyTorch implementation for multimodal image-to-image translation. For example, given the same night image, our model is able to synthesize possible day images with different types of lighting, sky and clouds. The training requires paired data. Note: the current software works well with PyTorch 0.4. Check out the older branch that supports PyTorch 0.1-0.3.
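Since the repo's main branch targets PyTorch 0.4 while a legacy branch covers 0.1-0.3, a small version gate can pick the right branch before running anything. This is an illustrative sketch, not part of the repo; `supported` is a hypothetical helper and the version string stands in for `torch.__version__`.

```python
# Minimal sketch: decide which BicycleGAN branch matches the installed
# PyTorch. The version string here stands in for torch.__version__
# (torch itself is not imported, so the sketch runs anywhere).
def supported(torch_version: str) -> bool:
    """True if the main branch (PyTorch >= 0.4) should work."""
    base = torch_version.split("+")[0]          # drop any "+cu117" suffix
    major, minor = (int(x) for x in base.split(".")[:2])
    return (major, minor) >= (0, 4)

print(supported("0.4.1"))  # True  -> main branch
print(supported("0.3.1"))  # False -> use the older branch
```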

tensorflow-pix2pix - A lightweight pix2pix Tensorflow implementation.

  •    Python

A lightweight pix2pix Tensorflow implementation. First you need to download the CMP Facade dataset.

MAX-Image-Colorizer - Colorize black & white images

  •    Python

This repository contains code to instantiate and deploy an image translation model. This model is a Generative Adversarial Network (GAN) that was trained by the IBM CODAIT Team on COCO dataset images converted to grayscale and produces colored images. The input to the model is a grayscale image (jpeg or png), and the output is a colored 256 by 256 image (increased resolution will be added in future releases). The model is based on Christopher Hesse's Tensorflow implementation of the pix2pix model. The model files are hosted on IBM Cloud Object Storage. The code in this repository deploys the model as a web service in a Docker container. This repository was developed as part of the IBM Code Model Asset Exchange.
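Once the Docker container is running, the model is reachable over HTTP. The endpoint below (port 5000, POST to `/model/predict` with an `image` form field) follows the usual IBM MAX convention, but it is an assumption here; verify it against the repository's own docs. `predict_url` is a hypothetical helper for illustration.

```python
# Hypothetical client sketch for the deployed colorizer web service.
# Assumption: the container exposes the standard MAX REST endpoint
# (port 5000, POST /model/predict) -- check the repo's README.
def predict_url(host: str = "localhost", port: int = 5000) -> str:
    return f"http://{host}:{port}/model/predict"

# Sending a grayscale image would then look like (requires `requests`):
#   import requests
#   with open("bw_photo.png", "rb") as f:
#       r = requests.post(predict_url(), files={"image": f})
#   with open("colorized.png", "wb") as out:
#       out.write(r.content)

print(predict_url())
```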

Pix2Pix-Film - An implementation of Pix2Pix in Tensorflow for use with frames from films

  •    Jupyter

An implementation of Pix2Pix in Tensorflow for use with colorizing and increasing the field of view in frames from classic films. For more information, see my Medium Post on the project. Pretrained model available here. It was trained using Alfred Hitchcock films, so it generalizes best to similar movies.

sketch-to-art - 🖼 Create artwork from your casual sketch with GAN and style transfer

  •    Python

This project can transform your casual sketch into a beautiful painting/artwork using modern AI techniques. The principle behind it is the Conditional Adversarial Network, known as pix2pix, which generates an image conditioned on a given input image.
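The "conditional" part of pix2pix means the discriminator scores the (input, output) pair jointly rather than the output alone, and the generator is additionally pulled toward the ground truth with an L1 term. The toy sketch below illustrates that objective with scalar stand-ins for images; all values and the stand-in discriminator are invented for illustration, not taken from any of these repos.

```python
import math

# Toy scalars standing in for images (illustrative only).
sketch = 0.6   # conditioning input
real = 0.8     # ground-truth artwork
fake = 0.5     # generator output G(sketch)

def D(cond, img):
    # Stand-in discriminator: scores the (input, output) pair jointly,
    # which is what makes the GAN "conditional".
    return 1 / (1 + math.exp(-(cond * img)))

# cGAN losses: D wants D(sketch, real) -> 1 and D(sketch, fake) -> 0;
# G wants D(sketch, fake) -> 1, plus an L1 term pulling fake toward real.
d_loss = -math.log(D(sketch, real)) - math.log(1 - D(sketch, fake))
g_loss = -math.log(D(sketch, fake)) + abs(real - fake)

print(round(d_loss, 3), round(g_loss, 3))
```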

pixel-styler - Concise implementation of image-to-image translation.

  •    Python

This is a concise refactored version of the official PyTorch implementation for image-to-image translation. If you would like to apply a pre-trained model to a collection of input photos (without image pairs), please use the --dataset_mode single and --model test options. Here's the command to apply a model to Facade label maps (stored in the directory facades/testB).