texture_nets - Code for "Texture Networks: Feed-forward Synthesis of Textures and Stylized Images" paper

  •    Lua

In the paper Texture Networks: Feed-forward Synthesis of Textures and Stylized Images we describe a faster way to generate textures and stylize images. It requires training a feed-forward generator with the loss function proposed by Gatys et al. Once the model is trained, a texture sample or stylized image of any size can be generated instantly. Improved Texture Networks: Maximizing Quality and Diversity in Feed-forward Stylization and Texture Synthesis presents a better architectural design for the generator network. By switching from batch normalization to instance normalization, we ease the learning process and obtain much better quality.
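
The batch-norm-to-instance-norm swap can be sketched in NumPy (a minimal sketch over NCHW arrays, not the repo's Torch code): instance norm computes statistics per sample and per channel, so each image is normalized independently of the rest of the batch.

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Normalize each (sample, channel) feature map independently.
    x: array of shape (N, C, H, W). Unlike batch norm, the statistics
    do not depend on the other images in the batch."""
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def batch_norm(x, eps=1e-5):
    """Normalize each channel over the whole batch (N, H, W)."""
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.randn(4, 3, 8, 8)
y = instance_norm(x)
# Every individual feature map now has (approximately) zero mean.
print(np.allclose(y.mean(axis=(2, 3)), 0, atol=1e-6))
```

Because each image is normalized on its own, the generator's output for one image cannot be perturbed by whatever else happens to be in the mini-batch, which is the intuition behind the quality improvement.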

Related Projects

PyTorch-Multi-Style-Transfer - Neural Style and MSG-Net

  •    Jupyter

This repo provides a PyTorch implementation of MSG-Net (ours) and Neural Style (Gatys et al., CVPR 2016), and has been included in ModelDepot. We also provide a Torch implementation and an MXNet implementation. Neural Style refers to Image Style Transfer Using Convolutional Neural Networks by Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge.

artistic-style-transfer - Convolutional neural networks for artistic style transfer.

  •    Jupyter

This repository contains (TensorFlow and Keras) code that goes along with a related blog post and talk (PDF). Together, they act as a systematic look at convolutional neural networks from theory to practice, using artistic style transfer as a motivating example. The blog post provides context and covers the underlying theory, while working through the Jupyter notebooks in this repository offers a more hands-on learning experience. If you have any questions about any of this stuff, feel free to open an issue or tweet at me: @copingbear.

neural-style-audio-tf - TensorFlow implementation for audio neural style.

  •    Jupyter

This is a TensorFlow reimplementation of Vadim's Lasagne code for the audio style transfer algorithm, which uses convolutions with random weights to represent audio features. To listen to examples, go to the blog post. Also check out the Torch implementation.
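
The random-weights trick can be sketched in NumPy (a toy sketch assuming a spectrogram-shaped input, not the repo's code): even untrained convolution filters turn raw spectrogram frames into features whose statistics can be matched for style transfer.

```python
import numpy as np

rng = np.random.default_rng(0)
# A spectrogram-like input: (freq_bins, time_frames).
spec = rng.standard_normal((64, 100))

# Random-weight 1D convolution over time: the filters are never
# trained, yet their responses still characterize the audio texture.
n_filters, width = 32, 11
filters = rng.standard_normal((n_filters, 64, width)) * 0.1

frames = np.lib.stride_tricks.sliding_window_view(spec, width, axis=1)
# frames: (64, T-width+1, width) -> feats: (n_filters, T-width+1), ReLU'd.
feats = np.maximum(0, np.einsum('fcw,ctw->ft', filters, frames))
print(feats.shape)  # (32, 90)
```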

fast-neural-style - Feedforward style transfer

  •    Lua

The paper builds on A Neural Algorithm of Artistic Style by Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge by training feedforward neural networks that apply artistic styles to images. After training, our feedforward networks can stylize images hundreds of times faster than the optimization-based method presented by Gatys et al. This repository also includes an implementation of instance normalization as described in the paper Instance Normalization: The Missing Ingredient for Fast Stylization by Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. This simple trick significantly improves the quality of feedforward style transfer models.
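
The style term of the Gatys et al. loss that these feedforward networks are trained against can be sketched as a Gram-matrix match (a single-layer NumPy sketch; the real loss sums this term over several VGG layers and adds a content term):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (C, H, W) feature map: channel co-occurrence
    statistics that capture texture/style while discarding layout."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(gen_feats, style_feats):
    """Squared Frobenius distance between Gram matrices (one layer)."""
    g1, g2 = gram_matrix(gen_feats), gram_matrix(style_feats)
    return np.sum((g1 - g2) ** 2)

rng = np.random.default_rng(0)
a = rng.standard_normal((16, 32, 32))
print(style_loss(a, a))  # identical features => loss is 0.0
```

The optimization-based method minimizes this loss per image; the feedforward networks instead learn a generator that drives it down in a single forward pass.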

fast-neural-doodle - Faster neural doodle

  •    Lua

This is my take on drawing with neural networks; it is faster than Alex J. Champandard's version and similar in quality. This approach is based on the neural artistic style method (L. Gatys), whereas Alex's version uses the CNN+MRF approach of Chuan Li. It takes several minutes to redraw the Renoir example on a GPU, and it fits easily in 4 GB of GPU memory. If you were able to run Justin Johnson's code for artistic style, then this code should work for you too.

neural-style - Torch implementation of neural style algorithm

  •    Lua

This is a Torch implementation of the paper A Neural Algorithm of Artistic Style by Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. By resizing the style image before extracting style features, we can control the types of artistic features that are transferred from the style image; you can control this behavior with the -style_scale flag. Below we see three examples of rendering the Golden Gate Bridge in the style of The Starry Night. From left to right, -style_scale is 2.0, 1.0, and 0.5.
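
A rough NumPy sketch of what the -style_scale flag does before feature extraction (nearest-neighbour resize standing in for the real image resizing): a larger scale means style statistics are computed from a larger style image, so coarser brush-stroke structure is transferred.

```python
import numpy as np

def rescale_style_image(img, style_scale):
    """Nearest-neighbour resize of an (H, W, 3) image by style_scale.
    This mirrors the effect of neural-style's -style_scale flag: the
    style features are then extracted from the resized image."""
    h, w = img.shape[:2]
    nh, nw = max(1, int(h * style_scale)), max(1, int(w * style_scale))
    rows = np.arange(nh) * h // nh
    cols = np.arange(nw) * w // nw
    return img[rows][:, cols]

style = np.zeros((100, 80, 3))
print(rescale_style_image(style, 2.0).shape)  # (200, 160, 3)
print(rescale_style_image(style, 0.5).shape)  # (50, 40, 3)
```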

Neural-Style-Transfer - Keras Implementation of Neural Style Transfer from the paper "A Neural Algorithm of Artistic Style" (http://arxiv

  •    Jupyter

The INetwork script implements certain improvements suggested in Improving the Neural Algorithm of Artistic Style. Color preservation is based on the paper Preserving Color in Neural Artistic Style Transfer.
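
The luminance-only transfer idea from the color-preservation paper can be sketched as follows (a NumPy sketch using the NTSC YIQ transform; the paper also describes a histogram-matching variant): keep the stylized luminance but the content image's chrominance.

```python
import numpy as np

# RGB <-> YIQ matrices (NTSC): Y is luminance, I and Q are chrominance.
RGB2YIQ = np.array([[0.299, 0.587, 0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523, 0.312]])
YIQ2RGB = np.linalg.inv(RGB2YIQ)

def preserve_color(content_rgb, stylized_rgb):
    """Combine the stylized Y channel with the content's I, Q channels,
    so the style affects brightness/texture but not the colors."""
    c_yiq = content_rgb @ RGB2YIQ.T
    s_yiq = stylized_rgb @ RGB2YIQ.T
    out_yiq = np.concatenate([s_yiq[..., :1], c_yiq[..., 1:]], axis=-1)
    return np.clip(out_yiq @ YIQ2RGB.T, 0.0, 1.0)

content = np.random.rand(4, 4, 3)
stylized = np.random.rand(4, 4, 3)
print(preserve_color(content, stylized).shape)  # (4, 4, 3)
```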

MGANs - Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks

  •    Lua

Training and testing code (Torch), pre-trained models, and supplementary materials for "Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks". See this video for a quick explanation of our method and results.

char-rnn - Multi-layer Recurrent Neural Networks (LSTM, GRU, RNN) for character-level language models in Torch

  •    Lua

This code implements multi-layer Recurrent Neural Networks (RNN, LSTM, and GRU) for training and sampling from character-level language models. In other words, the model takes one text file as input and trains a Recurrent Neural Network that learns to predict the next character in a sequence. The RNN can then be used to generate text character by character that will look like the original training data. The context of this code base is described in detail in my blog post. If you are new to Torch/Lua/Neural Nets, it might be helpful to know that this code is really just a slightly more fancy version of this 100-line gist that I wrote in Python/numpy. The code in this repo additionally allows for multiple layers, uses an LSTM instead of a vanilla RNN, has more supporting code for model checkpointing, and is of course much more efficient since it uses mini-batches and can run on a GPU.
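
As a much simpler stand-in for the LSTM, the "predict the next character, then sample" loop can be illustrated with a counting-based bigram model (a toy sketch, not the repo's code): the same interface, with frequency counts in place of a trained network.

```python
import numpy as np

text = "hello world, hello torch"
chars = sorted(set(text))
ix = {c: i for i, c in enumerate(chars)}

# Count character bigrams: counts[a, b] = times char b followed char a.
counts = np.ones((len(chars), len(chars)))  # +1 smoothing
for a, b in zip(text, text[1:]):
    counts[ix[a], ix[b]] += 1
probs = counts / counts.sum(axis=1, keepdims=True)

# Sample new text character by character, like char-rnn's sampler.
rng = np.random.default_rng(0)
out = ["h"]
for _ in range(20):
    p = probs[ix[out[-1]]]
    out.append(chars[rng.choice(len(chars), p=p)])
print("".join(out))
```

char-rnn replaces the bigram table with an LSTM that conditions on the whole preceding sequence, but the sampling loop is the same idea.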

neural-style-tf - TensorFlow (Python API) implementation of Neural Style

  •    Python

This implementation additionally presents techniques for semantic segmentation and multiple style transfer, and the relative weights of the style and content terms can be controlled.

Activity-Recognition-with-CNN-and-RNN - Temporal Segments LSTM and Temporal-Inception for Activity Recognition

  •    Lua

In this work, we demonstrate a strong baseline two-stream ConvNet using ResNet-101. We use this baseline to thoroughly examine the use of both RNNs and Temporal-ConvNets for extracting spatiotemporal information. Building upon our experimental results, we then propose and investigate two different networks to further integrate spatiotemporal information: 1) temporal segment RNN and 2) Inception-style Temporal-ConvNet. Our analysis identifies specific limitations for each method that could form the basis of future work. Our experimental results on UCF101 and HMDB51 datasets achieve state-of-the-art performances, 94.1% and 69.0%, respectively, without requiring extensive temporal augmentation.

optimize-net - OptNet - Reducing memory usage in torch neural nets

  •    Lua

Memory optimizations for Torch neural networks. OptNet walks over the network and verifies which buffers can be reused. It supports both inference (evaluation) mode and training mode.
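
The buffer-sharing idea can be illustrated with a toy sketch (plain NumPy, not the package's graph analysis): in sequential inference, layer outputs can write into two reused ping-pong buffers instead of allocating a fresh array per layer.

```python
import numpy as np

def run_inference(x, layers):
    """Sequential inference with two reused ping-pong buffers.
    Each 'layer' here is just an elementwise scale, but the memory
    pattern is the point: peak usage stays at two buffers no matter
    how many layers there are."""
    buf_a = np.empty_like(x)
    buf_b = np.empty_like(x)
    src = x
    for i, w in enumerate(layers):
        dst = buf_a if i % 2 == 0 else buf_b
        np.multiply(src, w, out=dst)  # layer writes into a reused buffer
        src = dst
    return src

x = np.ones(4)
print(run_inference(x, [2.0, 3.0, 0.5]))  # [3. 3. 3. 3.]
```

Training is harder because backpropagation needs intermediate activations, which is why the package distinguishes the two modes.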

fast-style-transfer-deeplearnjs - Demo of in-browser Fast Neural Style Transfer with deeplearn

  •    TypeScript

DEPRECATED: This repository has been deprecated and is no longer actively maintained. There have been many newer versions of deeplearn.js (now called TensorFlow.js) since this implementation. This code has been integrated into the ml5.js library and the TensorFlow.js version is actively maintained over there. The demo site is still up, though, and will remain up as long as GitHub Pages exists. This repository contains an implementation of the Fast Neural Style Transfer algorithm running fully inside a browser using the Deeplearn.JS library.

waifu2x - Image Super-Resolution for Anime-Style Art

  •    Lua

Image super-resolution for anime-style art using Deep Convolutional Neural Networks. It also supports photos. The demo application can be found at http://waifu2x.udp.jp/ .

zhihu - This repo contains the source code in my personal column (https://zhuanlan

  •    Jupyter

This repo contains the source code for my personal column (https://zhuanlan.zhihu.com/zhaoyeyu), implemented using Python 3.6. It includes Natural Language Processing and Computer Vision projects, such as text generation, machine translation, deep convolutional GANs, and other hands-on code.

fast-neural-style-tensorflow - A tensorflow implementation for fast neural style!

  •    Python

A TensorFlow implementation of Perceptual Losses for Real-Time Style Transfer and Super-Resolution. This code is based on Tensorflow-Slim and OlavHN/fast-neural-style.

gradient-checkpointing - Make huge neural nets fit in memory

  •    Python

Training very deep neural networks requires a lot of memory. Using the tools in this package, developed jointly by Tim Salimans and Yaroslav Bulatov, you can trade some of this memory usage for computation to make your model fit into memory more easily. For feed-forward models we were able to fit models more than 10x larger onto our GPU, at only a 20% increase in computation time. The memory-intensive part of training deep neural networks is computing the gradient of the loss by backpropagation. By checkpointing nodes in the computation graph defined by your model, and recomputing the parts of the graph between those nodes during backpropagation, it is possible to calculate this gradient at reduced memory cost. When training deep feed-forward neural networks consisting of n layers, we can reduce the memory consumption to O(sqrt(n)) in this way, at the cost of performing one additional forward pass (see e.g. Training Deep Nets with Sublinear Memory Cost, by Chen et al. (2016)). This repository provides an implementation of this functionality in TensorFlow, using the TensorFlow graph editor to automatically rewrite the computation graph of the backward pass.
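
The checkpoint-and-recompute scheme can be sketched with scalar "layers" (a toy sketch, not the package's graph rewriting): store every k-th activation in the forward pass, then recompute each segment from its checkpoint during the backward pass.

```python
def forward_checkpointed(x, ws, k):
    """Forward through n scalar 'layers' h -> h * w, storing only every
    k-th activation (k near sqrt(n) gives the O(sqrt(n)) memory bound)."""
    ckpts = {0: x}
    h = x
    for i, w in enumerate(ws, 1):
        h = h * w
        if i % k == 0:
            ckpts[i] = h
    return h, ckpts

def backward_checkpointed(ws, ckpts, k, grad_out):
    """Backward pass: recompute each segment's activations from its
    checkpoint, then backprop through the segment. For h_out = h_in * w,
    d(out)/dw = upstream_grad * h_in and d(out)/dh_in = upstream_grad * w."""
    n = len(ws)
    grads = [0.0] * n
    g = grad_out
    seg_end = n
    while seg_end > 0:
        seg_start = ((seg_end - 1) // k) * k
        # Recompute activations inside this segment from its checkpoint.
        acts = [ckpts[seg_start]]
        for i in range(seg_start, seg_end):
            acts.append(acts[-1] * ws[i])
        for i in range(seg_end - 1, seg_start - 1, -1):
            grads[i] = g * acts[i - seg_start]
            g = g * ws[i]
        seg_end = seg_start
    return grads

x, ws = 2.0, [3.0, 4.0, 5.0]
out, ckpts = forward_checkpointed(x, ws, k=2)
print(out)  # 120.0
print(backward_checkpointed(ws, ckpts, 2, 1.0))  # [40.0, 30.0, 24.0]
```

Only the checkpoints are held across the whole backward pass; each segment's activations exist only while that segment is being processed, which is where the memory saving comes from.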

UniversalStyleTransfer - The source code of NIPS17 'Universal Style Transfer via Feature Transforms'

  •    Lua

Torch implementation of our NIPS17 paper on universal style transfer. TensorFlow implementation by Evan Davis.

psx_retroshader - Shader that "emulates" the rendering style of ps1

  •    GLSL

All shaders support fog, polygon cut-out, and a distortion amount. Warning: like the original PS1, this shader uses affine texture mapping, so if you apply a texture to a large quad you'll see it heavily distorted. To avoid excessive distortion you have to add triangles to the mesh.
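
The difference between affine and perspective-correct texture mapping can be sketched numerically (a toy Python sketch, not the shader itself): affine interpolation ignores per-vertex depth, which is what bends textures on large polygons.

```python
def interpolate_uv(t, uv0, uv1, w0, w1, perspective_correct):
    """Interpolate a texture coordinate along an edge. Affine mapping
    (the PS1 way) lerps in screen space, ignoring per-vertex depth w;
    perspective-correct mapping interpolates uv/w and 1/w, then divides."""
    if not perspective_correct:
        return uv0 + t * (uv1 - uv0)  # affine: screen-space lerp
    inv_w = (1 - t) / w0 + t / w1
    uv_over_w = (1 - t) * uv0 / w0 + t * uv1 / w1
    return uv_over_w / inv_w

# Midpoint of an edge whose far vertex is twice as distant (w=2):
print(interpolate_uv(0.5, 0.0, 1.0, 1.0, 2.0, False))  # 0.5 (affine)
print(interpolate_uv(0.5, 0.0, 1.0, 1.0, 2.0, True))   # ~0.333 (correct)
```

Adding triangles shrinks each affine span, so the error between the two interpolations never grows large, which is why tessellation tames the distortion.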

VEditorKit - Lightweight and Powerful Editor Kit

  •    Swift

Lightweight and powerful editor kit built on Texture (AsyncDisplayKit), https://github.com/texturegroup/texture. VEditorKit provides the core functionality needed for an editor. Unfortunately, when combined words are entered, UITextView's selectedRange changes and the typingAttributes are cleared, so in that case users can't continue typing with the style they want. VEditorKit is available under the MIT license. See the LICENSE file for more info.