
Single- and multi-layer LSTM networks with no additional output nonlinearity, based on aymericdamien's TensorFlow examples and on "Sequence prediction using recurrent neural networks". Experiments with varying numbers of hidden units and LSTM cells, and with techniques such as gradient clipping, were conducted using both static_rnn and dynamic_rnn. All networks were optimized using Adam on the MSE loss function.
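As a rough illustration of the setup described above (a minimal sketch, not the repository's actual code; all shapes and hyperparameters below are assumptions), a single-layer LSTM regressor in TensorFlow 1.x trained with Adam on an MSE loss and with gradient clipping might look like this:

```python
# Minimal sketch: single-layer LSTM regression with no output nonlinearity,
# Adam on MSE, and gradient clipping. Shapes/hyperparameters are assumptions.
import tensorflow as tf

time_steps, input_dim, hidden_units = 25, 1, 32

x = tf.placeholder(tf.float32, [None, time_steps, input_dim])  # past samples
y = tf.placeholder(tf.float32, [None, input_dim])              # next sample

cell = tf.contrib.rnn.BasicLSTMCell(hidden_units)
outputs, _ = tf.nn.dynamic_rnn(cell, x, dtype=tf.float32)

# Linear read-out from the last time step; no additional nonlinearity.
W = tf.Variable(tf.truncated_normal([hidden_units, input_dim], stddev=0.1))
b = tf.Variable(tf.zeros([input_dim]))
prediction = tf.matmul(outputs[:, -1, :], W) + b

loss = tf.reduce_mean(tf.square(prediction - y))  # MSE

optimizer = tf.train.AdamOptimizer(1e-3)
grads_and_vars = optimizer.compute_gradients(loss)
clipped = [(tf.clip_by_norm(g, 5.0), v) for g, v in grads_and_vars]
train_op = optimizer.apply_gradients(clipped)
```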

https://github.com/sunsided/tensorflow-lstm-sin

Tags | tensorflow lstm recurrent-neural-networks neural-network timeseries prediction experiment gru |

Implementation | Python |

License | Public |

Platform | Windows Linux |

The objective is to predict continuous values, the sin and cos functions in this example, from previous observations using the LSTM architecture. This example has been updated with a new version compatible with tensorflow-1.1.0. The new version uses polyaxon, a library that provides an API for creating deep learning models and experiments based on tensorflow.

lstm tensorflow recurrent-networks deep-learning sequence-prediction tensorflow-lstm-regression jupyter time-series recurrent-neural-networks

Compared to a classical approach, using a Recurrent Neural Network (RNN) with Long Short-Term Memory cells (LSTMs) requires little or no feature engineering. Data can be fed directly into the neural network, which acts like a black box that models the problem correctly. Other research on this activity recognition dataset uses a large amount of feature engineering, which is more of a signal-processing approach combined with classical data science techniques. The approach here is very simple in terms of how much the data was preprocessed. Let's use Google's neat deep learning library, TensorFlow, to demonstrate the usage of an LSTM, a type of artificial neural network that can process sequential data / time series.
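In the same spirit, here is a hedged sketch of feeding raw sensor windows straight into an LSTM classifier (the window length, channel count and class count are assumptions, not the repository's values):

```python
# Illustrative sketch, not the repository's code: an LSTM classifier that
# consumes raw sensor windows directly, with no hand-crafted features.
import tensorflow as tf

n_steps, n_channels, n_classes = 128, 9, 6  # assumed dataset dimensions

x = tf.placeholder(tf.float32, [None, n_steps, n_channels])
y = tf.placeholder(tf.float32, [None, n_classes])

cell = tf.contrib.rnn.BasicLSTMCell(64)
outputs, _ = tf.nn.dynamic_rnn(cell, x, dtype=tf.float32)

logits = tf.layers.dense(outputs[:, -1, :], n_classes)
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
```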

machine-learning deep-learning lstm human-activity-recognition neural-network rnn recurrent-neural-networks tensorflow activity-recognition

Multi-layer Recurrent Neural Networks (LSTM, RNN) for word-level language models in Python using TensorFlow. Mostly reuses code from https://github.com/sherjilozair/char-rnn-tensorflow, which was in turn inspired by Andrej Karpathy's char-rnn.
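A minimal word-level language-model skeleton in Keras gives the flavor of such a model (the repository itself builds a lower-level TensorFlow graph; the vocabulary size and sequence length here are placeholders):

```python
# Hedged sketch of a word-level language model: embed word ids, run an LSTM,
# and predict a distribution over the next word.
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

vocab_size, seq_len = 10000, 30  # assumed values

model = Sequential([
    Embedding(vocab_size, 128, input_length=seq_len),  # word ids -> vectors
    LSTM(256),                                         # sequence summary
    Dense(vocab_size, activation='softmax'),           # next-word distribution
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
```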

rnn tensorflow rnn-tensorflow lstm

This code implements a multi-layer Recurrent Neural Network (RNN, LSTM, and GRU) for training/sampling from character-level language models. In other words, the model takes one text file as input and trains a Recurrent Neural Network that learns to predict the next character in a sequence. The RNN can then be used to generate text character by character that looks like the original training data. The context of this code base is described in detail in my blog post. If you are new to Torch/Lua/Neural Nets, it might be helpful to know that this code is really just a slightly fancier version of this 100-line gist that I wrote in Python/numpy. The code in this repo additionally: allows for multiple layers, uses an LSTM instead of a vanilla RNN, has more supporting code for model checkpointing, and is of course much more efficient since it uses mini-batches and can run on a GPU.
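The character-by-character generation described above boils down to a small sampling loop. Below is a hedged numpy sketch in the spirit of the mentioned 100-line gist (not a copy of it); `step` stands in for one forward pass of the trained network and is an assumed helper, not a real API:

```python
# Sketch of character-level sampling: feed one character in, sample the next
# from the predicted distribution, and repeat.
import numpy as np

def sample_text(step, h, seed_ix, vocab_size, n_chars):
    """Unroll the net: step(x, h) -> (probs, h) is the assumed forward pass."""
    x = np.zeros(vocab_size)
    x[seed_ix] = 1.0                      # one-hot encode the seed character
    indices = [seed_ix]
    for _ in range(n_chars):
        probs, h = step(x, h)             # distribution over the next char
        ix = np.random.choice(vocab_size, p=probs)
        x = np.zeros(vocab_size)
        x[ix] = 1.0                       # the sample becomes the next input
        indices.append(ix)
    return indices
```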

Multi-layer Recurrent Neural Networks (LSTM, RNN) for character-level language models in Python using Tensorflow. Inspired by Andrej Karpathy's char-rnn.

This version of sketch-rnn has been deprecated. Please see the updated version of sketch-rnn, which is a full generative model for vector drawings. This is an implementation of a multi-layer recurrent neural network (RNN, LSTM, GRU) used to model and generate sketches stored in .svg vector graphic files. The methodology is to combine Mixture Density Networks with an RNN, along with modelling dynamic end-of-stroke and end-of-content probabilities learned from a large corpus of similar .svg files, to generate drawings that are similar to the vector training data.
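The Mixture Density Network part can be sketched as follows (an illustration of the general MDN idea, not this repository's code): the RNN's output vector is sliced into the parameters of a 2D Gaussian mixture over pen offsets, plus an end-of-stroke probability.

```python
# Hedged MDN-head sketch: slice the RNN output into mixture parameters.
import numpy as np

def mdn_params(output, n_mixtures):
    """output: flat vector of size 6 * n_mixtures + 1 from the RNN."""
    assert output.size == 6 * n_mixtures + 1
    pi, mu_x, mu_y, sig_x, sig_y, rho = np.split(output[:-1], 6)
    pi = np.exp(pi) / np.exp(pi).sum()            # mixture weights: softmax
    sig_x, sig_y = np.exp(sig_x), np.exp(sig_y)   # stddevs must be positive
    rho = np.tanh(rho)                            # correlations in (-1, 1)
    eos = 1.0 / (1.0 + np.exp(-output[-1]))       # end-of-stroke: sigmoid
    return pi, mu_x, mu_y, sig_x, sig_y, rho, eos
```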

brain.js is a library of Neural Networks written in JavaScript. 💡 Note: This is a continuation of the harthur/brain repository (which is not maintained anymore). For more details, check out this issue.

neural-network brain recurrent-neural-networks easy-to-use api web nodejs browser convolutional-neural-networks node stream ai artificial-intelligence brainjs brain.js feed-forward classifier neural network neural-networks machine-learning synapse recurrent long-short-term-memory gated-recurrent-unit rnn lstm gru

This is a Tensorflow implementation of Conditional Image Generation with PixelCNN Decoders, which introduces the Gated PixelCNN model based on the PixelCNN architecture originally described in Pixel Recurrent Neural Networks. The model can be conditioned on a latent representation of labels or images to generate images accordingly. Images can also be modelled unconditionally. It can also act as a powerful decoder and can replace deconvolution (transposed convolution) in Autoencoders and GANs. A detailed summary of the paper can be found here. The gating accounts for remembering the context and models more complex interactions, as in an LSTM. The network stack on the left is the vertical stack that takes care of the blind spots that occur during convolution due to the masking layer (refer to the Pixel RNN paper to learn more about masking). The use of residual connections significantly improves the model performance.
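The gating mentioned above can be sketched in a few lines (illustrative, not the repository's code): the feature map coming out of a masked convolution is split into two halves, one passed through tanh and one through a sigmoid "gate", and the two are multiplied elementwise.

```python
# Sketch of the gated activation unit used by Gated PixelCNN.
import tensorflow as tf

def gated_activation(features):
    """features: [batch, height, width, 2 * channels] from a masked conv."""
    f, g = tf.split(features, 2, axis=3)
    return tf.tanh(f) * tf.sigmoid(g)  # the gate decides what passes through
```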

deep-learning generative-algorithm paper convolution deepmind tensorflow

Deep learning is a group of exciting new technologies for neural networks. Through a combination of advanced training techniques and neural network architectural components, it is now possible to create neural networks of much greater complexity. Deep learning allows a neural network to learn hierarchies of information in a way that resembles the function of the human brain. This course will introduce the student to computer vision with Convolutional Neural Networks (CNNs), time series analysis with Long Short-Term Memory (LSTM), classic neural network structures, and applications to computer security. High Performance Computing (HPC) aspects will demonstrate how deep learning can be leveraged both on graphical processing units (GPUs) and on grids. Focus is primarily upon the application of deep learning to problems, with some introduction to the mathematical foundations. Students will use the Python programming language to implement deep learning using Google TensorFlow and Keras. It is not necessary to know Python prior to this course; however, familiarity with at least one programming language is assumed. This course will be delivered in a hybrid format that includes both classroom and online instruction. This syllabus presents the expected class schedule, due dates, and reading assignments. Download the current syllabus.

neural-network machine-learning tensorflow keras deeplearning

Implements most of the great things that came out in 2014 concerning recurrent neural networks, along with some good optimizers for these types of networks. This module also contains the SGD, AdaGrad, and AdaDelta gradient descent methods; each is constructed from an objective function and a set of Theano variables, and returns an updates dictionary to pass to a theano function.
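The updates-dictionary pattern described above is the standard Theano idiom; here is a minimal sketch with plain SGD (it mirrors the general pattern, not this module's specific API):

```python
# Toy example: build a cost from Theano variables, derive a gradient, and
# return updates mapping each shared variable to its new value.
import numpy as np
import theano
import theano.tensor as T

x = T.vector('x')
w = theano.shared(np.zeros(3, dtype=theano.config.floatX), name='w')
cost = T.sum((T.dot(x, w) - 1.0) ** 2)   # a toy objective function

lr = 0.01
grad = T.grad(cost, w)
updates = [(w, w - lr * grad)]           # shared variable -> updated value

train = theano.function([x], cost, updates=updates)
```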

machine-learning recurrent-networks theano lstm gru adadelta dropout automatic-differentiation neural-network tutorial

Neural Machine Translation with Keras (Theano and Tensorflow). See the installation instructions for obtaining the required packages for running this library.

neural-machine-translation keras deep-learning sequence-to-sequence theano machine-learning nmt machine-translation lstm-networks gru tensorflow attention-mechanism web-demo transformer attention-is-all-you-need attention-model attention-seq2seq

The goal of this project of mine is to let users try and experiment with the seq2seq neural network architecture. This is done by solving different simple toy problems about signal prediction. Normally, seq2seq architectures are used for purposes more sophisticated than signal prediction, say, language modeling, but this project is an interesting tutorial on the way to more complicated material. Although a ".py" Python version of this tutorial is available in the repository, it is more convenient to run the code inside the notebook; the exported ".py" code feels a bit raw.
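A hedged sketch of a seq2seq-style model for signal prediction, written in Keras (an illustration of the architecture, not this project's actual code): an encoder LSTM compresses the past window into a fixed vector, which a decoder LSTM then unrolls into the predicted future window.

```python
# Encoder-decoder sketch for signal prediction; all shapes are assumptions.
from keras.models import Sequential
from keras.layers import LSTM, RepeatVector, TimeDistributed, Dense

past_len, future_len, signal_dim = 30, 10, 2

model = Sequential([
    LSTM(64, input_shape=(past_len, signal_dim)),  # encoder -> fixed vector
    RepeatVector(future_len),                      # feed it to each step
    LSTM(64, return_sequences=True),               # decoder
    TimeDistributed(Dense(signal_dim)),            # one prediction per step
])
model.compile(optimizer='adam', loss='mse')
```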

seq2seq tensorflow tensorflow-tutorials

Some examples require the MNIST dataset for training and testing. Don't worry: the dataset is downloaded automatically when running the examples (via input_data.py). MNIST is a database of handwritten digits; for a quick description of the dataset, you can check this notebook.
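For reference, the usual TensorFlow 1.x idiom behind input_data.py looks like this (a small sketch; the dataset is fetched on first use and cached in the given directory):

```python
# Download (if needed) and load MNIST, then draw one mini-batch.
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
images, labels = mnist.train.next_batch(100)  # 100 digit images + labels
```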

recurrent-neural-networks convolutional-neural-networks deep-learning-tutorial tensorflow tensorlayer keras deep-reinforcement-learning tensorflow-tutorials deep-learning machine-learning notebook autoencoder multi-layer-perceptron reinforcement-learning tflearn neural-networks neural-network neural-machine-translation nlp cnn

TFLearn is a modular and transparent deep learning library built on top of TensorFlow. It was designed to provide a higher-level API to TensorFlow in order to facilitate and speed up experimentation, while remaining fully transparent and compatible with it. The high-level API currently supports most recent deep learning models, such as Convolutions, LSTM, BiRNN, BatchNorm, PReLU, Residual networks and Generative networks. In the future, TFLearn is also intended to stay up-to-date with the latest deep learning techniques.
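To give a sense of the higher-level API style described above, here is a small TFLearn sketch (the layer sizes, input shape and ten-class output are placeholders, not taken from any particular example):

```python
# Build an LSTM classifier as a stack of TFLearn layers, then wrap it in a
# trainable model object.
import tflearn

net = tflearn.input_data(shape=[None, 20, 16])   # [batch, steps, features]
net = tflearn.lstm(net, 128)                     # one LSTM layer
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='adam',
                         loss='categorical_crossentropy')

model = tflearn.DNN(net)
# model.fit(X, Y, n_epoch=10)  # training call, given prepared arrays
```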

tflearn tensorflow neural-network deep-learning machine-learning data-science

This is the full code for 'How to Make a Simple Tensorflow Speech Recognizer' by @Sirajology on YouTube. In this demo code we build an LSTM recurrent neural network using the TFLearn high-level TensorFlow-based library and train it on a labeled dataset of spoken digits; then we test it on spoken digits. Run the code from a terminal. It will take a couple of hours to train fully.

The goal of this repository is to provide comprehensive tutorials for TensorFlow while keeping the code simple. Each tutorial includes a detailed explanation (in .ipynb format) as well as the source code (in .py format).

deep-learning tensorflow reinforcement-learning machine-learning pattern-recognition object-detection convolutional-neural-networks recurrent-neural-networks neural-network

RNNSharp is a toolkit of deep recurrent neural networks widely used for many different kinds of tasks, such as sequence labeling, sequence-to-sequence and so on. It is written in C# and based on .NET Framework 4.6 or above. This page introduces what RNNSharp is, how it works and how to use it. To get the demo package, you can visit the release page.

rnn crf deep-learning machine-learning c-sharp sequence-labeling rnn-model recurrent-neural-networks nlp lstm

A Tensorflow implementation of the Neural Turing Machine. This implementation uses an LSTM controller. NTM models with multiple read/write heads are supported. The referenced Torch code can be found here.
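The NTM's content-based addressing can be sketched as follows (the general mechanism from the paper, not this repository's code): the controller emits a key, which is compared against every memory row by cosine similarity, and a sharpened softmax turns the similarities into read/write weights.

```python
# Hedged sketch of NTM content-based addressing in numpy.
import numpy as np

def content_addressing(memory, key, beta):
    """memory: [slots, width]; key: [width]; beta: sharpening scalar."""
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    similarity = memory.dot(key) / norms       # cosine similarity per slot
    scores = np.exp(beta * similarity)
    return scores / scores.sum()               # attention over memory slots
```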

tensorflow neural-turing-machines

This code implements a recurrent neural network trained to generate classical music. The model, which uses LSTM layers and draws inspiration from convolutional neural networks, learns to predict which notes will be played at each time step of a musical piece. You can read about its design and hear examples in this blog post.
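A note-prediction model of this general kind can be sketched as an LSTM over a piano-roll, with an independent on/off probability per note at every time step (an illustrative sketch under assumed shapes, not this project's actual architecture):

```python
# Piano-roll sketch: 88 piano notes per step, sigmoid per-note probabilities.
from keras.models import Sequential
from keras.layers import LSTM, TimeDistributed, Dense

steps, notes = 64, 88  # assumed window length and note range

model = Sequential([
    LSTM(128, return_sequences=True, input_shape=(steps, notes)),
    TimeDistributed(Dense(notes, activation='sigmoid')),  # note on/off probs
])
model.compile(optimizer='adam', loss='binary_crossentropy')
```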

This repository contains an independent TensorFlow implementation of recurrent entity networks from Tracking the World State with Recurrent Entity Networks. This paper introduces the first method to solve all of the bAbI tasks using only 10k training examples. The authors' original Torch implementation is now available here. Percent error is reported for each task, comparing the numbers in the paper to the implementation contained in this repository.
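The core memory update from the paper can be sketched in a few lines (a hedged paraphrase of the published equations; all shapes, parameter names and the tiny epsilon are placeholders): each memory slot is gated on how well it matches the current input and its key, written to through a candidate vector, and then renormalized, which acts as forgetting.

```python
# Hedged numpy sketch of the recurrent entity network memory update.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def update_memory(h, w, s, U, V, W):
    """h, w: [slots, dim] memories and keys; s: [dim] encoded input;
    U, V, W: [dim, dim] learned parameter matrices."""
    gate = sigmoid(h.dot(s) + w.dot(s))                  # [slots] gate values
    candidate = np.tanh(h.dot(U) + w.dot(V) + s.dot(W))  # [slots, dim]
    h = h + gate[:, None] * candidate                    # gated write
    return h / (np.linalg.norm(h, axis=1, keepdims=True) + 1e-8)  # "forget"
```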

tensorflow recurrent-neural-networks deep-learning machine-learning natural-language-processing