DELTA is a deep learning-based, end-to-end natural language and speech processing platform. DELTA aims to provide easy and fast experiences for using, deploying, and developing natural language processing and speech models for both academia and industry use cases. DELTA is mainly implemented using TensorFlow and Python 3. For details of DELTA, please refer to this paper.
nlp deep-learning tensorflow speech sequence-to-sequence seq2seq speech-recognition text-classification speaker-verification nlu text-generation emotion-recognition tensorflow-serving tensorflow-lite inference asr serving front-end

Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar, Artem Sokolov, Ann Clifton and Matt Post (2017): Sockeye: A Toolkit for Neural Machine Translation. In eprint arXiv:cs-CL/1712.05690. If you are interested in collaborating or have any questions, please submit a pull request or issue. You can also send questions to sockeye-dev-at-amazon-dot-com.
deep-learning deep-neural-networks mxnet machine-learning machine-translation neural-machine-translation encoder-decoder attention-mechanism sequence-to-sequence sequence-to-sequence-models sockeye attention-is-all-you-need attention-alignment-visualization attention-model seq2seq convolutional-neural-networks translation

The Neural Monkey package provides a higher-level abstraction for sequential neural network models, most prominently in Natural Language Processing (NLP). It is built on TensorFlow. It can be used for fast prototyping of sequential models in NLP, e.g. for neural machine translation or sentence classification. The higher-level API brings together a collection of standard building blocks (RNN encoder and decoder, multi-layer perceptron) and a simple way of adding new building blocks implemented directly in TensorFlow.
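As a rough illustration of the building-block idea, here is a minimal encoder-decoder assembled from standard components; this is generic tf.keras code with made-up layer sizes, not Neural Monkey's actual API:

```python
# Minimal sketch (not Neural Monkey's actual API) of composing a seq2seq model
# from standard building blocks: an RNN encoder, an RNN decoder, and a dense
# output projection. Vocabulary and layer sizes are illustrative assumptions.
import tensorflow as tf

VOCAB_SIZE, EMB_DIM, HIDDEN = 8000, 256, 512

# Encoder: embed source tokens and run a GRU, keeping its final state.
src = tf.keras.Input(shape=(None,), dtype="int32", name="source_ids")
src_emb = tf.keras.layers.Embedding(VOCAB_SIZE, EMB_DIM)(src)
_, enc_state = tf.keras.layers.GRU(HIDDEN, return_state=True)(src_emb)

# Decoder: embed shifted target tokens, run a GRU initialized with the
# encoder state, then project each step to vocabulary logits.
tgt = tf.keras.Input(shape=(None,), dtype="int32", name="target_ids")
tgt_emb = tf.keras.layers.Embedding(VOCAB_SIZE, EMB_DIM)(tgt)
dec_out = tf.keras.layers.GRU(HIDDEN, return_sequences=True)(tgt_emb, initial_state=enc_state)
logits = tf.keras.layers.Dense(VOCAB_SIZE)(dec_out)

model = tf.keras.Model([src, tgt], logits)
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
```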
neural-machine-translation tensorflow nlp sequence-to-sequence neural-networks nmt machine-translation mt deep-learning image-captioning encoder-decoder gpu

This is a research project, not an official NVIDIA product. OpenSeq2Seq's main goal is to allow researchers to explore various sequence-to-sequence models as effectively as possible. The efficiency is achieved by fully supporting distributed and mixed-precision training. OpenSeq2Seq is built using TensorFlow and provides all the necessary building blocks for training encoder-decoder models for neural machine translation and automatic speech recognition. We plan to extend it with other modalities in the future.
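Mixed-precision training, as mentioned above, keeps most computation in float16 while protecting numerically sensitive parts; the following is a generic TensorFlow 2 sketch of that idea, not OpenSeq2Seq's own implementation:

```python
# Generic TensorFlow 2 sketch of mixed-precision training (not OpenSeq2Seq's
# own implementation): compute mostly in float16, keep variables in float32,
# and scale the loss to avoid float16 gradient underflow.
import tensorflow as tf

tf.keras.mixed_precision.set_global_policy("mixed_float16")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(512, activation="relu", input_shape=(128,)),
    # Keep the final projection in float32 for numerical stability.
    tf.keras.layers.Dense(10, dtype="float32"),
])

# LossScaleOptimizer wraps the base optimizer and handles dynamic loss scaling.
optimizer = tf.keras.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.Adam())
model.compile(optimizer=optimizer,
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
```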
neural-machine-translation multi-gpu deep-learning sequence-to-sequence seq2seq multi-node speech-recognition speech-to-text mixed-precision float16

Feel free to add to this via a pull request, keeping each section alphabetically ordered.
nmt mt neural-machine-translation machine-translation sequence-to-sequence deep-learning

Neural Machine Translation with Keras (Theano and TensorFlow). See the installation instructions for obtaining the required packages for running this library.
neural-machine-translation keras deep-learning sequence-to-sequence theano machine-learning nmt machine-translation lstm-networks gru tensorflow attention-mechanism web-demo transformer attention-is-all-you-need attention-model attention-seq2seq

An extension of the Hierarchical Recurrent Encoder-Decoder for Generative Context-Aware Query Suggestion; our implementation is in TensorFlow and uses an attention mechanism.
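The repository's exact attention variant is not specified here; as a generic illustration, additive (Bahdanau-style) attention scores each encoder state against the current decoder state and takes a weighted sum:

```python
# Minimal NumPy sketch of additive (Bahdanau-style) attention over encoder
# states; a generic illustration, not the repository's exact code.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def additive_attention(enc_states, dec_state, W_enc, W_dec, v):
    """enc_states: (T, H) encoder outputs; dec_state: (H,) decoder state.
    W_enc, W_dec: (H, A) projections; v: (A,) scoring vector."""
    # Score each encoder position against the current decoder state.
    scores = np.tanh(enc_states @ W_enc + dec_state @ W_dec) @ v   # (T,)
    weights = softmax(scores)                                      # attention distribution
    context = weights @ enc_states                                 # (H,) weighted sum
    return context, weights

# Toy usage with random parameters.
T, H, A = 5, 8, 8
rng = np.random.default_rng(0)
ctx, w = additive_attention(rng.normal(size=(T, H)), rng.normal(size=H),
                            rng.normal(size=(H, A)), rng.normal(size=(H, A)),
                            rng.normal(size=A))
```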
sequence-to-sequence tensorflow rnn lstm

Abstract: Portmanteaus are a word formation phenomenon where two words are combined to form a new word. We propose character-level neural sequence-to-sequence (S2S) methods for the task of portmanteau generation that are end-to-end-trainable, language independent, and do not explicitly use additional phonetic information. We propose a noisy-channel-style model, which allows for the incorporation of unsupervised word lists, improving performance over a standard source-to-target model. This model is made possible by an exhaustive candidate generation strategy specifically enabled by the features of the portmanteau task. Experiments find our approach superior to a state-of-the-art FST-based baseline with respect to ground truth accuracy and human evaluation. Code/ contains the code. Data/ contains the dataset.
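A loose sketch of the noisy-channel-style reranking described in the abstract, with illustrative (not the paper's) function names: exhaustively generated candidates are scored by a channel model plus a prior estimated from an unsupervised word list.

```python
# Loose sketch of noisy-channel-style reranking: candidate portmanteaus are
# scored by combining a channel-model score (how well the candidate explains
# the source words) with a prior over candidates (e.g. a character LM trained
# on an unsupervised word list). Function names and the exact factorization
# here are illustrative assumptions, not the paper's code.
import math

def rerank(candidates, channel_logprob, prior_logprob, alpha=1.0):
    """candidates: list of candidate strings.
    channel_logprob(cand) -> log p(source words | cand), from the S2S model.
    prior_logprob(cand)   -> log p(cand), e.g. from a word-list language model.
    alpha: interpolation weight between the two scores."""
    scored = [(channel_logprob(c) + alpha * prior_logprob(c), c) for c in candidates]
    scored.sort(reverse=True)
    return [c for _, c in scored]

# Toy usage with stand-in scoring functions.
cands = ["brunch", "brekfastlunch", "breaunch"]
best = rerank(cands,
              channel_logprob=lambda c: -0.1 * len(c),          # stand-in
              prior_logprob=lambda c: math.log(1.0 / len(c)))   # stand-in
```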
nlp nlp-machine-learning nlg seq2seq nlproc sequence-to-sequence char-rnn char-embedding

[In-Progress] TensorFlow implementation of Sequence to Sequence Learning with Neural Networks.
deep-learning sequence-to-sequence

This is my bachelor's thesis, which I wrote over the course of two months during my final year of studies, earning my Bachelor of Science in Computer Science degree. The thesis was co-authored by my good friend Tobias Ånhed. Click here for the revised edition on DiVA.
lstm-neural-networks research-paper bachelor-thesis sequence-to-sequence machine-learning finance trading forex algorithmic-trading recurrent-neural-networks forex-trading technical-analysis technical-indicators artificial-neural-networks keras time-series-analysis financial-analysis white-paper publication trading-algorithms

This is NPMT, the source code of Towards Neural Phrase-based Machine Translation and Sequence Modeling via Segmentations from Microsoft Research. It is built on top of the fairseq toolkit in Torch. We present the setup and Neural Machine Translation (NMT) experiments in Towards Neural Phrase-based Machine Translation. Neural Phrase-based Machine Translation (NPMT) explicitly models the phrase structures in output sequences using Sleep-WAke Networks (SWAN), a recently proposed segmentation-based sequence modeling method. To mitigate the monotonic alignment requirement of SWAN, we introduce a new layer to perform (soft) local reordering of input sequences. Different from existing neural machine translation (NMT) approaches, NPMT does not use attention-based decoding mechanisms. Instead, it directly outputs phrases in a sequential order and can decode in linear time.
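As a loose conceptual sketch only (not the exact layer from the paper), soft local reordering can be thought of as each position emitting a learned, softmax-weighted mixture of its neighbours within a small window:

```python
# Loose NumPy sketch of the *idea* of a (soft) local reordering layer: each
# output position mixes the inputs inside a small window with learned,
# position-dependent weights, letting nearby tokens swap softly. This is an
# illustration of the concept only, not the layer defined in the paper.
import numpy as np

def soft_local_reorder(x, gate_W, window=3):
    """x: (T, H) input sequence; gate_W: (H, window) weights producing
    per-position mixing logits; returns a (T, H) softly reordered sequence."""
    T, H = x.shape
    half = window // 2
    out = np.zeros_like(x)
    for t in range(T):
        idx = [min(max(t + d, 0), T - 1) for d in range(-half, half + 1)]
        logits = x[t] @ gate_W                      # (window,) mixing logits
        w = np.exp(logits - logits.max())
        w /= w.sum()                                # softmax over the window
        out[t] = sum(wi * x[i] for wi, i in zip(w, idx))
    return out
```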
sequence-to-sequence machine-translation npmt swan fairseq neural-machine-translation torch

I uploaded three .py files and one .ipynb file. The .py files contain the network implementation and utilities. The Jupyter Notebook is a demo of how to apply the model. Seq2Seq model: as mentioned above, the model architecture is similar to the one used in "Listen, Attend and Spell", i.e. we use pyramidal bidirectional LSTMs in the encoder. This reduces the time resolution and improves performance on longer sequences.
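A minimal sketch of the pyramidal reduction step (assumed sizes, not the repository's exact code): each pyramidal layer concatenates adjacent frames before the next bidirectional LSTM, halving the time resolution.

```python
# Minimal tf.keras sketch of one pyramidal bidirectional LSTM encoder step;
# sizes are assumptions for illustration, not the repository's exact code.
import tensorflow as tf

def pyramidal_bilstm(x, units):
    """x: (batch, time, feat) float tensor; returns (batch, time // 2, 2 * units)."""
    # First bidirectional LSTM over the full-resolution sequence.
    x = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(units, return_sequences=True))(x)
    # Concatenate every pair of adjacent frames, halving the time resolution.
    b, t = tf.shape(x)[0], tf.shape(x)[1]
    x = x[:, : (t // 2) * 2, :]                       # drop a trailing odd frame
    x = tf.reshape(x, [b, t // 2, 2 * 2 * units])     # stack adjacent frames
    # A second bidirectional LSTM now sees a sequence half as long.
    return tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(units, return_sequences=True))(x)

# Example: a batch of 4 utterances, 100 frames of 40-dim features each
# -> 50 frames of 512-dim encodings after one reduction.
out = pyramidal_bilstm(tf.random.normal([4, 100, 40]), units=256)
```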
speech-recognition speech-to-text tensorflow seq2seq encoder-decoder deep-learning machine-learning sequence-to-sequence nlp listen-attend-and-spell

Implementation of a seq2seq model for summarization of textual data using the latest version of TensorFlow, demonstrated on three different datasets: Amazon reviews, GitHub issues, and news articles.
neural-network text-summarization text-summarizer seq2seq tensorflow nlp sequence-to-sequence encoder-decoder natural-language-processing deep-learning machine-learning summarization

Tutorial Notebook: the Jupyter notebook that accompanies the Medium post. seq2seq_utils.py: convenience functions used in the tutorial notebook to make predictions.
machine-learning rnn-encoder-decoder seq2seq-tutorial sequence-to-sequence keras keras-tutorials nlp-machine-learning data-science deep-learning deeplearning medium-article