
Welcome to my GitHub repo. I am a Data Scientist and I code in R, Python, and Wolfram Mathematica. Here you will find some of the Machine Learning, Deep Learning, Natural Language Processing, and Artificial Intelligence models I have developed.

anomaly-detection deep-learning autoencoder keras keras-models denoising-autoencoders generative-adversarial-network glove keras-layer word2vec nlp natural-language-processing sentiment-analysis opencv segnet resnet-50 variational-autoencoder t-sne svm-classifier latent-dirichlet-allocation

This repository is a collection of notebooks covering various topics in Bayesian methods for machine learning. Gaussian processes: an introduction, with example implementations in plain NumPy/SciPy as well as with the scikit-learn and GPy libraries.
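As a rough illustration of the plain-NumPy approach (my own sketch, not code from the repository; the RBF kernel and noise level are arbitrary choices), GP regression posterior mean and covariance come out in a few lines:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    # Squared-exponential (RBF) kernel matrix between two sets of 1-D points.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    # Standard GP regression equations: posterior mean and covariance
    # of the latent function at x_test, given noisy observations.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    K_ss = rbf_kernel(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s.T @ alpha
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
    return mean, cov

x = np.array([-2.0, 0.0, 2.0])
y = np.sin(x)
mu, cov = gp_posterior(x, y, np.array([0.0]))
```

Libraries like scikit-learn and GPy wrap exactly these equations behind kernel objects and hyperparameter optimization.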

machine-learning bayesian-methods bayesian-machine-learning gaussian-processes bayesian-optimization variational-autoencoder

This notebook accompanies the paper "Variational autoencoders for collaborative filtering" by Dawen Liang, Rahul G. Krishnan, Matthew D. Hoffman, and Tony Jebara, in The Web Conference (aka WWW) 2018. In this notebook, we show a complete self-contained example of training a variational autoencoder (as well as a denoising autoencoder) with multinomial likelihood (described in the paper) on the public Movielens-20M dataset, including both data preprocessing and model training.
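For a user's interaction vector x and decoder output f, the multinomial log-likelihood the paper uses reduces to a cross-entropy-style term, sum_i x_i * log softmax(f)_i. A minimal NumPy sketch (variable names are mine, not the notebook's):

```python
import numpy as np

def multinomial_log_likelihood(x, logits):
    # log p(x | z) under a multinomial over items: sum_i x_i * log softmax(logits)_i.
    # x is an interaction (click/count) vector; logits is the decoder output.
    m = logits.max()
    log_softmax = logits - m - np.log(np.sum(np.exp(logits - m)))
    return float(np.sum(x * log_softmax))

x = np.array([1.0, 0.0, 1.0, 0.0])        # user interacted with items 0 and 2
logits = np.array([2.0, -1.0, 1.5, 0.0])  # hypothetical decoder output
nll = -multinomial_log_likelihood(x, logits)
```

The max-subtraction is the usual log-sum-exp stabilization; in the paper this term is combined with a KL penalty to form the (annealed) ELBO.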

recommender-systems collaborative-filtering variational-autoencoder bayesian-inference

TensorFlow implementation for stochastic adversarial video prediction. Given a sequence of initial frames, our model is able to predict future frames of various possible futures. For example, in the next two sequences, we show the ground truth sequence on the left and random predictions of our model on the right. Predicted frames are indicated by the yellow bar at the bottom. For more examples, visit the project page. Stochastic Adversarial Video Prediction, Alex X. Lee, Richard Zhang, Frederik Ebert, Pieter Abbeel, Chelsea Finn, Sergey Levine. arXiv preprint arXiv:1804.01523, 2018.

video-prediction stochastic adversarial vae variational-autoencoder generative-adversarial-network gan vae-gan video-generation

We provide a TensorFlow implementation of the CVAE-based dialog model described in Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders, published as a long paper in ACL 2017. See the paper for more details. The outputs will be printed to stdout and generated responses will be saved at test.txt in the test_path.
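In a conditional VAE like this, the KL term is taken between a recognition network q(z|x,c) and a learned conditional prior p(z|c), both diagonal Gaussians. A generic closed-form sketch of that term (illustrative, not the repository's code):

```python
import numpy as np

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    # Closed-form KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal
    # Gaussians, summed over latent dimensions.
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    return float(0.5 * np.sum(
        logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    ))

# KL is zero when the recognition network matches the conditional prior.
kl_same = kl_diag_gaussians(np.zeros(4), np.zeros(4), np.zeros(4), np.zeros(4))
kl_diff = kl_diag_gaussians(np.ones(4), np.zeros(4), np.zeros(4), np.zeros(4))
```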

dialogue-systems end-to-end deep-learning generative-model variational-autoencoder chatbot variational-bayes cvae

The repository stores scripts to train, evaluate, and extract knowledge from a variational autoencoder (VAE) trained on 33 different cancer types from The Cancer Genome Atlas (TCGA). The specific VAE model is named Tybalt after an instigative, cat-like character in Shakespeare's "Romeo and Juliet". Just as the character Tybalt sets off the series of events in the play, the model Tybalt begins the foray of VAE manifold learning in transcriptomics. Also, deep unsupervised learning likes cats.

cancer-genomics deep-learning unsupervised-learning gene-expression variational-autoencoder

Disclaimer: VAE coming soon... The last activation of the decoder layer, the loss function, and the normalization scheme used on the training data are crucial for obtaining good reconstructions and preventing exploding negative losses.
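The point about pairing the decoder's last activation, the loss, and the data normalization can be made concrete: with inputs scaled to [0, 1] and a sigmoid decoder output, binary cross-entropy stays nonnegative, whereas applying BCE to unnormalized targets can drive the "loss" arbitrarily negative. A NumPy sketch of the mismatch (my example, not the repository's code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce(target, output, eps=1e-7):
    # Per-element binary cross-entropy; only meaningful for targets in [0, 1].
    output = np.clip(output, eps, 1 - eps)
    return float(np.mean(-(target * np.log(output)
                           + (1 - target) * np.log(1 - output))))

raw = np.array([0.0, 12.0, 255.0])            # e.g. raw pixel intensities
target = raw / 255.0                          # normalized into [0, 1]
recon = sigmoid(np.array([-4.0, -2.0, 4.0]))  # decoder ending in a sigmoid

good_loss = bce(target, recon)   # nonnegative, well-behaved
bad_loss = bce(raw, recon)       # unnormalized targets: loss goes negative
```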

autoencoder pytorch variational-autoencoder

This repository contains Jupyter notebooks implementing several deep learning models using TensorFlow. Each notebook contains detailed explanations of its model, hopefully making every step easy to understand.

machine-learning deep-learning tensorflow rnn-tensorflow rnn cnn cnn-tensorflow vae variational-autoencoder recurrent-neural-networks recurrent-neural-network convolutional-neural-networks convolutional-neural-network notebook ipynb

Also included is a Jupyter notebook which shows how the Gumbel-Max trick for sampling discrete variables relates to Concrete distributions. Note: the current Dockerfile is for TensorFlow 1.5 CPU training.
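The relationship the notebook illustrates: adding i.i.d. Gumbel(0, 1) noise to log-probabilities and taking an argmax yields an exact categorical sample (Gumbel-Max), and replacing the argmax with a temperature-controlled softmax gives the differentiable Concrete/Gumbel-Softmax relaxation. A quick NumPy sketch (my own, not the notebook's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_max_sample(log_probs):
    # Exact categorical sample: argmax of log-probs plus Gumbel(0, 1) noise.
    g = -np.log(-np.log(rng.uniform(size=log_probs.shape)))
    return int(np.argmax(log_probs + g))

def concrete_sample(log_probs, temperature=0.5):
    # Differentiable relaxation: softmax of the same perturbed logits.
    g = -np.log(-np.log(rng.uniform(size=log_probs.shape)))
    y = (log_probs + g) / temperature
    y = y - y.max()                # stabilize before exponentiating
    e = np.exp(y)
    return e / e.sum()

log_p = np.log(np.array([0.7, 0.2, 0.1]))
hard = gumbel_max_sample(log_p)    # an index in {0, 1, 2}
soft = concrete_sample(log_p)      # a point on the simplex
```

As the temperature goes to zero, the Concrete sample concentrates on a single vertex of the simplex and recovers the Gumbel-Max sample.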

tensorflow deeplearning variational-autoencoder gumbel-softmax vae mnist

A CNN Variational Autoencoder (CNN-VAE) implemented in PyTorch.

variational-autoencoder vae convolutional-neural-networks

The following installs luaJIT and luarocks locally in $HOME/usr. If you want a system-wide installation, remove the -DCMAKE_INSTALL_PREFIX=$HOME/usr option. We assume luarocks and luajit are in $PATH. If they are not - and assuming you installed them locally in $HOME/usr - you can instead run ~/usr/bin/luarocks and ~/usr/bin/luajit.

3d-reconstruction 3d-generation depth-maps silhouette 2d-3d generative-models representation-sharing deep-learning 3d-shapes variational-autoencoder torch computer-vision 3d reconstruction 2d-to-3d perception

Implementations of deep learning algorithms using the Keras and PyTorch libraries.

deep-learning deep-learning-algorithms convnet rnn lstm generative-adversarial-network variational-autoencoder

TensorFlow implementation for the paper A Lagrangian Perspective of Latent Variable Generative Models, UAI 2018 Oral. Lagrangian VAE provides a practical way to find the best trade-off between "consistency constraints" and "mutual information objectives", as opposed to performing extensive hyperparameter tuning. We demonstrate an example on InfoVAE, a latent-variable generative model objective that requires tuning the strengths of its corresponding hyperparameters.
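The Lagrangian idea can be sketched generically: instead of hand-tuning a fixed weight on a constraint term, treat that weight as a dual variable and update it by gradient ascent on the constraint violation. A toy NumPy example on a scalar problem (illustrative only; LagVAE's actual primal objective and constraints are the InfoVAE terms from the paper):

```python
# Toy constrained problem: minimize x^2 subject to x >= 1.
# The primal variable x takes gradient-descent steps on the Lagrangian
# L(x, lam) = x^2 + lam * (1 - x), while the multiplier lam takes
# gradient-ascent steps on the violation (1 - x) - the same primal-dual
# pattern LagVAE uses to avoid sweeping a fixed penalty weight.
x, lam = 0.0, 0.0
lr_x, lr_lam = 0.05, 0.05
for _ in range(2000):
    grad_x = 2 * x - lam          # d/dx [x^2 + lam * (1 - x)]
    x -= lr_x * grad_x
    lam += lr_lam * (1 - x)       # ascend on the constraint violation
    lam = max(lam, 0.0)           # multipliers stay nonnegative
# Converges to the KKT point x = 1, lam = 2.
```

When the constraint is slack, the violation is negative, lam decays toward zero, and the penalty switches itself off.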

variational-autoencoder variational-inference generative-adversarial-network tensorflow

If you've always wanted to learn about deep learning but don't know where to start, then you might have stumbled upon the right place!

machine-learning deep-neural-networks reinforcement-learning computer-vision deep-learning machine-learning-algorithms deep-reinforcement-learning recurrent-neural-networks generative-adversarial-network deep-learning-algorithms convolutional-neural-networks deep-learning-tutorial keras-tensorflow variational-autoencoder pytorch-tutorial long-short-term-memory pytorch-implementation

Convolutional Autoencoder for Loop Closure 2.0. To get started, download the COCO dataset and the "stuff" annotations, then run dataset/gen_tfrecords.py. Make sure to unzip the tar in the dataset directory first. Doing this will generate the sharded tfrecord files as well as loss_weights.txt.

deep-learning slam variational-autoencoder

Conditional variational autoencoder applied to EMNIST, plus an interactive demo to explore the latent space.

keras mnist variational-autoencoder