PyMC3 - Probabilistic Programming in Python: Bayesian Modeling and Probabilistic Machine Learning with Theano

  •    Python

PyMC3 is a Python package for Bayesian statistical modeling and probabilistic machine learning which focuses on advanced Markov chain Monte Carlo and variational fitting algorithms. Its flexibility and extensibility make it applicable to a large suite of problems. Note: running pip install pymc will install PyMC 2.3, not PyMC3, from PyPI.
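
As a quick illustration, a minimal PyMC3 model looks like this (a hedged sketch; the data and variable names are illustrative, and the sigma keyword assumes a reasonably recent PyMC3 release):

```python
import numpy as np
import pymc3 as pm

# Illustrative data: 100 draws from a normal distribution.
data = np.random.normal(loc=1.0, scale=2.0, size=100)

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sigma=10.0)        # prior on the mean
    sigma = pm.HalfNormal("sigma", sigma=5.0)       # prior on the scale
    pm.Normal("obs", mu=mu, sigma=sigma, observed=data)
    trace = pm.sample(1000)                         # NUTS by default
```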

Pyro - Deep universal probabilistic programming with Python and PyTorch

  •    Python

Pyro is a universal probabilistic programming language (PPL) written in Python and supported by PyTorch on the backend. Pyro enables flexible and expressive deep probabilistic modeling, unifying the best of modern deep learning and Bayesian modeling.
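
For flavor, a minimal Pyro model might look like this (an illustrative sketch; the model and names are not taken from the repository):

```python
import torch
import pyro
import pyro.distributions as dist

# A coin-flip model: Beta prior on the bias, Bernoulli likelihood.
def model(data):
    p = pyro.sample("p", dist.Beta(1.0, 1.0))
    with pyro.plate("data", len(data)):
        pyro.sample("obs", dist.Bernoulli(p), obs=data)

# Example usage with illustrative observations.
data = torch.tensor([1.0, 0.0, 1.0, 1.0])
```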

gelato - Bayesian dessert for Lasagne

  •    Python

Recent results in Bayesian statistics for constructing robust neural networks have shown that it is one of the best ways to deal with uncertainty and overfitting while still retaining good performance. Gelato helps you use Bayesian methods for neural networks. The library relies heavily on Theano, Lasagne, and PyMC3. I use a generic approach for decorating all of Lasagne at once; thus, to use Gelato you only need to replace the import statements for layers. To construct a network, you need to be inside a pm.Model context, as sketched below.
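
A hedged sketch of that workflow (the layer import path and constructor signatures are assumptions based on the description above, mirroring Lasagne's API, and are not verified against the repository):

```python
import pymc3 as pm
# Assumption: gelato.layers mirrors lasagne.layers, so only the imports change.
from gelato.layers import InputLayer, DenseLayer

with pm.Model():  # network construction must happen inside a pm.Model context
    net = InputLayer((None, 784))
    net = DenseLayer(net, num_units=100)
```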

vae-style-transfer - An experiment in VAE-based artistic style transfer by embedding fiddling.

  •    Python

The project was created as part of the Creative Applications of Deep Learning with TensorFlow (CADL) Kadenze course's final assignment. It is an experimental attempt to transfer artistic style learned from a series of paintings "live" onto a video sequence by fitting a variational autoencoder with 512 codes to both paintings and video frames, isolating the mean feature-space embeddings and modifying the video's embeddings to be closer to those of the paintings.

Because the general visual quality of the VAE's decoded output is relatively low, a convolutional post-processing network based on residual convolutions was trained with the purpose of making the resulting image less similar to the VAE's generated output and more similar to the original input images. The basic idea was to have an upsampling network here, but that quickly turned out to be very naive at this point of development. Instead, the network now downsizes the input, learns filters in a residual network, and then samples back up to the input frame size; I would have liked to perform convolutions directly on the input, but memory limitations prevented the use of a useful amount of feature maps.
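
A rough Keras sketch of that downsample/residual/upsample post-processing idea (illustrative only; the layer sizes and names are assumptions, not the repository's code):

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=64):
    # Two convolutions with a skip connection, as in a standard residual block.
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    return layers.add([x, y])

# Downsample the frame, refine with residual blocks, then upsample back.
inp = layers.Input(shape=(256, 256, 3))
x = layers.Conv2D(64, 3, strides=2, padding="same")(inp)   # downsize
for _ in range(3):
    x = residual_block(x)
out = layers.Conv2DTranspose(3, 3, strides=2, padding="same")(x)  # back to frame size
model = tf.keras.Model(inp, out)
```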

trlda - Implementations of various online inference algorithms for LDA, with Python interface.

  •    C++

Additional features include adaptive learning rates (Ranganath et al., 2013) and automatic tuning of hyperparameters via empirical Bayes. I have tested the code with the dependency versions listed in the repository, but older versions might also work.

AutoGP - Code for AutoGP

  •    Python

An implementation of the model described in AutoGP: Exploring the Capabilities and Limitations of Gaussian Process Models. The code was tested on Python 2.7 and TensorFlow 0.12.

Amortized_SVGD - Experiments of amortized stein variational gradient

  •    Terra

Amortized SVGD is a simple method that can be used to train black-box inference networks to draw samples. We demonstrate that we can draw samples from intractable posteriors by applying amortized SVGD to train variational autoencoders with a non-Gaussian encoder, where dropout noise with rate 0.3 is added in each layer. The experiments show that our method can capture the multimodal posterior thanks to this structure.
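
For reference, the core SVGD update can be sketched in a few lines of NumPy (an illustrative sketch of the general algorithm, not the repository's code; the RBF bandwidth is fixed here rather than chosen by the usual median heuristic):

```python
import numpy as np

def svgd_step(particles, grad_logp, stepsize=1e-2, bandwidth=1.0):
    # particles: (n, d) array; grad_logp returns (n, d) gradients of log p.
    n = particles.shape[0]
    diffs = particles[:, None, :] - particles[None, :, :]      # (n, n, d)
    sq_dists = np.sum(diffs ** 2, axis=-1)                     # (n, n)
    k = np.exp(-sq_dists / (2 * bandwidth ** 2))               # RBF kernel matrix
    grad_k = -diffs * k[:, :, None] / bandwidth ** 2           # grad of k w.r.t. x_j
    phi = (k @ grad_logp(particles) + grad_k.sum(axis=0)) / n  # SVGD direction
    return particles + stepsize * phi
```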

bayesian-scribbles - This repository contains code that I scribble while learning Bayesian Inference

  •    Jupyter

This repository contains code that I scribble while learning PGMs & Bayesian Inference. This will roughly follow Sir David MacKay's lectures on Information Theory, Pattern Recognition, and Neural Networks. Another great resource to read about some of the stuff in this repository is Stefano Ermon's notes.

sqair - Implementation of Sequential Attend, Infer, Repeat (SQAIR)

  •    Jupyter

This is an official TensorFlow implementation of Sequential Attend, Infer, Repeat (SQAIR), as presented in the following paper: A. R. Kosiorek, H. Kim, I. Posner, Y. W. Teh, Sequential Attend, Infer, Repeat: Generative Modelling of Moving Objects. SQAIR learns to reconstruct a sequence of images by detecting objects in every frame and then propagating them to the following frames. This results in unsupervised object detection & tracking, which can be seen in the figure below. The figure was generated from a model trained for 1M iterations. The maximum number of objects in a frame (and therefore the number of detected and propagated objects) is set to four, but there are never more than two objects. The first row shows inputs to the model (time flows from left to right), while the second row shows reconstructions with marked glimpse locations. The colors of the bounding boxes correspond to object identity: here, the color is always the same, which means that objects are properly tracked.

Sequential-Variational-Autoencoder - Implementation of Sequential Variational Autoencoder

  •    Python

This is the implementation of the Sequential VAE from Towards a Deeper Understanding of Variational Autoencoding Models. The paper identifies a link between the power of the latent code and the sharpness of generated samples. We are able to generate fairly sharp samples by gradually increasing the power of the latent code.

Variational-Ladder-Autoencoder - Implementation of VLAE

  •    Python

This is the implementation of the Variational Ladder Autoencoder. Training this architecture with a standard VAE objective disentangles high- and low-level features without using any other prior information or inductive bias. This has been successful on MNIST, SVHN, and CelebA. LSUN is a little difficult for a VAE with pixel-wise reconstruction loss; however, with other recent work we can generate sharp results on LSUN as well. This architecture serves as the baseline architecture for that model.
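
To make the "ladder" idea concrete, here is a hedged Keras sketch of the encoder side: latent codes are read off at several depths of the network, so deeper codes capture higher-level features (layer sizes and names are illustrative assumptions, not the repository's code; in a real VAE each code would parameterize a mean and a variance):

```python
import tensorflow as tf
from tensorflow.keras import layers

inp = layers.Input(shape=(32, 32, 3))

# Ladder rung 1: shallow features -> low-level latent code.
h1 = layers.Conv2D(32, 4, strides=2, padding="same", activation="relu")(inp)
z1 = layers.Dense(8)(layers.Flatten()(h1))

# Ladder rung 2: deeper features -> higher-level latent code.
h2 = layers.Conv2D(64, 4, strides=2, padding="same", activation="relu")(h1)
z2 = layers.Dense(8)(layers.Flatten()(h2))

encoder = tf.keras.Model(inp, [z1, z2])
```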

lagvae - Lagrangian VAE

  •    Python

TensorFlow implementation for the paper A Lagrangian Perspective of Latent Variable Generative Models, UAI 2018 Oral. Lagrangian VAE provides a practical way to find the best trade-off between "consistency constraints" and "mutual information objectives", as opposed to performing extensive hyperparameter tuning. We demonstrate an example over InfoVAE, a latent variable generative model objective that requires tuning the strengths of the corresponding hyperparameters.
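
The general Lagrangian trick the paper builds on can be sketched abstractly: treat the constraint weight as a dual variable and update it by gradient ascent instead of hand-tuning it (a generic illustration, not the repository's implementation; the names, step size, and budget are made up):

```python
import torch

lam = torch.tensor(0.0)          # Lagrange multiplier (dual variable)
dual_lr, epsilon = 1e-2, 0.1     # illustrative dual step size and constraint budget

def lagrangian(mi_objective, consistency_penalty):
    # Primal objective: mutual-information term plus weighted constraint violation.
    return mi_objective + lam * (consistency_penalty - epsilon)

def dual_update(consistency_penalty):
    # Dual ascent: grow lam while the constraint is violated, keep lam >= 0.
    global lam
    lam = torch.clamp(lam + dual_lr * (consistency_penalty - epsilon), min=0.0)
```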

MXFusion - Modular Probabilistic Programming on MXNet

  •    Python

MXFusion is a library for integrating probabilistic modelling with deep learning. With MXFusion Modules you can use state-of-the-art inference techniques for specialized probabilistic models without needing to implement those techniques yourself. MXFusion helps you rapidly build and test new methods at scale, by focusing on the modularity of probabilistic models and their integration with modern deep learning techniques.