PyMC3 - Probabilistic Programming in Python: Bayesian Modeling and Probabilistic Machine Learning with Theano

  •    Python

PyMC3 is a Python package for Bayesian statistical modeling and probabilistic machine learning that focuses on advanced Markov chain Monte Carlo and variational fitting algorithms. Its flexibility and extensibility make it applicable to a large suite of problems. Note: running pip install pymc installs PyMC 2.3 from PyPI, not PyMC3.
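
A minimal sketch of the PyMC3 workflow (the toy data and variable names are illustrative, not taken from the project docs):

    import numpy as np
    import pymc3 as pm

    data = np.random.randn(100)  # toy observations

    with pm.Model():
        mu = pm.Normal("mu", mu=0.0, sigma=1.0)            # prior on the mean
        pm.Normal("obs", mu=mu, sigma=1.0, observed=data)  # Gaussian likelihood
        trace = pm.sample(1000)                            # MCMC (NUTS by default)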

pyro - Deep universal probabilistic programming with Python and PyTorch

  •    Python

Pyro was originally developed at Uber AI and is now actively maintained by community contributors, including a dedicated team at the Broad Institute. In 2019, Pyro became a project of the Linux Foundation, a neutral space for collaboration on open source software, open standards, open data, and open hardware. For more information about the high-level motivation for Pyro, check out our launch blog post. For additional blog posts, check out work on experimental design and time-to-event modeling in Pyro.

Pyro - Deep universal probabilistic programming with Python and PyTorch

  •    Python

Pyro is a universal probabilistic programming language (PPL) written in Python and supported by PyTorch on the backend. Pyro enables flexible and expressive deep probabilistic modeling, unifying the best of modern deep learning and Bayesian modeling.
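
A minimal Pyro model sketch (the model and site names are illustrative):

    import torch
    import pyro
    import pyro.distributions as dist

    def model(data):
        # latent mean with a standard normal prior
        mu = pyro.sample("mu", dist.Normal(0.0, 1.0))
        # conditionally independent observations
        with pyro.plate("data", len(data)):
            pyro.sample("obs", dist.Normal(mu, 1.0), obs=data)

    model(torch.randn(100))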

probreg - Python package for point cloud registration using probabilistic model (Coherent Point Drift, GMMReg, SVR, GMMTree, FilterReg, Bayesian CPD)

  •    Python

Probreg is a library that implements point cloud registration algorithms based on probabilistic models. Point set registration algorithms using stochastic models are more robust than ICP (Iterative Closest Point). The package implements several such algorithms and provides a simple interface with Open3D.
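
A sketch of the documented Coherent Point Drift interface (the file names are placeholders, and exact return values may differ between versions):

    import open3d as o3d
    from probreg import cpd

    source = o3d.io.read_point_cloud("source.pcd")  # placeholder files
    target = o3d.io.read_point_cloud("target.pcd")

    # rigid CPD; tf_param holds the estimated transformation
    tf_param, _, _ = cpd.registration_cpd(source, target)
    aligned = tf_param.transform(source.points)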

gelato - Bayesian dessert for Lasagne

  •    Python

Recent results in Bayesian statistics have shown that Bayesian neural networks are one of the best ways to deal with uncertainty and overfitting while still achieving good performance. Gelato helps apply Bayesian methods to neural networks. The library relies heavily on Theano, Lasagne, and PyMC3. It takes a generic approach, decorating all Lasagne layers at once, so using Gelato only requires replacing the import statements for layers; the network itself must be constructed inside a pm.Model context, as sketched below.
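
An illustrative sketch inferred from that description (the gelato.layers module path and layer names are assumptions based on the stated drop-in replacement of Lasagne imports):

    import pymc3 as pm
    # assumption: gelato mirrors lasagne.layers, so only the import changes
    from gelato.layers import InputLayer, DenseLayer

    with pm.Model():
        # the network must be built inside the pm.Model context
        inp = InputLayer((None, 784))
        out = DenseLayer(inp, num_units=10)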

vae-style-transfer - An experiment in VAE-based artistic style transfer by embedding fiddling.

  •    Python

The project was created as the final assignment for the Creative Applications of Deep Learning with TensorFlow (CADL) Kadenze course. It is an experimental attempt to transfer artistic style learned from a series of paintings "live" onto a video sequence by fitting a variational autoencoder with 512 codes to both paintings and video frames, isolating the mean feature-space embeddings, and modifying the video's embeddings to be closer to those of the paintings. Because the general visual quality of the VAE's decoded output is relatively low, a convolutional post-processing network based on residual convolutions was trained to make the resulting image less similar to the VAE's generated output and more similar to the original input images. The basic idea was to use an upsampling network here, but that quickly turned out to be too naive at this stage of development. Instead, the network now downsizes the input, learns filters in a residual network, and then samples back up to the input frame size; performing convolutions directly on the input would have been preferable, but memory limitations prevented the use of a useful number of feature maps.
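
An illustrative sketch of that downsample / residual / upsample post-processing idea, written in modern Keras rather than the project's original TensorFlow code (frame size and filter counts are assumptions):

    import tensorflow as tf
    from tensorflow.keras import layers

    def residual_block(x, filters=64):
        # two 3x3 convolutions with a skip connection
        y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        y = layers.Conv2D(filters, 3, padding="same")(y)
        return layers.add([x, y])

    inp = layers.Input((256, 256, 3))  # assumed frame size
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(inp)  # downsize
    for _ in range(3):
        x = residual_block(x)  # learn filters at reduced resolution
    out = layers.Conv2DTranspose(3, 3, strides=2, padding="same")(x)  # back to frame size
    model = tf.keras.Model(inp, out)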

trlda - Implementations of various online inference algorithms for LDA, with Python interface.

  •    C++

Additional features include adaptive learning rates (Ranganath et al., 2013) and automatic tuning of hyperparameters via empirical Bayes. I have tested the code with the versions above, but older versions might also work.

AutoGP - Code for AutoGP

  •    Python

An implementation of the model described in AutoGP: Exploring the Capabilities and Limitations of Gaussian Process Models. The code was tested on Python 2.7 and TensorFlow 0.12.

Amortized_SVGD - Experiments of amortized stein variational gradient

  •    Terra

Amortized SVGD is a simple method for training black-box inference networks to draw samples. We demonstrate that we can draw samples from intractable posteriors by applying amortized SVGD to train variational autoencoders with a non-Gaussian encoder, where we add dropout noise with rate 0.3 in each layer. The experiments show that our method can capture multimodal posteriors thanks to the new structure.
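
For reference, a minimal sketch of the underlying (non-amortized) SVGD particle update with an RBF kernel; the amortized variant trains a network to mimic these updates. The bandwidth and step size here are illustrative:

    import numpy as np

    def svgd_step(X, score, h=1.0, eps=0.1):
        """One SVGD update. X: (n, d) particles; score(X): (n, d) grad log p."""
        diff = X[:, None, :] - X[None, :, :]         # diff[j, i] = x_j - x_i
        K = np.exp(-np.sum(diff ** 2, axis=-1) / h)  # K[j, i] = k(x_j, x_i)
        # phi(x_i) = mean_j [k(x_j, x_i) grad log p(x_j) + grad_{x_j} k(x_j, x_i)]
        phi = (K.T @ score(X) - (2.0 / h) * np.einsum("jid,ji->id", diff, K)) / len(X)
        return X + eps * phi

    # example: sample from a standard normal, where grad log p(x) = -x
    X = np.random.randn(50, 2) * 3.0
    for _ in range(200):
        X = svgd_step(X, lambda X: -X, eps=0.05)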

bayesian-scribbles - This repository contains code that I scribble while learning Bayesian Inference

  •    Jupyter

This repository contains code that I scribble while learning PGMs & Bayesian Inference. This will roughly follow Sir David MacKay's lectures on Information Theory, Pattern Recognition, and Neural Networks. Another great resource to read about some of the stuff in this repository is Stefano Ermon's notes.

sqair - Implementation of Sequential Attend, Infer, Repeat (SQAIR)

  •    Jupyter

This is an official TensorFlow implementation of Sequential Attend, Infer, Repeat (SQAIR), as presented in the following paper: A. R. Kosiorek, H. Kim, I. Posner, Y. W. Teh, Sequential Attend, Infer, Repeat: Generative Modelling of Moving Objects. SQAIR learns to reconstruct a sequence of images by detecting objects in every frame and then propagating them to the following frames, which results in unsupervised object detection and tracking, as shown in the figure in the project README. The figure was generated from a model trained for 1M iterations. The maximum number of objects in a frame (and therefore the number of detected and propagated objects) is set to four, but there are never more than two objects. The first row shows inputs to the model (time flows from left to right), while the second row shows reconstructions with marked glimpse locations. Colors of the bounding boxes correspond to object IDs; here the color is always the same, which means that objects are properly tracked.

Sequential-Variational-Autoencoder - Implementation of Sequential Variational Autoencoder

  •    Python

This is an implementation of the Sequential VAE from Towards a Deeper Understanding of Variational Autoencoding Models. The paper identifies a link between the power of the latent code and the sharpness of generated samples; we are able to generate fairly sharp samples by gradually augmenting the power of the latent code.

Variational-Ladder-Autoencoder - Implementation of VLAE

  •    Python

This is an implementation of the Variational Ladder Autoencoder. Training this architecture with a standard VAE objective disentangles high- and low-level features without any other prior information or inductive bias. This has been successful on MNIST, SVHN, and CelebA; LSUN is a little difficult for a VAE with pixel-wise reconstruction loss. However, with other recent work we can generate sharp results on LSUN as well, and this architecture serves as the baseline for that model.

lagvae - Lagrangian VAE

  •    Python

TensorFlow implementation for the paper A Lagrangian Perspective on Latent Variable Generative Models, UAI 2018 Oral. Lagrangian VAE provides a practical way to find the best trade-off between "consistency constraints" and "mutual information objectives", as opposed to performing extensive hyperparameter tuning. We demonstrate an example on InfoVAE, a latent variable generative model objective that requires tuning the strengths of the corresponding hyperparameters.
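
In generic form (this is the standard Lagrangian relaxation, not the paper's exact notation), the idea is to learn multipliers instead of hand-tuning weights:

    \min_{\theta} \max_{\lambda \ge 0} \; f(\theta) + \sum_i \lambda_i \, (g_i(\theta) - \epsilon_i)

where f(θ) collects the mutual information objectives and g_i(θ) ≤ ε_i are the consistency constraints; the multipliers λ_i are optimized by the algorithm rather than set by hand.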

MXFusion - Modular Probabilistic Programming on MXNet

  •    Python

MXFusion is a library for integrating probabilistic modelling with deep learning. With MXFusion Modules you can use state-of-the-art inference techniques for specialized probabilistic models without needing to implement those techniques yourself. MXFusion helps you rapidly build and test new methods at scale, by focusing on the modularity of probabilistic models and their integration with modern deep learning techniques.

boundary-gp - Know Your Boundaries: Constraining Gaussian Processes by Variational Harmonic Features

  •    Jupyter

[Figure: the effect of increasing the number of inducing features on the banana classification dataset with a hard decision boundary; in each pane, the coloured points are training data, the black lines are decision boundaries, and the outermost line is the pre-defined hard boundary.]

We consider constraining GPs to arbitrarily-shaped domains with boundary conditions. We solve a Fourier-like generalised harmonic feature representation of the GP prior in the domain of interest, which both constrains the GP and attains a low-rank representation that is used to speed up inference. The method scales as O(nm^2) in prediction and O(m^3) in hyperparameter learning for regression, where n is the number of data points and m the number of features. Furthermore, we use a variational approach so that the method can handle non-Gaussian likelihoods. This repository contains Matlab code for constructing the basis functions in arbitrarily-shaped domains, for simulating constrained GP random draws, and for solving GP regression. We also provide Python code for constructing the basis functions and for variational inference with non-Gaussian likelihoods.

AVUC - Code to accompany the paper 'Improving model calibration with accuracy versus uncertainty optimization'

  •    Python

Code to accompany the paper Improving model calibration with accuracy versus uncertainty optimization [NeurIPS 2020]. Abstract: Obtaining reliable and accurate quantification of uncertainty estimates from deep neural networks is important in safety-critical applications. A well-calibrated model should be accurate when it is certain about its prediction and indicate high uncertainty when it is likely to be inaccurate. Uncertainty calibration is a challenging problem as there is no ground truth available for uncertainty estimates. We propose an optimization method that leverages the relationship between accuracy and uncertainty as an anchor for uncertainty calibration. We introduce a differentiable accuracy versus uncertainty calibration (AvUC) loss function as an additional penalty term within the loss-calibrated approximate inference framework. AvUC enables a model to learn to provide well-calibrated uncertainties in addition to improved accuracy. We also demonstrate that the same methodology can be extended to post-hoc uncertainty calibration on pretrained models.
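
A sketch of the accuracy-versus-uncertainty (AvU) measure that the loss builds on; the paper's AvUC loss is a differentiable relaxation of this quantity, and the uncertainty threshold here is a user-chosen value:

    import numpy as np

    def avu(correct, uncertainty, threshold):
        """correct: bool array; uncertainty: e.g. predictive entropy per example."""
        certain = uncertainty <= threshold
        n_ac = np.sum(correct & certain)    # accurate and certain
        n_au = np.sum(correct & ~certain)   # accurate and uncertain
        n_ic = np.sum(~correct & certain)   # inaccurate and certain
        n_iu = np.sum(~correct & ~certain)  # inaccurate and uncertain
        return (n_ac + n_iu) / (n_ac + n_au + n_ic + n_iu)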
