
PyCNN - Image Processing with Cellular Neural Networks in Python

  •    Python

Cellular Neural Networks (CNN) [wikipedia] [paper] are a parallel computing paradigm first proposed in 1988. Cellular neural networks are similar to neural networks, with the difference that communication is allowed only between neighboring units. Image processing is one of their applications. CNN processors were designed for image processing; in particular, the original application of CNN processors was real-time, ultra-high-frame-rate (>10,000 frames/s) processing that is unachievable with digital processors. This Python library implements CNNs for image processing.
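To make the paradigm concrete, the sketch below performs one explicit-Euler integration step of the standard Chua-Yang CNN state equation, in which each cell is driven only by its 3x3 neighborhood. The function name and templates are illustrative assumptions, not PyCNN's actual API.

```python
# Minimal sketch of one Euler step of the Chua-Yang cellular neural
# network model. Illustrative only; not the PyCNN library's interface.
import numpy as np
from scipy.signal import convolve2d

def cnn_step(x, u, A, B, z, dt=0.1):
    """x: cell states, u: input image (2-D arrays in [-1, 1]),
    A: 3x3 feedback template, B: 3x3 control template, z: bias."""
    # Output nonlinearity: piecewise-linear saturation of the state.
    y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))
    # Each cell only sees its 3x3 neighborhood -- the defining CNN property.
    dx = -x + convolve2d(y, A, mode="same") + convolve2d(u, B, mode="same") + z
    return x + dt * dx
```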

Personae - 📈 Personae is a repo of implementations and environments of Deep Reinforcement Learning & Supervised Learning for Quantitative Trading

  •    Python

Personae is a repo that implements methods proposed in Deep Reinforcement Learning & Supervised Learning papers and applies them to the financial market. Work on it runs from 2018-08-24 to 2018-09-01, the date on which I successfully found a job.

DenseNet - DenseNet implementation in Keras

  •    Python

Bottleneck-compressed DenseNets (DenseNet-BC) offer further benefits, such as a reduced number of parameters, with similar or better performance. The best original model, DenseNet-100-24 (27.2 million parameters), achieves 3.74% error, whereas DenseNet-BC-190-40 (25.6 million parameters) achieves 3.46% error, a new state-of-the-art result on CIFAR-10.
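To illustrate what the bottleneck compression refers to, here is a minimal sketch of a DenseNet-BC-style dense block in Keras: each layer first shrinks its input with a 1x1 bottleneck convolution before the 3x3 convolution, and every layer's output is concatenated onto the running feature map. Layer sizes and names are illustrative, not this repository's API.

```python
# Minimal sketch of a DenseNet-BC dense block (illustrative sizes).
from tensorflow.keras import layers

def dense_block(x, num_layers=4, growth_rate=12):
    for _ in range(num_layers):
        h = layers.BatchNormalization()(x)
        h = layers.Activation("relu")(h)
        h = layers.Conv2D(4 * growth_rate, 1, padding="same")(h)  # 1x1 bottleneck
        h = layers.BatchNormalization()(h)
        h = layers.Activation("relu")(h)
        h = layers.Conv2D(growth_rate, 3, padding="same")(h)
        x = layers.Concatenate()([x, h])  # dense connectivity
    return x
```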

Conditional-PixelCNN-decoder - Tensorflow implementation of Gated Conditional Pixel Convolutional Neural Network

  •    Python

This is a TensorFlow implementation of Conditional Image Generation with PixelCNN Decoders, which introduces the Gated PixelCNN model based on the PixelCNN architecture originally introduced in Pixel Recurrent Neural Networks. The model can be conditioned on a latent representation of labels or images to generate images accordingly; images can also be modelled unconditionally. It can also act as a powerful decoder, replacing deconvolution (transposed convolution) in autoencoders and GANs. A detailed summary of the paper can be found here. The gating lets the model remember context and capture more complex interactions, as in an LSTM. The network stack on the left is the vertical stack, which takes care of the blind spot that occurs during convolution due to the masking (refer to the Pixel RNN paper to learn more about masking). The use of residual connections significantly improves model performance.
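As a concrete illustration of the gating, the sketch below implements the paper's gated activation unit, y = tanh(W_f * x) * sigmoid(W_g * x), where W_f and W_g are convolution kernels. It uses plain (unmasked) Keras convolutions; the masked filters and the vertical/horizontal stacks of the full model are omitted, and the names are illustrative rather than this repository's API.

```python
# Minimal sketch of the Gated PixelCNN activation unit (unmasked convs).
from tensorflow.keras import layers

def gated_conv(x, filters, kernel_size):
    f = layers.Conv2D(filters, kernel_size, padding="same")(x)  # feature path
    g = layers.Conv2D(filters, kernel_size, padding="same")(x)  # gate path
    f = layers.Activation("tanh")(f)
    g = layers.Activation("sigmoid")(g)
    return layers.Multiply()([f, g])  # LSTM-like multiplicative gating
```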

All-About-the-GAN - All About the GANs (Generative Adversarial Networks) - Summarized lists for GANs

  •    Python

The purpose of this repository is to provide a curated list of state-of-the-art works in the field of Generative Adversarial Networks since their introduction in 2014. You can also browse the same data in a tabular format, with functionality to filter by year or do a quick search by title, here.

LanguageDetector - PHP Class to detect languages from any free text

  •    PHP

PHP class to detect languages from any free text. It follows the approach described in the paper: a given text is tokenized into N-grams (whitespace is cleaned up before this step), then the tokens are sorted and compared against a language model.
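The tokenize, rank, and compare approach described above can be sketched in a few lines of Python (the listing's predominant language); this illustrates the algorithm only, not the PHP class's interface, and the function names and penalty value are hypothetical.

```python
# Minimal sketch of rank-ordered N-gram language detection.
from collections import Counter

def profile(text, n=3, size=300):
    text = " ".join(text.split())  # clean up whitespace first
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    # Map each of the most frequent N-grams to its frequency rank.
    return {g: rank for rank, (g, _) in enumerate(grams.most_common(size))}

def distance(text_profile, lang_profile, max_penalty=300):
    # "Out-of-place" measure: sum of rank differences, with a fixed
    # penalty for N-grams missing from the language profile.
    return sum(
        abs(rank - lang_profile[g]) if g in lang_profile else max_penalty
        for g, rank in text_profile.items()
    )
```

Given a dict of per-language profiles, the detected language is simply the one minimizing this distance to the input text's profile.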

Inception-v4 - Inception-v4, Inception-ResNet-v1 and -v2 Architectures in Keras

  •    Python

Implementations of the Inception-v4, Inception-ResNet-v1, and Inception-ResNet-v2 architectures in Keras using the Functional API. The paper on these architectures is available at "Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning". The models are plotted and shown in the architecture subfolder. Due to the lack of suitable training data (the ILSVRC 2015 dataset) and limited GPU processing power, weights are not provided.
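For a flavor of how such architectures compose with the Functional API, below is a minimal sketch of an Inception-ResNet-style block: parallel convolution branches are concatenated, projected back to the input width, scaled down, and added back as a residual. Branch widths and names are illustrative assumptions, not this repository's code.

```python
# Minimal sketch of an Inception-ResNet-style residual block.
from tensorflow.keras import layers

def inception_resnet_block(x, scale=0.1):
    channels = x.shape[-1]
    b1 = layers.Conv2D(32, 1, padding="same", activation="relu")(x)
    b2 = layers.Conv2D(32, 1, padding="same", activation="relu")(x)
    b2 = layers.Conv2D(32, 3, padding="same", activation="relu")(b2)
    mixed = layers.Concatenate()([b1, b2])
    up = layers.Conv2D(channels, 1, padding="same")(mixed)  # match input width
    # Scaled residual connection, as in the paper, to stabilize training.
    scaled = layers.Lambda(lambda t: t * scale)(up)
    return layers.Activation("relu")(layers.Add()([x, scaled]))
```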

htmpapers - Code and data for Numenta's published papers

  •    Python

This repository is currently under construction and will include the source code for all scripts used in Numenta's papers. This paper proposes a network model composed of columns and layers that performs robust object learning and recognition. The model introduces a new feature to cortical columns: location information, represented relative to the object being sensed. Pairing sensory features with locations is a requirement for modeling objects and must therefore occur somewhere in the neocortex. We propose that it occurs in every column in every region.

papr - Command line tool to generate PDF calendars

  •    Python

Command line tool to generate a PDF template for a small foldable paper calendar.

lshensemble - LSH index for approximate set containment search

  •    Go

Presentation slides @ VLDB 2016, New Delhi. We used two datasets for evaluation; both are in the public domain and can be downloaded directly from the original publishers.
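The core trick behind indexing for set containment can be sketched briefly: within a partition of indexed sets of similar size, a containment threshold on a query can be converted into an equivalent Jaccard threshold that an ordinary MinHash LSH index understands. The Python function below is an illustrative sketch of that identity under these assumptions, not the Go library's API.

```python
# Containment t = |Q ∩ X| / |Q| converted to Jaccard |Q ∩ X| / |Q ∪ X|,
# assuming the indexed set's size is (approximately) known.
def jaccard_threshold(t, query_size, indexed_size):
    intersection = t * query_size
    return intersection / (query_size + indexed_size - intersection)
```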

RGAN - Recurrent (conditional) generative adversarial networks for generating real-valued time series data

  •    Python

This repository contains code for the paper Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs, by Stephanie L. Hyland* (@corcra), Cristóbal Esteban* (@cresteban), and Gunnar Rätsch (@ratsch) from the Ratschlab, also known as the Biomedical Informatics group at ETH Zurich. The idea: use generative adversarial networks (GANs) to generate real-valued time series for medical purposes, as the title suggests. The GAN is an RGAN because it uses recurrent neural networks (specifically LSTMs) for both the generator and the discriminator.
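A minimal sketch of a recurrent generator of this kind is shown below: an LSTM maps a sequence of noise vectors to a real-valued series of the same length. It is written with Keras for brevity; all layer sizes and names are illustrative assumptions, not the repository's code.

```python
# Minimal sketch of an LSTM-based generator for real-valued sequences.
from tensorflow.keras import layers, models

def make_generator(seq_len=30, noise_dim=5, hidden=100, out_dim=1):
    z = layers.Input(shape=(seq_len, noise_dim))        # noise at every step
    h = layers.LSTM(hidden, return_sequences=True)(z)   # recurrent core
    x = layers.TimeDistributed(layers.Dense(out_dim, activation="tanh"))(h)
    return models.Model(z, x)
```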

Snapshot-Ensembles - Snapshot Ensemble in Keras

  •    Python

Snapshot Ensemble is a method for obtaining multiple neural networks that can be ensembled at no additional training cost. This is achieved by letting a single network converge into several local minima along its optimization path and saving the model parameters at certain epochs, so the saved weights are "snapshots" of the model. The theory behind using a learning rate schedule that oscillates between such extreme values (0.1 to 5e-4, M times) is that multiple local minima exist when training a model, and monotonically reducing the learning rate can leave the model stuck in a less-than-optimal one. Therefore, a very large learning rate is periodically applied to escape the current local minimum and search for another, possibly better one.
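The cyclic schedule can be sketched directly: the learning rate is cosine-annealed from its initial value toward zero over each cycle of T/M epochs, a snapshot is saved at the minimum, and the rate then resets. Variable names below are illustrative; a function like this can be plugged into a standard per-epoch learning-rate scheduler callback such as Keras's LearningRateScheduler.

```python
# Minimal sketch of the cyclic cosine-annealing schedule used by
# Snapshot Ensembles (illustrative defaults: T=300 epochs, M=6 cycles).
import math

def snapshot_lr(epoch, total_epochs=300, cycles=6, alpha0=0.1):
    epochs_per_cycle = total_epochs // cycles  # T / M
    t = epoch % epochs_per_cycle               # position within current cycle
    return alpha0 / 2 * (math.cos(math.pi * t / epochs_per_cycle) + 1)
```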