
A Python package for simulating spiking neural networks (SNNs) in PyTorch. At the moment, the focus is on replicating the SNN described in "Unsupervised learning of digit recognition using spike-timing-dependent plasticity" (original code found here; extensions thereof in my previous project repository here).
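To give a flavor of the learning rule involved, here is a minimal pair-based STDP weight update in PyTorch. This is my own sketch under assumed conventions (the function name, trace formulation, and learning rates are all hypothetical), not spiketorch's actual API:

```python
import torch

# Hypothetical pair-based STDP step; call once per simulation timestep.
# pre/post are binary spike vectors; w has shape (n_pre, n_post).
def stdp_step(w, pre, post, pre_trace, post_trace,
              lr_pre=1e-4, lr_post=1e-2, decay=0.95):
    pre_trace = decay * pre_trace + pre    # exponentially decaying spike traces
    post_trace = decay * post_trace + post
    w = w + lr_post * torch.ger(pre_trace, post)  # potentiate when post fires
    w = w - lr_pre * torch.ger(pre, post_trace)   # depress when pre fires
    return w.clamp(0, 1), pre_trace, post_trace   # keep weights bounded
```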

https://github.com/djsaunde/spiketorch

Tags | spiking-neural-networks machine-learning pytorch neurons snn |

Implementation | Python |

License | Public |

Platform | Windows Linux |

PyTorch is a flexible deep learning framework that allows automatic differentiation through dynamic neural networks (i.e., networks that utilise dynamic control flow, like if statements and while loops). It supports GPU acceleration, distributed training, various optimisations, and plenty more neat features. These are some notes on how I think about using PyTorch; they don't encompass all parts of the library or every best practice, but may be helpful to others.

Neural networks are a subclass of computation graphs. Computation graphs receive input data, which is routed to, and possibly transformed by, nodes that process it. In deep learning, the neurons (nodes) in neural networks typically transform data with parameters and differentiable functions, such that the parameters can be optimised to minimise a loss via gradient descent. More broadly, the functions can be stochastic, and the structure of the graph can be dynamic. So while neural networks may be a good fit for dataflow programming, PyTorch's API is instead centred on imperative programming, a more common way of thinking about programs. This makes it easier to read code and reason about complex programs, without necessarily sacrificing much performance; PyTorch is actually pretty fast, with plenty of optimisations that you can safely forget about as an end user (but you can dig in if you really want to).
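To make the dynamic-control-flow point concrete, here is a small illustration of my own (not from any particular codebase): a module whose forward pass uses an ordinary Python loop and an if statement, with autograd tracing whichever path actually executes.

```python
import torch
import torch.nn as nn

class DynamicNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(16, 16)

    def forward(self, x):
        # Ordinary Python control flow: the graph is rebuilt on every call,
        # so the effective depth can depend on the data itself (capped at 4).
        depth = 0
        while x.norm() > 1.0 and depth < 4:
            x = torch.relu(self.linear(x))
            depth += 1
        if x.sum() > 0:
            x = -x
        return x

net = DynamicNet()
out = net(torch.randn(16))
out.sum().backward()  # gradients flow through the path that actually ran
```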

deep-learning

I used this challenge to learn more about neural networks and machine learning. A neural network consists of layers, and each layer contains neurons. My network has three layers: an input layer, a hidden layer, and an output layer. The input to my network is 64 binary numbers. These inputs are connected to the neurons in the hidden layer. The hidden layer performs some computation and passes the result to the output layer's neuron, which performs its own computation and then outputs a 0 or a 1. The input layer doesn't actually do anything; its neurons are just placeholders for the input values. Only the neurons in the hidden layer and the output layer perform computations. The neurons from the input layer are connected to the neurons in the hidden layer, and likewise both neurons from the hidden layer are connected to the output layer. These kinds of layers are called fully-connected, because every neuron is connected to every neuron in the next layer. Each connection between two neurons has a weight, which is just a number; these weights form the brain of my network. For the activation function in my network, I use the sigmoid, a mathematical function that takes in some number x and converts it into a value between 0 and 1. That is ideal for my purposes, since I am dealing with binary numbers, and it turns a linear equation into something non-linear. This is important because without it, the network wouldn't be able to learn anything interesting. As mentioned, the input to this network is 64 binary numbers: I resize the drawn image to 8x8 pixels, which makes 64 pixels in total, then go through the image and check each pixel; if the pixel has a pink color, I add a 1 to my array, else I add a 0. At the end I have 64 binary numbers that I can feed to the input layer.
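A minimal sketch of the two pieces described above, the sigmoid neuron and the pixel-to-binary conversion. The original project runs in an Apple playground, so this Python version is purely illustrative, and the function names and the is_pink predicate are my own:

```python
import math

def sigmoid(x):
    # Squashes any real number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # One hidden/output neuron: weighted sum, then the sigmoid non-linearity.
    return sigmoid(sum(w * v for w, v in zip(weights, inputs)) + bias)

def binarize(pixels, is_pink):
    # The 64 pixels of the resized 8x8 image become 64 binary inputs.
    return [1 if is_pink(p) else 0 for p in pixels]
```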

neural network apple playground ipad ai machine-learning artificial-intelligence

Repository for the book Introduction to Artificial Neural Networks and Deep Learning: A Practical Guide with Applications in Python. Deep learning is not just the talk of the town among tech folks; it allows us to tackle complex problems, training artificial neural networks to recognize complex patterns for image and speech recognition. In this book, we'll continue where we left off in Python Machine Learning and implement deep learning algorithms in PyTorch.

deep-learning neural-network machine-learning tensorflow artificial-intelligence data-science pytorch

A comprehensive list of Deep Learning / Artificial Intelligence and Machine Learning tutorials - rapidly expanding into areas of AI / Deep Learning / Machine Vision / NLP and industry-specific areas such as Automotives, Retail, Pharma, Medicine, and Healthcare - by Tarry Singh, until at least 2020, when he finishes his Ph.D. (which might end up being inter-stellar cosmic networks! Who knows! 😀)

machine-learning deep-learning tensorflow pytorch keras matplotlib aws kaggle pandas scikit-learn torch artificial-intelligence neural-network convolutional-neural-networks tensorflow-tutorials python-data ipython-notebook capsule-network

AdaTune is a library for gradient-based hyperparameter tuning when training deep neural networks. AdaTune currently supports tuning of the learning_rate parameter, but some of the methods implemented here can be extended to other hyperparameters like momentum or weight_decay. AdaTune provides the following gradient-based hyperparameter tuning algorithms: HD, RTHO, and our newly proposed algorithm, MARTHE. The repository also contains other commonly used non-adaptive learning_rate adaptation strategies, like staircase decay, exponential decay, and cosine annealing with restarts. The library is implemented in PyTorch. The goal of the methods in this package is to automatically compute, in an online fashion, a learning rate schedule for stochastic optimization methods (such as SGD) based only on the given learning task, aiming to produce models with small validation error.
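The non-adaptive baselines it mentions correspond to schedulers that stock PyTorch already ships, so here is a quick sketch using torch.optim.lr_scheduler (not AdaTune's own API, which I haven't reproduced here):

```python
import torch
from torch import nn, optim
from torch.optim import lr_scheduler

model = nn.Linear(10, 2)
opt = optim.SGD(model.parameters(), lr=0.1)

# Pick one of the non-adaptive schedules the README mentions:
sched = lr_scheduler.StepLR(opt, step_size=30, gamma=0.1)        # staircase decay
# sched = lr_scheduler.ExponentialLR(opt, gamma=0.95)            # exponential decay
# sched = lr_scheduler.CosineAnnealingWarmRestarts(opt, T_0=10)  # cosine w/ restarts

for epoch in range(100):
    # ... run one epoch of training, calling opt.step() per batch ...
    sched.step()
```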

machine-learning deep-learning pytorch neural-networks hyperparameter-tuning automl learning-rate-scheduling

Ignite is a high-level library to help with training neural networks in PyTorch. Training code is more concise and readable with Ignite. Furthermore, adding additional metrics, or things like early stopping, is a breeze in Ignite, but can start to rapidly increase the complexity of your code when "rolling your own" training loop.
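To illustrate the conciseness claim, a minimal supervised training loop with Ignite (a sketch with a toy model and dataset of my own; see Ignite's docs for the current API):

```python
import torch
import torch.nn.functional as F
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

from ignite.engine import create_supervised_trainer

model = nn.Linear(8, 2)
optimizer = optim.SGD(model.parameters(), lr=0.01)
loader = DataLoader(TensorDataset(torch.randn(64, 8),
                                  torch.randint(0, 2, (64,))), batch_size=16)

# One call replaces the hand-written epoch/batch loop.
trainer = create_supervised_trainer(model, optimizer, F.cross_entropy)
trainer.run(loader, max_epochs=5)
```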

pytorch neural-network machine-learning deep-learning

Neural controlled differential equations are straightforward to implement and evaluate using existing tools, in particular PyTorch and the torchcde library. See torchcde.

machine-learning deep-neural-networks deep-learning time-series pytorch neural-networks dynamical-systems differential-equations rough-paths neural-differential-equations controlled-differential-equations

PyTorch is a deep learning framework that puts Python first. It is a Python package that provides tensor computation (like NumPy) with strong GPU acceleration, and deep neural networks built on a tape-based autograd system. You can reuse your favorite Python packages, such as NumPy, SciPy, and Cython, to extend PyTorch when needed.
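A quick sketch of the tape-based autograd in action (standard PyTorch calls; the values are arbitrary):

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()   # operations are recorded on the "tape"
y.backward()         # replayed in reverse to compute gradients
print(x.grad)        # tensor([2., 4., 6.]), i.e. dy/dx = 2x

if torch.cuda.is_available():
    x_gpu = x.detach().to("cuda")  # same tensor API, GPU-accelerated
```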

neural-network autograd gpu numpy deep-learning tensor

The ML workspace is an all-in-one web-based IDE specialized for machine learning and data science. It is simple to deploy and gets you started within minutes productively building ML solutions on your own machines. This workspace is the ultimate tool for developers, preloaded with a variety of popular data science libraries (e.g., Tensorflow, PyTorch, Keras, Sklearn) and dev tools (e.g., Jupyter, VS Code, Tensorboard), perfectly configured, optimized, and integrated. The workspace requires Docker to be installed on your machine (📖 Installation Guide).
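Since deployment is a single Docker container, starting the workspace looks roughly like this via the Docker SDK for Python (the image name and port are those I recall from the project's docs; verify both against the README):

```python
import docker

client = docker.from_env()
# Image name assumed from the project's docs; check the README.
container = client.containers.run(
    "mltooling/ml-workspace:latest",
    ports={"8080/tcp": 8080},   # workspace UI on http://localhost:8080
    detach=True,
)
print(container.short_id)
```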

nlp docker kubernetes data-science machine-learning r deep-learning jupyter anaconda tensorflow gpu scikit-learn vscode jupyter-notebook data-visualization pytorch neural-networks data-analysis jupyter-lab

Nengo is a Python library for building and simulating large-scale neural models. Nengo can create sophisticated spiking and non-spiking neural simulations with sensible defaults in a few lines of code. Yet, Nengo is highly extensible and flexible. You can define your own neuron types and learning rules, get input directly from hardware, build and run deep neural networks, drive robots, and even simulate your model on a completely different neural simulator or neuromorphic hardware. Nengo depends on NumPy, and we recommend that you install NumPy before installing Nengo. If you're not sure how to do this, we recommend using Anaconda.
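"A few lines of code" is roughly literal. A minimal spiking simulation using Nengo's core objects (standard Nengo API, though the population size and input signal are my own choices):

```python
import numpy as np
import nengo

with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))  # input signal
    ens = nengo.Ensemble(n_neurons=100, dimensions=1)   # spiking population
    nengo.Connection(stim, ens)
    probe = nengo.Probe(ens, synapse=0.01)              # filtered decoded output

with nengo.Simulator(model) as sim:
    sim.run(1.0)                                        # simulate 1 second

decoded = sim.data[probe]  # approximately the sine wave, decoded from spikes
```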

nengo neuroscience neural-networks

NNPACK is an acceleration package for neural network computations. NNPACK aims to provide high-performance implementations of convnet layers for multi-core CPUs. NNPACK is not intended to be directly used by machine learning researchers; instead, it provides low-level performance primitives leveraged in leading deep learning frameworks, such as PyTorch, Caffe2, MXNet, tiny-dnn, Caffe, Torch, and Darknet.

neural-network neural-networks convolutional-layers inference high-performance high-performance-computing simd cpu multithreading fast-fourier-transform winograd-transform matrix-multiplication

This is a PyTorch port of OpenNMT, an open-source (MIT) neural machine translation system. It is designed to be research-friendly, for trying out new ideas in translation, summarization, image-to-text, morphology, and many other domains. The codebase is relatively stable, but PyTorch is still evolving. We currently only support PyTorch 0.4 and recommend forking if you need stable code.

deep-learning pytorch machine-translation neural-machine-translation

QNNPACK (Quantized Neural Networks PACKage) is a mobile-optimized library for low-precision, high-performance neural network inference. QNNPACK provides implementations of common neural network operators on quantized 8-bit tensors. QNNPACK is not intended to be directly used by machine learning researchers; instead, it provides low-level performance primitives for high-level deep learning frameworks. As of today, QNNPACK is integrated into PyTorch 1.0 via the Caffe2 graph representation.

"Data is the new oil" is a saying which you must have heard by now along with the huge interest building up around Big Data and Machine Learning in the recent past along with Artificial Intelligence and Deep Learning. Besides this, data scientists have been termed as having "The sexiest job in the 21st Century" which makes it all the more worthwhile to build up some valuable expertise in these areas. Getting started with machine learning in the real world can be overwhelming with the vast amount of resources out there on the web. "Practical Machine Learning with Python" follows a structured and comprehensive three-tiered approach packed with concepts, methodologies, hands-on examples, and code. This book is packed with over 500 pages of useful information which helps its readers master the essential skills needed to recognize and solve complex problems with Machine Learning and Deep Learning by following a data-driven mindset. By using real-world case studies that leverage the popular Python Machine Learning ecosystem, this book is your perfect companion for learning the art and science of Machine Learning to become a successful practitioner. The concepts, techniques, tools, frameworks, and methodologies used in this book will teach you how to think, design, build, and execute Machine Learning systems and projects successfully.

machine-learning deep-learning text-analytics classification clustering natural-language-processing computer-vision data-science spacy nltk scikit-learn prophet time-series-analysis convolutional-neural-networks tensorflow keras statsmodels pandas deep-neural-networks

Some examples require the MNIST dataset for training and testing. Don't worry, this dataset will automatically be downloaded when running the examples (via input_data.py). MNIST is a database of handwritten digits; for a quick description of the dataset, you can check this notebook.
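For reference, input_data.py wraps the classic TensorFlow 1.x tutorial loader; usage looked roughly like this (the directory and batch size are arbitrary, and recent TensorFlow releases have removed this module):

```python
from tensorflow.examples.tutorials.mnist import input_data

# Downloads MNIST into the given directory on first use.
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

batch_xs, batch_ys = mnist.train.next_batch(100)  # 100 images + labels
print(batch_xs.shape, batch_ys.shape)             # (100, 784) (100, 10)
```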

recurrent-neural-networks convolutional-neural-networks deep-learning-tutorial tensorflow tensorlayer keras deep-reinforcement-learning tensorflow-tutorials deep-learning machine-learning notebook autoencoder multi-layer-perceptron reinforcement-learning tflearn neural-networks neural-network neural-machine-translation nlp cnn

NetKet is an open-source project delivering cutting-edge methods for the study of many-body quantum systems with artificial neural networks and machine learning techniques. It is a Python library built on JAX. NetKet supports macOS and Linux. We recommend installing NetKet using pip; for instructions on how to install the latest stable/beta release of NetKet, see the Getting Started section of our website.

quantum machine-learning-algorithms neural-networks restricted-boltzmann-machine rbm convolutional-neural-networks physics-simulation hamiltonian markov-chain-monte-carlo monte-carlo-methods exact-diagonalization variational-method variational-monte-carlo quantum-state-tomography complex-neural-network

NeuralCoref is a pipeline extension for spaCy 2.0 that annotates and resolves coreference clusters using a neural network. NeuralCoref is production-ready, integrated into spaCy's NLP pipeline, and easily extensible to new training datasets. For a brief introduction to coreference resolution and NeuralCoref, please refer to our blog post. NeuralCoref is written in Python/Cython and comes with pre-trained statistical models for English. It can be trained in other languages. NeuralCoref is accompanied by a visualization client, NeuralCoref-Viz, a web interface powered by a REST server that can be tried online. NeuralCoref is released under the MIT license.
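Typical usage follows the pattern in NeuralCoref's docs; a short sketch (the spaCy model name and example sentence are my own):

```python
import spacy
import neuralcoref

nlp = spacy.load("en")         # any English spaCy 2.x model
neuralcoref.add_to_pipe(nlp)   # register the coreference component

doc = nlp("My sister has a dog. She loves him.")
print(doc._.has_coref)         # True
print(doc._.coref_clusters)    # e.g. [My sister: [My sister, She], ...]
```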

machine-learning coreference spacy coreference-resolution spacy-extension spacy-pipeline nlp neural-networks pytorch

Brian is a free, open-source simulator for spiking neural networks. It is written in the Python programming language and is available on almost all platforms. We believe that a simulator should save not only the time of processors, but also the time of scientists. Brian is therefore designed to be easy to learn and use, highly flexible, and easily extensible. Brian2 is released under the terms of the CeCILL 2.1 license.
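A minimal Brian example, with the neuron model given as a differential equation string (standard Brian 2 API; the leaky integrate-and-fire equation and parameters are my own toy choices):

```python
from brian2 import NeuronGroup, SpikeMonitor, run, ms

# Leaky integrate-and-fire neurons driven toward v = 2 with a 10 ms time constant.
eqs = "dv/dt = (2 - v) / (10*ms) : 1"
group = NeuronGroup(100, eqs, threshold="v > 1", reset="v = 0", method="exact")
monitor = SpikeMonitor(group)

run(100 * ms)              # simulate 100 ms of biological time
print(monitor.count[:10])  # spikes per neuron
```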

neuroscience science differential-equations spiking-neural-networks biological-simulations code-generation simulation-framework brian

A generic image detection program that uses Google's machine learning library, TensorFlow, and a pre-trained deep learning convolutional neural network model called Inception. This model has been pre-trained for the ImageNet Large Scale Visual Recognition Challenge using data from 2012, and it can differentiate between 1,000 different classes, such as Dalmatian, dishwasher, etc. The program applies transfer learning to this existing model and re-trains it to classify a new set of images.

image-detection machine-learning deep-learning deep-neural-networks convolutional-neural-networks tensorflow

This repository contains a topic-wise curated list of Machine Learning and Deep Learning tutorials, articles, and other resources. Other awesome lists can be found in this list. If you want to contribute to this list, please read the Contributing Guidelines.

deep-learning-tutorial machine-learning machinelearning deeplearning neural-network neural-networks deep-neural-networks awesome-list awesome list deep-learning