Theano - Optimizing compiler for evaluating mathematical expressions on CPUs and GPUs.


Theano is a Python library that allows you to define, optimize, and efficiently evaluate mathematical expressions involving multi-dimensional arrays. It is built on top of NumPy. Its features include tight integration with NumPy, transparent use of a GPU, dynamic C code generation, and more.

http://deeplearning.net/software/theano/
https://github.com/Theano/Theano
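
As a minimal sketch of the workflow (using Theano's documented tensor API): define a symbolic expression, compile it into a function, and evaluate it. Theano can also differentiate the expression symbolically.

    import theano
    import theano.tensor as T

    # Declare symbolic variables (they hold no values yet).
    x = T.dscalar('x')
    y = T.dscalar('y')

    # Build a symbolic expression from them.
    z = x ** 2 + y

    # Compile the expression graph into a callable function;
    # Theano optimizes the graph and generates C code here.
    f = theano.function([x, y], z)
    print(f(3.0, 1.0))  # 10.0

    # Derivatives are computed symbolically as well.
    g = theano.function([x, y], T.grad(z, x))
    print(g(3.0, 1.0))  # dz/dx = 2*x = 6.0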


Related Projects

Gorgonia - Library that helps facilitate machine learning in Go

  •    Go

Gorgonia is a library that helps facilitate machine learning in Go. Write and evaluate mathematical equations involving multidimensional arrays easily. If this sounds like Theano or TensorFlow, it's because the idea is quite similar; specifically, the library is pretty low-level, like Theano, but has higher goals, like TensorFlow. The main reason to use Gorgonia is developer comfort: if you're using a Go stack extensively, you now have the ability to create production-ready machine learning systems in an environment that you are already familiar and comfortable with.

EffectiveTensorflow - TensorFlow tutorials and best practices.


We aim to gradually expand this series by adding new articles and to keep the content up to date with the latest releases of the TensorFlow API. If you have suggestions on how to improve this series or find the explanations ambiguous, feel free to create an issue, send patches, or reach out by email. The most striking difference between TensorFlow and other numerical computation libraries such as NumPy is that operations in TensorFlow are symbolic. This is a powerful concept that allows TensorFlow to do all sorts of things (e.g. automatic differentiation) that are not possible with imperative libraries such as NumPy, but it also comes at the cost of making the library harder to grasp. Our attempt here is to demystify TensorFlow and provide some guidelines and best practices for more effective use of TensorFlow.
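
A minimal sketch of that symbolic style, using the TensorFlow 1.x graph API these tutorials were written against: nothing is computed when the graph is built, only when it is run in a session.

    import numpy as np
    import tensorflow as tf  # TensorFlow 1.x API

    # NumPy is imperative: this multiplication happens immediately.
    a = np.array(3.0) * np.array(4.0)  # 12.0, computed right now

    # TensorFlow builds a symbolic graph: x and y hold no values yet.
    x = tf.placeholder(tf.float32)
    y = x * x

    # Because y is symbolic, TensorFlow can differentiate it.
    dy_dx = tf.gradients(y, x)[0]

    with tf.Session() as sess:
        # Values are only produced when the graph is executed.
        print(sess.run([y, dy_dx], feed_dict={x: 3.0}))  # [9.0, 6.0]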

DeepLearning.scala - A simple library for creating complex neural networks

  •    Scala

DeepLearning.scala is a simple library for creating complex neural networks from object-oriented and functional programming constructs. Like other deep learning toolkits, DeepLearning.scala allows you to build neural networks from mathematical formulas. It supports floats, doubles, GPU-accelerated N-dimensional arrays, and calculates derivatives of the weights in the formulas.

PyTorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration

  •    Python

PyTorch is a deep learning framework that puts Python first. It is a Python package that provides tensor computation (like NumPy) with strong GPU acceleration, and deep neural networks built on a tape-based autograd system. You can reuse your favorite Python packages such as NumPy, SciPy, and Cython to extend PyTorch when needed.
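
A minimal sketch of the tape-based autograd in action (standard PyTorch API):

    import torch

    # requires_grad=True tells autograd to record operations on x.
    x = torch.tensor(2.0, requires_grad=True)
    y = x ** 3 + 2 * x

    # backward() replays the recorded "tape" to compute gradients.
    y.backward()
    print(x.grad)  # dy/dx = 3*x^2 + 2 = 14.0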


chainer - A flexible framework of neural networks for deep learning

  •    Python

Chainer is a Python-based deep learning framework aiming at flexibility. It provides automatic differentiation APIs based on the define-by-run approach (a.k.a. dynamic computational graphs) as well as object-oriented high-level APIs to build and train neural networks. It also supports CUDA/cuDNN via CuPy for high-performance training and inference. For more details, see the Chainer documentation and join the community on the forum, Slack, and Twitter. The current stable version of Chainer is maintained separately on the v3 branch.
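
A minimal sketch of the define-by-run approach (standard Chainer API): the graph is recorded as the forward computation actually runs, so ordinary Python control flow shapes it.

    import numpy as np
    from chainer import Variable

    x = Variable(np.array([3.0], dtype=np.float32))

    # The computational graph is built while this line executes.
    y = x ** 2 + 2 * x + 1

    # Backpropagate through the recorded graph.
    y.backward()
    print(x.grad)  # dy/dx = 2*x + 2 = [8.]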

Arraymancer - A fast, ergonomic and portable tensor library in Nim with a deep learning focus for CPU, GPU, OpenCL and embedded devices

  •    Nim

Arraymancer is a tensor (N-dimensional array) project in Nim. The main focus is providing a fast and ergonomic CPU, CUDA, and OpenCL ndarray library on which to build a scientific computing and, in particular, a deep learning ecosystem. The library is inspired by NumPy and PyTorch. It provides ergonomics very similar to NumPy, Julia, and MATLAB, but is fully parallel and, according to its own benchmarks, significantly faster than those libraries, as well as faster than C-based Torch.

tfjs-core - WebGL-accelerated ML // linear algebra // automatic differentiation for JavaScript.

  •    TypeScript

NOTE: Building on the momentum of deeplearn.js, we have joined the TensorFlow family and are starting a new ecosystem of libraries and tools for machine learning in JavaScript, called TensorFlow.js. This repo moved from PAIR-code/deeplearnjs to tensorflow/tfjs-core. As part of the TensorFlow.js ecosystem, this repo hosts @tensorflow/tfjs-core, the TensorFlow.js Core API, which provides low-level, hardware-accelerated linear algebra operations and an eager API for automatic differentiation.

keras - Deep Learning library for Python. Runs on TensorFlow, Theano, or CNTK.

  •    Python

Keras is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano. It was developed with a focus on enabling fast experimentation. Being able to go from idea to result with the least possible delay is key to doing good research.
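
A minimal sketch of that high-level API (standard Keras 2 calls): a small classifier is defined, compiled, and ready to train in a few lines, with the backend abstracted away.

    from keras.models import Sequential
    from keras.layers import Dense

    # A two-layer classifier for 784-dimensional inputs
    # (e.g. flattened MNIST digits).
    model = Sequential()
    model.add(Dense(64, activation='relu', input_shape=(784,)))
    model.add(Dense(10, activation='softmax'))

    # The backend (TensorFlow, CNTK, or Theano) is hidden behind
    # the same API; compile prepares the model for training.
    model.compile(optimizer='sgd',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])

    # model.fit(x_train, y_train, epochs=5) would then train it.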

keras-rl - Deep Reinforcement Learning for Keras.

  •    Python

keras-rl implements some state-of-the-art deep reinforcement learning algorithms in Python and seamlessly integrates with the deep learning library Keras. Just like Keras, it works with either Theano or TensorFlow, which means that you can train your algorithms efficiently on either CPU or GPU. Furthermore, keras-rl works with OpenAI Gym out of the box, which makes evaluating and playing around with different algorithms easy. Of course, you can extend keras-rl according to your own needs: use built-in Keras callbacks and metrics or define your own, and implement your own environments and even algorithms by simply extending some simple abstract classes. In a nutshell: keras-rl makes it really easy to run state-of-the-art deep reinforcement learning algorithms, uses Keras (and thus Theano or TensorFlow), and was built with OpenAI Gym in mind.
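
A condensed sketch of a DQN agent on CartPole, adapted from the style of the project's examples (the rl.* module and class names come from keras-rl; treat the network size and hyperparameters as placeholders):

    import gym
    from keras.models import Sequential
    from keras.layers import Dense, Flatten
    from keras.optimizers import Adam
    from rl.agents.dqn import DQNAgent
    from rl.memory import SequentialMemory
    from rl.policy import BoltzmannQPolicy

    env = gym.make('CartPole-v0')
    nb_actions = env.action_space.n

    # A small Q-network mapping observations to action values.
    model = Sequential()
    model.add(Flatten(input_shape=(1,) + env.observation_space.shape))
    model.add(Dense(16, activation='relu'))
    model.add(Dense(nb_actions, activation='linear'))

    # keras-rl supplies the RL machinery: replay memory, policy, agent.
    memory = SequentialMemory(limit=50000, window_length=1)
    policy = BoltzmannQPolicy()
    dqn = DQNAgent(model=model, nb_actions=nb_actions,
                   memory=memory, policy=policy)
    dqn.compile(Adam(lr=1e-3), metrics=['mae'])

    dqn.fit(env, nb_steps=50000, verbose=2)  # train against the Gym env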

neupy - NeuPy is a Python library for Artificial Neural Networks and Deep Learning.

  •    Python

About a year ago, it was officially announced that development of Theano would stop: no new features are being added, and soon bug fixes will stop as well. NeuPy cannot evolve with a large number of features that depend on a dead library, so NeuPy has been moved to TensorFlow. All of the Theano-based code has been fully migrated to TensorFlow and can be tested from the release/v0.7.0 branch.

python-neural-network - This is an efficient implementation of a fully connected neural network in NumPy

  •    Python

This is an implementation of a fully connected neural network in NumPy. By using the matrix approach to neural networks, this NumPy implementation is able to harness the power of the BLAS library and efficiently perform the required calculations. The network can be trained by a wide range of learning algorithms. Visit the project page or read the documentation.
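
The "matrix approach" means each layer's activations for a whole batch are computed as a single matrix product, which NumPy dispatches to BLAS. An illustrative sketch of the idea (not this project's actual API; all names here are hypothetical):

    import numpy as np

    rng = np.random.RandomState(0)
    X = rng.randn(32, 4)   # batch of 32 samples, 4 features each
    W1 = rng.randn(4, 8)   # layer-1 weights
    W2 = rng.randn(8, 1)   # layer-2 weights

    # One np.dot per layer processes the entire batch via BLAS,
    # instead of looping over samples and neurons in Python.
    hidden = np.tanh(np.dot(X, W1))
    output = np.tanh(np.dot(hidden, W2))
    print(output.shape)    # (32, 1)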

awesome-jax - JAX - A curated list of resources https://github.com/google/jax


JAX brings automatic differentiation and the XLA compiler together through a NumPy-like API for high-performance machine learning research on accelerators like GPUs and TPUs.
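
A minimal sketch of the core idea (standard JAX API): NumPy-like code that can be transformed to compute gradients and JIT-compiled via XLA.

    import jax.numpy as jnp
    from jax import grad, jit

    def loss(w):
        # Ordinary NumPy-style code.
        return jnp.sum(w ** 2)

    # grad transforms the function into one computing its gradient.
    g = grad(loss)
    print(g(jnp.array([1.0, 2.0])))  # [2. 4.]

    # jit compiles the function with XLA for accelerators.
    fast_loss = jit(loss)
    print(fast_loss(jnp.array([1.0, 2.0])))  # 5.0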

grokking-pytorch - The Hitchiker's Guide to PyTorch


PyTorch is a flexible deep learning framework that allows automatic differentiation through dynamic neural networks (i.e., networks that utilise dynamic control flow, like if statements and while loops). It supports GPU acceleration, distributed training, various optimisations, and plenty more neat features. These are some notes on how I think about using PyTorch; they don't encompass all parts of the library or every best practice, but may be helpful to others.

Neural networks are a subclass of computation graphs. Computation graphs receive input data, and data is routed to, and possibly transformed by, nodes which perform processing on it. In deep learning, the neurons (nodes) in neural networks typically transform data with parameters and differentiable functions, such that the parameters can be optimised to minimise a loss via gradient descent. More broadly, the functions can be stochastic, and the structure of the graph can be dynamic. So while neural networks may be a good fit for dataflow programming, PyTorch's API has instead centred around imperative programming, which is a more common way of thinking about programs. This makes it easier to read code and reason about complex programs, without necessarily sacrificing much performance; PyTorch is actually pretty fast, with plenty of optimisations that you can safely forget about as an end user (but you can dig in if you really want to).
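
A minimal sketch of the dynamic control flow described above (standard PyTorch API; the DynamicNet module itself is illustrative):

    import torch
    import torch.nn as nn

    class DynamicNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.layer = nn.Linear(8, 8)

        def forward(self, x):
            # The graph is rebuilt on every call, so ordinary
            # Python control flow can depend on the data itself.
            h = torch.relu(self.layer(x))
            if h.norm() > 1.0:          # data-dependent branch
                h = torch.relu(self.layer(h))
            return h.sum()

    net = DynamicNet()
    loss = net(torch.randn(8))
    loss.backward()  # autograd follows whichever path actually ran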

Hands-On-Deep-Learning-Algorithms-with-Python - Master Deep Learning Algorithms with Extensive Math by Implementing them using TensorFlow

  •    Jupyter

Deep learning is one of the most popular domains in the artificial intelligence (AI) space, and it allows you to develop multi-layered models of varying complexity. This book is designed to take you from basic deep learning algorithms to more advanced ones. For each algorithm, you will first build an intuitive understanding, then master the underlying math, and finally learn to implement it in TensorFlow, step by step. The book covers almost all of the state-of-the-art deep learning algorithms: you will get a good understanding of the fundamentals of neural networks and several variants of gradient descent, then explore RNNs, bidirectional RNNs, LSTM, GRU, seq2seq, CNNs, capsule nets, and more, and finally master GANs, their many variants, and several different autoencoders.

DeepDarkFantasy - A Programming Language for Deep Learning

  •    Haskell

As we all know, a neural network is just a computable math expression (and hence a program). Of course, I must still be able to train the network.

theano_lstm - :microscope: Nano size Theano LSTM module

  •    Python

Implements most of the great things that came out in 2014 concerning recurrent neural networks, along with some good optimizers for these types of networks. The module also contains SGD, AdaGrad, and AdaDelta gradient descent methods, which are constructed from an objective function and a set of Theano variables and return an updates dictionary to pass to a Theano function.
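
For the general pattern these optimizers follow, here is a sketch in plain Theano (not theano_lstm's exact helper names): build an updates dictionary from a cost and the shared variables, then hand it to theano.function.

    import numpy as np
    import theano
    import theano.tensor as T

    # A shared variable to optimize and a toy objective.
    w = theano.shared(np.asarray(5.0), name='w')
    cost = (w - 3.0) ** 2

    # Plain SGD: map each parameter to its updated value.
    lr = 0.1
    updates = {w: w - lr * T.grad(cost, w)}

    # The updates dictionary is applied on every call.
    train = theano.function([], cost, updates=updates)
    for _ in range(50):
        train()
    print(w.get_value())  # close to 3.0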

oneDNN - oneAPI Deep Neural Network Library (oneDNN)

  •    C++

This software was previously known as Intel(R) Math Kernel Library for Deep Neural Networks (Intel(R) MKL-DNN) and Deep Neural Network Library (DNNL). oneDNN is intended for deep learning applications and framework developers interested in improving application performance on Intel CPUs and GPUs. Deep learning practitioners should use one of the applications enabled with oneDNN.

tangent - Source-to-Source Debuggable Derivatives in Pure Python

  •    Python

Tangent is a new, free, and open-source Python library for automatic differentiation. Existing libraries implement automatic differentiation by tracing a program's execution (at runtime, like PyTorch) or by staging out a dynamic data-flow graph and then differentiating the graph (ahead of time, like TensorFlow). In contrast, Tangent performs ahead-of-time autodiff on the Python source code itself and produces Python source code as its output. Tangent fills a unique location in the space of machine learning tools.
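
A minimal sketch of the documented usage: tangent.grad takes a plain Python function and generates a new Python function for its derivative.

    import tangent

    def f(x):
        return x * x + 3.0 * x

    # Ahead-of-time, source-to-source differentiation: df is a new
    # Python function whose generated source can be inspected.
    df = tangent.grad(f)
    print(df(2.0))  # d/dx (x^2 + 3x) at x=2 -> 7.0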

digit-classifier - A single handwritten digit classifier, using the MNIST dataset. Pure Numpy.

  •    Python

An implementation of a multilayer neural network using the NumPy library. The implementation is a modified version of Michael Nielsen's implementation from the book Neural Networks and Deep Learning. Note: activation functions other than sigmoid are also in use, but this is sufficient background for beginners.
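
The core of a Nielsen-style network is a feedforward pass of this shape (an illustrative sketch under that assumption, not this repo's exact code; the function and variable names are hypothetical):

    import numpy as np

    def sigmoid(z):
        # Squashes inputs into (0, 1).
        return 1.0 / (1.0 + np.exp(-z))

    def feedforward(a, weights, biases):
        # Propagate the activation a through each layer in turn.
        for w, b in zip(weights, biases):
            a = sigmoid(np.dot(w, a) + b)
        return a

    # A 784-30-10 network for MNIST digits, randomly initialized here.
    sizes = [784, 30, 10]
    weights = [np.random.randn(y, x) for x, y in zip(sizes[:-1], sizes[1:])]
    biases = [np.random.randn(y, 1) for y in sizes[1:]]
    print(feedforward(np.random.randn(784, 1), weights, biases).shape)  # (10, 1)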





