
gorgonia - Gorgonia is a library that helps facilitate machine learning in Go.

  •    Go

Gorgonia is a library that helps facilitate machine learning in Go. Write and evaluate mathematical equations involving multidimensional arrays easily. If this sounds like Theano or TensorFlow, it's because the idea is quite similar. Specifically, the library is fairly low-level, like Theano, but has higher goals like TensorFlow. The main reason to use Gorgonia is developer comfort: if you're using a Go stack extensively, you can now create production-ready machine learning systems in an environment you are already familiar and comfortable with.

tangent - Source-to-Source Debuggable Derivatives in Pure Python

  •    Python

Tangent is a new, free, and open-source Python library for automatic differentiation. Existing libraries implement automatic differentiation by tracing a program's execution (at runtime, like PyTorch) or by staging out a dynamic data-flow graph and then differentiating the graph (ahead-of-time, like TensorFlow). In contrast, Tangent performs ahead-of-time autodiff on the Python source code itself, and produces Python source code as its output. Tangent fills a unique location in the space of machine learning tools.
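A minimal sketch of that workflow, assuming the tangent.grad entry point advertised in the project's README (and a function written in the subset of Python that Tangent can differentiate), might look like:

```python
import tangent

def f(x):
    return x * x + 3.0 * x

# tangent.grad rewrites the source of f ahead of time and returns a new
# Python function that computes df/dx.
df = tangent.grad(f)

print(df(2.0))  # d/dx (x^2 + 3x) at x = 2 is 7.0
```

Because the output is plain Python source, the generated derivative can be read, edited, and debugged like any hand-written function.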

swift - Swift for TensorFlow project home page

  •    Swift

Swift for TensorFlow is a new way to develop machine learning models. It gives you the power of TensorFlow directly integrated into the Swift programming language. With Swift, you write ordinary imperative code, and Swift automatically turns it into a single TensorFlow graph and runs it with the full performance of TensorFlow Sessions on CPU, GPU, and TPU. Swift combines the flexibility of Eager Execution with the high performance of Graphs and Sessions. Behind the scenes, Swift analyzes your Tensor code and automatically builds graphs for you. Swift also catches type errors and shape mismatches before running your code, and has Automatic Differentiation built right in. We believe that machine learning tools are so important that they deserve a first-class language and a compiler.

DeepLearning.scala - A simple library for creating complex neural networks

  •    Scala

DeepLearning.scala is a simple library for creating complex neural networks from object-oriented and functional programming constructs. Like other deep learning toolkits, DeepLearning.scala allows you to build neural networks from mathematical formulas. It supports floats, doubles, and GPU-accelerated N-dimensional arrays, and it calculates derivatives of the weights in the formulas.

owl - Owl is an OCaml library for scientific and engineering computing.

  •    OCaml

Owl is an emerging numerical library for scientific computing and engineering. The library is developed in the OCaml language and inherits all its powerful features, such as static type checking, a powerful module system, and superior runtime efficiency. Owl allows you to write succinct, type-safe numerical applications in a functional language without sacrificing performance, significantly reducing the cost from prototype to production use. Owl's documentation contains a lot of learning material to help you get started; the full documentation consists of two parts, a Tutorial Book and an API Reference, both kept in sync with the code in the repository by the automatic build system.

DeepDarkFantasy - A Programming Language for Deep Learning

  •    Haskell

A neural network is just a computable math expression (and hence a program); DeepDarkFantasy is a programming language built around that observation, while still making it possible to train the network.
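To make that idea concrete without touching DeepDarkFantasy's Haskell API, here is a deliberately tiny, language-agnostic sketch in Python: the "network" is an ordinary function, and training just means following a derivative of its loss (estimated numerically here for brevity).

```python
# A "network" is just a program: a function of its weight and its input.
def net(w, x):
    return w * x  # a single linear neuron

def loss(w):
    x, target = 3.0, 6.0           # toy data: we want net(w, 3) == 6
    return (net(w, x) - target) ** 2

def numeric_grad(f, w, eps=1e-6):
    # Central finite difference, standing in for real automatic differentiation.
    return (f(w + eps) - f(w - eps)) / (2 * eps)

w = 0.0
for _ in range(100):
    w -= 0.05 * numeric_grad(loss, w)

print(round(w, 3))  # converges to 2.0
```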

theano_lstm - Nano-size Theano LSTM module

  •    Python

Implements most of the notable recurrent-neural-network techniques that came out in 2014, along with some good optimizers for these types of networks. The module also contains the SGD, AdaGrad, and AdaDelta gradient descent methods; each is constructed from an objective function and a set of Theano variables, and returns an updates dictionary to pass to a Theano function.
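The updates-dictionary pattern referred to above is plain Theano; a minimal sketch of it (using Theano's own T.grad and theano.function rather than theano_lstm's helper classes) looks roughly like this:

```python
import numpy as np
import theano
import theano.tensor as T

# A shared variable is a parameter that persists between function calls.
w = theano.shared(np.float64(0.0), name="w")
x = T.scalar("x")
y = T.scalar("y")

cost = (w * x - y) ** 2
grad = T.grad(cost, w)

# The updates mapping tells Theano how to modify shared variables after
# each call; the optimizers in this module return exactly such a mapping.
updates = [(w, w - 0.05 * grad)]
train = theano.function([x, y], cost, updates=updates)

for _ in range(50):
    train(3.0, 6.0)
print(w.get_value())  # approaches 2.0
```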


Arraymancer - A fast, ergonomic and portable tensor library in Nim with a deep learning focus for CPU, GPU, OpenCL and embedded devices

  •    Nim

Arraymancer is a tensor (N-dimensional array) project in Nim. The main focus is providing a fast and ergonomic CPU, Cuda and OpenCL ndarray library on which to build a scientific computing and in particular a deep learning ecosystem. The library is inspired by Numpy and PyTorch. The library provides ergonomics very similar to Numpy, Julia and Matlab but is fully parallel and significantly faster than those libraries. It is also faster than C-based Torch.

math - Stan Math Library

  •    C++

The Stan Math Library is a C++, reverse-mode automatic differentiation library designed to be usable, extensive and extensible, efficient, scalable, stable, portable, and redistributable in order to facilitate the construction and utilization of algorithms that utilize derivatives. The Stan Math Library is licensed under the new BSD license.

ad.js - Automatic-Differentiation in js

  •    Javascript

A translation of the ad library from Jeff Siskind's Qobischeme.

Tensors - Efficient computations with symmetric and non-symmetric tensors, with support for automatic differentiation

  •    Julia

Efficient computations with symmetric and non-symmetric tensors with support for automatic differentiation. This Julia package provides fast operations with symmetric and non-symmetric tensors of order 1, 2 and 4. The Tensors are allocated on the stack which means that there is no need to preallocate output results for performance. Unicode infix operators are provided such that the tensor expression in the source code is similar to the one written with mathematical notation. When possible, symmetry of tensors is exploited for better performance. Supports Automatic Differentiation to easily compute first and second order derivatives of tensorial functions.

adnn - Javascript neural networks on top of general scalar/tensor reverse-mode automatic differentiation

  •    Javascript

adnn provides Javascript-native neural networks on top of general scalar/tensor reverse-mode automatic differentiation. You can use just the AD code, or the NN layer built on top of it. This architecture makes it easy to define big, complex numerical computations and compute derivatives w.r.t. their inputs/parameters. adnn also includes utilities for optimizing/training the parameters of such computations. adnn provides a Tensor type for representing multidimensional arrays of numbers and various operations on them. This is the core datatype underlying neural net computations.
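For readers unfamiliar with the reverse-mode core such a stack is built on, here is a tiny conceptual sketch in Python (not adnn's JavaScript API): each operation records its inputs and local derivatives, and a backward pass accumulates gradients through the recorded graph.

```python
class Var:
    """A scalar that remembers how it was computed, for reverse-mode AD."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # pairs of (parent Var, local derivative)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        # Accumulate the incoming gradient, then push it to the parents,
        # scaled by each local derivative (the chain rule).
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x, y = Var(2.0), Var(3.0)
z = x * y + x
z.backward()
print(x.grad, y.grad)  # dz/dx = y + 1 = 4.0, dz/dy = x = 2.0
```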

AbstractOperators.jl - Abstract operators for large scale optimization in Julia

  •    Julia

Abstract operators extend the syntax typically used for matrices to linear mappings of arbitrary dimensions and nonlinear functions. Unlike matrices however, abstract operators apply the mappings with specific efficient algorithms that minimize memory requirements. This is particularly useful in iterative algorithms and in first order large-scale optimization algorithms. Remember to Pkg.update() to keep the package up to date.
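A rough Python/SciPy analogue of the matrix-free idea (this is not AbstractOperators.jl's API, only an illustration of applying a linear mapping without ever materializing its matrix):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 1000

def matvec(v):
    # A tridiagonal (1-D Laplacian-like) stencil applied directly to v;
    # the n-by-n matrix itself is never allocated.
    out = 2.0 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

A = LinearOperator((n, n), matvec=matvec, dtype=np.float64)

b = np.ones(n)
x, info = cg(A, b)  # iterative solvers only ever ask the operator for matvecs
print(info)         # 0 means the solver converged
```

This is the same trade-off the Julia package makes: iterative and first-order algorithms only need the action of the operator, so memory stays proportional to the vectors involved.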

backprop - Heterogeneous automatic differentiation ("backpropagation") in Haskell

  •    Haskell

Automatic heterogeneous back-propagation. Write your functions to compute your result, and the library will automatically generate functions to compute your gradient.

ForwardDiff.jl - Forward Mode Automatic Differentiation for Julia

  •    Julia

ForwardDiff implements methods to take derivatives, gradients, Jacobians, Hessians, and higher-order derivatives of native Julia functions (or any callable object, really) using forward mode automatic differentiation (AD). While performance can vary depending on the functions you evaluate, the algorithms implemented by ForwardDiff generally outperform non-AD algorithms in both speed and accuracy.
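Forward mode is typically implemented with dual numbers, which is also the idea ForwardDiff builds on; here is a minimal sketch of that mechanism in Python (a concept illustration, not ForwardDiff's Julia API):

```python
class Dual:
    """A dual number a + b*eps with eps**2 == 0; b carries the derivative."""
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def _lift(self, other):
        return other if isinstance(other, Dual) else Dual(float(other))

    def __add__(self, other):
        other = self._lift(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = self._lift(other)
        return Dual(self.value * other.value,
                    self.value * other.deriv + self.deriv * other.value)

    __rmul__ = __mul__

def derivative(f, x):
    # Seed the input with derivative 1 and read the derivative off the output.
    return f(Dual(x, 1.0)).deriv

print(derivative(lambda x: 3 * x * x + 2 * x + 1, 5.0))  # 6*5 + 2 = 32.0
```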

ReverseDiff.jl - Reverse Mode Automatic Differentiation for Julia

  •    Julia

Note: While ReverseDiff technically supports Julia v0.7/v1.0 and is somewhat maintained, it is currently not actively developed. Instead, ForwardDiff/ReverseDiff's maintainers are focused on the development of a new AD package built on top of Cassette. In the meantime, it might be worth checking out other reverse-mode AD implementations in Nabla.jl, AutoGrad.jl, Flux.jl, or XGrad.jl. ReverseDiff implements methods to take gradients, Jacobians, Hessians, and higher-order derivatives of native Julia functions (or any callable object, really) using reverse mode automatic differentiation (AD).

TaylorSeries - A Julia package for Taylor polynomial expansions in one or more independent variables

  •    Julia

A Julia package for Taylor polynomial expansions in one or more independent variables. Comments, suggestions and improvements are welcome and appreciated.
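The core idea is arithmetic on truncated series coefficients; the following small Python sketch (a concept illustration, not the package's Julia API) shows truncated multiplication of two Taylor polynomials:

```python
def mul_truncated(a, b, order):
    """Multiply Taylor polynomials given as coefficient lists [c0, c1, ...]
    and truncate the product at the given order."""
    out = [0.0] * (order + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= order:
                out[i + j] += ai * bj
    return out

# (1 + x) * (1 + x + x^2/2), truncated at order 2:
print(mul_truncated([1.0, 1.0], [1.0, 1.0, 0.5], 2))  # [1.0, 2.0, 1.5]
```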

Nabla.jl - Reverse-mode automatic differentiation for machine learning in Julia

  •    Julia

Nabla.jl is a reverse-mode automatic differentiation package targeting machine learning use cases. As such, we have (for example) prioritised support for linear algebra optimisations and higher-order functions over the ability to take higher-order derivatives (Nabla currently only supports first-order derivatives). The latest docs, and the code in the examples folder, best indicate how to use the package. Given the early stage of development, we anticipate a number of bugs and performance issues. If you encounter any of these or have any particular feature requests, please raise an issue and let us know.

autodiff - A library for automatic differentiation of mathematical functions

  •    CSharp

A library that provides moderately fast, accurate, and automatic differentiation (computes derivative / gradient) of mathematical functions. AutoDiff provides a simple and intuitive API for computing function gradients/derivatives along with a fast algorithm for performing the computation. Such computations are mainly useful in iterative numerical optimization scenarios.