
Gorgonia is a library that helps facilitate machine learning in Go. It lets you write and evaluate mathematical equations involving multidimensional arrays easily. If this sounds like Theano or TensorFlow, that is because the idea is quite similar: the library is fairly low-level, like Theano, but has broader goals, like TensorFlow. The main reason to use Gorgonia is developer comfort. If you already use a Go stack extensively, you can build production-ready machine learning systems in an environment you are already familiar and comfortable with.

machine-learning artificial-intelligence neural-network computation-graph differentiation gradient-descent gorgonia deep-learning deeplearning deep-neural-networks automatic-differentiation symbolic-differentiation go-library

Theano is a Python library that allows you to define, optimize, and efficiently evaluate mathematical expressions involving multi-dimensional arrays. It is built on top of NumPy. Its features include tight integration with NumPy, transparent use of a GPU, dynamic C code generation, and much more.

deep-learning neural-network math numerical symbolic blas numpy gpu autodiff differentiation

A C# automatic differentiation and numerical optimization library: transparent use via operator overloading, unlimited order of differentiation and unlimited number of variables, very flexible support for matrix algebra, and on-the-fly function compilation to IL code for very fast ...

differentiation mathematics matrix optimization

This literate programming exercise will construct a simple 2-layer feed-forward neural network to compute the exclusive or (XOR), using symbolic differentiation to compute the gradients automatically. In total, about 500 lines of code, including comments. The only functional dependency is numpy. I highly recommend reading Chris Olah's Calculus on Computational Graphs: Backpropagation for more background on what this code is doing.

differentiation tensorflow numpy

NumDiff provides a modern Fortran interface for computing the Jacobian (derivative) matrix of m nonlinear functions which depend on n variables. The Jacobian matrix is required for various applications, including numerical optimization. The library also provides for computing the sparsity of this matrix, and returning the Jacobian in sparse or dense form. This is currently an experimental work in progress and is not production ready. The goal is a comprehensive library that contains a full suite of computationally efficient implementations of algorithms for sparsity determination and numerical differentiation.

differentiation numerical-methods optimization sparsity-optimization

This package is unique in that it can differentiate vector-valued expressions in Einstein notation. However, if you only need gradients of scalar-valued functions (which is typical in machine learning), please use XGrad.jl instead. XGrad.jl is a rethought and stabilized version of this package, adding many useful features in place of the (not frequently used) derivatives of vector-valued functions. If you nevertheless want to continue using XDiff.jl, please pin Espresso.jl to version v3.0.0, which is the last version supporting Einstein notation. XDiff.jl is an expression differentiation package, supporting a fully symbolic approach to finding tensor derivatives. Unlike automatic differentiation packages, XDiff.jl can output not only ready-to-use derivative functions but also their symbolic expressions, suitable for further optimization and code generation.

tensor einstein-notation derivative differentiation

A library for parsing math expressions with rational numbers, finding their derivatives, and compiling optimal IL code. It is worth mentioning that commutative functions (addition and multiplication) are represented as single functions with several child nodes, which makes traversal easier and more flexible.

derivative calculations rational-numbers c-sharp simplification parsing il-optimizations parsing-math-expressions il optimization differentiation

A golang library supporting definite integration and differentiation of real-valued, single-variable, elementary and non-elementary functions. Alternatively, you can define any function that takes and returns a float64 (i.e. a function of the form type usrFunction func(float64) float64) and then perform calculus on it. This allows you to do calculus on practically any mapping.

calculus differentiation golang-library

Deep learning library for F#. It provides tensor functionality, symbolic model differentiation, automatic differentiation, and compilation to CUDA GPUs. It includes optimizers and model blocks used in deep learning. Deep.Net is currently being ported to .NET Standard 2.0.

deep-learning symbolic-computation symbolic-execution-engine differentiation fsharp machine-learning cuda gpu-acceleration gpu gpu-computing ndarray tensor

This toolbox implements automatic/algorithmic differentiation for MATLAB using a sparse representation for Jacobians. For a more detailed explanation and a list of supported files, read the documentation.

matlab automatic differentiation