
This package is unique in that it can differentiate vector-valued expressions in Einstein notation. However, if you only need gradients of scalar-valued functions (which is typical in machine learning), please use XGrad.jl instead. XGrad.jl is a rethought and stabilized version of this package, adding many useful features in place of the (infrequently used) derivatives of vector-valued functions. If you nevertheless want to continue using XDiff.jl, please pin Espresso.jl to version v3.0.0, the last version supporting Einstein notation. XDiff.jl is an expression differentiation package supporting a fully symbolic approach to finding tensor derivatives. Unlike automatic differentiation packages, XDiff.jl can output not only ready-to-use derivative functions but also their symbolic expressions, suitable for further optimization and code generation.
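To illustrate what the fully symbolic approach means in practice, here is a minimal sketch in Python (not XDiff.jl's actual API): expressions are plain data, and differentiation returns another expression that can be inspected, simplified, or fed to a code generator.

```python
# Minimal sketch of symbolic differentiation (not XDiff.jl's actual API):
# expressions are nested tuples, and diff() returns another expression,
# which can then be inspected, simplified, or used for code generation.

def diff(expr, var):
    """Differentiate a tuple-based expression with respect to var."""
    if isinstance(expr, (int, float)):
        return 0
    if isinstance(expr, str):                 # a variable symbol
        return 1 if expr == var else 0
    op, a, b = expr
    if op == '+':
        return ('+', diff(a, var), diff(b, var))
    if op == '*':                             # product rule
        return ('+', ('*', diff(a, var), b), ('*', a, diff(b, var)))
    raise ValueError("unknown op: " + str(op))

# d/dx (x * x + 3), returned unsimplified as a symbolic expression
d = diff(('+', ('*', 'x', 'x'), 3), 'x')
print(d)
```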

https://github.com/dfdx/XDiff.jl

Tags | tensor einstein-notation derivative differentiation |

Implementation | Julia |

License | MIT |

Platform |

Arraymancer is a tensor (N-dimensional array) project in Nim. The main focus is providing a fast and ergonomic CPU, CUDA, and OpenCL ndarray library on which to build a scientific computing and, in particular, a deep learning ecosystem. The library is inspired by NumPy and PyTorch. It provides ergonomics very similar to NumPy, Julia, and MATLAB, but is fully parallel and significantly faster than those libraries. It is also faster than C-based Torch.

tensor nim multidimensional-arrays cuda deep-learning machine-learning cudnn high-performance-computing gpu-computing matrix-library neural-networks parallel-computing openmp linear-algebra ndarray opencl gpgpu iot automatic-differentiation autograd

Swift for TensorFlow is a new way to develop machine learning models. It gives you the power of TensorFlow directly integrated into the Swift programming language. With Swift, you can write the following imperative code, and Swift automatically turns it into a single TensorFlow Graph and runs it with the full performance of TensorFlow Sessions on CPU, GPU and TPU. Swift combines the flexibility of Eager Execution with the high performance of Graphs and Sessions. Behind the scenes, Swift analyzes your Tensor code and automatically builds graphs for you. Swift also catches type errors and shape mismatches before running your code, and has Automatic Differentiation built right in. We believe that machine learning tools are so important that they deserve a first-class language and a compiler.

machine-learning automatic-differentiation compiler

A C++ library to calculate with truncated Taylor series -- calculating derivatives of any order. To download, please go to the "Source Code" tab.
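A sketch of the truncated Taylor series idea behind such a library, in Python rather than C++ (not this library's actual API): a function is represented by its Taylor coefficients at a point, arithmetic is performed on the coefficient arrays, and derivatives of any order fall out of the result.

```python
# Sketch of truncated Taylor series arithmetic (the idea, not the actual
# C++ API). A function is represented by its Taylor coefficients
# [f, f', f''/2!, ...] at a point, truncated at order N.

from math import factorial

N = 4  # keep coefficients up to order N-1

def mul(a, b):
    """Cauchy product of two truncated Taylor series."""
    c = [0.0] * N
    for i in range(N):
        for j in range(N - i):
            c[i + j] += a[i] * b[j]
    return c

x0 = 2.0
x = [x0, 1.0, 0.0, 0.0]        # the identity function x at the point x0
f = mul(mul(x, x), x)          # f(x) = x**3, propagated through the arithmetic

# Recover derivatives of any order: f^(k)(x0) = k! * coefficient k
derivs = [factorial(k) * f[k] for k in range(N)]
print(derivs)  # f(2)=8, f'(2)=12, f''(2)=12, f'''(2)=6
```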

auto-differentiation derivative gradient math mathematics numerics

Engineering calculator; also calculates integrals, sums, products, and derivatives; solves LSE, ODE, and LSODE; finds roots; performs matrix calculations; interpolates with cubic splines. (math expression calculator) (scientific calculator) (numerical, integral, derivative, root, matrix, differential, differentiation, equation)

TH++ is a C++ tensor library, implemented as a wrapper around the TH library (the low-level tensor library in Torch). There is unfortunately little documentation about TH, but the interface mimics the Lua Tensor interface. The core of the library is the Tensor<T> class template, where T is a numeric type (usually floating point, float or double). A tensor is a multi-dimensional array, usually in C (row-major) order, but many operations (transpose, slice, etc) are performed by permuting indexes and changing offsets, so the data is no longer contiguous / in row-major order. Read the numpy.ndarray documentation for more details about the strided indexing scheme.
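The strided indexing scheme described above can be sketched in a few lines of Python (a sketch of the concept, not TH++'s actual API): a tensor is flat storage plus sizes, strides, and an offset, and transposing just permutes sizes and strides, so no data is copied and the result is no longer row-major contiguous.

```python
# Illustration of the strided indexing scheme: a tensor is flat storage
# plus sizes, strides, and an offset. Transpose permutes sizes and strides
# without touching the data, so the view is no longer row-major contiguous.

class Strided:
    def __init__(self, data, sizes, strides, offset=0):
        self.data, self.sizes, self.strides, self.offset = data, sizes, strides, offset

    def __getitem__(self, idx):
        flat = self.offset + sum(i * s for i, s in zip(idx, self.strides))
        return self.data[flat]

    def transpose(self):
        return Strided(self.data, self.sizes[::-1], self.strides[::-1], self.offset)

# A 2x3 row-major matrix: strides are (3, 1).
t = Strided([0, 1, 2, 3, 4, 5], sizes=(2, 3), strides=(3, 1))
print(t[1, 2])               # row 1, col 2 of the matrix
print(t.transpose()[2, 1])   # the same element through the permuted view
```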

jl ("JSON lambda") is a tiny functional language for querying and manipulating JSON. Binary releases for Linux and OS X are available here.

json command-line-tool command-line haskell

ADMC++ -- An Automatic Differentiation Package for MATLAB and C++. ADMC++ is an automatic differentiation package designed for MATLAB and C++. Automatic differentiation is a technique for computing derivatives of functions.

C# automatic differentiation and numerical optimization library, transparent use with operator overloading, unlimited order of differentiation and unlimited number of variables, very flexible support for matrix algebra, on-the-fly function compilation to IL code for very fast ...
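The operator-overloading approach mentioned above can be sketched in Python rather than C# (this is an illustration of the technique, not the library's actual API): a dual number carries a value and a derivative, and overloaded operators propagate both through ordinary-looking expressions.

```python
# Dual-number forward-mode autodiff via operator overloading (a sketch of
# the technique, not this C# library's actual API).

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

def f(x):
    return x * x + x      # f(x) = x^2 + x, written transparently

x = Dual(3.0, 1.0)        # seed dx/dx = 1
y = f(x)
print(y.val, y.dot)       # f(3) = 12.0 and f'(3) = 7.0
```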

differentiation mathematics matrix optimization

Gorgonia is a library that helps facilitate machine learning in Go. Write and evaluate mathematical equations involving multidimensional arrays easily. If this sounds like Theano or TensorFlow, it's because the idea is quite similar. Specifically, the library is pretty low-level, like Theano, but has higher goals, like TensorFlow. The main reason to use Gorgonia is developer comfort. If you're using a Go stack extensively, you now have the ability to create production-ready machine learning systems in an environment you are already familiar and comfortable with.

machine-learning artificial-intelligence neural-network computation-graph differentiation gradient-descent gorgonia deep-learning deeplearning deep-neural-networks automatic-differentiation symbolic-differentiation go-library

Pretty Tensor provides a high-level builder API for TensorFlow. It provides thin wrappers on Tensors so that you can easily build multi-layer neural networks. Pretty Tensor provides a set of objects that behave like Tensors but also support a chainable object syntax to quickly define neural networks and other layered architectures in TensorFlow.
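A sketch of the chainable builder idea in plain Python (hypothetical names, not Pretty Tensor's real API): each method records a layer and returns the builder itself, so an architecture reads top to bottom as one chain.

```python
# Chainable builder pattern (hypothetical names, not Pretty Tensor's API):
# each method records a layer spec and returns self, enabling chaining.

class LayerBuilder:
    def __init__(self):
        self.layers = []

    def _add(self, spec):
        self.layers.append(spec)
        return self          # returning self is what enables chaining

    def fully_connected(self, size):
        return self._add(('fully_connected', size))

    def softmax(self):
        return self._add(('softmax',))

net = LayerBuilder().fully_connected(100).fully_connected(10).softmax()
print(net.layers)
```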

MShadow is a lightweight CPU/GPU Matrix/Tensor template library in C++/CUDA. The goal of MShadow is to provide an efficient, device-invariant, and simple tensor library for machine learning projects that aim for maximum performance and control, while also emphasizing simplicity. MShadow also provides an interface that allows writing multi-GPU and distributed deep learning programs in an easy and unified way.

Low-Rank and Sparse tools for Background Modeling and Subtraction in Videos. The LRSLibrary provides a collection of low-rank and sparse decomposition algorithms in MATLAB. The library was designed for motion segmentation in videos, but it can also be used (or adapted) for other computer vision problems (for more information, please see this page). Currently the LRSLibrary offers more than 100 algorithms based on matrix and tensor methods. The LRSLibrary was tested successfully in several MATLAB versions (e.g. R2014, R2015, R2016, and R2017, on both x86 and x64). It requires at least R2014b.

rpca matrix-factorization matrix-completion tensor-decomposition tensor matlab matrix subspace-tracking subspace-learning

EINSTEIN (Expert system for an Intelligent Supply of Thermal Energy in Industry). A software tool for fast, high-quality thermal energy audits and the design of energy-efficient heat and cold supply systems in the industrial sector and other large applications.

Objective Markup Notation, abbreviated as Mark Notation or just Mark, is a new unified notation for both object and markup data. The notation is a superset of what can be represented by JSON, HTML and XML, but overcomes many limitations of these popular data formats while still having a very clean syntax and simple data model. The major syntax extension Mark makes to JSON is the introduction of a Mark object: a JSON object extended with a type name and a list of content items, similar to an element in HTML and XML.

mark markup notation json json5 parser stringify html xml jsonml virtual-dom s-expression

Ravi is a derivative/dialect of Lua 5.3 with limited optional static typing, featuring LLVM- and Eclipse OMR-powered JIT compilers. The name Ravi comes from the Sanskrit word for the Sun. Interestingly, a precursor to Lua was Sol, which had support for static types; Sol means the Sun in Portuguese. Lua is perfect as a small embeddable dynamic language, so why a derivative? Ravi extends Lua with static typing for greater performance under JIT compilation. However, the static typing is optional, and therefore Lua programs are also valid Ravi programs.

llvm jit programming-language omr eclipse-omr

The fuzzer uses Python and runs on multiple OSs (Linux, Windows, OS X, and FreeBSD). Its main goal is to detect issues via differential fuzzing, aided by extended capabilities to increase coverage. It will also find common vulnerabilities based on hangs and crashes, allowing you to attach a memory debugger to the fuzzing sessions. The tool and the fuzzing process can be susceptible to code execution; use it at your own risk, always inside a VM.
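Differential fuzzing itself can be sketched in a few lines (hypothetical toy targets, not this tool's actual interface): feed the same random input to two implementations that should agree, and flag any divergence as a potential bug.

```python
# Minimal sketch of differential fuzzing (toy targets, not this tool's
# actual interface): the same random input goes to two implementations
# expected to agree, and any divergence is recorded as a finding.

import random

def impl_a(s):
    return s.strip().lower()

def impl_b(s):              # deliberately buggy stand-in: drops all spaces
    return s.replace(" ", "").lower()

def fuzz(iterations=1000, seed=0):
    rng = random.Random(seed)
    alphabet = "Ab \t"
    findings = []
    for _ in range(iterations):
        s = "".join(rng.choice(alphabet) for _ in range(8))
        if impl_a(s) != impl_b(s):
            findings.append(s)   # divergence: a bug in one implementation
    return findings

print(len(fuzz()) > 0)  # with these toy targets, divergences are found
```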

Tangent is a new, free, open-source Python library for automatic differentiation. Existing libraries implement automatic differentiation by tracing a program's execution (at runtime, like PyTorch) or by staging out a dynamic data-flow graph and then differentiating the graph (ahead of time, like TensorFlow). In contrast, Tangent performs ahead-of-time autodiff on the Python source code itself and produces Python source code as its output. Tangent fills a unique location in the space of machine learning tools.
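Tangent's key idea, producing source code rather than closures, can be sketched with a toy rule table (not Tangent's actual implementation): the "derivative" is new Python source code that you can read, edit, and execute like any other code.

```python
# Source-to-source autodiff sketched with a toy rule table (not Tangent's
# actual implementation): the output is Python source code, not a closure.

rules = {
    "x * x": "2 * x",    # toy symbolic rule table keyed on the body text
    "x + 1": "1",
}

def grad_source(fn_name, body):
    """Emit Python source for the derivative of a one-line function body."""
    return "def d{0}(x):\n    return {1}\n".format(fn_name, rules[body])

src = grad_source("f", "x * x")
print(src)                 # the output is source code, readable and editable

namespace = {}
exec(src, namespace)       # compile the generated source and use it
print(namespace["df"](3))  # evaluates the generated derivative at x = 3
```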

autodiff automatic-differentiation machine-learning deep-learning

NOTE: Building on the momentum of deeplearn.js, we have joined the TensorFlow family, and we are starting a new ecosystem of libraries and tools for machine learning in JavaScript, called TensorFlow.js. This repo moved from PAIR-code/deeplearnjs to tensorflow/tfjs-core. As part of the TensorFlow.js ecosystem, this repo hosts @tensorflow/tfjs-core, the TensorFlow.js Core API, which provides low-level, hardware-accelerated linear algebra operations and an eager API for automatic differentiation.

deep-learning typescript webgl machine-learning neural-network deep-neural-networks gpu-acceleration

Autograd can automatically differentiate native Python and NumPy code. It can handle a large subset of Python's features, including loops, ifs, recursion, and closures, and it can even take derivatives of derivatives of derivatives. It supports reverse-mode differentiation (a.k.a. backpropagation), which means it can efficiently take gradients of scalar-valued functions with respect to array-valued arguments, as well as forward-mode differentiation, and the two can be composed arbitrarily. The main intended application of Autograd is gradient-based optimization. For more information, check out the tutorial and the examples directory. See the tanh example file for the code.
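Reverse-mode differentiation as described above can be sketched in plain Python (a sketch of the idea, not Autograd's actual internals): each operation records its inputs and local derivatives on the way forward, then gradients flow backward through those records via the chain rule.

```python
# Minimal sketch of reverse-mode differentiation (backpropagation) -- the
# idea behind Autograd, not its actual internals. Each Var records its
# parents and local derivatives forward; backward() applies the chain rule.

class Var:
    def __init__(self, val, parents=()):
        self.val, self.parents, self.grad = val, parents, 0.0

    def __add__(self, other):
        return Var(self.val + other.val, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.val * other.val, [(self, other.val), (other, self.val)])

    def backward(self, seed=1.0):
        self.grad += seed
        for parent, local in self.parents:     # chain rule, backwards
            parent.backward(seed * local)

x = Var(2.0)
y = Var(5.0)
z = x * y + x          # z = x*y + x
z.backward()
print(x.grad, y.grad)  # dz/dx = y + 1 = 6.0, dz/dy = x = 2.0
```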
