Gorgonia is a library that helps facilitate machine learning in Go. Write and evaluate mathematical equations involving multidimensional arrays easily. If this sounds like Theano or TensorFlow, it's because the idea is quite similar. Specifically, the library is pretty low-level, like Theano, but has higher goals like TensorFlow. The main reason to use Gorgonia is developer comfort: if you're already using a Go stack extensively, you can now build production-ready machine learning systems in an environment you are familiar and comfortable with.
machine-learning artificial-intelligence neural-network computation-graph differentiation gradient-descent gorgonia deep-learning deeplearning deep-neural-networks automatic-differentiation symbolic-differentiation go-library

Tangent is a new, free, and open-source Python library for automatic differentiation. Existing libraries implement automatic differentiation by tracing a program's execution (at runtime, like PyTorch) or by staging out a dynamic data-flow graph and then differentiating the graph (ahead-of-time, like TensorFlow). In contrast, Tangent performs ahead-of-time autodiff on the Python source code itself, and produces Python source code as its output. Tangent fills a unique location in the space of machine learning tools.
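A minimal sketch of the workflow, using the tangent.grad entry point from the project's README (the example function is ours; behaviour may differ slightly across versions), with the generated derivative being itself ordinary Python code:

    import tangent

    def f(x):
        return x * x + 3.0 * x

    # grad() reads f's source and generates a new Python function computing df/dx.
    df = tangent.grad(f)
    print(df(2.0))  # 2*x + 3 evaluated at x = 2 -> 7.0
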
autodiff automatic-differentiation machine-learning deep-learning

Swift for TensorFlow is a new way to develop machine learning models. It gives you the power of TensorFlow directly integrated into the Swift programming language. With Swift, you can write imperative code, and Swift automatically turns it into a single TensorFlow graph and runs it with the full performance of TensorFlow Sessions on CPU, GPU and TPU. Swift combines the flexibility of Eager Execution with the high performance of Graphs and Sessions. Behind the scenes, Swift analyzes your Tensor code and automatically builds graphs for you. Swift also catches type errors and shape mismatches before running your code, and has Automatic Differentiation built right in. We believe that machine learning tools are so important that they deserve a first-class language and a compiler.
machine-learning automatic-differentiation compiler

A Machine Learning library written in pure Go, designed to support relevant neural architectures in Natural Language Processing. spaGO is self-contained, in that it uses its own lightweight computational graph framework for both training and inference, and is easy to understand from start to finish.
nlp machine-learning natural-language-processing deep-learning neural-network automatic-differentiation artificial-intelligence recurrent-networks lstm computation-graph question-answering bart automatic-translation deeplearning language-model bert transformer-architecture bert-as-service named-entities-recognition

DeepLearning.scala is a simple library for creating complex neural networks from object-oriented and functional programming constructs. Like other deep learning toolkits, DeepLearning.scala allows you to build neural networks from mathematical formulas. It supports floats, doubles, GPU-accelerated N-dimensional arrays, and calculates derivatives of the weights in the formulas.
automatic-differentiation deep-neural-networks deep-learning neural-network functional-programming symbolic-computation dsl domain-specific-language machine-learning

Owl is an emerging numerical library for scientific computing and engineering. The library is developed in the OCaml language and inherits all of its powerful features, such as static type checking, a powerful module system, and superior runtime efficiency. Owl allows you to write succinct, type-safe numerical applications in a functional language without sacrificing performance, and significantly reduces the cost of moving from prototype to production use. Owl's documentation contains a lot of learning materials to help you get started. The full documentation consists of two parts, the Tutorial Book and the API Reference, both perfectly synchronised with the code in the repository by the automatic building system.
matrix linear-algebra ndarray statistical-functions topic-modeling regression maths gsl plotting sparse-linear-systems scientific-computing numerical-calculations statistics mcmc optimization autograd algorithmic-differentation automatic-differentiation machine-learning neural-network

This is the Control Toolbox, an efficient C++ library for control, estimation, optimization and motion planning in robotics. See the project wiki for a quickstart and the detailed documentation.
cpp robotics automatic-differentiation control-systems trajectory-optimization optimal-control model-predictive-control rigid-body-dynamics lqr-controller extended-kalman-filter ilqg ilqr disturbance-observer multiple-shooting riccati-solver

Pinocchio instantiates state-of-the-art Rigid Body Algorithms for poly-articulated systems, based on revisited versions of Roy Featherstone's algorithms. Besides, Pinocchio provides the analytical derivatives of the main Rigid-Body Algorithms, like the Recursive Newton-Euler Algorithm or the Articulated-Body Algorithm. Pinocchio is first tailored for robotics applications, but it can be used in other contexts (biomechanics, computer graphics, vision, etc.). It is built upon Eigen for linear algebra and FCL for collision detection. Pinocchio comes with a Python interface for fast code prototyping, directly accessible through Conda.
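A short sketch of the Python interface mentioned above. The builder and algorithm names (buildSampleModelManipulator, neutral, rnea) follow Pinocchio's documented API, but exact names can vary between versions:

    import numpy as np
    import pinocchio as pin

    # Build a small sample manipulator model shipped with Pinocchio.
    model = pin.buildSampleModelManipulator()
    data = model.createData()

    q = pin.neutral(model)   # a neutral joint configuration
    v = np.zeros(model.nv)   # joint velocities
    a = np.zeros(model.nv)   # joint accelerations

    # Recursive Newton-Euler Algorithm: joint torques for the given motion.
    tau = pin.rnea(model, data, q, v, a)
    print(tau)
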
c-plus-plus robotics kinematics dynamics automatic-differentiation conda motion-planning ros code-generation urdf rigid-body-dynamics cppad fcl casadi analytical-derivatives pinocchio

As we all know, a neural network is just a computable math expression (and hence a program). Of course, I must still be able to train the network.
deep-learning dsl functional-programming automatic-differentiation

Implements most of the great things that came out in 2014 concerning recurrent neural networks, along with some good optimizers for these types of networks. This module also contains the SGD, AdaGrad, and AdaDelta gradient descent methods, which are constructed from an objective function and a set of Theano variables and return an updates dictionary to pass to a Theano function.
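The optimizers described above follow Theano's standard "updates" pattern. The snippet below illustrates that pattern in plain Theano with example variables of our own; it is not this module's own API:

    import numpy as np
    import theano
    import theano.tensor as T

    x = T.dvector("x")
    w = theano.shared(np.ones(3), name="w")
    cost = T.sum((T.dot(x, w) - 1.0) ** 2)

    # An updates mapping: each shared variable -> its new value after one SGD step.
    updates = [(w, w - 0.01 * T.grad(cost, w))]
    train = theano.function([x], cost, updates=updates)
    print(train(np.array([1.0, 2.0, 3.0])))
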
machine-learning recurrent-networks theano lstm gru adadelta dropout automatic-differentiation neural-network tutorial

Arraymancer is a tensor (N-dimensional array) project in Nim. The main focus is providing a fast and ergonomic CPU, CUDA and OpenCL ndarray library on which to build a scientific computing and, in particular, a deep learning ecosystem. The library is inspired by Numpy and PyTorch. It provides ergonomics very similar to Numpy, Julia and Matlab but is fully parallel and significantly faster than those libraries. It is also faster than C-based Torch.
tensor nim multidimensional-arrays cuda deep-learning machine-learning cudnn high-performance-computing gpu-computing matrix-library neural-networks parallel-computing openmp linear-algebra ndarray opencl gpgpu iot automatic-differentiation autograd

The Stan Math Library is a C++, reverse-mode automatic differentiation library designed to be usable, extensive and extensible, efficient, scalable, stable, portable, and redistributable in order to facilitate the construction and utilization of algorithms that utilize derivatives. The Stan Math Library is licensed under the new BSD license.
stan stan-math-library math eigen boost automatic-differentiation cvode

A translation of the ad library from Jeff Siskind's Qobischeme.
automatic-differentiation overloaded-operators

Efficient computations with symmetric and non-symmetric tensors, with support for automatic differentiation. This Julia package provides fast operations with symmetric and non-symmetric tensors of order 1, 2 and 4. The tensors are allocated on the stack, which means that there is no need to preallocate output results for performance. Unicode infix operators are provided so that the tensor expression in the source code is similar to the one written in mathematical notation. When possible, symmetry of tensors is exploited for better performance. Supports automatic differentiation to easily compute first and second order derivatives of tensorial functions.
finite-elements cfd tensor symmetric-tensors automatic-differentiation

adnn provides Javascript-native neural networks on top of general scalar/tensor reverse-mode automatic differentiation. You can use just the AD code, or the NN layer built on top of it. This architecture makes it easy to define big, complex numerical computations and compute derivatives w.r.t. their inputs/parameters. adnn also includes utilities for optimizing/training the parameters of such computations. adnn provides a Tensor type for representing multidimensional arrays of numbers and various operations on them. This is the core datatype underlying neural net computations.
automatic-differentiation neural-networks

Abstract operators extend the syntax typically used for matrices to linear mappings of arbitrary dimensions and nonlinear functions. Unlike matrices, however, abstract operators apply the mappings with specific efficient algorithms that minimize memory requirements. This is particularly useful in iterative algorithms and in first-order large-scale optimization algorithms. Remember to Pkg.update() to keep the package up to date.
derivatives back-propagation automatic-differentiation optimization large-scale julia-language

Automatic heterogeneous back-propagation. Write your functions to compute your result, and the library will automatically generate functions to compute your gradient.
backprop graph backpropagation deep-learning neural-network gradient-descent automatic-differentiation

ForwardDiff implements methods to take derivatives, gradients, Jacobians, Hessians, and higher-order derivatives of native Julia functions (or any callable object, really) using forward mode automatic differentiation (AD). While performance can vary depending on the functions you evaluate, the algorithms implemented by ForwardDiff generally outperform non-AD algorithms in both speed and accuracy.
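For readers new to the technique, forward-mode AD can be illustrated with dual numbers that carry a value and a derivative through every operation. The toy Python below shows the idea only; it is not ForwardDiff's Julia API:

    class Dual:
        """A number paired with its derivative with respect to the input."""
        def __init__(self, value, deriv=0.0):
            self.value, self.deriv = value, deriv

        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.value + other.value, self.deriv + other.deriv)

        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            # Product rule applied alongside the ordinary multiplication.
            return Dual(self.value * other.value,
                        self.deriv * other.value + self.value * other.deriv)

    def derivative(f, x):
        # Seed the input with derivative 1 and read the derivative off the output.
        return f(Dual(x, 1.0)).deriv

    print(derivative(lambda x: x * x * x + x, 2.0))  # 3*2^2 + 1 = 13.0
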
julia calculus automatic-differentiation

Note: While ReverseDiff technically supports Julia v0.7/v1.0 and is somewhat maintained, it is currently not actively developed. Instead, ForwardDiff/ReverseDiff's maintainers are focused on the development of a new AD package built on top of Cassette. In the meantime, it might be worth checking out other reverse-mode AD implementations in Nabla.jl, AutoGrad.jl, Flux.jl, or XGrad.jl. ReverseDiff implements methods to take gradients, Jacobians, Hessians, and higher-order derivatives of native Julia functions (or any callable object, really) using reverse mode automatic differentiation (AD).
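By contrast, reverse mode records the computation as a graph and then pushes adjoints backwards from the output to the inputs. Again a toy Python illustration of the idea, with names of our own choosing, not ReverseDiff's API:

    class Var:
        """A recorded value with links to its parents and their local derivatives."""
        def __init__(self, value, parents=()):
            self.value = value
            self.parents = parents   # (parent Var, local derivative) pairs
            self.grad = 0.0

        def __add__(self, other):
            return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

        def __mul__(self, other):
            return Var(self.value * other.value,
                       [(self, other.value), (other, self.value)])

    def backward(out):
        # Topologically order the graph, then propagate adjoints in reverse.
        order, seen = [], set()
        def visit(node):
            if node not in seen:
                seen.add(node)
                for parent, _ in node.parents:
                    visit(parent)
                order.append(node)
        visit(out)
        out.grad = 1.0
        for node in reversed(order):
            for parent, local in node.parents:
                parent.grad += node.grad * local

    x = Var(3.0)
    y = x * x + x            # f(x) = x^2 + x
    backward(y)
    print(y.value, x.grad)   # 12.0 7.0
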
julia calculus automatic-differentiation