gorgonia - Gorgonia is a library that helps facilitate machine learning in Go.


Gorgonia is a library that helps facilitate machine learning in Go. It lets you write and evaluate mathematical equations involving multidimensional arrays easily. If this sounds like Theano or TensorFlow, that's because the idea is quite similar: the library is fairly low-level, like Theano, but has broader goals, like TensorFlow. The main reason to use Gorgonia is developer comfort: if you already use a Go stack extensively, you can build production-ready machine learning systems in an environment you are familiar and comfortable with.

nnvm - Bring deep learning to bare metal


The project README includes a code snippet demonstrating the general workflow of the nnvm compiler. Licensed under the Apache-2.0 license.

assignment1 - Assignment 1: automatic differentiation


In this assignment, we will implement reverse-mode auto-diff. Our code should be able to construct simple expressions, e.g. y = x1*x2 + x1, and evaluate their outputs as well as their gradients (or adjoints), e.g. y, dy/dx1, and dy/dx2.
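To make the idea concrete, here is a minimal reverse-mode auto-diff sketch in plain Python. The Node class, the mul/add helpers, and the backward function are hypothetical illustrations, not the assignment's actual API: each node records its inputs together with the local gradient of the output with respect to each input, and backward sums adjoint contributions over every path from the output back to the variables.

```python
class Node:
    """A node in the computation graph, recording its value, its
    inputs, and the local gradient with respect to each input."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # list of (parent_node, local_gradient)
        self.grad = 0.0

def mul(a, b):
    # d(a*b)/da = b, d(a*b)/db = a
    return Node(a.value * b.value, parents=[(a, b.value), (b, a.value)])

def add(a, b):
    # d(a+b)/da = d(a+b)/db = 1
    return Node(a.value + b.value, parents=[(a, 1.0), (b, 1.0)])

def backward(output):
    """Propagate adjoints from the output to every input by summing
    the contribution of each path through the graph (chain rule)."""
    output.grad = 1.0
    stack = [(output, 1.0)]
    while stack:
        node, upstream = stack.pop()
        for parent, local_grad in node.parents:
            contrib = upstream * local_grad
            parent.grad += contrib
            stack.append((parent, contrib))

# y = x1*x2 + x1, evaluated at x1 = 2, x2 = 3
x1, x2 = Node(2.0), Node(3.0)
y = add(mul(x1, x2), x1)
backward(y)
print(y.value)   # 8.0
print(x1.grad)   # dy/dx1 = x2 + 1 = 4.0
print(x2.grad)   # dy/dx2 = x1 = 2.0
```

Note that x1 feeds into y along two paths (through the product and directly through the sum), so its adjoint is the sum of both contributions, x2 + 1.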

Dispatcher.jl - Build, distribute, and execute task graphs


Dispatcher is a tool for building and executing a computation graph given a series of dependent operations. Using Dispatcher, the run! function builds and runs a computation graph of DispatchNodes. DispatchNodes represent units of computation that can be run. The most common DispatchNode is Op, which represents a function call on some arguments; some of those arguments may exist when the graph is built, while others may represent the results of other DispatchNodes. An Executor executes a whole DispatchGraph. Two Executors are provided: AsyncExecutor executes computations asynchronously using Julia Tasks, and ParallelExecutor executes computations in parallel using all available Julia processes (by calling @spawn).
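Dispatcher itself is a Julia package, but the core idea — resolving each node's dependencies before calling its function — can be sketched in a few lines of Python. The Op class and run function below are a conceptual analogue, not Dispatcher's API, and for simplicity this sketch resolves dependencies sequentially rather than asynchronously or in parallel.

```python
from concurrent.futures import ThreadPoolExecutor

class Op:
    """A unit of computation: a function call whose arguments may be
    literal values or the (future) results of other Ops."""
    def __init__(self, func, *args):
        self.func, self.args = func, args

def run(op, executor, cache=None):
    """Execute an Op after recursively executing its Op dependencies.
    The cache ensures each node runs at most once per graph execution."""
    cache = {} if cache is None else cache
    if op in cache:
        return cache[op]
    resolved = [run(a, executor, cache) if isinstance(a, Op) else a
                for a in op.args]
    cache[op] = executor.submit(op.func, *resolved).result()
    return cache[op]

a = Op(lambda: 2)
b = Op(lambda x, y: x + y, a, 3)   # depends on a
c = Op(lambda x, y: x * y, a, b)   # depends on a and b

with ThreadPoolExecutor() as ex:
    print(run(c, ex))  # (2 + 3) * 2 = 10
```

Calling .result() immediately after submit serializes execution; Dispatcher's AsyncExecutor and ParallelExecutor instead schedule independent nodes concurrently.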

assignment2-2017 - (Spring 2017) Assignment 2: GPU Executor


In this assignment, we will implement a GPU graph executor that can train simple neural nets such as multilayer perceptron models. Our code should be able to construct a simple MLP model using the computation graph API implemented in Assignment 1, and train and test the model using either the numpy executor or the GPU executor. If you implement everything correctly, you should see a nice speedup when training neural nets with the GPU executor compared to the numpy executor.
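As a rough picture of what such an executor computes, here is a plain-numpy sketch of training a one-hidden-layer MLP on a toy binary task. The shapes, learning rate, and loss are illustrative assumptions and do not reflect the assignment's graph-executor API; a GPU executor would run the same forward and backward passes as device kernels.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 10))             # 64 samples, 10 features
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0  # toy binary labels

W1 = rng.standard_normal((10, 32)) * 0.1      # hidden layer weights
W2 = rng.standard_normal((32, 1)) * 0.1       # output layer weights
lr = 0.1

for step in range(500):
    # forward pass
    h = np.maximum(X @ W1, 0)                 # ReLU hidden layer
    p = 1 / (1 + np.exp(-(h @ W2)))           # sigmoid output
    # backward pass: gradients of the mean cross-entropy loss
    dlogits = (p - y) / len(X)
    dW2 = h.T @ dlogits
    dh = dlogits @ W2.T
    dW1 = X.T @ (dh * (h > 0))                # ReLU gradient mask
    # gradient-descent update
    W1 -= lr * dW1
    W2 -= lr * dW2

acc = ((p > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Every array op here (matmul, ReLU, sigmoid, the gradient products) maps onto a node of the Assignment 1 computation graph, which is what lets the same model run under either executor.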