crystal-fann - FANN (Fast Artificial Neural Network) binding in Crystal

  •    Crystal

I guess more stuff will be added once more people use it.

https://github.com/NeuraLegion/crystal-fann

Related Projects

ruby-fann - Ruby library for interfacing with FANN (Fast Artificial Neural Network)

  •    C

RubyFann, or "ruby-fann", is a Ruby gem that binds to FANN (Fast Artificial Neural Network) from within a Ruby/Rails environment. FANN is a free, open-source neural network library that implements multilayer artificial neural networks with support for both fully connected and sparsely connected networks. It is easy to use, versatile, well documented, and fast. RubyFann makes working with neural networks a breeze in Ruby, with the added benefit that most of the heavy lifting is done natively. A callback function can be supplied and will be invoked during training when using train_on_data, train_on_file or cascadetrain_on_data.

fann - Official GitHub repository for Fast Artificial Neural Network Library (FANN)

  •    C++

Fast Artificial Neural Network (FANN) Library is a free, open-source neural network library that implements multilayer artificial neural networks in C, with support for both fully connected and sparsely connected networks. Cross-platform execution in both fixed and floating point is supported. It includes a framework for easy handling of training data sets. It is easy to use, versatile, well documented, and fast.

fann.js - FANN compiled through Emscripten

  •    Javascript

The FANN (Fast Artificial Neural Network) library compiled through Emscripten. This library contains some higher-level bindings. Much of the original documentation is still relevant. The noticeable changes will be documented below. Note that not all functions have a binding. You may see mention of Fixed vs. Float mode; FANN.js uses Float mode.

tensorflow-image-detection - A generic image detection program that uses Google's Machine Learning library, TensorFlow, and a pre-trained Deep Learning Convolutional Neural Network model called Inception

  •    Python

A generic image detection program that uses Google's machine learning library, TensorFlow, and a pre-trained Deep Learning Convolutional Neural Network model called Inception. This model has been pre-trained for the ImageNet Large Scale Visual Recognition Challenge using the data from 2012, and it can differentiate between 1,000 different classes, such as Dalmatian, dishwasher, etc. The program applies transfer learning to this existing model and re-trains it to classify a new set of images.
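
The following is a minimal sketch of that transfer-learning idea using tf.keras, not the repo's actual retrain script; the class count and the commented training call are placeholders.

```python
# Minimal transfer-learning sketch (illustrative, not the repo's retrain
# script): freeze a pre-trained Inception backbone and train only a new
# classification head on a new set of images.
import tensorflow as tf

NUM_CLASSES = 5  # hypothetical number of new image classes

base = tf.keras.applications.InceptionV3(
    weights="imagenet",   # ImageNet weights, as in the pre-trained model
    include_top=False,    # drop the original 1,000-class classifier
    pooling="avg",
)
base.trainable = False    # freeze the backbone; only the new head is trained

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)  # supply your own dataset
```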

t81_558_deep_learning - Washington University (in St. Louis) course T81-558: Applications of Deep Neural Networks

  •    Jupyter

Deep learning is a group of exciting new technologies for neural networks. Through a combination of advanced training techniques and neural network architectural components, it is now possible to create neural networks of much greater complexity. Deep learning allows a neural network to learn hierarchies of information in a way that is like the function of the human brain. This course will introduce the student to computer vision with Convolutional Neural Networks (CNN), time series analysis with Long Short-Term Memory (LSTM), classic neural network structures, and applications to computer security. High Performance Computing (HPC) aspects will demonstrate how deep learning can be leveraged both on graphical processing units (GPUs) and on grids. The focus is primarily on the application of deep learning to problems, with some introduction to the mathematical foundations. Students will use the Python programming language to implement deep learning using Google TensorFlow and Keras. It is not necessary to know Python prior to this course; however, familiarity with at least one programming language is assumed. This course will be delivered in a hybrid format that includes both classroom and online instruction. This syllabus presents the expected class schedule, due dates, and reading assignments. Download the current syllabus.


EmojiIntelligence - Neural Network built in Apple Playground using Swift

  •    Swift

I used this challenge to learn more about neural networks and machine learning. A neural network consists of layers, and each layer has neurons. My network has three layers: an input layer, a hidden layer, and an output layer. The input to my network consists of 64 binary numbers. These inputs are connected to the neurons in the hidden layer. The hidden layer performs some computation and passes the result to the output layer neuron, which also performs a computation and then outputs a 0 or a 1. The neurons in the input layer don’t actually do anything; they are just placeholders for the input values. Only the neurons in the hidden layer and the output layer perform computations. The neurons from the input layer are connected to the neurons in the hidden layer. Likewise, the neurons from the hidden layer are connected to the output layer. These kinds of layers are called fully-connected because every neuron is connected to every neuron in the next layer. Each connection between two neurons has a weight, which is just a number. These weights form the brain of my network. For the activation function in my network, I use the sigmoid function, a mathematical function that takes in some number x and converts it into a value between 0 and 1. That is ideal for my purposes, since I am dealing with binary numbers. It also turns a linear equation into something non-linear, which is important because without it the network wouldn’t be able to learn anything interesting. As mentioned, the input to this network is 64 binary numbers: I resize the drawn image to 8x8 pixels, which makes 64 pixels in total. I go through the image and check each pixel: if the pixel has a pink color I add a 1 to my array, otherwise I add a 0. In the end I have 64 binary numbers that I can feed into my input layer.
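
As an illustration of the forward pass described above (the repo itself is in Swift; this sketch uses Python/NumPy, and the hidden-layer size and random weights are purely illustrative):

```python
# Sketch of the forward pass described above: 64 binary inputs, one
# fully-connected hidden layer, a single sigmoid output neuron.
import numpy as np

def sigmoid(x):
    # squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
w_hidden = rng.normal(size=(64, 16))   # input -> hidden weights (size illustrative)
b_hidden = np.zeros(16)
w_out = rng.normal(size=(16, 1))       # hidden -> output weights
b_out = np.zeros(1)

pixels = rng.integers(0, 2, size=64)   # stand-in for the 8x8 binary image

hidden = sigmoid(pixels @ w_hidden + b_hidden)
output = sigmoid(hidden @ w_out + b_out)
print(1 if output[0] > 0.5 else 0)     # the final 0/1 decision
```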

have-fun-with-machine-learning - An absolute beginner's guide to Machine Learning and Image Classification with Neural Networks

  •    Python

Also available in Chinese (Traditional). This is a hands-on guide to machine learning for programmers with no background in AI. Using a neural network doesn’t require a PhD, and you don’t need to be the person who makes the next breakthrough in AI in order to use what exists today. What we have now is already breathtaking, and highly usable. I believe that more of us need to play with this stuff like we would any other open source technology, instead of treating it like a research topic.

Machine-Learning-Tutorials - machine learning and deep learning tutorials, articles and other resources

  •    

This repository contains a topic-wise curated list of Machine Learning and Deep Learning tutorials, articles and other resources. Other awesome lists can be found in this list. If you want to contribute to this list, please read Contributing Guidelines.

Accord.NET - Machine learning, Computer vision, Statistics and general scientific computing for .NET

  •    CSharp

The Accord.NET project provides machine learning, statistics, artificial intelligence, computer vision and image processing methods to .NET. It can be used on Microsoft Windows, Xamarin, Unity3D, Windows Store applications, Linux or mobile.

glow - Compiler for Neural Network hardware accelerators

  •    C++

Glow is a machine learning compiler and execution engine for various hardware targets. It is designed to be used as a backend for high-level machine learning frameworks. The compiler is designed to allow state of the art compiler optimizations and code generation of neural network graphs. This library is experimental and in active development. Glow lowers a traditional neural network dataflow graph into a two-phase strongly-typed intermediate representation (IR). The high-level IR allows the optimizer to perform domain-specific optimizations. The lower-level instruction-based address-only IR allows the compiler to perform memory-related optimizations, such as instruction scheduling, static memory allocation and copy elimination. At the lowest level, the optimizer performs machine-specific code generation to take advantage of specialized hardware features. Glow features a lowering phase which enables the compiler to support a high number of input operators as well as a large number of hardware targets by eliminating the need to implement all operators on all targets. The lowering phase is designed to reduce the input space and allow new hardware backends to focus on a small number of linear algebra primitives. The design philosophy is described in an arXiv paper.

LSTM-Human-Activity-Recognition - Human Activity Recognition example using TensorFlow on smartphone sensors dataset and an LSTM RNN (Deep Learning algo)

  •    Jupyter

Compared to a classical approach, using a Recurrent Neural Network (RNN) with Long Short-Term Memory cells (LSTMs) requires little or no feature engineering. Data can be fed directly into the neural network, which acts like a black box that models the problem correctly. Other research on this activity recognition dataset uses a large amount of feature engineering, which is more of a signal-processing approach combined with classical data science techniques. The approach here is very simple in terms of how much the data was preprocessed. Let's use Google's neat deep learning library, TensorFlow, to demonstrate the usage of an LSTM, a type of artificial neural network that can process sequential data / time series.
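
A minimal Keras sketch of that idea (the notebook itself uses lower-level TensorFlow code; the LSTM size here is illustrative, while the input shape and class count follow the UCI HAR smartphone dataset the repo uses):

```python
# Raw sensor windows go straight into an LSTM, with no manual feature
# engineering: 128 timesteps x 9 sensor channels, 6 activity classes.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 9)),   # one window of raw sensor data
    tf.keras.layers.LSTM(32),                # cell count is illustrative
    tf.keras.layers.Dense(6, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=10)  # feed the windows in unprocessed
```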

autonomous-rc-car - Autonomous RC car using Raspberry Pi and ANN

  •    Python

This project aims to build an autonomous RC car using supervised learning of a neural network with a single hidden layer. We have not used any machine learning libraries, since we wanted to implement the neural network from scratch to understand the concepts better. We have modified a remote-controlled car to remove the dependency on the RF remote controller. A Raspberry Pi controls the DC motors via an L293D motor driver IC. You can find a post explaining this project in detail here. Here's a video of the car in action. We will refer to the DC motor controlling the left/right direction as the front motor and the motor controlling the forward/reverse direction as the back motor. Connect the BACK_MOTOR_DATA_ONE and BACK_MOTOR_DATA_TWO GPIO pins (GPIO17 and GPIO27) of the Raspberry Pi to the input pins for Motor 1 (Input 1, Input 2), and the BACK_MOTOR_ENABLE_PIN GPIO pin (GPIO22) to the enable pin for Motor 1 (Enable 1,2) on the L293D motor driver IC. Connect the output pins for Motor 1 (Output 1, Output 2) of the IC to the back motor, as in the sketch below.
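
Here is a sketch of driving the back motor through the L293D with the pins named above, using the standard RPi.GPIO library; the helper function names are illustrative, not the repo's actual code.

```python
# Drive the back motor through the L293D (BCM pin numbering, pins as
# listed in the description above). Runs on a Raspberry Pi.
import RPi.GPIO as GPIO

BACK_MOTOR_DATA_ONE = 17    # L293D Input 1
BACK_MOTOR_DATA_TWO = 27    # L293D Input 2
BACK_MOTOR_ENABLE_PIN = 22  # L293D Enable 1,2

GPIO.setmode(GPIO.BCM)
for pin in (BACK_MOTOR_DATA_ONE, BACK_MOTOR_DATA_TWO, BACK_MOTOR_ENABLE_PIN):
    GPIO.setup(pin, GPIO.OUT)

GPIO.output(BACK_MOTOR_ENABLE_PIN, GPIO.HIGH)  # enable Motor 1 outputs

def forward():
    # driving the two inputs to opposite levels spins the motor one way
    GPIO.output(BACK_MOTOR_DATA_ONE, GPIO.HIGH)
    GPIO.output(BACK_MOTOR_DATA_TWO, GPIO.LOW)

def reverse():
    # swapping the levels reverses the direction
    GPIO.output(BACK_MOTOR_DATA_ONE, GPIO.LOW)
    GPIO.output(BACK_MOTOR_DATA_TWO, GPIO.HIGH)

def stop():
    # both inputs low: the motor coasts to a stop
    GPIO.output(BACK_MOTOR_DATA_ONE, GPIO.LOW)
    GPIO.output(BACK_MOTOR_DATA_TWO, GPIO.LOW)
```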

deep-learning-book - Repository for "Introduction to Artificial Neural Networks and Deep Learning: A Practical Guide with Applications in Python"

  •    Jupyter

Repository for the book Introduction to Artificial Neural Networks and Deep Learning: A Practical Guide with Applications in Python. Deep learning is not just the talk of the town among tech folks. Deep learning allows us to tackle complex problems, training artificial neural networks to recognize complex patterns for image and speech recognition. In this book, we'll continue where we left off in Python Machine Learning and implement deep learning algorithms in PyTorch.

machine_learning_basics - Plain python implementations of basic machine learning algorithms

  •    Jupyter

This repository contains implementations of basic machine learning algorithms in plain Python (Python 3.6+). All algorithms are implemented from scratch without using additional machine learning libraries. The intention of these notebooks is to provide a basic understanding of the algorithms and their underlying structure, not to provide the most efficient implementations. After several requests I started preparing notebooks on how to preprocess datasets for machine learning. Over the next months I will add one notebook for each kind of dataset (text, images, ...). As before, the intention of these notebooks is to provide a basic understanding of the preprocessing steps, not to provide the most efficient implementations.
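
In the spirit of the repository (plain Python, no ML libraries), here is a small illustrative example of the from-scratch style, fitting a one-variable linear regression by gradient descent; it is not taken from the notebooks themselves.

```python
# Fit y = w*x + b by gradient descent on mean squared error,
# in plain Python with no libraries.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]   # true relationship: y = 2x

w, b = 0.0, 0.0
lr = 0.01
n = len(xs)
for _ in range(2000):
    # gradients of mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # approaches w = 2.0, b = 0.0
```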

ConvNetJS - Javascript implementation of Neural networks

  •    Javascript

ConvNetJS is a JavaScript implementation of neural networks. It currently supports common neural network modules, classification (SVM/Softmax) and regression (L2) cost functions, a MagicNet class for fully automatic neural network learning (automatic hyperparameter search and cross-validation), the ability to specify and train convolutional networks that process images, and an experimental reinforcement learning module based on Deep Q Learning.

PyTorch-Tutorial - Build your neural network easy and fast

  •    Jupyter

In these tutorials for PyTorch, we will build our first neural network and try to build some advanced neural network architectures developed in recent years. Thanks to liufuyang for the notebook files, which are a great contribution to this tutorial.
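
A minimal PyTorch sketch of the kind of first network these tutorials build: a small fully connected regression net with one training loop. The architecture and toy data are illustrative.

```python
# A tiny fully-connected regression net trained on toy data (y = x^2 + noise).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(1, 10),   # one input feature -> 10 hidden units
    nn.ReLU(),
    nn.Linear(10, 1),   # hidden -> one output
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

x = torch.linspace(-1, 1, 100).unsqueeze(1)
y = x.pow(2) + 0.1 * torch.randn(x.size())

for _ in range(200):
    pred = model(x)
    loss = loss_fn(pred, y)
    optimizer.zero_grad()   # clear old gradients
    loss.backward()         # backpropagate
    optimizer.step()        # update the weights
```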

DeepNeuralClassifier - Deep neural network using rectified linear units to classify hand written symbols from the MNIST dataset

  •    Julia

Example scripts for a deep, feed-forward neural network have been written from scratch. No machine learning packages are used, providing an example of how to implement the underlying algorithms of an artificial neural network. The code is written in Julia, a programming language with a syntax similar to Matlab. The neural network is trained on the MNIST dataset of handwritten digits. On the test dataset, the neural network correctly classifies 98.42% of the handwritten digits. The results are pretty good for a fully connected neural network that does not contain a priori knowledge about the geometric invariances of the dataset, as a Convolutional Neural Network would.

php-ml - PHP-ML - Machine Learning library for PHP

  •    PHP

A fresh approach to machine learning in PHP: algorithms, cross-validation, neural networks, preprocessing, feature extraction and much more in one library. PHP-ML requires PHP >= 7.1.