PyTorch is a deep learning framework that puts Python first. It is a Python package that provides Tensor computation (like NumPy) with strong GPU acceleration, and deep neural networks built on a tape-based autograd system. You can reuse your favorite Python packages such as NumPy, SciPy, and Cython to extend PyTorch when needed.
Tags: neural-network autograd gpu numpy deep-learning tensor

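As a minimal sketch of the tape-based autograd workflow described above (assuming PyTorch is installed; the tensor shape and values are purely illustrative):

```python
import torch

# Use the GPU when one is available; fall back to CPU otherwise.
device = "cuda" if torch.cuda.is_available() else "cpu"

# A tensor whose operations are recorded by the tape-based autograd system.
x = torch.randn(3, 3, device=device, requires_grad=True)

# NumPy-like tensor computation.
y = (x * x).sum()

# Backpropagate through the recorded operations; the gradient lands in x.grad.
y.backward()
print(x.grad)  # equals 2 * x
```

Because the graph is recorded as the operations run, the gradient is available immediately after `backward()`.
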
Nyuzi is an experimental GPGPU processor hardware design focused on compute-intensive tasks. It is optimized for use cases like blockchain mining, deep learning, and autonomous driving. This project includes a synthesizable hardware design written in SystemVerilog, an instruction set emulator, an LLVM-based C/C++ compiler, software libraries, and tests. It can be used to experiment with microarchitectural and instruction set design tradeoffs.
Tags: fpga gpu-computing gpu verilog hardware microprocessor graphics processor-architecture

MapD Core is an in-memory, column-store, SQL relational database that was designed from the ground up to run on GPUs. MapD Core is the foundational element of a larger data exploration platform that emphasizes speed at scale. By taking advantage of the parallel processing power of the hardware, MapD Core can query billions of rows in milliseconds. Furthermore, by using the graphics pipelines of GPUs, MapD Core can render graphics directly from the server.
Tags: gpu database olap visualization sql machine-learning analytics column-store columnar-database

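For querying MapD Core from Python, a hedged sketch using the pymapd DB-API client is shown below; the connection parameters and the `flights` table are illustrative assumptions, not part of the description above.

```python
from pymapd import connect

# Illustrative connection settings; adjust to your MapD Core deployment.
con = connect(user="mapd", password="HyperInteractive",
              host="localhost", dbname="mapd")

# Standard DB-API usage: the SQL itself executes on the GPU-backed server.
cur = con.cursor()
cur.execute("SELECT origin, COUNT(*) AS n FROM flights GROUP BY origin LIMIT 10")
for row in cur.fetchall():
    print(row)
```
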
The model is based on deep autoencoders. The code is intended to run on a GPU; the last test can take a minute or two.
Tags: deep-autoencoders collaborative-filtering deep-learning gpu recommendation-engine

DIGITS (the Deep Learning GPU Training System) is a webapp for training deep learning models. The currently supported frameworks are Caffe, Torch, and TensorFlow. Once you have installed DIGITS, visit docs/GettingStarted.md for an introductory walkthrough.
Tags: deep-learning machine-learning gpu caffe torch

The full documentation and frequently asked questions are available on the repository wiki. An introduction to the NVIDIA Container Runtime is also covered in our blog post.
Tags: nvidia-docker docker cuda gpu

Chainer is a Python-based deep learning framework aiming at flexibility. It provides automatic differentiation APIs based on the define-by-run approach (a.k.a. dynamic computational graphs) as well as object-oriented high-level APIs to build and train neural networks. It also supports CUDA/cuDNN using CuPy for high-performance training and inference. For more details on Chainer, see the documents and resources listed above and join the community on Forum, Slack, and Twitter. The stable version of Chainer is maintained separately as v3.
Tags: deep-learning neural-networks machine-learning gpu cuda cudnn numpy cupy chainer neural-network

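A minimal sketch of the define-by-run approach mentioned above (assuming Chainer is installed; the input values are illustrative):

```python
import numpy as np
import chainer
from chainer import Variable

# Define-by-run: the computational graph is recorded as ordinary Python executes.
x = Variable(np.array([[1.0, 2.0, 3.0]], dtype=np.float32))
y = chainer.functions.sum(x ** 2 + 2 * x)

# Backpropagate through the dynamically built graph.
y.backward()
print(x.grad)  # equals 2 * x + 2
```

Swapping the NumPy array for a CuPy array moves the same computation onto the GPU via CUDA/cuDNN.
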
**This project is no longer active. Please check out TensorFlow.js.** The Keras.js demos still work but are no longer updated. Run Keras models in the browser, with GPU support provided by WebGL 2. Models can be run in Node.js as well, but only in CPU mode. Because Keras abstracts away a number of frameworks as backends, the models can be trained with any backend, including TensorFlow, CNTK, etc.
Tags: deep-learning machine-learning webgl tensorflow neural-networks keras deep learning neural networks webgl2 gpu

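Since Keras.js consumes models trained with Keras in Python, a hedged sketch of that training-side step is shown below (the layer sizes and file name are illustrative; converting the saved model into Keras.js's browser format is a separate step not shown here):

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# A tiny model trained with any Keras backend (TensorFlow, CNTK, ...).
model = Sequential([
    Dense(16, activation="relu", input_shape=(4,)),
    Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# Dummy data, just enough to produce a saved model artifact.
x = np.random.rand(32, 4)
y = np.eye(3)[np.random.randint(0, 3, 32)]
model.fit(x, y, epochs=1, verbose=0)

# Save the model; Keras.js loads a converted form of this file in the browser.
model.save("model.h5")
```
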
Alacritty is the fastest terminal emulator in existence. Using the GPU for rendering enables optimizations that simply aren't possible in other emulators. Alacritty currently supports FreeBSD, Linux, macOS, and OpenBSD. Windows support is planned before the 1.0 release. Alacritty is focused on simplicity and performance. The performance goal means it should be faster than any other terminal emulator available. The simplicity goal means that it doesn't have features such as tabs or splits (which can be better provided by a window manager or terminal multiplexer) nor niceties like a GUI config editor.
Tags: terminal-emulators opengl gpu vte terminal

Squeeze-and-Excitation Networks (SENet), by Jie Hu [1], Li Shen [2], and Gang Sun [1]; Momenta [1] and University of Oxford [2].
Tags: senet caffe gpu

Alpha Pose is an accurate multi-person pose estimator; it is the first open-source system to achieve 70+ mAP (72.3 mAP) on the COCO dataset and 80+ mAP (82.1 mAP) on the MPII dataset. To match poses that correspond to the same person across frames, we also provide an efficient online pose tracker called Pose Flow. It is the first open-source online pose tracker to achieve both 60+ mAP (66.5 mAP) and 50+ MOTA (58.3 MOTA) on the PoseTrack Challenge dataset. Note: please read PoseFlow/README.md for details.
Tags: pose-estimation deep-learning iccv2017 posetracking torch computer-vision machine-learning tracking state-of-the-art gpu pytorch faster-rcnn

CatBoost is a machine learning method based on gradient boosting over decision trees. All CatBoost documentation is available here.
Tags: machine-learning decision-trees gradient-boosting gbm gbdt r kaggle gpu-computing catboost tutorial categorical-features distributed gpu coreml opensource data-science big-data

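A short, hedged sketch of the Python API for gradient boosting over decision trees with a categorical feature (the toy data and parameters are illustrative):

```python
from catboost import CatBoostClassifier

# Toy dataset: column 0 is categorical, column 1 is numeric.
X = [["red", 1.0], ["blue", 2.0], ["red", 3.0], ["green", 4.0]]
y = [1, 0, 1, 0]

# Gradient boosting over decision trees; categorical features are handled natively.
model = CatBoostClassifier(iterations=50, verbose=False)  # task_type="GPU" trains on GPU
model.fit(X, y, cat_features=[0])

print(model.predict([["red", 2.5]]))
```
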
Boost.Compute is a GPU/parallel-computing library for C++ based on OpenCL. The core library is a thin C++ wrapper over the OpenCL API and provides access to compute devices, contexts, command queues, and memory buffers.
Tags: opencl boost c-plus-plus cpp compute gpu gpgpu performance hpc

Each model is built into a separate Docker image with the appropriate Python, C++, and Java/Scala runtime libraries for training or prediction. Use the same Docker image from local laptop to production to avoid dependency surprises.
Tags: machine-learning artificial-intelligence tensorflow kubernetes elasticsearch cassandra ipython spark kafka netflixoss presto airflow pipeline jupyter-notebook zeppelin docker redis neural-network gpu microservices

turbo.js is a small library that makes it easier to perform complex calculations that can be done in parallel. The actual calculation (the kernel) is executed on the GPU, which lets you work on an array of values all at once. turbo.js is compatible with all browsers (even IE, when not using ES6 template strings) and most desktop and mobile GPUs.
Tags: glsl gpu vector simd calculations shaders gpgpu parallel webgl

📊 A simple command-line utility for querying and monitoring GPU status.
Tags: nvidia-smi gpustat pypi gpu command-line

A Python library built to empower developers to build applications and systems with self-contained deep learning and computer vision capabilities using just a few simple lines of code. Built with simplicity in mind, ImageAI supports a list of state-of-the-art machine learning algorithms for image prediction, custom image prediction, object detection, video detection, video object tracking, and image prediction training. ImageAI currently supports image prediction and training using 4 different machine learning algorithms trained on the ImageNet-1000 dataset. ImageAI also supports object detection, video detection, and object tracking using RetinaNet, YOLOv3, and TinyYOLOv3 trained on the COCO dataset. Eventually, ImageAI will provide support for wider and more specialized aspects of computer vision, including but not limited to image recognition in special environments and special fields.
Tags: artificial-intelligence machine-learning prediction image-prediction python3 offline-capable imageai artificial-neural-networks algorithm image-recognition object-detection squeezenet densenet video inceptionv3 detection gpu ai-practice-recommendations

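A hedged sketch of the object-detection usage described above (the weights file and image paths are illustrative; the RetinaNet .h5 weights must be downloaded separately):

```python
from imageai.Detection import ObjectDetection

# Configure a RetinaNet detector trained on COCO.
detector = ObjectDetection()
detector.setModelTypeAsRetinaNet()
detector.setModelPath("resnet50_coco_best_v2.0.1.h5")  # illustrative weights path
detector.loadModel()

# Detect objects in an image and write an annotated copy next to it.
detections = detector.detectObjectsFromImage(
    input_image="street.jpg", output_image_path="street_annotated.jpg"
)
for d in detections:
    print(d["name"], d["percentage_probability"])
```
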
Tel-Aviv Deep Learning Bootcamp is an intensive (and free!) 5-day program intended to teach you all about deep learning. It is a nonprofit focused on advancing data science education and fostering entrepreneurship. The Bootcamp is a prominent venue for graduate students, researchers, and data science professionals, offering a chance to study the essential and innovative aspects of deep learning. Participation is via a donation to the A.L.S ASSOCIATION for promoting research of Amyotrophic Lateral Sclerosis (ALS).
Tags: gpu nvidia docker-image machine-learning deep-learning data-science cuda-kernels kaggle-competition cuda pytorch pytorch-tutorials pytorch-tutorial bootcamp meetup kaggle kaggle-scripts pycuda

Phenomenon is a very small, low-level WebGL library that provides the essentials to deliver a high-performance experience. Its core functionality is built around the idea of moving millions of particles around using the power of the GPU. Calling the constructor returns an instance of Phenomenon.
Tags: webgl particles shaders gpu rendering low-level

⚠ Please note that while Emu 0.2.0 is quite usable, it suffers from two key issues: it does nothing to minimize CPU-GPU data transfer, and its compiler is not well tested. These can be reasons not to use Emu 0.2.0. A new version of Emu is in the works, however, with significant improvements in the language, compiler, and compile-time checker. This new version of Emu should be released some time in Q4 of 2019. But unlike OpenCL/CUDA/Halide/Futhark, Emu is embedded in Rust. This lets it take advantage of the ecosystem in ways...
Tags: emu gpu gpgpu gpu-computing gpu-acceleration gpu-programming