nVidia-modded-Inf - Modified nVidia .inf files to run drivers on all cards


This project is unofficial and is not affiliated with or supported by NVIDIA Corporation. It only supports x64 versions of Windows; if you would like to see x86 support, ask NVIDIA to extend their support or make a pull request.

https://github.com/CHEF-KOCH/nVidia-modded-Inf

Related Projects

TinyNvidiaUpdateChecker - Check for NVIDIA GPU driver updates!

  •    CSharp

The concept is simple: when launched, the application checks for new driver updates for your NVIDIA GPU, so you no longer need to waste time searching for new releases yourself. HTML Agility Pack is installed automatically when you debug the project (make sure you're running the latest version of VS2017), or you can install it manually: open the Package Manager Console and run Install-Package HtmlAgilityPack.

webdriver.sh - bash script for managing NVIDIA web drivers on macOS

  •    Shell

Bash script for managing NVIDIA's web drivers on macOS High Sierra and later with an option to set the required build number in NVDAStartupWeb.kext and NVDAEGPUSupport.kext. Installs/updates to the latest available NVIDIA web drivers for your current version of macOS.

tensorflow-gpu-install-ubuntu-16

  •    

Before you begin, you may need to disable the open-source Ubuntu NVIDIA driver called nouveau. If the nouveau driver is still loaded, do not proceed with the installation guide; troubleshoot why it is still loaded first.
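
A quick way to verify this is to check whether the nouveau module appears among the loaded kernel modules. The small Python sketch below (an illustration, not part of the guide) reads /proc/modules, which is equivalent to running lsmod and grepping for nouveau.

    # Check whether the nouveau kernel module is currently loaded (Linux only);
    # equivalent to `lsmod | grep nouveau`.
    with open("/proc/modules") as f:
        loaded = [line.split()[0] for line in f]

    if "nouveau" in loaded:
        print("nouveau is still loaded -- do not proceed; troubleshoot/blacklist it first.")
    else:
        print("nouveau is not loaded -- safe to continue with the installation guide.")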

Disable-Nvidia-Telemetry - Windows utility to disable Nvidia's telemetry services

  •    CSharp

Disable Nvidia Telemetry is a utility that allows you to disable the telemetry services Nvidia bundles with their drivers.

coriander - Build NVIDIA® CUDA™ code for OpenCL™ 1.2 devices

  •    LLVM

Build applications written in NVIDIA® CUDA™ code for OpenCL™ 1.2 devices. At a minimum you will need at least one OpenCL-enabled GPU with the appropriate OpenCL drivers installed for it. Both Linux and Mac systems stand a reasonable chance of working; other systems should ideally work too.


nvidia-update - Install nVidia drivers on macOS the easy way.

  •    Shell

The simplest way to install nVidia drivers on macOS. This script installs the best (not necessarily the latest) official nVidia web drivers for your system.

nvidia-docker - Build and run Docker containers leveraging NVIDIA GPUs

  •    Makefile

The full documentation and frequently asked questions are available on the repository wiki. An introduction to the NVIDIA Container Runtime is also covered in our blog post.
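
As a rough illustration of what running a GPU container looks like from code, here is a minimal sketch using the Docker SDK for Python; the nvidia runtime name and the nvidia/cuda image tag are assumptions based on a typical nvidia-docker 2.0 setup, so check the wiki for the invocation that matches your installation.

    # Sketch: run `nvidia-smi` inside a CUDA base image through the NVIDIA runtime.
    import docker

    client = docker.from_env()
    output = client.containers.run(
        "nvidia/cuda:9.0-base",   # any CUDA-enabled image works here (assumed tag)
        "nvidia-smi",             # prints the GPUs visible inside the container
        runtime="nvidia",         # route the container through the NVIDIA runtime
        remove=True,
    )
    print(output.decode())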

xmrig-nvidia - Monero (XMR) NVIDIA miner

  •    C++

⚠️ You must update miners to version 2.5 before April 6 due to the Monero PoW change. XMRig is a high-performance Monero (XMR) NVIDIA miner with official full Windows support.

CudaSift - A CUDA implementation of SIFT for NVidia GPUs (1.6 ms on a GTX 1060)

  •    Cuda

This is the fourth version of a SIFT (Scale Invariant Feature Transform) implementation using CUDA for GPUs from NVidia. The first version is from 2007 and GPUs have evolved since then. This version is slightly more precise and considerably faster than the previous versions and has been optimized for Kepler and later generations of GPUs. On a GTX 1060 GPU the code takes about 1.6 ms on a 1280x960 pixel image and 2.4 ms on a 1920x1080 pixel image. There is also code for brute-force matching of features that takes about 2.2 ms for two sets of around 1900 SIFT features each.

nvptx - How to: Run Rust code on your NVIDIA GPU

  •    Rust

Since 2016-12-31, rustc can compile Rust code to PTX (Parallel Thread Execution) code, which is like GPU assembly, via --emit=asm and the right --target argument. This PTX code can then be loaded and executed on a GPU. However, a few days later 128-bit integer support landed in rustc and broke compilation of the core crate for NVPTX targets (LLVM assertions). Furthermore, there was no nightly release between these two events so it was not possible to use the NVPTX backend with a nightly compiler.
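
For illustration, the sketch below drives that kind of rustc invocation from Python; the target name nvptx64-nvidia-cuda, the kernel.rs crate, and the exact flag set are assumptions, so consult the repository for the nightly toolchain and target specification it actually uses.

    # Hypothetical sketch of emitting PTX from a #![no_std] kernel crate via rustc.
    import subprocess

    subprocess.run(
        [
            "rustc",
            "--emit=asm",                      # PTX is produced through the asm emitter
            "--target", "nvptx64-nvidia-cuda", # assumed NVPTX target name
            "-O",
            "kernel.rs",                       # assumed crate exposing GPU kernel functions
            "-o", "kernel.ptx",
        ],
        check=True,
    )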

NVIDIA PerfGraph

  •    C++

A simple, cross platform performance monitoring application specifically designed to be used with nVidia's instrumented driver and the NVPerfSDK to give a graphical representation of internal GPU counters. Support for non-GPU counters is also available.

nvvl - A library that uses hardware acceleration to load sequences of video frames to facilitate machine learning training

  •    C++

NVVL (NVIDIA Video Loader) is a library that loads random sequences of video frames from compressed video files to facilitate machine learning training. It uses FFmpeg's libraries to parse and read the compressed packets from video files, and the video decoding hardware available on NVIDIA GPUs to off-load and accelerate the decoding of those packets, providing a ready-for-training tensor in GPU device memory. NVVL can additionally perform data augmentation while loading the frames: frames can be scaled, cropped, and flipped horizontally using the GPU's dedicated texture mapping units, and output can be in RGB or YCbCr color space, normalized to [0, 1] or [0, 255], and in float, half, or uint8 tensors.

Using compressed video files instead of individual frame image files significantly reduces the demands on the storage and I/O systems during training. Storing datasets as video files consumes an order of magnitude less disk space, allowing larger datasets to fit both in system RAM and on local SSDs for fast access, and fewer bytes must be read from disk during loading. Fitting on smaller, faster storage and reading fewer bytes at load time alleviates the bottleneck of retrieving data from disk, which will only get worse as GPUs get faster. For the dataset used in our example project, H.264 compressed .mp4 files were nearly 40x smaller than storing frames as .png files.
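
A hypothetical sketch of what training-time use of the PyTorch wrapper could look like is below; the class and argument names (nvvl.VideoDataset, nvvl.VideoLoader, sequence_length, device_id) are assumptions, so refer to the repository's pytorch example for the actual API.

    # Hypothetical sketch: read 16-frame training sequences straight from .mp4 files.
    import nvvl

    dataset = nvvl.VideoDataset(
        ["clips/train_000.mp4", "clips/train_001.mp4"],  # compressed video files
        sequence_length=16,   # frames per training sample (assumed argument name)
        device_id=0,          # decode on GPU 0
    )
    loader = nvvl.VideoLoader(dataset, batch_size=8, shuffle=True)

    for batch in loader:
        frames = batch["input"]   # tensor already resident in GPU memory
        # ... feed frames to the model ...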

jetson-inference - Guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson

  •    C++

Welcome to our training guide for the inference and deep vision runtime library for NVIDIA DIGITS and Jetson Xavier/TX1/TX2. This repo uses NVIDIA TensorRT to efficiently deploy neural networks onto the embedded platform, improving performance and power efficiency through graph optimizations, kernel fusion, and half-precision FP16 on the Jetson.

YanC42

  •    Java

YanC42 is a GUI configuration tool for the nVidia and ATI Linux Driver Set. It supports viewing, editing and creation of X Configuration options and environment variables.

gpu-rest-engine - A REST API for Caffe using Docker and Go

  •    C++

This repository shows how to implement a REST server for low-latency image classification (inference) using NVIDIA GPUs. It is an initial demonstration of the GRE (GPU REST Engine) software that will allow you to build your own accelerated microservices. This repository is a demo; it is not intended to be a generic solution that can accept any trained model, and code customization will be required for your use cases.
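
For a sense of how a client talks to the demo, here is a minimal sketch that posts an image to the classification endpoint; the port and the /api/classify path follow the curl example in the repository's README, but verify them against your own deployment.

    # Sketch: send an image to the locally running GRE demo and print the predictions.
    import requests

    with open("cat.jpg", "rb") as f:
        resp = requests.post("http://127.0.0.1:8000/api/classify", data=f.read())

    print(resp.json())   # top predicted classes with confidences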

bi-att-flow - Bi-directional Attention Flow (BiDAF) network is a multi-stage hierarchical process that represents context at different levels of granularity and uses a bi-directional attention flow mechanism to achieve a query-aware context representation without early summarization

  •    Python

The model has ~2.5M parameters and was trained on an NVIDIA Titan X (Pascal architecture, 2016). It requires at least 12GB of GPU RAM; if your GPU has less than 12GB, you can either decrease the batch size (performance might degrade) or use multiple GPUs (see below). Training converges at ~18k steps and took ~4s per step (i.e. ~20 hours). You can still omit them, but training will be much slower.

aind2-cnn - AIND Term 2 -- Lesson on Convolutional Neural Networks

  •    Jupyter

(Optional) If you plan to install TensorFlow with GPU support on your local machine, follow the guide to install the necessary NVIDIA software on your system. If you are using an EC2 GPU instance, you can skip this step. (Optional) If you are running the project on your local machine (and not using AWS), create (and activate) a new environment.
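
Once the NVIDIA software and TensorFlow with GPU support are installed, a quick sanity check (sketched below, using the TF 1.x device listing API) confirms that TensorFlow can actually see the GPU.

    # Sanity check: list the devices TensorFlow detects and keep only the GPUs.
    from tensorflow.python.client import device_lib

    devices = device_lib.list_local_devices()
    print([d.name for d in devices if d.device_type == "GPU"])  # expect e.g. ['/device:GPU:0']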

NiceHashMiner - NiceHash easy to use CPU&GPU Miner

  •    CSharp

Please follow us on Twitter @NiceHashMining for updates on new versions and other important information. NiceHash Miner is essentially the only tool a miner needs: there is no need to go through tons of configuration files, various mining software versions, configuration tuning, or cryptocurrency market analysis. Auto-tuning for best performance and efficiency, automatic selection, and runtime switching to the most profitable cryptocurrency algorithm are all integrated into NiceHash Miner and will give you a seamless, enjoyable, and profitable mining experience.