
There are already a fair number of books about NumPy (see bibliography), and a legitimate question is whether another book is really necessary. As you may have guessed by reading these lines, my personal answer is yes, mostly because I think there is room for a different approach, one concentrating on the migration from Python to NumPy through vectorization. There are a lot of techniques that you won't find in books; such techniques are mostly learned through experience. The goal of this book is to explain some of these techniques and to provide an opportunity to gain that experience in the process.
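The kind of Python-to-NumPy migration the book describes can be sketched with a small example (my own illustration, not taken from the book): the explicit Python loop is replaced by a single vectorized expression.

```python
import numpy as np

# Plain-Python version: sum of squared differences via an explicit loop.
def ssd_python(a, b):
    total = 0.0
    for x, y in zip(a, b):
        total += (x - y) ** 2
    return total

# Vectorized version: the loop moves into NumPy's compiled code.
def ssd_numpy(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.sum((a - b) ** 2))

a = [1.0, 2.0, 3.0]
b = [1.0, 0.0, 1.0]
assert ssd_python(a, b) == ssd_numpy(a, b) == 8.0
```

On large arrays the vectorized form is typically orders of magnitude faster, which is the payoff the book's techniques aim for.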

Tags: numpy, vectorization, book, open-access, cc-by-nc-sa

SIMD (Single Instruction, Multiple Data) is a feature of microprocessors that has been available for many years. SIMD instructions perform a single operation on a batch of values at once, and thus provide a way to significantly accelerate code execution. However, these instructions differ between microprocessor vendors and compilers. xsimd provides a unified means for using these features for library authors. Namely, it enables manipulation of batches of numbers with the same arithmetic operators as for single values. It also provides accelerated implementation of common mathematical functions operating on batches.

Tags: cpp, neon, c-plus-plus-11, avx, sse, simd, vectorization, avx512, mathematical-functions, simd-instructions, simd-intrinsics

text2vec is an R package which provides an efficient framework with a concise API for text analysis and natural language processing (NLP). To learn how to use this package, see text2vec.org and the package vignettes. See also the text2vec articles on my blog.

Tags: word2vec, text-mining, natural-language-processing, glove, vectorization, topic-modeling, word-embeddings, latent-dirichlet-allocation

Neanderthal is a Clojure library for fast matrix and linear algebra computations, based on the highly optimized native BLAS and LAPACK routines, for both CPU and GPU. Read the documentation at the Neanderthal web site.

Tags: clojure-library, matrix, gpu, gpu-computing, gpgpu, opencl, cuda, high-performance-computing, vectorization, api, matrix-factorization, matrix-multiplication, matrix-functions, matrix-calculations

Recent generations of CPUs, and GPUs in particular, require data-parallel code for full efficiency. Data parallelism requires that the same sequence of operations is applied to different input data. CPUs and GPUs can thus reduce the hardware needed for instruction decoding and scheduling in favor of more arithmetic and logic units, which execute the same instructions synchronously. On CPU architectures this is implemented via SIMD registers and instructions: a single SIMD register can store N values, and a single SIMD instruction can execute N operations on those values. On GPU architectures, N threads run in perfect sync, fed by a single instruction decoder/scheduler; each thread has local memory and a given index to calculate the offsets in memory for loads and stores. Current C++ compilers can automatically transform scalar code to SIMD instructions (auto-vectorization). However, the compiler must reconstruct an intrinsic property of the algorithm that was lost when the developer wrote a purely scalar implementation in C++. Consequently, C++ compilers cannot vectorize any given code to its most efficient data-parallel variant. Larger data-parallel loops in particular, spanning multiple functions or even translation units, will often not be transformed into efficient SIMD code.
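The core idea, one instruction applied to N values at once, can be sketched conceptually in Python with NumPy (an illustration of the principle only; the library described above is C++):

```python
import numpy as np

# Scalar view: one operation per element, as in a plain loop.
xs = [1.0, 2.0, 3.0, 4.0]
scalar = [x * 2.0 + 1.0 for x in xs]  # four multiplies, four adds, one at a time

# Data-parallel view: one batch operation over all N values at once,
# analogous to a single SIMD instruction acting on an N-wide register.
batch = np.array(xs) * 2.0 + 1.0

assert scalar == batch.tolist()
```

NumPy delegates such batch expressions to compiled loops, which compilers and BLAS backends can in turn map onto real SIMD instructions.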

Tags: vectorization, parallel, simd-vector, simd-instructions, simd, avx, c-plus-plus, avx512, sse, neon, cpp, portable, cpp11, cpp14, cpp17, avx2, simd-programming, data-parallel, parallel-computing

SIMD (Single Instruction, Multiple Data) is a feature of microprocessors that has been available for many years. SIMD instructions perform a single operation on a batch of values at once, and thus provide a way to significantly accelerate code execution. However, these instructions differ between microprocessor vendors and compilers. xsimd provides a unified means for using these features for library authors. Namely, it enables manipulation of batches of numbers with the same arithmetic operators as for single values. It also provides accelerated implementation of common mathematical functions operating on batches.

Tags: simd-intrinsics, c-plus-plus-14, vectorization, simd, cpp, avx, neon, sse, avx512, simd-instructions, mathematical-functions

A simple interface for converting raster images into vector graphics using AutoTrace. More details about AutoTrace can be found here.

Tags: bitmap, image, raster, vector, vectorization, convert

Pure Go implementation of the Potrace vectorization library. Supports simple SVG output generation. Also includes cgo bindings for the original Potrace library.

Tags: potrace, svg, vectorization

This example project demonstrates how the gradient descent algorithm may be used to solve a multivariate linear regression problem. Read more about it here.
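A minimal sketch of the technique in Python with NumPy (my own illustration, not this project's code): fit the weights of a linear model by repeatedly stepping against the gradient of the mean squared error.

```python
import numpy as np

def gradient_descent(X, y, lr=0.1, steps=2000):
    """Fit weights w for y ~ X @ w by batch gradient descent on MSE."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        grad = 2.0 / n * X.T @ (X @ w - y)  # gradient of mean squared error
        w -= lr * grad
    return w

# Synthetic noiseless data: y = 3*x0 - 2*x1 + 1, bias folded in as a ones column.
rng = np.random.default_rng(0)
X = np.column_stack([rng.normal(size=(100, 2)), np.ones(100)])
y = X @ np.array([3.0, -2.0, 1.0])

w = gradient_descent(X, y)
assert np.allclose(w, [3.0, -2.0, 1.0], atol=1e-3)
```

The "multivariate" part is simply that X has several feature columns; the same update rule covers them all at once because the gradient is computed as one matrix expression.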

Tags: linear-regression, gradient-descent, multivariate, machine-learning, vectorization

Formats and cleans your data to get it ready for machine learning!
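Two of the preparation steps the tags below mention, min-max normalization and one-hot encoding, can be sketched in plain Python/NumPy (a generic illustration, not this project's API):

```python
import numpy as np

# Min-max normalization: rescale each feature column to the [0, 1] range.
def min_max_normalize(X):
    X = np.asarray(X, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # avoid division by zero on constant columns
    return (X - lo) / span

# One-hot encoding: turn a categorical column into 0/1 indicator columns.
def one_hot(labels):
    cats = sorted(set(labels))
    return [[1 if v == c else 0 for c in cats] for v in labels]

X = [[1.0, 10.0], [2.0, 30.0], [3.0, 20.0]]
assert min_max_normalize(X).tolist() == [[0.0, 0.0], [0.5, 1.0], [1.0, 0.5]]
assert one_hot(["b", "a", "b"]) == [[0, 1], [1, 0], [0, 1]]
```

Normalization keeps features on comparable scales so that gradient-based learners converge evenly, and one-hot encoding lets numeric models consume categorical data.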

Tags: neural-network, machine-learning, data-formatting, normalization, min-max-normalization, min-max-normalizing, brain.js, automated-machine-learning, bestbrain, data-science, kaggle, scikit-learn, sklearn, scikit-neuralnetworks, lasagne, nolearn, nolearn.lasagne, data-cleaning, data-munging, data-preparation, imputing-missing-values, filling-in-missing-values, dataset, data-set, training, testing, random-forest, vectorization, categorization, one-hot-encoding, dictvectorizer, preprocessing, feature-selection, feature-engineering

This project is experimental and the APIs are not considered stable. Only Python >= 3.6 is officially supported, but older versions of Python likely work as well.
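The idea behind property-based (generative) testing can be sketched by hand in Python: generate many random inputs and check that an invariant holds for all of them. This is a conceptual illustration only; the project itself builds on the hypothesis library, whose API is not shown here.

```python
import numpy as np

# Property: sorting an already-sorted array changes nothing (idempotence).
def prop_sort_idempotent(a):
    once = np.sort(a)
    return np.array_equal(np.sort(once), once)

# Generative testing by hand: draw many random inputs, including edge cases
# like the empty array, and check the property for each.
rng = np.random.default_rng(42)
for _ in range(200):
    shape = int(rng.integers(0, 20))
    a = rng.normal(size=shape)
    assert prop_sort_idempotent(a), f"property failed for {a!r}"
```

Libraries like hypothesis automate the input generation and additionally shrink failing inputs to minimal counterexamples, which a hand-rolled loop does not.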

Tags: sandbox, numpy, xarray, property-based-testing, testing-tools, vectorization, hypothesis, generative-testing