Multidisplay Visualization

*MDVis* is a project that consists of researching and implementing a way to increase the user's *level of interaction* with their system through the *scalable management of visualization and interactivity devices*. Using multiple displays can be a *complicated* task ...

http://mdvis.codeplex.com/


Related Projects

Parallel Programming with Microsoft .NET

  •    CSharp

Code samples for the patterns & practices book on design patterns for parallel programming, Parallel Programming with Microsoft .NET.

Parallel Programming with Microsoft Visual C++

  •    C++

Code samples for the patterns & practices book on design patterns for parallel programming, Parallel Programming with Microsoft Visual C++.
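
The book's patterns are built on the Parallel Patterns Library (PPL) that ships with Visual C++. As a rough, hedged illustration of the style it teaches (not one of the book's own samples), a data-parallel loop with PPL looks like this:

```cpp
// Hedged sketch, not from the book's samples: a data-parallel loop using the
// Parallel Patterns Library (concurrency::parallel_for from <ppl.h>).
#include <ppl.h>
#include <cmath>
#include <vector>

int main() {
    std::vector<double> values(1000000, 2.0);

    // Iterations are distributed across worker threads by the ConcRT scheduler.
    concurrency::parallel_for(std::size_t(0), values.size(), [&](std::size_t i) {
        values[i] = std::sqrt(values[i]) * 3.0;
    });
}
```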

cpp-taskflow - Fast C++ Parallel Programming with Task Dependency Graphs

  •    C++

A fast C++ header-only library to help you quickly build parallel programs with complex task dependencies. Cpp-Taskflow lets you quickly build parallel dependency graphs using modern C++17. It supports both static and dynamic tasking, and is by far faster, more expressive, and easier for drop-in integration than existing libraries.
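
As a minimal, hedged sketch of the task-dependency-graph style described above (assuming the tf::Taskflow / tf::Executor interface; exact API details vary between releases):

```cpp
// Sketch of a Cpp-Taskflow dependency graph: A runs first, B and C run in
// parallel after A, and D runs once both B and C have finished.
#include <taskflow/taskflow.hpp>
#include <iostream>

int main() {
    tf::Executor executor;
    tf::Taskflow taskflow;

    auto [A, B, C, D] = taskflow.emplace(
        [] { std::cout << "A\n"; },
        [] { std::cout << "B\n"; },
        [] { std::cout << "C\n"; },
        [] { std::cout << "D\n"; }
    );

    A.precede(B, C);   // A -> B, A -> C
    B.precede(D);      // B -> D
    C.precede(D);      // C -> D

    executor.run(taskflow).wait();
}
```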

Pydusa - Parallel Programming in Python

  •    C

Pydusa is a package for parallel programming using Python. It contains a module for doing MPI programming in Python. We have added parallel solver packages such as Parallel SuperLU for solving sparse linear systems.

Concurrent Programming Library

  •    

Concurrent Programming Library lets you develop parallel programs using the .NET Framework 2.0 and above. It includes implementations of various parallel algorithms, thread-safe collections, and patterns.


MPJ Express - Parallel Programming in Java

  •    Java

MPJ Express is an open source Java message passing library that allows application developers to write and execute parallel applications for multicore processors and compute clusters/clouds. It allows writing parallel Java applications using an MPI-like API.

Practical Parallel and Concurrent Programming

  •    

Eight two-week units of courseware (slides, lecture notes, samples, tools) for teaching how to program parallel/concurrent applications at a high level using Microsoft's Parallel Extensions to the .NET Framework.

future - :rocket: R package: future: Unified Parallel and Distributed Processing in R for Everyone

  •    R

The purpose of the future package is to provide a very simple and uniform way of evaluating R expressions asynchronously using various resources available to the user. In programming, a future is an abstraction for a value that may be available at some point in the future. The state of a future can either be unresolved or resolved. As soon as it is resolved, the value is available instantaneously. If the value is queried while the future is still unresolved, the current process is blocked until the future is resolved. It is possible to check whether a future is resolved or not without blocking. Exactly how and when futures are resolved depends on what strategy is used to evaluate them. For instance, a future can be resolved using a sequential strategy, which means it is resolved in the current R session. Other strategies may be to resolve futures asynchronously, for instance, by evaluating expressions in parallel on the current machine or concurrently on a compute cluster.
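
The resolved/unresolved behaviour described above is the general futures abstraction rather than anything specific to R. A short analogue in C++ (using std::async/std::future, not the R package) shows the same three operations: launch, non-blocking readiness check, blocking query.

```cpp
// C++ analogue of the future concept: asynchronous evaluation, a non-blocking
// readiness check, and a blocking query for the resolved value.
#include <chrono>
#include <future>
#include <iostream>
#include <thread>

int main() {
    // Launch an asynchronous computation; the future starts out unresolved.
    std::future<int> f = std::async(std::launch::async, [] {
        std::this_thread::sleep_for(std::chrono::milliseconds(200));
        return 42;
    });

    // Check whether it is resolved without blocking.
    bool resolved = f.wait_for(std::chrono::seconds(0)) == std::future_status::ready;
    std::cout << "resolved yet? " << std::boolalpha << resolved << '\n';

    // Querying the value blocks until the future is resolved.
    std::cout << "value: " << f.get() << '\n';
}
```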

Parallel Runtime Library

  •    

Parallel Runtime Library is an optimized library that provides easy-to-use, high-performance parallel computing. It provides an effective parallel runtime, concurrent data structures, task and data parallelism, and producer/consumer and agent models.

ArrayFire - Parallel Computing Library

  •    C++

ArrayFire is a high performance software library for parallel computing with an easy-to-use API. Its array based function set makes parallel programming simple. ArrayFire's multiple backends (CUDA, OpenCL and native CPU) make it platform independent and highly portable. A few lines of code in ArrayFire can replace dozens of lines of parallel computing code, saving you valuable time and lowering development costs.
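
As a hedged sketch of what those "few lines" look like (assuming the af:: C++ API; backend selection and build setup are omitted):

```cpp
// Array-based operations run in parallel on whichever backend is active
// (CUDA, OpenCL or CPU); no explicit threads or kernels are written.
#include <arrayfire.h>

int main() {
    af::array A = af::randu(1000, 1000);   // random 1000x1000 matrix on the device
    af::array B = af::matmul(A, A.T());    // parallel matrix multiply
    af::array C = af::sin(B) + 0.5f;       // element-wise ops are implicitly parallel
    float total = af::sum<float>(C);       // reduction back to a host scalar
    af_print(C(af::seq(3), af::seq(3)));   // print a 3x3 corner of the result
    (void)total;
    return 0;
}
```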

Vc - SIMD Vector Classes for C++

  •    C++

Recent generations of CPUs, and GPUs in particular, require data-parallel codes for full efficiency. Data parallelism requires that the same sequence of operations is applied to different input data. CPUs and GPUs can thus reduce the necessary hardware for instruction decoding and scheduling in favor of more arithmetic and logic units, which execute the same instructions synchronously. On CPU architectures this is implemented via SIMD registers and instructions. A single SIMD register can store N values and a single SIMD instruction can execute N operations on those values. On GPU architectures N threads run in perfect sync, fed by a single instruction decoder/scheduler. Each thread has local memory and a given index to calculate the offsets in memory for loads and stores. Current C++ compilers can do automatic transformation of scalar codes to SIMD instructions (auto-vectorization). However, the compiler must reconstruct an intrinsic property of the algorithm that was lost when the developer wrote a purely scalar implementation in C++. Consequently, C++ compilers cannot vectorize any given code to its most efficient data-parallel variant. Especially larger data-parallel loops, spanning over multiple functions or even translation units, will often not be transformed into efficient SIMD code.
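
As a hedged sketch of the explicit data-parallel style Vc enables (assuming the Vc 1.x vector types; the SIMD width float_v::Size depends on the target ISA, e.g. 8 floats with AVX):

```cpp
// Explicit SIMD: each loop iteration applies the same operations to
// float_v::Size floats at once, with a scalar loop for the remainder.
#include <Vc/Vc>
#include <vector>

void scale_add(std::vector<float>& x, const std::vector<float>& y, float a) {
    using Vc::float_v;
    std::size_t i = 0;
    for (; i + float_v::Size <= x.size(); i += float_v::Size) {
        float_v vx, vy;
        vx.load(&x[i], Vc::Unaligned);   // one SIMD load
        vy.load(&y[i], Vc::Unaligned);
        vx = a * vx + vy;                // N multiplies and N adds per instruction
        vx.store(&x[i], Vc::Unaligned);  // one SIMD store
    }
    for (; i < x.size(); ++i)            // scalar tail
        x[i] = a * x[i] + y[i];
}
```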

GPdotNET - artificial intelligence tool

  •    DotNet

GPdotNET is an artificial intelligence tool for applying Genetic Programming and Genetic Algorithms to the modeling and optimization of various engineering problems.

futhark - :boom::computer::boom: A data-parallel functional programming language

  •    Haskell

Futhark is a purely functional data-parallel programming language. Its optimising compiler typically produces very performant GPU code. The language and compiler are developed at DIKU at the University of Copenhagen, originally as part of the HIPERFIT centre. Although still under heavy development, Futhark is already useful for practical high-performance programming. For more information, see the website.

.NET Engine for Parallel Multitasked Applications.

  •    DotNet

Nepma can control the execution of parallel or sequential tasks using a multithreaded approach. It can group tasks and insert pauses between them according to parameters defined by the developer. It was initially designed to automate redundant tasks originally executed by human h...

MPAPI - Parallel and Distributed Applications Framework

  •    DotNet

Message Passing API (MPAPI) is a framework that enables programmers to easily write parallel as well as distributed software systems without having to use standard thread synchronization techniques like locks, monitors, semaphores, mutexes and volatile memory. It is written in...

StratoSphere - Cloud Computing Framework for Big Data Analytics

  •    Java

The Stratosphere System is an open-source cluster/cloud computing framework for Big Data analytics. It comprises an extensible higher-level language (Meteor) for quickly composing queries for common and recurring use cases; a parallel programming model (PACT, an extension of MapReduce) for running user-defined operations; and an efficient massively parallel runtime (Nephele) for fault-tolerant execution of acyclic data flows.

WebMonkeys - Massively parallel GPU programming on JavaScript, simple and clean.

  •    Javascript

Allows you to spawn thousands of parallel tasks on the GPU with the simplest, dumbest API possible. It works in the browser (with browserify) and on Node.js. It is ES5-compatible and doesn't require any WebGL extension. set/get let you send data to and receive data from the GPU, and work creates a number of parallel tasks (monkeys) that can read, process and rewrite that data. The language used is GLSL 1.0, extended with array access (foo(index), usable anywhere in the source), setters (foo(index) := value, usable only at the end), and int i, a global variable holding the index of the monkey.

Parallel Programming Model

  •    

This project is currently an experiment to offer a parallel programming environment that uses a set of networked computers to run user applications via remote pthreads and object/memory management. The first phase targets Linux and C/C++ environments only.

C++ AMP: Accelerated Massive Parallelism with Microsoft Visual C++

  •    C++

Samples for the latest Microsoft Press book on programming with C++AMP using Visual Studio 2012.
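
A minimal, hedged sketch of the C++ AMP style the book covers (not one of its samples): data is wrapped in an array_view and a restrict(amp) kernel runs on the accelerator.

```cpp
// Squares a vector on the accelerator with C++ AMP (<amp.h>, Visual C++).
#include <amp.h>
#include <vector>

int main() {
    std::vector<int> data(1024);
    for (int i = 0; i < 1024; ++i) data[i] = i;

    concurrency::array_view<int, 1> av(1024, data);     // wraps host data for the GPU

    concurrency::parallel_for_each(av.extent,
        [=](concurrency::index<1> idx) restrict(amp) {  // runs once per element
            av[idx] = av[idx] * av[idx];
        });

    av.synchronize();   // copy results back into the host vector
}
```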

FastFlow: programming multi-core

  •    C

Pattern-based multi/many-core parallel programming framework