A set of Vamp plugins for audio feature extraction using the aubio library. Building the plugins requires Python, git, and a C++ compiler.
https://aubio.org/vamp-aubio-plugins
Tags | aubio vamp-plugins tempo-tracking tempo-detection mfcc beat-detection beat-tracking tempo beat onset onset-detection audio music music-information-retrieval analysis |
Implementation | C++ |
License | GPL |
Platform |
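As a rough illustration of how the compiled plugins might be driven from Python, the sketch below uses the vamp host package together with soundfile to run the aubio onset detector over a file. The plugin key "vamp-aubio:aubioonset", the file name and the mono mixdown are assumptions for the example, not something prescribed by the project.

```python
# Hypothetical sketch: run one of the aubio Vamp plugins from Python via the
# "vamp" host module (pip install vamp) and soundfile for audio I/O.
# The plugin key below is assumed; adjust it to the key your Vamp host reports.
import soundfile as sf
import vamp

data, rate = sf.read("example.wav")   # placeholder file name
if data.ndim > 1:
    data = data.mean(axis=1)          # mix down to mono for simplicity

result = vamp.collect(data, rate, "vamp-aubio:aubioonset")
for event in result["list"]:          # sparse output: one entry per detected onset
    print("onset at", event["timestamp"])
```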
aubio is a library to label music and sounds. It listens to audio signals and attempts to detect events: for instance, when a drum is hit, at which frequency a note is played, or at what tempo a rhythmic melody runs. Its features include segmenting a sound file before each of its attacks, performing pitch detection, tapping the beat and producing MIDI streams from live audio.
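A minimal sketch of the kind of analysis described above, using aubio's Python bindings; the file name, window size and hop size are arbitrary placeholder values.

```python
# Minimal sketch using aubio's Python bindings: read a file hop by hop,
# report onsets and estimate the tempo.
import aubio

win_s, hop_s = 1024, 512
src = aubio.source("example.wav", hop_size=hop_s)            # placeholder file
onset = aubio.onset("default", win_s, hop_s, src.samplerate)
tempo = aubio.tempo("default", win_s, hop_s, src.samplerate)

while True:
    samples, read = src()             # next hop of mono samples
    if onset(samples):
        print("onset at %.3f s" % onset.get_last_s())
    tempo(samples)
    if read < hop_s:                  # end of file reached
        break

print("estimated tempo: %.1f BPM" % tempo.get_bpm())
```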
Tags | audio music analysis sound extraction annotation onset pitch beat tempo-tracking mfcc |
Onset detector for musical signals, with an emphasis on real-time onset detection for interactive music systems. It therefore aims to be a small, efficient, lightweight onset detection system that provides good-quality detection.
Essentia is an open-source C++ library for audio analysis and audio-based music information retrieval released under the Affero GPL license. It contains an extensive collection of reusable algorithms which implement audio input/output functionality, standard digital signal processing blocks, statistical characterization of data, and a large set of spectral, temporal, tonal and high-level music descriptors. The library is also wrapped in Python and includes a number of predefined executable extractors for the available music descriptors, which facilitates its use for fast prototyping and allows setting up research experiments very rapidly. Furthermore, it includes a Vamp plugin to be used with Sonic Visualiser for visualization purposes. Essentia is designed with a focus on the robustness of the provided music descriptors and is optimized in terms of the computational cost of the algorithms. The provided functionality, specifically the music descriptors included out of the box and the signal processing algorithms, is easily expandable and allows for both research experiments and development of large-scale industrial applications. If you use the example extractors (located in src/examples), or your own code employing Essentia algorithms to compute descriptors, you should be aware of possible incompatibilities when using different versions of Essentia.
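As a taste of the Python wrapper mentioned above, the following sketch loads a file and runs one of the rhythm descriptors; the file name is a placeholder and RhythmExtractor2013 is only one of the many available algorithms.

```python
# Short sketch using Essentia's Python bindings (essentia.standard).
import essentia.standard as es

audio = es.MonoLoader(filename="example.wav")()   # load and mix down to mono
bpm, beats, confidence, _, _ = es.RhythmExtractor2013(method="multifeature")(audio)
print("estimated tempo: %.1f BPM (%d beats, confidence %.2f)"
      % (bpm, len(beats), confidence))
```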
Tags | audio music dsp essentia c-plus-plus music-information-retrieval audio-analysis sound-processing |
TarsosDSP is a Java library for audio processing. Its aim is to provide an easy-to-use interface to practical music processing algorithms implemented, as simply as possible, in pure Java and without any other external dependencies. The library tries to hit the sweet spot between being capable enough to get real tasks done but compact and simple enough to serve as a demonstration of how DSP algorithms work. TarsosDSP features an implementation of a percussion onset detector and a number of pitch detection algorithms: YIN, the McLeod Pitch Method and a "Dynamic Wavelet Algorithm Pitch Tracking" algorithm. Also included are a Goertzel DTMF decoding algorithm, a time stretch algorithm (WSOLA), resampling, filters, simple synthesis, some audio effects, and a pitch shifting algorithm. To show the capabilities of the library, TarsosDSP example applications are available. Head over to the TarsosDSP release directory for freshly baked binaries and (ideally) code-smell-free, oven-fresh sources.
Qweely is a tool written in Java for simple analysis of music. In its early state Qweely is focused on tempo, meter and tonality detection. Visit the project homepage for downloads and further information.
This is the codebase for Ableton Link, a technology that synchronizes musical beat, tempo, and phase across multiple applications running on one or more devices. Applications on devices connected to a local network discover each other automatically and form a musical session in which each participant can perform independently: anyone can start or stop while still staying in time. Anyone can change the tempo, the others will follow. Anyone can join or leave without disrupting the session. Ableton Link is dual licensed under GPLv2+ and a proprietary license. If you would like to incorporate Link into a proprietary software application, please contact link-devs@ableton.com.
Extremely Lightweight Beat Detection Algorithm - BeatDetektor uses a very simple statistical model designed from scratch by myself to detect the BPM of music and provides real-time feedback useful for visualization and synchronization.
A Python library built to empower developers to build applications and systems with self-contained Deep Learning and Computer Vision capabilities using just a few lines of code. Built with simplicity in mind, ImageAI supports a list of state-of-the-art Machine Learning algorithms for image prediction, custom image prediction, object detection, video detection, video object tracking and image prediction training. ImageAI currently supports image prediction and training using 4 different Machine Learning algorithms trained on the ImageNet-1000 dataset. ImageAI also supports object detection, video detection and object tracking using RetinaNet, YOLOv3 and TinyYOLOv3 trained on the COCO dataset. Eventually, ImageAI will provide support for wider and more specialized aspects of Computer Vision, including but not limited to image recognition in special environments and special fields.
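A rough sketch of the detection workflow described above, loosely following ImageAI's RetinaNet example; the model weights file and image paths are placeholders, and the exact method names vary between ImageAI versions.

```python
# Hypothetical sketch of ImageAI object detection with a pre-trained RetinaNet
# model (weights must be downloaded separately); paths are placeholders.
from imageai.Detection import ObjectDetection

detector = ObjectDetection()
detector.setModelTypeAsRetinaNet()
detector.setModelPath("retinanet_weights.h5")
detector.loadModel()

detections = detector.detectObjectsFromImage(
    input_image="input.jpg", output_image_path="annotated.jpg")
for obj in detections:
    print(obj["name"], obj["percentage_probability"])
```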
Tags | artificial-intelligence machine-learning prediction image-prediction python3 offline-capable imageai artificial-neural-networks algorithm image-recognition object-detection squeezenet densenet video inceptionv3 detection gpu ai-practice-recommendations |
Tempo is an easy, intuitive JavaScript rendering engine that enables you to craft data templates in pure HTML. Tempo takes information encoded as JSON and renders it according to an HTML template, and it can also iterate over the members of an associative array (object).
Beets is the media library management system for obsessive-compulsive music geeks. The purpose of beets is to get your music collection right once and for all. It catalogs your collection, automatically improving its metadata as it goes. It then provides a bouquet of tools for manipulating and accessing your music.
Tags | music musicbrainz music-library cli media-management media |
Mixxx is free DJ software that gives you everything you need to perform live DJ mixes. It integrates the tools DJs need to perform creative live mixes with digital music files.
Tags | dj music sound audio-editor sound-editing sound-mixing |
WebAudioFont is a set of resources and associated technology that uses sample-based synthesis to play musical instruments in the browser. You can choose from thousands of instruments; see the Catalog. To use it, add a link to WebAudioFontPlayer.js and the instrument file, then invoke queueWaveTable.
Tags | sound drums soundfont midi player sampler wavetable music synth instrument music-composition music-player synthesizer audio audiocontext play-instruments play-sounds pitch midi-player guitar piano beat mixer distortion microtonal |
Madmom is an audio signal processing library written in Python with a strong focus on music information retrieval (MIR) tasks. The library is internally used by the Department of Computational Perception, Johannes Kepler University, Linz, Austria (http://www.cp.jku.at) and the Austrian Research Institute for Artificial Intelligence (OFAI), Vienna, Austria (http://www.ofai.at).
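A brief sketch of beat tracking with madmom's documented processors: an RNN produces a per-frame beat activation function, which a dynamic Bayesian network then decodes into beat times. The file name is a placeholder.

```python
# Beat tracking sketch with madmom: RNN beat activations decoded by a DBN.
from madmom.features.beats import RNNBeatProcessor, DBNBeatTrackingProcessor

activations = RNNBeatProcessor()("example.wav")           # per-frame activations
beats = DBNBeatTrackingProcessor(fps=100)(activations)    # beat times in seconds
print(beats[:10])
```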
Tags | audio-analysis signal-processing machine-learning music-information-retrieval numpy scipy cython |
With this library, you can build your own animoji embedded in JavaScript/WebGL applications. You do not need any specific device except a standard webcam. By default a webcam feedback image is displayed with the face detection frame. The face detection is quite robust to all lighting conditions, but the evaluation of expression can be noisy if the lighting is too directional, too weak, or if there is a strong backlight. The webcam feedback image is therefore useful for judging the quality of the input video feed.
Tags | webgl threejs svg animoji weboji webcam deep-learning face face-expression computer-vision augmented-reality emoji face-detection face-tracking real-time |
The most sophisticated background location-tracking & geofencing module with battery-conscious motion-detection intelligence for iOS and Android. The plugin's Philosophy of Operation is to use motion-detection APIs (using accelerometer, gyroscope and magnetometer) to detect when the device is moving and stationary.
Tags | gps location tracking geolocation geofencing background ecosystem:cordova cordova-android cordova-ios cordova phonegap |
A real-time face detection and tracking SDK. You put in image data (a camera stream or a single picture) and it outputs facial data. This page also includes all available packages for download.
Tags | face tracking detection emscripten web face-tracking face-detection |
By Yann Bayle (Website, GitHub) from LaBRI (Website, Twitter), Univ. Bordeaux (Website, Twitter), CNRS (Website, Twitter) and SCRIME (Website). The role of this curated list is to gather scientific articles, theses and reports that use deep learning approaches applied to music. The list is currently under construction, but feel free to contribute to the missing fields and to add other resources! To do so, please refer to the How To Contribute section. The resources provided here come from my review of the state of the art for my PhD thesis, for which an article is being written. There are already surveys on deep learning for music generation, speech separation and speaker identification. However, these surveys do not cover the music information retrieval tasks that are included in this repository.
Tags | awesome awesome-list unicorns list lists resources deeplearning deep-learning deep-neural-networks neural-network neural-networks music music-information-retrieval audio audio-processing article music-genre-classification bib machine-learning research |
The most sophisticated background location-tracking & geofencing module with battery-conscious motion-detection intelligence for iOS and Android. The plugin's Philosophy of Operation is to use motion-detection APIs (using accelerometer, gyroscope and magnetometer) to detect when the device is moving and stationary.
Tags | react-native react-component ios android background geolocation tracking geofence geofencing |
This JavaScript library detects and tracks the face in real time from the webcam video feed captured with WebRTC. It is then possible to overlay 3D content for augmented reality applications. We provide various demonstrations using the main WebGL 3D engines. We have included in this repository the release versions of the 3D engines to work with a determined version (they are in /libs/<name of the engine>/). This library is lightweight and does not include any 3D engine or third-party library. We want to keep it framework agnostic, so the outputs of the library are raw: whether a face is detected or not, the position and scale of the detected face, and the rotation Euler angles. But thanks to the featured helpers, examples and boilerplates, you can quickly deal with a higher-level context (for motion head tracking, face filters or face replacement...). We continuously add new demonstrations, so stay tuned! Also, feel free to open an issue if you have any question or suggestion.
Tags | face tracking detection snapchat 3d webgl deep-learning face-detection face-tracking threejs babylonjs webcam faceswap library picojs augmented-reality face-filters trackingjs msqrd |