
smart-mirror - The fairest of them all. A DIY voice controlled smart mirror with IoT integration.

  •    JavaScript

A voice-controlled life automation hub, most commonly powered by the Raspberry Pi. There is a live chat for getting help and discussing mirror-related issues: https://discord.gg/EMb4ynW. Usually a few folks are hanging around in the lobby, but if there aren't, you are probably better off filing an issue.

Porcupine - On-device wake word detection engine powered by deep learning.

  •    C

Try out Porcupine using its interactive web demo (you will need a working microphone), or by downloading its Android demo application. The demo application allows you to test Porcupine on a variety of wake words in any environment.

mycroft-skills - A repository for sharing and collaboration for third-party Mycroft skills development

  •    HTML

The official home of Skills for the Mycroft ecosystem. These Skills are written by both the MycroftAI team and others within the Community. If you are building Skills, please ensure that you use the Meta Editor for your README.md file, as the Skills list is generated by parsing the README.md files.

alan-sdk-flutter - Voice assistant SDK for Flutter by Alan AI lets you quickly build a voice assistant or chatbot for your app

  •    Ruby

Quickly add voice to your app with the Alan Platform. Create an in-app voice assistant to enable human-like conversations and provide a personalized voice experience for every user. The platform also provides a powerful web-based IDE where you can write, test, and debug dialog scenarios for your voice assistant or chatbot.
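
Those dialog scenarios are ordinary JavaScript scripts edited in the web IDE. Below is a minimal sketch of what one might look like, assuming the intent()/play() scripting primitives and slot syntax described in Alan AI's documentation; the phrases and the command object are made up for illustration.

```javascript
// Hedged sketch of an Alan dialog scenario; intent(), play(), and the
// $(NAME value|value) slot syntax are assumed from Alan AI's documentation.

intent('What can I do here?', p => {
    // Answer with synthesized speech.
    p.play('You can ask me to open a screen or search the catalog.');
});

intent('Open the $(SCREEN settings|profile|orders) screen', p => {
    // Send a command object for the in-app handler to act on (hypothetical shape).
    p.play({ command: 'navigate', screen: p.SCREEN.value });
    p.play(`Opening ${p.SCREEN.value}.`);
});
```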

alan-sdk-ios - Voice assistant SDK for iOS by Alan AI lets you quickly build a voice assistant or chatbot for your app written in Swift or Objective-C

  •    Objective-C

Quickly add voice to your app with the Alan Platform. Create an in-app voice assistant to enable human-like conversations and provide a personalized voice experience for every user. The platform also provides a powerful web-based IDE where you can write, test, and debug dialog scenarios for your voice assistant or chatbot.

sonus - /so.nus/ STT (speech to text) for Node with offline hotword detection

  •    JavaScript

Sonus lets you quickly and easily add a VUI (Voice User Interface) to any hardware or software project. Just like Alexa, Google Now, and Siri, Sonus is always listening offline for a customizable hotword. Once that hotword is detected, your speech is streamed to the cloud recognition service of your choice and the results are returned to you. Generally, running npm install should suffice; this module, however, also requires you to install SoX.
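
For reference, a minimal usage sketch following the pattern in the sonus README; the hotword model file, the keyfile path, and the exact Google Cloud Speech client setup are assumptions here, so check the project's README for the current API.

```javascript
// Listen offline for a hotword, then stream speech to Google Cloud Speech.
const Sonus = require('sonus');
const speech = require('@google-cloud/speech');

// Cloud recognizer used after the hotword fires (credentials path is a placeholder).
const client = new speech.SpeechClient({ keyFilename: './keyfile.json' });

// Offline hotword model; the file name and hotword are placeholders.
const hotwords = [{ file: 'resources/sonus.pmdl', hotword: 'sonus' }];

const sonus = Sonus.init({ hotwords, language: 'en-US' }, client);
Sonus.start(sonus);

sonus.on('hotword', (index, keyword) => console.log('Hotword detected:', keyword));
sonus.on('final-result', text => console.log('Recognized:', text));
```

Remember that SoX must be installed on the system (for example via your OS package manager) before the microphone stream will work.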

alan-sdk-android - Voice assistant SDK for Android by Alan AI lets you quickly build a voice assistant or chatbot for your app written in Java or Kotlin

  •    

Quickly add voice to your app with the Alan Platform. Create an in-app voice assistant to enable human-like conversations and provide a personalized voice experience for every user. The platform also provides a powerful web-based IDE where you can write, test, and debug dialog scenarios for your voice assistant or chatbot.


alan-sdk-ionic - Voice assistant SDK for Ionic by Alan AI lets you quickly build a voice assistant or chatbot for your app

  •    TypeScript

Quickly add voice to your app with the Alan Platform. Create an in-app voice assistant to enable human-like conversations and provide a personalized voice experience for every user. The platform also provides a powerful web-based IDE where you can write, test, and debug dialog scenarios for your voice assistant or chatbot.

alan-sdk-pcf - Voice assistant SDK for Power Apps by Alan AI lets you quickly build a voice assistant or chatbot for your Microsoft Power Apps project

  •    

Quickly add voice to your app with the Alan Platform. Create an in-app voice assistant to enable human-like conversations and provide a personalized voice experience for every user. The platform also provides a powerful web-based IDE where you can write, test, and debug dialog scenarios for your voice assistant or chatbot.

alan-sdk-reactnative - Voice assistant SDK for React Native by Alan AI lets you quickly build a voice assistant or chatbot for your app

  •    Ruby

Quickly add voice to your app with the Alan Platform. Create an in-app voice assistant to enable human-like conversations and provide a personalized voice experience for every user. The platform also provides a powerful web-based IDE where you can write, test, and debug dialog scenarios for your voice assistant or chatbot.

voicer - AGI-server voice recognizer for #Asterisk

  •    JavaScript

Voicer works as an AGI server: it accepts requests from Asterisk via the AGI application and runs a handler for each request. The handler tells Asterisk to record a file, sends that file to a recognition service, and receives the recognized text back. It then searches the configured data source for that text; if a match is found, the matching channel to call is returned and voicer sets the dialplan variables RECOGNITION_RESULT to SUCCESS and RECOGNITION_TARGET to the found result.
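
A rough sketch of that request flow is below. This is an illustration only, not voicer's actual API: the agi session object and the recognize()/findTarget() helpers are hypothetical stand-ins for the AGI connection, the recognition service, and the data-source lookup.

```javascript
// Hypothetical illustration of the flow described above (not voicer's real API).
async function handleCall(agi, recognize, findTarget) {
    // 1. Ask Asterisk to record the caller's speech to a file.
    const recording = await agi.recordFile('/tmp/voicer-input', 'wav');

    // 2. Send the recording to the recognition service and get text back.
    const text = await recognize(recording);

    // 3. Search the configured data source for a match.
    const target = findTarget(text);

    // 4. Report the outcome to the dialplan via channel variables.
    if (target) {
        await agi.setVariable('RECOGNITION_RESULT', 'SUCCESS');
        await agi.setVariable('RECOGNITION_TARGET', target);
    } else {
        // The failure value is an assumption; only SUCCESS is described above.
        await agi.setVariable('RECOGNITION_RESULT', 'FAILED');
    }
}
```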

FDSoundActivatedRecorder - Start recording when the user speaks

  •    Swift

Start recording when the user speaks. All you have to do is tell us when to start listening; we then wait for an audible noise and start recording. This is mostly useful for user speech input and the "Start talking now" prompt. First, install it by adding pod 'FDSoundActivatedRecorder', '~> 1.0.0' to your Podfile.

mycroft-precise - A lightweight, simple-to-use, RNN wake word listener

  •    Python

A lightweight, simple-to-use, RNN wake word listener. Precise is a wake word listener: as its name suggests, a wake word listener's job is to continually listen to sounds and speech around the device and activate when the sounds or speech match a wake word. Unlike other machine learning hotword detection tools, Mycroft Precise is fully open source. Take a look at a comparison here.

LightCube - A Design of 3D Dynamic Display System Based on Voice Control

  •    C

Light Cube, as a new type of naked-eye 3D display technology, can achieve naked-eye 3D display without any viewing aids. It brings a new visual experience and has become a research hot spot in organizations at home and abroad in recent years. This paper presents a 3D dynamic display system based on voice control, which addresses the shortcomings of existing light cube displays such as single-color output, low resolution, poor human-computer interaction, complex design, and high cost. The LD3320 speaker-independent speech recognition chip and an STM32F407 are used as the controller core to realize a full-color, high-order, voice-controlled light cube. The voice recognition module sends the recognition result to the STM32 over a UART serial port as a control command that selects the display animation and working mode of the light cube, and it plays background music through the speaker provided by the module. A cascaded SM16126 drive output circuit reduces system power consumption and provides a viable implementation for high-order light cube designs. Finally, tests show that the system responds quickly to voice commands, recognizes them with high accuracy, and works stably, which can make people's lives more intelligent and user-friendly.





