voicer - AGI-server voice recognizer for #Asterisk


Voicer works as an AGI server. Asterisk hands each call to it via the AGI application, and voicer runs a handler for every request. The handler tells Asterisk to record the caller to a file, sends that file to a speech recognition service, and receives the recognized text back. It then searches a configured data source for a match; if the source contains the text, the lookup returns a channel to dial, and voicer sets the dialplan variables RECOGNITION_RESULT to SUCCESS and RECOGNITION_TARGET to the matched result (see the sketch after the dependency list below).

https://github.com/antirek/voicer

Dependencies:

ding-dong : 0.1.1
file-exists : ^1.0.0
google-speech : 0.0.2
joi : ^6.6.1
node-uuid : ^1.4.2
parse-json : ^2.0.0
q : ^1.2.0
voicer-web : 0.0.5
winston : ^0.9.0
witai-speech : 0.0.1
xml2js : ^0.4.4
yandex-speech : ^0.0.11
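
The flow reduces to a small promise chain per call. Here is a minimal sketch built on ding-dong's AGI server API, following its README example; recognize() and lookupTarget() are hypothetical stand-ins for the pluggable recognition backends (google-speech, yandex-speech, witai-speech) and the data-source lookup, and the exact AGI verb names on the context are assumptions:

    const AGIServer = require('ding-dong');

    const handler = (context) => {
      const file = '/tmp/voicer-' + Date.now();
      context.onEvent('variables')
        // Ask Asterisk to record the caller (AGI RECORD FILE).
        .then(() => context.recordFile(file, 'wav', '#', 10000))
        // Send the recording to the speech recognition service
        // (hypothetical helper standing in for voicer's recognizers).
        .then(() => recognize(file + '.wav'))
        // Search the configured data source for the recognized text
        // (hypothetical helper standing in for voicer's lookup).
        .then((text) => lookupTarget(text))
        // Hand the result back to the dialplan.
        .then((target) => context.setVariable('RECOGNITION_RESULT', 'SUCCESS')
          .then(() => context.setVariable('RECOGNITION_TARGET', target)))
        .then(() => context.end());
    };

    new AGIServer(handler).start(3000);

On the Asterisk side, the dialplan invokes the server with AGI(agi://localhost:3000) and then branches on ${RECOGNITION_RESULT}, dialing ${RECOGNITION_TARGET} on SUCCESS.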

Related Projects

Jovo Framework - Build cross-platform voice applications for Amazon Alexa and Google Home

  •    Javascript

Jovo is the first open source framework that lets you build voice apps for both Amazon Alexa and Google Assistant with only one code base. Besides cross-platform development, Jovo also offers a variety of integrations and easy prototyping capabilities.
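
As a sketch of what "one code base" means in practice (Jovo v2-style API; the package and method names here are assumptions, so check the Jovo docs for the current surface):

    const { App } = require('jovo-framework');
    const { Alexa } = require('jovo-platform-alexa');
    const { GoogleAssistant } = require('jovo-platform-googleassistant');

    const app = new App();
    app.use(new Alexa(), new GoogleAssistant());

    app.setHandler({
      LAUNCH() {
        // The same handler serves Alexa skills and Google Actions.
        this.tell('Hello from one code base!');
      },
    });

    module.exports = { app };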

sonus - :speech_balloon: /so.nus/ STT (speech to text) for Node with offline hotword detection

  •    Javascript

Sonus lets you quickly and easily add a VUI (Voice User Interface) to any hardware or software project. Just like Alexa, Google Now, and Siri, Sonus is always listening offline for a customizable hotword. Once that hotword is detected, your speech is streamed to the cloud recognition service of your choice, and you get the results back. Generally, running npm install should suffice; this module, however, also requires you to install SoX.
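
A short sketch of that flow, adapted from sonus's README (the hotword model file is a placeholder you train yourself, and the Google Cloud Speech client is just one of the streaming recognizers sonus can wrap):

    const Sonus = require('sonus');
    const speech = require('@google-cloud/speech');

    // Placeholder hotword model; train one for your own keyword.
    const hotwords = [{ file: 'resources/sonus.pmdl', hotword: 'sonus' }];
    const sonus = Sonus.init({ hotwords }, new speech.SpeechClient());

    Sonus.start(sonus);
    sonus.on('hotword', (index, keyword) => console.log('Hotword:', keyword));
    sonus.on('final-result', (result) => console.log('You said:', result));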

SpeechKITT - 🗣 A flexible GUI for Speech Recognition

  •    Javascript

Speech KITT makes it easy to add a GUI to sites using Speech Recognition. Whether you are using annyang, a different library, or webkitSpeechRecognition directly, KITT will take care of the GUI. Speech KITT provides a graphical interface for the user to start or stop Speech Recognition and see its current status. It can also help guide users on how to interact with your site using their voice, providing instructions and sample commands. It can even be used to carry a natural conversation with the user, asking questions the user can answer with their voice, and then asking follow-up questions.
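
Hooking KITT up when you are already using annyang takes only a few calls (a sketch based on the project's README; the theme URL and exact helper names are assumptions, so check the repo for the current API):

    // Use annyang as the underlying recognizer.
    SpeechKITT.annyang();

    // Style the interface with one of the bundled themes.
    SpeechKITT.setStylesheet('//cdnjs.cloudflare.com/ajax/libs/SpeechKITT/1.0.0/themes/flat.css');

    // Optionally guide the user with instructions and sample commands.
    SpeechKITT.setInstructionsText('Try saying "hello"');
    SpeechKITT.setSampleCommands(['hello', 'show me cats']);

    // Render the GUI.
    SpeechKITT.vroom();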

annyang - :speech_balloon: Speech recognition for your site

  •    Javascript

A tiny JavaScript SpeechRecognition library that lets your users control your site with voice commands. annyang has no dependencies, weighs just 2 KB, and is free to use and modify under the MIT license.
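
Getting started looks like this (adapted from annyang's README; the sample phrases are arbitrary):

    if (annyang) {
      // Map spoken phrases to handlers; ":name" captures one word,
      // "*splat" captures the rest of the utterance.
      const commands = {
        'hello': () => { alert('Hello world!'); },
        'show me *term': (term) => { console.log('Searching for', term); }
      };

      annyang.addCommands(commands);

      // Start listening; the browser will ask for microphone permission.
      annyang.start();
    }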

Asterisk - IP telephony communication product suitable for call centers

  •    C

Asterisk converts an ordinary computer into a feature-rich voice communications server. Asterisk makes it simple to create and deploy a wide range of telephony applications and services, including IP PBXs, VoIP gateways, call center ACDs, and IVR systems. It is maintained by the Debian VoIP Team.


PowerPoint Kinect Voice Control

  •    

PowerPoint Kinect Voice Control provides control of PowerPoint through voice commands from the Kinect. This allows the user to navigate a presentation using voice commands such as "Forward" and "Back". It is a small sample of how the Kinect can be used for non-gaming projects.

voice-elements - :speaker: Web Component wrapper to the Web Speech API, that allows you to do voice recognition and speech synthesis using Polymer

  •    HTML

Web Component wrapper to the Web Speech API that allows you to do voice recognition (speech to text) and speech synthesis (text to speech) using Polymer.

Voice Conference Manager

  •    Java

Voice Conference Manager uses VoiceXML and CCXML to control speech recognition, text to speech, and voice biometrics for a telephone conference service. Say the names or numbers of people and VCM places them into the call. It can be hosted on public servers.

perlbox

  •    Perl

Perlbox Voice is a voice-enabled application that brings your desktop under your command. With a single word, you can start your web browser, your favorite editor, or whatever you want. It is a voice recognition solution for Linux and Unix.

Third Hand - Use your voice to control Visual Studio

  •    DotNet

Third Hand is a Visual Studio add-in that allows you to use your voice to control Visual Studio. Rather than navigating the toolbars or using the menu, say "Solution Explorer" or "Properties" and those windows will open for you. This keeps your hands on the keyboard, hopefully making you more productive.

mycroft-core - Mycroft Core, the Mycroft Artificial Intelligence platform.

  •    Python

Mycroft is a hackable open source voice assistant. Its dev_setup.sh script installs dependencies and sets up a virtualenv. If you are running in an environment besides Ubuntu/Debian, Arch, or Fedora, you may need to install packages manually as instructed by dev_setup.sh.

Smart Voice

  •    DotNet

Smart Voice lets you control Skype using your voice. It allows you to write messages, place phone calls, etc. The application was developed with cars in mind and was made for a course at my university. Currently I'm using C#/.NET, Microsoft SAPI, and the Skype API.

TensorFlow-iOS-Example - Source code for my blog post "Getting started with TensorFlow on iOS"

  •    Swift

This is the code that accompanies my blog post Getting started with TensorFlow on iOS. It uses TensorFlow to train a basic binary classifier on the Gender Recognition by Voice and Speech Analysis dataset.

Google-Actions-Java-SDK - Unofficial Google Actions Java SDK - for Android engineers and all Java lovers

  •    Java

The official Google Actions SDK is written in Node.js, but in many situations voice interfaces like Google Home or Google Assistant will extend or replace mobile apps. If you are an old-fashioned Android engineer and most of your code is already written in Java, why not reuse it and build the voice extension to your app yourself? That is the main reason for the Google Actions Java SDK: enabling as many developers as possible to build their brilliant ideas for Google Assistant and Home. Currently this is just a working proof of concept, which means there is no documentation, no fixed interface, not many unit tests, and many, many other gaps.

Porcupine - On-device wake word detection engine powered by deep learning.

  •    C

Try out Porcupine using its interactive web demo (you need a working microphone), or by downloading its Android demo application, which lets you test Porcupine on a variety of wake words in any environment.

Google Speech Recognition Example

  •    

Google Speech Recognition contains a working example of an application that uses the Google speech recognition API. The app contains all the necessary DLLs to record, decode, and send your voice request to the Google service and receive a text representation of what you've said. It's developed i...

vivo

  •    

vivo (Vietnamese Voice) - a Vietnamese voice recognition project. Contact: thaihung.bkhn@gmail.com, http://eking.vn

vnv

  •    

VNV (Vietnamese Voice) - a Vietnamese voice recognition project. Contact: thaihung.bkhn@gmail.com, http://eking.vn

sdl_core - SmartDeviceLink In-Vehicle Software and Sample HMI

  •    C++

SmartDeviceLink (SDL) is a standard set of protocols and messages that connect applications on a smartphone to a vehicle head unit. This messaging enables a consumer to interact with their application using common in-vehicle interfaces such as a touch screen display, embedded voice recognition, steering wheel controls, and various vehicle knobs and buttons. There are three main components that make up the SDL ecosystem.

The Core component of SDL runs on a vehicle's computing system (head unit). Core's primary responsibility is to pass messages between connected smartphone applications and the vehicle HMI, and to pass notifications from the vehicle to those applications. It can connect a smartphone to a vehicle's head unit via a variety of transport protocols such as Bluetooth, USB, Android AOA, and TCP. Once a connection is established, Core discovers compatible applications and displays them to the driver for interaction via voice or display. The Core component is implemented into the vehicle HMI based on the integration guidelines above, and is configured to follow a set of policies defined in a policy database and updated by a policy server. The messaging between a connected application and Core is defined by the Mobile API, and the messaging between SDL Core and the vehicle is defined by the HMI API.

mycroft-skills - A repository for sharing and collaboration for third-party Mycroft skills development

  •    HTML

The official home of Skills for the Mycroft ecosystem. These Skills are written both by the MycroftAI team and by others in the community. If you are building Skills, please ensure that you use the Meta Editor for your README.md file, since the Skills list is generated by parsing the README.md files.