
pixiedust-facebook-analysis - A Jupyter notebook that uses the Watson Visual Recognition, Natural Language Understanding and Tone Analyzer services to enrich Facebook Analytics and uses PixieDust to explore and visualize the results

  •    HTML

In this Code Pattern, we will use a Jupyter notebook to glean insights from a vast body of unstructured data. Credit goes to Anna Quincy and Tyler Andersen for providing the initial notebook design. We'll start with data exported from Facebook Analytics. We'll enrich the data with Watson’s Natural Language Understanding (NLU), Tone Analyzer and Visual Recognition.

watson-calorie-counter - A mobile app that uses Watson Visual Recognition to provide nutritional analysis of captured food images

  •    Java

In this Code Pattern, we will create a calorie counter mobile app using Apache Cordova, Node.js and Watson Visual Recognition. This mobile app extracts nutritional information from captured images of food items. Currently this mobile app only runs on Android, but can be easily ported to iOS.

watson-multimedia-analyzer - A Node app that uses Watson Visual Recognition, Speech to Text, Natural Language Understanding, and Tone Analyzer to enrich media files

  •    CSS

In this developer journey we will use Watson services to showcase how media (both audio and video) can be enriched on a timeline basis. Credit goes to Scott Graham for providing the initial application.

watson-vehicle-damage-analyzer - A server and mobile app to send pictures of vehicle damage to IBM Watson Visual Recognition for classification

  •    Javascript

In this developer code pattern, we will create a mobile app using Apache Cordova, Node.js and Watson Visual Recognition. The mobile app sends pictures of auto and motorcycle accidents and issues to a server app to be analyzed with Watson Visual Recognition. The server application uses pictures of auto accidents and other incidents to train Watson Visual Recognition to identify various classes of issues, e.g. vandalism, broken windshield, motorcycle accident, or flat tire. A developer can leverage this to create custom Visual Recognition classifiers for their own use cases. Currently the mobile app only runs on Android, but it can easily be ported to iOS.

watson-waste-sorter - Create an iOS phone application that sorts waste into three categories (landfill, recycling, compost) using a Watson Visual Recognition custom classifier

  •    Swift

In this developer code pattern, we will create a mobile app and a Python server built with Flask and Watson Visual Recognition. The mobile app sends pictures of waste and garbage to the server app to be analyzed with Watson Visual Recognition. The server application uses pictures of common trash to train Watson Visual Recognition to identify various categories of waste, e.g. recycle, compost, or landfill. A developer can leverage this to create custom Visual Recognition classifiers for their own use cases. To get started, create an IBM Cloud account and install the Cloud Foundry CLI on your machine.

Predictive-Industrial-Visual-Analysis - Predictive Industrial Visual Analysis using Watson Visual Recognition, IBM Cloud Functions and Cloudant database

  •    Javascript

Skill Level: Beginner. N.B.: All services used in this repo are Lite plans. In this code pattern, we will identify various kinds of damage to industrial equipment upon visual inspection by using machine learning classification techniques. Using Watson Visual Recognition, we will analyze each image against a trained classifier to inspect oil and gas pipelines with six identifiers - Normal, Burst, Corrosion, Damaged Coating, Joint Failure and Leak. For each image we will provide a percent match for each category, indicating how closely the image matches one of the damage identifiers or the Normal identifier. This data can then be used to build a dashboard ranking pipelines from needing immediate attention to needing none.
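The scoring-to-dashboard step described above can be sketched in a few lines. This is an illustrative reduction only, not code from the repo: the class names match the six identifiers listed, but the thresholds and the `attention_level` function are assumptions.

```python
# Illustrative sketch: reduce per-class Watson Visual Recognition scores for
# one pipeline image to an attention level. Thresholds are assumed, not taken
# from the pattern's actual implementation.

DAMAGE_CLASSES = {"Burst", "Corrosion", "Damaged Coating", "Joint Failure", "Leak"}

def attention_level(scores):
    """scores: dict mapping identifier name -> confidence in [0, 1]."""
    worst = max((s for c, s in scores.items() if c in DAMAGE_CLASSES), default=0.0)
    if worst >= 0.8:
        return "immediate attention"
    if worst >= 0.5:
        return "schedule inspection"
    return "no attention"

print(attention_level({"Normal": 0.90, "Corrosion": 0.05}))  # -> no attention
print(attention_level({"Leak": 0.92, "Normal": 0.03}))       # -> immediate attention
```

Taking the maximum damage score (rather than the Normal score) keeps a pipeline flagged even when only one damage class matches strongly.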

rainbow - Use Watson Visual Recognition and Core ML to create a Kitura-based iOS game that has a user search for a predetermined list of objects

  •    Swift

This code pattern is a timed iOS game, developed for Apple phones, that has users find items from a predetermined list of objects. It is built to showcase visual recognition with Core ML in a fun way. The project repository consists of an iOS app and a backend server, both written in the Swift programming language; the server side leverages the Kitura framework. Cloudant is used to persist user records and best times, and Push Notifications let a user know when they have been knocked off the top of the leaderboard. Our application has been published to the App Store under the name WatsonML, and we encourage folks to give it a try. It comes with a built-in model for identifying six objects: shirts, jeans, apples, plants, notebooks, and a plush bee. Our app could not have been built without fantastic pre-existing content from other IBMers: we use David Okun's Lumina project and Anton McConville's Avatar generator microservice; see the references below for more information.

cognitive-moderator-service - Create a cognitive moderator chatbot for anger detection, natural language understanding and explicit image removal

  •    Python

Make note of the service credentials when creating the services; they will be used later when creating a function. After the app is created, you will be taken to a page where you can configure the Slack app. Make note of the Verification Token, which will also be used later in the function.


Fovea - A CLI for the Google, Clarifai, Rekognition (AWS), Imagga, Watson, SightHound, and Microsoft Computer Vision APIs

  •    Python

The table below characterizes Fovea's current feature coverage. Most vendors offer broadly similar features, but their output formats differ. Where possible, Fovea uses a tabular output mode suitable for interactive shell sessions and scripts. If a particular feature is not supported by this tabular output mode, vendor-specific JSON is available instead. ✅ indicates a working feature, ❌ indicates a missing feature, and empty cells represent features not supported by a particular vendor.
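The normalization Fovea performs - many vendor JSON schemas in, one tabular format out - can be sketched roughly as follows. The JSON shape and field names here are assumptions for illustration, not Fovea's actual schema or code.

```python
# Illustrative only: flatten a vendor-style JSON label response into the kind
# of tab-separated rows a CLI like Fovea might print for shell pipelines.
# The response schema below is assumed, not any vendor's real format.

def to_rows(response):
    """Return (confidence, label) tuples sorted by descending confidence."""
    labels = response.get("labels", [])
    return sorted(((l["score"], l["name"]) for l in labels), reverse=True)

resp = {"labels": [{"name": "dog", "score": 0.97}, {"name": "grass", "score": 0.82}]}
for score, name in to_rows(resp):
    print(f"{score:.2f}\t{name}")
```

One row per label, confidence first, makes the output trivially usable with `sort`, `awk`, and friends - the property the description above attributes to the tabular mode.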

augment-visual-recognition-detection-of-low-resolution-human-faces - This code pattern uses Watson Visual Recognition, Watson Studio, and a Python notebook to demonstrate a way to detect covered faces

  •    Jupyter

In this code pattern, we will demonstrate a methodology to extend Watson Visual Recognition face detection with a strategy that detects border cases, such as blurred and covered faces, using TensorFlow Object Detection compiled in Watson Studio. NOTE: The contents of the object_detection folder were obtained from the TensorFlow Object Detection API repo, since only a few of its folders were needed rather than the entire repo.
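One common way to combine two detectors like this is to keep all boxes from the primary detector and add only those boxes from the second detector that do not overlap an existing one. The sketch below is an assumed merge strategy for illustration, not the pattern's actual code.

```python
# Sketch (assumptions, not the pattern's implementation): merge face boxes
# from two detectors, keeping a fallback detector's box only when it does not
# overlap a box the primary detector already found. Boxes are (x1, y1, x2, y2).

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def merge(primary, fallback, thresh=0.5):
    merged = list(primary)
    for box in fallback:
        if all(iou(box, p) < thresh for p in merged):
            merged.append(box)  # a face the primary detector missed
    return merged

watson = [(10, 10, 50, 50)]
tf_extra = [(12, 12, 48, 48), (100, 100, 140, 140)]  # first overlaps, second is new
print(merge(watson, tf_extra))
```

Here the overlapping TensorFlow box is discarded as a duplicate, while the non-overlapping one - e.g. a covered face Watson missed - survives the merge.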

jpetstore-kubernetes - Modernize and Extend: JPetStore on IBM Cloud Kubernetes Service

  •    Java

IBMers can access the demo script and additional collateral from here. Follow the steps below to create the IBM Cloud services and resources used in this demo. You will create a Kubernetes cluster, an instance of Watson Visual Recognition and an optional Twilio account (if you want to shop for pets using text messaging).

openwhisk-darkvisionapp - Discover dark data in videos with IBM Watson and IBM Cloud Functions

  •    Javascript

Dark Vision is a technology demonstration leveraging Cloud Functions and Watson services. If you are looking for an official, supported IBM offering, head over to the Watson Video Enrichment product, which uses Watson APIs and additional technology to enrich video assets. What if we used artificial intelligence to process our videos and tell us which one has what we're looking for, without us having to watch all of them?

openwhisk-visionapp - A sample iOS app for image tagging and face detection built with IBM Bluemix OpenWhisk

  •    Swift

Vision App is a sample iOS application that automatically tags images and detects faces by using IBM visual recognition technologies. Take a photo or select an existing picture, let the application generate a list of tags and detect people, buildings, and objects in the picture, then share the results with your network.

Visual-Recognition-Tile-Localization - A Node.js application that delivers localized image classification by tiling images and analyzing each tile with Watson Visual Recognition

  •    Javascript

The Visual-Recognition-Tile-Localization application combines the Watson Visual Recognition service with image pre-processing techniques to deliver localized image classification - for example, "show me where there is rust on the bridge". The user drags and drops an image onto the application in the browser, and the image is uploaded to the Node.js application. Once uploaded, the image is "chopped up" into smaller images (tiles) and each individual tile is analyzed by the Watson Visual Recognition service. Once complete, all results are visualized in the browser as a heatmap-like visualization, where colorization is based on the confidence scores returned by the Visual Recognition service's custom classifier.
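The "chopped up" step above amounts to covering the image with a grid of tile bounding boxes, each of which is classified separately. A minimal sketch of that tiling (illustrative only - the real app is Node.js and this function is an assumption, not its code):

```python
# Illustrative tiling step: split a width x height image into fixed-size tile
# boxes. Each box would be cropped, classified, and colored into a heatmap
# cell by its confidence score. Edge tiles are clipped to the image bounds.

def tiles(width, height, tile):
    """Return (left, top, right, bottom) boxes covering the whole image."""
    boxes = []
    for top in range(0, height, tile):
        for left in range(0, width, tile):
            boxes.append((left, top, min(left + tile, width), min(top + tile, height)))
    return boxes

# A 256x128 image cut into 128-pixel tiles yields a 2x1 grid of boxes.
print(tiles(256, 128, 128))  # -> [(0, 0, 128, 128), (128, 0, 256, 128)]
```

Because classification happens per tile, the grid of per-tile confidence scores maps directly onto the heatmap cells described above.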

cf-nodejs-c2c-demo - A demonstration of Node.js microservices and Container-to-Container networking on IBM Cloud Foundry Public

  •    Javascript

This project is a demonstration of Node.js microservices and Container-to-Container networking on IBM Cloud Foundry Public. It involves two distinct microservices, in combination with a Cloudant NoSQL database and the Watson Visual Recognition API, which together provide simple 'Guestbook' functionality to the user. To demonstrate the concept of Container-to-Container networking, the second microservice, which connects to the Watson API, is not required for the basic functionality of the application (add a guest, show all guests) and can be added separately afterwards. This helps to show that the two microservices/CF applications communicate directly with each other and that the second service only needs an internal route, thanks to the configured networking policy. The repository includes both microservices/Node.js applications, "guestbook-main" and "guestbook-watson", stored in sub-directories of the same names. "guestbook-main" is responsible for the basic functionality of the guestbook (HTML, create, list, database connection), while "guestbook-watson" is optional and handles the connection to the Watson Visual Recognition API and its results. Both applications are full Node.js REST APIs, and their root directories include README files with more information about their functionality.
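The "optional second microservice" idea above implies that the main service must degrade gracefully when the Watson-facing service is not deployed. A conceptual sketch of that behavior (hypothetical function names; the real apps are Node.js REST services, so this is not their code):

```python
# Conceptual sketch of optional enrichment: the main guestbook keeps working
# even when the Watson-facing microservice is unreachable over its internal
# route. classify_via_watson_service is a stand-in for that HTTP call.

def classify_via_watson_service(image_url):
    # Stand-in for an internal-route call to "guestbook-watson".
    raise ConnectionError("guestbook-watson not deployed")

def add_guest(guestbook, name, image_url):
    entry = {"name": name, "image": image_url, "tags": []}
    try:
        entry["tags"] = classify_via_watson_service(image_url)
    except ConnectionError:
        pass  # enrichment is optional; basic functionality is unaffected
    guestbook.append(entry)
    return entry

book = []
print(add_guest(book, "Ada", "https://example.com/ada.png"))
```

Catching the connection failure and returning an unenriched entry is what lets the second service be added (or removed) "separately afterwards" without breaking add/list.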
