labelme - Image Polygonal Annotation with Python (polygon, rectangle, line, point and image-level flag annotation)

Labelme is a graphical image annotation tool inspired by http://labelme.csail.mit.edu. It is written in Python and uses Qt for its graphical interface.

Fig 2. VOC dataset example of instance segmentation.

https://github.com/wkentaro/labelme
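
As a quick aside (not from the project's own docs), the sketch below reads one of the per-image JSON files that labelme typically writes; key names such as "shapes", "label", "points" and "imagePath" follow the format recent labelme versions produce, but check them against your own files.

```python
import json

# Minimal sketch: read one labelme-style JSON annotation and list its shapes.
# Key names ("shapes", "label", "points", "shape_type", "imagePath") match the
# format recent labelme versions write, but verify against your own files.
with open("example_annotation.json") as f:   # hypothetical file name
    ann = json.load(f)

print("image:", ann.get("imagePath"))
for shape in ann.get("shapes", []):
    label = shape["label"]
    kind = shape.get("shape_type", "polygon")  # polygon, rectangle, line, point, ...
    points = shape["points"]                   # list of [x, y] vertices
    print(f"{label} ({kind}): {len(points)} points")
```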

Related Projects

cvat - Computer Vision Annotation Tool (CVAT) is a web-based tool which helps to annotate video and images for Computer Vision algorithms

  •    Javascript

CVAT is a completely redesigned and re-implemented version of the Video Annotation Tool from Irvine, California. It is a free, online, interactive video and image annotation tool for computer vision. It is used by our team to annotate millions of objects with different properties, and many UI and UX decisions are based on feedback from a professional data annotation team. The code is released under the MIT License.

Labelbox - The most versatile data labeling platform for training expert AI.

  •    TypeScript

Labelbox is a data labeling tool that's purpose-built for machine learning applications. Start labeling data in minutes using pre-made labeling interfaces, or create your own pluggable interface to suit the needs of your data labeling task. Labelbox is lightweight for single users or small teams and scales up to support large teams and massive data sets. Simple image labeling: Labelbox makes it quick and easy to do basic image classification or segmentation tasks. To get started, simply upload your data or a CSV file containing URLs pointing to your data hosted on a server, select a labeling interface, optionally invite collaborators, and start labeling.
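
For the CSV import path mentioned above, something like the sketch below would produce a file of image URLs; the single-column layout and the "imageUrl" header are assumptions rather than Labelbox's documented schema, so check the import docs before relying on it.

```python
import csv

# Hypothetical sketch: write a CSV of image URLs for bulk import.
# The header name "imageUrl" and the one-URL-per-row layout are assumptions,
# not Labelbox's documented import schema.
urls = [
    "https://example.com/data/img_0001.jpg",
    "https://example.com/data/img_0002.jpg",
]
with open("labelbox_import.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["imageUrl"])
    writer.writerows([u] for u in urls)
```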

AdaptSegNet - Learning to Adapt Structured Output Space for Semantic Segmentation, CVPR 2018 (spotlight)

  •    Python

PyTorch implementation of our method for adapting semantic segmentation from a synthetic dataset (source domain) to a real dataset (target domain). Based on this implementation, our result was ranked 3rd in the VisDA Challenge. Learning to Adapt Structured Output Space for Semantic Segmentation. Yi-Hsuan Tsai*, Wei-Chih Hung*, Samuel Schulter, Kihyuk Sohn, Ming-Hsuan Yang and Manmohan Chandraker. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018 (spotlight). (* indicates equal contribution)

robot-surgery-segmentation - Winning solution and its improvement for MICCAI 2017 Robotic Instrument Segmentation Sub-Challenge

  •    Jupyter

Here we present our winning solution for the MICCAI 2017 Endoscopic Vision Sub-Challenge: Robotic Instrument Segmentation, and demonstrate further improvement over that result. Our approach is originally based on the U-Net architecture, which we improved using state-of-the-art semantic segmentation networks known as LinkNet and TernausNet. Our results show superior performance for binary as well as multi-class robotic instrument segmentation. We believe that our methods can lay a good foundation for tracking and pose estimation in the vicinity of surgical scenes.
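
To make the U-Net family of architectures mentioned above concrete, here is a deliberately small encoder-decoder sketch in PyTorch; it is illustrative only, with arbitrary layer widths, and is not the authors' LinkNet or TernausNet models.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Illustrative two-level U-Net, not the challenge solution's networks."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.enc1, self.enc2 = conv_block(3, 32), conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)   # 64 skip + 64 upsampled channels
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)    # 32 skip + 32 upsampled channels
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                    # full resolution
        e2 = self.enc2(self.pool(e1))        # 1/2 resolution
        b = self.bottleneck(self.pool(e2))   # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                 # per-pixel class logits

logits = TinyUNet(num_classes=2)(torch.randn(1, 3, 128, 128))
print(logits.shape)  # torch.Size([1, 2, 128, 128])
```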


TernausNetV2 - TernausNetV2: Fully Convolutional Network for Instance Segmentation

  •    Jupyter

We present the network definition and weights for our second-place solution in the CVPR 2018 DeepGlobe Building Extraction Challenge. Automatic building detection in urban areas is an important task that creates new opportunities for large-scale urban planning and population monitoring. In the CVPR 2018 DeepGlobe Building Extraction Challenge, participants were asked to create algorithms able to perform binary instance segmentation of building footprints from satellite imagery. Our team finished second, and in this work we share a description of our approach, along with network weights and code sufficient for inference.

MNC - Instance-aware Semantic Segmentation via Multi-task Network Cascades

  •    Python

This Python version was re-implemented by Haozhi Qi when he was an intern at Microsoft Research. MNC is an instance-aware semantic segmentation system based on deep convolutional networks; it won first place in the 2015 COCO segmentation challenge and runs in a fraction of a second per image. We decompose the task of instance-aware semantic segmentation into related sub-tasks, which are solved by multi-task network cascades (MNC) with shared features. The entire MNC network is trained end-to-end, with error gradients flowing across the cascaded stages.

LightNet - LightNet: Light-weight Networks for Semantic Image Segmentation (Cityscapes and Mapillary Vistas Dataset)

  •    Python

This repository contains the code (in PyTorch) for "LightNet: Light-weight Networks for Semantic Image Segmentation" (underway) by Huijun Liu @ TU Braunschweig. Semantic segmentation is a significant part of the modern autonomous driving system, as an exact understanding of the surrounding scene is very important for the navigation and driving decisions of a self-driving car. Nowadays, deep fully convolutional networks (FCNs) have a very significant effect on semantic segmentation, but most of the relevant research has focused on improving segmentation accuracy rather than model computation efficiency. However, the autonomous driving system is often based on embedded devices, where computing and storage resources are relatively limited. In this paper we describe several light-weight networks based on MobileNetV2, ShuffleNet and Mixed-scale DenseNet for the semantic image segmentation task. Additionally, we introduce GAN-based data augmentation [17] (pix2pixHD), concurrent Spatial-Channel Squeeze & Excitation (SCSE) and Receptive Field Block (RFB) to the proposed networks. We measure our performance on Cityscapes pixel-level segmentation and achieve up to 70.72% class mIoU and 88.27% category mIoU. We evaluate the trade-offs between mIoU, the number of operations measured by multiply-adds (MAdd), and the number of parameters.
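
The concurrent Spatial-Channel Squeeze & Excitation (SCSE) block mentioned above is small enough to sketch. The code below uses the commonly cited formulation (a channel-wise SE gate plus a spatial SE gate, combined additively); it is not necessarily the exact variant used in this repository.

```python
import torch
import torch.nn as nn

class SCSE(nn.Module):
    """Concurrent spatial and channel squeeze & excitation (common formulation).

    Channel branch: global average pool -> two 1x1 convs -> per-channel sigmoid gate.
    Spatial branch: single 1x1 conv -> per-pixel sigmoid gate.
    The two recalibrated maps are summed; some variants take an element-wise max.
    """
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.cse = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        self.sse = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.cse(x) + x * self.sse(x)

x = torch.randn(2, 64, 32, 32)
print(SCSE(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```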

js-segment-annotator - Javascript image annotation tool based on image segmentation.

  •    Javascript

Javascript image annotation tool based on image segmentation. A browser must support HTML canvas to use this tool.

jetson-inference - Guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson

  •    C++

Welcome to our training guide for the inference and deep vision runtime library for NVIDIA DIGITS and Jetson Xavier/TX1/TX2. This repo uses NVIDIA TensorRT to efficiently deploy neural networks onto the embedded platform, improving performance and power efficiency using graph optimizations, kernel fusion, and half-precision FP16 on the Jetson.

Accord.NET - Machine learning, Computer vision, Statistics and general scientific computing for .NET

  •    CSharp

The Accord.NET project provides machine learning, statistics, artificial intelligence, computer vision and image processing methods to .NET. It can be used on Microsoft Windows, Xamarin, Unity3D, Windows Store applications, Linux or mobile.

ImageAI - A python library built to empower developers to build applications and systems with self-contained Computer Vision capabilities

  •    Python

A Python library built to empower developers to build applications and systems with self-contained Deep Learning and Computer Vision capabilities using just a few simple lines of code. Built with simplicity in mind, ImageAI supports a list of state-of-the-art machine learning algorithms for image prediction, custom image prediction, object detection, video detection, video object tracking and image prediction training. ImageAI currently supports image prediction and training using 4 different machine learning algorithms trained on the ImageNet-1000 dataset. ImageAI also supports object detection, video detection and object tracking using RetinaNet, YOLOv3 and TinyYOLOv3 trained on the COCO dataset. Eventually, ImageAI will provide support for wider and more specialized aspects of Computer Vision, including but not limited to image recognition in special environments and special fields.
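
For reference, object detection with a COCO-pretrained RetinaNet model looked roughly like the following in the ImageAI 2.x documentation; the method names and weights file here are recalled from that era, so treat them as assumptions and verify against the version you install.

```python
from imageai.Detection import ObjectDetection

# Approximate ImageAI 2.x usage for COCO-pretrained RetinaNet detection.
# Method names and the weights filename follow the docs of that era and
# should be checked against the installed version of ImageAI.
detector = ObjectDetection()
detector.setModelTypeAsRetinaNet()
detector.setModelPath("resnet50_coco_best_v2.0.1.h5")  # pretrained weights, downloaded separately
detector.loadModel()

detections = detector.detectObjectsFromImage(
    input_image="street.jpg",                # hypothetical input image
    output_image_path="street_detected.jpg"  # annotated copy written here
)
for det in detections:
    print(det["name"], det["percentage_probability"])
```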

lightnet - 🌓 Bringing pjreddie's DarkNet out of the shadows #yolo

  •    C

LightNet provides a simple and efficient Python interface to DarkNet, a neural network library written by Joseph Redmon that's well known for its state-of-the-art object detection models, YOLO and YOLOv2. LightNet's main purpose for now is to power Prodigy's upcoming object detection and image segmentation features. However, it may be useful to anyone interested in the DarkNet library. Once you've downloaded LightNet, you can install a model using the lightnet download command. This will save the models in the lightnet/data directory. If you've installed LightNet system-wide, make sure to run the command as administrator.

LabelMeAnnotationTool - Source code for the LabelMe annotation tool.

  •    Javascript

Here you will find the source code to install the LabelMe annotation tool on your server. LabelMe is an annotation tool written in Javascript for online image labeling. The advantage with respect to traditional image annotation tools is that you can access the tool from anywhere, and people can help you annotate your images without having to install or copy a large dataset onto their computers. You can download a zip file of the source code directly.

TuSimple-DUC - Understanding Convolution for Semantic Segmentation

  •    Python

By Panqu Wang, Pengfei Chen, Ye Yuan, Ding Liu, Zehua Huang, Xiaodi Hou, and Garrison Cottrell. This repository is for Understanding Convolution for Semantic Segmentation (WACV 2018), which achieved state-of-the-art results on the CityScapes, PASCAL VOC 2012, and KITTI Road benchmarks.

video-classification-3d-cnn-pytorch - Video classification tools using 3D ResNet

  •    Python

This is PyTorch code for video (action) classification using 3D ResNets trained by this code. The 3D ResNet is trained on the Kinetics dataset, which includes 400 action classes. The code takes videos as inputs and, in score mode, outputs class names and predicted class scores for each 16-frame clip. In feature mode, it outputs 512-dimensional features (after global average pooling) for each 16-frame clip. A Torch (Lua) version of this code is available here.
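
As a rough analogue of the feature mode described above (a 512-dimensional, globally average-pooled feature per 16-frame clip), the sketch below uses torchvision's 3D ResNet-18; it is not this repository's code or its Kinetics-pretrained checkpoints.

```python
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

# Sketch only: torchvision's 3D ResNet-18, not the repo's Kinetics-trained models.
model = r3d_18()               # randomly initialized; load your own weights as needed
model.fc = nn.Identity()       # drop the classifier, keep the 512-dim pooled features
model.eval()

clip = torch.randn(1, 3, 16, 112, 112)   # (batch, channels, 16 frames, height, width)
with torch.no_grad():
    features = model(clip)
print(features.shape)  # torch.Size([1, 512])
```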

food-101-keras - Food Classification with Deep Learning in Keras / Tensorflow

  •    Jupyter

If you are reading this on GitHub, please follow the link below to view the live demo on my blog. Convolutional Neural Networks (CNNs), a technique within the broader Deep Learning field, have been a revolutionary force in Computer Vision applications, especially in the past half-decade or so. One main use case is image classification, e.g. determining whether a picture shows a dog or a cat.
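
A minimal Keras transfer-learning setup for this kind of image classification might look like the sketch below; it is a generic example (MobileNetV2 backbone, two classes) rather than the blog post's actual Food-101 pipeline.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2

# Generic sketch of CNN transfer learning in Keras, not the post's Food-101 model.
base = MobileNetV2(include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained backbone, train only the new head

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(2, activation="softmax"),  # e.g. dog vs. cat, as in the text
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # dataset pipelines omitted
```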

vatic - Efficiently Scaling Up Video Annotation with Crowdsourced Marketplaces. IJCV 2012

  •    HTML

VATIC is an online video annotation tool for computer vision research that crowdsources work to Amazon's Mechanical Turk. Our tool makes it easy to build massive, affordable video data sets. Note: VATIC has only been tested on Ubuntu with the Apache 2.2 HTTP server and a MySQL server. This document describes installation on that platform; however, it should work on any operating system and with any server.