SFA3D - Super Fast and Accurate 3D Object Detection based on 3D LiDAR Point Clouds (The PyTorch implementation)


The instructions for setting up a virtual environment are here. Download the 3D KITTI detection dataset from here.

https://github.com/maudzung/Super-Fast-Accurate-3D-Object-Detection
https://github.com/maudzung/SFA3D
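Once the dataset is downloaded, a quick sanity check is to load one Velodyne scan: each KITTI scan is a flat binary file of float32 values in (x, y, z, reflectance) order. A minimal NumPy sketch; the file path is only illustrative of a typical extracted layout:

```python
import numpy as np

def load_kitti_velodyne(bin_path):
    """Load one KITTI Velodyne scan as an (N, 4) array of x, y, z, reflectance."""
    points = np.fromfile(bin_path, dtype=np.float32)
    return points.reshape(-1, 4)

# Example path; adjust to wherever you extracted the dataset.
scan = load_kitti_velodyne("dataset/kitti/training/velodyne/000000.bin")
print(scan.shape)  # e.g. (115384, 4)
```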

Related Projects

AB3DMOT - (IROS 2020, ECCVW 2020) Official Python Implementation for "3D Multi-Object Tracking: A Baseline and New Evaluation Metrics"

  •    Python

3D multi-object tracking (MOT) is an essential component technology for many real-time applications such as autonomous driving or assistive robotics. However, recent works on 3D MOT tend to focus on developing accurate systems while giving less regard to computational cost and system complexity. In contrast, this work proposes a simple yet accurate real-time baseline 3D MOT system. We use an off-the-shelf 3D object detector to obtain oriented 3D bounding boxes from the LiDAR point cloud. Then, a combination of a 3D Kalman filter and the Hungarian algorithm is used for state estimation and data association. Although our baseline system is a straightforward combination of standard methods, we obtain state-of-the-art results. To evaluate our baseline system, we propose a new 3D MOT extension to the official KITTI 2D MOT evaluation along with two new metrics. Our proposed baseline method establishes new state-of-the-art performance on 3D MOT for KITTI, improving the 3D MOTA from the 72.23 of prior art to 76.47. Surprisingly, by projecting our 3D tracking results to the 2D image plane and comparing against published 2D MOT methods, our system places 2nd on the official KITTI leaderboard. Also, our proposed 3D MOT method runs at a rate of 214.7 FPS, 65 times faster than the state-of-the-art 2D MOT system.
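To make the association step concrete, here is a minimal sketch of Hungarian-style matching between tracks and detections. It uses centroid distance as the cost for brevity (AB3DMOT itself uses 3D IoU); the function names and the gating threshold are illustrative, not taken from the repository:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_centers, det_centers, max_dist=2.0):
    """Match (M, 3) track centroids to (N, 3) detection centroids.

    Returns (matches, unmatched_tracks, unmatched_dets). Euclidean distance
    stands in for the cost here; AB3DMOT uses 3D IoU instead.
    """
    if len(track_centers) == 0 or len(det_centers) == 0:
        return [], list(range(len(track_centers))), list(range(len(det_centers)))
    cost = np.linalg.norm(track_centers[:, None, :] - det_centers[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
    matched_t = {r for r, _ in matches}
    matched_d = {c for _, c in matches}
    unmatched_tracks = [i for i in range(len(track_centers)) if i not in matched_t]
    unmatched_dets = [j for j in range(len(det_centers)) if j not in matched_d]
    return matches, unmatched_tracks, unmatched_dets
```

Matched pairs would then update their 3D Kalman filters, while unmatched detections spawn new tracks and unmatched tracks age out.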

OpenPCDet - OpenPCDet Toolbox for LiDAR-based 3D Object Detection.

  •    Python

OpenPCDet is a clear, simple, self-contained open source project for LiDAR-based 3D object detection. It is also the official code release of [PointRCNN], [Part-A^2 net] and [PV-RCNN].

darknet_ros - YOLO ROS: Real-Time Object Detection for ROS

  •    C++

This is a ROS package developed for object detection in camera images. You only look once (YOLO) is a state-of-the-art, real-time object detection system. In the following ROS package you are able to use YOLO (V3) on GPU and CPU. The pre-trained model of the convolutional neural network is able to detect pre-trained classes including the data set from VOC and COCO, or you can also create a network with your own detection objects. For more information about YOLO, Darknet, available training data and training YOLO see the following link: YOLO: Real-Time Object Detection. The YOLO packages have been tested under ROS Noetic and Ubuntu 20.04. Note: We also provide branches that work under ROS Melodic, ROS Foxy and ROS2.
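For reference, detections are published as a darknet_ros_msgs/BoundingBoxes message, so a minimal Python node can consume them as below; the topic name follows the package's default configuration, so double-check it against your launch files:

```python
import rospy
from darknet_ros_msgs.msg import BoundingBoxes

def callback(msg):
    # Each box carries the class label, confidence, and pixel coordinates.
    for box in msg.bounding_boxes:
        rospy.loginfo("%s (%.2f): [%d, %d, %d, %d]",
                      box.Class, box.probability,
                      box.xmin, box.ymin, box.xmax, box.ymax)

if __name__ == "__main__":
    rospy.init_node("yolo_listener")
    rospy.Subscriber("/darknet_ros/bounding_boxes", BoundingBoxes, callback)
    rospy.spin()
```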

Det3D - A general 3D object detection codebase.

  •    Python

A general 3D object detection codebase in PyTorch. Please refer to INSTALLATION.md.


lidar_camera_calibration - ROS package to find a rigid-body transformation between a LiDAR and a camera for "LiDAR-Camera Calibration using 3D-3D Point correspondences"

  •    C++

The package is used to calibrate a LiDAR (configurable to support Hesai and Velodyne hardware) with a camera (works for both monocular and stereo). The package finds a rotation and translation that transform all the points in the LiDAR frame to the (monocular) camera frame. Please see Usage for a video tutorial. The lidar_camera_calibration/pointcloud_fusion provides a script to fuse point clouds obtained from two stereo cameras, both of which were extrinsically calibrated using a LiDAR and lidar_camera_calibration. We show the accuracy of the proposed pipeline by fusing point clouds, with near perfection, from multiple cameras kept in various positions. See Fusion using lidar_camera_calibration for results of the point cloud fusion (videos).
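The calibration result is a rigid-body transform, so applying it to a scan is a one-line matrix product in homogeneous coordinates. A minimal NumPy sketch; R and t here are placeholder values standing in for the rotation and translation the package estimates:

```python
import numpy as np

def lidar_to_camera(points_lidar, R, t):
    """Map (N, 3) LiDAR points into the camera frame: p_cam = R @ p_lidar + t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    homo = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    return (T @ homo.T).T[:, :3]

# Placeholder extrinsics; use the values produced by lidar_camera_calibration.
R = np.eye(3)
t = np.array([0.1, -0.05, 0.0])
points_cam = lidar_to_camera(np.random.rand(100, 3), R, t)
```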

frustum-pointnets - Frustum PointNets for 3D Object Detection from RGB-D Data

  •    Python

Created by Charles R. Qi, Wei Liu, Chenxia Wu, Hao Su and Leonidas J. Guibas from Stanford University and Nuro Inc. This repository is the code release for our CVPR 2018 paper (arXiv report here). In this work, we study 3D object detection from RGB-D data. We propose a novel detection pipeline that combines mature 2D object detectors with state-of-the-art 3D deep learning techniques. In our pipeline, we first build object proposals with a 2D detector running on RGB images, where each 2D bounding box defines a 3D frustum region. Then, based on the 3D point clouds in those frustum regions, we achieve 3D instance segmentation and amodal 3D bounding box estimation using PointNet/PointNet++ networks (see references at bottom).
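The frustum idea is easy to reproduce: project the points into the image with the camera intrinsics and keep only the ones whose projection falls inside a 2D detection box. A minimal sketch, assuming the points are already in the camera frame and K is the 3x3 intrinsic matrix:

```python
import numpy as np

def frustum_points(points_cam, K, box):
    """Select points whose image projection lies inside a 2D box.

    points_cam: (N, 3) points in the camera frame (z forward).
    K: (3, 3) camera intrinsic matrix.
    box: (xmin, ymin, xmax, ymax) 2D detection in pixels.
    """
    in_front = points_cam[:, 2] > 0          # keep points in front of the camera
    pts = points_cam[in_front]
    uv = (K @ pts.T).T                        # project to the image plane
    uv = uv[:, :2] / uv[:, 2:3]
    xmin, ymin, xmax, ymax = box
    inside = ((uv[:, 0] >= xmin) & (uv[:, 0] <= xmax) &
              (uv[:, 1] >= ymin) & (uv[:, 1] <= ymax))
    return pts[inside]
```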

Objectron - Objectron is a dataset of short, object-centric video clips

  •    Jupyter

Objectron is a dataset of short, object-centric video clips with pose annotations, accompanied by AR session metadata that includes camera poses, sparse point clouds, and characterization of the planar surfaces in the surrounding environment. In each video, the camera moves around the object, capturing it from different angles. The data also contain manually annotated 3D bounding boxes for each object, which describe the object's position, orientation, and dimensions. The dataset consists of 15K annotated video clips supplemented with over 4M annotated images in the following categories: bikes, books, bottles, cameras, cereal boxes, chairs, cups, laptops, and shoes. In addition, to ensure geo-diversity, the dataset was collected from 10 countries across five continents. Along with the dataset, we are also sharing a 3D object detection solution for four categories of objects: shoes, chairs, mugs, and cameras. These models are trained on this dataset and are released in MediaPipe, Google's open source framework for cross-platform customizable ML solutions for live and streaming media.
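The released MediaPipe solution exposes these models through a small Python API. A minimal usage sketch; the image path is illustrative, and the MediaPipe docs cover the remaining options:

```python
import cv2
import mediapipe as mp

# One of the four released categories: 'Shoe', 'Chair', 'Cup', 'Camera'.
with mp.solutions.objectron.Objectron(static_image_mode=True,
                                      max_num_objects=5,
                                      min_detection_confidence=0.5,
                                      model_name='Chair') as objectron:
    image = cv2.imread('chair.jpg')  # illustrative path
    results = objectron.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if results.detected_objects:
        for obj in results.detected_objects:
            # Each detection carries a 3D rotation, translation, and box landmarks.
            print(obj.rotation, obj.translation)
```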

SuMa - Surfel-based Mapping for 3d Laser Range Data (SuMa)

  •    C++

Mapping of 3D laser range data from a rotating laser range scanner, e.g., the Velodyne HDL-64E. For representing the map, we use surfels, which enable fast rendering of the map for point-to-plane ICP and loop closure detection. J. Behley, C. Stachniss. Efficient Surfel-Based SLAM using 3D Laser Range Data in Urban Environments, Proc. of Robotics: Science and Systems (RSS), 2018.
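Point-to-plane ICP, which the surfel rendering serves, minimizes the projection of each residual onto the target surfel's normal rather than the full point-to-point distance. A minimal sketch of the error term, assuming correspondences are already given:

```python
import numpy as np

def point_to_plane_error(src, dst, normals, R, t):
    """Sum of squared point-to-plane residuals.

    src, dst: (N, 3) corresponding points; normals: (N, 3) unit normals at dst.
    R, t: current rotation (3, 3) and translation (3,) estimate.
    Each residual is (R @ p + t - q) . n, i.e. the offset along the normal.
    """
    residuals = np.einsum('ij,ij->i', (src @ R.T + t) - dst, normals)
    return np.sum(residuals ** 2)
```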

kitti2bag - Convert KITTI dataset to ROS bag file the easy way!

  •    Python

This package has enjoyed significant interest from more people than I could have imagined at the beginning. I'm really glad to see that. I see many PRs and issues being raised, but my day job does not allow me to push this repository further. In order to allow this package to prosper, I'm opening it up to the community. I'm more than happy to add you as a collaborator to this repository; just send me an email. By the way, to ensure we maintain the quality of the repo, you are required to get PR approval from at least one other collaborator. You can use Gitter to communicate with others.

3d-bat - 3D Bounding Box Annotation Tool (3D-BAT) Point cloud and Image Labeling

  •    Javascript

1. Draw a bounding box in the camera image.
2. Choose the current bounding box by activating it.
3. Move it in image space or change its size by drag and drop.
4. Switch into PCD MODE (bird's-eye view).
5. Place a 3D label in the 3D scene at the corresponding 2D label.
6. Adjust the label:
   1. Drag and drop directly on the label to change its position or size.
   2. Use the control bar to change position and size (horizontal bar -> rough adjustment, vertical bar -> fine adjustment).
   3. Go into camera view to check the label with higher intensity and bigger point size.
7. Choose the label class from the drop-down list.
8. Repeat steps 1-7 for all objects in the scene.
9. Save the labels to a file.
10. Click the 'HOLD' button if you want to keep the same label positions and sizes.
11. Click 'Next camera image'.

mmdetection3d - OpenMMLab's next-generation platform for general 3D object detection.

  •    Python

News: We released the codebase v0.14.0. In the recent nuScenes 3D detection challenge of the 5th AI Driving Olympics at NeurIPS 2020, we won the best PKL award, placed second runner-up with a multi-modality entry, and achieved the best vision-only results.

depth_clustering - :taxi: Fast and robust clustering of point clouds generated with a Velodyne sensor

  •    C++

This is a fast and robust algorithm to segment point clouds taken with a Velodyne sensor into objects. It works with all available Velodyne sensors, i.e., 16-, 32-, and 64-beam models. I recommend using a virtual environment in your catkin workspace (<catkin_ws> in this readme) and will assume throughout that you have it set up; please update your commands accordingly if needed. I will be using pipenv, which you can install with pip.

SqueezeSeg - Implementation of SqueezeSeg, convolutional neural networks for LiDAR point cloud segmentation

  •    Python

SqueezeSeg is released under the BSD license (see LICENSE for details). The dataset used for training, evaluation, and demonstration of SqueezeSeg is modified from the KITTI raw dataset. For your convenience, we provide links to download the converted dataset, which is distributed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. The instructions are tested on Ubuntu 16.04 with Python 2.7 and TensorFlow 1.0 with GPU support.

dynamic_robot_localization - Point cloud registration pipeline for robot localization and 3D perception

  •    C++

The dynamic_robot_localization is a ROS package that offers 3 DoF and 6 DoF localization using PCL and allows dynamic map update using OctoMap. It is a modular localization pipeline that can be configured using yaml files (a detailed configuration layout is available in drl_configs.yaml, and example configurations are available in guardian_config and dynamic_robot_localization_tests). Even though this package was developed for robot self-localization and mapping, it was implemented as a generic, configurable and extensible point cloud matching library, allowing its use in related problems such as estimating the 6 DoF pose of an object and 3D object scanning.

PointRCNN - PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud, CVPR 2019.

  •    Python

Code release for the paper PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud, CVPR 2019. Authors: Shaoshuai Shi, Xiaogang Wang, Hongsheng Li.

AugmentedAutoencoder - Official Code: Implicit 3D Orientation Learning for 6D Object Detection from RGB Images

  •    Python

Martin Sundermeyer, Zoltan-Csaba Marton, Maximilian Durner, Manuel Brucker, Rudolph Triebel. Best Paper Award, ECCV 2018. We propose a real-time RGB-based pipeline for object detection and 6D pose estimation. Our novel 3D orientation estimation is based on a variant of the Denoising Autoencoder that is trained on simulated views of a 3D model using Domain Randomization. This so-called Augmented Autoencoder has several advantages over existing methods: it does not require real, pose-annotated training data, generalizes to various test sensors, and inherently handles object and view symmetries.
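To give a feel for the core idea, here is a minimal PyTorch sketch of a convolutional denoising autoencoder of the kind the Augmented Autoencoder builds on: it is trained to reconstruct a clean target view from an augmented (randomized) input view. The architecture and sizes are illustrative, not the paper's:

```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    """Toy convolutional autoencoder: augmented view in, clean view out."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingAE()
augmented = torch.rand(8, 3, 64, 64)   # randomized/augmented input views
clean = torch.rand(8, 3, 64, 64)       # clean target views of the same poses
loss = nn.functional.mse_loss(model(augmented), clean)
```

Because the reconstruction target is always the clean rendering, the latent code learns to ignore the randomized nuisance factors, which is what lets it encode orientation robustly.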

android-yolo - Real-time object detection on Android using the YOLO network with TensorFlow

  •    C++

android-yolo is the first implementation of YOLO for TensorFlow on an Android device. It is compatible with Android Studio and usable out of the box. It can detect the 20 classes of objects in the Pascal VOC dataset: aeroplane, bicycle, bird, boat, bottle, bus, car, cat, chair, cow, dining table, dog, horse, motorbike, person, potted plant, sheep, sofa, train and tv/monitor. The network only outputs one predicted bounding box at a time for now; the code can and will be extended in the future to output several predictions. To use this demo, first clone the repository, download the TensorFlow YOLO model, and put it in android-yolo/app/src/main/assets. Then open the project in Android Studio. Once the project is open, you can run it on your Android device using the Run 'app' command and selecting your device.

scancontext - Global LiDAR descriptor for place recognition and long-term localization

  •    C++

The basics directory contains basic code, such as Scan Context generation, and is a good starting point for understanding Scan Context. The place recognition directory is an example for our IROS18 paper. The example is conducted on KITTI sequence 00, and PlaceRecognizer.m is the main code. You can easily grasp the full pipeline of Scan Context-based place recognition by following the PlaceRecognizer.m code. Our Scan Context-based place recognition system consists of two steps: description and search. The search step is composed of two hierarchical stages (1. a ring-key-based KD tree for fast candidate proposal; 2. a candidate-to-query pairwise-comparison-based nearest search). We note that our coarse yaw-aligning-based pairwise distance enables reverse-revisit detection, unlike other methods.
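To make the description step concrete, a Scan Context is a polar-grid matrix holding the maximum point height per (ring, sector) bin, and the ring key is a rotation-invariant row-wise summary fed to the KD tree stage. A minimal NumPy sketch; the bin counts follow the paper's defaults, and the mean-per-ring key here is a simplification of the paper's ring occupancy ratio:

```python
import numpy as np

def scan_context(points, num_rings=20, num_sectors=60, max_range=80.0):
    """Build a Scan Context: max point height per (ring, sector) polar bin."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2)
    theta = np.arctan2(y, x) + np.pi            # angle in [0, 2*pi)
    keep = r < max_range
    ring = np.minimum((r[keep] / max_range * num_rings).astype(int), num_rings - 1)
    sector = np.minimum((theta[keep] / (2 * np.pi) * num_sectors).astype(int),
                        num_sectors - 1)
    desc = np.full((num_rings, num_sectors), -np.inf)
    np.maximum.at(desc, (ring, sector), z[keep])  # max height per bin
    desc[np.isinf(desc)] = 0.0                    # empty bins
    return desc

def ring_key(desc):
    """Rotation-invariant per-ring summary for the KD tree candidate search."""
    return desc.mean(axis=1)
```

Column-shifting one descriptor against another before comparing them is what gives the yaw-aligned pairwise distance mentioned above.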





