A real-time lidar-inertial odometry package. We strongly recommend that users read this document thoroughly and test the package with the provided dataset first. A demonstration video of the method can be found on YouTube.
Tags: slam velodyne loam-velodyne 3d-mapping ouster lidar-odometry lidar-slam lidar-inertial ouster-slam velodyne-slam

This is a fast and robust algorithm to segment point clouds taken with a Velodyne sensor into objects. It works with all available Velodyne sensors, i.e. the 16, 32, and 64 beam ones. I recommend using a virtual environment in your catkin workspace (<catkin_ws> in this readme) and will assume that you have it set up throughout this readme. Please update your commands accordingly if needed. I will be using pipenv, which you can install with pip.
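The segmentation works on a range image: two neighboring range measurements are merged into one object when the angle between them suggests a continuous surface. A minimal sketch of that angle-based criterion, assuming the method from the accompanying paper (the function name and threshold are illustrative, not the package's API):

```python
import math

def same_object(d1: float, d2: float, alpha: float, theta_deg: float = 10.0) -> bool:
    """Angle-based segmentation criterion (illustrative sketch).

    d1, d2: ranges of two neighboring range-image pixels.
    alpha:  angular increment between the two laser beams (radians).
    The angle beta between the line connecting the two points and the
    longer beam is large for a continuous surface and small across a
    depth discontinuity; points are merged when beta exceeds theta.
    """
    d1, d2 = max(d1, d2), min(d1, d2)
    beta = math.atan2(d2 * math.sin(alpha), d1 - d2 * math.cos(alpha))
    return beta > math.radians(theta_deg)

# Neighboring points at similar depth are merged; a large depth jump splits them.
print(same_object(10.0, 10.0, math.radians(0.4)))  # same surface
print(same_object(10.0, 2.0, math.radians(0.4)))   # depth discontinuity
```

For equal ranges, beta approaches 90 degrees and the points are merged; across a large depth jump, beta collapses toward zero and the segmentation boundary is placed there.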
Tags: fast real-time clustering point-cloud range ros lidar depth segmentation pcl catkin velodyne-sensor velodyne depth-image range-image depth-clustering

An updated lidar-inertial odometry package, LIO-SAM, has been open-sourced and is available for testing. You can use the following commands to download and compile the package.
Tags: mapping ros isam imu lidar ieee slam velodyne ugv odometry jackal gtsam loam iros lidar-odometry

Ask questions here. Issues #71 and #7 address this problem. The current known solution is to build the same version of PCL that you have on your system from source, and to set CMAKE_PREFIX_PATH accordingly so that catkin can find it. See this issue for more details.
Tags: computer-vision mapping ros lidar pcl slam velodyne 3d pointcloud loam-velodyne loam

The package is used to calibrate a LiDAR (configurable to support Hesai and Velodyne hardware) with a camera (works for both monocular and stereo). The package finds a rotation and translation that transform all the points in the LiDAR frame to the (monocular) camera frame. Please see Usage for a video tutorial. The lidar_camera_calibration/pointcloud_fusion provides a script to fuse point clouds obtained from two stereo cameras, both of which were extrinsically calibrated using a LiDAR and lidar_camera_calibration. We show the accuracy of the proposed pipeline by fusing point clouds, with near perfection, from multiple cameras kept in various positions. See Fusion using lidar_camera_calibration for results of the point cloud fusion (videos).
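Once the calibration yields a rotation R and translation t, fusing data only requires applying that rigid transform to every LiDAR point. A minimal numpy sketch; the matrix values below are made up for illustration (the package estimates them from marker correspondences):

```python
import numpy as np

# Hypothetical extrinsics: illustrative values only, not a real calibration
# result. R rotates lidar axes into camera axes, t is in metres.
R = np.array([[0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0],
              [1.0,  0.0,  0.0]])
t = np.array([0.1, -0.05, 0.0])

def lidar_to_camera(points_lidar: np.ndarray) -> np.ndarray:
    """Transform an (N, 3) array of LiDAR-frame points into the camera frame:
    p_cam = R @ p_lidar + t, vectorized over all rows."""
    return points_lidar @ R.T + t

print(lidar_to_camera(np.array([[1.0, 0.0, 0.0]])))
```

Projecting the transformed points through the camera intrinsics then overlays the LiDAR data on the image, which is how calibration accuracy is typically inspected.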
Tags: camera camera-calibration point-cloud ros calibration lidar velodyne point-clouds data-fusion ros-kinetic aruco-markers lidar-camera-calibration 3d-points ros-melodic hesai stereo-cameras camera-frame lidar-frame

This package is built on top of the ROS ecosystem. You can start building a map with a pose graph constructed by hdl_graph_slam or a customized LeGO-LOAM, or with odometry data generated by any ROS package. This package has been tested on Ubuntu 18.04 with ROS Melodic or later.
Tags: gui point-cloud ros lidar slam velodyne hdl-graph-slam

Tool for labeling single point clouds or a stream of point clouds. Given the poses of a KITTI point cloud dataset, we load tiles of overlapping point clouds. Thus, multiple point clouds are labeled at once in a certain area.
Tags: labeling velodyne point-clouds semantickitti

Mapping of 3D laser range data from a rotating laser range scanner, e.g., the Velodyne HDL-64E. For representing the map, we use surfels, which enable fast rendering of the map for point-to-plane ICP and loop closure detection. J. Behley, C. Stachniss. Efficient Surfel-Based SLAM using 3D Laser Range Data in Urban Environments, Proc. of Robotics: Science and Systems (RSS), 2018.
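Because each surfel stores a position and a normal, the point-to-plane ICP error can be evaluated directly against the map. A small sketch of the residual computation (illustrative only, not SuMa's actual implementation):

```python
import numpy as np

def point_to_plane_residuals(src: np.ndarray, dst: np.ndarray,
                             normals: np.ndarray) -> np.ndarray:
    """Point-to-plane residuals for N correspondences.

    src:     (N, 3) scan points already transformed by the current pose guess.
    dst:     (N, 3) matched surfel positions in the map.
    normals: (N, 3) unit normals stored in the matched surfels.
    Returns the (N,) signed distances of each source point to the tangent
    plane defined by its matched surfel: n_i . (src_i - dst_i).
    """
    return np.einsum('ij,ij->i', src - dst, normals)

# A point one unit above a horizontal surfel has residual 1.0.
print(point_to_plane_residuals(np.array([[0.0, 0.0, 1.0]]),
                               np.array([[0.0, 0.0, 0.0]]),
                               np.array([[0.0, 0.0, 1.0]])))
```

Minimizing the sum of squares of these residuals over the pose is the point-to-plane ICP objective; the surfel normal is what distinguishes it from plain point-to-point ICP.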
Tags: opengl slam velodyne kitti-dataset rss2018

Applications based on OpenDLV are grouped in UDP multicast sessions belonging to IPv4 address 225.0.0.X, where X is within the range [1,254]. All microservices belonging to the same UDP multicast group are able to communicate with each other; thus, two applications running in different UDP multicast sessions do not see each other and are completely separated. The actual UDP multicast session is selected using the commandline parameter --cid=111, where 111 would define the UDP multicast address 225.0.0.111. Microservices exchange data using the message Envelope, which contains, besides the actual message to send, further meta information such as the sent and received timestamps and the point in time when the contained message was actually sampled. All messages are encoded in Google's Protobuf data format (example), which has been adjusted to preserve forwards and backwards compatibility using libcluon's native implementation of Protobuf.
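The mapping from the --cid parameter to the multicast address is a direct substitution into 225.0.0.X; sketched here in Python for illustration (the function name is mine; OpenDLV itself is C++14):

```python
def cid_to_multicast_address(cid: int) -> str:
    """Map an OpenDLV conference/session identifier (--cid) to its UDP
    multicast address 225.0.0.X. Valid identifiers lie in [1, 254]."""
    if not 1 <= cid <= 254:
        raise ValueError("cid must be in the range [1, 254]")
    return f"225.0.0.{cid}"

# --cid=111 selects the session on 225.0.0.111.
print(cid_to_multicast_address(111))
```

Because all microservices on the same cid share one multicast group, isolating a test deployment is as simple as starting it with a different cid.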
Tags: opendlv libcluon cpp14 microservice docker self-driving-car autonomous-driving amd64 armhf aarch64 trimble-gps gps nmea oxts-gps velodyne lidar applanix video4linux openh264 h264

hdl_graph_slam is an open source ROS package for real-time 3D SLAM using a 3D LIDAR. It is based on 3D Graph SLAM with NDT scan matching-based odometry estimation and loop detection. It also utilizes floor plane detection to generate an environmental map with a completely flat floor. We have tested this package mainly in indoor environments, but it can be applied to outdoor environment mapping as well. hdl_graph_slam consists of four nodelets.
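The floor plane detection step can be approximated by a plain RANSAC plane fit. The following toy version shows the idea (this is a generic sketch, not the package's actual nodelet code):

```python
import numpy as np

def detect_floor_plane(points: np.ndarray, iters: int = 200,
                       dist_thresh: float = 0.05, seed: int = 0):
    """Toy RANSAC plane fit over an (N, 3) point cloud.

    Repeatedly fits a plane through 3 random points and keeps the plane
    supported by the most inliers. Returns (normal, d) with n . p + d = 0.
    """
    rng = np.random.default_rng(seed)
    best_n, best_d, best_count = None, None, -1
    for _ in range(iters):
        sample = points[rng.choice(len(points), size=3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:          # degenerate (collinear) sample, skip
            continue
        n = n / norm
        d = -n @ sample[0]
        count = int(np.sum(np.abs(points @ n + d) < dist_thresh))
        if count > best_count:
            best_n, best_d, best_count = n, d, count
    return best_n, best_d
```

In the real package the detected floor is added as a constraint in the pose graph, which is what keeps the final map's floor completely flat.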
Tags: ros slam lidar velodyne hdl-graph-slam point-cloud rslidar

hdl_localization is a ROS package for real-time 3D localization using a 3D LIDAR, such as the Velodyne HDL-32E and VLP-16. This package performs Unscented Kalman Filter-based pose estimation. It first estimates the sensor pose from data of an IMU mounted on the LIDAR, and then performs multi-threaded NDT scan matching between a global map point cloud and input point clouds to correct the estimated pose. IMU-based pose prediction is optional. If you disable it, the system predicts the sensor pose with a constant velocity model without IMU information. All parameters are listed in launch/hdl_localization.launch as ROS params. You can specify the initial sensor pose using "2D Pose Estimate" on rviz, or using ROS params (see the example launch file).
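With IMU prediction disabled, the constant velocity model reduces the prediction step to a single linear state transition. A simplified planar sketch (the actual package filters a full 6-DoF pose inside a UKF; this only illustrates the motion model):

```python
import numpy as np

def predict_constant_velocity(state: np.ndarray, dt: float) -> np.ndarray:
    """Constant-velocity prediction for a planar state [x, y, vx, vy]:
    position advances by velocity * dt, velocity is assumed unchanged."""
    F = np.array([[1.0, 0.0,  dt, 0.0],
                  [0.0, 1.0, 0.0,  dt],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    return F @ state

print(predict_constant_velocity(np.array([0.0, 0.0, 1.0, 2.0]), 0.5))
```

The NDT scan matching result then acts as the measurement that corrects whatever this prediction drifts to.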
Tags: localization ros lidar real-time velodyne

hdl_people_tracking is a ROS package for real-time people tracking using a 3D LIDAR. It first performs Haselich's clustering technique to detect human candidate clusters, and then applies Kidono's person classifier to eliminate false detections. The detected clusters are tracked using a Kalman filter with a constant velocity model. If it doesn't work well, change ndt_neighbor_search_method in hdl_localization.launch to "DIRECT1". This makes the scan matching significantly faster, but a little bit unstable.
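The correction half of such a tracker is a textbook Kalman measurement update on the observed cluster centroid. A minimal sketch with a planar state [x, y, vx, vy] (generic KF equations, not the package's code):

```python
import numpy as np

def kalman_update(x: np.ndarray, P: np.ndarray,
                  z: np.ndarray, R: np.ndarray):
    """Kalman measurement update for a track with state [x, y, vx, vy],
    given an observed cluster centroid z = [x, y] with noise covariance R."""
    H = np.array([[1.0, 0.0, 0.0, 0.0],   # only position is observed
                  [0.0, 1.0, 0.0, 0.0]])
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ (z - H @ x)           # corrected state
    P_new = (np.eye(4) - K @ H) @ P       # corrected covariance
    return x_new, P_new
```

Between detections, the constant velocity model carries each track forward, and this update pulls it toward each newly classified cluster centroid.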
Tags: ros lidar velodyne person-tracking human-detection

You can use the following commands to download and compile the package. It is recommended to upgrade your Gazebo to the latest version, which has significant performance improvements.
Tags: velodyne jackal velodyne-simulation vlp-16 hdl-32e jackal-simulation

This dataset is captured using a Velodyne VLP-16 mounted on a UGV (a Clearpath Jackal) on the Stevens Institute of Technology campus. The VLP-16 rotation rate is set to 10 Hz. This dataset features over 20K scans and many loop closures. The TF transform in the bags is provided by LeGO-LOAM (without enabling the loop-closure function, i.e., pure lidar odometry). The whole mapping process is also recorded in videos.
Tags: dataset velodyne jackal clearpath vlp16 loam vlp-16 loop-cloure lidar-odometry

Transfer SemanticKITTI labels into other dataset/sensor formats. Copyright 2020, Ferdinand Langer, Cyrill Stachniss. University of Bonn.
Tags: lidar velodyne domain-adaptation nuscenes semantic-kitti iros2020