
lidar_camera_calibration - ROS package to find a rigid-body transformation between a LiDAR and a camera for "LiDAR-Camera Calibration using 3D-3D Point correspondences"

  •    C++

The package is used to calibrate a LiDAR (configurable to support Hesai and Velodyne hardware) with a camera (works for both monocular and stereo). The package finds a rotation and translation that transform all the points in the LiDAR frame to the (monocular) camera frame. Please see Usage for a video tutorial. lidar_camera_calibration/pointcloud_fusion provides a script to fuse point clouds obtained from two stereo cameras, both of which were extrinsically calibrated using a LiDAR and lidar_camera_calibration. We demonstrate the accuracy of the proposed pipeline by fusing point clouds, with near-perfect alignment, from multiple cameras kept in various positions. See Fusion using lidar_camera_calibration for videos of the point cloud fusion results.
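What the calibration ultimately produces is a rigid-body transform [R|t]. As a minimal illustration of applying it to LiDAR points (placeholder values, not the package's API):

```python
# Sketch only: R and t stand in for the rotation and translation the
# calibration estimates; they are placeholders, not real outputs.
import numpy as np

R = np.eye(3)                          # 3x3 rotation (placeholder)
t = np.array([[0.1], [0.0], [-0.05]])  # 3x1 translation (placeholder)

def lidar_to_camera(points_lidar):
    """Map an (N, 3) array of LiDAR-frame points into the camera frame."""
    return (R @ points_lidar.T + t).T

points_cam = lidar_to_camera(np.random.rand(100, 3))
```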

handeye_calib_camodocal - Easy to use and accurate hand-eye calibration which has been working reliably for years (2016-present) with Kinect, Kinect v2, RGB-D cameras, optical trackers, and several robots including the UR5 and KUKA iiwa

  •    C++

This is a ROS node integrating the hand-eye calibration implemented in CamOdoCal. See this Stack Exchange question for an explanation of how hand-eye calibration works, and the keynote presentation for many more details for those who are interested. Practical code and instructions to calibrate your robot can be found below.
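The node itself wraps CamOdoCal's C++ solver, but for intuition the same AX = XB problem can be posed with OpenCV's cv2.calibrateHandEye (a separate implementation, not what this package uses). Synthetic noiseless poses are generated below so the sketch runs standalone:

```python
import cv2
import numpy as np

def rand_T():
    """Random rigid transform as a 4x4 homogeneous matrix."""
    R, _ = cv2.Rodrigues(np.random.randn(3) * 0.5)
    T = np.eye(4); T[:3, :3] = R; T[:3, 3] = np.random.randn(3)
    return T

X = rand_T()                   # ground-truth camera -> gripper (the unknown)
T_target2base = rand_T()       # calibration target is fixed in the world
R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(10):            # one entry per robot stop
    T_g2b = rand_T()           # gripper -> base, from forward kinematics
    # What the camera would observe: target -> cam = X^-1 * (g2b)^-1 * t2b
    T_t2c = np.linalg.inv(X) @ np.linalg.inv(T_g2b) @ T_target2base
    R_g2b.append(T_g2b[:3, :3]); t_g2b.append(T_g2b[:3, 3])
    R_t2c.append(T_t2c[:3, :3]); t_t2c.append(T_t2c[:3, 3])

R_est, t_est = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c)
assert np.allclose(R_est, X[:3, :3], atol=1e-5)  # exact data -> exact recovery
```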

video2calibration - Camera intrinsic parameter calibration from a chessboard video sequence.

  •    Python

Python scripts for camera intrinsic parameter calibration and image undistortion, using a video of a moving chessboard pattern or a sequence of images as input.
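As a hedged sketch of the standard OpenCV pipeline such scripts are built on (the file name, pattern size, and options here are assumptions; the repo's scripts may differ):

```python
import cv2
import numpy as np

PATTERN = (9, 6)                       # inner-corner count of the chessboard
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_pts, img_pts, size = [], [], None
cap = cv2.VideoCapture("chessboard.mp4")   # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print("RMS reprojection error:", rms)
undistorted = cv2.undistort(frame, K, dist)   # undistort the last frame
```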

ofxMVG - OFX plugins for Multiple View Geometry

  •    C++

CameraLocalizer estimates the camera pose of an image with respect to an existing 3D reconstruction generated by openMVG. The plugin supports multiple input clips to localize a rig of cameras (multiple cameras rigidly fixed together). CameraLocalizer is available on ShuttleOFX.
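ofxMVG itself is a C++ OFX plugin built on openMVG, but conceptually the localization step boils down to matching image features to the reconstruction's 3D points and solving a robust PnP. A toy sketch, with random stand-ins for real descriptor matches:

```python
import cv2
import numpy as np

# In a real localizer these come from matching features against the SfM model.
pts3d = np.random.rand(50, 3).astype(np.float32)        # reconstruction points
pts2d = (np.random.rand(50, 2) * 480).astype(np.float32)  # image detections
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], np.float32)

# Robust PnP rejects mismatches and returns the camera pose for this image.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts3d, pts2d, K, None)
```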

extrinsic_lidar_camera_calibration - This is a package for extrinsic calibration between a 3D LiDAR and a camera, described in the paper: Improvements to Target-Based 3D LiDAR to Camera Calibration

  •    MATLAB

[Release note, July 2020] This work has been accepted by IEEE Access and has been uploaded to arXiv. [Release note, March 2020] This is the new master branch as of March 2020; it supports a revised version of the arXiv paper. The original master branch, from Oct 2019 to March 2020, has been moved to the v1-2019 branch; it supports the functions associated with the first version of the extrinsic calibration paper that we placed on arXiv. Please be aware that some functions in the older branch have been removed from the current master branch.

multicam_calibration

  •    C++

Adjust the topics to match your camera sources. You must use an aprilgrid target for calibration; the layout follows Kalibr conventions and is specified in config/aprilgrid.yaml, as sketched below.
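A minimal sketch of what config/aprilgrid.yaml looks like in the Kalibr convention; the tag counts and sizes below are placeholders that must match your physical target:

```yaml
# Placeholder values -- measure your own printed target.
target_type: 'aprilgrid'
tagCols: 6          # number of tags per row
tagRows: 6          # number of tags per column
tagSize: 0.088      # edge length of one tag, in meters
tagSpacing: 0.3     # gap between tags, as a fraction of tagSize
```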

android-camera-calibration - Updated (opencv3 and camera2 API) android camera calibration application

  •    C++

This Android app allows for calibration of a mobile camera. Currently OpenCV does not support opening camera2 API objects, meaning that the default OpenCV Java view will not work with the newest phones on the market. In this app we use only the camera2 API to first capture the image, convert it into an OpenCV format, and then process it using native OpenCV methods. This has a lot of overhead, so in many cases the framerate is very low. A brief overview of the folders and files in this repository: it uses the Gradle experimental branch so that the Android NDK is supported for native debugging in Android Studio. To open this project, load up Android Studio and go to File > Open.... Select this repository folder (not a sub-folder of this repository) and the project should automatically open and download all needed SDKs and libraries. It will ask you to download the NDK if it is not already installed on your machine.

DREAM - DREAM: Deep Robot-to-Camera Extrinsics for Articulated Manipulators (ICRA 2020)

  •    Python

This is the official implementation of "Camera-to-robot pose estimation from a single image" (ICRA 2020). The DREAM system uses a robot-specific deep neural network to detect keypoints (typically joint locations) in an RGB image of a robot manipulator. Using these keypoint locations together with the robot's forward kinematics, the camera pose with respect to the robot is estimated using a perspective-n-point (PnP) algorithm, as sketched below. For more details, please see our paper and video. We have tested on Ubuntu 16.04 and 18.04 with an NVIDIA GeForce RTX 2080 and a Titan X, with both Python 2.7 and Python 3.6. The code may work on other systems.
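A rough sketch of that PnP step (not DREAM's actual code): 3D keypoints come from forward kinematics, 2D keypoints from the network. Synthetic, self-consistent data is generated here so the script runs standalone:

```python
import cv2
import numpy as np

K = np.array([[600, 0, 320], [0, 600, 240], [0, 0, 1]], np.float64)
kp3d = np.random.rand(7, 3)            # joint positions in the robot frame
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([0.0, 0.0, 2.0])
# Stand-in for the network's detections: project with a known ground truth.
kp2d, _ = cv2.projectPoints(kp3d, rvec_true, tvec_true, K, None)

ok, rvec, tvec = cv2.solvePnP(kp3d, kp2d, K, None)
# rvec/tvec map robot frame -> camera frame; invert to get the camera's
# position expressed in the robot frame.
R, _ = cv2.Rodrigues(rvec)
cam_pos_in_robot = -R.T @ tvec.reshape(3)
```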