The package is used to calibrate a LiDAR (configurable to support Hesai and Velodyne hardware) with a camera (works for both monocular and stereo setups). The package finds a rotation and translation that transform all the points in the LiDAR frame to the (monocular) camera frame. Please see Usage for a video tutorial. The lidar_camera_calibration/pointcloud_fusion package provides a script to fuse point clouds obtained from two stereo cameras, both of which were extrinsically calibrated using a LiDAR and lidar_camera_calibration. We demonstrate the accuracy of the proposed pipeline by fusing, nearly perfectly, point clouds from multiple cameras kept in various positions. See Fusion using lidar_camera_calibration for video results of the point cloud fusion.
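The extrinsics the package estimates form a rigid-body transform. As a rough sketch (the rotation, translation, and point values below are illustrative placeholders, not output from the package), applying such a transform to LiDAR points looks like:

```python
import numpy as np

# Hypothetical extrinsics from a LiDAR-camera calibration:
# R rotates LiDAR-frame axes into the camera frame, t translates them.
R = np.array([[0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0],
              [1.0,  0.0,  0.0]])   # example axis-permutation rotation
t = np.array([0.05, -0.10, 0.20])   # example translation in metres

def lidar_to_camera(points_lidar):
    """Map an (N, 3) array of LiDAR-frame points into the camera frame."""
    return points_lidar @ R.T + t

points = np.array([[1.0, 0.0, 0.0]])  # one point 1 m ahead of the LiDAR
print(lidar_to_camera(points))        # → [[ 0.05 -0.1   1.2 ]]
```

Once R and t are known, the same transform lets point clouds from differently-placed cameras be expressed in a common frame, which is what makes the fusion above possible.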
camera camera-calibration point-cloud ros calibration lidar velodyne point-clouds data-fusion ros-kinetic aruco-markers lidar-camera-calibration 3d-points ros-melodic hesai stereo-cameras camera-frame lidar-frame

ORB-SLAM2 Authors: Raul Mur-Artal, Juan D. Tardos, J. M. M. Montiel and Dorian Galvez-Lopez (DBoW2). The original implementation can be found here. This is the ROS implementation of the ORB-SLAM2 real-time SLAM library for Monocular, Stereo and RGB-D cameras that computes the camera trajectory and a sparse 3D reconstruction (in the stereo and RGB-D case with true scale). It is able to detect loops and relocalize the camera in real time. This implementation removes the Pangolin dependency and the original viewer. All data I/O is handled via ROS topics. For visualization you can use RViz. This repository is maintained by Lennart Haller on behalf of appliedAI.
robotics ros slam orb-slam2 visual-slam ros-kinetic orbslam2 ros-melodic

LARVIO is short for Lightweight, Accurate and Robust monocular Visual Inertial Odometry, which is based on a hybrid EKF VIO. Its distinguishing feature is augmenting features with long track lengths into the filter state of MSCKF by 1D IDP to provide accurate positioning results. The core algorithm of LARVIO depends on Eigen, Boost, SuiteSparse, Ceres and OpenCV, making it highly portable.
localization ros-node msckf visual-inertial-odometry ros-kinetic sensor-calibration ros-melodic ekf-mono-slam larvio

Because the Stewart platform is a closed-loop manipulator, the description was written in SDF rather than URDF. However, ROS does not support SDF by default, so a plugin was written to make the joints in Gazebo visible to ROS. Here is a link to an in-progress generalized version of this plugin: https://github.com/daniel-s-ingram/ros_sdf.
ros gazebo gazebo-plugin sdf stewart-platform dualshock4 ps4 playstation-4 inverse-kinematics simulation ros-kinetic stewart platform

This repository contains ROS message definitions for lgsvl_msgs to subscribe to ROS messages being published by the LG SVL Automotive Simulator via rosbridge. Copyright (c) 2018-2020 LG Electronics, Inc.
simulator apollo ros autonomous vehicle baidu ros-indigo ros-messages ros2 ros-kinetic autoware ros-melodic lgsvl

A ROS web console to control your robot remotely. Based on robotwebtools.
website web robot ros ros-kinetic ros-melodic robotwebtools rosjslib

An example of how to utilize ROS with Unity3D. Developed for Unity Technologies. This project is intended to demonstrate ROS.Net usage from within the Unity engine to communicate with a native ROS instance.
ubuntu unity-editor unity3d ros ubuntu1604 ros-kinetic

ROS-in-a-box: a containerized version of various ROS nodes. Contains ROS and tools to use it over websockets with rosbridge-suite.
docker docker-image ros moveit ros-kinetic

Collision Avoidance System for Self-Driving Vehicles by Delta Autonomy, Robotics Institute, CMU. This stack was developed for my MRSD capstone project. Our use-case involves an oncoming vehicle encroaching into the ego-vehicle's (a heavy-duty truck's) lane on a two-lane countryside highway. The perception algorithms perform detection and tracking of vehicles, as well as lane marking detection, using sensor fusion of a monocular camera and RADAR. The prediction algorithms predict the trajectories of all vehicles in the environment, including the ego-vehicle. Based on the predicted trajectories, the probability of collision, position, and time-to-impact are computed. An evasive maneuver, such as steering or braking, is planned and executed to avoid or mitigate the crash. The project was developed in the Carla simulator and ROS.
collision-detection sensor-fusion adas roslibjs collision-avoidance carnegie-mellon-university self-driving ros-kinetic carla delta-autonomy robotics-institute mrsd camera-radar-tracking

Install dependencies. Run the following to clone the lidar_camera_calibration package into the ros_workspace/src directory.
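The original command listing is missing here; a plausible reconstruction of the usual clone-and-build steps is below. The repository URL is an assumption inferred from the package name, and the workspace path follows the text above:

```shell
# Clone into the workspace's src directory (repo URL assumed from the
# package name; substitute your fork or the upstream you actually use).
cd ~/ros_workspace/src
git clone https://github.com/ankitdhall/lidar_camera_calibration.git

# Build the workspace and source the resulting environment.
cd ~/ros_workspace
catkin_make
source devel/setup.bash
```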
opencv camera-calibration point-cloud ros lidar pcl perspective-transform ros-kinetic lidar-camera-calibration camera-geometry camera-lidar-calibration

Without assigning any of the above-mentioned parameters, the demo scenario 0012 is replayed at 20% of its speed with a 3-second delay so RViz has enough time to boot up. If you have any questions, things you would love to add, or ideas on how to realize the points in the Areas of Improvement, send me an email at simonappel62@gmail.com! I am more than interested in collaborating and hearing any kind of feedback.
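Those defaults correspond to standard rosbag playback flags; the same 20% rate and 3-second delay can be reproduced by hand (the bag filename below is only a placeholder):

```shell
# -r scales the playback rate (0.2 = 20% speed); -d waits the given
# number of seconds before publishing, giving RViz time to boot up.
rosbag play -r 0.2 -d 3 scenario_0012.bag
```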
deep-learning cpp evaluation ros ros-node object-detection unscented-kalman-filter sensor-fusion ros-nodes semantic-segmentation dbscan rviz rosbag kitti-dataset ros-packages multi-object-tracking kitti deeplab ros-kinetic

This is a side project of my main project: https://github.com/appinho/SARosPerceptionKitti Refer to that repository to set up the project and acquire the data.
ros ros-node stereo-algorithms rviz rosbag kitti-dataset stereo-vision stereo-matching kitti ros-kinetic stereo-camera stereo-images

This is a nice Android application that allows for publishing of data from a phone to a ROS master. This can be used to record a ROS bag of data or to perform SLAM on a more powerful computer. Note that this only works on phones that use the Camera1 API (newer Camera2 API phones are not supported). If you are interested in recording data with a Camera2 API phone, check out our other repository, android-dataset-recorder. Also note that on some phones the number of cameras you can use is limited (so if you select more, the app will fail). This is caused by a limit on the bandwidth over the camera buses on the physical phone hardware. The original codebase is from here and has been updated both to compile with the new versions and to add new features. This has been tested on the "Yellowstone" Tango Tablet Development Kit but should work on all Camera1 API phones.
android opencv computer-vision ros ros-kinetic camera1-api

This repository contains a LiDAR-inertial 3D plane simulator that allows custom trajectories through 3D environments to be created, and a sensor suite to be sent through them at a given rate. The simplest way to get started is to clone this repository into your ROS workspace and play around with the example datasets. Please read the guides below on how to get started, and see the ReadMe files in the package directories for more details on each package. Here is a video of a LiDAR-inertial estimator running on top of this simulator. The paper leveraging this simulator, "LIPS: LiDAR-Inertial 3D Plane SLAM", will be presented at IROS 2018.
plane simulation lidar inertial-sensors ros-kinetic

This driver is an expansion of the Ladybug driver in Autoware. The key changes are more config options and added support for the Ladybug 5+ camera. See below for the install instructions and the parameters that can be set in the launch file. This driver also prints out more information about the configuration, so you should check whether your camera is running on USB2 or USB3 according to the SDK library.
driver ladybug ros-kinetic pointgrey

This is a very simple ROS node that allows for publishing of NMEA messages onto the ROS framework. This package aims to support the Reach RTK GNSS module by Emlid. Right now this supports all NMEA messages from the module, although some are not used to publish anything onto ROS. The original starting point of the driver was the ROS driver nmea_navsat_driver, which was then expanded by CearLab to work with the Reach RTK. This package is more complete and aims to allow for use of the Reach RTK in actual robotic systems; please open an issue if you run into any problems. Be sure to check out this other driver by enwaytech for the Reach RS.
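A driver like this typically validates each sentence's checksum before turning it into a ROS message. A minimal, self-contained sketch of the standard NMEA 0183 XOR checksum (not code from this package; function names are illustrative):

```python
import operator
from functools import reduce

def nmea_checksum(body: str) -> str:
    """XOR all characters between '$' and '*', hex-encoded per NMEA 0183."""
    return format(reduce(operator.xor, (ord(c) for c in body), 0), "02X")

def is_valid(sentence: str) -> bool:
    """Check a full '$...*hh' sentence against its declared checksum."""
    body, _, declared = sentence.lstrip("$").partition("*")
    return nmea_checksum(body) == declared.strip().upper()

# 'A' ^ 'B' = 0x41 ^ 0x42 = 0x03
print(nmea_checksum("AB"))     # → 03
print(is_valid("$AB*03"))      # → True
```

Sentences that fail this check are usually dropped rather than published, which keeps garbage serial data out of downstream consumers.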
gps-location rtk ros-driver ros-kinetic reach-rtk

This simulator uses a combination of multiple physics packages to build a test environment for Unmanned Surface Vehicles (USVs). We'll use it, at first, to develop and test control and trajectory strategies for USVs, but it can be easily adapted to other applications. It contains multiple robot models such as propelled boats (rudder boat, differential boat, airboat) and a sailboat. Boats are affected by waves, wind and water currents. To achieve that, we currently use UWSim for water surface modeling, and we also load HEC-RAS output files with the water speeds of river and channel simulations. We simulate wind currents with the OpenFOAM simulator. All those features allow the movement of boats to be disturbed in a realistic way. To run the packages of usv_sim you need a catkin workspace. If you already have a workspace, you may jump to the Downloading and installing subsection.
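If you don't yet have a catkin workspace, the standard steps to create one are sketched below (the `~/catkin_ws` path is only the conventional default, not required by this package):

```shell
# Create an empty workspace with a src directory for packages.
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src
catkin_init_workspace      # symlinks the top-level CMakeLists.txt

# Build (nothing yet) so devel/setup.bash is generated, then source it.
cd ~/catkin_ws
catkin_make
source devel/setup.bash
```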
gazebo boat openfoam sailboat environmental-modelling ros-kinetic usv hec-ras disaster-response robotics-simulation usv-simulator unmanned-surface-vehicle water-simulation water-surface usv-sim