
donkey - self driving car

  •    Python

Donkeycar is a minimalist and modular self-driving library for Python. It is developed for hobbyists and students, with a focus on fast experimentation and easy community contributions. After building a Donkey2, you can turn on your car and go to http://localhost:8887 to drive.

apollo - An open autonomous driving platform

  •    C++

Apollo is a high-performance, flexible architecture that accelerates the development, testing, and deployment of autonomous vehicles. For business and partnership inquiries, please visit our website.

carla - Open-source simulator for autonomous driving research.

  •    C++

CARLA is an open-source simulator for autonomous driving research. CARLA has been developed from the ground up to support development, training, and validation of autonomous urban driving systems. In addition to open-source code and protocols, CARLA provides open digital assets (urban layouts, buildings, vehicles) that were created for this purpose and can be used freely. The simulation platform supports flexible specification of sensor suites and environmental conditions. If you want to benchmark your model in the same conditions as in our CoRL’17 paper, check out Benchmarking.

simulator - A ROS/ROS2 Multi-robot Simulator for Autonomous Vehicles

  •    CSharp

Check out our latest news and subscribe to our mailing list to get the latest updates. LG Electronics America R&D Lab has developed an HDRP Unity-based multi-robot simulator for autonomous vehicle developers. We provide an out-of-the-box solution which can meet the needs of developers wishing to focus on testing their autonomous vehicle algorithms. It currently has integration with The Autoware Foundation's Autoware.auto and Baidu's Apollo platforms, can generate HD maps, and can be immediately used for testing and validation of a whole system with little need for custom integrations. We hope to build a collaborative community among robotics and autonomous vehicle developers by open sourcing our efforts.




awesome-robotic-tooling - Tooling for professional robotic development in C++ and Python with a touch of ROS, autonomous driving and aerospace: https://freerobotics

  •    

To stop reinventing the wheel, you need to know about the wheel. This list is an attempt to show the variety of open and free software and hardware development tools that are useful in professional robotic development. Your contribution is necessary to keep this list alive, increase its quality, and expand it. You can read more about its origin and how you can participate in the contribution guide and related blog post. All new project entries will get a tweet from protontypes.

vehicle-detection - Vehicle detection using machine learning and computer vision techniques for Udacity's Self-Driving Car Engineer Nanodegree

  •    Jupyter

Vehicle detection using machine learning and computer vision techniques. First, you need training data (car and non-car images). You can get car images from the GTI vehicle image database and the KITTI vision benchmark. Over 1,500 images per class works well for this project.
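The train-a-classifier-on-car/non-car-patches workflow above can be sketched without the project's actual code. This is a hedged, dependency-light illustration: the real project typically uses HOG features and an SVM, which are approximated here by color-histogram features and a nearest-mean classifier on synthetic patches; all names and data are illustrative.

```python
import numpy as np

def color_hist_features(img, bins=8):
    """Concatenate per-channel histograms of an RGB patch into one feature vector."""
    return np.concatenate(
        [np.histogram(img[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
    ).astype(float)

class NearestMeanClassifier:
    """Tiny stand-in for the SVM usually trained in this kind of project."""
    def fit(self, X, y):
        self.means_ = {label: X[y == label].mean(axis=0) for label in np.unique(y)}
        return self

    def predict(self, X):
        labels = list(self.means_)
        d = np.stack([np.linalg.norm(X - self.means_[l], axis=1) for l in labels])
        return np.array(labels)[d.argmin(axis=0)]

# Synthetic "car" (bright) vs "not-car" (dark) patches stand in for GTI/KITTI crops.
rng = np.random.default_rng(0)
cars = rng.integers(150, 256, size=(20, 16, 16, 3))
notcars = rng.integers(0, 100, size=(20, 16, 16, 3))
X = np.array([color_hist_features(p) for p in np.concatenate([cars, notcars])])
y = np.array([1] * 20 + [0] * 20)

clf = NearestMeanClassifier().fit(X, y)
print(clf.predict(X[:1]))  # classifies a known car patch
```

With the real datasets, only the feature extractor and classifier change; the fit/predict structure stays the same.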


AirSim - Open source simulator based on Unreal Engine for autonomous vehicles from Microsoft AI & Research

  •    C++

AirSim is a simulator for drones (and soon other vehicles) built on Unreal Engine. It is open-source, cross-platform, and supports hardware-in-the-loop with popular flight controllers such as PX4 for physically and visually realistic simulations. It is developed as an Unreal plugin that can simply be dropped into any Unreal environment you want.

self-driving-toy-car - A self driving toy car using end-to-end learning

  •    Jupyter

The goal is to make a lane follower based on a standard RC car using a Raspberry Pi and a camera. The software is a simple convolutional network that takes in the image fetched from the camera and outputs a steering angle. During data collection, the steering PWM of the car is simply hooked to pin GPIO17. The script raspberry_pi/collect_data.py records the steering PWM values and the associated images. The data from each trial are stored together in driving_trial_*; the trial folders are automatically numbered.
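The automatically numbered trial folders can be sketched in a few lines. This is a hedged, stdlib-only illustration, not the project's actual code; the helper name `next_trial_dir` is made up.

```python
import os
import re
import tempfile

def next_trial_dir(root):
    """Create and return the next driving_trial_N folder under `root`."""
    pattern = re.compile(r"driving_trial_(\d+)$")
    numbers = [
        int(m.group(1))
        for name in os.listdir(root)
        if (m := pattern.match(name))
    ]
    path = os.path.join(root, f"driving_trial_{max(numbers, default=0) + 1}")
    os.makedirs(path)
    return path

# Each call yields a fresh, consecutively numbered trial folder:
root = tempfile.mkdtemp()
print(next_trial_dir(root))  # .../driving_trial_1
print(next_trial_dir(root))  # .../driving_trial_2
```

A data-collection script can then drop each trial's PWM log and images into the folder this returns.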

xviz - A protocol for real-time transfer and visualization of autonomy data

  •    Javascript

XVIZ is a protocol for real-time transfer and visualization of autonomy data. Learn more in the docs and specification. You need Node.js and yarn to run the examples.

burro - Platform for small-scale self-driving vehicles.

  •    Python

Burro is a platform for small-scale self-driving cars. Using Burro you can build either an RC-style Ackermann-steering car or a differential-steering car (like a three-wheel robot). Depending on your hardware, Burro will automatically select and set up the right kind of vehicle each time you run it, so you can share the same SD card among different vehicles without any changes.
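The hardware-based auto-selection described above might look roughly like the sketch below. This is a hedged illustration, not Burro's actual API: the driver names, class names, and probe logic are all hypothetical.

```python
class AckermannCar:
    """RC-style car: a servo steers the front wheels."""
    steering = "ackermann"

class DifferentialCar:
    """Three-wheel-style robot: steering by differential wheel speeds."""
    steering = "differential"

def select_vehicle(detected_drivers):
    """Pick a vehicle class from the set of motor-hardware drivers found at boot."""
    # A servo/ESC board (e.g. a PCA9685) suggests an RC-style Ackermann car;
    # a dual H-bridge motor driver suggests a differential-drive robot.
    if "pca9685" in detected_drivers:
        return AckermannCar()
    if "motor_hat" in detected_drivers:
        return DifferentialCar()
    raise RuntimeError("no supported motor hardware detected")

print(select_vehicle({"pca9685"}).steering)    # ackermann
print(select_vehicle({"motor_hat"}).steering)  # differential
```

Probing the I2C bus at startup is what lets the same SD card boot on different chassis.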

AutonomousDrivingCookbook - Scenarios, tutorials and demos for Autonomous Driving

  •    Jupyter

This project is developed and maintained by the Microsoft Deep Learning and Robotics Garage Chapter. It is currently a work in progress; we will continue to add more tutorials and scenarios based on requests from our users and the availability of our collaborators.

Autonomous driving has transcended far beyond being a crazy moonshot idea over the last half decade or so. It is quickly becoming one of the biggest technologies of today, promising to shape our tomorrow, not unlike when cars first came into existence. Almost every car manufacturer, every big technology company, and a number of very promising startups are working on different aspects of autonomous driving to help shape this revolution. Some of the biggest drivers of this change have been recent advances in software (robotics and deep learning techniques), hardware (GPUs, FPGAs, etc.), and cloud computing. Cloud platforms like Azure have enabled the ingestion and processing of large amounts of data, making it possible for companies to push for levels 4 and 5 of AD autonomy.

MachineLearning - Machine learning code base of Meng Li

  •    Python

Welcome to my blog, 听雨居; it contains a detailed description of the code here.

CarND-Behavioral-Cloning-Project - Built and trained a convolutional neural network for end-to-end driving in a simulator, using TensorFlow and Keras

  •    Python

The purpose of this project is to build and train a deep neural network that can mimic the behavior of a human driving a car. A simple simulator is used for this purpose: when you drive the car, the simulator stores images and steering angles for training. You can then train the neural network on the stored data and check the results in the simulator's autonomous mode. When you first run the simulator, you'll see a configuration screen asking what size and graphical quality you would like.
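The image-to-steering-angle regression at the heart of behavioral cloning can be illustrated without the project's TensorFlow/Keras stack. This is a hedged, dependency-free sketch: a linear least-squares fit on tiny synthetic "frames" stands in for the CNN, and all data here is made up.

```python
import numpy as np

rng = np.random.default_rng(42)
n_frames, h, w = 200, 8, 8
images = rng.random((n_frames, h, w))                 # stand-ins for camera frames
true_weights = rng.standard_normal(h * w)
angles = images.reshape(n_frames, -1) @ true_weights  # "recorded" steering angles

# "Training": solve min_w ||X w - angles||^2, the simplest behavioral clone.
X = images.reshape(n_frames, -1)
w_fit, *_ = np.linalg.lstsq(X, angles, rcond=None)

# "Autonomous mode": predict the steering angle for a new frame.
frame = rng.random((h, w))
pred = float(frame.reshape(-1) @ w_fit)
print(round(pred, 3))
```

The real project replaces the linear map with a convolutional network, but the train-on-logged-pairs, predict-per-frame loop is the same.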

CarND-Behavioral-Cloning - Self-driving car in a simulator controlled by a tiny neural network

  •    Jupyter

This is a project for the Udacity Self-Driving Car Nanodegree program. The aim of this project is to control a car in a simulator using a neural network. This implementation uses a convolutional neural network (CNN) with only 63 parameters, yet performs well on both the training and test tracks. The implementation is in the files drive.py and model.py, and the explanation is in project-3.ipynb. Videos of this neural network in action are here: Track 1, Track 2. A post about this solution is at Self-driving car in a simulator with a tiny neural network. The connection between the simulator and the controlling neural network is handled automatically.
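A 63-parameter CNN sounds implausibly small until you tally how layer parameters are counted. The sketch below only shows the counting arithmetic; the project's actual 63-parameter architecture is not listed here, so the layer sizes in the example are illustrative, not the real ones.

```python
def conv_params(kh, kw, c_in, c_out, bias=True):
    """Parameters of a 2D conv layer: one kh x kw x c_in kernel per output channel."""
    return kh * kw * c_in * c_out + (c_out if bias else 0)

def dense_params(n_in, n_out, bias=True):
    """Parameters of a fully connected layer."""
    return n_in * n_out + (n_out if bias else 0)

# Example: a 3x3 conv from 1 input channel to 2 channels, then a 4 -> 1 dense head.
print(conv_params(3, 3, 1, 2))  # 3*3*1*2 + 2 = 20
print(dense_params(4, 1))       # 4*1 + 1 = 5
```

With aggressive downsampling between such layers, a steering regressor really can stay in the double digits of parameters.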

lane-detection-with-opencv - Apply computer vision to label the lanes in a driving video

  •    Jupyter

This project uses computer vision to label the lanes in a driving video, calculate the curvature of the lane, and estimate the distance of the vehicle from the center of the lane. If you don't already have tools like Jupyter and OpenCV installed, follow the Udacity instructions to configure an Anaconda environment that provides them.
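The lane-curvature step above is commonly done by fitting detected lane pixels with a second-order polynomial x = A*y^2 + B*y + C and evaluating the standard radius-of-curvature formula. This is a hedged sketch on synthetic pixel coordinates; the project itself works on real video frames (and converts pixels to meters).

```python
import numpy as np

def lane_curvature_radius(ys, xs, y_eval):
    """Radius of curvature of the fitted x(y) = A y^2 + B y + C at y = y_eval."""
    A, B, _ = np.polyfit(ys, xs, 2)
    return (1 + (2 * A * y_eval + B) ** 2) ** 1.5 / abs(2 * A)

# Synthetic lane pixels on a known parabola x = 1e-3 * y^2:
ys = np.linspace(0, 700, 50)
xs = 1e-3 * ys ** 2
print(round(lane_curvature_radius(ys, xs, y_eval=700), 1))
```

Evaluating at the bottom of the image (largest y) gives the curvature nearest the vehicle, which is the number usually overlaid on the video.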

image-to-3d-bbox - Build a CNN network to predict 3D bounding box of car from 2D image.

  •    Jupyter

This is an implementation in Keras of the paper "3D Bounding Box Estimation Using Deep Learning and Geometry" (https://arxiv.org/abs/1612.00496).





