Bullet Physics SDK - Real-time collision detection and multi-physics simulation for VR, games, visual effects, robotics, machine learning etc


https://pybullet.org/wordpress/
https://github.com/bulletphysics/bullet3
https://pypi.org/project/pybullet
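For a quick feel of the Python bindings, here is a minimal sketch of a PyBullet session using the `pybullet` package from PyPI; the URDF files are samples bundled with the companion `pybullet_data` package.

```python
import pybullet as p
import pybullet_data

# Start a headless physics server (use p.GUI for the interactive visualizer).
client = p.connect(p.DIRECT)
p.setAdditionalSearchPath(pybullet_data.getDataPath())  # bundled sample assets
p.setGravity(0, 0, -9.81)

plane = p.loadURDF("plane.urdf")
robot = p.loadURDF("r2d2.urdf", basePosition=[0, 0, 1])

# Step the simulation at the default 240 Hz for one simulated second.
for _ in range(240):
    p.stepSimulation()

position, orientation = p.getBasePositionAndOrientation(robot)
print(position)
p.disconnect()
```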

Related Projects

GibsonEnv - Gibson Environments: Real-World Perception for Embodied Agents

  •    C

You shouldn't play video games all day, and neither should your AI! We built a virtual environment simulator, Gibson, that offers real-world experience for learning perception: (I) it is built from the real world and reflects its semantic complexity by virtualizing real spaces; (II) it has a baked-in mechanism for transferring to the real world (the Goggles function); and (III) it embodies the agent and subjects it to the constraints of space and physics by integrating a physics engine (Bullet Physics).

jetson-reinforcement - Deep reinforcement learning GPU libraries for NVIDIA Jetson with PyTorch, OpenAI Gym, and Gazebo robotics simulator

  •    C++

In this tutorial, we'll create artificially intelligent agents that learn from interacting with their environment, gathering experience and rewards, using deep reinforcement learning (deep RL). Using end-to-end neural networks that translate raw pixels into actions, RL-trained agents are capable of exhibiting intuitive behaviors and performing complex tasks. Ultimately, our aim is to train reinforcement learning agents in 3D virtual robotic simulation and transfer them to real-world robots. A reinforcement learner chooses the best action for the agent to perform based on the environmental state (such as camera inputs) and rewards that provide feedback about its performance. Given a policy, or task - such as obtaining the reward - reinforcement learning can learn to behave optimally in its environment.
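The agent-environment loop described above can be sketched with OpenAI Gym, which the project builds on. CartPole stands in here for the 3D robotic simulation, and a random policy stands in for the trained network, purely to illustrate the state, action, and reward cycle (using the classic Gym step signature).

```python
import gym

env = gym.make("CartPole-v1")
state = env.reset()

total_reward = 0.0
done = False
while not done:
    # A trained deep RL agent would map the state (e.g. camera pixels) to an action;
    # here a random action is sampled just to show the interaction loop.
    action = env.action_space.sample()
    state, reward, done, info = env.step(action)
    total_reward += reward

print("episode return:", total_reward)
env.close()
```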

rex-gym - OpenAI Gym environments for an open-source quadruped robot (SpotMicro)

  •    Python

The goal of this project is to train an open-source 3D-printed quadruped robot by exploring reinforcement learning and OpenAI Gym. The aim is to let the robot learn domestic and generic tasks in simulation and then successfully transfer that knowledge (control policies) to the real robot without any further manual tuning. This project is largely inspired by the incredible work done by Boston Dynamics.

flightmare - An Open Flexible Quadrotor Simulator

  •    C++

Flightmare is a flexible modular quadrotor simulator. Flightmare is composed of two main components: a configurable rendering engine built on Unity and a flexible physics engine for dynamics simulation. Those two components are totally decoupled and can run independently from each other. Flightmare comes with several desirable features: (i) a large multi-modal sensor suite, including an interface to extract the 3D point-cloud of the scene; (ii) an API for reinforcement learning which can simulate hundreds of quadrotors in parallel; and (iii) an integration with a virtual-reality headset for interaction with the simulated environment. Flightmare can be used for various applications, including path-planning, reinforcement learning, visual-inertial odometry, deep learning, human-robot interaction, etc. Installation instructions can be found in our Wiki.

habitat-lab - A modular high-level library to train embodied AI agents across a variety of tasks, environments, and simulators

  •    Python

Habitat Lab is a modular high-level library for end-to-end development in embodied AI -- defining embodied AI tasks (e.g. navigation, instruction following, question answering), configuring embodied agents (physical form, sensors, capabilities), training these agents (via imitation or reinforcement learning, or no learning at all as in classical SLAM), and benchmarking their performance on the defined tasks using standard metrics. Habitat Lab currently uses Habitat-Sim as the core simulator, but is designed with a modular abstraction for the simulator backend to maintain compatibility over multiple simulators. See the project documentation for details.
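For a sense of the API, the sketch below follows the shape of the Habitat Lab introductory example: a PointNav task driven with random actions. The configuration YAML path varies between releases, so treat it as an assumption.

```python
import habitat

# Load a PointNav task configuration; the exact YAML path depends on the release.
config = habitat.get_config("configs/tasks/pointnav.yaml")
env = habitat.Env(config=config)

observations = env.reset()

# Step the episode with random actions until it terminates.
while not env.episode_over:
    observations = env.step(env.action_space.sample())

env.close()
```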


habitat-sim - A flexible, high-performance 3D simulator for Embodied AI research.

  •    C++

The design philosophy of Habitat is to prioritize simulation speed over the breadth of simulation capabilities. When rendering a scene from the Matterport3D dataset, Habitat-Sim achieves several thousand frames per second (FPS) running single-threaded and reaches over 10,000 FPS multi-process on a single GPU. Habitat-Sim simulates a Fetch robot interacting in ReplicaCAD scenes at over 8,000 steps per second (SPS), where each ‘step’ involves rendering 1 RGBD observation (128×128 pixels) and rigid-body dynamics for 1/30sec. Habitat-Sim is typically used with Habitat-Lab, a modular high-level library for end-to-end experiments in embodied AI -- defining embodied AI tasks (e.g. navigation, instruction following, question answering), training agents (via imitation or reinforcement learning, or no learning at all as in classical SensePlanAct pipelines), and benchmarking their performance on the defined tasks using standard metrics.

tensor2robot - Distributed machine learning infrastructure for large-scale robotics research

  •    Python

This repository contains distributed machine learning and reinforcement learning infrastructure. It is used internally at Alphabet, and open-sourced with the intention of making research at Robotics @ Google more reproducible for the broader robotics and computer vision communities.

SerpentAI - Game Agent Framework. Helping you create AIs / Bots to play any game you own!

  •    Jupyter

The framework features a large assortment of supporting modules that provide solutions to commonly encountered scenarios when using video games as environments, as well as CLI tools to accelerate development. It provides some useful conventions but is absolutely NOT opinionated about what you put in your agents: Want to use the latest, cutting-edge deep reinforcement learning algorithm? ALLOWED. Want to use computer vision techniques, image processing and trigonometry? ALLOWED. Want to randomly press the Left or Right buttons? sigh ALLOWED. To top it all off, Serpent.AI was designed to be entirely plugin-based (for both game support and game agents) so your experiments are actually portable and distributable to your peers and random strangers on the Internet. You'll also be glad to hear that all 3 major OSes are supported: Linux, Windows & macOS.

hexapod - Blazing fast hexapod robot simulator for the web.

  •    Javascript

You can use this web app to solve inverse kinematics, simulate various gaits, and more. In real time, you can also view all the angles the robot's eighteen joints make at any particular pose. All the computations are done solely in your browser, nothing fetches data from anywhere else, so it should be fast. Another (somewhat) cool thing is that this app does NOT depend on any external mathematics library; it only uses Javascript's built-in Math object. If you'd like to build your own user interface with Node, you can download the algorithm alone as a package: Hexapod Kinematics Library. There is also a modified fork where you can use the app to control a physical hexapod robot, as shown in the project's demo GIF.

simulator - A ROS/ROS2 Multi-robot Simulator for Autonomous Vehicles

  •    CSharp

Check out our latest news and subscribe to our mailing list to get the latest updates. LG Electronics America R&D Lab has developed an HDRP Unity-based multi-robot simulator for autonomous vehicle developers. We provide an out-of-the-box solution which can meet the needs of developers wishing to focus on testing their autonomous vehicle algorithms. It currently has integration with The Autoware Foundation's Autoware.auto and Baidu's Apollo platforms, can generate HD maps, and can be immediately used for testing and validation of a whole system with little need for custom integrations. We hope to build a collaborative community among robotics and autonomous vehicle developers by open sourcing our efforts.
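The simulator is typically scripted through its Python API. The sketch below follows the shape of the quickstart scripts; the map name and ego-vehicle asset name are assumptions that depend on the assets installed locally, and the simulator must already be running on the default port.

```python
import lgsvl

# Connect to a running simulator instance (default host/port).
sim = lgsvl.Simulator("127.0.0.1", 8181)

# Load a map; "BorregasAve" is an assumption -- use a map available in your installation.
if sim.current_scene == "BorregasAve":
    sim.reset()
else:
    sim.load("BorregasAve")

# Spawn an ego vehicle at the first predefined spawn point.
# The vehicle asset name is an assumption and must match your local asset catalog.
state = lgsvl.AgentState()
state.transform = sim.get_spawn()[0]
ego = sim.add_agent("Lincoln2017MKZ", lgsvl.AgentType.EGO, state)

# Run the simulation for five seconds of simulated time.
sim.run(5.0)
```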

mujoco - Multi-Joint dynamics with Contact. A general purpose physics simulator.

  •    C

MuJoCo stands for Multi-Joint dynamics with Contact. It is a general purpose physics engine that aims to facilitate research and development in robotics, biomechanics, graphics and animation, machine learning, and other areas which demand fast and accurate simulation of articulated structures interacting with their environment. DeepMind has acquired MuJoCo, and we are currently making preparations to open source the codebase. In the meantime, MuJoCo is available for download as a free and unrestricted precompiled binary under the Apache 2.0 license from mujoco.org.
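Assuming the official `mujoco` Python bindings that accompany the open-sourced engine (this listing predates their release, so treat the package and its API as an assumption), a minimal rollout of an inline MJCF model might look like this:

```python
import mujoco

# A tiny model defined inline: a single free-falling box above a ground plane.
XML = """
<mujoco>
  <worldbody>
    <geom type="plane" size="1 1 0.1"/>
    <body pos="0 0 1">
      <freejoint/>
      <geom type="box" size="0.1 0.1 0.1" mass="1"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)

# Advance the simulation for one second of simulated time.
while data.time < 1.0:
    mujoco.mj_step(model, data)

print("final height of the box:", data.qpos[2])
```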

carla - Open-source simulator for autonomous driving research.

  •    C++

CARLA is an open-source simulator for autonomous driving research. CARLA has been developed from the ground up to support development, training, and validation of autonomous urban driving systems. In addition to open-source code and protocols, CARLA provides open digital assets (urban layouts, buildings, vehicles) that were created for this purpose and can be used freely. The simulation platform supports flexible specification of sensor suites and environmental conditions. If you want to benchmark your model in the same conditions as in our CoRL’17 paper, check out Benchmarking.
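CARLA is driven through its Python client library. A minimal sketch that connects to an already running server, spawns one vehicle, and hands it to the built-in autopilot could look like this (blueprint and spawn point are chosen at random):

```python
import random
import carla

# Connect to a CARLA server that is already running on the default port.
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# Pick a random vehicle blueprint and a free spawn point, then spawn the actor.
blueprint = random.choice(world.get_blueprint_library().filter("vehicle.*"))
spawn_point = random.choice(world.get_map().get_spawn_points())
vehicle = world.spawn_actor(blueprint, spawn_point)

# Let the built-in autopilot drive the vehicle around the map.
vehicle.set_autopilot(True)
```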

tensorforce - TensorForce: A TensorFlow library for applied reinforcement learning

  •    Python

TensorForce is an open source reinforcement learning library focused on providing clear APIs, readability and modularisation to deploy reinforcement learning solutions both in research and practice. TensorForce is built on top of TensorFlow and compatible with Python 2.7 and >3.5 and supports multiple state inputs and multi-dimensional actions to be compatible with any type of simulation or application environment. TensorForce also aims to move all reinforcement learning logic into the TensorFlow graph, including control flow. This both reduces dependencies on the host language (Python), thus enabling portable computation graphs that can be used in other languages and contexts, and improves performance.
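In recent Tensorforce releases, agents and environments are created from specifications. A minimal training loop on a Gym task is sketched below; the hyperparameters are illustrative only and keyword arguments vary somewhat between versions.

```python
from tensorforce import Agent, Environment

# Wrap an OpenAI Gym task as a Tensorforce environment.
environment = Environment.create(
    environment="gym", level="CartPole-v1", max_episode_timesteps=500
)

# Create a PPO agent from a specification; hyperparameters are illustrative only.
agent = Agent.create(
    agent="ppo", environment=environment, batch_size=10, learning_rate=1e-3
)

for _ in range(100):  # episodes
    states = environment.reset()
    terminal = False
    while not terminal:
        actions = agent.act(states=states)
        states, terminal, reward = environment.execute(actions=actions)
        agent.observe(terminal=terminal, reward=reward)

agent.close()
environment.close()
```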

Deep-Reinforcement-Learning-Survey - My Exploration on Deep Reinforcement Learning Survey

  •    

If you're a newbie in deep reinforcement learning, I suggest you read the blog post and open course first.

stable-baselines3 - PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms

  •    Python

Stable Baselines3 (SB3) is a set of reliable implementations of reinforcement learning algorithms in PyTorch. It is the next major version of Stable Baselines. You can read a detailed presentation of Stable Baselines3 in the v1.0 blog post.
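Its interface mirrors the original Stable Baselines; training PPO on a Gym environment takes only a few lines, roughly as in the README quickstart (classic Gym step signature assumed):

```python
import gym
from stable_baselines3 import PPO

env = gym.make("CartPole-v1")

# Train a PPO agent with a multilayer-perceptron policy.
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)

# Run the trained policy for one evaluation episode.
obs = env.reset()
done = False
while not done:
    action, _state = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)

env.close()
```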

Reinforcement Learning Simulation

  •    CSharp

This is a third-year computer science project: a software system for simulating and animating reinforcement learning (RL) algorithms, mainly for modular robots.

ml-agents - Unity Machine Learning Agents

  •    CSharp

Unity Machine Learning Agents (ML-Agents) is an open-source Unity plugin that enables games and simulations to serve as environments for training intelligent agents. Agents can be trained using reinforcement learning, imitation learning, neuroevolution, or other machine learning methods through a simple-to-use Python API. We also provide implementations (based on TensorFlow) of state-of-the-art algorithms to enable game developers and hobbyists to easily train intelligent agents for 2D, 3D and VR/AR games. These trained agents can be used for multiple purposes, including controlling NPC behavior (in a variety of settings such as multi-agent and adversarial), automated testing of game builds and evaluating different game design decisions pre-release. ML-Agents is mutually beneficial for both game developers and AI researchers as it provides a central platform where advances in AI can be evaluated on Unity’s rich environments and then made accessible to the wider research and game developer communities. For more information, in addition to installation and usage instructions, see our documentation home. If you have used a version of ML-Agents prior to v0.3, we strongly recommend our guide on migrating to v0.3.
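From the Python side, an agent talks to a Unity build through the `mlagents_envs` low-level API. The sketch below assumes a recent release (the API changed substantially after v0.3) and a build exposing a single behavior with continuous actions; the build name is an assumption.

```python
import numpy as np
from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.base_env import ActionTuple

# Path to a built Unity environment; pass file_name=None to connect to the Editor instead.
env = UnityEnvironment(file_name="3DBall")  # build name is an assumption
env.reset()

behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]

for _ in range(200):
    decision_steps, terminal_steps = env.get_steps(behavior_name)
    # Send random continuous actions for every agent requesting a decision.
    n_agents = len(decision_steps)
    actions = ActionTuple(
        continuous=np.random.uniform(-1.0, 1.0, (n_agents, spec.action_spec.continuous_size))
    )
    env.set_actions(behavior_name, actions)
    env.step()

env.close()
```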

recsim - A Configurable Recommender Systems Simulation Platform

  •    Python

RecSim is a configurable platform for authoring simulation environments for recommender systems (RSs) that naturally supports sequential interaction with users. RecSim allows the creation of new environments that reflect particular aspects of user behavior and item structure at a level of abstraction well-suited to pushing the limits of current reinforcement learning (RL) and RS techniques in sequential interactive recommendation problems. Environments can be easily configured that vary assumptions about: user preferences and item familiarity; user latent state and its dynamics; and choice models and other user response behavior. We outline how RecSim offers value to RL and RS researchers and practitioners, and how it can serve as a vehicle for academic-industrial collaboration. For a detailed description of the RecSim architecture please read Ie et al. Please cite the paper if you use the code from this repository in your work. This is not an officially supported Google product.

webots - Webots Robot Simulator

  •    C++

Webots is an open-source robot simulator released under the terms of the Apache 2.0 license. It provides a complete development environment to model, program and simulate robots, vehicles and biomechanical systems. You can download pre-compiled binaries for Windows, macOS and Linux of the latest release, as well as older releases and nightly builds.
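Robot behavior in Webots is written as controller programs. A minimal Python controller for a recent Webots release is sketched below; the motor device name is an assumption that depends on the robot model in your world.

```python
from controller import Robot

robot = Robot()
timestep = int(robot.getBasicTimeStep())

# Device names depend on the robot model; "left wheel motor" is an assumption.
motor = robot.getDevice("left wheel motor")
motor.setPosition(float("inf"))  # switch to velocity-control mode
motor.setVelocity(2.0)           # rad/s

# Main control loop: step the simulation until Webots stops the controller.
while robot.step(timestep) != -1:
    pass
```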





