pysc2 - StarCraft II Learning Environment


PySC2 is DeepMind's Python component of the StarCraft II Learning Environment (SC2LE). It exposes Blizzard Entertainment's StarCraft II Machine Learning API as a Python RL environment, and is a collaboration between DeepMind and Blizzard to develop StarCraft II into a rich environment for RL research. PySC2 provides an interface through which RL agents interact with StarCraft II, receiving observations and sending actions. We have published an accompanying blog post and paper, which outline our motivation for using StarCraft II for deep RL research and present some initial research results using the environment.

https://github.com/deepmind/pysc2
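
As a sketch of that observation/action loop (class and function names follow the pysc2 agent API documented in the repository; the map used in the run command is just an example):

```python
from pysc2.agents import base_agent
from pysc2.lib import actions


class NoOpAgent(base_agent.BaseAgent):
  """Receives an observation every step and always replies with a no-op."""

  def step(self, obs):
    super(NoOpAgent, self).step(obs)
    # obs.observation holds the feature-layer data; obs.reward the last reward.
    return actions.FunctionCall(actions.FUNCTIONS.no_op.id, [])


# Run with: python -m pysc2.bin.agent --map Simple64 --agent <module>.NoOpAgent
```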

Related Projects

Replicating-DeepMind - Reproducing the results of "Playing Atari with Deep Reinforcement Learning" by DeepMind

  •    C++

All the information is in our Wiki. Progress: the system is up and running on a GPU cluster with cuda-convnet2. It can learn to play better than random, but not much better yet :) It is rather fast, though still about 2x slower than DeepMind's original system. RMSprop is not implemented yet; that is our next goal.
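
For reference, this is the standard RMSprop update the project was aiming to add (a generic sketch; the hyperparameter values are illustrative, not the project's):

```python
import numpy as np

def rmsprop_update(w, grad, cache, lr=0.00025, decay=0.95, eps=1e-6):
    """One RMSprop step: scale the gradient by a running RMS of its history."""
    cache = decay * cache + (1.0 - decay) * grad ** 2  # running mean of squared grads
    w = w - lr * grad / (np.sqrt(cache) + eps)         # per-parameter adaptive step
    return w, cache
```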

TorchCraft - Connecting Torch to StarCraft

  •    C++

A bridge between Torch and StarCraft. Synnaeve, G., Nardelli, N., Auvolat, A., Chintala, S., Lacroix, T., Lin, Z., Richoux, F. and Usunier, N., 2016. TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games - arXiv:1611.00625.

s2client-proto - StarCraft II Client - protocol definitions used to communicate with StarCraft II.

  •    Protocol

The StarCraft II API is an interface that provides full external control of StarCraft II. The API is available in the retail Windows and Mac clients, and Linux clients are available via the download links in the repository.
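
Since the protocol is just protobuf messages over a websocket, a minimal exchange can be sketched directly against the published s2clientprotocol package (the port and /sc2api endpoint below assume a client launched with the usual -listen/-port flags; adjust to your setup):

```python
import websocket  # pip install websocket-client
from s2clientprotocol import sc2api_pb2 as sc_pb  # pip install s2clientprotocol

# Assumes a client started with: SC2 -listen 127.0.0.1 -port 5000
ws = websocket.create_connection("ws://127.0.0.1:5000/sc2api")
request = sc_pb.Request(ping=sc_pb.RequestPing())
ws.send_binary(request.SerializeToString())
response = sc_pb.Response.FromString(ws.recv())
print(response.ping.game_version)
```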

DeepMind-Atari-Deep-Q-Learner - The original code from the DeepMind article + my tweaks

  •    Lua

This repository hosts the original code published along with the Nature article, plus my experiments (if any) with it. It contains the source code of DQN 3.0, a Lua-based deep reinforcement learning architecture, necessary to reproduce the experiments described in the paper "Human-level control through deep reinforcement learning", Nature 518, 529–533 (26 February 2015), doi:10.1038/nature14236.
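
Not the Lua code itself, but the core Bellman target that DQN regresses its Q-network onto, as a numpy sketch (the discount of 0.99 follows the Nature paper):

```python
import numpy as np

def dqn_targets(rewards, q_next, terminal, gamma=0.99):
    """rewards, terminal: shape (batch,); q_next: (batch, n_actions) from the target network."""
    # Terminal transitions bootstrap nothing, so the max term is zeroed out.
    return rewards + gamma * (1.0 - terminal) * q_next.max(axis=1)
```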

lab - A customisable 3D platform for agent-based AI research

  •    C

DeepMind Lab is a 3D learning environment based on id Software's Quake III Arena via ioquake3 and other open source software. DeepMind Lab provides a suite of challenging 3D navigation and puzzle-solving tasks for learning agents. Its primary purpose is to act as a testbed for research in artificial intelligence, especially deep reinforcement learning.
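
A minimal episode-loop sketch, assuming the deepmind_lab Python module built from the repo; the level name, observation key, and config follow its random-agent example:

```python
import numpy as np
import deepmind_lab  # built from the lab repository

env = deepmind_lab.Lab('seekavoid_arena_01', ['RGB_INTERLACED'],
                       config={'width': '84', 'height': '84'})
env.reset()
noop = np.zeros(len(env.action_spec()), dtype=np.intc)  # one entry per action axis
reward = env.step(noop, num_steps=4)  # repeat the action for 4 frames
frame = env.observations()['RGB_INTERLACED']  # 84x84x3 uint8 image
```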


gym-starcraft - StarCraft environment for OpenAI Gym, based on Facebook's TorchCraft. (In progress)

  •    Python

Gym StarCraft is an environment bundle for OpenAI Gym. It is based on Facebook's TorchCraft, a bridge between Torch and StarCraft for AI research. To get started, install OpenAI Gym and its dependencies.
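
The payoff of a Gym bundle is the standard reset/step loop; the sketch below uses a placeholder environment id (check the repo for the registered names and for the TorchCraft server each environment expects to connect to):

```python
import gym
import gym_starcraft  # registers the StarCraft envs with Gym (assumed import name)

env = gym.make('StarCraft-v0')  # placeholder id; see the repo for real ones
obs = env.reset()
done = False
while not done:
    obs, reward, done, info = env.step(env.action_space.sample())
env.close()
```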

.NET StarCraft II Replay Parser

  •    DotNet

A .NET 3.5 library, written in C#, for parsing StarCraft II replays.

coach - Reinforcement Learning Coach by Intel® AI Lab enables easy experimentation with state-of-the-art Reinforcement Learning algorithms

  •    Python

Coach is a Python reinforcement learning research framework containing implementations of many state-of-the-art algorithms. It exposes a set of easy-to-use APIs for experimenting with new RL algorithms, and allows simple integration of new environments to solve. Basic RL components (algorithms, environments, neural network architectures, exploration policies, ...) are well decoupled, so extending and reusing existing components is fairly painless.

s2client-api - StarCraft II Client - C++ library supported on Windows, Linux and Mac designed for building scripted bots and research using the SC2API

  •    C++

The StarCraft II API provides access to in-game state observation and unit control. The API is a wrapper around a protobuf-defined protocol over a websocket connection. While it's possible to write to the protocol directly, this library provides a class-based C++ abstraction; a simple example is included in the repository.

Starcraft Calendar


A calendar application for the StarCraft community, based on the Team Liquid calendar XML service.

s2protocol - Python library to decode StarCraft II replay protocols

  •    Python

s2protocol is a reference Python library and standalone tool to decode StarCraft II replay files into Python data structures. s2protocol can be used as a base-build-specific library to decode binary blobs, or it can be run as a standalone tool to pretty-print information from supported replay files.
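
A sketch of the base-build-specific flow following the s2protocol README: mpyq opens the replay's MPQ archive, the header names the base build, and that build's protocol module decodes the rest:

```python
import mpyq  # SC2 replays are MPQ archives
from s2protocol import versions

archive = mpyq.MPQArchive('example.SC2Replay')
# The header can be decoded by any build; it names the base build that
# produced the replay, which selects the matching protocol module.
header = versions.latest().decode_replay_header(
    archive.header['user_data_header']['content'])
protocol = versions.build(header['m_version']['m_baseBuild'])
details = protocol.decode_replay_details(archive.read_file('replay.details'))
print(details['m_title'])
```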

ml-agents - Unity Machine Learning Agents

  •    CSharp

Unity Machine Learning Agents (ML-Agents) is an open-source Unity plugin that enables games and simulations to serve as environments for training intelligent agents. Agents can be trained using reinforcement learning, imitation learning, neuroevolution, or other machine learning methods through a simple-to-use Python API. We also provide implementations (based on TensorFlow) of state-of-the-art algorithms to enable game developers and hobbyists to easily train intelligent agents for 2D, 3D and VR/AR games. These trained agents can be used for multiple purposes, including controlling NPC behavior (in a variety of settings such as multi-agent and adversarial), automated testing of game builds and evaluating different game design decisions pre-release.

ML-Agents is mutually beneficial for both game developers and AI researchers as it provides a central platform where advances in AI can be evaluated on Unity's rich environments and then made accessible to the wider research and game developer communities. For more information, in addition to installation and usage instructions, see our documentation home. If you have used a version of ML-Agents prior to v0.3, we strongly recommend our guide on migrating to v0.3.
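
A hedged sketch of driving a built Unity environment from the Python side, assuming the v0.3-era unityagents package and an example build named "3DBall" (attribute names changed in later releases, so treat this as illustrative):

```python
import numpy as np
from unityagents import UnityEnvironment  # v0.3-era Python package

env = UnityEnvironment(file_name="3DBall")  # path to a built Unity environment
brain_name = env.brain_names[0]
brain = env.brains[brain_name]

env_info = env.reset(train_mode=True)[brain_name]
for _ in range(100):
    # One random continuous action vector per agent controlled by this brain.
    action = np.random.randn(len(env_info.agents), brain.vector_action_space_size)
    env_info = env.step(action)[brain_name]
    rewards = env_info.rewards
env.close()
```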

GibsonEnv - Gibson Environments: Real-World Perception for Embodied Agents

  •    C

You shouldn't play video games all day, and neither should your AI! We built a virtual environment simulator, Gibson, that offers real-world experience for learning perception. Its key properties are: I. it is based on the real world and reflects its semantic complexity by virtualizing real spaces; II. it has a baked-in mechanism for transferring to the real world (the Goggles function); and III. it embodies the agent and subjects it to the constraints of space and physics by integrating a physics engine (Bullet Physics).

MDX2DAE


MDX2DAE is a tool for 3D model conversion. It will be able to convert models of the MDX format (used by Blizzard Entertainment) into the open standard COLLADA format. This will enable users to edit the files in 3D software like 3ds Max.

dm_control - The DeepMind Control Suite and Package

  •    Python

A set of Python reinforcement learning environments powered by the MuJoCo physics engine (see the suite subdirectory), together with libraries that provide Python bindings to MuJoCo.
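
A minimal sketch following the dm_control suite documentation (domain and task names are from the suite's built-in set):

```python
import numpy as np
from dm_control import suite

env = suite.load(domain_name='cartpole', task_name='swingup')
spec = env.action_spec()

time_step = env.reset()
while not time_step.last():
    # Sample a uniform random action within the bounds of the action spec.
    action = np.random.uniform(spec.minimum, spec.maximum, size=spec.shape)
    time_step = env.step(action)  # carries .reward, .discount, .observation
```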

StarData - Starcraft AI Research Dataset

  •    Python

We release the largest StarCraft: Brood War replay dataset yet, with 65,646 games. The full dataset after compression is 365 GB, 1,535 million frames, and 496 million player actions. The frame data was dumped at 8 frames per second. We made a big effort to ensure the dataset is clean and consists mostly of high-quality replays. You can access it with TorchCraft in C++, Python, and Lua. The replays are in an AWS S3 bucket at s3://stardata; see our whitepaper on arXiv for more details. Note: the current set of replays is only compatible with the 1.3.0 version of torchcraft included here.
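
A hedged sketch of reading one replay with the bundled TorchCraft Python bindings; the method names loosely follow the StarData examples for the included torchcraft 1.3.0 and may differ in other versions:

```python
from torchcraft import replayer  # the torchcraft 1.3.0 bundled with StarData

rep = replayer.load('path/to/some_replay.tcr')
for i in range(len(rep)):
    frame = rep.getFrame(i)
    units = frame.units()  # map from player id to that player's units
```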

ThunderGraft

  •    C++

ThunderGraft is an MPQDraft plugin that allows Diablo, Diablo II, Starcraft, and Warcraft II Battle.net Edition to use newer, superior audio compression formats including MP3, Ogg Vorbis, and FLAC, in addition to the existing ADPCM WAV compression.

keras-rl - Deep Reinforcement Learning for Keras.

  •    Python

keras-rl implements some state-of-the-art deep reinforcement learning algorithms in Python and seamlessly integrates with the deep learning library Keras. Just like Keras, it works with either Theano or TensorFlow, which means you can train your algorithms efficiently on either CPU or GPU. Furthermore, keras-rl works with OpenAI Gym out of the box, so evaluating and playing around with different algorithms is easy. Of course, you can extend keras-rl according to your own needs: you can use built-in Keras callbacks and metrics or define your own, and it is easy to implement your own environments and even algorithms by simply extending some simple abstract classes. In a nutshell: keras-rl makes it really easy to run state-of-the-art deep reinforcement learning algorithms, uses Keras (and thus Theano or TensorFlow), and was built with OpenAI Gym in mind.
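
A compact sketch following the keras-rl examples: a small Q-network on a Gym task, wrapped by DQNAgent (hyperparameters mirror the README's CartPole example):

```python
import gym
from keras.models import Sequential
from keras.layers import Dense, Flatten
from keras.optimizers import Adam
from rl.agents.dqn import DQNAgent
from rl.memory import SequentialMemory
from rl.policy import BoltzmannQPolicy

env = gym.make('CartPole-v0')
nb_actions = env.action_space.n

# A tiny Q-network; window_length=1 means the net sees single observations.
model = Sequential([
    Flatten(input_shape=(1,) + env.observation_space.shape),
    Dense(16, activation='relu'),
    Dense(nb_actions, activation='linear'),
])

dqn = DQNAgent(model=model, nb_actions=nb_actions,
               memory=SequentialMemory(limit=50000, window_length=1),
               nb_steps_warmup=10, target_model_update=1e-2,
               policy=BoltzmannQPolicy())
dqn.compile(Adam(lr=1e-3), metrics=['mae'])
dqn.fit(env, nb_steps=50000, visualize=False, verbose=2)
```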

chainerrl - ChainerRL is a deep reinforcement learning library built on top of Chainer.

  •    Python

ChainerRL is a deep reinforcement learning library that implements various state-of-the-art deep reinforcement learning algorithms in Python using Chainer, a flexible deep learning framework. ChainerRL is tested with Python 2.7+ and 3.5.1+; for other requirements, see requirements.txt.
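
A condensed sketch of the ChainerRL quickstart, DQN on a Gym task (names follow the quickstart of that era and may differ slightly across releases):

```python
import gym
import numpy as np
import chainer
import chainerrl

env = gym.make('CartPole-v0')
obs_size = env.observation_space.shape[0]
n_actions = env.action_space.n

q_func = chainerrl.q_functions.FCStateQFunctionWithDiscreteAction(
    obs_size, n_actions, n_hidden_channels=50, n_hidden_layers=1)
optimizer = chainer.optimizers.Adam(eps=1e-2)
optimizer.setup(q_func)
explorer = chainerrl.explorers.ConstantEpsilonGreedy(
    epsilon=0.3, random_action_func=env.action_space.sample)
replay_buffer = chainerrl.replay_buffer.ReplayBuffer(capacity=10 ** 6)
agent = chainerrl.agents.DQN(
    q_func, optimizer, replay_buffer, gamma=0.95, explorer=explorer,
    replay_start_size=500,
    phi=lambda x: x.astype(np.float32, copy=False))  # Gym gives float64 obs

for episode in range(50):
    obs, reward, done = env.reset(), 0.0, False
    while not done:
        action = agent.act_and_train(obs, reward)
        obs, reward, done, _ = env.step(action)
    agent.stop_episode_and_train(obs, reward, done)
```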




