This is a simple application that learns to play Othello through reinforcement learning; TD(0) is used to evaluate a policy (a minimal sketch of the TD(0) update rule follows the project details below).
https://github.com/qiyiping/othello
Tags | reinforcement-learning othello |
Implementation | Python |
License | Apache |
Platform | Windows Linux |
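A minimal sketch of a tabular TD(0) value update under a fixed policy (the episode generator and state encoding are assumptions for illustration, not code from this repository):

    # Tabular TD(0) policy evaluation -- illustrative sketch only.
    # V maps a board state (any hashable key) to its estimated value under a fixed policy.
    from collections import defaultdict

    def td0_evaluate(play_episode, num_episodes=10_000, alpha=0.1, gamma=1.0):
        """play_episode() is assumed to yield (state, reward, next_state) transitions
        generated by the policy being evaluated."""
        V = defaultdict(float)
        for _ in range(num_episodes):
            for state, reward, next_state in play_episode():
                # TD(0): move V(s) toward the bootstrapped target r + gamma * V(s')
                td_target = reward + gamma * V[next_state]
                V[state] += alpha * (td_target - V[state])
        return V

Because the target bootstraps from V(s'), value estimates propagate backwards through positions after every move rather than only at the end of a game.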
A simplified, highly flexible, commented and (hopefully) easy to understand implementation of self-play based reinforcement learning based on the AlphaGo Zero paper (Silver et al.). It is designed to be easy to adapt to any two-player turn-based adversarial game and any deep learning framework of your choice. A sample implementation has been provided for the game of Othello in PyTorch, Keras and TensorFlow. An accompanying tutorial can be found here. We also have implementations for GoBang and TicTacToe. To use a game of your choice, subclass the classes in Game.py and NeuralNet.py and implement their functions. Example implementations for Othello can be found in othello/OthelloGame.py and othello/{pytorch,keras,tensorflow}/NNet.py.
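An illustrative skeleton of what plugging in a new game might look like; the method names below are assumptions based on this description, so consult Game.py for the actual abstract interface:

    # Hypothetical example of subclassing the framework's game interface --
    # method names are assumptions; see Game.py for the real signatures.
    import numpy as np

    class ConnectFourGame:  # hypothetical example game, not part of the repository
        def getInitBoard(self):
            # Starting position as a numpy array (6 rows x 7 columns).
            return np.zeros((6, 7), dtype=np.int8)

        def getActionSize(self):
            # One action per playable column.
            return 7

        def getNextState(self, board, player, action):
            # Drop a piece for `player` into column `action` (assumes the column
            # is not full) and return (next_board, next_player).
            next_board = board.copy()
            row = int(np.max(np.where(next_board[:, action] == 0)[0]))
            next_board[row, action] = player
            return next_board, -player

        def getGameEnded(self, board, player):
            # 0 while the game is running, +1/-1 for a win/loss, a small value for a draw.
            return 0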
tensorflow pytorch keras gobang gomoku alpha-zero alphago-zero alphago reinforcement-learning self-play mcts monte-carlo-tree-search othello tf deep-learning alphazero

Reinforcement Learning with Python will help you master everything from basic reinforcement learning algorithms to advanced deep reinforcement learning algorithms. The book starts with an introduction to reinforcement learning, followed by OpenAI and TensorFlow. You will then explore various RL algorithms and concepts such as Markov Decision Processes, Monte Carlo methods, and dynamic programming, including value and policy iteration. This example-rich guide will introduce you to deep learning, covering various deep learning algorithms. You will then explore deep reinforcement learning in depth, which is a combination of deep learning and reinforcement learning. You will master various deep reinforcement learning algorithms such as DQN, Double DQN, Dueling DQN, DRQN, A3C, DDPG, TRPO, and PPO. You will also learn about recent advancements in reinforcement learning such as imagination-augmented agents, learning from human preferences, DQfD, HER and many more.
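As a concrete taste of the dynamic-programming material mentioned above, here is a generic value iteration sketch over a small tabular MDP (the transition-table format is an assumption for illustration, not an excerpt from the book):

    # Generic value iteration on a tabular MDP -- an illustrative sketch of the
    # dynamic-programming algorithms mentioned above, not code from the book.
    def value_iteration(states, actions, P, gamma=0.99, theta=1e-6):
        """P[s][a] is assumed to be a list of (prob, next_state, reward) tuples."""
        V = {s: 0.0 for s in states}
        while True:
            delta = 0.0
            for s in states:
                # Bellman optimality backup: best expected one-step return plus bootstrap.
                best = max(
                    sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                    for a in actions
                )
                delta = max(delta, abs(best - V[s]))
                V[s] = best
            if delta < theta:
                return V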
reinforcement-learning deep-reinforcement-learning sarsa q-learning policy-gradients deep-q-network deep-learning-algorithms asynchronous-advantage-actor-critic deep-deterministic-policy-gradient deep-recurrent-q-network double-dqn dueling-dqn hindsight-experience-replay drqn trpo ppo

This is a program for Oware and Reversi (Othello) for J2ME/cell phones and PalmOS (with the IBM WME VM). Oware (popular in Africa) is a Mancala game suitable for adults. Reversi (Othello) is included. Based on mobilesuite (on SourceForge) with modifications.
Othello is a classic strategy game, also known as Reversi. The objective is to finish the game with the greater number of pieces (discs) of your own color on the board.
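A tiny sketch of that win condition, assuming a board encoded as a 2-D array with +1 for one color, -1 for the other, and 0 for empty squares:

    # Decide the winner by counting discs -- illustrative only; the board
    # encoding (+1 / -1 / 0) is an assumption, not this project's format.
    def winner(board):
        black = sum(cell == 1 for row in board for cell in row)
        white = sum(cell == -1 for row in board for cell in row)
        if black > white:
            return "black"
        if white > black:
            return "white"
        return "draw"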
RLzoo is a collection of the most practical reinforcement learning algorithms, frameworks and applications. It is implemented with TensorFlow 2.0 and the neural network layer API of TensorLayer 2, to provide a hands-on, fast-developing approach for reinforcement learning practice and benchmarks. It supports basic toy tests such as OpenAI Gym and the DeepMind Control Suite with very simple configurations. Moreover, RLzoo supports the robot learning benchmark environment RLBench, based on the V-REP/PyRep simulator. Other large-scale distributed training frameworks for more realistic scenarios with Unity 3D, MuJoCo, Bullet Physics, etc., will be supported in the future. A Springer textbook is also provided; you can get the free PDF if your institution has a Springer license. Distinct from RLzoo, which targets simple usage through high-level APIs, we also have an RL tutorial that aims to make the reinforcement learning tutorial simple, transparent and straightforward with low-level APIs, as this not only benefits new learners of reinforcement learning but also makes it convenient for senior researchers to test their new ideas quickly.
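For context, the kind of OpenAI Gym toy test such frameworks wrap looks roughly like this (plain classic gym API only; these are not RLzoo calls):

    # A bare OpenAI Gym interaction loop (classic pre-0.26 gym API; newer
    # gym/gymnasium versions return extra values from reset() and step()).
    import gym

    env = gym.make("CartPole-v1")
    obs = env.reset()
    total_reward = 0.0
    done = False
    while not done:
        action = env.action_space.sample()   # a trained policy would go here
        obs, reward, done, info = env.step(action)
        total_reward += reward
    print("episode return:", total_reward)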
reinforcement-learning deep-learning tensorflow deep-reinforcement-learning tensorlayer reinforcement-learning-practices

This repository contains the code and PDFs of a series of blog posts called "Dissecting Reinforcement Learning", which I published on my blog mpatacchiola.io/blog. There are also links to resources that can be useful for a reinforcement learning practitioner. If you have good references which may be of interest, please send me a pull request and I will integrate them into the README. The source code is contained in src, with the names of the subfolders following the post number. In pdf there are the A3 documents of each post for offline reading. In images there are the raw SVG files containing the images used in each post.
reinforcement-learning deep-reinforcement-learning markov-chain temporal-differencing-learning sarsa q-learning actor-critic multi-armed-bandit inverted-pendulum mountain-car drone-landing dissecting-reinforcement-learning genetic-algorithm

This repository contains material related to Udacity's Deep Reinforcement Learning Nanodegree program. The tutorials lead you through implementing various algorithms in reinforcement learning. All of the code is in PyTorch (v0.4) and Python 3.
deep-reinforcement-learning reinforcement-learning reinforcement-learning-algorithms neural-networks pytorch pytorch-rl ddpg dqn ppo dynamic-programming cross-entropy hill-climbing ml-agents openai-gym-solutions openai-gym rl-algorithms

Coach is a Python reinforcement learning framework containing implementations of many state-of-the-art algorithms. It exposes a set of easy-to-use APIs for experimenting with new RL algorithms, and allows simple integration of new environments to solve. Basic RL components (algorithms, environments, neural network architectures, exploration policies, ...) are well decoupled, so that extending and reusing existing components is fairly painless.
reinforcement-learning deep-learning mxnet tensorflow openai-gym rl starcraft imitation-learning hierarchical-reinforcement-learning coach mujoco starcraft2 onnx roboschool carla starcraft2-ai distributed-reinforcement-learning

TensorForce is an open source reinforcement learning library focused on providing clear APIs, readability and modularisation to deploy reinforcement learning solutions in both research and practice. TensorForce is built on top of TensorFlow, is compatible with Python 2.7 and 3.5+, and supports multiple state inputs and multi-dimensional actions so that it can work with any type of simulation or application environment. TensorForce also aims to move all reinforcement learning logic into the TensorFlow graph, including control flow. This both reduces dependencies on the host language (Python), enabling portable computation graphs that can be used in other languages and contexts, and improves performance.
reinforcement-learning tensorflow deep-reinforcement-learning deep-q-network

This ensemble strategy is reimplemented in a Jupyter notebook at FinRL. Stock trading strategies play a critical role in investment. However, it is challenging to design a profitable strategy in a complex and dynamic stock market. In this paper, we propose a deep ensemble reinforcement learning scheme that automatically learns a stock trading strategy by maximizing investment return. We train a deep reinforcement learning agent and obtain an ensemble trading strategy using three actor-critic based algorithms: Proximal Policy Optimization (PPO), Advantage Actor Critic (A2C), and Deep Deterministic Policy Gradient (DDPG). The ensemble strategy inherits and integrates the best features of the three algorithms, thereby robustly adjusting to different market conditions. To avoid the large memory consumption of training networks with a continuous action space, we employ a load-on-demand approach for processing very large data. We test our algorithms on the 30 Dow Jones stocks, which have adequate liquidity. The performance of the trading agent with different reinforcement learning algorithms is evaluated and compared with both the Dow Jones Industrial Average index and the traditional min-variance portfolio allocation strategy. The proposed deep ensemble scheme is shown to outperform the three individual algorithms and the two baselines in terms of risk-adjusted return as measured by the Sharpe ratio.
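Since the strategies are compared by risk-adjusted return, here is a small sketch of an annualized Sharpe ratio and a validation-based choice among the three agents; this is one simple way such a selection could work, not necessarily the paper's exact procedure:

    # Sharpe ratio plus a simple "pick the best validation agent" rule --
    # an illustrative sketch of the ensemble idea, not the authors' code.
    import numpy as np

    def sharpe_ratio(daily_returns, risk_free=0.0, periods_per_year=252):
        excess = np.asarray(daily_returns) - risk_free / periods_per_year
        return np.sqrt(periods_per_year) * excess.mean() / (excess.std() + 1e-12)

    def pick_agent(validation_returns):
        """validation_returns: dict mapping an agent name ('ppo', 'a2c', 'ddpg')
        to its daily returns on the validation window."""
        return max(validation_returns, key=lambda k: sharpe_ratio(validation_returns[k]))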
deep-reinforcement-learning openai-gym sharpe-ratio ddpg stock-trading ppo a2c-algorithm ensemble-strategy stock-trading-strategy automated-stock-trading

Coach is a Python reinforcement learning research framework containing implementations of many state-of-the-art algorithms. It exposes a set of easy-to-use APIs for experimenting with new RL algorithms, and allows simple integration of new environments to solve. Basic RL components (algorithms, environments, neural network architectures, exploration policies, ...) are well decoupled, so that extending and reusing existing components is fairly painless.
coach openai-gym reinforcement-learning tensorflow rl carla imitation-learning mujoco roboschool deep-learning hierarchical-reinforcement-learning starcraft starcraft2 starcraft2-ai

MAgent is a research platform for many-agent reinforcement learning. Unlike previous research platforms that focus on reinforcement learning with a single agent or only a few agents, MAgent aims to support reinforcement learning research that scales up from hundreds to millions of agents. MAgent supports Linux and OS X running Python 2.7 or Python 3. We make no assumptions about the structure of your agents: you can write rule-based algorithms or use deep learning frameworks.
reinforcement-learning multi-agent deep-learning

This project is built for people who are learning and researching the latest deep reinforcement learning methods. Recommendations and suggestions are welcome.
deep-reinforcement-learning reinforcement-learning game reward artificial-general-intelligence exploration-exploitation hierarchical-reinforcement-learning distributional multiagent-reinforcement-learning planning theoretical-computer-science inverse-rl icml aamas ijcai aaai aistats uai agi

In this tutorial, we'll be creating artificially intelligent agents that learn from interacting with their environment, gathering experience, and a system of rewards, using deep reinforcement learning (deep RL). Using end-to-end neural networks that translate raw pixels into actions, RL-trained agents are capable of exhibiting intuitive behaviors and performing complex tasks. Ultimately, our aim is to train reinforcement learning agents in 3D virtual robotic simulation and transfer the agents to real-world robots. Reinforcement learners choose the best action for the agent to perform based on the environmental state (like camera inputs) and rewards that provide feedback about its performance. Given a policy, or task - like obtaining the reward - reinforcement learning can learn to behave optimally in its environment.
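A minimal sketch of the "choose the best action given the state" step, using epsilon-greedy selection over learned action values (the value estimates themselves are assumed to come from some trained network; this is a generic illustration, not this project's code):

    # Epsilon-greedy action selection over learned action values.
    import numpy as np

    def select_action(q_values, epsilon=0.05, rng=None):
        """q_values: 1-D array of the agent's estimated value for each action."""
        rng = np.random.default_rng() if rng is None else rng
        if rng.random() < epsilon:
            return int(rng.integers(len(q_values)))   # explore: random action
        return int(np.argmax(q_values))               # exploit: best-valued action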
NOTICE: Please use the next version, SLM-Lab. An experimentation framework for Reinforcement Learning using OpenAI Gym, Tensorflow, and Keras.
keras tensorflow openai experiment policy-gradient actor-critic ddpg deep-reinforcement-learning reinforcement-learning gym lab reinforcement learning

This repository contains the code for our ICRA 2019 paper. For more details, please refer to the paper Crowd-Robot Interaction: Crowd-aware Robot Navigation with Attention-based Deep Reinforcement Learning. Moving in an effective and socially compliant manner is an essential yet challenging task for robots operating in crowded spaces. Recent works have shown the power of deep reinforcement learning techniques for learning socially cooperative policies. However, their cooperation ability deteriorates as the crowd grows, since they typically relax the problem to a one-way Human-Robot interaction problem. In this work, we want to go beyond first-order Human-Robot interaction and more explicitly model Crowd-Robot Interaction (CRI). We propose to (i) rethink pairwise interactions with a self-attention mechanism, and (ii) jointly model Human-Robot as well as Human-Human interactions in the deep reinforcement learning framework. Our model captures the Human-Human interactions occurring in dense crowds that indirectly affect the robot's anticipation capability. Our proposed attentive pooling mechanism learns the collective importance of neighboring humans with respect to their future states. Various experiments demonstrate that our model can anticipate human dynamics and navigate in crowds with time efficiency, outperforming state-of-the-art methods.
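A rough sketch of the attentive pooling idea described above: score each neighboring human's interaction features, softmax the scores, and use the weighted sum as the crowd representation (the layer sizes and scoring network are illustrative assumptions, not the paper's exact architecture):

    # Softmax attention pooling over per-human interaction features --
    # an illustrative sketch of the mechanism described above, not the authors' code.
    import torch
    import torch.nn as nn

    class AttentionPool(nn.Module):
        def __init__(self, feature_dim):
            super().__init__()
            # Small MLP that scores how important each neighboring human is.
            self.score = nn.Sequential(nn.Linear(feature_dim, 64), nn.ReLU(), nn.Linear(64, 1))

        def forward(self, human_features):          # (num_humans, feature_dim)
            scores = self.score(human_features)     # (num_humans, 1)
            weights = torch.softmax(scores, dim=0)  # relative importance of each neighbor
            return (weights * human_features).sum(dim=0)  # pooled crowd representation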
reinforcement-learning collision-avoidance crowd-navigation

keras-rl implements some state-of-the-art deep reinforcement learning algorithms in Python and seamlessly integrates with the deep learning library Keras. Just like Keras, it works with either Theano or TensorFlow, which means that you can train your algorithm efficiently on either CPU or GPU. Furthermore, keras-rl works with OpenAI Gym out of the box, so evaluating and playing around with different algorithms is easy. Of course, you can extend keras-rl according to your own needs. You can use built-in Keras callbacks and metrics or define your own. It is even easy to implement your own environments and algorithms by simply extending some simple abstract classes. In a nutshell: keras-rl makes it really easy to run state-of-the-art deep reinforcement learning algorithms, uses Keras (and thus Theano or TensorFlow), and was built with OpenAI Gym in mind.
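A typical usage sketch, roughly following the pattern of keras-rl's published CartPole example; exact class names and arguments may differ between versions:

    # DQN on CartPole with keras-rl -- a sketch in the style of the library's
    # examples; argument names may vary across keras / keras-rl versions.
    import gym
    from keras.models import Sequential
    from keras.layers import Dense, Flatten
    from keras.optimizers import Adam
    from rl.agents.dqn import DQNAgent
    from rl.memory import SequentialMemory
    from rl.policy import BoltzmannQPolicy

    env = gym.make("CartPole-v1")
    nb_actions = env.action_space.n

    # Small Q-network mapping observations to one value per action.
    model = Sequential([
        Flatten(input_shape=(1,) + env.observation_space.shape),
        Dense(16, activation="relu"),
        Dense(nb_actions, activation="linear"),
    ])

    dqn = DQNAgent(model=model, nb_actions=nb_actions,
                   memory=SequentialMemory(limit=50000, window_length=1),
                   policy=BoltzmannQPolicy(), nb_steps_warmup=10)
    dqn.compile(Adam(lr=1e-3), metrics=["mae"])
    dqn.fit(env, nb_steps=50000, verbose=1)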
keras tensorflow theano reinforcement-learning neural-networks machine-learning

ChainerRL is a deep reinforcement learning library that implements various state-of-the-art deep reinforcement learning algorithms in Python using Chainer, a flexible deep learning framework. ChainerRL is tested with Python 2.7+ and 3.5.1+. For other requirements, see requirements.txt.
chainer reinforcement-learning deep-learning machine-learning dqn actor-critic