
FlappyBirdRL - Flappy Bird hack using Reinforcement Learning

  •    Javascript

More details here. If you just opened the index.html page, you might see an error in the console about cross-origin requests. Right-click anywhere on the screen, click Inspect, and then look at the Console.
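The repo's own code is in JavaScript, but the core idea behind such a hack is tabular Q-learning over a discretized game state. A minimal sketch (the state encoding, reward of +1 per surviving frame, and hyperparameters are assumptions, not taken from the repo):

```python
import random
from collections import defaultdict

# Hypothetical discretized state: (horizontal distance to next pipe,
# vertical distance to the gap). Actions: 0 = do nothing, 1 = flap.
ALPHA, GAMMA, EPSILON = 0.7, 0.9, 0.1
Q = defaultdict(lambda: [0.0, 0.0])  # Q[state] -> [value of a=0, value of a=1]

def choose_action(state):
    if random.random() < EPSILON:                      # explore
        return random.randint(0, 1)
    return max((0, 1), key=lambda a: Q[state][a])      # exploit

def update(state, action, reward, next_state):
    # Standard Q-learning update: move Q(s, a) toward the bootstrapped target.
    target = reward + GAMMA * max(Q[next_state])
    Q[state][action] += ALPHA * (target - Q[state][action])

# One illustrative transition: the bird flapped and survived a frame (+1).
update((3, 0), 1, 1.0, (2, 0))
```

After enough transitions the greedy policy (flap whenever `Q[state][1] > Q[state][0]`) is what actually plays the game.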

flappy-rust - A Rust SDL2 clone of Flappy Gopher which is a clone of Flappy Bird

  •    Rust

Flappy Rust is a mostly complete clone of Flappy Gopher, which is itself a clone of the famous Flappy Bird game, developed in Rust with bindings for SDL2. I wrote it because I'm home all day after being laid off (along with 70% of the company) on Monday 4/3/2017, decided to write some code in Rust since it's been on my TODO list, and needed an outlet for my negative energy.

flappy-haskell - Flappy Bird Haskell Implementation

  •    Haskell

Flappy Bird implementation made with SDL2 and FRP (Yampa).

Flappy-Bird-Clone - The Coding Train's Flappy Bird Clone

  •    Javascript

This repository starts with the code from Coding Challenge #31 on YouTube. I am accepting pull requests for bug fixes, minor improvements to gameplay, and visual design. I do not want to make the code more complex, as the goal is to use this as a basis for a "neuro-evolution" tutorial with the "toy" neural network library.
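The "toy" network such a tutorial builds on is a tiny feed-forward net that maps a few game observations to a flap/no-flap decision. A sketch of that shape (the input features, layer sizes, and sigmoid activation are assumptions for illustration, not the repo's actual library):

```python
import math
import random

random.seed(1)

# Minimal feed-forward net in the spirit of a "toy" neural network library:
# 4 assumed inputs (bird y, bird velocity, gap top, gap bottom), one hidden
# layer, and a single sigmoid output read as "flap if > 0.5".
def make_net(n_in=4, n_hidden=6):
    return {
        "w1": [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)],
        "w2": [random.uniform(-1, 1) for _ in range(n_hidden)],
    }

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def decide(net, inputs):
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in net["w1"]]
    out = sigmoid(sum(w * h for w, h in zip(net["w2"], hidden)))
    return out > 0.5  # True = flap

bird = make_net()
flap = decide(bird, [0.5, -0.1, 0.4, 0.6])
```

In a neuro-evolution setting the weights are never trained by backpropagation; they are copied and mutated between generations, which is why the net can stay this simple.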

AI-Plays-FlappyBird - Using genetic algorithm and neural networks to teach AI to play flappy bird.

  •    Javascript

Using a genetic algorithm and neural networks to teach an AI to play Flappy Bird. Inspired by this project and this paper. The dashboard in the HTML page is inspired by this website. You can try this project on this page.
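The genetic-algorithm part of such a project follows a standard select-and-mutate loop. A sketch under stated assumptions: the genome is a flat list of network weights, the fitness function below is a placeholder for "distance flown before crashing", and the elite fraction and mutation scale are illustrative, not the repo's values:

```python
import random

random.seed(0)

GENOME_LEN, POP_SIZE, MUTATION_STD = 8, 30, 0.2

def fitness(genome):
    # Placeholder fitness: closer to all-ones is better. In the real game
    # this would be the score of one episode played with these weights.
    return -sum((g - 1.0) ** 2 for g in genome)

def mutate(genome):
    # Gaussian mutation of every gene.
    return [g + random.gauss(0, MUTATION_STD) for g in genome]

# Random initial population of "birds".
population = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for generation in range(40):
    population.sort(key=fitness, reverse=True)
    elite = population[: POP_SIZE // 5]  # keep the fittest 20% unchanged
    # Refill the population with mutated copies of elite individuals.
    population = elite + [mutate(random.choice(elite)) for _ in range(POP_SIZE - len(elite))]

best = max(population, key=fitness)
```

Because the elite survive unmutated, the best fitness never regresses between generations; crossover could be added, but mutation-only selection is often enough for small networks.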

flappy-es - Playing Flappy Bird using Evolution Strategies

  •    Python

After reading Evolution Strategies as a Scalable Alternative to Reinforcement Learning, I wanted to experiment with Evolution Strategies, and Flappy Bird has always been one of my favorites when it comes to game experiments: a simple yet challenging game. The model learns to play very well after 3000 epochs, though not completely flawlessly, and it rarely loses in difficult cases (a large height difference between two wall entrances). Training is pretty fast since there is no backpropagation, and it is not very costly in terms of memory since there is no need to record actions as in policy gradients.
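The "no backpropagation" property comes from how Evolution Strategies estimates a gradient: perturb the parameters with Gaussian noise, score each perturbation, and move in the reward-weighted average noise direction. A minimal sketch in the spirit of that paper (the quadratic reward is a stand-in for "score of one Flappy Bird episode"; population size, noise scale, and learning rate are illustrative):

```python
import random

random.seed(0)

N_PARAMS, POP, SIGMA, LR = 5, 50, 0.1, 0.05

def reward(params):
    # Placeholder for playing one episode with these weights; the optimum
    # here is the all-zeros vector.
    return -sum(p * p for p in params)

theta = [random.uniform(-1, 1) for _ in range(N_PARAMS)]

for step in range(200):
    noises, rewards = [], []
    for _ in range(POP):
        eps = [random.gauss(0, 1) for _ in range(N_PARAMS)]
        noises.append(eps)
        # Evaluate the perturbed parameters theta + SIGMA * eps.
        rewards.append(reward([t + SIGMA * e for t, e in zip(theta, eps)]))
    mean_r = sum(rewards) / POP
    # Gradient estimate: reward-weighted average of the noise directions.
    for i in range(N_PARAMS):
        grad = sum((r - mean_r) * eps[i] for r, eps in zip(rewards, noises)) / (POP * SIGMA)
        theta[i] += LR * grad
```

Note that only episode scores are needed, never per-action gradients, which is exactly why nothing has to be recorded during an episode.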