2 minutes
Training Neural Networks to play Icy Tower using Reinforcement Learning
Introduction
My fascination with Machine Learning and its capabilities was sparked by famous breakthrough projects in which machines learned to play various (video) games. Watching AlphaGo defeat Lee Sedol was the first time I saw how algorithms could master a complex game with endless possibilities, like Go or Chess, and even learn something resembling intuition. Projects like DeepMind’s AlphaStar pushed the boundaries of AI even further by teaching machines to strategize, adapt, and even collaborate. These breakthroughs inspired me to make my own attempt at using ML to train neural networks to play games.
This post is a short breakdown of how I trained neural networks to play the game Icy Tower in real time, using reinforcement learning and neuro-evolution with a genetic algorithm called NEAT. If you like, also watch my YouTube video explaining this approach. Even though I was still a novice at machine learning when I started this project in 2021, and the code is a mess, I am still very proud of what I achieved. What made me even more proud at the time was that Johan Peitz, the creator of the legendary game Icy Tower, acknowledged my project.
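To give a rough idea of what neuro-evolution means here: instead of training one network with gradient descent, a whole population of networks plays the game, the best performers are kept, and mutated copies of them form the next generation. The sketch below is a deliberately simplified, self-contained illustration of that evolve-select-mutate loop in plain Python. It evolves a fixed-length weight vector against a toy fitness function; the actual project used the NEAT algorithm (which also evolves the network topology) and used the height reached in an Icy Tower run as fitness. All names and parameters here (`evaluate`, `evolve`, the population size, the toy target) are illustrative assumptions, not the project's real code.

```python
import random

random.seed(0)  # reproducible for this sketch

def evaluate(weights):
    # Placeholder fitness function. In the real project, fitness would be
    # the score/height a network reaches in an Icy Tower run; here we use
    # a toy target so the sketch runs on its own.
    target = [0.5, -0.3, 0.8]
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

def evolve(pop_size=50, generations=30, mutation_scale=0.1):
    # Random initial population of weight vectors (fixed topology here,
    # unlike NEAT, which evolves the structure of the network as well).
    population = [[random.uniform(-1, 1) for _ in range(3)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the top quarter of the population.
        population.sort(key=evaluate, reverse=True)
        survivors = population[: pop_size // 4]
        # Mutation: refill the population with noisy copies of survivors.
        children = []
        while len(survivors) + len(children) < pop_size:
            parent = random.choice(survivors)
            children.append([w + random.gauss(0, mutation_scale)
                             for w in parent])
        population = survivors + children
    return max(population, key=evaluate)

best = evolve()
```

In the real-time setup, each generation's networks played the game simultaneously, so one generation took only as long as a single run.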
Amazing project from @killerplauze1! Training an AI to play Icy Tower. Even gamers will be obsolete in the future.https://t.co/x9TcmyJIuC
— Johan Peitz - picoCAD is on Steam! (@johanpeitz) July 7, 2021
Breakdown
Coming soon…
Reinforcement Learning Deep Learning Genetic Algorithm Real-Time Streaming Visualization Python Neuro-Evolution
222 Words
2021-07-13 09:50