
A Basic AI for Hoplite


Today's post is an example of a project that did not go so well.

I am a huge fan of CodeBullet, a developer and YouTube creator who applies AI techniques (mostly evolution-based) to solve some well-known games, such as Tetris, Hill Climb Racing, Snake, Piano Tiles, Flappy Bird, and many more. A good demonstration of what he does is the Creature Creator, a web application for designing a creature and watching it learn to walk using a small neural network within a genetic framework. I was greatly impressed by all this work, and tried to come up with my own AI solution for solving a game.

The first question was: which game to choose? And here was my mistake. I chose Hoplite, a 2013 Android game developed by Magma Fortress, because it had been my main source of entertainment on Android for a while. The game looked fairly simple: the player acts as a Greek soldier, a.k.a. a hoplite, placed on a randomly generated hexagonal map. One tile on this map is a staircase that leads to the level below. The player must reach that tile while avoiding getting hurt by the enemies present on the map. Levels become harder and harder the further down you go. In the free version, the game ends at depth 16, where the player has to pick up a fleece and bring it back home through a portal.

The game is very fun to play, so you had better try it yourself first! It comes in a free and a premium version.

Original Hoplite trailer on the Google Play Store (YouTube)

Android Interface

I started by looking at what other people had done on GitHub. I stumbled upon this repository (ebensh/hoplite_ai), which uses monkeyrunner, a tool provided with Android Studio for controlling any Android device connected to the computer (whether emulated or plugged in via USB). That tool looked very promising, and I started implementing a basic interface where I could call methods such as touch(x, y) that would actually trigger actions on the Android device.

More precisely, I have two scripts for that:

Getting it to work was a little painful because of weird errors, but I finally got it working after some intensive StackOverflow crawling. I still get errors when the program terminates abruptly and the communication with the device does not end properly. Fortunately, a classic turn-it-off-and-on-again intervention solves it.

So now, I can programmatically touch the screen and take screenshots of it. Yay!
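
For illustration, here is a minimal sketch of what such an interface can look like with monkeyrunner. The coordinates and file name are placeholders; this is not the actual code from the repository, just the kind of calls involved.

# Runs inside monkeyrunner's Jython interpreter:
# <android-sdk>/tools/bin/monkeyrunner interface_sketch.py
from com.android.monkeyrunner import MonkeyRunner, MonkeyDevice

# Wait for an Android device (emulated or plugged in via USB) to connect
device = MonkeyRunner.waitForConnection()

def touch(x, y):
    # Simulate a single tap at pixel coordinates (x, y)
    device.touch(x, y, MonkeyDevice.DOWN_AND_UP)

def screenshot(path):
    # Capture the current screen and save it as a PNG file
    device.takeSnapshot().writeToFile(path, "png")

touch(540, 960)           # tap the middle of a 1080x1920 screen
screenshot("screen.png")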

Global Architecture

Giving My Script Some Eyes

Now that I can retrieve screenshots from the game, I need a way to parse them into an internal data structure that the AI can interpret:

I tried various techniques for that, such as minimizing the squared difference between parts of the screen and templates, or using a small neural network for classification. Yet all those methods were computationally greedy, and I felt I could find something better.

First, I extract the part of the screenshot that needs classification. For instance, in the case of a terrain tile (whose position is always the same), I look at a square box around that tile. Graphics in the game are clean pixel textures, and even the colors are constant from one screenshot to another. Therefore, I only need a few pixels (sometimes one is enough) to classify that tile into one of the templates.

I wrote a small script that identifies, for each template, one pixel that differs from all the other templates. I then put this pixel value into a large if clause that identifies the image. See this extract from classifiers.py:

if is_close(part[45, 40], [0.937255, 0.541176, 0.192157]):
    return hoplite.game.terrain.SurfaceElement.FOOTMAN
if is_close(part[15, 26], [0.611765, 0.890196, 0.352941]):
    return hoplite.game.terrain.SurfaceElement.ARCHER
if is_close(part[37, 37], [0.741176, 0.141176, 0.192157]):
    return hoplite.game.terrain.SurfaceElement.PLAYER
if is_close(part[20, 23], [1.000000, 0.764706, 0.258824]):
    return hoplite.game.terrain.SurfaceElement.BOMB
if is_close(part[26, 26], [0.4509804, 0.27058825, 0.09411765]):
    return hoplite.game.terrain.SurfaceElement.SPEAR
if is_close(part[26, 26], [0.9372549, 0.5411765, 0.19215687]):
    return hoplite.game.terrain.SurfaceElement.SPEAR
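
The is_close helper is not shown in this extract; here is a minimal sketch of what it could look like, assuming part is a NumPy array of RGB values normalized to [0, 1] (the tolerance value is an arbitrary assumption, not necessarily the one used in the repository).

import numpy as np

def is_close(pixel, reference, tolerance=0.02):
    # Return True if every channel of the normalized RGB pixel lies
    # within `tolerance` of the reference color
    return bool(np.all(np.abs(np.asarray(pixel) - np.asarray(reference)) < tolerance))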

This way, I get 100% accuracy on detecting what's happening on screen, in about 10 ms per screenshot.

Implementing The Game Mechanics

Here is where things got complicated. The actual game mechanics are not publicly documented, and many details necessary for making accurate predictions are hard to figure out. I wrote down what I could in RULES.md. I used various sources to supplement my own experience:

For instance, consider the rules applied for knocking back enemies after a bash: one does not simply guess them from a few examples. Here are the two major difficulties I encountered when investigating those rules:

  1. Some rules are not deterministic, which makes generalizing them harder.
  2. Controlling the experimental settings is very hard. Using emulator state saves helps, but getting the perfect setup for an experiment is still rare.

Because of those difficulties, I gave up on predicting enemy movement. My best guess was that enemies try to get within range of the player while still getting closer to them, but it seems that in some situations slight behavior changes occur, making the overall prediction unreliable.

The Brain

With my current implementation of the game rules, I am able to predict the exact state I will be in after the player's move, but not where the enemies will be after theirs. Therefore, I cannot perform the kind of deep exploration a chess engine would (which was my first intention). Instead, the AI's brain only enumerates the player's possible moves and selects the best one according to a very short-term objective. That is unfortunate, but it kind of works: I managed to finish the first 16 levels without being hit using this simple mechanism.

To evaluate how good a move is, just as in chess, I have a function that evaluates the current state of the game. This function relies on numerical features, manually weighted and summed into a final score: the greater the score, the better the situation looks for the player. The brain enumerates the possible moves, predicts the state that would result from each move, evaluates that state, and thereby ranks the possible moves.

Feature | Range | Scaling | Weight
Whether the player is alive or dead | 0 or 1 | 1 | -100
Player's health | 0 to 8 | 0.125 | 25
Player's energy | 0 to 100+ | 0.01 | 1
Player's cooldown | 0 to 4 | 0.25 | -0.5
Distance (A*) to the stairs | 0 to 9+ | 0.11 | -0.5
Distance (A*) to the portal | 0 to 9+ | 0.11 | -1
Distance (A*) to the fleece | 0 to 9+ | 0.11 | -2
Distance (A*) to the altar | 0 to 9+ | 0.11 | -1
Distance (A*) to the spear (if thrown) | 0 to 9+ | 0.11 | -2
Enemies danger | 0 to 25+ | 0.04 | -6

where the enemies danger is the sum over all the enemies on the map of:

Enemy type | Danger
Footman | 1
Demolitionist | 2
Archer | 3
Wizard | 4

After scaling all the features so they usually fall within a [0, 1] interval, I set the weights manually. This gave me a way to rank all possible moves: simply walking one tile towards the stairs rewards less than killing an archer, which rewards less than avoiding a hit. Of course, such weights should be fine-tuned through an actual optimization method, by playing the game a lot. But since I did not manage to recreate the game in its entirety, having only a partial prediction tool would make converging towards the golden weights incredibly slow, and even slower because of the real-time speed of the game.
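
To make this concrete, here is a minimal sketch of how such a weighted-sum evaluation and one-ply move ranking could be written. The state object and its methods (features(), possible_moves(), apply()) are hypothetical placeholders, not the actual API of the repository.

# Hand-picked weights matching the table above (applied to scaled features)
WEIGHTS = {
    "dead": -100, "health": 25, "energy": 1, "cooldown": -0.5,
    "stairs": -0.5, "portal": -1, "fleece": -2, "altar": -1,
    "spear": -2, "danger": -6,
}

# Danger contributed by each enemy type, as in the table above
ENEMY_DANGER = {"footman": 1, "demolitionist": 2, "archer": 3, "wizard": 4}

def danger_feature(enemies):
    # The "danger" feature: sum of enemy danger values, scaled by 0.04
    return 0.04 * sum(ENEMY_DANGER[enemy] for enemy in enemies)

def evaluate(state):
    # Weighted sum of the scaled features; the greater, the better for the player
    features = state.features()  # e.g. {"dead": 0, "health": 0.375, ..., "danger": 0.16}
    return sum(weight * features[name] for name, weight in WEIGHTS.items())

def best_move(state):
    # One-ply search: predict the state resulting from each possible player move,
    # evaluate it, and keep the highest-scoring move
    return max(state.possible_moves(), key=lambda move: evaluate(state.apply(move)))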

Results

That is all there is to it. The code is available on GitHub at ychalier/hoplite. I recorded some runs throughout the development process, showing how the AI got better over time. Here is the playlist:

Run Recordings Playlist (YouTube)