Do Neural Networks Dream of Pokémon?

One thing I had wanted to try with TensorFlow (and neural networks) is some simple numerical processing, so I decided to put it to the test.

I was looking for a subject when I noticed something in Pokémon Sun & Moon. The games have a Festival Plaza feature where players can play a set of minigames. In one of them, the player tries to pick the “most effective” element against a given element or pair of elements. For example, given Bug and Grass, the answer is Fire, which is Super Effective and earns you points. There are four classes of effectiveness; from least to most effective, they are No Effect, Not Very Effective, Normal, and Super Effective.

There is a truth table for how these matchups are laid out, but I wanted to see whether a learn-from-mistakes approach could predict the outcome of this minigame.
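For context, that truth table boils down to a multiplier lookup per type pair. Here is a minimal sketch in Python with only a handful of hand-picked entries; the real chart covers all 18 types, and the structure here is my own illustration, not the game's internal representation:

```python
# A small excerpt of the type-effectiveness chart as a lookup table.
# Keys are (attacking_type, defending_type); values are damage multipliers.
CHART = {
    ("Fire", "Grass"): 2.0,    # Super Effective
    ("Fire", "Bug"): 2.0,      # Super Effective
    ("Water", "Fire"): 2.0,    # Super Effective
    ("Fire", "Water"): 0.5,    # Not Very Effective
    ("Normal", "Ghost"): 0.0,  # No Effect
}

def effectiveness(attack, defenders):
    """Combined multiplier of one attacking type against one or more
    defending types; unlisted pairs default to Normal (1x)."""
    total = 1.0
    for d in defenders:
        total *= CHART.get((attack, d), 1.0)
    return total

def label(multiplier):
    """Map a combined multiplier onto the minigame's four classes."""
    if multiplier == 0.0:
        return "No Effect"
    if multiplier < 1.0:
        return "Not Very Effective"
    if multiplier == 1.0:
        return "Normal"
    return "Super Effective"
```

With this table, Fire against the Bug/Grass pair from the example above multiplies out to 4.0, which `label` classifies as Super Effective.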

Using a dozen festival tickets, I collected around 80 log entries. The next quest was to figure out how this data could be normalized for testing. I am not yet very familiar with how TensorFlow works in this regard, and with so much of the material out there focused on image recognition, it was surprisingly hard to find information about processing numerical data. Then I stumbled upon an existing effort by mtitg, which uses TensorFlow to try to predict survivors of the Titanic disaster from multiple parameters.
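The post doesn't spell out the exact format of those ~80 rows, but a log like this naturally lands in a small CSV. A hypothetical layout (the column names are my own invention), parsed with the standard library:

```python
import csv
import io

# Hypothetical layout of the collected minigame log: the one or two
# elements shown, the element I answered with, and the resulting class.
raw = """element1,element2,answer,outcome
Bug,Grass,Fire,Super Effective
Fire,,Water,Super Effective
Ghost,,Normal,No Effect
"""

# csv.DictReader yields one dict per row, keyed by the header line.
rows = list(csv.DictReader(io.StringIO(raw)))
```

Each row then comes back as a dict, e.g. `rows[0]["answer"]` is `"Fire"`, and a missing second element is simply an empty string.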

mtitg’s example uses many parameters (eight, to be exact), but in my case I will be using three. mtitg’s code is a great starting point because it covers how textual data can be converted into numerical values for TensorFlow to process.
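The core of that conversion is mapping each category name to an index and then to a one-hot vector. A minimal sketch in the same spirit, using a made-up subset of element names and my own three-column layout:

```python
# Turn categorical element names into numeric feature vectors.
TYPES = ["Bug", "Fire", "Grass", "Water"]  # illustrative subset, not all 18
TYPE_INDEX = {name: i for i, name in enumerate(TYPES)}

def one_hot(name):
    """One-hot vector for a single element name."""
    vec = [0.0] * len(TYPES)
    vec[TYPE_INDEX[name]] = 1.0
    return vec

def encode_row(answer, element1, element2=None):
    """Concatenate one-hot encodings of the three input columns.
    A missing second element becomes an all-zero block."""
    blank = [0.0] * len(TYPES)
    e2 = one_hot(element2) if element2 else blank
    return one_hot(answer) + one_hot(element1) + e2
```

For example, `encode_row("Fire", "Bug", "Grass")` yields a 12-dimensional vector with exactly three components set to 1.0, one per column.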

I adapted mtitg’s code to work with my test data.

Long story short, with my limited testing on the 80 entries, the best accuracy I could get was about 50%. I think the reasons are as follows:

  • Not enough variety in the data; since it was collected manually, and considering the fair complexity of Pokémon’s elements, the data is certainly not an exhaustive set.
  • Too many “Normal” outcomes. Normal is the default for most matchups that are not specially affected by an element; for example, a Normal-type attack against Fire is just Normal. The data therefore skews toward Normal, providing far more examples of that outcome than of the other three classes.
  • Perhaps a neural network is not a good approach for this problem. A simple logistic regression and/or a Bayesian algorithm might work better.
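To make the last point concrete, here is a toy multinomial logistic regression (softmax plus gradient descent) in plain Python, the simpler alternative mentioned in the list. The training data is entirely made up for illustration; this is a sketch of the technique, not my actual experiment:

```python
import math

CLASSES = 4   # No Effect, Not Very Effective, Normal, Super Effective
FEATURES = 3  # matching the three input columns

def softmax(z):
    """Numerically stable softmax over a list of scores."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def predict(W, b, x):
    """Class probabilities for one feature vector x."""
    return softmax([sum(W[c][j] * x[j] for j in range(FEATURES)) + b[c]
                    for c in range(CLASSES)])

# Tiny synthetic training set: (feature vector, class index).
data = [([1, 0, 0], 2), ([0, 1, 0], 3), ([0, 0, 1], 0),
        ([1, 1, 0], 3), ([1, 0, 1], 2)]

W = [[0.0] * FEATURES for _ in range(CLASSES)]
b = [0.0] * CLASSES
lr = 0.5
for _ in range(500):  # stochastic gradient descent on cross-entropy loss
    for x, y in data:
        p = predict(W, b, x)
        for c in range(CLASSES):
            g = p[c] - (1.0 if c == y else 0.0)  # dLoss/dScore for class c
            for j in range(FEATURES):
                W[c][j] -= lr * g * x[j]
            b[c] -= lr * g

# Fraction of training examples whose highest-probability class is correct.
acc = sum(max(range(CLASSES), key=lambda c: predict(W, b, x)[c]) == y
          for x, y in data) / len(data)
```

A model this small also makes the class-imbalance problem easy to see: if most rows are labeled Normal, a classifier can score well simply by always predicting Normal, so raw accuracy alone is a weak measure here.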

Conclusion

As I wrote earlier, since the actual element pairing data already exists, there is no practical reason for this attempt; it’s really for learning and fun, after all.

With further optimization and research, TensorFlow and machine learning methods have great potential for making sense of data and for extracting added value from datasets we may already have. Machine vision and self-driving cars are very cool applications of machine learning, but we shouldn’t forget to apply this technology on the personal computers we already own.