About

In Super Auto Pets, players build teams of cute emoji animals, then send them into battle (think Pokémon, not dog-fighting).

Gameplay starts in a shop, where you're given a certain amount of gold and can buy either friends (animals) or food (bonuses).

Store

Screenshots courtesy of Team Wood Games

After building a team, you send it into battle, which proceeds automatically:

Battle

The emoji-laden art style hides surprising strategic depth and a constantly-evolving metagame.

If you've never played it, I encourage you to try a round or two: you can play for free on Itch, without even creating an account.

(this writeup will make more sense if you're familiar with the basic mechanics)

After playing many rounds over the course of a few months, I decided to build a game simulator and do some science. The state space explodes quickly, so I limited myself to asking questions about the very first turn.

This writeup explains how to do deterministic exploration of the state space, then presents my results (including the best possible team for the first turn).

Generating every possible team

Available actions in the shop

The first turn (like every turn) starts in the shop, where you're given 10 gold and a set of things you could buy. Here's what that looks like:

First turn

You have a few choices: you can buy animals, buy food, sell animals from your team, or reroll the shop for a new set of options.

You can take actions until you run out of gold, or choose to go battle at any point (even with an empty team, which is not recommended!).

Every animal has two statistics, health (❤️) and attack (⚔️), which determine how it performs in battle. You can boost these statistics by buying food: for example, an apple 🍎 gives +1 to each stat.

In addition, many animals have actions that occur when they are bought or sold.

As one example, an otter will give a random friend on your team [+1, +1] when it is bought. This is where things get tricky, because it's non-deterministic: you can end up with a different team depending on which friend gets the bonus!

Exploring a non-deterministic space

If you want to fully explore the state space, you need to either run many random simulations (hoping to eventually hit every possible outcome), or search the space deterministically.

The former is much easier to implement. You can imagine a trait that lets you roll a die within a particular range:

pub trait Dice {
    fn roll(&mut self, range: std::ops::Range<usize>) -> usize;
}

Then, you can implement this trait for anything that implements rand::Rng:

impl<R: rand::Rng> Dice for R {
    fn roll(&mut self, range: std::ops::Range<usize>) -> usize {
        rand::Rng::gen_range(self, range)
    }
}

Here's a very simple game which uses this Dice trait:

fn play<R: Dice>(r: &mut R) {
    let a = r.roll(0..6);
    println!("Roll: {}", a);
    if a == 5 {
        let b = r.roll(0..6);
        println!("   Bonus roll: {}", b);
    }
}

Finally, we can play this game a few times and see what we get:

fn main() {
    let mut rng = rand::thread_rng();
    for _ in 0..10 {
        play(&mut rng)
    }
}
Roll: 3
Roll: 4
Roll: 1
Roll: 2
Roll: 0
Roll: 0
Roll: 0
Roll: 4
Roll: 5
   Bonus roll: 5
Roll: 2

(Playground link)

This play function is very easy to write, but may need many iterations to fully explore the state space. Here's a modification that records how many tries it takes; I see something like 73 iterations to find all 11 possible combinations of dice rolls.
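A minimal sketch of that bookkeeping, reusing the Dice trait and the blanket rand::Rng impl from above (play_recorded is a hypothetical variant of play that returns its rolls instead of printing them):

use std::collections::HashSet;

// Hypothetical variant of `play` that returns its rolls instead of printing
fn play_recorded<R: Dice>(r: &mut R) -> Vec<usize> {
    let a = r.roll(0..6);
    let mut rolls = vec![a];
    if a == 5 {
        rolls.push(r.roll(0..6));
    }
    rolls
}

fn main() {
    let mut rng = rand::thread_rng();
    let mut seen = HashSet::new();
    let mut iterations = 0;
    // Keep playing until every one of the 11 distinct outcomes has appeared
    while seen.len() < 11 {
        seen.insert(play_recorded(&mut rng));
        iterations += 1;
    }
    println!("Saw all 11 outcomes after {} iterations", iterations);
}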

Searching the state space deterministically is much harder to write than our straightforward play function: you need to track your current state, generate all possible new states, explore them one by one...

(I'd encourage you to try this and see how it looks for this simple function)
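If you'd rather see one possible shape, here's a sketch for this toy game (my own, and certainly not the only way to structure it): each pending state is the list of rolls made so far, and every roll site has to fan out by hand.

fn main() {
    // Each pending state is the list of rolls made so far
    let mut pending: Vec<Vec<usize>> = vec![vec![]];
    let mut complete: Vec<Vec<usize>> = Vec::new();

    while let Some(rolls) = pending.pop() {
        if rolls.is_empty() {
            // No rolls yet: fan out over every value of the first roll
            for a in 0..6 {
                pending.push(vec![a]);
            }
        } else if rolls == [5] {
            // A first roll of 5 triggers the bonus roll, so fan out again
            for b in 0..6 {
                pending.push(vec![5, b]);
            }
        } else {
            // Anything else is a finished game
            complete.push(rolls);
        }
    }
    println!("{} possible games", complete.len()); // prints 11
}

Even for a two-roll game, the branching structure leaks into the search code, which is exactly what I wanted to avoid.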

To deterministically explore the state space while preserving easy-to-read code, I came up with an interesting idea: what if we store the state of the "random number generator", then proceed deterministically through the possible random numbers?

For example, if the game asks us for a random number in the range 0..6, we'd generate 0 on the first run, 1 on the second, and so on.

Here's what that looks like:

#[derive(Debug)]
pub struct DeterministicDice {
    initialized: bool,
    index: usize,
    // Each entry records one roll in the current sequence: its current
    // value, and the range it was requested from.
    data: Vec<(usize, std::ops::Range<usize>)>,
}

impl DeterministicDice {
    pub fn new() -> Self {
        Self {
            initialized: false,
            index: 0,
            data: vec![],
        }
    }
    /// Advances to the next combination of rolls, returning false once
    /// every combination has been explored.
    pub fn next(&mut self) -> bool {
        if !self.initialized {
            self.initialized = true;
            true
        } else {
            // Increment the most recent roll, carrying (by popping it off)
            // whenever a roll would exceed its recorded range.
            while let Some((mut v, r)) = self.data.pop() {
                v += 1;
                if v >= r.end {
                    continue;
                } else {
                    self.data.push((v, r));
                    break;
                }
            }
            self.index = 0;
            !self.data.is_empty()
        }
    }
}
impl Dice for DeterministicDice {
    fn roll(&mut self, range: std::ops::Range<usize>) -> usize {
        let out = if let Some((v, r)) = self.data.get_mut(self.index) {
            // Replaying a roll we've already recorded: it must be requested
            // with the same range, and its value must still be in bounds.
            assert!(*r == range);
            assert!(range.contains(v));
            *v
        } else {
            // A brand-new roll: record it, starting at the lowest value.
            self.data.push((range.start, range.clone()));
            range.start
        };
        self.index += 1;
        out
    }
}

The DeterministicDice::next() function advances to the next state, or returns false if we've fully explored the space.

Using the DeterministicDice instead of thread_rng(), here's our new main:

fn main() {
    let mut rng = DeterministicDice::new();
    while rng.next() {
        play(&mut rng);
    }
}

(Playground link)

Now, the output generates all 11 possibilities in exactly 11 iterations:

Roll: 0
Roll: 1
Roll: 2
Roll: 3
Roll: 4
Roll: 5
   Bonus roll: 0
Roll: 5
   Bonus roll: 1
Roll: 5
   Bonus roll: 2
Roll: 5
   Bonus roll: 3
Roll: 5
   Bonus roll: 4
Roll: 5
   Bonus roll: 5

This is neat, because it means you can write simple code that makes "random" selections, but still explore the entire state space deterministically!

(I'm sure this idea exists elsewhere, but I haven't seen it before; send me prior art and I'll add it to this post!)

Exploring Shop Space

With this tool, we can explore the entirety of shop space and build every possible team. Like another recent writeup, this becomes an exercise in pruning and limiting possibilities.

For example, shops are always kept sorted. This has no impact on gameplay, but reduces the starting shop space from 1458 to a mere 330.
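As a rough sanity check on those numbers (my own arithmetic, assuming the first-turn shop rolls three animal slots from nine tier-1 species and one food slot from two options): there are 9^3 × 2 = 1458 ordered shops, but only C(9 + 3 - 1, 3) × 2 = 165 × 2 = 330 once the animal slots are sorted, since order within the shop doesn't matter.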

Our main loop randomly selects and applies one of the shop actions, or halts if the selected action is invalid. We wrap this in the DeterministicDice RNG to fully explore the available actions.
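Very roughly, a single expansion step looks something like this (a sketch only: the GameState trait and take_random_action method are placeholders, not the types in the actual code):

trait GameState: Clone {
    // Pick one shop action (and any random targets it needs) using the dice;
    // Err means the chosen action was invalid, which halts this branch.
    fn take_random_action(&mut self, dice: &mut dyn Dice) -> Result<(), ()>;
}

fn expand<S: GameState>(state: &S) -> Vec<S> {
    let mut next = Vec::new();
    let mut dice = DeterministicDice::new();
    // DeterministicDice walks us through every possible choice: which action
    // to take, plus any random targets involved in that action.
    while dice.next() {
        let mut s = state.clone();
        if s.take_random_action(&mut dice).is_ok() {
            next.push(s);
        }
    }
    next
}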

In between actions, the set of active shop states (i.e. the combination of shop animals, shop food, gold, and current team) is deduplicated.

Here's the number of active shops over time, along with the number of unique teams that we've created:

Teams

As you can see, we start with 330 shop states, i.e. every possible result of a sorted reroll. The number of active states balloons to 429K by action 4, then shrinks back down; by action 10, we have exhausted all possibilities.

This is a lot of state, but it's not ridiculous. Keeping it that low, however, is far from naive: there are a bunch of tricks involved (consider that simply rerolling for 1 gold, 10 times in a row, would explode the space to 330^10, or 15315789852644490000000000).

First, deduplication can be pretty aggressive: for example, if we've already seen a particular shop state and had more gold when we last saw it, then we can discard the current branch, since the previous exploration has strictly more possibilities.
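Sketched out (with a placeholder key type; in practice the key is the shop state minus its gold), the rule is just a map from shop state to the best gold seen so far:

use std::collections::HashMap;
use std::hash::Hash;

// `K` stands in for a hashable shop state (animals, food, team), with the
// gold split out so that it can be compared separately.
fn dedup_by_gold<K: Eq + Hash>(states: Vec<(K, u32)>) -> Vec<(K, u32)> {
    let mut best: HashMap<K, u32> = HashMap::new();
    for (key, gold) in states {
        let e = best.entry(key).or_insert(gold);
        if gold > *e {
            *e = gold;
        }
    }
    // Only the highest-gold copy of each shop survives; a copy with less
    // gold can only take a subset of the same actions, so it's redundant.
    best.into_iter().collect()
}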

The second big optimization is to strictly limit shop rerolling, which always creates 330 new branches. Since we're exploring every possible branch, there's no use in rerolling twice in a row: even if this particular branch doesn't contain a particular species in the shop, a different branch will have it.

Therefore, we should only reroll if the shop has empty slots.

(I also allow for rerolling if the shop has animals with non-standard stats, which can happen: selling a duck adds +1 health to every shop animal. This makes a tiny difference to the number of active shops, but still generates the same total number of teams, so I'm not sure if there's a situation where it matters.)

The last optimization is to also sort the team when deduplicating shop states. This is less obvious, because team order matters in battle. The trick is to store two separate deduplicated sets: one of shop states still being explored, and one of the teams we've built so far.

After each shop action, we insert every possible team permutation into the set of teams, but only store the sorted shop state in the set of next shops.
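In simplified form (using the itertools crate for permutations, pretending a team is just a Vec of animals, and ignoring the rest of the shop state, which is also part of the real key):

use itertools::Itertools;
use std::collections::HashSet;
use std::hash::Hash;

fn record<A: Clone + Ord + Hash>(
    team: &[A],
    teams: &mut HashSet<Vec<A>>, // every ordering, since order matters in battle
    shops: &mut HashSet<Vec<A>>, // sorted only, to keep the shop search small
) {
    for p in team.iter().cloned().permutations(team.len()) {
        teams.insert(p);
    }
    let mut sorted = team.to_vec();
    sorted.sort();
    shops.insert(sorted);
}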

The result is 3514 teams. We can discard 90 of them as obviously bad, because they contain fewer than 3 animals and no compensating bonus stats or equipment; this includes the empty team, which wins no battles (although it can tie if facing another empty team!).

This leaves us with 3424 teams to evaluate.

Battle

Since we have 3424 teams, doing a battle for every ordered pair of teams requires 3424^2 = 11723776 battles.

However, there are also non-deterministic situations that arise in battle: for example, a mosquito randomly chooses an enemy to attack at the beginning of the round.

We can use the same DeterministicDice to enumerate every possible outcome of a battle between a pair of teams. Exploring all of these branches means simulating 24946805 battles in total, recording the win/lose/draw record for every pair of teams.
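The scoring loop then has roughly this shape (a sketch: Outcome and the battle function are stand-ins for whatever the real simulator uses):

enum Outcome { Win, Lose, Draw }

fn tournament<T>(
    teams: &[T],
    battle: impl Fn(&T, &T, &mut DeterministicDice) -> Outcome,
) -> Vec<Vec<(u32, u32, u32)>> {
    // (wins, losses, draws) for each ordered pair of teams
    let mut scores = vec![vec![(0u32, 0u32, 0u32); teams.len()]; teams.len()];
    for (i, a) in teams.iter().enumerate() {
        for (j, b) in teams.iter().enumerate() {
            // Enumerate every random branch of this single matchup
            let mut dice = DeterministicDice::new();
            while dice.next() {
                let e = &mut scores[i][j];
                match battle(a, b, &mut dice) {
                    Outcome::Win => e.0 += 1,
                    Outcome::Lose => e.1 += 1,
                    Outcome::Draw => e.2 += 1,
                }
            }
        }
    }
    scores
}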

Summing the scores gives a total win rate across the entire tournament:

Total win rate

The mean win rate is 35.7%, and the median is 33%.

Results

Our best team, with a 94.6% win rate:

🦗     🦟     🦟
❤️ 1   ❤️ 2   ❤️ 2
⚔️ 2   ⚔️ 2   ⚔️ 2

(shown in the same orientation as the game, i.e. the cricket will fight last)

The second-best team, with a 93.4% win rate, is

🐴     🦗     🦗
❤️ 2   ❤️ 1   ❤️ 1
⚔️ 1   ⚔️ 3   ⚔️ 3

There's a ton of random facts that I can pull from this data. I've presented a few below, but feel free to run the code and make new discoveries for yourself!

The best team with two friends

🦗     🦦
❤️ 2   ❤️ 3
⚔️ 3   ⚔️ 4

(71% win rate)

The best team with one friend

🐷
🍯
❤️ 5
⚔️ 3

(50% win rate; all hail the Big Pig)

The worst team with three friends

🦆     🦦     🐴
❤️ 1   ❤️ 1   ❤️ 2
⚔️ 4   ⚔️ 2   ⚔️ 1

(3.9% win rate)

Notes on implementation and future work

I've published the code on GitHub, and encourage folks to mess with it and send me any interesting discoveries.

The code is about 1.2KLOC of Rust, so it's fairly approachable.

It takes about 37 seconds to run, broken down as follows:

Intermediate results are cached in local files on disk (teams.binz and scores.binz), to make it easier to test small changes to the final analysis.

Cases where I've simplified the logic are marked with XXX; these are typically assumptions which work for the first turn but could break down later in the game.

I only implemented animals from the Free-To-Play pack; testing with other packs would be a fun extension of this work. The code is up to date as of the February 10th patch; later patches could invalidate these results!

There's also one hilarious corner case which I didn't implement. It is possible, on the very first turn, to do the following:

In other words, given a sufficiently lucky shop, you can go into battle on turn 1 with a Tier 2 unit, which normally wouldn't be available until a later round.

I decided not to implement all of the new logic for Tier 2 units, but PRs are welcome if you want to take it on!