Perhaps it's too late, but it would be interesting to read about how you programmed the "AI" for each level. It took me a while to discover that there are a few opponents, and then I was surprised that they have very different strategies.
I used that strategy a few times when it came out. I found it interesting to see how fast humans could respond and nullify the advantage. I think after 3-4 wins with the strategy I started getting countered effectively.
So it took a few days and all the players at my level knew about it and could handle it easily.
It's hard to beat a general purpose strong AI in the long run!
I remember watching this AI play and wondering whether I could write my own strategy into an AI. The AI's strategy suffers from the issues that my strategy tries to combat. Perhaps someone can tell from my screenshot what my strategy is.
The problem with this particular algorithm is that I needed to have a lot of data to make it sort of work. I'm not sure how your AI was trained to play that game; it sounds like a difficult problem.
I wonder if knowing the different strategies would make pro human players better at playing the game. I feel like they might already have internalized the different strategies the computer uses, but it's still pretty surprising to see how simple the source code is.
Also cool to think about would be AI-vs-AI Street Fighter competitions, or an SF AI that learns from live matches currently being played online.
This was my first "AI" project, back in high school!
I had a "win statistic" for each possible play, for each game state. It taught itself to play by playing partially random moves, then going back and updating the win statistic for each game state in the play chain.
I was mesmerized, watching it teach itself to play.
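Roughly, in Python, the scheme was something like this (the game object and its methods here are stand-ins for illustration, not the original code):

    import random
    from collections import defaultdict

    # win_stats[(state, move)] -> [wins, plays]: the per-state "win statistic"
    win_stats = defaultdict(lambda: [0, 0])
    EPSILON = 0.3  # fraction of moves played at random (the "partially random" part)

    def pick_move(state, legal_moves):
        # Usually pick the move with the best win rate so far; sometimes explore.
        if random.random() < EPSILON:
            return random.choice(legal_moves)
        def win_rate(move):
            wins, plays = win_stats[(state, move)]
            return wins / plays if plays else 0.5  # neutral prior for unseen moves
        return max(legal_moves, key=win_rate)

    def self_play_game(game):
        chain = []  # the "play chain": every (state, move, player) this game
        while not game.over():
            state = game.state()
            move = pick_move(state, game.legal_moves())
            chain.append((state, move, game.current_player()))
            game.play(move)
        # Go back and update the win statistic for each game state in the chain.
        for state, move, player in chain:
            stats = win_stats[(state, move)]
            stats[1] += 1
            if game.winner() == player:
                stats[0] += 1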
I spent enormous amounts of time tweaking the computer's strategy so it would play a reasonably competent game. Curiously, each decision looks like a sequence of input values, each with a coefficient.
What does that look like? You guessed it, a neural network! I obviously had no idea what I was doing. I never hit on the notion of generalizing it, and then doing training to find the best coefficients and the inputs that mattered. (Of course, I didn't have the computer power to run 10,000 game iterations, either, when it'd take an hour to run one game.)
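That "inputs times coefficients" shape is literally a one-neuron network. A minimal sketch, with made-up inputs and hand-picked weights standing in for the real ones:

    # A hand-tuned decision: weighted sum of game-state inputs vs. a threshold.
    # This is one neuron with fixed weights; "training" would mean searching for
    # the coefficients instead of tweaking them by hand, game after game.
    def should_attack(enemy_strength, own_strength, distance_to_city):
        coefficients = (-0.8, 1.0, -0.3)  # the endlessly hand-tweaked part
        inputs = (enemy_strength, own_strength, distance_to_city)
        score = sum(c * x for c, x in zip(coefficients, inputs))
        return score > 0.5  # the threshold plays the role of an activation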
I bet now one could use neural networks to turn the computer strategy into a formidable opponent.
I watched the youtube videos of all the battles so far and I was really surprised to see that a very simple strategy (in retrospect) was sufficient to win the competitions.
I've been waiting to see if someone implements an AI bot that's trained on all the bots in the competition, but maybe you just don't need to.
This is super interesting. I would like to see a real (learning, à la AlphaGo) A.I. play on Showdown OU someday, without cheating (i.e., limited to about the number of matches per day a real human player would play).
I think there are a few challenges not found in most other online games, one being that the strategies that win on different strata of the ladder are not the same (e.g. hyper offense is the most efficient on the lower ladder, but at some point around 1500/1600 you will start losing with it...). Also, I wonder how well an A.I. trained on the ladder would do in a tournament (e.g. SPL), where the metagame is a bit different.
Your code looks like a good entry point for that, all that's needed is to write a new bot using ML libraries...
I have a question: with AI nowadays able to dominate pretty much any board game, is there any hope for anyone hand-coding a strategy here against someone who just trains an AI model?
And how does the problem space change in a game such as this, where it may be one player versus eight opponents, compared to a 1v1 game such as chess? I wonder.
Very impressive. If my understanding of how the AI works is correct, it is using a pre-computed strategy developed by playing trillions of hands, but it is not dynamically updating that during game play, nor building any kind of profiles of opponents. I wonder if by playing against it many times, human opponents could discern any tendencies they could exploit. Especially if the pre-computed strategy remains static.
I've written an AI for a similar game, where you can't always even iterate over all your moves for this turn, let alone do a multi-ply tree search.
Using their terminology of Action and Turn, my method was to first evaluate individual Actions using a simpler evaluation function, applying a cutoff so that only a reasonable number of Actions had to be evaluated; then combine that reduced group of Actions into a manageable subset of full Turn moves; then use the full evaluation function to find the best Turn.
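A rough sketch of that two-stage scheme in Python (cheap_eval, full_eval, the cutoff size, and the Actions-per-Turn count are all placeholders):

    from itertools import combinations

    CUTOFF = 8             # Actions surviving the cheap first pass
    ACTIONS_PER_TURN = 3   # Actions combined into one Turn (game-specific)

    def best_turn(state, all_actions, cheap_eval, full_eval):
        # Stage 1: score each Action alone with the cheap evaluator, keep the best.
        ranked = sorted(all_actions, key=lambda a: cheap_eval(state, a), reverse=True)
        shortlist = ranked[:CUTOFF]
        # Stage 2: combine survivors into candidate Turns, score each whole Turn
        # with the expensive evaluator, and pick the best one.
        candidates = combinations(shortlist, ACTIONS_PER_TURN)
        return max(candidates, key=lambda turn: full_eval(state, turn))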
Writing a good evaluation function for a complex game is hard, so I picked a whole bunch of inputs and then used a genetic algorithm to find weights for them. (Let the various AIs play entire games, and let the winners breed.)
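The weight search itself can be a bog-standard genetic algorithm; a minimal sketch, where the population size, mutation rate, and so on are illustrative rather than the tuned values:

    import random

    N_WEIGHTS = 20   # one weight per evaluation-function input
    POP_SIZE = 32
    MUTATION = 0.1

    def random_individual():
        return [random.uniform(-1, 1) for _ in range(N_WEIGHTS)]

    def crossover(a, b):
        # Each weight comes from one parent or the other.
        return [random.choice(pair) for pair in zip(a, b)]

    def mutate(weights):
        return [w + random.gauss(0, 0.2) if random.random() < MUTATION else w
                for w in weights]

    def evolve(play_match, generations=100):
        # play_match(w1, w2) plays a full game between two AIs using the given
        # evaluation weights and returns the winner's weights.
        population = [random_individual() for _ in range(POP_SIZE)]
        winners = population[:POP_SIZE // 2]
        for _ in range(generations):
            random.shuffle(population)
            winners = [play_match(a, b)
                       for a, b in zip(population[::2], population[1::2])]
            children = [mutate(crossover(random.choice(winners),
                                         random.choice(winners)))
                        for _ in range(POP_SIZE - len(winners))]
            population = winners + children
        return winners[0]  # one of the final generation's game-winners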
Works okay, but still can't beat a decent human player very often. (There's enough randomness in the game that the better player doesn't always win.)
In this paper, their Online Evolution beat the other four computer strategies, but they don't mention whether it can beat good human players. If it can't, it's not clear to me whether Online Evolution is a good algorithm. Beating their other four algorithms doesn't seem to be a very high bar.
I've seen this in some of the player-made AI bot competitions: it's exciting to see how an opponent reacts to an all-in strategy that its programmer likely never trained it to expect.
I'd just run games, look at results, and endlessly tweak the strategy. Recently I learned how neural networks work, and realized I could finally make a computer strategy that was competent. It could be trained by playing zillions of games against itself.
My only defense is that training a neural network was impractical on the machines Empire was developed on.
It's hard to resist going back to Empire and doing this.
Haha yes I weakened the AI somewhat compared to the original version created by RedBrogdon. It's not fun to be constantly thrashed by an opponent who takes 0.2 seconds per move! Despite my changes I still lose about 3 out of every 4 games :(
The interesting part to me is that, as far as I understand, the AI figured out this strategy by itself, basically deciding that it would be a good way for it to win games, rather than being specifically programmed to do it. That's actually pretty cool!
Other than that, I agree, and am also much more interested in what happens when you have a more level playing field (using camera movement rather than API, limiting reaction times and CPM, etc). I look forward to future matches where this happens.
These algorithms are similar to the ones I developed for the computer strategy for the Empire game. They worked, but I was unable to get them good enough to reliably defeat a human player.
The first strategy I tried was simply randomness. It was pretty ineffective. A rule-based one was also ineffective, as it tended to get itself into a box. But I found that adding a dose of randomness to the rule-based strategy worked well.
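That "rules plus a dose of randomness" idea is easy to express; a minimal sketch, assuming a priority-ordered list of rule functions (the 10% figure is illustrative):

    import random

    def choose_move(state, legal_moves, rules):
        # A small dose of randomness keeps the AI out of the "boxes"
        # a purely deterministic rule set walks itself into.
        if random.random() < 0.1:
            return random.choice(legal_moves)
        # Otherwise take the first rule that fires, in priority order.
        for rule in rules:
            move = rule(state, legal_moves)
            if move is not None:
                return move
        return random.choice(legal_moves)  # fallback when no rule applies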
Most computer strategies of the day cheated, or altered the rules to give an advantage to the AI. Empire didn't. I did have thoughts about creating an interface so people could provide their own AIs and then the various AIs could battle it out.
classicempire.com
There's been a lot of research on the Life game. But I bet Empire would be a much better research platform: the rules are simple, but the game play can get pretty complex. I had a lot of fun developing AI algorithms for it. I wish the ant algorithms had been known at the time; I bet they'd help!
(One of the reasons the simple rules worked so well was the pieces had a hammer-paper-scissors relationship to each other.)