Forums - Hard Drop - Tetris Community _ Strategy/Help _ Tools for Analyzing Surface Connectivity

Posted by: pqqu Jun 29 2019, 05:44 PM

Hi all!

I'm trying to work on my upstacking very intentionally because I'm concerned about building in bad habits. I had some ideas for Moneyball-style tools to measure and analyze surface connectivity live as one is upstacking, and I wanted to ask if there's anything like that out there. My goal would be to keep tuning my intuition against hard data from the game, the way pitchers do in baseball.

I want to be able to live-measure variables as I upstack, such as:
- how many spots there are for an upcoming piece;
- the total length of the current surface (between 10 and... I dunno, 100 I guess, though that's extreme);
- if hills or valleys are created, the surface connectivity of those individual surfaces (to get better data while doing the 3-6 split);
- seven sub-variables counting the available drop positions for each of the seven pieces, summed to give a measure of how "open" or "closed" the surface is to new pieces.

I wanted to ask if there are tools like this already out there that could be built into NullpoMino? Or are there other variable-training strategies other people have used? Lastly, I feel no enmity toward just doing it and learning by experience, but I also want to seek out opportunities to tune my muscle memory as I build. Thank you!

Posted by: Shuey Jun 29 2019, 07:33 PM

Ryan Heise did a study on this long ago, and also created an AI for it:

Posted by: caffeine Jun 29 2019, 08:59 PM

That's a great question, and something I've been interested in as well.

Probably the simplest way to capture connectivity is a feature from Dellacherie's algorithm: "row transitions." Scan each row cell by cell and count how many times adjacent cells transition from empty to filled or vice versa, with the walls counting as filled. An empty board has 40 transitions (two per row); an empty board with one filled cell in the middle has 42. This is a robust measure and works very well when building a Tetris bot.
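A minimal sketch of that computation, assuming a 10x20 board stored as rows of booleans (the board layout and function name are mine, not from any particular bot):

```python
# Dellacherie-style "row transitions" for a 10-wide board stored as a list
# of rows, each row a list of booleans (True = filled).
# Both side walls are treated as filled cells.

def row_transitions(board):
    transitions = 0
    for row in board:
        prev = True  # left wall counts as filled
        for cell in row:
            if cell != prev:
                transitions += 1
            prev = cell
        if not prev:  # right wall counts as filled
            transitions += 1
    return transitions

empty = [[False] * 10 for _ in range(20)]
print(row_transitions(empty))  # 40: two transitions per row, one at each wall

one_cell = [[False] * 10 for _ in range(20)]
one_cell[10][5] = True
print(row_transitions(one_cell))  # 42: the lone filled cell adds two more
```

Lower is smoother: every bump in the surface and every overhang adds transitions, which is why bots minimize it.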

Another simple method is to check whether any piece would be forced to create a hole on the current surface. This true/false measure also has value when building a bot, mainly because it helps the bot avoid building surfaces that don't accommodate S and Z.
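One way to sketch that check from column heights alone, restricted to S and Z since they are the usual culprits. The flush contours below are hand-derived assumptions; verify them against your own rotation system:

```python
# True/false "does S or Z force a hole here" check, from column heights.
# An orientation sits flush (no hole) when the surface height differences
# match its bottom contour: S fits on a (0, +1) step horizontally or a
# single -1 step vertically; Z mirrors this with (-1, 0) and +1.
# Contours are hand-derived assumptions; double-check before relying on them.

def s_or_z_forces_hole(heights):
    diffs = [heights[i + 1] - heights[i] for i in range(len(heights) - 1)]
    s_fits = any(diffs[i:i + 2] == [0, 1] for i in range(len(diffs) - 1)) \
        or -1 in diffs
    z_fits = any(diffs[i:i + 2] == [-1, 0] for i in range(len(diffs) - 1)) \
        or 1 in diffs
    return not (s_fits and z_fits)

print(s_or_z_forces_hole([4] * 10))                       # True: flat ground
print(s_or_z_forces_hole([4, 4, 5, 5, 4, 4, 4, 4, 4, 4])) # False: the bump fits both
```

A flat surface comes out True because dropping S or Z anywhere on level ground always buries a cell.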

Another method is, for each piece, each orientation of that piece, and each column that orientation can be placed in, to count the placements that do not create a hole. Playing with this, I have found that it matches my intuition of what a good surface looks like quite well.

This can also be efficiently calculated by keeping track of column heights and column height differences. This is unfortunately not such a useful feature for bots, since it creates a strong perverse incentive for the bot to cover up problem surfaces with pieces even at the expense of creating holes.
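For concreteness, here is one way that count could be sketched from the column heights. Each orientation is reduced to the adjacent-column height differences its underside needs in order to sit flush; the contour table is hand-encoded for a standard piece set and should be double-checked, and the function name is mine:

```python
# Count, per piece, the placements (orientation x column) that sit flush on
# the surface and so create no hole. Each orientation is encoded as the tuple
# of height differences its bottom contour requires between adjacent columns.
# Hand-encoded; verify against your rotation system. Summing over all seven
# pieces gives one possible "openness" score for the surface.

CONTOURS = {
    'I': [(0, 0, 0), ()],                 # flat 4-wide, or any single column
    'O': [(0,)],
    'T': [(0, 0), (1,), (-1,), (-1, 1)],
    'S': [(0, 1), (-1,)],
    'Z': [(-1, 0), (1,)],
    'L': [(0, 0), (2,), (0, -1), (0,)],
    'J': [(0, 0), (-2,), (1, 0), (0,)],
}

def hole_free_placements(heights):
    diffs = [heights[i + 1] - heights[i] for i in range(len(heights) - 1)]
    counts = {}
    for piece, orientations in CONTOURS.items():
        n = 0
        for contour in orientations:
            width = len(contour) + 1
            for c in range(len(heights) - width + 1):
                if tuple(diffs[c:c + len(contour)]) == contour:
                    n += 1
        counts[piece] = n
    return counts

flat = hole_free_placements([0] * 10)
print(flat['S'], flat['Z'])  # 0 0: S and Z always dig a hole on flat ground
print(sum(flat.values()))    # total "openness" of a flat 10-wide surface
```

This is also the seven-sub-variable measure the original poster asked about: the per-piece counts are the sub-variables, and their sum is the openness score.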

Then there is the Pagerank Tetris algorithm, cleverly implemented by Ryan Heise. It is excellent at matching my intuition of what good and bad surfaces look like. The algorithm itself is actually value iteration on a Markov decision process, without discounting. NullpoMino includes an implementation of it; you just have to run it first to build up a database. You can even configure NullpoMino to show you the placements its Pagerank bot would make.

Since value iteration on an MDP converges to a provably optimal policy, it's no mystery why it appears to evaluate surfaces well. It does so, however, only under limited conditions.
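For intuition, the update underlying value iteration is easy to state. The miniature MDP below is invented purely for illustration (its states and rewards are not Tetris surfaces); the point is the undiscounted Bellman update V(s) ← max_a Σ P(s′|s,a)·(R + V(s′)), which converges here because every path reaches a terminal state:

```python
# Undiscounted value iteration on a made-up miniature MDP.
# transitions[state][action] = list of (probability, next_state, reward).
# It converges without a discount factor because every path hits the
# terminal state 'end'; Pagerank Tetris relies on a similar termination
# structure (eventually the stack tops out or lines clear).

transitions = {
    's0':  {'a': [(1.0, 's1', 0.0)], 'b': [(1.0, 's2', 1.0)]},
    's1':  {'a': [(1.0, 'end', 5.0)]},
    's2':  {'a': [(1.0, 'end', 1.0)]},
    'end': {},  # terminal: no actions, value stays 0
}

def value_iteration(transitions, sweeps=100):
    V = {s: 0.0 for s in transitions}
    for _ in range(sweeps):
        for s, actions in transitions.items():
            if actions:
                V[s] = max(
                    sum(p * (r + V[nxt]) for p, nxt, r in outcomes)
                    for outcomes in actions.values()
                )
    return V

V = value_iteration(transitions)
print(V['s0'])  # 5.0: the optimal policy takes 'a' twice (0 + 5 beats 1 + 1)
```

In the Tetris version the states are surfaces, the actions are placements, and V(surface) is what gets read back as the surface's "Pagerank."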

NullpoMino's implementation of Pagerank did not exactly match the results I've gotten independently. This could be because of how the endgame conditions were encoded, how many iterations were run, or some other reason. For example, of the 9^8 possible stacks we can represent with Pagerank Tetris, many are not actually reachable in a real game. If the implementation does not account for that, and is simplified to the point that these unreachable surfaces become reachable, that would degrade the quality of the policy.

Although I haven't tested it personally, I would expect Pagerank not to be very useful as a feature for bots in general, for the same reasons I mentioned earlier. It is nonetheless an interesting idea to use it as feedback for the user. I've thought about creating a "trainer": a 9x9 playfield where the user could only select surfaces that do not create holes. The game could then tell the user how good each placement was and make recommendations, perhaps even giving them a score at the end showing their average Pagerank.

What I'm particularly interested in is a more holistic approach that looks at not just the surface, but the entire playfield and guides the user accordingly. This would be similar to how chess players use software to analyze their games. Of course, it's necessary to first have a bot with superhuman skill if the guidance is actually going to be useful.

Posted by: XaeL Jun 30 2019, 12:54 AM

Wow caffeine, I'm extremely impressed. You went from not coding at all, to entering a simple dizzy bot into a tournament, and now you have more AI knowledge than most computer science graduates!

Powered by Invision Power Board © Invision Power Services