
1,000,000 times faster computers

Posted:
2014-12-09
Tags:
algorithms chess

What would it buy us? If someone magically made all our computers run a million times faster, what would we be able to do then (or what would become practical) that we can't do today?

I think the surprising and pessimistic answer to that question is "not much." The reason, I believe, is that most algorithms that are bounded by CPU power are also combinatorially explosive.

What do I mean by that? Let's take chess engines as an example. They roughly work like this: take the board as it is now, calculate how many points white has, and subtract the number of points black has. So if everything is equal except that white still has its queen, the position's score would be nine, because a queen is worth nine points.

(How to score a position is a huge topic in itself. To simplify, in these examples I'll just use the conventional scores for each piece: pawn = 1, knight = 2.5, bishop = 3, rook = 5, queen = 9, king = infinite.)
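
To make the scoring concrete, here is a minimal sketch of such a material count in Factor (the same language as the calculation further down). The board representation is just an assumption for illustration, a plain string of piece letters with uppercase for white and lowercase for black, nothing like what a real engine uses:

USING: ascii assocs kernel math sequences ;
IN: scratchpad

! The piece values from above; the king is never actually captured,
! so it gets no entry here.
CONSTANT: piece-values H{
    { CHAR: p 1 } { CHAR: n 5/2 } { CHAR: b 3 } { CHAR: r 5 } { CHAR: q 9 }
}

! White's material minus black's material.
: material-score ( board -- score )
    0 [
        [ ch>lower piece-values at 0 or ] [ LETTER? ] bi
        [ + ] [ - ] if
    ] reduce ;

For the example above, where everything is equal except that white still has its queen (the equal pieces cancel, so they can be left out):

IN: scratchpad "Q" material-score .
9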

What if black had two rooks as compensation for the missing queen? Since a rook is worth five points, the score would be 9 - 5 - 5 = -1, which means that black would be slightly ahead.

White wants to make the position's score as high as possible, while black wants to make it as low as possible.

To decide what the best move is, the chess engine needs to look at each possible move, calculate the score of the resulting board position, and then choose the move that maximizes the score if it plays white, or the one that minimizes it if it plays black.

Looking one move ahead isn't so tough. If there are only 30 possible moves, then there are only 30 board positions to analyze. Obviously, the number of possible moves depends on the number of pieces on the board. Early in the game the number of moves is higher, and later on it decreases as pieces are removed.

But looking one move ahead isn't enough to play chess well. It might lead to the engine choosing a move in which a knight is captured by the queen, giving a 2.5-point advantage. But then the opponent might recapture the queen with a pawn, and trading a queen worth 9 for a knight worth 2.5 would be pretty horrible.

Therefore the engine has to not only look one step ahead but also look at what the opponent's best reply is. If there are 30 possible moves, and for each possible move there are 30 possible replies, then there are now 30^2 = 900 positions to analyze.

I hope it's easy to see that looking only two moves ahead is also pretty bad, and that you want the engine to look as far ahead as possible.
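
To make "look ahead and pick the best move, assuming the opponent also picks the best reply" concrete, here is a minimal minimax sketch, again in Factor and again only a sketch: legal-moves and make-move are hypothetical placeholder words (real move generation is a project of its own), and material-score is the toy evaluation sketched earlier.

USING: kernel locals math math.order sequences ;
IN: scratchpad

! Hypothetical words assumed to exist elsewhere:
!   legal-moves ( board -- moves )       all moves for the side to move
!   make-move   ( board move -- board' )

:: minimax ( board depth white? -- score )
    depth 0 = [ board material-score ] [
        board legal-moves
        [ board swap make-move depth 1 - white? not minimax ] map
        white? [ supremum ] [ infimum ] if
    ] if ;

Every extra level of depth multiplies the work by the number of legal moves, which is exactly the combinatorial explosion described next.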

Looking five moves ahead (move + reply + move + reply + move), which would make the engine play at a very basic level, would then require checking 30^5 = 24,300,000 positions. And that is what is meant by "combinatorial explosion" -- a small increase in depth results in an exponentially larger amount of work.
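
You can see the explosion directly by computing the number of positions for depths one through five in the Factor listener (the same scratchpad style as the calculation further down):

IN: scratchpad { 1 2 3 4 5 } [ 30 swap ^ ] map .
{ 30 900 27000 810000 24300000 }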

Stockfish analyzes lines as deep as (I think) 20 moves, or 30^20 positions. Actually, it has various optimizations to keep the number of positions more manageable, because 30^20 is an astronomically large number (roughly 3.5 × 10^29). But in principle it works the same way.

A good assumption is that the number of board positions Stockfish can analyze is proportional to the computing power it has available. With computers a million times faster, Stockfish can analyze a million times more board positions. That sounds like a lot, but let's do the math using Factor and ask what search depth d that corresponds to, i.e. solve 30^d = 1,000,000 × 30^20 for d:

IN: scratchpad 30 20 ^ 1000000 * log 30 log / .
24.06195495517307

So a million times faster computers would mean Stockfish would be able to analyze lines 24 moves deep instead of 20.

I think that result is kind of underwhelming. In real life, versus a real human, the result would be even less impressive. In a game today between the world champion Magnus Carlsen and Stockfish, the former would get creamed. A Stockfish able to look 24 moves deep would beat him even harder, but the difference to a human observer would be imperceptible. He would stand no chance at all in either scenario.

At this point in time, the only use chess engines have for extra computing power is fighting other chess engines.

What about go, then? I don't know enough about the game to say how go engines could utilize a million times more computing power. Currently, the top players still beat the best engines, so it is possible that the extra computing power would tip the game in the engines' favor. I know for sure that the engines soundly beat go newbies like me, though. :)

Another combinatorially explosive problem is weather forecasting. It's kind of like chess in that you know the state of a region's weather today and you have to calculate the probabilities that it will change in various ways (more rain, sunshine, thunderstorms, etc.). As in chess, it becomes much harder to forecast the weather the farther into the future the forecast reaches.

So if we can currently predict the weather fairly reliably five days into the future, then with a million times more computing power maybe we can predict it for six days? In my mind, that's another underwhelming result.

So those are my pessimistic thoughts on what a million times faster computers would yield: not that much, really.