Chapter 49: The Neural Network Revolution: How AI Changed Chess
Rating: 2400+
"I always wondered what it would be like if a superior species landed on earth and showed us how they played chess. Now I know." - Peter Heine Nielsen, on watching AlphaZero play
What You'll Learn
- How chess engines evolved from brute-force search to neural networks, and why the difference matters for your training
- What AlphaZero's self-taught style revealed about chess that centuries of human theory had missed
- How modern Grandmasters use engines for preparation, and how you can adopt those methods without losing your own chess voice
- The practical differences between Stockfish and Leela Chess Zero, and when to consult each
- How to be an honest, creative, independent chess player in an era where engines are stronger than any human who has ever lived
You Are Here 🗺️
Volume V ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
Ch 46 ██
Ch 47 ██
Ch 48 ██
Ch 49 ██ ← YOU ARE HERE
Ch 50 ░░
Ch 51 ░░
Ch 52 ░░
Ch 53 ░░
Ch 54 ░░
A Note Before We Begin
This chapter is different from any other in the Codex. Every other chapter teaches principles that have been tested for decades or centuries. This one covers a topic that is still evolving. Some of what you read here about specific engine versions will be outdated within a few years. That is fine.
The principles behind neural network chess (activity over material, piece harmony, dynamic evaluation) are timeless. Those are what we are here to learn. The specific engines are just the messengers.
PART 1: FROM BRUTE FORCE TO NEURAL NETWORKS
1.1 A Brief History of Chess Engines
The dream of a chess-playing machine is older than electricity. In the 18th century, the "Mechanical Turk", a fake automaton with a hidden human operator, toured Europe, defeating Napoleon and Benjamin Franklin. The idea that a machine could play chess captivated people long before anyone knew how to build one.
Real computer chess began in the 1950s. Claude Shannon, the father of information theory, wrote the first paper describing how a program could play chess. His approach divided into two categories:
Type A (brute force): Search every possible move, every possible reply, every possible reply to that reply, as deep as the hardware allows. Evaluate the resulting positions with a simple formula (count material, check king safety, add a few bonuses). Choose the path that leads to the highest score.
Type B (selective search): Try to mimic human thinking. Look only at the "interesting" moves. Prune branches that seem irrelevant. Search deeper in sharp positions, shallower in quiet ones.
For decades, the debate raged: would brute force or selective search produce the stronger program?
Brute force won. Overwhelmingly.
In 1997, IBM's Deep Blue defeated World Champion Garry Kasparov in a six-game match. Deep Blue evaluated approximately 200 million positions per second. It did not "understand" chess in any human sense. It simply searched so deeply and so quickly that understanding was unnecessary. Raw calculation, guided by a hand-tuned evaluation function developed with the help of human Grandmasters, was enough to defeat the best human player alive.
After Deep Blue, chess engines improved steadily. By the mid-2000s, top programs like Fritz, Rybka, and Houdini were stronger than any human. By 2010, a top engine running on a laptop could defeat any Grandmaster alive. The hardware had won. The question shifted from "can computers beat humans?" to "what can we learn from them?"
Then, in December 2017, everything changed again.
1.2 How Traditional Engines Work
Before we discuss what changed, you need to understand what came before.
A traditional chess engine like Stockfish works in three stages:
Stage 1: Move Generation. The engine generates all legal moves in the current position. This is fast: modern processors can generate millions of move lists per second.
Stage 2: Search. The engine uses a tree-search algorithm called alpha-beta pruning. Think of it as a decision tree: from the current position, the engine considers move A, then the opponent's best reply, then its own best reply to that, and so on. Alpha-beta pruning is a method for skipping branches that cannot possibly be better than a branch already found. This is not a shortcut based on chess knowledge: it is a mathematical guarantee. If branch X is already proven to be excellent, and branch Y cannot possibly be better (because the opponent has a strong reply early in the line), there is no need to search branch Y further.
This pruning reduces the work enormously. Without it, searching 10 moves deep would require evaluating roughly 30^10 positions (about 590 trillion). With alpha-beta pruning, the effective branching factor drops to roughly the square root: still billions of positions, but within reach of modern hardware.
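The pruning rule fits in a few lines of code. Here is a minimal Python sketch over a hand-built toy game tree; it illustrates the idea only and is nothing like Stockfish's actual implementation, which adds move ordering, transposition tables, and dozens of other refinements. The tree and its leaf scores are invented for the example.

```python
def alphabeta(node, depth, alpha, beta, maximizing):
    """Minimax value of `node`, skipping branches that provably
    cannot change the result (alpha-beta pruning)."""
    if depth == 0 or not node.get("children"):
        return node["score"]                  # leaf: static evaluation
    if maximizing:
        value = float("-inf")
        for child in node["children"]:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:                 # opponent already has better elsewhere
                break                         # prune the remaining siblings
        return value
    else:
        value = float("inf")
        for child in node["children"]:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:
                break
        return value

# A tiny two-ply tree; leaf scores are centipawn evaluations.
leaf = lambda s: {"score": s}
tree = {"score": 0, "children": [
    {"score": 0, "children": [leaf(30), leaf(80)]},    # line A: opponent picks 30
    {"score": 0, "children": [leaf(10), leaf(500)]},   # line B: pruned after 10
]}
best = alphabeta(tree, 2, float("-inf"), float("inf"), True)
print(best)   # -> 30 (the 500 leaf is never even visited)
```

After line A guarantees the root at least 30, the first leaf of line B shows the opponent can hold White to 10 there; the remaining leaf is skipped without being evaluated. That skip, multiplied across billions of branches, is the entire speedup.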
Stage 3: Evaluation. At the end of each search branch, the engine evaluates the resulting position. Traditional Stockfish (before the NNUE update) used a hand-crafted evaluation function: a formula that assigned numerical values to material, piece activity, king safety, pawn structure, mobility, and dozens of other factors. The weights for each factor were tuned over years of testing, often by running millions of games between different versions.
The output is a score in centipawns. A score of +100 means White is ahead by roughly one pawn. A score of +300 means White has the equivalent of an extra piece. A score of 0 is dead equal.
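Centipawn scores are often translated into an expected score (a rough winning probability). There is no single official formula; the logistic sketch below borrows the 400-point scale factor from the Elo rating formula purely for illustration, and the exact numbers vary by engine and by the strength of the players involved.

```python
# Rough, illustrative mapping from a centipawn score to an expected
# score for White (0 = Black wins, 1 = White wins). The logistic
# shape is standard; the 1/400 scale is an assumption borrowed from
# the Elo formula, not an official engine constant.
def expected_score(centipawns: float) -> float:
    return 1.0 / (1.0 + 10 ** (-centipawns / 400.0))

print(round(expected_score(0), 2))     # dead equal -> 0.5
print(round(expected_score(100), 2))   # about a pawn up -> 0.64
print(round(expected_score(300), 2))   # roughly an extra piece -> 0.85
```

The curve makes one practical point visible: the same 100 centipawns matter far more near equality than when one side is already winning.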
The NNUE Update. Starting around 2020, Stockfish incorporated NNUE (Efficiently Updatable Neural Network) into its evaluation function. This replaced the hand-crafted formula with a small neural network that evaluates positions more accurately. But the search algorithm (alpha-beta with pruning) remained the same. Stockfish with NNUE is a hybrid: neural network evaluation bolted onto classical search.
This distinction matters because the neural network engines that arrived in 2017 took a completely different approach.
1.3 How Neural Network Engines Work
In December 2017, the company DeepMind published a paper describing AlphaZero, a program that taught itself to play chess from scratch. No opening book. No endgame tables. No hand-crafted evaluation function. No human games to study. Just the rules of chess and an algorithm that played against itself millions of times, learning from every game.
AlphaZero's architecture was built on two components:
Component 1: A deep neural network. Given a chess position, the network outputs two things: (a) a policy, a probability distribution over all legal moves that predicts which moves are most promising, and (b) a value, a single number estimating the probability of winning from this position. The network learned both functions entirely through self-play.
Component 2: Monte Carlo Tree Search (MCTS). Instead of alpha-beta pruning, AlphaZero uses a different search method. MCTS does not try to search every branch to a fixed depth. Instead, it samples: it repeatedly plays out positions from the current state, guided by the neural network's policy predictions. Moves that the network thinks are promising get searched more deeply. Moves that look unpromising get fewer visits. Over thousands of samples, the search converges on the best move.
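To make the sampling idea concrete, here is a toy Python sketch of the selection rule (known as PUCT) that AlphaZero-style engines use to decide which move to visit next. The "network" is a stub returning invented priors and values, and the constant `C_PUCT = 1.5` is an arbitrary choice for this example, not AlphaZero's actual parameter.

```python
import math

C_PUCT = 1.5   # exploration constant: an arbitrary choice for this sketch

def fake_network(move):
    """Stub for the neural network: returns (prior, value) per move.
    A real network evaluates whole positions; these numbers are invented."""
    priors = {"Nf3": 0.6, "a4": 0.1, "d4": 0.3}
    values = {"Nf3": 0.55, "a4": 0.40, "d4": 0.52}
    return priors[move], values[move]

moves = ["Nf3", "a4", "d4"]
visits = {m: 0 for m in moves}
total_value = {m: 0.0 for m in moves}

for _ in range(1000):
    total_n = sum(visits.values()) + 1

    def puct(m):
        # average result so far, plus an exploration bonus scaled by the prior
        prior, _ = fake_network(m)
        q = total_value[m] / visits[m] if visits[m] else 0.0
        return q + C_PUCT * prior * math.sqrt(total_n) / (1 + visits[m])

    chosen = max(moves, key=puct)
    _, value = fake_network(chosen)       # "playout" result from the value head
    visits[chosen] += 1
    total_value[chosen] += value

# Moves the network likes get searched far more often.
print(max(visits, key=visits.get))   # -> Nf3
```

Run this and the visit counts end up heavily skewed toward the move with the best prior and value, which is exactly the "intuition first, calculation second" behavior described below: no branch is searched to a fixed depth, yet the promising ones receive almost all the effort.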
The key difference from traditional engines: AlphaZero's search is guided by intuition. Its neural network acts like a Grandmaster's pattern recognition: it says "this move looks interesting, search here" before any deep calculation has occurred. Traditional engines search broadly and then evaluate. AlphaZero evaluates first, then searches selectively.
This is remarkably similar to how strong human players think. And the results were extraordinary.
1.4 AlphaZero's Lasting Legacy
In its published matches against Stockfish (played under controlled conditions; critics noted that Stockfish was given neither an opening book nor access to endgame tablebases, giving AlphaZero an advantage in the early and late phases), AlphaZero won decisively. More important than the score was the way it won.
Chess players and Grandmasters around the world studied AlphaZero's games with astonishment. The program's style was unlike any engine they had ever seen. It was unlike most humans, too. It was something new.
Here are the key lessons AlphaZero taught us:
Lesson 1: Dynamic Piece Sacrifices for Long-Term Activity
AlphaZero routinely sacrificed pawns, and sometimes pieces, for activity that lasted 20, 30, even 40 moves. Traditional engines would reject these sacrifices because the material deficit appeared in the evaluation function immediately, while the compensation was difficult to quantify. AlphaZero's neural network had learned, through millions of self-play games, that a pawn is a small price for keeping every piece active and every line open.
This was not new in chess theory. Tal, Kasparov, and Bronstein all played this way. But AlphaZero demonstrated it with a consistency and precision that no human had ever achieved. It showed that dynamic compensation is real and reliable, not just a romantic fantasy. You can sacrifice material for activity, and if you maintain pressure accurately, your opponent may never get the chance to use their extra material.
Lesson 2: King Activity in the Middlegame
AlphaZero moved its king to unusual squares during the middlegame, not to castle into safety but to reposition it where it could support the pawn structure or avoid back-rank threats. The famous h-pawn pushes in its games often left the king on h1 or g1 with the h-file wide open, but AlphaZero's other pieces controlled the critical squares so thoroughly that the king was safe despite appearances.
This challenged a principle drilled into every beginner: "keep your king safe behind pawns." AlphaZero showed that safety is contextual. A king is safe when the opponent cannot attack it, whether or not there are pawns in front of it.
Lesson 3: Piece Harmony Over Material Count
AlphaZero frequently reached positions where it was down a pawn or even two, but every single one of its pieces was on its optimal square. Its bishops controlled long diagonals. Its knights occupied outposts. Its rooks owned open files. Its queen coordinated with every other piece.
The opponent, meanwhile, had extra material, but at least one piece was passive, stuck defending, or disconnected from the action. AlphaZero proved that coordinated pieces are worth more than extra material.
This principle was articulated by Steinitz in the 19th century. AlphaZero proved it empirically, against the strongest evaluation function ever written.
Lesson 4: Prophylactic Depth
AlphaZero's prophylaxis (preventing the opponent's plans) operated at a depth that shocked analysts. It would make quiet moves, shuffling a rook to a seemingly random square, that only revealed their purpose 15 or 20 moves later, when the opponent's planned breakthrough was neatly blocked. Traditional engines did not "see" the threat until much later in the search tree. AlphaZero's neural network recognized the pattern from self-play experience and prevented it preemptively.
This is Nimzowitsch's prophylaxis (Volume III, Chapter 24) taken to its logical extreme. AlphaZero did not just prevent the opponent's next threat; it prevented threats the opponent had not even conceived yet.
🛑 Rest point. The next section covers the modern engine era. If you need a break, this is a good place to pause.
PART 2: THE MODERN ENGINE ERA
2.1 Leela Chess Zero: The Open-Source Neural Network
AlphaZero's code was never released. But its published paper contained enough detail for a community of volunteers to build an open-source version: Leela Chess Zero (Lc0).
Lc0 follows the same architecture (deep neural network plus Monte Carlo Tree Search) but is trained on consumer hardware distributed across thousands of volunteers. Its training has continued for years, producing networks of steadily increasing strength.
Lc0 plays chess that is recognizably "AlphaZero-like." It favors activity over material. It is willing to sacrifice pawns for long-term pressure. It evaluates positions in ways that sometimes disagree sharply with Stockfish. And it plays certain types of positions (closed, strategic, long-term maneuvering) with a fluency that classical engines struggle to match.
2.2 Stockfish vs. Lc0: Different Strengths, Different Styles
As of the 2020s, Stockfish (with NNUE) and Lc0 are the two strongest chess engines in the world. Their head-to-head results are extremely close. But their styles are different, and these differences matter for your training.
Stockfish's Strengths:
- Tactical precision in sharp, forcing positions
- Endgame accuracy (especially with tablebase access)
- Speed of analysis (evaluates millions of positions per second on consumer hardware)
- Clear, explainable evaluations (centipawn scores correlate well with objective reality)
Lc0's Strengths:
- Positional intuition in closed or strategic positions
- Willingness to evaluate long-term sacrifices accurately
- Superior handling of positions where the advantage is "intangible" (activity, coordination, prophylaxis)
- More human-like move selection (often chooses the move a strong GM would play, even when Stockfish prefers a different path to the same evaluation)
When to Use Which Engine:
| Situation | Best Engine |
|---|---|
| Tactical analysis (sharp positions) | Stockfish |
| Endgame verification | Stockfish (with tablebases) |
| Opening novelty evaluation | Both: cross-reference |
| Strategic/positional evaluation | Lc0 |
| Understanding why a move is good | Lc0 (its top choices often match human logic) |
| Blunder-checking your games | Stockfish (faster, more thorough) |
| Studying pawn sacrifices for activity | Lc0 |
The best modern practice is to consult both engines and pay special attention to positions where they disagree. Those disagreements often reveal the most interesting and instructive features of a position.
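A cross-referencing workflow like this is easy to script. The sketch below compares two engines' outputs and flags disagreements worth studying; in real use you would obtain each line over the UCI protocol (the python-chess package is a common tool for that), while here the engine outputs are canned dictionaries so the comparison logic stands alone. The 40-centipawn threshold is an arbitrary assumption, not an established standard.

```python
# Sketch of a two-engine cross-check on one position. The engine
# outputs are invented; in practice they would come from Stockfish
# and Lc0 over UCI.
def cross_check(stockfish, lc0, threshold_cp=40):
    """Flag positions where two engines disagree enough to study."""
    if stockfish["best_move"] != lc0["best_move"]:
        return "different best moves -- study this position"
    if abs(stockfish["score_cp"] - lc0["score_cp"]) > threshold_cp:
        return "same move, different evaluations -- study this position"
    return "engines agree -- evaluation is probably reliable"

# Canned outputs for one hypothetical position:
sf_out  = {"best_move": "h2h4", "score_cp": 35}
lc0_out = {"best_move": "a2a4", "score_cp": 60}
print(cross_check(sf_out, lc0_out))   # -> different best moves -- study this position
```

The interesting output is the middle case: when both engines want the same move but score it differently, one of them is appreciating a feature of the position that the other undervalues.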
2.3 How Modern Grandmasters Use Engines
The relationship between humans and engines has evolved dramatically since the early 2000s. Here is how top Grandmasters work with engines in the 2020s:
Preparation Depth. In critical opening lines, top players prepare 20 to 40 moves deep. This means they have analyzed (with engine assistance) every significant variation in their chosen openings to the point where the resulting middlegame position is one they understand and have practiced. The first "new" move in a top-level game may not appear until move 25 or later.
Multiple Engine Consultation. Serious preparation involves running both Stockfish and Lc0 on the same position and comparing their recommendations. When both engines agree, the evaluation is almost certainly correct. When they disagree, the position is worth studying deeply: it usually contains a subtle feature that one engine handles better than the other.
Opening Novelty Discovery. Engines play millions of self-play games at high speed. Analysts review these games to discover new ideas: moves that no human has played, in positions that looked "settled" for decades. Many modern opening novelties at the Grandmaster level were first found by an engine, then understood and adopted by a human.
Positional Re-evaluation. Engines have overturned long-standing assessments. Positions that were considered "clearly better for White" for 50 years have been shown to be equal. Moves that were considered "weak" or "dubious" in old opening manuals are now mainline theory. The Sveshnikov Sicilian, once considered risky for Black because of the backward d6 pawn and the hole on d5, is now one of Black's most respected defenses: engine analysis showed that Black's dynamic counterplay more than compensates for the structural weaknesses.
Endgame Tablebases. Seven-piece endgame tablebases (databases containing the exact evaluation of every possible position with seven or fewer pieces) have resolved endgame theory in many areas. Positions that were "thought to be drawn" have been proven to be wins with perfect play, and vice versa. This knowledge has filtered into practical play, changing how Grandmasters assess simplification.
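The principle behind a tablebase (solve every position exactly, then simply look answers up) can be demonstrated on a toy game. The Python sketch below builds a complete "tablebase" for a subtraction game (take 1, 2, or 3 stones; taking the last stone wins); real Syzygy tables apply the same exhaustive idea to every chess position with seven or fewer pieces, at vastly greater scale. This is an illustration of the principle only, not chess-specific code.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def value(stones):
    """Exact game-theoretic value for the player to move."""
    if stones == 0:
        return "loss"                   # no move: opponent took the last stone
    moves = [t for t in (1, 2, 3) if t <= stones]
    if any(value(stones - t) == "loss" for t in moves):
        return "win"                    # some move leaves the opponent lost
    return "loss"                       # every move leaves the opponent winning

# The "tablebase": exact values for every position up to 12 stones.
table = {n: value(n) for n in range(13)}
print(table[4], table[7])   # -> loss win (multiples of 4 are lost for the mover)
```

Once the table exists, "analysis" of any covered position is a single lookup with a guaranteed-correct answer, which is exactly why tablebase results can overturn a century of human endgame theory.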
2.4 The "Computer Move" Phenomenon
Watch any modern top-level game, and you will hear commentators say: "That's a computer move." What they mean is: the move looks strange or unintuitive by human standards, but it was almost certainly found and verified with engine assistance during preparation.
Computer moves share several characteristics:
- They often prioritize long-term positional factors over short-term aesthetics
- They may involve quiet retreats when attack seems natural, or sharp sacrifices when defense seems required
- They frequently address threats that are 10 or more moves away
- They are difficult to find at the board without prior engine analysis
The ability to play computer moves is a double-edged sword. A player who memorizes engine lines without understanding them is building on sand: the moment their opponent deviates from the prepared line, they are lost. A player who understands the logic behind the engine's choices adds those patterns to their own vocabulary and becomes genuinely stronger.
PART 3: ENGINES AND YOUR CHESS IDENTITY
3.1 Has Engine Preparation Killed Creativity?
This is one of the most debated questions in modern chess. Here is the honest answer: it depends on how you define creativity.
If creativity means "finding brilliant ideas at the board during a game," then yes: engine preparation has reduced the scope for over-the-board creativity in the opening phase. When both players have analyzed the first 25 moves at home, there are fewer opportunities for unexpected ideas in the opening.
But if creativity means "finding new ideas, exploring uncharted territory, and producing beautiful chess," then the answer is no. Engines have expanded the boundaries of what is possible. Moves that no human would consider are now in the repertoire. Entire openings that were dismissed for decades have been revived with engine-discovered resources. And the middlegame and endgame phases, where preparation runs out and pure skill takes over, remain as creative as ever.
The Codex position: engines are tools, not replacements for human creativity. A painter who uses better brushes does not become less creative. A chess player who uses stronger engines does not become less original. The creativity lies in how you use the tool: which lines you choose to study, which ideas you incorporate into your style, which positions you steer toward because you understand them.
3.2 The Cheating Problem
We must address this directly. Engines are strong enough that a player secretly consulting one during a game has an overwhelming advantage. This is cheating. It is the most serious integrity threat in modern chess.
Anti-cheating measures (signal detection, statistical analysis of move correlation with engine top choices, physical security at tournaments) have improved significantly. Fair play organizations monitor online and over-the-board play. The technology for detecting cheating is sophisticated and continues to improve.
Your responsibility as a chess player is straightforward: never use an engine during a game. Not in casual play, not in online rated games, not in tournaments. Engine assistance during a game is dishonest and corrosive. It harms your opponents, degrades the competition, and, critically, destroys your own development. You cannot learn from games you did not play honestly.
This book assumes you are here because you love chess and want to earn your strength. Trust that process.
3.3 Using Engines Without Losing Your Identity
Here is the most practical question of this chapter: how do you use the most powerful chess tools ever created without becoming dependent on them?
The Botvinnik Rule
Mikhail Botvinnik, the sixth World Champion and a pioneer of computer chess, understood this problem decades before it became urgent. His principle was simple:
If you do not understand why the engine's move is better than your move, you have not learned anything.
When you analyze a game with an engine and it shows a different move than the one you played, your job is not to memorize the engine's move. Your job is to understand the difference. What did the engine see that you missed? What principle does the engine's move follow that your move violated? What pattern can you extract from this disagreement?
If you cannot answer these questions, the engine line is useless to you. Write down the position, come back to it tomorrow with fresh eyes, and try again. Only when you can explain the engine's logic in your own words, without looking at the evaluation bar, have you truly learned from it.
Engine as Teacher vs. Engine as Crutch
Here is the line between productive and destructive engine use:
| Productive (Teacher) | Destructive (Crutch) |
|---|---|
| Analyze your games with an engine after you have analyzed them yourself first | Turn on the engine immediately and passively scroll through its suggestions |
| Study why the engine prefers a different move | Memorize engine lines without understanding them |
| Use engine analysis to identify patterns in your thinking errors | Use engine analysis to feel bad about your play |
| Consult the engine on specific positions you are stuck on | Leave the engine running on screen while you play training games |
| Build your opening repertoire by understanding engine-verified lines | Build your repertoire by copying the engine's first choice in every position |
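The blunder-check row in the "teacher" column above can be scripted. In practice the per-move evaluations come from an engine over UCI (python-chess is the usual tool); in this sketch they are invented numbers so the flagging logic is self-contained, and the 150-centipawn swing threshold is an assumption, not a standard.

```python
def find_blunders(evals, threshold_cp=150):
    """evals: list of (side, move, eval_after_cp) with scores from
    White's point of view; the first entry is a pre-game baseline.
    Returns the moves after which the evaluation swung sharply
    against the side that just moved."""
    blunders = []
    prev = evals[0][2]
    for side, move, after in evals[1:]:
        swing = after - prev              # positive swing favors White
        if side == "white" and swing < -threshold_cp:
            blunders.append(move)
        elif side == "black" and swing > threshold_cp:
            blunders.append(move)
        prev = after
    return blunders

# Invented evaluations for a short game fragment:
game = [
    ("start", "-", 20),
    ("white", "e4",     30),
    ("black", "e5",     25),
    ("white", "Qh5",   -10),
    ("black", "g6??",  240),   # big swing toward White: flagged
    ("white", "Qxe5+", 250),
]
print(find_blunders(game))   # -> ['g6??']
```

The output is a short list of candidate moments to study, which fits the teacher pattern: you still do the analysis of why each flagged move failed yourself.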
Preserving Your Own Thinking Process
The most important habit for any improving player is this: always analyze positions yourself before consulting an engine. Look at the position. Identify the key features. Generate candidate moves. Calculate your chosen line to a conclusion. Write down your evaluation.
Then turn on the engine.
This sequence is critical because it forces your brain to do the work. If you skip the analysis and go straight to the engine, you are outsourcing your thinking. Your pattern recognition does not improve. Your calculation muscle does not strengthen. You become a spectator of chess rather than a participant.
This advice applies at every level. A 1200-rated player and a 2400-rated player both benefit from analyzing before consulting. The difference is the depth and accuracy of the initial analysis, not the process.
3.4 The Future: What Comes After Neural Networks?
The current generation of neural network engines represents a massive leap forward. But it is not the final word.
Several directions are being explored:
- Larger and deeper networks that evaluate positions with even greater subtlety
- Hybrid approaches that combine neural network intuition with classical search (Stockfish's NNUE is already an example)
- Engines that explain their reasoning: current engines give you a move and a score, but not a verbal explanation of why. Future engines may be able to articulate strategic plans in human-readable language
- Training on human games to capture stylistic preferences (some experimental engines can be tuned to play like specific Grandmasters)
- General-purpose AI systems that bring broad reasoning capabilities to chess analysis
The one certainty is that engines will continue to get stronger. The question for you, the human chess player, remains the same as it has always been: how do you use the best available tools to become the best player you can be?
The answer has not changed since Botvinnik's time. Understand the ideas. Do your own thinking. Use the engine as a mirror that reflects your errors, not as a voice that dictates your moves.
PART 4: ANNOTATED GAMES
Game 1: The Game That Changed Everything
AlphaZero vs. Stockfish | Event: DeepMind Internal Match | Year: 2017 | Result: 1-0 | Opening: Queen's Gambit Declined (D37)
This was the game that made Grandmasters sit up in their chairs. AlphaZero's handling of the h-pawn and its willingness to sacrifice material for long-term activity were unlike anything produced by a traditional engine.
Set up your board:
[Event "AlphaZero vs Stockfish Match"]
[White "AlphaZero"]
[Black "Stockfish"]
[Result "1-0"]
[ECO "D37"]
[Date "2017.12.04"]
1.d4 Nf6 2.c4 e6 3.Nf3 d5 4.Nc3 Be7 5.Bf4 O-O 6.e3 Nbd7 7.c5 Nh5
8.Bd3 Nxf4 9.exf4 b6 10.b4 a5 11.a3 c6 12.O-O Qc7 13.g3 Ba6 14.Re1 Bxd3
15.Qxd3 e5
Up to here, the game is a standard Queen's Gambit Declined with the 5.Bf4 variation. Black has played logically: exchanging the light-squared bishop that was blocked by the d5 pawn, and now striking in the center with ...e5. By conventional evaluation, the position is roughly equal.
Now watch what AlphaZero does.
16.fxe5 Nxe5 17.Nxe5 Qxe5 18.b5!
18.b5!: This is not a move a traditional engine would prioritize. White opens lines on the queenside while accepting that Black's central position is solid. The point is strategic: White wants to fix the queenside pawn structure and create long-term pressure against Black's c6 pawn.
18...Qd6 19.bxc6 Qxc6 20.Rab1 Rab8 21.h4!
21.h4!: Here is the move that stunned the chess world. AlphaZero pushes its h-pawn forward, seemingly weakening its own kingside. But the intent is clear: h4-h5 will open the h-file and create attacking chances on the kingside while Black's pieces are still trying to solve the queenside problems.
Traditional Stockfish (the version used in this match) rejected h4 because the immediate evaluation showed no concrete benefit. The compensation was too distant, too intangible, for a brute-force evaluation to appreciate. AlphaZero's neural network saw it from the beginning.
21...Rfe8 22.Rxe8+ Rxe8 23.h5 h6 24.Rb3
White's rook swings to the third rank, ready to join the kingside attack or defend on the queenside. Every White piece serves multiple purposes. Black's extra pawn (if one materializes) is meaningless against this coordination.
The game continued with AlphaZero maintaining relentless pressure. Stockfish's pieces were never able to coordinate a counterattack. White converted the positional advantage into a win.
What this game teaches:
- The h-pawn push as a strategic weapon. AlphaZero showed that advancing the h-pawn in the middlegame, even at the cost of weakening the king, is a legitimate and powerful plan when the opponent cannot exploit the weakness.
- Evaluation is not just material. White's advantage in this game was never measured in pawns. It was measured in piece activity, file control, and long-term pressure. Neural networks evaluate these factors more accurately than hand-crafted evaluation functions.
- Patience. AlphaZero did not rush. It improved its position move by move, knowing that the compensation would grow. This patience, the willingness to maintain pressure for 30+ moves without forcing a resolution, was the hallmark of AlphaZero's style.
Game 2: Positional Mastery in the London System
AlphaZero vs. Stockfish | Event: DeepMind Internal Match | Year: 2017 | Result: 1-0 | Opening: London System (D02)
AlphaZero played the London System, the same opening recommended in Volume I of this Codex, and demonstrated what the system looks like when executed with perfect positional understanding.
Set up your board:
[Event "AlphaZero vs Stockfish Match"]
[White "AlphaZero"]
[Black "Stockfish"]
[Result "1-0"]
[ECO "D02"]
[Date "2017.12.04"]
1.d4 Nf6 2.Nf3 d5 3.Bf4 e6 4.e3 Bd6 5.Nbd2 O-O 6.Bg3 c5 7.c3 Nc6
8.Bd3 b6 9.e4 dxe4 10.Nxe4 Be7 11.Qe2 Bb7 12.O-O Rc8 13.Rad1 Qc7
14.Rfe1 Rfd8 15.Bc2 Nf8
The London System opening has led to a typical middlegame structure. But notice AlphaZero's piece placement: every piece is on an active square. The bishops control key diagonals. The rooks are centralized. The queen supports multiple plans.
16.Nfg5! h6 17.Bxd6 Bxd6 18.Nxf7!
18.Nxf7!: A sacrifice that Stockfish's evaluation initially dismissed. White gives up a knight for a pawn and exposes Black's king. The compensation? Two key factors: (a) Black's king is permanently weakened, and (b) White's remaining pieces coordinate perfectly against the holes in Black's position.
18...Kxf7 19.Ng5+ Kg8 20.Qh5
White has a crushing attack. The queen and knight combine to create threats against h7 and f7 that Black cannot defend without making significant concessions. The game continued with AlphaZero converting the attack smoothly.
What this game teaches:
-
The London System is deep. If you have been playing the London since Volume I, this game shows you what the system is capable of at the highest level. The same structure you learned as a beginner contains resources that challenge the strongest engine in the world.
-
Piece harmony. Before the sacrifice, every one of AlphaZero's pieces was on its optimal square. The sacrifice worked because the remaining pieces were positioned to exploit the resulting weaknesses. This is the principle of coordination: set up your pieces first, sacrifice second.
-
Knight sacrifices on f7. This is an ancient tactical theme (you studied it in Volume II, Chapter 11). AlphaZero's version is sophisticated. the sacrifice is positional as much as tactical. but the underlying pattern is the same one you already know.
Game 3: Neural Network vs. Classical Engine
Leela Chess Zero vs. Stockfish | Event: TCEC Season 21 Superfinal | Year: 2021 | Result: 1-0 | Opening: Ruy Lopez, Closed (C92)
The TCEC (Top Chess Engine Championship) is the premier engine competition. This game from a superfinal illustrates how Lc0's neural network style differs from Stockfish's classical approach.
Set up your board:
[Event "TCEC Season 21 Superfinal"]
[White "Leela Chess Zero"]
[Black "Stockfish"]
[Result "1-0"]
[ECO "C92"]
[Date "2021"]
1.e4 e5 2.Nf3 Nc6 3.Bb5 a6 4.Ba4 Nf6 5.O-O Be7 6.Re1 b5 7.Bb3 d6
8.c3 O-O 9.h3 Bb7 10.d4 Re8 11.Nbd2 Bf8 12.a4 h6 13.Bc2 exd4 14.cxd4 Nb4
15.Bb1 c5 16.d5 Nd7 17.Ra3 c4 18.Nd4 Nc5 19.f3 Qb6 20.Kh1 Nba6
The Ruy Lopez Closed is one of the oldest and most deeply analyzed openings in chess. Both engines know the theory. The game enters a complex middlegame where Lc0 demonstrates its characteristic style.
21.N2f1 Nc7 22.Be3 Qd8 23.Ng3 Nba6 24.Qd2 Nb4 25.Bb1 Bc8 26.Nf5 Bxf5
27.exf5 Qb6 28.Ra1 Nd3 29.Bxd3 cxd3 30.Qxd3 Nd7
The positional grind. Lc0 has achieved a small but lasting advantage: the f5 pawn restricts Black's kingside, and White's pieces are slightly more active. A traditional engine might evaluate this as "+0.30": barely anything. But Lc0's neural network understands that this small advantage can grow.
Over the next 30 moves, Lc0 improved its position incrementally: maneuvering pieces to optimal squares, preventing Black's counterplay, and waiting for a moment to convert. Stockfish defended accurately but could not equalize completely.
What this game teaches:
-
Small advantages are real at the engine level. A +0.30 evaluation might look insignificant. But when both sides play with near-perfect accuracy, small advantages determine the outcome. This is the same principle you learned in Chapter 48 (Converting Minimal Advantages). engines simply execute it at a higher level.
-
Maneuver over tactics. Lc0 won this game without a single brilliant sacrifice. It won through superior piece placement, prophylaxis, and patience. This is the neural network style: less fireworks, more sustained pressure.
-
Style matters even for engines. Lc0 and Stockfish are roughly equal in strength, but they reach their results through different means. Studying both engines' approaches gives you a wider vocabulary of ideas.
Game 4: Engine-Influenced Preparation at the Highest Level
Magnus Carlsen vs. Fabiano Caruana | Event: World Chess Championship, Game 1 | Site: London | Year: 2018 | Result: ½-½ | Opening: Sicilian Defense, Rossolimo Variation (B31)
The 2018 World Championship match was the most engine-prepared match in chess history up to that point. Game 1 set the tone: deep preparation, computer-verified novelties, and a level of opening accuracy that would have been impossible without engine assistance.
Set up your board:
[Event "World Chess Championship 2018"]
[White "Carlsen, Magnus"]
[Black "Caruana, Fabiano"]
[Result "1/2-1/2"]
[ECO "B31"]
[Date "2018.11.09"]
1.e4 c5 2.Nf3 Nc6 3.Bb5 g6 4.Bxc6 dxc6 5.d3 Bg7 6.O-O Qc7 7.Re1 e5
8.a3 Nf6 9.b4 O-O 10.Nbd2 Bg4 11.h3 Bxf3 12.Nxf3 cxb4 13.axb4 a5!?
13...a5!?: Caruana's preparation. This aggressive pawn push was almost certainly verified with engines before the match. Black challenges White's queenside expansion immediately, sacrificing the possibility of ...a5 as a later resource in exchange for immediate counterplay.
14.bxa5 Rxa5 15.Rxa5 Qxa5 16.Bd2 Qa2 17.Qb1 Qa6 18.Bc3 Nd7 19.Qa1 Qb6
20.Qa4 Qc7 21.Rb1 Rb8 22.Qc4 b5 23.Qe2
The game reached a roughly equal middlegame. Both players had prepared deeply, and neither could achieve an advantage. The position simplified into an endgame that was drawn with accurate play.
The game was drawn on move 115.
What this game teaches:
- The depth of modern preparation. Both Carlsen and Caruana had analyzed this position at home with engine assistance. The first "unprepared" move may not have come until move 20 or later. This level of preparation is now standard at the world championship level.
- Engine preparation does not guarantee a win. Even with the deepest preparation in history, neither player could break through. Engines can tell you the best moves, but they cannot play the moves for you. The human element (nerves, time pressure, psychology) still matters.
- The Rossolimo as an engine-age weapon. The Rossolimo Variation (3.Bb5 against the Sicilian) has surged in popularity partly because engine analysis has shown it to be more dangerous for Black than previously thought. This is an example of positional re-evaluation driven by engine work.
Game 5: The Youngest Champion: A Historic Blunder
D. Gukesh vs. Ding Liren Event: World Chess Championship, Game 14 | Site: Singapore | Year: 2024 | Result: 1-0 Opening: Queen's Gambit Declined (D37)
This was the decisive final game of the 2024 World Championship match. Gukesh, just 18 years old, became the youngest World Champion in history. The game turned on a single move in the endgame, a reminder that even at the highest level, under the most intense pressure, human errors decide chess games, not engine preparation.
Set up your board:
[Event "World Chess Championship 2024"]
[White "Gukesh, D."]
[Black "Ding Liren"]
[Result "1-0"]
[ECO "D37"]
[Date "2024.12.12"]
1.d4 Nf6 2.c4 e6 3.Nf3 d5 4.Nc3 Be7 5.Bf4 O-O 6.e3 Bd7 7.a3 dxc4
8.Bxc4 Bc6 9.O-O a5 10.Qe2 Nbd7
The opening is a Queen's Gambit Declined, the same opening family as AlphaZero's famous Game 1 against Stockfish. Both players had prepared thoroughly. The early game was precise and careful.
The game entered a complex middlegame, then simplified into a rook-and-minor-piece endgame. By conventional evaluation, the position was heading toward a draw. Ding Liren, the defending champion, needed only to hold this game to force a tiebreak.
The critical position arose on move 55, with Ding defending as Black in an endgame with rooks and bishops still on the board:
55...Rf2??
55...Rf2??: A shocking blunder. In a position that was objectively drawn, Ding offered a trade of rooks that allowed White to liquidate toward a winning king-and-pawn endgame. The engine evaluation swung from 0.00 to over +5.00 in a single move.
After this error, Gukesh converted precisely. The position was technically winning, and the 18-year-old demonstrated the endgame technique required to finish the game.
Gukesh won. He broke down in tears at the board. He was the youngest World Champion in the history of the game.
What this game teaches:
- Engines cannot play the moves for you. Ding Liren is one of the strongest players in history. He had access to the same engines, the same preparation tools, the same databases as his opponent. But at the board, in the moment, under immense pressure, a human made a human error. This is the irreducible element of chess that no amount of engine preparation can eliminate.
- Endgame technique remains essential. Gukesh's conversion after the blunder was not trivial. He needed real endgame knowledge, the kind taught in Volumes I through IV of this Codex, to finish the job. Engines found the win instantly, but Gukesh had to find it at the board, with the clock ticking, in the most important game of his life.
- The emotional weight of chess. This game was decided by a single move in a drawn position. The psychological pressure of a World Championship match, with a title, a legacy, and years of work on the line, is a factor that no engine can simulate. This is why we train not just our calculation and our knowledge, but our resilience.
EXERCISES
⚡ ADHD Quick Set
If you only do five exercises today, do these: 49.1, 49.3, 49.7, 49.10, 49.13. They cover the core concepts of the chapter in 20 minutes.
Warmup Exercises (★★–★★★)
Exercise 49.1 ★★ Understanding Centipawn Scores
A chess engine evaluates a position as +1.50. Which of the following best describes what this means?
(a) White is winning and Black should resign (b) White has an advantage roughly equivalent to one and a half extra pawns (c) White is guaranteed to win with perfect play (d) White has a forced checkmate
Answer: (b). A score of +150 centipawns (displayed as +1.50 in pawn units) means the engine evaluates White's advantage as approximately 1.5 pawns. This is a significant advantage: at the Grandmaster level it is often enough to win, but it is not a forced win. Black can still fight for a draw with accurate defense.
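The conversion above is simple arithmetic, and many analysis interfaces also translate a score into an expected outcome. Here is a minimal Python sketch: the function names are ours, and the logistic mapping with its 0.004 scale constant is an illustrative assumption, not a value taken from any particular engine.

```python
import math

def cp_to_pawns(centipawns: int) -> float:
    """Convert a centipawn score to 'pawn units' (+150 -> +1.50)."""
    return centipawns / 100.0

def cp_to_win_probability(centipawns: int, scale: float = 0.004) -> float:
    """Map a centipawn score to a rough win probability via a logistic
    curve. The scale constant is an assumed, illustrative value."""
    return 1.0 / (1.0 + math.exp(-scale * centipawns))

print(cp_to_pawns(150))                    # 1.5 pawns, as in the exercise
print(round(cp_to_win_probability(0), 2))  # 0.5: an equal position
```

The exact shape of the score-to-probability curve varies between tools; what matters is the idea that +1.50 is a meaningful but not decisive edge.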
Exercise 49.2 ★★ Engine Evaluation Shifts
You play a move and your engine's evaluation changes from +0.30 to -0.80. What does this tell you?
Answer: You made a serious mistake. The evaluation shifted by over 1.0 pawn in your opponent's favor. Your move was not the engine's recommendation. Go back and compare your move with the engine's top choice. Identify what you missed: was it a tactical threat, a positional factor, or an endgame consideration?
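This kind of check is easy to automate when reviewing your games. A sketch, assuming evaluations in pawns from White's point of view; the 0.5/1.0/2.0-pawn cutoffs are common annotation conventions that we assume here, not a fixed standard.

```python
def classify_move(eval_before: float, eval_after: float, white_to_move: bool) -> str:
    """Classify a move by how much the evaluation dropped for the side
    that moved. Thresholds are assumed annotation conventions."""
    drop = (eval_before - eval_after) if white_to_move else (eval_after - eval_before)
    if drop >= 2.0:
        return "blunder (??)"
    if drop >= 1.0:
        return "mistake (?)"
    if drop >= 0.5:
        return "inaccuracy (?!)"
    return "ok"

# The shift from the exercise: +0.30 before your move, -0.80 after it.
print(classify_move(0.30, -0.80, white_to_move=True))  # mistake (?)
```

A swing of 1.10 pawns lands in the "mistake" band, matching the verdict above.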
Exercise 49.3 ★★★ Finding the "Computer Move"
Set up your board:
White: Kg1, Qd1, Ra1, Re1, Bc1, Bc4, Nf3, pawns a2, b2, d4, f2, g2, h2 Black: Kg8, Qd8, Ra8, Rf8, Bc8, Be7, Nc6, pawns a7, b7, d6, f7, g7, h7
White to play. A human would naturally consider developing moves like Bg5 or Re3. But the engine's top choice is a quiet move that addresses a long-term problem. Can you find it?
Answer: a3!: This quiet prophylactic move takes the b4 square away from Black's knight, preventing ...Nb4 with ideas of invading on d3 or harassing White's queenside. It also prepares b4, expanding on the queenside. This is exactly the kind of "computer move" that humans miss: it does not create any immediate threat, but it prevents the opponent's best plan while preparing one of your own.
Exercise 49.4 ★★ Engine Disagrees With the Master
In a famous game from the 1970s, a Grandmaster played Be2 in a position where the engine strongly prefers Bd3. The engine evaluates Be2 as +0.10 and Bd3 as +0.65. Does this mean the Grandmaster made a mistake?
Answer: Not necessarily. The Grandmaster may have preferred the resulting position after Be2: perhaps it led to a type of middlegame the GM was comfortable with. The engine's evaluation reflects objective accuracy, but practical considerations (familiarity with the resulting structure, opponent's strengths and weaknesses, time on the clock) also matter. However, the half-pawn difference is significant. If you face this position in your own games, understanding why Bd3 is better will make you a stronger player.
Exercise 49.5 ★★★ Stockfish vs. Lc0 Disagreement
Set up your board:
White: Kg1, Qc2, Rd1, Re1, Bf4, Nd5, pawns a2, b2, c3, f2, g2, h2 Black: Kg8, Qd8, Ra8, Re8, Bg7, Nf6, pawns a7, b7, d6, f7, g7, h7
In this position, Stockfish recommends Nxf6+ Bxf6 and then Bg5, exchanging pieces and simplifying toward a slightly better endgame. Lc0 recommends Bg5, maintaining the tension and keeping all pieces on the board.
Which engine's approach do you prefer, and why? (There is no single correct answer; this is an evaluation exercise.)
Answer: Both approaches are valid. Stockfish's line simplifies the position and aims to exploit the resulting structural advantage. Lc0's approach keeps the complexity high, trusting that its piece coordination will generate advantages over time. Your choice should depend on your style: if you are a technical player who excels in endgames, Stockfish's approach may suit you. If you prefer dynamic middlegames with piece play, Lc0's approach is better. The important skill is understanding why each engine makes its choice.
Exercise 49.6 ★★★ Evaluation Context
A position is evaluated at +3.20 by Stockfish. Does this mean the game is over?
Answer: At the Grandmaster level, +3.20 is almost always decisive with correct play: it represents roughly a piece advantage. But "almost always" is not "always." Fortress positions, material imbalances (e.g., two pieces vs. a rook), and practical drawing chances exist even at high evaluations. At the club level (below 2000), games with +3.00 advantages are thrown away regularly. The evaluation tells you the objective status. Converting that advantage still requires skill.
Exercise 49.7 ★★★ AlphaZero-Style Pawn Sacrifice
Set up your board:
White: Kg1, Qd2, Ra1, Rf1, Bc1, Bd3, Nf3, Nc3, pawns a2, b2, c4, d4, e3, f2, g2, h2 Black: Kg8, Qd8, Ra8, Rf8, Bc8, Be7, Nf6, Nc6, pawns a7, b7, c5, d5, e6, f7, g7, h7
White to play. Find the AlphaZero-style move that sacrifices a pawn for long-term activity.
Answer: b4!: White attacks the c5 pawn from the wing, offering the b-pawn as a lever. After ...cxb4, White continues a3, opening queenside files for the rooks, while the untouched c4/d4 pawn duo keeps its grip on the center and prepares the c5 and d5 breaks. The pawn cannot be recovered immediately, but White's open lines and mobile center more than compensate. This is the kind of positional pawn sacrifice that neural network engines evaluate accurately, and that traditional engines used to undervalue.
Exercise 49.8 ★★★ Interpreting Multiple Engine Lines
Your engine shows three candidate moves with these evaluations:
| Move | Evaluation | Depth |
|---|---|---|
| Nf5 | +0.45 | 35 |
| Bd3 | +0.42 | 35 |
| Re3 | +0.38 | 35 |
All three moves are within 0.07 of each other. What should you do?
Answer: When multiple moves are evaluated almost identically, the position is flexible: there are several good plans. In this situation, choose the move that (a) you understand best, (b) leads to a position type you are comfortable with, and (c) is hardest for your opponent to meet. A 0.07-pawn (seven-centipawn) difference is meaningless in practical play. Do not stress about finding "the" engine move when three moves are functionally equivalent.
Intermediate Exercises (★★★)
Exercise 49.9 ★★★ The Prophylactic Computer Move
Set up your board:
White: Ke1, Qd1, Ra1, Rh1, Bc1, Bf1, Nb1, Ng1, pawns a2, b2, c2, d4, e4, f2, g2, h2 Black: Ke8, Qd8, Ra8, Rh8, Bc8, Bf8, Nb8, Nf6, pawns a7, b7, c5, d6, e5, f7, g7, h7
White has not yet developed any pieces, and it is White's move. The engine's top recommendation is not a developing move. What is it?
Answer: d5!: Closing the center immediately before developing. This freezes Black's central pawn structure and dictates the character of the middlegame. White can then develop with a clear plan (f4 break, kingside expansion). This is a "computer move" because most humans would instinctively develop a piece first, but the engine recognizes that fixing the pawn structure is more important than any single developing move.
Exercise 49.10 ★★★ King Walk in the Middlegame
Set up your board:
White: Kf1, Qe2, Rd1, Re1, Bc1, Bg2, Nf3, pawns a2, b2, c3, d4, f2, g3, h2 Black: Kg8, Qc7, Ra8, Re8, Bb7, Bg7, Nd7, pawns a6, b6, c5, d5, e6, f7, g6, h7
White has already castled kingside and then moved the king to f1. The engine recommends walking the king to the center via Kf1-e1-d2 (once the e1-rook steps aside). Why is this correct?
Answer: The position is closed in the center (locked pawn chain). There are no open files pointing at the White king. With the center stable, the king is actually safer in the center than on the kingside: and from d2 (and eventually c2), the king connects the rooks and supports the queenside pawns. This is an AlphaZero-style king walk: recognizing that in closed positions, king safety is about pawn structure, not about hiding behind castled pawns.
Exercise 49.11 ★★★ When the Engine Says 0.00 But You Feel Lost
You are playing Black. The position feels terrible: your pieces are passive, your pawns are weak, and your opponent seems to have all the initiative. You check with an engine, and it evaluates the position as 0.00.
What does this teach you?
Answer: Your feeling about the position and the objective evaluation are different things. The engine says the position is equal because with perfect defense, Black can hold. But "perfect defense" may require finding a series of only-moves that a human would struggle to find in practice. This is a position where you should study the engine's defensive plan carefully: it will reveal resources you did not see. It also teaches humility: many positions that "feel" lost are objectively holdable. And conversely, some positions that "feel" comfortable are objectively worse than they appear.
Exercise 49.12 ★★★ Piece Harmony Exercise
Set up your board:
White: Kg1, Qd2, Ra1, Rd1, Bd3, Be3, Nc3, pawns a2, b2, c4, d4, f2, g2, h2 Black: Kg8, Qe7, Ra8, Rf8, Bc8, Bg7, Nf6, pawns a7, b7, c5, d6, e5, f7, g7, h7
White has excellent piece coordination: bishops on active diagonals, knights centralized, rooks on useful files, queen supporting multiple plans.
Your task: identify the ONE piece that is not yet on its optimal square, and suggest where it should go.
Answer: The knight on c3 is well-placed but could be even better on d5, where it would be an outpost supported by the c4 pawn. After Nc3-d5, the knight controls critical squares (b6, c7, e7, f6) and cannot easily be challenged. This is AlphaZero-style thinking: even when a piece is on a good square, look for a great square.
Exercise 49.13 ★★★★ Engine-Style Exchange Sacrifice
Set up your board:
White: Kg1, Qe2, Ra1, Rf1, Bc1, Bg2, Nf3, Nc3, pawns a2, b2, c4, d4, e4, f2, g3, h2 Black: Kg8, Qd8, Ra8, Rf8, Bb7, Be7, Nf6, Nc6, pawns a7, b7, c5, d6, e6, f7, g7, h7
White to play. The engine recommends an exchange sacrifice. Find it and explain the compensation.
Answer: Nd5!: after ...exd5 exd5, the e-file opens and White follows up with Re1 and the thematic exchange sacrifice Rxe7!?, giving up rook for bishop. White obtains: (a) a strong cramping pawn on d5 that splits Black's position, (b) lasting pressure on the e-file and the dark squares, and (c) the bishop pair. The compensation is long-term and positional: exactly the type of sacrifice neural network engines evaluate accurately. A traditional engine might reject this sacrifice because the material deficit shows immediately. But the positional compensation is real and lasting.
Exercise 49.14 ★★★★ Finding the Human Move vs. the Engine Move
Set up your board:
White: Kg1, Qd1, Ra1, Re1, Bc4, Be3, Nf3, pawns a2, b2, d4, f2, g2, h2 Black: Kg8, Qd8, Ra8, Rf8, Bc8, Bg7, Nf6, Nc6, pawns a7, b7, c7, d6, e5, f7, g7, h7
White to play. The "human move" is d5, gaining space. The engine's top choice is different. What is the engine move, and why does the engine prefer it?
Answer: The engine prefers dxe5 dxe5 Qxd8 Rxd8 Bc5!: exchanging queens and seizing the a7–g1 diagonal before Black can coordinate. After Bc5, White's bishops are extremely active, and the endgame favors White because of superior piece activity. The "human move" d5 gains space but closes the position, reducing the bishops' scope. The engine understands that opening the position with active pieces is stronger than gaining space with a closed center. This is a critical lesson: space is only valuable when your pieces can use it.
Exercise 49.15 ★★★★★ Master-Level: The Position Where Engines Disagree
Set up your board:
White: Kg1, Qe2, Ra1, Rd1, Bg5, Bd3, Nf3, pawns a3, b2, c3, d4, e4, f2, g2, h3 Black: Kg8, Qc7, Ra8, Re8, Bc8, Be7, Nd7, Nf8, pawns a6, b5, c5, d6, e6, f7, g6, h7
In this complex middlegame, Stockfish evaluates the position as +0.15 (roughly equal) and recommends Nd2, repositioning the knight. Lc0 evaluates the position as +0.55 and recommends d5!, a pawn break that sacrifices the d-pawn for activity.
Analyze both moves. Which do you prefer? What does the disagreement tell you about the position?
Answer: The disagreement reveals that this is a position where long-term dynamic factors are difficult to evaluate precisely. Stockfish's Nd2 is safe and sound: it repositions the knight without committing to a pawn structure. Lc0's d5! is ambitious: after ...exd5 exd5, White's pieces gain activity (the bishop pair shines in an open position), but the d5 pawn may become a target.
The disagreement suggests that d5 leads to a position with significant imbalances: open lines for White's bishops versus a potentially weak d-pawn. Lc0's neural network evaluates these imbalances in White's favor because it values piece activity highly. Stockfish's traditional evaluation is more cautious about the structural concession.
At the Grandmaster level, both moves are playable. Your choice should reflect your style: aggressive players will prefer d5; solid, technical players will prefer Nd2. The key insight is that disagreement between engines means the position is rich with possibilities, and those are the positions worth studying most deeply.
Exercises 49.16–49.60: Companion PGN
The remaining 45 exercises are available in the companion PGN file (Ch49_Exercises.pgn). Each exercise includes the position, the engine's recommendation, and a full analysis.
Exercise Distribution:
| Range | Difficulty | Theme | Count |
|---|---|---|---|
| 49.16–49.20 | ★★★ | Interpreting engine evaluations in unbalanced positions | 5 |
| 49.21–49.25 | ★★★ | Finding the computer move in standard middlegames | 5 |
| 49.26–49.30 | ★★★★ | AlphaZero-style pawn sacrifices for activity | 5 |
| 49.31–49.35 | ★★★★ | Exchange sacrifices for positional compensation | 5 |
| 49.36–49.40 | ★★★★ | Positions where the engine king walk is correct | 5 |
| 49.41–49.45 | ★★★★ | Comparing Stockfish and Lc0 candidate moves | 5 |
| 49.46–49.50 | ★★★★ | Prophylactic computer moves in closed positions | 5 |
| 49.51–49.55 | ★★★★★ | Converting engine advantages in practical play | 5 |
| 49.56–49.60 | ★★★★★ | Master-level positions where engines disagree | 5 |
Total: 60 exercises (8 warmup ★★–★★★ | 12 intermediate ★★★ | 25 expert ★★★★ | 15 master ★★★★★)
Key Takeaways
- Neural network engines changed chess by proving that activity, coordination, and prophylaxis are worth more than static material count. AlphaZero did not discover new principles; it demonstrated old principles with unprecedented clarity and consistency.
- Stockfish and Lc0 have different strengths. Use Stockfish for tactical verification and endgame analysis. Use Lc0 for strategic evaluation and understanding positional sacrifices. Use both together for serious preparation, and pay special attention to positions where they disagree.
- The Botvinnik Rule is your shield against engine dependency. If you do not understand why the engine's move is better, you have not learned anything. Always analyze yourself first. Then compare. Then understand.
- Engines have expanded chess, not shrunk it. Openings once considered dubious are now mainline. Sacrifices once considered reckless are now verified. The territory of chess is larger than ever; engines helped map it, but you still have to explore it yourself.
- Integrity is not optional. Never use an engine during a game. Your development depends on honest play. The goal is not to become an engine; it is to become the strongest version of yourself.
Practice Assignment
This week, do the following:
- Play three serious games (online rated or over the board) without any engine assistance.
- After each game, analyze the game yourself first. Write down your own notes: where did you feel uncertain? Where did you think you made mistakes? Where were you proud of your play?
- Then, and only then, run the game through Stockfish and Lc0. For every move where the engine disagrees with your choice by more than 0.30, stop and analyze:
  - What did the engine see that you missed?
  - What principle does the engine's move follow?
  - Can you express the logic in your own words?
- Identify one "computer-style" move from the engine analysis that you would never have considered at the board. Add it to your personal pattern library. The next time a similar structure appears, you will have one more idea in your arsenal.
- Optional: Set up a position from one of the annotated games in this chapter and try to find AlphaZero's move before reading the annotation. How many can you find?
-
Optional: Set up a position from one of the annotated games in this chapter and try to find AlphaZero's move before reading the annotation. How many can you find?
⭐ Progress Check
After completing this chapter and the practice assignment, you should be able to:
- Explain the difference between alpha-beta search and Monte Carlo Tree Search in your own words
- Interpret engine evaluations in centipawns and understand their practical significance
- Identify "computer moves" in a position: quiet prophylactic moves that address long-term problems
- Use engine analysis productively without becoming dependent on it
- Recognize positions where Stockfish and Lc0 are likely to disagree, and understand why
- Find AlphaZero-style pawn and exchange sacrifices for positional compensation
- Articulate why an engine prefers one move over another (the Botvinnik Rule)
If you can check all of these boxes, you are ready for Chapter 50.
If some of these feel uncertain, go back to the relevant section and work through it again. There is no rush. This material is dense, and it rewards repeated study.
PART 5: ALPHAZERO'S REVOLUTIONARY IDEAS
5.1 Why AlphaZero Matters
You have already seen AlphaZero's games in Part 4. Now we go deeper. We are not just replaying moves. We are extracting the principles behind those moves and turning them into tools you can use in your own games.
AlphaZero played only 100 published games against Stockfish. That is a tiny sample. But those games contained ideas so striking, so alien to conventional engine play, that they reshaped how an entire generation thinks about chess. The reason is simple: AlphaZero did not learn chess from humans. It learned chess from scratch, playing millions of games against itself with zero human input. The ideas it found were not inherited from any tradition. They were discovered independently, from pure pattern recognition and self-play.
Some of those ideas confirmed what the best human players had always believed. Others challenged assumptions that had been treated as gospel for a century. Let us examine the most important ones.
5.2 The King Walk: Safety Through Activity
The most startling visual in AlphaZero's games was the king walk. In dozens of games, AlphaZero marched its king out of its castled position and into the center (or even further) during the middlegame. This violated one of the oldest rules in chess instruction: keep your king safe behind castled pawns.
But AlphaZero understood something subtle. King safety is not about geography. It is about pawn structure, piece activity, and whether the opponent can open lines toward the king. In positions where the center is locked, no files are open toward the king's destination, and the opponent lacks the pieces to create an attack, the king is perfectly safe in the center. More than safe: it is useful. A centralized king connects the rooks, supports pawns, and frees pieces from defensive duties.
Consider this position from a typical AlphaZero-style structure:
Set up your board:
White: Ke1, Qd1, Ra1, Rh1, Bd2, Bd3, Nc3, Nf3, pawns a2, b2, d4, e3, f2, g2, h2 Black: Kg8, Qd8, Ra8, Rf8, Bc8, Nc6, Nf6, pawns a7, b7, c5, d5, e6, f7, g7, h7
The center is locked. Black's pieces are aimed at the queenside. There are no open files in the center or on the kingside. In this type of position, AlphaZero would play Ke2, later sliding the king to c2 via d1 once the queen moves, or even walking it to g1 by hand (Ke2-f1-g1) instead of castling. The king walks to wherever it is most useful. Castling kingside would tuck the king into a corner where it contributes nothing. Walking the king to c2 connects the rooks while keeping the king perfectly safe behind the locked center.
The lesson for your games: Before you castle automatically, ask yourself three questions. (1) Is the center locked or stable? (2) Are there open files pointing at the castled position? (3) Would my king be more useful in the center? If the center is closed and no attack is coming, consider keeping the king flexible. You do not need to march it to c2 every game. Simply delaying castling by one or two moves, developing with the king in the center, can give you options that disappear once you commit.
5.3 Exchange Sacrifices for Long-Term Pressure
AlphaZero sacrificed the exchange (a rook for a minor piece) with remarkable frequency. Traditional engines rejected most exchange sacrifices because the material deficit was too large for their evaluation functions to handle. AlphaZero, with its neural network evaluation, understood that positional compensation could fully offset the material loss.
The typical AlphaZero exchange sacrifice had three ingredients: (1) a powerful minor piece, usually a knight on a central outpost; (2) long-term pressure against a structural weakness, often an isolated or backward pawn; and (3) denial of counterplay, ensuring the opponent could not use the extra rook actively.
Set up your board:
White: Kg1, Qc2, Ra1, Re1, Nc5, Nf3, pawns a3, b2, c3, d4, f2, g2, h2 Black: Kg8, Qc7, Ra8, Rf8, Bb7, Nc6, pawns a6, b5, d5, e6, f7, g7, h7
White's knight on c5 is a monster. It attacks a6, e6, and b7 while sitting on a square that Black cannot challenge without serious concessions. Now imagine White plays Rxe6!?. After ...fxe6, White has given up the exchange for a pawn, but the compensation is enormous: Black's kingside is loosened, the e6 pawn is a chronic weakness that the c5 knight already attacks, the knight itself is untouchable, and Black's extra rook has no open files to exploit.
This is the essence of the AlphaZero exchange sacrifice. You give up material to create a permanent structural advantage that the opponent cannot repair. The extra rook sits on the board, technically worth more than a knight, but practically unable to do anything useful.
The lesson for your games: When you have a dominant minor piece on a central outpost, look for exchange sacrifices that destroy the opponent's pawn structure. The key question is not "am I getting enough material back?" but rather "can my opponent use the extra rook?" If the answer is no, the sacrifice is likely sound.
5.4 Positional Pawn Sacrifices That Humans Dismissed
Before AlphaZero, Grandmasters understood positional pawn sacrifices. Petrosian, Karpov, and Kramnik all sacrificed pawns for long-term positional compensation. But AlphaZero took this to another level. It sacrificed pawns in positions where the compensation was so abstract, so long-term, that most humans could not see the point.
The typical AlphaZero pawn sacrifice had one goal: piece activity. Specifically, it aimed to accelerate development, open lines for pieces, or create weaknesses in the opponent's camp. The pawn itself was unimportant. What mattered was the position that resulted after the opponent captured.
Set up your board:
White: Ke1, Qd1, Ra1, Rh1, Bc1, Bf1, Nc3, Nf3, pawns a2, b2, c2, d4, e5, f2, g2, h2 Black: Ke8, Qd8, Ra8, Rh8, Bc8, Bf8, Nb8, Nf6, pawns a7, b7, c5, d5, e6, f7, g7, h7
This is a French Defense structure. AlphaZero frequently played gambits in these positions, offering the d4 pawn with moves like Be3 followed by Qd2, not caring if Black captures on d4. After ...cxd4, White can recapture at leisure with Nxd4, or even allow ...dxc3 and answer bxc3; either way, White's pieces flood into the center. The e5 pawn cramps Black's position. White's bishops find open diagonals. The "sacrificed" d-pawn has been transformed into piece activity that Black cannot match.
The critical insight is this: AlphaZero evaluated piece activity as nearly equivalent to material. A position where every piece is on its best square, with open lines and pressure against the enemy position, is worth a pawn or sometimes more. Traditional engines counted the pawn deficit and penalized the sacrifice. AlphaZero's neural network saw the resulting activity and rewarded it.
The lesson for your games: When you are considering a pawn sacrifice, do not focus on whether you can win the pawn back. Focus on the resulting position. Ask: "After my opponent takes, are my pieces better placed? Do I have open lines? Does my opponent have weaknesses?" If the answer to all three is yes, the sacrifice is probably sound, even if you never recover the pawn.
5.5 What Human Players Can Learn from AlphaZero
AlphaZero's style was not genuinely "new." The principles it demonstrated (activity over material, centralization, prophylaxis, piece harmony) had been taught by Nimzowitsch, Capablanca, and Petrosian for decades. What AlphaZero did was apply those principles with perfect consistency, in every position, without the psychological baggage that humans carry.
Here is what you can take from AlphaZero's example:
- Trust activity over material. If your pieces are active and coordinated, you are doing well, even if you are a pawn down. Do not grab pawns that pull your pieces to the rim. Keep your pieces in the game.
- Think about pawn structure before you castle. AlphaZero's king walks were not reckless. They were responses to specific pawn structures. Learn to read the structure and let it guide your king placement.
- Look for exchange sacrifices. When you have a dominant knight or bishop, consider giving up a rook for it. The "exchange" is only worth about 1.5 pawns in practice. If your compensation exceeds that, the sacrifice is sound.
- Play prophylactically. Before executing your own plan, ask: "What does my opponent want to do? Can I prevent it with a quiet move?" AlphaZero's prophylactic moves were often more effective than direct attacks.
- Do not fear the unusual. AlphaZero played moves that no human had considered. Some of those moves were objectively best. The lesson is not to copy AlphaZero, but to free yourself from rigid thinking. If a move looks weird but the logic is sound, play it.
PART 6: LEELA CHESS ZERO VS STOCKFISH: TWO PHILOSOPHIES
6.1 How Stockfish Evaluates
Stockfish is the strongest chess engine in the world, and it has held that title (or shared it) for over a decade. Understanding how it works will make you a better user of its analysis.
Stockfish uses a hybrid approach. It combines classical alpha-beta search with a neural network evaluation called NNUE (Efficiently Updatable Neural Network). Here is how the pieces fit together:
Search: Stockfish examines positions using alpha-beta pruning, the same fundamental algorithm used since the 1960s. But modern Stockfish searches far more efficiently than its predecessors. It uses dozens of pruning and reduction techniques with names like Late Move Reduction, Null Move Pruning, and Futility Pruning. These techniques skip positions that are unlikely to change the evaluation, allowing the engine to search deeper in critical lines.
Evaluation: In earlier versions, Stockfish used a hand-crafted evaluation function: a formula that assigned numerical values to features like material, king safety, pawn structure, and piece mobility. Starting in 2020, Stockfish switched to NNUE, a small neural network that evaluates positions more accurately than any hand-crafted formula. The NNUE network is trained on hundreds of millions of positions evaluated by Stockfish itself. It is "efficiently updatable" because when a move is made, only a small portion of the network needs to be recalculated, making it fast enough to evaluate millions of positions per second.
The result: Stockfish combines the depth of classical search (routinely reaching 40+ ply in analysis) with the accuracy of neural network evaluation. This hybrid approach makes it extremely strong in tactical positions, endgames, and concrete calculations. When Stockfish gives you a line, you can be confident that the tactics work.
Stockfish's strength: Concrete calculation. If a position requires precise move-by-move analysis (sharp tactics, forcing sequences, technical endgames), Stockfish is the tool you want. It searches deeper than any other engine and its tactical accuracy is almost perfect.
Stockfish's limitation: In positions where the evaluation depends on long-term strategic factors that cannot be resolved by calculation (piece placement, pawn structure quality, prophylactic ideas), Stockfish can sometimes miss the point. Its evaluation may say "+0.10" in a position where one side has a significant practical advantage that only becomes apparent 30 or 40 moves later.
6.2 How Lc0 Evaluates
Leela Chess Zero (Lc0) takes a completely different approach. It is a direct descendant of AlphaZero's architecture, built by an open-source community that replicated DeepMind's methods.
Search: Lc0 uses Monte Carlo Tree Search (MCTS), not alpha-beta. Instead of systematically searching every branch, MCTS focuses on the most promising moves. It runs many simulations from the current position, growing a search tree along the moves the neural network's intuition favors and scoring leaf positions with the network's value estimate rather than with random playouts. Promising moves get more simulations; bad-looking moves get fewer. Over thousands of simulations, the best move emerges statistically.
Evaluation: Lc0's evaluation comes entirely from a large neural network. This network takes the board position as input and outputs two things: (1) a probability distribution over all legal moves (which moves are likely to be best?), and (2) a win/draw/loss probability for the position. There is no hand-crafted evaluation formula. The network learned everything from self-play, starting from random play and improving over billions of games.
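The heart of this search can be written down compactly. The sketch below uses the published PUCT selection formula from the AlphaZero paper, with illustrative names rather than Lc0's actual source: each move is scored by its average result so far (Q) plus an exploration bonus (U) that grows with the network's prior P and shrinks as the move accumulates visits:

```python
import math

def puct_select(children, c_puct=1.5):
    """Pick the child maximizing Q + U.  `children` maps a move to
    (prior, visit_count, total_value); the names are illustrative."""
    parent_visits = sum(n for _, n, _ in children.values())

    def score(stats):
        p, n, w = stats
        q = w / n if n else 0.0          # mean result so far
        u = c_puct * p * math.sqrt(parent_visits + 1) / (1 + n)
        return q + u

    return max(children, key=lambda move: score(children[move]))
```

A move with a modest prior but good early results can overtake the network's first choice, and a high-prior move that keeps disappointing gets demoted: the statistics correct the intuition, and vice versa.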
The result: Lc0's evaluations are more "intuitive" than Stockfish's. It excels at recognizing long-term positional patterns: piece activity, king safety in complex positions, the value of the bishop pair, and the importance of pawn structure. It evaluates positions holistically, the way a strong human player does, but with far greater accuracy.
Lc0's strength: Strategic evaluation. In positions with long-term imbalances (material vs. activity, structural advantages, opposite-colored bishop positions), Lc0 often provides better guidance than Stockfish. It is particularly strong at evaluating sacrificial positions where the compensation is positional rather than tactical.
Lc0's limitation: Because MCTS does not search as deeply as alpha-beta in tactical positions, Lc0 can occasionally miss tactical shots that Stockfish finds instantly. In sharp, forcing positions with long concrete variations, Stockfish is more reliable. Lc0 also requires significantly more hardware (specifically GPUs) to run at full strength.
6.3 Three Positions Where the Engines Disagree
The most instructive positions in modern chess are those where Stockfish and Lc0 disagree. These disagreements reveal the boundaries of each engine's understanding and teach you to think critically about engine output.
Disagreement Position 1: The Closed Center
Set up your board:
White: Kg1, Qd1, Ra1, Rf1, Be2, Be3, Nc3, Nf3, pawns a2, b2, c4, d4, e4, f2, g2, h3 Black: Kg8, Qd8, Ra8, Rf8, Bc8, Bg7, Nc6, Nf6, pawns a6, b7, c7, d6, e5, f7, g6, h7
This is a King's Indian Defense structure. The center is about to close after d5. Stockfish evaluates this position as roughly equal (+0.15) and recommends the standard d5, closing the center and beginning a kingside or queenside pawn advance. Lc0 also likes d5 but assigns a slightly higher evaluation (+0.40) and emphasizes different follow-up plans.
The disagreement is instructive. After d5, the game becomes a race: White attacks on the queenside (c5 break), Black attacks on the kingside (f5 break). Stockfish's evaluation reflects its belief that both sides have equal chances in this race. Lc0's higher evaluation reflects its neural network's judgment that White's queenside attack is slightly faster and more concrete.
What you learn: In closed, strategic positions, pay closer attention to Lc0's evaluation. Its pattern recognition for pawn structure dynamics is often more nuanced than Stockfish's.
Disagreement Position 2: The Positional Piece Sacrifice
Set up your board:
White: Kc1, Qd2, Rd1, Rh1, Bf1, Bg5, Nc3, Nd4, pawns a2, b2, c2, e4, f3, g2, h2 Black: Kg8, Qc7, Ra8, Rf8, Bc8, Be7, Nd7, Nf6, pawns a6, b5, d6, e6, f7, g7, h7
Stockfish evaluates this as +0.25 and recommends quiet play with a3 or Kb1, improving White's position slowly. Lc0 evaluates the position at +0.65 and recommends Nd5!?, a thematic piece sacrifice: after ...exd5 exd5, White has only a pawn for the knight, but the wedge on d5 cramps Black, the d6 pawn becomes a lasting weakness, and Bxf6 is available to remove Black's best defender. The resulting positions give White long-term pressure and excellent piece activity.
Lc0 sees the compensation as more than sufficient. Stockfish is skeptical because the material cost is immediate and the compensation is long-term. In practice, Lc0's assessment tends to be correct in positions like this: structural damage combined with piece activity outweighs the material deficit.
What you learn: When Lc0 recommends a sacrifice that Stockfish rejects, take Lc0 seriously. Its evaluation of positional compensation is consistently more accurate.
Disagreement Position 3: The Quiet Endgame
Set up your board:
White: Kf3, pawns a2, b3, c4, e4, f5, g2, h2 Black: Kf7, pawns a7, b7, c6, e5, f6, g7, h7
This is a king and pawn endgame. All the pieces have been traded. Stockfish evaluates this position precisely: it can calculate the pawn race outcomes exactly and determines whether White can break through or whether the position is a draw. In this case, Stockfish's evaluation is +0.10, suggesting a likely draw with correct play.
Lc0 evaluates the same position at +0.45, significantly higher. Why the disagreement? Because Lc0's neural network has learned from millions of self-play endgames that White's space advantage and the better king position create practical winning chances. In a theoretical sense, the position may be drawn. In a practical sense (especially with less than perfect play), White has real chances.
What you learn: In pure endgames, trust Stockfish's calculation over Lc0's intuition. Stockfish can calculate forced pawn races exactly. But also note that Lc0's evaluation may better reflect practical chances against a human opponent. If you are preparing to play this position against a human (not a computer), Lc0's optimism may be more relevant than Stockfish's precision.
6.4 Practical Advice: When to Trust Which Engine
Here is a simple decision framework for your preparation work:
Trust Stockfish when:
- The position is sharp and tactical (lots of forcing moves, captures, checks)
- You need to verify a concrete line to the end
- You are analyzing an endgame with precise calculation requirements
- You need to check whether a combination actually works
- You want to confirm that your opponent's sacrifice is unsound
Trust Lc0 when:
- The position is strategic and closed (few tactical operations, long maneuvering)
- You are evaluating a sacrifice with long-term positional compensation
- You want to understand the "direction" of a position (who is better and why)
- You are choosing between two plans that are both tactically sound
- You want to evaluate pawn structure quality or piece activity holistically
Use both when:
- You are doing serious opening preparation (let both engines analyze and compare)
- The position is complex with both tactical and strategic elements
- You are studying a position to understand it deeply, not just to find a move
- You encounter a position where one engine's evaluation surprises you
The golden rule: When the engines agree, trust the evaluation. When they disagree, study the position carefully. The disagreement itself is telling you something important about the nature of the position.
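If you like your checklists executable, the framework above condenses to a few lines of Python. This is a study aid, not engine code; the two flags represent your own judgment about the position:

```python
def which_engine(sharp_tactics, long_term_strategy):
    """Toy encoding of the decision framework: the flags are your own
    assessment of the position, the return value says whose analysis
    to weight more heavily."""
    if sharp_tactics and long_term_strategy:
        return "both"       # complex positions: let the engines argue
    if sharp_tactics:
        return "stockfish"  # forcing lines, concrete calculation
    if long_term_strategy:
        return "lc0"        # closed, maneuvering positions
    return "both"           # unsure? the (dis)agreement is the data
```

The fall-through to "both" is deliberate: when you cannot classify the position, the engines' disagreement, or agreement, is itself the information you want.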
PART 7: THE MODERN GENERATION AND THE ENGINE ERA
7.1 Growing Up with Engines
Every chess generation is shaped by the tools available to it. Fischer's generation had books and adjournments. Kasparov's generation had personal computers and databases. Carlsen's generation had powerful engines from the start. But the generation born after 2000 is fundamentally different. These players never knew chess without engines. They learned openings from engine lines, analyzed games with Stockfish from their first tournament, and grew up watching engine evaluations on live broadcasts.
This has produced players with remarkable technical accuracy. The modern young Grandmaster can calculate forcing sequences with computer-like precision, work through complex endgames with near-perfect technique, and prepare opening novelties that emerge from millions of engine-checked variations. Their baseline level of play, in terms of avoiding outright mistakes, is higher than any previous generation.
But it has also created a challenge. When every strong player has access to the same engines and the same databases, the playing field is leveled in preparation. The advantage shifts from who can prepare more deeply to who can think more creatively once the preparation ends. The human element, the ability to work through unfamiliar positions with confidence and originality, matters more than ever.
7.2 Gukesh Dommaraju: The Youngest World Champion
In December 2024, Gukesh Dommaraju became the youngest World Champion in chess history at age 18, defeating Ding Liren in the World Championship match. Gukesh's achievement represents the arrival of the engine generation at the very peak of chess.
Gukesh learned chess surrounded by technology. His preparation combined deep Stockfish analysis with creative over-the-board play. He studied classical games to build intuition, used engines to verify and extend his analysis, and maintained a fierce competitive instinct that no engine could provide.
What makes Gukesh's rise instructive is the balance he struck. He did not play like an engine. His games featured creative piece sacrifices, ambitious pawn play, and occasional risks that a pure engine-trained player might avoid. But his tactical accuracy was extraordinary, a product of growing up with engines as training partners.
The lesson from Gukesh's example is that engines are tools for development, not substitutes for talent and creativity. The best young players today use engines to expand their understanding, not to replace their thinking. They prepare with engines. Then they close the laptop and play chess.
7.3 The Balance Between Preparation and Creativity
The modern Grandmaster faces a tension that did not exist in Botvinnik's era. Engine preparation can extend 25 or 30 moves deep into some openings. A player who memorizes these lines can reach a favorable position without a single original thought. But memory has limits. At some point, usually around move 20 to 30, the preparation ends. And then what?
The players who thrive in the modern era are those who prepare deeply but also develop independent thinking skills. Preparation gets you to a good position. Creativity, calculation, and judgment win the game from there.
Here is a useful metaphor. Think of engine preparation as a runway. It gets you off the ground. But once you are airborne, you need to fly the plane yourself. A player who spends all their time extending the runway (memorizing longer and longer engine lines) but never learns to fly (think independently) will eventually crash when they reach unfamiliar territory.
The best approach combines both. Prepare specific lines with engines, but also study complete games to build pattern recognition. Solve tactical puzzles to sharpen calculation. Play training games in unfamiliar openings to practice independent thinking. The goal is a player who can handle any position, not just the positions they have memorized.
7.4 Can Humans Still Find Novelties That Engines Miss?
This question gets asked at every chess conference, and the honest answer is: rarely, but yes.
Engines search billions of positions, but they do not search every position. Their pruning algorithms skip branches that appear unpromising based on their evaluation functions. Occasionally, a human player notices an idea in a "pruned" branch that the engine missed. These discoveries are rare, but they happen.
More commonly, humans find practical novelties. These are moves that the engine evaluates as equal to the engine's top choice (say, +0.20 vs. +0.22) but lead to positions that are much harder for a human opponent to work through. The engine does not distinguish between two +0.20 moves, but an experienced Grandmaster knows that one leads to a dry, drawish position while the other creates practical problems. Choosing the "trickier" equal move is a form of creativity that engines cannot replicate, because they do not model their opponent's likely mistakes.
There is also the area of opening preparation. Engines evaluate known positions brilliantly, but they do not generate opening ideas from scratch. A human who understands pawn structures and piece placement can propose a new move order or a new plan that the engine has never been asked to evaluate. Once the engine is shown the idea, it can assess whether it works. But the idea itself came from the human.
So yes, humans can still contribute original ideas to chess. The role has shifted from finding the best moves (which engines do better) to asking the right questions, choosing the right positions to study, and making practical decisions that exploit the gap between engine perfection and human reality.
7.5 The Future of Human Chess in an Age of Perfect Play
Engines have not killed chess. They have changed it. The nature of competitive chess is now different from what it was in Kasparov's era, but it is no less rich.
Consider an analogy. Humans cannot run as fast as cars. That fact has not eliminated competitive running. The 100-meter dash is still one of the most popular sporting events in the world. We do not compare Usain Bolt to a Ferrari. We appreciate his achievement within the context of human performance.
Chess is moving in the same direction. The question is no longer "can a human beat an engine?" (no) or "what is the objectively best move?" (ask the engine). The question is "how well can a human play chess?" That question remains as compelling as ever.
The future of competitive chess will likely feature:
- Stronger anti-cheating measures to ensure fair play in the engine era
- Fischer Random (Chess960) becoming more popular, since preparation matters less when the starting position is random
- More emphasis on rapid and blitz time controls, where preparation is less dominant and pure chess skill matters more
- AI-assisted training that helps players improve faster, making the overall level of play higher than ever
- New formats that test human creativity directly, perhaps including positions selected specifically because engines evaluate them as unclear
Human chess is not dying. It is evolving. Your job, as a player in this era, is to use every tool available to become the strongest player you can be, while preserving the creative, competitive, deeply human spirit that makes chess beautiful in the first place.
The engines can find the truth. But the search, the struggle, the joy of finding a beautiful move at the board while the clock is ticking and your heart is pounding: that belongs to you alone.
EXERCISES (continued)
Engine Era Exercises (49.61 to 49.70)
Exercise 49.61 ★★★ [Essential] Evaluating a Closed Position
⏱ Estimated time: 8 minutes
Set up your board:
White: Kg1, Qd1, Ra1, Rf1, Bc1, Be2, Nc3, Nf3, pawns a2, b2, c4, d3, e4, f2, g2, h2 Black: Kg8, Qd8, Ra8, Rf8, Bc8, Be7, Nc6, Nf6, pawns a7, b7, c5, d6, e6, f7, g7, h7
This is a Symmetrical English structure. Both sides have developed pieces harmoniously. Without using an engine, evaluate this position. Who stands better, and why?
Hint 1: Look at pawn structure. Both sides have similar structures, but one side has a slight space advantage.
Hint 2: Consider where each side's play lies. White has more central space (e4 vs. e6). Does that translate into an advantage?
Hint 3: Think about plans. White can aim for d4 or f4. Black can aim for ...d5 or ...b5. Who gets there first?
Solution: The position is roughly equal, but White has a tiny edge because of the space advantage from e4 vs. e6. White can prepare d4 (with Be3 and Rc1) or f4 (with f3 first to support e4). Black's best plan is ...d5, challenging the center directly. Engines evaluate this around +0.15 to +0.25. The key lesson: small space advantages in closed positions are real, but they require patient maneuvering to exploit. This is the type of position where Lc0 tends to give a slightly higher evaluation than Stockfish, because Lc0's network weighs White's long-term spatial advantage more heavily than Stockfish's search-driven evaluation does.
Exercise 49.62 ★★★ [Essential] Reading an Engine's Evaluation Shift
⏱ Estimated time: 6 minutes
Set up your board:
White: Ke1, Qd1, Ra1, Rh1, Bc1, Bc4, Nb1, Nf3, pawns a2, b2, c2, d2, e4, f2, g2, h2 Black: Ke8, Qd8, Ra8, Rh8, Bc5, Bc8, Nc6, Nf6, pawns a7, b7, c7, d7, e5, f7, g7, h7
This is the Italian Game after 1.e4 e5 2.Nf3 Nc6 3.Bc4 Bc5. The engine evaluates this position as +0.30. You play 4.d3 and the engine evaluation drops to +0.10. Then you try 4.b4 (the Evans Gambit) and the evaluation shifts to +0.05. Which move does the engine likely prefer, and what does the evaluation shift tell you?
Hint 1: The engine is not saying d3 or b4 are bad. It is saying its top choice is slightly better than both.
Hint 2: The engine's preferred move in this position is well known. It involves a direct central strike.
Hint 3: Think about the engine's preference for activity and central control.
Solution: The engine's top choice is almost certainly 4.c3, preparing d4 with a strong center, or 4.d4 directly (the Scotch-Italian hybrid). The evaluation drop from +0.30 to +0.10 after d3 tells you that d3 is a fine move but slightly passive compared to the engine's recommendation. The further drop to +0.05 after b4 tells you the Evans Gambit gives Black sufficient counterplay to nearly equalize. Neither move is a mistake; both lead to playable positions. But the engine sees c3 or d4 as generating more central pressure. The lesson: evaluation drops of 0.10 to 0.20 usually signal a move that is slightly less active than the best option, not a genuine error.
Exercise 49.63 ★★★★ [Practice] AlphaZero-Style King Walk
⏱ Estimated time: 12 minutes
Set up your board:
White: Kg1, Qd2, Ra1, Re1, Bb3, Nc3, Nf3, pawns a2, b2, d4, e4, f2, g2, h3 Black: Kg8, Qc7, Ra8, Rf8, Bb7, Nd7, Nf6, pawns a6, b5, c5, d6, e6, f7, g7, h7
White to play. The center is stable (d4/e4 vs. d6/e6). Black is playing on the queenside. A strong engine recommends an unusual plan for the White king. Find it and explain the logic.
Hint 1: The king is on g1. Is it doing anything useful there? Could it serve a purpose elsewhere?
Hint 2: Look at the f1 square. What would happen if the king moved to f1, freeing the rooks to coordinate on the central files?
Hint 3: After Kf1, the plan continues with Ke2 and possibly Kd1-c2. Why would the king walk to c2?
Solution: The plan is Kf1, followed by Ke2-d1-c2. The center is locked, so the king is safe walking through the middle of the board. From c2, the king accomplishes two things: (1) it connects the rooks along the back rank, and (2) it defends the b2 pawn, freeing the queen from defensive duties. This is a classic AlphaZero-style king walk. The key recognition is that the closed center makes the king perfectly safe, and a centralized king is a strong piece in such positions. Most human players would never consider moving the king voluntarily from its castled position, but the logic is sound.
Exercise 49.64 ★★★★ [Practice] Exchange Sacrifice for Positional Dominance
⏱ Estimated time: 15 minutes
Set up your board:
White: Kg1, Qd1, Ra1, Re1, Bf4, Bd3, Nc3, pawns a2, b2, c2, d4, f2, g2, h2 Black: Kg8, Qe7, Ra8, Rf8, Bc8, Nd7, pawns a7, b7, c6, d5, e6, f7, g7, h7
White has a strong center anchored by the d4 pawn, a half-open e-file, and the bishop pair. Black's pieces are passive. White to play. An engine recommends a surprising exchange sacrifice. Find it.
Hint 1: Which White piece is the most powerful, and which Black piece is the most passive?
Hint 2: Consider sacrificing the exchange on e6, opening lines and shattering Black's pawn structure.
Hint 3: After Rxe6, what happens to Black's pawn structure? Can Black's remaining pieces become active?
Solution: White plays Rxe6! The rook grabs a pawn and attacks the queen on e7, so the critical reply is ...fxe6 (otherwise the rook simply retreats, having won a clean pawn). Now Black's pawns on e6 and d5 are both weak, and the f-file is half-open for White's remaining rook. White follows up with Qg4 or Qh5, targeting the weak e6 pawn and threatening to invade on the kingside. Black's bishop on c8 is still buried, the knight on d7 is passive, and the queen must defend multiple weaknesses. The sacrifice is sound because White's bishop pair, open lines, and pressure against Black's shattered pawns provide more than enough compensation for the material investment. This is a textbook engine-style sacrifice: give up material to create permanent structural damage that the opponent cannot repair.
Exercise 49.65 ★★★★ [Practice] Stockfish vs. Lc0: Finding the Right Plan
⏱ Estimated time: 12 minutes
Set up your board:
White: Kg1, Qb3, Ra1, Rf1, Bf4, Be2, Nc3, Nf3, pawns a2, b2, d4, e3, f2, g2, h2 Black: Kg8, Qd8, Ra8, Rf8, Bb7, Be7, Nd7, Nf6, pawns a6, b6, c5, d5, e6, f7, g7, h7
White to play. Stockfish recommends dxc5, opening the position and simplifying. Lc0 recommends a3, a quiet prophylactic move. Which plan do you prefer, and why might the engines disagree?
Hint 1: After dxc5 bxc5, the position opens. Who benefits from the open lines?
Hint 2: After a3, White prepares b4 at the right moment, maintaining tension. What does keeping the tension achieve?
Hint 3: Think about which engine values "keeping options open" more highly.
Solution: Lc0's a3 is the stronger practical choice. After dxc5 bxc5, the position simplifies and Black's pieces (especially the b7 bishop and the d7 knight heading to b6 or c4) become active. Stockfish likes dxc5 because it evaluates the resulting position as marginally better for White in concrete terms. But Lc0 recognizes that keeping the tension with a3 preserves White's advantage for longer. After a3, White can choose when to release the tension with dxc5, play for e4, or even push b4 at the optimal moment. The disagreement comes down to philosophy: Stockfish resolves tension because the resulting position calculates favorably. Lc0 maintains tension because the long-term options are more valuable than the immediate simplification.
Exercise 49.66 ★★★★ [Practice] Positional Pawn Sacrifice
⏱ Estimated time: 12 minutes
Set up your board:
White: Kg1, Qd1, Ra1, Rf1, Bc1, Bg2, Nc3, Nf3, pawns a2, b2, c4, d3, e4, f2, g3, h2 Black: Kg8, Qd8, Ra8, Rf8, Bc8, Bg7, Nc6, Nf6, pawns a7, b7, c5, d6, e7, f7, g6, h7
White to play. An AlphaZero-inspired plan involves sacrificing the e4 pawn. How, and what is the compensation?
Hint 1: What happens after e5? Black can capture with ...dxe5. What does White gain?
Hint 2: After ...dxe5, resist the automatic recapture. White's d3 pawn can advance to d4, striking at both c5 and e5 before Black consolidates. That advance is the heart of the idea.
Hint 3: After e5 dxe5 d4, count what White gets for the pawn: the Bg2's long diagonal, the half-open d-file, and central squares for the pieces, while Black must solve concrete problems immediately.
Solution: White plays e5! The point is not to win the pawn back immediately: after ...dxe5, White continues d4, prying the center open whichever way Black reacts. The compensation is structural and dynamic: the long diagonal opens for the Bg2, the d-file and central squares become available for White's knights and heavy pieces, and Black's extra pawn on e5 is more a target than an asset. The concrete lines are sharp, so verify them with your engine, but the idea is the lesson. This is the type of pawn sacrifice that AlphaZero played routinely: giving up a pawn to transform a closed, balanced position into an open position where piece activity reigns.
Exercise 49.67 ★★★★ [Practice] The Computer Move in a Human Game
⏱ Estimated time: 10 minutes
Set up your board:
FEN: r2qr1k1/1b2bppp/ppnp1n2/2p1p3/P1P1P3/2NPBN2/1P2BPPP/R2Q1RK1
White: Kg1, Qd1, Ra1, Rf1, Be2, Be3, Nc3, Nf3, pawns a4, b2, c4, d3, e4, f2, g2, h2 Black: Kg8, Qd8, Ra8, Re8, Bb7, Be7, Nc6, Nf6, pawns a6, b6, c5, d6, e5, f7, g7, h7
White to play. This is a typical Hedgehog structure. The engine's top recommendation is not a developing move or a central break. It is a quiet prophylactic move. Find it.
Hint 1: What is Black's main plan? Black wants to play ...d5, breaking open the center.
Hint 2: How can White prevent ...d5 directly or make it less effective?
Hint 3: Consider the move a5. What does it do to Black's queenside structure?
Solution: The engine recommends a5!, fixing Black's queenside pawns and clamping down on ...b5. After a5, Black's pawn on b6 is fixed, the a6 pawn becomes a long-term target, and Black's typical ...b5 break is far harder to achieve. This is a "computer move" because most human players would focus on central ideas (d4 or f4) rather than this quiet queenside restraint. But the engine understands that preventing Black's counterplay is more important than pursuing White's own plans. By restraining ...b5, White can maneuver slowly without fear of Black generating queenside activity. This prophylactic approach is a hallmark of neural network play: restrict first, attack later.
Exercise 49.68 ★★★★★ [Mastery] Deep Analysis: Engine Disagreement
⏱ Estimated time: 20 minutes
Set up your board:
White: Ke1, Qd2, Ra1, Rh1, Be2, Be3, Nc3, Nf3, pawns a2, b2, c4, d4, e4, f2, g2, h3 Black: Kg8, Qc7, Ra8, Re8, Bc8, Bg7, Nd7, Nf6, pawns a7, b7, c6, d6, e5, f7, g6, h7
White has not yet castled. The center is tense but not locked. Analyze this position deeply. Should White castle kingside, castle queenside, or keep the king in the center? Stockfish recommends 0-0 (kingside castling). Lc0 recommends d5, closing the center and keeping the king flexible. Argue for both sides and reach your own conclusion.
Hint 1: If White castles kingside, what are Black's attacking chances on the kingside? Consider ...Nh5 and ...f5.
Hint 2: If White plays d5, the center closes. What does that mean for king safety in the center? Think about the AlphaZero king walk principle.
Hint 3: If White castles queenside, how does Black attack? Consider ...b5 and ...a5 breaks.
Solution: This is a rich position where both approaches have merit.
Stockfish's case for 0-0: Kingside castling is safe, natural, and connects the rooks immediately. After 0-0, White can play on the queenside with b4 and a4. The king is behind three pawns (f2, g2, h3) and reasonably secure.
Lc0's case for d5: After d5, the center locks (pawns on d5, e4 vs. d6, e5). With the center closed, the king is safe in the center or can walk to the queenside at leisure. White can tuck the king away with Kf1, leave it on e1, or even castle queenside later, based on how Black responds. The flexibility is valuable because Black's best attacking plans depend on where White's king ends up; by not committing, White forces Black to wait.
The deeper truth: Both are playable at the Grandmaster level. Lc0's d5 is arguably more ambitious because it creates a permanent strategic framework (fixed center) before committing the king. Stockfish's 0-0 is more practical because it avoids the risk of the king getting caught in the center if Black finds an unexpected way to open lines. Your choice should depend on your comfort with closed positions and king walks. If you are confident in AlphaZero-style maneuvering, d5 is the more promising choice. If you prefer classical development, 0-0 is perfectly sound.
Exercise 49.69 ★★★★★ [Mastery] Can Humans Find What Engines Miss?
⏱ Estimated time: 20 minutes
Set up your board:
White: Kg1, Qd1, Ra1, Rf1, Bd3, Be3, Nc3, Nf3, pawns a3, b2, c4, d4, e5, f2, g2, h2 Black: Kg8, Qd8, Ra8, Rf8, Bb7, Be7, Nd7, Nf6, pawns a7, b6, c6, d5, e6, f7, g7, h7
This is a French Defense structure. White has the classic space advantage with e5. Both engines evaluate this position around +0.40 and recommend standard plans (f4, Qe2, Rae1). But there is a creative human idea here that engines rank lower but that creates practical problems for Black. Find a surprising move and explain why a human might prefer it to the engine's top choice.
Hint 1: Look at g5. What would Ng5 accomplish? It attacks e6 and h7, creating immediate tactical threats.
Hint 2: Engines may prefer f4 (solidifying e5), but Ng5 forces Black to respond to a concrete threat immediately. In practice, forcing moves are often stronger than quiet improvements.
Hint 3: After Ng5, Black has to deal with Qh5 threats, potential sacrifices on e6 or h7, and the uncomfortable pin on the e7 bishop. Even if the engine says f4 is objectively better, which move creates more practical problems?
Solution: The creative move is Ng5!, threatening Qh5, Nxe6, and Bxh7+. Engines may evaluate this as slightly less accurate than f4 (+0.35 vs. +0.40) because Black can defend precisely with ...Nf8 or ...h6. But in practical play against a human opponent, Ng5 is ferocious. Black must find the only defensive moves or face immediate disaster. After ...h6 Nh3 (retreating but keeping options for f4-f5 later), White maintains pressure and has sacrificed nothing.
This exercise illustrates a key principle: the "best" engine move and the "best" practical move are not always the same. A move that creates problems for your opponent is often stronger than a move that nudges the evaluation up by 0.05 pawns. This is where human judgment still outperforms engine evaluation, because engines do not model their opponent's likely errors.
Exercise 49.70 ★★★★★ [Mastery] The Future Position: Original Thinking Required
⏱ Estimated time: 25 minutes
Set up your board:
White: Kg1, Qd1, Ra1, Rf1, Bc1, Bg2, Nc3, Ne2, pawns a2, b2, c4, d5, e4, f2, g3, h2 Black: Kg8, Qd8, Ra8, Rf8, Bc8, Bg7, Nd7, Nf6, pawns a7, b7, c5, d6, e5, f7, g6, h7
This is a Benoni-like structure with a closed center. Both sides have clear plans: White plays on the queenside (b4 break), Black plays on the kingside (f5 break).
Your challenge: without using an engine, develop a complete plan for White covering the next 5 to 8 moves. Consider piece placement, pawn breaks, prophylaxis, and king safety. Then, after you have written your plan, check it against an engine. How close were you to the engine's recommendation?
Hint 1: White's first priority is completing development. The Bc1 and Ne2 need better squares. Where should they go?
Hint 2: The standard plan involves Be3, a3, b4, Rb1. But the order matters. Which move should come first?
Hint 3: Before playing b4, White should consider whether Black's ...f5 break is dangerous. Is there a prophylactic move that addresses Black's kingside intentions?
Solution: A strong plan for White:
- a3 (preparing b4 without allowing ...Nb6-a4)
- Rb1 (supporting the b4 advance)
- b4 (striking at Black's queenside)
- Be3 (developing the final minor piece to an active diagonal)
- Ne2-c1-d3 (rerouting the e2 knight via c1 to d3, where it supports b4 and controls e5 and f4)
- f3 (prophylaxis against ...f5; after f3, White's center is rock solid)
- Qd2 (connecting queen to the kingside and preparing potential Bh6 to trade Black's fianchettoed bishop)
This plan addresses development, queenside play, and prophylaxis against Black's kingside break. The engine's recommendation will be similar, though it may prefer a different move order (for example, playing f3 earlier to lock down the kingside before committing to b4). The goal of this exercise is not to match the engine move for move, but to develop a coherent, logical plan that addresses both sides of the board. If your plan covered at least four of these seven ideas, you are thinking at a strong level. If you found all seven in roughly the right order, you are thinking like an engine.
Exercise Summary (49.61 to 49.70)
| Exercise | Difficulty | Theme | Time |
|---|---|---|---|
| 49.61 | ★★★ [Essential] | Evaluating a closed position without an engine | 8 min |
| 49.62 | ★★★ [Essential] | Reading engine evaluation shifts | 6 min |
| 49.63 | ★★★★ [Practice] | AlphaZero-style king walk | 12 min |
| 49.64 | ★★★★ [Practice] | Exchange sacrifice for positional dominance | 15 min |
| 49.65 | ★★★★ [Practice] | Stockfish vs. Lc0 plan comparison | 12 min |
| 49.66 | ★★★★ [Practice] | Positional pawn sacrifice | 12 min |
| 49.67 | ★★★★ [Practice] | Finding the computer move in a Hedgehog | 10 min |
| 49.68 | ★★★★★ [Mastery] | Deep analysis of engine disagreement | 20 min |
| 49.69 | ★★★★★ [Mastery] | Human creativity vs. engine accuracy | 20 min |
| 49.70 | ★★★★★ [Mastery] | Complete plan development and engine comparison | 25 min |
Total estimated time: 140 minutes (2 hours 20 minutes)
🛑 Rest Marker
This is a natural stopping point.
You have now traveled from the birth of computer chess through the AlphaZero revolution, the Stockfish vs. Lc0 philosophical divide, and the rise of a new generation that grew up with engines as daily companions. That is a lot to absorb. Give yourself credit for making it this far.
Step away from the board. Let these ideas settle. The relationship between human players and chess engines is still being written. You are part of that story. Every game you play, every position you study, every moment you choose to think for yourself instead of reaching for the engine: that is you contributing to the future of human chess.
When you come back, try one of the exercises. Start with 49.61 if you want a warm-up. Start with 49.68 if you want a challenge. Either way, bring a board, a notebook, and your own ideas. The engine can wait.
Come back with fresh eyes. The next chapter is waiting.
💙♟️
"The engine sees everything. But the courage to play the move, to trust your judgment, to sit across from another human being and create something beautiful on 64 squares: that will always be ours."