Adversarial Search

Minimax & Alpha-Beta Pruning

Professor Dr. Dhaval Patel • 2025

What are Adversarial Games?

Many AI problems can be modeled as games between two players, where each makes moves in turn, trying to optimize their own outcome.

  • Chess: Classic strategy game with perfect information
  • Tic-Tac-Toe: Simple but perfect for understanding concepts
  • Checkers: Another perfect information game
  • Go: Ancient game with enormous complexity
Key Characteristic: Two players alternate moves, each trying to maximize their own score while minimizing their opponent's score.

Why Study Games in AI?

🧠 Cognitive Challenges

  • Test reasoning and planning abilities
  • Require looking ahead multiple moves
  • Decision making under uncertainty
  • Strategic thinking and pattern recognition
Games provide a perfect sandbox for testing AI reasoning!

📊 Measurable Progress

  • Clear win/lose/draw outcomes
  • Direct comparison with human experts
  • Well-defined rules and objectives
  • Benchmarking against other AI systems
Human vs AI: The ultimate test of artificial intelligence!

Game Trees – The Playground of Minimax

[Game-tree diagram: a MAX root with three MIN children; the nine leaves hold the values 3, 5, 2, 9, 0, 1, 7, 5, 4.]
Game Tree Structure:
  • Root: Current game state
  • Nodes: Possible game states after moves
  • Edges: Legal moves available
  • Leaves: Terminal states (win/lose/draw)
  • Alternating Levels: MAX player vs MIN player
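As a concrete sketch, the example tree on this slide can be encoded as nested Python lists, a hypothetical minimal representation (not a full game-state model): interior nodes are lists of children, and leaves are terminal scores from MAX's point of view.

```python
# The tree from the diagram: a MAX root, three MIN children, nine leaves.
# Interior nodes are lists of children; leaves are terminal scores.
game_tree = [
    [3, 5, 2],   # left MIN node
    [9, 0, 1],   # middle MIN node
    [7, 5, 4],   # right MIN node
]
```

This encoding is enough to run Minimax by hand or in code: each nesting level alternates between MAX and MIN.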

The Minimax Idea

The core insight of Minimax is beautifully simple yet powerful:

Assume your opponent plays optimally!
Choose moves that maximize your minimum possible outcome.

How Minimax Works:

  • MAX nodes: Your turn - pick the move with the highest value
  • MIN nodes: Opponent's turn - they pick the lowest value for you
  • Backup values: Propagate scores from the leaves to the root
  • Best move: The value backed up to the root identifies the optimal first move
💡 Think of it like: You're climbing a ladder while your opponent tries to shake it. You look for the safest, highest step you can reach!

Algorithm guarantees: If both players play perfectly, Minimax finds the best possible outcome for the MAX player.
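The backup procedure above fits in a few lines of Python. This is a minimal sketch that assumes the nested-list tree encoding (leaves are scores, interior nodes are lists of children), not a full game engine with move generation.

```python
def minimax(node, maximizing):
    """Return the minimax value of a node.

    Leaves are plain numbers; interior nodes are lists of children.
    MAX levels take the max over children, MIN levels the min.
    """
    if not isinstance(node, list):      # terminal state: return its score
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# The tree from the earlier slide: leaves 3,5,2 / 9,0,1 / 7,5,4, MAX at the root.
tree = [[3, 5, 2], [9, 0, 1], [7, 5, 4]]
print(minimax(tree, True))  # MIN values are 2, 0, 4 -> MAX picks 4
```

Tracing it confirms the backup story: each MIN node reports its smallest leaf (2, 0, 4), and the MAX root selects the largest of those, 4.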

Minimax Algorithm in Action

🎮 Interactive Minimax Demo - Step Through the Algorithm
🎯 Try the Demo Above: Click "Next Step" to watch how Minimax evaluates each node, backs up values from leaves to root, and finds the optimal move. The algorithm guarantees the best possible outcome assuming perfect opponent play.

Alpha-Beta Pruning

Search Smarter, Not Harder

The Problem with Minimax

⚠️ Minimax Bottleneck: Time complexity is O(b^m) where b = branching factor and m = maximum depth. This explodes quickly!

Enter Alpha-Beta Pruning! 🚀

Alpha-Beta pruning skips branches that cannot affect the final decision, dramatically reducing the search space while guaranteeing the same result as Minimax.

🔍 Alpha (α)

  • Best value found so far for MAX
  • Lower bound for MAX player
  • Gets updated at MAX nodes
  • Starts at -∞

✂️ Beta (β)

  • Best value found so far for MIN
  • Upper bound for MIN player
  • Gets updated at MIN nodes
  • Starts at +∞
Pruning Rule: If α ≥ β at any point, stop searching that branch! The remaining moves cannot improve the outcome.
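The pruning rule drops into the Minimax sketch directly. The version below is a minimal illustration over the same nested-list tree encoding used earlier; α is raised at MAX nodes, β is lowered at MIN nodes, and the loop breaks as soon as α ≥ β.

```python
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning; returns the same value as Minimax."""
    if not isinstance(node, list):           # terminal leaf
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)        # raise MAX's lower bound
            if alpha >= beta:                # prune: MIN will never allow this
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)          # lower MIN's upper bound
            if alpha >= beta:                # prune: MAX already has better
                break
        return value

tree = [[3, 5, 2], [9, 0, 1], [7, 5, 4]]
print(alphabeta(tree, True))  # 4, identical to plain Minimax
```

On this tree the middle MIN node is cut short: once it can guarantee a value of 0 while MAX already has α = 2 from the first child, its last leaf is never examined.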

Alpha-Beta Pruning in Action

✂️ Interactive Alpha-Beta Pruning Demo - Watch the Pruning Magic!
✂️ Watch the Pruning Power: Notice how Alpha-Beta prunes entire subtrees (shown in gray) when α ≥ β. The algorithm achieves the exact same result as Minimax but explores far fewer nodes!

Minimax vs Alpha-Beta Performance

🐌 Standard Minimax

  • Time Complexity: O(b^m)
  • Space Complexity: O(bm)
  • Nodes Explored: All nodes
  • Guarantees: Optimal solution
Explores the entire search tree - can be very slow for deep games!

🚀 Alpha-Beta Pruning

  • Best Case: O(b^(m/2))
  • Worst Case: O(b^m)
  • Typical Case: Much better than Minimax
  • Guarantees: Same optimal solution
Can search twice as deep in the same time with perfect move ordering!
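The savings are easy to measure. The sketch below instruments both algorithms with a simple leaf counter (a hypothetical setup over the same nine-leaf example tree) and compares how many terminal states each one actually visits.

```python
import math

def count_minimax(node, maximizing, counter):
    """Plain Minimax that tallies every leaf it evaluates."""
    if not isinstance(node, list):
        counter[0] += 1                 # count each leaf visited
        return node
    values = [count_minimax(c, not maximizing, counter) for c in node]
    return max(values) if maximizing else min(values)

def count_alphabeta(node, maximizing, counter, alpha=-math.inf, beta=math.inf):
    """Alpha-Beta with the same leaf counter."""
    if not isinstance(node, list):
        counter[0] += 1
        return node
    value = -math.inf if maximizing else math.inf
    for child in node:
        v = count_alphabeta(child, not maximizing, counter, alpha, beta)
        if maximizing:
            value, alpha = max(value, v), max(alpha, v)
        else:
            value, beta = min(value, v), min(beta, v)
        if alpha >= beta:               # skip the remaining siblings
            break
    return value

tree = [[3, 5, 2], [9, 0, 1], [7, 5, 4]]
mm, ab = [0], [0]
count_minimax(tree, True, mm)
count_alphabeta(tree, True, ab)
print(mm[0], ab[0])   # Minimax visits all 9 leaves; Alpha-Beta visits 8 here
```

Even on this tiny tree one leaf is pruned; with deeper trees and good move ordering the gap grows toward the O(b^(m/2)) best case quoted above.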

Real-World Game AI Victories

🏆 Historic Victories

  • 1997: Deep Blue defeats Garry Kasparov at Chess
  • 2007: The Chinook team weakly solves Checkers, proving perfect play is a draw
  • 2016: AlphaGo beats Lee Sedol at Go
  • 2017: Libratus dominates Poker pros

🎯 Key Techniques Used

  • ✓ Alpha-Beta Pruning
  • ✓ Evaluation Functions
  • ✓ Opening/Endgame Books
  • ✓ Monte Carlo Tree Search
🌟 Impact Beyond Games: These algorithms aren't just for entertainment! They power decision-making in robotics, automated planning, resource allocation, and any scenario where you need to plan ahead against uncertainty or opposition.

Let's Practice! 🧠

Test Your Understanding

🎯 Challenge Questions:

  1. Given a game tree with branching factor 3 and depth 4, how many leaf nodes would Minimax explore?
  2. If Alpha-Beta achieves perfect move ordering, how deep can it search compared to Minimax in the same time?
  3. At what point does Alpha-Beta prune a branch? What are the α and β values when this happens?
  4. Why is move ordering crucial for Alpha-Beta's effectiveness?
💡 Try working through these with the interactive demos above!

Key Takeaways 🎯

🧠 Minimax: Assumes perfect opponent play and finds the best possible outcome through exhaustive search.
✂️ Alpha-Beta: Achieves the same result as Minimax but prunes unnecessary branches, often doubling search depth.

Why These Algorithms Matter:

  • Foundation: Core building blocks for game AI and adversarial search
  • Optimality: Guarantee finding the best move against perfect opponents
  • Efficiency: Alpha-Beta makes deep search tractable for complex games
  • Versatility: Apply beyond games to any adversarial decision-making scenario

🚀 Next Steps in Game AI:

• Advanced evaluation functions for non-terminal positions
• Handling games with chance elements (dice, cards)
• Dealing with imperfect information
• Monte Carlo Tree Search and modern approaches
• Deep learning integration (AlphaGo, AlphaZero)

Remember: Master these fundamentals, and you'll have the foundation to understand and build sophisticated game AI systems that can compete with human experts!

Thank You! 🙏

Questions & Discussion

Professor Dr. Dhaval Patel
Ready to dive deeper into AI? 🤖