Presentation Information

[1M4-GS-5x-04]A Game-Theoretic Approach to Team Decisions in American Football

〇Yuta Fukita1, Wataru Masaka1, Atsushi Iwasaki1 (1. The University of Electro-Communications)
(online presentation)

Keywords:

Multi-agent Reinforcement Learning, Game Theory, State Abstraction, Two-player Zero-sum Markov Games, American Football

Strategic play calling in American football is inherently game-theoretic: the offense and defense act simultaneously under uncertain outcomes, and performance depends on the opponent's best response. We present a data-driven two-player zero-sum Markov-game formulation of American football whose stochastic outcomes are learned from NFL play-by-play data and coupled with rule-based updates that enforce downs, field position, possession, and scoring. In a fixed-horizon endgame setting (the last four plays from a fixed initial state), this construction enables equilibrium evaluation by computing exact best responses and exploitability via dynamic programming. Using the proposed environment, we study deep reinforcement learning (Nash DQN) under three state abstractions. Empirically, moderate abstraction yields more stable learning and lower exploitability than both the full (unabstracted) state and an aggressive abstraction, while remaining measurably above the dynamic-programming equilibrium. Head-to-head simulations further reveal strong role asymmetries and show that an NFL-derived statistical baseline leaves substantial room for improvement in this setting.
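The exploitability measure used for evaluation can be illustrated in a single-stage zero-sum matrix game. The sketch below is not the authors' implementation; it is a minimal, self-contained example assuming the standard definition: exploitability of a strategy pair is the sum of what exact best responders would gain against each player, which is zero exactly at a Nash equilibrium. In the full Markov game, the same best-response computation is carried backward over the four-play horizon by dynamic programming.

```python
def exploitability(A, x, y):
    """Exploitability of mixed strategies (x, y) in a zero-sum matrix game.

    A[i][j] is the row player's payoff when row plays i and column plays j.
    Exploitability = (best-response value vs. y) - (worst case of x vs. a
    best-responding column player); it is 0 iff (x, y) is a Nash equilibrium.
    """
    # Row best response against y: max over pure rows of expected payoff.
    row_br = max(sum(A[i][j] * y[j] for j in range(len(y)))
                 for i in range(len(A)))
    # Column best response against x: column minimizes the row player's payoff.
    col_br = min(sum(x[i] * A[i][j] for i in range(len(x)))
                 for j in range(len(A[0])))
    return row_br - col_br

# Matching pennies: the unique equilibrium is uniform play for both sides.
A = [[1, -1], [-1, 1]]
print(exploitability(A, [0.5, 0.5], [0.5, 0.5]))  # equilibrium: 0.0
print(exploitability(A, [0.7, 0.3], [0.6, 0.4]))  # biased play: positive
```

In the paper's setting, the "matrix" at each state is induced by the learned transition model and the downstream values, so the same best-response logic, applied stage by stage from the final play backward, yields the exact equilibrium value that learned Nash DQN policies are compared against.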