Presentation Information
[2Yin-B-57] Stroke Evaluation Using Reinforcement Learning from Tennis Broadcast Video
〇Koki Nakaseko1, Kenjiro Ide1, Ning Ding3, Keisuke Fujii1,2 (1. Nagoya University, 2. Institute of Physical and Chemical Research, 3. Nagoya Institute of Technology)
Keywords:
Deep Reinforcement Learning, Time Series Modeling, Sports
Quantitatively evaluating individual tennis strokes is challenging because it requires multifaceted information, such as player position and pose, and depends on long-term outcomes. This study constructs a novel evaluation method that combines Deep Reinforcement Learning (DRL) over spatiotemporal position and pose with supervised learning that imitates expert shot selection. Using US Open broadcast footage, we modeled the stroke decision-making process as a time series. We quantified shot outcomes as contributions to win probability and optimized the model with a hybrid loss function so that value estimates reflect physical context. In verification experiments, we compared our approach with a position-only baseline to quantitatively evaluate how integrating pose information affects prediction accuracy and interpretability. Furthermore, through case analyses of specific rallies, we examined the differences in evaluation values attributable to pose information and the validity of the tactical intentions captured by the model. This study provides an analytical foundation for multifaceted player performance evaluation and coaching decision-making.
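The abstract does not specify the form of the hybrid loss, but a common way to combine DRL-style value estimation with imitation of expert selection is to sum a value-regression term and a behavior-cloning cross-entropy term. The sketch below is a minimal, hypothetical illustration of that idea; the function name `hybrid_loss`, the weighting parameter `alpha`, and the mean-squared-error value term are assumptions, not details from the paper.

```python
import numpy as np

def hybrid_loss(v_pred, v_target, action_logits, expert_action, alpha=0.5):
    """Hypothetical hybrid objective (not the authors' exact formulation):
    alpha * value-estimation loss + (1 - alpha) * expert-imitation loss.

    v_pred        : (N,) predicted state values (e.g., win-probability contribution)
    v_target      : (N,) target values derived from rally/match outcomes
    action_logits : (N, A) logits over candidate shot choices
    expert_action : (N,) indices of the shots the expert player actually chose
    """
    # Value term: mean squared error against outcome-derived targets.
    value_loss = np.mean((v_pred - v_target) ** 2)

    # Imitation term: softmax cross-entropy against the expert's chosen shot.
    logits = action_logits - action_logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    picked = probs[np.arange(len(expert_action)), expert_action]
    imitation_loss = -np.mean(np.log(picked + 1e-12))

    return alpha * value_loss + (1 - alpha) * imitation_loss

# Toy usage: one stroke, two candidate shots, expert chose shot 0.
loss = hybrid_loss(
    v_pred=np.array([0.5]),
    v_target=np.array([1.0]),
    action_logits=np.array([[0.0, 0.0]]),
    expert_action=np.array([0]),
    alpha=0.5,
)
```

With these toy inputs the value term is 0.25 and the imitation term is -log(0.5), so the combined loss is roughly 0.47; `alpha` controls the trade-off between outcome-grounded value estimation and fidelity to expert behavior.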
