Presentation Information

[2H5-OS-2b-01]Empirical Study on the Feedback Capabilities of Large Language Models in the Automatic Generation of Stock Investment Strategies

〇Hirai Kawamura1,2, Kenji Kubo1,2, Kei Nakagawa3,2 (1. University of Tokyo, 2. Matsuo Institute, 3. Graduate School of Business, Osaka Metropolitan University)

Keywords:

Large Language Models, AI Agents, Investment Strategy Generation, Feedback Learning

In recent years, the application of large language models (LLMs) to the financial domain has attracted increasing attention; however, there remains a lack of empirical evidence on how effectively LLMs' feedback capabilities function in the generation of trading strategies. This study constructs an automated stock trading strategy generation framework based on LLMs and empirically examines their strategy improvement capabilities within an iterative loop consisting of hypothesis generation, code synthesis, backtesting, and feedback-driven refinement. Specifically, we focus on the Japanese equity market (TOPIX 500 constituents excluding the financial sector), where backtesting results—including returns, Sharpe ratio, maximum drawdown, information coefficient (IC), and factor exposures—are provided to the LLM as feedback for repeated diagnosis, modification, and regeneration of long-short strategies. The experimental results demonstrate that LLMs are capable of interpreting both quantitative and visual feedback and generating concrete improvement proposals, such as parameter tuning and logic modification. At the same time, we observe that these improvements do not necessarily lead to performance gains in all cases, highlighting both the potential and the limitations of LLM-driven strategy refinement.
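The iterative loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `llm_propose` is a hypothetical stand-in for the actual LLM call, the backtest is a toy single-asset momentum long-short on synthetic prices, and only two of the feedback metrics from the abstract (Sharpe ratio and maximum drawdown) are computed.

```python
import random
import statistics

def llm_propose(feedback):
    # Hypothetical stand-in for the LLM: given backtest feedback, return a
    # new strategy parameterization. Here it mimics "parameter tuning" by
    # widening the momentum lookback window when the Sharpe ratio is weak.
    if feedback is None:
        return {"lookback": 20}  # initial hypothesis
    lb = feedback["params"]["lookback"]
    return {"lookback": lb + 5 if feedback["sharpe"] < 1.0 else lb}

def backtest(params, prices):
    # Toy long-short rule: long when price is above its lookback moving
    # average, short otherwise; one position per day, daily rebalancing.
    lb = params["lookback"]
    pnl = []
    for t in range(lb, len(prices) - 1):
        ma = sum(prices[t - lb:t]) / lb
        side = 1 if prices[t] > ma else -1
        pnl.append(side * (prices[t + 1] - prices[t]) / prices[t])
    sd = statistics.pstdev(pnl)
    sharpe = (statistics.mean(pnl) / sd) * (252 ** 0.5) if sd > 0 else 0.0
    # Maximum drawdown of the cumulative PnL curve.
    cum, peak, mdd = 0.0, 0.0, 0.0
    for r in pnl:
        cum += r
        peak = max(peak, cum)
        mdd = max(mdd, peak - cum)
    return {"params": params, "sharpe": sharpe, "max_drawdown": mdd}

# Synthetic daily price path standing in for a real equity series.
random.seed(0)
prices = [100.0]
for _ in range(500):
    prices.append(prices[-1] * (1 + random.gauss(0.0003, 0.01)))

# Hypothesis -> backtest -> feedback -> refinement, repeated three times.
feedback, history = None, []
for _ in range(3):
    params = llm_propose(feedback)      # LLM proposes / revises strategy
    feedback = backtest(params, prices) # metrics fed back next iteration
    history.append(feedback)

for fb in history:
    print(fb["params"]["lookback"], round(fb["sharpe"], 2))
```

In the actual framework the proposal step would return generated strategy code rather than a parameter dictionary, and the feedback would also include returns, IC, and factor exposures over the TOPIX 500 universe.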