Title: Improving exploration in deep reinforcement learning for stock trading

Authors: Wiem Zemzem; Moncef Tagina

Addresses: National School of Computer Science, University of Manouba, Manouba, Tunisia

Abstract: Deep reinforcement learning techniques have become widespread over the last decade. One persistent challenge is the exploration-exploitation dilemma. Although many exploration techniques for single-agent and multi-agent deep reinforcement learning have been proposed and have shown promising results in various domains, their value has not yet been demonstrated in financial markets. In this paper, we apply the NoisyNet-DQN method, which previously produced promising results on Atari games, to the stock trading problem. The trained reinforcement learning agent is employed to trade the S&P 500 ETF (SPY) data set. Findings show that this approach can identify the best trading action at a given moment and outperforms the classical DQN (Deep Q-Network) method.
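The core idea of NoisyNet-DQN is to replace standard linear layers in the Q-network with "noisy" linear layers whose weights are perturbed by learnable, factorised Gaussian noise, so exploration emerges from the network's parameters rather than from an epsilon-greedy schedule. The following is a minimal NumPy sketch of such a layer, under assumptions from the NoisyNet literature (factorised noise, sigma initialised proportionally to 1/sqrt(fan-in)); the class name, initial values, and dimensions are illustrative, not the paper's actual implementation:

```python
import numpy as np

def scale_noise(x):
    # Signal-preserving transform f(x) = sgn(x) * sqrt(|x|) used in factorised NoisyNets
    return np.sign(x) * np.sqrt(np.abs(x))

class NoisyLinear:
    """Minimal factorised-Gaussian noisy linear layer (forward pass only)."""

    def __init__(self, in_features, out_features, sigma0=0.5, seed=0):
        rng = np.random.default_rng(seed)
        bound = 1.0 / np.sqrt(in_features)
        # Learnable mean parameters (shown here at their initial values)
        self.w_mu = rng.uniform(-bound, bound, (out_features, in_features))
        self.b_mu = rng.uniform(-bound, bound, out_features)
        # Learnable noise scales; training would adapt these, shrinking
        # exploration where the agent becomes confident
        self.w_sigma = np.full((out_features, in_features), sigma0 * bound)
        self.b_sigma = np.full(out_features, sigma0 * bound)
        self.in_features = in_features
        self.out_features = out_features
        self.rng = rng

    def forward(self, x, noisy=True):
        if not noisy:
            # Deterministic pass, e.g. for evaluation
            return x @ self.w_mu.T + self.b_mu
        # Factorised noise: one vector per input unit, one per output unit,
        # combined by an outer product instead of sampling a full matrix
        eps_in = scale_noise(self.rng.standard_normal(self.in_features))
        eps_out = scale_noise(self.rng.standard_normal(self.out_features))
        w = self.w_mu + self.w_sigma * np.outer(eps_out, eps_in)
        b = self.b_mu + self.b_sigma * eps_out
        return x @ w.T + b

# Illustrative usage: a 4-feature market state mapped to 3 action values
# (e.g. buy / hold / sell); numbers are arbitrary placeholders
layer = NoisyLinear(in_features=4, out_features=3)
state = np.ones(4)
q_noisy = layer.forward(state)                 # perturbed Q-values drive exploration
q_greedy = layer.forward(state, noisy=False)   # mean-weight Q-values for evaluation
```

Because the noise scales are learnable parameters trained alongside the weights, the agent can tune how much it explores per state, which is the property that distinguishes this approach from a fixed epsilon-greedy DQN.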

Keywords: deep reinforcement learning; NoisyNet-DQN; exploration; stock trading.

DOI: 10.1504/IJCAT.2023.133883

International Journal of Computer Applications in Technology, 2023 Vol.72 No.4, pp.288 - 295

Received: 06 Nov 2022
Accepted: 01 Feb 2023

Published online: 04 Oct 2023
