Institutional Digital Repository
Shreenivas Deshpande Library, IIT (BHU), Varanasi

Transition based discount factor for model free algorithms in reinforcement learning

dc.contributor.author: Sharma A.; Gupta R.; Lakshmanan K.; Gupta A.
dc.date.accessioned: 2025-05-23T11:27:33Z
dc.description.abstract: Reinforcement Learning (RL) enables an agent to learn control policies that achieve its long-term goals. A key parameter of RL algorithms is the discount factor, which scales down future cost in a state's current value estimate. This study introduces and analyses a transition-based discount factor in two model-free RL algorithms, Q-learning and SARSA, and shows their convergence using the theory of stochastic approximation for finite state and action spaces. The resulting discounting is asymmetric, favouring some transitions over others, which (1) yields faster convergence than the constant-discount-factor variants of these algorithms, as demonstrated by experiments in the Taxi and MountainCar environments, and (2) gives better control over whether an RL agent learns a risk-averse or risk-taking policy, as demonstrated in a Cliff Walking experiment. © 2021 by the authors.
dc.identifier.doi: https://doi.org/10.3390/sym13071197
dc.identifier.uri: http://172.23.0.11:4000/handle/123456789/11538
dc.relation.ispartofseries: Symmetry
dc.title: Transition based discount factor for model free algorithms in reinforcement learning
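
The abstract's central idea, a discount factor that depends on the transition rather than being a single constant, can be sketched as a tabular Q-learning update. This is a minimal illustration, not the authors' code: the function name `gamma_fn` and the example discount values are assumptions for demonstration only.

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha, gamma_fn):
    """One Q-learning step with a transition-dependent discount.

    Instead of a constant gamma, gamma_fn(s, a, s_next) assigns a
    discount to each transition, so some transitions are favoured
    (discounted less) than others -- the asymmetric discounting
    described in the abstract.
    """
    gamma = gamma_fn(s, a, s_next)
    td_target = r + gamma * np.max(Q[s_next])   # greedy bootstrap
    Q[s, a] += alpha * (td_target - Q[s, a])    # TD-error step
    return Q

# Hypothetical example: discount transitions into state 1 more heavily.
Q = np.zeros((2, 2))
gamma_fn = lambda s, a, s_next: 0.5 if s_next == 1 else 0.9
Q = q_learning_update(Q, s=0, a=0, r=1.0, s_next=1, alpha=0.5,
                      gamma_fn=gamma_fn)
```

Setting `gamma_fn` to a constant recovers standard Q-learning, so the transition-based variant strictly generalises the usual update.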
