Exploration is a significant challenge in practical reinforcement learning (RL), and uncertainty-aware exploration that quantifies both epistemic and aleatory uncertainty has been recognized as an effective exploration strategy. However, capturing the combined effect of aleatory and epistemic uncertainty for decision-making is difficult. Existing works estimate aleatory and epistemic uncertainty separately and treat the composite uncertainty as an additive combination of the two. This additive formulation, however, encourages excessive risk-taking behavior and can destabilize learning.
In this paper, we propose an algorithm that clarifies the theoretical connection between aleatory and epistemic uncertainty, unifies their estimation, and quantifies their combined effect for risk-sensitive exploration. Our method builds on a novel extension of distributional RL that estimates a parameterized return distribution whose parameters are themselves random variables encoding epistemic uncertainty.
Experimental results on tasks that pose both exploration and risk challenges show that our method outperforms alternative approaches.
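To make the construction above concrete, the following minimal PyTorch-style sketch illustrates one way a return distribution with random-variable parameters could be set up: a quantile-based critic whose quantile locations are Gaussian random variables, so the spread across quantiles reflects aleatory uncertainty while the spread of each quantile's distribution reflects epistemic uncertainty. This is an illustrative sketch under stated assumptions, not the implementation from the paper; the class name `UncertainQuantileCritic`, the quantile parameterization, and the particular aleatory/epistemic estimators are hypothetical choices for exposition.

```python
import torch
import torch.nn as nn

class UncertainQuantileCritic(nn.Module):
    """Hypothetical sketch: a quantile critic whose quantile locations are
    Gaussian random variables. The spread across quantiles is used as an
    aleatory proxy; the spread of the parameter distribution as an epistemic proxy."""

    def __init__(self, state_dim, action_dim, n_quantiles=32, hidden=128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Mean and log-std of a Gaussian over each quantile location.
        self.quantile_mean = nn.Linear(hidden, n_quantiles)
        self.quantile_log_std = nn.Linear(hidden, n_quantiles)

    def forward(self, state, action):
        h = self.body(torch.cat([state, action], dim=-1))
        mean = self.quantile_mean(h)              # (batch, n_quantiles)
        std = self.quantile_log_std(h).exp()      # (batch, n_quantiles)
        # Sampled quantile locations; the sampling noise reflects epistemic uncertainty.
        quantiles = mean + std * torch.randn_like(std)
        return quantiles, mean, std

    def uncertainties(self, state, action):
        _, mean, std = self.forward(state, action)
        aleatory = mean.var(dim=-1)          # spread of the return distribution
        epistemic = (std ** 2).mean(dim=-1)  # spread of the parameter distribution
        return aleatory, epistemic
```

In such a sketch, an exploration bonus or risk measure would be computed from the two quantities jointly rather than by simply adding them; the exact combination rule is the contribution of the paper and is not reproduced here.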
In this section, we evaluate the performance of the proposed Unified Uncertainty-aware Exploration (UUaE) method on two Atari games with sparse reward functions. To test the proposed algorithm in a more realistic setting, we also run it on an autonomous vehicle driving simulator [29] in a highway domain, where the reward is designed to penalize unsafe driving behavior. The sparsity and risk-sensitivity of the rewards in these tasks make uncertainty-aware exploration challenging.
Fig. 2 presents the results averaged over 10 random seeds.
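As a rough illustration of the seed-averaging protocol, the sketch below rolls out an agent in a gym-style environment and averages returns over independent seeds. The helper names (`make_env`, `make_agent`, `agent.act`), the gym-style `reset`/`step` interface, and the number of evaluation episodes per seed are assumptions for illustration; only the use of 10 random seeds comes from the text above.

```python
import numpy as np

def run_episode(env, agent):
    """Roll out one episode with a gym-style env and return the episodic return."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done, _ = env.step(agent.act(obs))
        total += reward
    return total

def evaluate_over_seeds(make_env, make_agent, n_seeds=10, episodes_per_seed=100):
    """Average evaluation returns over independent random seeds (10 seeds, as above)."""
    per_seed = []
    for seed in range(n_seeds):
        env, agent = make_env(seed), make_agent(seed)
        per_seed.append(np.mean([run_episode(env, agent) for _ in range(episodes_per_seed)]))
    return float(np.mean(per_seed)), float(np.std(per_seed))
```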
@inproceedings{malekzadeh2023unified,
title={A Unified Uncertainty-Aware Exploration: Combining Epistemic and Aleatory Uncertainty},
author={Malekzadeh, Parvin and Hou, Ming and Plataniotis, Konstantinos N},
booktitle={ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={1--5},
year={2023},
organization={IEEE}
}