Preliminaries

We extend the policy-gradient objective by conditioning it on goal \(g\)

\[\begin{align} \nabla_\theta \mathcal J(\theta)&=\sum_gp(g)\underbrace{\sum_\tau p(\tau|g,\theta)}_{trajectory\ probability}\sum_{t=1}^{T-1}\nabla\log p(a_t|s_t,g,\theta)A(s_t,a_t,g)\tag {1} \end{align}\]

where \(A(s_t,a_t,g)\) is some advantage function. Note that we write the expectation as an explicit summation for later use.
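To make the notation concrete, the following is a minimal sketch of a Monte-Carlo estimate of Eq.\((1)\) for a tabular softmax policy over discrete states, actions, and goals; the parameter layout and trajectory format are assumptions of this sketch, not part of the original formulation.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def goal_conditioned_pg(theta, trajectories):
    """Monte-Carlo estimate of Eq.(1) for a tabular softmax policy.

    theta:        array [S, G, A] of logits, one logit vector per (state, goal) pair.
    trajectories: list of (states, actions, goal, advantages) tuples, each
                  collected while pursuing its own goal g ~ p(g).
    """
    grad = np.zeros_like(theta)
    for states, actions, g, advantages in trajectories:
        for s, a, adv in zip(states, actions, advantages):
            pi = softmax(theta[s, g])      # p(.|s, g, theta)
            score = -pi                    # d log p(a|s,g,theta) / d theta[s,g]
            score[a] += 1.0                # = onehot(a) - pi for a softmax policy
            grad[s, g] += score * adv
    return grad / len(trajectories)
```

For non-tabular policies, the same estimate is usually obtained by differentiating the surrogate loss \(\sum_t \log p(a_t|s_t,g,\theta)A(s_t,a_t,g)\) with an automatic-differentiation framework.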

Hindsight experience replay (Andrychowicz et al. 2017) samples future states along a trajectory as additional goals so as to provide more training signal to the agent. This technique has been shown to significantly improve the training speed and performance of agents in goal-directed problems where the reward signal is sparse and binary.
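As a concrete illustration of the relabeling step, below is a rough sketch of HER's "future" strategy. It assumes discrete, hashable states, takes the achieved state itself as the hindsight goal, and uses a 0/1 reward; the function name and transition format are illustrative, not the paper's exact scheme.

```python
import random

def her_relabel(trajectory, k=4):
    """'Future' hindsight relabeling: reuse states reached later as goals.

    trajectory: list of (state, action, reward, next_state) tuples with
                discrete, hashable states.
    Returns additional transitions whose goal is a state actually achieved
    later in the same trajectory, with the reward recomputed for that goal.
    """
    relabeled = []
    for t, (s, a, _, s_next) in enumerate(trajectory):
        n_future = len(trajectory) - t
        for i in random.sample(range(t, len(trajectory)), k=min(k, n_future)):
            g = trajectory[i][3]               # achieved state used as a new goal
            r = 1.0 if s_next == g else 0.0    # sparse binary reward
            relabeled.append((s, a, g, r, s_next))
    return relabeled
```

The relabeled transitions are stored in the replay buffer alongside the original ones, so the agent receives non-zero rewards even when it never reaches the goal it was originally given.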

Hindsight Policy Gradients

It is theoretically sound to directly apply hindsight experience to one-step \(Q\)-learning-style methods, since the action at the current step is specified by the \(Q\)-function and no importance sampling is required. This, however, is not the case for policy-gradient methods: to reuse a trajectory collected under the original goal \(g'\) when computing the gradient for another goal \(g\), we have to apply importance sampling to Eq.\((1)\)

\[\begin{align} \nabla_\theta \mathcal J(\theta) &=\sum_gp(g)\sum_\tau p(\tau|g',\theta){p(\tau|g,\theta)\over p(\tau|g',\theta)}\sum_{t=1}^{T-1}\nabla\log p(a_t|s_t,g,\theta)A(s_t,a_t,g)\\ &=\sum_gp(g)\sum_\tau p(\tau|g',\theta)\underbrace{\prod_{t=1}^{T-1}{p(a_t|s_t,g,\theta)\over p(a_t|s_t,g',\theta)}}_{expand\ trajectory}\sum_{t=1}^{T-1}\nabla\log p(a_t|s_t,g,\theta)A(s_t,a_t,g)\\ &=\sum_gp(g)\sum_\tau p(\tau|g',\theta)\sum_{t=1}^{T-1}\nabla\log p(a_t|s_t,g,\theta)\underbrace{\prod_{t'=1}^{t}{p(a_{t'}|s_{t'},g,\theta)\over p(a_{t'}|s_{t'},g',\theta)}}_{causality}A(s_t,a_t,g)\tag {2} \end{align}\]

where, in the second step, we expand the trajectory probabilities and cancel the transition probabilities, which do not depend on the goal; in the last step, we move the importance ratios inside the sum over time steps and apply causality to drop the ratios of future actions, which cannot affect the reward at the current time step.
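Since the causality-trimmed ratio is the quantity that actually gets computed in practice, here is a minimal sketch of how it can be obtained from per-step log-probabilities; the function name and array layout are assumptions of this sketch.

```python
import numpy as np

def per_decision_ratios(logp_goal, logp_orig):
    """Causality-trimmed importance weights from Eq.(2).

    logp_goal: [T] log p(a_t | s_t, g,  theta) under the hindsight goal g.
    logp_orig: [T] log p(a_t | s_t, g', theta) under the goal g' the
               trajectory was actually collected with.
    Returns w with w[t] = prod_{t'<=t} p(a_t'|s_t',g) / p(a_t'|s_t',g').
    """
    return np.exp(np.cumsum(logp_goal - logp_orig))
```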

In practice, we approximate Eq.\((2)\) with a batch of trajectories and hindsight goals \(\{(\tau^i,g^i)\}_{i=1}^N\), where \(g'^i\) denotes the original goal under which \(\tau^i\) was collected:

\[\begin{align} \nabla_\theta\mathcal J(\theta)\approx\sum_{i=1}^N\sum_{t=1}^{T-1}\nabla\log p(a_t^i|s_t^i,g^i,\theta){\prod_{t'=1}^{t}{p(a_{t'}^i|s_{t'}^i,g^i,\theta)\over p(a_{t'}^i|s_{t'}^i,g'^i,\theta)}}A(s_t^i,a_t^i,g^i) \end{align}\]
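For concreteness, here is a hedged sketch of this batch estimator. It assumes each trajectory has been pre-processed into per-step log-probabilities under its hindsight and original goals, score-function gradients, and advantages; all key names and shapes are illustrative.

```python
import numpy as np

def hpg_estimate(batch):
    """Unweighted per-decision hindsight policy-gradient estimate (sketch).

    batch: list of dicts, one per trajectory i, with keys
      'logp_goal': [T]    log p(a_t^i | s_t^i, g^i,  theta)  (hindsight goal)
      'logp_orig': [T]    log p(a_t^i | s_t^i, g'^i, theta)  (original goal)
      'score':     [T, D] gradients of log p(a_t^i | s_t^i, g^i, theta)
      'adv':       [T]    advantages A(s_t^i, a_t^i, g^i)
    Returns a [D] gradient estimate.
    """
    grad = 0.0
    for traj in batch:
        w = np.exp(np.cumsum(traj['logp_goal'] - traj['logp_orig']))  # ratios up to t
        grad = grad + ((w * traj['adv'])[:, None] * traj['score']).sum(axis=0)
    return grad
```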

In preliminary experiments, Rauber et al. found that this estimator leads to unstable learning, probably because of its high variance. Therefore, they propose applying weighted importance sampling to trade variance for bias, which gives the final gradient estimate:

\[\begin{align} \nabla_\theta\mathcal J(\theta)\approx\sum_{i=1}^N\sum_{t=1}^{T-1}\nabla\log p(a_t^i|s_t^i,g^i,\theta){\prod_{t'=1}^{t}{p(a_{t'}^i|s_{t'}^i,g^i,\theta)\over p(a_{t'}^i|s_{t'}^i,g'^i,\theta)}\over\sum_{j=1}^N\prod_{t'=1}^{t}{p(a_{t'}^j|s_{t'}^j,g^j,\theta)\over p(a_{t'}^j|s_{t'}^j,g'^j,\theta)}}A(s_t^i,a_t^i,g^i) \end{align}\]
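The weighted variant changes only how the importance weights are combined: they are normalized across trajectories at each time step. A sketch in the same assumed batch format as above:

```python
import numpy as np

def hpg_estimate_weighted(batch):
    """Weighted (self-normalized) per-decision hindsight estimate (sketch).

    Same assumed batch format as hpg_estimate above; the only difference is
    that importance weights are normalized across trajectories at each step,
    trading variance for bias.
    """
    T = max(len(traj['adv']) for traj in batch)
    W = np.zeros((len(batch), T))
    for i, traj in enumerate(batch):
        w = np.exp(np.cumsum(traj['logp_goal'] - traj['logp_orig']))
        W[i, :len(w)] = w
    W = W / (W.sum(axis=0, keepdims=True) + 1e-8)   # denominator of the estimator

    grad = 0.0
    for i, traj in enumerate(batch):
        t = len(traj['adv'])
        grad = grad + ((W[i, :t] * traj['adv'])[:, None] * traj['score']).sum(axis=0)
    return grad
```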

Interestingly, the authors found that adding a baseline does not help HPG much in their experiments.

Experimental Results

The authors test the agent on several environments in which it receives a reward only upon reaching the goal state, which also ends the episode; the reward equals the remaining number of time steps plus one.

As the experiments are of less interest to us here, we refer readers to the official HPG website for more information about the experimental results: http://paulorauber.com/hpg

References

Andrychowicz, Marcin, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, and Wojciech Zaremba. 2017. “Hindsight Experience Replay.” In Advances in Neural Information Processing Systems 30, 5049–59.

Rauber, Paulo, Avinash Ummadisingu, Filipe Mutz, and Jürgen Schmidhuber. 2019. “Hindsight Policy Gradients.” In International Conference on Learning Representations (ICLR).