## Abstract

While agents trained by Reinforcement Learning (RL) can solve increasingly challenging tasks directly from visual observations, generalizing learned skills to novel environments remains difficult. Extensive use of data augmentation is a promising technique for improving generalization in RL, but it is often found to decrease sample efficiency and can even lead to divergence. In this paper, we investigate causes of instability when using data augmentation in common off-policy RL algorithms. We identify two problems, both rooted in high-variance \(Q\)-targets. Based on our findings, we propose a simple yet effective technique for stabilizing this class of algorithms under augmentation. We perform extensive empirical evaluation of image-based RL using both ConvNets and Vision Transformers (ViT) on a family of benchmarks based on DeepMind Control Suite, as well as in robotic manipulation tasks. Our method greatly improves stability and sample efficiency of ConvNets under augmentation, and achieves generalization results competitive with state-of-the-art methods for image-based RL. We further show that our method scales to RL with ViT-based architectures, and that data augmentation may be especially important in this setting.

## Method

We identify two causes of instability in deep \(Q\)-learning under augmentation: (1) non-deterministic \(Q\)-targets; and (2) over-regularization. Our method, SVEA, optimizes a modified \(Q\)-learning objective \(\mathcal{L}^{\textrm{SVEA}}_{Q}\) that leverages two data streams \(\mathbf{s}_{t},~~\mathbf{s}^{\textrm{aug}}_{t} = \tau(\mathbf{s}_{t}, \nu)\) (unaugmented and augmented observations, respectively) for an augmentation \(\tau\) parameterized by \(\nu \sim \mathcal{V}\). We define the \(\mathcal{L}^{\textrm{SVEA}}_{Q}\) objective as

$$
\begin{align}
\mathcal{L}^{\textrm{SVEA}}_{Q}(\theta, \psi) & \triangleq \alpha \mathcal{L}_{Q}\left(\mathbf{s}_{t}, q^{\textrm{tgt}}_{t}; \theta, \psi\right) + \beta \mathcal{L}_{Q}\left(\mathbf{s}^{\textrm{aug}}_{t}, q^{\textrm{tgt}}_{t}; \theta,\psi\right) \\
& = \mathbb{E}_{\mathbf{s}_{t}, \mathbf{a}_{t}, \mathbf{s}_{t+1} \sim \mathcal{B}} \left[ \alpha \left\| Q_{\theta}(f_{\theta}(\mathbf{s}_{t}), \mathbf{a}_{t}) - q^{\textrm{tgt}}_{t} \right\|^{2}_{2} + \beta \left\| Q_{\theta}(f_{\theta}(\mathbf{s}^{\textrm{aug}}_{t}), \mathbf{a}_{t}) - q^{\textrm{tgt}}_{t} \right\|^{2}_{2} \right] \,,
\end{align}
$$
where \(q^{\textrm{tgt}}_{t} = r(\mathbf{s}_{t}, \mathbf{a}_{t}) + \gamma \max_{\mathbf{a}'_{t}} Q^{\textrm{tgt}}_{\psi}(f^{\textrm{tgt}}_{\psi}(\mathbf{s}_{t+1}), \mathbf{a}'_{t})\) is a \(Q\)-target estimated from unaugmented data, and \(\alpha,\beta\) are constant coefficients that balance the data streams. We show that for \(\alpha=\beta\), the objective \(\mathcal{L}^{\textrm{SVEA}}_{Q}\) can be evaluated in a single, batched forward pass using a novel data-mix operation. Lastly, when a policy \(\pi_{\theta}\) is learned, it is optimized strictly from unaugmented data and implicitly learns to generalize through parameter-sharing.
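The objective above can be sketched in a few lines of numpy. The snippet below is a minimal illustration, not the paper's implementation: the "encoder" and \(Q\)-head are toy linear maps, the action is fixed, and `q_tgt` is a placeholder for the target computed from unaugmented data. It evaluates \(\mathcal{L}^{\textrm{SVEA}}_{Q}\) as two separate streams, and then again as a single batched pass over data-mixed observations, confirming the two agree when \(\alpha=\beta\).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not the paper's architecture): linear
# encoder f_theta and linear Q-head, acting on flat observation vectors.
obs_dim, feat_dim, batch = 8, 4, 32
W_f = rng.normal(size=(obs_dim, feat_dim))   # online encoder f_theta
W_q = rng.normal(size=(feat_dim, 1))         # online Q-head Q_theta

def q_value(obs):
    # Q_theta(f_theta(s), a) collapsed to one scalar per sample (fixed action).
    return (obs @ W_f @ W_q).ravel()

# Unaugmented stream s_t and augmented stream s_aug = tau(s_t, nu);
# additive noise here is just an example of an augmentation tau.
s = rng.normal(size=(batch, obs_dim))
s_aug = s + 0.1 * rng.normal(size=(batch, obs_dim))

# Placeholder Q-target; in SVEA it is computed from unaugmented data only.
q_tgt = rng.normal(size=batch)

alpha = beta = 0.5

# Two-stream evaluation of L^SVEA_Q.
loss_two_pass = (alpha * np.mean((q_value(s) - q_tgt) ** 2)
                 + beta * np.mean((q_value(s_aug) - q_tgt) ** 2))

# For alpha == beta: one batched pass over the "data-mixed" batch
# [s; s_aug] with duplicated targets gives the same objective.
s_mix = np.concatenate([s, s_aug], axis=0)
q_tgt_mix = np.concatenate([q_tgt, q_tgt], axis=0)
loss_one_pass = 2 * alpha * np.mean((q_value(s_mix) - q_tgt_mix) ** 2)

assert np.isclose(loss_two_pass, loss_one_pass)
```

The equivalence holds because the mean over the concatenated batch is the average of the two per-stream means; scaling it by \(2\alpha\) recovers \(\alpha\) times each stream's loss.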

## Reinforcement Learning with Vision Transformers

We scale RL to Vision Transformers (ViT), and show that our method greatly improves generalization in this setting.

## Generalization

We train policies in a single, fixed environment and evaluate generalization to unseen environments.

## Stability

We evaluate the sample efficiency of SVEA under 6 common augmentations and compare to DrQ (Kostrikov et al.).

## Paper

N. Hansen, H. Su, X. Wang. Stabilizing Deep \(Q\)-Learning with ConvNets and Vision Transformers under Data Augmentation.

NeurIPS 2021