Several methods from two separate lines of work, namely data augmentation (DA) and adversarial training, rely on perturbations performed in a latent space. These methods are often either uninterpretable, because the mapping to the latent space is not invertible, or notoriously difficult to train owing to their many hyperparameters. We exploit the exactly invertible encoder-decoder structure of normalizing flows to perform perturbations in the latent space. We demonstrate that these on-manifold perturbations match the performance of advanced DA techniques, reaching – test accuracy on CIFAR-10 with ResNet-18, and outperform existing methods, particularly in low-data regimes, yielding a – relative improvement in test accuracy over classical training. We find that our latent adversarial perturbations, which adapt to the classifier throughout its training, are the most effective.
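To make the idea concrete, below is a minimal PyTorch-style sketch of an adaptive latent adversarial perturbation of the kind the abstract describes, not the authors' implementation. It assumes a hypothetical invertible `flow` object exposing `forward` (encode) and `inverse` (decode); the PGD-style sign-gradient update, the step counts, and the magnitudes are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def latent_adversarial_perturbation(flow, classifier, x, y,
                                    eps=0.1, steps=5, step_size=0.02):
    """Sketch: adversarially perturb x in the latent space of an exactly
    invertible normalizing flow, so the decoded sample stays on-manifold.
    `flow.forward`/`flow.inverse` are assumed interfaces, not a real API."""
    z = flow.forward(x).detach()              # encode: x -> z (exactly invertible)
    delta = torch.zeros_like(z, requires_grad=True)
    for _ in range(steps):
        x_adv = flow.inverse(z + delta)       # decode the perturbed latent code
        loss = F.cross_entropy(classifier(x_adv), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += step_size * grad.sign()  # ascend the current classifier's loss
            delta.clamp_(-eps, eps)           # keep the latent perturbation small
    return flow.inverse(z + delta).detach()   # augmented, on-manifold sample
```

Because the flow is exactly invertible, the decoded sample is a valid point on the data manifold rather than off-manifold noise, and recomputing the perturbation against the current classifier at each step keeps it adaptive throughout training, as the abstract states.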