The variational autoencoder (VAE) framework is a popular option for training
unsupervised generative models, offering both relatively simple training and a
latent representation of the data. The VAE objective function does not
guarantee the latter, however, and failing to achieve it is a common failure
mode known as posterior collapse. Even in successful cases, VAEs often produce
low-precision reconstructions and generated samples. The introduction of the
KL-divergence weight $\beta$ can help steer the model clear of posterior
collapse, but its tuning is often a trial-and-error process with no guiding
metrics. Here we test the idea of using the total VAE loss of generated samples
(generated loss) as a proxy metric for generation quality, the related
hypothesis that a VAE reconstruction from the mean latent vector tends to be a
more typical example of its class than the original input, and the idea of
exploiting this property by augmenting the training data with generated
variants (augmented
training). The results are mixed, but repeated encoding and decoding does
yield qualitatively and quantitatively more typical examples from both
convolutional and fully-connected MNIST VAEs, suggesting that this drift
toward typicality may be an inherent property of the VAE framework.
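For reference, the $\beta$-weighted objective mentioned above is, in its
standard form (the notation $q_\phi$, $p_\theta$, $p(z)$ is ours, and the exact
reconstruction term depends on the choice of decoder likelihood):
\[
\mathcal{L}_\beta(x) = \mathbb{E}_{q_\phi(z \mid x)}\!\left[-\log p_\theta(x \mid z)\right]
  + \beta \, D_{\mathrm{KL}}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right),
\]
with $\beta = 1$ recovering the usual negative ELBO; the generated loss is this
same total loss evaluated on generated samples rather than on training data.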
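The two procedures above might be implemented along the following lines. This
is a minimal sketch assuming a hypothetical PyTorch-style model that exposes
encode/decode methods and a vae_loss helper; the names and interfaces are
illustrative, not the authors' code.
\begin{verbatim}
import torch

def reconstruct_from_mean(model, x, n_rounds=1):
    # Repeatedly replace x with its reconstruction decoded from the mean
    # latent vector; the abstract reports that iterating this drifts the
    # input toward a more "typical" example of its class.
    # (Assumed interface: model.encode(x) -> (mu, logvar),
    #  model.decode(z) -> reconstruction.)
    with torch.no_grad():
        for _ in range(n_rounds):
            mu, _ = model.encode(x)
            x = model.decode(mu)
    return x

def generated_loss(model, vae_loss, latent_dim, n_samples=1024):
    # Proxy metric: the total VAE loss (reconstruction + beta * KL)
    # evaluated on samples decoded from the standard-normal prior, treated
    # as if they were data. vae_loss(model, x) is assumed to return the
    # mean total loss over a batch.
    with torch.no_grad():
        z = torch.randn(n_samples, latent_dim)
        x_gen = model.decode(z)
        return vae_loss(model, x_gen).item()
\end{verbatim}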